WorldWideScience

Sample records for compressed massive nuclear

  1. Compressed Baryonic Matter of Astrophysics

    OpenAIRE

    Guo, Yanjun; Xu, Renxin

    2013-01-01

    Baryonic matter in the core of a massive and evolved star is compressed significantly to form a supra-nuclear object, and compressed baryonic matter (CBM) is then produced after the supernova. The state of cold matter at a few times the nuclear density is pedagogically reviewed, with significant attention paid to a possible quark-cluster state conjectured from an astrophysical point of view.

  2. Massive data compression for parameter-dependent covariance matrices

    Science.gov (United States)

    Heavens, Alan F.; Sellentin, Elena; de Mijolla, Damien; Vianello, Alvise

    2017-12-01

    We show how the massive data compression algorithm MOPED can be used to reduce, by orders of magnitude, the number of simulated data sets needed to estimate the covariance matrix required for the analysis of Gaussian-distributed data. This is relevant when the covariance matrix cannot be calculated directly. The compression is especially valuable when the covariance matrix varies with the model parameters. In this case, it may be prohibitively expensive to run enough simulations to estimate the full covariance matrix throughout the parameter space. This compression may be particularly valuable for the next generation of weak lensing surveys, such as those proposed for Euclid and the Large Synoptic Survey Telescope, for which the number of summary data (such as band power or shear correlation estimates) is very large, ∼10^4, due to the large number of tomographic redshift bins into which the data will be divided. In the pessimistic case where the covariance matrix is estimated separately for all points in a Markov Chain Monte Carlo analysis, this may require an unfeasible 10^9 simulations. We show here that MOPED can reduce this number by a factor of 1000, or a factor of ∼10^6 if some regularity in the covariance matrix is assumed, reducing the number of simulations required to a manageable 10^3, making an otherwise intractable analysis feasible.
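    A toy sketch of MOPED-style linear compression may help make the idea concrete: one weight vector per parameter, built from the mean derivatives and the inverse data covariance, reduces an N-point data vector to one number per parameter. The model, sizes, and covariance below are illustrative assumptions, not the authors' setup.

    ```python
    # Toy sketch of MOPED-style linear compression (illustrative assumptions:
    # a two-parameter linear mean model and a fixed diagonal covariance).
    import numpy as np

    rng = np.random.default_rng(0)
    N, n_params = 200, 2
    x_grid = np.linspace(0.0, 1.0, N)
    dmu = np.stack([x_grid, np.sin(2 * np.pi * x_grid)])  # mean derivatives, (n_params, N)
    C = 0.1 * np.eye(N)                                   # data covariance (assumed fixed)

    # MOPED weights: b_a ~ C^{-1} dmu_a, Gram-Schmidt orthogonalized so the
    # n compressed statistics are mutually uncorrelated.
    Cinv = np.linalg.inv(C)
    B = []
    for a in range(n_params):
        b = Cinv @ dmu[a]
        for prev in B:                       # remove projections on earlier weights
            b -= (dmu[a] @ prev) * prev
        b /= np.sqrt(dmu[a] @ b)             # MOPED normalization
        B.append(b)
    B = np.array(B)                          # (n_params, N) compression matrix

    data = rng.multivariate_normal(0.3 * dmu[0] + 0.7 * dmu[1], C)
    print(B @ data)                          # N numbers reduced to n_params numbers
    ```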

  3. Massive-MIMO Sparse Uplink Channel Estimation Using Implicit Training and Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Babar Mansoor

    2017-01-01

    Massive multiple-input multiple-output (massive-MIMO) is foreseen as a potential technology for future 5G cellular communication networks due to its substantial benefits in terms of increased spectral and energy efficiency. These advantages of massive-MIMO are a consequence of equipping the base station (BS) with a very large number of antenna elements, resulting in aggressive spatial multiplexing. In order to effectively reap the benefits of massive-MIMO, an adequate estimate of the channel impulse response (CIR) of each transmit-receive link is of utmost importance. It has been established in the literature that certain multipath propagation environments lead to a sparsely structured CIR in the spatial and/or delay domains. In this paper, implicit training and compressed sensing based CIR estimation techniques are proposed for massive-MIMO sparse uplink channels. In the proposed superimposed training (SiT) based techniques, a periodic, low-power training sequence is superimposed (arithmetically added) over the information sequence, thus avoiding any dedicated time/frequency slots for training. For the estimation of such massive-MIMO sparse uplink channels, two greedy-pursuit-based compressed sensing approaches are proposed, viz. SiT-based stage-wise orthogonal matching pursuit (SiT-StOMP) and gradient pursuit (SiT-GP). To demonstrate the validity of the proposed techniques, a performance comparison in terms of normalized channel mean square error (NCMSE) and bit error rate (BER) is performed with a notable SiT-based least squares (SiT-LS) channel estimation technique. The effect of channel sparsity, training-to-information power ratio (TIR), and signal-to-noise ratio (SNR) on the BER and NCMSE performance of the proposed schemes is thoroughly studied. For a simulation scenario of 4 × 64 massive-MIMO with a channel sparsity level of 80% and an SNR of 10 dB, a performance gain of 18 dB and 13 dB in NCMSE over SiT-LS is achieved by SiT-StOMP and SiT-GP, respectively.
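    The greedy-pursuit ingredient of such estimators can be sketched with plain orthogonal matching pursuit (OMP), a simpler relative of the paper's SiT-StOMP and SiT-GP. In this toy, a generic random measurement matrix stands in for the structure that a superimposed training sequence would induce.

    ```python
    # Basic orthogonal matching pursuit (OMP) recovering a sparse channel h
    # from y = A @ h + noise. Illustrative only: A is a generic random matrix,
    # not the structure induced by an actual superimposed training sequence.
    import numpy as np

    def omp(A, y, sparsity):
        """Greedily pick the column most correlated with the residual,
        then re-fit by least squares on the selected support."""
        residual, support = y.copy(), []
        for _ in range(sparsity):
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            sub = A[:, support]
            coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
            residual = y - sub @ coef
        h_hat = np.zeros(A.shape[1])
        h_hat[support] = coef
        return h_hat

    rng = np.random.default_rng(1)
    m, L, k = 64, 128, 4                     # measurements, channel taps, sparsity
    A = rng.standard_normal((m, L)) / np.sqrt(m)
    h = np.zeros(L)
    h[rng.choice(L, k, replace=False)] = rng.standard_normal(k)
    y = A @ h + 0.01 * rng.standard_normal(m)
    print(np.linalg.norm(omp(A, y, k) - h))  # small reconstruction error
    ```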

  4. Images compression in nuclear medicine

    International Nuclear Information System (INIS)

    Rebelo, M.S.; Furuie, S.S.; Moura, L.

    1992-01-01

    The performance of two methods for image compression in nuclear medicine was evaluated: the exact LZW method and the approximate cosine-transform method. The results showed that the approximate method produced images of acceptable quality for visual analysis, with compression rates considerably higher than those of the exact method. (C.G.C.)
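    The distinction between the two approaches can be illustrated with a toy comparison: a lossless byte-level coder (zlib standing in for LZW) versus transform coding with quantized DCT coefficients. The block size and quantization step are illustrative assumptions, not the authors' parameters.

    ```python
    # Lossless coding (zlib as a stand-in for LZW) versus approximate transform
    # coding (quantized DCT of an 8x8 block); parameters are illustrative.
    import zlib
    import numpy as np
    from scipy.fft import dctn, idctn

    image = np.random.default_rng(2).poisson(50, (64, 64)).astype(np.uint8)

    packed = zlib.compress(image.tobytes())          # exact: bytes recovered verbatim
    block = image[:8, :8].astype(np.float64)
    q = 10.0                                         # quantization step (assumption)
    coeffs = np.round(dctn(block, norm="ortho") / q)
    approx = idctn(coeffs * q, norm="ortho")         # visually close, not identical
    print(len(packed), np.abs(approx - block).max())
    ```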

  5. Optimization of multi-phase compressible lattice Boltzmann codes on massively parallel multi-core systems

    NARCIS (Netherlands)

    Biferale, L.; Mantovani, F.; Pivanti, M.; Pozzati, F.; Sbragaglia, M.; Schifano, S.F.; Toschi, F.; Tripiccione, R.

    2011-01-01

    We develop a Lattice Boltzmann code for computational fluid-dynamics and optimize it for massively parallel systems based on multi-core processors. Our code describes 2D multi-phase compressible flows. We analyze the performance bottlenecks that we find as we gradually expose a larger fraction of

  6. Searches for massive neutrinos in nuclear beta decay

    International Nuclear Information System (INIS)

    Jaros, J.A.

    1992-10-01

    The status of searches for massive neutrinos in nuclear beta decay is reviewed. The claim by an ITEP group that the electron antineutrino mass exceeds 17 eV has been disputed by all subsequent experiments. Current measurements of the tritium beta spectrum limit the electron antineutrino mass to less than 10 eV. The status of the 17 keV neutrino is reviewed. The strong null results from INS Tokyo and Argonne, and deficiencies in the experiments which reported positive effects, make it unreasonable to ascribe the spectral distortions seen by Simpson, Hime, and others to a 17 keV neutrino. Several new ideas on how to search for massive neutrinos in nuclear beta decay are discussed

  7. Generalized massive optimal data compression

    Science.gov (United States)

    Alsing, Justin; Wandelt, Benjamin

    2018-05-01

    In this paper, we provide a general procedure for optimally compressing N data down to n summary statistics, where n is equal to the number of parameters of interest. We show that compression to the score function - the gradient of the log-likelihood with respect to the parameters - yields n compressed statistics that are optimal in the sense that they preserve the Fisher information content of the data. Our method generalizes earlier work on linear Karhunen-Loève compression for Gaussian data whilst recovering both lossless linear compression and quadratic estimation as special cases when they are optimal. We give a unified treatment that also includes the general non-Gaussian case as long as mild regularity conditions are satisfied, producing optimal non-linear summary statistics when appropriate. As a worked example, we derive explicitly the n optimal compressed statistics for Gaussian data in the general case where both the mean and covariance depend on the parameters.
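    For the Gaussian case with a parameter-dependent mean and fixed covariance, the score compression described above reduces to a simple formula, t = (dμ/dθ)ᵀ C⁻¹ (x − μ), evaluated at a fiducial parameter point. The sketch below uses a toy two-parameter model chosen purely for illustration.

    ```python
    # Score compression for a Gaussian likelihood with parameter-dependent mean
    # and fixed covariance; the two-parameter model below is a toy assumption.
    import numpy as np

    rng = np.random.default_rng(3)
    N = 100
    x_grid = np.linspace(0.0, 1.0, N)
    C = 0.05 * np.eye(N)

    def mean(theta):                         # toy model: offset and slope
        return theta[0] + theta[1] * x_grid

    def score_summaries(x, theta_fid):
        """t_a = dmu_a^T C^{-1} (x - mu), evaluated at the fiducial point:
        one summary per parameter, preserving the Fisher information."""
        dmu = np.stack([np.ones(N), x_grid]) # analytic mean derivatives
        return dmu @ np.linalg.solve(C, x - mean(theta_fid))

    x = rng.multivariate_normal(mean([0.4, 1.2]), C)
    print(score_summaries(x, theta_fid=[0.5, 1.0]))  # 100 numbers -> 2 summaries
    ```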

  8. Efficient Bayesian Compressed Sensing-based Channel Estimation Techniques for Massive MIMO-OFDM Systems

    OpenAIRE

    Al-Salihi, Hayder Qahtan Kshash; Nakhai, Mohammad Reza

    2017-01-01

    Efficient and highly accurate channel state information (CSI) at the base station (BS) is essential to achieve the potential benefits of massive multiple input multiple output (MIMO) systems. However, the accuracy attainable in practice is limited by the problem of pilot contamination. It has recently been shown that compressed sensing (CS) techniques can address the pilot contamination problem. However, CS-based channel estimation requires prior knowledge of channel sp...

  9. Massive optimal data compression and density estimation for scalable, likelihood-free inference in cosmology

    Science.gov (United States)

    Alsing, Justin; Wandelt, Benjamin; Feeney, Stephen

    2018-03-01

    Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing to the observed data; this comparison in data-space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper we use massive asymptotically-optimal data compression to reduce the dimensionality of the data-space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Secondly, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parameterized model for the joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference, and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate Density Estimation Likelihood-Free Inference with massive data compression on an analysis of the joint light-curve analysis (JLA) supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as ∼10^4 simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological datasets.
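    The density-estimation step can be caricatured in a few lines: draw parameters from the prior, simulate compressed summaries, fit a joint density to the (parameter, summary) pairs, and condition on the observed summary. The sketch below fits a single Gaussian to a one-parameter toy problem; the paper uses far more flexible density estimators.

    ```python
    # Caricature of density-estimation likelihood-free inference: fit a joint
    # Gaussian to (parameter, summary) pairs from prior-drawn simulations, then
    # condition on the observed summary. One-parameter toy simulator assumed.
    import numpy as np

    rng = np.random.default_rng(4)

    def simulate_summary(theta):             # toy simulator plus compression
        return theta + 0.3 * rng.standard_normal()

    thetas = rng.uniform(-2.0, 2.0, 5000)    # draws from the prior
    ts = np.array([simulate_summary(th) for th in thetas])

    m = np.array([thetas.mean(), ts.mean()]) # fit a 2-D Gaussian to the joint
    S = np.cov(np.stack([thetas, ts]))

    t_obs = 0.7                              # condition on the observed summary
    post_mean = m[0] + S[0, 1] / S[1, 1] * (t_obs - m[1])
    post_var = S[0, 0] - S[0, 1] ** 2 / S[1, 1]
    print(post_mean, np.sqrt(post_var))      # roughly 0.66 and 0.29 for this toy
    ```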

  10. Are Nuclear Star Clusters the Precursors of Massive Black Holes?

    Directory of Open Access Journals (Sweden)

    Nadine Neumayer

    2012-01-01

    We present new upper limits for black hole masses in extremely late type spiral galaxies. We confirm that this class of galaxies has black holes with masses less than 10^6 M⊙, if any. We also derive new upper limits for nuclear star cluster masses in massive galaxies with previously determined black hole masses. We use the newly derived upper limits and a literature compilation to study the low mass end of the global-to-nucleus relations. We find the following. (1) The M_BH-σ relation cannot flatten at low masses, but may steepen. (2) The M_BH-M_bulge relation may well flatten in contrast. (3) The M_BH-Sérsic n relation is able to account for the large scatter in black hole masses in low-mass disk galaxies. Outliers in the M_BH-Sérsic n relation seem to be dwarf elliptical galaxies. When plotting M_BH versus M_NC we find three different regimes: (a) nuclear cluster dominated nuclei, (b) a transition region, and (c) black hole-dominated nuclei. This is consistent with the picture in which black holes form inside nuclear clusters with a very low mass fraction. They subsequently grow much faster than the nuclear cluster, destroying it when the ratio M_BH/M_NC grows above 100. Nuclear star clusters may thus be the precursors of massive black holes in galaxy nuclei.

  11. A data compression algorithm for nuclear spectrum files

    International Nuclear Information System (INIS)

    Mika, J.F.; Martin, L.J.; Johnston, P.N.

    1990-01-01

    The total space occupied by computer files of spectra generated in nuclear spectroscopy systems can lead to problems of storage and transmission time. An algorithm is presented which significantly reduces the space required to store nuclear spectra, without loss of any information content. Testing indicates that spectrum files can be routinely compressed by a factor of 5. (orig.)

  12. Mathematical analysis of compressive/tensile molecular and nuclear structures

    Science.gov (United States)

    Wang, Dayu

    Mathematical analysis in chemistry is a fascinating and critical tool to explain experimental observations. In this dissertation, mathematical methods of representing chemical bonding and other structures for many-particle systems are discussed at different levels (molecular, atomic, and nuclear). First, the tetrahedral geometry of single, double, or triple carbon-carbon bonds gives an unsatisfying account of bond lengths compared to experimental trends. To correct this, Platonic solids and Archimedean solids were evaluated as atoms in covalent carbon or nitrogen bond systems in order to find the solids best suited for geometric fitting. Pentagonal solids, e.g. the dodecahedron and icosidodecahedron, give the best fit with experimental bond lengths; an ideal pyramidal solid which models covalent bonds was also generated. Second, the macroscopic compression/tension architectural approach was applied to forces at the molecular level, treating atomic interactions as compressive (repulsive) and tensile (attractive) forces. Two-particle interactions were considered first, followed by a model of the dihydrogen molecule (H2; two protons and two electrons). Dihydrogen was evaluated as two different types of compression/tension structures: a coaxial spring model and a ring model. Using similar methods, covalent diatomic molecules (made up of C, N, O, or F) were evaluated. Finally, the compression/tension model was extended to the nuclear level, based on the observation that nuclei with certain numbers of protons/neutrons (magic numbers) have extra stability compared to other nucleon ratios. A hollow spherical model was developed that combines elements of the classic nuclear shell model and the liquid drop model. Nuclear structure and the trend of the "island of stability" for the current and extended periodic table were studied.

  13. Compression modes and the nuclear matter incompressibility ...

    Indian Academy of Sciences (India)

    We review the current status of the nuclear matter (N = Z and no Coulomb interaction) incompressibility coefficient, K∞, and describe the theoretical and the experimental methods used to determine K∞ from properties of compression modes in nuclei. In particular we consider the long standing problem of the conflicting ...

  14. Compression of the Right Pulmonary Artery by a Massive Descending Aortic Aneurysm: Bilateral Perfusion Defects on Pulmonary Scintigraphy

    Energy Technology Data Exchange (ETDEWEB)

    Makis, William [Brandon Regional Health Centre, Brandon (Canada); Derbekyan, Vilma [McGill Univ. Health Centre, Montreal (Canada)

    2012-03-15

    A 67-year-old woman, who presented with a 2-month history of dyspnea, had a ventilation and perfusion lung scan that showed absent perfusion of the entire right lung with normal ventilation, as well as a rounded matched defect in the left lower lung adjacent to the midline, suspicious for an aortic aneurysm or dissection. CT pulmonary angiography revealed a massive descending aortic aneurysm compressing the right pulmonary artery as well as the left lung parenchyma, accounting for the bilateral perfusion scan defects. We present the Xe-133 ventilation, Tc-99m MAA perfusion, and CT pulmonary angiography imaging findings of this rare case.

  15. A method of loss free compression for the data of nuclear spectrum

    International Nuclear Information System (INIS)

    Sun Mingshan; Wu Shiying; Chen Yantao; Xu Zurun

    2000-01-01

    A new method of lossless compression based on the characteristics of nuclear spectrum data is presented, from which a practicable algorithm is derived. A compression rate varying from 0.50 to 0.25 is obtained, and the distribution of the processed data becomes more suitable for reprocessing by a further compression scheme, such as Huffman coding, to improve the compression rate.
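    A plausible reading of such a scheme (our assumption, since the abstract gives no details) is difference coding of neighbouring channels, whose counts are strongly correlated in nuclear spectra, followed by a generic entropy coder. The sketch below uses zlib as a stand-in for the Huffman stage.

    ```python
    # Difference coding of adjacent channels followed by a generic entropy coder
    # (zlib standing in for Huffman); the Poisson spectrum is synthetic.
    import zlib
    import numpy as np

    rng = np.random.default_rng(5)
    spectrum = rng.poisson(np.linspace(500.0, 50.0, 4096)).astype(np.int32)

    deltas = np.diff(spectrum, prepend=0).astype(np.int16)  # small, peaked residuals
    packed = zlib.compress(deltas.tobytes(), level=9)
    print(len(packed) / len(spectrum.tobytes()))            # compression rate

    # Lossless round trip: cumulative summation inverts the difference coding.
    decoded = np.cumsum(np.frombuffer(zlib.decompress(packed), dtype=np.int16),
                        dtype=np.int32)
    assert np.array_equal(decoded, spectrum)
    ```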

  16. Compressed-air and backup nitrogen systems in nuclear power plants

    International Nuclear Information System (INIS)

    Hagen, E.W.

    1982-07-01

    This report reviews and evaluates the performance of the compressed-air and pressurized-nitrogen gas systems in commercial nuclear power units. The information was collected from readily available operating experiences, licensee event reports, system designs in safety analysis reports, and regulatory documents. The results are collated and analyzed for significance and impact on power plant safety performance. Under certain circumstances, the fail-safe philosophy for a piece of equipment or subsystem of the compressed-air systems initiated a series of actions culminating in a reactor transient or unit scram. However, based on this study of prevailing operating experiences, reclassifying the compressed-gas systems to a higher safety level will neither prevent nor mitigate the recurrence of such events, nor alleviate nuclear power plant problems caused by inadequate maintenance, operating procedures, and/or practices. Conversely, because most of the problems derived from the sources listed previously, upgrading both maintenance and operating procedures will result not only in substantial improvement in the performance and availability of the compressed-air (and backup nitrogen) systems but also in improved overall plant performance

  17. The formation of massive molecular filaments and massive stars triggered by a magnetohydrodynamic shock wave

    Science.gov (United States)

    Inoue, Tsuyoshi; Hennebelle, Patrick; Fukui, Yasuo; Matsumoto, Tomoaki; Iwasaki, Kazunari; Inutsuka, Shu-ichiro

    2018-05-01

    Recent observations suggest that an intensive molecular cloud collision can trigger massive star/cluster formation. The most important physical process caused by the collision is shock compression. In this paper, the influence of a shock wave on the evolution of a molecular cloud is studied numerically by using isothermal magnetohydrodynamics simulations with the effect of self-gravity. Adaptive mesh refinement and sink particle techniques are used to follow the long-time evolution of the shocked cloud. We find that the shock compression of a turbulent inhomogeneous molecular cloud creates massive filaments, which lie perpendicular to the background magnetic field, as we have pointed out in a previous paper. The massive filament shows global collapse along the filament, which feeds a sink particle located at the collapse center. We observe a high accretion rate, Ṁ_acc > 10^-4 M⊙ yr^-1, high enough to allow the formation of even O-type stars. The most massive sink particle achieves M > 50 M⊙ in a few times 10^5 yr after the onset of the filament collapse.

  18. Observation of Compressive Deformation Behavior of Nuclear Graphite by Digital Image Correlation

    International Nuclear Information System (INIS)

    Kim, Hyunju; Kim, Eungseon; Kim, Minhwan; Kim, Yongwan

    2014-01-01

    Polycrystalline nuclear graphite has been proposed for fuel elements, moderator and reflector blocks, and core support structures in a very high temperature gas-cooled reactor. During reactor operation, graphite core components and core support structures are subjected to various stresses. It is therefore important to understand the mechanism of deformation and fracture of nuclear graphites, and their significance to structural integrity assessment methods. Digital image correlation (DIC) is a powerful tool for measuring the full-field displacement distribution on the surface of specimens. In this study, to gain an understanding of the compressive deformation characteristics of nuclear graphite, the formation of the strain field during a compression test was examined using a commercial DIC system. The non-linear load-displacement characteristic prior to the peak load was shown to be mainly dominated by the presence of localized strains, which resulted in a permanent displacement. Young's modulus was properly calculated from the measured strain

  19. Nuclear transmutation by flux compression

    International Nuclear Information System (INIS)

    Seifritz, W.

    2001-01-01

    A new idea for the transmutation of minor actinides and long- (and even short-) lived fission products is presented. It is based on the property of neutron flux compression in nuclear (fast and/or thermal) reactors possessing spatially non-stationary critical masses. An advantage factor for the burn-up fluence of the elements to be transmuted on the order of 100 or more is obtainable compared with the classical way of transmutation. Three typical examples of such transmuters (a subcritical ring reactor with a rotating reflector, a subcritical ring reactor with a rotating spallation source, the so-called 'pulsed energy amplifier', and a fast burn-wave reactor) are presented and analysed with regard to this purpose. (orig.) [de]

  20. THE VERY MASSIVE STAR CONTENT OF THE NUCLEAR STAR CLUSTERS IN NGC 5253

    Energy Technology Data Exchange (ETDEWEB)

    Smith, L. J. [Space Telescope Science Institute and European Space Agency, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Crowther, P. A. [Department of Physics and Astronomy, University of Sheffield, Sheffield S3 7RH (United Kingdom); Calzetti, D. [Department of Astronomy, University of Massachusetts—Amherst, Amherst, MA 01003 (United States); Sidoli, F., E-mail: lsmith@stsci.edu [London Centre for Nanotechnology, University College London, London WC1E 6BT (United Kingdom)

    2016-05-20

    The blue compact dwarf galaxy NGC 5253 hosts a very young starburst containing twin nuclear star clusters, separated by a projected distance of 5 pc. One cluster (#5) coincides with the peak of the Hα emission and the other (#11) with a massive ultracompact H II region. A recent analysis of these clusters shows that they have a photometric age of 1 ± 1 Myr, in apparent contradiction with the age of 3-5 Myr inferred from the presence of Wolf-Rayet features in the cluster #5 spectrum. We examine Hubble Space Telescope ultraviolet and Very Large Telescope optical spectroscopy of #5 and show that the stellar features arise from very massive stars (VMSs), with masses greater than 100 M⊙, at an age of 1-2 Myr. We further show that the very high ionizing flux from the nuclear clusters can only be explained if VMSs are present. We investigate the origin of the observed nitrogen enrichment in the circumcluster ionized gas and find that the excess N can be produced by massive rotating stars within the first 1 Myr. We find similarities between the NGC 5253 cluster spectrum and those of metal-poor, high-redshift galaxies. We discuss the presence of VMSs in young, star-forming galaxies at high redshift; these should be detected in rest-frame UV spectra to be obtained with the James Webb Space Telescope. We emphasize that population synthesis models with upper mass cutoffs greater than 100 M⊙ are crucial for future studies of young massive star clusters at all redshifts.

  1. The evolution of American nuclear doctrine 1945-1980: from massive retaliation to limited nuclear war

    International Nuclear Information System (INIS)

    Richani, N.

    1983-01-01

    This thesis attempts to demonstrate the evolutionary character of American nuclear doctrine from the beginning of the nuclear age in 1945 until 1980. It also aims at disclosing some of the most important factors that contributed to the doctrine's evolution, namely, technological progress and developments in weaponry, and the shifts that were taking place in the correlation of forces between the two superpowers, the Soviet Union and the United States. The thesis tries to establish the relation, if any, between these two variables (technology and the balance of forces) and the evolution of the doctrine from Massive Retaliation to limited nuclear war. There are certainly many other factors which influenced military doctrine, but this thesis focuses on the factors mentioned above, touching on others where it was thought essential. The thesis concludes by trying to answer the question of whether the purpose of the limited nuclear war doctrine is to keep the initiative in US hands, that is, to put itself on the side with the positive purpose, or not. Refs

  2. The evolution of American nuclear doctrine 1945-1980: from massive retaliation to limited nuclear war

    Energy Technology Data Exchange (ETDEWEB)

    Richani, N [Public Administration Dpt. American Univ. of Beirut (Lebanon)

    1983-12-31

    This thesis attempts to demonstrate the evolutionary character of American nuclear doctrine from the beginning of the nuclear age in 1945 until 1980. It also aims at disclosing some of the most important factors that contributed to the doctrine's evolution, namely, technological progress and developments in weaponry, and the shifts that were taking place in the correlation of forces between the two superpowers, the Soviet Union and the United States. The thesis tries to establish the relation, if any, between these two variables (technology and the balance of forces) and the evolution of the doctrine from Massive Retaliation to limited nuclear war. There are certainly many other factors which influenced military doctrine, but this thesis focuses on the factors mentioned above, touching on others where it was thought essential. The thesis concludes by trying to answer the question of whether the purpose of the limited nuclear war doctrine is to keep the initiative in US hands, that is, to put itself on the side with the positive purpose, or not. Refs.

  3. SeqCompress: an algorithm for biological sequence compression.

    Science.gov (United States)

    Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan

    2014-10-01

    The growth of Next Generation Sequencing technologies presents significant research challenges, specifically to design bioinformatics tools that handle massive amounts of data efficiently. Biological sequence data storage cost has become a noticeable proportion of the total cost of data generation and analysis. In particular, the increase in DNA sequencing rate is significantly outstripping the rate of increase in disk storage capacity, and may eventually exceed available storage. It is essential to develop algorithms that handle large data sets via better memory management. This article presents SeqCompress, a DNA sequence compression algorithm that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm achieves better compression gain than other existing algorithms.
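    As a point of reference for why specialized DNA compressors work, the sketch below shows the trivial baseline of packing the four-letter alphabet into 2 bits per base; SeqCompress itself goes further with a statistical model and arithmetic coding, which this toy does not attempt.

    ```python
    # Baseline 2-bit packing of the ACGT alphabet (8 bits of text per base
    # become 2); SeqCompress's statistical model and arithmetic coder are
    # deliberately not reproduced here.
    def pack_dna(seq: str) -> bytes:
        """Pack an ACGT string into 2 bits per base (length stored separately)."""
        code = {"A": 0, "C": 1, "G": 2, "T": 3}
        out, acc, nbits = bytearray(), 0, 0
        for base in seq:
            acc = (acc << 2) | code[base]
            nbits += 2
            if nbits == 8:
                out.append(acc)
                acc, nbits = 0, 0
        if nbits:
            out.append(acc << (8 - nbits))   # pad the final byte
        return bytes(out)

    print(len(pack_dna("ACGT" * 1000)), "bytes for 4000 bases")  # 1000 bytes
    ```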

  4. The nuclear equation of state

    International Nuclear Information System (INIS)

    Kahana, S.

    1986-01-01

    The role of the nuclear equation of state in determining the fate of the collapsing cores of massive stars is examined in light of both recent theoretical advances in this subject and recent experimental measurements with relativistic heavy ions. The difficulties existing in attempts to bring the softer nuclear matter apparently required by the theory of Type II supernovae into consonance with the heavy ion data are discussed. Relativistic mean field theory is introduced as a candidate for derivation of the equation of state, and a simple form for the saturation compressibility is obtained. 28 refs., 4 figs., 1 tab

  5. The nuclear equation of state

    Energy Technology Data Exchange (ETDEWEB)

    Kahana, S.

    1986-01-01

    The role of the nuclear equation of state in determining the fate of the collapsing cores of massive stars is examined in light of both recent theoretical advances in this subject and recent experimental measurements with relativistic heavy ions. The difficulties existing in attempts to bring the softer nuclear matter apparently required by the theory of Type II supernovae into consonance with the heavy ion data are discussed. Relativistic mean field theory is introduced as a candidate for derivation of the equation of state, and a simple form for the saturation compressibility is obtained. 28 refs., 4 figs., 1 tab.

  6. Proliferation of massive destruction weapons: fantasy or reality?

    International Nuclear Information System (INIS)

    Duval, M.

    2001-01-01

    This article evaluates the threat that weapons of mass destruction (nuclear, chemical, biological) pose for Europe and recalls the existing safeguards against the different forms of nuclear proliferation: legal (the non-proliferation treaty (NPT), the comprehensive nuclear test ban treaty (CTBT), the fissile material cut-off treaty (FMCT), etc.) and technical (fabrication of fissile materials, delays). However, all these safeguards can be overcome, as proven by the activities of some countries. The proliferation situation for the other types of weapons of mass destruction is presented too. (J.S.)

  7. Feasibility of Ericsson type isothermal expansion/compression gas turbine cycle for nuclear energy use

    International Nuclear Information System (INIS)

    Shimizu, Akihiko

    2007-01-01

    A gas turbine with potential demand for next-generation nuclear energy use, such as HTGR power plants, a gas-cooled FBR, or a gas-cooled nuclear fusion reactor, uses helium as the working gas in a closed cycle. The materials constituting the cycle must be kept below their allowable temperatures in terms of mechanical strength and radioactivity containment performance, so the expansion inlet temperature is severely limited. To improve thermal efficiency, an isothermal expansion/isothermal compression Ericsson-type gas turbine cycle should be developed, using the wetted surfaces of the expander/compressor casing and the duct between stators, without depending on an external heat exchanger performing multistage reheating/multistage intercooling. The feasibility of an Ericsson cycle in comparison with a Brayton cycle and a multistage compression/expansion cycle was studied, and the technologies to be developed were clarified. (author)

  8. The investigation on compressed air quality analysis results of nuclear power plants

    International Nuclear Information System (INIS)

    Sung, K. B.; Kim, H. K.; Kim, W. S.

    2000-01-01

    The compressed air system of nuclear power plants provides pneumatic power for both operation and control of various plant equipment, tools, and instrumentation. Included in the air supply systems are the compressors, coolers, moisture separators, dryers, filters, and air receiver tanks that make up the major items of equipment. The service air system provides oil-free compressed air for general plant and maintenance use, and the instrument air system provides dry, oil-free compressed air for both nonessential and essential components and instruments. The NRC recommended periodic checks in Generic Letter 88-14, 'Instrument air supply system problems affecting safety-related equipment'. To ensure that the quality of the instrument air is equivalent to or exceeds the requirements of ISA-S7.3 (1975), air samples are taken at every refueling outage and analyzed for moisture, oil, and particulate content. The overall results satisfied the requirements of ISA-S7.3.

  9. Negative linear compressibility and massive anisotropic thermal expansion in methanol monohydrate.

    Science.gov (United States)

    Fortes, A Dominic; Suard, Emmanuelle; Knight, Kevin S

    2011-02-11

    The vast majority of materials shrink in all directions when hydrostatically compressed; exceptions include certain metallic or polymer foam structures, which may exhibit negative linear compressibility (NLC) (that is, they expand in one or more directions under hydrostatic compression). Materials that exhibit this property at the molecular level--crystalline solids with intrinsic NLC--are extremely uncommon. With the use of neutron powder diffraction, we have discovered and characterized both NLC and extremely anisotropic thermal expansion, including negative thermal expansion (NTE) along the NLC axis, in a simple molecular crystal (the deuterated 1:1 compound of methanol and water). Apically linked rhombuses, which are formed by the bridging of hydroxyl-water chains with methyl groups, extend along the axis of NLC/NTE and lead to the observed behavior.

  10. Biliary-duodenal anastomosis using magnetic compression following massive resection of small intestine due to strangulated ileus after living donor liver transplantation: a case report.

    Science.gov (United States)

    Saito, Ryusuke; Tahara, Hiroyuki; Shimizu, Seiichi; Ohira, Masahiro; Ide, Kentaro; Ishiyama, Kohei; Kobayashi, Tsuyoshi; Ohdan, Hideki

    2017-12-01

    Despite improvements in surgical techniques and postoperative management of patients with liver transplantation, biliary complications remain among the most common and important adverse events. We present the first case of choledochoduodenostomy using magnetic compression following a massive resection of the small intestine due to strangulated ileus after living donor liver transplantation. The 54-year-old female patient had end-stage liver disease secondary to liver cirrhosis due to primary sclerosing cholangitis with ulcerative colitis. Five years earlier, she had received living donor liver transplantation using a left lobe graft, with resection of the extrahepatic bile duct and Roux-en-Y anastomosis. The patient experienced sudden onset of intense abdominal pain. An emergency surgery was performed, and the diagnosis was confirmed as strangulated ileus due to twisting of the mesentery. Resection of the massive small intestine, including the choledochojejunostomy, was performed. Only 70 cm of the small intestine remained. She was transferred to our hospital with an external drainage tube from the biliary cavity and a jejunostomy. We initiated total parenteral nutrition, and percutaneous transhepatic biliary drainage was established to treat the cholangitis. Computed tomography revealed that the biliary duct was close to the duodenum; hence, we planned magnetic compression anastomosis of the biliary duct and the duodenum. The daughter magnet was placed in the biliary drainage tube, and the parent magnet was positioned in the bulbus duodeni using a fiberscope. Anastomosis between the left hepatic duct and the duodenum was accomplished after 25 days, and a biliary drainage stent was placed over the anastomosis to prevent re-stenosis. Contributions to the successful withdrawal of parenteral nutrition were closure of the ileostomy in the adaptive period, preservation of the ileocecal valve, internal drainage of bile, and side-to-side anastomosis.

  11. Massive-scale RDF Processing Using Compressed Bitmap Indexes

    Energy Technology Data Exchange (ETDEWEB)

    Madduri, Kamesh; Wu, Kesheng

    2011-05-26

    The Resource Description Framework (RDF) is a popular data model for representing linked data sets arising from the web, as well as large scientific data repositories such as UniProt. RDF data intrinsically represents a labeled and directed multi-graph. SPARQL is a query language for RDF that expresses subgraph pattern-finding queries on this implicit multigraph in a SQL-like syntax. SPARQL queries generate complex intermediate join queries; to compute these joins efficiently, we propose a new strategy based on bitmap indexes. We store the RDF data in column-oriented structures as compressed bitmaps along with two dictionaries. This paper makes three new contributions. (i) We present an efficient parallel strategy for parsing the raw RDF data, building dictionaries of unique entities, and creating compressed bitmap indexes of the data. (ii) We utilize the constructed bitmap indexes to efficiently answer SPARQL queries, simplifying the join evaluations. (iii) To quantify the performance impact of using bitmap indexes, we compare our approach to the state-of-the-art triple-store RDF-3X. We find that our bitmap index-based approach to answering queries is up to an order of magnitude faster for a variety of SPARQL queries, on gigascale RDF data sets.
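    The query-side idea can be sketched in a few lines: dictionary-encode subjects, keep one bitset per (predicate, object) pair, and answer a two-pattern join with a bitwise AND. Plain Python integers stand in for the compressed bitmap indexes the paper builds.

    ```python
    # Toy bitmap join: dictionary-encode subjects, keep one bitset per
    # (predicate, object) pair, and answer a two-pattern query by bitwise AND.
    from collections import defaultdict

    triples = [
        ("alice", "type", "Person"), ("bob", "type", "Person"),
        ("alice", "worksAt", "LBNL"), ("carol", "worksAt", "LBNL"),
    ]

    subject_id = {}                          # dictionary of unique subjects
    bitmaps = defaultdict(int)               # (predicate, object) -> bitset
    for s, p, o in triples:
        sid = subject_id.setdefault(s, len(subject_id))
        bitmaps[(p, o)] |= 1 << sid

    # SPARQL-style join: ?x type Person . ?x worksAt LBNL
    hits = bitmaps[("type", "Person")] & bitmaps[("worksAt", "LBNL")]
    print([s for s, sid in subject_id.items() if hits >> sid & 1])  # ['alice']
    ```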

  12. Analysis on Japan's long-term energy outlook considering massive deployment of variable renewable energy under nuclear energy scenario

    International Nuclear Information System (INIS)

    Komiyama, Ryoichi; Fujii, Yasumasa

    2012-01-01

    This paper investigates Japan's long-term energy outlook to 2050, considering massive deployment of solar photovoltaic (PV) systems and wind power generation under a nuclear energy scenario. The extensive introduction of PV and wind power systems is expected to play an important role in enhancing electricity supply security after the Fukushima nuclear power accident, which has increased the uncertainty of future additional construction of nuclear power plants in Japan. Against this background, we develop an integrated energy assessment model comprising both an econometric energy demand and supply model and an optimal power generation mix model. The latter model is able to explicitly analyze the impact of output fluctuations of variable renewables at a detailed 10-minute time resolution over 365 consecutive days, incorporating the role of stationary battery technology. Simulation results reveal that the intermittent fluctuation arising from high penetration levels of those renewables is controlled by quick load-following operation of natural gas combined cycle power plants, pumped-storage hydro power, stationary battery technology, and the output suppression of PV and wind power. The results also show that massive penetration of the renewables does not necessarily require a comparable scale of stationary battery capacity. Additionally, in the scenario which assumes the decommissioning of nuclear power plants whose lifetimes exceed 40 years, the required PV capacity in 2050 amounts to more than double the PV installation potential in both building and abandoned farmland areas. (author)

  13. Thermal stress control using waste steel fibers in massive concretes

    Science.gov (United States)

    Sarabi, Sahar; Bakhshi, Hossein; Sarkardeh, Hamed; Nikoo, Hamed Safaye

    2017-11-01

    One of the important subjects in massive concrete structures is the control of the generated heat of hydration and, consequently, the potential for cracking due to thermal stress. In the present study, by using waste turnery steel fibers in massive concrete, the cement content was reduced without changing the compressive strength. By substituting a part of the cement with waste steel fibers, the costs and the generated hydration heat were reduced and the tensile strength was increased. The results showed that by using 0.5% turnery waste steel fibers and consequently reducing the cement content by 32%, the hydration heat was reduced by 23.4% without changing the compressive strength. Moreover, the maximum heat gradient was reduced from 18.5% in the plain concrete sample to 12% in the fiber-reinforced concrete sample.

  14. Time of flight measurements of unirradiated and irradiated nuclear graphite under cyclic compressive load

    Energy Technology Data Exchange (ETDEWEB)

    Bodel, W., E-mail: william.bodel@hotmail.com [Nuclear Graphite Research Group, The University of Manchester (United Kingdom); Atkin, C. [Health and Safety Laboratory, Buxton (United Kingdom); Marsden, B.J. [Nuclear Graphite Research Group, The University of Manchester (United Kingdom)

    2017-04-15

    The time-of-flight technique has been used to investigate the stiffness of nuclear graphite with respect to grade and grain direction. A loading rig was developed to collect time-of-flight measurements during cyclic compressive loading, up to 80% of the material's compressive strength, and subsequent unloading of specimens along the axis of the applied stress. The transmission velocity (related to Young's modulus) decreased with increasing applied stress; depending on the graphite grade and orientation, the modulus then increased, decreased, or remained constant upon unloading. These tests were repeated while observing the microstructure during the load/unload cycles. Initial decreases in transmission velocity with compressive load are attributed to microcrack formation within the filler and binder phases. Three distinct types of behaviour occur on unloading, depending on the grade, irradiation, and loading direction. These different behaviours can be explained in terms of the material microstructure observed from the microscopy performed during loading.

  15. Intelligent transportation systems data compression using wavelet decomposition technique.

    Science.gov (United States)

    2009-12-01

    Intelligent Transportation Systems (ITS) generate massive amounts of traffic data, which poses challenges for data storage, transmission, and retrieval. Data compression and reconstruction techniques play an important role in ITS data processing.
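    A minimal sketch of wavelet-based compression of a traffic series: decompose, discard small detail coefficients, reconstruct. It assumes the PyWavelets package; the wavelet, level, and threshold are illustrative choices, not those of the report.

    ```python
    # Wavelet compression of a synthetic minute-level traffic series: decompose,
    # zero small detail coefficients, reconstruct. Assumes the PyWavelets
    # package; wavelet, level, and threshold are illustrative choices.
    import numpy as np
    import pywt

    rng = np.random.default_rng(6)
    t = np.arange(1440)                      # one day at one-minute resolution
    flow = 200 + 150 * np.sin(2 * np.pi * t / 1440) + 10 * rng.standard_normal(1440)

    coeffs = pywt.wavedec(flow, "db4", level=5)
    kept = [coeffs[0]] + [pywt.threshold(c, 25.0, mode="hard") for c in coeffs[1:]]
    recon = pywt.waverec(kept, "db4")[:1440]

    kept_frac = sum(int(np.count_nonzero(c)) for c in kept) / sum(c.size for c in coeffs)
    print(kept_frac, np.abs(recon - flow).max())   # stored fraction, worst error
    ```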

  16. Massive hydraulic fracturing gas stimulation project

    International Nuclear Information System (INIS)

    Appledorn, C.R.; Mann, R.L.

    1977-01-01

    The Rio Blanco Massive Hydraulic Fracturing Project was fielded in 1974 as a joint Industry/ERDA demonstration to test massive hydraulic fracturing in the formations that were stimulated by the Rio Blanco nuclear fracturing experiment. The project is a companion effort to, and a continuation of, the preceding nuclear stimulation project, which took place in May 1973. 8 figures

  17. Dynamic Relative Compression, Dynamic Partial Sums, and Substring Concatenation

    DEFF Research Database (Denmark)

    Bille, Philip; Christiansen, Anders Roy; Cording, Patrick Hagge

    2017-01-01

    Given a static reference string R and a source string S, a relative compression of S with respect to R is an encoding of S as a sequence of references to substrings of R. Relative compression schemes are a classic model of compression and have recently proved very successful for compressing highly-repetitive massive data sets such as genomes and web-data. We initiate the study of relative compression in a dynamic setting where the compressed source string S is subject to edit operations. The goal is to maintain the compressed representation compactly, while supporting edits and allowing efficient random access to the (uncompressed) source string. We present new data structures that achieve optimal time for updates and queries while using space linear in the size of the optimal relative compression, for nearly all combinations of parameters. We also present solutions for restricted and extended sets of operations.
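    A static version of relative compression is easy to sketch: greedily encode S as (position, length) references into R, with literals where no match exists. The paper's contribution, maintaining such an encoding under edits with efficient random access, is well beyond this toy.

    ```python
    # Static relative compression: greedily encode S as (position, length)
    # references into R, with single-character literals when nothing matches.
    # Quadratic-time toy; the paper's dynamic data structures are not attempted.
    def relative_compress(R: str, S: str):
        refs, i = [], 0
        while i < len(S):
            best_pos, best_len = -1, 0
            for j in range(len(R)):          # longest match of S[i:] within R
                k = 0
                while j + k < len(R) and i + k < len(S) and R[j + k] == S[i + k]:
                    k += 1
                if k > best_len:
                    best_pos, best_len = j, k
            if best_len == 0:
                refs.append(("lit", S[i]))
                i += 1
            else:
                refs.append((best_pos, best_len))
                i += best_len
        return refs

    def decode(R, refs):
        return "".join(c if p == "lit" else R[p:p + c] for p, c in refs)

    R, S = "the quick brown fox", "the quick red fox"
    enc = relative_compress(R, S)
    assert decode(R, enc) == S
    print(enc)
    ```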

  18. Dynamic Relative Compression, Dynamic Partial Sums, and Substring Concatenation

    DEFF Research Database (Denmark)

    Bille, Philip; Cording, Patrick Hagge; Gørtz, Inge Li

    2016-01-01

    Given a static reference string R and a source string S, a relative compression of S with respect to R is an encoding of S as a sequence of references to substrings of R. Relative compression schemes are a classic model of compression and have recently proved very successful for compressing highly-repetitive massive data sets such as genomes and web-data. We initiate the study of relative compression in a dynamic setting where the compressed source string S is subject to edit operations. The goal is to maintain the compressed representation compactly, while supporting edits and allowing efficient random access to the (uncompressed) source string. We present new data structures that achieve optimal time for updates and queries while using space linear in the size of the optimal relative compression, for nearly all combinations of parameters. We also present solutions for restricted and extended sets of operations.

  19. A hybrid data compression approach for online backup service

    Science.gov (United States)

    Wang, Hua; Zhou, Ke; Qin, MingKang

    2009-08-01

    With the popularity of SaaS (Software as a Service), backup service has become a hot topic in storage applications. Due to the large number of backup users, how to reduce the massive data load is a key problem for system designers. Data compression provides a good solution. Traditional data compression applications adopt a single method, which has limitations in some respects: data stream compression can only realize intra-file compression, while de-duplication is used to eliminate inter-file redundant data, so neither alone can meet the compression efficiency needs of backup service software. This paper proposes a novel hybrid compression approach, which includes two levels: global compression and block compression. The former eliminates redundant inter-file copies across different users; the latter adopts data stream compression technology to realize intra-file de-duplication. Several compression algorithms were adopted to measure the compression ratio and CPU time, and the adaptability of each algorithm to different situations is also analyzed. The performance analysis shows that great improvement is made through the hybrid compression policy.
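    The two-level idea can be sketched as content-hash deduplication of chunks across users (global level) followed by stream compression of each surviving chunk (block level). The chunk size, hashing, and in-memory store below are illustrative assumptions.

    ```python
    # Two-level sketch: global deduplication via content hashes of fixed-size
    # chunks, then zlib compression of each surviving chunk. The chunk size
    # and the in-memory store are toy assumptions.
    import hashlib
    import zlib

    store = {}                               # hash -> compressed chunk, shared globally

    def backup(data: bytes, chunk_size: int = 4096):
        """Return a recipe of chunk hashes; only unseen chunks enter the store."""
        recipe = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in store:          # inter-file redundancy across users
                store[digest] = zlib.compress(chunk)   # intra-chunk compression
            recipe.append(digest)
        return recipe

    def restore(recipe):
        return b"".join(zlib.decompress(store[d]) for d in recipe)

    doc = b"report header\n" + b"shared boilerplate\n" * 500
    r1 = backup(doc)
    r2 = backup(doc[:-10] + b"user2 edit")   # a second, mostly identical backup
    assert restore(r1) == doc
    print(len(store), "unique chunks for", len(r1) + len(r2), "chunk references")
    ```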

  20. Deep venous thrombosis due to massive compression by uterine myoma

    Directory of Open Access Journals (Sweden)

    Aleksandra Brucka

    2010-10-01

    A 42-year-old woman, gravida 3, para 3, was admitted to hospital because of painful oedema of her right lower extremity. Initial physical examination revealed a gross, firm tumour filling the entire peritoneal cavity. Doppler ultrasound scan revealed a thrombus in the right common iliac vein, extending to the right femoral and popliteal veins, and partially into the deep calf veins. Computed tomography confirmed the existence of an abdominal tumour, probably deriving from the genital organs, and the presence of a thrombus in the said veins. The patient underwent hysterectomy, where a myomatous uterus was removed. She was put on subcutaneous enoxaparin and compressive therapy of the lower extremities. Symptoms such as pain and oedema receded. A control Doppler scan showed fibrinolysis, partial organization of the thrombus, and final vein recanalisation. After exclusion of risk factors for deep vein thrombosis other than stasis, we conclude that the described pathology was the effect of compression of regional pelvic structures by a uterine myoma.

  1. Giant negative linear compression positively coupled to massive thermal expansion in a metal-organic framework.

    Science.gov (United States)

    Cai, Weizhao; Katrusiak, Andrzej

    2014-07-04

    Materials with negative linear compressibility are sought for various technological applications. Such effects were reported mainly in framework materials. When heated, they typically contract in the same direction of negative linear compression. Here we show that this common inverse relationship rule does not apply to a three-dimensional metal-organic framework crystal, [Ag(ethylenediamine)]NO3. In this material, the direction of the largest intrinsic negative linear compression yet observed in metal-organic frameworks coincides with the strongest positive thermal expansion. In the perpendicular direction, the large linear negative thermal expansion and the strongest crystal compressibility are collinear. This seemingly irrational positive relationship of temperature and pressure effects is explained and the mechanism of coupling of compressibility with expansivity is presented. The positive coupling between compression and thermal expansion in this material enhances its piezo-mechanical response in adiabatic process, which may be used for designing new artificial composites and ultrasensitive measuring devices.

  2. [Common types of massive intraoperative haemorrhage, treatment philosophy and operating skills in pelvic cancer surgery].

    Science.gov (United States)

    Wang, Gang-cheng; Han, Guang-sen; Ren, Ying-kun; Xu, Yong-chao; Zhang, Jian; Lu, Chao-min; Zhao, Yu-zhou; Li, Jian; Gu, Yan-hui

    2013-10-01

    To explore the common types of massive intraoperative bleeding, their clinical characteristics, treatment philosophy, and operating skills in pelvic cancer surgery. We treated massive intraoperative bleeding in 19 patients with pelvic cancer in our department from January 2003 to March 2012. Their clinical data were retrospectively analyzed: the clinical features of massive intraoperative bleeding were analyzed, the treatment experience and lessons were summed up, and the operating skills needed to manage this serious issue were assessed. In this group of 19 patients, 7 cases were of presacral venous plexus bleeding, 5 cases of internal iliac vein bleeding, 6 cases of presacral venous plexus and internal iliac vein bleeding, and one case of internal and external iliac vein bleeding. Six cases of presacral plexus bleeding and 4 cases of internal iliac vein bleeding were treated with suture ligation to stop the bleeding. Six cases of presacral and internal iliac vein bleeding, one case of presacral vein bleeding, and one case of internal iliac vein bleeding were managed with transabdominal perineal incision or transabdominal cotton pad compression hemostasis. One case of internal and external iliac vein bleeding was treated with direct ligation of the external iliac vein and compression hemostasis of the internal iliac vein. Among the 19 patients, 18 cases had effective hemostasis. Their blood loss was 400-1500 ml, and they had a fair postoperative recovery. One patient died due to massive intraoperative bleeding of ca. 4500 ml. Most of the massive intraoperative bleeding during pelvic cancer surgery is from the presacral venous plexus and internal iliac vein. The operator should follow the treatment philosophy of saving the life of the patient above all, properly perform suture ligation or compression hemostasis according to the actual situation, and master the crucial hemostatic operating skills.

  3. Element Production in the S-Cl Region During Carbon Burning in Massive Stars. Using Computer Systems for Modeling of the Nuclear-Reaction Network

    CERN Document Server

    Szalanski, P; Marganeic, A; Gledenov, Yu M; Sedyshev, P V; Machrafi, R; Oprea, A; Padureanu, I; Aranghel, D

    2002-01-01

    This paper presents results of calculations for the nuclear network in the S-Cl region during helium burning in massive stars (25 M⊙) using integrated mathematical systems. The authors also examine other applications of the presented method in different physical tasks.

  4. Fueling-Controlled the Growth of Massive Black Holes

    Science.gov (United States)

    Escala, A.

    2009-05-01

    We study the relation between nuclear massive black holes and their host spheroid gravitational potential. Using AMR numerical simulations, we analyze how gas is transported into the nuclear (central kpc) regions of galaxies. We study gas fueling onto the inner accretion disk (sub-pc scale) and star formation in a massive nuclear disk like those generally found in proto-spheroids (ULIRGs, SCUBA galaxies). These sub-pc resolution simulations of gas fueling, which is mainly depleted by star formation, naturally satisfy the M_BH-M_virial relation, with a scatter considerably less than that observed. We find that a generalized version of the Kennicutt-Schmidt law for starbursts is satisfied, in which the total gas depletion rate (Ṁ_gas = Ṁ_BH + Ṁ_SF) scales as M_gas/t_orbital. See Escala (2007) for more details about this work.

  5. Element production in the S - Cl region during carbon burning in massive stars. Using computer systems for modeling of the nuclear-reaction network

    International Nuclear Information System (INIS)

    Szalanski, P.; Stepinski, M.; Marganiec, A.; Gledenov, Yu.M.; Sedyshev, P.V.; Machrafi, R.; Oprea, A.; Padureanu, I.; Aranghel, D.

    2002-01-01

    This paper presents results of calculations for the nuclear network in the S-Cl region during helium burning in massive stars (25 solar masses) using integrated mathematical systems. The authors also examine other applications of the presented method in different physical tasks. (author)

  6. Compressed beam directed particle nuclear energy generator

    International Nuclear Information System (INIS)

    Salisbury, W.W.

    1985-01-01

    This invention relates to the generation of energy from the fusion of atomic nuclei which are caused to travel towards each other along collision courses, orbiting in common paths having common axes and equal radii. High velocity fusible ion beams are directed along head-on circumferential collision paths in an annular zone wherein beam compression by electrostatic focusing greatly enhances head-on fusion-producing collisions. In one embodiment, a steady radial electric field is imposed on the beams to compress the beams and reduce the radius of the spiral paths for enhancing the particle density. Beam compression is achieved through electrostatic focusing to establish and maintain two opposing beams in a reaction zone

  7. An Experimental Investigation On Minimum Compressive Strength Of Early Age Concrete To Prevent Frost Damage For Nuclear Power Plant Structures In Cold Climates

    International Nuclear Information System (INIS)

    Koh, Kyungtaek; Kim, Dogyeum; Park, Chunjin; Ryu, Gumsung; Park, Jungjun; Lee, Janghwa

    2013-01-01

    Concrete undergoing early frost damage in cold weather will experience significant loss not only of strength, but also of permeability and durability. Accordingly, concrete codes like ACI-306R prescribe a minimum compressive strength and duration of curing to prevent frost damage at an early age and secure the quality of the concrete. Such minimum compressive strength and duration of curing are mostly defined based on the strength development of the concrete. However, concrete subjected to frost damage at an early age may not show a consistent relationship between its strength and durability. Since the durability of concrete is of utmost importance in nuclear power plant structures, this relationship needs to be clarified. Therefore, this study examines the adequacy of the minimum compressive strength specified in codes like ACI-306R by evaluating the strength development and the durability of early age concrete for nuclear power plants with respect to frost damage. The results indicate that the value of 5 MPa specified by concrete standards like ACI-306R as the minimum compressive strength to prevent early frost damage is reasonable in terms of strength development, but appears inappropriate from the viewpoint of resistance to chloride ion penetration and freeze-thaw. Consequently, it is recommended that the minimum compressive strength to prevent early frost damage be defined in terms of not only strength development but also durability, to secure the quality of concrete for nuclear power plants in cold climates.

  8. An Experimental Investigation On Minimum Compressive Strength Of Early Age Concrete To Prevent Frost Damage For Nuclear Power Plant Structures In Cold Climates

    Energy Technology Data Exchange (ETDEWEB)

    Koh, Kyungtaek; Kim, Dogyeum; Park, Chunjin; Ryu, Gumsung; Park, Jungjun; Lee, Janghwa [Korea Institute Construction Technology, Goyang (Korea, Republic of)

    2013-06-15

    Concrete undergoing early frost damage in cold weather will experience significant loss not only of strength, but also of permeability and durability. Accordingly, concrete codes like ACI-306R prescribe a minimum compressive strength and duration of curing to prevent frost damage at an early age and secure the quality of the concrete. Such minimum compressive strength and duration of curing are mostly defined based on the strength development of the concrete. However, concrete subjected to frost damage at an early age may not show a consistent relationship between its strength and durability. Since the durability of concrete is of utmost importance in nuclear power plant structures, this relationship needs to be clarified. Therefore, this study examines the adequacy of the minimum compressive strength specified in codes like ACI-306R by evaluating the strength development and the durability of early age concrete for nuclear power plants with respect to frost damage. The results indicate that the value of 5 MPa specified by concrete standards like ACI-306R as the minimum compressive strength to prevent early frost damage is reasonable in terms of strength development, but appears inappropriate from the viewpoint of resistance to chloride ion penetration and freeze-thaw. Consequently, it is recommended that the minimum compressive strength to prevent early frost damage be defined in terms of not only strength development but also durability, to secure the quality of concrete for nuclear power plants in cold climates.

  9. Opportunities and challenges in applying the compressive sensing framework to nuclear science and engineering

    International Nuclear Information System (INIS)

    Mille, Matthew; Su, Lin; Yazici, Birsen; Xu, X. George

    2011-01-01

    Compressive sensing is a five-year-old theory that has already resulted in an extremely large number of publications in the literature and that has the potential to impact every field of engineering and applied science that has to do with data acquisition and processing. This paper introduces the mathematics, presents a simple demonstration of radiation dose reduction in x-ray CT imaging, and discusses potential applications in nuclear science and engineering. (author)
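
    As a minimal sketch of the sparse-recovery idea at the heart of compressive sensing (the sizes, sensing matrix, and greedy solver below are illustrative, not the authors' CT demonstration): a signal with only a few nonzero entries can be recovered from far fewer random measurements than samples, here via orthogonal matching pursuit in plain NumPy.

        # Toy compressed-sensing demo: recover a k-sparse signal x from
        # m << n random linear measurements y = A @ x via orthogonal
        # matching pursuit (OMP). All sizes are hypothetical.
        import numpy as np

        rng = np.random.default_rng(0)
        n, m, k = 256, 64, 5                   # samples, measurements, sparsity
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
        y = A @ x                                      # compressed measurements

        support, residual = [], y.copy()
        for _ in range(k):
            # pick the column most correlated with the residual ...
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            # ... then re-fit by least squares on the current support
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef

        x_hat = np.zeros(n)
        x_hat[support] = coef
        print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))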

  10. Computational fluid dynamics on a massively parallel computer

    Science.gov (United States)

    Jespersen, Dennis C.; Levit, Creon

    1989-01-01

    A finite difference code was implemented for the compressible Navier-Stokes equations on the Connection Machine, a massively parallel computer. The code is based on the ARC2D/ARC3D program and uses the implicit factored algorithm of Beam and Warming. The code uses odd-even elimination to solve linear systems. Timings and computation rates are given for the code, and a comparison is made with a Cray X-MP.

  11. ORIGIN AND GROWTH OF NUCLEAR STAR CLUSTERS AROUND MASSIVE BLACK HOLES

    International Nuclear Information System (INIS)

    Antonini, Fabio

    2013-01-01

    The centers of stellar spheroids less luminous than ∼10^10 L_☉ are often marked by the presence of nucleated central regions, called 'nuclear star clusters' (NSCs). The origin of NSCs is still unclear. Here we investigate the possibility that NSCs originate from the migration and merger of stellar clusters at the center of galaxies where a massive black hole (MBH) may sit. We show that the observed scaling relation between NSC masses and the velocity dispersion of their host spheroids cannot be reconciled with a purely 'in situ' dissipative formation scenario. On the other hand, the observed relation appears to be in agreement with the predictions of the cluster merger model. A dissipationless formation model also reproduces the observed relation between the size of NSCs and their total luminosity, R ∝ √(L_NSC). When an MBH is included at the center of the galaxy, such dependence becomes substantially weaker than the observed correlation, since the size of the NSC is mainly determined by the fixed tidal field of the MBH. We evolve through dynamical friction a population of stellar clusters in a model of a galactic bulge taking into account dynamical dissolution due to two-body relaxation, starting from a power-law cluster initial mass function and adopting an initial total mass in stellar clusters consistent with the present-day cluster formation efficiency of the Milky Way (MW). The most massive clusters reach the center of the galaxy and merge to form a compact nucleus; after 10^10 years, the resulting NSC has properties that are consistent with the observed distribution of stars in the MW NSC. When an MBH is included at the center of a galaxy, globular clusters are tidally disrupted during inspiral, resulting in NSCs with lower densities than those of NSCs forming in galaxies with no MBHs. We suggest this as a possible explanation for the lack of NSCs in galaxies containing MBHs more massive than ∼10^8 M_☉. Finally, we investigate the orbital

  12. ORIGIN AND GROWTH OF NUCLEAR STAR CLUSTERS AROUND MASSIVE BLACK HOLES

    Energy Technology Data Exchange (ETDEWEB)

    Antonini, Fabio, E-mail: antonini@cita.utoronto.ca [Canadian Institute for Theoretical Astrophysics, University of Toronto, 60 St. George Street, Toronto, Ontario M5S 3H8 (Canada)

    2013-01-20

    The centers of stellar spheroids less luminous than ∼10^10 L_☉ are often marked by the presence of nucleated central regions, called 'nuclear star clusters' (NSCs). The origin of NSCs is still unclear. Here we investigate the possibility that NSCs originate from the migration and merger of stellar clusters at the center of galaxies where a massive black hole (MBH) may sit. We show that the observed scaling relation between NSC masses and the velocity dispersion of their host spheroids cannot be reconciled with a purely 'in situ' dissipative formation scenario. On the other hand, the observed relation appears to be in agreement with the predictions of the cluster merger model. A dissipationless formation model also reproduces the observed relation between the size of NSCs and their total luminosity, R ∝ √(L_NSC). When an MBH is included at the center of the galaxy, such dependence becomes substantially weaker than the observed correlation, since the size of the NSC is mainly determined by the fixed tidal field of the MBH. We evolve through dynamical friction a population of stellar clusters in a model of a galactic bulge taking into account dynamical dissolution due to two-body relaxation, starting from a power-law cluster initial mass function and adopting an initial total mass in stellar clusters consistent with the present-day cluster formation efficiency of the Milky Way (MW). The most massive clusters reach the center of the galaxy and merge to form a compact nucleus; after 10^10 years, the resulting NSC has properties that are consistent with the observed distribution of stars in the MW NSC. When an MBH is included at the center of a galaxy, globular clusters are tidally disrupted during inspiral, resulting in NSCs with lower densities than those of NSCs forming in galaxies with no MBHs. We suggest this as a possible explanation for the lack of NSCs in galaxies containing MBHs more massive

  13. Bill on the fight against the proliferation of weapons of mass destruction and their delivery vectors; Projet de Loi relatif a la lutte contre la proliferation des armes de destruction massive et de leurs vecteurs

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2011-07-01

    This bill addresses several issues: the fight against the proliferation of weapons of mass destruction (nuclear weapons, nuclear materials, biological weapons, and chemical weapons), the fight against the proliferation of their delivery vectors, dual-use goods, and the use of these weapons and vectors in acts of terrorism.

  14. The Fate of Massive Black Holes in Gas-Rich Galaxy Mergers

    Science.gov (United States)

    Escala, A.; Larson, R. B.; Coppi, P. S.; Mardones, D.

    2006-06-01

    Using SPH numerical simulations, we investigate the effects of gas on the inspiral and merger of a massive black hole binary. This study is motivated by the very massive nuclear gas disks observed in the central regions of merging galaxies. Here we present results that expand on the treatment in previous works (Escala et al. 2004, 2005) by studying the evolution of a binary with different black hole masses in a massive gas disk.

  15. Cosmological Particle Data Compression in Practice

    Science.gov (United States)

    Zeyen, M.; Ahrens, J.; Hagen, H.; Heitmann, K.; Habib, S.

    2017-12-01

    In cosmological simulations trillions of particles are handled and several terabytes of unstructured particle data are generated in each time step. Transferring this data directly from memory to disk in an uncompressed way results in a massive load on I/O and storage systems. Hence, one goal of domain scientists is to compress the data before storing it to disk while minimizing the loss of information. To prevent reading back uncompressed data from disk, this can be done in an in-situ process. Since the simulation continuously generates data, the available time for the compression of one time step is limited. Therefore, the evaluation of compression techniques has shifted from focusing only on compression rates to including run-times and scalability. In recent years several compression techniques for cosmological data have become available. These techniques can be either lossy or lossless. For both cases, this study aims to evaluate and compare the state-of-the-art compression techniques for unstructured particle data. It focuses on the techniques available in the Blosc framework with its multi-threading support, the XZ Utils toolkit with the LZMA algorithm that achieves high compression rates, and the widespread FPZIP and ZFP methods for lossy compression. For the investigated compression techniques, quantitative performance indicators such as compression rates, run-time/throughput, and reconstruction errors are measured. Based on these factors, this study offers a comprehensive analysis of the individual techniques and discusses their applicability for in-situ compression. In addition, domain-specific measures are evaluated on the reconstructed data sets, and the relative error rates and statistical properties are analyzed and compared. Based on this study, future challenges and directions in the compression of unstructured cosmological particle data are identified.
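
    For orientation, a single-threaded sketch of the kind of measurement the study performs, using stdlib codecs as stand-ins for the Blosc and XZ/LZMA pipelines (data, sizes, and settings are hypothetical; random mantissas compress poorly, so real simulation data fares better):

        # Benchmark lossless codecs on synthetic particle data: report
        # compression ratio and throughput, the two axes the study weighs.
        import time, zlib, lzma
        import numpy as np

        particles = np.random.default_rng(1).normal(size=2**20).astype(np.float32)
        raw = particles.tobytes()

        for name, compress in (("zlib", lambda b: zlib.compress(b, 6)),
                               ("lzma", lambda b: lzma.compress(b, preset=1))):
            t0 = time.perf_counter()
            out = compress(raw)
            dt = time.perf_counter() - t0
            print(f"{name}: ratio {len(raw) / len(out):.2f}x, "
                  f"throughput {len(raw) / dt / 2**20:.1f} MiB/s")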

  16. Bill on the fight against the proliferation of weapons of mass destruction and their delivery vectors

    International Nuclear Information System (INIS)

    2011-01-01

    This bill addresses several issues: the fight against the proliferation of weapons of mass destruction (nuclear weapons, nuclear materials, biological weapons, and chemical weapons), the fight against the proliferation of their delivery vectors, dual-use goods, and the use of these weapons and vectors in acts of terrorism.

  17. Luminous Infrared Galaxies. III. Multiple Merger, Extended Massive Star Formation, Galactic Wind, and Nuclear Inflow in NGC 3256

    Science.gov (United States)

    Lípari, S.; Díaz, R.; Taniguchi, Y.; Terlevich, R.; Dottori, H.; Carranza, G.

    2000-08-01

    -line ratios (N II/Hα, S II/Hα, S II/S II), and FWHM (Hα) maps for the central region (30'' × 30''; r_max ~ 22'' ~ 4 kpc), with a spatial resolution of 1''. In the central region (r ~ 5-6 kpc) we detected that the nuclear starburst and the extended giant H II regions (in the spiral arms) have very similar properties, i.e., high-metallicity and low-ionization spectra, with T_eff = 35,000 K, solar abundance, a range of T_e ~ 6000-7000 K, and N_e ~ 100-1000 cm^-3. The nuclear and extended outflow shows properties typical of galactic winds/shocks, associated with the nuclear starburst. We suggest that the interaction between dynamical effects, the galactic wind (outflow), low-energy cosmic rays, and the molecular+ionized gas (probably in the inflow phase) could be the mechanism that generates the ``similar extended properties in the massive star formation, at a scale of 5-6 kpc!'' We have also studied the presence of the close merger/interacting system NGC 3256C (at ~150 kpc, ΔV = -100 km s^-1) and the possible association between the NGC 3256 and 3263 groups of galaxies. In conclusion, these results suggest that NGC 3256 is the product of a multiple merger, which generated an extended massive star formation process with an associated galactic wind plus a nuclear inflow. Therefore, NGC 3256 is another example in which the relation between mergers and extreme starbursts (and the powerful galactic wind, ``multiple'' Type II supernova explosions) plays an important role in the evolution of galaxies (the hypothesis of Rieke et al., Joseph et al., Terlevich et al., Heckman et al., and Lípari et al.). Based on observations obtained at the Hubble Space Telescope (HST; Wide Field Planetary Camera 2 [WFPC2] and NICMOS) satellite; International Ultraviolet Explorer (IUE) satellite; European Southern Observatory (ESO, NTT), Chile; Cerro Tololo Inter-American Observatory (CTIO), Chile; Complejo Astronómico el Leoncito (CASLEO), Argentina; Estación Astrofísica de Bosque Alegre (BALEGRE), Argentina.

  18. Nuclear data compression and reconstruction via discrete wavelet transform

    Energy Technology Data Exchange (ETDEWEB)

    Park, Young Ryong; Cho, Nam Zin [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1998-12-31

    Discrete wavelet transforms (DWTs) are a relatively recent mathematical development and are beginning to be used in various fields. The wavelet transform can be used to compress signals and images due to its inherent properties. We applied wavelet-transform compression and reconstruction to neutron cross-section data. Numerical tests illustrate that signal compression using wavelets is very effective in reducing the required storage space. 7 refs., 4 figs., 3 tabs. (Author)
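
    The idea applied here can be illustrated with a one-level Haar transform and coefficient thresholding in plain NumPy (a deliberately minimal stand-in; the multi-level DWT on actual neutron cross-section data is not reproduced, and the test signal below is invented):

        # Compress a smooth 1-D signal by zeroing small wavelet detail
        # coefficients, then reconstruct and check the error.
        import numpy as np

        def haar_forward(x):
            avg = (x[0::2] + x[1::2]) / np.sqrt(2)   # smooth half
            det = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail half
            return avg, det

        def haar_inverse(avg, det):
            x = np.empty(2 * avg.size)
            x[0::2] = (avg + det) / np.sqrt(2)
            x[1::2] = (avg - det) / np.sqrt(2)
            return x

        signal = np.exp(-np.linspace(0.0, 5.0, 1024))   # cross-section-like decay
        avg, det = haar_forward(signal)
        det[np.abs(det) < 1e-4] = 0.0                   # drop negligible details
        kept = avg.size + np.count_nonzero(det)
        rec = haar_inverse(avg, det)
        print(f"kept {kept}/{signal.size} coefficients, "
              f"max error {np.abs(rec - signal).max():.2e}")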

  19. Nuclear data compression and reconstruction via discrete wavelet transform

    Energy Technology Data Exchange (ETDEWEB)

    Park, Young Ryong; Cho, Nam Zin [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1997-12-31

    Discrete wavelet transforms (DWTs) are a relatively recent mathematical development and are beginning to be used in various fields. The wavelet transform can be used to compress signals and images due to its inherent properties. We applied wavelet-transform compression and reconstruction to neutron cross-section data. Numerical tests illustrate that signal compression using wavelets is very effective in reducing the required storage space. 7 refs., 4 figs., 3 tabs. (Author)

  20. Nonpainful wide-area compression inhibits experimental pain.

    Science.gov (United States)

    Honigman, Liat; Bar-Bachar, Ofrit; Yarnitsky, David; Sprecher, Elliot; Granovsky, Yelena

    2016-09-01

    Compression therapy, a well-recognized treatment for lymphoedema and venous disorders, pressurizes limbs and generates massive non-noxious afferent sensory barrages. The aim of this study was to examine whether such afferent activity has an analgesic effect when applied to the lower limbs, hypothesizing that larger compression areas induce stronger analgesic effects, and whether this effect correlates with conditioned pain modulation (CPM). Thirty young healthy subjects received painful heat and pressure stimuli (47°C for 30 seconds, forearm; 300 kPa for 15 seconds, wrist) before and during 3 compression protocols of either SMALL (up to ankles), MEDIUM (up to knees), or LARGE (up to hips) compression areas. Conditioned pain modulation (heat pain conditioned by noxious cold water) was tested before and after each compression protocol. The LARGE protocol induced more analgesia for heat than the SMALL protocol (P < 0.001). The analgesic effect interacted with gender (P = 0.015). The LARGE protocol was more efficient for females, whereas the MEDIUM protocol was more efficient for males. Pressure pain was reduced by all protocols (P < 0.001) with no differences between protocols and no gender effect. Conditioned pain modulation was more efficient than the compression-induced analgesia. For the LARGE protocol, precompression CPM efficiency positively correlated with compression-induced analgesia. Large body area compression exerts an area-dependent analgesic effect on experimental pain stimuli. The observed correlation with pain inhibition in response to robust non-noxious sensory stimulation may suggest that compression therapy shares similar mechanisms with inhibitory pain modulation assessed through CPM.

  1. Cloud Optimized Image Format and Compression

    Science.gov (United States)

    Becker, P.; Plesea, L.; Maurer, T.

    2015-04-01

    Cloud-based image storage and processing requires re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assume fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These assumptions no longer hold in cloud-based elastic storage and computation environments. This paper provides details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volume stored and reduces the data transferred, but the reduced data size must be balanced against the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include its simple-to-implement algorithm, which enables it to be accessed efficiently using JavaScript. Combining this new cloud-based image storage format and compression will help resolve some of the challenges of big image data on the internet.
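
    The "controlled lossy" notion can be made concrete with a small error-bounded quantization sketch in the spirit of LERC (this is not the actual LERC format; names, data, and the zlib back end are illustrative): quantizing each value to a step of twice the error tolerance guarantees a maximum reconstruction error, and a lossless coder then shrinks the resulting small integers.

        # Error-bounded lossy compression: |reconstructed - original| <= max_error.
        import zlib
        import numpy as np

        def compress_block(values, max_error):
            lo = float(values.min())
            q = np.round((values - lo) / (2.0 * max_error)).astype(np.uint32)
            return lo, zlib.compress(q.tobytes())

        def decompress_block(lo, payload, max_error, n):
            q = np.frombuffer(zlib.decompress(payload), dtype=np.uint32, count=n)
            return lo + q * (2.0 * max_error)

        dem = np.cumsum(np.random.default_rng(2).normal(0, 0.1, 10**6))  # fake terrain
        lo, payload = compress_block(dem, max_error=0.05)
        rec = decompress_block(lo, payload, 0.05, dem.size)
        assert np.abs(rec - dem).max() <= 0.05 + 1e-9
        print(f"{dem.nbytes} bytes -> {len(payload)} bytes")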

  2. Techniques for data compression in experimental nuclear physics problems

    International Nuclear Information System (INIS)

    Byalko, A.A.; Volkov, N.G.; Tsupko-Sitnikov, V.M.

    1984-01-01

    Techniques and approaches for data compression during physical experiments are evaluated. Data compression algorithms are divided into three groups: the first includes algorithms based on coding, which are characterized only by average statistics over data files; the second includes algorithms with data-processing elements; the third, algorithms for storing converted data. The techniques based on data conversion are judged the most promising: they combine high compression efficiency with fast response, and permit storage of information close to the original.

  3. Modeling basic creep in concrete at early-age under compressive and tensile loading

    Energy Technology Data Exchange (ETDEWEB)

    Hilaire, Adrien, E-mail: adrien.hilaire@ens-cachan.fr [ENS Cachan/CNRS UMR8535/UPMC/PRES UniverSud Paris, Cachan (France); Benboudjema, Farid; Darquennes, Aveline; Berthaud, Yves [ENS Cachan/CNRS UMR8535/UPMC/PRES UniverSud Paris, Cachan (France); Nahas, Georges [ENS Cachan/CNRS UMR8535/UPMC/PRES UniverSud Paris, Cachan (France); Institut de radioprotection et de sureté nucléaire, Fontenay-aux-Roses (France)

    2014-04-01

    A numerical model has been developed to predict early-age cracking in massive concrete structures, and especially concrete nuclear containment vessels. The major phenomena are included: hydration, heat diffusion, autogenous and thermal shrinkage, creep, and cracking. Since the structures studied are massive, drying is not taken into account. Such modeling requires the identification of several material parameters. Literature data are used to validate the basic creep model. A massive wall, representative of a concrete nuclear containment, is simulated; the predicted cracking is consistent with observation and is found to be highly sensitive to the creep phenomenon.

  4. The numerical study of the compressible rising of nuclear fireball at low altitude

    International Nuclear Information System (INIS)

    Wang Lin; Zheng Yi; Cheng Xianyou

    2010-01-01

    To study the evolution of a nuclear fireball during the phase of compressible rising, the pressure and density were computed numerically. The distribution of the fireball's parameters changed during its rise: the pressure in the upper part of the fireball increased while that in the lower part decreased. The dilute region initially lying in the middle of the fireball moved upward; meanwhile, the density gradient in the upper and side parts increased, contrary to the change of density beneath the fireball. The computed density results agreed very well with experimental shadowgraphs. (authors)

  5. Parallel Tensor Compression for Large-Scale Scientific Data.

    Energy Technology Data Exchange (ETDEWEB)

    Kolda, Tamara G. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ballard, Grey [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Austin, Woody Nathan [Univ. of Texas, Austin, TX (United States)

    2015-10-01

    As parallel computing trends towards the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data. By viewing the data as a dense five way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 10000 on real-world data sets with negligible loss in accuracy. So that we can operate on such massive data, we present the first-ever distributed memory parallel implementation for the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments.
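
    A single-node sketch of the underlying idea (the paper's contribution is the distributed-memory implementation, which this NumPy toy does not attempt; tensor sizes and ranks are hypothetical, and random data lacks the low-rank structure that makes real simulation data compressible):

        # Tucker compression via truncated higher-order SVD (HOSVD):
        # store a small core tensor plus one factor matrix per mode.
        import numpy as np

        def unfold(T, mode):
            return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

        def mode_multiply(T, M, mode):
            Tm = np.moveaxis(T, mode, 0)
            out = (M @ Tm.reshape(Tm.shape[0], -1)).reshape((M.shape[0],) + Tm.shape[1:])
            return np.moveaxis(out, 0, mode)

        def tucker_hosvd(T, ranks):
            factors = []
            for mode, r in enumerate(ranks):
                U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
                factors.append(U[:, :r])          # leading left singular vectors
            core = T
            for mode, U in enumerate(factors):
                core = mode_multiply(core, U.T, mode)
            return core, factors

        T = np.random.rand(64, 64, 64)            # stand-in for simulation data
        core, factors = tucker_hosvd(T, (8, 8, 8))
        stored = core.size + sum(U.size for U in factors)
        print(f"compression ratio: {T.size / stored:.0f}x")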

  6. Coming to grips with nuclear winter

    International Nuclear Information System (INIS)

    Scherr, S.J.

    1985-01-01

    This editorial examines the politics related to the concept of nuclear winter, a term used to describe temperature changes brought on by the injection of smoke into the atmosphere from the massive fires set off by nuclear explosions. Such climate change alone could cause crop failures and lead to massive starvation. The author suggests that the prospect of a nuclear winter should be a deterrent to any nuclear exchange.

  7. Photonic compressive sensing enabled data efficient time stretch optical coherence tomography

    Science.gov (United States)

    Mididoddi, Chaitanya K.; Wang, Chao

    2018-03-01

    Photonic time stretch (PTS) has enabled real-time spectral-domain optical coherence tomography (OCT). However, this method generates a torrent of massive data at GHz stream rates, which must be captured at the Nyquist rate. If the OCT interferogram signal is sparse in the Fourier domain, which is always true for samples with a limited number of layers, it can be captured at a lower (sub-Nyquist) acquisition rate using compressive sensing. In this work we report a data-compressed PTS-OCT system based on photonic compressive sensing, achieving 66% compression at a low acquisition rate of 50 MHz and a measurement speed of 1.51 MHz per depth profile. A new method has also been proposed to improve the system with all-optical random pattern generation, which completely avoids the electronic bottleneck of traditional pseudorandom binary sequence (PRBS) generators.

  8. Massively Parallel QCD

    International Nuclear Information System (INIS)

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampap, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

    2007-01-01

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites, while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect, with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong-interaction physics theoretical results.

  9. Nuclear compression effects on pion production in nuclear collisions

    International Nuclear Information System (INIS)

    Sano, M.; Gyulassy, M.; Wakai, M.; Kitazoe, Y.

    1984-11-01

    The pion multiplicity produced in nuclear collisions between 0.2 and 2 AGeV is calculated assuming shock formation. We also correct the procedure of extracting the nuclear equation of state as proposed by Stock et al. The nuclear equation of state would have to be extremely stiff for this model to reproduce the observed multiplicities. The assumptions of this model are critically analyzed. (author)

  10. Early-age behaviour of concrete in massive structures, experimentation and modelling

    International Nuclear Information System (INIS)

    Zreiki, J.; Bouchelaghem, F.; Chaouche, M.

    2010-01-01

    This study focuses on the behaviour of concrete at early age in massive structures, in relation to the prediction of both cracking risk and residual stresses, which is still a challenging task. In this paper, a 3D thermo-chemo-mechanical model has been developed, on the basis of complete material-characterization experiments, in order to predict the early-age development of strains and residual stresses and to assess the risk of cracking in massive concrete structures. The parameters of the proposed model were identified for two different concretes, a High Performance Concrete and a Fibrous Self-Compacted Concrete, from simple laboratory experiments: uniaxial tension and compression tests, dynamic Young's modulus measurements, free and autogenous shrinkage, and semi-adiabatic calorimetry. The proposed model has been implemented in a Finite Element code, and numerical simulations of the laboratory tests have confirmed the model's consistency. Furthermore, early-age experiments conducted on massive structures have also been simulated, in order to investigate the predictive capability of the model and to assess its performance in practical situations where varying temperatures are involved.

  11. The SINS/zC-SINF survey of z ∼ 2 galaxy kinematics: Evidence for powerful active galactic nucleus-driven nuclear outflows in massive star-forming galaxies

    Energy Technology Data Exchange (ETDEWEB)

    Förster Schreiber, N. M.; Genzel, R.; Kurk, J. D.; Lutz, D.; Tacconi, L. J.; Wuyts, S.; Bandara, K.; Buschkamp, P.; Davies, R.; Eisenhauer, F.; Lang, P. [Max-Planck-Institut für Extraterrestrische Physik, Giessenbachstrasse, D-85748 Garching (Germany); Newman, S. F. [Department of Astronomy, Hearst Field Annex, University of California, Berkeley, CA 94720 (United States); Burkert, A. [Universitäts-Sternwarte, Ludwig-Maximilians-Universität, Scheinerstrasse 1, D-81679 München (Germany); Carollo, C. M.; Lilly, S. J. [Institute for Astronomy, Department of Physics, Eidgenössische Technische Hochschule, 8093-CH Zürich (Switzerland); Cresci, G. [Istituto Nazionale di Astrofisica—Osservatorio Astronomico di Bologna, Via Ranzani 1, I-40127 Bologna (Italy); Daddi, E. [CEA Saclay, DSM/IRFU/SAp, F-91191 Gif-sur-Yvette (France); Hicks, E. K. S. [Department of Astronomy, University of Washington, P.O. Box 351580, Seattle, WA 98195-1580 (United States); Mainieri, V. [European Southern Observatory, Karl-Schwarzschild-Strasse 2, D-85748 Garching (Germany); Mancini, C. [Istituto Nazionale di Astrofisica—Osservatorio Astronomico di Padova, Vicolo dell' Osservatorio 5, I-35122 Padova (Italy); and others

    2014-05-20

    We report the detection of ubiquitous powerful nuclear outflows in massive (≥10^11 M_☉) z ∼ 2 star-forming galaxies (SFGs), which are plausibly driven by an active galactic nucleus (AGN). The sample consists of the eight most massive SFGs from our SINS/zC-SINF survey of galaxy kinematics with the imaging spectrometer SINFONI, six of which have sensitive high-resolution adaptive optics-assisted observations. All of the objects are disks hosting a significant stellar bulge. The spectra in their central regions exhibit a broad component in Hα and forbidden [N II] and [S II] line emission, with typical velocity FWHM ∼ 1500 km s^-1, [N II]/Hα ratio ≈ 0.6, and intrinsic extent of 2-3 kpc. These properties are consistent with warm ionized gas outflows associated with Type 2 AGN, the presence of which is confirmed via independent diagnostics in half the galaxies. The data imply a median ionized gas mass outflow rate of ∼60 M_☉ yr^-1 and mass loading of ∼3. At larger radii, a weaker broad component is detected but with lower FWHM ∼ 485 km s^-1 and [N II]/Hα ≈ 0.35, characteristic for star formation-driven outflows as found in the lower-mass SINS/zC-SINF galaxies. The high inferred mass outflow rates and frequent occurrence suggest that the nuclear outflows efficiently expel gas out of the centers of the galaxies with high duty cycles and may thus contribute to the process of star formation quenching in massive galaxies. Larger samples at high masses will be crucial in confirming the importance and energetics of the nuclear outflow phenomenon and its connection to AGN activity and bulge growth.

  12. Seeding magnetic fields for laser-driven flux compression in high-energy-density plasmas.

    Science.gov (United States)

    Gotchev, O V; Knauer, J P; Chang, P Y; Jang, N W; Shoup, M J; Meyerhofer, D D; Betti, R

    2009-04-01

    A compact, self-contained magnetic-seed-field generator (5 to 16 T) is the enabling technology for a novel laser-driven flux-compression scheme in laser-driven targets. A magnetized target is directly irradiated by a kilojoule or megajoule laser to compress the preseeded magnetic field to thousands of teslas. A fast (300 ns), 80 kA current pulse delivered by a portable pulsed-power system is discharged into a low-mass coil that surrounds the laser target. A >15 T target field has been demonstrated using a hot spot of a compressed target. This can lead to the ignition of massive shells imploded at low velocity, a way of reaching higher gains than is possible with conventional ICF.
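
    The expected amplification follows from ideal flux freezing, quoted here as a textbook orientation rather than from the paper itself: the magnetic flux through the imploding region is conserved, so

        \Phi = \pi R^{2} B = \text{const}
        \quad\Longrightarrow\quad
        B_f = B_0 \left( \frac{R_0}{R_f} \right)^{2},

    and a 15 T seed field reaches the kilotesla range for a radial convergence ratio R_0/R_f of roughly 10.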

  13. Template based parallel checkpointing in a massively parallel computer system

    Science.gov (United States)

    Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN

    2009-01-13

    A method and apparatus for a template-based parallel checkpoint save for a massively parallel supercomputer system, using a parallel variation of the rsync protocol and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored, for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high-speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.

  14. The discovery of nuclear compression phenomena in relativistic heavy-ion collisions

    International Nuclear Information System (INIS)

    Schmidt, H.R.

    1991-01-01

    This article has attempted to review more than 15 years of research on shock compression phenomena, which is closely related to the goal of determining the nuclear EOS. Exciting progress has been made in this field over the last years, and the fundamental physics of relativistic heavy-ion collisions has been well established. Overwhelming experimental evidence for the existence of shock compression has been extracted from the data. While early, inclusive measurements had been rather inconclusive, the advent of 4π detectors like the GSI-LBL Plastic Ball enabled the outstanding discovery of collective flow effects, as predicted by fluid-dynamical calculations. The particular case of conical Mach shock waves, anticipated for asymmetric collisions, has not been observed. What are the reasons? Surprisingly, the maximum energy of 2.1 GeV/nucleon for heavy ions at the BEVALAC was found to be too low for Mach shock waves to occur. The small ^20Ne nucleus is stopped in the heavy Au target. A Mach cone, however, even if it had developed in the early stage of the collision, would be wiped out by thermal motion in the process of slowing the projectile down to rest. A comparison of the data with models hints towards a rather hard EOS, although a soft one cannot be excluded definitively. A quantitative extraction is aggravated by a number of in-medium and final-state effects which influence the calculated observables in a similar fashion as different choices of an EOS. Thus, as of now, precise knowledge of the EOS of hot and dense matter is still an open question and needs further investigation. (orig.)

  15. Prediction of concrete compressive strength considering humidity and temperature in the construction of nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Seung Hee; Jang, Kyung Pil [Department of Civil and Environmental Engineering, Myongji University, Yongin (Korea, Republic of); Bang, Jin-Wook [Department of Civil Engineering, Chungnam National University, Daejeon (Korea, Republic of); Lee, Jang Hwa [Structural Engineering Research Division, Korea Institute of Construction Technology (Korea, Republic of); Kim, Yun Yong, E-mail: yunkim@cnu.ac.kr [Structural Engineering Research Division, Korea Institute of Construction Technology (Korea, Republic of)

    2014-08-15

    Highlights: • Compressive strength tests for three concrete mixes were performed. • The parameters of the humidity-adjusted maturity function were determined. • Strength can be predicted considering temperature and relative humidity. - Abstract: This study proposes a method for predicting compressive strength development at early ages for concretes used in the construction of nuclear power plants. Three representative mixes with strengths of 6000 psi (41.4 MPa), 4500 psi (31.0 MPa), and 4000 psi (27.6 MPa) were selected and tested under various curing conditions; the temperature ranged from 10 to 40 °C, and the relative humidity from 40 to 100%. In order to consider not only the effect of temperature but also that of humidity, an existing model, the humidity-adjusted maturity function, was adopted, and the parameters used in the function were determined from the test results. A series of tests was also performed under a curing condition of variable temperature and constant humidity, and a comparison between the measured and predicted strengths was made for verification.
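
    A hedged sketch of the maturity-method idea the study builds on (the functional form of the humidity factor and all constants below are generic textbook choices, not the parameters identified in the paper):

        # Equivalent age via the Arrhenius maturity function, with each time
        # increment additionally weighted by a relative-humidity factor.
        import numpy as np

        E_OVER_R = 4000.0     # activation energy / gas constant [K], typical
        T_REF = 293.15        # reference curing temperature, 20 degC [K]

        def humidity_factor(h, a=5.0):
            # illustrative reduction of maturity growth at low humidity
            return 1.0 / (1.0 + (a * (1.0 - h)) ** 4)

        def equivalent_age(dt_hours, T_kelvin, rh):
            rate = np.exp(-E_OVER_R * (1.0 / T_kelvin - 1.0 / T_REF))
            return float(np.sum(humidity_factor(rh) * rate * dt_hours))

        # 72 one-hour steps at 10 degC and 70% relative humidity:
        t_eq = equivalent_age(np.ones(72), np.full(72, 283.15), np.full(72, 0.70))
        print(f"equivalent age: {t_eq:.1f} h at 20 degC")  # < 72 h, as expected

    The predicted strength then follows by evaluating the mix's strength-versus-equivalent-age curve at t_eq.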

  16. Agmatine inhibits nuclear factor-κB nuclear translocation in acute spinal cord compression injury rat model

    Directory of Open Access Journals (Sweden)

    Doaa M. Samy

    2016-09-01

    Secondary damage after acute spinal cord compression injury (SCCI) exacerbates the initial insult. Nuclear factor kappa-B (NF-κB-p65) activation is involved in the deleterious effects of SCCI. Agmatine (Agm) has shown neuroprotection against various CNS injuries. However, the impact of Agm on NF-κB signaling in acute SCCI remains to be investigated. The present study compared the effectiveness of Agm therapy and decompression laminectomy (DL) in functional recovery, oxidative stress, inflammatory and apoptotic responses, and modulation of NF-κB activation in an acute SCCI rat model. Rats were either sham-operated or subjected to SCCI at T8-9, using a 2-Fr. catheter. SCCI rats were randomly treated with DL at T8-9, intraperitoneal Agm (100 mg/kg/day), combined (DL/Agm) treatment, or saline (n = 16/group). After 28 days of neurological follow-up, spinal cords were either subjected to biochemical measurement of oxidative stress and inflammatory markers or to histopathology and immunohistochemistry for NF-κB-p65 and caspase-3 expression (n = 8/group). Agm was comparable to DL in facilitating recovery of neurological function, reducing inflammation (TNF-α/interleukin-6), and reducing apoptosis. Agm was distinctive in combating oxidative stress. Agm's neuroprotective effects were paralleled by inhibition of NF-κB-p65 nuclear translocation. Combined pharmacological and surgical intervention proved superior in functional recovery. In conclusion, the present research suggests a new mechanism for Agm neuroprotection in rat SCCI through inhibition of NF-κB activation.

  17. Massive branes

    International Nuclear Information System (INIS)

    Bergshoeff, E.; Ortin, T.

    1998-01-01

    We investigate the effective world-volume theories of branes in a background given by (the bosonic sector of) 10-dimensional massive IIA supergravity ("massive branes") and their M-theoretic origin. In the case of the solitonic 5-brane of type IIA superstring theory the construction of the Wess-Zumino term in the world-volume action requires a dualization of the massive Neveu-Schwarz/Neveu-Schwarz target-space 2-form field. We find that, in general, the effective world-volume theory of massive branes contains new world-volume fields that are absent in the massless case, i.e. when the mass parameter m of massive IIA supergravity is set to zero. We show how these new world-volume fields can be introduced in a systematic way. (orig.)

  18. Early-age behaviour of concrete in massive structures, experimentation and modelling

    Energy Technology Data Exchange (ETDEWEB)

    Zreiki, J., E-mail: zreiki@lmt.ens-cachan.f [ENS Cachan/CNRS UMR8535/UPMC/PRES UniverSud Paris, Cachan (France); Bouchelaghem, F. [ENS Cachan/CNRS UMR8535/UPMC/PRES UniverSud Paris, Cachan (France); UPMC Univ Paris 06 (France); Chaouche, M. [ENS Cachan/CNRS UMR8535/UPMC/PRES UniverSud Paris, Cachan (France)

    2010-10-15

    This study focuses on the behaviour of concrete at early age in massive structures, in relation to the prediction of both cracking risk and residual stresses, which is still a challenging task. In this paper, a 3D thermo-chemo-mechanical model has been developed, on the basis of complete material-characterization experiments, in order to predict the early-age development of strains and residual stresses and to assess the risk of cracking in massive concrete structures. The parameters of the proposed model were identified for two different concretes, a High Performance Concrete and a Fibrous Self-Compacted Concrete, from simple laboratory experiments: uniaxial tension and compression tests, dynamic Young's modulus measurements, free and autogenous shrinkage, and semi-adiabatic calorimetry. The proposed model has been implemented in a Finite Element code, and numerical simulations of the laboratory tests have confirmed the model's consistency. Furthermore, early-age experiments conducted on massive structures have also been simulated, in order to investigate the predictive capability of the model and to assess its performance in practical situations where varying temperatures are involved.

  19. The life and death of massive stars revealed by the observation of nuclear gamma-ray lines with the Integral/SPI spectrometer

    International Nuclear Information System (INIS)

    Martin, P.

    2008-11-01

    The aim of this research thesis is to provide observational constraints on the mechanisms that govern the life and death of massive stars, i.e. stars with an initial mass greater than eight times that of the Sun and smaller than 120 to 150 solar masses. It thus aims at detecting the remnants of recent and nearby supernovae in order to uncover traces of the dynamics of their first instants. The author explored the radiation of three radio-isotopes accessible to nuclear gamma-ray astronomy (^44Ti, ^60Fe, ^26Al) using observations performed with the high-resolution gamma-ray spectrometer (SPI) on the INTEGRAL international observatory. After an overview of present knowledge on the massive-star explosion mechanism, the author presents the specificities and potential of the investigated radio-isotopes. He describes the data-treatment methods and a population-synthesis programme for the prediction of decay gamma-ray lines, and then reports his work on the inner dynamics of the Cassiopeia A explosion, the stellar activity of the Galaxy revealed by radioisotope observations, and the nucleosynthetic activity of the Cygnus region.

  20. FIRST INVESTIGATION OF THE COMBINED IMPACT OF IONIZING RADIATION AND MOMENTUM WINDS FROM A MASSIVE STAR ON A SELF-GRAVITATING CORE

    International Nuclear Information System (INIS)

    Ngoumou, Judith; Hubber, David; Dale, James E.; Burkert, Andreas

    2015-01-01

    Massive stars shape the surrounding interstellar matter (ISM) by emitting ionizing photons and ejecting material through stellar winds. To study the impact of the momentum from the wind of a massive star on the surrounding neutral or ionized material, we implemented a new HEALPix-based momentum-conserving wind scheme in the smoothed particle hydrodynamics (SPH) code SEREN. A qualitative study of the impact of the feedback from an O7.5-like star on a self-gravitating sphere shows that, on its own, the transfer of momentum from a wind onto cold surrounding gas has both a compressing and a dispersing effect. It mostly affects gas at low and intermediate densities. When combined with the stellar source's ionizing ultraviolet (UV) radiation, we find the momentum-driven wind to have little direct effect on the gas. We conclude that during a massive star's main sequence, the UV ionizing radiation is the main feedback mechanism shaping and compressing the cold gas. Overall, the wind's effects on the dense gas dynamics and on the triggering of star formation are very modest. The structures formed in the ionization-only simulation and in the combined feedback simulation are remarkably similar. However, in the combined feedback case, different SPH particles end up being compressed. This indicates that the microphysics of gas mixing differ between the two feedback simulations and that the winds can contribute to the localized redistribution and reshuffling of gas.

  1. Manufacture of stabilized phosphogypsum blocks for underwater massives in the Gulf of Gabès

    Directory of Open Access Journals (Sweden)

    Koubaa Lobna

    2014-04-01

    Studies show that treating PH with crushed sand and cement, or with cement and lime, gives the best results in terms of ultrasonic pulse velocity and compressive strength. They also indicate that the addition of cement and lime can absorb huge amounts of PH (92%). The strength obtained is sufficient for the possible use of the PH blocks produced in the construction of underwater massives.

  2. Radiologic image compression -- A review

    International Nuclear Information System (INIS)

    Wong, S.; Huang, H.K.; Zaremba, L.; Gooden, D.

    1995-01-01

    The objective of radiologic image compression is to reduce the data volume of, and to achieve a low bit rate in, the digital representation of radiologic images without perceived loss of image quality. However, the demand for transmission bandwidth and storage space in the digital radiology environment, especially picture archiving and communication systems (PACS) and teleradiology, and the proliferating use of various imaging modalities, such as magnetic resonance imaging, computed tomography, ultrasonography, nuclear medicine, computed radiography, and digital subtraction angiography, continue to outstrip the capabilities of existing technologies. The availability of lossy coding techniques for clinical diagnoses further implicates many complex legal and regulatory issues. This paper reviews the recent progress of lossless and lossy radiologic image compression and presents the legal challenges of using lossy compression of medical records. To do so, the authors first describe the fundamental concepts of radiologic imaging and digitization. Then, the authors examine current compression technology in the field of medical imaging and discuss important regulatory policies and legal questions facing the use of compression in this field. The authors conclude with a summary of future challenges and research directions. 170 refs

  3. arXiv Isothermal compressibility of hadronic matter formed in relativistic nuclear collisions

    CERN Document Server

    Mukherjee, Maitreyee; Chatterjee, Arghya; Chatterjee, Sandeep; Adhya, Souvik Priyam; Thakur, Sanchari; Nayak, Tapan K.

    We present the first estimates of the isothermal compressibility (κ_T) of hadronic matter formed in relativistic nuclear collisions (√s_NN = 7.7 GeV to 2.76 TeV) using experimentally observed quantities. κ_T is related to the fluctuations in particle multiplicity, temperature, and volume of the system formed in the collisions. Multiplicity fluctuations are obtained from the event-by-event distributions of charged-particle multiplicities in narrow centrality bins. The dynamical components of the fluctuations are extracted by removing the contributions to the fluctuations from the number of participating nucleons. From the available experimental data, a constant value of κ_T has been observed as a function of collision energy. The results are compared with calculations from the UrQMD, AMPT and EPOS event generators, and estimations of κ_T are made for Pb-Pb collisions at the CERN Large Hadron Collider. A hadron resonance gas (HRG) model has been used to calculate κ_T as a function of collision energy. Our results show a dec...
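
    The link between κ_T and multiplicity fluctuations that such an analysis exploits is the standard grand-canonical relation, quoted here in textbook form for orientation (the notation is not the authors'):

        \langle (\Delta N)^{2} \rangle = \frac{k_B T \, \langle N \rangle^{2}}{V} \, \kappa_T
        \quad\Longrightarrow\quad
        \kappa_T = \frac{V}{k_B T} \, \frac{\langle (\Delta N)^{2} \rangle}{\langle N \rangle^{2}},

    so the dynamical multiplicity variance in narrow centrality bins, combined with estimates of the temperature and volume, fixes κ_T.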

  4. The proliferation of massive destruction weapons and ballistic missiles

    International Nuclear Information System (INIS)

    Schmitt, M.

    1996-01-01

    The author reviews the current situation of nuclear deterrence policies and the possibility for non-nuclear states to use chemical weapons as weapons of mass destruction. The question of the non-proliferation of nuclear weapons has taken on new interest with the disintegration of the communist bloc, but it seems that only little nuclear material has found its way to proliferating countries. The denuclearization of Belarus, Ukraine and Kazakhstan is making progress with the START I treaty; China signed the Non-Proliferation Treaty in 1992, and it conducts an export policy in equipment and know-how towards Iran, Pakistan, North Korea, Saudi Arabia and Syria. Within the next ten years, countries such as Iran and North Korea could catch up with Israel, India and Pakistan among undeclared nuclear countries. As for chemical weapons, Libya, Iran and Syria could catch up with Iraq. (N.C.)

  5. Rio Blanco massive hydraulic fracture: project definition

    International Nuclear Information System (INIS)

    1976-01-01

    A recent Federal Power Commission feasibility study assessed the possibility of economically producing gas from three Rocky Mountain basins. These basins have potentially productive horizons 2,000 to 4,000 feet thick containing an estimated total of 600 trillion cubic feet of gas in place. However, the producing sands are of such low permeability and heterogeneity that conventional methods have failed to develop these basins economically. The Natural Gas Technology Task Force, responsible for preparing the referenced feasibility study, determined that, if effective well stimulation methods for these basins can be developed, it might be possible to recover 40 to 50 percent of the gas in place. The Task Force pointed out two possible underground fracturing methods: nuclear explosive fracturing and massive hydraulic fracturing. They argued that once technical viability has been demonstrated, and with adequate economic incentives, there should be no reason why one or even both of these approaches could not be employed, thus making a major contribution toward correcting the energy deficiency of the Nation. A joint Government-industry demonstration program has been proposed to test the relative effectiveness of massive hydraulic fracturing of the same formation and producing horizons that were stimulated by the Rio Blanco nuclear project.

  6. Application of digital compression techniques to optical surveillance systems

    International Nuclear Information System (INIS)

    Johnson, C.S.

    1991-01-01

    There are many benefits to handling video images electronically; however, the amount of digital data in a normal video image is a major obstacle. The solution is to remove the high-frequency and redundant information in a process referred to as compression. Compression allows the number of digital bits required for a given image to be reduced for more efficient storage or transmission of images. The next question is how much compression can be applied without impairing the image quality beyond its usefulness for a given application. This paper discusses image compression that might be applied to provide useful images in unattended nuclear facility surveillance applications.

  7. MRI assessment of bronchial compression in absent pulmonary valve syndrome and review of the syndrome

    International Nuclear Information System (INIS)

    Taragin, Benjamin H.; Berdon, Walter E.; Prinz, B.

    2006-01-01

    Absent pulmonary valve syndrome (APVS) is a rare cardiac malformation with massive pulmonary insufficiency that presents with short-term and long-term respiratory problems secondary to severe bronchial compression from enlarged central and hilar pulmonary arteries. Association with chromosome 22q11 deletions and DiGeorge syndrome is common. This historical review illustrates the airway disease, with emphasis on assessment of the bronchial compression in patients with persistent respiratory difficulties following valve repair. Cases that had MRI for cardiac assessment are used to illustrate the pattern of airway disease. (orig.)

  8. DNABIT Compress - Genome compression algorithm.

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences, based on a novel scheme of assigning binary bits to smaller segments of DNA bases in order to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that "DNABIT Compress" is the best among the existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new algorithm also significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of a DNA sequence (exact repeats, reverse repeats) is a concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base.
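
    The baseline behind such bit-code schemes is easy to see in a fixed 2-bits/base packing (a simplified illustration only; DNABIT Compress itself assigns variable-length codes to repeat fragments, which is how it gets below 2 bits/base):

        # Pack A/C/G/T into 2 bits each (4 bases per byte) and unpack again.
        CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
        BASES = "ACGT"

        def pack(seq):
            out = bytearray()
            for i in range(0, len(seq), 4):
                chunk, b = seq[i:i + 4], 0
                for ch in chunk:
                    b = (b << 2) | CODE[ch]
                b <<= 2 * (4 - len(chunk))        # left-align a short tail
                out.append(b)
            return bytes(out)

        def unpack(data, n):
            seq = []
            for b in data:
                for shift in (6, 4, 2, 0):
                    seq.append(BASES[(b >> shift) & 0b11])
            return "".join(seq[:n])

        s = "ACGTACGTGATTACA"
        packed = pack(s)
        assert unpack(packed, len(s)) == s
        print(f"{len(s)} bases -> {len(packed)} bytes "
              f"({8 * len(packed) / len(s):.2f} bits/base)")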

  9. The Destructive Birth of Massive Stars and Massive Star Clusters

    Science.gov (United States)

    Rosen, Anna; Krumholz, Mark; McKee, Christopher F.; Klein, Richard I.; Ramirez-Ruiz, Enrico

    2017-01-01

    Massive stars play an essential role in the Universe. They are rare, yet the energy and momentum they inject into the interstellar medium with their intense radiation fields dwarfs the contribution by their vastly more numerous low-mass cousins. Previous theoretical and observational studies have concluded that the feedback associated with massive stars' radiation fields is the dominant mechanism regulating massive star and massive star cluster (MSC) formation. Therefore detailed simulation of the formation of massive stars and MSCs, which host hundreds to thousands of massive stars, requires an accurate treatment of radiation. For this purpose, we have developed a new, highly accurate hybrid radiation algorithm that properly treats the absorption of the direct radiation field from stars and the re-emission and processing by interstellar dust. We use our new tool to perform a suite of three-dimensional radiation-hydrodynamic simulations of the formation of massive stars and MSCs. For individual massive stellar systems, we simulate the collapse of massive pre-stellar cores with laminar and turbulent initial conditions and properly resolve regions where we expect instabilities to grow. We find that mass is channeled to the massive stellar system via gravitational and Rayleigh-Taylor (RT) instabilities. For laminar initial conditions, proper treatment of the direct radiation field produces later onset of RT instability, but does not suppress it entirely provided the edges of the radiation-dominated bubbles are adequately resolved. RT instabilities arise immediately for turbulent pre-stellar cores because the initial turbulence seeds the instabilities. To model MSC formation, we simulate the collapse of a dense, turbulent, magnetized M_cl = 10^6 M_⊙ molecular cloud. We find that the influence of the magnetic pressure and radiative feedback slows down star formation. Furthermore, we find that star formation is suppressed along dense filaments where the magnetic field is

  10. Uncertainties in s-process nucleosynthesis in massive stars determined by Monte Carlo variations

    Science.gov (United States)

    Nishimura, N.; Hirschi, R.; Rauscher, T.; St. J. Murphy, A.; Cescutti, G.

    2017-08-01

    The s-process in massive stars produces the weak component of the s-process (nuclei up to A ˜ 90), in amounts that match solar abundances. For heavier isotopes, such as barium, production through neutron capture is significantly enhanced in very metal-poor stars with fast rotation. However, detailed theoretical predictions for the resulting final s-process abundances have important uncertainties caused both by the underlying uncertainties in the nuclear physics (principally neutron-capture reaction and β-decay rates) as well as by the stellar evolution modelling. In this work, we investigated the impact of nuclear-physics uncertainties relevant to the s-process in massive stars. Using a Monte Carlo based approach, we performed extensive nuclear reaction network calculations that include newly evaluated upper and lower limits for the individual temperature-dependent reaction rates. We found that most of the uncertainty in the final abundances is caused by uncertainties in the neutron-capture rates, while β-decay rate uncertainties affect only a few nuclei near s-process branchings. The s-process in rotating metal-poor stars shows quantitatively different uncertainties and key reactions, although the qualitative characteristics are similar. We confirmed that our results do not significantly change at different metallicities for fast rotating massive stars in the very low metallicity regime. We highlight which of the identified key reactions are realistic candidates for improved measurement by future experiments.
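
    The mechanics of such a Monte Carlo variation can be sketched on a deliberately tiny toy network (rates, uncertainty factors, and the one-zone burn below are invented; the study varies thousands of temperature-dependent rates inside full stellar models):

        # Sample each rate log-uniformly within [nominal/f, nominal*f] and
        # record the spread this induces in a final abundance.
        import numpy as np

        rng = np.random.default_rng(3)
        rate_nominal = np.array([1.0, 0.5, 2.0])   # hypothetical capture rates
        factor = np.array([2.0, 1.5, 3.0])         # up/down uncertainty factors

        finals = []
        for _ in range(10_000):
            rates = rate_nominal * factor ** rng.uniform(-1.0, 1.0, size=3)
            finals.append(np.exp(-rates.sum() * 0.3))  # seed survival, fixed exposure
        lo, med, hi = np.percentile(finals, [16, 50, 84])
        print(f"final abundance: {med:.3f} (+{hi - med:.3f} / -{med - lo:.3f})")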

  11. Nuclear compression effects on pion production in nuclear collisions

    International Nuclear Information System (INIS)

    Sano, M.; Gyulassy, M.; Wakai, M.; Kitazoe, Y.

    1985-01-01

    We show that the method of analyzing the pion excitation function proposed by Stock et al. may determine only a part of the nuclear matter equation of state. With the addition of missing kinetic energy terms the implied high density nuclear equation of state would be much stiffer than expected from conventional theory. A stiff equation of state would also follow if shock dynamics with early chemical freeze out were valid. (orig.)

  12. Nuclear propulsion apparatus with alternate reactor segments

    International Nuclear Information System (INIS)

    Szekely, T.

    1979-01-01

    Nuclear propulsion apparatus comprising: (a) means for compressing incoming air; (b) nuclear fission reactor means for heating said air; (c) means for expanding a portion of the heated air to drive said compressing means; (d) said nuclear fission reactor means being divided into a plurality of radially extending segments; (e) means for directing a portion of the compressed air for heating through alternate segments of said reactor means and another portion of the compressed air for heating through the remaining segments of said reactor means; and (f) means for further expanding the heated air from said drive means and the remaining heated air from said reactor means through nozzle means to effect reactive thrust on said apparatus. 12 claims
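
    A back-of-envelope ideal-cycle calculation shows how the claimed arrangement closes energetically (ideal-gas air, isentropic components; every number here is an assumed illustration, not from the patent):

        # Nuclear turbojet sketch: compress air, heat it in the reactor,
        # expand part of it through a turbine that drives the compressor,
        # then expand the rest through the nozzle for reactive thrust.
        gamma, cp = 1.4, 1005.0                  # air properties
        T0, PR, T4 = 288.0, 8.0, 1200.0          # ambient [K], pressure ratio, reactor exit [K]
        k = (gamma - 1.0) / gamma

        T3 = T0 * PR ** k                        # after compression
        T5 = T4 - (T3 - T0)                      # turbine work = compressor work
        PR_nozzle = PR * (T5 / T4) ** (1.0 / k)  # pressure ratio left for the nozzle
        T6 = T5 / PR_nozzle ** k                 # nozzle exit (expanded to ambient)
        v_jet = (2.0 * cp * (T5 - T6)) ** 0.5
        print(f"jet velocity ~ {v_jet:.0f} m/s")  # ~780 m/s for these numbers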

  13. An improvement analysis on video compression using file segmentation

    Science.gov (United States)

    Sharma, Shubhankar; Singh, K. John; Priya, M.

    2017-11-01

    Over the past two decades, the rapid evolution of the Internet has led to a massive rise in video technology and in video consumption over the Internet, which now accounts for the bulk of data traffic. Because video occupies so much of the data on the World Wide Web, reducing the burden on the Internet and the bandwidth consumed by video makes it easier for users to access video data. To this end, many video codecs have been developed, such as HEVC/H.265 and VP9, which raises the question of which is the better technology in terms of rate distortion and coding standards. This paper addresses the difficulty of achieving low delay in video compression for applications such as ad-hoc video conferencing/streaming or surveillance. It also benchmarks the HEVC and VP9 video compression techniques using subjective evaluations of High Definition video content played back in web browsers. Moreover, it presents an experimental approach of dividing a video file into several segments for compression and reassembling them, to improve the efficiency of video compression on the web as well as in offline mode.
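
    The segment-and-recombine idea can be sketched with ffmpeg's stream-copy segmenting and concatenation; ffmpeg itself, the segment length and the file names are assumptions here, since the abstract does not name the tools used.

        import subprocess

        # Split the source into ~10-second segments without re-encoding.
        subprocess.run(["ffmpeg", "-i", "input.mp4", "-c", "copy",
                        "-f", "segment", "-segment_time", "10",
                        "seg%03d.mp4"], check=True)

        # ... each segment can now be compressed independently
        # (e.g. with an HEVC or VP9 encoder) ...

        # Reassemble the processed segments with the concat demuxer.
        with open("list.txt", "w") as f:
            for i in range(3):  # however many segments were produced
                f.write(f"file 'seg{i:03d}.mp4'\n")
        subprocess.run(["ffmpeg", "-f", "concat", "-i", "list.txt",
                        "-c", "copy", "output.mp4"], check=True)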

  14. GPU Lossless Hyperspectral Data Compression System for Space Applications

    Science.gov (United States)

    Keymeulen, Didier; Aranki, Nazeeh; Hopson, Ben; Kiely, Aaron; Klimesh, Matthew; Benkrid, Khaled

    2012-01-01

    On-board lossless hyperspectral data compression reduces data volume in order to meet NASA and DoD limited downlink capabilities. At JPL, a novel, adaptive and predictive technique for lossless compression of hyperspectral data, named the Fast Lossless (FL) algorithm, was recently developed. This technique uses an adaptive filtering method and achieves state-of-the-art performance in both compression effectiveness and low complexity. Because of its outstanding performance and suitability for real-time onboard hardware implementation, the FL compressor is being formalized as the emerging CCSDS Standard for Lossless Multispectral & Hyperspectral image compression. The FL compressor is well suited for parallel hardware implementation. A GPU hardware implementation was developed for FL, targeting current state-of-the-art GPUs from NVIDIA. The GPU implementation on an NVIDIA GeForce GTX 580 achieves a throughput performance of 583.08 Mbits/sec (44.85 MSamples/sec) and an acceleration of at least 6 times that of a software implementation running on a 3.47 GHz single-core Intel Xeon processor. This paper describes the design and implementation of the FL algorithm on the GPU. The massively parallel implementation will in the future provide a fast and practical real-time solution for airborne and space applications.
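
    The predictive principle behind FL-style compressors can be conveyed with a toy sketch: predict each spectral band from its neighbour and keep only the low-entropy residuals for the entropy coder. This is a minimal illustration of band-to-band prediction, not the actual FL algorithm, which uses an adaptive filter.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic "hyperspectral" cube: 8 highly correlated bands.
        base = rng.integers(0, 4096, (64, 64), dtype=np.int16)
        cube = np.stack([base + rng.integers(-8, 8, (64, 64), dtype=np.int16)
                         for _ in range(8)])

        # Predict each band from the previous one; the residuals are small
        # integers that a back-end entropy coder can pack into few bits.
        residuals = np.empty_like(cube)
        residuals[0] = cube[0]
        residuals[1:] = cube[1:] - cube[:-1]

        print("raw std:", cube.std(), "residual std:", residuals[1:].std())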

  15. Massive Gravity

    OpenAIRE

    de Rham, Claudia

    2014-01-01

    We review recent progress in massive gravity. We start by showing how different theories of massive gravity emerge from a higher-dimensional theory of general relativity, leading to the Dvali–Gabadadze–Porrati model (DGP), cascading gravity, and ghost-free massive gravity. We then explore their theoretical and phenomenological consistency, proving the absence of Boulware–Deser ghosts and reviewing the Vainshtein mechanism and the cosmological solutions in these models. Finally, we present alt...

  16. Macron Formed Liner Compression as a Practical Method for Enabling Magneto-Inertial Fusion

    Energy Technology Data Exchange (ETDEWEB)

    Slough, John

    2011-12-10

    The entry of fusion as a viable, competitive source of power has been stymied by the challenge of finding an economical way to provide for the confinement and heating of the plasma fuel. The main impediment for current nuclear fusion concepts is the complexity and large mass associated with the confinement systems. To take advantage of the smaller scale, higher density regime of magnetic fusion, an efficient method for achieving the compressional heating required to reach fusion gain conditions must be found. The very compact, high energy density plasmoid commonly referred to as a Field Reversed Configuration (FRC) provides an ideal target for this purpose. To make fusion with the FRC practical, an efficient method for repetitively compressing the FRC to fusion gain conditions is required. A novel approach to be explored in this endeavor is to remotely launch a converging array of small macro-particles (macrons) that merge and form a more massive liner inside the reactor, which then radially compresses and heats the FRC plasmoid to fusion conditions. The closed magnetic field in the target FRC plasmoid suppresses thermal transport to the confining liner, significantly lowering the imploding power needed to compress the target. With the momentum flux being delivered by an assemblage of low-mass but high-velocity macrons, many of the difficulties encountered with liner implosion power technology are eliminated. The undertaking described in this proposal is to evaluate the feasibility of achieving fusion conditions with this simple and low-cost approach. During phase I the design and testing of the key components for the creation of the macron formed liner have been successfully carried out. Detailed numerical calculations of the merging, formation and radial implosion of the Macron Formed Liner (MFL) were also performed. The phase II effort will focus on an experimental demonstration of the macron launcher at full power, and the demonstration

  17. Compressing climate model simulations: reducing storage burden while preserving information

    Science.gov (United States)

    Hammerling, Dorit; Baker, Allison; Xu, Haiying; Clyne, John; Li, Samuel

    2017-04-01

    Climate models, which are run at high spatial and temporal resolutions, generate massive quantities of data. As our computing capabilities continue to increase, storing all of the generated data is becoming a bottleneck, which negatively affects scientific progress. It is thus important to develop methods for representing the full datasets by smaller compressed versions, which still preserve all the critical information and, as an added benefit, allow for faster read and write operations during analysis work. Traditional lossy compression algorithms, as for example used for image files, are not necessarily ideally suited for climate data. While visual appearance is relevant, climate data has additional critical features such as the preservation of extreme values and spatial and temporal gradients. Developing alternative metrics to quantify information loss in a manner that is meaningful to climate scientists is an ongoing process still in its early stages. We will provide an overview of current efforts to develop such metrics to assess existing algorithms and to guide the development of tailored compression algorithms to address this pressing challenge.
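
    A sketch of the kind of climate-aware metric described above, assuming the original and reconstructed fields are NumPy arrays; the specific checks (extremes and spatial gradients) follow the text, but the function name and exact formulas are illustrative, not the authors' metrics.

        import numpy as np

        def information_loss_metrics(orig, recon):
            # Beyond a generic RMSE, check the features climate scientists
            # care about: extreme values and spatial gradients.
            diff = recon - orig
            return {
                "rmse": float(np.sqrt(np.mean(diff ** 2))),
                "max_abs_error": float(np.max(np.abs(diff))),
                "extreme_shift": float(abs(orig.max() - recon.max())),
                "gradient_rmse": float(np.sqrt(np.mean(
                    (np.diff(orig, axis=-1) - np.diff(recon, axis=-1)) ** 2))),
            }

        field = np.random.default_rng(3).standard_normal((180, 360))
        lossy = np.round(field, 2)  # crude stand-in for a lossy compressor
        print(information_loss_metrics(field, lossy))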

  18. Two Cases of Massive Hydrothorax Complicating Peritoneal Dialysis

    International Nuclear Information System (INIS)

    Bae, Sang Kyun; Yum, Ha Yong; Rim, Hark

    1994-01-01

    Massive hydrothorax complicating continuous ambulatory peritoneal dialysis (CAPD) is relatively rare. A 67-year-old male and a 23-year-old female patient on CAPD presented with massive pleural effusion; they had been on peritoneal dialysis for end-stage renal disease for 8 months and 2 weeks, respectively. We injected a 99mTc-labelled radiopharmaceutical (phytate and MAA, respectively) into the peritoneal cavity with the dialysate. Anterior, posterior and right lateral images were obtained. The studies revealed visible radioactivity in the right chest, indicating communication between the peritoneal and pleural spaces. After sclerotherapy with tetracycline, the same studies revealed no radioactivity in the right chest, suggesting successful therapy. We think nuclear imaging is a simple and noninvasive method for the differential diagnosis of pleural effusion in patients on CAPD and for the evaluation of therapy.

  19. Compressible dynamic stall control using high momentum microjets

    Science.gov (United States)

    Beahan, James J.; Shih, Chiang; Krothapalli, Anjaneyulu; Kumar, Rajan; Chandrasekhara, Muguru S.

    2014-09-01

    Control of the dynamic stall process of a NACA 0015 airfoil undergoing periodic pitching motion is investigated experimentally at the NASA Ames compressible dynamic stall facility. Multiple microjet nozzles distributed uniformly over the first 12% of chord from the airfoil's leading edge are used for the dynamic stall control. The point diffraction interferometry technique is used to characterize the control effectiveness, both qualitatively and quantitatively. The microjet control has been found to be very effective in suppressing both the emergence of the dynamic stall vortex and the associated massive flow separation over the entire operating range of angles of attack. At the high Mach number (M = 0.4), the use of microjets appears to eliminate the shock structures that are responsible for triggering the shock-induced separation, establishing that microjets are effective in controlling dynamic stall even with a strong compressibility effect. In general, microjet control has an overall positive effect in terms of maintaining leading-edge suction pressure and preventing flow separation.

  20. Achalasia with massive oesophageal dilation causing tracheomalacia and asthma symptoms

    Directory of Open Access Journals (Sweden)

    Ana Gomez-Larrauri

    Achalasia is an uncommon oesophageal motor disorder characterized by failure of relaxation of the lower oesophageal sphincter and muscle hypertrophy, resulting in a loss of peristalsis and a dilated oesophagus. Gastrointestinal symptoms are invariably present in all cases of achalasia observed in adults. We report the case of a 34-year-old female patient with a long-standing history of asthma-like symptoms, labelled as uncontrolled and steroid-resistant asthma, with no gastrointestinal manifestations. A thoracic CT scan revealed a massively dilated oesophagus due to achalasia, which caused severe tracheomalacia as a result of tracheal compression. Her symptoms regressed completely after a laparoscopic Heller myotomy.

  1. DNABIT Compress – Genome compression algorithm

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences, based on a novel scheme of assigning binary bits to smaller segments of DNA bases in order to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for large genomes, and significantly better compression results show that “DNABIT Compress” outperforms the other compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new algorithm also significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of a DNA sequence (exact repeats, reverse repeats) is a concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base. PMID:21383923
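
    For scale, any pure A/C/G/T sequence admits a fixed 2-bits-per-base packing, which is the naive baseline that DNABIT Compress's 1.58 bits/base improves on. The sketch below shows only this baseline, not the repeat-aware bit codes of the algorithm itself.

        CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
        BASES = "ACGT"

        def pack(seq):
            # Fixed encoding: exactly 2 bits per base, 4 bases per byte.
            out = bytearray((len(seq) + 3) // 4)
            for i, b in enumerate(seq):
                out[i // 4] |= CODE[b] << (2 * (3 - i % 4))
            return bytes(out)

        def unpack(data, n):
            return "".join(BASES[(data[i // 4] >> (2 * (3 - i % 4))) & 0b11]
                           for i in range(n))

        s = "ACGTACGTTTGA"
        assert unpack(pack(s), len(s)) == s   # 12 bases stored in 3 bytes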

  2. Cost analysis of negative-pressure wound therapy with instillation for wound bed preparation preceding split-thickness skin grafts for massive (>100 cm²) chronic venous leg ulcers.

    Science.gov (United States)

    Yang, C Kevin; Alcantara, Sean; Goss, Selena; Lantis, John C

    2015-04-01

    Massive (≥100 cm²) venous leg ulcers (VLUs) demonstrate very low closure rates with standard compression therapy and are costly to manage. Negative-pressure wound therapy (NPWT), followed by a split-thickness skin graft (STSG), can be a cost-effective alternative to this standard care. We performed a cost analysis of these two treatments. A retrospective review was performed of 10 ulcers treated with surgical debridement, 7 days of inpatient NPWT with topical antiseptic instillation (NPWTi), and STSG, with 4 additional days of inpatient NPWT bolster over the graft. Independent medical cost estimators were used to compare the cost of this treatment protocol with standard outpatient compression therapy. The average length of time ulcers were present before patients entered the study was 38 months (range, 3-120 months). Eight of 10 patients had complete VLU closure by 6 months after NPWTi with STSG. The 6-month costs of the proposed treatment protocol and standard twice-weekly compression therapy were estimated to be $27,000 and $28,000, respectively. NPWTi with STSG treatment is more effective for closure of massive VLUs at 6 months than that reported for standard compression therapy. Further, the cost of the proposed treatment protocol is comparable with standard compression therapy.

  3. The evolution of massive stars

    International Nuclear Information System (INIS)

    Loore, C. de

    1980-01-01

    The evolution of stars with masses between 15 M⊙ and 100 M⊙ is considered. Stars in this mass range lose a considerable fraction of their matter during their evolution. The treatment of convection, semi-convection and the influence of mass loss by stellar winds at different evolutionary phases are analysed, as well as the adopted opacities. Evolutionary sequences computed by various groups are examined and compared with observations, and the advanced evolution of a 15 M⊙ and a 25 M⊙ star from zero-age main sequence (ZAMS) through iron collapse is discussed. The effect of centrifugal forces on stellar wind mass loss and the influence of rotation on evolutionary models is examined. As a consequence of the outflow of matter, deeper layers show up, and when the mass loss rates are large enough, layers with changed composition, due to interior nuclear reactions, appear at the surface. The evolution of massive close binaries, both during the phase of mass loss by stellar wind and during the mass exchange and mass loss phase due to Roche lobe overflow, is treated in detail, and the values of the parameters governing mass and angular momentum losses are discussed. The problem of the Wolf-Rayet stars, their origin and the possibilities of their production either as single stars or as massive binaries is examined. Finally, the origin of X-ray binaries is discussed and the scenario for the formation of these objects (starting from massive ZAMS close binaries, through Wolf-Rayet binaries, leading to OB-stars with a compact companion after a supernova explosion) is reviewed and completed, including stellar wind mass loss. (orig.)

  4. Study of the compressibility of the nucleon

    Energy Technology Data Exchange (ETDEWEB)

    Morsch, P.H. [Forschungszentrum Juelich GmbH (Germany). Inst. fuer Kernphysik]; [Laboratoire National Saturne, Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France)]

    1996-12-31

    A brief discussion of the theoretical and experimental situation in baryon spectroscopy is given. Then, the radial structure is discussed, related to the ground state form factors and the compressibility. The compressibility derived from experimental data is compared with results from different nucleon models. From the study of the Roper resonance in nuclei information on the dynamical radius of the nucleon can be obtained. Experiments have been performed on deuteron and ¹²C which show no shift of the Roper resonance in these systems. This indicates no sizeable 'swelling' or 'shrinking' of the nucleon in the nuclear medium. (K.A.). 25 refs.

  5. Study of the compressibility of the nucleon

    International Nuclear Information System (INIS)

    Morsch, P.H.

    1996-01-01

    A brief discussion of the theoretical and experimental situation in baryon spectroscopy is given. Then, the radial structure is discussed, related to the ground state form factors and the compressibility. The compressibility derived from experimental data is compared with results from different nucleon models. From the study of the Roper resonance in nuclei information on the dynamical radius of the nucleon can be obtained. Experiments have been performed on deuteron and ¹²C which show no shift of the Roper resonance in these systems. This indicates no sizeable 'swelling' or 'shrinking' of the nucleon in the nuclear medium. (K.A.)

  6. A massive, dead disk galaxy in the early Universe.

    Science.gov (United States)

    Toft, Sune; Zabl, Johannes; Richard, Johan; Gallazzi, Anna; Zibetti, Stefano; Prescott, Moire; Grillo, Claudio; Man, Allison W S; Lee, Nicholas Y; Gómez-Guijarro, Carlos; Stockmann, Mikkel; Magdis, Georgios; Steinhardt, Charles L

    2017-06-21

    At redshift z = 2, when the Universe was just three billion years old, half of the most massive galaxies were extremely compact and had already exhausted their fuel for star formation. It is believed that they were formed in intense nuclear starbursts and that they ultimately grew into the most massive local elliptical galaxies seen today, through mergers with minor companions, but validating this picture requires higher-resolution observations of their centres than is currently possible. Magnification from gravitational lensing offers an opportunity to resolve the inner regions of galaxies. Here we report an analysis of the stellar populations and kinematics of a lensed z = 2.1478 compact galaxy, which - surprisingly - turns out to be a fast-spinning, rotationally supported disk galaxy. Its stars must have formed in a disk, rather than in a merger-driven nuclear starburst. The galaxy was probably fed by streams of cold gas, which were able to penetrate the hot halo gas until they were cut off by shock heating from the dark matter halo. This result confirms previous indirect indications that the first galaxies to cease star formation must have gone through major changes not just in their structure, but also in their kinematics, to evolve into present-day elliptical galaxies.

  7. The surface compression of nuclei in relativistic mean-field approach

    International Nuclear Information System (INIS)

    Sharma, M.M.

    1991-01-01

    The surface compression properties of nuclei have been studied in the framework of the relativistic non-linear σ-ω model. Using the Thomas-Fermi approximation for semi-infinite nuclear matter, it is shown that by varying the σ-meson mass one can change the surface compression relative to the bulk compression. This fact is in contrast with the known properties of the phenomenological Skyrme interactions, where the ratio of the surface to the bulk incompressibility (−K_S/K_V) is nearly 1 in the scaling mode of compression. The results suggest that the relativistic mean-field model may provide an interaction with essential ingredients different from those of the Skyrme interactions. (author) 23 refs., 2 figs., 1 tab

  8. The Final Stages of Massive Star Evolution and Their Supernovae

    Science.gov (United States)

    Heger, Alexander

    In this chapter I discuss the final stages in the evolution of massive stars - stars that are massive enough to burn nuclear fuel all the way to iron group elements in their core. The core eventually collapses to form a neutron star or a black hole when electron captures and photo-disintegration reduce the pressure support to an extent that it no longer can hold up against gravity. The late burning stages of massive stars are a rich subject by themselves, and in them many of the heavy elements in the universe are first generated. The late evolution of massive stars strongly depends on their mass, and hence can be significantly affected by mass loss due to stellar winds and episodic mass loss events - a critical ingredient that we do not know as well as we would like. If the star loses all the hydrogen envelope, a Type I supernova results; if it does not, a Type II supernova is observed. Whether the star makes a neutron star or a black hole, or a neutron star at first and a black hole later, and how fast they spin largely affects the energetics and asymmetry of the observed supernova explosion. Beyond photon-based astronomy, other than the sun, a supernova (SN 1987A) has been the only object in the sky we ever observed in neutrinos, and supernovae may also be the first thing we will ever see in gravitational wave detectors like LIGO. I conclude this chapter reviewing the deaths of the most massive stars and of Population III stars.

  9. Massive Submucosal Ganglia in Colonic Inertia.

    Science.gov (United States)

    Naemi, Kaveh; Stamos, Michael J; Wu, Mark Li-Cheng

    2018-02-01

    - Colonic inertia is a debilitating form of primary chronic constipation with unknown etiology and diagnostic criteria, often requiring pancolectomy. We have occasionally observed massively enlarged submucosal ganglia containing at least 20 perikarya, in addition to previously described giant ganglia with greater than 8 perikarya, in cases of colonic inertia. These massively enlarged ganglia have yet to be formally recognized. - To determine whether such "massive submucosal ganglia," defined as ganglia harboring at least 20 perikarya, characterize colonic inertia. - We retrospectively reviewed specimens from colectomies of patients with colonic inertia and compared the prevalence of massive submucosal ganglia occurring in this setting to the prevalence of massive submucosal ganglia occurring in a set of control specimens from patients lacking chronic constipation. - Seven of 8 specimens affected by colonic inertia harbored 1 to 4 massive ganglia, for a total of 11 massive ganglia. One specimen lacked massive ganglia but had limited sampling and nearly massive ganglia. Massive ganglia occupied both superficial and deep submucosal plexus. The patient with 4 massive ganglia also had 1 mitotically active giant ganglion. Only 1 massive ganglion occupied the entire set of 10 specimens from patients lacking chronic constipation. - We performed the first, albeit distinctly small, study of massive submucosal ganglia and showed that massive ganglia may be linked to colonic inertia. Further, larger studies are necessary to determine whether massive ganglia are pathogenetic or secondary phenomena, and whether massive ganglia or mitotically active ganglia distinguish colonic inertia from other types of chronic constipation.

  10. New massive gravity

    NARCIS (Netherlands)

    Bergshoeff, Eric A.; Hohm, Olaf; Townsend, Paul K.

    2012-01-01

    We present a brief review of New Massive Gravity, which is a unitary theory of massive gravitons in three dimensions obtained by considering a particular combination of the Einstein-Hilbert and curvature squared terms.

  11. STABLE ISOTOPE GEOCHEMISTRY OF MASSIVE ICE

    Directory of Open Access Journals (Sweden)

    Yurij K. Vasil’chuk

    2016-01-01

    The paper summarises stable-isotope research on massive ice in the Russian and North American Arctic, and includes the latest understanding of massive-ice formation. A new classification of massive-ice complexes is proposed, encompassing the range and variability of massive ice. It distinguishes two new categories of massive-ice complexes: homogeneous massive-ice complexes have a similar structure, properties and genesis throughout, whereas heterogeneous massive-ice complexes vary spatially (in their structure and properties) and genetically within a locality and consist of two or more homogeneous massive-ice bodies. Analysis of pollen and spores in massive ice from Subarctic regions and from ice and snow cover of Arctic ice caps assists with interpretation of the origin of massive ice. Radiocarbon ages of massive ice and host sediments are considered together with isotope values of heavy oxygen and deuterium from massive ice plotted at a uniform scale in order to assist interpretation and correlation of the ice.

  12. Important role of vertical migration of compressed gas, oil and water in formation of AVPD (abnormally high pressure gradient) zones

    Energy Technology Data Exchange (ETDEWEB)

    Anikiyev, K.A.

    1980-01-01

    The principal role of vertical migration of compressed gases, gas-saturated petroleum and water in the formation of abnormally high pressure gradients (AVPD) is confirmed by extensive factual data on gas production, griffons, blowouts and gushers that have accompanied the drilling of formations with AVPD from early history to the present time; the sources of the vertically migrating compressed fluids, in accordance with the geodynamic AVPD theory, are the deep degassed centers of the earth's mantle. Among the various types of AVPD zones, especially notable are the large (often massive or massive-layer) deposits and the intrusion aureoles that top them in the overlying cover layers. Prediction of AVPD zones and determination of their field and energy potential must be based on field-baric simulation of the formations being drilled, in light of the laws governing the important role of the vertical migration of compressed fluids. When developing field-baric models, it is necessary to utilize the extensive and valuable data on griffons, gas production and blowouts that has been collected and categorized by drilling engineers and production geologists. To further develop data on the field-baric conditions of the earth, it is necessary to collect and study signals of AVPD. First of all, there is a need to evaluate the potential elastic resources of compressed fluids which can move from the bed into the well. Thus it is necessary to study and standardize intrusion aureoles and other AVPD zones within the framework of field-baric modeling.

  13. The effect of compressive stress on the Young's modulus of unirradiated and irradiated nuclear graphites

    International Nuclear Information System (INIS)

    Oku, T.; Usui, T.; Ero, M.; Fukuda, Y.

    1977-01-01

    The Young's moduli of unirradiated and high-temperature (800 to 1000 °C) irradiated graphites for HTGR were measured by the ultrasonic method in the direction of applied compressive stress during and after stressing. The Young's moduli of all the tested graphites decreased with increasing compressive stress both during and after stressing. In order to investigate the reason for the decrease in Young's modulus by applying compressive stress, the mercury pore diameter distributions of a part of the unirradiated and irradiated specimens were measured. The change in pore distribution is believed to be associated with structural changes produced by irradiation and compressive stressing. The residual strain, after removing the compressive stress, showed a good correlation with the decrease in Young's modulus caused by the compressive stress. The decrease in Young's modulus by applying compressive stress was considered to be due to the increase in the mobile dislocation density and the growth or formation of cracks. The results suggest, however, that the mechanism giving the larger contribution depends on the brand of graphite, and in anisotropic graphite it depends on the direction of applied stress and the irradiation conditions. (author)

  14. Effectiveness of compressed sensing and transmission in wireless sensor networks for structural health monitoring

    Science.gov (United States)

    Fujiwara, Takahiro; Uchiito, Haruki; Tokairin, Tomoya; Kawai, Hiroyuki

    2017-04-01

    For Structural Health Monitoring (SHM) of seismic acceleration, Wireless Sensor Networks (WSN) are a promising tool for low-cost monitoring. Compressed sensing and transmission schemes have been drawing attention as a way to achieve effective data collection in WSN; in particular, SHM systems installing massive numbers of WSN nodes require efficient data transmission due to restricted communications capability. The dominant frequency band of seismic acceleration lies within 100 Hz or less. In addition, the response motions on the upper floors of a structure are excited at a natural frequency, resulting in induced shaking in a specific narrow band. Focusing on these vibration characteristics of structures, we introduce data compression techniques for seismic acceleration monitoring in order to reduce the amount of transmitted data. We carry out a compressed sensing and transmission scheme based on band-pass filtering of the seismic acceleration data. The algorithm executes the discrete Fourier transform to move to the frequency domain and applies band-pass filtering for the compressed transmission. Assuming that the compressed data are transmitted through computer networks, restoration of the data is performed by the inverse Fourier transform in the receiving node. This paper discusses the evaluation of the compressed sensing for seismic acceleration by way of an average error. The results show the average error was 0.06 or less for the horizontal acceleration when the acceleration was compressed to 1/32 of its original size; notably, the average error on the 4th floor was as small as 0.02. These results indicate that the compressed sensing and transmission technique is effective in reducing the amount of data while maintaining a small average error.
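
    A minimal sketch of the band-pass compression and restoration loop described above, assuming a 100 Hz sampling rate and an illustrative pass band; the paper's exact filter band and error definition may differ.

        import numpy as np

        def compress_bandpass(x, fs, f_lo, f_hi):
            # Keep only the DFT coefficients inside the structurally
            # relevant band; everything else is discarded for transmission.
            X = np.fft.rfft(x)
            f = np.fft.rfftfreq(len(x), 1.0 / fs)
            keep = (f >= f_lo) & (f <= f_hi)
            return X[keep], keep

        def restore(coeffs, keep, n):
            # Receiving node: zero-fill the discarded bins and invert.
            X = np.zeros(keep.size, dtype=complex)
            X[keep] = coeffs
            return np.fft.irfft(X, n)

        fs = 100.0
        t = np.arange(0, 30, 1 / fs)
        accel = (np.sin(2 * np.pi * 2.0 * t)   # structural response at 2 Hz
                 + 0.05 * np.random.default_rng(1).standard_normal(t.size))
        coeffs, keep = compress_bandpass(accel, fs, 0.5, 20.0)
        rec = restore(coeffs, keep, accel.size)
        print("average error:",
              np.mean(np.abs(rec - accel)) / np.abs(accel).max())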

  15. Parallel spectral methods and applications to simulations of compressible mixing layers

    OpenAIRE

    Male , Jean-Michel; Fezoui , Loula ,

    1993-01-01

    Solving the Navier-Stokes equations with spectral methods for compressible flows can be quite demanding in computation time. We therefore study the parallelization of such an algorithm and its implementation on a massively parallel machine, the Connection Machine CM-2. The spectral method adapts well to the requirements of massive parallelism, but one of the basic tools of this method, the fast Fourier transform (when it must be applied over the two dime...

  16. Massive graviton geons

    Science.gov (United States)

    Aoki, Katsuki; Maeda, Kei-ichi; Misonoh, Yosuke; Okawa, Hirotada

    2018-02-01

    We find vacuum solutions such that massive gravitons are confined in a local spacetime region by their gravitational energy in asymptotically flat spacetimes in the context of the bigravity theory. We call such self-gravitating objects massive graviton geons. The basic equations can be reduced to the Schrödinger-Poisson equations with the tensor "wave function" in the Newtonian limit. We obtain a nonspherically symmetric solution with j = 2, ℓ = 0 as well as a spherically symmetric solution with j = 0, ℓ = 2 in this system, where j is the total angular momentum quantum number and ℓ is the orbital angular momentum quantum number, respectively. The energy eigenvalue of the Schrödinger equation in the nonspherical solution is smaller than that in the spherical solution. We then study the perturbative stability of the spherical solution and find that there is an unstable mode in the quadrupole mode perturbations, which may be interpreted as the transition mode to the nonspherical solution. The results suggest that the nonspherically symmetric solution is the ground state of the massive graviton geon. The massive graviton geons may decay in time due to emissions of gravitational waves, but this timescale can be quite long when the massive gravitons are nonrelativistic, and then the geons can be long-lived. We also argue possible prospects of the massive graviton geons: applications to the ultralight dark matter scenario, nonlinear (in)stability of the Minkowski spacetime, and a quantum transition of the spacetime.

  17. Statistical Compression for Climate Model Output

    Science.gov (United States)

    Hammerling, D.; Guinness, J.; Soh, Y. J.

    2017-12-01

    Numerical climate model simulations run at high spatial and temporal resolutions generate massive quantities of data. As our computing capabilities continue to increase, storing all of the data is not sustainable, and thus is it important to develop methods for representing the full datasets by smaller compressed versions. We propose a statistical compression and decompression algorithm based on storing a set of summary statistics as well as a statistical model describing the conditional distribution of the full dataset given the summary statistics. We decompress the data by computing conditional expectations and conditional simulations from the model given the summary statistics. Conditional expectations represent our best estimate of the original data but are subject to oversmoothing in space and time. Conditional simulations introduce realistic small-scale noise so that the decompressed fields are neither too smooth nor too rough compared with the original data. Considerable attention is paid to accurately modeling the original dataset - one year of daily mean temperature data - particularly with regard to the inherent spatial nonstationarity in global fields, and to determining the statistics to be stored, so that the variation in the original data can be closely captured, while allowing for fast decompression and conditional emulation on modest computers.
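
    As a toy version of the idea, one can store per-block summary statistics and decompress by simulating plausible small-scale variability around them. The real method models spatial nonstationarity far more carefully, so this is only a sketch of the compress/conditionally-simulate loop, with all names and choices my own.

        import numpy as np

        def compress_stats(field, block=8):
            # Summary statistics kept on disk: per-block mean and std.
            h, w = field.shape
            blocks = field.reshape(h // block, block, w // block, block)
            return blocks.mean(axis=(1, 3)), blocks.std(axis=(1, 3))

        def conditional_simulation(means, stds, block=8, seed=0):
            # Decompression: block means give the smooth part, and noise
            # scaled by the block std restores realistic roughness.
            rng = np.random.default_rng(seed)
            up = lambda a: np.kron(a, np.ones((block, block)))
            noise = rng.standard_normal((means.shape[0] * block,
                                         means.shape[1] * block))
            return up(means) + up(stds) * noise

        temp = np.random.default_rng(4).standard_normal((64, 128))
        m, s = compress_stats(temp)
        recon = conditional_simulation(m, s)  # neither too smooth nor too rough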

  18. Progress with lossy compression of data from the Community Earth System Model

    Science.gov (United States)

    Xu, H.; Baker, A.; Hammerling, D.; Li, S.; Clyne, J.

    2017-12-01

    Climate models, such as the Community Earth System Model (CESM), generate massive quantities of data, particularly when run at high spatial and temporal resolutions. The burden of storage is further exacerbated by creating large ensembles, generating large numbers of variables, outputting at high frequencies, and duplicating data archives (to protect against disk failures). Applying lossy compression methods to CESM datasets is an attractive means of reducing data storage requirements, but ensuring that the loss of information does not negatively impact science objectives is critical. In particular, test methods are needed to evaluate whether critical features (e.g., extreme values and spatial and temporal gradients) have been preserved and to boost scientists' confidence in the lossy compression process. We will provide an overview of our progress in applying lossy compression to CESM output and describe our unique suite of metric tests that evaluate the impact of information loss. Further, we will describe our process for choosing an appropriate compression algorithm (and its associated parameters) given the diversity of CESM data (e.g., variables may be constant, smooth, change abruptly, contain missing values, or have large ranges). Traditional compression algorithms, such as those used for images, are not necessarily ideally suited for floating-point climate simulation data, and different methods may have different strengths and be more effective for certain types of variables than others. We will discuss our progress towards our ultimate goal of developing an automated multi-method parallel approach for compression of climate data that both maximizes data reduction and minimizes the impact of data loss on science results.

  19. Evolution of the Orszag-Tang vortex system in a compressible medium. I - Initial average subsonic flow

    Science.gov (United States)

    Dahlburg, R. B.; Picone, J. M.

    1989-01-01

    The results of fully compressible, Fourier collocation, numerical simulations of the Orszag-Tang vortex system are presented. The initial conditions for this system consist of a nonrandom, periodic field in which the magnetic and velocity field contain X points but differ in modal structure along one spatial direction. The velocity field is initially solenoidal, with the total initial pressure field consisting of the superposition of the appropriate incompressible pressure distribution upon a flat pressure field corresponding to the initial, average Mach number of the flow. In these numerical simulations, this initial Mach number is varied from 0.2-0.6. These values correspond to average plasma beta values ranging from 30.0 to 3.3, respectively. It is found that compressible effects develop within one or two Alfven transit times, as manifested in the spectra of compressible quantities such as the mass density and the nonsolenoidal flow field. These effects include (1) a retardation of growth of correlation between the magnetic field and the velocity field, (2) the emergence of compressible small-scale structure such as massive jets, and (3) bifurcation of eddies in the compressible flow field. Differences between the incompressible and compressible results tend to increase with increasing initial average Mach number.

  20. The mechanical vapour compression process applied to seawater desalination

    International Nuclear Information System (INIS)

    Murat, F.; Tabourier, B.

    1984-01-01

    The authors present the mechanical vapour compression process applied to seawater desalination. As an example, the paper presents the largest unit so far constructed by SIDEM using this process: a 1,500 m³/day unit installed in the Nuclear Power Plant of Flamanville in France, which supplies high-quality process water to that plant. The authors outline the advantages of this process and also present the series of mechanical vapour compression units that SIDEM has developed, in a size range between 25 m³/day and 2,500 m³/day.

  1. Hadronic production of massive lepton pairs

    International Nuclear Information System (INIS)

    Berger, E.L.

    1982-12-01

    A review is presented of recent experimental and theoretical progress in studies of the production of massive lepton pairs in hadronic collisions. I begin with the classical Drell-Yan annihilation model and its predictions. Subsequently, I discuss deviations from scaling, the status of the proofs of factorization in the parton model, higher-order terms in the perturbative QCD expansion, the discrepancy between measured and predicted yields (K factor), high-twist terms, soft gluon effects, transverse-momentum distributions, implications for weak vector boson (W± and Z⁰) yields and production properties, nuclear A-dependence effects, correlations of the lepton pair with hadrons in the final state, and angular distributions in the lepton-pair rest frame.

  2. Massive radiological releases profoundly differ from controlled releases

    International Nuclear Information System (INIS)

    Pascucci-Cahen, Ludivine; Patrick, Momal

    2012-11-01

    Preparing for a nuclear accident implies understanding potential consequences. While many specialized experts have been working on different particular aspects, surprisingly little effort has been dedicated to establishing the big picture and providing a global and balanced image of all major consequences. IRSN has been working on the cost of nuclear accidents, an exercise which must strive to be as comprehensive as possible since any omission obviously underestimates the cost. It therefore provides (ideally) an estimate of all cost components, thus revealing the structure of accident costs, and hence sketching a global picture. On a French PWR, it appears that controlled releases would cause an 'economical' accident with limited radiological consequences when compared to other costs; in contrast, massive releases would trigger a major crisis with strong radiological consequences. The two types of crises would confront managers with different types of challenges. (authors)

  3. DNABIT Compress – Genome compression algorithm

    OpenAIRE

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress” for DNA sequences based on a novel algorithm of assigning binary bits for smaller segments of DNA bases to compress both repetitive and non repetitive DNA sequence. Our ...

  4. Video-Assisted Minithoracotomy for Pulmonary Laceration with a Massive Hemothorax

    Directory of Open Access Journals (Sweden)

    Hideki Ota

    2014-01-01

    Severe intrathoracic hemorrhage from the pulmonary parenchyma is the most serious complication of pulmonary laceration after blunt trauma, requiring immediate surgical hemostasis through open thoracotomy. The safety and efficacy of video-assisted thoracoscopic surgery (VATS) techniques for this life-threatening condition have not yet been fully evaluated. We report a case of pulmonary laceration with a massive hemothorax after blunt trauma successfully treated using a combination of muscle-sparing minithoracotomy with VATS techniques (video-assisted minithoracotomy). A 22-year-old man was transferred to our department after a fall. A diagnosis of right-sided pneumothorax was made on physical examination and urgent chest decompression was performed with a tube thoracostomy. A chest computed tomographic scan revealed pulmonary laceration with hematoma in the right lung. The pulmonary hematoma, extending along the segmental pulmonary artery in the hilum of the middle lobe, ruptured suddenly into the thoracic cavity, resulting in hemorrhagic shock on the fourth day after admission. Emergency right middle lobectomy was performed through video-assisted minithoracotomy. We used two cotton dissectors like chopsticks to achieve compression hemostasis during surgery. The patient recovered satisfactorily. Video-assisted minithoracotomy can be an alternative approach for the treatment of pulmonary lacerations with a massive hemothorax in hemodynamically unstable patients.

  5. Experimental research of the influence of the strength of ore samples on the parameters of an electromagnetic signal during acoustic excitation in the process of uniaxial compression

    Science.gov (United States)

    Yavorovich, L. V.; Bespal`ko, A. A.; Fedotov, P. I.

    2018-01-01

    Parameters of the electromagnetic responses (EMRe) generated during uniaxial compression of rock samples under excitation by deterministic acoustic pulses are presented and discussed. Such physical modeling in the laboratory makes it possible to reveal the main regularities of electromagnetic signal (EMS) generation in a rock massif. The influence of the samples' mechanical properties on the parameters of the EMRe excited by an acoustic signal during uniaxial compression is considered. It has been established that sulfides and quartz in the rocks of the Tashtagol iron ore deposit (Western Siberia, Russia) contribute to the conversion of mechanical energy into electromagnetic-field energy, which is expressed as an increase in the EMS amplitude. A decrease in the EMS amplitude as the stress-strain state of the sample changes during uniaxial compression is observed when the amount of conductive magnetite in the rock increases. The obtained results are important for the physical substantiation of methods for testing and monitoring changes in the stress-strain state of a rock massif from the parameters of electromagnetic signals and the characteristics of electromagnetic emission.

  6. Massive Conformal Gravity

    International Nuclear Information System (INIS)

    Faria, F. F.

    2014-01-01

    We construct a massive theory of gravity that is invariant under conformal transformations. The massive action of the theory depends on the metric tensor and a scalar field, which are considered the only field variables. We find the vacuum field equations of the theory and analyze its weak-field approximation and Newtonian limit.

  7. Average Nuclear properties based on statistical model

    International Nuclear Information System (INIS)

    El-Jaick, L.J.

    1974-01-01

    The gross properties of nuclei were investigated with the statistical model, in systems with equal and with different numbers of protons and neutrons, treated separately, taking the Coulomb energy into account in the latter case. Some average nuclear properties were calculated based on the energy density of nuclear matter, from the Weizsäcker-Bethe semiempirical mass formula, generalized for compressible nuclei. In the study of the surface-energy coefficient, the great influence exercised by the Coulomb energy and nuclear compressibility was verified. To obtain a good fit to the beta-stability line and mass excesses, the surface symmetry energy was established. (M.C.K.)

  8. The Evolution of Low-Metallicity Massive Stars

    Science.gov (United States)

    Szécsi, Dorottya

    2016-07-01

    Massive star evolution taking place in astrophysical environments consisting almost entirely of hydrogen and helium - in other words, low-metallicity environments - is responsible for some of the most intriguing and energetic cosmic phenomena, including supernovae, gamma-ray bursts and gravitational waves. This thesis aims to investigate the life and death of metal-poor massive stars, using theoretical simulations of stellar structure and evolution. Evolutionary models of rotating, massive stars (9-600 M⊙) with an initial metal composition appropriate for the low-metallicity dwarf galaxy I Zwicky 18 are presented and analyzed. We find that the fast rotating models (300 km/s) become a particular type of object predicted only at low metallicity: the so-called Transparent Wind Ultraviolet INtense (TWUIN) stars. TWUIN stars are fast rotating massive stars that are extremely hot (90 kK), very bright and as compact as Wolf-Rayet stars. However, as opposed to Wolf-Rayet stars, their stellar winds are optically thin. As these hot objects emit intense UV radiation, we show that they can explain the unusually high number of ionizing photons of the dwarf galaxy I Zwicky 18, an observational quantity that cannot be understood solely based on the normal stellar population of this galaxy. On the other hand, we find that the most massive, slowly rotating models become another special type of object predicted only at low metallicity: core-hydrogen-burning cool supergiant stars. Having a slow but strong stellar wind, these supergiants may be important contributors to the chemical evolution of young galactic globular clusters. In particular, we suggest that the low mass stars observed today could form in a dense, massive and cool shell around these now-dead supergiants. This scenario is shown to explain the anomalous surface abundances observed in these low mass stars, since the shell itself, having been made of the mass ejected by the supergiant's wind, contains nuclear

  9. Widespread after-effects of nuclear war

    International Nuclear Information System (INIS)

    Teller, E.

    1984-01-01

    Radioactive fallout and depletion of the ozone layer, once believed catastrophic consequences of nuclear war, are now proved unimportant in comparison to immediate war damage. Today, 'nuclear winter' is claimed to have apocalyptic effects. Uncertainties in massive smoke production and in meteorological phenomena give reason to doubt this conclusion. (author)

  10. Massive gravity from bimetric gravity

    International Nuclear Information System (INIS)

    Baccetti, Valentina; Martín-Moruno, Prado; Visser, Matt

    2013-01-01

    We discuss the subtle relationship between massive gravity and bimetric gravity, focusing particularly on the manner in which massive gravity may be viewed as a suitable limit of bimetric gravity. The limiting procedure is more delicate than currently appreciated. Specifically, this limiting procedure should not unnecessarily constrain the background metric, which must be externally specified by the theory of massive gravity itself. The fact that in bimetric theories one always has two sets of metric equations of motion continues to have an effect even in the massive gravity limit, leading to additional constraints besides the one set of equations of motion naively expected. Thus, since solutions of bimetric gravity in the limit of vanishing kinetic term are also solutions of massive gravity, but the contrary statement is not necessarily true, there is no complete continuity in the parameter space of the theory. In particular, we study the massive cosmological solutions which are continuous in the parameter space, showing that many interesting cosmologies belong to this class. (paper)

  11. The use of ZFP lossy floating point data compression in tornado-resolving thunderstorm simulations

    Science.gov (United States)

    Orf, L.

    2017-12-01

    In the field of atmospheric science, numerical models are used to produce forecasts of weather and climate and serve as virtual laboratories for scientists studying atmospheric phenomena. In both operational and research arenas, atmospheric simulations exploiting modern supercomputing hardware can produce a tremendous amount of data. During model execution, the transfer of floating point data from memory to the file system is often a significant bottleneck where I/O can dominate wallclock time. One way to reduce the I/O footprint is to compress the floating point data, which reduces amount of data saved to the file system. In this presentation we introduce LOFS, a file system developed specifically for use in three-dimensional numerical weather models that are run on massively parallel supercomputers. LOFS utilizes the core (in-memory buffered) HDF5 driver and includes compression options including ZFP, a lossy floating point data compression algorithm. ZFP offers several mechanisms for specifying the amount of lossy compression to be applied to floating point data, including the ability to specify the maximum absolute error allowed in each compressed 3D array. We explore different maximum error tolerances in a tornado-resolving supercell thunderstorm simulation for model variables including cloud and precipitation, temperature, wind velocity and vorticity magnitude. We find that average compression ratios exceeding 20:1 in scientifically interesting regions of the simulation domain produce visually identical results to uncompressed data in visualizations and plots. Since LOFS splits the model domain across many files, compression ratios for a given error tolerance can be compared across different locations within the model domain. We find that regions of high spatial variability (which tend to be where scientifically interesting things are occurring) show the lowest compression ratios, whereas regions of the domain with little spatial variability compress
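
    In ZFP's fixed-accuracy mode the maximum absolute error is specified directly, which is the error-bound mechanism described above. Assuming the zfpy Python bindings are available, a call might look like the sketch below; the array shape and tolerance are illustrative, not values from the simulations.

        import numpy as np
        import zfpy  # Python bindings for the ZFP library (assumed installed)

        field = (np.random.default_rng(2)
                 .standard_normal((64, 64, 64)).astype(np.float32))

        # Fixed-accuracy mode: every decompressed value is within
        # `tolerance` of the original (maximum absolute error bound).
        packed = zfpy.compress_numpy(field, tolerance=1e-3)
        restored = zfpy.decompress_numpy(packed)

        assert np.max(np.abs(restored - field)) <= 1e-3
        print(f"compression ratio: {field.nbytes / len(packed):.1f}:1")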

  12. A new method of on-line multiparameter amplitude analysis with compression

    International Nuclear Information System (INIS)

    Morhac, M.; matousek, V.

    1996-01-01

    An algorithm for on-line multidimensional amplitude analysis with compression using a fast adaptive orthogonal transform is presented in the paper. The method is based on a direct modification of the multiplication coefficients of the signal flow graph of the fast Cooley-Tukey algorithm. The coefficients are modified according to a reference vector representing the processed data. The method has been tested by compressing three-parameter experimental nuclear data. The efficiency of the derived adaptive transform is compared with classical orthogonal transforms. (orig.)

  13. Rapid depressurization of a compressible fluid

    International Nuclear Information System (INIS)

    Dang, M.; Dupont, J.F.; Weber, H.

    1978-08-01

    The rapid depressurization of a plenum is a situation frequently encountered in the dynamical analysis of nuclear gas cycles of the HHT type. Various methods of numerical analyses for a 1-dimensional flow model are examined: finite difference method; control volume method; method of characteristics. Based on the shallow water analogy to compressible flow, the numerical results are compared with those from a water table set up to simulate a standard problem. (Auth.)

  14. Compression-based aggregation model for medical web services.

    Science.gov (United States)

    Al-Shammary, Dhiah; Khalil, Ibrahim

    2010-01-01

    Many organizations, such as hospitals, have adopted Cloud Web services for their network services to avoid investing heavily in computing infrastructure. SOAP (Simple Object Access Protocol), the basic communication protocol of Cloud Web services, is an XML-based protocol. Web services often suffer congestion and bottlenecks as a result of the high network traffic caused by the large XML overhead. At the same time, the massive load on Cloud Web services, in terms of the large volume of client requests, has resulted in the same problem. In this paper, two XML-aware aggregation techniques based on compression concepts are proposed in order to aggregate medical Web messages and achieve higher message size reduction.

  15. Massively Clustered CubeSats NCPS Demo Mission

    Science.gov (United States)

    Robertson, Glen A.; Young, David; Kim, Tony; Houts, Mike

    2013-01-01

    Technologies under development for the proposed Nuclear Cryogenic Propulsion Stage (NCPS) will require an un-crewed demonstration mission before they can be flight qualified over distances and time frames representative of a crewed Mars mission. In this paper, we describe a Massively Clustered CubeSats platform, possibly comprising hundreds of CubeSats, as the main payload of the NCPS demo mission. This platform would enable a mechanism for cost savings for the demo mission through shared support between NASA and other government agencies as well as leveraged commercial aerospace and academic community involvement. We believe a Massively Clustered CubeSats platform should be an obvious first choice for the NCPS demo mission when one considers that cost and risk of the payload can be spread across many CubeSat customers and that the NCPS demo mission can capitalize on using CubeSats developed by others for its own instrumentation needs. Moreover, a demo mission of the NCPS offers an unprecedented opportunity to invigorate the public on a global scale through direct individual participation coordinated through a web-based collaboration engine. The platform we describe would be capable of delivering CubeSats at various locations along a trajectory toward the primary mission destination, in this case Mars, permitting a variety of potential CubeSat-specific missions. Cameras on various CubeSats can also be used to provide multiple views of the space environment and the NCPS vehicle for video monitoring as well as allow the public to "ride along" as virtual passengers on the mission. This collaborative approach could even initiate a brand new Science, Technology, Engineering and Math (STEM) program for launching student developed CubeSat payloads beyond Low Earth Orbit (LEO) on future deep space technology qualification missions. Keywords: Nuclear Propulsion, NCPS, SLS, Mars, CubeSat.

  16. Vaidya spacetime in massive gravity's rainbow

    Directory of Open Access Journals (Sweden)

    Yaghoub Heydarzade

    2017-11-01

    In this paper, we analyze the energy-dependent deformation of massive gravity using the formalism of massive gravity's rainbow. We thus use the Vainshtein mechanism and the dRGT mechanism for the energy-dependent massive gravity, and thereby analyze a ghost-free theory of massive gravity's rainbow. We study the energy dependence of a time-dependent geometry by analyzing the radiating Vaidya solution in this theory of massive gravity's rainbow. The energy-dependent deformation of this Vaidya metric is performed using suitable rainbow functions.

  17. Comparative data compression techniques and multi-compression results

    International Nuclear Information System (INIS)

    Hasan, M R; Ibrahimy, M I; Motakabber, S M A; Ferdaus, M M; Khan, M N H

    2013-01-01

    Data compression is essential in business data processing because of the cost savings it offers and the large volume of data manipulated in many business applications. It is a method or system for transmitting digital data (e.g., an image, i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the data, the better the transmission speed and the more time saved. In such communication, we always want to transmit data efficiently and noise-free. This paper presents several techniques for lossless compression of text-type data and compares the results of multiple and single compression, which helps to identify the better compression output and to develop compression algorithms.
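
    The single-versus-multiple comparison can be reproduced with the lossless compressors in the Python standard library; these are generic codecs standing in for whichever techniques the paper actually compares.

        import bz2
        import lzma
        import zlib

        text = ("Data compression is concerned with how information "
                "is organized in data. " * 200).encode()

        for name, fn in [("zlib", zlib.compress),
                         ("bz2", bz2.compress),
                         ("lzma", lzma.compress)]:
            once = fn(text)
            twice = fn(once)   # multiple compression: recompress the output
            print(f"{name}: {len(text)} -> {len(once)} -> {len(twice)} bytes")

    Recompressing already-compressed output typically gains little or even grows the data, which is exactly the kind of result such a comparison makes visible.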

  18. MassiveNuS: cosmological massive neutrino simulations

    Science.gov (United States)

    Liu, Jia; Bird, Simeon; Zorrilla Matilla, José Manuel; Hill, J. Colin; Haiman, Zoltán; Madhavacheril, Mathew S.; Petri, Andrea; Spergel, David N.

    2018-03-01

    The non-zero mass of neutrinos suppresses the growth of cosmic structure on small scales. Since the level of suppression depends on the sum of the masses of the three active neutrino species, the evolution of large-scale structure is a promising tool to constrain the total mass of neutrinos and possibly shed light on the mass hierarchy. In this work, we investigate these effects via a large suite of N-body simulations that include massive neutrinos using an analytic linear-response approximation: the Cosmological Massive Neutrino Simulations (MassiveNuS). The simulations include the effects of radiation on the background expansion, as well as the clustering of neutrinos in response to the nonlinear dark matter evolution. We allow three cosmological parameters to vary: the neutrino mass sum Mν in the range of 0–0.6 eV, the total matter density Ωm, and the primordial power spectrum amplitude As. The rms density fluctuation in spheres of 8 comoving Mpc/h (σ8) is a derived parameter as a result. Our data products include N-body snapshots, halo catalogues, merger trees, ray-traced galaxy lensing convergence maps for four source redshift planes between zs=1–2.5, and ray-traced cosmic microwave background lensing convergence maps. We describe the simulation procedures and code validation in this paper. The data are publicly available at http://columbialensing.org.
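
    For reference, σ8 as used above is the standard linear-theory rms fluctuation obtained by smoothing the density field with a spherical top-hat window of radius R = 8 Mpc/h (a textbook definition, not specific to this paper):

      % rms density fluctuation in spheres of radius R = 8 Mpc/h
      \begin{equation}
        \sigma_8^2 = \frac{1}{2\pi^2} \int_0^\infty \mathrm{d}k\, k^2 P(k)
        \left[ \frac{3\,(\sin kR - kR\cos kR)}{(kR)^3} \right]^2 ,
        \qquad R = 8\,h^{-1}\,\mathrm{Mpc}.
      \end{equation}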

  19. The breathing mode and the nuclear surface

    International Nuclear Information System (INIS)

    Blaizot, J.P.; Grammaticos, B.

    1981-01-01

    The role of the nuclear surface in the breathing mode of nuclei is analyzed. We discuss a simple model in which the density varies according to a scaling of the coordinates. We show that this model accurately reproduces the results of microscopic calculations in heavy nuclei, and we use it to estimate the contribution of the surface to the effective compression modulus of semi-infinite nuclear matter. The calculation is performed in the framework of an extended Thomas-Fermi approximation using several effective interactions. It is shown that the surface energy is maximum with respect to variations of the density around saturation density. The reduction of the effective compression modulus due to the surface turns out to be proportional to the bulk compression modulus. The magnitude of the effect is compared with results of RPA calculations. Other contributions to the effective compression modulus of finite nuclei are also discussed. (orig.)
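
    The quantities discussed above are related by standard scaling-model expressions (textbook forms, not the paper's specific fits): the breathing-mode energy follows from the effective compression modulus K_A, whose leptodermous expansion exhibits a surface term, negative and roughly proportional to the bulk modulus, with δ = (N - Z)/A:

      % scaling-model breathing-mode energy and expansion of K_A
      \begin{align}
        E_{\mathrm{GMR}} &= \hbar \sqrt{\frac{K_A}{m \langle r^2 \rangle}}, \\
        K_A &\simeq K_{\mathrm{vol}} + K_{\mathrm{surf}}\, A^{-1/3}
              + K_{\mathrm{sym}}\, \delta^2 + K_{\mathrm{Coul}}\, Z^2 A^{-4/3},
        \qquad K_{\mathrm{surf}} \propto -K_{\mathrm{vol}}.
      \end{align}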

  20. The compression algorithm for the data acquisition system in HT-7 tokamak

    International Nuclear Information System (INIS)

    Zhu Lin; Luo Jiarong; Li Guiming; Yue Dongli

    2003-01-01

    HT-7, the superconducting tokamak at the Institute of Plasma Physics of the Chinese Academy of Sciences, is an experimental device for fusion research in China. The main task of the HT-7 data acquisition system is to acquire, store, analyze and index the data, whose volume reaches nearly hundreds of megabytes. Beyond the hardware and software support, the storage, processing and transfer of such a large amount of data is the more important problem, and the key technology for addressing it is the data compression algorithm. In this paper, the data format in HT-7 is introduced first, and then the data compression algorithm LZO, a portable lossless data compression algorithm written in ANSI C, is analyzed. This compression algorithm, which fits well with data acquisition and distribution in nuclear fusion experiments, offers fairly fast compression and extremely fast decompression. Finally, a performance evaluation of the LZO application in HT-7 is given
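
    A minimal sketch of the acquire-compress-store pattern described above. LZO itself requires third-party bindings (e.g. python-lzo), so the standard library's zlib at its fastest level stands in here for LZO's fast-compress/very-fast-decompress trade-off; the mock channel data is an illustrative assumption:

      import time
      import zlib

      import numpy as np

      # Mock diagnostic channel: a digitized waveform, as a DAQ might produce.
      signal = (np.sin(np.linspace(0, 200 * np.pi, 1_000_000)) * 3000).astype(np.int16)
      raw = signal.tobytes()

      t0 = time.perf_counter()
      packed = zlib.compress(raw, level=1)   # fast, LZO-like setting
      t1 = time.perf_counter()
      restored = zlib.decompress(packed)
      t2 = time.perf_counter()

      assert restored == raw                 # lossless round trip
      print(f"ratio {len(raw) / len(packed):.2f}, "
            f"compress {t1 - t0:.3f} s, decompress {t2 - t1:.3f} s")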

  1. Holographically viable extensions of topologically massive and minimal massive gravity?

    Science.gov (United States)

    Altas, Emel; Tekin, Bayram

    2016-01-01

    Recently [E. Bergshoeff et al., Classical Quantum Gravity 31, 145008 (2014)], an extension of the topologically massive gravity (TMG) in 2 +1 dimensions, dubbed as minimal massive gravity (MMG), which is free of the bulk-boundary unitarity clash that afflicts the former theory and all the other known three-dimensional theories, was found. Field equations of MMG differ from those of TMG at quadratic terms in the curvature that do not come from the variation of an action depending on the metric alone. Here we show that MMG is a unique theory and there does not exist a deformation of TMG or MMG at the cubic and quartic order (and beyond) in the curvature that is consistent at the level of the field equations. The only extension of TMG with the desired bulk and boundary properties having a single massive degree of freedom is MMG.

  2. Climatic Consequences of Nuclear Conflict

    Science.gov (United States)

    Robock, A.

    2011-12-01

    A nuclear war between Russia and the United States could still produce nuclear winter, even using the reduced arsenals of about 4000 total nuclear weapons that will result by 2017 in response to the New START treaty. A nuclear war between India and Pakistan, with each country using 50 Hiroshima-sized atom bombs as airbursts on urban areas, could produce climate change unprecedented in recorded human history. This scenario, using much less than 1% of the explosive power of the current global nuclear arsenal, would produce so much smoke from the resulting fires that it would plunge the planet to temperatures colder than those of the Little Ice Age of the 16th to 19th centuries, shortening the growing season around the world and threatening the global food supply. Crop model studies of agriculture in the U.S. and China show massive crop losses, even for this regional nuclear war scenario. Furthermore, there would be massive ozone depletion with enhanced ultraviolet radiation reaching the surface. These surprising conclusions are the result of recent research (see URL) by a team of scientists including those who produced the pioneering work on nuclear winter in the 1980s, using the NASA GISS ModelE and NCAR WACCM GCMs. The soot is self-lofted into the stratosphere, and the effects of regional and global nuclear war would last for more than a decade, much longer than previously thought. Nuclear proliferation continues, with nine nuclear states now, and more working to develop or acquire nuclear weapons. The continued environmental threat of the use of even a small number of nuclear weapons must be considered in nuclear policy deliberations in Russia, the U.S., and the rest of the world.

  3. P. W. Bridgman's contributions to the foundations of shock compression of condensed matter

    Energy Technology Data Exchange (ETDEWEB)

    Nellis, W J, E-mail: nellis@physics.harvard.ed [Department of Physics, Harvard University, Cambridge MA 02138 (United States)

    2010-03-01

    Based on his 50-year career in static high-pressure research, P. W. Bridgman (PWB) is the father of modern high-pressure physics. What is not generally recognized is that Bridgman was also intimately connected with establishing shock compression as a scientific tool and he predicted major events in shock research that occurred up to 40 years after his death. In 1956 the first phase transition under shock compression was reported in Fe at 13 GPa (130 kbar). PWB said a phase transition could not occur in ~1 μs, thus setting off a controversy. The scientific legitimacy of shock compression resulted 5 years later when static high-pressure researchers confirmed with x-ray diffraction the existence of epsilon-Fe. Once PWB accepted the fact that shock waves generated with chemical explosives were a valid scientific tool, he immediately realized that substantially higher pressures would be achieved with nuclear explosives. He included his ideas for achieving higher pressures in articles published a few years after his death. L. V. Altshuler eventually read Bridgman's articles and pursued the idea of using nuclear explosives to generate super high pressures, which subsequently morphed into today's giant lasers. PWB also anticipated combining static and shock methods, which today is done with pre-compression of a soft sample in a diamond anvil cell followed by laser-driven shock compression. One variation of that method is the reverberating-shock technique, in which the first shock pre-compresses a soft sample and subsequent reverberations isentropically compress the first-shocked state.

  4. Formation of the First Star Clusters and Massive Star Binaries by Fragmentation of Filamentary Primordial Gas Clouds

    Science.gov (United States)

    Hirano, Shingo; Yoshida, Naoki; Sakurai, Yuya; Fujii, Michiko S.

    2018-03-01

    We perform a set of cosmological simulations of early structure formation incorporating baryonic streaming motions. We present a case where a significantly elongated gas cloud with ∼10^4 solar masses (M_⊙) is formed in a pre-galactic (∼10^7 M_⊙) dark halo. The gas streaming into the halo compresses and heats the massive filamentary cloud to a temperature of ∼10,000 Kelvin. The gas cloud cools rapidly by atomic hydrogen cooling, and then by molecular hydrogen cooling down to ∼400 Kelvin. The rapid decrease of the temperature and hence of the Jeans mass triggers fragmentation of the filament to yield multiple gas clumps with a few hundred solar masses. We estimate the mass of the primordial star formed in each fragment by adopting an analytic model based on a large set of radiation hydrodynamics simulations of protostellar evolution. The resulting stellar masses are in the range of ∼50–120 M_⊙. The massive stars gravitationally attract each other and form a compact star cluster. We follow the dynamics of the star cluster using a hybrid N-body simulation. We show that massive star binaries are formed in a few million years through multi-body interactions at the cluster center. The eventual formation of the remnant black holes will leave a massive black hole binary, which can be a progenitor of strong gravitational wave sources similar to those recently detected by the Advanced Laser Interferometer Gravitational-Wave Observatory (LIGO).

  5. Sustainability of compressive residual stress by stress improvement processes

    International Nuclear Information System (INIS)

    Nishikawa, Satoru; Okita, Shigeru; Yamaguchi, Atsunori

    2013-01-01

    Stress improvement processes are countermeasures against stress corrosion cracking in nuclear power plant components. It is necessary to confirm whether the compressive residual stress induced by stress improvement processes can be sustained under the operating environment. In order to evaluate the stability of the compressive residual stress under 60-year operating conditions, cyclic strains of 0.07% were applied 200 times to the welded specimens at 593 K, followed by a thermal aging treatment for 1.66×10^6 s at 673 K. As a result, it was confirmed that the compressive residual stresses were sustained under 60-year operating conditions on both surfaces of the dissimilar welds of austenitic stainless steel (SUS316L) and nickel-base alloy (NCF600 and alloy 182) processed by laser peening (LP), water jet peening (WJP), ultrasonic shot peening (USP), shot peening (SP) and polishing. (author)

  6. ClC-3 Promotes Osteogenic Differentiation in MC3T3-E1 Cell After Dynamic Compression.

    Science.gov (United States)

    Wang, Dawei; Wang, Hao; Gao, Feng; Wang, Kun; Dong, Fusheng

    2017-06-01

    The ClC-3 chloride channel has been shown to be related to the expression of osteogenic markers during osteogenesis, and persistent static compression can upregulate the expression of ClC-3 and regulate osteodifferentiation in osteoblasts. However, the relationship between ClC-3 expression and osteodifferentiation after dynamic compression had not been studied. In this study, we applied dynamic compression to MC3T3-E1 cells in a biopress system and measured the expression of ClC-3, runt-related transcription factor 2 (Runx2), bone morphogenic protein-2 (BMP-2), osteopontin (OPN), nuclear-associated antigen Ki67 (Ki67), and proliferating cell nuclear antigen (PCNA); we then investigated the expression of these genes after dynamic compression with chlorotoxin (a specific ClC-3 chloride channel inhibitor) added. Under transmission electron microscopy, MC3T3-E1 cells after dynamic compression showed more cell-surface protrusions, rough endoplasmic reticulum, mitochondria, Golgi apparatus, abundant glycogen, and lysosomes scattered in the cytoplasm, and the nucleolus was more obvious. We found that ClC-3 was significantly up-regulated after dynamic compression. The compressive force also up-regulated Runx2, BMP-2, and OPN after dynamic compression for 2, 4 and 8 h. The proliferation genes Ki67 and PCNA did not change significantly after dynamic compression for 8 h. Chlorotoxin did not change the expression of ClC-3 but reduced the expression of Runx2, BMP-2, and OPN after dynamic compression compared with the group without chlorotoxin added. The data from the current study suggest that ClC-3 may promote osteogenic differentiation in MC3T3-E1 cells after dynamic compression. J. Cell. Biochem. 118: 1606-1613, 2017. © 2016 Wiley Periodicals, Inc.

  7. Efficient Lossy Compression for Compressive Sensing Acquisition of Images in Compressive Sensing Imaging Systems

    Directory of Open Access Journals (Sweden)

    Xiangwei Li

    2014-12-01

    Full Text Available Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, general image compression solutions may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which achieves better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the CS measurements obtained from CS acquisition, without any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4–2 dB compared with the current state of the art, while maintaining a low computational complexity.
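
    A toy sketch of the acquisition side described above: random Gaussian measurements of a sparse signal followed by a uniform scalar quantizer that needs no prior knowledge of the image. Sizes and the quantizer step are illustrative assumptions; the paper's adaptive sampling and rate-distortion machinery is not reproduced here:

      import numpy as np

      rng = np.random.default_rng(0)
      n, m, k = 1024, 256, 20                 # length, measurements, sparsity

      x = np.zeros(n)                         # k-sparse test signal
      x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

      phi = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian sensing matrix
      y = phi @ x                             # CS measurements ("acquisition")

      step = 0.05                             # quantizer step, fixed a priori
      y_hat = np.round(y / step) * step       # universal uniform quantization
      print(f"max quantization error: {np.abs(y - y_hat).max():.4f}")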

  8. Implementation of PHENIX trigger algorithms on massively parallel computers

    International Nuclear Information System (INIS)

    Petridis, A.N.; Wohn, F.K.

    1995-01-01

    The event selection requirements of contemporary high energy and nuclear physics experiments are met by the introduction of on-line trigger algorithms which identify potentially interesting events and reduce the data acquisition rate to levels that are manageable by the electronics. Such algorithms, being parallel in nature, can be simulated off-line using massively parallel computers. The PHENIX experiment intends to investigate the possible existence of a new phase of matter called the quark gluon plasma, theorized to have existed in very early stages of the evolution of the universe, by studying collisions of heavy nuclei at ultra-relativistic energies. Such interactions can also reveal important information regarding the structure of the nucleus and mandate a thorough investigation of the simpler proton-nucleus collisions at the same energies. The complexity of PHENIX events and the need to analyze and also simulate them at rates similar to those of data collection impose enormous computational demands. This work is a first effort to implement PHENIX trigger algorithms on parallel computers and to study the feasibility of using such machines to run the complex programs necessary for the simulation of the PHENIX detector response. Fine and coarse grain approaches have been studied and evaluated. Depending on the application, the performance of a massively parallel computer can be much better or much worse than that of a serial workstation. A comparison between single instruction and multiple instruction computers is also made and possible applications of the single instruction machines to high energy and nuclear physics experiments are outlined. copyright 1995 American Institute of Physics

  9. Effect of applied stress on the compressive residual stress introduced by laser peening

    International Nuclear Information System (INIS)

    Sumiya, Rie; Tazawa, Toshiyuki; Narazaki, Chihiro; Saito, Toshiyuki; Kishimoto, Kikuo

    2016-01-01

    Peening is a process that generates compressive residual stress and is known to be effective in preventing SCC initiation and improving fatigue strength. Laser peening is used on nuclear power plant components in order to prevent SCC initiation. Although it is reported that the compressive residual stress decreases under the applied stresses of general operating conditions, the change in residual stress might be large under excessive loading such as an earthquake. The objectives of this study are to evaluate the relaxation behavior of the compressive residual stress due to laser peening and to confirm the surface residual stress after loading. Therefore, laser-peened round-bar test specimens of SUS316L, which is used for the reactor internals of nuclear power plants, were loaded at room temperature and elevated temperature, and the surface residual stresses were then measured by the X-ray diffraction method. The tests confirmed that the compressive residual stress remained after applying a uniform stress larger than the 0.2% proof stress, and that the effect of cyclic loading on the residual stress was small. The effect of applying compressive stress on residual stress relaxation was confirmed to be less than that of applying tensile stress. Plastic deformation through a whole cross section causes a change in the residual stress distribution; as a result, the surface compressive residual stress is released. It was shown that the effect of specimen size on residual stress relaxation, and the residual stress relaxation behavior in the stress concentration region, can be explained by the assumed stress relaxation mechanism. (author)

  10. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped with...

  11. Non-nuclear energies

    International Nuclear Information System (INIS)

    Nifenecker, Herve

    2006-01-01

    The different meanings of the word 'energy', as understood by economists, are reviewed and explained. Present rates of consumption of fossil and nuclear fuels are given, as well as the corresponding reserves and resources. The time left before exhaustion of these reserves is calculated for different energy consumption scenarios. One finds that coal and nuclear alone only allow us to reach the end of this century. Without specific measures, the predicted massive use of coal is not compatible with any admissible level of global warming. Thus, we discuss clean coal techniques, including carbon dioxide capture and storage. We proceed with a discussion of the availability and feasibility of renewable energies, with special attention to electricity production, distinguishing controllable renewable energies from those that are intermittent. Among the first are hydroelectricity, biomass, and geothermal; among the second, wind and solar. At the world level, hydroelectricity will most probably remain the main renewable contributor to electricity production. Photovoltaics is extremely promising for supplying remote villages deprived of access to a centralized network. Biomass should be an important source of biofuels. Geothermal energy should be an interesting source of low-temperature heat. Development of wind energy will be inhibited by the lack of cheap and massive electricity storage; its contribution should not exceed 10% of electricity production. Its present development is totally dependent upon massive public support. (author)

  12. Nuclear deterrence: Inherent escalation?

    International Nuclear Information System (INIS)

    Bergbauer, J.R. Jr.

    1993-01-01

    Despite 40 years of peace between the super powers, there is increasing clamor to the effect that nuclear war between the super powers is imminent; or could occur through escalation from a minor conflict; or could result from harsh rhetoric (but only on the part of the U.S.) in the super power dialogue. The factor that is ignored is that a massive nuclear attack would be rational ONLY if that attack could inflict such damage that the other super power could not launch a significant retaliatory nuclear attack. ONLY in this circumstance would there be any profit in launching an initial Strategic Nuclear Attack. This First Strike capability is not now possessed nor projected to be developed by either super power. As long as ANY possible Strategic Nuclear Attack against the national territory of one super power would be insufficient to prevent an equally destructive retaliatory attack, then a Strategic Nuclear Attack would inevitably result in the destruction of both and would be profitless, hence, pointless. This situation describes Mutually Assured Destruction (MAD), the governing conflict paradigm applicable to both super powers. The only conventional attack that would even remotely rival the national-destruction potential of a Strategic Nuclear Attack, and could cause the attacked power to consider launching a retaliatory Strategic Nuclear Attack, would be a massive land-air invasion/occupation of one super power by the other. Since neither super power can successfully execute such a conventional invasion/occupation, this situation is moot. The geo-political environments of the two super powers are so asymmetrical, and their military positions so symmetrical, that the probability of ANY foreseeable situation resulting in their resorting to a Strategic Nuclear Exchange is vanishingly small. It is possible to escape the Chicken-Little syndrome and, instead, devote energy to ensuring the maintenance of this favorable, but fragile, world system

  13. Massive Supergravity and Deconstruction

    CERN Document Server

    Gregoire, T; Shadmi, Y; Gregoire, Thomas; Schwartz, Matthew D; Shadmi, Yael

    2004-01-01

    We present a simple superfield Lagrangian for massive supergravity. It comprises the minimal supergravity Lagrangian with interactions as well as mass terms for the metric superfield and the chiral compensator. This is the natural generalization of the Fierz-Pauli Lagrangian for massive gravity which comprises mass terms for the metric and its trace. We show that the on-shell bosonic and fermionic fields are degenerate and have the appropriate spins: 2, 3/2, 3/2 and 1. We then study this interacting Lagrangian using goldstone superfields. We find that a chiral multiplet of goldstones gets a kinetic term through mixing, just as the scalar goldstone does in the non-supersymmetric case. This produces Planck scale (Mpl) interactions with matter and all the discontinuities and unitarity bounds associated with massive gravity. In particular, the scale of strong coupling is (Mpl m^4)^1/5, where m is the multiplet's mass. Next, we consider applications of massive supergravity to deconstruction. We estimate various qu...

  14. COLA with massive neutrinos

    Energy Technology Data Exchange (ETDEWEB)

    Wright, Bill S.; Winther, Hans A.; Koyama, Kazuya, E-mail: bill.wright@port.ac.uk, E-mail: hans.winther@port.ac.uk, E-mail: kazuya.koyama@port.ac.uk [Institute of Cosmology and Gravitation, University of Portsmouth, Dennis Sciama Building, Burnaby Road, Portsmouth, Hampshire, PO1 3FX (United Kingdom)

    2017-10-01

    The effect of massive neutrinos on the growth of cold dark matter perturbations acts as a scale-dependent Newton's constant and leads to scale-dependent growth factors, just as we often find in models of gravity beyond General Relativity. We show how to compute growth factors for ΛCDM and general modified gravity cosmologies combined with massive neutrinos in Lagrangian perturbation theory for use in COLA and extensions thereof. We implement this together with the grid-based massive neutrino method of Brandbyge and Hannestad in MG-PICOLA and compare COLA simulations to full N-body simulations of ΛCDM and f(R) gravity with massive neutrinos. Our implementation is computationally cheap if the underlying cosmology already has scale-dependent growth factors, and it is shown to produce results that match N-body simulations to percent-level accuracy for both the total and CDM matter power spectra up to k ≲ 1 h/Mpc.
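
    A toy illustration of scale-dependent growth factors of the kind described above: the linear growth ODE integrated with a k-dependent effective gravitational source mimicking neutrino free-streaming suppression. The step-like mu(k) and all parameter values are illustrative assumptions, not the paper's implementation:

      import numpy as np
      from scipy.integrate import solve_ivp

      om, f_nu, k_fs = 0.3, 0.01, 0.1   # Omega_m, neutrino fraction, k_fs [h/Mpc]

      def mu(k):
          # Effective source term: suppressed below the free-streaming scale.
          return 1.0 - f_nu * k**2 / (k**2 + k_fs**2)

      def growth(k, n_grid):
          # D'' + (2 + dlnE/dlna) D' = (3/2) Omega_m(a) mu(k) D, in N = ln a.
          def rhs(n, y):
              a = np.exp(n)
              e2 = om / a**3 + (1.0 - om)      # flat LCDM background
              om_a = om / a**3 / e2
              d, dp = y
              return [dp, -(2.0 - 1.5 * om_a) * dp + 1.5 * om_a * mu(k) * d]
          sol = solve_ivp(rhs, (n_grid[0], n_grid[-1]), [1.0, 1.0],
                          t_eval=n_grid, rtol=1e-8)
          return sol.y[0]

      n_grid = np.linspace(np.log(1e-2), 0.0, 200)   # from a = 0.01 to a = 1
      for k in (0.01, 0.1, 1.0):
          print(f"k = {k:5.2f} h/Mpc: D(1)/D(0.01) = {growth(k, n_grid)[-1]:.3f}")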

  15. Minimal massive 3D gravity

    International Nuclear Information System (INIS)

    Bergshoeff, Eric; Merbis, Wout; Hohm, Olaf; Routh, Alasdair J; Townsend, Paul K

    2014-01-01

    We present an alternative to topologically massive gravity (TMG) with the same ‘minimal’ bulk properties; i.e. a single local degree of freedom that is realized as a massive graviton in linearization about an anti-de Sitter (AdS) vacuum. However, in contrast to TMG, the new ‘minimal massive gravity’ has both a positive energy graviton and positive central charges for the asymptotic AdS-boundary conformal algebra. (paper)

  16. Data compression with applications to digital radiology

    International Nuclear Information System (INIS)

    Elnahas, S.E.

    1985-01-01

    The structure of arithmetic codes is defined in terms of source parsing trees. The theoretical derivations of algorithms for the construction of optimal and sub-optimal structures are presented. The software simulation results demonstrate how arithmetic coding outperforms variable-length to variable-length coding. Linear predictive coding is presented for the compression of digital diagnostic images from several imaging modalities, including computed tomography, nuclear medicine, ultrasound, and magnetic resonance imaging. The problem of designing optimal predictors is formulated and alternative solutions are discussed. The results indicate that noiseless compression factors between 1.7 and 7.4 can be achieved. With nonlinear predictive coding, noisy and noiseless compression techniques are combined in a novel way that may have a potential impact on picture archiving and communication systems in radiology. Adaptive fast discrete cosine transform coding systems are used as nonlinear block predictors, and optimal delta modulation systems are used as nonlinear sequential predictors. The off-line storage requirements for archiving diagnostic images are reasonably reduced by the nonlinear block predictive coding. The on-line performance, however, seems to be bounded by that of the linear systems. The subjective quality of imperfect image reproductions from the cosine transform coding is promising and prompts future research on the compression of diagnostic images by transform coding systems and the clinical evaluation of these systems
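
    A small sketch of the linear predictive coding idea described above: predict each pixel from its left neighbour and compare the first-order entropy of raw pixels with that of the prediction residuals. The synthetic scanline image and the previous-pixel predictor are illustrative stand-ins for the diagnostic data and optimal predictors discussed in the work:

      import numpy as np

      rng = np.random.default_rng(1)
      row = np.cumsum(rng.integers(-2, 3, 512))            # smooth scanline
      img = np.clip(128 + np.vstack([row] * 512), 0, 255).astype(np.uint8)

      def entropy(values):
          # First-order Shannon entropy in bits per symbol.
          _, counts = np.unique(values, return_counts=True)
          p = counts / counts.sum()
          return float(-(p * np.log2(p)).sum())

      residual = np.diff(img.astype(np.int16), axis=1)     # previous-pixel predictor
      print(f"raw pixels: {entropy(img.ravel()):.2f} bits/pixel")
      print(f"residuals:  {entropy(residual.ravel()):.2f} bits/pixel")
      # A lossless entropy coder (e.g. arithmetic coding, as above) applied
      # to the residuals realizes the compression implied by the entropy drop.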

  17. The Compressed Baryonic Matter Experiment at FAIR

    International Nuclear Information System (INIS)

    Heuser, Johann M.

    2013-01-01

    The Compressed Baryonic Matter (CBM) experiment will explore the phase diagram of strongly interacting matter in the region of high net baryon densities. The experiment is being laid out for nuclear collision rates from 0.1 to 10 MHz to access a uniquely wide spectrum of probes, including the rarest particles such as hadrons containing charm quarks or multi-strange hyperons. The physics programme will be performed with ion beams of energies up to 45 GeV/nucleon. These will be delivered by the SIS-300 synchrotron at the completed FAIR accelerator complex. Parts of the research programme can already be addressed with the SIS-100 synchrotron at the start of FAIR operation in 2018. The initial energy range of up to 11 GeV/nucleon for heavy nuclei, 14 GeV/nucleon for light nuclei, and 29 GeV for protons allows addressing the equation of state of compressed nuclear matter, the properties of hadrons in a dense medium, the production and propagation of charm near the production threshold, and exploring the third, strange dimension of the nuclide chart. In this article we summarize the CBM physics programme, the preparation of the detector, and give an outline of the recently begun construction of the Facility for Antiproton and Ion Research

  18. Russia's nuclear elite on rampage

    International Nuclear Information System (INIS)

    Popova, L.

    1993-01-01

    In July 1992, the Russian Ministry of Nuclear Industry began pressing the Russian government to adopt a plan to build new nuclear power plants. In mid-January 1993 the government announced that it will build at least 30 new nuclear power plants, and that the second stage of the building program will include construction of three fast-breeder reactors. In this article, the author addresses the rationale behind this massive building program, despite the country's economic condition and public dread of another Chernobyl-type accident. The viewpoints of both the Russian Ministry of Nuclear Industry and opposing interests are discussed

  19. Nuclear power in our societies

    International Nuclear Information System (INIS)

    Fardeau, J.C.

    2011-01-01

    Hiroshima, Chernobyl and Fukushima Daiichi are the well-known, sad milestones on the path toward a broad development of nuclear energy. They are so well known that they have blurred, certainly for a long time and in a very unfair way, the positive image of nuclear energy in the public eye. The media appetite for disasters feeds the fear and pushes aside the achievements of the nuclear sciences, such as nuclear medicine, and the assets of nuclear power, such as its near-total absence of greenhouse gas emissions and its massive capacity to produce electricity or heat. The only solution to enhance nuclear acceptance is to reduce the fear through a better public understanding of the nuclear sciences. (A.C.)

  20. Nuclear expert web search and crawler algorithm

    International Nuclear Information System (INIS)

    Reis, Thiago; Barroso, Antonio C.O.; Baptista, Benedito Filho D.

    2013-01-01

    In this paper we present preliminary research on a web search and crawling algorithm applied specifically to nuclear-related web information. We designed a web-based, nuclear-oriented expert system guided by a web crawler algorithm and a neural network, able to search and retrieve nuclear-related hypertextual web information in an autonomous and massive fashion. Preliminary experimental results show a retrieval precision of 80% for web pages related to any nuclear theme and a retrieval precision of 72% for web pages related only to the nuclear power theme. (author)

  1. Nuclear expert web search and crawler algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Reis, Thiago; Barroso, Antonio C.O.; Baptista, Benedito Filho D., E-mail: thiagoreis@usp.br, E-mail: barroso@ipen.br, E-mail: bdbfilho@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2013-07-01

    In this paper we present preliminary research on a web search and crawling algorithm applied specifically to nuclear-related web information. We designed a web-based, nuclear-oriented expert system guided by a web crawler algorithm and a neural network, able to search and retrieve nuclear-related hypertextual web information in an autonomous and massive fashion. Preliminary experimental results show a retrieval precision of 80% for web pages related to any nuclear theme and a retrieval precision of 72% for web pages related only to the nuclear power theme. (author)
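
    A minimal sketch of such a crawl-and-score loop, with a plain keyword score standing in for the paper's neural-network relevance classifier. The seed URL, keyword list, threshold, and crawl budget are illustrative assumptions:

      import re
      import urllib.request
      from collections import deque

      KEYWORDS = ("nuclear", "reactor", "radiation", "isotope", "fission")

      def relevance(html):
          # Keyword density as a crude stand-in for a learned relevance score.
          text = html.lower()
          return sum(text.count(k) for k in KEYWORDS) / max(len(text.split()), 1)

      queue, seen = deque(["https://www.iaea.org/"]), set()   # illustrative seed
      while queue and len(seen) < 5:                          # tiny crawl budget
          url = queue.popleft()
          if url in seen:
              continue
          seen.add(url)
          try:
              page = urllib.request.urlopen(url, timeout=10)
              html = page.read().decode("utf-8", "ignore")
          except OSError:
              continue
          score = relevance(html)
          print(f"{score:.4f}  {url}")
          if score > 0.001:                                   # expand relevant pages only
              queue.extend(re.findall(r'href="(https?://[^"]+)"', html))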

  2. The start-up of a gas turbine engine using compressed air tangentially fed onto the blades of the basic turbine

    Science.gov (United States)

    Slobodyanyuk, L. K.; Dayneko, V. I.

    1983-01-01

    The use of compressed air was suggested to increase the reliability and motor lifetime of a gas turbine engine. Experiments were carried out and the results are shown in the form of the variation in circumferential force as a function of the entry angle of the working jet onto the turbine blade. The described start-up method is recommended for use with massive rotors.

  3. Datafile: [nuclear power in] Japan

    International Nuclear Information System (INIS)

    Anon.

    1989-01-01

    Japan is third after the USA and France in terms of the Western World's installed nuclear capacity, but it has by far the largest forward programme. Great effort is also being put into the fuel cycle and advanced reactors. There is close co-operation between the government, utilities and manufacturers, but Japan has not sought to export reactors. The government has responded to the growing public opposition to nuclear power with a massive increase in its budget for public relations. Details of the nuclear power programme are given. (author)

  4. Nonsingular universe in massive gravity's rainbow

    Science.gov (United States)

    Hendi, S. H.; Momennia, M.; Eslam Panah, B.; Panahiyan, S.

    2017-06-01

    One of the fundamental open questions in cosmology is whether we can regard the evolution of the universe as free of singularities such as a Big Bang or a Big Rip. This challenging subject stimulates one to consider a nonsingular universe in the far past with an arbitrarily large vacuum energy. Considering the high-energy regime of cosmic history, it is believed that Einstein gravity should be corrected to an effective energy-dependent theory, which can be acquired through gravity's rainbow. On the other hand, employing massive gravity has provided solutions to some long-standing fundamental problems of cosmology, such as the cosmological constant problem and the self-acceleration of the universe. Considering these aspects of gravity's rainbow and massive gravity, in this paper we initiate the study of FRW cosmology in the massive gravity's rainbow formalism. First, we show that although massive gravity modifies FRW cosmology, it does not itself remove the Big Bang singularity. Then, we generalize massive gravity to the case of energy-dependent spacetime and find that massive gravity's rainbow can remove the early-universe singularity. We bring together all the essential conditions for having a nonsingular universe, and the effects of both the gravity's rainbow and massive gravity generalizations on such criteria are determined.

  5. The role of the underground for massive storage of energy: a preliminary glance of the French case

    Science.gov (United States)

    Audigane, Pascal; Gentier, Sylvie; Bader, Anne-Gaelle; Beccaletto, Laurent; Bellenfant, Gael

    2014-05-01

    The question of storing energy in France has become of primary importance since the launch of a government road map that places this topic in pole position among seven major milestones to be met in the context of the development of innovative technology in the country. The European objective of reaching 20% renewables in the energy market, of which a large part would come from wind and solar power generation, raises several issues regarding the capacity of the grid to manage various intermittent energy sources in line with the variability of demand and supply. These uncertainties are strongly influenced by unpredictable weather and economic fluctuations. To facilitate the large-scale integration of variable renewable electricity sources into grids, massive energy storage is needed. In that case, electric energy storage techniques involving the use of the underground are often under consideration, as they offer a large storage volume with a suitable confinement potential and the space required for implantation. Among massive storage technologies, one finds (i) Underground Pumped Hydro-Storage (UPHS), an adaptation of the classical pumped hydro storage systems often associated with dam construction, (ii) compressed air energy storage (CAES), and (iii) hydrogen storage based on the conversion of electricity into H2 and O2 by electrolysis. The UPHS concept is based on using the potential energy between two water reservoirs positioned at different heights. Favorable natural locations such as mountainous areas or cliffs are spatially limited given the geography of the territory; this concept could be extended by integrating one of the reservoirs into underground cavities (specifically mined, or reusing pre-existing mines) to increase the opportunities on the national territory. Massive storage based on the compression and relaxation of air (CAES) requires a high volume and confining pressure around the storage that exists
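
    For scale, the ideal isothermal work recoverable from a compressed-air store is W = p1 V ln(p1/p0). A short calculation under illustrative assumptions follows; the cavern volume and pressures are invented for the example, and real CAES plants recover substantially less than this ideal bound:

      import math

      p0 = 1.0e5      # ambient pressure, Pa
      p1 = 7.0e6      # storage pressure, Pa (~70 bar)
      v = 3.0e5       # cavern volume, m^3

      w = p1 * v * math.log(p1 / p0)     # ideal isothermal work, J
      print(f"ideal isothermal storage: {w / 3.6e9:.0f} MWh")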

  6. Experimental search for compression phenomena in fast nucleus--nucleus collisions

    International Nuclear Information System (INIS)

    Schopper, E.; Baumgardt, H.G.; Obst, E.

    1977-01-01

    The occurrence of compression phenomena and shock waves, connected with the increase in the density of nuclear matter during the interpenetration of two fast nuclei, is discussed. Current experiments dealing with this problem are reviewed. Before considering the mechanism of the interpenetration of two fast nuclei, it may be useful to look at simpler situations, i.e., proton-proton interactions, and then to envelop them with nuclear matter by considering proton-nucleus interactions. Only very general features are described, which may offer suggestions for understanding the nucleus-nucleus impact

  7. Massive propagators in instanton fields

    International Nuclear Information System (INIS)

    Brown, L.S.; Lee, C.

    1978-01-01

    Green's functions for massive spinor and vector particles propagating in a self-dual but otherwise arbitrary non-Abelian gauge field are shown to be completely determined by the corresponding Green's functions of massive scalar particles

  8. Topologically massive supergravity

    Directory of Open Access Journals (Sweden)

    S. Deser

    1983-01-01

    Full Text Available The locally supersymmetric extension of three-dimensional topologically massive gravity is constructed. Its fermionic part is the sum of the (dynamically trivial) Rarita-Schwinger action and a gauge-invariant topological term, of second derivative order, analogous to the gravitational one. It is ghost-free and represents a single massive spin-3/2 excitation. The fermion-gravity coupling is minimal and the invariance is under the usual supergravity transformations. The system's energy, as well as that of the original topological gravity, is therefore positive.

  9. Isoscalar compression modes in relativistic random phase approximation

    International Nuclear Information System (INIS)

    Ma, Zhong-yu; Van Giai, Nguyen.; Wandelt, A.; Vretenar, D.; Ring, P.

    2001-01-01

    Monopole and dipole compression modes in nuclei are analyzed in the framework of a fully consistent relativistic random phase approximation (RRPA), based on effective mean-field Lagrangians with nonlinear meson self-interaction terms. The large effect of Dirac sea states on isoscalar strength distribution functions is illustrated for the monopole mode. The main contribution of Fermi and Dirac sea pair states arises through the exchange of the scalar meson. The effect of vector meson exchange is much smaller. For the monopole mode, RRPA results are compared with constrained relativistic mean-field calculations. A comparison between experimental and calculated energies of isoscalar giant monopole resonances points to a value of 250-270 MeV for the nuclear matter incompressibility. A large discrepancy remains between theoretical predictions and experimental data for the dipole compression mode

  10. Massive global ozone loss predicted following regional nuclear conflict

    Science.gov (United States)

    Mills, Michael J.; Toon, Owen B.; Turco, Richard P.; Kinnison, Douglas E.; Garcia, Rolando R.

    2008-01-01

    We use a chemistry-climate model and new estimates of smoke produced by fires in contemporary cities to calculate the impact on stratospheric ozone of a regional nuclear war between developing nuclear states involving 100 Hiroshima-size bombs exploded in cities in the northern subtropics. We find column ozone losses in excess of 20% globally, 25–45% at midlatitudes, and 50–70% at northern high latitudes persisting for 5 years, with substantial losses continuing for 5 additional years. Column ozone amounts remain near or <220 Dobson units at all latitudes even after three years, constituting an extratropical “ozone hole.” The resulting increases in UV radiation could impact the biota significantly, including serious consequences for human health. The primary cause for the dramatic and persistent ozone depletion is heating of the stratosphere by smoke, which strongly absorbs solar radiation. The smoke-laden air rises to the upper stratosphere, where removal mechanisms are slow, so that much of the stratosphere is ultimately heated by the localized smoke injections. Higher stratospheric temperatures accelerate catalytic reaction cycles, particularly those of odd-nitrogen, which destroy ozone. In addition, the strong convection created by rising smoke plumes alters the stratospheric circulation, redistributing ozone and the sources of ozone-depleting gases, including N2O and chlorofluorocarbons. The ozone losses predicted here are significantly greater than previous “nuclear winter/UV spring” calculations, which did not adequately represent stratospheric plume rise. Our results point to previously unrecognized mechanisms for stratospheric ozone depletion. PMID:18391218

  11. Ultimate capacity and influenced factors analysis of nuclear RC containment subjected to internal pressure

    International Nuclear Information System (INIS)

    Song Chenning; Hou Gangling; Zhou Guoliang

    2014-01-01

    The ultimate compressive bearing capacity of nuclear RC containments, the factors that influence it, and their governing rules are key problems for safety assessment, accident management, structural design, etc. The ultimate compressive bearing capacity of a nuclear RC containment is evaluated using the concrete damaged-plasticity model and the steel double-liner model of ABAQUS. The study shows that the concrete of the containment cylinder wall becomes plastic when the internal pressure reaches 0.87 MPa, and that the maximum tensile strain of the steel liner exceeds 3000 × 10^-6 and the containment reaches its ultimate state when the internal pressure reaches 1.02 MPa. The results show that the nuclear RC containment remains elastic under the design internal pressure and that its bearing capacity meets the requirement. Prestress and the steel liner play key roles in the ultimate internal pressure and failure mode of the containment. The study results are valuable for the analysis of ultimate compressive bearing capacity, structural design and safety assessment. (authors)

  12. Spacetime structure of massive Majorana particles and massive gravitino

    Energy Technology Data Exchange (ETDEWEB)

    Ahluwalia, D.V.; Kirchbach, M. [Theoretical Physics Group, Facultad de Fisica, Universidad Autonoma de Zacatecas, A.P. 600, 98062 Zacatecas (Mexico)

    2003-07-01

    The profound difference between Dirac and Majorana particles is traced back to the possibility of having physically different constructs in the (1/2,0) ⊕ (0,1/2) representation space. Contrary to Dirac particles, Majorana-particle propagators are shown to differ from the simple linear γ^μ p_μ structure. Furthermore, neither Majorana particles nor their antiparticles can be associated with a well-defined arrow of time. The inevitable consequence of this peculiarity is the particle-antiparticle metamorphosis giving rise to neutrinoless double beta decay, on the one side, and enabling spin-1/2 fields to act as gauge fields, gauginos, on the other side. The second part of the lecture notes is devoted to the massive gravitino. We argue that a spin measurement in the rest frame for an unpolarized ensemble of massive gravitinos, associated with the spinor-vector [(1/2,0) ⊕ (0,1/2)] ⊗ (1/2,1/2) representation space, would yield the result 3/2 with probability one half, and 1/2 with probability one half. The latter is distributed uniformly, i.e. as 1/4, among the two spin-1/2+ and spin-1/2- states of opposite parities. From this we draw the conclusion that the massive gravitino should be interpreted as a particle of multiple spin. (Author)

  13. Massive radiological releases profoundly differ from controlled releases

    International Nuclear Information System (INIS)

    Pascucci-Cahen, Ludivine; Patrick, Momal

    2013-01-01

    In this article, the authors report the identification and assessment of the different types of costs associated with nuclear accidents. They first stress that these cost assessments must be as exhaustive and comprehensive as possible. Referring to past accidents, they define the different categories of costs: on-site costs (decontamination and dismantling, electricity not produced on the site), off-site costs (health costs, psychological costs, farming losses), image-related costs (impact on food and farm product exports, decrease of other exports), costs related to energy production, and costs related to contaminated areas (refugees, lands). They give an assessment of a severe nuclear accident (i.e. an accident with significant but controlled radiological releases) in France and conclude that it would be a national catastrophe that could, however, be managed. They discuss the possible variations of the estimated costs. Then, they show that a major accident (i.e. an accident with massive radiological releases) in France would be an unmanageable European catastrophe because of the radiological consequences, the high economic costs, and the huge losses

  14. Search of massive star formation with COMICS

    Science.gov (United States)

    Okamoto, Yoshiko K.

    2004-04-01

    Mid-infrared observations are useful for studies of massive star formation. COMICS in particular offers powerful tools: imaging surveys of the circumstellar structures of forming massive stars, such as massive disks and cavity structures; mass estimates from spectroscopy of fine-structure lines; and high-dispersion spectroscopy to trace gas motion around formed stars. COMICS will open the next generation of infrared studies of massive star formation.

  15. Non-nuclear energies

    International Nuclear Information System (INIS)

    Nifenecker, H.

    2007-01-01

    The different meanings of the word 'energy', as understood by economists, are reviewed and explained. Present rates of consumption of fossil and nuclear fuels are given, as well as the corresponding reserves and resources. The time left before exhaustion of these reserves is calculated for different energy consumption scenarios. One finds that coal and nuclear alone only allow us to reach the end of this century. Without specific measures, the predicted massive use of coal is not compatible with any admissible level of global warming. Thus, we discuss clean coal techniques, including carbon dioxide capture and storage. One proceeds with a discussion of the availability and feasibility of renewable energies, with special attention to electricity production, distinguishing controllable renewable energies from those that are intermittent. Among the first are hydroelectricity, biomass, and geothermal; among the second, wind and solar. At the world level, hydroelectricity will most probably remain the main renewable contributor to electricity production. Photovoltaics is extremely promising for supplying remote villages deprived of access to a centralized network. Biomass should be an important source of bio-fuels. Geothermal energy should be an interesting source of low temperature heat. Development of wind energy will be inhibited by the lack of cheap and massive electricity storage; its contribution should not exceed 10% of electricity production. Its present development is totally dependent upon massive public support. A large part of this paper follows chapters of the monograph 'L'energie de demain: technique, environnement, economie', EDP Sciences, 2005. (author)

  16. Evidence for wide-spread active galactic nucleus-driven outflows in the most massive z ∼ 1-2 star-forming galaxies

    International Nuclear Information System (INIS)

    Genzel, R.; Förster Schreiber, N. M.; Rosario, D.; Lang, P.; Lutz, D.; Wisnioski, E.; Wuyts, E.; Wuyts, S.; Bandara, K.; Bender, R.; Berta, S.; Kurk, J.; Mendel, J. T.; Tacconi, L. J.; Wilman, D.; Beifiori, A.; Burkert, A.; Buschkamp, P.; Chan, J.; Brammer, G.

    2014-01-01

    In this paper, we follow up on our previous detection of nuclear ionized outflows in the most massive (log(M*/M_⊙) ≥ 10.9) z ∼ 1-3 star-forming galaxies by increasing the sample size by a factor of six (to 44 galaxies above log(M*/M_⊙) ≥ 10.9) from a combination of the SINS/zC-SINF, LUCI, GNIRS, and KMOS 3D spectroscopic surveys. We find a fairly sharp onset of the incidence of broad nuclear emission (FWHM in the Hα, [N II], and [S II] lines ∼450-5300 km s^-1), with large [N II]/Hα ratios, above log(M*/M_⊙) ∼ 10.9, with about two-thirds of the galaxies in this mass range exhibiting this component. Broad nuclear components near and above the Schechter mass are similarly prevalent above and below the main sequence of star-forming galaxies, and at z ∼ 1 and ∼2. The line ratios of the nuclear component are fit by excitation from active galactic nuclei (AGNs), or by a combination of shocks and photoionization. The incidence of the most massive galaxies with broad nuclear components is at least as large as that of AGNs identified by X-ray, optical, infrared, or radio indicators. The mass loading of the nuclear outflows is near unity. Our findings provide compelling evidence for powerful, high-duty-cycle, AGN-driven outflows near the Schechter mass, acting across the peak of cosmic galaxy formation.

  17. On maximal massive 3D supergravity

    OpenAIRE

    Bergshoeff , Eric A; Hohm , Olaf; Rosseel , Jan; Townsend , Paul K

    2010-01-01

    We construct, at the linearized level, the three-dimensional (3D) N = 4 supersymmetric "general massive supergravity" and the maximally supersymmetric N = 8 "new massive supergravity". We also construct the maximally supersymmetric linearized N = 7 topologically massive supergravity, although we expect N = 6 to be maximal at the non-linear level.

  18. Thermophysical properties of shock compressed argon and xenon

    International Nuclear Information System (INIS)

    Fortov, V.E.; Gryaznov, V.K.; Mintsev, V.B.; Ternovoi, V.Ya.

    2001-01-01

    The nature of the thermodynamic properties and the high electrical conductivity of substances at high pressures and temperatures is one of the key issues in the physics of high energy densities. So-called pressure ionization is one of the most impressive demonstrations of strong-coupling effects in plasma under compression. Noble gases are the simplest objects for studying these phenomena because they form no molecules and their atoms are spherically symmetric. In the present paper we take a unified look, from the chemical picture of plasma, at the whole body of available experimental data on Ar and Xe over a wide range of parameters: from gaseous densities of 0.01 g/cc and pressures of several kilobars up to extremely high densities corresponding to the insulator-metal transition and the megabar pressure range. (orig.)

  19. Radiological Image Compression

    Science.gov (United States)

    Lo, Shih-Chung Benedict

    The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested, and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, including CT head and body scans and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square-error (NMSE) on the difference image, defined as the difference between the original image and the image reconstructed from a given compression ratio, is used as a global measure of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
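
    The NMSE figure of merit used above, written out as code; the random 12-bit image and mock reconstruction are stand-ins for the CT and radiographic data:

      import numpy as np

      def nmse(original, reconstructed):
          # Mean squared difference normalized by the energy of the original.
          o = original.astype(np.float64)
          r = reconstructed.astype(np.float64)
          return ((o - r) ** 2).sum() / (o ** 2).sum()

      rng = np.random.default_rng(2)
      img = rng.integers(0, 4096, (512, 512))        # 12-bit image stand-in
      recon = img + rng.normal(0, 8, img.shape)      # mock lossy reconstruction
      print(f"NMSE = {nmse(img, recon):.2e}")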

  20. A compression and shear loading test of concrete filled steel bearing wall

    International Nuclear Information System (INIS)

    Akiyama, Hiroshi; Sekimoto, Hisashi; Fukihara, Masaaki; Nakanishi, Kazuo; Hara, Kiyoshi.

    1991-01-01

    Concrete-filled steel bearing walls, called SC structures, are composite structures of concrete and steel plates that have larger load-carrying capacity and higher ductility than conventional RC structures, and their construction method enables the rationalization of construction procedures at sites and the shortening of the construction period. Accordingly, SC structures have come to be applied to the inner concrete structures of PWR nuclear power plants, and it is subsequently planned to apply them to the auxiliary buildings of nuclear power plants. The purpose of this study is to establish a rational design method for SC structures applicable to the auxiliary buildings of nuclear power plants. In this study, the buckling strength of the surface plates and the ultimate strength of the SC structure were evaluated from the results of the compression and shear tests that have been carried out. The outline of the study and the tests, the results of the compression and shear tests, and their evaluation are reported. Stud bolts were effective in preventing the buckling of the surface plates, and the occurrence of buckling can be predicted analytically. (K.I.)

  1. Integration of a very high share of renewable production, the role of nuclear; Integracion de una muy alta cuota de produccion renovable. El papel de la nuclear

    Energy Technology Data Exchange (ETDEWEB)

    Chiarri, A.

    2010-07-01

    In the following decade, 2010-2020, there are few uncertainties regarding the generation mix: the electricity system already counts on the thermal power it requires, the renewable energy targets are fairly clear, the nuclear option would require a development period extending beyond this decade, and Carbon Capture and Storage (CCS) technology is unlikely to be commercially available yet. Nevertheless, a challenge arises: how to manage a system with a high share of non-manageable energy sources. In the future, periods of excess energy are bound to happen, and their annual profile is expected to be sharp: high power peaks but little annual energy. Therefore, to make use of such an excess, very high investments in new capacity would be needed (pumped hydro, compressed air), yet the load factor of this new capacity would be low, at least on this account. Spillage of renewable energy may be the most efficient solution, and thus it should be accepted by every stakeholder. Looking at the very long term, alternatives open up. One of them tends to balance the production of the different energy sources: nuclear, thermal and renewable (1/3-1/3-1/3). Although this option is aligned with the targets of competitiveness, sustainability and energy security, doubts may arise about the compatibility of such shares of nuclear and renewables. Massive, coordinated deployment of the electric vehicle and smart grids would remarkably facilitate the integration of new nuclear capacity, not so much for the energy involved (TWh) as for the power (GW) and the capacity to manage that power when charging the batteries. (Author)

  2. Massive neutrinos in astrophysics

    International Nuclear Information System (INIS)

    Qadir, A.

    1982-08-01

    Massive neutrinos are among the big hopes of cosmologists. If they happen to have the right mass they can close the Universe, explain the motion of galaxies in clusters, provide galactic halos and even, possibly, explain galaxy formation. Tremaine and Gunn have argued that massive neutrinos cannot do all these things. I will explain, here, what some of us believe is wrong with their arguments. (author)

  3. International Conference on Extreme States in Nuclear Systems

    International Nuclear Information System (INIS)

    Arlt, R.; Kuehn, B.

    1979-12-01

    The abstracts of contributed papers are arranged under the following headings: (1) nuclear matter, incl. elementary interactions, phase transitions, compression of nuclear matter; (2) heavy ion reactions, incl. nucleus-nucleus potential, mechanism of heavy ion reactions, role of non-equilibrium processes, nuclear quasimolecules, superheavy nuclei; (3) high spin states and nuclear structure; and (4) relativistic nuclear physics, incl. heavy ion reactions, particle production, role of nucleon associations. (author)

  4. Effect of high image compression on the reproducibility of cardiac Sestamibi reporting

    International Nuclear Information System (INIS)

    Thomas, P.; Allen, L.; Beuzeville, S.

    1999-01-01

    Full text: Compression algorithms have been mooted to minimize storage space and transmission times of digital images. We assessed the impact of high-level lossy compression using JPEG and wavelet algorithms on image quality and reporting accuracy of cardiac Sestamibi studies. Twenty stress/rest Sestamibi cardiac perfusion studies were reconstructed into horizontal short, vertical long and horizontal long axis slices using conventional methods. Each of these six sets of slices was aligned for reporting and saved (uncompressed) as a bitmap. This bitmap was then compressed using JPEG compression, then decompressed and saved as a bitmap for later viewing. This process was repeated using the original bitmap and wavelet compression. Finally, a second copy of the original bitmap was made. All 80 bitmaps were randomly coded to ensure blind reporting. The bitmaps were read blinded, by consensus of two experienced nuclear medicine physicians, using a 5-point scale and 25 cardiac segments. Subjective image quality was also reported using a 3-point scale. Samples of the compressed images were also subtracted from the original bitmap for visual comparison of differences. Results showed an average compression ratio of 23:1 for wavelet and 13:1 for JPEG. Image subtraction showed only very minor discordance between the original and compressed images. There was no significant difference in subjective quality between the compressed and uncompressed images. There was no significant difference in reporting reproducibility of the identical bitmap copy, the JPEG image and the wavelet image compared with the original bitmap. Use of the high compression algorithms described had no significant impact on reporting reproducibility or subjective image quality of cardiac Sestamibi perfusion studies.

  5. Nuclear weapons and NATO operations: Doctrine, studies, and exercises

    International Nuclear Information System (INIS)

    Karber, P.A.

    1994-01-01

    A listing of papers is presented on the doctrine, studies, and exercises dealing with nuclear weapons and NATO operations for the period 1950-1983. The papers deal with studies on massive retaliation, sword and shield, and flexible response. Some of the enduring issues of nuclear weapons in NATO are listed

  6. Shock waves in relativistic nuclear matter, I

    International Nuclear Information System (INIS)

    Gleeson, A.M.; Raha, S.

    1979-02-01

    The relativistic Rankine-Hugoniot relations are developed for a 3-dimensional plane shock and a 3-dimensional oblique shock. Using these discontinuity relations together with various equations of state for nuclear matter, the temperatures and the compressibilities attainable by shock compression for a wide range of laboratory kinetic energy of the projectile are calculated. 12 references
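
    For a plane shock the jump conditions referred to above take the standard relativistic form (units with c = 1; our notation, following standard texts such as Landau and Lifshitz): with n the baryon density, p the pressure, w = e + p the enthalpy density, and u = γv the velocity component normal to the front,

\[
n_1 u_1 = n_2 u_2 , \qquad
w_1 u_1^2 + p_1 = w_2 u_2^2 + p_2 , \qquad
w_1 \gamma_1 u_1 = w_2 \gamma_2 u_2 ,
\]

    expressing continuity of the baryon, momentum, and energy fluxes across the discontinuity. Supplemented with an equation of state p = p(e, n) for nuclear matter, these relations fix the post-shock temperature and compression as functions of the projectile energy.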

  7. WSNs Microseismic Signal Subsection Compression Algorithm Based on Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Zhouzhou Liu

    2015-01-01

    Full Text Available To address the low compression ratio and high communication energy consumption of wireless-network microseismic monitoring, this paper proposes a segmented compression algorithm based on the characteristics of microseismic signals and on compressed sensing (CS) theory applied in the transmission process. The algorithm segments the collected data according to the number of nonzero elements; by reducing the number of combinations of nonzero elements within each segment it improves the accuracy of signal reconstruction, while exploiting compressive sensing to achieve a high compression ratio for the signal. Experimental results show that, with the quantum chaos immune clone reconstruction (Q-CSDR) algorithm used for reconstruction, for signals with a sparsity level above 40 and a compression ratio of at least 0.4, the mean square error is less than 0.01, prolonging the network life by a factor of 2.
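
    The reconstruction step in any such compressed-sensing scheme solves an underdetermined system y = Φx for a sparse x. The paper's Q-CSDR reconstructor is not reproduced here; as a stand-in, the sketch below implements orthogonal matching pursuit, a common CS baseline, in plain NumPy (all names and sizes are illustrative):

```python
import numpy as np

def omp(Phi: np.ndarray, y: np.ndarray, k: int) -> np.ndarray:
    """Recover a k-sparse x from y = Phi @ x by orthogonal matching pursuit."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Greedily pick the column most correlated with the residual.
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        # Least-squares fit on the chosen support, then update the residual.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
n, m, k = 256, 80, 5                         # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
x_hat = omp(Phi, Phi @ x_true, k)
print("reconstruction MSE:", np.mean((x_hat - x_true) ** 2))
```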

  8. The effect of convection and semi-convection on the C/O yield of massive stars

    International Nuclear Information System (INIS)

    Dearborn, D.S.

    1979-01-01

    The C/O ratio produced during core helium burning affects the future evolution and nucleosynthetic yield of massive stars. This ratio is shown to be sensitive to the treatment of convection as well as uncertainties in nuclear rates. By minimizing the effect of semi-convection and reducing the size of the convective core, mass loss in OB stars increases the C/O ratio. (Author)

  9. Japan's long-term energy outlook to 2050: Estimation for the potential of massive CO2 mitigation

    Energy Technology Data Exchange (ETDEWEB)

    Komiyama, Ryoichi

    2010-09-15

    This paper analyzes Japan's energy outlook and CO2 emissions to 2050. Scenario analysis reveals that Japan's CO2 emissions in 2050 could potentially be reduced by 58% from the 2005 level. Achieving this massive mitigation requires reducing primary energy supply per unit of GDP by 60% in 2050 from the 2005 level and expanding the share of non-fossil fuels in total supply to 50% by 2050. Concerning the power generation mix, nuclear will account for 60% and renewables for 30% in 2050. For massive CO2 abatement, Japan should tackle the technological and economic challenges of large-scale deployment of advanced technologies.

  10. Climatic consequences of nuclear war: Working Group No. 1

    International Nuclear Information System (INIS)

    Knox, J.B.

    1985-12-01

    Research needs on the climate consequences of nuclear war were discussed. These include: (1) a better definition of the emissions from massive urban fires; (2) the exploration of prescribed forest burns; (3) the dirty cloud problem; (4) microphysical studies of soot; and (5) simulation of the second summer season after nuclear war

  11. THE SIZE-STAR FORMATION RELATION OF MASSIVE GALAXIES AT 1.5 < z < 2.5

    International Nuclear Information System (INIS)

    Toft, S.; Franx, M.; Van Dokkum, P.; Foerster Schreiber, N. M.; Labbe, I.; Wuyts, S.; Marchesini, D.

    2009-01-01

    We study the relation between size and star formation activity in a complete sample of 225 massive (M* > 5 x 10^10 M_sun) galaxies at 1.5 < z < 2.5. Using ground-based ISAAC data (PSF ∼ 0.''45), we confirm and improve the significance of the relation between star formation activity and compactness found in previous studies, using a large, complete mass-limited sample. At z ∼ 2, massive quiescent galaxies are significantly smaller than massive star-forming galaxies, and a median factor of 0.34 ± 0.02 smaller than galaxies of similar mass in the local universe. Thirteen percent of the quiescent galaxies are unresolved in the ISAAC data, corresponding to sizes <1 kpc, more than five times smaller than galaxies of similar mass locally. The quiescent galaxies span a Kormendy relation which, compared to the relation for local early types, is shifted to smaller sizes and brighter surface brightnesses and is incompatible with passive evolution. The progenitors of the quiescent galaxies were likely dominated by highly concentrated, intense nuclear starbursts at z ∼ 3-4, in contrast to star-forming galaxies at z ∼ 2, which are extended and dominated by distributed star formation.

  12. Pressure Infusion Cuff and Blood Warmer during Massive Transfusion: An Experimental Study About Hemolysis and Hypothermia.

    Science.gov (United States)

    Poder, Thomas G; Pruneau, Denise; Dorval, Josée; Thibault, Louis; Fisette, Jean-François; Bédard, Suzanne K; Jacques, Annie; Beauregard, Patrice

    2016-01-01

    Blood warmers were developed to reduce the risk of hypothermia associated with the infusion of cold blood products. During massive transfusion, these devices are used with a compression sleeve, which induces major stress on red blood cells. In this setting, the combination of a blood warmer and a compression sleeve could generate hemolysis and harm the patient. We conducted this study to compare the impact of different pressure rates on the hemolysis of packed red blood cells and on the outlet temperature when a blood warmer set at 41.5°C is used. The pressures tested were 150 and 300 mmHg. Ten packed red blood cell units were provided by Héma-Québec, and each unit was tested sequentially. We found no increase in hemolysis at either 150 or 300 mmHg. In contrast, we found that the blood warmer was not effective at warming the red blood cells to the set temperature. At 150 mmHg, the outlet temperature reached 37.1°C, and at 300 mmHg the temperature was 33.7°C. Using a blood warmer set at 41.5°C in conjunction with a compression sleeve at 150 or 300 mmHg does not generate hemolysis. At 300 mmHg a blood warmer set at 41.5°C does not totally avoid the risk of hypothermia.

  13. Magnetic nuclear core restraint and control

    International Nuclear Information System (INIS)

    Cooper, M.H.

    1979-01-01

    A lateral restraint and control system for a nuclear reactor core adaptable to provide an inherent decrease of core reactivity in response to abnormally high reactor coolant fluid temperatures. An electromagnet is associated with structure for radially compressing the core during normal reactor conditions. A portion of the structures forming a magnetic circuit is composed of ferromagnetic material having a Curie temperature corresponding to a selected coolant fluid temperature. Upon a selected signal, or inherently upon a preselected rise in coolant temperature, the magnetic force is decreased by an amount sufficient to relieve the compression force so as to allow core radial expansion. The expanded core configuration provides a decreased reactivity, tending to shut down the nuclear reaction

  14. Magnetic nuclear core restraint and control

    International Nuclear Information System (INIS)

    Cooper, M.H.

    1979-01-01

    A lateral restraint and control system for a nuclear reactor core provides an inherent decrease of core reactivity in response to abnormally high reactor coolant fluid temperatures. An electromagnet is associated with structure for radially compressing the core during normal reactor conditions. A portion of the structures forming a magnetic circuit is composed of ferromagnetic material having a Curie temperature corresponding to a selected coolant fluid temperature. Upon a selected signal, or inherently upon a preselected rise in coolant temperature, the magnetic force is decreased by an amount sufficient to relieve the compression force so as to allow core radial expansion. The expanded core configuration provides a decreased reactivity, tending to shut down the nuclear reaction

  15. The efficiency of seismic attributes to differentiate between massive and non-massive carbonate successions for hydrocarbon exploration activity

    Science.gov (United States)

    Sarhan, Mohammad Abdelfattah

    2017-12-01

    The present work investigates the efficiency of applying volume seismic attributes to differentiate between massive and non-massive carbonate sedimentary successions using seismic data. The main objective is to provide a pre-drilling technique for recognizing porous carbonate sections (probable hydrocarbon reservoirs) from seismic data. A case study from the Upper Cretaceous - Eocene carbonate successions of Abu Gharadig Basin, northern Western Desert of Egypt, has been tested in this work. The qualitative interpretation of the well-log data of four available wells distributed in the study area, namely the AG-2, AG-5, AG-6 and AG-15 wells, has confirmed that the Upper Cretaceous Khoman A Member represents the massive carbonate section, whereas the Eocene Apollonia Formation represents the non-massive carbonate unit. The present work has shown that the most promising seismic attributes for differentiating between massive and non-massive carbonate sequences are Root Mean Square (RMS) Amplitude, Envelope (Reflection Strength), Instantaneous Frequency, Chaos, Local Flatness and Relative Acoustic Impedance.
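
    Of the attributes listed, RMS amplitude is the most direct to compute: the root mean square of the trace samples within a sliding window, which highlights high-amplitude (potentially porous) intervals. A minimal sketch with a synthetic trace (window length and amplitudes are illustrative):

```python
import numpy as np

def rms_amplitude(trace: np.ndarray, window: int) -> np.ndarray:
    """RMS amplitude attribute: sqrt of the moving average of squared samples."""
    kernel = np.ones(window) / window
    return np.sqrt(np.convolve(trace ** 2, kernel, mode="same"))

# Synthetic trace: low-amplitude background plus a high-amplitude zone.
t = np.linspace(0.0, 1.0, 1000)
trace = 0.1 * np.sin(2 * np.pi * 30 * t)
trace[400:500] += 0.8 * np.sin(2 * np.pi * 30 * t[400:500])
attr = rms_amplitude(trace, window=51)
print("background vs anomaly:", attr[100], attr[450])
```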

  16. The Fukushima Daiichi nuclear accident final report of the AESJ investigation committee

    CERN Document Server

    Atomic Energy Society of Japan

    2015-01-01

    The magnitude 9 Great East Japan Earthquake on March 11, 2011, followed by a massive tsunami, struck TEPCO's Fukushima Daiichi Nuclear Power Station and triggered an unprecedented core melt/severe accident in Units 1-3. The radioactivity release led to the evacuation of local residents, many of whom still have not been able to return to their homes. As a group of nuclear experts, the Atomic Energy Society of Japan established the Investigation Committee on the Nuclear Accident at the Fukushima Daiichi Nuclear Power Station to investigate and analyze the accident from scientific and technical perspectives, to clarify the underlying and fundamental causes, and to make recommendations. The results of the investigation by the AESJ Investigation Committee have been compiled herewith as the Final Report. Direct contributing factors of the catastrophic nuclear incident at Fukushima Daiichi NPP initiated by an unprecedented massive earthquake/tsunami – inadequacies in tsunami measures, severe accident ma...

  17. Trigenerative micro compressed air energy storage: Concept and thermodynamic assessment

    International Nuclear Information System (INIS)

    Facci, Andrea L.; Sánchez, David; Jannelli, Elio; Ubertini, Stefano

    2015-01-01

    Highlights: • The trigenerative-CAES concept is introduced. • The thermodynamic feasibility of the trigenerative-CAES is assessed. • The effects of the relevant parameters on the system performance are dissected. • Technological issues of the trigenerative-CAES are highlighted. - Abstract: Energy storage is a cutting-edge front for renewable and sustainable energy research. In fact, a massive exploitation of intermittent renewable sources, such as wind and sun, requires the introduction of effective mechanical energy storage systems. In this paper we introduce the concept of a trigenerative energy storage based on a compressed air system. The plant under study is a simplified design of the adiabatic compressed air energy storage, and accumulates mechanical and thermal (both hot and cold) energy at the same time. We envisage the possibility of realizing a relatively small-size trigenerative compressed air energy storage to be placed close to the energy demand, according to the distributed generation paradigm. Here we describe the plant concept and identify all the relevant parameters influencing its thermodynamic behavior. Their effects are dissected through an accurate thermodynamic model. The most relevant technological issues, such as guidelines for a proper choice of the compressor, expander and heat exchangers, are also addressed. Our results show that T-CAES may have an interesting potential as a distributed system that combines electricity storage with heat and cooling energy production. We also show that the performance is significantly influenced by some operating and design parameters, whose feasibility in real applications must be considered.
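
    The energy bookkeeping behind such a trigenerative layout can be illustrated with ideal-gas relations: an adiabatic compression stage heats the air, and the heat removed before storage is exactly the stream the plant can reuse. A back-of-the-envelope sketch (the numbers are ours, not the paper's):

```python
# Ideal-gas estimate of the work/heat split in one adiabatic CAES stage.
cp, gamma = 1005.0, 1.4           # J/(kg K) and heat-capacity ratio of air
T1, p_ratio = 293.0, 10.0         # inlet temperature (K), pressure ratio

T2 = T1 * p_ratio ** ((gamma - 1.0) / gamma)   # isentropic outlet temperature
w_comp = cp * (T2 - T1)   # specific compressor work, J/kg
q_hot = cp * (T2 - T1)    # heat recoverable by cooling the air back to T1
print(f"T2 = {T2:.0f} K, work = recoverable heat = {w_comp / 1e3:.0f} kJ/kg")
```

    In this idealization the two numbers coincide: all of the compression work reappears as storable heat, while the pressurized (and now cool) air retains the mechanical and cooling potential exploited at discharge.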

  18. Dual compression is not an uncommon type of iliac vein compression syndrome.

    Science.gov (United States)

    Shi, Wan-Yin; Gu, Jian-Ping; Liu, Chang-Jian; Lou, Wen-Sheng; He, Xu

    2017-09-01

    Typical iliac vein compression syndrome (IVCS) is characterized by compression of the left common iliac vein (LCIV) by the overlying right common iliac artery (RCIA). We describe an underestimated type of IVCS with dual compression by the right and left common iliac arteries (LCIA) simultaneously. Thirty-one patients with IVCS were retrospectively included. All patients received trans-catheter venography and computed tomography (CT) examinations for diagnosing and evaluating IVCS. Late venography and reconstructed CT were used to evaluate the anatomical relationship among the LCIV, RCIA and LCIA. Imaging manifestations as well as demographic data were collected and evaluated by two experienced radiologists. Sole and dual compression were found in 32.3% (n = 10) and 67.7% (n = 21) of the 31 patients, respectively. No statistical differences existed between them in terms of age, gender, LCIV diameter at the maximum compression point, pressure gradient across the stenosis, or the percentage of compression level. On CT and venography, sole compression commonly presented as a longitudinal compression at the orifice of the LCIV, while dual compression usually presented in two forms: one with a lengthy stenosis along the upper side of the LCIV, the other with a longitudinal compression near the orifice of the external iliac vein. The presence of dual compression appeared significantly correlated with a tortuous LCIA (p = 0.006). The left common iliac vein can thus present with dual compression. This type of compression has typical manifestations on late venography and CT.

  19. Massive Born--Infeld and Other Dual Pairs

    CERN Document Server

    Ferrara, S

    2015-01-01

    We consider massive dual pairs of p-forms and (D-p-1)-forms described by non-linear Lagrangians, where non-linear curvature terms in one theory translate into non-linear mass-like terms in the dual theory. In particular, for D=2p and p even the two non-linear structures coincide when the non-linear massless theory is self-dual. This state of affairs finds a natural realization in the four-dimensional massive N=1 supersymmetric Born-Infeld action, which describes either a massive vector multiplet or a massive linear (tensor) multiplet with a Born-Infeld mass-like term. These systems should play a role for the massive gravitino multiplet obtained from a partial super-Higgs in N=2 Supergravity.

  20. ChIPWig: a random access-enabling lossless and lossy compression method for ChIP-seq data.

    Science.gov (United States)

    Ravanmehr, Vida; Kim, Minji; Wang, Zhiying; Milenkovic, Olgica

    2018-03-15

    Chromatin immunoprecipitation sequencing (ChIP-seq) experiments are inexpensive and time-efficient, and result in massive datasets that introduce significant storage and maintenance challenges. To address the resulting Big Data problems, we propose a lossless and lossy compression framework specifically designed for ChIP-seq Wig data, termed ChIPWig. ChIPWig enables random access and summary-statistics lookups, and it is based on the asymptotic theory of optimal point density design for nonuniform quantizers. We tested the ChIPWig compressor on 10 ChIP-seq datasets generated by the ENCODE consortium. On average, lossless ChIPWig reduced the file sizes to merely 6% of the original and offered a 6-fold improvement in compression rate compared to bigWig. The lossy feature further reduced file sizes 2-fold compared to the lossless mode, with little or no effect on peak calling and motif discovery using specialized NarrowPeaks methods. The compression and decompression speeds are of the order of 0.2 sec/MB using general-purpose computers. The source code and binaries are freely available for download at https://github.com/vidarmehr/ChIPWig-v2, implemented in C++. milenkov@illinois.edu. Supplementary data are available at Bioinformatics online.
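
    The lossy mode rests on nonuniform quantization: reconstruction levels are packed more densely where the data distribution is dense. The sketch below shows the idea with a simple mu-law companding quantizer; it is a generic illustration of nonuniform quantization, not the ChIPWig point-density design itself:

```python
import numpy as np

def mu_law_quantize(x: np.ndarray, mu: float = 255.0, levels: int = 64) -> np.ndarray:
    """Compand with a mu-law curve, quantize uniformly, then expand back.
    Small values receive finer quantization steps than large ones."""
    x_max = np.max(np.abs(x))
    y = np.sign(x) * np.log1p(mu * np.abs(x) / x_max) / np.log1p(mu)
    q = np.round(y * (levels / 2)) / (levels / 2)   # uniform grid in y-space
    return np.sign(q) * x_max * np.expm1(np.abs(q) * np.log1p(mu)) / mu

coverage = np.random.default_rng(2).exponential(scale=5.0, size=10_000)
err = coverage - mu_law_quantize(coverage)
print("max abs error:", np.max(np.abs(err)))
```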

  1. Evaluation of shear-compressive strength properties for laminated GFRP composites in electromagnet system

    Science.gov (United States)

    Song, Jun Hee; Kim, Hak Kun; Kim, Sam Yeon

    2014-07-01

    Laminated fiber-reinforced composites can be applied to an insulating structure of a nuclear fusion device. It is necessary to investigate the interlaminar fracture characteristics of the laminated composites for the assurance of design and structural integrity. The three methods used to prepare the glass fiber reinforced plastic composites tested in this study were vacuum pressure impregnation, high pressure laminate (HPL), and prepreg laminate. We discuss the design criteria for safe application of composites and the shear-compressive test methods for evaluating mechanical properties of the material. Shear-compressive tests could be performed successfully using series-type test jigs that were inclined 0°, 30°, 45°, 60°, and 75° to the normal axis. Shear strength depends strongly on the applied compressive stress. The design range of allowable shear stress was extended by use of the appropriate composite fabrication method. HPL had the largest design range, and the allowable interlaminar shear stress was 0.254 times the compressive stress.

  2. MASSIVE+: The Growth Histories of MASSIVE Survey Galaxies from their Globular Cluster Colors

    Science.gov (United States)

    Blakeslee, John

    2017-08-01

    The MASSIVE survey is targeting the 100 most massive galaxies within 108 Mpc that are visible in the northern sky. These most massive galaxies in the present-day universe reside in a surprisingly wide variety of environments, from rich clusters to fossil groups to near isolation. We propose to use WFC3/UVIS and ACS to carry out a deep imaging study of the globular cluster populations around a selected subset of the MASSIVE targets. Though much is known about GC systems of bright galaxies in rich clusters, we know surprisingly little about the effects of environment on these systems. The MASSIVE sample provides a golden opportunity to learn about the systematics of GC systems and what they can tell us about environmental drivers on the evolution of the highest mass galaxies. The most pressing questions to be addressed include: (1) Do isolated giants have the same constant mass fraction of GCs to total halo mass as BCGs of similar luminosity? (2) Do their GC systems show the same color (metallicity) distribution, which is an outcome of the mass spectrum of gas-rich halos during hierarchical growth? (3) Do the GCs in isolated high-mass galaxies follow the same radial distribution versus metallicity as in rich environments (a test of the relative importance of growth by accretion)? (4) Do the GCs of galaxies in sparse environments follow the same mass function? Our proposed second-band imaging will enable us to secure answers to these questions and add enormously to the legacy value of existing HST imaging of the highest mass galaxies in the universe.

  3. Method and device for the powerful compression of laser-produced plasmas for nuclear fusion

    International Nuclear Information System (INIS)

    Hora, H.

    1975-01-01

    According to the invention, more than 10% of the laser energy is converted into mechanical energy of compression, in that the compression is produced by the non-linear excess radiation pressure. The temporal and local spectral and intensity distribution of the laser pulse must be controlled. The focused laser beams must rise to over 10^15 W/cm^2 in less than 10^-9 seconds, and the time variation of the intensities must be such that the dynamic absorption of the outer plasma corona by rippling consumes less than 90% of the laser energy. (GG) [de]

  4. Nuclear matter in all its states

    International Nuclear Information System (INIS)

    Bonche, P.; Cugnon, J.; Babinet, R.; Mathiot, J.F.; Van Hove, L.; Buenerd, M.; Galin, J.; Lemaire, M.C.; Meyer, J.

    1986-01-01

    This report includes the nine lectures presented at the Joliot-Curie School of Nuclear Physics in 1985. The subjects covered are the following: thermodynamic description of excited nuclei; heavy ion reactions at high energy (theoretical approach); heavy ion reactions at high energy (experimental approach); relativistic nuclear physics and quark effects in nuclei; quark matter; nuclear compressibility and its experimental determinations; hot nuclei; antiproton-nucleus interaction; giant resonances at finite temperature [fr]

  5. Non-vitrectomizing vitreous surgery and adjuvant intravitreal tissue plasminogen activator for non-recent massive premacular hemorrhage

    Directory of Open Access Journals (Sweden)

    Tsung-Tien Wu

    2011-12-01

    Full Text Available Massive premacular hemorrhage can cause sudden visual loss. We sought to evaluate the efficacy, safety and visual outcome of non-vitrectomizing vitreous surgery with intravitreal tissue plasminogen activator (t-PA) for long-lasting thick premacular hemorrhage. This retrospective, interventional study examined three consecutive eyes of three patients who received non-vitrectomizing vitreous surgery with intravitreal t-PA for the treatment of non-recent massive premacular hemorrhage. Detailed ophthalmoscopic examinations were performed pre- and postoperatively to evaluate the visual outcome, the resolution of the premacular hemorrhage and the changes in lenticular opacity. In all three eyes, the premacular hemorrhage cleared after the procedure. Final best-corrected visual acuities improved from 6/30 to 6/10 in patient 1, 2/60 to 6/4 in patient 2, and 3/60 to 6/6 in patient 3. Operated and fellow eyes did not differ in terms of nuclear sclerosis. No complications from the procedure were noted. In these selected cases, non-vitrectomizing vitreous surgery with intravitreal t-PA was an effective and safe alternative treatment for non-recent massive premacular hemorrhage.

  6. Nuclear radiation and the properties of concrete

    International Nuclear Information System (INIS)

    Kaplan, M.F.

    1983-08-01

    Concrete is used for structures in which it is exposed to nuclear radiation, and such exposure may affect its properties. The report describes the types of nuclear radiation, and radiation damage in concrete is discussed. Attention is also given to the effects of neutron and gamma radiation on the compressive and tensile strength of concrete. Finally, radiation shielding, the attenuation of nuclear radiation and the value of concrete as a shielding material are discussed

  7. Food availability after nuclear war

    International Nuclear Information System (INIS)

    Cropper, W.P. Jr.; Harwell, M.A.

    1985-01-01

    The analysis of acute-phase food shortage vulnerabilities for 15 countries clearly indicates that in many countries massive levels of malnutrition and starvation are a possible outcome of a major nuclear war. The principal direct cause of such food shortages would be the climatic disturbances and societal disruptions during the initial post-war year. Even without climatic disturbances, import-dependent countries could suffer food shortages. Many of the countries with the highest levels of agricultural production and storage would probably be targets of nuclear weapons. It seems unlikely that food exports would continue from severely damaged countries, thus propagating effects to non-combatant countries. A similar analysis of food storage vulnerability in 130 countries indicates that a majority of people live in countries with inadequate food stores for such major perturbations. This is true even if consumption rates of 1,000 kcal person^-1 day^-1 are assumed rather than 1,500 kcal person^-1 day^-1. This vulnerability is particularly severe in Africa and South America. Even though most of the countries of these continents have no nuclear weapons and are not likely to be targeted, the human consequences of a major nuclear war could be nearly as severe as in the principal combatant countries. Few countries would have sufficient food stores for their entire population, and massive mortality would result if only pre-harvest levels were available. These conclusions represent an aspect of nuclear war that has only recently been realized. The possibility of climatic disturbances following a large nuclear war has introduced a new element to the expected global consequences. Not only are the populations of the major combatant countries at risk in a nuclear exchange, but also most of the global human population

  8. Massively dilated right atrium masquerading as a mediastinal tumor

    Directory of Open Access Journals (Sweden)

    Thomas Schroeter

    2011-04-01

    Full Text Available Severe tricuspid valve insufficiency causes right atrial dilatation, venous congestion, and reduced atrial contractility, and may eventually lead to right heart failure. We report a case of a patient with severe tricuspid valve insufficiency, right heart failure, and a massively dilated right atrium. The enormously dilated atrium compressed the right lung, resulting in the radiographic appearance of a mediastinal tumor. Tricuspid valve repair and reduction of the right atrium were performed. Follow-up examination revealed improvement of liver function, reduced peripheral edema and improved New York Heart Association (NYHA) class. The reduction of the atrial size and repair of the tricuspid valve resulted in a restoration of the conduit and reservoir functions of the right atrium. Given the chronicity of the disease process and the long-standing atrial fibrillation, there is no impact of this operation on right atrial contraction. In combination with the reconstruction of the tricuspid valve, the reduction atrioplasty will reduce the risk of thromboembolic events and preserve right ventricular function.

  9. Thermodynamics inducing massive particles' tunneling and cosmic censorship

    International Nuclear Information System (INIS)

    Zhang, Baocheng; Cai, Qing-yu; Zhan, Ming-sheng

    2010-01-01

    By calculating the change of entropy, we prove that the first law of black hole thermodynamics leads to the tunneling probability of massive particles through the horizon, including the tunneling probability of massive charged particles from the Reissner-Nordstroem black hole and the Kerr-Newman black hole. Notably, we find that the trajectories of massive particles are close to those of massless particles near the horizon, although the trajectories of massive charged particles may be affected by electromagnetic forces. We show that Hawking radiation as massive-particle tunneling does not lead to a violation of the weak cosmic-censorship conjecture. (orig.)
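
    The relation the abstract builds on is the standard tunneling formula: in the WKB approximation the emission probability is governed by the imaginary part of the action, which the first law ties to the change of black-hole entropy. Schematically (our notation),

\[
\Gamma \;\sim\; e^{-2\,\mathrm{Im}\, I} \;=\; e^{\Delta S_{BH}},
\qquad
\Delta S_{BH} = S_{BH}(M-\omega,\, Q-q,\, J-j) - S_{BH}(M,\, Q,\, J),
\]

    where ω, q and j are the energy, charge and angular momentum carried off by the emitted particle; for the Reissner-Nordstroem and Kerr-Newman cases the entropy is evaluated on the corresponding horizon.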

  10. Strain Rate Dependence of Compressive Yield and Relaxation in DGEBA Epoxies

    Science.gov (United States)

    Arechederra, Gabriel K.; Reprogle, Riley C.; Clarkson, Caitlyn M.; McCoy, John D.; Kropka, Jamie M.; Long, Kevin N.; Chambers, Robert S.

    2015-03-01

    The mechanical response in uniaxial compression of two diglycidyl ether of bisphenol-A epoxies was studied. These were 828DEA (Epon 828 cured with diethanolamine (DEA)) and 828T403 (Epon 828 cured with Jeffamine T-403). Two types of uniaxial compression tests were performed: (A) constant strain rate compression and (B) constant strain rate compression followed by constant-strain relaxation. The peak (yield) stress was analyzed as a function of strain rate within Eyring theory to extract an activation volume. Runs at different temperatures permitted the construction of a mastercurve, and the resulting shift factors yielded an activation energy. Strain-and-hold tests were performed at a low strain rate, where a peak stress was lacking, and at a higher strain rate, where the peak stress was apparent. Relaxation from strains at different places along the stress-strain curve was tracked and compared. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
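
    In an Eyring analysis of this kind, the yield stress is linear in the logarithm of strain rate, and the slope fixes an activation volume, roughly V* ≈ k_B·T / (dσ_y/d ln ε̇) up to a model-dependent prefactor. A sketch of the fit (the data values are invented for illustration, not taken from the abstract):

```python
import numpy as np

kB, T = 1.380649e-23, 296.0                       # J/K, test temperature (K)
rates = np.array([1e-4, 1e-3, 1e-2, 1e-1])        # strain rates, 1/s
sigma_y = np.array([95e6, 101e6, 107e6, 113e6])   # yield stresses, Pa

slope, _ = np.polyfit(np.log(rates), sigma_y, 1)  # d(sigma_y) / d(ln rate)
V_star = kB * T / slope                           # activation volume, m^3
print(f"activation volume ~ {V_star * 1e27:.2f} nm^3")
```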

  11. Topological massive sigma models

    International Nuclear Information System (INIS)

    Lambert, N.D.

    1995-01-01

    In this paper we construct topological sigma models which include a potential and are related to twisted massive supersymmetric sigma models. Contrary to a previous construction these models have no central charge and do not require the manifold to admit a Killing vector. We use the topological massive sigma model constructed here to simplify the calculation of the observables. Lastly it is noted that this model can be viewed as interpolating between topological massless sigma models and topological Landau-Ginzburg models. ((orig.))

  12. Nuclear reactors

    International Nuclear Information System (INIS)

    Prescott, R.F.

    1976-01-01

    A nuclear reactor containment vessel faced internally with a metal liner is provided with thermal insulation for the liner, comprising one or more layers of compressible material such as ceramic fiber, as would be conventional in an advanced gas-cooled reactor, and also a superposed layer of ceramic bricks or tiles in combination with retention means therefor, the retention means (comprising studs projecting from the liner, and bolts or nuts in threaded engagement with the studs) being themselves insulated from the vessel interior so that the coolant temperatures achieved in a High-Temperature Reactor or a Fast Reactor can be tolerated within the vessel. The layer(s) of compressible material is held under a degree of compression either by the ceramic bricks or tiles themselves or by cover plates held on the studs, in which case the bricks or tiles are preferably bedded on a yielding layer (for example of carbon fibers) rather than directly on the cover plates

  13. Issues with Strong Compression of Plasma Target by Stabilized Imploding Liner

    Science.gov (United States)

    Turchi, Peter; Frese, Sherry; Frese, Michael

    2017-10-01

    Simulations of strong compression (10:1 in radius) of an FRC by imploding liquid-metal liners, stabilized against Rayleigh-Taylor modes and using different scalings for loss based on Bohm vs. 100X classical diffusion rates, predict useful compressions with implosion times half the initial energy lifetime. The elongation (length-to-diameter ratio) near peak compression needed to satisfy the empirical stability criterion and also retain alpha particles is about ten. The present paper extends these considerations to issues of the initial FRC, including stability conditions (S*/E) and allowable angular speeds. Furthermore, efficient recovery of the implosion energy and alpha-particle work, in order to reduce the nuclear gain necessary for an economical power reactor, is seen as an important element of the stabilized liner implosion concept for fusion. We describe recent progress in the design and construction of the high-energy-density prototype of a Stabilized Liner Compressor (SLC), leading to repetitive laboratory experiments to develop the plasma target. Supported by ARPA-E ALPHA Program.

  14. Magnetic nuclear core restraint and control

    International Nuclear Information System (INIS)

    Cooper, M.H.

    1978-01-01

    Disclosed is a lateral restraint and control system for a nuclear reactor core adaptable to provide an inherent decrease of core reactivity in response to abnormally high reactor coolant fluid temperatures. An electromagnet is associated with structure for radially compressing the core during normal reactor conditions. A portion of the structures forming a magnetic circuit is composed of ferromagnetic material having a Curie temperature corresponding to a selected coolant fluid temperature. Upon a selected signal, or inherently upon a preselected rise in coolant temperature, the magnetic force is decreased by an amount sufficient to relieve the compression force so as to allow core radial expansion. The expanded core configuration provides a decreased reactivity, tending to shut down the nuclear reaction

  15. Very massive runaway stars from three-body encounters

    Science.gov (United States)

    Gvaramadze, Vasilii V.; Gualandris, Alessia

    2011-01-01

    Very massive stars preferentially reside in the cores of their parent clusters and form binary or multiple systems. We study the role of tight very massive binaries in the origin of the field population of very massive stars. We performed numerical simulations of dynamical encounters between single (massive) stars and a very massive binary with parameters similar to those of the most massive known Galactic binaries, WR 20a and NGC 3603-A1. We found that these three-body encounters could be responsible for the origin of high peculiar velocities (≥70 km s^-1) observed for some very massive (≥60-70 M⊙) runaway stars in the Milky Way and the Large Magellanic Cloud (e.g. λ Cep, BD+43°3654, Sk -67°22, BI 237, 30 Dor 016), which can hardly be explained within the framework of the binary-supernova scenario. The production of high-velocity massive stars via three-body encounters is accompanied by the recoil of the binary in the opposite direction to the ejected star. We show that the relative position of the very massive binary R145 and the runaway early B-type star Sk-69°206 on the sky is consistent with the possibility that both objects were ejected from the central cluster, R136, of the star-forming region 30 Doradus via the same dynamical event - a three-body encounter.

  16. High temperature compression tests performed on doped fuels

    International Nuclear Information System (INIS)

    Duguay, C.; Mocellin, A.; Dehaudt, P.; Fantozzi, G.

    1997-01-01

    The use of additives with the corundum structure M2O3 (M = Cr, Al) is an effective way of promoting grain growth in uranium dioxide. The high-temperature compressive deformation of large-grained UO2 doped with these oxides has been investigated and compared with that of pure UO2 with a standard microstructure. Such doped fuels are expected to exhibit enhanced plasticity. Their use would therefore reduce pellet-cladding mechanical interaction and thus improve the performance of the nuclear fuel. (orig.)

  17. Epidemiology of massive transfusion

    DEFF Research Database (Denmark)

    Halmin, M A; Chiesa, F; Vasan, S K

    2015-01-01

    ...and to describe characteristics and mortality of massively transfused patients. Methods: We performed a retrospective cohort study based on the Scandinavian Donations and Transfusions (SCANDAT2) database, linking data on blood donation, blood components and transfused patients with inpatient- and population... ...4% among women transfused for obstetrical bleeding. Mortality increased gradually with age, and among all patients massively transfused at age 80 years only 26% were alive after 5 years. The relative mortality, early after transfusion, was high and decreased with time since transfusion...

  18. Reappraising the concept of massive transfusion in trauma

    DEFF Research Database (Denmark)

    Stanworth, Simon J; Morris, Timothy P; Gaarder, Christine

    2010-01-01

    ABSTRACT: INTRODUCTION: The massive-transfusion concept was introduced to recognize the dilutional complications resulting from large volumes of packed red blood cells (PRBCs). Definitions of massive transfusion vary and lack supporting clinical evidence. Damage-control resuscitation regimens... of modern trauma care are targeted to the early correction of acute traumatic coagulopathy. The aim of this study was to identify a clinically relevant definition of trauma massive transfusion based on clinical outcomes. We also examined whether the concept was useful in that early prediction of massive... transfusion as a concept in trauma has limited utility, and emphasis should be placed on identifying patients with massive hemorrhage and acute traumatic coagulopathy.

  19. Optimization of Error-Bounded Lossy Compression for Hard-to-Compress HPC Data

    Energy Technology Data Exchange (ETDEWEB)

    Di, Sheng; Cappello, Franck

    2018-01-01

    Since today's scientific applications are producing vast amounts of data, compressing them before storage/transmission is critical. Results of existing compressors show two types of HPC data sets: highly compressible and hard to compress. In this work, we carefully design and optimize error-bounded lossy compression for hard-to-compress scientific data. We propose an optimized algorithm that can adaptively partition the HPC data into best-fit consecutive segments, each having mutually close data values, such that the compression condition can be optimized. Another significant contribution is the optimization of the shifting offset, such that the XOR-leading-zero length between two consecutive unpredictable data points can be maximized. We finally devise an adaptive method to select the best-fit compressor at runtime to maximize the compression factor. We evaluate our solution using 13 benchmarks based on real-world scientific problems, and we compare it with 9 other state-of-the-art compressors. Experiments show that our compressor always guarantees compression errors within the user-specified error bounds. Most importantly, our optimization improves the compression factor effectively, by up to 49% for hard-to-compress data sets with similar compression/decompression time cost.
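
    The XOR-leading-zero idea is concrete: two nearby floating-point values share their top bits, so XORing their bit patterns yields a run of leading zeros that need not be stored, and shifting both values by a well-chosen offset lengthens that run. A minimal sketch of the counting step (the helper name is ours, not the paper's code):

```python
import struct

def xor_leading_zeros(a: float, b: float) -> int:
    """Leading zero bits of the XOR of two IEEE-754 double bit patterns."""
    ia = struct.unpack("<Q", struct.pack("<d", a))[0]
    ib = struct.unpack("<Q", struct.pack("<d", b))[0]
    x = ia ^ ib
    return 64 if x == 0 else 64 - x.bit_length()

print(xor_leading_zeros(1.2345, 1.2346))   # close values: long zero run
print(xor_leading_zeros(1.2345, 8.7654))   # distant values: short zero run
```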

  20. The impact of chest compression rates on quality of chest compressions - a manikin study.

    Science.gov (United States)

    Field, Richard A; Soar, Jasmeet; Davies, Robin P; Akhtar, Naheed; Perkins, Gavin D

    2012-03-01

    Chest compressions are often performed at a variable rate during cardiopulmonary resuscitation (CPR). The effect of compression rate on other chest compression quality variables (compression depth, duty-cycle, leaning, performance decay over time) is unknown. This randomised controlled cross-over manikin study examined the effect of different compression rates on the other chest compression quality variables. Twenty healthcare professionals performed 2 min of continuous compressions on an instrumented manikin at rates of 80, 100, 120, 140 and 160 min^-1 in a random order. An electronic metronome was used to guide compression rate. Compression data were analysed by repeated-measures ANOVA and are presented as mean (SD). Non-parametric data were analysed by Friedman test. At faster compression rates there were significant improvements in the number of compressions delivered (160 (2) at 80 min^-1 vs. 312 (13) compressions at 160 min^-1, P<0.001) and in compression duty-cycle (43 (6)% at 80 min^-1 vs. 50 (7)% at 160 min^-1, P<0.001). This was at the cost of a significant reduction in compression depth (39.5 (10) mm at 80 min^-1 vs. 34.5 (11) mm at 160 min^-1, P<0.001) and earlier decay in compression quality (median decay point 120 s at 80 min^-1 vs. 40 s at 160 min^-1, P<0.001). Additionally, not all participants achieved the target rate (100% at 80 min^-1 vs. 70% at 160 min^-1). Rates above 120 min^-1 had the greatest impact on reducing chest compression quality. For Guidelines 2005 trained rescuers, a chest compression rate of 100-120 min^-1 for 2 min is feasible whilst maintaining adequate chest compression quality in terms of depth, duty-cycle, leaning, and decay in compression performance. Further studies are needed to assess the impact of the Guidelines 2010 recommendation for deeper and faster chest compressions. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  1. New filterability and compressibility test cell design for nuclear products

    Energy Technology Data Exchange (ETDEWEB)

    Féraud, J.P. [CEA Marcoule, DTEC/SGCS/LGCI, BP 17171, 30207 Bagnols-sur-Cèze (France); Bourcier, D., E-mail: damien.bourcier@cea.fr [CEA Marcoule, DTEC/SGCS/LGCI, BP 17171, 30207 Bagnols-sur-Cèze (France); Ode, D. [CEA Marcoule, DTEC/SGCS/LGCI, BP 17171, 30207 Bagnols-sur-Cèze (France); Puel, F. [Université Lyon 1, Villeurbanne (France); CNRS, UMR5007, Laboratoire d‘Automatique et de Génie des Procédés (LAGEP), CPE-Lyon, 43 bd du 11 Novembre 1918, 69100 Villeurbanne (France)

    2013-12-15

    Highlights: • Test easily usable without tools in a glove box. • The test minimizes the slurry volume necessary for this type of study. • The test characterizes the flow resistance in a porous medium as it forms. • The test is performed at four pressure levels to determine the compressibility. • The technical design ensures reproducible flow resistance measurements. -- Abstract: Filterability and compressibility tests are often carried out at laboratory scale to obtain the data required to scale up solid/liquid separation processes. Current technologies, applied with a constant pressure drop, enable specific resistance and cake formation rate measurement in accordance with a modified Darcy's law. The new test cell design described in this paper is easily usable without tools in a glove box and minimizes the slurry volume necessary for this type of study. This is an advantage for investigating toxic and hazardous products such as radioactive materials. Uranium oxalate precipitate slurries were used to test and validate this new cell. In order to reduce the test cell volume, a statistical approach was applied to 8 results obtained with cylindrical test cells of 1.8 cm and 3 cm in diameter. Wall effects can therefore be ignored despite the small filtration cell diameter, allowing tests to be performed with only about one-tenth of the slurry volume of a standard commercial cell. The significant reduction in the size of this experimental device does not alter the consistency of the filtration data, which may be used in the design of industrial equipment.
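
    The "modified Darcy's law" treatment mentioned here is usually applied through the constant-pressure filtration equation t/V = (μαc / 2A²ΔP)·V + μR_m/(AΔP): plotting t/V against cumulative filtrate volume V gives a line whose slope yields the specific cake resistance α and whose intercept yields the medium resistance R_m. A sketch of that regression (symbols follow the equation above; the numbers are illustrative, not the paper's data):

```python
import numpy as np

mu, c = 1.0e-3, 50.0        # filtrate viscosity (Pa s), solids load (kg/m^3)
A, dP = 2.5e-4, 1.0e5       # filtration area (m^2), applied pressure drop (Pa)

V = np.array([2.0, 4.0, 6.0, 8.0, 10.0]) * 1e-6  # filtrate volume, m^3
t = np.array([14.0, 52.0, 114.0, 200.0, 310.0])  # elapsed time, s

slope, intercept = np.polyfit(V, t / V, 1)       # fit t/V = slope*V + intercept
alpha = slope * 2.0 * A**2 * dP / (mu * c)       # specific cake resistance, m/kg
R_m = intercept * A * dP / mu                    # filter-medium resistance, 1/m
print(f"alpha ~ {alpha:.2e} m/kg, R_m ~ {R_m:.2e} 1/m")
```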

  2. New filterability and compressibility test cell design for nuclear products

    International Nuclear Information System (INIS)

    Féraud, J.P.; Bourcier, D.; Ode, D.; Puel, F.

    2013-01-01

    Highlights: • Test easily usable without tools in a glove box. • The test minimizes the slurry volume necessary for this type of study. • The test characterizes the flow resistance in a porous medium as it forms. • The test is performed at four pressure levels to determine the compressibility. • The technical design ensures reproducible flow resistance measurements. -- Abstract: Filterability and compressibility tests are often carried out at laboratory scale to obtain the data required to scale up solid/liquid separation processes. Current technologies, applied with a constant pressure drop, enable specific resistance and cake formation rate measurement in accordance with a modified Darcy's law. The new test cell design described in this paper is easily usable without tools in a glove box and minimizes the slurry volume necessary for this type of study. This is an advantage for investigating toxic and hazardous products such as radioactive materials. Uranium oxalate precipitate slurries were used to test and validate this new cell. In order to reduce the test cell volume, a statistical approach was applied to 8 results obtained with cylindrical test cells of 1.8 cm and 3 cm in diameter. Wall effects can therefore be ignored despite the small filtration cell diameter, allowing tests to be performed with only about one-tenth of the slurry volume of a standard commercial cell. The significant reduction in the size of this experimental device does not alter the consistency of the filtration data, which may be used in the design of industrial equipment

  3. Flow-induced vibration of helical coil compression springs

    International Nuclear Information System (INIS)

    Stokes, F.E.; King, R.A.

    1983-01-01

    Helical coil compression springs are used in some nuclear fuel assembly designs to maintain holddown and to accommodate thermal expansion. In the reactor environment, the springs are exposed to flowing water, elevated temperatures and pressures, and irradiation. Flow parallel to the longitudinal axis of the spring may excite the spring coils and cause vibration. The purpose of this investigation was to determine the flow-induced vibration (FIV) response characteristics of helical coil compression springs. Experimental tests indicate that a helical coil spring responds like a single circular cylinder in cross-flow. Two FIV excitation mechanisms control spring vibration: 1) turbulent buffeting causes small-amplitude vibration which increases as a function of velocity squared; 2) vortex shedding causes large-amplitude vibration when the spring natural frequency and the Strouhal frequency coincide. Several methods can be used to reduce or prevent large-amplitude vortex-shedding vibrations. One method is compressing the spring to a coil pitch-to-diameter ratio of 2, thereby suppressing the vibration amplitude. Another involves modifying the spring geometry to alter its stiffness and frequency characteristics; these changes result in separation of the natural and Strouhal frequencies. With an understanding of how springs respond in the flowing-water environment, the spring physical parameters can be designed to avoid large-amplitude vibration. (orig.)
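
    The vortex-shedding coincidence described above follows from the Strouhal relation f_s = St·U/d, with St ≈ 0.2 for a circular cylinder in cross-flow: large-amplitude vibration is expected where the shedding frequency approaches the spring's natural frequency. A quick check (all numbers are illustrative, not taken from the tests):

```python
St = 0.2        # Strouhal number, circular cylinder in cross-flow
d = 0.002       # spring wire diameter, m
f_n = 120.0     # spring natural frequency, Hz

for U in (0.5, 1.0, 1.2, 2.0):        # coolant velocities, m/s
    f_s = St * U / d                  # vortex-shedding frequency, Hz
    flag = "  <-- lock-in risk" if abs(f_s - f_n) / f_n < 0.2 else ""
    print(f"U = {U:.1f} m/s: f_s = {f_s:.0f} Hz{flag}")
```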

  4. Compressive strength test for cemented waste forms: validation process

    International Nuclear Information System (INIS)

    Haucz, Maria Judite A.; Candido, Francisco Donizete; Seles, Sandro Rogerio

    2007-01-01

    In the Cementation Laboratory (LABCIM) of the Nuclear Technology Development Centre (CNEN/CDTN-MG), hazardous/radioactive wastes are incorporated in cement to transform them into monolithic products, preventing or minimizing the release of contaminants to the environment. The compressive strength test is important for evaluating the quality of the cemented product: it determines the compression load necessary to rupture the cemented waste form. In LABCIM a specific procedure was developed to determine the compressive strength of cement waste forms, based on the Brazilian standard NBR 7215. The accreditation of this procedure is essential to assure reproducible and accurate results in the evaluation of these products. To achieve this goal the Laboratory personnel implemented technical and administrative improvements in accordance with the NBR ISO/IEC 17025 standard 'General requirements for the competence of testing and calibration laboratories'. As the developed procedure is not a standard one, the norm ISO/IEC 17025 requires its validation. There are several methodologies for doing so. This paper describes the current status of the accreditation project, especially the validation process of the referred procedure and its results. (author)

  5. Massive Parallelism of Monte-Carlo Simulation on Low-End Hardware using Graphic Processing Units

    International Nuclear Information System (INIS)

    Mburu, Joe Mwangi; Hah, Chang Joo Hah

    2014-01-01

    Within the past decade, research has been done on utilizing GPU massive parallelization in core simulation, with impressive results; unfortunately, not much commercial application has been made in the nuclear field, especially in reactor core simulation. The purpose of this paper is to give an introductory concept of the topic and illustrate the potential of exploiting the massively parallel nature of GPU computing on a simple Monte Carlo simulation with very minimal hardware specifications. For a comparative analysis, a simple two-dimensional Monte Carlo simulation is implemented for both the CPU and the GPU in order to evaluate the performance gain of each computing device. The heterogeneous platform utilized in this analysis is a slow notebook with only a 1 GHz processor. The end results are quite surprising, with speedups of almost a factor of 10. In this work, we have utilized heterogeneous computing in a GPU-based approach, applying it to a potentially arithmetic-intensive calculation. By running a complex Monte Carlo simulation on the GPU platform, we have sped up the computational process by almost a factor of 10, based on one million neutrons. This shows how easy, cheap and efficient it is to use GPUs to accelerate scientific computing, and the results should encourage further exploration of this avenue, especially in nuclear reactor physics simulation, where deterministic and stochastic calculations are quite favourable for parallelization.

  6. Massive Parallelism of Monte-Carlo Simulation on Low-End Hardware using Graphic Processing Units

    Energy Technology Data Exchange (ETDEWEB)

    Mburu, Joe Mwangi; Hah, Chang Joo Hah [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of)

    2014-05-15

    Within the past decade, research has been done on utilizing GPU massive parallelization in core simulation, with impressive results; unfortunately, not much commercial application has been made in the nuclear field, especially in reactor core simulation. The purpose of this paper is to give an introductory concept of the topic and illustrate the potential of exploiting the massively parallel nature of GPU computing on a simple Monte Carlo simulation with very minimal hardware specifications. For a comparative analysis, a simple two-dimensional Monte Carlo simulation is implemented for both the CPU and the GPU in order to evaluate the performance gain of each computing device. The heterogeneous platform utilized in this analysis is a slow notebook with only a 1 GHz processor. The end results are quite surprising, with speedups of almost a factor of 10. In this work, we have utilized heterogeneous computing in a GPU-based approach, applying it to a potentially arithmetic-intensive calculation. By running a complex Monte Carlo simulation on the GPU platform, we have sped up the computational process by almost a factor of 10, based on one million neutrons. This shows how easy, cheap and efficient it is to use GPUs to accelerate scientific computing, and the results should encourage further exploration of this avenue, especially in nuclear reactor physics simulation, where deterministic and stochastic calculations are quite favourable for parallelization.
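
    The speedup mechanism is the independence of particle histories: every history can be advanced in lockstep, which is what a GPU kernel (or, in miniature, a vectorized CPU run) exploits. The toy 2D free-flight sampler below illustrates the batched style; it is a stand-in for the paper's simulation, not its code:

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma_t = 1_000_000, 0.5     # histories; total macroscopic cross-section, 1/cm

# Batched (GPU-style) sampling: one vector operation per physical step.
distance = rng.exponential(1.0 / sigma_t, n)    # free-flight distances, cm
angle = 2.0 * np.pi * rng.random(n)             # isotropic 2D directions
x, y = distance * np.cos(angle), distance * np.sin(angle)

# Example tally: fraction of first flights crossing the plane x = 2 cm.
print("crossing fraction:", np.mean(x > 2.0))
```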

  7. MASSIVE STARS IN THE Cl 1813-178 CLUSTER: AN EPISODE OF MASSIVE STAR FORMATION IN THE W33 COMPLEX

    International Nuclear Information System (INIS)

    Messineo, Maria; Davies, Ben; Figer, Donald F.; Trombley, Christine; Kudritzki, R. P.; Valenti, Elena; Najarro, F.; Michael Rich, R.

    2011-01-01

    Young massive (M > 10^4 M_sun) stellar clusters are a good laboratory for studying the evolution of massive stars. Only a dozen such clusters are known in the Galaxy. Here we report on a new young massive stellar cluster in the Milky Way. Near-infrared medium-resolution spectroscopy with UIST on the UKIRT telescope and NIRSPEC on the Keck telescope, and X-ray observations with the Chandra and XMM satellites, of the Cl 1813-178 cluster confirm a large number of massive stars. We detected 1 red supergiant, 2 Wolf-Rayet stars, 1 candidate luminous blue variable, 2 OIf stars, and 19 OB stars. Among the latter, twelve are likely supergiants, four giants, and the faintest three dwarf stars. We detected post-main-sequence stars with masses between 25 and 100 M_sun. A population with an age of 4-4.5 Myr and a mass of ∼10,000 M_sun can reproduce such a mixture of massive evolved stars. This is the first detection of a stellar cluster in the W33 complex. Six supernova remnants and several other candidate clusters are found in the direction of the same complex.

  8. Creep and creep recovery of concrete subjected to triaxial compressive stresses at elevated temperature

    International Nuclear Information System (INIS)

    Ohnuma, Hiroshi; Abe, Hirotoshi

    1979-01-01

    In order to rationally design the prestressed concrete vessels used in nuclear power stations and to improve the accuracy of high-temperature creep analysis, the Central Research Institute of Electric Power Industry had carried out proving experiments with scale models. To improve the accuracy of the analysis, it is important to grasp the creep behavior of concrete subjected to triaxial compressive stresses at high temperature as a basic property, because actual prestressed concrete vessels are in such conditions. In this paper, a triaxial compression creep test at 60 deg. C using concrete specimens with the same mix proportions as the scale models is reported. The compressive strength of the concrete at the age of 28 days was 406 kg/cm^2, and the age of the concrete at the time of loading was 63 days. Creep and creep recovery were measured for 5 months and 2 months, respectively. The creep of concrete under uniaxial compression increased with temperature rise, and the creep strain at 60 deg. C was 2.54 times that at 20 deg. C. The effective Poisson's ratio in triaxial compression creep was 0.15 on average, based on the creep strain due to uniaxial compression at 60 deg. C. The creep recovery rate in high-temperature triaxial compression creep was 33% on average. (Kako, I.)

  9. Nuclear energy and its synergies with renewable energies

    International Nuclear Information System (INIS)

    Carre, F.; Mermilliod, N.; Devezeaux De Lavergne, J.G.; Durand, S.

    2011-01-01

    France has the ambition to become a world leader both in the nuclear industry and in renewable energies. Three types of synergy between nuclear power and renewable energies are highlighted. First, nuclear power can be used as a low-carbon energy source to manufacture the equipment required for renewable energy production, for instance photovoltaic cells. Secondly, the two energies have complementary features that can be exploited: continuous versus intermittent production, and centralized versus local production; the future development of smart grids will help to do that. Thirdly, nuclear energy can be used to produce hydrogen massively from water and synthetic fuels from biomass. (A.C.)

  10. Working in nuclear industry? why not?

    International Nuclear Information System (INIS)

    Brechet, Y.

    2017-01-01

    Today 200 nuclear reactors are being built or scheduled in the world, and despite this, nuclear energy in western countries seems to be collapsing under the weight of prejudice and false ideas. No matter what the opponents say, nuclear energy is safe and clean and creates jobs. In France the nuclear industry is one of the few industrial sectors to have been spared by massive de-industrialization. Nuclear energy, as a carbon-free energy, has an important role to play in mitigating climate warming by working with renewable energies to provide reliable electric power. This future is a new future for nuclear energy, as new challenges have to be overcome: for instance, nuclear energy has to adapt itself to the intermittency of wind and solar energies, and the nuclear industry has to be innovative and fully embrace digital technologies. The nuclear industry is a promising sector that offers interesting scientific and technical jobs and is also of vital interest for the country. (A.C.)

  11. To dare nuclear energy to find the solution of the climate issue

    International Nuclear Information System (INIS)

    2014-01-01

    This report first briefly recalls the IPCC reference scenarios which allow the global temperature increase to be limited to 2 degrees (Representative Concentration Pathway 2.6) and which rely on massive CO2 capture and storage. Two categories of scenarios have been proposed: IMAGE, by the Netherlands Environmental Assessment Agency, and MESSAGE, by the Austrian International Institute for Applied Systems Analysis. Only the MESSAGE category limits CO2 storage to 24 billion tonnes, by means of either a massive development of nuclear energy between 2060 and 2100 or a drastic decrease in energy consumption. Each category comprises three scenarios: a Supply scenario which authorizes high energy consumption, an Efficiency scenario which is also a nuclear phase-out scenario with a 45 per cent reduction of energy consumption, and an intermediate Mix scenario. This study proposes nuclear variants of the Mix and Supply scenarios, with a strong development of nuclear energy from 2020 rather than from 2060, and with a 60 per cent share for nuclear energy. It is then possible to considerably reduce the role of CO2 storage.

  12. Progress of Nuclear Hydrogen Program in Korea

    International Nuclear Information System (INIS)

    Lee, Won Jae

    2009-01-01

    To cope with dwindling fossil fuels and climate change, it is clear that a clean alternative energy that can replace fossil fuels is required. Hydrogen is considered a promising future energy solution because it is clean, abundant and storable and has a high energy density. Like other advanced countries, the Korean government established a long-term vision for the transition to the hydrogen economy in 2005. One of the major challenges in establishing a hydrogen economy is how to produce massive quantities of hydrogen in a clean, safe and economical way. Among the various hydrogen production methods, the massive, safe and economical production of hydrogen by water splitting using a very high temperature gas-cooled reactor (VHTR) can provide a path to success for the hydrogen economy. Particularly in Korea, where usable land is limited, the nuclear production of hydrogen is deemed a practical solution due to its high energy density. To meet the expected demand for hydrogen, the Korea Atomic Energy Research Institute (KAERI) launched a nuclear hydrogen program in 2004 together with the Korea Institute of Energy Research (KIER) and the Korea Institute of Science and Technology (KIST). The nuclear hydrogen key technologies development program followed in 2006, aiming at the development and validation of the key and challenging technologies required for the realization of a nuclear hydrogen production demonstration system. In 2008, the Korean Atomic Energy Commission officially approved a long-term development plan for the nuclear hydrogen system technologies, and the nuclear hydrogen program became a national agenda item. This presentation introduces the current status of nuclear hydrogen projects in Korea and the progress of the nuclear hydrogen key technologies development. Perspectives on nuclear process heat applications are also addressed.

  13. Excessive chest compression rate is associated with insufficient compression depth in prehospital cardiac arrest.

    Science.gov (United States)

    Monsieurs, Koenraad G; De Regge, Melissa; Vansteelandt, Kristof; De Smet, Jeroen; Annaert, Emmanuel; Lemoyne, Sabine; Kalmar, Alain F; Calle, Paul A

    2012-11-01

    BACKGROUND AND GOAL OF STUDY: The relationship between chest compression rate and compression depth is unknown. In order to characterise this relationship, we performed an observational study in prehospital cardiac arrest patients. We hypothesised that faster compressions are associated with decreased depth. In patients undergoing prehospital cardiopulmonary resuscitation by health care professionals, chest compression rate and depth were recorded using an accelerometer (E-series monitor-defibrillator, Zoll, U.S.A.). Compression depth was compared for rates <80/min, 80-120/min and >120/min. A difference in compression depth ≥0.5 cm was considered clinically significant. Mixed models with repeated measurements of chest compression depth and rate (level 1) nested within patients (level 2) were used with compression rate as a continuous and as a categorical predictor of depth. Results are reported as means and standard error (SE). One hundred and thirty-three consecutive patients were analysed (213,409 compressions). Of all compressions, 2% were delivered at rates <80/min and 36% at rates >120/min. In 77 out of 133 (58%) patients a statistically significantly lower depth was observed for rates >120/min compared to rates 80-120/min; in 40 out of 133 (30%) this difference was also clinically significant. The mixed models predicted that the deepest compression (4.5 cm) occurred at a rate of 86/min, with progressively lower compression depths at higher rates; rates >145/min were predicted to result in still shallower depths. The compression depth for rates 80-120/min was on average 4.5 cm (SE 0.06) compared to 4.1 cm (SE 0.06) for compressions >120/min (mean difference 0.4 cm; the difference was statistically significant). Higher compression rates were associated with lower compression depths. Avoiding excessive compression rates may lead to more compressions of sufficient depth. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  14. How I treat patients with massive hemorrhage

    DEFF Research Database (Denmark)

    Johansson, Pär I; Stensballe, Jakob; Oliveri, Roberto

    2014-01-01

    Massive hemorrhage is associated with coagulopathy and high mortality. The transfusion guidelines up to 2006 recommended that resuscitation of massive hemorrhage should occur in successive steps using crystalloids, colloids and red blood cells (RBC) in the early phase, and plasma and platelets...... in the late phase. With the introduction of the cell-based model of hemostasis in the mid-1990s, our understanding of the hemostatic process and of coagulopathy has improved. This has contributed to a change in resuscitation strategy and transfusion therapy of massive hemorrhage along with an acceptance...... outcome, although final evidence on outcome from randomized controlled trials is lacking. We here present how we, in Copenhagen and Houston, today manage patients with massive hemorrhage....

  15. Shock compression experiments on Lithium Deuteride (LiD) single crystals

    Science.gov (United States)

    Knudson, M. D.; Desjarlais, M. P.; Lemke, R. W.

    2016-12-01

    Shock compression experiments in the few hundred GPa (multi-Mbar) regime were performed on Lithium Deuteride single crystals. This study utilized the high velocity flyer plate capability of the Sandia Z Machine to perform impact experiments at flyer plate velocities in the range of 17-32 km/s. Measurements included pressure, density, and temperature between ˜190 and 570 GPa along the Principal Hugoniot—the locus of end states achievable through compression by large amplitude shock waves—as well as pressure and density of reshock states up to ˜920 GPa. The experimental measurements are compared with density functional theory calculations, tabular equation of state models, and legacy nuclear driven results that have been reanalyzed using modern equations of state for the shock wave standards used in the experiments.
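
    For context, a point on the Principal Hugoniot is fixed by the standard Rankine-Hugoniot jump conditions, which relate the initial state (rho_0, P_0, E_0) to the shocked state through the shock velocity u_s and particle velocity u_p. These are the textbook relations, quoted here for orientation rather than taken from the paper:

        \rho_0 \, u_s = \rho \, (u_s - u_p), \qquad
        P - P_0 = \rho_0 \, u_s \, u_p, \qquad
        E - E_0 = \tfrac{1}{2} (P + P_0)(V_0 - V), \quad V = 1/\rho

    Measuring the flyer-plate and shock velocities thus determines pressure and density on the Hugoniot without assuming a free-standing equation of state.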

  16. Adiabatic compression and radiative compression of magnetic fields

    International Nuclear Information System (INIS)

    Woods, C.H.

    1980-01-01

    Flux is conserved during mechanical compression of magnetic fields for both nonrelativistic and relativistic compressors. However, the relativistic compressor generates radiation, which can carry up to twice the energy content of the magnetic field compressed adiabatically. The radiation may be either confined or allowed to escape

  17. Some topics on nuclear astrophysics and neutrino astronomy

    International Nuclear Information System (INIS)

    Nakazato, Ken'ichiro

    2010-01-01

    Massive stars undergo gravitational collapse at the end of their lives, emitting a large number of neutrinos. In this process, the density and temperature of matter become high. Neutrino detection of stellar collapse can therefore teach us about the properties of hot and/or dense nuclear matter. In this article, some subjects in nuclear astrophysics and neutrino astronomy on which we are now working are reported. (author)

  18. Nuclear energy at the turning point

    Energy Technology Data Exchange (ETDEWEB)

    Weinberg, A.M.

    1977-07-01

    In deciding the future course of nuclear energy, it is necessary to re-examine man's long-term energy options, in particular solar energy and the breeder reactor. Both systems pose difficulties: energy from the sun is likely to be expensive as well as limited, whereas a massive world-wide deployment of nuclear breeders will create problems of safety and of proliferation. Nuclear energy's long-term success depends on resolving both of these problems. Collocation of nuclear facilities and a system of resident inspectors are measures that ought to help increase the proliferation-resistance as well as the safety of a large-scale, long-term nuclear system based on breeders. In such a long-term system a strengthened International Atomic Energy Agency (IAEA) is viewed as playing a central role.

  19. The impact of chest compression rates on quality of chest compressions: a manikin study

    OpenAIRE

    Field, Richard A.; Soar, Jasmeet; Davies, Robin P.; Akhtar, Naheed; Perkins, Gavin D.

    2012-01-01

    Purpose\\ud Chest compressions are often performed at a variable rate during cardiopulmonary resuscitation (CPR). The effect of compression rate on other chest compression quality variables (compression depth, duty-cycle, leaning, performance decay over time) is unknown. This randomised controlled cross-over manikin study examined the effect of different compression rates on the other chest compression quality variables.\\ud Methods\\ud Twenty healthcare professionals performed two minutes of co...

  20. Systematic study of the giant monopolar resonance via inelastic scattering of 108.5 MeV 3He. Measurement of the nuclear compressibility

    International Nuclear Information System (INIS)

    Lebrun, Didier.

    1981-09-01

    The giant monopole resonance has been studied via inelastic scattering of 108.5 MeV ^3He at very small angles (including 0°) on approximately 50 nuclei. Its angular distribution reaches its maximum in this region and allows a clean separation from the GQR. DWBA analysis shows a smooth increase of the strength from a few per cent of the sum rule in light nuclei up to 100% in heavier ones. The excitation energy analysis shows a crossing of the monopole and quadrupole frequencies in the A = 40-50 region, a coupling effect between the two modes in deformed nuclei, and an asymmetry effect in several series of isotopes. The compressibility moduli of nuclear matter K_infinity, surface K_s and asymmetry K_tau have been extracted, as well as the Landau parameter F_0 at saturation. [fr]

  1. On the singularities of massive superstring amplitudes

    Energy Technology Data Exchange (ETDEWEB)

    Foda, O.

    1987-06-04

    Superstring one-loop amplitudes with massive external states are shown to be in general ill-defined due to internal on-shell propagators. However, we argue that since any massive string state (in the uncompactified theory) has a finite lifetime to decay into massless particles, such amplitudes are not terms in the perturbative expansion of physical S-matrix elements: These can be defined only with massless external states. Consistent massive amplitudes require an off-shell formalism.

  2. Massive supermultiplets in four-dimensional superstring theory

    International Nuclear Information System (INIS)

    Feng Wanzhe; Lüst, Dieter; Schlotterer, Oliver

    2012-01-01

    We extend the discussion of Feng et al. (2011) on massive Regge excitations on the first mass level of four-dimensional superstring theory. For the lightest massive modes of the open string sector, universal supermultiplets common to all four-dimensional compactifications with N=1,2 and N=4 spacetime supersymmetry are constructed respectively - both their vertex operators and their supersymmetry variations. Massive spinor helicity methods shed light on the interplay between individual polarization states.

  3. Nuclear reactor

    International Nuclear Information System (INIS)

    Gibbons, J.F.; McLaughlin, D.J.

    1978-01-01

    In the pressure vessel of the water-cooled nuclear reactor there is provided an internal flange from which the one- or two-part core barrel hangs by means of an external flange. A cylinder extends from the reactor vessel closure downwards to a seat on the core support structure and serves as a compression element for the transmission of the clamping load from the closure head to the core barrel (upper guide structure). With the core barrel under tensile stress between the vessel internal flange and its seat on the one hand, and the cylinder or hold-down element under compression between the closure head and the seat on the other, a very strong, elastically sprung structure is obtained. (DG) [de]

  4. Calculation of LUEC using HEEP Software for Nuclear Hydrogen Production Plant

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jongho; Lee, Kiyoung; Kim, Minhwan [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-05-15

    To achieve the hydrogen economy, it is very important to produce a massive amount of hydrogen in a clean, safe and efficient way. Nuclear production of hydrogen would allow massive production of hydrogen at economic prices while avoiding environmental pollution by reducing the release of carbon dioxide. A Very High Temperature Reactor (VHTR) is considered an efficient reactor to couple with the thermo-chemical Sulfur-Iodine (SI) cycle to achieve the hydrogen economy. HEEP (Hydrogen Economy Evaluation Program) is one of the software tools developed by the IAEA to evaluate the economics of nuclear hydrogen production systems by estimating the unit hydrogen production cost. In this paper, the Levelized Unit Hydrogen Cost (LUHC) is calculated using the HEEP software for a nuclear hydrogen production plant consisting of 4 modules of 600 MWth VHTR coupled with the SI process.
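
    As a rough sketch of the levelized-cost arithmetic such a tool performs, the LUHC is the ratio of discounted lifetime costs to discounted lifetime hydrogen output. The function below is an illustrative assumption about the general method, not HEEP's actual internals; the names and discounting convention are hypothetical.

        def levelized_unit_hydrogen_cost(annual_costs, annual_h2_kg, discount_rate):
            # annual_costs[t]: total expenditure (capital + O&M + fuel) in year t
            # annual_h2_kg[t]: hydrogen produced in year t, in kg
            num = sum(c / (1.0 + discount_rate) ** t for t, c in enumerate(annual_costs))
            den = sum(h / (1.0 + discount_rate) ** t for t, h in enumerate(annual_h2_kg))
            return num / den  # levelized cost per kg of hydrogen

    For example, a plant with constant yearly cost and output has an LUHC equal to cost per year divided by kilograms per year, independent of the discount rate.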

  5. Update on massive transfusion.

    Science.gov (United States)

    Pham, H P; Shaz, B H

    2013-12-01

    Massive haemorrhage requires massive transfusion (MT) to maintain adequate circulation and haemostasis. For optimal management of massively bleeding patients, regardless of aetiology (trauma, obstetrical, surgical), effective preparation and communication between transfusion and other laboratory services and clinical teams are essential. A well-defined MT protocol is a valuable tool to delineate how blood products are ordered, prepared, and delivered; determine laboratory algorithms to use as transfusion guidelines; and outline duties and facilitate communication between involved personnel. In MT patients, it is crucial to practice damage control resuscitation and to administer blood products early in the resuscitation. Trauma patients are often admitted with early trauma-induced coagulopathy (ETIC), which is associated with mortality; the aetiology of ETIC is likely multifactorial. Current data support that trauma patients treated with higher ratios of plasma and platelet to red blood cell transfusions have improved outcomes, but further clinical investigation is needed. Additionally, tranexamic acid has been shown to decrease the mortality in trauma patients requiring MT. Greater use of cryoprecipitate or fibrinogen concentrate might be beneficial in MT patients from obstetrical causes. The risks and benefits for other therapies (prothrombin complex concentrate, recombinant activated factor VII, or whole blood) are not clearly defined in MT patients. Throughout the resuscitation, the patient should be closely monitored and both metabolic and coagulation abnormalities corrected. Further studies are needed to clarify the optimal ratios of blood products, treatment based on underlying clinical disorder, use of alternative therapies, and integration of laboratory testing results in the management of massively bleeding patients.

  6. Nuclear data, their importance and evaluation

    International Nuclear Information System (INIS)

    Schmidt, J.J.

    1980-01-01

    Nuclear data comprise all quantitative results of nuclear physics investigations and can be subdivided into the three areas of nuclear structure, nuclear decay and nuclear reaction data. For the purposes of fission and fusion reactor design, mostly neutron reaction data are needed, while for the nuclear fuel cycle outside the reactor and for a large variety of 'non-energy' scientific applications, a number of photonuclear and charged-particle nuclear reaction data and of nuclear structure and decay data are needed in addition to selected neutron nuclear reaction data. To meet the needs of nuclear science and technology for accurate nuclear data, comprehensive computer libraries of evaluated nuclear data have been built up from the evaluation of a massive volume of experimental data, complemented by data calculated from nuclear theory. The basic characteristics and requirements of evaluated data libraries are discussed, and evaluation sources and methods are illustrated with the example of a few important neutron nuclear reactions. International mechanisms have been developed, coordinated by the IAEA Nuclear Data Section with the cooperation of many nuclear data centres and groups, for the efficient dissemination of bibliographic and numerical experimental and evaluated nuclear data to data users in the whole world. (author)

  7. Nuclear fuel pellet production method and nuclear fuel pellet

    International Nuclear Information System (INIS)

    Yuda, Ryoichi; Ito, Ken-ichi; Masuda, Hiroshi.

    1993-01-01

    In a method of manufacturing nuclear fuel pellets by compression-molding UO2 powders followed by sintering, a sintering agent having a composition of about 40 to 80 wt% SiO2 with the balance Al2O3 is mixed with UO2 powders at a ratio of 10 to 500 ppm based on the total amount of UO2; the mixture is compression molded and then sintered at a temperature of about 1500 to 1800 deg. C. The UO2 grains have an average size of about 20 to 60 μm, most of the grain boundaries are coated with a glassy or crystalline aluminosilicate phase, and the porosity is about 1 to 4 vol%. With such a constitution, the sintering agent forms a single liquid-phase eutectic mixture during sintering, which promotes the surface reaction between nuclear fuel powders by a liquid-phase sintering mechanism, increases their density and promotes crystal growth. Accordingly, it is possible to lower the softening temperature, improve the creep velocity of the pellets and improve the resistance against pellet-clad interaction. (T.M.)

  8. Massive lepton pair production in massive quantum electrodynamics

    International Nuclear Information System (INIS)

    Raychaudhuri, P.

    1976-01-01

    The pp → l+ + l- + X inclusive interaction has been studied at high energies in terms of massive quantum electrodynamics. The differential cross-section dsigma/dQ^2 is derived and proves to be proportional to Q^-4, where Q is the mass of the lepton pair. The basic features of the cross-section are demonstrated to be consistent with the Drell-Yan model.

  9. Epidemiology of Massive Transfusion

    DEFF Research Database (Denmark)

    Halmin, Märit; Chiesa, Flaminia; Vasan, Senthil K

    2016-01-01

    in Sweden from 1987 and in Denmark from 1996. A total of 92,057 patients were included. Patients were followed until the end of 2012. MEASUREMENTS AND MAIN RESULTS: Descriptive statistics were used to characterize the patients and indications. Post transfusion mortality was expressed as crude 30-day...... mortality and as long-term mortality using the Kaplan-Meier method and using standardized mortality ratios. The incidence of massive transfusion was higher in Denmark (4.5 per 10,000) than in Sweden (2.5 per 10,000). The most common indication for massive transfusion was major surgery (61.2%) followed...

  10. Lossless compression of waveform data for efficient storage and transmission

    International Nuclear Information System (INIS)

    Stearns, S.D.; Tan, Li Zhe; Magotra, Neeraj

    1993-01-01

    Compression of waveform data is significant in many engineering and research areas since it can reduce data storage and transmission bandwidth requirements. For example, seismic data are widely recorded and transmitted so that analysis can be performed on large amounts of data for numerous applications such as petroleum exploration, determination of the earth's core structure, seismic event detection and discrimination of underground nuclear explosions, etc. This paper describes a technique for lossless waveform data compression. The technique consists of two stages. The first stage is a modified form of linear prediction with discrete coefficients and the second stage is bi-level sequence coding. The linear predictor generates an error or residue sequence in such a way that exact reconstruction of the original data sequence can be accomplished with a simple algorithm. The residue sequence is essentially white Gaussian for seismic and other similar waveform data. Bi-level sequence coding, in which two sample sizes are chosen and the residue sequence is encoded into subsequences that alternate from one level to the other, further compresses the residue sequence. The principal feature of the two-stage data compression algorithm is that it is lossless, that is, it allows exact, bit-for-bit recovery of the original data sequence. The performance of the lossless compression algorithm at each stage is analyzed. The advantages of using bi-level sequence coding in the second stage are its simplicity of implementation, its effectiveness on data with large amplitude variations, and its near-optimal performance in encoding Gaussian sequences. Application of the two-stage technique to typical seismic data indicates that an average number of compressed bits per sample close to the lower bound is achievable in practical situations.
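
    To make the first (predictive) stage concrete, below is a minimal sketch of an integer linear predictor and its exact inverse; the fixed second-order predictor and function names are illustrative assumptions, not the paper's optimized discrete-coefficient design, and the bi-level coder is omitted.

        import numpy as np

        def residue(x):
            # Fixed second-order integer predictor: xhat[n] = 2*x[n-1] - x[n-2].
            # Integer arithmetic makes the transform exactly invertible.
            x = np.asarray(x, dtype=np.int64)
            r = x.copy()
            r[1] = x[1] - x[0]
            r[2:] = x[2:] - 2 * x[1:-1] + x[:-2]
            return r

        def reconstruct(r):
            # Bit-for-bit inverse of residue().
            x = r.copy()
            x[1] = r[1] + x[0]
            for n in range(2, len(r)):
                x[n] = r[n] + 2 * x[n - 1] - x[n - 2]
            return x

        x = np.array([10, 12, 15, 19, 24, 28, 30], dtype=np.int64)
        assert np.array_equal(reconstruct(residue(x)), x)  # lossless round trip

    On smooth waveforms the residues are small integers clustered near zero, which is exactly what the second-stage sequence coder exploits.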

  11. Compression stockings

    Science.gov (United States)

    Call your health insurance or prescription plan: Find out if they pay for compression stockings. Ask if your durable medical equipment benefit pays for compression stockings. Get a prescription from your doctor. Find a medical equipment store where they can ...

  12. On the singularities of massive superstring amplitudes

    International Nuclear Information System (INIS)

    Foda, O.

    1987-01-01

    Superstring one-loop amplitudes with massive external states are shown to be in general ill-defined due to internal on-shell propagators. However, we argue that since any massive string state (in the uncompactified theory) has a finite lifetime to decay into massless particles, such amplitudes are not terms in the perturbative expansion of physical S-matrix elements: These can be defined only with massless external states. Consistent massive amplitudes repuire an off-shell formalism. (orig.)

  13. Compression for radiological images

    Science.gov (United States)

    Wilson, Dennis L.

    1992-07-01

    The viewing of radiological images has peculiarities that must be taken into account in the design of a compression technique. The images may be manipulated on a workstation to change the contrast, to change the center of the brightness levels that are viewed, and even to invert the images. Because of the possible consequences of losing information in a medical application, bit-preserving compression is used for the images used for diagnosis. However, for archiving, the images may be compressed to 10% of their original size. A compression technique based on the Discrete Cosine Transform (DCT) takes the viewing factors into account by compressing the changes in the local brightness levels. The compression technique is a variation of the CCITT JPEG compression that suppresses the blocking of the DCT except in areas of very high contrast.
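
    The core of any DCT-based scheme is transform-then-quantize on small tiles. The snippet below is a toy sketch of that idea only; the tile size, the uniform quantizer and the function names are assumptions for illustration and do not reproduce the paper's deblocking variant.

        import numpy as np
        from scipy.fft import dctn, idctn

        def compress_tile(tile, q=20.0):
            # 2-D DCT of an 8x8 tile, then coarse uniform quantization of the
            # coefficients; most high-frequency terms round to zero.
            coeff = dctn(tile.astype(float), norm='ortho')
            return np.round(coeff / q).astype(np.int32)

        def decompress_tile(qcoeff, q=20.0):
            # Dequantize and invert the DCT to get the lossy reconstruction.
            return idctn(qcoeff.astype(float) * q, norm='ortho')

        tile = np.arange(64, dtype=float).reshape(8, 8)  # toy brightness ramp
        print(np.abs(decompress_tile(compress_tile(tile)) - tile).max())

    Because quantization operates tile by tile, naive implementations show the block artifacts that the variant described above is designed to suppress.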

  14. Using Nuclear Science and Technology Safely and Peacefully; La Utilizacion de la Ciencia y Tecnologia Nuclear de Forma Segura y Pacifica

    Energy Technology Data Exchange (ETDEWEB)

    Kaiser, P.

    2011-07-01

    The Fukushima Daiichi nuclear power plant in Japan suffered a serious accident on March 11, 2011, following a massive earthquake and tsunami. It caused millions of people around the world to ask whether nuclear energy can ever be made sufficiently safe. In view of these questions, IAEA Director General Yukiya Amano said the IAEA will continue to pursue its mandate to help improve nuclear power plant safety and to ensure transparency about the risks of radiation. Only in this way will we succeed in addressing the concerns that have been raised by Fukushima Daiichi. (Author)

  15. Nucleosynthesis and remnants in massive stars of solar metallicity

    International Nuclear Information System (INIS)

    Woosley, S.E.; Heger, A.

    2007-01-01

    Hans Bethe contributed in many ways to our understanding of the supernovae that happen in massive stars, but, to this day, a first-principles model of how the explosion is energized is lacking. Nevertheless, a quantitative theory of nucleosynthesis is possible. We present a survey of the nucleosynthesis that occurs in 32 stars of solar metallicity in the mass range 12-120 M_sun. The most recent set of solar abundances, opacities, mass loss rates, and current estimates of nuclear reaction rates are employed. Restrictions on the mass cut and explosion energy of the supernovae based upon nucleosynthesis, measured neutron star masses, and light curves are discussed and applied. The nucleosynthetic results, when integrated over a Salpeter initial mass function (IMF), agree quite well with what is seen in the sun. We discuss in some detail the production of the long-lived radioactivities ^26Al and ^60Fe, and why recent model-based estimates of the ratio ^60Fe/^26Al are overly large compared with what satellites have observed. A major source of the discrepancy is the uncertain nuclear cross sections for the creation and destruction of these unstable isotopes.

  16. Thermodynamics inducing massive particles' tunneling and cosmic censorship

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Baocheng [Chinese Academy of Sciences, State Key Laboratory of Magnetic Resonances and Atomic and Molecular Physics, Wuhan Institute of Physics and Mathematics, Wuhan (China); Graduate University of Chinese Academy of Sciences, Beijing (China); Cai, Qing-yu [Chinese Academy of Sciences, State Key Laboratory of Magnetic Resonances and Atomic and Molecular Physics, Wuhan Institute of Physics and Mathematics, Wuhan (China); Zhan, Ming-sheng [Chinese Academy of Sciences, State Key Laboratory of Magnetic Resonances and Atomic and Molecular Physics, Wuhan Institute of Physics and Mathematics, Wuhan (China); Chinese Academy of Sciences, Center for Cold Atom Physics, Wuhan (China)

    2010-08-15

    By calculating the change of entropy, we prove that the first law of black hole thermodynamics leads to the tunneling probability of massive particles through the horizon, including the tunneling probability of massive charged particles from the Reissner-Nordstroem black hole and the Kerr-Newman black hole. Notably, we find that the trajectories of massive particles are close to those of massless particles near the horizon, although the trajectories of massive charged particles may be affected by electromagnetic forces. We show that Hawking radiation as massive-particle tunneling does not lead to a violation of the weak cosmic-censorship conjecture. (orig.)

  17. High temperature compression tests performed on doped fuels

    Energy Technology Data Exchange (ETDEWEB)

    Duguay, C.; Mocellin, A.; Dehaudt, P. [Commissariat a l`Energie Atomique, CEA Grenoble (France); Fantozzi, G. [INSA Lyon - GEMPPM, Villeurbanne (France)

    1997-12-31

    The use of additives with the corundum structure M2O3 (M = Cr, Al) is an effective way of promoting grain growth in uranium dioxide. The high-temperature compressive deformation of large-grained UO2 doped with these oxides has been investigated and compared with that of pure UO2 with a standard microstructure. Such doped fuels are expected to exhibit enhanced plasticity. Their use would therefore reduce the pellet-cladding mechanical interaction and thus improve the performance of the nuclear fuel. (orig.) 5 refs.

  18. Public health protection after nuclear and radiation disasters

    International Nuclear Information System (INIS)

    Du Liqing; Liu Qiang; Fan Feiyue

    2012-01-01

    The Fukushima Daiichi nuclear disaster in Japan was compounded by a massive earthquake and an immense tsunami. Some crucial lessons are reviewed in this paper, including emergency response for natural-technological disasters, international effects, public psychological health effects, and communication between the government and the public. (authors)

  19. Exact Solutions in 3D New Massive Gravity

    Science.gov (United States)

    Ahmedov, Haji; Aliev, Alikram N.

    2011-01-01

    We show that the field equations of new massive gravity (NMG) consist of a massive (tensorial) Klein-Gordon-type equation with a curvature-squared source term and a constraint equation. We also show that, for algebraic type D and N spacetimes, the field equations of topologically massive gravity (TMG) can be thought of as the “square root” of the massive Klein-Gordon-type equation. Using this fact, we establish a simple framework for mapping all type D and N solutions of TMG into NMG. Finally, we present new examples of type D and N solutions to NMG.

  20. Extreme compression for extreme conditions: pilot study to identify optimal compression of CT images using MPEG-4 video compression.

    Science.gov (United States)

    Peterson, P Gabriel; Pak, Sung K; Nguyen, Binh; Jacobs, Genevieve; Folio, Les

    2012-12-01

    This study aims to evaluate the utility of compressed computed tomography (CT) studies (to expedite transmission) using Moving Picture Experts Group 4 (MPEG-4) movie formatting in combat hospitals when guiding major treatment regimens. This retrospective analysis was approved by the Walter Reed Army Medical Center institutional review board with a waiver of the informed consent requirement. Twenty-five CT chest, abdomen, and pelvis exams were converted from Digital Imaging and Communications in Medicine (DICOM) to MPEG-4 movie format at various compression ratios. Three board-certified radiologists reviewed various levels of compression on emergent CT findings in 25 combat casualties and compared them with the interpretation of the original series. A Universal Trauma Window was selected at -200 HU level and 1,500 HU width, then compressed at three lossy levels. Sensitivities and specificities for each reviewer were calculated along with 95% confidence intervals using the method of generalized estimating equations. The compression ratios compared were 171:1, 86:1, and 41:1, with combined sensitivities of 90% (95% confidence interval, 79-95), 94% (87-97), and 100% (93-100), respectively. Combined specificities were 100% (85-100), 100% (85-100), and 96% (78-99), respectively. The introduction of CT in combat hospitals, with increasing detectors and image data in recent military operations, has increased the need for effective teleradiology, mandating compression technology. Image compression is currently used to transmit images from combat hospitals to tertiary care centers with subspecialists, and our study demonstrates MPEG-4 technology as a reasonable means of achieving such compression.

  1. Educating nuclear engineers at German universities

    International Nuclear Information System (INIS)

    Knorr, J.

    1995-01-01

    Nuclear technology is a relatively young university discipline. Yet, as a consequence of the declining public acceptance of the peaceful use of nuclear power, its very existence is already being threatened at many universities. However, if Germany needs nuclear power, which undoubtedly is the case, highly qualified, committed experts are required above all. Nuclear technology develops internationally; consequently, university education must also meet international standards. Generally, university education has been found to be the most effective way of increasing the number of scientific and engineering personnel. Nuclear techniques have meanwhile found acceptance in many other scientific disciplines, thus advancing those branches of science. Teaching needs research; like nuclear technology research at the national research centers, the universities too are suffering massive financial disadvantages. Research is possible only if outside funds are solicited, which increases dependency and decreases basic research. (orig.) [de]

  2. Spacetime structure of massive Majorana particles and massive gravitino

    CERN Document Server

    Ahluwalia, D V

    2003-01-01

    The profound difference between Dirac and Majorana particles is traced back to the possibility of having physically different constructs in the (1/2,0) ⊕ (0,1/2) representation space. Contrary to Dirac particles, Majorana-particle propagators are shown to differ from the simple linear γ^μ p_μ structure. Furthermore, neither Majorana particles nor their antiparticles can be associated with a well-defined arrow of time. The inevitable consequence of this peculiarity is the particle-antiparticle metamorphosis giving rise to neutrinoless double beta decay, on the one side, and enabling spin-1/2 fields to act as gauge fields, gauginos, on the other side. The second part of the lecture notes is devoted to the massive gravitino. We argue that a spin measurement in the rest frame for an unpolarized ensemble of massive gravitinos, associated with the spinor-vector [(1/2,0) ⊕ (0,1/2)] ⊗ (1/2,1/2) representation space, would yield the result 3/2 with probability one half, and 1/2 with probability one half. The ...

  3. Massive vector fields and black holes

    International Nuclear Information System (INIS)

    Frolov, V.P.

    1977-04-01

    A massive vector field inside the event horizon created by the static sources located outside the black hole is investigated. It is shown that the back reaction of such a field on the metric near r = 0 cannot be neglected. The possibility of the space-time structure changing near r = 0 due to the external massive field is discussed

  4. Nuclear power development in Japan

    International Nuclear Information System (INIS)

    Mishiro, M.

    2000-01-01

    This article describes the advantages of nuclear energy for Japan. In 1997 the composition of the total primary energy supply (TPES) was oil 52.7%, coal 16.5%, nuclear 16.1% and natural gas 10.7%. Nuclear power has a significant role to play in contributing to 3 national interests: i) energy security, ii) economic growth and iii) environmental protection. Energy security is assured because a stable supply of uranium fuel can reasonably be expected despite dependence on imports from abroad. Economic growth implies the reduction of energy costs; as nuclear power is capital intensive, its generation cost is less affected by the fuel cost, so nuclear power can achieve low costs through a high capacity utilization factor. Fossil fuels have substantial impacts on the environment, such as global warming and acid rain, by releasing massive quantities of CO2, so nuclear power is a major option for meeting the Kyoto limitations. In Japan, nuclear power is expected to reach 17% of TPES and 45% of electricity generated by 2010. (A.C.)

  5. Investigation of the status quo of massive blood transfusion in China and a synopsis of the proposed guidelines for massive blood transfusion.

    Science.gov (United States)

    Yang, Jiang-Cun; Wang, Qiu-Shi; Dang, Qian-Li; Sun, Yang; Xu, Cui-Xiang; Jin, Zhan-Kui; Ma, Ting; Liu, Jing

    2017-08-01

    The aim of this study was to provide an overview of massive transfusion in Chinese hospitals, identify the important indications for massive transfusion and corrective therapies based on clinical evidence and supporting experimental studies, and propose guidelines for the management of massive transfusion. This multiregion, multicenter retrospective study involved a Massive Blood Transfusion Coordination Group composed of 50 clinical experts specializing in blood transfusion, cardiac surgery, anesthesiology, obstetrics, general surgery, and medical statistics from 20 tertiary general hospitals across 5 regions in China. Data were collected for all patients who received ≥10 U of red blood cells within 24 hours in the participating hospitals from January 1, 2009 to December 31, 2010, including patient demographics; pre-, peri-, and post-operative clinical characteristics; laboratory test results before, during, and after transfusion; and patient mortality at post-transfusion and discharge. We also designed an in vitro hemodilution model to investigate the changes in blood coagulation indices during massive transfusion and the correction of coagulopathy by supplementing blood components under different hemodilutions. The experimental data, in combination with the clinical evidence, were used to determine the optimal proportion and timing of blood component supplementation during massive transfusion. Based on the findings of the present study, together with an extensive review of domestic and international transfusion-related literature and consensus feedback from the 50 experts, we drafted guidelines on massive blood transfusion that will help Chinese hospitals develop standardized protocols for massive blood transfusion.

  6. Strength and deformation behaviors of veined marble specimens after vacuum heat treatment under conventional triaxial compression

    Science.gov (United States)

    Su, Haijian; Jing, Hongwen; Yin, Qian; Yu, Liyuan; Wang, Yingchao; Wu, Xingjie

    2017-10-01

    The mechanical behaviors of rocks affected by high temperature and stress are generally believed to be significant for the stability of certain projects involving rock, such as nuclear waste storage and geothermal resource exploitation. In this paper, veined marble specimens were subjected to high-temperature treatment and then used in conventional triaxial compression tests to investigate the effects of temperature, confining pressure, and vein angle on strength and deformation behaviors. The results show that the strength and deformation parameters of the veined marble specimens changed with temperature, with a critical temperature of 600 °C. The triaxial compression strength of a horizontal vein (β = 90°) is clearly larger than that of a vertical vein (β = 0°). The triaxial compression strength, elasticity modulus, and secant modulus have an approximately linear relation to the confining pressure. Finally, the Mohr-Coulomb and Hoek-Brown criteria were each used to analyze the effect of confining pressure on triaxial compression strength.
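
    For reference, the two failure criteria mentioned express peak axial stress as a function of confining pressure. The small sketch below encodes their textbook forms; the function names and example parameter values are illustrative assumptions, and the fitted constants for this marble are given in the paper, not here.

        import numpy as np

        def mohr_coulomb_sigma1(sigma3, cohesion, phi_deg):
            # sigma1 = sigma3 * tan^2(45 + phi/2) + 2 * c * tan(45 + phi/2)
            k = np.tan(np.radians(45.0 + phi_deg / 2.0)) ** 2
            return sigma3 * k + 2.0 * cohesion * np.sqrt(k)

        def hoek_brown_sigma1(sigma3, sigma_ci, mb, s=1.0, a=0.5):
            # Generalized Hoek-Brown: sigma1 = sigma3 + sigma_ci*(mb*sigma3/sigma_ci + s)^a
            return sigma3 + sigma_ci * (mb * sigma3 / sigma_ci + s) ** a

        print(mohr_coulomb_sigma1(np.array([0.0, 10.0, 20.0]), cohesion=15.0, phi_deg=35.0))

    The approximately linear strength-versus-confining-pressure trend reported above corresponds to the Mohr-Coulomb form; Hoek-Brown adds curvature controlled by the exponent a.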

  7. Mammographic compression in Asian women.

    Science.gov (United States)

    Lau, Susie; Abdul Aziz, Yang Faridah; Ng, Kwan Hoong

    2017-01-01

    To investigate: (1) the variability of mammographic compression parameters amongst Asian women; and (2) the effects of reducing compression force on image quality and mean glandular dose (MGD) in Asian women, based on a phantom study. We retrospectively collected 15818 raw digital mammograms from 3772 Asian women aged 35-80 years who underwent screening or diagnostic mammography between Jan 2012 and Dec 2014 at our center. The mammograms were processed using a volumetric breast density (VBD) measurement software (Volpara) to assess compression force, compression pressure, compressed breast thickness (CBT), breast volume, VBD and MGD against breast contact area. The effects of reducing compression force on image quality and MGD were also evaluated based on measurements obtained from 105 Asian women, as well as using the RMI156 Mammographic Accreditation Phantom and polymethyl methacrylate (PMMA) slabs. Compression force, compression pressure, CBT, breast volume, VBD and MGD correlated significantly with breast contact area in Asian women. The median compression force should be about 8.1 daN compared to the current 12.0 daN. Decreasing compression force from 12.0 daN to 9.0 daN increased CBT by 3.3±1.4 mm, increased MGD by 6.2-11.0%, and caused no significant effects on image quality (p>0.05). The force-standardized protocol led to widely variable compression parameters in Asian women. Based on the phantom study, it is feasible to reduce compression force by up to 32.5% with minimal effects on image quality and MGD.
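
    Compression pressure in such studies is simply the applied force divided by the breast contact area. A one-line sketch of that conversion (units and function name assumed for illustration):

        def compression_pressure_kpa(force_daN, contact_area_cm2):
            # 1 daN = 10 N and 1 cm^2 = 1e-4 m^2, so the pressure in kPa is:
            return (force_daN * 10.0) / (contact_area_cm2 * 1e-4) / 1000.0

        print(compression_pressure_kpa(12.0, 100.0))  # 12.0 daN over 100 cm^2 -> 12.0 kPa

    This is why pressure-standardized protocols adapt the force to each breast's contact area instead of applying one fixed force to everyone.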

  8. Management of massive haemoptysis | Adegboye | Nigerian Journal ...

    African Journals Online (AJOL)

    Background: This study compares two management techniques in the treatment of massive haemoptysis. Method: All patients with massive haemoptysis treated between January 1969 and December 1980 (group 1) were retrospectively reviewed and those prospectively treated between January 1981 and August 1999 ...

  9. Topologically massive gravity and Ricci-Cotton flow

    Energy Technology Data Exchange (ETDEWEB)

    Lashkari, Nima; Maloney, Alexander, E-mail: lashkari@physics.mcgill.ca, E-mail: maloney@physics.mcgill.ca [McGill Physics Department, 3600 rue University, Montreal, QC H3A 2T8 (Canada)

    2011-05-21

    We consider topologically massive gravity (TMG), which is three-dimensional general relativity with a cosmological constant and a gravitational Chern-Simons term. When the cosmological constant is negative the theory has two potential vacuum solutions: anti-de Sitter space and warped anti-de Sitter space. The theory also contains a massive graviton state which renders these solutions unstable for certain values of the parameters and boundary conditions. We study the decay of these solutions due to the condensation of the massive graviton mode using Ricci-Cotton flow, which is the appropriate generalization of Ricci flow to TMG. When the Chern-Simons coupling is small the AdS solution flows to warped AdS by the condensation of the massive graviton mode. When the coupling is large the situation is reversed, and warped AdS flows to AdS. Minisuperspace models are constructed where these flows are studied explicitly.

  10. Topologically massive gravity and Ricci-Cotton flow

    International Nuclear Information System (INIS)

    Lashkari, Nima; Maloney, Alexander

    2011-01-01

    We consider topologically massive gravity (TMG), which is three-dimensional general relativity with a cosmological constant and a gravitational Chern-Simons term. When the cosmological constant is negative the theory has two potential vacuum solutions: anti-de Sitter space and warped anti-de Sitter space. The theory also contains a massive graviton state which renders these solutions unstable for certain values of the parameters and boundary conditions. We study the decay of these solutions due to the condensation of the massive graviton mode using Ricci-Cotton flow, which is the appropriate generalization of Ricci flow to TMG. When the Chern-Simons coupling is small the AdS solution flows to warped AdS by the condensation of the massive graviton mode. When the coupling is large the situation is reversed, and warped AdS flows to AdS. Minisuperspace models are constructed where these flows are studied explicitly.

  11. Neutron stars structure in the context of massive gravity

    Energy Technology Data Exchange (ETDEWEB)

    Hendi, S.H.; Bordbar, G.H.; Panah, B. Eslam; Panahiyan, S., E-mail: hendi@shirazu.ac.ir, E-mail: ghbordbar@shirazu.ac.ir, E-mail: behzad.eslampanah@gmail.com, E-mail: sh.panahiyan@gmail.com [Physics Department and Biruni Observatory, College of Sciences, Shiraz University, Shiraz 71454 (Iran, Islamic Republic of)

    2017-07-01

    Motivated by the recent interests in spin-2 massive gravitons, we study the structure of neutron star in the context of massive gravity. The modifications of TOV equation in the presence of massive gravity are explored in 4 and higher dimensions. Next, by considering the modern equation of state for the neutron star matter (which is extracted by the lowest order constrained variational (LOCV) method with the AV18 potential), different physical properties of the neutron star (such as Le Chatelier's principle, stability and energy conditions) are investigated. It is shown that consideration of the massive gravity has specific contributions into the structure of neutron star and introduces new prescriptions for the massive astrophysical objects. The mass-radius relation is examined and the effects of massive gravity on the Schwarzschild radius, average density, compactness, gravitational redshift and dynamical stability are studied. Finally, a relation between mass and radius of neutron star versus the Planck mass is extracted.
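
    For orientation, the unmodified Tolman-Oppenheimer-Volkoff (TOV) equation that such work generalizes is the standard general-relativistic hydrostatic balance; the massive-gravity correction terms are derived in the paper itself and are not reproduced here:

        \frac{dP}{dr} = -\,\frac{G\left(\rho + P/c^{2}\right)\left(m(r) + 4\pi r^{3} P/c^{2}\right)}{r^{2}\left(1 - 2Gm(r)/(r c^{2})\right)},
        \qquad
        \frac{dm}{dr} = 4\pi r^{2} \rho

    Integrating these equations from the center outward with an equation of state P(ρ) (here, the LOCV one) yields the mass-radius relation referred to in the abstract.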

  12. Neutron stars structure in the context of massive gravity

    Science.gov (United States)

    Hendi, S. H.; Bordbar, G. H.; Eslam Panah, B.; Panahiyan, S.

    2017-07-01

    Motivated by the recent interests in spin-2 massive gravitons, we study the structure of neutron star in the context of massive gravity. The modifications of TOV equation in the presence of massive gravity are explored in 4 and higher dimensions. Next, by considering the modern equation of state for the neutron star matter (which is extracted by the lowest order constrained variational (LOCV) method with the AV18 potential), different physical properties of the neutron star (such as Le Chatelier's principle, stability and energy conditions) are investigated. It is shown that consideration of the massive gravity has specific contributions into the structure of neutron star and introduces new prescriptions for the massive astrophysical objects. The mass-radius relation is examined and the effects of massive gravity on the Schwarzschild radius, average density, compactness, gravitational redshift and dynamical stability are studied. Finally, a relation between mass and radius of neutron star versus the Planck mass is extracted.

  13. Neutron stars structure in the context of massive gravity

    International Nuclear Information System (INIS)

    Hendi, S.H.; Bordbar, G.H.; Panah, B. Eslam; Panahiyan, S.

    2017-01-01

    Motivated by the recent interests in spin-2 massive gravitons, we study the structure of neutron star in the context of massive gravity. The modifications of TOV equation in the presence of massive gravity are explored in 4 and higher dimensions. Next, by considering the modern equation of state for the neutron star matter (which is extracted by the lowest order constrained variational (LOCV) method with the AV18 potential), different physical properties of the neutron star (such as Le Chatelier's principle, stability and energy conditions) are investigated. It is shown that consideration of the massive gravity has specific contributions into the structure of neutron star and introduces new prescriptions for the massive astrophysical objects. The mass-radius relation is examined and the effects of massive gravity on the Schwarzschild radius, average density, compactness, gravitational redshift and dynamical stability are studied. Finally, a relation between mass and radius of neutron star versus the Planck mass is extracted.

  14. Effects of nuclear structure in the spin-dependent scattering of weakly interacting massive particles

    Science.gov (United States)

    Nikolaev, M. A.; Klapdor-Kleingrothaus, H. V.

    1993-06-01

    We present calculations of the nuclear form factors for spin-dependent elastic scattering of dark matter WIMPs from the ^123Te and ^131Xe isotopes, proposed to be used for dark matter detection. A method based on the theory of finite Fermi systems was used to describe the reduction of the single-particle spin-dependent matrix elements in the nuclear medium. Nucleon single-particle states were calculated in a realistic shell model potential; pairing effects were treated within the BCS model. The coupling of the lowest single-particle levels in ^123Te to collective 2+ excitations of the core was taken into account phenomenologically. The calculated nuclear form factors are considerably smaller than the single-particle ones at low momentum transfer. At high momentum transfer some dynamical amplification takes place due to the pion exchange term in the effective nuclear interaction, but as the momentum transfer increases further, the quenching effect disappears. The shape of the nuclear form factor for the ^131Xe isotope differs from the one obtained using an oscillator basis.

  15. Effects of nuclear structure in the spin-dependent scattering of weakly interacting massive particles

    International Nuclear Information System (INIS)

    Nikolaev, M.A.; Klapdor-Kleingrothaus, H.V.

    1993-01-01

    We present calculations of the nuclear form factors for spin-dependent elastic scattering of dark matter WIMPs from the ^123Te and ^131Xe isotopes, proposed to be used for dark matter detection. A method based on the theory of finite Fermi systems was used to describe the reduction of the single-particle spin-dependent matrix elements in the nuclear medium. Nucleon single-particle states were calculated in a realistic shell model potential; pairing effects were treated within the BCS model. The coupling of the lowest single-particle levels in ^123Te to collective 2+ excitations of the core was taken into account phenomenologically. The calculated nuclear form factors are considerably smaller than the single-particle ones at low momentum transfer. At high momentum transfer some dynamical amplification takes place due to the pion exchange term in the effective nuclear interaction, but as the momentum transfer increases further, the quenching effect disappears. The shape of the nuclear form factor for the ^131Xe isotope differs from the one obtained using an oscillator basis. (orig.)

  16. Simultaneous heating and compression of irradiated graphite during synchrotron microtomographic imaging

    Science.gov (United States)

    Bodey, A. J.; Mileeva, Z.; Lowe, T.; Williamson-Brown, E.; Eastwood, D. S.; Simpson, C.; Titarenko, V.; Jones, A. N.; Rau, C.; Mummery, P. M.

    2017-06-01

    Nuclear graphite is used as a neutron moderator in fission power stations. To investigate the microstructural changes that occur during such use, it has been studied for the first time by X-ray microtomography with in situ heating and compression. This experiment was the first to involve simultaneous heating and mechanical loading of radioactive samples at Diamond Light Source, and represented the first study of radioactive materials at the Diamond-Manchester Imaging Branchline I13-2. Engineering methods and safety protocols were developed to ensure the safe containment of irradiated graphite as it was simultaneously compressed to 450 N in a Deben 10 kN Open-Frame Rig and heated to 300°C with dual focused infrared lamps. Central to safe containment was a double containment vessel which prevented escape of airborne particulates while enabling compression via a moveable ram and the transmission of infrared light to the sample. Temperature measurements were made in situ via thermocouple readout. During heating and compression, samples were simultaneously rotated and imaged with polychromatic X-rays. The resulting microtomograms are being studied via digital volume correlation to provide insights into how thermal expansion coefficients and microstructure are affected by irradiation history, load and heat. Such information will be key to improving the accuracy of graphite degradation models which inform safety margins at power stations.

  17. Permutations of massive vacua

    Energy Technology Data Exchange (ETDEWEB)

    Bourget, Antoine [Department of Physics, Universidad de Oviedo, Avenida Calvo Sotelo 18, 33007 Oviedo (Spain); Troost, Jan [Laboratoire de Physique Théorique de l’É cole Normale Supérieure, CNRS,PSL Research University, Sorbonne Universités, 75005 Paris (France)

    2017-05-09

    We discuss the permutation group G of massive vacua of four-dimensional gauge theories with N=1 supersymmetry that arises upon tracing loops in the space of couplings. We concentrate on superconformal N=4 and N=2 theories with N=1 supersymmetry preserving mass deformations. The permutation group G of massive vacua is the Galois group of characteristic polynomials for the vacuum expectation values of chiral observables. We provide various techniques to effectively compute characteristic polynomials in given theories, and we deduce the existence of varying symmetry breaking patterns of the duality group depending on the gauge algebra and matter content of the theory. Our examples give rise to interesting field extensions of spaces of modular forms.

  18. Massive stars in galaxies

    International Nuclear Information System (INIS)

    Humphreys, R.M.

    1987-01-01

    The relationship between the morphologic type of a galaxy and the evolution of its massive stars is explored, reviewing observational results for nearby galaxies. The data are presented in diagrams, and it is found that the massive-star populations of most Sc spiral galaxies and irregular galaxies are similar, while those of Sb spirals such as M 31 and M 81 may be affected by morphology (via differences in the initial mass function or star-formation rate). Consideration is also given to the stability-related upper luminosity limit in the H-R diagram of hypergiant stars (attributed to radiation pressure in hot stars and turbulence in cool stars) and the goals of future observation campaigns. 88 references

  19. Thermofluidic compression effects to achieve combustion in a low-compression scramjet engine

    Science.gov (United States)

    Moura, A. F.; Wheatley, V.; Jahn, I.

    2017-12-01

    The compression provided by a scramjet inlet is an important parameter in its design. It must be low enough to limit thermal and structural loads and stagnation pressure losses, but high enough to provide the conditions favourable for combustion. Inlets are typically designed to achieve sufficient compression without accounting for the fluidic, and subsequently thermal, compression provided by the fuel injection, which can enable robust combustion in a low-compression engine. This is investigated using Reynolds-averaged Navier-Stokes numerical simulations of a simplified scramjet engine designed to have insufficient compression to auto-ignite fuel in the absence of thermofluidic compression. The engine was designed with a wide rectangular combustor and a single centrally located injector, in order to reduce three-dimensional effects of the walls on the fuel plume. By varying the injected mass flow rate of hydrogen fuel (equivalence ratios of 0.22, 0.17, and 0.13), it is demonstrated that higher equivalence ratios lead to earlier ignition and more rapid combustion, even though mean conditions in the combustor change by no more than 5% for pressure and 3% for temperature with higher equivalence ratio. By supplementing the lower equivalence ratio with helium to achieve a higher mass flow rate, it is confirmed that these benefits are primarily due to the local compression provided by the extra injected mass. Investigation of the conditions around the fuel plume indicated two connected mechanisms. The higher mass flow rate for higher equivalence ratios generated a stronger injector bow shock that compresses the free-stream gas, increasing OH radical production and promoting ignition. This was observed both in the higher equivalence ratio case and in the case with helium. This earlier ignition led to increased temperature and pressure downstream and, consequently, stronger combustion. The heat release from combustion provided thermal compression in the combustor, further reinforcing this process.

  20. Nitrogen chronology of massive main sequence stars

    NARCIS (Netherlands)

    Köhler, K.; Borzyszkowski, M.; Brott, I.; Langer, N.; de Koter, A.

    2012-01-01

    Context. Rotational mixing in massive main sequence stars is predicted to monotonically increase their surface nitrogen abundance with time. Aims. We use this effect to design a method for constraining the age and the inclination angle of massive main sequence stars, given their observed luminosity,

  1. Using massive digital libraries a LITA guide

    CERN Document Server

    Weiss, Andrew

    2014-01-01

    Some have viewed the ascendance of the digital library as some kind of existential apocalypse, nothing less than the beginning of the end for the traditional library. But Weiss, recognizing the concept of the library as a "big idea" that has been implemented in many ways over thousands of years, is not so gloomy. In this thought-provoking and unabashedly optimistic book, he explores how massive digital libraries are already adapting to society's needs, and looks ahead to the massive digital libraries of tomorrow, covering: the author's criteria for defining massive digital libraries; a history o

  2. Two-dimensional thermofield bosonization II: Massive fermions

    International Nuclear Information System (INIS)

    Amaral, R.L.P.G.; Belvedere, L.V.; Rothe, K.D.

    2008-01-01

    We consider the perturbative computation of the N-point function of chiral densities of massive free fermions at finite temperature within the thermofield dynamics approach. The infinite series in the mass parameter for the N-point functions are computed in the fermionic formulation and compared with the corresponding perturbative series in the interaction parameter in the bosonized thermofield formulation. Thereby we establish in thermofield dynamics the formal equivalence of the massive free fermion theory with the sine-Gordon thermofield model for a particular value of the sine-Gordon parameter. We extend the thermofield bosonization to include the massive Thirring model

  3. Nuclear fuel element

    International Nuclear Information System (INIS)

    Iwano, Yoshihiko.

    1993-01-01

    Microfine cracks with a depth of less than 10% of the pipe thickness are disposed radially from the central axis, at intervals of less than 100 microns, over the entire inner circumferential surface of a zirconium alloy fuel cladding tube. For manufacturing such a nuclear fuel element, the inside of the cladding tube is first filled with an electrolyte solution of potassium chloride. Then, electrolysis is conducted using the cladding tube as the anode and the electrolyte solution as the cathode, covering the inner surface of the cladding tube with a zirconium dioxide layer of a predetermined thickness. Subsequently, the cladding tube is laid on a smooth steel plate and lightly compressed by another smooth steel plate to form microfine cracks in the zirconium dioxide layer on the inner surface of the cladding tube. This compressing operation is applied continuously while the cladding tube is rotated. This inhibits the progress of cracks on the inner surface of the cladding tube, thereby preventing failure of the cladding tube even when a pellet/cladding tube mechanical interaction is applied. Accordingly, the reliability of the nuclear fuel elements is improved. (I.N.)

  4. Compressive laser ranging.

    Science.gov (United States)

    Babbitt, Wm Randall; Barber, Zeb W; Renner, Christoffer

    2011-12-15

    Compressive sampling has been previously proposed as a technique for sampling radar returns and determining sparse range profiles with a reduced number of measurements compared to conventional techniques. By employing modulation on both transmission and reception, compressive sensing in ranging is extended to the direct measurement of range profiles without intermediate measurement of the return waveform. This compressive ranging approach enables the use of pseudorandom binary transmit waveforms and return modulation, along with low-bandwidth optical detectors to yield high-resolution ranging information. A proof-of-concept experiment is presented. With currently available compact, off-the-shelf electronics and photonics, such as high data rate binary pattern generators and high-bandwidth digital optical modulators, compressive laser ranging can readily achieve subcentimeter resolution in a compact, lightweight package.
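
    The underlying measurement model is the standard compressed-sensing one: a sparse range profile observed through pseudorandom binary correlations. The sketch below recovers such a profile with orthogonal matching pursuit; the sizes, the random sensing matrix, and the choice of OMP are illustrative assumptions, not the authors' processing chain.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: recover a K-sparse range profile of length N
# from M << N correlations against pseudorandom binary waveforms.
N, M, K = 512, 96, 4
A = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)  # binary sensing matrix
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.uniform(0.5, 1.0, K)
y = A @ x_true  # low-bandwidth detector outputs

# Orthogonal matching pursuit: greedily pick the range bin most
# correlated with the residual, then re-fit on the support by least squares.
support, residual = [], y.copy()
for _ in range(K):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ sol

x_hat = np.zeros(N)
x_hat[support] = sol
print("recovered bins:", sorted(support))
print("reconstruction error:", np.linalg.norm(x_hat - x_true))
```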

  5. Method of manufacturing nuclear fuel pellet

    International Nuclear Information System (INIS)

    Oguma, Masaomi; Masuda, Hiroshi; Hirai, Mutsumi; Tanabe, Isami; Yuda, Ryoichi.

    1989-01-01

    In a method of manufacturing nuclear fuel pellets by compression molding an oxide powder of nuclear fuel material followed by sintering, a metal nuclear material is mixed with an oxide powder of the nuclear fuel material. As the metal nuclear fuel material, whisker or wire-like fine wire or granules of metal uranium can be used effectively. As a result, a fuel pellet in which the metal nuclear fuel is disposed in a network-like manner can be obtained. The pellet shows a great effect of preventing thermal stress destruction of pellets upon increase of fuel rod power as compared with conventional pellets. Further, the metal nuclear fuel material acts as an oxygen getter to suppress the increase of O/M ratio of the pellets. Further, it is possible to reduce the swelling of pellet at high burn-up degree. (T.M.)

  6. Mining compressing sequential problems

    NARCIS (Netherlands)

    Hoang, T.L.; Mörchen, F.; Fradkin, D.; Calders, T.G.K.

    2012-01-01

    Compression based pattern mining has been successfully applied to many data mining tasks. We propose an approach based on the minimum description length principle to extract sequential patterns that compress a database of sequences well. We show that mining compressing patterns is NP-Hard and

  7. Limiting Accretion onto Massive Stars by Fragmentation-Induced Starvation

    Energy Technology Data Exchange (ETDEWEB)

    Peters, Thomas; /ZAH, Heidelberg; Klessen, Ralf S.; /ZAH, Heidelberg /KIPAC, Menlo Park; Mac Low, Mordecai-Mark; /Amer. Museum Natural Hist.; Banerjee, Robi; /ZAH, Heidelberg

    2010-08-25

    Massive stars influence their surroundings through radiation, winds, and supernova explosions far out of proportion to their small numbers. However, the physical processes that initiate and govern the birth of massive stars remain poorly understood. Two widely discussed models are monolithic collapse of molecular cloud cores and competitive accretion. To learn more about massive star formation, we perform simulations of the collapse of rotating, massive, cloud cores including radiative heating by both non-ionizing and ionizing radiation using the FLASH adaptive mesh refinement code. These simulations show fragmentation from gravitational instability in the enormously dense accretion flows required to build up massive stars. Secondary stars form rapidly in these flows and accrete mass that would have otherwise been consumed by the massive star in the center, in a process that we term fragmentation-induced starvation. This explains why massive stars are usually found as members of high-order stellar systems that themselves belong to large clusters containing stars of all masses. The radiative heating does not prevent fragmentation, but does lead to a higher Jeans mass, resulting in fewer and more massive stars than would form without the heating. This mechanism reproduces the observed relation between the total stellar mass in the cluster and the mass of the largest star. It predicts strong clumping and filamentary structure in the center of collapsing cores, as has recently been observed. We speculate that a similar mechanism will act during primordial star formation.

  8. Nuclear power in our societies; Le nucleaire dans nos societes

    Energy Technology Data Exchange (ETDEWEB)

    Fardeau, J.C.

    2011-07-01

    Hiroshima, Chernobyl and Fukushima Daiichi are the well-known, sad milestones on the path toward a broad development of nuclear energy. They are so well known that they have, certainly for a long time and in a very unfair way, blurred the positive image of nuclear energy in the public eye. The media's appetite for disasters feeds this fear and pushes aside the achievements of nuclear science, such as nuclear medicine, and the assets of nuclear power, such as its near absence of greenhouse gas emissions and its massive capacity to produce electricity or heat. The only way to improve public acceptance of nuclear energy is to reduce this fear through a better public understanding of nuclear science. (A.C.)

  9. Hunting for a massive neutrino

    CERN Document Server

    AUTHOR|(CDS)2108802

    1997-01-01

    A great effort is devoted by many groups of physicists all over the world to answering the following question: is the neutrino massive? This question has profound implications for particle physics, astrophysics and cosmology, in relation to the so-called Dark Matter puzzle. The neutrino oscillation process, in particular, can only occur if the neutrino is massive. An overview of the neutrino mass measurements, of the oscillation formalism and of the experiments will be given, also in connection with the present experimental programme at CERN with the two experiments CHORUS and NOMAD.

  10. Microbunching and RF Compression

    International Nuclear Information System (INIS)

    Venturini, M.; Migliorati, M.; Ronsivalle, C.; Ferrario, M.; Vaccarezza, C.

    2010-01-01

    Velocity bunching (or RF compression) represents a promising technique, complementary to magnetic compression, to achieve the high peak current required in the linac drivers for FELs. Here we report on recent progress aimed at characterizing RF compression from the point of view of the microbunching instability. We emphasize the development of a linear theory for the gain function of the instability and its validation against macroparticle simulations, which together represent a useful tool in the evaluation of compression schemes for FEL sources.

  11. Optical pulse compression

    International Nuclear Information System (INIS)

    Glass, A.J.

    1975-01-01

    The interest in using large lasers to achieve a very short and intense pulse for generating fusion plasma has provided a strong impetus to reexamine the possibilities of optical pulse compression at high energy. Pulse compression allows one to generate pulses of long duration (minimizing damage problems) and subsequently compress optical pulses to achieve the short pulse duration required for specific applications. The ideal device for carrying out this program has not been developed. Of the two approaches considered, the Gires--Tournois approach is limited by the fact that the bandwidth and compression are intimately related, so that the group delay dispersion times the square of the bandwidth is about unity for all simple Gires--Tournois interferometers. The Treacy grating pair does not suffer from this limitation, but is inefficient because diffraction generally occurs in several orders and is limited by the problem of optical damage to the grating surfaces themselves. Nonlinear and parametric processes were explored. Some pulse compression was achieved by these techniques; however, they are generally difficult to control and are not very efficient. (U.S.)

  12. On the singularities of massive superstring amplitudes

    NARCIS (Netherlands)

    Foda, O.

    1987-01-01

    Superstring one-loop amplitudes with massive external states are shown to be in general ill-defined due to internal on-shell propagators. However, we argue that since any massive string state (in the uncompactified theory) has a finite lifetime to decay into massless particles, such amplitudes are

  13. Defining the new initiatives of struggle against the proliferation of arms of massive destruction

    International Nuclear Information System (INIS)

    Hautecouverture, Benjamin

    2007-01-01

    The author discusses the various terms of the concept of 'new initiatives of struggle against arms of massive destruction', examining in what sense these initiatives are new and how they address new threats (State-based proliferation of arms of massive destruction, terrorism of massive destruction). He comments on the background of these initiatives, which may be launched to respond to a specific threat or to implement specific means of struggle. He identifies the main characteristics of these political or institutional initiatives: they are pragmatic, functional and instrumental, have different scopes, and are based on institutional flexibility, cooperation and partnership. For several of these initiatives (Proliferation Security Initiative, Container Security Initiative, Global Initiative to Combat Nuclear Terrorism, and so on), the author indicates whether they are unilateral, bilateral, supported by regional organisations, by the UN, by operational international organisations, or by inter-governmental groups. He finally outlines the questions raised by these initiatives: how should their impact be assessed? Must they be more integrated? Can they, or must they, have a better defined role in the global regime of non-proliferation and disarmament?

  14. Multi-objective optimization and exergoeconomic analysis of a combined cooling, heating and power based compressed air energy storage system

    International Nuclear Information System (INIS)

    Yao, Erren; Wang, Huanran; Wang, Ligang; Xi, Guang; Maréchal, François

    2017-01-01

    Highlights: • A novel tri-generation based compressed air energy storage system. • Trade-off between efficiency and cost to highlight the best compromise solution. • Components with the largest irreversibility and potential improvements highlighted. - Abstract: Compressed air energy storage technologies can improve the supply capacity and stability of the electricity grid, particularly when fluctuating renewable energies are massively connected. Incorporating combined cooling, heating and power systems into compressed air energy storage can achieve stable operation as well as efficient energy utilization. In this paper, a novel combined cooling, heating and power based compressed air energy storage system is proposed. The system combines a gas engine, supplemental heat exchangers and an ammonia-water absorption refrigeration system. The design trade-off between the thermodynamic and economic objectives, i.e., the overall exergy efficiency and the total specific cost of product, is investigated by an evolutionary multi-objective algorithm for the proposed combined system. It is found that, as the exergy efficiency increases, the total product unit cost is at first hardly affected, but then rises substantially. The best trade-off solution is selected, with an overall exergy efficiency of 53.04% and a total product unit cost of 20.54 cent/kWh. The variation of the decision variables with the exergy efficiency indicates that the compressor, the turbine and the heat exchanger preheating the turbine inlet air are the key pieces of equipment for cost-effectively pursuing a higher exergy efficiency. An exergoeconomic analysis also reveals that, for the best trade-off solution, the investment costs of the compressor and of the two heat exchangers recovering compression heat and heating the compressed air for expansion should be reduced (particularly the latter), while the thermodynamic performance of the gas engine needs to be improved.
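
    The efficiency-versus-cost trade-off described above is usually summarized as a Pareto front of non-dominated designs. Below is a generic non-dominated filter over hypothetical candidate designs; it only illustrates the selection concept and does not reproduce the paper's evolutionary algorithm or plant model.

```python
import numpy as np

def pareto_front(efficiency, cost):
    """Indices of non-dominated designs: no other design has both
    higher exergy efficiency and lower specific product cost."""
    front = []
    n = len(efficiency)
    for i in range(n):
        dominated = any(
            efficiency[j] >= efficiency[i] and cost[j] <= cost[i]
            and (efficiency[j] > efficiency[i] or cost[j] < cost[i])
            for j in range(n)
        )
        if not dominated:
            front.append(i)
    return front

# Hypothetical designs: (exergy efficiency in %, product cost in cent/kWh)
eff = np.array([48.0, 53.04, 55.0, 50.0])
cost = np.array([19.0, 20.54, 24.0, 22.0])
print(pareto_front(eff, cost))  # design 3 is dominated by design 1
```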

  15. Black holes in massive gravity as heat engines

    Science.gov (United States)

    Hendi, S. H.; Eslam Panah, B.; Panahiyan, S.; Liu, H.; Meng, X.-H.

    2018-06-01

    The paper at hand studies the heat engine provided by black holes in the presence of massive gravity. The main motivation is to investigate the effects of massive gravity on different properties of the heat engine. It will be shown that the massive gravity parameters modify the efficiency of the engine at a significant level. Furthermore, it will be pointed out that it is possible to have a heat engine for non-spherical black holes in massive gravity, and therefore we study the effects of horizon topology on the properties of the heat engine. Surprisingly, it will be shown that the highest efficiency for the heat engine belongs to black holes with a hyperbolic horizon, while the lowest one belongs to spherical black holes.

  16. Wavelet representation of the nuclear dynamics

    International Nuclear Information System (INIS)

    Jouault, B.; Sebille, F.; Mota, V. de la.

    1997-01-01

    The study of transport phenomena in nuclear matter is addressed in a new approach named DYWAN, based on the projection methods of statistical physics and on the mathematical theory of wavelets. Strongly compressed representations of the nuclear systems are obtained with an accurate description of the wave functions and of their antisymmetrization. The results of the approach are illustrated for the ground state description as well as for the dissipative dynamics of nuclei at intermediate energies. (K.A.)

  17. The VLT-FLAMES survey of massive stars

    NARCIS (Netherlands)

    Evans, C.; Langer, N.; Brott, I.; Hunter, I.; Smartt, S.J.; Lennon, D.J.

    2008-01-01

    The VLT-FLAMES Survey of Massive Stars was an ESO Large Programme to understand rotational mixing and stellar mass loss in different metallicity environments, in order to better constrain massive star evolution. We gathered high-quality spectra of over 800 stars in the Galaxy and in the Magellanic

  18. Binding Energy and Compression Modulus of Infinite Nuclear Matter ...

    African Journals Online (AJOL)

    ... MeV at the normal nuclear matter saturation density consistent with the best available density-dependent potentials derived from the G-matrix approach. The results of the incompressibility modulus, k∞ is in excellent agreement with the results of other workers. Journal of the Nigerian Association of Mathematical Physics, ...

  19. Massive cerebellar infarction: a neurosurgical approach

    Directory of Open Access Journals (Sweden)

    Salazar Luis Rafael Moscote

    2015-12-01

    Full Text Available Cerebellar infarction is a challenge for the neurosurgeon: rapid recognition is crucial to avoid devastating consequences. A massive cerebellar infarction shows pseudotumoral behavior and affects at least one third of the volume of the cerebellum. The irrigation of the cerebellum presents anatomical diversity, favoring the appearance of atypical infarcts. Neurosurgical management is critical in massive cerebellar infarction. We present a review of the literature.

  20. Nuclear radiation in water

    International Nuclear Information System (INIS)

    Abrams, H.L.

    1989-01-01

    The manifestations of acute radiation sickness in the post-nuclear-attack period must be recognized and understood in order to apply therapeutic measures appropriately. The syndromes observed (hematopoietic, gastrointestinal, central nervous system) are dose dependent and vary in the degree of patient impairment and lethality. Estimates of mortality and morbidity following a massive exchange vary profoundly, depending on the targeting scenarios, the modes employed, and the meteorologic conditions anticipated. Even the LD-50 dose remains the subject of controversy. Using a US Government model of such an exchange, an estimated 23 million survivors would have radiation sickness, frequently complicated by trauma and burns. Among these survivors, an overriding consideration will be the presence and extent of infection, associated with alterations in the immune system, malnutrition, dehydration, exposure and hardship. Triage and treatment will be extraordinarily complex, requiring patient relocation, massive fluid replacement, antibiotics, a sterile environment, and many other measures. Massive disparities between supply and demand for physicians, nurses, other health workers, hospital beds, supplies and equipment, antibiotics, and other pharmaceutical agents will render a coherent physician response virtually impossible. Such disparities will be compounded by the destruction of transport systems and intolerably high radiation levels in many areas. If it is true that the meliorative efforts of physicians in post-attack radiation damage will be incapable of addressing this massive health care problem meaningfully, then clearly their most effective role is to prevent the threat from materializing. (authors)

  1. Gaseous core nuclear-driven engines featuring a self-shutoff mechanism to provide nuclear safety

    International Nuclear Information System (INIS)

    Heidrich, J.; Pettibone, J.; Chow, Tze-Show; Condit, R.; Zimmerman, G.

    1991-11-01

    Nuclear driven engines are described that could be run in either pulsed or steady state modes. In the pulsed mode nuclear energy is released by fissioning of uranium or plutonium in a supercritical assembly of fuel and working gas. In a steady state mode a fuel-gas mixture is injected into a magnetic nozzle where it is compressed into a critical state and produces energy. Engine performance is modeled using a code that calculates hydrodynamics, fission energy production, and neutron transport self-consistently. Results are given demonstrating a large negative temperature coefficient that produces self-shutoff or control of energy production. Reduced fission product inventory and the self-shutoff provide inherent nuclear safety. It is expected that nuclear engine reactor units could be scaled up from about 100 MWe

  2. Nuclear Winter Revisited: can it Make a Difference This Time?

    Science.gov (United States)

    Schneider, S.

    2006-12-01

    Some 23 years ago, in the middle of a Cold War and the threat of a strategic nuclear weapons exchange between NATO and the Warsaw Pact nations, atmospheric scientists pointed out that the well-anticipated side effects of a large-scale nuclear war (ozone depletion, radioactive contamination and some climatic effects) had massively underestimated the more likely implications: massive fires, severe dimming and cooling beneath circulating smoke clouds, disruption to agriculture in non-combatant nations, severe loss of food imports to already food-deficient regions, and major alterations to atmospheric circulation. While the specific consequences were dependent on both the scenarios of weapons use and the injections and removals of smoke, dust and other chemicals into the atmosphere, it was clear that this would be, despite passionately argued uncertainties, a large additional effect. As further investigations of smoke removal, patchy transport, etc., were pursued, the basic concerns remained, but the magnitude calculated with one-dimensional models diminished, creating an unfortunate media debate over nuclear winter vs. nuclear autumn. Of course, one can't grow summer crops in any autumn, natural or nuclear, but that concern often got lost in the contentious political debate. It was pointed out that anyone who required knowing the additional environmental consequences of a major nuclear exchange to be finally deterred was already so far from the reality of the direct effects of the blasts that they might never see the concerns. But for non-combatants, it was a major awakening to their inability to escape severe consequences of the troubles of others, even if they were bystanders in the east-west conflicts. Two decades later, things have radically changed: the prospect of a massive strategic nuclear exchange is greatly diminished (good news), but the possibility of limited regional exchanges or terrorist incidents is widely believed to have greatly increased (bad

  3. Compressed sensing & sparse filtering

    CERN Document Server

    Carmi, Avishy Y; Godsill, Simon J

    2013-01-01

    This book is aimed at presenting concepts, methods and algorithms able to cope with undersampled and limited data. One such trend that recently gained popularity and to some extent revolutionised signal processing is compressed sensing. Compressed sensing builds upon the observation that many signals in nature are nearly sparse (or compressible, as they are normally referred to) in some domain, and consequently they can be reconstructed to within high accuracy from far fewer observations than traditionally held to be necessary. Apart from compressed sensing this book contains other related app

  4. On massive gravitons in 2+1 dimensions

    NARCIS (Netherlands)

    Bergshoeff, Eric; Hohm, Olaf; Townsend, Paul; Lazkoz, R; Vera, R

    2010-01-01

    The Fierz-Pauli (FP) free field theory for massive spin-2 particles can be extended, in a spacetime of (1+2) dimensions (3D), to a generally covariant parity-preserving interacting field theory, in at least two ways. One is "new massive gravity" (NMG), with an action that involves curvature-squared

  5. LZ-Compressed String Dictionaries

    OpenAIRE

    Arz, Julian; Fischer, Johannes

    2013-01-01

    We show how to compress string dictionaries using the Lempel-Ziv (LZ78) data compression algorithm. Our approach is validated experimentally on dictionaries of up to 1.5 GB of uncompressed text. We achieve compression ratios often outperforming the existing alternatives, especially on dictionaries containing many repeated substrings. Our query times remain competitive.
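
    As a reminder of the LZ78 scheme that this work builds on, here is a minimal dictionary-building compressor; this toy sketch deliberately omits everything that makes a string-dictionary coder competitive (entropy coding of the output pairs, compact dictionary layout, query support) and illustrates the core algorithm only.

```python
def lz78_compress(text: str):
    """Minimal LZ78: emit (dictionary index, next char) pairs,
    where index 0 denotes the empty phrase."""
    dictionary = {"": 0}
    out, phrase = [], ""
    for ch in text:
        if phrase + ch in dictionary:
            phrase += ch            # extend the current phrase
        else:
            out.append((dictionary[phrase], ch))
            dictionary[phrase + ch] = len(dictionary)
            phrase = ""
    if phrase:                      # flush a trailing phrase
        out.append((dictionary[phrase[:-1]], phrase[-1]))
    return out

print(lz78_compress("abababab"))   # [(0,'a'), (0,'b'), (1,'b'), (3,'a'), (0,'b')]
```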

  6. Holographic heat engine within the framework of massive gravity

    Science.gov (United States)

    Mo, Jie-Xiong; Li, Gu-Qiang

    2018-05-01

    Heat engine models are constructed within the framework of massive gravity in this paper. For the four-dimensional charged black holes in massive gravity, it is shown that the existence of graviton mass improves the heat engine efficiency significantly. The situation is more complicated for the five-dimensional neutral black holes since the constant which corresponds to the third massive potential also contributes to the efficiency. It is also shown that the existence of graviton mass can improve the heat engine efficiency. Moreover, we probe how massive gravity influences the behavior of the heat engine efficiency as it approaches the Carnot efficiency.
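
    For orientation, the engine efficiency that such holographic heat engine studies compute and compare against the Carnot bound is the textbook one; with heat Q_H absorbed, Q_C rejected, and work W = Q_H - Q_C extracted per cycle (standard thermodynamics, not a result of the paper):

```latex
\eta \;=\; \frac{W}{Q_H} \;=\; 1-\frac{Q_C}{Q_H}
\;\le\; \eta_{\mathrm{Carnot}} \;=\; 1-\frac{T_C}{T_H}
```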

  7. Formation of Massive Molecular Cloud Cores by Cloud-cloud Collision

    OpenAIRE

    Inoue, Tsuyoshi; Fukui, Yasuo

    2013-01-01

    Recent observations of molecular clouds around rich massive star clusters including NGC3603, Westerlund 2, and M20 revealed that the formation of massive stars could be triggered by a cloud-cloud collision. By using three-dimensional, isothermal, magnetohydrodynamics simulations with the effect of self-gravity, we demonstrate that massive, gravitationally unstable, molecular cloud cores are formed behind the strong shock waves induced by the cloud-cloud collision. We find that the massive mol...

  8. Micro-structured nuclear fuel and novel nuclear reactor concepts for advanced power production

    International Nuclear Information System (INIS)

    Popa-Simil, Liviu

    2008-01-01

    Many applications (e.g. terrestrial and space electric power production; naval, underwater and railroad propulsion; auxiliary power for isolated regions) require a compact, high-power electricity source. The development of such a reactor structure necessitates a deeper understanding of fission energy transport and materials behavior in radiation-dominated structures. One solution to reduce greenhouse-gas emissions and delay catastrophic events may be the development of massive nuclear power. The basic conceptions underlying current nuclear reactors are at the root of the bottleneck in enhancements: present reactors behave like high-security prisons for fission products. A micro-bead heterogeneous fuel mesh instead gives the fission products the possibility of reaching stable conditions outside the hot zones without spilling, in exchange for clear advantages, namely the possibility of enhancing nuclear technology for power production. It becomes possible to match the materials and structures to the phenomena of interest, yielding a high-temperature, fission-product-free fuel with near-perfect burning. This feature is important to the future of nuclear power development in order to avoid the nuclear fuel peak and the high price increases caused by the immobilization of fuel in spent-fuel reactor pools. (author)

  9. Structure and Properties of Silica Glass Densified in Cold Compression and Hot Compression

    Science.gov (United States)

    Guerette, Michael; Ackerson, Michael R.; Thomas, Jay; Yuan, Fenglin; Bruce Watson, E.; Walker, David; Huang, Liping

    2015-10-01

    Silica glass has been shown in numerous studies to possess significant capacity for permanent densification under pressure at different temperatures to form high density amorphous (HDA) silica. However, it is unknown to what extent the processes leading to irreversible densification of silica glass in cold-compression at room temperature and in hot-compression (e.g., near glass transition temperature) are common in nature. In this work, a hot-compression technique was used to quench silica glass from high temperature (1100 °C) and high pressure (up to 8 GPa) conditions, which leads to a density increase of ~25% and a Young's modulus increase of ~71% relative to that of pristine silica glass at ambient conditions. Our experiments and molecular dynamics (MD) simulations provide solid evidence that the intermediate-range order of the hot-compressed HDA silica is distinct from that of the counterpart cold-compressed at room temperature. This explains the much higher thermal and mechanical stability of the former than the latter upon heating and compression, as revealed in our in-situ Brillouin light scattering (BLS) experiments. Our studies demonstrate the limitation of the resulting density as a structural indicator of polyamorphism, and point out the importance of temperature during compression in order to fundamentally understand HDA silica.

  10. Compressing DNA sequence databases with coil

    Directory of Open Access Journals (Sweden)

    Hendy Michael D

    2008-05-01

    Full Text Available Abstract. Background: Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression – an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results: We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression – the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion: coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.

  11. Massively Parallel Algorithms for Solution of Schrodinger Equation

    Science.gov (United States)

    Fijany, Amir; Barhen, Jacob; Toomerian, Nikzad

    1994-01-01

    In this paper, massively parallel algorithms for the solution of the Schrodinger equation are developed. Our results clearly indicate that the Crank-Nicolson method, in addition to its excellent numerical properties, is also highly suitable for massively parallel computation.
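
    As a concrete reminder of the scheme, the sketch below applies Crank-Nicolson to the 1D free-particle Schrodinger equation with a dense solve; the grid, units, and initial wave packet are illustrative assumptions, and a massively parallel implementation would replace np.linalg.solve with a distributed (block-)tridiagonal solver.

```python
import numpy as np

# Crank-Nicolson for i d(psi)/dt = H psi with H = -d^2/dx^2 (hbar = 2m = 1):
#   (I + i dt H / 2) psi^{n+1} = (I - i dt H / 2) psi^{n}.
# Unconditionally stable and exactly norm-preserving for Hermitian H.
N, dx, dt = 400, 0.1, 0.002
x = dx * np.arange(N)
psi = np.exp(-(x - 20.0) ** 2 + 2j * x)      # Gaussian wave packet
psi = psi / np.linalg.norm(psi)

lap = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
       + np.diag(np.ones(N - 1), -1)) / dx**2
H = -lap                                     # Dirichlet boundaries
A = np.eye(N) + 0.5j * dt * H
B = np.eye(N) - 0.5j * dt * H

for _ in range(100):                         # advance 100 time steps
    psi = np.linalg.solve(A, B @ psi)

print("norm after 100 steps:", np.linalg.norm(psi))  # stays ~1
```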

  12. Wavelet representation of the nuclear dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Jouault, B.; Sebille, F.; Mota, V. de la

    1997-12-31

    The study of transport phenomena in nuclear matter is addressed in a new approach named DYWAN, based on the projection methods of statistical physics and on the mathematical theory of wavelets. Strongly compressed representations of the nuclear systems are obtained with an accurate description of the wave functions and of their antisymmetrization. The results of the approach are illustrated for the ground state description as well as for the dissipative dynamics of nuclei at intermediate energies. (K.A.). 52 refs.

  13. Excessive chest compression rate is associated with insufficient compression depth in prehospital cardiac arrest

    NARCIS (Netherlands)

    Monsieurs, Koenraad G.; De Regge, Melissa; Vansteelandt, Kristof; De Smet, Jeroen; Annaert, Emmanuel; Lemoyne, Sabine; Kalmar, Alain F.; Calle, Paul A.

    2012-01-01

    Background and goal of study: The relationship between chest compression rate and compression depth is unknown. In order to characterise this relationship, we performed an observational study in prehospital cardiac arrest patients. We hypothesised that faster compressions are associated with

  14. Compressive sensing in medical imaging.

    Science.gov (United States)

    Graff, Christian G; Sidky, Emil Y

    2015-03-10

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed.
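
    As a small illustration of the sparsity-exploiting reconstruction idea discussed above, the sketch below runs iterative soft-thresholding (ISTA) on the l1-regularized least-squares problem; the synthetic sizes, Gaussian sensing matrix, and parameter values are assumptions for illustration, not a clinical CT or MRI pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 128, 256                                 # undersampled: M < N
A = rng.standard_normal((M, N)) / np.sqrt(M)    # toy sensing operator
x_true = np.zeros(N)
x_true[rng.choice(N, 10, replace=False)] = 1.0  # sparse "image"
y = A @ x_true                                  # measurements

# ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1
lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L, L = Lipschitz constant
x = np.zeros(N)
for _ in range(500):
    z = x - step * (A.T @ (A @ x - y))          # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrinkage

print("reconstruction error:", np.linalg.norm(x - x_true))
```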

  15. Critical N = (1, 1) general massive supergravity

    Science.gov (United States)

    Deger, Nihat Sadik; Moutsopoulos, George; Rosseel, Jan

    2018-04-01

    In this paper we study the supermultiplet structure of N = (1, 1) General Massive Supergravity at non-critical and critical points of its parameter space. To do this, we first linearize the theory around its maximally supersymmetric AdS3 vacuum and obtain the full linearized Lagrangian including fermionic terms. At generic values, the linearized modes can be organized as two massless and two massive multiplets, which supersymmetry relates in the standard way. At critical points logarithmic modes appear, and we find that at three of these points some of the supersymmetry transformations are non-invertible in the logarithmic multiplets. However, at the fourth critical point, there is a massive logarithmic multiplet with invertible supersymmetry transformations.

  16. HOW TO FIND YOUNG MASSIVE CLUSTER PROGENITORS

    Energy Technology Data Exchange (ETDEWEB)

    Bressert, E.; Longmore, S.; Testi, L. [European Southern Observatory, Karl Schwarzschild Str. 2, D-85748 Garching bei Muenchen (Germany); Ginsburg, A.; Bally, J.; Battersby, C. [Center for Astrophysics and Space Astronomy, University of Colorado, Boulder, CO 80309 (United States)

    2012-10-20

    We propose that bound, young massive stellar clusters form from dense clouds that have escape speeds greater than the sound speed in photo-ionized gas. In these clumps, radiative feedback in the form of gas ionization is bottled up, enabling star formation to proceed to sufficiently high efficiency so that the resulting star cluster remains bound even after gas removal. We estimate the observable properties of the massive proto-clusters (MPCs) for existing Galactic plane surveys and suggest how they may be sought in recent and upcoming extragalactic observations. These surveys will potentially provide a significant sample of MPC candidates that will allow us to better understand extreme star-formation and massive cluster formation in the Local Universe.

  17. Massive type IIA supergravity and E10

    International Nuclear Information System (INIS)

    Henneaux, M.; Kleinschmidt, A.; Persson, D.; Jamsin, E.

    2009-01-01

    In this talk we investigate the symmetry under E10 of Romans' massive type IIA supergravity. We show that the dynamics of a spinning particle in a non-linear sigma model on the coset space E10/K(E10) reproduces the bosonic and fermionic dynamics of massive IIA supergravity, in the standard truncation. In particular, we identify Romans' mass with a generator of E10 that is beyond the realm of the generators of E10 considered in the eleven-dimensional analysis, but using the same, undeformed sigma model. As a consequence, this work provides a dynamical unification of the massless and massive versions of type IIA supergravity inside E10. (Abstract Copyright [2009], Wiley Periodicals, Inc.)

  18. Nuclear matter and its equation of state

    International Nuclear Information System (INIS)

    Stock, R.

    1985-11-01

    We can estimate the nuclear bulk compressibility from the excitation energy of the monopole vibration mode, which represents a density oscillation about ρ0 of extremely small magnitude (a few percent) only. A description of the monopole excitation energy systematics has been obtained by assuming a parabolic shape about ρ0 for the energy-density relation of cold nuclear matter. This implies a linear pressure response to small density changes inside nuclear matter. It enables one to define a nuclear 'sound' mode, and the sound velocity turns out to be v_s ≈ 0.2c. All of this could be known only for small excursions from ρ0 as long as we were unable to subject nuclei to extreme stresses. The study of head-on collisions of heavy nuclei at high energy has removed this limitation. In these reactions we are reproducing under laboratory conditions the extremely violent transformations of matter occurring in cosmic and stellar evolution. From the quark-gluon stage of the Big Bang, prior to hadronic freeze-out, to the supernova, these cosmic events require an understanding of matter bulk properties over an enormous range of density, from about 10 times ρ0 down to about 10^-3 ρ0. We will approach them through the compression-expansion-freeze-out cycle of central nucleus-nucleus collisions in the energy range from 50 MeV per projectile nucleon, corresponding to the compression barrier, upwards to 225 GeV/A (the top energy of the CERN SPS), and further into the TeV/A range by observation of events induced by cosmic-ray nuclei. In this article I describe some of the results recently obtained at the BEVALAC, i.e. in the GeV/A domain. (orig./HSI)
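
    The 'parabolic shape about ρ0' invoked above is conventionally written as an expansion of the energy per nucleon around saturation density, with the incompressibility K setting the curvature (standard nuclear-matter notation, added here for orientation, not taken from this record):

```latex
E(\rho) \;\simeq\; E(\rho_0) \;+\; \frac{K}{18}\left(\frac{\rho-\rho_0}{\rho_0}\right)^{2},
\qquad
K \;=\; 9\,\rho_0^{2}\left.\frac{\partial^{2}E}{\partial\rho^{2}}\right|_{\rho_0}
```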

  19. Revealing the Physics of Galactic Winds Through Massively-Parallel Hydrodynamics Simulations

    Science.gov (United States)

    Schneider, Evan Elizabeth

    This thesis documents the hydrodynamics code Cholla and a numerical study of multiphase galactic winds. Cholla is a massively-parallel, GPU-based code designed for astrophysical simulations that is freely available to the astrophysics community. A static-mesh Eulerian code, Cholla is ideally suited to carrying out massive simulations (> 2048^3 cells) that require very high resolution. The code incorporates state-of-the-art hydrodynamics algorithms including third-order spatial reconstruction, exact and linearized Riemann solvers, and unsplit integration algorithms that account for transverse fluxes on multidimensional grids. Operator-split radiative cooling and a dual-energy formalism for high Mach number flows are also included. An extensive test suite demonstrates Cholla's superior ability to model shocks and discontinuities, while the GPU-native design makes the code extremely computationally efficient - speeds of 5-10 million cell updates per GPU-second are typical on current hardware for 3D simulations with all of the aforementioned physics. The latter half of this work comprises a comprehensive study of the mixing between a hot, supernova-driven wind and cooler clouds representative of those observed in multiphase galactic winds. Both adiabatic and radiatively-cooling clouds are investigated. The analytic theory of cloud-crushing is applied to the problem, and adiabatic turbulent clouds are found to be mixed with the hot wind on similar timescales as the classic spherical case (4-5 t_cc) with an appropriate rescaling of the cloud-crushing time. Radiatively cooling clouds survive considerably longer, and the differences in evolution between turbulent and spherical clouds cannot be reconciled with a simple rescaling. The rapid incorporation of low-density material into the hot wind implies efficient mass-loading of hot phases of galactic winds. At the same time, the extreme compression of high-density cloud material leads to long-lived but slow-moving clumps
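
    For reference, the cloud-crushing time t_cc quoted above is conventionally built from the cloud radius, the wind speed, and the cloud-to-wind density contrast χ (the standard definition in the cloud-crushing literature; the thesis's rescaling for turbulent clouds is a variant of this):

```latex
t_{\mathrm{cc}} \;=\; \chi^{1/2}\,\frac{r_{\mathrm{cl}}}{v_{\mathrm{wind}}},
\qquad
\chi \;=\; \frac{\rho_{\mathrm{cl}}}{\rho_{\mathrm{wind}}}
```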

  20. Hyper-massive cloud, shock and stellar formation efficiency

    International Nuclear Information System (INIS)

    Louvet, Fabien

    2014-01-01

    O- and B-type stars are of paramount importance in the energy budget of galaxies and play a crucial role in enriching the interstellar medium. However, their formation, unlike that of solar-type stars, is still subject to debate, if not an enigma. The earliest stages of massive star formation and the formation of their parent cloud are still crucial astrophysical questions that drew a lot of attention in the community, both from the theoretical and observational perspectives, during the last decade. It has been proposed that massive stars are born in massive dense cores that form through very dynamic processes, such as converging flows of gas. During my PhD, I conducted a thorough study of the formation of dense cores and massive stars in the W43-MM1 supermassive structure, located at 6 kpc from the sun. First, I showed a direct correlation between the star formation efficiency and the volume gas density of molecular clouds, in contrast with scenarios suggested by previous studies. Indeed, the spatial distribution and mass function of the massive dense cores currently forming in W43-MM1 suggest that this supermassive filament is undergoing a star formation burst, increasing as one approaches its center. I compared these observational results with the most recent numerical and analytical models of star formation. This comparison not only provides new constraints on the formation of supermassive filaments, but also suggests that understanding star formation in high density, extreme ridges requires a detailed portrait of the structure of these exceptional objects. Second, having shown that the formation of massive stars depends strongly on the properties of the ridges where they form, I studied the formation processes of these filaments, thanks to the characterization of their global dynamics. Specifically, I used a tracer of shocks (the SiO molecule) to disentangle the feedback of local star formation processes (bipolar jets and outflows) from shocks tracing the pristine

  1. A Massive Star Census of the Starburst Cluster R136

    Science.gov (United States)

    Crowther, Paul

    2012-10-01

    We propose to carry out a comprehensive census of the most massive stars in the central parsec (4") of the starburst cluster R136, which powers the Tarantula Nebula in the LMC. R136 is both sufficiently massive that the upper mass function is richly populated and young enough that its most massive stars have yet to explode as supernovae. The identification of very massive stars in R136, up to 300 solar masses, raises general questions of star formation, binarity and feedback in young massive clusters. The proposed STIS spectral survey of 36 stars more massive than 50 solar masses within R136 is ground-breaking, of legacy value, and is specifically tailored to (a) yield physical properties; (b) detect the majority of binaries by splitting observations between Cycles 19 and 20; (c) measure rotational velocities, relevant for predictions of rotational mixing; (d) quantify mass-loss properties for very massive stars; (e) determine surface compositions; (f) measure radial velocities, relevant for runaway stars and cluster dynamics; and (g) quantify radiative and mechanical feedback. This census will enable the mass function of very massive stars to be measured for the first time, something that has not been possible to date as a result of incomplete and inadequate spectroscopy. It will also perfectly complement our Tarantula Survey, a ground-based VLT Large Programme, by including the most massive stars that are inaccessible to ground-based visual spectroscopy due to severe crowding. These surveys, together with existing integrated UV and optical studies, will enable 30 Doradus to serve as a bona-fide template for unresolved extragalactic starburst regions.

  2. Reappraising the concept of massive transfusion in trauma

    DEFF Research Database (Denmark)

    Stanworth, Simon J; Morris, Timothy P; Gaarder, Christine

    2010-01-01

    INTRODUCTION: The massive-transfusion concept was introduced to recognize the dilutional complications resulting from large volumes of packed red blood cells (PRBCs). Definitions of massive transfusion vary and lack supporting clinical evidence. Damage-control resuscitation regimens o

  3. Mammography image compression using Wavelet

    International Nuclear Information System (INIS)

    Azuhar Ripin; Md Saion Salikin; Wan Hazlinda Ismail; Asmaliza Hashim; Norriza Md Isa

    2004-01-01

    Image compression plays an important role in many applications such as medical imaging, televideo conferencing, remote sensing, and document and facsimile transmission, which depend on the efficient manipulation, storage, and transmission of binary, gray-scale, or color images. In medical imaging applications such as Picture Archiving and Communication Systems (PACS), the image or image-stream size is very large and requires a large amount of storage space or high bandwidth for communication. Image compression techniques are divided into two categories, namely lossy and lossless data compression. The wavelet method used in this project is a lossless compression method; with this method, the exact original mammography image data can be recovered. In this project, mammography images are digitized by using a Vider Sierra Plus digitizer. The digitized images are compressed by using this wavelet image compression technique. Interactive Data Language (IDL) numerical and visualization software is used to perform all of the calculations and to generate and display all of the compressed images. Results of this project are presented in this paper. (Author)
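
    Numerically lossless wavelet coding of the kind described depends on integer-to-integer transforms, so decompression recovers every pixel exactly. Below is a minimal sketch of one lifting step of the integer Haar transform; this particular filter is an assumption for illustration, since the abstract does not specify the project's wavelet.

```python
import numpy as np

def int_haar_forward(a):
    """One lifting step of the integer Haar transform on an
    even-length 1D signal; exactly invertible in integer arithmetic."""
    a = np.asarray(a, dtype=np.int64)
    even, odd = a[0::2], a[1::2]
    d = odd - even            # predict: detail coefficients
    s = even + (d >> 1)       # update: smooth coefficients (floor /2)
    return s, d

def int_haar_inverse(s, d):
    even = s - (d >> 1)
    odd = d + even
    out = np.empty(2 * len(s), dtype=np.int64)
    out[0::2], out[1::2] = even, odd
    return out

row = np.array([12, 14, 200, 202, 40, 41, 90, 87])
s, d = int_haar_forward(row)
assert np.array_equal(int_haar_inverse(s, d), row)  # perfectly lossless
print(list(s), list(d))
```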

  4. Massive stars and X-ray pulsars

    International Nuclear Information System (INIS)

    Henrichs, H.

    1982-01-01

    This thesis is a collection of 7 separate articles entitled: long term changes in ultraviolet lines in γ CAS; UV observations of γ CAS: intermittent mass-loss enhancement; episodic mass loss in γ CAS and in other early-type stars; spin-up and spin-down of accreting neutron stars; an excentric close binary model for the X Persei system; has a 97 minute periodicity in 4U 1700-37/HD 153919 really been discovered?; and mass loss and stellar wind in massive X-ray binaries. (Articles 1, 2, 5, 6 and 7 have been previously published). The first three articles are concerned with the irregular mass loss in massive stars. The fourth critically reviews thoughts since 1972 on the origin of the changes in periodicity shown by X-ray pulsars. The last articles indicate the relation between massive stars and X-ray pulsars. (C.F.)

  5. An effective theory of massive gauge bosons

    International Nuclear Information System (INIS)

    Doria, R.M.; Helayel Neto, J.A.

    1986-01-01

    The coupling of a group-valued massive scalar field to a gauge field through a symmetric rank-2 field strength is studied. By considering energies very small compared with the mass of the scalar and invoking the decoupling theorem, one is left with a low-energy effective theory describing the dynamics of massive vector fields. (Author) [pt

  6. Streaming Compression of Hexahedral Meshes

    Energy Technology Data Exchange (ETDEWEB)

    Isenburg, M; Courbet, C

    2010-02-03

    We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra, our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream documents when vertices are referenced for the last time (i.e. when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. This means in practice that our coder has only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes - even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k-hexahedra 'blade' mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB), with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.

  7. Massive gravity with mass term in three dimensions

    International Nuclear Information System (INIS)

    Nakasone, Masashi; Oda, Ichiro

    2009-01-01

    We analyze the effect of the Pauli-Fierz mass term on the recently established new massive gravity theory in three space-time dimensions. We show that the Pauli-Fierz mass term makes the new massive gravity theory nonunitary. Moreover, even when we add the gravitational Chern-Simons term to this model, the situation remains unchanged and the theory stays nonunitary, although the structure of the graviton propagator is greatly changed. Thus, the Pauli-Fierz mass term is not allowed to coexist with mass-generating higher-derivative terms in the new massive gravity.

  8. Report on the behalf of the Foreign Affairs, Defense and Armed Forces Commission on the bill project, adopted by the National Assembly, related to the struggle against the proliferation of arms of massive destruction and their vectors

    International Nuclear Information System (INIS)

    2011-01-01

    This report recalls the origins of the bill project, which implements UN Security Council resolution 1540, whose aim was to promote the setting up of efficient tools to fight proliferation. The bill project aims at updating and reinforcing the existing legal arsenal. The report also contains remarks made by the Commission. The bill project addresses several issues: the struggle against the proliferation of arms of massive destruction (nuclear weapons, nuclear materials, biological weapons, and chemical weapons), the struggle against the proliferation of vectors of arms of massive destruction, dual-use goods, and the use of these weapons and vectors in acts of terrorism

  9. Evaluation of mammogram compression efficiency

    International Nuclear Information System (INIS)

    Przelaskowski, A.; Surowski, P.; Kukula, A.

    2005-01-01

    Lossy image coding significantly improves performance over lossless methods, but a reliable control of the diagnostic accuracy of compressed images is necessary. The acceptable range of compression ratios must be safe with respect to as many objective criteria as possible. This study evaluates the compression efficiency of digital mammograms in both numerically lossless (reversible) and lossy (irreversible) manners. Effective compression methods and concepts were examined to increase archiving and telediagnosis performance. Lossless compression, as the primary applicable tool for medical applications, was verified on a set of 131 mammograms. Moreover, nine radiologists participated in the evaluation of lossy compression of mammograms. Subjective rating of diagnostically important features produced a set of mean rates for each test image. The lesion detection test resulted in binary decision data that were analyzed statistically. The radiologists rated and interpreted malignant and benign lesions, representative pathology symptoms, and other structures susceptible to compression distortions contained in 22 original and 62 reconstructed mammograms. Test mammograms were collected in two radiology centers over three years and then selected according to diagnostic content suitable for an evaluation of compression effects. Lossless compression efficiency of the tested coders varied, but CALIC, JPEG-LS, and SPIHT performed the best. The evaluation of lossy compression effects on detection ability was based on ROC-like analysis. Assuming a two-sided significance level of p=0.05, the null hypothesis that lower bit rate reconstructions are as useful for diagnosis as the originals was rejected in sensitivity tests with 0.04 bpp mammograms. However, verification of the same hypothesis with 0.1 bpp reconstructions suggested their acceptance. Moreover, the 1 bpp reconstructions were rated very similarly to the original mammograms in the diagnostic quality evaluation test, but the

  10. Reappraising the concept of massive transfusion in trauma

    NARCIS (Netherlands)

    Stanworth, Simon J.; Morris, Timothy P.; Gaarder, Christine; Goslings, J. Carel; Maegele, Marc; Cohen, Mitchell J.; König, Thomas C.; Davenport, Ross A.; Pittet, Jean-Francois; Johansson, Pär I.; Allard, Shubha; Johnson, Tony; Brohi, Karim

    2010-01-01

    The massive-transfusion concept was introduced to recognize the dilutional complications resulting from large volumes of packed red blood cells (PRBCs). Definitions of massive transfusion vary and lack supporting clinical evidence. Damage-control resuscitation regimens of modern trauma care are

  11. The Political and Strategic Conditions of Nuclear Disarmament

    International Nuclear Information System (INIS)

    Tertrais, Bruno

    2009-01-01

    In this lecture on issues related to nuclear disarmament, the author proposes and comments on several scenarios under which the end of nuclear weapons could be considered: the 'abolition scenario', the 'interdiction scenario', and the 'elimination scenario'. The first would be a deliberate decision to get rid of nuclear weapons after a major nuclear event (an act of terrorism or a nuclear war). The second is that of a deliberate decision to reduce the role and numbers of nuclear weapons with a view to achieving a nuclear-weapon-free world in a reasonable time frame (a few decades). The third supposes that the utility of nuclear weapons has been dramatically reduced, allowing first massive and rapid reductions, and then elimination. This last and main scenario is notably discussed with respect to the Non-Proliferation Treaty (NPT), and two variants are distinguished: in the first, alternatives to nuclear weapons have been brought into existence; in the second, nuclear weapons or any equivalent thereof are not needed any more

  12. Find and neutralize clandestine nuclear weapons

    International Nuclear Information System (INIS)

    Canavan, G.H.

    1997-09-01

    The objective of finding nuclear material at entry portals is to provide a secure perimeter as large as a weapon damage radius, so that operations can be conducted within it relatively unencumbered. The objective of wide-area search for nuclear material is to provide a safe zone of similar dimensions in an area in which it is not possible to maintain a secure perimeter, to provide assurance for civilians living in an area at risk, or to provide rapid, wide-area search of regions that could conceal nuclear threats to forces in the field. This rapid, wide-area, and confident detection of nuclear materials is the essential first step in developing the ability to negate terrorist nuclear assemblies or weapons. The abilities to detect and negate nuclear materials are necessary to prevent the forced, massive evacuation of urban populations or the disruption of military operations in response to terrorist threats. This paper describes the limitations of current sensors used for nuclear weapon detection and discusses a novel approach to nuclear weapon detection using a combination of directional information (imaging) and gamma-ray energy (color) to produce a gamma-ray color camera

  13. THE THREE-DIMENSIONAL EVOLUTION TO CORE COLLAPSE OF A MASSIVE STAR

    Energy Technology Data Exchange (ETDEWEB)

    Couch, Sean M. [TAPIR, Walter Burke Institute for Theoretical Physics, California Institute of Technology, Pasadena, CA 91125 (United States); Chatzopoulos, Emmanouil [Flash Center for Computational Science, Department of Astronomy and Astrophysics, University of Chicago, Chicago, IL 60637 (United States); Arnett, W. David [Steward Observatory, University of Arizona, Tucson, AZ 85721 (United States); Timmes, F. X., E-mail: smc@tapir.caltech.edu [Joint Institute for Nuclear Astrophysics, Michigan State University, East Lansing, MI 48824 (United States)

    2015-07-20

    We present the first three-dimensional (3D) simulation of the final minutes of iron core growth in a massive star, up to and including the point of core gravitational instability and collapse. We capture the development of strong convection driven by violent Si burning in the shell surrounding the iron core. This convective burning builds the iron core to its critical mass and collapse ensues, driven by electron capture and photodisintegration. The non-spherical structure and motion generated by 3D convection is substantial at the point of collapse, with convective speeds of several hundred km s⁻¹. We examine the impact of such physically realistic 3D initial conditions on the core-collapse supernova mechanism using 3D simulations including multispecies neutrino leakage and find that the enhanced post-shock turbulence resulting from 3D progenitor structure aids successful explosions. We conclude that non-spherical progenitor structure should not be ignored, and should have a significant and favorable impact on the likelihood for neutrino-driven explosions. In order to make simulating the 3D collapse of an iron core feasible, we were forced to make approximations to the nuclear network making this effort only a first step toward accurate, self-consistent 3D stellar evolution models of the end states of massive stars.

  14. THE THREE-DIMENSIONAL EVOLUTION TO CORE COLLAPSE OF A MASSIVE STAR

    International Nuclear Information System (INIS)

    Couch, Sean M.; Chatzopoulos, Emmanouil; Arnett, W. David; Timmes, F. X.

    2015-01-01

    We present the first three-dimensional (3D) simulation of the final minutes of iron core growth in a massive star, up to and including the point of core gravitational instability and collapse. We capture the development of strong convection driven by violent Si burning in the shell surrounding the iron core. This convective burning builds the iron core to its critical mass and collapse ensues, driven by electron capture and photodisintegration. The non-spherical structure and motion generated by 3D convection is substantial at the point of collapse, with convective speeds of several hundred km s⁻¹. We examine the impact of such physically realistic 3D initial conditions on the core-collapse supernova mechanism using 3D simulations including multispecies neutrino leakage and find that the enhanced post-shock turbulence resulting from 3D progenitor structure aids successful explosions. We conclude that non-spherical progenitor structure should not be ignored, and should have a significant and favorable impact on the likelihood for neutrino-driven explosions. In order to make simulating the 3D collapse of an iron core feasible, we were forced to make approximations to the nuclear network making this effort only a first step toward accurate, self-consistent 3D stellar evolution models of the end states of massive stars

  15. Successful treatment of massive ascites due to lupus peritonitis with hydroxychloroquine in old-onset lupus erythematosus.

    Science.gov (United States)

    Hammami, Sonia; Bdioui, Fethia; Ouaz, Afef; Loghmari, Hichem; Mahjoub, Sylvia; Saffar, Hamouda

    2014-01-01

    Systemic lupus erythematosus (SLE) is an autoimmune disease with multiple organ involvement that occurs mainly in young women. Literature data suggest that serositis is more frequent in late-onset SLE. However, peritoneal serositis with massive ascites is an extremely rare manifestation. We report a case of old-onset lupus peritonitis treated successfully with hydroxychloroquine. A 77-year-old Tunisian woman was hospitalized because of massive painful ascites. Her family history did not include any autoimmune disease. She had been explored 4 years prior to admission for exudative pleuritis of the right lung without any established diagnosis. Physical examination showed only massive ascites. Laboratory investigations showed leucopenia (3100/mm³), lymphopenia (840/mm³) and trace proteinuria (0.03 g/24 h). Ascitic fluid contained 170 cells/mm³ (67% lymphocytes), 46 g/L protein, but no malignant cells. The main etiologies of exudative ascites were excluded. She had a markedly elevated anti-nuclear antibody (ANA) titer of 1/1600 and a significantly elevated titer of antibody to double-stranded DNA (83 IU/mL) with hypocomplementemia (C3 level of 67 mg/dL). Antibody against the Smith antigen was also positive. Relying on these findings, the patient was diagnosed with SLE and treated with hydroxychloroquine 200 mg daily in combination with diuretics. One month later, there was no detectable ascitic fluid and no pleural effusion. Five months later she remained free from symptoms while continuing to take hydroxychloroquine.

  16. Isospin effects on collective nuclear dynamics

    CERN Document Server

    Di Toro, M; Baran, V; Larionov, A B

    1999-01-01

    We suggest several ways to study properties of the symmetry term in the nuclear equation of state (EOS) from collective modes in beta-unstable nuclei. After a general discussion of compressibility and saturation density in asymmetric nuclear matter, we show some predictions for the collective response based on the solution of generalized Landau dispersion relations. Isoscalar-isovector coupling, the disappearance of collectivity, and the possibility of new instabilities in low- and high-density regions are discussed, with emphasis on their relation to the symmetry term of effective forces. The onset of chemical plus mechanical instabilities in dilute asymmetric nuclear matter is discussed with reference to new features in fragmentation reactions.

  17. Watermark Compression in Medical Image Watermarking Using Lempel-Ziv-Welch (LZW) Lossless Compression Technique.

    Science.gov (United States)

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohd; Ali, Mushtaq

    2016-04-01

    In teleradiology, image contents may be altered due to noisy communication channels and hacker manipulation. Medical image data is very sensitive and cannot tolerate any illegal change; analysis based on illegally altered images could result in wrong medical decisions. Digital watermarking can be used to authenticate images and to detect as well as recover illegal changes made to teleradiology images. Watermarking medical images with heavy-payload watermarks causes perceptual degradation of the image, which directly affects medical diagnosis. To maintain the perceptual and diagnostic quality of the image during watermarking, the watermark should be losslessly compressed. This paper focuses on watermarking of ultrasound medical images with Lempel-Ziv-Welch (LZW) lossless-compressed watermarks. Lossless compression reduces the watermark payload without data loss. In this research work, the watermark is the combination of a defined region of interest (ROI) and an image watermarking secret key. The performance of the LZW compression technique was compared with other conventional compression methods based on compression ratio. LZW was found to perform better and was used for lossless watermark compression in ultrasound medical image watermarking. Tabulated results show the reduction in watermark bits and image watermarking with effective tamper detection and lossless recovery.
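
    The record names LZW but gives no implementation detail; the following is a minimal sketch of the LZW scheme, compressing a byte string into a list of dictionary codes (decompression mirrors the process):

        def lzw_compress(data: bytes) -> list:
            # Dictionary starts with all 256 single-byte strings.
            dictionary = {bytes([i]): i for i in range(256)}
            w = b""
            codes = []
            for b in data:
                wc = w + bytes([b])
                if wc in dictionary:
                    w = wc                               # keep extending the current phrase
                else:
                    codes.append(dictionary[w])          # emit code for longest match
                    dictionary[wc] = len(dictionary)     # register the new phrase
                    w = bytes([b])
            if w:
                codes.append(dictionary[w])
            return codes

        # Repetitive input compresses well: 10 input bytes become 6 codes here.
        print(lzw_compress(b"ABABABABAB"))   # [65, 66, 256, 258, 257, 66]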

  18. Extramedullary hematopoiesis presented as cytopenia and massive paraspinal masses leading to cord compression in a patient with hereditary persistence of fetal hemoglobin.

    Science.gov (United States)

    Katchi, Tasleem; Kolandaivel, Krishna; Khattar, Pallavi; Farooq, Taliya; Islam, Humayun; Liu, Delong

    2016-01-01

    Extramedullary hematopoiesis (EMH) can occur in various physiological and pathologic states. The spleen is the most common site of EMH. We report a case of hereditary persistence of fetal hemoglobin with extramedullary hematopoiesis presenting as cord compression and cytopenia secondary to multiple paraspinal masses. Treatment can be a challenge. Relapse is a possibility.

  19. Massive-Star Magnetospheres: Now in 3-D!

    Science.gov (United States)

    Townsend, Richard

    Magnetic fields are unexpected in massive stars, due to the absence of a dynamo convection zone beneath their surface layers. Nevertheless, kilogauss-strength, ordered fields were detected in a small subset of these stars over three decades ago, and the intervening years have witnessed the steady expansion of this subset. A distinctive feature of magnetic massive stars is that they harbor magnetospheres --- circumstellar environments where the magnetic field interacts strongly with the star's radiation-driven wind, confining it and channelling it into energetic shocks. A wide range of observational signatures are associated with these magnetospheres, in diagnostics ranging from X-rays all the way through to radio emission. Moreover, these magnetospheres can play an important role in massive-star evolution, by amplifying angular momentum loss in the wind. Recent progress in understanding massive-star magnetospheres has largely been driven by magnetohydrodynamical (MHD) simulations. However, these have been restricted to two-dimensional axisymmetric configurations, with three-dimensional configurations possible only in certain special cases. These restrictions are limiting further progress; we therefore propose to develop completely general three-dimensional models for the magnetospheres of massive stars, on the one hand to understand their observational properties and exploit them as plasma-physics laboratories, and on the other to gain a comprehensive understanding of how they influence the evolution of their host star. For weak- and intermediate-field stars, the models will be based on 3-D MHD simulations using a modified version of the ZEUS-MP code. For strong-field stars, we will extend our existing Rigid Field Hydrodynamics (RFHD) code to handle completely arbitrary field topologies. To explore a putative 'photoionization-moderated mass loss' mechanism for massive-star magnetospheres, we will also further develop a photoionization code we have recently

  20. Key Technologies in Massive MIMO

    Directory of Open Access Journals (Sweden)

    Hu Qiang

    2018-01-01

    Full Text Available The explosive growth of wireless data traffic expected in the future fifth-generation mobile communication system (5G) has led researchers to develop new disruptive technologies. As an extension of traditional MIMO, massive MIMO can greatly improve throughput and energy efficiency, and can effectively improve link reliability and data transmission rate, making it an important research direction for 5G wireless communication. Massive MIMO has developed rapidly over the past three years; by greatly increasing the number of antennas and operating in duplex communication mode, it raises system spectral efficiency to an unprecedented level.

  1. Comparison of the effectiveness of compression stockings and layer compression systems in venous ulceration treatment

    Science.gov (United States)

    Jawień, Arkadiusz; Cierzniakowska, Katarzyna; Cwajda-Białasik, Justyna; Mościcka, Paulina

    2010-01-01

    Introduction: The aim of the research was to compare the dynamics of venous ulcer healing when treated with compression stockings as well as original two- and four-layer bandage systems. Material and methods: A group of 46 patients suffering from venous ulcers was studied, consisting of 36 (78.3%) women and 10 (21.7%) men aged between 41 and 88 years (mean age 66.6 years, median 67). Patients were randomized into three groups, for treatment with the ProGuide two-layer system, Profore four-layer compression, and compression stockings class II. In the case of multi-layer compression, compression ensuring 40 mmHg pressure at ankle level was used. Results: In all patients, independently of the type of compression therapy, statistically significant changes of ulceration area in time were observed (Student's t test for matched pairs, p < 0.05). The largest decrease of ulceration area in each of the successive measurements was observed in patients treated with the four-layer system – on average 0.63 cm²/week. The smallest loss of ulceration area was observed in patients using compression stockings – on average 0.44 cm²/week. However, the observed differences were not statistically significant (Kruskal-Wallis test H = 4.45, p > 0.05). Conclusions: Systematic compression therapy, applied with a preliminary pressure of 40 mmHg, is an effective method of conservative treatment of venous ulcers. Compression stockings and the prepared multi-layer compression systems were characterized by similar clinical effectiveness. PMID:22419941

  2. Report on the behalf of the Foreign Affairs, Defense and Armed Forces Commission on the bill project, adopted by the National Assembly, related to the struggle against the proliferation of arms of massive destruction and their vectors; Rapport fait au nom de la commission des affaires etrangeres, de la defense et des forces armees (1) sur le projet de loi, ADOPTE PAR L'ASSEMBLEE NATIONALE, relatif a la lutte contre la proliferation des armes de destruction massive et de leurs vecteurs

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2011-07-01

    This report recalls the origins of the bill project, which implements UN Security Council resolution 1540, the aim of which was to promote the setting up of efficient tools to fight proliferation. The bill project aims at updating and reinforcing the existing legal arsenal. The report also contains remarks made by the Commission. The bill project addresses several issues: the fight against the proliferation of weapons of mass destruction (nuclear weapons, nuclear materials, biological weapons, and chemical weapons), the fight against the proliferation of their delivery systems, dual-use goods, and the use of these weapons and delivery systems in acts of terrorism

  3. Understanding renal nuclear protein accumulation: an in vitro approach to explain an in vivo phenomenon.

    Science.gov (United States)

    Luks, Lisanne; Maier, Marcia Y; Sacchi, Silvia; Pollegioni, Loredano; Dietrich, Daniel R

    2017-11-01

    Proper subcellular trafficking is essential to prevent protein mislocalization and aggregation. Transport of the peroxisomal enzyme D-amino acid oxidase (DAAO) appears dysregulated by specific pharmaceuticals, e.g., the anti-overactive bladder drug propiverine or a norepinephrine/serotonin reuptake inhibitor (NSRI), resulting in massive cytosolic and nuclear accumulations in rat kidney. To assess the underlying molecular mechanism of the latter, we aimed to characterize the nature of peroxisomal and cyto-nuclear shuttling of human and rat DAAO overexpressed in three cell lines using confocal microscopy. Indeed, interference with peroxisomal transport via deletion of the PTS1 signal or PEX5 knockdown resulted in induced nuclear DAAO localization. Having demonstrated the absence of active nuclear import and employing variably sized mCherry- and/or EYFP-fusion proteins of DAAO and catalase, we showed that peroxisomal proteins ≤134 kDa can passively diffuse into mammalian cell nuclei, thereby contradicting the often-cited 40 kDa diffusion limit. Moreover, their inherent nuclear presence and nuclear accumulation subsequent to proteasome inhibition or abrogated peroxisomal transport suggests that nuclear localization is a characteristic in the lifecycle of peroxisomal proteins. Based on this molecular trafficking analysis, we suggest that pharmaceuticals like propiverine or an NSRI may interfere with peroxisomal protein targeting and import, consequently resulting in massive nuclear protein accumulation in vivo.

  4. Complicated Massive Choledochal Cyst: A Case Report | Okoromah ...

    African Journals Online (AJOL)

    Choledochal cysts are rare congenital anomalies resulting from congenital dilatations of the common bile duct (CBD) and usually they present during infancy with cholestatic jaundice. This report is on a massive-sized choledochal cyst associated with massive abdominal distention, respiratory embarrassment, postprandial ...

  5. The determination of nuclear matter temperature and density

    International Nuclear Information System (INIS)

    Wolf, K.L.

    1981-01-01

    The purpose of this paper is to review some of the things we have learned about nuclear matter under extreme conditions during the past few years in relativistic heavy-ion studies. High-energy heavy-ion collisions provide a unique mechanism for exploring the dependence of the nuclear potential energy ε(ρ, T) on the degree of compression and excitation, and may even show the existence of new phases of matter. Thus the determination of the nuclear equation of state remains the ultimate goal of many researchers in this field. (orig.)

  6. Correlations between quality indexes of chest compression.

    Science.gov (United States)

    Zhang, Feng-Ling; Yan, Li; Huang, Su-Fang; Bai, Xiang-Jun

    2013-01-01

    Cardiopulmonary resuscitation (CPR) is an emergency treatment for cardiopulmonary arrest, and chest compression is its most important and necessary part. The American Heart Association published the new Guidelines for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care in 2010 and demanded better performance of chest compression practice, especially in compression depth and rate. The current study explored the relationships among quality indexes of chest compression and identified the key points in chest compression training and practice. A total of 219 healthcare workers received chest compression training using the Laerdal ACLS advanced life support resuscitation model. The quality indexes of chest compression, including hand placement, compression rate, compression depth, and chest wall recoil, as well as self-reported fatigue time, were monitored by the Laerdal Computer Skills and Reporting System. The quality of chest compression was related to the gender of the compressor. The indexes in males, including self-reported fatigue time, the accuracy of compression depth, the compression rate, and the accuracy of compression rate, were higher than those in females. However, the accuracy of chest recoil was higher in females than in males. The quality indexes of chest compression were correlated with each other. The self-reported fatigue time was related to all the indexes except the compression rate. It is necessary to offer CPR training courses regularly. In clinical practice, it might be better to change the practitioner before fatigue sets in, especially for female or weaker practitioners. In training, more attention should be paid to the control of compression rate, in order to delay fatigue, guarantee sufficient compression depth, and improve the quality of chest compression.

  7. Does the quality of chest compressions deteriorate when the chest compression rate is above 120/min?

    Science.gov (United States)

    Lee, Soo Hoon; Kim, Kyuseok; Lee, Jae Hyuk; Kim, Taeyun; Kang, Changwoo; Park, Chanjong; Kim, Joonghee; Jo, You Hwan; Rhee, Joong Eui; Kim, Dong Hoon

    2014-08-01

    The quality of chest compressions, along with defibrillation, is the cornerstone of cardiopulmonary resuscitation (CPR), which is known to improve the outcome of cardiac arrest. We aimed to investigate the relationship between the compression rate and other CPR quality parameters including compression depth and recoil. A conventional CPR training for lay rescuers was performed 2 weeks before the 'CPR contest'. CPR Anytime training kits were distributed to the participants for self-training on their own in their own time. The participants were tested for two-person CPR in pairs. The quantitative and qualitative data regarding the quality of CPR were collected from a standardised checklist and SkillReporter, and compared by compression rate. A total of 161 teams consisting of 322 students, 116 men and 206 women, participated in the CPR contest. The mean depth and rate of chest compression were 49.0±8.2 mm and 110.2±10.2/min. Significantly deeper chest compressions were noted at rates over 120/min than at any other rate (47.0±7.4, 48.8±8.4, 52.3±6.7 mm; p=0.008). Chest compression depth was proportional to chest compression rate (r=0.206, p<0.001). The quality of chest compression, including chest compression depth and chest recoil, was compared by chest compression rate. Further evaluation regarding the upper limit of the chest compression rate is needed to ensure complete full chest wall recoil while maintaining an adequate chest compression depth.

  8. Environmentalists for nuclear energy

    International Nuclear Information System (INIS)

    Comby, B.

    2001-01-01

    Fossil fuels such as coal, oil, and gas massively pollute the Earth's atmosphere (CO, CO2, SOx, NOx...), provoking acid rain and changing the global climate by increasing the greenhouse effect, while nuclear energy does not contribute to these pollutions and presents well-founded environmental benefits. Renewable energies (solar, wind) not being able to deliver the amount of energy required by populations in developing and developed countries, nuclear energy is in fact the only clean and safe energy available to protect the planet during the 21st century. The first half of the book, titled The Atomic Paradox, describes in layman's language the risks of nuclear power, its environmental impact, quality and safety standards, waste management, why a power reactor is not a bomb, energy alternatives, nuclear weapons, and other major global and environmental problems. In each case the major conclusions are framed for greater emphasis. Although examples are taken from the French nuclear power program, the conclusions are equally valid elsewhere. The second half of the book is titled Information on Nuclear Energy and the Environment and briefly provides a historical survey, an explanation of the different types of radiation, radioactivity, dose effects of radiation, Chernobyl, medical uses of radiation, accident precautions, as well as a glossary of terms and abbreviations and a bibliography. (author)

  9. Subjective evaluation of compressed image quality

    Science.gov (United States)

    Lee, Heesub; Rowberg, Alan H.; Frank, Mark S.; Choi, Hyung-Sik; Kim, Yongmin

    1992-05-01

    Lossy data compression generates distortion, or error, in the reconstructed image, and the distortion becomes visible as the compression ratio increases. Even at the same compression ratio, the distortion appears differently depending on the compression method used. Because of the nonlinearity of the human visual system and of lossy data compression methods, we have subjectively evaluated the quality of medical images compressed with two different methods: an intraframe and an interframe coding algorithm. The raw evaluation data were analyzed statistically to measure interrater reliability and the reliability of an individual reader. The analysis of variance was also used to identify which compression method is statistically better, and from what compression ratio the quality of a compressed image is rated poorer than that of the original. Nine x-ray CT head images from three patients were used as test cases. Six radiologists participated in reading the 99 images (some were duplicates) compressed at four different levels: original, 5:1, 10:1, and 15:1. The six readers agreed more than by chance alone and their agreement was statistically significant, but there were large variations among readers as well as within a reader. The displacement-estimated interframe coding algorithm is significantly better in quality than the 2-D block DCT at significance level 0.05. Also, 10:1 compressed images with the interframe coding algorithm do not show any significant differences from the original at level 0.05.

  10. Observations of Bright Massive Stars Using Small Size Telescopes

    Science.gov (United States)

    Beradze, Sopia; Kochiashvili, Nino

    2017-11-01

    The size of a telescope determines the goals and objects of observations. During recent decades it has become more and more difficult to get photometric data of bright stars because most small telescopes no longer operate. But there are rather interesting questions connected to the properties of, and evolutionary ties between, different types of massive stars, and multi-wavelength photometric data are needed to address some of them. We present our plans for observing bright massive X-ray binaries, WR and LBV stars using a small telescope. All the stars presented in the poster are observational targets of Sopia Beradze's future PhD thesis. We have already obtained very interesting results on the reddening and possible future eruption of the massive hypergiant star P Cygni. Therefore, we decided to choose some additional interesting massive stars of different types for future observations. Massive stars play an important role in the chemical evolution of galaxies because they have very high mass-loss rates – up to 10⁻⁴ M⊙ per year. Our targets are at different evolutionary stages and three of them are members of massive binaries. We plan to carry out UBVRI photometric observations of these stars using the 48 cm Cassegrain telescope of the Abastumani Astrophysical Observatory.

  11. Extensive tumor reconstruction with massive allograft

    International Nuclear Information System (INIS)

    Zulmi Wan

    1999-01-01

    Massive deep-frozen bone allografts were implanted in four patients after wide tumor resection. Two cases were solitary proximal femur metastases, secondary to thyroid cancer and breast cancer respectively, while the other two were primary tumors, i.e. chondrosarcoma of the proximal humerus and osteosarcoma of the proximal femur. All were treated with a cemented alloprosthesis except in the upper limb, where shoulder fusion was performed. These techniques were augmented with a segment of free vascularised fibular composite graft to the proximal femur of the breast-cancer metastasis and to the proximal humerus chondrosarcoma. Coverage of the wound of the latter was also achieved with a latissimus dorsi flap. The present investigation demonstrated that the massive bone allografts were intimately anchored by host bone and that there was no evidence of aseptic loosening at the graft-cement interface. This study showed that, with good effective tumor control, reconstructive surgery with massive allografts represents a good alternative to prosthetic implants in tumors of the limbs. No infection was seen in any of the four cases

  12. Heavy flavours production in quark-gluon plasma formed in high energy nuclear reactions

    Science.gov (United States)

    Kloskinski, J.

    1985-01-01

    Results on the compression and temperatures of nuclear fireballs and on the relative yields of strange and charmed hadrons are given. The results show that temperatures above 300 MeV and large compressions are unlikely to be achieved in an average heavy-ion collision. In consequence, thermal production of charm is low. Strange particle production is, however, substantial and shows a clear temperature-threshold behavior.

  13. Massive Splenomegaly in Children: Laparoscopic Versus Open Splenectomy

    OpenAIRE

    Hassan, Mohamed E.; Al Ali, Khalid

    2014-01-01

    Background and Objectives: Laparoscopic splenectomy for massive splenomegaly is still a controversial procedure as compared with open splenectomy. We aimed to compare the feasibility of laparoscopic splenectomy versus open splenectomy for massive splenomegaly from different surgical aspects in children. Methods: The data of children aged

  14. Cost Evaluation with G4-ECONS Program for SI based Nuclear Hydrogen Production Plant

    International Nuclear Information System (INIS)

    Kim, Jong-ho; Lee, Ki-young; Kim, Yong-wan

    2014-01-01

    Contemporary hydrogen production is primarily based on fossil fuels, which is considered neither environmentally friendly nor economically efficient. To achieve the hydrogen economy, it is very important to produce massive amounts of hydrogen in a clean, safe and efficient way. Nuclear production of hydrogen would allow massive production of hydrogen at economic prices while avoiding environmental pollution and reducing the release of carbon dioxide. Nuclear production of hydrogen could thus become the enabling technology for the hydrogen economy. An economic assessment was performed for a nuclear hydrogen production plant consisting of a VHTR coupled with the SI cycle. For the study, G4-ECONS, developed by the EMWG of GIF, was appropriately modified to calculate the LUHC (levelized unit hydrogen cost), assuming 36 months of plant construction time, a 5% annual interest rate and a 12.6% fixed charge rate. In the G4-ECONS program, the LUHC is calculated by the following formula: LUHC = (Annualized TCIC + Annualized O&M Cost + Annualized Fuel Cycle Cost + Annualized D&D Cost) / Annual Hydrogen Production Rate
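
    The quoted formula is a simple ratio of annualized costs to annual output; a minimal sketch (the cost figures below are hypothetical, purely for illustration):

        def levelized_unit_hydrogen_cost(tcic, om, fuel_cycle, dd, h2_per_year):
            # All cost arguments are annualized ($/yr); h2_per_year in kg/yr.
            # Result is the levelized unit hydrogen cost in $/kg H2.
            return (tcic + om + fuel_cycle + dd) / h2_per_year

        # Hypothetical plant: $102M/yr total annualized cost, 50,000 t H2/yr.
        print(levelized_unit_hydrogen_cost(60e6, 25e6, 15e6, 2e6, 50e6))  # 2.04 $/kg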

  15. An evaluation of the 'phasing out nuclear' cost in France

    International Nuclear Information System (INIS)

    2012-01-01

    This document proposes a synthesis of an assessment of additional investments which would be needed when phasing out nuclear, as well as a study of impacts in terms of increase of electricity production cost, energy transmission and energy bill. It also addresses questions raised by a massive use of renewable energies. Two scenarios are compared to assess the cost of replacement of the nuclear fleet, at constant consumption: keeping a high level of nuclear energy with the development of photovoltaic and wind energy, or phasing out nuclear with a carbon constraint (progressive closing down of nuclear reactors by 2025). The study is based on an economic modelling of the electric system according to some principles and hypotheses which are presented in appendix

  16. MASSIVE BLACK HOLES IN STELLAR SYSTEMS: 'QUIESCENT' ACCRETION AND LUMINOSITY

    International Nuclear Information System (INIS)

    Volonteri, M.; Campbell, D.; Mateo, M.; Dotti, M.

    2011-01-01

    Only a small fraction of local galaxies harbor an accreting black hole, classified as an active galactic nucleus. However, many stellar systems are plausibly expected to host black holes, from globular clusters to nuclear star clusters, to massive galaxies. The mere presence of stars in the vicinity of a black hole provides a source of fuel via mass loss of evolved stars. In this paper, we assess the expected luminosities of black holes embedded in stellar systems of different sizes and properties, spanning a large range of masses. We model the distribution of stars and derive the amount of gas available to a central black hole through a geometrical model. We estimate the luminosity of the black holes under simple, but physically grounded, assumptions on the accretion flow. Finally, we discuss the detectability of 'quiescent' black holes in the local universe.

  17. A Massively Parallel Face Recognition System

    Directory of Open Access Journals (Sweden)

    Lahdenoja Olli

    2007-01-01

    Full Text Available We present methods for processing the LBPs (local binary patterns) with massively parallel hardware, especially with the CNN-UM (cellular nonlinear network-universal machine). In particular, we present a framework for implementing a massively parallel face recognition system, including a dedicated highly accurate algorithm suitable for various types of platforms (e.g., CNN-UM and digital FPGA). We study in detail a dedicated mixed-mode implementation of the algorithm and estimate its implementation cost in view of its performance and accuracy restrictions.

  18. A Massively Parallel Face Recognition System

    Directory of Open Access Journals (Sweden)

    Ari Paasio

    2006-12-01

    Full Text Available We present methods for processing the LBPs (local binary patterns) with massively parallel hardware, especially with the CNN-UM (cellular nonlinear network-universal machine). In particular, we present a framework for implementing a massively parallel face recognition system, including a dedicated highly accurate algorithm suitable for various types of platforms (e.g., CNN-UM and digital FPGA). We study in detail a dedicated mixed-mode implementation of the algorithm and estimate its implementation cost in view of its performance and accuracy restrictions.

  19. Spectral and Energy Efficient Low-Overhead Uplink and Downlink Channel Estimation for 5G Massive MIMO Systems

    Directory of Open Access Journals (Sweden)

    Imran Khan

    2018-01-01

    Full Text Available Uplink and downlink channel estimation in massive Multiple Input Multiple Output (MIMO) systems is an intricate issue because of the increasing channel matrix dimensions. The channel feedback overhead using traditional codebook schemes is very large, which consumes more bandwidth and decreases the overall system efficiency. The purpose of this paper is to decrease the channel estimation overhead by taking advantage of sparse attributes and also to optimize the Energy Efficiency (EE) of the system. To cope with this issue, we propose a novel approach using Compressed Sensing (CS), Block Iterative-Support-Detection (Block-ISD), Angle-of-Departure (AoD) and Structured Compressive Sampling Matching Pursuit (S-CoSaMP) algorithms to reduce the channel estimation overhead, and compare them with traditional algorithms. CS uses the temporal correlation of time-varying channels to produce a Differential Channel Impulse Response (DCIR) between two CIRs that are adjacent in time slots; the DCIR is sparser than the conventional CIRs and can thus be compressed more easily. Block-ISD uses the spatial correlation of the channels to obtain block sparsity, which results in lower pilot overhead. AoD quantization exploits the fact that path AoDs vary more slowly than path gains, and this information is utilized to reduce the overhead. S-CoSaMP deploys structured sparsity to obtain reliable Channel State Information (CSI). MATLAB simulation results show that the proposed CS-based algorithms reduce the feedback and pilot overhead by a significant percentage and also improve the system capacity as compared with the traditional algorithms. Moreover, the EE level increases with increasing Base Station (BS) density, UE density and lower hardware impairment levels.
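
    The greedy pursuits named above (Block-ISD, S-CoSaMP) are not spelled out in the record; as a baseline illustration of the same family of sparse-recovery methods, here is a minimal orthogonal matching pursuit (OMP) recovering a sparse channel impulse response from a pilot measurement matrix (all dimensions, tap positions and values are hypothetical):

        import numpy as np

        def omp(A, y, sparsity):
            # Greedy sparse recovery of x from y = A @ x.
            residual = y.copy()
            support = []
            x_hat = np.zeros(A.shape[1])
            for _ in range(sparsity):
                # Select the column most correlated with the current residual.
                idx = int(np.argmax(np.abs(A.T @ residual)))
                support.append(idx)
                # Re-fit all selected taps by least squares.
                coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coeffs
            x_hat[support] = coeffs
            return x_hat

        rng = np.random.default_rng(0)
        A = rng.standard_normal((32, 64)) / np.sqrt(32)   # 32 pilot measurements, 64 taps
        h = np.zeros(64)
        h[[3, 17, 42]] = [1.0, -0.5, 0.8]                 # 3-sparse channel impulse response
        print(np.flatnonzero(omp(A, A @ h, 3)))           # recovered tap positions: [3 17 42]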

  20. Extramedullary hematopoiesis presented as cytopenia and massive paraspinal masses leading to cord compression in a patient with hereditary persistence of fetal hemoglobin

    OpenAIRE

    Katchi, Tasleem; Kolandaivel, Krishna; Khattar, Pallavi; Farooq, Taliya; Islam, Humayun; Liu, Delong

    2016-01-01

    Background: Extramedullary hematopoiesis (EMH) can occur in various physiological and pathologic states. The spleen is the most common site of EMH. Case presentation: We report a case of hereditary persistence of fetal hemoglobin with extramedullary hematopoiesis presenting as cord compression and cytopenia secondary to multiple paraspinal masses. Conclusion: Treatment can be a challenge. Relapse is a possibility.

  1. Comparison of chest compression quality between the modified chest compression method with the use of smartphone application and the standardized traditional chest compression method during CPR.

    Science.gov (United States)

    Park, Sang-Sub

    2014-01-01

    The purpose of this study was to determine the difference in chest compression quality between a modified chest compression method guided by a smartphone application and the standardized traditional chest compression method. Of the 70 people who agreed to participate after completing the CPR curriculum, 64 took part (6 were absent). Participants using the modified method formed the smartphone group (33 people); those using the standardized method formed the traditional group (31 people). Both groups used the same manikins for practice and evaluation, and the smartphone group used applications running under the Android and iOS operating systems on two smartphone products (G, i). The measurements were conducted from September 25th to 26th, 2012, and the data were analyzed with SPSS WIN 12.0. Compression depth was more adequate (p < 0.01) in the traditional group (53.77 mm) than in the smartphone group (48.35 mm), and the proportion of proper chest compressions was also higher (p < 0.05) in the traditional group (73.96%) than in the smartphone group (60.51%). As for the awareness of chest compression accuracy, the traditional group (3.83 points) scored higher (p < 0.001) than the smartphone group (2.32 points). In a questionnaire administered to the smartphone group only, the main reasons given against the modified method were hand-back pain in the rescuer (48.5%) and unstable posture (21.2%).

  2. Gyroid Phase in Nuclear Pasta

    International Nuclear Information System (INIS)

    Nakazato, Ken'ichiro; Oyamatsu, Kazuhiro; Yamada, Shoichi

    2009-01-01

    Nuclear matter is considered to be inhomogeneous at the subnuclear densities realized in supernova cores and neutron star crusts, and the structures of nuclear matter change from spheres to cylinders, slabs, cylindrical holes, and spherical holes as the density increases. In this Letter, we discuss other possible structures, namely gyroid and double-diamond morphologies, which are periodic bicontinuous structures first discovered in block copolymers. Utilizing the compressible liquid drop model, we show that there is a chance of gyroid appearance near the transition point from cylinder to slab, and that the volume fraction at this point is similar for nuclear and polymer systems. Although the five shapes listed above have long been thought to be the only major constituents of so-called nuclear pasta at subnuclear densities, our findings imply that this belief needs to be reconsidered.

  3. Nuclearity, split-property and duality for the Klein-Gordon field in curved spacetime

    International Nuclear Information System (INIS)

    Verch, R.

    1993-05-01

    Nuclearity, the split property and duality are established for the nets of von Neumann algebras associated with the representations of distinguished states of the massive Klein-Gordon field propagating in particular classes of curved spacetimes. (orig.)

  4. Wellhead compression

    Energy Technology Data Exchange (ETDEWEB)

    Harrington, Joe [Sertco Industries, Inc., Okemah, OK (United States); Vazquez, Daniel [Hoerbiger Service Latin America Inc., Deerfield Beach, FL (United States); Jacobs, Denis Richard [Hoerbiger do Brasil Industria de Equipamentos, Cajamar, SP (Brazil)

    2012-07-01

    Over time, all wells experience a natural decline in oil and gas production. In gas wells, the major problems are liquid loading and low downhole differential pressures, which negatively impact total gas production. As a form of artificial lift, wellhead compressors help reduce the tubing pressure, resulting in gas velocities above the critical velocity needed to surface water, oil and condensate, regaining lost production and increasing recoverable reserves. Best results come from reservoirs with high porosity, high permeability, high initial flow rates, low decline rates and high total cumulative production. In oil wells, excessive annulus gas pressure tends to inhibit both oil and gas production. Wellhead compression packages can provide a cost-effective solution to these problems by reducing the system pressure in the tubing or annulus, allowing for an immediate increase in production rates. Wells furthest from the gathering compressor typically benefit the most from wellhead compression due to system pressure drops. Downstream compressors also benefit from higher suction pressures, reducing overall compression horsepower requirements. Special care must be taken in selecting the best equipment for these applications. The successful implementation of wellhead compression from an economical standpoint hinges on the testing, installation and operation of the equipment. Key challenges, suggested equipment features designed to combat those challenges, and successful case histories throughout Latin America are discussed below. (author)

  5. Primordial inhomogeneities from massive defects during inflation

    Energy Technology Data Exchange (ETDEWEB)

    Firouzjahi, Hassan; Karami, Asieh; Rostami, Tahereh, E-mail: firouz@ipm.ir, E-mail: karami@ipm.ir, E-mail: t.rostami@ipm.ir [School of Astronomy, Institute for Research in Fundamental Sciences (IPM), P.O. Box 19395-5531, Tehran (Iran, Islamic Republic of)

    2016-10-01

    We consider the imprints of local massive defects, such as a black hole or a massive monopole, during inflation. The massive defect breaks the background homogeneity. We consider the limit in which the physical Schwarzschild radius of the defect is much smaller than the inflationary Hubble radius, so that a perturbative analysis is allowed. The inhomogeneities induced in the scalar and gravitational wave power spectra are calculated. We obtain the amplitudes of the dipole, quadrupole and octupole anisotropies in the curvature perturbation power spectrum and identify the configuration of the defect relative to the CMB sphere in which a large observable dipole asymmetry can be generated. We observe a curious reflection symmetry in which the configuration where the defect is inside the CMB comoving sphere has the same inhomogeneous variance as its mirror configuration where the defect is outside the CMB sphere.

  6. Probing the compressibility of tumor cell nuclei by combined atomic force-confocal microscopy

    Science.gov (United States)

    Krause, Marina; te Riet, Joost; Wolf, Katarina

    2013-12-01

    The cell nucleus is the largest and stiffest organelle, rendering it the limiting compartment during migration of invasive tumor cells through dense connective tissue. We here describe a combined atomic force microscopy (AFM)-confocal microscopy approach for measurement of bulk nuclear stiffness together with simultaneous visualization of the cantilever-nucleus contact and the fate of the cell. Using cantilevers functionalized with either tips or beads and spring constants ranging from 0.06–10 N m⁻¹, force-deformation curves were generated from nuclear positions of adherent HT1080 fibrosarcoma cell populations at unchallenged integrity, and a nuclear stiffness range of 0.2 to 2.5 kPa was identified depending on cantilever type and the use of extended fitting models. Chromatin-decondensating agent trichostatin A (TSA) induced nuclear softening of up to 50%, demonstrating the feasibility of our approach. Finally, using a stiff bead-functionalized cantilever pushing at maximal system-intrinsic force, the nucleus was deformed to 20% of its original height, which after TSA treatment reduced further to 5% remaining height, confirming chromatin organization as an important determinant of nuclear stiffness. Thus, combined AFM-confocal microscopy is a feasible approach to study nuclear compressibility to complement concepts of limiting nuclear deformation in cancer cell invasion and other biological processes.
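
    The record does not name its 'extended fitting models'; a standard baseline for bead-functionalized cantilevers is the Hertz model for a spherical indenter, F = (4/3) · E/(1−ν²) · √R · δ^(3/2). A minimal fitting sketch with a hypothetical bead radius and synthetic data:

        import numpy as np
        from scipy.optimize import curve_fit

        R = 5e-6    # bead radius (m) -- hypothetical value, not from the record
        NU = 0.5    # Poisson ratio, assuming an incompressible sample

        def hertz_sphere(delta, E):
            # Force (N) at indentation delta (m) for Young's modulus E (Pa).
            return (4.0 / 3.0) * (E / (1.0 - NU**2)) * np.sqrt(R) * delta**1.5

        # Synthetic force-indentation curve around 1 kPa, with a little noise.
        delta = np.linspace(0.0, 2e-6, 50)
        force = hertz_sphere(delta, 1e3) + 1e-10 * np.random.randn(delta.size)

        (E_fit,), _ = curve_fit(hertz_sphere, delta, force, p0=[500.0])
        print(f"fitted Young's modulus: {E_fit:.0f} Pa")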

  7. The INSTN trains the future professionals of nuclear industry

    International Nuclear Information System (INIS)

    Correa, P.

    2017-01-01

    The INSTN (Institute for Nuclear Sciences and Nuclear Technologies) is the applied school in nuclear technologies that has provided specialized and vocational training for 60 years. The integration of digital technologies has allowed the INSTN to adapt its teaching and to overcome difficulties such as distance, for instance by offering practical exercises on the ISIS experimental reactor over the web to foreign graduate schools. The INSTN has produced its first SPOC (Small Private Online Course) and is preparing two MOOCs (Massive Open Online Courses). Since 2016, the INSTN has been one of the two training centers appointed as 'collaborating centers' by the IAEA in the field of nuclear technologies and their industrial and radio-pharmaceutical applications. (A.C.)

  8. A dearth of short-period massive binaries in the young massive star forming region M 17. Evidence for a large orbital separation at birth?

    Science.gov (United States)

    Sana, H.; Ramírez-Tannus, M. C.; de Koter, A.; Kaper, L.; Tramper, F.; Bik, A.

    2017-03-01

    Aims: The formation of massive stars remains poorly understood and little is known about their birth multiplicity properties. Here, we aim to quantitatively investigate the strikingly low radial-velocity dispersion measured for a sample of 11 massive pre- and near-main-sequence stars (σ1D = 5.6 ± 0.2 km s⁻¹) in the very young massive star forming region M 17, in order to obtain first constraints on the multiplicity properties of young massive stellar objects. Methods: We compute the radial-velocity dispersion of synthetic populations of massive stars for various multiplicity properties and we compare the obtained σ1D distributions to the observed value. We specifically investigate two scenarios: a low binary fraction and a dearth of short-period binary systems. Results: Simulated populations with low binary fractions or with truncated period distributions (Pcutoff > 9 months) are able to reproduce the low σ1D observed within their 68%-confidence intervals. Furthermore, parent populations with fbin > 0.42 or Pcutoff < 47 d can be rejected at the 5%-significance level. Both constraints are in stark contrast with the high binary fraction and plethora of short-period systems in few-Myr-old, well-characterized OB-type populations. To explain the difference in the context of the first scenario would require a variation of the outcome of the massive star formation process. In the context of the second scenario, compact binaries must form later on, and the cut-off period may be related to physical length-scales representative of the bloated pre-main-sequence stellar radii or of their accretion disks. Conclusions: If the obtained constraints for M 17's massive-star population are representative of the multiplicity properties of massive young stellar objects, our results may provide support to a massive star formation process in which binaries are initially formed at larger separations, then harden or migrate to produce the typical (untruncated) power-law period
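
    A toy version of the synthetic-population test described above: draw binary companions with a given binary fraction and period cutoff, compute single-epoch radial velocities for circular orbits, and collect the dispersion of an 11-star sample (the masses, mass-ratio and period distributions below are illustrative assumptions, not the paper's):

        import numpy as np

        G_MSUN = 1.327e11   # G * Msun in km^3 s^-2
        rng = np.random.default_rng(1)

        def sigma_1d_draws(n_stars=11, f_bin=0.7, p_cut_days=1.4, n_draws=5000):
            # Distribution of sample RV dispersions for one set of multiplicity properties.
            sigmas = np.empty(n_draws)
            for k in range(n_draws):
                rv = np.zeros(n_stars)
                is_bin = rng.random(n_stars) < f_bin
                n_bin = int(is_bin.sum())
                # Log-uniform periods between the cutoff and ~10 years, in seconds.
                P = 10 ** rng.uniform(np.log10(p_cut_days), np.log10(3650.0), n_bin) * 86400.0
                m1 = np.full(n_bin, 15.0)                # primary masses (Msun), assumed
                m2 = rng.uniform(0.1, 1.0, n_bin) * m1   # flat mass-ratio distribution
                sin_i = np.sin(np.arccos(rng.uniform(0.0, 1.0, n_bin)))  # random orbit planes
                # Radial-velocity semi-amplitude of the primary for a circular orbit (km/s).
                K1 = (2*np.pi*G_MSUN/P)**(1/3) * m2 * sin_i / (m1 + m2)**(2/3)
                rv[is_bin] = K1 * np.sin(rng.uniform(0.0, 2*np.pi, n_bin))
                sigmas[k] = rv.std()
            return sigmas

        # Typical OB-like multiplicity properties yield dispersions far above 5.6 km/s.
        print(np.percentile(sigma_1d_draws(), [16, 50, 84]))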

  9. Tree compression with top trees

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Landau, Gad M.

    2013-01-01

    We introduce a new compression scheme for labeled trees based on top trees [3]. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

  10. Tree compression with top trees

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Landau, Gad M.

    2015-01-01

    We introduce a new compression scheme for labeled trees based on top trees. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

  11. Peering to the Heart of Massive Star Birth

    Science.gov (United States)

    Tan, Jonathan

    2015-10-01

    We propose a small survey of massive/intermediate-mass protostars with WFC3/IR to probe J and H band continuum emission, and Pa-beta and [FeII] emission. The protostar sample is already the subject of approved SOFIA-FORCAST observations from 10-40 microns. Combined with sophisticated radiative transfer models, these observations are providing the most detailed constraints on the nature of massive protostars, their luminosities, outflow cavity structures and orientations, and the distribution of surrounding dense core gas and dust. Recently, we were also awarded ALMA Cycle 3 time to study these sources at up to 0.14" resolution. The proposed HST observations, with very similar resolution, have three main goals: 1) Detect and characterize J and H band continuum emission from the massive/intermediate-mass protostars, which is expected to arise from jet and outflow knot features and from scattered light emerging from the outflow cavities; 2) Detect and characterize Pa-beta and [FeII] line emission tracing ionized and FUV-illuminated regions around the massive protostars, important diagnostics of the protostellar source and its outflow structure; 3) Search for lower-mass protostars that may be clustered around the forming massive protostar. All of these objectives will help test massive star formation theories. The high sensitivity and angular resolution of WFC3/IR enable these observations to be carried out efficiently in a timely fashion. Mid-Cycle observations are critical for near-contemporaneous observation with ALMA, since jet/outflow knots may have large proper motions, and to maximize the potential time baseline for a future HST study of jet/outflow proper motions.

  12. Massive stars in the Sagittarius Dwarf Irregular Galaxy

    Science.gov (United States)

    Garcia, Miriam

    2018-02-01

    Low metallicity massive stars hold the key to interpret numerous processes in the past Universe including re-ionization, starburst galaxies, high-redshift supernovae, and γ-ray bursts. The Sagittarius Dwarf Irregular Galaxy [SagDIG, 12+log(O/H) = 7.37] represents an important landmark in the quest for analogues accessible with 10-m class telescopes. This Letter presents low-resolution spectroscopy executed with the Gran Telescopio Canarias that confirms that SagDIG hosts massive stars. The observations unveiled three OBA-type stars and one red supergiant candidate. Pending confirmation from high-resolution follow-up studies, these could be the most metal-poor massive stars of the Local Group.

  13. Massive IIA string theory and Matrix theory compactification

    International Nuclear Information System (INIS)

    Lowe, David A.; Nastase, Horatiu; Ramgoolam, Sanjaye

    2003-01-01

    We propose a Matrix theory approach to Romans' massive Type IIA supergravity. It is obtained by applying the procedure of Matrix theory compactifications to Hull's proposal of the massive Type IIA string theory as M-theory on a twisted torus. The resulting Matrix theory is a super-Yang-Mills theory on large N three-branes with a space-dependent noncommutativity parameter, which is also independently derived by a T-duality approach. We give evidence showing that the energies of a class of physical excitations of the super-Yang-Mills theory show the correct symmetry expected from massive Type IIA string theory in a lightcone quantization

  14. Efficient Simulation of Compressible, Viscous Fluids using Multi-rate Time Integration

    Science.gov (United States)

    Mikida, Cory; Kloeckner, Andreas; Bodony, Daniel

    2017-11-01

    In the numerical simulation of problems of compressible, viscous fluids with single-rate time integrators, the global timestep used is limited to that of the finest mesh point or fastest physical process. This talk discusses the application of multi-rate Adams-Bashforth (MRAB) integrators to an overset mesh framework to solve compressible viscous fluid problems of varying scale with improved efficiency, with emphasis on the strategy of timescale separation and the application of the resulting numerical method to two sample problems: subsonic viscous flow over a cylinder and a viscous jet in crossflow. The results presented indicate the numerical efficacy of MRAB integrators, outline a number of outstanding code challenges, demonstrate the expected reduction in time enabled by MRAB, and emphasize the need for proper load balancing through spatial decomposition in order for parallel runs to achieve the predicted time-saving benefit. This material is based in part upon work supported by the Department of Energy, National Nuclear Security Administration, under Award Number DE-NA0002374.
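
    The timescale-separation strategy can be sketched in a few lines; here forward Euler stands in for the Adams-Bashforth family and the slow state is simply frozen during fast substeps, both simplifications relative to a production MRAB scheme:

        def multirate_euler(f_slow, f_fast, ys, yf, t_end, dt_slow, m):
            # The slow component takes steps of dt_slow; the fast component takes
            # m substeps of dt_slow/m per slow step, with the slow state frozen.
            dt_fast = dt_slow / m
            t = 0.0
            while t < t_end:
                ys_new = ys + dt_slow * f_slow(ys, yf)
                for _ in range(m):
                    yf = yf + dt_fast * f_fast(ys, yf)
                ys = ys_new
                t += dt_slow
            return ys, yf

        # Toy stiff coupling: slow mode (rate 0.1) driven by a fast mode (rate 50).
        f_slow = lambda s, f: -0.1 * s + 0.05 * f
        f_fast = lambda s, f: -50.0 * f + s
        print(multirate_euler(f_slow, f_fast, 1.0, 1.0, 1.0, 0.01, 10))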

  15. 29 CFR 1917.154 - Compressed air.

    Science.gov (United States)

    2010-07-01

    § 1917.154 Compressed air (29 CFR, Labor Regulations, Marine Terminals – Related Terminal Operations and Equipment). Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed a...

  16. Image quality (IQ) guided multispectral image compression

    Science.gov (United States)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example JPEG (DCT – discrete cosine transform), JPEG 2000 (DWT – discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW – Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the Structural Similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of three steps. The first step is to compress a set of images of interest with varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurements versus compression parameters over a number of compressed images. The third step is to compress the given image at the specified IQ using the compression method (JPEG, JPEG 2000, BPG, or TIFF) selected according to the regression models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR); or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) grayscale images showed very promising results.
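
    The distortion metrics named above are straightforward to compute; a minimal RMSE/PSNR sketch for 8-bit images (SSIM needs a windowed implementation such as scikit-image's structural_similarity, so it is only referenced here):

        import numpy as np

        def rmse(a, b):
            # Root mean square error between two images of equal shape.
            d = a.astype(np.float64) - b.astype(np.float64)
            return float(np.sqrt(np.mean(d * d)))

        def psnr(a, b, peak=255.0):
            # Peak signal-to-noise ratio in dB; higher means closer to the original.
            e = rmse(a, b)
            return float("inf") if e == 0.0 else 20.0 * np.log10(peak / e)

        original = np.random.default_rng(0).integers(0, 256, (64, 64))
        degraded = np.clip(original + np.random.default_rng(1).normal(0, 5, (64, 64)), 0, 255)
        print(psnr(original, degraded))   # roughly 34 dB for sigma = 5 noise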

  17. French presidential election: nuclear energy in candidates' program

    International Nuclear Information System (INIS)

    Le Ngoc, B.

    2017-01-01

    Generally, right-wing candidates consider nuclear energy a chance for France: it is an industrial asset for the country, it releases no greenhouse gases, and it has given France much of its energy independence. They are ready to reconsider the limitation imposed on the share of nuclear energy in the future energy mix and want to reinforce research on the next generations of reactors. The far-right candidate wishes to use nuclear energy massively to produce hydrogen in order to halve the consumption of fossil energies in 20 years. Left-wing candidates generally back the law on energy transition passed during the last legislature, which limits the nuclear power share to 50% while developing green energies. The far-left candidates wish a progressive and complete abandonment of nuclear energy. All candidates wish a greater share of renewable energies in the future energy mix. (A.C.)

  18. Application specific compression : final report.

    Energy Technology Data Exchange (ETDEWEB)

    Melgaard, David Kennett; Byrne, Raymond Harry; Myers, Daniel S.; Harrison, Carol D.; Lee, David S.; Lewis, Phillip J.; Carlson, Jeffrey J.

    2008-12-01

    With the continuing development of more capable data-gathering sensors comes an increased demand on the bandwidth for transmitting larger quantities of data. To help counteract that trend, a study was undertaken to determine lossy data compression strategies that minimize the impact of compression on target detection and characterization. The survey of current compression techniques led us to the conclusion that wavelet compression was well suited for this purpose. Wavelet analysis essentially applies a low-pass and a high-pass filter to the data, converting the data into related coefficients that maintain spatial information as well as frequency information. Wavelet compression is achieved by zeroing the coefficients that pertain to the noise in the signal, i.e., the high-frequency, low-amplitude portion. This approach is well suited for our goal because it reduces the noise in the signal with only minimal impact on the larger, lower-frequency target signatures. The resulting coefficients can then be encoded using lossless techniques with higher compression levels because of the lower entropy and significant number of zeros. No significant signal degradation or difficulties in target characterization or detection were observed or measured when wavelet compression was applied to simulated and real data, even when over 80% of the coefficients were zeroed. While the exact level of compression will be data-set dependent, for the data sets we studied, compression factors over 10 were found to be satisfactory, where conventional lossless techniques achieved levels of less than 3.
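
    A minimal sketch of the coefficient-zeroing idea on a 1-D signal using PyWavelets; the wavelet, decomposition level, and keep fraction (20%, matching the report's 80% zeroing) are illustrative assumptions.

```python
import numpy as np
import pywt

def wavelet_compress(signal, wavelet="db4", level=4, keep=0.2):
    """Zero the small, noise-dominated detail coefficients; the sparse
    survivors then entropy-code well with a lossless back end."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    details = np.concatenate(coeffs[1:])
    thresh = np.quantile(np.abs(details), 1.0 - keep)
    kept = [coeffs[0]] + [pywt.threshold(c, thresh, mode="hard")
                          for c in coeffs[1:]]
    return pywt.waverec(kept, wavelet)

# Low-frequency "target signature" buried in broadband noise.
t = np.linspace(0.0, 1.0, 1024)
x = np.sin(2 * np.pi * 5 * t) \
    + 0.3 * np.random.default_rng(0).normal(size=t.size)
x_hat = wavelet_compress(x)
print("residual RMS:", np.sqrt(np.mean((x - x_hat[:x.size]) ** 2)))
```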

  19. A thermal active restrained shrinkage ring test to study the early age concrete behaviour of massive structures

    International Nuclear Information System (INIS)

    Briffaut, M.; Benboudjema, F.; Torrenti, J.M.; Nahas, G.

    2011-01-01

    In massive concrete structures, cracking may occur during hardening, especially if autogenous and thermal strains are restrained. The concrete permeability may rise significantly due to this cracking, increasing leakage (in tanks, nuclear containments, etc.) and reducing durability. The restrained shrinkage ring test is used to study early-age concrete behaviour (evolution of delayed strains and cracking). This test shows, at 20 °C and without drying, for a concrete mix representative of a French nuclear power plant containment vessel (w/c ratio equal to 0.57), that the amplitude of autogenous shrinkage (about 40 μm/m for the studied mix) is not high enough to cause cracking. Indeed, in this configuration thermal shrinkage is not significant, whereas it is a major concern for massive structures. Therefore, an active test has been developed to study cracking due to restrained thermal shrinkage. This test is an evolution of the classical restrained shrinkage ring test that takes into account both autogenous and thermal shrinkage. Its principle is to create the thermal strain effects by increasing the temperature of the brass ring (by a fluid circulation) in order to expand it. With this test, early-age cracking due to restrained shrinkage and the influences of reinforcement and of construction joints have been studied experimentally. As expected, reinforcement leads to an increase in the number of cracks but a decrease in crack widths. Moreover, cracking occurs preferentially at the construction joint.

  20. Learning Nuclear Science with Marbles

    Science.gov (United States)

    Constan, Zach

    2010-01-01

    Nuclei are "small": if an atom was the size of a football field, the nucleus would be an apple sitting on the 50-yd line. At the same time, nuclei are "dense": the Earth, compressed to nuclear density, could fit inside four Sears Towers. The subatomic level is strange and exotic. For that reason, it's not hard to get young minds excited about…

  1. Interactions between massive dark halos and warped disks

    NARCIS (Netherlands)

    Kuijken, K; Persic, M; Salucci, P

    1997-01-01

    The normal mode theory for warping of galaxy disks, in which disks are assumed to be tilted with respect to the equator of a massive, flattened dark halo, assumes a rigid, fixed halo. However, consideration of the back-reaction by a misaligned disk on a massive particle halo shows there to be strong

  2. Compressibility of the protein-water interface

    Science.gov (United States)

    Persson, Filip; Halle, Bertil

    2018-06-01

    The compressibility of a protein relates to its stability, flexibility, and hydrophobic interactions, but the measurement, interpretation, and computation of this important thermodynamic parameter present technical and conceptual challenges. Here, we present a theoretical analysis of protein compressibility and apply it to molecular dynamics simulations of four globular proteins. Using additively weighted Voronoi tessellation, we decompose the solution compressibility into contributions from the protein and its hydration shells. We find that positively cross-correlated protein-water volume fluctuations account for more than half of the protein compressibility that governs the protein's pressure response, while the self correlations correspond to small (˜0.7%) fluctuations of the protein volume. The self compressibility is nearly the same as for ice, whereas the total protein compressibility, including cross correlations, is ˜45% of the bulk-water value. Taking the inhomogeneous solvent density into account, we decompose the experimentally accessible protein partial compressibility into intrinsic, hydration, and molecular exchange contributions and show how they can be computed with good statistical accuracy despite the dominant bulk-water contribution. The exchange contribution describes how the protein solution responds to an applied pressure by redistributing water molecules from lower to higher density; it is negligibly small for native proteins, but potentially important for non-native states. Because the hydration shell is an open system, the conventional closed-system compressibility definitions yield a pseudo-compressibility. We define an intrinsic shell compressibility, unaffected by occupation number fluctuations, and show that it approaches the bulk-water value exponentially with a decay "length" of one shell, less than the bulk-water compressibility correlation length. In the first hydration shell, the intrinsic compressibility is 25%-30% lower than in

  4. Reconstructing the massive black hole cosmic history through gravitational waves

    International Nuclear Information System (INIS)

    Sesana, Alberto; Gair, Jonathan; Berti, Emanuele; Volonteri, Marta

    2011-01-01

    The massive black holes we observe in galaxies today are the natural end-product of a complex evolutionary path, in which black holes seeded in proto-galaxies at high redshift grow through cosmic history via a sequence of mergers and accretion episodes. Electromagnetic observations probe a small subset of the population of massive black holes (namely, those that are active or those that are very close to us), but planned space-based gravitational wave observatories such as the Laser Interferometer Space Antenna (LISA) can measure the parameters of 'electromagnetically invisible' massive black holes out to high redshift. In this paper we introduce a Bayesian framework to analyze the information that can be gathered from a set of such measurements. Our goal is to connect a set of massive black hole binary merger observations to the underlying model of massive black hole formation. In other words, given a set of observed massive black hole coalescences, we assess what information can be extracted about the underlying massive black hole population model. For concreteness we consider ten specific models of massive black hole formation, chosen to probe four important (and largely unconstrained) aspects of the input physics used in structure formation simulations: seed formation, metallicity 'feedback', accretion efficiency and accretion geometry. For the first time we allow for the possibility of 'model mixing', by drawing the observed population from some combination of the 'pure' models that have been simulated. A Bayesian analysis allows us to recover a posterior probability distribution for the 'mixing parameters' that characterize the fractions of each model represented in the observed distribution. Our work shows that LISA has enormous potential to probe the underlying physics of structure formation.
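
    The model-mixing inference can be illustrated with a deliberately simplified sketch: two hypothetical 'pure' models predict different binned distributions of some merger observable, and a flat-prior grid posterior is computed for the mixing fraction. The bin probabilities and event counts below are invented for illustration and bear no relation to the paper's simulations.

```python
import numpy as np

# Hypothetical binned distributions of a merger observable under two
# "pure" formation models (illustrative numbers only).
p_light_seed = np.array([0.6, 0.3, 0.1])
p_heavy_seed = np.array([0.1, 0.3, 0.6])

def mixing_posterior(counts, grid=np.linspace(0.0, 1.0, 501)):
    """Flat-prior posterior over the fraction f of the light-seed model,
    for multinomially binned merger observations."""
    logpost = np.array([
        np.sum(counts * np.log(f * p_light_seed + (1 - f) * p_heavy_seed))
        for f in grid
    ])
    post = np.exp(logpost - logpost.max())
    post /= post.sum() * (grid[1] - grid[0])     # normalize on the grid
    return grid, post

grid, post = mixing_posterior(counts=np.array([12, 9, 4]))
print("posterior mean mixing fraction:",
      np.sum(grid * post) * (grid[1] - grid[0]))
```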

  5. EFFECTIVENESS OF ADJUVANT USE OF POSTERIOR MANUAL COMPRESSION WITH GRADED COMPRESSION IN THE SONOGRAPHIC DIAGNOSIS OF ACUTE APPENDICITIS

    Directory of Open Access Journals (Sweden)

    Senthilnathan V

    2018-01-01

    Full Text Available BACKGROUND Diagnosing appendicitis by graded-compression ultrasonography is a difficult task because of limiting factors such as operator-dependent technique, retrocaecal location of the appendix, and patient obesity. The posterior manual compression technique visualizes the appendix better on grey-scale ultrasonography. The aim of this study is to determine the accuracy of ultrasound in detecting or excluding acute appendicitis and to evaluate the usefulness of the adjuvant use of the posterior manual compression technique in visualization of the appendix and in the diagnosis of acute appendicitis. MATERIALS AND METHODS This prospective study involved a total of 240 patients of all age groups and both sexes. All these patients underwent ultrasonography for suspected appendicitis. Ultrasonography was performed with transverse and longitudinal graded-compression sonography. If the appendix was not visualized on graded-compression sonography, the posterior manual compression technique was used to further improve detection of the appendix. RESULTS The vermiform appendix was visualized in 185 of 240 patients (77.1%) with graded compression alone. The 55 patients whose appendix could not be visualized by graded compression alone underwent graded compression followed by the posterior manual compression technique; the appendix was then visualized in 43 of these 55 patients (78.2%) and could not be visualized in the remaining 12 (21.8%). CONCLUSION The combined method of graded compression with posterior manual compression is better than graded compression alone in diagnostic accuracy and detection rate of the vermiform appendix.

  6. Massive vulval oedema in multiple pregnancies at Bugando Medical ...

    African Journals Online (AJOL)

    In this report we describe two cases of massive vulval oedema seen in two ... passage of yellow-whitish discharge per vagina (Figure 1). Examination revealed massive oedema, and digital vaginal examination was difficult due to tenderness.

  7. A statistical–mechanical view on source coding: physical compression and data compression

    International Nuclear Information System (INIS)

    Merhav, Neri

    2011-01-01

    We draw a certain analogy between the classical information-theoretic problem of lossy data compression (source coding) of memoryless information sources and the statistical–mechanical behavior of a certain model of a chain of connected particles (e.g. a polymer) that is subjected to a contracting force. The free energy difference pertaining to such a contraction turns out to be proportional to the rate-distortion function in the analogous data compression model, and the contracting force is proportional to the derivative of this function. Beyond the fact that this analogy may be interesting in its own right, it may provide a physical perspective on the behavior of optimum schemes for lossy data compression (and perhaps also an information-theoretic perspective on certain physical system models). Moreover, it triggers the derivation of lossy compression performance for systems with memory, using analysis tools and insights from statistical mechanics
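
    For reference, the information-theoretic objects entering the analogy can be stated as follows (a standard formulation; normalizations and sign conventions may differ from the paper's):

```latex
R(D) \;=\; \min_{p(\hat{x}\mid x)\,:\;\mathbb{E}\,d(X,\hat{X})\,\le\,D} I(X;\hat{X}),
\qquad
s(D) \;=\; \frac{\mathrm{d}R(D)}{\mathrm{d}D} \;\le\; 0,
```

    so that, in the analogy described above, the free-energy difference of the contracted chain plays the role of $R(D)$ and the contracting force is proportional to the slope $s(D)$.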

  8. Climatic Effects of Regional Nuclear War

    Science.gov (United States)

    Oman, Luke D.

    2011-01-01

    We use a modern climate model and new estimates of smoke generated by fires in contemporary cities to calculate the response of the climate system to a regional nuclear war between emerging third world nuclear powers using 100 Hiroshima-size bombs (less than 0.03% of the explosive yield of the current global nuclear arsenal) on cities in the subtropics. We find significant cooling and reductions of precipitation lasting years, which would impact the global food supply. The climate changes are large and long-lasting because the fuel loadings in modern cities are quite high and the subtropical solar insolation heats the resulting smoke cloud and lofts it into the high stratosphere, where removal mechanisms are slow. While the climate changes are less dramatic than found in previous "nuclear winter" simulations of a massive nuclear exchange between the superpowers, because less smoke is emitted, the changes seem to be more persistent because of improvements in representing aerosol processes and microphysical/dynamical interactions, including radiative heating effects, in newer global climate system models. The assumptions and calculations that go into these conclusions will be described.

  9. Nonlinear viscoelasticity of pre-compressed layered polymeric composite under oscillatory compression

    KAUST Repository

    Xu, Yangguang

    2018-05-03

    Describing the nonlinear viscoelastic properties of polymeric composites subjected to dynamic loading is essential for the development of practical applications of such materials. An efficient and easy method to analyze nonlinear viscoelasticity remains elusive because the dynamic moduli (storage modulus and loss modulus) are not very convenient when the material falls into the nonlinear viscoelastic range. In this study, we utilize two methods, Fourier transform and geometrical nonlinear analysis, to quantitatively characterize the nonlinear viscoelasticity of a pre-compressed layered polymeric composite under oscillatory compression. We discuss the influences of pre-compression, dynamic loading, and the inner structure of the polymeric composite on the nonlinear viscoelasticity. Furthermore, we reveal the nonlinear viscoelastic mechanism by combining these results with other experimental results from quasi-static compressive tests and microstructural analysis. From a methodology standpoint, it is shown that both Fourier transform and geometrical nonlinear analysis are efficient tools for analyzing the nonlinear viscoelasticity of a layered polymeric composite. From a material standpoint, we consequently posit that the dynamic nonlinear viscoelasticity of polymeric composites with complicated inner structures can also be well characterized using these methods.
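
    A minimal sketch of the Fourier-transform step: under a sinusoidal compression, higher-harmonic intensities of the steady-state stress relative to the fundamental (e.g., $I_3/I_1$) quantify the nonlinearity. The synthetic response below, with a weak third harmonic, is illustrative and not the paper's measured data.

```python
import numpy as np

def harmonic_ratios(stress, n_periods, n_harmonics=5):
    """FFT the steady-state stress over an integer number of excitation
    periods and report higher-harmonic intensities I_n / I_1."""
    spec = np.abs(np.fft.rfft(stress))
    base = n_periods                       # bin index of the fundamental
    return {n: spec[n * base] / spec[base]
            for n in range(2, n_harmonics + 1)}

# Synthetic stress response with a 5% third harmonic.
t = np.linspace(0.0, 1.0, 4096, endpoint=False)
stress = np.sin(2 * np.pi * 8 * t) + 0.05 * np.sin(2 * np.pi * 24 * t)
print(harmonic_ratios(stress, n_periods=8))    # I3/I1 is approximately 0.05
```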

  10. Effect of compressibility on the hypervelocity penetration

    Science.gov (United States)

    Song, W. J.; Chen, X. W.; Chen, P.

    2018-02-01

    We further consider the effect of rod strength by employing a compressible penetration model to study the effect of compressibility on hypervelocity penetration. We also define penetration efficiency in various modified models and compare these efficiencies to identify the effects of different factors in the compressible model. To systematically discuss the effect of compressibility for different metallic rod-target combinations, we construct three cases: penetration by a more compressible rod into a less compressible target, by a rod into a target of similar compressibility, and by a less compressible rod into a more compressible target. The effects of volumetric strain, internal energy, and strength on the penetration efficiency are analyzed simultaneously. The results indicate that the compressibility of the rod and target increases the pressure at the rod/target interface. The more compressible rod/target has larger volumetric strain and higher internal energy. Both larger volumetric strain and higher strength enhance the penetration or anti-penetration ability, whereas higher internal energy weakens it. The two trends conflict, but the volumetric strain dominates the variation of the penetration efficiency, which would not approach the hydrodynamic limit unless the rod and target are of similar compressibility. If their compressibilities are similar, however, compressibility has little effect on the penetration efficiency.

  11. Remarks on search methods for stable, massive, elementary particles

    International Nuclear Information System (INIS)

    Perl, Martin L.

    2001-01-01

    This paper was presented at the 69th birthday celebration of Professor Eugene Commins, honoring his research achievements. These remarks are about the experimental techniques used in the search for new stable, massive particles, particles at least as massive as the electron. A variety of experimental methods such as accelerator experiments, cosmic ray studies, searches for halo particles in the galaxy and searches for exotic particles in bulk matter are described. A summary is presented of the measured limits on the existence of new stable, massive particles.

  12. On-Chip Neural Data Compression Based On Compressed Sensing With Sparse Sensing Matrices.

    Science.gov (United States)

    Zhao, Wenfeng; Sun, Biao; Wu, Tong; Yang, Zhi

    2018-02-01

    On-chip neural data compression is an enabling technique for wireless neural interfaces that suffer from insufficient bandwidth and power budgets to transmit the raw data. The data compression algorithm and its implementation should be power- and area-efficient and functionally reliable over different datasets. Compressed sensing is an emerging technique that has been applied to compress various neurophysiological data. However, state-of-the-art compressed sensing (CS) encoders leverage random but dense binary measurement matrices, which incur substantial implementation costs in both power and area that could offset the benefits of the reduced wireless data rate. In this paper, we propose two CS encoder designs based on sparse measurement matrices that lead to efficient hardware implementation. Specifically, two different approaches to the construction of sparse measurement matrices are exploited: the deterministic quasi-cyclic array code (QCAC) matrix and the sparse random binary matrix (SRBM). We demonstrate that the proposed CS encoders lead to comparable recovery performance, and efficient VLSI architecture designs are proposed for the QCAC-CS and SRBM encoders with reduced area and total power consumption.
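
    The hardware appeal of sparse measurement matrices is easy to see in a sketch: with a fixed small number of ones per column, each measurement is a sum of only a few samples. The construction below is a generic sparse random binary matrix with illustrative dimensions, not the paper's QCAC construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_binary_matrix(m, n, ones_per_column=4):
    """Binary measurement matrix with a fixed number of ones per column,
    so the encoder needs only a handful of additions per sample."""
    phi = np.zeros((m, n), dtype=np.int8)
    for col in range(n):
        rows = rng.choice(m, size=ones_per_column, replace=False)
        phi[rows, col] = 1
    return phi

n, m = 256, 64                        # 4x compression of a length-256 window
phi = sparse_binary_matrix(m, n)
x = np.zeros(n)
x[[10, 97, 200]] = [1.0, -0.5, 0.8]   # toy sparse frame
y = phi @ x                           # encoder output sent over the link
print(y.shape, np.count_nonzero(phi) / phi.size)
```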

  13. FRESCO: Referential compression of highly similar sequences.

    Science.gov (United States)

    Wandelt, Sebastian; Leser, Ulf

    2013-01-01

    In many applications, sets of similar texts or sequences are of high importance. Prominent examples are revision histories of documents or genomic sequences. Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, which store only the differences between a to-be-compressed input and a known reference sequence, have gained a lot of interest in this field. In this paper, we propose a general open-source framework to compress large amounts of biological sequence data called Framework for REferential Sequence COmpression (FRESCO). Our basic compression algorithm is shown to be one to two orders of magnitude faster than comparable related work, while achieving similar compression ratios. We also propose several techniques to further increase compression ratios, while still retaining the advantage in speed: 1) selecting a good reference sequence; and 2) rewriting a reference sequence to allow for better compression. In addition, we propose a new way of further boosting the compression ratios by applying referential compression to already referentially compressed files (second-order compression). This technique allows for compression ratios well beyond the state of the art, for instance, 4,000:1 and higher for human genomes. We evaluate our algorithms on a large data set from three different species (more than 1,000 genomes, more than 3 TB) and on a collection of versions of Wikipedia pages. Our results show that real-time compression of highly similar sequences at high compression ratios is possible on modern hardware.
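
    The core of any referential scheme can be sketched in a few lines: long stretches that match the reference become (position, length) pairs, everything else is emitted literally. This greedy quadratic scan is purely didactic; FRESCO itself indexes the reference (e.g., with k-mer hashing) and adds the optimizations described above.

```python
def ref_compress(target: str, reference: str, min_match: int = 8):
    """Greedy referential encoding: ("M", ref_pos, length) for long matches
    against the reference, ("L", char) literals otherwise."""
    out, i = [], 0
    while i < len(target):
        best_len, best_pos = 0, -1
        for j in range(len(reference)):            # didactic O(n*m) scan
            l = 0
            while (i + l < len(target) and j + l < len(reference)
                   and target[i + l] == reference[j + l]):
                l += 1
            if l > best_len:
                best_len, best_pos = l, j
        if best_len >= min_match:
            out.append(("M", best_pos, best_len))
            i += best_len
        else:
            out.append(("L", target[i]))
            i += 1
    return out

reference = "ACGTACGGTACGTTTACGTAGGCT" * 4
target = reference[:40] + "GGGG" + reference[40:]  # one small insertion
print(ref_compress(target, reference)[:4])
```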

  14. Dual descriptions of massive spin-2 particles in D=3+1

    International Nuclear Information System (INIS)

    Dalmazi, Denis

    2013-01-01

    Full text: Since the 1960s, one has speculated on the effects of a possible (tiny) mass for the graviton. One expects a decrease in the gravitational interaction at large distances, which comes in handy regarding the experimental data of the last 15 years on the accelerated expansion of the universe. There has been growing interest in massive quantum gravity in recent years. Almost all recent works are built on top of a free (quadratic) action for a massive spin-2 particle known as the massive Fierz-Pauli (FP) theory, which first appeared in 1939. In this theory the basic field is a symmetric rank-2 tensor. It is a common belief in the massive gravity community that the massive FP theory is the unique self-consistent (ghost-free, Poincare covariant, correct number of degrees of freedom) description of massive spin-2 particles in terms of a rank-2 tensor. We have shown recently that there are other possibilities if we start with a general (non-symmetric) rank-2 tensor. Here we show how our previous work is related to the well-known massive FP theory via the introduction of spectator fields of rank-0 (scalar) and rank-1 (vector). We comment on the introduction of interaction vertices and how they affect the free duality with the massive FP theory. (author)
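
    For orientation, the free massive Fierz-Pauli action referred to above adds to the linearized Einstein-Hilbert Lagrangian a mass term with a specific relative coefficient (conventions vary between references):

```latex
\mathcal{L}_{\mathrm{FP}}
\;=\; \mathcal{L}_{\mathrm{lin}}(h_{\mu\nu})
\;-\; \frac{m^{2}}{2}\left(h_{\mu\nu}h^{\mu\nu} - h^{2}\right),
\qquad h \equiv \eta^{\mu\nu}h_{\mu\nu},
```

    where the relative coefficient $-1$ between $h_{\mu\nu}h^{\mu\nu}$ and $h^2$ is the Fierz-Pauli tuning; any other ratio propagates an extra ghost-like scalar degree of freedom.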

  15. Comparing biological networks via graph compression

    Directory of Open Access Journals (Sweden)

    Hayashida Morihiro

    2010-09-01

    Full Text Available Abstract. Background: Comparison of various kinds of biological data is one of the main problems in bioinformatics and systems biology. Data compression methods have been applied to comparison of large sequence data and protein structure data. Since it is still difficult to compare global structures of large biological networks, it is reasonable to try to apply data compression methods to comparison of biological networks. In existing compression methods, the uniqueness of compression results is not guaranteed because there is some ambiguity in the selection of overlapping edges. Results: This paper proposes novel efficient methods, CompressEdge and CompressVertices, for comparing large biological networks. In the proposed methods, an original network structure is compressed by iteratively contracting identical edges and sets of connected edges. Then, the similarity of two networks is measured by a compression ratio of the concatenated networks. The proposed methods are applied to comparison of metabolic networks of several organisms, H. sapiens, M. musculus, A. thaliana, D. melanogaster, C. elegans, E. coli, S. cerevisiae, and B. subtilis, and are compared with an existing method. The results suggest that our methods can efficiently measure the similarities between metabolic networks. Conclusions: Our proposed algorithms, which compress node-labeled networks, are useful for measuring the similarity of large biological networks.
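
    The compression-ratio similarity idea can be imitated with a generic compressor in a few lines. The sketch below serializes edge lists and uses zlib in the spirit of a normalized compression distance; it is a stand-in for intuition only, not the authors' CompressEdge/CompressVertices edge-contraction algorithms.

```python
import zlib

def net_bytes(edges):
    """Canonical serialization so edge order does not affect the result."""
    return "\n".join(sorted(f"{u}\t{v}" for u, v in edges)).encode()

def compression_similarity(edges_a, edges_b):
    """Similarity via the compression ratio of the concatenated networks:
    shared structure makes the concatenation compress better."""
    a, b = net_bytes(edges_a), net_bytes(edges_b)
    ca, cb = len(zlib.compress(a)), len(zlib.compress(b))
    cab = len(zlib.compress(a + b"\n" + b))
    return (ca + cb - cab) / min(ca, cb)     # higher = more shared structure

g1 = [("glc", "g6p"), ("g6p", "f6p"), ("f6p", "fbp")]
g2 = [("glc", "g6p"), ("g6p", "f6p"), ("f6p", "e4p")]
print(compression_similarity(g1, g2))
```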

  16. Fixed-Rate Compressed Floating-Point Arrays.

    Science.gov (United States)

    Lindstrom, Peter

    2014-12-01

    Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.
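
    The fixed-rate idea — every block occupies the same number of bits, so block k lives at a known offset — can be illustrated with a block-floating-point toy: one shared exponent plus fixed-width integers per 4^d block. The real codec adds a decorrelating block transform and embedded coding, so treat this only as a sketch of the storage layout.

```python
import numpy as np

BITS = 12                                   # user-specified bits per value

def encode_block(block, bits=BITS):
    """One shared exponent + fixed-width integers: every block compresses
    to the same size, enabling random access at block granularity."""
    emax = int(np.ceil(np.log2(np.max(np.abs(block)) + 1e-300)))
    scale = 2.0 ** (bits - 1 - emax)
    q = np.clip(np.round(block * scale),
                -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return emax, q.astype(np.int32)

def decode_block(emax, q, bits=BITS):
    return q / 2.0 ** (bits - 1 - emax)

block = np.random.default_rng(1).normal(size=(4, 4))    # a 4^d block, d = 2
emax, q = encode_block(block)
print("max abs error:", np.max(np.abs(block - decode_block(emax, q))))
```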

  17. Quark–hadron phase transition in massive gravity

    Energy Technology Data Exchange (ETDEWEB)

    Atazadeh, K., E-mail: atazadeh@azaruniv.ac.ir

    2016-11-15

    We study the quark–hadron phase transition in the framework of massive gravity. We show that the modification of the FRW cosmological equations leads to the quark–hadron phase transition in the early massive Universe. Using numerical analysis, we consider that a phase transition based on the chiral symmetry breaking after the electroweak transition, occurred at approximately 10 μs after the Big Bang to convert a plasma of free quarks and gluons into hadrons.

  18. A rare case of massive hepatosplenomegaly due to acute ...

    African Journals Online (AJOL)

    massive hepatosplenomegaly include chronic lymphoproliferative malignancies, infections (malaria, leishmaniasis) and glycogen storage diseases (Gaucher's disease).[4] In our case the probable causes of the massive hepatosplenomegaly were a combination of late presentation after symptom onset, leukaemic infiltration.

  19. Magnetic fields and massive star formation

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Qizhou; Keto, Eric; Ho, Paul T. P.; Ching, Tao-Chung; Chen, How-Huan [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Qiu, Keping [School of Astronomy and Space Science, Nanjing University, 22 Hankou Road, Nanjing 210093 (China); Girart, Josep M.; Juárez, Carmen [Institut de Ciències de l' Espai, (CSIC-IEEC), Campus UAB, Facultat de Ciències, C5p 2, E-08193 Bellaterra, Catalonia (Spain); Liu, Hauyu; Tang, Ya-Wen; Koch, Patrick M.; Rao, Ramprasad; Lai, Shih-Ping [Academia Sinica Institute of Astronomy and Astrophysics, P.O. Box 23-141, Taipei 106, Taiwan (China); Li, Zhi-Yun [Department of Astronomy, University of Virginia, P.O. Box 400325, Charlottesville, VA 22904 (United States); Frau, Pau [Observatorio Astronómico Nacional, Alfonso XII, 3 E-28014 Madrid (Spain); Li, Hua-Bai [Department of Physics, The Chinese University of Hong Kong, Hong Kong (China); Padovani, Marco [Laboratoire de Radioastronomie Millimétrique, UMR 8112 du CNRS, École Normale Supérieure et Observatoire de Paris, 24 rue Lhomond, F-75231 Paris Cedex 05 (France); Bontemps, Sylvain [OASU/LAB-UMR5804, CNRS, Université Bordeaux 1, F-33270 Floirac (France); Csengeri, Timea, E-mail: qzhang@cfa.harvard.edu [Max Planck Institute for Radioastronomy, Auf dem Hügel 69, D-53121 Bonn (Germany)

    2014-09-10

    Massive stars (M > 8 M_⊙) typically form in parsec-scale molecular clumps that collapse and fragment, leading to the birth of a cluster of stellar objects. We investigate the role of magnetic fields in this process through dust polarization at 870 μm obtained with the Submillimeter Array (SMA). The SMA observations reveal polarization at scales of ≲0.1 pc. The polarization pattern in these objects ranges from ordered hour-glass configurations to more chaotic distributions. By comparing the SMA data with the single dish data at parsec scales, we found that magnetic fields at dense core scales are either aligned within 40° of or perpendicular to the parsec-scale magnetic fields. This finding indicates that magnetic fields play an important role during the collapse and fragmentation of massive molecular clumps and the formation of dense cores. We further compare magnetic fields in dense cores with the major axis of molecular outflows. Despite a limited number of outflows, we found that the outflow axis appears to be randomly oriented with respect to the magnetic field in the core. This result suggests that at the scale of accretion disks (≲ 10^3 AU), angular momentum and dynamic interactions possibly due to close binary or multiple systems dominate over magnetic fields. With this unprecedentedly large sample of massive clumps, we argue on a statistical basis that magnetic fields play an important role during the formation of dense cores at spatial scales of 0.01-0.1 pc in the context of massive star and cluster star formation.

  20. Beznau nuclear power plant: comments of the Federal Department of Transport, Communications and Energy on the reproaches of Greenpeace

    International Nuclear Information System (INIS)

    Ogi, A.

    1995-01-01

    Answer of the chairman of the EVED (Federal Department of Transport, Communications and Energy) to the open letter in which Greenpeace Switzerland made massive accusations against the nuclear power plant Beznau and the HSK (Federal Nuclear Safety Inspectorate). All the charges are rebutted in this answer

  1. Massive antenatal fetomaternal hemorrhage

    DEFF Research Database (Denmark)

    Dziegiel, Morten Hanefeld; Koldkjaer, Ole; Berkowicz, Adela

    2005-01-01

    Massive fetomaternal hemorrhage (FMH) can lead to life-threatening anemia. Quantification based on flow cytometry with anti-hemoglobin F (HbF) is applicable in all cases but underestimation of large fetal bleeds has been reported. A large FMH from an ABO-compatible fetus allows an estimation...

  2. WHAT SETS THE INITIAL ROTATION RATES OF MASSIVE STARS?

    International Nuclear Information System (INIS)

    Rosen, Anna L.; Krumholz, Mark R.; Ramirez-Ruiz, Enrico

    2012-01-01

    The physical mechanisms that set the initial rotation rates in massive stars are a crucial unknown in current star formation theory. Observations of young, massive stars provide evidence that they form in a similar fashion to their low-mass counterparts. The magnetic coupling between a star and its accretion disk may be sufficient to spin down low-mass pre-main-sequence (PMS) stars to well below breakup at the end stage of their formation when the accretion rate is low. However, we show that these magnetic torques are insufficient to spin down massive PMS stars due to their short formation times and high accretion rates. We develop a model for the angular momentum evolution of stars over a wide range in mass, considering both magnetic and gravitational torques. We find that magnetic torques are unable to spin down either low-mass or high-mass stars during the main accretion phase, and that massive stars cannot be spun down significantly by magnetic torques during the end stage of their formation either. Spin-down occurs only if massive stars' disk lifetimes are substantially longer or their magnetic fields are much stronger than current observations suggest.

  3. JPEG and wavelet compression of ophthalmic images

    Science.gov (United States)

    Eikelboom, Robert H.; Yogesan, Kanagasingam; Constable, Ian J.; Barry, Christopher J.

    1999-05-01

    This study was designed to determine the degree and methods of digital image compression that produce ophthalmic images of sufficient quality for transmission and diagnosis. The photographs of 15 subjects, which included eyes with normal, subtle, and distinct pathologies, were digitized to produce 1.54 MB images and compressed to a range of sizes with both JPEG and wavelet methods. Image quality was assessed in three ways: (i) objectively, by calculating the RMS error between the uncompressed and compressed images; (ii) semi-subjectively, by assessing the visibility of blood vessels; and (iii) subjectively, by asking a number of experienced observers to assess the images for quality and clinical interpretation. Results showed that, as a function of compressed image size, wavelet-compressed images produced less RMS error than JPEG-compressed images. Blood vessel branching could be observed to a greater extent after wavelet compression than after JPEG compression for a given image size. Overall, it was shown that images had to be compressed to below 2.5 percent of their original size for JPEG and 1.7 percent for wavelet compression before fine detail was lost or image quality became too poor to make a reliable diagnosis.

  4. Support of nuclear engineering education and research at the University of Michigan

    International Nuclear Information System (INIS)

    Martin, W.R.

    1993-03-01

    This report describes progress on four different projects in the fission reactor area that have been supported by the grant during the past year. These projects are: accelerator transmutation of nuclear waste (Steve Pearson); neutronic analysis of the Ford Nuclear Reactor (Brent Renkema); development of Monte Carlo benchmarks for commercial LWR configurations (Jie Du); and Monte Carlo depletion capability for massively parallel processors (Amit Majumdar). These tasks are briefly described and progress to date is presented.

  5. Phases of massive gravity

    CERN Document Server

    Dubovsky, S L

    2004-01-01

    We systematically study the most general Lorentz-violating graviton mass invariant under the three-dimensional Euclidean group, using explicitly covariant language. We find that at general values of the mass parameters the massive graviton has six propagating degrees of freedom, some of which are ghosts or lead to rapid classical instabilities. However, there are a number of different regions in the mass parameter space where massive gravity can be described by a consistent low-energy effective theory, with cutoff $\sim\sqrt{mM_{Pl}}$, free of rapid instabilities and of the vDVZ discontinuity. Each of these regions is characterized by certain fine-tuning relations between the mass parameters, generalizing the Fierz--Pauli condition. In some cases the required fine-tunings are consequences of the existence of subgroups of the diffeomorphism group that are left unbroken by the graviton mass. We find two new cases in which the resulting theories have the property of UV insensitivity, i.e., they remain well behaved after inclusion of ...

  6. SALT Spectroscopy of Evolved Massive Stars

    Science.gov (United States)

    Kniazev, A. Y.; Gvaramadze, V. V.; Berdnikov, L. N.

    2017-06-01

    Long-slit spectroscopy with the Southern African Large Telescope (SALT) of central stars of mid-infrared nebulae detected with the Spitzer Space Telescope and Wide-Field Infrared Survey Explorer (WISE) led to the discovery of numerous candidate luminous blue variables (cLBVs) and other rare evolved massive stars. With the recent advent of the SALT fiber-fed high-resolution echelle spectrograph (HRS), a new perspective for the study of these interesting objects has appeared. Using the HRS we obtained spectra of a dozen newly identified massive stars. Some results on the recently identified cLBV Hen 3-729 are presented.

  7. The Compressed Baryonic Matter experiment

    Directory of Open Access Journals (Sweden)

    Seddiki Sélim

    2014-04-01

    Full Text Available The Compressed Baryonic Matter (CBM) experiment is a next-generation fixed-target detector which will operate at the future Facility for Antiproton and Ion Research (FAIR) in Darmstadt. The goal of this experiment is to explore the QCD phase diagram in the region of high net baryon densities using high-energy nucleus-nucleus collisions. Its research program includes the study of the equation of state of nuclear matter at high baryon densities, the search for the deconfinement and chiral phase transitions, and the search for the QCD critical point. The CBM detector is designed to measure both bulk observables with a large acceptance and rare diagnostic probes such as charm particles, multi-strange hyperons, and low-mass vector mesons in their di-leptonic decay. The physics program of CBM will be summarized, followed by an overview of the detector concept, a selection of the expected physics performance, and the status of preparation of the experiment.

  8. Polarization enhancement in the d⃗(p⃗,n⃗)²He reaction: nuclear teleportation

    NARCIS (Netherlands)

    Hamieh, S

    2004-01-01

    I show that an experimental technique used in nuclear physics may be successfully applied to quantum teleportation (QT) of spin states of massive matter. A new non-local physical effect, the 'quantum-teleportation effect', is discovered for the nuclear polarization measurement. Enhancement of the

  9. Massive Multiplayer Online Gaming: A Research Framework for Military Training and Education

    Science.gov (United States)

    2005-03-01

    Effects of violent video games on aggressive behavior, aggressive cognition, physiological arousal, and prosocial behavior: a meta... Massive Multiplayer Online Games 2.1 Massive Multiplayer Online Games Defined: massive multiplayer online games (MMOGs) allow users to interact... (2002) suggested various principles for group design and interactions in "massively multiplayer games" (p. 1). In particular, he argued that it

  10. Double-compression method for biomedical images

    Science.gov (United States)

    Antonenko, Yevhenii A.; Mustetsov, Timofey N.; Hamdi, Rami R.; Małecka-Massalska, Teresa; Orshubekov, Nurbek; DzierŻak, RóŻa; Uvaysova, Svetlana

    2017-08-01

    This paper describes a double compression method (DCM) for biomedical images. A comparison of compression factors among JPEG, PNG, and the developed DCM was carried out. The main purpose of the DCM is the compression of medical images while maintaining the key points that carry diagnostic information. To estimate the minimum compression factor, an analysis of the coding of a random-noise image is presented.

  11. A Compressive Superresolution Display

    KAUST Repository

    Heide, Felix; Gregson, James; Wetzstein, Gordon; Raskar, Ramesh; Heidrich, Wolfgang

    2014-01-01

    In this paper, we introduce a new compressive display architecture for superresolution image presentation that exploits co-design of the optical device configuration and compressive computation. Our display allows for superresolution, HDR, or glasses-free 3D presentation.

  13. Compression evaluation of surgery video recordings retaining diagnostic credibility (compression evaluation of surgery video)

    Science.gov (United States)

    Duplaga, M.; Leszczuk, M. I.; Papir, Z.; Przelaskowski, A.

    2008-12-01

    Wider dissemination of medical digital video libraries is affected by two correlated factors: resource-effective content compression and the direct influence of that compression on diagnostic credibility. It has been proved that it is possible to meet these contradictory requirements halfway for long-lasting and low-motion surgery recordings at compression ratios close to 100 (bronchoscopic procedures were investigated as a case study). As the main supporting assumption, it has been accepted that the content can be compressed as far as clinicians are not able to sense a loss of video diagnostic fidelity (a visually lossless compression). Different market codecs were inspected by means of combined subjective and objective tests toward their usability in medical video libraries. Subjective tests involved a panel of clinicians who had to rank compressed bronchoscopic video content according to its quality under the bubble sort algorithm. For objective tests, two metrics (the hybrid vector measure and Hosaka plots) were calculated frame by frame and averaged over a whole sequence.

  14. Compression experiments on the TOSCA tokamak

    International Nuclear Information System (INIS)

    Cima, G.; McGuire, K.M.; Robinson, D.C.; Wootton, A.J.

    1980-10-01

    Results from minor-radius compression experiments on a tokamak plasma in TOSCA are reported. The compression is achieved by increasing the toroidal field up to twice its initial value in 200 μs. Measurements show that particles and magnetic flux are conserved. When the initial energy confinement time is comparable with the compression time, energy gains are greater than for an adiabatic change of state, and the total beta value increases. Central beta values of approximately 3% are measured when a small major-radius compression is superimposed on a minor-radius compression. Magnetic field fluctuations are affected: both their amplitude and period decrease. Starting from low energy confinement times of approximately 200 μs, increases in confinement time up to approximately 1 ms are measured. The increase in plasma energy results from a large reduction in the power losses during the compression. When the initial energy confinement time is much longer than the compression time, the parameter changes are those expected for an adiabatic change of state. (author)

  15. Biaxial behavior of plain concrete of nuclear containment building

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sang-Keun E-mail: sklee0806@bcline.com; Song, Young-Chul; Han, Sang-Hoon

    2004-01-01

    To provide biaxial failure behavior characteristics of the concrete of a standard Korean nuclear containment building, concrete specimens with dimensions of 200 mm × 200 mm × 60 mm were tested under different biaxial load combinations. The specimens were subjected to biaxial load combinations covering the three regions of compression-compression, compression-tension, and tension-tension. To avoid a confining effect due to friction at the boundary surface between the concrete specimen and the loading platen, loading platens with Teflon pads were used. The principal deformations in the specimens were recorded, and the failure modes for each stress ratio were examined. Based on the strength data, biaxial ultimate strength envelopes were developed and the biaxial stress-strain responses in the three biaxial loading regions were plotted. The test results indicated that the concrete strength under equal biaxial compression, f_1 = f_2, is higher by about 17% on average than that under uniaxial compression, and that the concrete strength under biaxial tension is almost independent of the stress ratio and is similar to that under uniaxial tension.

  17. Context-Aware Image Compression.

    Directory of Open Access Journals (Sweden)

    Jacky C K Chan

    Full Text Available We describe a physics-based data compression method inspired by the photonic time stretch, wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementations of warped stretch compression, here the decoding can be performed without the need for phase recovery. We present a rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling.

  18. Compressive Sensing in Communication Systems

    DEFF Research Database (Denmark)

    Fyhn, Karsten

    2013-01-01

    The need for cheaper, smarter and more energy-efficient wireless devices is greater now than ever. This thesis addresses this problem and concerns the application of the recently developed sampling theory of compressive sensing in communication systems. Compressive sensing is the merging of signal acquisition and compression. It allows for sampling a signal at a rate below the bound dictated by the celebrated Shannon-Nyquist sampling theorem. In some communication systems this necessary minimum sample rate, dictated by the Shannon-Nyquist sampling theorem, is so high it is at the limit of what ... with using compressive sensing in communication systems. The main contribution of this thesis is two-fold: 1) a new compressive sensing hardware structure for spread spectrum signals, which is simpler than the current state-of-the-art, and 2) a range of algorithms for parameter estimation for the class

  19. Cosmological stability bound in massive gravity and bigravity

    International Nuclear Information System (INIS)

    Fasiello, Matteo; Tolley, Andrew J.

    2013-01-01

    We give a simple derivation of a cosmological bound on the graviton mass for spatially flat FRW solutions in massive gravity with an FRW reference metric and for bigravity theories. This bound comes from the requirement that the kinetic term of the helicity-zero mode of the graviton is positive definite. The bound depends only on the parameters in the massive gravity potential and the Hubble expansion rates for the two metrics. We derive the decoupling limit of bigravity and FRW massive gravity, and use this to give an independent derivation of the cosmological bound. We recover our previous results that the tension between satisfying the Friedmann equation and the cosmological bound is sufficient to rule out all observationally relevant FRW solutions for massive gravity with an FRW reference metric. In contrast, in bigravity this tension is resolved due to the different nature of the Vainshtein mechanism. We find that in bigravity theories there exists an FRW solution with late-time self-acceleration for which the kinetic terms for the helicity-2, helicity-1 and helicity-0 modes are generically nonzero and positive, making this a compelling candidate for a model of cosmic acceleration. We confirm that the generalized bound is saturated for the candidate partially massless (bi)gravity theories, but the existence of helicity-1/helicity-0 interactions implies the absence of the conjectured partially massless symmetry for both massive gravity and bigravity.

  20. Building indifferentiable compression functions from the PGV compression functions

    DEFF Research Database (Denmark)

    Gauravaram, P.; Bagheri, Nasour; Knudsen, Lars Ramkilde

    2016-01-01

    Preneel, Govaerts and Vandewalle (PGV) analysed the security of single-block-length block cipher based compression functions assuming that the underlying block cipher has no weaknesses. They showed that 12 out of 64 possible compression functions are collision and (second) preimage resistant. Black, Rogaway and Shrimpton formally proved this result in the ideal cipher model. However, in the indifferentiability security framework introduced by Maurer, Renner and Holenstein, all these 12 schemes are easily differentiable from a fixed input-length random oracle (FIL-RO) even when their underlying block

  1. CEPRAM: Compression for Endurance in PCM RAM

    OpenAIRE

    González Alberquilla, Rodrigo; Castro Rodríguez, Fernando; Piñuel Moreno, Luis; Tirado Fernández, Francisco

    2017-01-01

    We deal with the endurance problem of Phase Change Memories (PCM) by proposing Compression for Endurance in PCM RAM (CEPRAM), a technique to extend the lifespan of PCM-based main memory through compression. We introduce a total of three compression schemes based on existing schemes, but targeting compression for PCM-based systems. We perform a two-level evaluation. First, we quantify the performance of the compression in terms of compressed size, bit-flips, and how they are affected by e...

  2. Evaluation of a new image compression technique

    International Nuclear Information System (INIS)

    Algra, P.R.; Kroon, H.M.; Noordveld, R.B.; DeValk, J.P.J.; Seeley, G.W.; Westerink, P.H.

    1988-01-01

    The authors present the evaluation of a new image compression technique, subband coding using vector quantization, on 44 CT examinations of the upper abdomen. Three independent radiologists reviewed the original images and compressed versions. The compression ratios used were 16:1 and 20:1. Receiver operating characteristic analysis showed no difference in the diagnostic contents between originals and their compressed versions. Subjective visibility of anatomic structures was equal. Except for a few 20:1 compressed images, the observers could not distinguish compressed versions from original images. They conclude that subband coding using vector quantization is a valuable method for data compression in CT scans of the abdomen

  3. An Analysis Framework for Understanding the Origin of Nuclear Activity in Low-power Radio Galaxies

    Science.gov (United States)

    Lin, Yen-Ting; Huang, Hung-Jin; Chen, Yen-Chi

    2018-05-01

    Using large samples containing nearly 2300 active galaxies of low radio luminosity (1.4 GHz luminosity between 2 × 10^23 and 3 × 10^25 W Hz^-1, essentially low-excitation radio galaxies) at z ≲ 0.3, we present a self-contained analysis of the dependence of the nuclear radio activity on both intrinsic and extrinsic properties of galaxies, with the goal of identifying the best predictors of the nuclear radio activity. While confirming the established result that stellar mass must play a key role in the triggering of radio activity, we point out that for the central, most massive galaxies, the radio activity also shows a strong dependence on halo mass, which is not likely due to enhanced interaction rates in the denser regions of massive, cluster-scale halos. We thus further investigate the effects of various properties of the intracluster medium (ICM) in massive clusters on the radio activity, employing two standard statistical tools, principal component analysis and logistic regression. It is found that ICM entropy, local cooling time, and pressure are the most effective in predicting the radio activity, pointing to the accretion of gas cooling out of a hot atmosphere as the likely trigger of such activity in galaxies residing in massive dark matter halos. Our analysis framework enables us to logically discern the mechanisms responsible for the radio activity separately for central and satellite galaxies.
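
    The two statistical tools named above can be sketched with scikit-learn on synthetic stand-ins for the ICM properties (entropy, central cooling time, pressure); the feature construction and coefficients below are invented for illustration, not the paper's catalogs or fitted values.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 500
X = rng.normal(size=(n, 3))            # columns: [entropy, t_cool, pressure]
logit = -1.0 - 1.5 * X[:, 0] - 1.2 * X[:, 1] + 0.8 * X[:, 2]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)  # radio-active?

Xs = StandardScaler().fit_transform(X)
print("PCA explained variance:", PCA().fit(Xs).explained_variance_ratio_)

clf = LogisticRegression().fit(Xs, y)
print("logistic coefficients:", clf.coef_)   # magnitude ~ predictive power
```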

  4. The Distinction of Hot Herbal Compress, Hot Compress, and Topical Diclofenac as Myofascial Pain Syndrome Treatment.

    Science.gov (United States)

    Boonruab, Jurairat; Nimpitakpong, Netraya; Damjuti, Watchara

    2018-01-01

    This randomized controlled trial aimed to investigate the differences in outcome after treatment among hot herbal compress, hot compress, and topical diclofenac. Participants were divided equally into three groups receiving hot herbal compress, hot compress, or topical diclofenac, with the last serving as the control group. After the treatment courses, the Visual Analog Scale and the 36-Item Short Form Health Survey were used to establish the level of pain intensity and quality of life, respectively. In addition, cervical range of motion and pressure pain threshold were examined to identify motional effects. All treatments showed a significantly decreased level of pain intensity and increased cervical range of motion, while the intervention groups outperformed the topical diclofenac group in pressure pain threshold and quality of life. In summary, hot herbal compress holds promise as an efficacious treatment comparable to hot compress and topical diclofenac.

  5. Compression of the digitized X-ray images

    International Nuclear Information System (INIS)

    Terae, Satoshi; Miyasaka, Kazuo; Fujita, Nobuyuki; Takamura, Akio; Irie, Goro; Inamura, Kiyonari.

    1987-01-01

    Medical images occupy an increasing amount of storage space in hospitals, yet they are not easily accessed. Thus, a suitable data filing system and precise data compression are needed. Image quality was evaluated before and after image data compression using a local filing system (MediFile 1000, NEC Co.) and forty-seven compression parameter settings. For this study, X-ray images of 10 plain radiographs and 7 contrast examinations were digitized using the CCD-sensor film reader of the MediFile 1000. Those images were compressed into forty-seven kinds of image data, saved on an optical disc, and then reconstructed. Each reconstructed image was compared with the non-compressed image in several regions of interest by four radiologists. Compression and decompression of radiological images were performed promptly by the local filing system. Image quality was affected much more by the compression ratio than by the parameter mode itself; in other words, the higher the compression ratio, the worse the image quality. However, image quality was not significantly degraded until the compression ratio reached about 15:1 for plain radiographs and about 8:1 for contrast studies. Image compression by this technique should be acceptable for diagnostic radiology. (author)

  6. Introduction to compressible fluid flow

    CERN Document Server

    Oosthuizen, Patrick H

    2013-01-01

    Introduction; The Equations of Steady One-Dimensional Compressible Flow; Some Fundamental Aspects of Compressible Flow; One-Dimensional Isentropic Flow; Normal Shock Waves; Oblique Shock Waves; Expansion Waves - Prandtl-Meyer Flow; Variable Area Flows; Adiabatic Flow with Friction; Flow with Heat Transfer; Linearized Analysis of Two-Dimensional Compressible Flows; Hypersonic and High-Temperature Flows; High-Temperature Gas Effects; Low-Density Flows; Bibliography; Appendices

  7. Development and assessment of compression technique for medical images using neural network. I. Assessment of lossless compression

    International Nuclear Information System (INIS)

    Fukatsu, Hiroshi

    2007-01-01

    This paper describes an assessment of the lossless compression of a new, efficient compression technique (the JIS system) using a neural network, which the author and co-workers have recently developed. First, the theory for encoding and decoding the data is explained. The assessment uses 55 images each of chest digital roentgenography, digital mammography, 64-row multi-slice CT, 1.5 Tesla MRI, positron emission tomography (PET) and digital subtraction angiography, which are lossless-compressed by the JIS system to determine the compression rate and loss. For comparison, the same data are also JPEG lossless-compressed. The personal computer (PC) is an Apple MacBook Pro configured with Boot Camp for a Windows environment. The JIS system is found to be more than 4 times as efficient as conventional compression, reducing file volume to 1/11 of the original on average, and is thus well suited to the growing volume of medical imaging data. (R.T.)

  8. A comparative experimental study on engine operating on premixed charge compression ignition and compression ignition mode

    Directory of Open Access Journals (Sweden)

    Bhiogade Girish E.

    2017-01-01

    New combustion concepts have recently been developed with the purpose of tackling the high emission levels of traditional direct injection Diesel engines. A good example is premixed charge compression ignition combustion, a strategy in which early injection causes the fuel to burn in a premixed condition. In compression ignition engines, soot (particulate matter) and NOx emissions remain a largely unsolved issue. Premixed charge compression ignition is one of the most promising solutions combining the advantages of both spark ignition and compression ignition combustion modes: it gives thermal efficiency close to that of compression ignition engines while simultaneously resolving the associated issues of high NOx and particulate matter. Preparing the premixed air-fuel charge is the challenging part of achieving premixed charge compression ignition combustion. In the present experimental study a diesel vaporizer is used to achieve premixed charge compression ignition combustion. Vaporized diesel fuel was mixed with air to form a premixed charge and inducted into the cylinder during the intake stroke. Low diesel volatility remains the main obstacle in preparing the premixed air-fuel mixture. Exhaust gas re-circulation can be used to control the rate of heat release. The objective of this study is to reduce exhaust emission levels while maintaining thermal efficiency close to that of a compression ignition engine.

  9. Pulsed Compression Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Roestenberg, T. [University of Twente, Enschede (Netherlands)

    2012-06-07

    The advantages of the Pulsed Compression Reactor (PCR) over internal combustion engine-type chemical reactors are briefly discussed. Over the last four years a project concerning the fundamentals of PCR technology has been performed by the University of Twente, Enschede, Netherlands. In order to assess the feasibility of applying the PCR principle to the conversion of methane to syngas, several fundamental questions needed to be answered. Two important questions that relate to the applicability of the PCR to any process are: how large is the heat transfer rate from a rapidly compressed and expanded volume of gas, and how does this heat transfer rate compare to the energy contained in the compressed gas? And: can stable operation with a completely free piston, as intended in the PCR, be achieved?

  10. Is nuclear necessary to struggle against climate disruption?

    International Nuclear Information System (INIS)

    2015-01-01

    Nuclear energy is generally considered carbon-free, and is therefore regarded as one of the options for combating climate disruption, sometimes even as the only way to produce electricity massively while limiting greenhouse gas emissions. This article examines whether the use of nuclear energy is really so inescapable. It discusses the indirect CO2 content and avoided emissions, and points out that these avoided emissions represent a small fraction of those generated by the world electric system; in other words, nuclear energy has a marginal impact on greenhouse gas emissions. Besides, nuclear energy is used to produce electricity, so its development can affect emissions from the electric sector only, i.e. one third of energy-related emissions. Nuclear energy is thus generally assigned a minor role in scenarios for combating climate change. The article then points out that a dynamics exists in favour of other options

  11. Compression measurement in laser driven implosion experiments

    International Nuclear Information System (INIS)

    Attwood, D.T.; Cambell, E.M.; Ceglio, N.M.; Lane, S.L.; Larsen, J.T.; Matthews, D.M.

    1981-01-01

    This paper discusses the measurement of compression in the context of the Inertial Confinement Fusion Program's transition from thin-walled exploding pusher targets to thicker-walled targets, which are designed to lead the way towards ablative-type implosions resulting in higher fuel density and ρR at burn time. These experiments promote desirable reactor conditions but pose diagnostic problems because of reduced multi-kilovolt x-ray and reaction product emissions, as well as increasingly difficult transport problems for these emissions as they pass through the thicker ρR pusher conditions. Solutions to these problems, pointing the way toward higher-energy two-dimensional x-ray images, new reaction product imaging ideas, and the use of seed gases for both x-ray spectroscopic and nuclear activation techniques, are identified.

  12. Optimum injection pressure of a cavitating jet on introduction of compressive residual stress into stainless steel

    International Nuclear Information System (INIS)

    Soyama, Hitoshi; Nagasaka, Kazuya; Takakuwa, Osamu; Naito, Akima

    2011-01-01

    In order to mitigate stress corrosion cracking of components used in nuclear power plants, introducing compressive residual stress into the sub-surface of the components is an effective maintenance method. Introducing compressive residual stress using the cavitation impact generated by injecting a high-speed water jet into water was proposed, and water jet peening is now being applied to reduce stress corrosion cracking of shrouds in nuclear power plants. However, accidents such as components dropping off and pipes being cut by the jet have occurred during maintenance. In order to peen with the jet without causing damage, the optimum injection pressure of the jet must be determined. In 'cavitation peening', cavitation is generated by injecting a high-speed water jet into water. As the working pressure in cavitation peening is the pressure at cavitation bubble collapse, the injection pressure of the jet is not the main parameter. The cavitation impact increases with the scale of the jet, i.e., the scaling effect of cavitation. It was revealed that a large-scale jet at low injection pressure can introduce compressive residual stress into stainless steel better than a small-scale jet at high injection pressure. As expected, a water jet at high injection pressure may damage the components; thus, to avoid damage, a jet at low injection pressure is better suited for introducing compressive residual stress. In the present paper, in order to clarify the optimum injection pressure of the cavitating jet for introducing compressive residual stress without damage, the residual stress of stainless steel treated by the jet at various injection pressures was measured using an X-ray diffraction method. The injection pressure of the jet p1 was varied from 5 MPa to 300 MPa, and the diameter of the nozzle throat d was varied from 0.35 mm to 2.0 mm. The residual stress changing with depth was

  13. Compressing Data Cube in Parallel OLAP Systems

    Directory of Open Access Journals (Sweden)

    Frank Dehne

    2007-03-01

    This paper proposes an efficient algorithm to compress the cubes during parallel data cube generation. This low-overhead compression mechanism provides block-by-block and record-by-record compression using tuple difference coding techniques, thereby maximizing the compression ratio and minimizing the decompression penalty at run-time. The experimental results demonstrate that the typical compression ratio is about 30:1 without sacrificing running time. The paper also demonstrates that the compression method is suitable for the Hilbert Space Filling Curve, a mechanism widely used in multi-dimensional indexing.
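
    The tuple difference coding idea named above can be illustrated compactly: sorted records are stored as one base tuple plus small per-field deltas. The sketch below is a generic illustration under that assumption, not the paper's actual block/record layout.

      # Minimal tuple difference coding over sorted integer tuples (a hypothetical
      # layout, for illustration only): one base tuple, then per-field deltas.
      def encode(records):
          base = records[0]
          deltas = [tuple(b - a for a, b in zip(prev, cur))
                    for prev, cur in zip(records, records[1:])]
          return base, deltas

      def decode(base, deltas):
          records = [base]
          for d in deltas:
              records.append(tuple(p + x for p, x in zip(records[-1], d)))
          return records

      cube = [(1, 2, 10), (1, 2, 14), (1, 3, 2), (2, 0, 0)]
      base, deltas = encode(cube)
      assert decode(base, deltas) == cube  # deltas stay small, hence compressible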

  14. Air box shock absorber for a nuclear reactor

    International Nuclear Information System (INIS)

    Pradhan, A.V.; George, J.A.

    1977-01-01

    Disclosed is an air box type shock absorber primarily for use in an ice condenser compartment of a nuclear reactor. The shock absorber includes a back plate member and sheet metal top, bottom, and front members. The front member is prefolded, and controlled clearances are provided among the members for predetermined escape of air under impact and compression. Prefolded internal sheet metal stiffeners also absorb a portion of the kinetic energy imparted to the shock absorber and limit rebound. An external restraining rod guided by restraining straps ensures that the sheet metal front member compresses inward upon impact. 6 claims, 11 figures

  15. Multi-mode sensor processing on a dynamically reconfigurable massively parallel processor array

    Science.gov (United States)

    Chen, Paul; Butts, Mike; Budlong, Brad; Wasson, Paul

    2008-04-01

    This paper introduces a novel computing architecture that can be reconfigured in real time to adapt on demand to multi-mode sensor platforms' dynamic computational and functional requirements. This 1 teraOPS reconfigurable Massively Parallel Processor Array (MPPA) has 336 32-bit processors. The programmable 32-bit communication fabric provides streamlined inter-processor connections with deterministically high performance. Software programmability, scalability, ease of use, and fast reconfiguration time (ranging from microseconds to milliseconds) are the most significant advantages over FPGAs and DSPs. This paper introduces the MPPA architecture, its programming model, and methods of reconfigurability. An MPPA platform for reconfigurable computing is based on a structural object programming model. Objects are software programs running concurrently on hundreds of 32-bit RISC processors and memories. They exchange data and control through a network of self-synchronizing channels. A common application design pattern on this platform, called a work farm, is a parallel set of worker objects, with one input and one output stream. Statically configured work farms with homogeneous and heterogeneous sets of workers have been used in video compression and decompression, network processing, and graphics applications.
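
    The work farm pattern described above maps naturally onto any pool of identical workers with one ordered input stream and one ordered output stream. A minimal sketch follows, using Python's multiprocessing as a stand-in for the MPPA's self-synchronizing channels; the function names are illustrative and this is not the MPPA programming API.

      # A "work farm": identical workers, one input stream, one output stream.
      from multiprocessing import Pool

      def worker(frame):
          # Stand-in for per-object work such as compressing one video frame.
          return sum(frame) % 256

      if __name__ == "__main__":
          input_stream = [[i, i + 1, i + 2] for i in range(100)]
          with Pool(processes=4) as farm:
              # imap preserves input order, mirroring one ordered output stream.
              output_stream = list(farm.imap(worker, input_stream))
          print(output_stream[:5])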

  16. Composite Techniques Based Color Image Compression

    Directory of Open Access Journals (Sweden)

    Zainab Ibrahim Abood

    2017-03-01

    Compression of color images is now necessary for transmission and storage in databases, since color gives a pleasing and natural appearance to any object. Three composite-technique-based color image compression schemes are therefore implemented to achieve high compression, no loss of the original image, better performance, and good image quality. These techniques are the composite stationary wavelet technique (S), the composite wavelet technique (W) and the composite multi-wavelet technique (M). For the high-energy sub-band of the 3rd level of each composite transform in each composite technique, the compression parameters are calculated. The best composite transform among the 27 types is the three-level multi-wavelet transform (MMM) in the M technique, which has the highest values of energy (En) and compression ratio (CR) and the lowest values of bits per pixel (bpp), time (T) and rate distortion R(D). The values of the compression parameters of the color image are also nearly the same as the average values of the compression parameters of the three bands of the same image.

  17. Fixation of waste materials in grouts. Part II. An empirical equation for estimating compressive strength for grouts from different wastes

    International Nuclear Information System (INIS)

    Tallent, O.K.; McDaniel, E.W.; Godsey, T.T.

    1986-04-01

    Compressive strength data for grouts prepared from three different nuclear waste materials have been correlated. The wastes include ORNL low-level waste (LLW) solution, Hanford Facility Waste (HFW) solution, and Hanford cladding removal waste (CRW) slurry. Data for the three wastes can be represented, with a 0.96 coefficient of correlation, by the following equation: S = -9.56 + 9.27 D/I + 18.11/C + 0.010 R, where S denotes the 28-d compressive strength, in MPa; D is the waste concentration, as a fraction of the original; I is the ionic strength; C is the Attapulgite-150 clay content of the dry blend, in wt%; and R is the mix ratio, in kg/m3. The equation may be used to estimate 28-d compressive strengths of grouts prepared within the compositional range of this investigation.
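
    The correlation is straightforward to apply in code. The sketch below implements the quoted equation directly; the example inputs are illustrative, not measured values, and the equation is valid only within the compositional range of the original investigation.

      def compressive_strength_28d(D, I, C, R):
          """28-d compressive strength S in MPa.

          D: waste concentration (fraction of original); I: ionic strength;
          C: Attapulgite-150 clay content of dry blend (wt%); R: mix ratio (kg/m3).
          """
          return -9.56 + 9.27 * D / I + 18.11 / C + 0.010 * R

      # Illustrative (not measured) inputs:
      print(round(compressive_strength_28d(D=1.0, I=2.0, C=10.0, R=900.0), 2))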

  18. Atomic effect algebras with compression bases

    International Nuclear Information System (INIS)

    Caragheorgheopol, Dan; Tkadlec, Josef

    2011-01-01

    Compression base effect algebras were recently introduced by Gudder [Demonstr. Math. 39, 43 (2006)]. They generalize sequential effect algebras [Rep. Math. Phys. 49, 87 (2002)] and compressible effect algebras [Rep. Math. Phys. 54, 93 (2004)]. The present paper focuses on atomic compression base effect algebras and the consequences of atoms being foci (so-called projections) of the compressions in the compression base. Part of our work generalizes results obtained in atomic sequential effect algebras by Tkadlec [Int. J. Theor. Phys. 47, 185 (2008)]. The notion of projection-atomicity is introduced and studied, and several conditions that force a compression base effect algebra or the set of its projections to be Boolean are found. Finally, we apply some of these results to sequential effect algebras and strengthen a previously established result concerning a sufficient condition for them to be Boolean.

  19. Massive gravity and Fierz-Pauli theory

    Energy Technology Data Exchange (ETDEWEB)

    Blasi, Alberto [Universita di Genova, Dipartimento di Fisica, Genova (Italy); Maggiore, Nicola [I.N.F.N.-Sezione di Genova, Genoa (Italy)

    2017-09-15

    Linearized gravity is considered as an ordinary gauge field theory. This implies the need for gauge fixing in order to have well-defined propagators. Only after having achieved this is the most general mass term added. The aim of this paper is to study the degrees of freedom of the gauge-fixed theory of linearized gravity with a mass term. The main result is that, even outside the usual Fierz-Pauli constraint on the mass term, it is possible to choose a gauge fixing belonging to the Landau class, which leads to a massive theory of gravity with the five degrees of freedom of a spin-2 massive particle. (orig.)

  20. Massive gravity and Fierz-Pauli theory

    International Nuclear Information System (INIS)

    Blasi, Alberto; Maggiore, Nicola

    2017-01-01

    Linearized gravity is considered as an ordinary gauge field theory. This implies the need for gauge fixing in order to have well-defined propagators. Only after having achieved this is the most general mass term added. The aim of this paper is to study the degrees of freedom of the gauge-fixed theory of linearized gravity with a mass term. The main result is that, even outside the usual Fierz-Pauli constraint on the mass term, it is possible to choose a gauge fixing belonging to the Landau class, which leads to a massive theory of gravity with the five degrees of freedom of a spin-2 massive particle. (orig.)

  1. Dynamical Processes Near the Super Massive Black Hole at the Galactic Center

    Science.gov (United States)

    Antonini, Fabio

    2011-01-01

    Observations of the stellar environment near the Galactic center provide the strongest empirical evidence for the existence of massive black holes in the Universe. Theoretical models of the Milky Way nuclear star cluster fail to explain numerous properties of this environment, including the presence of very young stars close to the super massive black hole (SMBH) and the more recent discovery of a parsec-scale core in the central distribution of the bright late-type (old) stars. In this thesis we present a theoretical study of dynamical processes near the Galactic center that bear strongly on these issues. Using different numerical techniques we explore the close environment of a SMBH as a catalyst for stellar collisions and mergers. We study binary stars that remain bound for several revolutions around the SMBH, finding that in the case of highly inclined binaries the Kozai resonance can lead to large periodic oscillations in the internal binary eccentricity and inclination. Collisions and mergers of the binary elements are found to increase significantly over multiple orbits around the SMBH. In collisions involving a low-mass and a high-mass star, the merger product acquires a high core hydrogen abundance from the smaller star, effectively resetting the nuclear evolution clock to a younger age. This process could serve as an important source of young stars at the Galactic center. We then show that a core in the old stellar population can be naturally explained in a scenario in which the Milky Way nuclear star cluster (NSC) is formed via repeated inspiral of globular clusters into the Galactic center. We present results from a set of N-body simulations of this process, which show that the fundamental properties of the NSC, including its mass, outer density profile and velocity structure, are also reproduced. Chandrasekhar's dynamical friction formula predicts no frictional force on a test body in a low-density core, regardless of its density, due to the absence of stars moving

  2. Speech Data Compression using Vector Quantization

    OpenAIRE

    H. B. Kekre; Tanuja K. Sarode

    2008-01-01

    Transforms, which are lossy algorithms, are mostly used for speech data compression. Such algorithms are tolerable for speech data compression since the loss in quality is not perceived by the human ear. However, vector quantization (VQ) has the potential to give more data compression while maintaining the same quality. In this paper we propose a speech data compression algorithm using the vector quantization technique. We have used the VQ algorithms LBG, KPE and FCG. The results table s...
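
    Of the three VQ algorithms named, LBG is the most widely documented, and a minimal version conveys the idea: grow a codebook by splitting and then refine it k-means-style, so that each speech frame can be transmitted as a short codeword index. The sketch below assumes generic feature vectors and illustrative parameters; it is not the authors' implementation.

      import numpy as np

      def lbg(vectors, codebook_size, iters=20, eps=1e-3):
          # Start from the global centroid, then repeatedly split and refine.
          codebook = vectors.mean(axis=0, keepdims=True)
          while len(codebook) < codebook_size:
              codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
              for _ in range(iters):
                  # Assign each training vector to its nearest codeword...
                  d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
                  nearest = d.argmin(axis=1)
                  # ...then move each codeword to the centroid of its cell.
                  for k in range(len(codebook)):
                      members = vectors[nearest == k]
                      if len(members):
                          codebook[k] = members.mean(axis=0)
          return codebook

      frames = np.random.randn(1000, 10)   # stand-in for speech feature vectors
      codebook = lbg(frames, codebook_size=8)
      print(codebook.shape)                # (8, 10): each frame -> a 3-bit index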

  3. Advances in compressible turbulent mixing

    International Nuclear Information System (INIS)

    Dannevik, W.P.; Buckingham, A.C.; Leith, C.E.

    1992-01-01

    This volume includes some recent additions to original material prepared for the Princeton International Workshop on the Physics of Compressible Turbulent Mixing, held in 1988. Workshop participants were asked to emphasize the physics of the compressible mixing process rather than measurement techniques or computational methods. Actual experimental results and their meaning were given precedence over discussions of new diagnostic developments. Theoretical interpretations and understanding were stressed rather than the exposition of new analytical model developments or advances in numerical procedures. By design, compressibility influences on turbulent mixing were discussed--almost exclusively--from the perspective of supersonic flow field studies. The papers are arranged in three topical categories: Foundations, Vortical Domination, and Strongly Coupled Compressibility. The Foundations category is a collection of seminal studies that connect current study in compressible turbulent mixing with compressible, high-speed turbulent flow research that almost vanished about two decades ago. A number of contributions are included on flow instability initiation, evolution, and transition between the states of unstable flow onset through those descriptive of fully developed turbulence. The Vortical Domination category includes theoretical and experimental studies of coherent structures, vortex pairing, vortex-dynamics-influenced pressure focusing. In the Strongly Coupled Compressibility category the organizers included the high-speed turbulent flow investigations in which the interaction of shock waves could be considered an important source for production of new turbulence or for the enhancement of pre-existing turbulence. Individual papers are processed separately

  4. Advances in compressible turbulent mixing

    Energy Technology Data Exchange (ETDEWEB)

    Dannevik, W.P.; Buckingham, A.C.; Leith, C.E. [eds.

    1992-01-01

    This volume includes some recent additions to original material prepared for the Princeton International Workshop on the Physics of Compressible Turbulent Mixing, held in 1988. Workshop participants were asked to emphasize the physics of the compressible mixing process rather than measurement techniques or computational methods. Actual experimental results and their meaning were given precedence over discussions of new diagnostic developments. Theoretical interpretations and understanding were stressed rather than the exposition of new analytical model developments or advances in numerical procedures. By design, compressibility influences on turbulent mixing were discussed--almost exclusively--from the perspective of supersonic flow field studies. The papers are arranged in three topical categories: Foundations, Vortical Domination, and Strongly Coupled Compressibility. The Foundations category is a collection of seminal studies that connect current study in compressible turbulent mixing with compressible, high-speed turbulent flow research that almost vanished about two decades ago. A number of contributions are included on flow instability initiation, evolution, and transition between the states of unstable flow onset through those descriptive of fully developed turbulence. The Vortical Domination category includes theoretical and experimental studies of coherent structures, vortex pairing, vortex-dynamics-influenced pressure focusing. In the Strongly Coupled Compressibility category the organizers included the high-speed turbulent flow investigations in which the interaction of shock waves could be considered an important source for production of new turbulence or for the enhancement of pre-existing turbulence. Individual papers are processed separately.

  5. Study of CSR longitudinal bunch compression cavity

    International Nuclear Information System (INIS)

    Yin Dayu; Li Peng; Liu Yong; Xie Qingchun

    2009-01-01

    The scheme of the longitudinal bunch compression cavity for the Cooling Storage Ring (CSR) is an important issue. Plasma physics experiments require a high-density heavy ion beam and a short pulsed bunch, which can be produced by non-adiabatic compression of the bunch, implemented by a fast compression with a 90 degree rotation in the longitudinal phase space. The phase space rotation in fast compression is initiated by a fast jump of the RF-voltage amplitude. For this purpose, the CSR longitudinal bunch compression cavity, loaded with FINEMET-FT-1M, is studied and simulated with the MAFIA code. In this paper, the CSR longitudinal bunch compression cavity is simulated, and the initial bunch length of 238U72+ at 250 MeV/u will be compressed from 200 ns to 50 ns. The construction and RF properties of the CSR longitudinal bunch compression cavity are also simulated and calculated with the MAFIA code. The operation frequency of the cavity is 1.15 MHz with a peak voltage of 80 kV, and the cavity can be used to compress heavy ions in the CSR. (authors)

  6. Direct photons as a potential probe for the Δ-resonance in compressed nuclear matter

    International Nuclear Information System (INIS)

    Simon, R.S.

    1994-04-01

    Pions are trapped in the compressed hadronic matter formed in relativistic heavy-ion collisions for time periods of 15 fm/c. Such time scales are long compared to the width of the Δ-resonance and result in an enhancement of the Δ/π0 γ-ratio over the free value. Simulations of the acceptance of the photon spectrometer TAPS indicate that the photon signal from the Δ-resonance might be observable. (orig.)

  7. On the spontaneous breakdown of massive gravities in 2 + 1 dimension

    International Nuclear Information System (INIS)

    Aragone, C.; Aria, P.J.; Andes Merida, Univ.; Khoudeir, A.

    1997-01-01

    This paper shows that locally Lorentz-invariant, third-order, topological massive gravity cannot be broken down either to the local diffeomorphism subgroup or to the rigid Poincaré group. On the other hand, the recently formulated, locally diffeomorphism-invariant, second-order massive triadic (translational) Chern-Simons gravity breaks down on rigid Minkowski space to a double massive spin-two system. This flat double massive action is the uniform spin-two generalization of the Maxwell-Chern-Simons-Proca system which one is left with after U(1) Abelian gauge invariance breaks down in the presence of a sextic Higgs potential.

  8. Flux compression generators as plasma compression power sources

    International Nuclear Information System (INIS)

    Fowler, C.M.; Caird, R.S.; Erickson, D.J.; Freeman, B.L.; Thomson, D.B.; Garn, W.B.

    1979-01-01

    A survey is made of applications where explosive-driven magnetic flux compression generators have been or can be used to directly power devices that produce dense plasmas. Representative examples are discussed that are specific to the theta pinch, the plasma gun, the dense plasma focus and the Z pinch. These examples are used to illustrate the high energy and power capabilities of explosive generators. An application employing a rocket-borne, generator-powered plasma gun emphasizes the size and weight potential of flux compression power supplies. Recent results from a local effort to drive a dense plasma focus are provided. Imploding liners are discussed in the context of both the theta and Z pinches

  9. Formation of massive seed black holes via collisions and accretion

    Science.gov (United States)

    Boekholt, T. C. N.; Schleicher, D. R. G.; Fellhauer, M.; Klessen, R. S.; Reinoso, B.; Stutz, A. M.; Haemmerlé, L.

    2018-05-01

    Models aiming to explain the formation of massive black hole seeds, and in particular the direct collapse scenario, face substantial difficulties. These are rooted in rather ad hoc and fine-tuned initial conditions, such as the simultaneous requirements of extremely low metallicities and strong radiation backgrounds. Here, we explore a modification of such scenarios in which a massive primordial star cluster is initially produced. Subsequent stellar collisions give rise to the formation of massive (10^4-10^5 M⊙) objects. Our calculations demonstrate that the interplay among stellar dynamics, gas accretion, and protostellar evolution is particularly relevant. Gas accretion on to the protostars enhances their radii, resulting in an enhanced collisional cross-section. We show that the fraction of collisions can increase from 0.1-1 per cent of the initial population to about 10 per cent when compared to gas-free models or models of protostellar clusters in the local Universe. We conclude that very massive objects can form in spite of initial fragmentation, making the first massive protostellar clusters viable candidate birth places for observed supermassive black holes.

  10. Compression of Probabilistic XML Documents

    Science.gov (United States)

    Veldman, Irma; de Keijzer, Ander; van Keulen, Maurice

    Database techniques to store, query and manipulate data that contains uncertainty receive increasing research interest. Such uncertain DBMSs (UDBMSs) can be classified according to their underlying data model: relational, XML, or RDF. We focus on uncertain XML DBMSs, with the Probabilistic XML model (PXML) of [10,9] as a representative example. The size of a PXML document is obviously a factor in performance. There are PXML-specific techniques to reduce the size, such as a push-down mechanism that produces equivalent but more compact PXML documents; it can only be applied, however, where possibilities are dependent. For normal XML documents there also exist several techniques for compressing a document. Since Probabilistic XML is (a special form of) normal XML, it might benefit from these methods even more. In this paper, we show that existing compression mechanisms can be combined with PXML-specific compression techniques. We also show that the best compression rates are obtained by combining a PXML-specific technique with a rather simple generic DAG-compression technique.
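
    The generic DAG-compression technique mentioned at the end can be sketched as hash-consing: structurally identical subtrees are stored once and shared by reference, turning the XML tree into a DAG. The encoding below (label plus child ids) is illustrative, not the PXML representation of the paper.

      def dag_compress(tree):
          """tree = (label, [children]); returns (root_id, table of unique nodes)."""
          table = {}
          def intern(node):
              label, children = node
              key = (label, tuple(intern(c) for c in children))
              # Hash-consing: identical subtrees get the same id and are shared.
              if key not in table:
                  table[key] = len(table)
              return table[key]
          return intern(tree), table

      leaf = ("item", [])
      tree = ("root", [("list", [leaf, leaf, leaf]), ("list", [leaf, leaf, leaf])])
      root_id, table = dag_compress(tree)
      print(len(table))  # 3 shared nodes instead of the 9 nodes of the plain tree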

  11. Anisotropic Concrete Compressive Strength

    DEFF Research Database (Denmark)

    Gustenhoff Hansen, Søren; Jørgensen, Henrik Brøner; Hoang, Linh Cao

    2017-01-01

    When the load carrying capacity of existing concrete structures is (re-)assessed it is often based on compressive strength of cores drilled out from the structure. Existing studies show that the core compressive strength is anisotropic; i.e. it depends on whether the cores are drilled parallel...

  12. Experiments with automata compression

    NARCIS (Netherlands)

    Daciuk, J.; Yu, S; Daley, M; Eramian, M G

    2001-01-01

    Several compression methods of finite-state automata are presented and evaluated. Most compression methods used here are already described in the literature. However, their impact on the size of automata has not been described yet. We fill that gap, presenting results of experiments carried out on

  13. Limiting density ratios in piston-driven compressions

    International Nuclear Information System (INIS)

    Lee, S.

    1985-07-01

    By using global energy and pressure balance applied to a shock model, it is shown that for a piston-driven fast compression the maximum compression ratio does not depend on the absolute magnitude of the piston power, but rather on the shape of the power pulse. Specific cases are considered, and a maximum density compression ratio of 27 is obtained for a square-pulse power compressing a spherical pellet with a specific heat ratio of 5/3. Double pulsing enhances the density compression ratio to 1750 in the case of linearly rising compression pulses. With this method, further enhancement by multiple pulsing becomes obvious. (author)

  14. Modelling concrete behaviour at early-age: multi-scale analysis and simulation of a massive disposal structure

    International Nuclear Information System (INIS)

    Honorio-De-Faria, Tulio

    2015-01-01

    The accurate prediction of the long- and short-term behaviour of concrete structures in the nuclear domain is essential to ensure optimal performance (integrity, containment properties) during their service life. In the particular case of massive concrete structures, at early age the heat produced by hydration reactions cannot be evacuated fast enough, so high temperatures may be reached, and the resulting temperature gradients may lead to cracking, depending on the external and internal restraints to which the structures are subjected. The goals of this study are (1) to perform numerical simulations in order to describe and predict the early-age thermo-chemo-mechanical behaviour of a massive concrete structure devoted to surface disposal of nuclear waste, and (2) to develop and apply up-scaling tools to rigorously estimate the key properties of concrete needed in an early-age analysis from the composition of the material. Firstly, a chemo-thermal analysis determines the influence of convection, solar radiation, re-radiation and hydration heat on the thermal response of the structure. Practical recommendations regarding concreting temperatures are provided in order to limit the maximum temperature reached within the structure. Then, by means of a mechanical analysis, simplified and more complex (i.e. accounting for coupled creep and damage) modelling strategies are used to assess scenarios involving different boundary conditions defined from the previous chemo-thermal analysis. Secondly, a study accounting for the multi-scale character of concrete is performed. A simplified model of cement hydration kinetics is proposed, from which the evolution of the different phases at the cement paste level can be estimated. Then, analytical and numerical tools to upscale the ageing properties are presented and applied to estimate the mechanical and thermal properties of cement-based materials. Finally, the input data used in the structural analysis are compared with

  15. The dynamics of massive starless cores with ALMA

    Energy Technology Data Exchange (ETDEWEB)

    Tan, Jonathan C. [Departments of Astronomy and Physics, University of Florida, Gainesville, FL 32611 (United States); Kong, Shuo; Butler, Michael J. [Department of Astronomy, University of Florida, Gainesville, FL 32611 (United States); Caselli, Paola [School of Physics and Astronomy, The University of Leeds, Leeds LS2 9JT (United Kingdom); Fontani, Francesco [INAF-Osservatorio Astrofisico di Arcetri, Largo Enrico Fermi 5, I-50125 Firenze (Italy)

    2013-12-20

    How do stars that are more massive than the Sun form, and thus how is the stellar initial mass function (IMF) established? Such intermediate- and high-mass stars may be born from relatively massive pre-stellar gas cores, which are more massive than the thermal Jeans mass. The turbulent core accretion model invokes such cores as being in approximate virial equilibrium and in approximate pressure equilibrium with their surrounding clump medium. Their internal pressure is provided by a combination of turbulence and magnetic fields. Alternatively, the competitive accretion model requires strongly sub-virial initial conditions that then lead to extensive fragmentation to the thermal Jeans scale, with intermediate- and high-mass stars later forming by competitive Bondi-Hoyle accretion. To test these models, we have identified four prime examples of massive (∼100 M⊙) clumps from mid-infrared extinction mapping of infrared dark clouds. Fontani et al. found high deuteration fractions of N2H+ in these objects, which are consistent with them being starless. Here we present ALMA observations of these four clumps that probe the N2D+(3-2) line at 2.3'' resolution. We find six N2D+ cores and determine their dynamical state. Their observed velocity dispersions and sizes are broadly consistent with the predictions of the turbulent core model of self-gravitating, magnetized (with Alfvén Mach number mA ∼ 1) and virialized cores that are bounded by the high pressures of their surrounding clumps. However, in the most massive cores, with masses up to ∼60 M⊙, our results suggest that moderately enhanced magnetic fields (so that mA ≅ 0.3) may be needed for the structures to be in virial and pressure equilibrium. Magnetically regulated core formation may thus be important in controlling the formation of massive cores, inhibiting their fragmentation, and thus helping to establish the stellar IMF.

  16. Nuclear power for the new millennium

    International Nuclear Information System (INIS)

    Hucik, S.A.; Redding, J.R.

    1998-01-01

    Advanced nuclear technology is being commercially deployed. Two ABWRs have been constructed in Japan and are reliably generating large amounts of low-cost electricity. Taiwan is now in the process of licensing and constructing two more ABWRs, which will enter commercial operation in 2004 and 2005. Other countries have similar strategies to deploy advanced nuclear plants, and the successful deployment of ABWRs in Japan and Taiwan, coupled with international agreements to limit CO2 emissions, will only reinforce these plans. The ABWR will play an important role in meeting the conflicting needs of developed and developing economies for massive amounts of electricity and the worldwide need to limit CO2 emissions. Successful ABWR projects in Japan and Taiwan, coupled with licensing approval in the United States, represent the new approach to the design, licensing, construction and operation of nuclear power in the new millennium. (author)

  17. Compressibility, turbulence and high speed flow

    CERN Document Server

    Gatski, Thomas B

    2013-01-01

    Compressibility, Turbulence and High Speed Flow introduces the reader to the field of compressible turbulence and compressible turbulent flows across a broad speed range, through a unique complementary treatment of both the theoretical foundations and the measurement and analysis tools currently used. The book provides the reader with the necessary background and current trends in the theoretical and experimental aspects of compressible turbulent flows and compressible turbulence. Detailed derivations of the pertinent equations describing the motion of such turbulent flows are provided, and an extensive discussion of the various approaches used in predicting both free shear and wall-bounded flows is presented. Experimental measurement techniques common to the compressible flow regime are introduced, with particular emphasis on the unique challenges presented by high speed flows. Both experimental and numerical simulation work is supplied throughout to provide the reader with an overall perspective of current tre...

  18. Massive and mass-less Yang-Mills and gravitational fields

    NARCIS (Netherlands)

    Veltman, M.J.G.; Dam, H. van

    1970-01-01

    Massive and mass-less Yang-Mills and gravitational fields are considered. It is found that there is a discrete difference between the zero-mass theories and the very small, but non-zero mass theories. In the case of gravitation, comparison of massive and mass-less theories with experiment, in

  19. Massive weight loss-induced mechanical plasticity in obese gait

    NARCIS (Netherlands)

    Hortobagyi, Tibor; Herring, Cortney; Pories, Walter J.; Rider, Patrick; DeVita, Paul

    2011-01-01

    Hortobagyi T, Herring C, Pories WJ, Rider P, DeVita P. Massive weight loss-induced mechanical plasticity in obese gait. J Appl Physiol 111: 1391-1399, 2011. First published August 18, 2011; doi:10.1152/japplphysiol.00291.2011. We examined the hypothesis that metabolic surgery-induced massive weight

  20. The Uncertain Future of Nuclear Energy

    OpenAIRE

    Bunn, Matthew G.; von Hippel, Frank; Diakov, Anatoli; Ding, Ming; Katsuta, Tadahiro; McCombie, Charles; Ramana, M.V.; Suzuki, Tatsujiro; Voss, Susan; Yu, Suyuan

    2010-01-01

    In the 1970s, nuclear energy was expected to quickly become the dominant generator of electrical power. Its fuel costs are remarkably low because a million times more energy is released per unit weight by fission than by combustion. But its capital costs have proven to be high. Safety requires redundant cooling and control systems, massive leak-tight containment structures, very conservative seismic design and extremely stringent quality control. The routine health risks and greenhouse-gas...

  1. Compressed normalized block difference for object tracking

    Science.gov (United States)

    Gao, Yun; Zhang, Dengzhuo; Cai, Donglan; Zhou, Hao; Lan, Ge

    2018-04-01

    Feature extraction is very important for robust and real-time tracking, and compressive sensing provides technical support for real-time feature extraction. However, existing compressive trackers are all based on the compressed Haar-like feature, and how to compress richer high-dimensional features is worth researching. In this paper, a novel compressed normalized block difference (CNBD) feature is proposed. To resist noise effectively in the high-dimensional normalized pixel difference (NPD) feature, the normalized block difference feature extends the two pixels in the original NPD formula to two blocks. A CNBD feature is obtained by compressing a normalized block difference feature based on compressive sensing theory, with a sparse random Gaussian matrix as the measurement matrix. Comparative experiments with 7 trackers on 20 challenging sequences showed that the tracker based on the CNBD feature performs better than the other trackers, especially the FCT tracker based on the compressed Haar-like feature, in terms of AUC, SR and Precision.
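
    The compression step described above, projecting a high-dimensional feature through a sparse random Gaussian measurement matrix, can be sketched in a few lines. The dimensions, sparsity and feature vector below are illustrative placeholders, not the paper's settings.

      import numpy as np

      rng = np.random.default_rng(0)
      n, m = 10_000, 50           # original and compressed feature dimensions

      # Sparse measurement matrix: mostly zeros, Gaussian entries elsewhere.
      mask = rng.random((m, n)) < 0.01
      phi = np.where(mask, rng.standard_normal((m, n)), 0.0)

      feature = rng.random(n)     # stand-in for a normalized block difference vector
      compressed = phi @ feature  # the m-dimensional compressed feature
      print(compressed.shape)     # (50,)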

  2. Simulating nonlinear cosmological structure formation with massive neutrinos

    Energy Technology Data Exchange (ETDEWEB)

    Banerjee, Arka; Dalal, Neal, E-mail: abanerj6@illinois.edu, E-mail: dalaln@illinois.edu [Department of Physics, University of Illinois at Urbana-Champaign, 1110 West Green Street, Urbana, IL 61801-3080 (United States)

    2016-11-01

    We present a new method for simulating cosmologies that contain massive particles with thermal free streaming motion, such as massive neutrinos or warm/hot dark matter. This method combines particle and fluid descriptions of the thermal species to eliminate the shot noise known to plague conventional N-body simulations. We describe this method in detail, along with results for a number of test cases to validate our method, and check its range of applicability. Using this method, we demonstrate that massive neutrinos can produce a significant scale-dependence in the large-scale biasing of deep voids in the matter field. We show that this scale-dependence may be quantitatively understood using an extremely simple spherical expansion model which reproduces the behavior of the void bias for different neutrino parameters.

  3. Simulating nonlinear cosmological structure formation with massive neutrinos

    International Nuclear Information System (INIS)

    Banerjee, Arka; Dalal, Neal

    2016-01-01

    We present a new method for simulating cosmologies that contain massive particles with thermal free streaming motion, such as massive neutrinos or warm/hot dark matter. This method combines particle and fluid descriptions of the thermal species to eliminate the shot noise known to plague conventional N-body simulations. We describe this method in detail, along with results for a number of test cases to validate our method, and check its range of applicability. Using this method, we demonstrate that massive neutrinos can produce a significant scale-dependence in the large-scale biasing of deep voids in the matter field. We show that this scale-dependence may be quantitatively understood using an extremely simple spherical expansion model which reproduces the behavior of the void bias for different neutrino parameters.

  4. Stochastic spin-one massive field

    International Nuclear Information System (INIS)

    Lim, S.C.

    1984-01-01

    The stochastic quantization schemes of Nelson and of Parisi and Wu are applied to a spin-one massive field. Unlike the scalar case, Nelson's stochastic spin-one massive field cannot be identified with the corresponding euclidean field, even if the fourth component of the euclidean coordinate is taken as equal to the real physical time. In the Parisi-Wu quantization scheme the stochastic Proca vector field has a property similar to that of the scalar field: it has an asymptotically stationary part and a transient part. The large equal-time limits of the expectation values of the stochastic Proca field are equal to the expectation values of the corresponding euclidean field. In the Stueckelberg formalism the Parisi-Wu scheme gives rise to a stochastic vector field which differs from the massless gauge field in that the gauge cannot be fixed by the choice of boundary condition. (orig.)

  5. Minimal theory of massive gravity

    International Nuclear Information System (INIS)

    De Felice, Antonio; Mukohyama, Shinji

    2016-01-01

    We propose a new theory of massive gravity with only two propagating degrees of freedom. While the homogeneous and isotropic background cosmology and the tensor linear perturbations around it are described by exactly the same equations as those in the de Rham–Gabadadze–Tolley (dRGT) massive gravity, the scalar and vector gravitational degrees of freedom are absent in the new theory at the fully nonlinear level. Hence the new theory provides a stable nonlinear completion of the self-accelerating cosmological solution that was originally found in the dRGT theory. The cosmological solution in the other branch, often called the normal branch, is also rendered stable in the new theory and, for the first time, makes it possible to realize an effective equation-of-state parameter different from (either larger or smaller than) −1 without introducing any extra degrees of freedom.

  6. 30 CFR 77.412 - Compressed air systems.

    Science.gov (United States)

    2010-07-01

    30 Mineral Resources, Section 77.412: ... for Mechanical Equipment, § 77.412 Compressed air systems. (a) Compressors and compressed-air receivers... involving the pressure system of compressors, receivers, or compressed-air-powered equipment shall not be...

  7. Accelerating Full Configuration Interaction Calculations for Nuclear Structure

    International Nuclear Information System (INIS)

    Yang, Chao; Sternberg, Philip; Maris, Pieter; Ng, Esmond; Sosonkina, Masha; Le, Hung Viet; Vary, James; Yang, Chao

    2008-01-01

    One of the emerging computational approaches in nuclear physics is the full configuration interaction (FCI) method for solving the many-body nuclear Hamiltonian in a sufficiently large single-particle basis space to obtain exact answers - either directly or by extrapolation. The lowest eigenvalues and corresponding eigenvectors for very large, sparse and unstructured nuclear Hamiltonian matrices are obtained and used to evaluate additional experimental quantities. These matrices pose a significant challenge to the design and implementation of efficient and scalable algorithms for obtaining solutions on massively parallel computer systems. In this paper, we describe the computational strategies employed in a state-of-the-art FCI code MFDn (Many Fermion Dynamics - nuclear) as well as techniques we recently developed to enhance the computational efficiency of MFDn. We will demonstrate the current capability of MFDn and report the latest performance improvement we have achieved. We will also outline our future research directions
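
    The core numerical kernel of an FCI code is the extraction of the lowest eigenpairs of a very large sparse symmetric Hamiltonian. The single-node sketch below uses a Lanczos-type solver on a random symmetric sparse matrix to illustrate the task; MFDn itself is a distributed-memory code, and this is not its implementation.

      import scipy.sparse as sp
      from scipy.sparse.linalg import eigsh

      n = 5000
      a = sp.random(n, n, density=1e-3, format="csr", random_state=0)
      h = (a + a.T) * 0.5        # symmetrize: a stand-in "Hamiltonian"

      # 'SA' = smallest algebraic eigenvalues, i.e. the lowest-lying states.
      vals, vecs = eigsh(h, k=5, which="SA")
      print(vals)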

  8. Two divergent paths: compression vs. non-compression in deep venous thrombosis and post thrombotic syndrome

    Directory of Open Access Journals (Sweden)

    Eduardo Simões Da Matta

    The use of compression therapy to reduce the incidence of post-thrombotic syndrome among patients with deep venous thrombosis is a controversial subject, and there is no consensus on the use of elastic versus inelastic compression, or on the levels and duration of compression. Inelastic devices with a higher static stiffness index combine relatively low, comfortable pressure at rest with standing pressure strong enough to restore the “valve mechanism” generated by plantar flexion and dorsiflexion of the foot. Since the static stiffness index depends on the rigidity of the compression system and the muscle strength within the bandaged area, improvement of muscle mass with muscle-strengthening programs and endurance training should be encouraged. In the acute phase of deep venous thrombosis, anticoagulation combined with inelastic compression therapy can therefore reduce the extension of the thrombus. Notwithstanding, prospective studies evaluating the effectiveness of inelastic therapy in deep venous thrombosis and post-thrombotic syndrome are needed.

  9. Generalized massive gravity in arbitrary dimensions and its Hamiltonian formulation

    International Nuclear Information System (INIS)

    Huang, Qing-Guo; Zhang, Ke-Chao; Zhou, Shuang-Yong

    2013-01-01

    We extend the four-dimensional de Rham-Gabadadze-Tolley (dRGT) massive gravity model to a general scalar massive-tensor theory in arbitrary dimensions, coupling a dRGT massive graviton to multiple scalars and allowing for generic kinetic and mass matrix mixing between the massive graviton and the scalars, and derive its Hamiltonian formulation and associated constraint system. When passing to the Hamiltonian formulation, two different sectors arise: a general sector and a special sector. Although obtained in different ways, there are two second-class constraints in either of the two sectors, eliminating the BD ghost. However, in the special sector there are still ghost instabilities, except in two dimensions. In particular, for the special sector with one scalar, there is a "second BD ghost"

  10. Application of content-based image compression to telepathology

    Science.gov (United States)

    Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace

    2002-05-01

    Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.
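
    One way to picture the content-based approach is per-region quality control: tiles flagged as diagnostically important are encoded near-losslessly, the rest aggressively. The sketch below uses per-tile JPEG quality as a stand-in for the paper's wavelet machinery; the tile size, quality levels and importance mask are all illustrative assumptions.

      import io
      from PIL import Image

      def compress_by_content(img, important, tile=64, q_hi=95, q_lo=30):
          """important(x, y) -> True if the tile at (x, y) is diagnostically relevant."""
          tiles = []
          for y in range(0, img.height, tile):
              for x in range(0, img.width, tile):
                  region = img.crop((x, y, x + tile, y + tile))
                  buf = io.BytesIO()
                  quality = q_hi if important(x, y) else q_lo
                  region.save(buf, "JPEG", quality=quality)
                  tiles.append(((x, y), buf.getvalue()))
          return tiles  # per-tile JPEG streams; a real codec would pack these

      img = Image.new("L", (256, 256), 128)            # placeholder slide image
      tiles = compress_by_content(img, important=lambda x, y: x < 128)
      print(len(tiles), "tiles")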

  11. Massive Black Hole Binaries: Dynamical Evolution and Observational Signatures

    Directory of Open Access Journals (Sweden)

    M. Dotti

    2012-01-01

    The study of the dynamical evolution of massive black hole pairs in mergers is crucial in the context of a hierarchical galaxy formation scenario. The timescales for the formation and coalescence of black hole binaries are still poorly constrained, resulting in large uncertainties in the expected rate of massive black hole binaries detectable in the electromagnetic and gravitational wave spectra. Here, we review the current theoretical understanding of black hole pairing in galaxy mergers, with particular attention to recent developments and open issues. We conclude with a review of the expected observational signatures of massive binaries and of the candidates discussed in the literature to date.

  12. QUALITY OF LIFE IN PATIENTS AFTER MASSIVE PULMONARY EMBOLISM

    Directory of Open Access Journals (Sweden)

    Dragan Kovačić

    2004-04-01

    Background. Pulmonary embolism is a disease with 30% mortality if untreated, while early diagnosis and treatment lower it to 2-8%. The health-related quality of life (HRQL) of patients who survived massive pulmonary embolism is unknown in the published literature. In our research we tried to apply the experience of foreign experts in estimating quality of life in other diseases to the field of massive pulmonary embolism. Patients and methods. Eighteen patients with shock or hypotension due to massive pulmonary embolism, treated with thrombolysis between July 1993 and November 2000, were prospectively included in the study. The control group included 18 gender- and age-matched persons; there were no significant differences in demographic data between the groups. The HRQL and aerobic capacity of the patients and the control group were tested with short questions and questionnaires: the Veterans brief, self-administered questionnaire (VSAQ), the EuroQuality questionnaire (EQ), and the Living with heart failure questionnaire (LlhHF). With the LlhHF, physical (F-LlhHF) and emotional (E-LlhHF) HRQL were assessed at hospitalization and 12 months later. Results. One year after massive pulmonary embolism, aerobic capacity (–9.5%, p < 0.017) and HRQL (EQ –34.5%, F-LlhHF –85.4%, E-LlhHF –48.7%) had decreased in the massive pulmonary embolism group compared to the aerobic capacity 6 months before massive pulmonary embolism and HRQL. Heart rate before thrombolysis correlated with aerobic capacity (r = 0.627, p < 0.01), EQ (r = 0.479, p < 0.01) and F-LlhHF (r = 0.479, p = 0.04) 1 year after massive pulmonary embolism. Total pulmonary resistance at 12 hours after the start of treatment correlated with aerobic capacity at 1 year (r = 0.354, p < 0.01). With the short question (»Did you need any help in everyday activities in the last 2 weeks?«) we successfully separated patients with decreased HRQL in EQ (74.3 ± 20.8 vs. 24.5 ± 20.7, p < 0.001) and F-LlhHF (21.7 ± 6.7 vs. 32.8 ± 4.3, p < 0.01), but we

  13. Collaborative Calibrated Peer Assessment in Massive Open Online Courses

    Science.gov (United States)

    Boudria, Asma; Lafifi, Yacine; Bordjiba, Yamina

    2018-01-01

    The free and open-access nature of courses in Massive Open Online Courses (MOOCs) facilitates the dissemination of information to a large number of participants. However, the "massive" property can generate many pedagogical problems, such as the assessment of learners, which is considered the major difficulty facing the…

  14. Theoretical models for describing longitudinal bunch compression in the neutralized drift compression experiment

    Directory of Open Access Journals (Sweden)

    Adam B. Sefkow

    2006-09-01

    Heavy ion drivers for warm dense matter and heavy ion fusion applications use intense charge bunches which must undergo transverse and longitudinal compression in order to meet the requisite high current densities and short pulse durations desired at the target. The neutralized drift compression experiment (NDCX) at the Lawrence Berkeley National Laboratory is used to study the longitudinal neutralized drift compression of a space-charge-dominated ion beam, which occurs due to an imposed longitudinal velocity tilt and subsequent neutralization of the beam’s space charge by background plasma. Reduced theoretical models have been used in order to describe the realistic propagation of an intense charge bunch through the NDCX device. A warm-fluid model is presented as a tractable computational tool for investigating the nonideal effects associated with the experimental acceleration gap geometry and voltage waveform of the induction module, which acts as a means to pulse shape both the velocity and line density profiles. Self-similar drift compression solutions can be realized in order to transversely focus the entire charge bunch to the same focal plane in upcoming simultaneous transverse and longitudinal focusing experiments. A kinetic formalism based on the Vlasov equation has been employed in order to show that the peaks in the experimental current profiles are a result of the fact that only the central portion of the beam contributes effectively to the main compressed pulse. Significant portions of the charge bunch reside in the nonlinearly compressing part of the ion beam because of deviations between the experimental and ideal velocity tilts. Those regions form a pedestal of current around the central peak, thereby decreasing the amount of achievable longitudinal compression and increasing the pulse durations achieved at the focal plane. A hybrid fluid-Vlasov model which retains the advantages of both the fluid and kinetic approaches has been

  15. Poor chest compression quality with mechanical compressions in simulated cardiopulmonary resuscitation: a randomized, cross-over manikin study.

    Science.gov (United States)

    Blomberg, Hans; Gedeborg, Rolf; Berglund, Lars; Karlsten, Rolf; Johansson, Jakob

    2011-10-01

    Mechanical chest compression devices are being implemented as an aid in cardiopulmonary resuscitation (CPR), despite lack of evidence of improved outcome. This manikin study evaluates the CPR performance of ambulance crews who had had a mechanical chest compression device implemented in their routine clinical practice 8 months previously. The objectives were to evaluate time to first defibrillation and no-flow time, and to estimate the quality of compressions. The performance of 21 ambulance crews (ambulance nurse and emergency medical technician) with the authorization to perform advanced life support was studied in an experimental, randomized cross-over study in a manikin setup. Each crew performed two identical CPR scenarios, with and without the aid of the mechanical compression device LUCAS. A computerized manikin was used for data sampling. There were no substantial differences in time to first defibrillation or no-flow time until first defibrillation. However, the fraction of adequate compressions in relation to total compressions was remarkably low in LUCAS-CPR (58%) compared to manual CPR (88%) (95% confidence interval for the difference: 13-50%). Only 12 out of the 21 ambulance crews (57%) applied the mandatory stabilization strap on the LUCAS device. The use of a mechanical compression aid was not associated with substantial differences in time to first defibrillation or no-flow time in the early phase of CPR. However, constant but poor chest compressions due to failure in recognizing and correcting a malposition of the device may counteract a potential benefit of mechanical chest compressions. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  16. Efficient predictive algorithms for image compression

    CERN Document Server

    Rosário Lucas, Luís Filipe; Maciel de Faria, Sérgio Manuel; Morais Rodrigues, Nuno Miguel; Liberal Pagliari, Carla

    2017-01-01

    This book discusses efficient prediction techniques for the current state-of-the-art High Efficiency Video Coding (HEVC) standard, focusing on the compression of a wide range of video signals, such as 3D video, Light Fields and natural images. The authors begin with a review of the state-of-the-art predictive coding methods and compression technologies for both 2D and 3D multimedia contents, which provides a good starting point for new researchers in the field of image and video compression. New prediction techniques that go beyond the standardized compression technologies are then presented and discussed. In the context of 3D video, the authors describe a new predictive algorithm for the compression of depth maps, which combines intra-directional prediction, with flexible block partitioning and linear residue fitting. New approaches are described for the compression of Light Field and still images, which enforce sparsity constraints on linear models. The Locally Linear Embedding-based prediction method is in...

  17. Radiology in massive hemoptysis

    International Nuclear Information System (INIS)

    Marini, M.; Castro, J.M.; Gayol, A.; Aguilera, C.; Blanco, M.; Beraza, A.; Torres, J.

    1995-01-01

    We have reviewed our experience in diseases involving massive hemoptysis, systematizing the most common causes, which include tuberculosis, bronchiectasis and cancer of the lung. Other less frequent causes, such as arteriovenous fistula, aspergilloma, aneurysm, etc., are also evaluated, and the most demonstrative images of each, produced by the most precise imaging methods for their assessment, are presented

  18. Nuclear structure and double beta decay

    International Nuclear Information System (INIS)

    Vogel, P.

    1988-01-01

    Double beta decay is a rare transition between two nuclei of the same mass number A involving a change of the nuclear charge Z by two units. It has long been recognized that the 0ν mode of double beta decay, where two electrons and no neutrinos are emitted, is a powerful tool for the study of neutrino properties. Its observation would constitute a convincing proof that there exists a massive Majorana neutrino which couples to electrons. Double beta decay is a process involving an intricate mixture of particle physics and the physics of the nucleus. The principal nuclear physics issues have to do with the evaluation of the nuclear matrix elements responsible for the decay. If the authors wish to arrive at quantitative answers for the neutrino properties, they have no choice but to first learn how to understand the nuclear mechanisms. The authors first describe the calculation of the decay rate of the 2ν mode of double beta decay, in which two electrons and two antineutrinos are emitted

  19. Massively Parallel Computing: A Sandia Perspective

    Energy Technology Data Exchange (ETDEWEB)

    Dosanjh, Sudip S.; Greenberg, David S.; Hendrickson, Bruce; Heroux, Michael A.; Plimpton, Steve J.; Tomkins, James L.; Womble, David E.

    1999-05-06

    The computing power available to scientists and engineers has increased dramatically in the past decade, due in part to progress in making massively parallel computing practical and available. The expectation for these machines has been great. The reality is that progress has been slower than expected. Nevertheless, massively parallel computing is beginning to realize its potential for enabling significant breakthroughs in science and engineering. This paper provides a perspective on the state of the field, colored by the authors' experiences using large scale parallel machines at Sandia National Laboratories. We address trends in hardware, system software and algorithms, and we also offer our view of the forces shaping the parallel computing industry.

  20. Medullary compression syndrome

    International Nuclear Information System (INIS)

    Barriga T, L.; Echegaray, A.; Zaharia, M.; Pinillos A, L.; Moscol, A.; Barriga T, O.; Heredia Z, A.

    1994-01-01

    The authors made a retrospective study of 105 patients treated in the Radiotherapy Department of the National Institute of Neoplastic Diseases from 1973 to 1992. The objective of this evaluation was to determine the influence of radiotherapy in patients with medullary compression syndrome with respect to pain palliation and improvement of functional impairment. Treatment sheets of patients with medullary compression were reviewed: 32 out of 39 patients (82%) who came to hospital by their own means continued walking after treatment; 8 out of 66 patients (12%) who came in a wheelchair or were bedridden could move about on their own after treatment; and 41 patients (64%) had partial alleviation of pain after treatment. In those who came by their own means and did not change their status, functional improvement was observed. It is concluded that radiotherapy offers palliative benefit in patients with medullary compression syndrome. (authors). 20 refs., 5 figs., 6 tabs

  1. Fragmentation of massive dense cores down to ≲ 1000 AU: Relation between fragmentation and density structure

    International Nuclear Information System (INIS)

    Palau, Aina; Girart, Josep M.; Estalella, Robert; Fuente, Asunción; Fontani, Francesco; Sánchez-Monge, Álvaro; Commerçon, Benoit; Hennebelle, Patrick; Busquet, Gemma; Bontemps, Sylvain; Zapata, Luis A.; Zhang, Qizhou; Di Francesco, James

    2014-01-01

    In order to shed light on the main physical processes controlling fragmentation of massive dense cores, we present a uniform study of the density structure of 19 massive dense cores, selected to be at similar evolutionary stages, for which their relative fragmentation level was assessed in a previous work. We inferred the density structure of the 19 cores through a simultaneous fit of the radial intensity profiles at 450 and 850 μm (or 1.2 mm in two cases) and the spectral energy distribution, assuming spherical symmetry and that the density and temperature of the cores decrease with radius following power-laws. Even though the estimated fragmentation level is strictly speaking a lower limit, its relative value is significant and several trends could be explored with our data. We find a weak (inverse) trend of fragmentation level and density power-law index, with steeper density profiles tending to show lower fragmentation, and vice versa. In addition, we find a trend of fragmentation increasing with density within a given radius, which arises from a combination of a flat density profile and a high central density and is consistent with Jeans fragmentation. We considered the effects of rotational-to-gravitational energy ratio, non-thermal velocity dispersion, and turbulence mode on the density structure of the cores, and found that compressive turbulence seems to yield higher central densities. Finally, a possible explanation for the origin of cores with concentrated density profiles, which are the cores showing no fragmentation, could be related to a strong magnetic field, consistent with the outcome of radiation magnetohydrodynamic simulations.

  2. Fragmentation of massive dense cores down to ≲ 1000 AU: Relation between fragmentation and density structure

    Energy Technology Data Exchange (ETDEWEB)

    Palau, Aina; Girart, Josep M. [Institut de Ciències de l'Espai (CSIC-IEEC), Campus UAB-Facultat de Ciències, Torre C5-parell 2, E-08193 Bellaterra, Catalunya (Spain); Estalella, Robert [Departament d'Astronomia i Meteorologia (IEEC-UB), Institut de Ciències del Cosmos, Universitat de Barcelona, Martí i Franquès, 1, E-08028 Barcelona (Spain); Fuente, Asunción [Observatorio Astronómico Nacional, P.O. Box 112, E-28803 Alcalá de Henares, Madrid (Spain); Fontani, Francesco; Sánchez-Monge, Álvaro [Osservatorio Astrofisico di Arcetri, INAF, Largo E. Fermi 5, I-50125 Firenze (Italy); Commerçon, Benoit; Hennebelle, Patrick [Laboratoire de Radioastronomie, UMR CNRS 8112, École Normale Supérieure et Observatoire de Paris, 24 rue Lhomond, F-75231 Paris Cedex 05 (France); Busquet, Gemma [INAF-Istituto di Astrofisica e Planetologia Spaziali, Area di Ricerca di Tor Vergata, Via Fosso del Cavaliere 100, I-00133 Roma (Italy); Bontemps, Sylvain [Université de Bordeaux, LAB, UMR 5804, F-33270 Floirac (France); Zapata, Luis A. [Centro de Radioastronomía y Astrofísica, Universidad Nacional Autónoma de México, P.O. Box 3-72, 58090 Morelia, Michoacán (Mexico); Zhang, Qizhou [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Di Francesco, James, E-mail: palau@ieec.uab.es [Department of Physics and Astronomy, University of Victoria, P.O. Box 355, STN CSC, Victoria, BC, V8W 3P6 (Canada)

    2014-04-10

    In order to shed light on the main physical processes controlling fragmentation of massive dense cores, we present a uniform study of the density structure of 19 massive dense cores, selected to be at similar evolutionary stages, for which their relative fragmentation level was assessed in a previous work. We inferred the density structure of the 19 cores through a simultaneous fit of the radial intensity profiles at 450 and 850 μm (or 1.2 mm in two cases) and the spectral energy distribution, assuming spherical symmetry and that the density and temperature of the cores decrease with radius following power-laws. Even though the estimated fragmentation level is strictly speaking a lower limit, its relative value is significant and several trends could be explored with our data. We find a weak (inverse) trend of fragmentation level and density power-law index, with steeper density profiles tending to show lower fragmentation, and vice versa. In addition, we find a trend of fragmentation increasing with density within a given radius, which arises from a combination of a flat density profile and a high central density and is consistent with Jeans fragmentation. We considered the effects of rotational-to-gravitational energy ratio, non-thermal velocity dispersion, and turbulence mode on the density structure of the cores, and found that compressive turbulence seems to yield higher central densities. Finally, a possible explanation for the origin of cores with concentrated density profiles, which are the cores showing no fragmentation, could be related to a strong magnetic field, consistent with the outcome of radiation magnetohydrodynamic simulations.
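
    The Jeans fragmentation argument invoked in this record can be made concrete with a short calculation. A minimal sketch in Python (typical dense-core values, not the paper's measurements): at fixed temperature, the thermal Jeans length and mass shrink as the density grows, so a dense-core region of given radius accommodates more fragments.

        import numpy as np

        # Thermal Jeans analysis for a dense core; the values are illustrative,
        # chosen to be typical of massive dense cores, not taken from the paper.
        G = 6.674e-8          # cm^3 g^-1 s^-2
        k_B = 1.381e-16       # erg K^-1
        m_H = 1.673e-24       # g
        mu = 2.33             # mean molecular weight per free particle

        def jeans(n_H2, T):
            """Jeans length (AU) and mass (Msun) for density n_H2 (cm^-3) and temperature T (K)."""
            rho = mu * m_H * n_H2
            c_s = np.sqrt(k_B * T / (mu * m_H))          # isothermal sound speed
            lam = c_s * np.sqrt(np.pi / (G * rho))       # Jeans length
            M = (4.0 / 3.0) * np.pi * rho * (lam / 2) ** 3
            return lam / 1.496e13, M / 1.989e33

        for n in (1e5, 1e6, 1e7):                        # higher density -> smaller Jeans mass
            lam_AU, M_sun = jeans(n, 20.0)
            print(f"n={n:.0e} cm^-3: lambda_J = {lam_AU:.0f} AU, M_J = {M_sun:.2f} Msun")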

  3. Comparison of changes in tidal volume associated with expiratory rib cage compression and expiratory abdominal compression in patients on prolonged mechanical ventilation.

    Science.gov (United States)

    Morino, Akira; Shida, Masahiro; Tanaka, Masashi; Sato, Kimihiro; Seko, Toshiaki; Ito, Shunsuke; Ogawa, Shunichi; Takahashi, Naoaki

    2015-07-01

    [Purpose] This study was designed to compare and clarify the relationship between expiratory rib cage compression and expiratory abdominal compression in patients on prolonged mechanical ventilation, with a focus on tidal volume. [Subjects and Methods] The subjects were 18 patients on prolonged mechanical ventilation, who had undergone tracheostomy. Each patient received expiratory rib cage compression and expiratory abdominal compression; the order of implementation was randomized. Subjects were positioned in a 30° lateral recumbent position, and a 2-kgf compression was applied. For expiratory rib cage compression, the rib cage was compressed unilaterally; for expiratory abdominal compression, the area directly above the navel was compressed. Tidal volume values were the actual measured values divided by body weight. [Results] Tidal volume values were as follows: at rest, 7.2 ± 1.7 mL/kg; during expiratory rib cage compression, 8.3 ± 2.1 mL/kg; during expiratory abdominal compression, 9.1 ± 2.2 mL/kg. There was a significant difference between the tidal volume during expiratory abdominal compression and that at rest. The tidal volume in expiratory rib cage compression was strongly correlated with that in expiratory abdominal compression. [Conclusion] These results indicate that expiratory abdominal compression may be an effective alternative to the manual breathing assist procedure.

  4. A spin-4 analog of 3D massive gravity

    NARCIS (Netherlands)

    Bergshoeff, Eric A.; Kovacevic, Marija; Rosseel, Jan; Townsend, Paul K.; Yin, Yihao

    2011-01-01

    A sixth-order, but ghost-free, gauge-invariant action is found for a fourth-rank symmetric tensor potential in a three-dimensional (3D) Minkowski spacetime. It propagates two massive modes of spin 4 that are interchanged by parity and is thus a spin-4 analog of linearized 'new massive gravity'. Also

  5. Energy-range relations for hadrons in nuclear matter

    Science.gov (United States)

    Strugalski, Z.

    1985-01-01

    Range-energy relations for hadrons in nuclear matter exist, analogous to the range-energy relations for charged particles in ordinary materials. When hadrons of GeV kinetic energies collide with sufficiently massive atomic nuclei, events occur in which the incident hadron is stopped completely inside the target nucleus without causing particle production - in particular, without pion production. The stoppings are always accompanied by intense emission of nucleons with kinetic energies from about 20 up to about 400 MeV. It was shown experimentally that the mean number of emitted nucleons is a measure of the mean path in nuclear matter, expressed in nucleons, over which the incident hadrons are stopped.

  6. Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing

    Science.gov (United States)

    Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong

    2016-08-01

    Most image encryption algorithms based on low-dimensional chaotic systems carry security risks and suffer from encrypted-data expansion when nonlinear transformations are adopted directly. To overcome these weaknesses and reduce the possible transmission burden, an efficient image compression-encryption scheme based on a hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and the resulting image is then re-encrypted by a cycle shift operation controlled by a hyper-chaotic system. The cycle shift operation can change the values of the pixels efficiently. As a nonlinear encryption system, the proposed cryptosystem decreases the volume of data to be transmitted and simplifies key distribution. Simulation results verify the validity and reliability of the proposed algorithm with acceptable compression and security performance.
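
    A toy sketch of the two stages described above (Python; the shapes are illustrative, and a logistic map stands in for the paper's hyper-chaotic system): the image is measured from both sides by two random matrices, and the measurement is then re-encrypted with chaotically driven cyclic shifts.

        import numpy as np

        rng = np.random.default_rng(42)

        N, M = 64, 32                          # 4:1 compression of the pixel count
        X = rng.random((N, N))                 # stand-in for the plaintext image
        Phi1 = rng.standard_normal((M, N)) / np.sqrt(M)   # measurement matrices
        Phi2 = rng.standard_normal((M, N)) / np.sqrt(M)

        Y = Phi1 @ X @ Phi2.T                  # 2D measurement: compression + encryption at once

        def chaotic_shifts(x0, r, n):
            """Logistic-map sequence mapped to integer shift amounts (toy key stream)."""
            shifts, x = [], x0
            for _ in range(n):
                x = r * x * (1.0 - x)
                shifts.append(int(x * 1e6) % n)
            return shifts

        rows = chaotic_shifts(0.3141, 3.99, M)
        C = np.stack([np.roll(Y[i], s) for i, s in enumerate(rows)])  # cycle-shift each row

        print(C.shape)   # (32, 32): the transmitted data is 1/4 the original size

    Decryption would reverse the shifts using the shared chaotic key and then run a standard compressive-sensing reconstruction with Phi1 and Phi2.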

  7. Massive target nuclei as disc-shaped slabs and spherical objects of intranuclear matter in high-energy nuclear collisions

    International Nuclear Information System (INIS)

    Zewislawski, Z.; Strugalski, Z.; Mausa, M.

    1990-01-01

    It has been found experimentally that a definite number of emitted nucleons corresponds to a definite impact parameter in hadron-nucleus collisions. This finding allows one: to treat the massive target nucleus as a piece of intranuclear matter of a definite thickness; to treat a numerous sample of collisions of monoenergetic identical hadrons with the nucleus as collection of interactions of a homogeneous beam of hadrons with disc-shaped slabs of intranuclear matter of definite thicknesses. 17 refs.; 1 fig

  8. MP3 compression of Doppler ultrasound signals.

    Science.gov (United States)

    Poepping, Tamie L; Gill, Jeremy; Fenster, Aaron; Holdsworth, David W

    2003-01-01

    The effect of lossy MP3 compression on spectral parameters derived from Doppler ultrasound (US) signals was investigated. Compression was tested on signals acquired from two sources: 1. phase quadrature and 2. stereo audio directional output. A total of eleven 10-s acquisitions of Doppler US signal were collected from each source at three sites in a flow phantom. Doppler signals were digitized at 44.1 kHz and compressed using four grades of MP3 compression (in kilobits per second, kbps; compression ratios in brackets): 1400 kbps (uncompressed), 128 kbps (11:1), 64 kbps (22:1) and 32 kbps (44:1). Doppler spectra were characterized by peak velocity, mean velocity, spectral width, integrated power and the ratio of spectral power between negative and positive velocities. The results suggest that MP3 compression of digital Doppler US signals is feasible at 128 kbps, with a resulting 11:1 compression ratio, without compromising clinically relevant information. Higher compression ratios led to significant differences for both signal sources when compared with the uncompressed signals. Copyright 2003 World Federation for Ultrasound in Medicine & Biology
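
    For readers unfamiliar with the spectral parameters named above, a hedged sketch (Python, on a synthetic signal): mean frequency, spectral width and peak frequency are simple moments and thresholds of one short-time power spectrum; converting frequencies to velocities requires probe parameters the abstract does not give.

        import numpy as np

        fs = 44_100.0                                # sampling rate used in the study (Hz)
        t = np.arange(4096) / fs
        f_doppler = 3_000.0                          # synthetic mean Doppler shift (Hz)
        sig = (np.sin(2 * np.pi * f_doppler * t)
               + 0.2 * np.random.default_rng(1).standard_normal(t.size))

        spec = np.abs(np.fft.rfft(sig * np.hanning(t.size))) ** 2   # one power spectrum
        f = np.fft.rfftfreq(t.size, 1 / fs)

        mean_f = np.sum(f * spec) / np.sum(spec)                           # mean frequency
        width = np.sqrt(np.sum((f - mean_f) ** 2 * spec) / np.sum(spec))   # spectral width
        peak_f = f[np.nonzero(spec > 0.01 * spec.max())[0].max()]          # highest significant bin

        print(f"mean {mean_f:.0f} Hz, width {width:.0f} Hz, peak {peak_f:.0f} Hz")
        # Velocities follow from the Doppler equation v = c*f/(2*f0*cos(theta)),
        # which needs the transmit frequency and beam angle of the probe.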

  9. Description of hot compressed hadronic matter based on an effective chiral Lagrangian

    Energy Technology Data Exchange (ETDEWEB)

    Florkowski, W. [Institute of Nuclear Physics, Cracow (Poland)

    1996-11-01

    In this report we review the recent results obtained in the Nambu-Jona-Lasinio (NJL) model, describing the properties of hot compressed matter. The first large class of problems concerns the behaviour of static meson correlation functions; in particular, this includes the investigation of the screening of meson fields at finite temperature or density. Another wide range of problems presented in our report concerns the formulation of the transport theory for the NJL model and its applications to the description of high energy nuclear collisions. 86 refs, 35 figs.

  10. Description of hot compressed hadronic matter based on an effective chiral Lagrangian

    International Nuclear Information System (INIS)

    Florkowski, W.

    1996-11-01

    In this report we review the recent results obtained in the Nambu-Jona-Lasinio (NJL) model, describing the properties of hot compressed matter. The first large class of problems concerns the behaviour of static meson correlation functions; in particular, this includes the investigation of the screening of meson fields at finite temperature or density. Another wide range of problems presented in our report concerns the formulation of the transport theory for the NJL model and its applications to the description of high energy nuclear collisions. 86 refs, 35 figs

  11. NEW APPROACHES TO EFFICIENCY OF MASSIVE ONLINE COURSE

    Directory of Open Access Journals (Sweden)

    Liubov S. Lysitsina

    2014-09-01

    Full Text Available This paper is focused on the efficiency of e-learning in general, and of a massive online course in programming and information technology in particular. Several innovative approaches and scenarios have been proposed, developed, implemented and verified by the authors, including (1) a new approach to organizing and using automatic immediate feedback, which significantly helps a learner to verify developed code and increases the efficiency of learning; (2) a new approach to constructing learning interfaces, based on a “develop a code – get a result – validate a code” technique; (3) three scenarios of visualization and verification of developed code; (4) a new multi-stage approach to solving complex programming assignments; and (5) a new implementation of “perfectionism” game mechanics in a massive online course. Overall, due to the implementation of the proposed and developed approaches, the efficiency of the massive online course has been considerably increased; in particular, (1) an additional 27.9% of students were able to successfully complete the “Web design and development using HTML5 and CSS3” massive online course at ITMO University, and (2) based on feedback from 5588 students, the “perfectionism” game mechanics noticeably improves students’ involvement in course activities and the retention factor.
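
    As an illustration of the automatic immediate feedback in item (1), consider a minimal auto-checker (Python; the function names and test cases are invented): it runs a learner's submission against a set of test cases and returns feedback at once, closing the “develop a code – get a result – validate a code” loop.

        def immediate_feedback(submission, tests):
            """Run a learner-supplied function against (args, expected) test cases."""
            failures = []
            for args, expected in tests:
                try:
                    got = submission(*args)
                except Exception as exc:              # report errors instead of crashing
                    failures.append(f"{args}: raised {exc!r}")
                    continue
                if got != expected:
                    failures.append(f"{args}: expected {expected}, got {got}")
            return "All tests passed!" if not failures else "\n".join(failures)

        # Example: the assignment asks for a clamp(x, lo, hi) function.
        learner_clamp = lambda x, lo, hi: max(lo, min(x, hi))
        tests = [((5, 0, 10), 5), ((-3, 0, 10), 0), ((42, 0, 10), 10)]
        print(immediate_feedback(learner_clamp, tests))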

  12. Plasma heating by adiabatic compression

    International Nuclear Information System (INIS)

    Ellis, R.A. Jr.

    1972-01-01

    These two lectures will cover the following three topics: (i) The application of adiabatic compression to toroidal devices is reviewed. The special case of adiabatic compression in tokamaks is considered in more detail, including a discussion of the equilibrium, scaling laws, and heating effects. (ii) The ATC (Adiabatic Toroidal Compressor) device, which was completed in May 1972, is described in detail. Compression of a tokamak plasma across a static toroidal field is studied in this device. The device is designed to produce a pre-compression plasma with a minor radius of 17 cm, toroidal field of 20 kG, and current of 90 kA. The compression leads to a plasma with major radius of 38 cm and minor radius of 10 cm. Scaling laws imply a density increase of a factor 6, a temperature increase of a factor 3, and a current increase of a factor 2.4. An additional feature of ATC is that it is a large tokamak which operates without a copper shell. (iii) Data which show that the expected MHD behavior is largely observed are presented and discussed. (U.S.)
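
    The quoted factors are mutually consistent under the standard adiabatic scalings for compression in major radius, n ∝ C², T ∝ C^(4/3) and I ∝ C, where C is the compression ratio. A quick check (Python; the pre-compression major radius is inferred from the quoted current increase, not stated in the abstract):

        # Consistency check of the quoted scaling factors, assuming the standard
        # adiabatic major-radius compression scalings n ~ C^2, T ~ C^(4/3), I ~ C.
        C = 2.4                      # implied compression ratio, since I ~ C rises by 2.4
        R_after = 38.0               # cm, post-compression major radius from the abstract

        print(f"implied pre-compression major radius: {C * R_after:.0f} cm")
        print(f"density increase  n: {C**2:.1f}x   (abstract: ~6x)")
        print(f"temperature rise  T: {C**(4/3):.1f}x (abstract: ~3x)")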

  13. Concurrent data compression and protection

    International Nuclear Information System (INIS)

    Saeed, M.

    2009-01-01

    Data compression techniques involve transforming data of a given format, called the source message, into data of a smaller-sized format, called the codeword. The primary objective of data encryption is to ensure the security of data if it is intercepted by an eavesdropper. It transforms data of a given format, called plaintext, into another format, called ciphertext, using an encryption key or keys. Thus, combining the processes of compression and encryption must be done in this order, that is, compression followed by encryption, because all compression techniques heavily rely on the redundancies which are inherently part of regular text or speech. The aim of this research is to combine the two processes: compression (using an existing scheme) with a new encryption scheme which should be compatible with the encoding scheme embedded in the encoder. The technique proposed by the authors is new, unique and highly secure. The deployment of a 'sentinel marker' enhances the security of the proposed TR-One algorithm from 2/sup 44/ ciphertexts to 2/sup 44/ + 2/sup 20/ ciphertexts, thus imposing extra challenges on intruders. (author)
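
    The compress-then-encrypt ordering argued for above is easy to demonstrate. A minimal sketch (Python, using zlib and a one-time-pad-style XOR cipher as a stand-in; this is not the paper's TR-One scheme): compressing first shrinks the redundant plaintext, while compressing after encryption achieves nothing, because the ciphertext is statistically random.

        import os, zlib

        def xor_encrypt(data: bytes, key: bytes) -> bytes:
            """Toy stream cipher: XOR the data with a random key of equal length."""
            return bytes(b ^ k for b, k in zip(data, key))

        plaintext = b"a regular text has redundancies " * 100
        key = os.urandom(len(plaintext))

        compress_then_encrypt = xor_encrypt(zlib.compress(plaintext), key)
        encrypt_then_compress = zlib.compress(xor_encrypt(plaintext, key))

        print(len(plaintext))              # 3200 bytes of redundant plaintext
        print(len(compress_then_encrypt))  # small: redundancy removed before encryption
        print(len(encrypt_then_compress))  # > 3200: random ciphertext is incompressible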

  14. Photon emission from massive projectile impacts on solids.

    Science.gov (United States)

    Fernandez-Lima, F A; Pinnick, V T; Della-Negra, S; Schweikert, E A

    2011-01-01

    First evidence of photon emission from individual impacts of massive gold projectiles on solids for a number of projectile-target combinations is reported. Photon emission from individual impacts of massive Au(n) (+q) (1 ≤ n ≤ 400; q = 1-4) projectiles with impact energies in the range of 28-136 keV occurs in less than 10 ns after the projectile impact. Experimental observations show an increase in the photon yield from individual impacts with the projectile size and velocity. Concurrently with the photon emission, electron emission from the impact area has been observed below the kinetic emission threshold and under unlikely conditions for potential electron emission. We interpret the puzzling electron emission and correlated luminescence observation as evidence of the electronic excitation resulting from the high-energy density deposited by massive cluster projectiles during the impact.

  15. Compressible Fluid Suspension Performance Testing

    National Research Council Canada - National Science Library

    Hoogterp, Francis

    2003-01-01

    ... compressible fluid suspension system that was designed and installed on the vehicle by DTI. The purpose of the tests was to evaluate the possible performance benefits of the compressible fluid suspension system...

  16. The abolition of nuclear weapons: realistic or not? For physicians, a world without nuclear weapons is possible and above all necessary. To abolish, did you say abolish? Is the elimination of the nuclear weapon realistic?

    International Nuclear Information System (INIS)

    Behar, Abraham; Gere, Francois; Lalanne, Dominique

    2010-06-01

    In a first article, a physician explains that eliminating nuclear weapons would be a way to get rid of the temptation for some to use this weapon of mass destruction, and that it would be better for mankind to live without this threat. The author of the second article discusses the effect abolition could have and, with a reference to President Obama's position on zero nuclear weapons, notes that it could be to the benefit of peaceful uses of nuclear energy. He also discusses the perspectives of this 'global zero logic' with a new approach to arms control, and comments on the relationship between abolition and non-proliferation. He finally discusses the reserved attitude of France on these issues. In the next contribution, a nuclear physicist wonders whether the elimination of nuclear weapons is realistic: whereas it has always been a political objective, nuclear states refused to commit themselves in this direction in 2010 and keep on developing military-oriented tools to design new weapons

  17. Systolic Compression of Epicardial Coronary and Intramural Arteries

    Science.gov (United States)

    Mohiddin, Saidi A.; Fananapazir, Lameh

    2002-01-01

    It has been suggested that systolic compression of epicardial coronary arteries is an important cause of myocardial ischemia and sudden death in children with hypertrophic cardiomyopathy. We examined the associations between sudden death, systolic coronary compression of intra- and epicardial arteries, myocardial perfusion abnormalities, and severity of hypertrophy in children with hypertrophic cardiomyopathy. We reviewed the angiograms from 57 children with hypertrophic cardiomyopathy for the presence of coronary and septal artery compression; coronary compression was present in 23 (40%). The left anterior descending artery was most often affected, and multiple sites were found in 4 children. Myocardial perfusion abnormalities were more frequently present in children with coronary compression than in those without (94% vs 47%, P = 0.002). Coronary compression was also associated with more severe septal hypertrophy and greater left ventricular outflow gradient. Septal branch compression was present in 65% of the children and was significantly associated with coronary compression, severity of septal hypertrophy, and outflow obstruction. Multivariate analysis showed that septal thickness and septal branch compression, but not coronary compression, were independent predictors of perfusion abnormalities. Coronary compression was not associated with symptom severity, ventricular tachycardia, or a worse prognosis. We conclude that compression of coronary arteries and their septal branches is common in children with hypertrophic cardiomyopathy and is related to the magnitude of left ventricular hypertrophy. Our findings suggest that coronary compression does not make an important contribution to myocardial ischemia in hypertrophic cardiomyopathy; however, left ventricular hypertrophy and compression of intramural arteries may contribute significantly. (Tex Heart Inst J 2002;29:290–8) PMID:12484613

  18. Insertion profiles of 4 headless compression screws.

    Science.gov (United States)

    Hart, Adam; Harvey, Edward J; Lefebvre, Louis-Philippe; Barthelat, Francois; Rabiei, Reza; Martineau, Paul A

    2013-09-01

    In practice, the surgeon must rely on screw position (insertion depth) and tactile feedback from the screwdriver (insertion torque) to gauge compression. In this study, we identified the relationship between interfragmentary compression and these 2 factors. The Acutrak Standard, Acutrak Mini, Synthes 3.0, and Herbert-Whipple implants were tested using a polyurethane foam scaphoid model. A specialized testing jig simultaneously measured compression force, insertion torque, and insertion depth at half-screw-turn intervals until failure occurred. The peak compression occurs at an insertion depth of -3.1 mm, -2.8 mm, 0.9 mm, and 1.5 mm for the Acutrak Mini, Acutrak Standard, Herbert-Whipple, and Synthes screws respectively (insertion depth is positive when the screw is proud above the bone and negative when buried). The compression and insertion torque at a depth of -2 mm were found to be 113 ± 18 N and 0.348 ± 0.052 Nm for the Acutrak Standard, 104 ± 15 N and 0.175 ± 0.008 Nm for the Acutrak Mini, 78 ± 9 N and 0.245 ± 0.006 Nm for the Herbert-Whipple, and 67 ± 2 N and 0.233 ± 0.010 Nm for the Synthes headless compression screws. All 4 screws generated a sizable amount of compression (> 60 N) over a wide range of insertion depths. The compression at the commonly recommended insertion depth of -2 mm was not significantly different between screws; thus, implant selection should not be based on the compression profile alone. Conically shaped screws (Acutrak) generated their peak compression when they were fully buried in the foam, whereas the shanked screws (Synthes and Herbert-Whipple) reached peak compression before they were fully inserted. Because insertion torque correlated poorly with compression, surgeons should avoid using tactile judgment of torque as a proxy for compression. Knowledge of the insertion profile may improve our understanding of the implants, provide a better basis for comparing screws, and enable the surgeon to optimize compression. Copyright
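
    One way to use such half-turn measurements is to locate the peak of an insertion profile with a local quadratic fit. A sketch with invented numbers shaped to peak between -2 and -3 mm, roughly mimicking a conical screw (not the study's raw data):

        import numpy as np

        # Hypothetical insertion profile: depth in mm (negative = buried),
        # interfragmentary compression in N, sampled at discrete screw turns.
        depth = np.array([2.0, 1.0, 0.0, -1.0, -2.0, -3.0, -4.0])
        force = np.array([20.0, 45.0, 75.0, 100.0, 113.0, 112.0, 95.0])

        a, b, c = np.polyfit(depth, force, 2)        # quadratic fit around the peak
        peak_depth = -b / (2 * a)                    # vertex of the parabola
        peak_force = np.polyval([a, b, c], peak_depth)
        print(f"estimated peak: {peak_force:.0f} N at {peak_depth:.1f} mm")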

  19. Energy Conservation In Compressed Air Systems

    International Nuclear Information System (INIS)

    Yusuf, I.Y.; Dewu, B.B.M.

    2004-01-01

    Compressed air is an essential utility that accounts for a substantial part of the electricity consumption (bill) in most industrial plants. Although the general saying 'air is free of charge' is not true for compressed air, the utility's cost is not accorded the rightful importance due to it by most industries. The paper will show that the cost of 1 unit of energy in the form of compressed air is at least 5 times the cost of the electricity (energy input) required to produce it. The paper will also provide energy conservation tips for compressed air systems
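
    A back-of-the-envelope check of the factor of 5 (Python; the 10-20% wire-to-air efficiency is a common rule of thumb for industrial compressed-air systems, not a figure from the paper):

        # Cost per useful unit of compressed-air energy relative to electricity,
        # assuming only a fraction of the input electricity ends up as usable
        # air power (the rest is lost mainly as compressor heat and leaks).
        electricity_cost = 1.0          # cost per kWh of electricity (arbitrary units)
        wire_to_air_efficiency = 0.15   # assumed: ~10-20% is a typical rule of thumb

        air_energy_cost = electricity_cost / wire_to_air_efficiency
        print(f"cost per useful kWh of compressed air: {air_energy_cost:.1f}x electricity")
        # ~6.7x here; any efficiency below 20% supports the paper's factor of 5.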

  20. Compressed Data Structures for Range Searching

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Vind, Søren Juhl

    2015-01-01

    matrices and web graphs. Our contribution is twofold. First, we show how to compress geometric repetitions that may appear in standard range searching data structures (such as K-D trees, Quad trees, Range trees, R-trees, Priority R-trees, and K-D-B trees), and how to implement subsequent range queries...... on the compressed representation with only a constant factor overhead. Secondly, we present a compression scheme that efficiently identifies geometric repetitions in point sets, and produces a hierarchical clustering of the point sets, which combined with the first result leads to a compressed representation...
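
    The first contribution can be illustrated with a toy version of compressing geometric repetitions (Python; an illustration in the spirit of the approach, not the paper's data structure): hash-consing quadtree nodes stores each repeated sub-pattern once, turning the tree into a DAG on which range queries can still descend.

        # Build a quadtree over a 0/1 grid, sharing identical subtrees via a
        # canonicalizing dictionary so repeated geometric patterns are stored once.
        def build(grid, x, y, size, nodes):
            """Return a canonical id for the quadtree over grid[y:y+size][x:x+size]."""
            if size == 1:
                key = ("leaf", grid[y][x])
            else:
                h = size // 2
                key = ("node",
                       build(grid, x, y, h, nodes),     build(grid, x + h, y, h, nodes),
                       build(grid, x, y + h, h, nodes), build(grid, x + h, y + h, h, nodes))
            return nodes.setdefault(key, len(nodes))    # identical subtrees share one id

        grid = [[(i // 2 + j // 2) % 2 for j in range(8)] for i in range(8)]  # repetitive pattern
        nodes = {}
        root = build(grid, 0, 0, 8, nodes)
        print(f"unique nodes stored: {len(nodes)} (vs {8 * 8 * 4 // 3} for a full quadtree)")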