WorldWideScience

Sample records for compressed massive nuclear

  1. Generalized massive optimal data compression

    Science.gov (United States)

    Alsing, Justin; Wandelt, Benjamin

    2018-05-01

    In this paper, we provide a general procedure for optimally compressing N data down to n summary statistics, where n is equal to the number of parameters of interest. We show that compression to the score function - the gradient of the log-likelihood with respect to the parameters - yields n compressed statistics that are optimal in the sense that they preserve the Fisher information content of the data. Our method generalizes earlier work on linear Karhunen-Loève compression for Gaussian data whilst recovering both lossless linear compression and quadratic estimation as special cases when they are optimal. We give a unified treatment that also includes the general non-Gaussian case as long as mild regularity conditions are satisfied, producing optimal non-linear summary statistics when appropriate. As a worked example, we derive explicitly the n optimal compressed statistics for Gaussian data in the general case where both the mean and covariance depend on the parameters.
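    To make the compression concrete, here is a minimal numerical sketch of score compression for the Gaussian, mean-dependent case described above; the linear model, fiducial values and unit covariance are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Sketch of score compression: for a Gaussian likelihood with
# parameter-dependent mean mu(theta) and fixed covariance C, the score
# t_a = dmu/dtheta_a . C^{-1} . (d - mu) compresses N data points down to
# n = dim(theta) numbers while preserving the Fisher information.

def score_compress(d, mu, dmu, Cinv):
    """Return the n compressed statistics for data vector d."""
    return dmu @ Cinv @ (d - mu)

rng = np.random.default_rng(0)
N = 100
x = np.linspace(0.0, 1.0, N)
theta_fid = np.array([1.0, 0.5])        # fiducial slope and intercept (assumed)
mu = theta_fid[0] * x + theta_fid[1]    # model mean at the fiducial point
dmu = np.stack([x, np.ones(N)])         # analytic derivatives dmu/dtheta
Cinv = np.eye(N)                        # unit noise covariance for simplicity

d = mu + rng.standard_normal(N)         # one mock data realization
t = score_compress(d, mu, dmu, Cinv)
print("N =", N, "data compressed to n =", t.size, "summaries:", t)
```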

  2. Images compression in nuclear medicine

    International Nuclear Information System (INIS)

    Rebelo, M.S.; Furuie, S.S.; Moura, L.

    1992-01-01

    The performance of two methods for image compression in nuclear medicine was evaluated: the exact LZW method and the approximate cosine-transform method. The results show that the approximate method produced images of acceptable quality for visual analysis, with compression rates considerably higher than those of the exact method. (C.G.C.)

  3. Massive data compression for parameter-dependent covariance matrices

    Science.gov (United States)

    Heavens, Alan F.; Sellentin, Elena; de Mijolla, Damien; Vianello, Alvise

    2017-12-01

    We show how the massive data compression algorithm MOPED can be used to reduce, by orders of magnitude, the number of simulated data sets which are required to estimate the covariance matrix required for the analysis of Gaussian-distributed data. This is relevant when the covariance matrix cannot be calculated directly. The compression is especially valuable when the covariance matrix varies with the model parameters. In this case, it may be prohibitively expensive to run enough simulations to estimate the full covariance matrix throughout the parameter space. This compression may be particularly valuable for the next generation of weak lensing surveys, such as those proposed for Euclid and the Large Synoptic Survey Telescope, for which the number of summary data (such as band power or shear correlation estimates) is very large, ∼10^4, due to the large number of tomographic redshift bins into which the data will be divided. In the pessimistic case where the covariance matrix is estimated separately for all points in a Markov Chain Monte Carlo analysis, this may require an unfeasible 10^9 simulations. We show here that MOPED can reduce this number by a factor of 1000, or a factor of ∼10^6 if some regularity in the covariance matrix is assumed, reducing the number of simulations required to a manageable 10^3, making an otherwise intractable analysis feasible.
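    As a reading aid, the sketch below shows the usual MOPED construction (after Heavens et al. 2000): one weighting vector per parameter, built from the mean derivatives and the inverse covariance and orthogonalized so the compressed numbers are uncorrelated. The toy model and all sizes are assumptions for illustration, not the paper's setup.

```python
import numpy as np

# MOPED-style linear compression: each parameter a gets a weighting vector
# b_a ~ Cinv @ dmu_a, orthogonalized against earlier vectors and normalized,
# so y_a = b_a . d compresses N data to one number per parameter.

def moped_vectors(dmu, Cinv):
    """dmu: (n_params, N) derivatives of the mean; Cinv: (N, N)."""
    bs = []
    for a in range(dmu.shape[0]):
        v = Cinv @ dmu[a]
        for b in bs:                        # Gram-Schmidt against earlier b's
            v = v - (dmu[a] @ b) * b
        bs.append(v / np.sqrt(dmu[a] @ v))  # MOPED normalization
    return np.array(bs)

N = 500
x = np.linspace(0.0, 1.0, N)
dmu = np.stack([x, np.ones(N)])             # toy linear-model derivatives
Cinv = np.eye(N)                            # unit covariance for simplicity
B = moped_vectors(dmu, Cinv)

d = 1.0 * x + 0.5 + np.random.default_rng(1).standard_normal(N)
y = B @ d                                   # N numbers -> one per parameter
print("compressed summaries:", y)
```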

  4. Nuclear transmutation by flux compression

    International Nuclear Information System (INIS)

    Seifritz, W.

    2001-01-01

    A new idea for the transmutation of minor actinides and long- (and even short-) lived fission products is presented. It is based on the property of neutron flux compression in nuclear (fast and/or thermal) reactors possessing spatially non-stationary critical masses. An advantage factor for the burn-up fluence of the elements to be transmuted of the order of 100 or more is obtainable compared with the classical way of transmutation. Three typical examples of such transmuters (a sub-critical ring reactor with a rotating reflector; a sub-critical ring reactor with a rotating spallation source, the so-called "pulsed energy amplifier"; and a fast burn-wave reactor) are presented and analysed with regard to this purpose. (orig.)

  5. Massive-scale RDF Processing Using Compressed Bitmap Indexes

    Energy Technology Data Exchange (ETDEWEB)

    Madduri, Kamesh; Wu, Kesheng

    2011-05-26

    The Resource Description Framework (RDF) is a popular data model for representing linked data sets arising from the web, as well as large scientific data repositories such as UniProt. RDF data intrinsically represents a labeled and directed multi-graph. SPARQL is a query language for RDF that expresses subgraph pattern-finding queries on this implicit multigraph in a SQL-like syntax. SPARQL queries generate complex intermediate join queries; to compute these joins efficiently, we propose a new strategy based on bitmap indexes. We store the RDF data in column-oriented structures as compressed bitmaps along with two dictionaries. This paper makes three new contributions. (i) We present an efficient parallel strategy for parsing the raw RDF data, building dictionaries of unique entities, and creating compressed bitmap indexes of the data. (ii) We utilize the constructed bitmap indexes to efficiently answer SPARQL queries, simplifying the join evaluations. (iii) To quantify the performance impact of using bitmap indexes, we compare our approach to the state-of-the-art triple-store RDF-3X. We find that our bitmap index-based approach to answering queries is up to an order of magnitude faster for a variety of SPARQL queries, on gigascale RDF data sets.
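    The following toy sketch illustrates the core idea of answering a join query with bitmap indexes; Python integers stand in for the compressed bitmaps, and the data and query are invented for illustration (this is not the paper's code or RDF-3X's API).

```python
# Each (predicate, object) pair maps to a bitmap over subject ids;
# a two-pattern query then reduces to bitwise AND/OR over bitmaps.

triples = [
    ("alice", "knows",   "bob"),
    ("bob",   "knows",   "carol"),
    ("alice", "worksAt", "lab"),
    ("carol", "worksAt", "lab"),
]
subjects = sorted({s for s, _, _ in triples})   # dictionary of unique entities
sid = {s: i for i, s in enumerate(subjects)}

index = {}                                      # (predicate, object) -> bitmap
for s, p, o in triples:
    index[(p, o)] = index.get((p, o), 0) | (1 << sid[s])

# Query: find ?x with (?x worksAt lab) and (?x knows ?y), i.e. a join on ?x.
knows_any = 0
for (p, o), bm in index.items():
    if p == "knows":
        knows_any |= bm                         # union over all objects of 'knows'
hits = index[("worksAt", "lab")] & knows_any    # intersection = the join
print([s for s in subjects if (hits >> sid[s]) & 1])   # -> ['alice']
```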

  6. Deep venous thrombosis due to massive compression by uterine myoma

    Directory of Open Access Journals (Sweden)

    Aleksandra Brucka

    2010-10-01

    A 42-year-old woman, gravida 3, para 3, was admitted to hospital because of painful oedema of her right lower extremity. Initial physical examination revealed a gross, firm tumour filling the entire peritoneal cavity. Doppler ultrasound scan revealed a thrombus in the right common iliac vein, extending to the right femoral and popliteal veins and partially into the deep calf veins. Computed tomography confirmed the existence of an abdominal tumour, probably deriving from the genital organs, and the presence of a thrombus in the said veins. The patient underwent hysterectomy, at which a myomatous uterus was removed. She was put on subcutaneous enoxaparin and compression therapy of the lower extremities. Symptoms such as pain and oedema receded. A control Doppler scan showed fibrinolysis, partial organization of the thrombus and final vein recanalisation. After exclusion of risk factors for deep vein thrombosis other than stasis, we conclude that the described pathology was the effect of compression of regional pelvic structures by a uterine myoma.

  7. Optimization of multi-phase compressible lattice Boltzmann codes on massively parallel multi-core systems

    NARCIS (Netherlands)

    Biferale, L.; Mantovani, F.; Pivanti, M.; Pozzati, F.; Sbragaglia, M.; Schifano, S.F.; Toschi, F.; Tripiccione, R.

    2011-01-01

    We develop a Lattice Boltzmann code for computational fluid-dynamics and optimize it for massively parallel systems based on multi-core processors. Our code describes 2D multi-phase compressible flows. We analyze the performance bottlenecks that we find as we gradually expose a larger fraction of

  8. Compression modes and the nuclear matter incompressibility ...

    Indian Academy of Sciences (India)

    We review the current status of the nuclear matter (N = Z and no Coulomb interaction) incompressibility coefficient, K∞, and describe the theoretical and the experimental methods used to determine K∞ from properties of compression modes in nuclei. In particular we consider the long-standing problem of the conflicting ...
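    For orientation, the quantity under review is conventionally defined from the curvature of the energy per nucleon at the saturation density ρ0 (a textbook definition, not quoted from this abstract):

```latex
K_\infty = 9\,\rho_0^{2}\,\left.\frac{\mathrm{d}^{2}(E/A)}{\mathrm{d}\rho^{2}}\right|_{\rho=\rho_0}
```

    Compression modes such as the isoscalar giant monopole resonance probe this curvature, which is how K∞ is extracted from experiment.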

  9. Efficient Bayesian Compressed Sensing-based Channel Estimation Techniques for Massive MIMO-OFDM Systems

    OpenAIRE

    Al-Salihi, Hayder Qahtan Kshash; Nakhai, Mohammad Reza

    2017-01-01

    Efficient and highly accurate channel state information (CSI) at the base station (BS) is essential to achieve the potential benefits of massive multiple input multiple output (MIMO) systems. However, the accuracy attainable in practice is limited due to the problem of pilot contamination. It has recently been shown that compressed sensing (CS) techniques can address the pilot contamination problem. However, CS-based channel estimation requires prior knowledge of channel sp...

  10. Massive-MIMO Sparse Uplink Channel Estimation Using Implicit Training and Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Babar Mansoor

    2017-01-01

    Massive multiple-input multiple-output (massive-MIMO) is foreseen as a potential technology for future 5G cellular communication networks due to its substantial benefits in terms of increased spectral and energy efficiency. These advantages of massive-MIMO are a consequence of equipping the base station (BS) with quite a large number of antenna elements, thus resulting in aggressive spatial multiplexing. In order to effectively reap the benefits of massive-MIMO, an adequate estimate of the channel impulse response (CIR) between each transmit–receive link is of utmost importance. It has been established in the literature that certain specific multipath propagation environments lead to a sparse structured CIR in the spatial and/or delay domains. In this paper, implicit training and compressed sensing based CIR estimation techniques are proposed for the case of massive-MIMO sparse uplink channels. In the proposed superimposed training (SiT) based techniques, a periodic, low-power training sequence is superimposed (arithmetically added) over the information sequence, thus avoiding any dedicated time/frequency slots for the training sequence. For the estimation of such massive-MIMO sparse uplink channels, two greedy-pursuit-based compressed sensing approaches are proposed, viz. SiT-based stage-wise orthogonal matching pursuit (SiT-StOMP) and gradient pursuit (SiT-GP). In order to demonstrate the validity of the proposed techniques, a performance comparison in terms of normalized channel mean square error (NCMSE) and bit error rate (BER) is performed against a notable SiT-based least squares (SiT-LS) channel estimation technique. The effect of channel sparsity, training-to-information power ratio (TIR) and signal-to-noise ratio (SNR) on the BER and NCMSE performance of the proposed schemes is thoroughly studied. For a simulation scenario of 4 × 64 massive-MIMO with a channel sparsity level of 80% and an SNR of 10 dB, a performance gain of 18 dB and 13 d
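    As a concrete illustration of the greedy-pursuit recovery step, here is a plain orthogonal matching pursuit sketch (the simpler relative of the StOMP and GP variants proposed above) applied to a toy sparse-channel problem; the measurement matrix, sizes and noise level are assumptions, and the superimposed-training structure is not modeled.

```python
import numpy as np

def omp(A, y, k):
    """Recover a k-sparse vector from y ≈ A x by orthogonal matching pursuit."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s           # re-fit on the support
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
m, n, k = 64, 128, 4                  # observations, channel taps, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
h = np.zeros(n)
h[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ h + 0.01 * rng.standard_normal(m)   # noisy measurements
h_hat = omp(A, y, k)
print("NMSE:", np.sum((h - h_hat) ** 2) / np.sum(h ** 2))
```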

  11. Searches for massive neutrinos in nuclear beta decay

    International Nuclear Information System (INIS)

    Jaros, J.A.

    1992-10-01

    The status of searches for massive neutrinos in nuclear beta decay is reviewed. The claim by an ITEP group that the electron antineutrino mass is > 17 eV has been disputed by all the subsequent experiments. Current measurements of the tritium beta spectrum limit m(ν̄_e) < 10 eV. The status of the 17 keV neutrino is reviewed. The strong null results from INS Tokyo and Argonne, and deficiencies in the experiments which reported positive effects, make it unreasonable to ascribe the spectral distortions seen by Simpson, Hime, and others to a 17 keV neutrino. Several new ideas on how to search for massive neutrinos in nuclear beta decay are discussed.

  12. Are Nuclear Star Clusters the Precursors of Massive Black Holes?

    Directory of Open Access Journals (Sweden)

    Nadine Neumayer

    2012-01-01

    We present new upper limits for black hole masses in extremely late type spiral galaxies. We confirm that this class of galaxies has black holes with masses less than 10^6 M⊙, if any. We also derive new upper limits for nuclear star cluster masses in massive galaxies with previously determined black hole masses. We use the newly derived upper limits and a literature compilation to study the low mass end of the global-to-nucleus relations. We find the following. (1) The M_BH-σ relation cannot flatten at low masses, but may steepen. (2) The M_BH-M_bulge relation may well flatten in contrast. (3) The M_BH-Sersic n relation is able to account for the large scatter in black hole masses in low-mass disk galaxies. Outliers in the M_BH-Sersic n relation seem to be dwarf elliptical galaxies. When plotting M_BH versus M_NC we find three different regimes: (a) nuclear cluster-dominated nuclei, (b) a transition region, and (c) black hole-dominated nuclei. This is consistent with the picture in which black holes form inside nuclear clusters with a very low mass fraction. They subsequently grow much faster than the nuclear cluster, destroying it when the ratio M_BH/M_NC grows above 100. Nuclear star clusters may thus be the precursors of massive black holes in galaxy nuclei.

  13. Massive optimal data compression and density estimation for scalable, likelihood-free inference in cosmology

    Science.gov (United States)

    Alsing, Justin; Wandelt, Benjamin; Feeney, Stephen

    2018-03-01

    Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing to the observed data; this comparison in data-space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper we use massive asymptotically-optimal data compression to reduce the dimensionality of the data-space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Secondly, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parameterized model for the joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate Density Estimation Likelihood-Free Inference with massive data compression on an analysis of the Joint Light-curve Analysis (JLA) supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as ∼10^4 simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological datasets.
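    A minimal caricature of the density-estimation step (not the paper's DELFI implementation): simulate (θ, t) pairs, estimate the joint density with Gaussian kernels, and slice at the observed summary to obtain the posterior. The one-parameter toy model and the bandwidths are assumptions.

```python
import numpy as np

# Simulate (theta, t) pairs from a flat prior and a toy forward model,
# then weight simulations by a kernel in t at the observed value t_obs
# to build an (unnormalized) posterior density on a theta grid.

rng = np.random.default_rng(0)
n_sims = 2000
theta = rng.uniform(-2.0, 2.0, n_sims)           # flat prior draws
t = theta + 0.3 * rng.standard_normal(n_sims)    # forward-simulated summaries

t_obs = 0.7                                      # "observed" summary statistic
h_theta, h_t = 0.1, 0.1                          # kernel bandwidths (assumed)
grid = np.linspace(-2.0, 2.0, 201)

w = np.exp(-0.5 * ((t - t_obs) / h_t) ** 2)      # kernel weight in t
post = np.array([np.sum(w * np.exp(-0.5 * ((theta - g) / h_theta) ** 2))
                 for g in grid])
dg = grid[1] - grid[0]
post /= post.sum() * dg                          # normalize on the grid
print("posterior mean ~", np.sum(grid * post) * dg)   # close to t_obs = 0.7
```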

  14. Compression of the Right Pulmonary Artery by a Massive Descending Aortic Aneurysm: Defects on Pulmonary Scintigraphy

    Energy Technology Data Exchange (ETDEWEB)

    Makis, William [Brandon Regional Health Centre, Brandon (Canada); Derbekyan, Vilma [McGill Univ. Health Centre, Montreal (Canada)]

    2012-03-15

    A 67-year-old woman, who presented with a 2-month history of dyspnea, had a ventilation and perfusion lung scan that showed absent perfusion of the entire right lung with normal ventilation, as well as a rounded matched defect in the left lower lung adjacent to the midline, suspicious for an aortic aneurysm or dissection. CT pulmonary angiography revealed a massive descending aortic aneurysm compressing the right pulmonary artery as well as the left lung parenchyma, accounting for the bilateral perfusion scan defects. We present the Xe-133 ventilation, Tc-99m MAA perfusion and CT pulmonary angiography imaging findings of this rare case.

  15. A data compression algorithm for nuclear spectrum files

    International Nuclear Information System (INIS)

    Mika, J.F.; Martin, L.J.; Johnston, P.N.

    1990-01-01

    The total space occupied by computer files of spectra generated in nuclear spectroscopy systems can lead to problems of storage space and transmission time. An algorithm is presented which significantly reduces the space required to store nuclear spectra, without loss of any information content. Testing indicates that spectrum files can be routinely compressed by a factor of 5. (orig.)

  16. Compressed beam directed particle nuclear energy generator

    International Nuclear Information System (INIS)

    Salisbury, W.W.

    1985-01-01

    This invention relates to the generation of energy from the fusion of atomic nuclei which are caused to travel towards each other along collision courses, orbiting in common paths having common axes and equal radii. High velocity fusible ion beams are directed along head-on circumferential collision paths in an annular zone wherein beam compression by electrostatic focusing greatly enhances head-on fusion-producing collisions. In one embodiment, a steady radial electric field is imposed on the beams to compress the beams and reduce the radius of the spiral paths for enhancing the particle density. Beam compression is achieved through electrostatic focusing to establish and maintain two opposing beams in a reaction zone

  17. Giant negative linear compression positively coupled to massive thermal expansion in a metal-organic framework.

    Science.gov (United States)

    Cai, Weizhao; Katrusiak, Andrzej

    2014-07-04

    Materials with negative linear compressibility are sought for various technological applications. Such effects have been reported mainly in framework materials. When heated, these typically contract along the same direction as the negative linear compression. Here we show that this common inverse-relationship rule does not apply to a three-dimensional metal-organic framework crystal, [Ag(ethylenediamine)]NO3. In this material, the direction of the largest intrinsic negative linear compression yet observed in metal-organic frameworks coincides with the strongest positive thermal expansion. In the perpendicular direction, the large negative linear thermal expansion and the strongest crystal compressibility are collinear. This seemingly irrational positive relationship between temperature and pressure effects is explained, and the mechanism coupling compressibility with expansivity is presented. The positive coupling between compression and thermal expansion in this material enhances its piezo-mechanical response in adiabatic processes, which may be used for designing new artificial composites and ultrasensitive measuring devices.

  18. Compressed Baryonic Matter of Astrophysics

    OpenAIRE

    Guo, Yanjun; Xu, Renxin

    2013-01-01

    Baryonic matter in the core of a massive and evolved star is compressed significantly to form a supra-nuclear object, and compressed baryonic matter (CBM) is then produced after the supernova. The state of cold matter at a few times nuclear density is pedagogically reviewed, with significant attention paid to a possible quark-cluster state conjectured from an astrophysical point of view.

  19. Nuclear compression effects on pion production in nuclear collisions

    International Nuclear Information System (INIS)

    Sano, M.; Gyulassy, M.; Wakai, M.; Kitazoe, Y.

    1984-11-01

    The pion multiplicity produced in nuclear collisions between 0.2 and 2 AGeV is calculated assuming shock formation. We also correct the procedure of extracting the nuclear equation of state as proposed by Stock et al. The nuclear equation of state would have to be extremely stiff for this model to reproduce the observed multiplicities. The assumptions of this model are critically analyzed. (author)

  20. Mathematical analysis of compressive/tensile molecular and nuclear structures

    Science.gov (United States)

    Wang, Dayu

    Mathematical analysis in chemistry is a fascinating and critical tool to explain experimental observations. In this dissertation, mathematical methods to present chemical bonding and other structures for many-particle systems are discussed at different levels (molecular, atomic, and nuclear). First, the tetrahedral geometry of single, double, or triple carbon-carbon bonds gives an unsatisfying demonstration of bond lengths, compared to experimental trends. To correct this, Platonic solids and Archimedean solids were evaluated as atoms in covalent carbon or nitrogen bond systems in order to find the best solids for geometric fitting. Pentagonal solids, e.g. the dodecahedron and icosidodecahedron, give the best fit with experimental bond lengths; an ideal pyramidal solid which models covalent bonds was also generated. Second, the macroscopic compression/tension architectural approach was applied to forces at the molecular level, considering atomic interactions as compressive (repulsive) and tensile (attractive) forces. Two particle interactions were considered, followed by a model of the dihydrogen molecule (H2; two protons and two electrons). Dihydrogen was evaluated as two different types of compression/tension structures: a coaxial spring model and a ring model. Using similar methods, covalent diatomic molecules (made up of C, N, O, or F) were evaluated. Finally, the compression/tension model was extended to the nuclear level, based on the observation that nuclei with certain numbers of protons/neutrons (magic numbers) have extra stability compared to other nucleon ratios. A hollow spherical model was developed that combines elements of the classic nuclear shell model and liquid drop model. Nuclear structure and the trend of the "island of stability" for the current and extended periodic table were studied.

  1. Nuclear compression effects on pion production in nuclear collisions

    International Nuclear Information System (INIS)

    Sano, M.; Gyulassy, M.; Wakai, M.; Kitazoe, Y.

    1985-01-01

    We show that the method of analyzing the pion excitation function proposed by Stock et al. may determine only a part of the nuclear matter equation of state. With the addition of missing kinetic energy terms the implied high density nuclear equation of state would be much stiffer than expected from conventional theory. A stiff equation of state would also follow if shock dynamics with early chemical freeze out were valid. (orig.)

  2. Negative linear compressibility and massive anisotropic thermal expansion in methanol monohydrate.

    Science.gov (United States)

    Fortes, A Dominic; Suard, Emmanuelle; Knight, Kevin S

    2011-02-11

    The vast majority of materials shrink in all directions when hydrostatically compressed; exceptions include certain metallic or polymer foam structures, which may exhibit negative linear compressibility (NLC) (that is, they expand in one or more directions under hydrostatic compression). Materials that exhibit this property at the molecular level--crystalline solids with intrinsic NLC--are extremely uncommon. With the use of neutron powder diffraction, we have discovered and characterized both NLC and extremely anisotropic thermal expansion, including negative thermal expansion (NTE) along the NLC axis, in a simple molecular crystal (the deuterated 1:1 compound of methanol and water). Apically linked rhombuses, which are formed by the bridging of hydroxyl-water chains with methyl groups, extend along the axis of NLC/NTE and lead to the observed behavior.

  3. The evolution of American nuclear doctrine 1945-1980: from massive retaliation to limited nuclear war

    International Nuclear Information System (INIS)

    Richani, N.

    1983-01-01

    This thesis attempts to demonstrate the evolutionary character of American nuclear doctrine from the beginning of the nuclear age in 1945 until 1980. It also aims at disclosing some of the most important factors that contributed to the doctrine's evolution, namely, technological progress and developments in weaponry, and the shifts that were taking place in the correlation of forces between the two superpowers, the Soviet Union and the United States. The thesis tries to establish the relation, if any, between these two variables (technology and balance of forces) and the evolution of the doctrine from Massive Retaliation to limited nuclear war. There are certainly many other factors which influenced military doctrine, but this thesis focuses on those mentioned above, touching on others where essential. The thesis concludes by trying to answer the question of whether the purpose of the limited nuclear war doctrine is to keep the initiative in US hands, that is, putting itself on the side with the positive purpose, or not. Refs.

  4. The evolution of American nuclear doctrine 1945-1980: from massive retaliation to limited nuclear war

    Energy Technology Data Exchange (ETDEWEB)

    Richani, N. [Public Administration Dept., American Univ. of Beirut (Lebanon)]

    1983-12-31

    This thesis attempts to demonstrate the evolutionary character of American nuclear doctrine from the beginning of the nuclear age in 1945 until 1980. It also aims at disclosing some of the most important factors that contributed to the doctrine's evolution, namely, technological progress and developments in weaponry, and the shifts that were taking place in the correlation of forces between the two superpowers, the Soviet Union and the United States. The thesis tries to establish the relation, if any, between these two variables (technology and balance of forces) and the evolution of the doctrine from Massive Retaliation to limited nuclear war. There are certainly many other factors which influenced military doctrine, but this thesis focuses on those mentioned above, touching on others where essential. The thesis concludes by trying to answer the question of whether the purpose of the limited nuclear war doctrine is to keep the initiative in US hands, that is, putting itself on the side with the positive purpose, or not. Refs.

  5. Massive global ozone loss predicted following regional nuclear conflict

    Science.gov (United States)

    Mills, Michael J.; Toon, Owen B.; Turco, Richard P.; Kinnison, Douglas E.; Garcia, Rolando R.

    2008-01-01

    We use a chemistry-climate model and new estimates of smoke produced by fires in contemporary cities to calculate the impact on stratospheric ozone of a regional nuclear war between developing nuclear states involving 100 Hiroshima-size bombs exploded in cities in the northern subtropics. We find column ozone losses in excess of 20% globally, 25–45% at midlatitudes, and 50–70% at northern high latitudes persisting for 5 years, with substantial losses continuing for 5 additional years. Column ozone amounts remain near or below 220 Dobson units at all latitudes even after three years, constituting an extratropical “ozone hole.” The resulting increases in UV radiation could impact the biota significantly, including serious consequences for human health. The primary cause for the dramatic and persistent ozone depletion is heating of the stratosphere by smoke, which strongly absorbs solar radiation. The smoke-laden air rises to the upper stratosphere, where removal mechanisms are slow, so that much of the stratosphere is ultimately heated by the localized smoke injections. Higher stratospheric temperatures accelerate catalytic reaction cycles, particularly those of odd-nitrogen, which destroy ozone. In addition, the strong convection created by rising smoke plumes alters the stratospheric circulation, redistributing ozone and the sources of ozone-depleting gases, including N2O and chlorofluorocarbons. The ozone losses predicted here are significantly greater than previous “nuclear winter/UV spring” calculations, which did not adequately represent stratospheric plume rise. Our results point to previously unrecognized mechanisms for stratospheric ozone depletion. PMID:18391218

  6. A method of loss free compression for the data of nuclear spectrum

    International Nuclear Information System (INIS)

    Sun Mingshan; Wu Shiying; Chen Yantao; Xu Zurun

    2000-01-01

    A new method of lossless compression based on the features of nuclear spectrum data is presented, from which a practicable algorithm is derived. A compression ratio between 0.50 and 0.25 is obtained, and the distribution of the processed data becomes more suitable for a further compression pass, such as Huffman coding, to improve the ratio.
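    The abstract does not spell out the algorithm, so the sketch below illustrates the generic approach such methods take: nuclear spectra vary smoothly across channels, so first differences are small and can be stored losslessly in variable-length bytes, after which an entropy coder such as Huffman can be applied to the residuals. All choices here are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def zigzag(v):
    """Map signed ints to unsigned: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return (v << 1) if v >= 0 else ((-v) << 1) - 1

def varint(u):
    """Little-endian base-128 varint encoding of an unsigned int."""
    out = bytearray()
    while True:
        b, u = u & 0x7F, u >> 7
        out.append(b | (0x80 if u else 0))
        if not u:
            return bytes(out)

def compress(spectrum):
    """Delta-encode channel counts, then pack each delta as a varint."""
    prev, out = 0, bytearray()
    for c in spectrum:
        out += varint(zigzag(int(c) - prev))
        prev = int(c)
    return bytes(out)

rng = np.random.default_rng(0)
spectrum = rng.poisson(1000.0, size=4096)     # toy 4096-channel spectrum
blob = compress(spectrum)
print(f"compressed to {len(blob) / (4 * spectrum.size):.2f} of 32-bit storage")
```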

  7. Nuclear data compression and reconstruction via discrete wavelet transform

    Energy Technology Data Exchange (ETDEWEB)

    Park, Young Ryong; Cho, Nam Zin [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1997-12-31

    Discrete wavelet transforms (DWTs) are a relatively recent mathematical development and are beginning to be used in various fields. The wavelet transform can be used to compress signals and images owing to its inherent properties. We applied wavelet-transform compression and reconstruction to neutron cross-section data. Numerical tests illustrate that signal compression using wavelets is very effective in reducing the required storage space. 7 refs., 4 figs., 3 tabs. (Author)
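    A small sketch of the idea, assuming the PyWavelets package for the transform; the toy "cross section", wavelet choice, level and threshold are all illustrative, not the paper's settings, and the hard thresholding makes this version lossy rather than exact:

```python
import numpy as np
import pywt  # PyWavelets, assumed available

# Decompose a smooth cross-section-like signal, keep only the largest
# wavelet coefficients, and reconstruct. The error stays small because
# the signal is smooth apart from a narrow resonance.

x = np.linspace(1.0, 100.0, 1024)
sigma = 1.0 / np.sqrt(x) + 0.05 * np.exp(-0.5 * ((x - 60.0) / 0.5) ** 2)

coeffs = pywt.wavedec(sigma, "db4", level=6)
flat = np.concatenate(coeffs)
thr = np.quantile(np.abs(flat), 0.95)          # keep roughly 5% of coefficients
kept = [pywt.threshold(c, thr, mode="hard") for c in coeffs]
rec = pywt.waverec(kept, "db4")[: sigma.size]

nnz = sum(int(np.count_nonzero(c)) for c in kept)
print(f"kept {nnz}/{flat.size} coefficients, "
      f"max abs error {np.max(np.abs(rec - sigma)):.2e}")
```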

  8. Nuclear data compression and reconstruction via discrete wavelet transform

    Energy Technology Data Exchange (ETDEWEB)

    Park, Young Ryong; Cho, Nam Zin [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1998-12-31

    Discrete wavelet transforms (DWTs) are a relatively recent mathematical development and are beginning to be used in various fields. The wavelet transform can be used to compress signals and images owing to its inherent properties. We applied wavelet-transform compression and reconstruction to neutron cross-section data. Numerical tests illustrate that signal compression using wavelets is very effective in reducing the required storage space. 7 refs., 4 figs., 3 tabs. (Author)

  9. THE VERY MASSIVE STAR CONTENT OF THE NUCLEAR STAR CLUSTERS IN NGC 5253

    Energy Technology Data Exchange (ETDEWEB)

    Smith, L. J. [Space Telescope Science Institute and European Space Agency, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Crowther, P. A. [Department of Physics and Astronomy, University of Sheffield, Sheffield S3 7RH (United Kingdom); Calzetti, D. [Department of Astronomy, University of Massachusetts—Amherst, Amherst, MA 01003 (United States); Sidoli, F., E-mail: lsmith@stsci.edu [London Centre for Nanotechnology, University College London, London WC1E 6BT (United Kingdom)]

    2016-05-20

    The blue compact dwarf galaxy NGC 5253 hosts a very young starburst containing twin nuclear star clusters, separated by a projected distance of 5 pc. One cluster (#5) coincides with the peak of the Hα emission and the other (#11) with a massive ultracompact H II region. A recent analysis of these clusters shows that they have a photometric age of 1 ± 1 Myr, in apparent contradiction with the age of 3–5 Myr inferred from the presence of Wolf-Rayet features in the cluster #5 spectrum. We examine Hubble Space Telescope ultraviolet and Very Large Telescope optical spectroscopy of #5 and show that the stellar features arise from very massive stars (VMSs), with masses greater than 100 M⊙, at an age of 1–2 Myr. We further show that the very high ionizing flux from the nuclear clusters can only be explained if VMSs are present. We investigate the origin of the observed nitrogen enrichment in the circumcluster ionized gas and find that the excess N can be produced by massive rotating stars within the first 1 Myr. We find similarities between the NGC 5253 cluster spectrum and those of metal-poor, high-redshift galaxies. We discuss the presence of VMSs in young, star-forming galaxies at high redshift; these should be detected in rest-frame UV spectra to be obtained with the James Webb Space Telescope. We emphasize that population synthesis models with upper mass cutoffs greater than 100 M⊙ are crucial for future studies of young massive star clusters at all redshifts.

  10. Techniques for data compression in experimental nuclear physics problems

    International Nuclear Information System (INIS)

    Byalko, A.A.; Volkov, N.G.; Tsupko-Sitnikov, V.M.

    1984-01-01

    Techniques for data compression during physical experiments are evaluated. Data compression algorithms are divided into three groups: the first includes algorithms based on coding, characterized only by average figures over data files; the second includes algorithms with data-processing elements; the third, algorithms for storing converted data. The techniques based on data conversion are judged the most promising: they achieve high compression efficiency and fast response, and they permit storage of information close to the original source data.

  11. Biliary-duodenal anastomosis using magnetic compression following massive resection of small intestine due to strangulated ileus after living donor liver transplantation: a case report.

    Science.gov (United States)

    Saito, Ryusuke; Tahara, Hiroyuki; Shimizu, Seiichi; Ohira, Masahiro; Ide, Kentaro; Ishiyama, Kohei; Kobayashi, Tsuyoshi; Ohdan, Hideki

    2017-12-01

    Despite the improvements in surgical techniques and postoperative management of patients with liver transplantation, biliary complications remain among the most common and important adverse events. We present the first case of choledochoduodenostomy using magnetic compression following a massive resection of the small intestine due to strangulated ileus after living donor liver transplantation. The 54-year-old female patient had end-stage liver disease secondary to liver cirrhosis due to primary sclerosing cholangitis with ulcerative colitis. Five years earlier, she had received living donor liver transplantation using a left lobe graft, with resection of the extrahepatic bile duct and Roux-en-Y anastomosis. The patient experienced sudden onset of intense abdominal pain. An emergency surgery was performed, and the diagnosis was confirmed as strangulated ileus due to twisting of the mesentery. Massive resection of the small intestine, including the choledochojejunostomy, was performed. Only 70 cm of the small intestine remained. She was transferred to our hospital with an external drainage tube from the biliary cavity and a jejunostomy. We initiated total parenteral nutrition, and percutaneous transhepatic biliary drainage was established to treat the cholangitis. Computed tomography revealed that the biliary duct was close to the duodenum; hence, we planned magnetic compression anastomosis of the biliary duct and the duodenum. The daughter magnet was placed in the biliary drainage tube, and the parent magnet was positioned in the bulbus duodeni using a fiberscope. Anastomosis between the left hepatic duct and the duodenum was accomplished after 25 days, and a biliary drainage stent was placed over the anastomosis to prevent re-stenosis. Contributions to the successful withdrawal of parenteral nutrition were closure of the ileostomy in the adaptive period, preservation of the ileocecal valve, internal drainage of bile, and side-to-side anastomosis.

  12. New filterability and compressibility test cell design for nuclear products

    Energy Technology Data Exchange (ETDEWEB)

    Féraud, J.P. [CEA Marcoule, DTEC/SGCS/LGCI, BP 17171, 30207 Bagnols-sur-Cèze (France); Bourcier, D., E-mail: damien.bourcier@cea.fr [CEA Marcoule, DTEC/SGCS/LGCI, BP 17171, 30207 Bagnols-sur-Cèze (France); Ode, D. [CEA Marcoule, DTEC/SGCS/LGCI, BP 17171, 30207 Bagnols-sur-Cèze (France); Puel, F. [Université Lyon 1, Villeurbanne (France); CNRS, UMR5007, Laboratoire d'Automatique et de Génie des Procédés (LAGEP), CPE-Lyon, 43 bd du 11 Novembre 1918, 69100 Villeurbanne (France)]

    2013-12-15

    Highlights: • Test easily usable without tools in a glove box. • The test minimizes the slurry volume necessary for this type of study. • The test characterizes the flow resistance in a porous medium as it forms. • The test is performed at four pressure levels to determine the compressibility. • The technical design ensures reproducible flow resistance measurements. -- Abstract: Filterability and compressibility tests are often carried out at laboratory scale to obtain data required to scale up solid/liquid separation processes. Current technologies, applied with a constant pressure drop, enable specific resistance and cake formation rate measurement in accordance with a modified Darcy's law. The new test cell design described in this paper is easily usable without tools in a glove box and minimizes the slurry volume necessary for this type of study. This is an advantage for investigating toxic and hazardous products such as radioactive materials. Uranium oxalate precipitate slurries were used to test and validate this new cell. In order to reduce the test cell volume, a statistical approach was applied to 8 results obtained with cylindrical test cells of 1.8 cm and 3 cm in diameter. Wall effects can therefore be ignored despite the small filtration cell diameter, allowing tests to be performed with only about one-tenth of the slurry volume of a standard commercial cell. The significant reduction in the size of this experimental device does not alter the consistency of filtration data which may be used in the design of industrial equipment.

  13. New filterability and compressibility test cell design for nuclear products

    International Nuclear Information System (INIS)

    Féraud, J.P.; Bourcier, D.; Ode, D.; Puel, F.

    2013-01-01

    Highlights: • Test easily usable without tools in a glove box. • The test minimizes the slurry volume necessary for this type of study. • The test characterizes the flow resistance in a porous medium as it forms. • The test is performed at four pressure levels to determine the compressibility. • The technical design ensures reproducible flow resistance measurements. -- Abstract: Filterability and compressibility tests are often carried out at laboratory scale to obtain data required to scale up solid/liquid separation processes. Current technologies, applied with a constant pressure drop, enable specific resistance and cake formation rate measurement in accordance with a modified Darcy's law. The new test cell design described in this paper is easily usable without tools in a glove box and minimizes the slurry volume necessary for this type of study. This is an advantage for investigating toxic and hazardous products such as radioactive materials. Uranium oxalate precipitate slurries were used to test and validate this new cell. In order to reduce the test cell volume, a statistical approach was applied to 8 results obtained with cylindrical test cells of 1.8 cm and 3 cm in diameter. Wall effects can therefore be ignored despite the small filtration cell diameter, allowing tests to be performed with only about one-tenth of the slurry volume of a standard commercial cell. The significant reduction in the size of this experimental device does not alter the consistency of filtration data which may be used in the design of industrial equipment.

  14. ORIGIN AND GROWTH OF NUCLEAR STAR CLUSTERS AROUND MASSIVE BLACK HOLES

    International Nuclear Information System (INIS)

    Antonini, Fabio

    2013-01-01

    The centers of stellar spheroids less luminous than ∼10^10 L☉ are often marked by the presence of nucleated central regions, called 'nuclear star clusters' (NSCs). The origin of NSCs is still unclear. Here we investigate the possibility that NSCs originate from the migration and merger of stellar clusters at the center of galaxies where a massive black hole (MBH) may sit. We show that the observed scaling relation between NSC masses and the velocity dispersion of their host spheroids cannot be reconciled with a purely 'in situ' dissipative formation scenario. On the other hand, the observed relation appears to be in agreement with the predictions of the cluster merger model. A dissipationless formation model also reproduces the observed relation between the size of NSCs and their total luminosity, R ∝ √(L_NSC). When an MBH is included at the center of the galaxy, such dependence becomes substantially weaker than the observed correlation, since the size of the NSC is mainly determined by the fixed tidal field of the MBH. We evolve through dynamical friction a population of stellar clusters in a model of a galactic bulge taking into account dynamical dissolution due to two-body relaxation, starting from a power-law cluster initial mass function and adopting an initial total mass in stellar clusters consistent with the present-day cluster formation efficiency of the Milky Way (MW). The most massive clusters reach the center of the galaxy and merge to form a compact nucleus; after 10^10 years, the resulting NSC has properties that are consistent with the observed distribution of stars in the MW NSC. When an MBH is included at the center of a galaxy, globular clusters are tidally disrupted during inspiral, resulting in NSCs with lower densities than those of NSCs forming in galaxies with no MBHs. We suggest this as a possible explanation for the lack of NSCs in galaxies containing MBHs more massive than ∼10^8 M☉. Finally, we investigate the orbital

  15. ORIGIN AND GROWTH OF NUCLEAR STAR CLUSTERS AROUND MASSIVE BLACK HOLES

    Energy Technology Data Exchange (ETDEWEB)

    Antonini, Fabio, E-mail: antonini@cita.utoronto.ca [Canadian Institute for Theoretical Astrophysics, University of Toronto, 60 St. George Street, Toronto, Ontario M5S 3H8 (Canada)]

    2013-01-20

    The centers of stellar spheroids less luminous than ∼10^10 L☉ are often marked by the presence of nucleated central regions, called 'nuclear star clusters' (NSCs). The origin of NSCs is still unclear. Here we investigate the possibility that NSCs originate from the migration and merger of stellar clusters at the center of galaxies where a massive black hole (MBH) may sit. We show that the observed scaling relation between NSC masses and the velocity dispersion of their host spheroids cannot be reconciled with a purely 'in situ' dissipative formation scenario. On the other hand, the observed relation appears to be in agreement with the predictions of the cluster merger model. A dissipationless formation model also reproduces the observed relation between the size of NSCs and their total luminosity, R ∝ √(L_NSC). When an MBH is included at the center of the galaxy, such dependence becomes substantially weaker than the observed correlation, since the size of the NSC is mainly determined by the fixed tidal field of the MBH. We evolve through dynamical friction a population of stellar clusters in a model of a galactic bulge taking into account dynamical dissolution due to two-body relaxation, starting from a power-law cluster initial mass function and adopting an initial total mass in stellar clusters consistent with the present-day cluster formation efficiency of the Milky Way (MW). The most massive clusters reach the center of the galaxy and merge to form a compact nucleus; after 10^10 years, the resulting NSC has properties that are consistent with the observed distribution of stars in the MW NSC. When an MBH is included at the center of a galaxy, globular clusters are tidally disrupted during inspiral, resulting in NSCs with lower densities than those of NSCs forming in galaxies with no MBHs. We suggest this as a possible explanation for the lack of NSCs in galaxies containing MBHs more massive

  16. Binding Energy and Compression Modulus of Infinite Nuclear Matter ...

    African Journals Online (AJOL)

    ... MeV at the normal nuclear matter saturation density, consistent with the best available density-dependent potentials derived from the G-matrix approach. The result for the incompressibility modulus, k∞, is in excellent agreement with the results of other workers. Journal of the Nigerian Association of Mathematical Physics, ...

  17. The investigation on compressed air quality analysis results of nuclear power plants

    International Nuclear Information System (INIS)

    Sung, K. B.; Kim, H. K.; Kim, W. S.

    2000-01-01

    The compressed air system of nuclear power plants provides pneumatic power for both operation and control of various plant equipment, tools, and instrumentation. Included in the air supply systems are the compressors, coolers, moisture separators, dryers, filters and air receiver tanks that make up the major items of equipment. The service air system provides oil-free compressed air for general plant and maintenance use, and the instrument air system provides dry, oil-free compressed air for both nonessential and essential components and instruments. The NRC recommended periodic checks in GL 88-14, 'Instrument air supply system problems affecting safety-related equipment'. To ensure that the quality of the instrument air is equivalent to or exceeds the requirements of ISA-S7.3 (1975), air samples are taken at every refueling outage and analyzed for moisture, oil and particulate content. The overall results satisfy the requirements of ISA-S7.3.

  18. Time of flight measurements of unirradiated and irradiated nuclear graphite under cyclic compressive load

    Energy Technology Data Exchange (ETDEWEB)

    Bodel, W., E-mail: william.bodel@hotmail.com [Nuclear Graphite Research Group, The University of Manchester (United Kingdom); Atkin, C. [Health and Safety Laboratory, Buxton (United Kingdom); Marsden, B.J. [Nuclear Graphite Research Group, The University of Manchester (United Kingdom)

    2017-04-15

    The time-of-flight technique has been used to investigate the stiffness of nuclear graphite with respect to grade and grain direction. A loading rig was developed to collect time-of-flight measurements during cycled compressive loading, up to 80% of the material's compressive strength, and subsequent unloading of specimens along the axis of the applied stress. The transmission velocity (related to Young's modulus) decreased with increasing applied stress; depending on the graphite grade and orientation, the modulus then increased, decreased or remained constant upon unloading. These tests were repeated while observing the microstructure during the load/unload cycles. Initial decreases in transmission velocity with compressive load are attributed to microcrack formation within the filler and binder phases. Three distinct types of behaviour occur on unloading, depending on the grade, irradiation, and loading direction. These different behaviours can be explained in terms of the material microstructure observed from the microscopy performed during loading.

  19. Compressed-air and backup nitrogen systems in nuclear power plants

    International Nuclear Information System (INIS)

    Hagen, E.W.

    1982-07-01

    This report reviews and evaluates the performance of the compressed-air and pressurized-nitrogen gas systems in commercial nuclear power units. The information was collected from readily available operating experiences, licensee event reports, system designs in safety analysis reports, and regulatory documents. The results are collated and analyzed for significance and impact on power plant safety performance. Under certain circumstances, the fail-safe philosophy for a piece of equipment or subsystem of the compressed-air systems initiated a series of actions culminating in a reactor transient or unit scram. However, based on this study of prevailing operating experiences, reclassifying the compressed-gas systems to a higher safety level will neither prevent nor mitigate the recurrence of such events, nor alleviate nuclear power plant problems caused by inadequate maintenance, operating procedures, and/or practices. Conversely, because most of the problems derived from the sources listed previously, upgrading both maintenance and operating procedures will result not only in substantial improvement in the performance and availability of the compressed-air (and backup nitrogen) systems but also in improved overall plant performance.

  20. Feasibility of Ericsson type isothermal expansion/compression gas turbine cycle for nuclear energy use

    International Nuclear Information System (INIS)

    Shimizu, Akihiko

    2007-01-01

    Gas turbines for next-generation nuclear applications, such as HTGR power plants, gas-cooled FBRs, and gas-cooled fusion reactors, use helium as the working gas in a closed cycle. Because the materials making up the cycle must stay below their allowable temperatures for mechanical strength and for radioactivity containment, the expansion inlet temperature is severely limited. To improve thermal efficiency, an Ericsson-type gas turbine cycle with isothermal expansion and isothermal compression should be developed, using wetted surfaces of the expander/compressor casings and of the ducts between stators, rather than relying on external heat exchangers to perform multistage reheat and multistage intercooling. The feasibility of an Ericsson cycle was studied in comparison with a Brayton cycle and a multistage compression/expansion cycle, and the technologies to be developed were clarified. (author)
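    For context, the textbook motivation for the isothermal stages (standard thermodynamics, not results from the paper): each ideal-gas isothermal stage delivers specific work proportional to its temperature, and an ideal Ericsson cycle with perfect regeneration attains the Carnot efficiency between the hot and cold temperatures:

```latex
w_{\mathrm{iso}} = R\,T \ln\frac{p_1}{p_2}, \qquad
\eta_{\mathrm{Ericsson}}^{\mathrm{ideal}} = 1 - \frac{T_{\mathrm{C}}}{T_{\mathrm{H}}} = \eta_{\mathrm{Carnot}}
```

    This is why approximating isothermal expansion at a fixed allowable inlet temperature raises efficiency relative to a simple Brayton cycle, in which the expansion is adiabatic and the mean heat-addition temperature is lower.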

  1. Opportunities and challenges in applying the compressive sensing framework to nuclear science and engineering

    International Nuclear Information System (INIS)

    Mille, Matthew; Su, Lin; Yazici, Birsen; Xu, X. George

    2011-01-01

    Compressive sensing is a five-year-old theory that has already resulted in an extremely large number of publications and that has the potential to impact every field of engineering and applied science that deals with data acquisition and processing. This paper introduces the mathematics, presents a simple demonstration of radiation dose reduction in x-ray CT imaging, and discusses potential applications in nuclear science and engineering. (author)
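    To accompany the mathematical introduction mentioned above, here is a minimal, self-contained sketch of the basic compressive sensing reconstruction: an l1-regularized least-squares recovery via iterative soft thresholding (ISTA). Sizes, sparsity and the regularization weight are illustrative assumptions.

```python
import numpy as np

def ista(A, y, lam=0.05, iters=500):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L        # gradient step on the quadratic term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                         # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m) # random sensing matrix
y = A @ x_true                               # m << n noiseless measurements
x_hat = ista(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```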

  2. Observation of Compressive Deformation Behavior of Nuclear Graphite by Digital Image Correlation

    International Nuclear Information System (INIS)

    Kim, Hyunju; Kim, Eungseon; Kim, Minhwan; Kim, Yongwan

    2014-01-01

    Polycrystalline nuclear graphite has been proposed for fuel elements, moderator and reflector blocks, and core support structures in a very high temperature gas-cooled reactor. During reactor operation, graphite core components and core support structures are subjected to various stresses. It is therefore important to understand the mechanisms of deformation and fracture of nuclear graphites, and their significance for structural integrity assessment methods. Digital image correlation (DIC) is a powerful tool for measuring the full-field displacement distribution on the surface of specimens. In this study, to gain an understanding of the compressive deformation characteristics of nuclear graphite, the formation of the strain field during a compression test was examined using a commercial DIC system. The non-linear load-displacement characteristic prior to the peak load was shown to be dominated mainly by the presence of localized strains, which resulted in a permanent displacement. Young's modulus was properly calculated from the measured strain.

  3. Analysis on Japan's long-term energy outlook considering massive deployment of variable renewable energy under nuclear energy scenario

    International Nuclear Information System (INIS)

    Komiyama, Ryoichi; Fujii, Yasumasa

    2012-01-01

    This paper investigates Japan's long-term energy outlook to 2050, considering massive deployment of solar photovoltaic (PV) systems and wind power generation under a nuclear energy scenario. The extensive introduction of PV and wind power systems is expected to play an important role in enhancing electricity supply security after the Fukushima nuclear power accident, which has increased the uncertainty of future construction of additional nuclear power plants in Japan. Against this background, we develop an integrated energy assessment model comprising both an econometric energy demand and supply model and an optimal power generation mix model. The latter model can explicitly analyze the impact of output fluctuations of variable renewables at a detailed time resolution of 10 minutes over 365 consecutive days, incorporating the role of stationary battery technology. Simulation results reveal that the intermittent fluctuations arising from a high penetration of these renewables are controlled by quick load-following operation of natural gas combined cycle power plants, pumped-storage hydro power, stationary battery technology, and suppression of PV and wind output. The results also show that massive penetration of renewables does not necessarily require a comparable scale of stationary battery capacity. Additionally, in the scenario that assumes decommissioning of nuclear power plants whose lifetimes exceed 40 years, the required PV capacity in 2050 amounts to more than double the PV installation potential of building and abandoned-farmland areas. (author)

  4. The numerical study of the compressible rising of nuclear fireball at low altitude

    International Nuclear Information System (INIS)

    Wang Lin; Zheng Yi; Cheng Xianyou

    2010-01-01

    To study the evolution of a nuclear fireball during the phase of compressible rise, the pressure and density were computed numerically. The distribution of fireball parameters changed during the rise: the pressure in the upper part of the fireball increased while that in the lower part decreased. The dilute region initially lying in the middle of the fireball moved upward; meanwhile, the density gradient in the upper and side parts increased, contrary to the change in density beneath the fireball. The computed densities agreed very well with experimental shadowgraphs. (authors)

  5. Prediction of concrete compressive strength considering humidity and temperature in the construction of nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Seung Hee; Jang, Kyung Pil [Department of Civil and Environmental Engineering, Myongji University, Yongin (Korea, Republic of); Bang, Jin-Wook [Department of Civil Engineering, Chungnam National University, Daejeon (Korea, Republic of); Lee, Jang Hwa [Structural Engineering Research Division, Korea Institute of Construction Technology (Korea, Republic of); Kim, Yun Yong, E-mail: yunkim@cnu.ac.kr [Structural Engineering Research Division, Korea Institute of Construction Technology (Korea, Republic of)

    2014-08-15

    Highlights: • Compressive strength tests for three concrete mixes were performed. • The parameters of the humidity-adjusted maturity function were determined. • Strength can be predicted considering temperature and relative humidity. - Abstract: This study proposes a method for predicting compressive strength developments in the early ages of concretes used in the construction of nuclear power plants. Three representative mixes with strengths of 6000 psi (41.4 MPa), 4500 psi (31.0 MPa), and 4000 psi (27.6 MPa) were selected and tested under various curing conditions; the temperature ranged from 10 to 40 °C, and the relative humidity from 40 to 100%. In order to consider not only the effect of temperature but also that of humidity, an existing model, the humidity-adjusted maturity function, was adopted, and the parameters used in the function were determined from the test results. A series of tests was also performed under curing conditions of variable temperature and constant humidity, and a comparison between the measured and predicted strengths was made for verification.
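    A hedged sketch of a humidity-adjusted maturity calculation of the kind the paper adopts: an Arrhenius equivalent age is accumulated with a humidity reduction factor, and strength is read off a maturity-strength curve. Every functional form and constant below is an illustrative assumption, not the paper's fitted parameters.

```python
import numpy as np

R = 8.314       # gas constant (J/mol/K)
E = 33.5e3      # assumed apparent activation energy (J/mol)
T_REF = 293.15  # assumed reference curing temperature, 20 C (K)

def beta_h(h):
    """Assumed humidity reduction factor in [0, 1] (Bazant-type form)."""
    return 1.0 / (1.0 + (5.0 * (1.0 - h)) ** 4)

def equivalent_age(T, h, dt):
    """Accumulate Arrhenius-weighted, humidity-weighted curing time (days)."""
    return float(np.sum(beta_h(h) * np.exp(-E / R * (1.0 / T - 1.0 / T_REF)) * dt))

def strength(t_e, s_inf=41.4, tau=3.0):
    """Assumed exponential maturity-strength curve (MPa)."""
    return s_inf * np.exp(-tau / t_e)

T = np.full(28, 293.15)   # 28 daily steps at a constant 20 C
h = np.full(28, 0.8)      # 80% relative humidity
t_e = equivalent_age(T, h, 1.0)
print(f"equivalent age {t_e:.1f} d -> predicted strength {strength(t_e):.1f} MPa")
```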

  6. Effects of nuclear structure in the spin-dependent scattering of weakly interacting massive particles

    Science.gov (United States)

    Nikolaev, M. A.; Klapdor-Kleingrothaus, H. V.

    1993-06-01

    We present calculations of the nuclear form factors for spin-dependent elastic scattering of dark matter WIMPs from 123Te and 131Xe isotopes, proposed to be used for dark matter detection. A method based on the theory of finite Fermi systems was used to describe the reduction of the single-particle spin-dependent matrix elements in the nuclear medium. Nucleon single-particle states were calculated in a realistic shell model potential; pairing effects were treated within the BCS model. The coupling of the lowest single-particle levels in 123Te to collective 2+ excitations of the core was taken into account phenomenologically. The calculated nuclear form factors are considerably less than the single-particle ones for low momentum transfer. At high momentum transfer some dynamical amplification takes place due to the pion exchange term in the effective nuclear interaction, but as the momentum transfer increases further, the quenching effect disappears. The shape of the nuclear form factor for the 131Xe isotope differs from the one obtained using an oscillator basis.

  7. Effects of nuclear structure in the spin-dependent scattering of weakly interacting massive particles

    International Nuclear Information System (INIS)

    Nikolaev, M.A.; Klapdor-Kleingrothaus, H.V.

    1993-01-01

    We present calculations of the nuclear form factors for spin-dependent elastic scattering of dark matter WIMPs from 123 Te and 131 Xe isotopes, proposed to be used for dark matter detection. A method based on the theory of finite Fermi systems was used to describe the reduction of the single-particle spin-dependent matrix elements in the nuclear medium. Nucleon single-particle states were calculated in a realistic shell-model potential; pairing effects were treated within the BCS model. The coupling of the lowest single-particle levels in 123 Te to collective 2+ excitations of the core was taken into account phenomenologically. The calculated nuclear form factors are considerably smaller than the single-particle ones at low momentum transfer. At high momentum transfer some dynamical amplification takes place due to the pion-exchange term in the effective nuclear interaction; as the momentum transfer increases further, the difference disappears and the quenching effect vanishes. The shape of the nuclear form factor for the 131 Xe isotope differs from the one obtained using an oscillator basis. (orig.)

  8. The space distribution of neutrons generated in massive lead target by relativistic nuclear beam

    International Nuclear Information System (INIS)

    Chultem, D.; Damdinsuren, Ts.; Enkh-Gin, L.; Lomova, L.; Perelygin, V.; Tolstov, K.

    1993-01-01

    The present paper is devoted to the implementation of solid-state nuclear track detectors in research on neutron generation in an extended lead spallation target. The measured spatial distribution of neutrons inside the lead target and the neutron distribution in a thick water moderator are assessed. (Author)

  9. Use of massively parallel computing to improve modelling accuracy within the nuclear sector

    Directory of Open Access Journals (Sweden)

    L M Evans

    2016-06-01

    This work presents recent advancements in three techniques: uncertainty quantification (UQ), cellular automata finite element (CAFE), and image-based finite element methods (IBFEM). Case studies are presented demonstrating their suitability for use in nuclear engineering, made possible by advancements in parallel computing hardware that is projected to be available to industry within the next decade at a cost of the order of $100k.

  10. Element Production in the S-Cl Region During Carbon Burning in Massive Stars. Using Computer Systems for Modeling of the Nuclear-Reaction Network

    CERN Document Server

    Szalanski, P; Marganeic, A; Gledenov, Yu M; Sedyshev, P V; Machrafi, R; Oprea, A; Padureanu, I; Aranghel, D

    2002-01-01

    This paper presents results of calculations for the nuclear network in the S-Cl region during helium burning in massive stars (25 M☉) using integrated mathematical systems. The authors also examine other applications of the presented method to different physical problems.

  11. Element production in the S - Cl region during carbon burning in massive stars. Using computer systems for modeling of the nuclear-reaction network

    International Nuclear Information System (INIS)

    Szalanski, P.; Stepinski, M.; Marganiec, A.; Gledenov, Yu.M.; Sedyshev, P.V.; Machrafi, R.; Oprea, A.; Padureanu, I.; Aranghel, D.

    2002-01-01

    This paper presents results of calculations for the nuclear network in the S-Cl region during helium burning in massive stars (25 solar masses) using integrated mathematical systems. The authors also examine other applications of the presented method to different physical problems. (author)

  12. Nuclear β decay with a massive neutrino in an external electromagnetic field

    International Nuclear Information System (INIS)

    Ternov, I.M.; Rodionov, V.N.; Zhulego, V.G.; Lobanov, A.E.; Pavlova, O.S.; Dorofeev, O.F.

    1986-01-01

    Beta decay in the presence of an external electromagnetic field is investigated, taking into account the non-zero neutrino rest mass. The electron spectrum and the polarisation effects for different orientations of the nuclear spin are considered. It is shown that the electromagnetic wave substantially modifies the boundaries of the β-electron spectrum. The results, which include an analysis of the total decay probability in intense magnetic fields, may have various astrophysical implications. (author)

  13. The discovery of nuclear compression phenomena in relativistic heavy-ion collisions

    International Nuclear Information System (INIS)

    Schmidt, H.R.

    1991-01-01

    This article has attempted to review more than 15 years of research on shock-compression phenomena, which is closely related to the goal of determining the nuclear EOS. Exciting progress has been made in this field over recent years, and the fundamental physics of relativistic heavy-ion collisions has been well established. Overwhelming experimental evidence for the existence of shock compression has been extracted from the data. While early, inclusive measurements had been rather inconclusive, the advent of 4π detectors like the GSI-LBL Plastic Ball enabled the outstanding discovery of collective flow effects, as predicted by fluid-dynamical calculations. The particular case of conical Mach shock waves, anticipated for asymmetric collisions, has not been observed. What are the reasons? Surprisingly, the maximum energy of 2.1 GeV/nucleon for heavy ions at the BEVALAC turned out to be too low for Mach shock waves to occur: the small 20Ne nucleus is stopped in the heavy Au target, and a Mach cone, even if it had developed in the early stage of the collision, would be wiped out by thermal motion in the process of slowing the projectile down to rest. A comparison of the data with models hints towards a rather hard EOS, although a soft one cannot be excluded definitively. A quantitative extraction is aggravated by a number of in-medium and final-state effects which influence the calculated observables in a similar fashion as different choices of an EOS. Thus, as of now, the precise knowledge of the EOS of hot and dense matter is still an open question and needs further investigation. (orig.)

  14. Massively parallel Monte Carlo. Experiences running nuclear simulations on a large condor cluster

    International Nuclear Information System (INIS)

    Tickner, James; O'Dwyer, Joel; Roach, Greg; Uher, Josef; Hitchen, Greg

    2010-01-01

    The trivially parallel nature of Monte Carlo (MC) simulations makes them ideally suited for running on a distributed, heterogeneous computing environment. We report on the setup and operation of a large, cycle-harvesting Condor computer cluster, used to run MC simulations of nuclear instruments ('jobs') on approximately 4,500 desktop PCs. Successful operation must balance the competing goals of maximizing the availability of machines for running jobs whilst minimizing the impact on users' PC performance. This requires classification of jobs according to anticipated run-time and priority, and careful optimization of the parameters used to control job allocation to host machines. To maximize use of a large Condor cluster, we have created a powerful suite of tools to handle job submission and analysis, as the manual creation, submission and evaluation of large numbers (hundreds to thousands) of jobs would be too arduous. We describe some of the key aspects of this suite, which has been interfaced to the well-known MCNP and EGSnrc nuclear codes and our in-house PHOTON optical MC code. We report on our practical experiences of operating our Condor cluster and present examples of several large-scale instrument design problems that have been solved using this tool. (author)
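
    The job-classification idea described above can be illustrated with a small Python helper that writes HTCondor submit descriptions. This is a hypothetical sketch, not the authors' in-house suite; the class thresholds, priorities and file names are illustrative assumptions.

    ```python
    from pathlib import Path

    # Hypothetical sketch of programmatic Condor/HTCondor job submission:
    # jobs are classified by anticipated run-time so short jobs can be
    # steered to briefly idle desktops while long jobs get stable hosts.

    def classify(expected_minutes):
        if expected_minutes < 30:
            return "short", 10      # class name, submit priority
        if expected_minutes < 240:
            return "medium", 5
        return "long", 0

    def write_submit_file(job_dir, exe, args, expected_minutes):
        job_class, prio = classify(expected_minutes)
        submit = f"""\
    universe    = vanilla
    executable  = {exe}
    arguments   = {args}
    output      = {job_class}_$(Cluster)_$(Process).out
    error       = {job_class}_$(Cluster)_$(Process).err
    log         = {job_class}_$(Cluster).log
    priority    = {prio}
    request_memory = 512M
    queue 1
    """
        path = Path(job_dir) / f"{job_class}.sub"
        path.write_text(submit)
        return path

    # mcnp_wrapper.sh and input_deck.i are placeholder names
    print(write_submit_file("/tmp", "mcnp_wrapper.sh", "input_deck.i", 45))
    ```

    Each generated file would then be handed to condor_submit; classifying by anticipated run-time is what lets the cluster harvest short idle periods without disturbing desktop users.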

  15. Nucleolus association of chromosomal domains is largely maintained in cellular senescence despite massive nuclear reorganisation.

    Science.gov (United States)

    Dillinger, Stefan; Straub, Tobias; Németh, Attila

    2017-01-01

    Mammalian chromosomes are organized in structural and functional domains of 0.1-10 Mb, which are characterized by high self-association frequencies in the nuclear space and different contact probabilities with nuclear sub-compartments. They exhibit distinct chromatin modification patterns, gene expression levels and replication timing. Recently, nucleolus-associated chromosomal domains (NADs) have been discovered, yet their precise genomic organization and dynamics are still largely unknown. Here, we use nucleolus genomics and single-cell experiments to address these questions in human embryonic fibroblasts during replicative senescence. Genome-wide mapping reveals 1,646 NADs in proliferating cells, which cover about 38% of the annotated human genome. They are mainly heterochromatic and correlate with late replicating loci. Using Hi-C data analysis, we show that interactions of NADs dominate interphase chromosome contacts in the 10-50 Mb distance range. Interestingly, only minute changes in nucleolar association are observed upon senescence. These spatial rearrangements in subdomains smaller than 100 kb are accompanied with local transcriptional changes. In contrast, large centromeric and pericentromeric satellite repeat clusters extensively dissociate from nucleoli in senescent cells. Accordingly, H3K9me3-marked heterochromatin gets remodelled at the perinucleolar space as revealed by immunofluorescence analyses. Collectively, this study identifies connections between the nucleolus, 3D genome structure, and cellular aging at the level of interphase chromosome organization.

  16. Nucleolus association of chromosomal domains is largely maintained in cellular senescence despite massive nuclear reorganisation.

    Directory of Open Access Journals (Sweden)

    Stefan Dillinger

    Full Text Available Mammalian chromosomes are organized in structural and functional domains of 0.1-10 Mb, which are characterized by high self-association frequencies in the nuclear space and different contact probabilities with nuclear sub-compartments. They exhibit distinct chromatin modification patterns, gene expression levels and replication timing. Recently, nucleolus-associated chromosomal domains (NADs) have been discovered, yet their precise genomic organization and dynamics are still largely unknown. Here, we use nucleolus genomics and single-cell experiments to address these questions in human embryonic fibroblasts during replicative senescence. Genome-wide mapping reveals 1,646 NADs in proliferating cells, which cover about 38% of the annotated human genome. They are mainly heterochromatic and correlate with late replicating loci. Using Hi-C data analysis, we show that interactions of NADs dominate interphase chromosome contacts in the 10-50 Mb distance range. Interestingly, only minute changes in nucleolar association are observed upon senescence. These spatial rearrangements in subdomains smaller than 100 kb are accompanied with local transcriptional changes. In contrast, large centromeric and pericentromeric satellite repeat clusters extensively dissociate from nucleoli in senescent cells. Accordingly, H3K9me3-marked heterochromatin gets remodelled at the perinucleolar space as revealed by immunofluorescence analyses. Collectively, this study identifies connections between the nucleolus, 3D genome structure, and cellular aging at the level of interphase chromosome organization.

  17. Luminous Infrared Galaxies. III. Multiple Merger, Extended Massive Star Formation, Galactic Wind, and Nuclear Inflow in NGC 3256

    Science.gov (United States)

    Lípari, S.; Díaz, R.; Taniguchi, Y.; Terlevich, R.; Dottori, H.; Carranza, G.

    2000-08-01

    Emission-line ratios (N II/Hα, S II/Hα, S II/S II), and FWHM (Hα) maps for the central region (30''×30'', r_max ~ 22'' ~ 4 kpc), with a spatial resolution of 1''. In the central region (r ~ 5-6 kpc) we detected that the nuclear starburst and the extended giant H II regions (in the spiral arms) have very similar properties, i.e., high-metallicity and low-ionization spectra, with T_eff = 35,000 K, solar abundance, a range of T_e ~ 6000-7000 K, and N_e ~ 100-1000 cm^-3. The nuclear and extended outflow shows properties typical of galactic wind/shocks, associated with the nuclear starburst. We suggest that the interaction between dynamical effects, the galactic wind (outflow), low-energy cosmic rays, and the molecular+ionized gas (probably in the inflow phase) could be the mechanism that generates the "similar extended properties in the massive star formation, at a scale of 5-6 kpc!" We have also studied the presence of the close merger/interacting system NGC 3256C (at ~150 kpc, ΔV = -100 km s^-1) and the possible association between the NGC 3256 and NGC 3263 groups of galaxies. In conclusion, these results suggest that NGC 3256 is the product of a multiple merger, which generated an extended massive star formation process with an associated galactic wind plus a nuclear inflow. Therefore, NGC 3256 is another example in which the relation between mergers and extreme starbursts (and the powerful galactic wind, "multiple" Type II supernova explosions) plays an important role in the evolution of galaxies (the hypothesis of Rieke et al., Joseph et al., Terlevich et al., Heckman et al., and Lípari et al.). Based on observations obtained at the Hubble Space Telescope (HST; Wide Field Planetary Camera 2 [WFPC2] and NICMOS) satellite; International Ultraviolet Explorer (IUE) satellite; European Southern Observatory (ESO, NTT), Chile; Cerro Tololo Inter-American Observatory (CTIO), Chile; Complejo Astronómico el Leoncito (CASLEO), Argentina; Estación Astrofísica de Bosque Alegre (BALEGRE), Argentina.

  18. Agmatine inhibits nuclear factor-κB nuclear translocation in acute spinal cord compression injury rat model

    Directory of Open Access Journals (Sweden)

    Doaa M. Samy

    2016-09-01

    Full Text Available Secondary damage after acute spinal cord compression injury (SCCI) exacerbates the initial insult. Nuclear factor kappa-B (NF-κB) p65 activation is involved in the deleterious effects of SCCI. Agmatine (Agm) has shown neuroprotection against various CNS injuries; however, its impact on NF-κB signalling in acute SCCI remains to be investigated. The present study compared the effectiveness of Agm therapy and decompression laminectomy (DL) in functional recovery, oxidative stress, inflammatory and apoptotic responses, and modulation of NF-κB activation in an acute SCCI rat model. Rats were either sham-operated or subjected to SCCI at T8-9 using a 2-Fr. catheter. SCCI rats were randomly treated with DL at T8-9, intraperitoneal Agm (100 mg/kg/day), combined (DL/Agm) treatment, or saline (n = 16/group). After 28 days of neurological follow-up, spinal cords were either subjected to biochemical measurement of oxidative stress and inflammatory markers or to histopathology and immunohistochemistry for NF-κB p65 and caspase-3 expression (n = 8/group). Agm was comparable to DL in facilitating recovery of neurological functions and in reducing inflammation (TNF-α/interleukin-6) and apoptosis, and was distinctive in combating oxidative stress. The neuroprotective effects of Agm were paralleled by inhibition of NF-κB p65 nuclear translocation. Combined pharmacological and surgical intervention proved superior in functional recovery. In conclusion, the present research suggests a new mechanism for Agm neuroprotection in rat SCCI through inhibition of NF-κB activation.

  19. arXiv Isothermal compressibility of hadronic matter formed in relativistic nuclear collisions

    CERN Document Server

    Mukherjee, Maitreyee; Chatterjee, Arghya; Chatterjee, Sandeep; Adhya, Souvik Priyam; Thakur, Sanchari; Nayak, Tapan K.

    We present the first estimates of the isothermal compressibility (κ_T) of hadronic matter formed in relativistic nuclear collisions (√s_NN = 7.7 GeV to 2.76 TeV) using experimentally observed quantities. κ_T is related to the fluctuations in particle multiplicity, temperature, and volume of the system formed in the collisions. Multiplicity fluctuations are obtained from the event-by-event distributions of charged-particle multiplicities in narrow centrality bins. The dynamical components of the fluctuations are extracted by removing the contributions to the fluctuations from the number of participating nucleons. From the available experimental data, a constant value of κ_T has been observed as a function of collision energy. The results are compared with calculations from the UrQMD, AMPT and EPOS event generators, and estimations of κ_T are made for Pb-Pb collisions at the CERN Large Hadron Collider. A hadron resonance gas (HRG) model has been used to calculate κ_T as a function of collision energy. Our results show a dec...
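
    The fluctuation relation underlying such estimates can be sketched in a few lines. The fragment below uses the textbook grand-canonical formula κ_T = V·Var(N)/(T·⟨N⟩²) in natural units; it omits the participant-fluctuation correction applied in the paper, and the temperature and volume inputs are illustrative assumptions.

    ```python
    import numpy as np

    def isothermal_compressibility(multiplicities, T_gev, V_fm3):
        """Estimate kappa_T from event-by-event multiplicities.

        Grand-canonical fluctuation relation (natural units):
            kappa_T = V * Var(N) / (T * <N>^2)   ->  fm^3 / GeV
        """
        n = np.asarray(multiplicities, dtype=float)
        mean, var = n.mean(), n.var(ddof=1)
        return V_fm3 * var / (T_gev * mean ** 2)

    # toy example: Poisson-like multiplicities in a narrow centrality bin
    rng = np.random.default_rng(0)
    events = rng.poisson(lam=200.0, size=100_000)
    print(isothermal_compressibility(events, T_gev=0.160, V_fm3=2000.0))
    ```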

  20. Extramedullary hematopoiesis presented as cytopenia and massive paraspinal masses leading to cord compression in a patient with hereditary persistence of fetal hemoglobin

    OpenAIRE

    Katchi, Tasleem; Kolandaivel, Krishna; Khattar, Pallavi; Farooq, Taliya; Islam, Humayun; Liu, Delong

    2016-01-01

    Background: Extramedullary hematopoiesis (EMH) can occur in various physiological and pathologic states. The spleen is the most common site of EMH. Case presentation: We report a case of hereditary persistence of fetal hemoglobin with extramedullary hematopoiesis presenting as cord compression and cytopenia secondary to multiple paraspinal masses. Conclusion: Treatment can be a challenge. Relapse is a possibility.

  1. Extramedullary hematopoiesis presented as cytopenia and massive paraspinal masses leading to cord compression in a patient with hereditary persistence of fetal hemoglobin.

    Science.gov (United States)

    Katchi, Tasleem; Kolandaivel, Krishna; Khattar, Pallavi; Farooq, Taliya; Islam, Humayun; Liu, Delong

    2016-01-01

    Extramedullary hematopoiesis (EMH) can occur in various physiological and pathologic states. The spleen is the most common site of EMH. We report a case of hereditary persistence of fetal hemoglobin with extramedullary hematopoiesis presenting as cord compression and cytopenia secondary to multiple paraspinal masses. Treatment can be a challenge. Relapse is a possibility.

  2. The effect of compressive stress on the Young's modulus of unirradiated and irradiated nuclear graphites

    International Nuclear Information System (INIS)

    Oku, T.; Usui, T.; Ero, M.; Fukuda, Y.

    1977-01-01

    The Young's moduli of unirradiated and high-temperature (800 to 1000 °C) irradiated graphites for HTGR were measured by the ultrasonic method in the direction of applied compressive stress during and after stressing. The Young's moduli of all the tested graphites decreased with increasing compressive stress both during and after stressing. In order to investigate the reason for the decrease in Young's modulus by applying compressive stress, the mercury pore diameter distributions of a part of the unirradiated and irradiated specimens were measured. The change in pore distribution is believed to be associated with structural changes produced by irradiation and compressive stressing. The residual strain, after removing the compressive stress, showed a good correlation with the decrease in Young's modulus caused by the compressive stress. The decrease in Young's modulus by applying compressive stress was considered to be due to the increase in the mobile dislocation density and the growth or formation of cracks. The results suggest, however, that the mechanism giving the larger contribution depends on the brand of graphite, and in anisotropic graphite it depends on the direction of applied stress and the irradiation conditions. (author)

  3. Massive Gravity

    OpenAIRE

    de Rham, Claudia

    2014-01-01

    We review recent progress in massive gravity. We start by showing how different theories of massive gravity emerge from a higher-dimensional theory of general relativity, leading to the Dvali–Gabadadze–Porrati model (DGP), cascading gravity, and ghost-free massive gravity. We then explore their theoretical and phenomenological consistency, proving the absence of Boulware–Deser ghosts and reviewing the Vainshtein mechanism and the cosmological solutions in these models. Finally, we present alt...

  4. Massive branes

    International Nuclear Information System (INIS)

    Bergshoeff, E.; Ortin, T.

    1998-01-01

    We investigate the effective world-volume theories of branes in a background given by (the bosonic sector of) 10-dimensional massive IIA supergravity ("massive branes") and their M-theoretic origin. In the case of the solitonic 5-brane of type IIA superstring theory the construction of the Wess-Zumino term in the world-volume action requires a dualization of the massive Neveu-Schwarz/Neveu-Schwarz target-space 2-form field. We find that, in general, the effective world-volume theory of massive branes contains new world-volume fields that are absent in the massless case, i.e. when the mass parameter m of massive IIA supergravity is set to zero. We show how these new world-volume fields can be introduced in a systematic way. (orig.)

  5. Method and device for the powerful compression of laser-produced plasmas for nuclear fusion

    International Nuclear Information System (INIS)

    Hora, H.

    1975-01-01

    According to the invention, more than 10% of the laser energy is converted into mechanical energy of compression, in that the compression is produced by nonlinear excessive radiation pressure. The temporal and local spectral and intensity distribution of the laser pulse must be controlled. The focussed laser beams must rise to over 10^15 W/cm^2 in less than 10^-9 seconds, and the time variation of the intensities must be arranged so that the dynamic absorption of the outer plasma corona by rippling consumes less than 90% of the laser energy. (GG) [de]

  6. Direct photons as a potential probe for the Δ-resonance in compressed nuclear matter

    International Nuclear Information System (INIS)

    Simon, R.S.

    1994-04-01

    Pions are trapped in the compressed hadronic matter formed in relativistic heavy-ion collisions for time periods of about 15 fm/c. Such time scales are long compared to the lifetime of the Δ-resonance and result in an enhancement of the Δ/π⁰ γ-ratio over its free value. Simulations of the acceptance of the photon spectrometer TAPS indicate that the photon signal from the Δ-resonance might be observable. (orig.)

  7. Massive target nuclei as disc-shaped slabs and spherical objects of intranuclear matter in high-energy nuclear collisions

    International Nuclear Information System (INIS)

    Zewislawski, Z.; Strugalski, Z.; Mausa, M.

    1990-01-01

    It has been found experimentally that a definite number of emitted nucleons corresponds to a definite impact parameter in hadron-nucleus collisions. This finding allows one to treat the massive target nucleus as a piece of intranuclear matter of definite thickness, and to treat a large sample of collisions of monoenergetic identical hadrons with the nucleus as a collection of interactions of a homogeneous hadron beam with disc-shaped slabs of intranuclear matter of definite thicknesses. 17 refs.; 1 fig

  8. An Experimental Investigation On Minimum Compressive Strength Of Early Age Concrete To Prevent Frost Damage For Nuclear Power Plant Structures In Cold Climates

    International Nuclear Information System (INIS)

    Koh, Kyungtaek; Kim, Dogyeum; Park, Chunjin; Ryu, Gumsung; Park, Jungjun; Lee, Janghwa

    2013-01-01

    Concrete undergoing early frost damage in cold weather experiences significant loss not only of strength, but also of impermeability and durability. Accordingly, concrete codes like ACI-306R prescribe a minimum compressive strength and duration of curing to prevent frost damage at an early age and secure the quality of concrete. Such minimum compressive strength and curing duration are mostly defined based on the strength development of concrete. However, concrete subjected to frost damage at an early age may not show a consistent relationship between its strength and durability. Since durability is of utmost importance in nuclear power plant structures, this relationship needs to be clarified. Therefore, this study examines the adequacy of the minimum compressive strength specified in codes like ACI-306R by evaluating both the strength development and the durability of early-age concrete for nuclear power plants with respect to frost damage. The results indicate that the value of 5 MPa specified by concrete standards like ACI-306R as the minimum compressive strength to prevent early frost damage is reasonable in terms of strength development, but appears inadequate from the viewpoint of resistance to chloride-ion penetration and freeze-thaw. Consequently, it is recommended that a minimum compressive strength for preventing early frost damage be defined in terms of not only strength development but also durability, to secure the quality of concrete for nuclear power plants in cold climates.

  9. An Experimental Investigation On Minimum Compressive Strength Of Early Age Concrete To Prevent Frost Damage For Nuclear Power Plant Structures In Cold Climates

    Energy Technology Data Exchange (ETDEWEB)

    Koh, Kyungtaek; Kim, Dogyeum; Park, Chunjin; Ryu, Gumsung; Park, Jungjun; Lee, Janghwa [Korea Institute Construction Technology, Goyang (Korea, Republic of)

    2013-06-15

    Concrete undergoing early frost damage in cold weather experiences significant loss not only of strength, but also of impermeability and durability. Accordingly, concrete codes like ACI-306R prescribe a minimum compressive strength and duration of curing to prevent frost damage at an early age and secure the quality of concrete. Such minimum compressive strength and curing duration are mostly defined based on the strength development of concrete. However, concrete subjected to frost damage at an early age may not show a consistent relationship between its strength and durability. Since durability is of utmost importance in nuclear power plant structures, this relationship needs to be clarified. Therefore, this study examines the adequacy of the minimum compressive strength specified in codes like ACI-306R by evaluating both the strength development and the durability of early-age concrete for nuclear power plants with respect to frost damage. The results indicate that the value of 5 MPa specified by concrete standards like ACI-306R as the minimum compressive strength to prevent early frost damage is reasonable in terms of strength development, but appears inadequate from the viewpoint of resistance to chloride-ion penetration and freeze-thaw. Consequently, it is recommended that a minimum compressive strength for preventing early frost damage be defined in terms of not only strength development but also durability, to secure the quality of concrete for nuclear power plants in cold climates.

  10. The life and death of massive stars revealed by the observation of nuclear gamma-ray lines with the Integral/SPI spectrometer

    International Nuclear Information System (INIS)

    Martin, P.

    2008-11-01

    The aim of this research thesis is to provide observational constraints on the mechanisms that govern the life and death of massive stars, i.e. stars with an initial mass greater than eight solar masses and smaller than 120 to 150 solar masses. It thus aims at detecting the remnants of recent, nearby supernovae in order to trace the dynamics of their first instants. The author explores the radiation of three radio-isotopes accessible to nuclear gamma-ray astronomy (44Ti, 60Fe, 26Al) using observations performed with the high-resolution gamma-ray spectrometer SPI on the INTEGRAL international observatory. After an overview of present knowledge on the massive-star explosion mechanism, the author presents the specificities and potential of the investigated radio-isotopes. He describes the data-treatment methods and a population-synthesis programme for predicting the decay gamma-ray lines, and then reports his work on the inner dynamics of the Cassiopeia A explosion, the stellar activity of the galaxy as revealed by radio-isotope observations, and the nucleosynthetic activity of the Cygnus region.

  11. Progress in the Development of Compressible, Multiphase Flow Modeling Capability for Nuclear Reactor Flow Applications

    Energy Technology Data Exchange (ETDEWEB)

    R. A. Berry; R. Saurel; F. Petitpas; E. Daniel; O. Le Metayer; S. Gavrilyuk; N. Dovetta

    2008-10-01

    In nuclear reactor safety and optimization there are key issues that rely on an in-depth understanding of basic two-phase flow phenomena with heat and mass transfer. Within the context of multiphase flows, two bubble-dynamic phenomena – boiling (heterogeneous) and flashing or cavitation (homogeneous boiling), with bubble collapse – are technologically very important to nuclear reactor systems. The main difference between boiling and flashing is that bubble growth (and collapse) in boiling is inhibited by limitations on the heat transfer at the interface, whereas bubble growth (and collapse) in flashing is limited primarily by inertial effects in the surrounding liquid. The flashing process tends to be far more explosive (and implosive), and is more violent and damaging (at least in the near term) than the bubble dynamics of boiling. However, other problematic phenomena, such as crud deposition, appear to be intimately connected with the boiling process. In reality, these two processes share many details.

  12. Parallel inversion of a massive ERT data set to characterize deep vadose zone contamination beneath former nuclear waste infiltration galleries at the Hanford Site B-Complex (Invited)

    Science.gov (United States)

    Johnson, T.; Rucker, D. F.; Wellman, D.

    2013-12-01

    The Hanford Site, located in south-central Washington, USA, originated in the early 1940's as part of the Manhattan Project and produced plutonium used to build the United States nuclear weapons stockpile. In accordance with accepted industrial practice of that time, a substantial portion of relatively low-activity liquid radioactive waste was disposed of by direct discharge to either surface soil or into near-surface infiltration galleries such as cribs and trenches. This practice was supported by early investigations beginning in the 1940s, including studies by Geological Survey (USGS) experts, whose investigations found vadose zone soils at the site suitable for retaining radionuclides to the extent necessary to protect workers and members of the general public based on the standards of that time. That general disposal practice has long since been discontinued, and the US Department of Energy (USDOE) is now investigating residual contamination at former infiltration galleries as part of its overall environmental management and remediation program. Most of the liquid wastes released into the subsurface were highly ionic and electrically conductive, and therefore present an excellent target for imaging by Electrical Resistivity Tomography (ERT) within the low-conductivity sands and gravels comprising Hanford's vadose zone. In 2006, USDOE commissioned a large scale surface ERT survey to characterize vadose zone contamination beneath the Hanford Site B-Complex, which contained 8 infiltration trenches, 12 cribs, and one tile field. The ERT data were collected in a pole-pole configuration with 18 north-south trending lines, and 18 east-west trending lines ranging from 417m to 816m in length. The final data set consisted of 208,411 measurements collected on 4859 electrodes, covering an area of 600m x 600m. Given the computational demands of inverting this massive data set as a whole, the data were initially inverted in parts with a shared memory inversion code, which

  13. SeqCompress: an algorithm for biological sequence compression.

    Science.gov (United States)

    Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan

    2014-10-01

    The growth of Next Generation Sequencing technologies presents significant research challenges, specifically to design bioinformatics tools that handle massive amounts of data efficiently. The cost of storing biological sequence data has become a noticeable proportion of the total cost of generation and analysis. In particular, the increase in the DNA sequencing rate is significantly outstripping the rate of increase in disk storage capacity, and may go beyond the limit of storage capacity. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm achieves better compression gain than existing algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.
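
    Since an arithmetic coder compresses close to the entropy of the statistical model driving it, a quick way to gauge the available gain is to estimate that bound directly. The sketch below (not the SeqCompress implementation itself) compares order-0 and order-2 context models on a toy sequence; the test data and model orders are illustrative.

    ```python
    from collections import Counter
    from math import log2

    # Empirical entropy of a context model = the (approximate) lower bound
    # that an arithmetic coder driven by that model can reach, in bits/base.

    def entropy_bits_per_base(seq, order=0):
        ctx_counts, pair_counts = Counter(), Counter()
        for i in range(order, len(seq)):
            ctx = seq[i - order:i]          # preceding `order` bases
            ctx_counts[ctx] += 1
            pair_counts[(ctx, seq[i])] += 1
        total = sum(pair_counts.values())
        bits = 0.0
        for (ctx, _), n in pair_counts.items():
            bits -= n * log2(n / ctx_counts[ctx])
        return bits / total

    dna = "ACGT" * 1000 + "AAAACCCCGGGGTTTT" * 250
    print(entropy_bits_per_base(dna, order=0))  # close to 2 bits/base
    print(entropy_bits_per_base(dna, order=2))  # lower: context helps
    ```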

  14. Massively Parallel QCD

    International Nuclear Information System (INIS)

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampap, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

    2007-01-01

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results

  15. DNABIT Compress - Genome compression algorithm.

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, DNABIT Compress, for DNA sequences, based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that DNABIT Compress is the best among the existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new algorithm also significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base.
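
    The baseline that any bit-code scheme of this kind starts from is the fixed 2-bit-per-base encoding, sketched below; DNABIT Compress improves on this baseline with variable-length codes for exact and reverse repeats, which this fragment does not attempt.

    ```python
    # Plain ACGT text stores each base in 8 bits; a fixed 2-bit code per
    # base already yields 4x compression (the 2 bits/base baseline that
    # repeat-aware schemes like DNABIT Compress push below).

    CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
    BASE = {v: k for k, v in CODE.items()}

    def pack(seq):
        out, acc, nbits = bytearray(), 0, 0
        for ch in seq:
            acc = (acc << 2) | CODE[ch]
            nbits += 2
            if nbits == 8:
                out.append(acc)
                acc, nbits = 0, 0
        if nbits:                       # pad the final partial byte
            out.append(acc << (8 - nbits))
        return bytes(out), len(seq)

    def unpack(packed, n):
        seq = []
        for byte in packed:
            for shift in (6, 4, 2, 0):
                if len(seq) < n:
                    seq.append(BASE[(byte >> shift) & 0b11])
        return "".join(seq)

    packed, n = pack("ACGTACGTTT")
    assert unpack(packed, n) == "ACGTACGTTT"
    print(f"{n} bases -> {len(packed)} bytes ({8 * len(packed) / n:.1f} bits/base)")
    ```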

  16. Compression stockings

    Science.gov (United States)

    Call your health insurance or prescription plan: Find out if they pay for compression stockings. Ask if your durable medical equipment benefit pays for compression stockings. Get a prescription from your doctor. Find a medical equipment store where they can ...

  17. The nuclear equation of state

    International Nuclear Information System (INIS)

    Kahana, S.

    1986-01-01

    The role of the nuclear equation of state in determining the fate of the collapsing cores of massive stars is examined in light of both recent theoretical advances in this subject and recent experimental measurements with relativistic heavy ions. The difficulties existing in attempts to bring the softer nuclear matter apparently required by the theory of Type II supernovae into consonance with the heavy ion data are discussed. Relativistic mean field theory is introduced as a candidate for derivation of the equation of state, and a simple form for the saturation compressibility is obtained. 28 refs., 4 figs., 1 tab
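
    For reference, the saturation compressibility mentioned here is conventionally defined from the curvature of the energy per nucleon E/A at the saturation density ρ₀ of nuclear matter; a standard textbook form (not necessarily the exact expression derived in this work) is:

    ```latex
    \[
      K_\infty \;=\; 9\,\rho_0^{2}
      \left.\frac{\mathrm{d}^{2}\,(E/A)}{\mathrm{d}\rho^{2}}\right|_{\rho=\rho_0}
    \]
    ```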

  18. The nuclear equation of state

    Energy Technology Data Exchange (ETDEWEB)

    Kahana, S.

    1986-01-01

    The role of the nuclear equation of state in determining the fate of the collapsing cores of massive stars is examined in light of both recent theoretical advances in this subject and recent experimental measurements with relativistic heavy ions. The difficulties existing in attempts to bring the softer nuclear matter apparently required by the theory of Type II supernovae into consonance with the heavy ion data are discussed. Relativistic mean field theory is introduced as a candidate for derivation of the equation of state, and a simple form for the saturation compressibility is obtained. 28 refs., 4 figs., 1 tab.

  19. The SINS/zC-SINF survey of z ∼ 2 galaxy kinematics: Evidence for powerful active galactic nucleus-driven nuclear outflows in massive star-forming galaxies

    Energy Technology Data Exchange (ETDEWEB)

    Förster Schreiber, N. M.; Genzel, R.; Kurk, J. D.; Lutz, D.; Tacconi, L. J.; Wuyts, S.; Bandara, K.; Buschkamp, P.; Davies, R.; Eisenhauer, F.; Lang, P. [Max-Planck-Institut für Extraterrestrische Physik, Giessenbachstrasse, D-85748 Garching (Germany); Newman, S. F. [Department of Astronomy, Hearst Field Annex, University of California, Berkeley, CA 94720 (United States); Burkert, A. [Universitäts-Sternwarte, Ludwig-Maximilians-Universität, Scheinerstrasse 1, D-81679 München (Germany); Carollo, C. M.; Lilly, S. J. [Institute for Astronomy, Department of Physics, Eidgenössische Technische Hochschule, 8093-CH Zürich (Switzerland); Cresci, G. [Istituto Nazionale di Astrofisica—Osservatorio Astronomico di Bologna, Via Ranzani 1, I-40127 Bologna (Italy); Daddi, E. [CEA Saclay, DSM/IRFU/SAp, F-91191 Gif-sur-Yvette (France); Hicks, E. K. S. [Department of Astronomy, University of Washington, P.O. Box 351580, Seattle, WA 98195-1580 (United States); Mainieri, V. [European Southern Observatory, Karl-Schwarzschild-Strasse 2, D-85748 Garching (Germany); Mancini, C. [Istituto Nazionale di Astrofisica—Osservatorio Astronomico di Padova, Vicolo dell' Osservatorio 5, I-35122 Padova (Italy); and others

    2014-05-20

    We report the detection of ubiquitous powerful nuclear outflows in massive (≥10^11 M☉) z ∼ 2 star-forming galaxies (SFGs), which are plausibly driven by an active galactic nucleus (AGN). The sample consists of the eight most massive SFGs from our SINS/zC-SINF survey of galaxy kinematics with the imaging spectrometer SINFONI, six of which have sensitive high-resolution adaptive-optics-assisted observations. All of the objects are disks hosting a significant stellar bulge. The spectra in their central regions exhibit a broad component in Hα and forbidden [N II] and [S II] line emission, with typical velocity FWHM ∼ 1500 km s^-1, [N II]/Hα ratio ≈ 0.6, and intrinsic extent of 2-3 kpc. These properties are consistent with warm ionized gas outflows associated with Type 2 AGN, the presence of which is confirmed via independent diagnostics in half the galaxies. The data imply a median ionized-gas mass outflow rate of ∼60 M☉ yr^-1 and mass loading of ∼3. At larger radii, a weaker broad component is detected but with lower FWHM ∼ 485 km s^-1 and [N II]/Hα ≈ 0.35, characteristic of star-formation-driven outflows as found in the lower-mass SINS/zC-SINF galaxies. The high inferred mass outflow rates and frequent occurrence suggest that the nuclear outflows efficiently expel gas out of the centers of the galaxies with high duty cycles and may thus contribute to the process of star-formation quenching in massive galaxies. Larger samples at high masses will be crucial in confirming the importance and energetics of the nuclear outflow phenomenon and its connection to AGN activity and bulge growth.

  20. New massive gravity

    NARCIS (Netherlands)

    Bergshoeff, Eric A.; Hohm, Olaf; Townsend, Paul K.

    2012-01-01

    We present a brief review of New Massive Gravity, which is a unitary theory of massive gravitons in three dimensions obtained by considering a particular combination of the Einstein-Hilbert and curvature squared terms.

  1. Massive hydraulic fracturing gas stimulation project

    International Nuclear Information System (INIS)

    Appledorn, C.R.; Mann, R.L.

    1977-01-01

    The Rio Blanco Massive Hydraulic Fracturing Project was fielded in 1974 as a joint industry/ERDA demonstration to test, by massive hydraulic fracturing, the same formations that were stimulated by the Rio Blanco nuclear fracturing experiment. The project is a companion effort to, and a continuation of, the preceding nuclear stimulation project, which took place in May 1973. 8 figures

  2. Systematic study of the giant monopolar resonance via inelastic scattering of 108.5 MeV 3He. Measurement of the nuclear compressibility

    International Nuclear Information System (INIS)

    Lebrun, Didier.

    1981-09-01

    The giant monopole resonance has been studied via inelastic scattering of 108.5 MeV 3He at very small angles (including 0°) on approximately 50 nuclei. Its angular distribution reaches its maximum in this region, which allows a clear separation from the GQR. DWBA analysis shows a smooth increase of the strength, from a few per cent of the sum rule in light nuclei up to 100% in heavier ones. The excitation-energy analysis shows a crossing of the monopole and quadrupole frequencies in the A = 40-50 region, a coupling effect between the two modes in deformed nuclei, and an asymmetry effect in several series of isotopes. The compressibility moduli of nuclear matter K_∞, surface K_s and asymmetry K_τ have been extracted, as well as the Landau parameter F_0 at saturation. [fr]
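
    The moduli named here are conventionally obtained by fitting giant-monopole-resonance energies with a leptodermous expansion of the finite-nucleus compressibility K_A; a textbook form of the expansion and of the associated GMR energy relation (the exact parametrization used in this work may differ) is:

    ```latex
    \[
      K_A \;\simeq\; K_\infty \;+\; K_s\,A^{-1/3}
      \;+\; K_\tau \left(\frac{N-Z}{A}\right)^{2}
      \;+\; K_{\mathrm{Coul}}\,\frac{Z^{2}}{A^{4/3}},
      \qquad
      E_{\mathrm{GMR}} \;=\; \hbar\,\sqrt{\frac{K_A}{m\,\langle r^{2}\rangle}}
    \]
    ```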

  3. Massive Conformal Gravity

    International Nuclear Information System (INIS)

    Faria, F. F.

    2014-01-01

    We construct a massive theory of gravity that is invariant under conformal transformations. The massive action of the theory depends on the metric tensor and a scalar field, which are considered the only field variables. We find the vacuum field equations of the theory and analyze its weak-field approximation and Newtonian limit.

  4. Intelligent transportation systems data compression using wavelet decomposition technique.

    Science.gov (United States)

    2009-12-01

    Intelligent Transportation Systems (ITS) generate massive amounts of traffic data, which poses challenges for data storage, transmission and retrieval. Data compression and reconstruction techniques play an important role in ITS data processing....
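
    A minimal sketch of the decompose-threshold-reconstruct cycle behind such wavelet compression, using the PyWavelets package (assumed available); the wavelet, level and keep-ratio are illustrative choices, not values from the report.

    ```python
    import numpy as np
    import pywt

    def compress(signal, wavelet="db4", level=4, keep=0.10):
        # decompose, then zero all but the largest coefficients
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        flat = np.concatenate([np.abs(c) for c in coeffs])
        thresh = np.quantile(flat, 1.0 - keep)   # keep top 10% by magnitude
        return [pywt.threshold(c, thresh, mode="hard") for c in coeffs]

    def reconstruct(coeffs, wavelet="db4"):
        return pywt.waverec(coeffs, wavelet)

    # synthetic traffic-volume series: a week at 5-minute intervals
    t = np.linspace(0, 7, 7 * 288)
    volume = (400 + 300 * np.sin(2 * np.pi * t)
              + 20 * np.random.default_rng(1).normal(size=t.size))
    approx = reconstruct(compress(volume))[: volume.size]
    rmse = np.sqrt(np.mean((volume - approx) ** 2))
    print(f"RMSE after keeping 10% of coefficients: {rmse:.1f} vehicles")
    ```

    Storing only the surviving coefficients (plus their positions) is what yields the compression; the reconstruction error is controlled by the keep-ratio.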

  5. Topological massive sigma models

    International Nuclear Information System (INIS)

    Lambert, N.D.

    1995-01-01

    In this paper we construct topological sigma models which include a potential and are related to twisted massive supersymmetric sigma models. Contrary to a previous construction these models have no central charge and do not require the manifold to admit a Killing vector. We use the topological massive sigma model constructed here to simplify the calculation of the observables. Lastly it is noted that this model can be viewed as interpolating between topological massless sigma models and topological Landau-Ginzburg models. ((orig.))

  6. Massive neutrinos in astrophysics

    International Nuclear Information System (INIS)

    Qadir, A.

    1982-08-01

    Massive neutrinos are among the big hopes of cosmologists. If they happen to have the right mass they can close the Universe, explain the motion of galaxies in clusters, provide galactic halos and even, possibly, explain galaxy formation. Tremaine and Gunn have argued that massive neutrinos cannot do all these things. I will explain, here, what some of us believe is wrong with their arguments. (author)

  7. Massive graviton geons

    Science.gov (United States)

    Aoki, Katsuki; Maeda, Kei-ichi; Misonoh, Yosuke; Okawa, Hirotada

    2018-02-01

    We find vacuum solutions such that massive gravitons are confined in a local spacetime region by their gravitational energy in asymptotically flat spacetimes in the context of the bigravity theory. We call such self-gravitating objects massive graviton geons. The basic equations can be reduced to the Schrödinger-Poisson equations with the tensor "wave function" in the Newtonian limit. We obtain a nonspherically symmetric solution with j = 2, ℓ = 0 as well as a spherically symmetric solution with j = 0, ℓ = 2 in this system, where j is the total angular momentum quantum number and ℓ is the orbital angular momentum quantum number, respectively. The energy eigenvalue of the Schrödinger equation in the nonspherical solution is smaller than that in the spherical solution. We then study the perturbative stability of the spherical solution and find that there is an unstable mode in the quadrupole-mode perturbations, which may be interpreted as the transition mode to the nonspherical solution. The results suggest that the nonspherically symmetric solution is the ground state of the massive graviton geon. The massive graviton geons may decay in time due to emissions of gravitational waves, but this timescale can be quite long when the massive gravitons are nonrelativistic, and then the geons can be long-lived. We also argue possible prospects of the massive graviton geons: applications to the ultralight dark matter scenario, nonlinear (in)stability of the Minkowski spacetime, and a quantum transition of the spacetime.

  8. Simulation analysis of the possibility of introducing massive renewable energy and nuclear fuel cycle in the scenario to halve global CO2 emissions by the year 2050

    International Nuclear Information System (INIS)

    Hosoya, Yoshifumi; Komiyama, Ryoichi; Fujii, Yasumasa

    2011-01-01

    There is growing attention to the regulation of greenhouse-gas (GHG) emissions to mitigate global warming. Hence, the target of a 50% reduction of global GHG emissions by the year 2050 is investigated in this paper. The authors have been revising the regionally disaggregated world energy model, which is formulated as a large-scale linear optimization model, with respect to nuclear and photovoltaic power generation technologies. This paper explains the structure of the revised world energy model, which takes into account the intermittent character of photovoltaic power generation arising from changes in weather conditions. It also presents simulation results for halving global CO2 emissions by the year 2050 and evaluates long-term technological options such as the nuclear fuel cycle and renewable energies. Finally, the authors discuss the next steps for an extensive revision of the energy model. (author)

  9. The formation of massive molecular filaments and massive stars triggered by a magnetohydrodynamic shock wave

    Science.gov (United States)

    Inoue, Tsuyoshi; Hennebelle, Patrick; Fukui, Yasuo; Matsumoto, Tomoaki; Iwasaki, Kazunari; Inutsuka, Shu-ichiro

    2018-05-01

    Recent observations suggest that an intensive molecular cloud collision can trigger massive star/cluster formation. The most important physical process caused by the collision is shock compression. In this paper, the influence of a shock wave on the evolution of a molecular cloud is studied numerically by using isothermal magnetohydrodynamics simulations including the effect of self-gravity. Adaptive mesh refinement and sink particle techniques are used to follow the long-time evolution of the shocked cloud. We find that the shock compression of a turbulent inhomogeneous molecular cloud creates massive filaments, which lie perpendicular to the background magnetic field, as we have pointed out in a previous paper. The massive filament shows global collapse along its length, which feeds a sink particle located at the collapse center. We observe a high accretion rate Ṁ_acc > 10^-4 M☉ yr^-1, high enough to allow the formation of even O-type stars. The most massive sink particle reaches M > 50 M☉ within a few times 10^5 yr after the onset of the filament collapse.

  10. Wellhead compression

    Energy Technology Data Exchange (ETDEWEB)

    Harrington, Joe [Sertco Industries, Inc., Okemah, OK (United States); Vazquez, Daniel [Hoerbiger Service Latin America Inc., Deerfield Beach, FL (United States); Jacobs, Denis Richard [Hoerbiger do Brasil Industria de Equipamentos, Cajamar, SP (Brazil)

    2012-07-01

    Over time, all wells experience a natural decline in oil and gas production. In gas wells, the major problems are liquid loading and low downhole differential pressures, which negatively impact total gas production. As a form of artificial lift, wellhead compressors help reduce the tubing pressure, resulting in gas velocities above the critical velocity needed to surface water, oil and condensate, thereby regaining lost production and increasing recoverable reserves. Best results come from reservoirs with high porosity, high permeability, high initial flow rates, low decline rates and high total cumulative production. In oil wells, excessive annulus gas pressure tends to inhibit both oil and gas production. Wellhead compression packages can provide a cost-effective solution to these problems by reducing the system pressure in the tubing or annulus, allowing for an immediate increase in production rates. Wells furthest from the gathering compressor typically benefit the most from wellhead compression due to system pressure drops. Downstream compressors also benefit from higher suction pressures, reducing overall compression horsepower requirements. Special care must be taken in selecting the best equipment for these applications. The successful implementation of wellhead compression from an economical standpoint hinges on the testing, installation and operation of the equipment. Key challenges, suggested equipment features designed to combat those challenges, and successful case histories throughout Latin America are discussed below. (author)
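
    The critical-velocity criterion mentioned above is commonly estimated with a Turner-style droplet model; the sketch below uses that textbook form with assumed fluid properties, which are not taken from the article.

    ```python
    # Turner-style critical (liquid unloading) velocity: the minimum gas
    # velocity that carries liquid droplets upward. All property values
    # below are illustrative assumptions.

    def critical_velocity(sigma, rho_liq, rho_gas, c=1.92):
        """sigma [N/m], densities [kg/m^3] -> velocity [m/s]."""
        g = 9.81
        return c * (g * sigma * (rho_liq - rho_gas) / rho_gas ** 2) ** 0.25

    # water droplets in natural gas at two flowing pressures (assumed densities)
    v_hi_p = critical_velocity(sigma=0.06, rho_liq=1000.0, rho_gas=30.0)
    v_lo_p = critical_velocity(sigma=0.06, rho_liq=1000.0, rho_gas=15.0)

    # Lowering the tubing pressure raises the critical velocity only as
    # ~rho_gas^-1/2, while the in-situ gas velocity for a fixed mass rate
    # rises as ~1/rho_gas, so compression improves liquid unloading overall.
    print(f"critical velocity: {v_hi_p:.2f} m/s (high p) vs {v_lo_p:.2f} m/s (low p)")
    ```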

  11. Redesign of a pilot international online course on accelerator driven systems for nuclear transmutation to implement a massive open online course

    Energy Technology Data Exchange (ETDEWEB)

    Alonso-Ramos, M.; Fernandez-Luna, A. J.; Gonzalez-Romero, E. M.; Sanchez-Elvira, A.; Castro, M.; Ogando, F.; Sanz, J.; Martin, S.

    2014-07-01

    In April 2013, a full-distance international pilot course on ADS (Accelerator Driven Systems) for advanced nuclear waste transmutation was taught by UNED-CIEMAT within the FP7 ENEN-III project. The experience ran with 10 trainees from the project, using the UNED virtual learning platform aLF. Video classes, web conferences and recorded simulations of case studies were the main learning materials. Asynchronous and synchronous communication tools were used for tutoring purposes, and a final examination for online submission and a final survey were included. (Author)

  12. Redesign of a pilot international online course on accelerator driven systems for nuclear transmutation to implement a massive open online course

    International Nuclear Information System (INIS)

    Alonso-Ramos, M.; Fernandez-Luna, A. J.; Gonzalez-Romero, E. M.; Sanchez-Elvira, A.; Castro, M.; Ogando, F.; Sanz, J.; Martin, S.

    2014-01-01

    In April 2013, a full-distance international pilot course on ADS (Accelerator Driven Systems) for advanced nuclear waste transmutation was taught by UNED-CIEMAT within the FP7 ENEN-III project. The experience ran with 10 trainees from the project, using the UNED virtual learning platform aLF. Video classes, web conferences and recorded simulations of case studies were the main learning materials. Asynchronous and synchronous communication tools were used for tutoring purposes, and a final examination for online submission and a final survey were included. (Author)

  13. The evolution of massive stars

    International Nuclear Information System (INIS)

    Loore, C. de

    1980-01-01

    The evolution of stars with masses between 15 M☉ and 100 M☉ is considered. Stars in this mass range lose a considerable fraction of their matter during their evolution. The treatment of convection and semi-convection and the influence of mass loss by stellar winds at different evolutionary phases are analysed, as well as the adopted opacities. Evolutionary sequences computed by various groups are examined and compared with observations, and the advanced evolution of a 15 M☉ and a 25 M☉ star from the zero-age main sequence (ZAMS) through iron collapse is discussed. The effect of centrifugal forces on stellar-wind mass loss and the influence of rotation on evolutionary models are examined. As a consequence of the outflow of matter, deeper layers show up, and when the mass-loss rates are large enough, layers with changed composition, due to interior nuclear reactions, appear on the surface. The evolution of massive close binaries, both during the phase of mass loss by stellar wind and during the phase of mass exchange and mass loss due to Roche-lobe overflow, is treated in detail, and the values of the parameters governing mass and angular-momentum losses are discussed. The problem of the Wolf-Rayet stars, their origin and the possibilities of their production, either as single stars or in massive binaries, is examined. Finally, the origin of X-ray binaries is discussed, and the scenario for the formation of these objects (starting from massive ZAMS close binaries, through Wolf-Rayet binaries, leading to OB stars with a compact companion after a supernova explosion) is reviewed and completed, including stellar-wind mass loss. (orig.)

  14. Epidemiology of Massive Transfusion

    DEFF Research Database (Denmark)

    Halmin, Märit; Chiesa, Flaminia; Vasan, Senthil K

    2016-01-01

    in Sweden from 1987 and in Denmark from 1996. A total of 92,057 patients were included. Patients were followed until the end of 2012. MEASUREMENTS AND MAIN RESULTS: Descriptive statistics were used to characterize the patients and indications. Post transfusion mortality was expressed as crude 30-day...... mortality and as long-term mortality using the Kaplan-Meier method and using standardized mortality ratios. The incidence of massive transfusion was higher in Denmark (4.5 per 10,000) than in Sweden (2.5 per 10,000). The most common indication for massive transfusion was major surgery (61.2%) followed...

  15. Topologically massive supergravity

    Directory of Open Access Journals (Sweden)

    S. Deser

    1983-01-01

    Full Text Available The locally supersymmetric extension of three-dimensional topologically massive gravity is constructed. Its fermionic part is the sum of the (dynamically trivial) Rarita-Schwinger action and a gauge-invariant topological term, of second derivative order, analogous to the gravitational one. It is ghost-free and represents a single massive spin-3/2 excitation. The fermion-gravity coupling is minimal and the invariance is under the usual supergravity transformations. The system's energy, as well as that of the original topological gravity, is therefore positive.

  16. Epidemiology of massive transfusion

    DEFF Research Database (Denmark)

    Halmin, M A; Chiesa, F; Vasan, S K

    2015-01-01

    and to describe characteristics and mortality of massively transfused patients. Methods: We performed a retrospective cohort study based on the Scandinavian Donations and Transfusions (SCANDAT2) database, linking data on blood donation, blood components and transfused patients with inpatient- and population.......4% among women transfused for obstetrical bleeding. Mortality increased gradually with age and among all patients massively transfused at age 80 years, only 26% were alive after 5 years. The relative mortality, early after transfusion, was high and decreased with time since transfusion...

  17. Speech Compression

    Directory of Open Access Journals (Sweden)

    Jerry D. Gibson

    2016-06-01

    Full Text Available Speech compression is a key technology underlying digital cellular communications, VoIP, voicemail, and voice response systems. We trace the evolution of speech coding based on the linear prediction model, highlight the key milestones in speech coding, and outline the structures of the most important speech coding standards. Current challenges, future research directions, fundamental limits on performance, and the critical open problem of speech coding for emergency first responders are all discussed.
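    The linear prediction model that underlies these coders is easy to demonstrate in miniature. The sketch below is only an illustration of the principle, not any of the standardized codecs the record mentions: it estimates LPC coefficients for one frame by the autocorrelation method and forms the prediction residual that a predictive coder would quantize. The frame length, model order, and toy signal are assumptions.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_coefficients(frame, order=8):
    """Estimate linear-prediction coefficients for one frame via the
    autocorrelation method (Yule-Walker normal equations)."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    # Solve the Toeplitz system R a = r[1:order+1] for the predictor a.
    return solve_toeplitz(r[:order], r[1:order + 1])

# Toy usage on a synthetic voiced-like frame (assumed 240 samples).
x = np.sin(2 * np.pi * 0.05 * np.arange(240)) * np.hamming(240)
a = lpc_coefficients(x)
# One-step prediction x[n] ~ sum_k a[k] * x[n-1-k]; the residual is
# what a predictive speech coder actually quantizes and transmits.
pred = np.convolve(x, np.concatenate(([0.0], a)))[:len(x)]
residual = x - pred
```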

  18. Radiology in massive hemoptysis

    International Nuclear Information System (INIS)

    Marini, M.; Castro, J.M.; Gayol, A.; Aguilera, C.; Blanco, M.; Beraza, A.; Torres, J.

    1995-01-01

    We have reviewed our experience in diseases involving massive hemoptysis, systematizing the most common causes, which include tuberculosis, bronchiectasis and cancer of the lung. Other less frequent causes, such as arteriovenous fistula, aspergilloma, aneurysm, etc., are also evaluated, and the most demonstrative images of each, produced by the most precise imaging methods for their assessment, are presented

  19. Massive Supergravity and Deconstruction

    CERN Document Server

    Gregoire, T; Shadmi, Y; Gregoire, Thomas; Schwartz, Matthew D; Shadmi, Yael

    2004-01-01

    We present a simple superfield Lagrangian for massive supergravity. It comprises the minimal supergravity Lagrangian with interactions as well as mass terms for the metric superfield and the chiral compensator. This is the natural generalization of the Fierz-Pauli Lagrangian for massive gravity which comprises mass terms for the metric and its trace. We show that the on-shell bosonic and fermionic fields are degenerate and have the appropriate spins: 2, 3/2, 3/2 and 1. We then study this interacting Lagrangian using goldstone superfields. We find that a chiral multiplet of goldstones gets a kinetic term through mixing, just as the scalar goldstone does in the non-supersymmetric case. This produces Planck scale (Mpl) interactions with matter and all the discontinuities and unitarity bounds associated with massive gravity. In particular, the scale of strong coupling is (Mpl m^4)^1/5, where m is the multiplet's mass. Next, we consider applications of massive supergravity to deconstruction. We estimate various qu...

  20. Update on massive transfusion.

    Science.gov (United States)

    Pham, H P; Shaz, B H

    2013-12-01

    Massive haemorrhage requires massive transfusion (MT) to maintain adequate circulation and haemostasis. For optimal management of massively bleeding patients, regardless of aetiology (trauma, obstetrical, surgical), effective preparation and communication between transfusion and other laboratory services and clinical teams are essential. A well-defined MT protocol is a valuable tool to delineate how blood products are ordered, prepared, and delivered; determine laboratory algorithms to use as transfusion guidelines; and outline duties and facilitate communication between involved personnel. In MT patients, it is crucial to practice damage control resuscitation and to administer blood products early in the resuscitation. Trauma patients are often admitted with early trauma-induced coagulopathy (ETIC), which is associated with mortality; the aetiology of ETIC is likely multifactorial. Current data support that trauma patients treated with higher ratios of plasma and platelet to red blood cell transfusions have improved outcomes, but further clinical investigation is needed. Additionally, tranexamic acid has been shown to decrease the mortality in trauma patients requiring MT. Greater use of cryoprecipitate or fibrinogen concentrate might be beneficial in MT patients from obstetrical causes. The risks and benefits for other therapies (prothrombin complex concentrate, recombinant activated factor VII, or whole blood) are not clearly defined in MT patients. Throughout the resuscitation, the patient should be closely monitored and both metabolic and coagulation abnormalities corrected. Further studies are needed to clarify the optimal ratios of blood products, treatment based on underlying clinical disorder, use of alternative therapies, and integration of laboratory testing results in the management of massively bleeding patients.

  1. Massive antenatal fetomaternal hemorrhage

    DEFF Research Database (Denmark)

    Dziegiel, Morten Hanefeld; Koldkjaer, Ole; Berkowicz, Adela

    2005-01-01

    Massive fetomaternal hemorrhage (FMH) can lead to life-threatening anemia. Quantification based on flow cytometry with anti-hemoglobin F (HbF) is applicable in all cases but underestimation of large fetal bleeds has been reported. A large FMH from an ABO-compatible fetus allows an estimation...

  2. COLA with massive neutrinos

    Energy Technology Data Exchange (ETDEWEB)

    Wright, Bill S.; Winther, Hans A.; Koyama, Kazuya, E-mail: bill.wright@port.ac.uk, E-mail: hans.winther@port.ac.uk, E-mail: kazuya.koyama@port.ac.uk [Institute of Cosmology and Gravitation, University of Portsmouth, Dennis Sciama Building, Burnaby Road, Portsmouth, Hampshire, PO1 3FX (United Kingdom)

    2017-10-01

    The effect of massive neutrinos on the growth of cold dark matter perturbations acts as a scale-dependent Newton's constant and leads to scale-dependent growth factors just as we often find in models of gravity beyond General Relativity. We show how to compute growth factors for ΛCDM and general modified gravity cosmologies combined with massive neutrinos in Lagrangian perturbation theory for use in COLA and extensions thereof. We implement this together with the grid-based massive neutrino method of Brandbyge and Hannestad in MG-PICOLA and compare COLA simulations to full N-body simulations of ΛCDM and f(R) gravity with massive neutrinos. Our implementation is computationally cheap if the underlying cosmology already has scale-dependent growth factors and it is shown to be able to produce results that match N-body to percent level accuracy for both the total and CDM matter power-spectra up to k ≲ 1 h/Mpc.

  3. Dynamic Relative Compression, Dynamic Partial Sums, and Substring Concatenation

    DEFF Research Database (Denmark)

    Bille, Philip; Christiansen, Anders Roy; Cording, Patrick Hagge

    2017-01-01

    Given a static reference string R and a source string S, a relative compression of S with respect to R is an encoding of S as a sequence of references to substrings of R. Relative compression schemes are a classic model of compression and have recently proved very successful for compressing highly-repetitive massive data sets such as genomes and web-data. We initiate the study of relative compression in a dynamic setting where the compressed source string S is subject to edit operations. The goal is to maintain the compressed representation compactly, while supporting edits and allowing efficient random access to the (uncompressed) source string. We present new data structures that achieve optimal time for updates and queries while using space linear in the size of the optimal relative compression, for nearly all combinations of parameters. We also present solutions for restricted and extended sets ...

  4. Dynamic Relative Compression, Dynamic Partial Sums, and Substring Concatenation

    DEFF Research Database (Denmark)

    Bille, Philip; Cording, Patrick Hagge; Gørtz, Inge Li

    2016-01-01

    Given a static reference string R and a source string S, a relative compression of S with respect to R is an encoding of S as a sequence of references to substrings of R. Relative compression schemes are a classic model of compression and have recently proved very successful for compressing highly-repetitive massive data sets such as genomes and web-data. We initiate the study of relative compression in a dynamic setting where the compressed source string S is subject to edit operations. The goal is to maintain the compressed representation compactly, while supporting edits and allowing efficient random access to the (uncompressed) source string. We present new data structures that achieve optimal time for updates and queries while using space linear in the size of the optimal relative compression, for nearly all combinations of parameters. We also present solutions for restricted and extended sets ...
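    The two records above describe the same scheme, and the encoding itself is simple to sketch. The toy functions below (hypothetical names, assuming nothing beyond the abstract's definition) greedily encode S as (position, length) references into R with literal fallbacks; real schemes replace the quadratic match search with suffix structures and support the dynamic updates the papers study.

```python
def relative_compress(R, S):
    """Greedily encode S as (position, length) references to substrings
    of R, with literal fallbacks for characters absent from R."""
    refs, i = [], 0
    while i < len(S):
        best_pos, best_len = 0, 0
        for j in range(len(R)):  # naive O(|R||S|) longest-match search
            l = 0
            while j + l < len(R) and i + l < len(S) and R[j + l] == S[i + l]:
                l += 1
            if l > best_len:
                best_pos, best_len = j, l
        if best_len == 0:
            refs.append(("lit", S[i])); i += 1
        else:
            refs.append((best_pos, best_len)); i += best_len
    return refs

def relative_decompress(R, refs):
    return "".join(b if a == "lit" else R[a:a + b] for a, b in refs)

R = "ACGTACGTTTGACG"
S = "ACGTTTACGTACGX"
assert relative_decompress(R, relative_compress(R, S)) == S
```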

  5. Cosmological Particle Data Compression in Practice

    Science.gov (United States)

    Zeyen, M.; Ahrens, J.; Hagen, H.; Heitmann, K.; Habib, S.

    2017-12-01

    In cosmological simulations trillions of particles are handled and several terabytes of unstructured particle data are generated in each time step. Transferring this data directly from memory to disk in an uncompressed way results in a massive load on I/O and storage systems. Hence, one goal of domain scientists is to compress the data before storing it to disk while minimizing the loss of information. To prevent reading back uncompressed data from disk, this can be done in an in-situ process. Since the simulation continuously generates data, the available time for the compression of one time step is limited. Therefore, the evaluation of compression techniques has shifted from only focusing on compression rates to include run-times and scalability. In recent years several compression techniques for cosmological data have become available. These techniques can be either lossy or lossless. For both cases, this study aims to evaluate and compare the state-of-the-art compression techniques for unstructured particle data. This study focuses on the techniques available in the Blosc framework with its multi-threading support, the XZ Utils toolkit with the LZMA algorithm that achieves high compression rates, and the widespread FPZIP and ZFP methods for lossy compression. For the investigated compression techniques, quantitative performance indicators such as compression rates, run-time/throughput, and reconstruction errors are measured. Based on these factors, this study offers a comprehensive analysis of the individual techniques and discusses their applicability for in-situ compression. In addition, domain-specific measures are evaluated on the reconstructed data sets, and the relative error rates and statistical properties are analyzed and compared. Based on this study, future challenges and directions in the compression of unstructured cosmological particle data are identified.
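    As a rough illustration of the ratio-versus-throughput trade-off the study measures, the sketch below benchmarks two codecs on mock particle data. Standard-library codecs (zlib, lzma) stand in for the Blosc/XZ/FPZIP/ZFP toolkits named in the abstract, and the data layout and sizes are assumptions.

```python
import lzma, time, zlib
import numpy as np

def benchmark(name, compress, data):
    """Return (codec name, compression ratio, throughput in MiB/s)."""
    t0 = time.perf_counter()
    comp = compress(data)
    dt = time.perf_counter() - t0
    return name, len(data) / len(comp), len(data) / dt / 2**20

# Mock "particle data": mildly correlated float32 positions, standing in
# for one snapshot's unstructured particle block.
rng = np.random.default_rng(0)
parts = np.cumsum(rng.normal(0, 0.01, (1_000_000, 3)), axis=0)
data = parts.astype(np.float32).tobytes()

for row in (benchmark("zlib-1", lambda d: zlib.compress(d, 1), data),
            benchmark("lzma-1", lambda d: lzma.compress(d, preset=1), data)):
    print("%-6s  ratio %.2f  throughput %.1f MiB/s" % row)
```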

  6. Massive propagators in instanton fields

    International Nuclear Information System (INIS)

    Brown, L.S.; Lee, C.

    1978-01-01

    Green's functions for massive spinor and vector particles propagating in a self-dual but otherwise arbitrary non-Abelian gauge field are shown to be completely determined by the corresponding Green's functions of massive scalar particles

  7. Proliferation of massive destruction weapons: fantasy or reality?

    International Nuclear Information System (INIS)

    Duval, M.

    2001-01-01

    This article evaluates the threat of massive destruction weapons (nuclear, chemical, biological) for Europe and recalls the existing safeguards against the different forms of nuclear proliferation: legal (non-proliferation treaty (NPT), comprehensive nuclear test ban treaty (CTBT), fissile material cut-off treaty (FMCT), etc.), technical (fabrication of fissile materials, delays). However, all these safeguards can be overcome, as proven by the activities of some countries. The situation of proliferation for the other types of massive destruction weapons is presented too. (J.S.)

  8. Permutations of massive vacua

    Energy Technology Data Exchange (ETDEWEB)

    Bourget, Antoine [Department of Physics, Universidad de Oviedo, Avenida Calvo Sotelo 18, 33007 Oviedo (Spain); Troost, Jan [Laboratoire de Physique Théorique de l’École Normale Supérieure, CNRS, PSL Research University, Sorbonne Universités, 75005 Paris (France)

    2017-05-09

    We discuss the permutation group G of massive vacua of four-dimensional gauge theories with N=1 supersymmetry that arises upon tracing loops in the space of couplings. We concentrate on superconformal N=4 and N=2 theories with N=1 supersymmetry preserving mass deformations. The permutation group G of massive vacua is the Galois group of characteristic polynomials for the vacuum expectation values of chiral observables. We provide various techniques to effectively compute characteristic polynomials in given theories, and we deduce the existence of varying symmetry breaking patterns of the duality group depending on the gauge algebra and matter content of the theory. Our examples give rise to interesting field extensions of spaces of modular forms.

  9. Massive stars in galaxies

    International Nuclear Information System (INIS)

    Humphreys, R.M.

    1987-01-01

    The relationship between the morphologic type of a galaxy and the evolution of its massive stars is explored, reviewing observational results for nearby galaxies. The data are presented in diagrams, and it is found that the massive-star populations of most Sc spiral galaxies and irregular galaxies are similar, while those of Sb spirals such as M 31 and M 81 may be affected by morphology (via differences in the initial mass function or star-formation rate). Consideration is also given to the stability-related upper luminosity limit in the H-R diagram of hypergiant stars (attributed to radiation pressure in hot stars and turbulence in cool stars) and the goals of future observation campaigns. 88 references

  10. Massive Open Online Courses

    Directory of Open Access Journals (Sweden)

    Tharindu Rekha Liyanagunawardena

    2015-01-01

    Full Text Available Massive Open Online Courses (MOOCs) are a new addition to the open educational provision. They are offered mainly by prestigious universities on various commercial and non-commercial MOOC platforms, allowing anyone who is interested to experience the world-class teaching practiced in these universities. MOOCs have attracted wide interest from around the world. However, learner demographics in MOOCs suggest that some demographic groups are underrepresented. At present MOOCs seem to be better serving the continuous professional development sector.

  11. Evolution of massive stars

    International Nuclear Information System (INIS)

    Loore, C. de

    1984-01-01

    The evolution of stars with masses larger than 15 solar masses is reviewed. These stars have large convective cores and lose a substantial fraction of their matter by stellar wind. The treatment of convection and the parameterisation of the stellar wind mass loss are analysed within the context of existing disagreements between theory and observation. The evolution of massive close binaries and the origin of Wolf-Rayet stars and X-ray binaries are also sketched. (author)

  12. Introduction to massive neutrinos

    International Nuclear Information System (INIS)

    Kayser, B.

    1984-01-01

    We discuss the theoretical ideas which make it natural to expect that neutrinos do indeed have mass. Then we focus on the physical consequences of neutrino mass, including neutrino oscillation and other phenomena whose observation would be very interesting, and would serve to demonstrate that neutrinos are indeed massive. We comment on the legitimacy of comparing results from different types of experiments. Finally, we consider the question of whether neutrinos are their own antiparticles. We explain what this question means, discuss the nature of a neutrino which is its own antiparticle, and consider how one might determine experimentally whether neutrinos are their own antiparticles or not.

  13. DNABIT Compress – Genome compression algorithm

    OpenAIRE

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences based on a novel algorithm of assigning binary bits for smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our ...

  14. Phases of massive gravity

    CERN Document Server

    Dubovsky, S L

    2004-01-01

    We systematically study the most general Lorentz-violating graviton mass invariant under the three-dimensional Euclidean group using explicitly covariant language. We find that at general values of the mass parameters the massive graviton has six propagating degrees of freedom, and some of them are ghosts or lead to rapid classical instabilities. However, there are a number of different regions in the mass parameter space where massive gravity can be described by a consistent low-energy effective theory with cutoff $\sim\sqrt{mM_{Pl}}$ free of rapid instabilities and of the vDVZ discontinuity. Each of these regions is characterized by certain fine-tuning relations between mass parameters, generalizing the Fierz-Pauli condition. In some cases the required fine-tunings are consequences of the existence of subgroups of the diffeomorphism group that are left unbroken by the graviton mass. We find two new cases in which the resulting theories have the property of UV insensitivity, i.e. remain well behaved after inclusion of ...

  15. DNABIT Compress – Genome compression algorithm

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences based on a novel algorithm of assigning binary bits for smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the “DNABIT Compress” algorithm is the best among the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) for fragments of DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. This proposed new algorithm could achieve the best compression ratio, as much as 1.58 bits/base, where the existing best methods could not achieve a ratio less than 1.72 bits/base. PMID:21383923
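    The core trick of assigning short bit codes to DNA bases can be sketched in a few lines. The sketch below is only the uniform 2-bits-per-base baseline, not the variable-length repeat codes that let DNABIT Compress reach 1.58 bits/base; function names and framing are illustrative.

```python
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
BASES = "ACGT"

def pack(seq):
    """Pack a DNA string at 2 bits/base (vs 8 bits/char in plain text)."""
    buf, acc, nbits = bytearray(), 0, 0
    for ch in seq:
        acc = (acc << 2) | CODE[ch]
        nbits += 2
        if nbits == 8:
            buf.append(acc)
            acc, nbits = 0, 0
    if nbits:
        buf.append(acc << (8 - nbits))  # left-align the final partial byte
    return bytes(buf), len(seq)

def unpack(data, n):
    return "".join(BASES[(data[i // 4] >> (6 - 2 * (i % 4))) & 0b11]
                   for i in range(n))

packed, n = pack("ACGTACGGT")
assert unpack(packed, n) == "ACGTACGGT"
```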

  16. Thermal stress control using waste steel fibers in massive concretes

    Science.gov (United States)

    Sarabi, Sahar; Bakhshi, Hossein; Sarkardeh, Hamed; Nikoo, Hamed Safaye

    2017-11-01

    One of the important subjects in massive concrete structures is the control of the generated heat of hydration and, consequently, the potential for cracking due to thermal stress. In the present study, waste turnery steel fibers were used in massive concretes to reduce the amount of cement without changing the compressive strength. By substituting a part of the cement with waste steel fibers, the costs and the generated hydration heat were reduced and the tensile strength was increased. The results showed that by using 0.5% turnery waste steel fibers, and consequently reducing the cement content by 32%, the hydration heat was reduced by 23.4% without changing the compressive strength. Moreover, the maximum heat gradient was reduced from 18.5% in the plain concrete sample to 12% in the fiber-reinforced concrete sample.

  17. Cloud Optimized Image Format and Compression

    Science.gov (United States)

    Becker, P.; Plesea, L.; Maurer, T.

    2015-04-01

    Cloud based image storage and processing requires re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assumed fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These concepts no longer truly hold in cloud based elastic storage and computation environments. This paper will provide details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volumes stored and reduces the data transferred, but the reduced data size must be balanced with the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include its simple-to-implement algorithm that enables it to be efficiently accessed using JavaScript. Combining this new cloud based image storage format and compression will help resolve some of the challenges of big image data on the internet.

  18. A hybrid data compression approach for online backup service

    Science.gov (United States)

    Wang, Hua; Zhou, Ke; Qin, MingKang

    2009-08-01

    With the popularity of SaaS (Software as a Service), backup services have become a hot topic in storage applications. Due to the large number of backup users, how to reduce the massive data load is a key problem for system designers. Data compression provides a good solution. Traditional data compression applications adopt a single method, which has limitations in some respects. For example, data stream compression can only realize intra-file compression, while de-duplication is used to eliminate inter-file redundant data; on its own, neither achieves the compression efficiency that backup service software needs. This paper proposes a novel hybrid compression approach, which includes two levels: global compression and block compression. The former eliminates redundant inter-file copies across different users; the latter adopts data stream compression technology to realize intra-file de-duplication. Several compression algorithms were adopted to measure the compression ratio and CPU time. The adaptability of different algorithms to particular situations is also analyzed. The performance analysis shows that a great improvement is made through the hybrid compression policy.
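    A compact sketch of the two-level idea follows, under stated assumptions (fixed-size chunks, SHA-256 content addressing for the global level, zlib for the stream level); the paper's actual chunking scheme and algorithm mix may differ.

```python
import hashlib, zlib

class HybridStore:
    """Two-level sketch: content-addressed chunks give inter-file (global)
    de-duplication across users; zlib gives intra-chunk compression."""

    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}  # digest -> compressed chunk, shared by all users

    def backup(self, data):
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in self.chunks:          # global level: de-duplicate
                self.chunks[digest] = zlib.compress(chunk)  # block level
            recipe.append(digest)
        return recipe  # the per-file "recipe" is all that is kept per user

    def restore(self, recipe):
        return b"".join(zlib.decompress(self.chunks[d]) for d in recipe)

store = HybridStore()
r1 = store.backup(b"shared attachment " * 1000)
r2 = store.backup(b"shared attachment " * 1000)  # 2nd user adds no chunks
assert store.restore(r1) == store.restore(r2)
```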

  19. Computational fluid dynamics on a massively parallel computer

    Science.gov (United States)

    Jespersen, Dennis C.; Levit, Creon

    1989-01-01

    A finite difference code was implemented for the compressible Navier-Stokes equations on the Connection Machine, a massively parallel computer. The code is based on the ARC2D/ARC3D program and uses the implicit factored algorithm of Beam and Warming. The code uses odd-even elimination to solve linear systems. Timings and computation rates are given for the code, and a comparison is made with a Cray X-MP.

  20. Minimal massive 3D gravity

    International Nuclear Information System (INIS)

    Bergshoeff, Eric; Merbis, Wout; Hohm, Olaf; Routh, Alasdair J; Townsend, Paul K

    2014-01-01

    We present an alternative to topologically massive gravity (TMG) with the same ‘minimal’ bulk properties; i.e. a single local degree of freedom that is realized as a massive graviton in linearization about an anti-de Sitter (AdS) vacuum. However, in contrast to TMG, the new ‘minimal massive gravity’ has both a positive energy graviton and positive central charges for the asymptotic AdS-boundary conformal algebra. (paper)

  1. Massively parallel multicanonical simulations

    Science.gov (United States)

    Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard

    2018-03-01

    Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial computationally. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with of the order of 10^4 parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as starting point and reference for practitioners in the field.

  2. Massive Galileon positivity bounds

    Science.gov (United States)

    de Rham, Claudia; Melville, Scott; Tolley, Andrew J.; Zhou, Shuang-Yong

    2017-09-01

    The EFT coefficients in any gapped, scalar, Lorentz invariant field theory must satisfy positivity requirements if there is to exist a local, analytic Wilsonian UV completion. We apply these bounds to the tree level scattering amplitudes for a massive Galileon. The addition of a mass term, which does not spoil the non-renormalization theorem of the Galileon and preserves the Galileon symmetry at loop level, is necessary to satisfy the lowest order positivity bound. We further show that a careful choice of successively higher derivative corrections are necessary to satisfy the higher order positivity bounds. There is then no obstruction to a local UV completion from considerations of tree level 2-to-2 scattering alone. To demonstrate this we give an explicit example of such a UV completion.

  3. Massively parallel mathematical sieves

    Energy Technology Data Exchange (ETDEWEB)

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
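    The block decomposition behind such parallel sieves is easy to show in miniature. The sketch below is a shared-nothing analogue of the report's scattered/block decomposition idea (a multiprocessing pool instead of a hypercube; all parameters are illustrative): each worker sieves its own contiguous block using seed primes up to √N.

```python
import math
from multiprocessing import Pool

def small_primes(limit):
    """Serial sieve up to sqrt(N); every worker needs these seed primes."""
    mark = bytearray([1]) * (limit + 1)
    mark[0:2] = b"\x00\x00"
    for p in range(2, int(math.isqrt(limit)) + 1):
        if mark[p]:
            mark[p * p::p] = bytearray(len(mark[p * p::p]))
    return [i for i, m in enumerate(mark) if m]

def sieve_block(args):
    lo, hi, seeds = args  # each worker owns one contiguous block [lo, hi)
    mark = bytearray([1]) * (hi - lo)
    for p in seeds:
        start = max(p * p, (lo + p - 1) // p * p)  # first multiple in block
        mark[start - lo::p] = bytearray(len(mark[start - lo::p]))
    return [lo + i for i, m in enumerate(mark) if m]

if __name__ == "__main__":
    N, workers = 1_000_000, 8
    seeds = small_primes(math.isqrt(N))
    step = (N - 1) // workers + 1
    blocks = [(i, min(i + step, N + 1), seeds)
              for i in range(2, N + 1, step)]
    with Pool(workers) as pool:
        primes = [p for block in pool.map(sieve_block, blocks) for p in block]
```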

  4. Radiologic image compression -- A review

    International Nuclear Information System (INIS)

    Wong, S.; Huang, H.K.; Zaremba, L.; Gooden, D.

    1995-01-01

    The objective of radiologic image compression is to reduce the data volume and to achieve a low bit rate in the digital representation of radiologic images without perceived loss of image quality. However, the demand for transmission bandwidth and storage space in the digital radiology environment, especially picture archiving and communication systems (PACS) and teleradiology, and the proliferating use of various imaging modalities, such as magnetic resonance imaging, computed tomography, ultrasonography, nuclear medicine, computed radiography, and digital subtraction angiography, continue to outstrip the capabilities of existing technologies. The availability of lossy coding techniques for clinical diagnoses further raises many complex legal and regulatory issues. This paper reviews the recent progress of lossless and lossy radiologic image compression and presents the legal challenges of using lossy compression of medical records. To do so, the authors first describe the fundamental concepts of radiologic imaging and digitization. Then, the authors examine current compression technology in the field of medical imaging and discuss important regulatory policies and legal questions facing the use of compression in this field. The authors conclude with a summary of future challenges and research directions. 170 refs

  5. The Fate of Massive Black Holes in Gas-Rich Galaxy Mergers

    Science.gov (United States)

    Escala, A.; Larson, R. B.; Coppi, P. S.; Mardones, D.

    2006-06-01

    Using SPH numerical simulations, we investigate the effects of gas on the inspiral and merger of a massive black hole binary. This study is motivated by the very massive nuclear gas disks observed in the central regions of merging galaxies. Here we present results that expand on the treatment in previous works (Escala et al. 2004, 2005) by studying the evolution of a binary with different black hole masses in a massive gas disk.

  6. Study of the hoop fracture behaviour of nuclear fuel cladding from ring compression tests by means of non-linear optimization techniques

    Energy Technology Data Exchange (ETDEWEB)

    Gómez, F.J., E-mail: javier.gomez@amsimulation.com [Advanced Material Simulation, AMS, Bilbao (Spain); Martin Rengel, M.A., E-mail: mamartin.rengel@upm.es [E.T.S.I. Caminos, Canales y Puertos, Universidad Politécnica de Madrid, C/Professor Aranguren SN, E-28040 Madrid (Spain); Ruiz-Hervias, J.; Puerta, M.A. [E.T.S.I. Caminos, Canales y Puertos, Universidad Politécnica de Madrid, C/Professor Aranguren SN, E-28040 Madrid (Spain)

    2017-06-15

    In this work, the hoop fracture toughness of ZIRLO® fuel cladding is calculated as a function of three parameters: hydrogen concentration, temperature and displacement rate. To this end, pre-hydrided samples with nominal hydrogen concentrations of 0 (as-received), 150, 250, 500, 1200 and 2000 ppm were prepared. Hydrogen was precipitated as zirconium hydrides in the shape of platelets oriented along the hoop direction. Ring Compression Tests (RCTs) were conducted at three temperatures (20, 135 and 300 °C) and two displacement rates (0.5 and 100 mm/min). A new method is proposed in this paper which allows the determination of fracture toughness from ring compression tests. The proposed method combines the experimental results, the cohesive crack model, finite element simulations, numerical calculations and non-linear optimization techniques. The parameters of the cohesive crack model were calculated by minimizing the difference between the experimental data and the numerical results. An almost perfect fit of the experimental results is achieved by this method. In addition, an estimation of the error in the calculated fracture toughness is also provided.
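    The fitting loop the authors describe, minimizing the misfit between measured and simulated curves over the cohesive parameters, can be sketched generically. Everything below is a stand-in: the closed-form `simulate_load` replaces the finite-element computation, and the parameter names, bounds, and data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate_load(params, disp):
    """Hypothetical stand-in for the finite-element run: maps cohesive
    parameters (strength f_t, fracture energy G_f) to a load curve."""
    f_t, G_f = params
    return f_t * disp * np.exp(-f_t * disp**2 / (2.0 * G_f))

def residuals(params, disp, load):
    # Misfit between measurement and simulation, as minimized in the paper.
    return simulate_load(params, disp) - load

rng = np.random.default_rng(0)
disp = np.linspace(0.0, 2.0, 50)
load = simulate_load((600.0, 30.0), disp) + rng.normal(0.0, 5.0, 50)

fit = least_squares(residuals, x0=(400.0, 10.0), args=(disp, load),
                    bounds=([1.0, 1.0], [2000.0, 200.0]))
f_t_hat, G_f_hat = fit.x  # recovered cohesive parameters
```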

  7. Massive Black Hole Binary Evolution

    Directory of Open Access Journals (Sweden)

    Merritt David

    2005-11-01

    Full Text Available Coalescence of binary supermassive black holes (SBHs would constitute the strongest sources of gravitational waves to be observed by LISA. While the formation of binary SBHs during galaxy mergers is almost inevitable, coalescence requires that the separation between binary components first drop by a few orders of magnitude, due presumably to interaction of the binary with stars and gas in a galactic nucleus. This article reviews the observational evidence for binary SBHs and discusses how they would evolve. No completely convincing case of a bound, binary SBH has yet been found, although a handful of systems (e.g. interacting galaxies; remnants of galaxy mergers are now believed to contain two SBHs at projected separations of ≲ 1 kpc. N-body studies of binary evolution in gas-free galaxies have reached large enough particle numbers to reproduce the slow, “diffusive” refilling of the binary’s loss cone that is believed to characterize binary evolution in real galactic nuclei. While some of the results of these simulations - e.g. the binary hardening rate and eccentricity evolution - are strongly N-dependent, others - e.g. the “damage” inflicted by the binary on the nucleus - are not. Luminous early-type galaxies often exhibit depleted cores with masses of ~ 1-2 times the mass of their nuclear SBHs, consistent with the predictions of the binary model. Studies of the interaction of massive binaries with gas are still in their infancy, although much progress is expected in the near future. Binary coalescence has a large influence on the spins of SBHs, even for mass ratios as extreme as 10:1, and evidence of spin-flips may have been observed.

  8. Bill project related to the struggle against the proliferation of arms of massive destruction and their vectors; Projet de Loi relatif a la lutte contre la proliferation des armes de destruction massive et de leurs vecteurs

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2011-07-01

    This bill project addresses several issues: the struggle against proliferation of arms of massive destruction (nuclear weapons, nuclear materials, biological weapons, and chemical weapons), the struggle against proliferation of vectors of arms of massive destruction, dual-use goods, and the use of these weapons and vectors in acts of terrorism.

  9. The Destructive Birth of Massive Stars and Massive Star Clusters

    Science.gov (United States)

    Rosen, Anna; Krumholz, Mark; McKee, Christopher F.; Klein, Richard I.; Ramirez-Ruiz, Enrico

    2017-01-01

    Massive stars play an essential role in the Universe. They are rare, yet the energy and momentum they inject into the interstellar medium with their intense radiation fields dwarfs the contribution by their vastly more numerous low-mass cousins. Previous theoretical and observational studies have concluded that the feedback associated with massive stars' radiation fields is the dominant mechanism regulating massive star and massive star cluster (MSC) formation. Therefore detailed simulation of the formation of massive stars and MSCs, which host hundreds to thousands of massive stars, requires an accurate treatment of radiation. For this purpose, we have developed a new, highly accurate hybrid radiation algorithm that properly treats the absorption of the direct radiation field from stars and the re-emission and processing by interstellar dust. We use our new tool to perform a suite of three-dimensional radiation-hydrodynamic simulations of the formation of massive stars and MSCs. For individual massive stellar systems, we simulate the collapse of massive pre-stellar cores with laminar and turbulent initial conditions and properly resolve regions where we expect instabilities to grow. We find that mass is channeled to the massive stellar system via gravitational and Rayleigh-Taylor (RT) instabilities. For laminar initial conditions, proper treatment of the direct radiation field produces later onset of RT instability, but does not suppress it entirely provided the edges of the radiation-dominated bubbles are adequately resolved. RT instabilities arise immediately for turbulent pre-stellar cores because the initial turbulence seeds the instabilities. To model MSC formation, we simulate the collapse of a dense, turbulent, magnetized Mcl = 10^6 M⊙ molecular cloud. We find that the influence of the magnetic pressure and radiative feedback slows down star formation. Furthermore, we find that star formation is suppressed along dense filaments where the magnetic field is

  10. Nuclear

    International Nuclear Information System (INIS)

    2014-01-01

    This document proposes a presentation and discussion of the main notions, issues, principles, or characteristics related to nuclear energy: radioactivity (presence in the environment, explanation, measurement, periods and activities, low doses, applications), fuel cycle (front end, mining and ore concentration, refining and conversion, fuel fabrication, in the reactor, back end with reprocessing and recycling, transport), the future of the thorium-based fuel cycle (motivations, benefits and drawbacks), nuclear reactors (principles of fission reactors, reactor types, PWR reactors, BWR, heavy-water reactor, high temperature reactor or HTR, future reactors), nuclear wastes (classification, packaging and storage, legal aspects, vitrification, choice of a deep storage option, quantities and costs, foreign practices), radioactive releases of nuclear installations (main released radio-elements, radioactive releases by nuclear reactors and by the La Hague plant, gaseous and liquid effluents, impact of releases, regulation), the OSPAR Convention, management and safety of nuclear activities (from control to quality assurance, to quality management and to sustainable development), national safety bodies (missions, means, organisation and activities of ASN, IRSN, HCTISN), international bodies, nuclear energy and medicine (applications of radioactivity, medical imagery, radiotherapy, doses in nuclear medicine, implementation, the accident in Epinal), and nuclear energy and R and D (past R and D programmes and expenses, main actors in France and present funding, main R and D axes, international cooperation)

  11. Rio Blanco massive hydraulic fracture: project definition

    International Nuclear Information System (INIS)

    1976-01-01

    A recent Federal Power Commission feasibility study assessed the possibility of economically producing gas from three Rocky Mountain basins. These basins have potentially productive horizons 2,000 to 4,000 feet thick containing an estimated total of 600 trillion cubic feet of gas in place. However, the producing sands are of such low permeability and heterogeneity that conventional methods have failed to develop these basins economically. The Natural Gas Technology Task Force, responsible for preparing the referenced feasibility study, determined that, if effective well stimulation methods for these basins can be developed, it might be possible to recover 40 to 50 percent of the gas in place. The Task Force pointed out two possible underground fracturing methods: nuclear explosive fracturing and massive hydraulic fracturing. They argued that once technical viability has been demonstrated, and with adequate economic incentives, there should be no reason why one or even both of these approaches could not be employed, thus making a major contribution toward correcting the energy deficiency of the Nation. A joint Government-industry demonstration program has been proposed to test the relative effectiveness of massive hydraulic fracturing of the same formation and producing horizons that were stimulated by the Rio Blanco nuclear project

  12. Bill project related to the struggle against the proliferation of arms of massive destruction and their vectors

    International Nuclear Information System (INIS)

    2011-01-01

    This bill project addresses several issues: the struggle against proliferation of arms of massive destruction (nuclear weapons, nuclear materials, biological weapons, and chemical weapons), the struggle against proliferation of vectors of arms of massive destruction, dual-use goods, and the use of these weapons and vectors in acts of terrorism.

  13. A Compressive Superresolution Display

    KAUST Repository

    Heide, Felix; Gregson, James; Wetzstein, Gordon; Raskar, Ramesh; Heidrich, Wolfgang

    2014-01-01

    In this paper, we introduce a new compressive display architecture for superresolution image presentation that exploits co-design of the optical device configuration and compressive computation. Our display allows for superresolution, HDR, or glasses-free 3D presentation.

  14. A Compressive Superresolution Display

    KAUST Repository

    Heide, Felix

    2014-06-22

    In this paper, we introduce a new compressive display architecture for superresolution image presentation that exploits co-design of the optical device configuration and compressive computation. Our display allows for superresolution, HDR, or glasses-free 3D presentation.

  15. Microbunching and RF Compression

    International Nuclear Information System (INIS)

    Venturini, M.; Migliorati, M.; Ronsivalle, C.; Ferrario, M.; Vaccarezza, C.

    2010-01-01

    Velocity bunching (or RF compression) represents a promising technique complementary to magnetic compression to achieve the high peak current required in the linac drivers for FELs. Here we report on recent progress aimed at characterizing the RF compression from the point of view of the microbunching instability. We emphasize the development of a linear theory for the gain function of the instability and its validation against macroparticle simulations that represents a useful tool in the evaluation of the compression schemes for FEL sources.

  16. Massive gravity from bimetric gravity

    International Nuclear Information System (INIS)

    Baccetti, Valentina; Martín-Moruno, Prado; Visser, Matt

    2013-01-01

    We discuss the subtle relationship between massive gravity and bimetric gravity, focusing particularly on the manner in which massive gravity may be viewed as a suitable limit of bimetric gravity. The limiting procedure is more delicate than currently appreciated. Specifically, this limiting procedure should not unnecessarily constrain the background metric, which must be externally specified by the theory of massive gravity itself. The fact that in bimetric theories one always has two sets of metric equations of motion continues to have an effect even in the massive gravity limit, leading to additional constraints besides the one set of equations of motion naively expected. Thus, since solutions of bimetric gravity in the limit of vanishing kinetic term are also solutions of massive gravity, but the contrary statement is not necessarily true, there is no complete continuity in the parameter space of the theory. In particular, we study the massive cosmological solutions which are continuous in the parameter space, showing that many interesting cosmologies belong to this class. (paper)

  17. Mining compressing sequential problems

    NARCIS (Netherlands)

    Hoang, T.L.; Mörchen, F.; Fradkin, D.; Calders, T.G.K.

    2012-01-01

    Compression based pattern mining has been successfully applied to many data mining tasks. We propose an approach based on the minimum description length principle to extract sequential patterns that compress a database of sequences well. We show that mining compressing patterns is NP-Hard and

  18. Fueling-Controlled the Growth of Massive Black Holes

    Science.gov (United States)

    Escala, A.

    2009-05-01

    We study the relation between nuclear massive black holes and their host spheroid gravitational potential. Using AMR numerical simulations, we analyze how gas is transported into the nuclear (central kpc) regions of galaxies. We study gas fueling onto the inner accretion disk (sub-pc scale) and star formation in a massive nuclear disk like those generally found in proto-spheroids (ULIRGs, SCUBA galaxies). These sub-pc resolution simulations of gas fueling, which is mainly depleted by star formation, naturally satisfy the 'M_BH-M_virial' relation, with a scatter considerably less than that observed. We find that a generalized version of the Kennicutt-Schmidt law for starbursts is satisfied, in which the total gas depletion rate (Ṁ_gas = Ṁ_BH + Ṁ_SF) scales as M_gas/t_orbital. See Escala (2007) for more details about this work.

  19. Compression for radiological images

    Science.gov (United States)

    Wilson, Dennis L.

    1992-07-01

    The viewing of radiological images has peculiarities that must be taken into account in the design of a compression technique. The images may be manipulated on a workstation to change the contrast, to change the center of the brightness levels that are viewed, and even to invert the images. Because of the possible consequences of losing information in a medical application, bit-preserving compression is used for the images used for diagnosis. However, for archiving, the images may be compressed to 10% of their original size. A compression technique based on the Discrete Cosine Transform (DCT) takes the viewing factors into account by compressing the changes in the local brightness levels. The compression technique is a variation of the CCITT JPEG compression that suppresses the blocking of the DCT except in areas of very high contrast.
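    The DCT-based idea, compressing by concentrating a block's energy in a few coefficients, can be shown in miniature. The sketch below is a generic 8x8 block-DCT thresholding toy, not the CCITT JPEG variant the record describes; the block size and the number of kept coefficients are assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, keep=10):
    """Zero all but (at least) the `keep` largest-magnitude DCT
    coefficients -- a crude stand-in for transform-coding quantization."""
    coeffs = dctn(block, norm="ortho")
    thresh = np.sort(np.abs(coeffs).ravel())[-keep]
    coeffs[np.abs(coeffs) < thresh] = 0.0
    return coeffs  # sparse, hence cheap to entropy-code

def decompress_block(coeffs):
    return idctn(coeffs, norm="ortho")

rng = np.random.default_rng(0)
block = rng.normal(128.0, 20.0, (8, 8))  # one 8x8 image tile
recon = decompress_block(compress_block(block))
```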

  20. Holographically viable extensions of topologically massive and minimal massive gravity?

    Science.gov (United States)

    Altas, Emel; Tekin, Bayram

    2016-01-01

    Recently [E. Bergshoeff et al., Classical Quantum Gravity 31, 145008 (2014)], an extension of topologically massive gravity (TMG) in 2+1 dimensions, dubbed minimal massive gravity (MMG), which is free of the bulk-boundary unitarity clash that afflicts the former theory and all the other known three-dimensional theories, was found. Field equations of MMG differ from those of TMG at quadratic terms in the curvature that do not come from the variation of an action depending on the metric alone. Here we show that MMG is a unique theory and that there does not exist a deformation of TMG or MMG at the cubic and quartic order (and beyond) in the curvature that is consistent at the level of the field equations. The only extension of TMG with the desired bulk and boundary properties having a single massive degree of freedom is MMG.

  1. Radiological Image Compression

    Science.gov (United States)

    Lo, Shih-Chung Benedict

    The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, which include CT head and body, and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square error (NMSE) on the difference image, defined as the difference between the original and the reconstructed image for a given compression ratio, is used as a global measurement of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
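    For concreteness, the global quality measure can be written out. The exact normalization convention is not given in the record, so the denominator below (the energy of the original image) is an assumption.

```python
import numpy as np

def nmse(original, reconstructed):
    """Normalized mean-square error of the difference image; normalizing
    by the energy of the original is an assumed convention."""
    orig = original.astype(float)
    diff = orig - reconstructed.astype(float)
    return np.sum(diff**2) / np.sum(orig**2)
```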

  2. Manufacture of stabilized phosphogypsum blocks for underwater massives in the Gulf of Gabès

    Directory of Open Access Journals (Sweden)

    Koubaa Lobna

    2014-04-01

    Studies show that the treatment of PH with crushed sand and cement, or with cement and lime, gives the best results in terms of ultrasonic speed and compressive strength. They also indicate that the addition of cement and lime can absorb huge amounts of PH (92%). The resistance obtained is sufficient for the possible use of the PH blocks for the construction of underwater massives.

  3. Massive Submucosal Ganglia in Colonic Inertia.

    Science.gov (United States)

    Naemi, Kaveh; Stamos, Michael J; Wu, Mark Li-Cheng

    2018-02-01

    - Colonic inertia is a debilitating form of primary chronic constipation with unknown etiology and diagnostic criteria, often requiring pancolectomy. We have occasionally observed massively enlarged submucosal ganglia containing at least 20 perikarya, in addition to previously described giant ganglia with greater than 8 perikarya, in cases of colonic inertia. These massively enlarged ganglia have yet to be formally recognized. - To determine whether such "massive submucosal ganglia," defined as ganglia harboring at least 20 perikarya, characterize colonic inertia. - We retrospectively reviewed specimens from colectomies of patients with colonic inertia and compared the prevalence of massive submucosal ganglia occurring in this setting to the prevalence of massive submucosal ganglia occurring in a set of control specimens from patients lacking chronic constipation. - Seven of 8 specimens affected by colonic inertia harbored 1 to 4 massive ganglia, for a total of 11 massive ganglia. One specimen lacked massive ganglia but had limited sampling and nearly massive ganglia. Massive ganglia occupied both the superficial and the deep submucosal plexuses. The patient with 4 massive ganglia also had 1 mitotically active giant ganglion. Only 1 massive ganglion occupied the entire set of 10 specimens from patients lacking chronic constipation. - We performed the first, albeit distinctly small, study of massive submucosal ganglia and showed that massive ganglia may be linked to colonic inertia. Further, larger studies are necessary to determine whether massive ganglia are pathogenetic or secondary phenomena, and whether massive ganglia or mitotically active ganglia distinguish colonic inertia from other types of chronic constipation.

  4. Achalasia with massive oesophageal dilation causing tracheomalacia and asthma symptoms

    Directory of Open Access Journals (Sweden)

    Ana Gomez-Larrauri

    Full Text Available Achalasia is an uncommon oesophageal motor disorder characterized by failure of relaxation of the lower oesophageal sphincter and muscle hypertrophy, resulting in a loss of peristalsis and a dilated oesophagus. Gastrointestinal symptoms are invariably present in all cases of achalasia observed in adults. We report a case of a 34 year-old female patient with long standing history of asthma-like symptoms, labelled as uncontrolled and steroid resistant asthma with no gastrointestinal manifestations. Thoracic CT scan revealed a massive oesophagus due to achalasia, which caused severe tracheomalacia as a result of tracheal compression. Her symptoms regressed completely after a laparoscopic Heller myotomy surgery intervention.

  5. Key Technologies in Massive MIMO

    Directory of Open Access Journals (Sweden)

    Hu Qiang

    2018-01-01

    Full Text Available The explosive growth of wireless data traffic in the future fifth generation mobile communication system (5G) has led researchers to develop new disruptive technologies. As an extension of traditional MIMO technology, massive MIMO can greatly improve throughput and energy efficiency, and can effectively improve link reliability and data transmission rates, making it an important research direction for 5G wireless communication. Massive MIMO has undergone rapid development over the last three years; by communicating through very large numbers of antennas with suitable duplex communication modes, it raises the system's spectral efficiency to an unprecedented height.

  6. Hunting for a massive neutrino

    CERN Document Server

    AUTHOR|(CDS)2108802

    1997-01-01

    A great effort is devoted by many groups of physicists all over the world to give an answer to the following question: is the neutrino massive? This question has profound implications for particle physics, astrophysics and cosmology, in relation to the so-called Dark Matter puzzle. The neutrino oscillation process, in particular, can only occur if the neutrino is massive. An overview of the neutrino mass measurements and of the oscillation formalism and experiments will be given, also in connection with the present experimental programme at CERN with the two experiments CHORUS and NOMAD.

  7. Template based parallel checkpointing in a massively parallel computer system

    Science.gov (United States)

    Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN

    2009-01-13

    A method and apparatus for a template-based parallel checkpoint save for a massively parallel supercomputer system using a parallel variation of the rsync protocol and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored, for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high-speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
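    The template-comparison idea can be sketched compactly. The toy below uses assumed choices (MD5 block checksums, zlib block compression, a dict-based delta) and is only an illustration of the principle, not the patented parallel rsync implementation.

```python
import hashlib, zlib

BLOCK = 64 * 1024  # assumed block size

def checksums(data):
    """Per-block checksums of the broadcast template checkpoint."""
    return [hashlib.md5(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

def delta_checkpoint(node_state, template_sums):
    """Ship only the blocks whose checksum differs from the template,
    compressed with a conventional non-lossy codec."""
    delta = {}
    for idx, i in enumerate(range(0, len(node_state), BLOCK)):
        blk = node_state[i:i + BLOCK]
        if idx >= len(template_sums) or \
           hashlib.md5(blk).digest() != template_sums[idx]:
            delta[idx] = zlib.compress(blk)
    return delta

def apply_delta(template, delta):
    blocks = [template[i:i + BLOCK] for i in range(0, len(template), BLOCK)]
    for idx, comp in delta.items():
        while idx >= len(blocks):
            blocks.append(b"")
        blocks[idx] = zlib.decompress(comp)
    return b"".join(blocks)

template = b"\x00" * (4 * BLOCK)
node = template[:2 * BLOCK] + b"\x01" * BLOCK + template[3 * BLOCK:]
d = delta_checkpoint(node, checksums(template))  # only 1 block shipped
assert apply_delta(template, d) == node
```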

  8. Hadronic production of massive lepton pairs

    International Nuclear Information System (INIS)

    Berger, E.L.

    1982-12-01

    A review is presented of recent experimental and theoretical progress in studies of the production of massive lepton pairs in hadronic collisions. I begin with the classical Drell-Yan annihilation model and its predictions. Subsequently, I discuss deviations from scaling, the status of the proofs of factorization in the parton model, higher-order terms in the perturbative QCD expansion, the discrepancy between measured and predicted yields (K factor), high-twist terms, soft gluon effects, transverse-momentum distributions, implications for weak vector boson (W± and Z⁰) yields and production properties, nuclear A-dependence effects, correlations of the lepton pair with hadrons in the final state, and angular distributions in the lepton-pair rest frame.

  9. Compressed sensing & sparse filtering

    CERN Document Server

    Carmi, Avishy Y; Godsill, Simon J

    2013-01-01

    This book is aimed at presenting concepts, methods and algorithms able to cope with undersampled and limited data. One such trend that recently gained popularity and to some extent revolutionised signal processing is compressed sensing. Compressed sensing builds upon the observation that many signals in nature are nearly sparse (or compressible, as they are normally referred to) in some domain, and consequently they can be reconstructed to high accuracy from far fewer observations than traditionally held to be necessary. Apart from compressed sensing this book contains other related app
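    The core claim, that nearly sparse signals can be recovered from far fewer measurements than unknowns, is easy to demonstrate with iterative soft-thresholding (ISTA), one of the simplest sparse-recovery solvers; the problem sizes and the Gaussian sensing matrix below are illustrative assumptions.

```python
import numpy as np

def ista(A, y, lam=0.01, iters=500):
    """Iterative soft-thresholding for min ||Ax - y||^2/2 + lam*||x||_1,
    a basic compressed-sensing reconstruction algorithm."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L  # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrink
    return x

rng = np.random.default_rng(1)
n, m, k = 256, 64, 5                   # 4x undersampled, 5-sparse signal
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
x_hat = ista(A, A @ x_true)
error = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)  # modest
```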

  10. Massive Neurofibroma of the Breast

    African Journals Online (AJOL)


    Neurofibromas are benign nerve sheath tumors that are extremely rare in the breast. We report a massive ... plexiform breast neurofibromas may transform into a malignant peripheral nerve sheath tumor. We present a case ...

  11. Cleaning Massive Sonar Point Clouds

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Larsen, Kasper Green; Mølhave, Thomas

    2010-01-01

    We consider the problem of automatically cleaning massive sonar data point clouds, that is, the problem of automatically removing noisy points that for example appear as a result of scans of (shoals of) fish, multiple reflections, scanner self-reflections, refraction in gas bubbles, and so on. We...

  12. Topologically Massive Higher Spin Gravity

    NARCIS (Netherlands)

    Bagchi, A.; Lal, S.; Saha, A.; Sahoo, B.

    2011-01-01

    We look at the generalisation of topologically massive gravity (TMG) to higher spins, specifically spin-3. We find a special "chiral" point for the spin-three case, analogous to the spin-two example, which actually coincides with the usual spin-two chiral point. But in contrast to usual TMG, there is the

  13. Supernovae from massive AGB stars

    NARCIS (Netherlands)

    Poelarends, A.J.T.; Izzard, R.G.; Herwig, F.; Langer, N.; Heger, A.

    2006-01-01

    We present new computations of the final fate of massive AGB stars. These stars form ONeMg cores after a phase of carbon burning and are called Super AGB stars (SAGB). Detailed stellar evolutionary models until the thermally pulsing AGB were computed using three different stellar evolution codes. The

  14. Massively parallel quantum computer simulator

    NARCIS (Netherlands)

    De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.

    2007-01-01

    We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray

  15. The proliferation of massive destruction weapons and ballistic missiles

    International Nuclear Information System (INIS)

    Schmitt, M.

    1996-01-01

    The author studies the current situation of nuclear deterrence policies and the possibility that non-nuclear governments use chemical weapons as weapons of mass destruction. Non-proliferation of nuclear weapons took on a new interest with the disintegration of the communist bloc, but it seems that only little nuclear material has disappeared towards proliferating countries. The denuclearization of Belarus, Ukraine and Kazakhstan is making progress with the START I treaty; China signed the Non-Proliferation Treaty in 1992 and conducts an export policy in equipment and know-how towards Iran, Pakistan, North Korea, Saudi Arabia and Syria. Within the next ten years, countries such as Iran and North Korea could catch up with Israel, India and Pakistan among undeclared nuclear countries. For chemical weapons, Libya, Iran and Syria could catch up with Iraq. (N.C.)

  16. Nuclear

    International Nuclear Information System (INIS)

    Anon.

    2000-01-01

    The first text deals with a new circular concerning the collection of medical radioactive wastes containing radium. This campaign aims to encourage people to hand over their radioactive wastes (needles, tubes) in order to remove any danger. The second text presents a decree of 31 December 1999 on the limitation of noise and external risks resulting from the operation of nuclear facilities: noise, atmospheric pollution, water pollution, waste management and fire prevention. (A.L.B.)

  17. Anisotropic Concrete Compressive Strength

    DEFF Research Database (Denmark)

    Gustenhoff Hansen, Søren; Jørgensen, Henrik Brøner; Hoang, Linh Cao

    2017-01-01

    When the load-carrying capacity of existing concrete structures is (re-)assessed, it is often based on the compressive strength of cores drilled out from the structure. Existing studies show that the core compressive strength is anisotropic; i.e. it depends on whether the cores are drilled parallel...

  18. Experiments with automata compression

    NARCIS (Netherlands)

    Daciuk, J.; Yu, S; Daley, M; Eramian, M G

    2001-01-01

    Several compression methods of finite-state automata are presented and evaluated. Most compression methods used here are already described in the literature. However, their impact on the size of automata has not been described yet. We fill that gap, presenting results of experiments carried out on

  19. Application of digital compression techniques to optical surveillance systems

    International Nuclear Information System (INIS)

    Johnson, C.S.

    1991-01-01

    There are many benefits to handling video images electronically; however, the amount of digital data in a normal video image is a major obstacle. The solution is to remove the high-frequency and redundant information in a process that is referred to as compression. Compression allows the number of digital bits required for a given image to be reduced for more efficient storage or transmission of images. The next question is how much compression can be done without impairing the image quality beyond its usefulness for a given application. This paper discusses image compression that might be applied to provide useful images in unattended nuclear facility surveillance applications.
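
    As a generic illustration of discarding high-frequency and redundant information (a JPEG-like transform-coding step, not the specific scheme discussed in the paper), the sketch below keeps only the largest DCT coefficients of an 8x8 image block; the block content and the number of retained coefficients are arbitrary choices.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(b):  return dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')
def idct2(b): return idct(idct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

def compress_block(block, keep=10):
    """Zero all but the `keep` largest-magnitude DCT coefficients of one 8x8 block."""
    coeffs = dct2(block.astype(float))
    thresh = np.sort(np.abs(coeffs).ravel())[-keep]
    coeffs[np.abs(coeffs) < thresh] = 0.0   # mostly high-frequency detail is dropped
    return coeffs                           # sparse: cheap to store or transmit

block = np.outer(np.arange(8), np.ones(8)) * 8.0   # toy block with a smooth gradient
restored = idct2(compress_block(block))
print(float(np.max(np.abs(restored - block))))     # residual error from the discarded terms
```

    Storing the few surviving coefficients instead of 64 pixel values is where the compression comes from; the printed value is the worst-case pixel error that the discarded terms introduce.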

  20. Massive lepton pair production in massive quantum electrodynamics

    International Nuclear Information System (INIS)

    Raychaudhuri, P.

    1976-01-01

    The pp → l⁺ + l⁻ + x inclusive interaction has been studied at high energies in terms of massive quantum electrodynamics. The differential cross-section dσ/dQ² is derived and proves to be proportional to Q⁻⁴, where Q is the mass of the lepton pair. Basic features of the cross-section are demonstrated to be consistent with the Drell-Yan model.

  1. MassiveNuS: cosmological massive neutrino simulations

    Science.gov (United States)

    Liu, Jia; Bird, Simeon; Zorrilla Matilla, José Manuel; Hill, J. Colin; Haiman, Zoltán; Madhavacheril, Mathew S.; Petri, Andrea; Spergel, David N.

    2018-03-01

    The non-zero mass of neutrinos suppresses the growth of cosmic structure on small scales. Since the level of suppression depends on the sum of the masses of the three active neutrino species, the evolution of large-scale structure is a promising tool to constrain the total mass of neutrinos and possibly shed light on the mass hierarchy. In this work, we investigate these effects via a large suite of N-body simulations that include massive neutrinos using an analytic linear-response approximation: the Cosmological Massive Neutrino Simulations (MassiveNuS). The simulations include the effects of radiation on the background expansion, as well as the clustering of neutrinos in response to the nonlinear dark matter evolution. We allow three cosmological parameters to vary: the neutrino mass sum Mν in the range of 0–0.6 eV, the total matter density Ωm, and the primordial power spectrum amplitude As. The rms density fluctuation in spheres of 8 comoving Mpc/h (σ8) is a derived parameter as a result. Our data products include N-body snapshots, halo catalogues, merger trees, ray-traced galaxy lensing convergence maps for four source redshift planes between zs=1–2.5, and ray-traced cosmic microwave background lensing convergence maps. We describe the simulation procedures and code validation in this paper. The data are publicly available at http://columbialensing.org.

  2. Spacetime structure of massive Majorana particles and massive gravitino

    Energy Technology Data Exchange (ETDEWEB)

    Ahluwalia, D.V.; Kirchbach, M. [Theoretical Physics Group, Facultad de Fisica, Universidad Autonoma de Zacatecas, A.P. 600, 98062 Zacatecas (Mexico)

    2003-07-01

    The profound difference between Dirac and Majorana particles is traced back to the possibility of having physically different constructs in the (1/2, 0) ⊕ (0, 1/2) representation space. Contrary to Dirac particles, Majorana-particle propagators are shown to differ from the simple linear γ^μ p_μ structure. Furthermore, neither Majorana particles, nor their antiparticles can be associated with a well defined arrow of time. The inevitable consequence of this peculiarity is the particle-antiparticle metamorphosis giving rise to neutrinoless double beta decay, on the one side, and enabling spin-1/2 fields to act as gauge fields, gauginos, on the other side. The second part of the lecture notes is devoted to the massive gravitino. We argue that a spin measurement in the rest frame for an unpolarized ensemble of massive gravitinos, associated with the spinor-vector [(1/2, 0) ⊕ (0, 1/2)] ⊗ (1/2, 1/2) representation space, would yield the result 3/2 with probability one half, and 1/2 with probability one half. The latter is distributed uniformly, i.e. as 1/4, among the two spin-1/2⁺ and spin-1/2⁻ states of opposite parities. From that we draw the conclusion that the massive gravitino should be interpreted as a particle of multiple spin. (Author)

  3. Study of the compressibility of the nucleon

    Energy Technology Data Exchange (ETDEWEB)

    Morsch, P.H. [Forschungszentrum Juelich GmbH (Germany). Inst. fuer Kernphysik; Laboratoire National Saturne, Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France)]

    1996-12-31

    A brief discussion of the theoretical and experimental situation in baryon spectroscopy is given. Then, the radial structure is discussed, related to the ground state form factors and the compressibility. The compressibility derived from experimental data is compared with results from different nucleon models. From the study of the Roper resonance in nuclei, information on the dynamical radius of the nucleon can be obtained. Experiments have been performed on the deuteron and ¹²C which show no shift of the Roper resonance in these systems. This indicates no sizeable 'swelling' or 'shrinking' of the nucleon in the nuclear medium. (K.A.). 25 refs.

  4. Study of the compressibility of the nucleon

    International Nuclear Information System (INIS)

    Morsch, P.H.

    1996-01-01

    A brief discussion of the theoretical and experimental situation in baryon spectroscopy is given. Then, the radial structure is discussed, related to the ground state form factors and the compressibility. The compressibility derived from experimental data is compared with results from different nucleon models. From the study of the Roper resonance in nuclei, information on the dynamical radius of the nucleon can be obtained. Experiments have been performed on the deuteron and ¹²C which show no shift of the Roper resonance in these systems. This indicates no sizeable 'swelling' or 'shrinking' of the nucleon in the nuclear medium. (K.A.)

  5. Minimal theory of massive gravity

    International Nuclear Information System (INIS)

    De Felice, Antonio; Mukohyama, Shinji

    2016-01-01

    We propose a new theory of massive gravity with only two propagating degrees of freedom. While the homogeneous and isotropic background cosmology and the tensor linear perturbations around it are described by exactly the same equations as those in the de Rham–Gabadadze–Tolley (dRGT) massive gravity, the scalar and vector gravitational degrees of freedom are absent in the new theory at the fully nonlinear level. Hence the new theory provides a stable nonlinear completion of the self-accelerating cosmological solution that was originally found in the dRGT theory. The cosmological solution in the other branch, often called the normal branch, is also rendered stable in the new theory and, for the first time, makes it possible to realize an effective equation-of-state parameter different from (either larger or smaller than) −1 without introducing any extra degrees of freedom.

  6. Spin-3 topologically massive gravity

    Energy Technology Data Exchange (ETDEWEB)

    Chen Bin, E-mail: bchen01@pku.edu.cn [Department of Physics, and State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871 (China); Center for High Energy Physics, Peking University, Beijing 100871 (China); Long Jiang, E-mail: longjiang0301@gmail.com [Department of Physics, and State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871 (China); Wu Junbao, E-mail: wujb@ihep.ac.cn [Institute of High Energy Physics, and Theoretical Physics Center for Science Facilities, Chinese Academy of Sciences, Beijing 100049 (China)

    2011-11-24

    In this Letter, we study the spin-3 topologically massive gravity (TMG), paying special attention to its properties at the chiral point. We propose an action describing the higher spin fields coupled to TMG. We discuss the traceless spin-3 fluctuations around the AdS₃ vacuum and find that there is an extra local massive mode, besides the left-moving and right-moving boundary massless modes. At the chiral point, this extra mode becomes massless and degenerates with the left-moving mode. We show that at the chiral point the only degrees of freedom in the theory are the boundary right-moving graviton and spin-3 field. We conjecture that spin-3 chiral gravity with generalized Brown-Henneaux boundary conditions is holographically dual to a 2D chiral CFT with classical W₃ algebra and central charge c_R = 3l/G.

  7. Minimal theory of massive gravity

    Directory of Open Access Journals (Sweden)

    Antonio De Felice

    2016-01-01

    Full Text Available We propose a new theory of massive gravity with only two propagating degrees of freedom. While the homogeneous and isotropic background cosmology and the tensor linear perturbations around it are described by exactly the same equations as those in the de Rham–Gabadadze–Tolley (dRGT) massive gravity, the scalar and vector gravitational degrees of freedom are absent in the new theory at the fully nonlinear level. Hence the new theory provides a stable nonlinear completion of the self-accelerating cosmological solution that was originally found in the dRGT theory. The cosmological solution in the other branch, often called the normal branch, is also rendered stable in the new theory and, for the first time, makes it possible to realize an effective equation-of-state parameter different from (either larger or smaller than) −1 without introducing any extra degrees of freedom.

  8. Search of massive star formation with COMICS

    Science.gov (United States)

    Okamoto, Yoshiko K.

    2004-04-01

    Mid-infrared observations are useful for studies of massive star formation. COMICS in particular offers powerful tools: imaging surveys of the circumstellar structures of forming massive stars, such as massive disks and cavity structures; mass estimates from spectroscopy of fine-structure lines; and high-dispersion spectroscopy to trace gas motion around formed stars. COMICS will open the next generation of infrared studies of massive star formation.

  9. The physics of massive neutrinos

    CERN Document Server

    Kayser, Boris; Perrier, Frederic

    1989-01-01

    This book explains the physics and phenomenology of massive neutrinos. The authors argue that neutrino mass is not unlikely and consider briefly the search for evidence of this mass in decay processes before they examine the physics and phenomenology of neutrino oscillation. The physics of Majorana neutrinos (neutrinos which are their own antiparticles) is then discussed. This volume requires of the reader only a knowledge of quantum mechanics and of very elementary quantum field theory.

  10. Vaidya spacetime in massive gravity's rainbow

    Directory of Open Access Journals (Sweden)

    Yaghoub Heydarzade

    2017-11-01

    Full Text Available In this paper, we will analyze the energy-dependent deformation of massive gravity using the formalism of massive gravity's rainbow. So, we will use the Vainshtein mechanism and the dRGT mechanism for the energy-dependent massive gravity, and thus analyze a ghost-free theory of massive gravity's rainbow. We study the energy dependence of a time-dependent geometry by analyzing the radiating Vaidya solution in this theory of massive gravity's rainbow. The energy-dependent deformation of this Vaidya metric will be performed using suitable rainbow functions.

  11. Compressive laser ranging.

    Science.gov (United States)

    Babbitt, Wm Randall; Barber, Zeb W; Renner, Christoffer

    2011-12-15

    Compressive sampling has been previously proposed as a technique for sampling radar returns and determining sparse range profiles with a reduced number of measurements compared to conventional techniques. By employing modulation on both transmission and reception, compressive sensing in ranging is extended to the direct measurement of range profiles without intermediate measurement of the return waveform. This compressive ranging approach enables the use of pseudorandom binary transmit waveforms and return modulation, along with low-bandwidth optical detectors to yield high-resolution ranging information. A proof-of-concept experiment is presented. With currently available compact, off-the-shelf electronics and photonics, such as high data rate binary pattern generators and high-bandwidth digital optical modulators, compressive laser ranging can readily achieve subcentimeter resolution in a compact, lightweight package.
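
    A toy numerical version of this measurement model (an illustrative sketch, not the authors' experiment): a sparse range profile is observed through pseudorandom binary modulation, and the profile is recovered by iterative soft thresholding; all sizes, the reflector positions, and the regularization weight are assumptions made for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bins, n_meas = 512, 96                     # range bins vs. far fewer coded measurements
profile = np.zeros(n_bins)
profile[[40, 41, 300]] = [1.0, 0.6, 0.8]     # three reflectors (assumed scene)

# Pseudorandom binary modulation on transmit/receive collapses each shot
# to one number on a low-bandwidth detector: y = Phi @ profile
Phi = rng.choice([-1.0, 1.0], size=(n_meas, n_bins)) / np.sqrt(n_meas)
y = Phi @ profile

# Recover the sparse profile by iterative soft thresholding (ISTA)
x = np.zeros(n_bins)
step = 1.0 / np.linalg.norm(Phi, 2) ** 2     # 1 / Lipschitz constant of the gradient
for _ in range(500):
    x = x + step * Phi.T @ (y - Phi @ x)     # gradient step on the data fit
    x = np.sign(x) * np.maximum(np.abs(x) - step * 0.01, 0.0)  # sparsity-promoting shrink

print(np.flatnonzero(np.abs(x) > 0.1))       # -> [ 40  41 300]
```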

  12. STABLE ISOTOPE GEOCHEMISTRY OF MASSIVE ICE

    Directory of Open Access Journals (Sweden)

    Yurij K. Vasil’chuk

    2016-01-01

    Full Text Available The paper summarises stable-isotope research on massive ice in the Russian and North American Arctic, and includes the latest understanding of massive-ice formation. A new classification of massive-ice complexes is proposed, encompassing the range and variability of massive ice. It distinguishes two new categories of massive-ice complexes: homogeneous massive-ice complexes have a similar structure, properties and genesis throughout, whereas heterogeneous massive-ice complexes vary spatially (in their structure and properties) and genetically within a locality and consist of two or more homogeneous massive-ice bodies. Analysis of pollen and spores in massive ice from Subarctic regions and from ice and snow cover of Arctic ice caps assists with interpretation of the origin of massive ice. Radiocarbon ages of massive ice and host sediments are considered together with isotope values of heavy oxygen and deuterium from massive ice plotted at a uniform scale in order to assist interpretation and correlation of the ice.

  13. Terror weapons. Ridding the world of nuclear, biological and chemical weapons - Commission on mass destruction weapons; Armes de terreur. Debarrasser le monde des armes nucleaires, biologiques et chimiques - Commission sur les armes de destruction massive

    Energy Technology Data Exchange (ETDEWEB)

    Blix, H.; Journe, V.

    2010-07-01

    This book addresses in 8 chapters the ambitious challenge of ridding the world of all weapons of mass destruction: 1 - re-launching disarmament; 2 - terror weapons: nature of threats and answers (weakness of traditional answers, counter-proliferation); 3 - nuclear weapons (preventing proliferation and terrorism, reducing the threat and the number of nuclear weapons, from regulation to banning); 4 - biological or toxin weapons; 5 - chemical weapons; 6 - vectors, anti-missile defenses and space weapons; 7 - exports control, international assistance and non-governmental actors; 8 - respect, verification, enforcement and role of the United Nations. The recommendations and works of the Commission are presented in the appendix together with the declaration adopted on April 30, 2009. (J.S.)

  14. Spacetime structure of massive Majorana particles and massive gravitino

    CERN Document Server

    Ahluwalia, D V

    2003-01-01

    The profound difference between Dirac and Majorana particles is traced back to the possibility of having physically different constructs in the (1/2, 0) ⊕ (0, 1/2) representation space. Contrary to Dirac particles, Majorana-particle propagators are shown to differ from the simple linear γ^μ p_μ structure. Furthermore, neither Majorana particles, nor their antiparticles can be associated with a well defined arrow of time. The inevitable consequence of this peculiarity is the particle-antiparticle metamorphosis giving rise to neutrinoless double beta decay, on the one side, and enabling spin-1/2 fields to act as gauge fields, gauginos, on the other side. The second part of the lecture notes is devoted to the massive gravitino. We argue that a spin measurement in the rest frame for an unpolarized ensemble of massive gravitinos, associated with the spinor-vector [(1/2, 0) ⊕ (0, 1/2)] ⊗ (1/2, 1/2) representation space, would yield the result 3/2 with probability one half, and 1/2 with probability one half. The ...

  15. Massive stars, successes and challenges

    OpenAIRE

    Meynet, Georges; Maeder, André; Georgy, Cyril; Ekström, Sylvia; Eggenberger, Patrick; Barblan, Fabio; Song, Han Feng

    2017-01-01

    We give a brief overview of where we stand with respect to some old and new questions bearing on how massive stars evolve and end their lifetimes. We focus on the following key points that are further discussed by other contributions during this conference: convection, mass loss, rotation, magnetic fields and multiplicity. For purposes of clarity, each of these processes is discussed on its own, but we have to keep in mind that they all interact with one another, offering a large variety of ...

  16. Massive stars, successes and challenges

    Science.gov (United States)

    Meynet, Georges; Maeder, André; Georgy, Cyril; Ekström, Sylvia; Eggenberger, Patrick; Barblan, Fabio; Song, Han Feng

    2017-11-01

    We give a brief overview of where we stand with respect to some old and new questions bearing on how massive stars evolve and end their lifetimes. We focus on the following key points that are further discussed by other contributions during this conference: convection, mass loss, rotation, magnetic fields and multiplicity. For purposes of clarity, each of these processes is discussed on its own, but we have to keep in mind that they all interact with one another, offering a large variety of outputs, some of them still to be discovered.

  17. Coming to grips with nuclear winter

    International Nuclear Information System (INIS)

    Scherr, S.J.

    1985-01-01

    This editorial examines the politics related to the concept of nuclear winter, a term used to describe temperature changes brought on by the injection of smoke into the atmosphere by the massive fires set off by nuclear explosions. The climate change alone could cause crop failures and lead to massive starvation. The author suggests that the prospect of a nuclear winter should be a deterrent to any nuclear exchange.

  18. Rapid depressurization of a compressible fluid

    International Nuclear Information System (INIS)

    Dang, M.; Dupont, J.F.; Weber, H.

    1978-08-01

    The rapid depressurization of a plenum is a situation frequently encountered in the dynamical analysis of nuclear gas cycles of the HHT type. Various methods of numerical analysis for a one-dimensional flow model are examined: the finite difference method, the control volume method, and the method of characteristics. Based on the shallow-water analogy to compressible flow, the numerical results are compared with those from a water table set up to simulate a standard problem. (Auth.)
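
    For a flavour of the finite difference approach mentioned above (an illustrative sketch only, not the report's scheme), the following integrates a 1-D isothermal compressible flow model of a pressurized plenum venting into a low-pressure region using a Lax-Friedrichs update; the sound speed, grid, and initial pressure ratio are arbitrary assumptions.

```python
import numpy as np

# 1-D isothermal compressible flow, conservative form:
#   U = [rho, rho*u],  F(U) = [rho*u, rho*u**2 + c**2 * rho]   (p = c**2 * rho)
c = 340.0                                            # assumed sound speed [m/s]
nx, L, t_end = 200, 1.0, 5e-4
dx = L / nx
rho = np.where(np.arange(nx) < nx // 2, 10.0, 1.0)   # pressurized plenum on the left
mom = np.zeros(nx)

t = 0.0
while t < t_end:
    u = mom / rho
    dt = 0.4 * dx / np.max(np.abs(u) + c)            # CFL-limited time step
    F_rho = mom.copy()
    F_mom = mom * u + c**2 * rho
    # Lax-Friedrichs update on interior points
    rho[1:-1] = 0.5 * (rho[2:] + rho[:-2]) - dt / (2 * dx) * (F_rho[2:] - F_rho[:-2])
    mom[1:-1] = 0.5 * (mom[2:] + mom[:-2]) - dt / (2 * dx) * (F_mom[2:] - F_mom[:-2])
    rho[0], rho[-1] = rho[1], rho[-2]                # rigid-wall boundaries
    mom[0], mom[-1] = -mom[1], -mom[-2]
    t += dt

print(float(rho.min()), float(rho.max()))            # expansion fan and shock have formed
```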

  19. Solid holography and massive gravity

    International Nuclear Information System (INIS)

    Alberte, Lasma; Baggioli, Matteo; Khmelnitsky, Andrei; Pujolàs, Oriol

    2016-01-01

    Momentum dissipation is an important ingredient in condensed matter physics that requires a translation breaking sector. In the bottom-up gauge/gravity duality, this implies that the gravity dual is massive. We start here a systematic analysis of holographic massive gravity (HMG) theories, which admit field theory dual interpretations and which, therefore, might store interesting condensed matter applications. We show that there are many phases of HMG that are fully consistent effective field theories and which have been left overlooked in the literature. The most important distinction between the different HMG phases is that they can be clearly separated into solids and fluids. This can be done both at the level of the unbroken spacetime symmetries as well as concerning the elastic properties of the dual materials. We extract the modulus of rigidity of the solid HMG black brane solutions and show how it relates to the graviton mass term. We also consider the implications of the different HMGs on the electric response. We show that the types of response that can be consistently described within this framework is much wider than what is captured by the narrow class of models mostly considered so far.

  20. Solid holography and massive gravity

    Energy Technology Data Exchange (ETDEWEB)

    Alberte, Lasma [Abdus Salam International Centre for Theoretical Physics,Strada Costiera 11, 34151, Trieste (Italy); Baggioli, Matteo [Institut de Física d’Altes Energies (IFAE),The Barcelona Institute of Science and Technology (BIST), Campus UAB, 08193 Bellaterra, Barcelona (Spain); Department of Physics, Institute for Condensed Matter Theory, University of Illinois,1110 W. Green Street, Urbana, IL 61801 (United States); Khmelnitsky, Andrei [Abdus Salam International Centre for Theoretical Physics, Strada Costiera 11, 34151, Trieste (Italy); Pujolàs, Oriol [Institut de Física d’Altes Energies (IFAE),The Barcelona Institute of Science and Technology (BIST), Campus UAB, 08193 Bellaterra, Barcelona (Spain)

    2016-02-17

    Momentum dissipation is an important ingredient in condensed matter physics that requires a translation breaking sector. In the bottom-up gauge/gravity duality, this implies that the gravity dual is massive. We start here a systematic analysis of holographic massive gravity (HMG) theories, which admit field theory dual interpretations and which, therefore, might store interesting condensed matter applications. We show that there are many phases of HMG that are fully consistent effective field theories and which have been left overlooked in the literature. The most important distinction between the different HMG phases is that they can be clearly separated into solids and fluids. This can be done both at the level of the unbroken spacetime symmetries as well as concerning the elastic properties of the dual materials. We extract the modulus of rigidity of the solid HMG black brane solutions and show how it relates to the graviton mass term. We also consider the implications of the different HMGs on the electric response. We show that the types of response that can be consistently described within this framework is much wider than what is captured by the narrow class of models mostly considered so far.

  1. Nonpainful wide-area compression inhibits experimental pain.

    Science.gov (United States)

    Honigman, Liat; Bar-Bachar, Ofrit; Yarnitsky, David; Sprecher, Elliot; Granovsky, Yelena

    2016-09-01

    Compression therapy, a well-recognized treatment for lymphoedema and venous disorders, pressurizes limbs and generates massive non-noxious afferent sensory barrages. The aim of this study was to examine whether such afferent activity has an analgesic effect when applied on the lower limbs, hypothesizing that larger compression areas induce stronger analgesic effects, and whether this effect correlates with conditioned pain modulation (CPM). Thirty young healthy subjects received painful heat and pressure stimuli (47°C for 30 seconds, forearm; 300 kPa for 15 seconds, wrist) before and during 3 compression protocols of either SMALL (up to ankles), MEDIUM (up to knees), or LARGE (up to hips) compression areas. Conditioned pain modulation (heat pain conditioned by noxious cold water) was tested before and after each compression protocol. The LARGE protocol induced more analgesia for heat than the SMALL protocol (P < 0.001). The analgesic effect interacted with gender (P = 0.015). The LARGE protocol was more efficient for females, whereas the MEDIUM protocol was more efficient for males. Pressure pain was reduced by all protocols (P < 0.001) with no differences between protocols and no gender effect. Conditioned pain modulation was more efficient than the compression-induced analgesia. For the LARGE protocol, precompression CPM efficiency positively correlated with compression-induced analgesia. Large body area compression exerts an area-dependent analgesic effect on experimental pain stimuli. The observed correlation with pain inhibition in response to robust non-noxious sensory stimulation may suggest that compression therapy shares similar mechanisms with inhibitory pain modulation assessed through CPM.

  2. Two Cases of Massive Hydrothorax Complicating Peritoneal Dialysis

    International Nuclear Information System (INIS)

    Bae, Sang Kyun; Yum, Ha Yong; Rim, Hark

    1994-01-01

    Massive hydrothorax complicating continuous ambulatory peritoneal dialysis (CAPD) is relatively rare. A 67-year-old male patient and a 23-year-old female patient on CAPD presented with massive pleural effusion. They had been on peritoneal dialysis for end-stage renal disease for 8 months and 2 weeks, respectively. We injected a ⁹⁹ᵐTc-labelled radiopharmaceutical (phytate and MAA, respectively) into the peritoneal cavity with the dialysate. Anterior, posterior and right lateral images were obtained. The studies reveal visible radioactivity in the right chest, indicating communication between the peritoneal and the pleural space. After sclerotherapy with tetracycline, the same studies reveal no radioactivity in the right chest, suggesting successful therapy. We think nuclear imaging is a simple and noninvasive method for the differential diagnosis of pleural effusion in patients during CAPD and for the evaluation of therapy.

  3. Optical pulse compression

    International Nuclear Information System (INIS)

    Glass, A.J.

    1975-01-01

    The interest in using large lasers to achieve a very short and intense pulse for generating fusion plasma has provided a strong impetus to reexamine the possibilities of optical pulse compression at high energy. Pulse compression allows one to generate pulses of long duration (minimizing damage problems) and subsequently compress optical pulses to achieve the short pulse duration required for specific applications. The ideal device for carrying out this program has not been developed. Of the two approaches considered, the Gires–Tournois approach is limited by the fact that the bandwidth and compression are intimately related, so that the group delay dispersion times the square of the bandwidth is about unity for all simple Gires–Tournois interferometers. The Treacy grating pair does not suffer from this limitation, but is inefficient because diffraction generally occurs in several orders and is limited by the problem of optical damage to the grating surfaces themselves. Nonlinear and parametric processes were explored. Some pulse compression was achieved by these techniques; however, they are generally difficult to control and are not very efficient. (U.S.)
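
    The quoted limitation can be made explicit with a back-of-the-envelope argument (my notation, not the paper's): write the group delay dispersion as φ″ and the bandwidth as Δω, so that a simple Gires–Tournois interferometer satisfies φ″(Δω)² ≈ 1, while compressing a chirped pulse of duration T_in to its transform limit T_out ≈ 1/Δω requires φ″ ≈ T_in/Δω. Hence:

```latex
% Compression ratio achievable when the GDD-bandwidth product is fixed,
% phi'' (Delta omega)^2 ~ 1, as for a simple Gires-Tournois interferometer:
\frac{T_\mathrm{in}}{T_\mathrm{out}}
  \;\approx\; T_\mathrm{in}\,\Delta\omega
  \;\approx\; \phi''\,(\Delta\omega)^2
  \;\approx\; 1 ,
```

    i.e. essentially no net compression, which is the limitation the abstract describes.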

  4. The Final Stages of Massive Star Evolution and Their Supernovae

    Science.gov (United States)

    Heger, Alexander

    In this chapter I discuss the final stages in the evolution of massive stars - stars that are massive enough to burn nuclear fuel all the way to iron group elements in their core. The core eventually collapses to form a neutron star or a black hole when electron captures and photo-disintegration reduce the pressure support to an extent that it no longer can hold up against gravity. The late burning stages of massive stars are a rich subject by themselves, and in them many of the heavy elements in the universe are first generated. The late evolution of massive stars strongly depends on their mass, and hence can be significantly affected by mass loss due to stellar winds and episodic mass loss events - a critical ingredient that we do not know as well as we would like. If the star loses all of its hydrogen envelope, a Type I supernova results; if it does not, a Type II supernova is observed. Whether the star makes a neutron star or a black hole, or a neutron star at first and a black hole later, and how fast they spin, largely affects the energetics and asymmetry of the observed supernova explosion. Beyond photon-based astronomy, other than the Sun, a supernova (SN 1987A) has been the only object in the sky we have ever observed in neutrinos, and supernovae may also be the first thing we will ever see in gravitational wave detectors like LIGO. I conclude this chapter by reviewing the deaths of the most massive stars and of Population III stars.

  5. [Common types of massive intraoperative haemorrhage, treatment philosophy and operating skills in pelvic cancer surgery].

    Science.gov (United States)

    Wang, Gang-cheng; Han, Guang-sen; Ren, Ying-kun; Xu, Yong-chao; Zhang, Jian; Lu, Chao-min; Zhao, Yu-zhou; Li, Jian; Gu, Yan-hui

    2013-10-01

    To explore the common types of massive intraoperative bleeding, their clinical characteristics, and the treatment philosophy and operating skills in pelvic cancer surgery. We treated massive intraoperative bleeding in 19 patients with pelvic cancer in our department from January 2003 to March 2012. Their clinical data were retrospectively analyzed. The clinical features of massive intraoperative bleeding were analyzed, the treatment experience and lessons were summed up, and the operating skills to manage this serious issue were analyzed. In this group of 19 patients, 7 cases were of presacral venous plexus bleeding, 5 cases of internal iliac vein bleeding, 6 cases of anterior sacral venous plexus and internal iliac vein bleeding, and one case of internal and external iliac vein bleeding. Six cases of anterior sacral plexus bleeding and 4 cases of internal iliac vein bleeding were treated with suture ligation to stop the bleeding. Six cases of anterior sacral and internal iliac vein bleeding, one case of anterior sacral vein bleeding, and one case of internal iliac vein bleeding were managed with transabdominal perineal incision or transabdominal cotton pad compression hemostasis. One case of internal and external iliac vein bleeding was treated with direct ligation of the external iliac vein and compression hemostasis of the internal iliac vein. Among the 19 patients, 18 had effective hemostasis. Their blood loss was 400-1500 ml, and they had a fair postoperative recovery. One patient died due to massive intraoperative bleeding of ca. 4500 ml. Most massive intraoperative bleeding during pelvic cancer surgery is from the presacral venous plexus and internal iliac vein. The operator should follow the treatment philosophy of saving the patient's life above all, properly performing suture ligation or compression hemostasis according to the actual situation, with well-mastered hemostatic operating skills.

  6. Isentropic Compression of Argon

    International Nuclear Information System (INIS)

    Oona, H.; Solem, J.C.; Veeser, L.R.; Ekdahl, C.A.; Rodriquez, P.J.; Younger, S.M.; Lewis, W.; Turley, W.D.

    1997-01-01

    We are studying the transition of argon from an insulator to a conductor by compressing the frozen gas isentropically to pressures at which neighboring atomic orbitals overlap sufficiently to allow some electron motion between atoms. Argon and the other rare gases have closed electron shells and therefore remain monatomic, even when they solidify. Their simple structure makes it likely that any measured change in conductivity is due to changes in the atomic structure, not in molecular configuration. As the crystal is compressed, the band gap closes, allowing increased conductivity. We have begun research to determine the conductivity at high pressures, and it is our intention to determine the compression at which the crystal becomes a metal.

  7. Pulsed Compression Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Roestenberg, T. [University of Twente, Enschede (Netherlands)

    2012-06-07

    The advantages of the Pulsed Compression Reactor (PCR) over internal combustion engine-type chemical reactors are briefly discussed. Over the last four years a project concerning the fundamentals of the PCR technology has been performed by the University of Twente, Enschede, Netherlands. In order to assess the feasibility of applying the PCR principle to the conversion of methane to syngas, several fundamental questions needed to be answered. Two important questions that relate to the applicability of the PCR for any process are: how large is the heat transfer rate from a rapidly compressed and expanded volume of gas, and how does this heat transfer rate compare to the energy contained in the compressed gas? And: can stable operation with a completely free piston, as intended for the PCR, be achieved?

  8. Medullary compression syndrome

    International Nuclear Information System (INIS)

    Barriga T, L.; Echegaray, A.; Zaharia, M.; Pinillos A, L.; Moscol, A.; Barriga T, O.; Heredia Z, A.

    1994-01-01

    The authors made a retrospective study of 105 patients treated in the Radiotherapy Department of the National Institute of Neoplasmic Diseases from 1973 to 1992. The objective of this evaluation was to determine the influence of radiotherapy in patients with medullary compression syndrome with respect to pain palliation and improvement of functional impairment. Treatment sheets of patients with medullary compression were reviewed: 32 out of 39 patients (82%) who came to hospital by their own means continued walking after treatment; 8 out of 66 patients (12%) who came in a wheelchair or were bedridden could mobilize on their own after treatment; 41 patients (64%) had partial alleviation of pain after treatment. In those who came by their own means and did not change their characteristics, functional improvement was observed. It is concluded that radiotherapy offers palliative benefit in patients with medullary compression syndrome. (authors). 20 refs., 5 figs., 6 tabs

  9. Legal contamination of food products in case of nuclear accident. The CRIIRAD criticizes the outrageous work performed by Euratom experts, and calls for a massive mobilisation against the project of the European Commission

    International Nuclear Information System (INIS)

    Castanier, Corinne

    2015-01-01

    After having recalled the content of the project of the European Commission on the definition of maximum permissible levels of radioactive contamination of food products, which would be applied in case of nuclear accident, this report first outlines that the associated risk levels are unacceptable (the maximum dose limit would not be respected by far). The authors outline numerous extremely severe anomalies and errors which occurred in the process of elaboration of the project. They try to identify responsibilities for these errors, and wonder whether they are due to incompetence, or made on purpose, as they always go in the same direction. The CRIIRAD therefore calls for a European mobilisation to sign a petition for a complete review of the applicable regulation. Letters written to or by members of European institutions are provided.

  10. On maximal massive 3D supergravity

    OpenAIRE

    Bergshoeff , Eric A; Hohm , Olaf; Rosseel , Jan; Townsend , Paul K

    2010-01-01

    We construct, at the linearized level, the three-dimensional (3D) N = 4 supersymmetric "general massive supergravity" and the maximally supersymmetric N = 8 "new massive supergravity". We also construct the maximally supersymmetric linearized N = 7 topologically massive supergravity, although we expect N = 6 to be maximal at the non-linear level.

  11. On the singularities of massive superstring amplitudes

    International Nuclear Information System (INIS)

    Foda, O.

    1987-01-01

    Superstring one-loop amplitudes with massive external states are shown to be in general ill-defined due to internal on-shell propagators. However, we argue that since any massive string state (in the uncompactified theory) has a finite lifetime to decay into massless particles, such amplitudes are not terms in the perturbative expansion of physical S-matrix elements: these can be defined only with massless external states. Consistent massive amplitudes require an off-shell formalism. (orig.)

  12. On the singularities of massive superstring amplitudes

    Energy Technology Data Exchange (ETDEWEB)

    Foda, O.

    1987-06-04

    Superstring one-loop amplitudes with massive external states are shown to be in general ill-defined due to internal on-shell propagators. However, we argue that since any massive string state (in the uncompactified theory) has a finite lifetime to decay into massless particles, such amplitudes are not terms in the perturbative expansion of physical S-matrix elements: these can be defined only with massless external states. Consistent massive amplitudes require an off-shell formalism.

  13. Graph Compression by BFS

    Directory of Open Access Journals (Sweden)

    Alberto Apostolico

    2009-08-01

    Full Text Available The Web Graph is a large-scale graph that does not fit in main memory, so that lossless compression methods have been proposed for it. This paper introduces a compression scheme that combines efficient storage with fast retrieval of the information in a node. The scheme exploits the properties of the Web Graph without assuming an ordering of the URLs, so that it may be applied to more general graphs. Tests on some datasets in use achieve space savings of about 10% over existing methods.
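
    The abstract does not spell out the scheme, but a standard way a BFS traversal helps graph compression (shown here as a hypothetical illustration, not the paper's algorithm) is to renumber nodes in BFS order, so that neighbor ids end up close together, and then store each adjacency list as small gaps; the function names and the toy graph are assumptions.

```python
from collections import deque

def bfs_order(adj, root=0):
    """Renumber nodes in BFS order (assumes a connected graph reachable from root)."""
    order, seen, queue = [], {root}, deque([root])
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return {old: new for new, old in enumerate(order)}

def gap_encode(adj, relabel):
    """Store each adjacency list as deltas of sorted neighbor ids; small gaps code cheaply."""
    encoded = {}
    for u, nbrs in adj.items():
        ids = sorted(relabel[v] for v in nbrs)
        encoded[relabel[u]] = [ids[0]] + [b - a for a, b in zip(ids, ids[1:])]
    return encoded

adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
print(gap_encode(adj, bfs_order(adj)))
```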

  14. Light weakly interacting massive particles

    Science.gov (United States)

    Gelmini, Graciela B.

    2017-08-01

    Light weakly interacting massive particles (WIMPs) are dark matter particle candidates with weak scale interactions with the known particles, and masses in the GeV to tens of GeV range. Hints of light WIMPs have appeared in several dark matter searches in the last decade. The unprecedented possible coincidence, in tantalizingly close regions of mass and cross section, of four separate direct-detection experimental hints and a potential indirect-detection signal in gamma rays from the galactic center aroused considerable interest in our field. Even though these hints have not so far resulted in a discovery, they have had a significant impact on our field. Here we review the evidence for and against light WIMPs as dark matter candidates and discuss future relevant experiments and observations.

  15. Massive postpartum right renal hemorrhage.

    Science.gov (United States)

    Kiracofe, H L; Peterson, N

    1975-06-01

    All reported cases of massive postpartum right renal hemorrhage have involved healthy young primigravidas and blacks have predominated (4 of 7 women). Coagulopathies and underlying renal disease have been absent. Hematuria was painless in 5 of 8 cases. Hemorrhage began within 24 hours in 1 case, within 48 hours in 4 cases and 4 days post partum in 3 cases. Our first case is the only report in which hemorrhage has occurred in a primipara. Failure of closure or reopening of pyelovenous channels is suggested as the pathogenesis. The hemorrhage has been self-limiting, requiring no more than 1,500 cc whole blood replacement. Bleeding should stop spontaneously, and rapid renal pelvic clot lysis should follow with maintenance of adequate urine output and Foley catheter bladder decompression. To date surgical intervention has not been necessary.

  16. Cosmological attractors in massive gravity

    CERN Document Server

    Dubovsky, S; Tkachev, I I

    2005-01-01

    We study Lorentz-violating models of massive gravity which preserve rotations and are invariant under time-dependent shifts of the spatial coordinates. In the linear approximation, the Newtonian potential in these models has an extra "confining" term proportional to the distance from the source. We argue that during cosmological expansion the Universe may be driven to an attractor point with larger symmetry which includes particular simultaneous dilatations of time and space coordinates. The confining term in the potential vanishes as one approaches the attractor. In the vicinity of the attractor the extra contribution is present in the Friedmann equation which, in a certain range of parameters, gives rise to the cosmic acceleration.

  17. Massive Black Holes and Galaxies

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    Evidence has been accumulating for several decades that many galaxies harbor central mass concentrations that may be in the form of black holes with masses between a few million to a few billion times the mass of the Sun. I will discuss measurements over the last two decades, employing adaptive optics imaging and spectroscopy on large ground-based telescopes, that prove the existence of such a massive black hole in the Center of our Milky Way, beyond any reasonable doubt. These data also provide key insights into its properties and environment. Most recently, a tidally disrupting cloud of gas has been discovered on an almost radial orbit that reached its peri-distance of ~2000 Schwarzschild radii in 2014, promising to be a valuable tool for exploring the innermost accretion zone. Future interferometric studies of the Galactic Center black hole promise to be able to test gravity in its strong field limit.

  18. Stable massive particles at colliders

    Energy Technology Data Exchange (ETDEWEB)

    Fairbairn, M.; /Stockholm U.; Kraan, A.C.; /Pennsylvania U.; Milstead, D.A.; /Stockholm U.; Sjostrand, T.; /Lund U.; Skands, P.; /Fermilab; Sloan, T.; /Lancaster U.

    2006-11-01

    We review the theoretical motivations and experimental status of searches for stable massive particles (SMPs) which could be sufficiently long-lived as to be directly detected at collider experiments. The discovery of such particles would address a number of important questions in modern physics including the origin and composition of dark matter in the universe and the unification of the fundamental forces. This review describes the techniques used in SMP-searches at collider experiments and the limits so far obtained on the production of SMPs which possess various colour, electric and magnetic charge quantum numbers. We also describe theoretical scenarios which predict SMPs, the phenomenology needed to model their production at colliders and interactions with matter. In addition, the interplay between collider searches and open questions in cosmology such as dark matter composition are addressed.

  19. Compressible generalized Newtonian fluids

    Czech Academy of Sciences Publication Activity Database

    Málek, Josef; Rajagopal, K.R.

    2010-01-01

    Roč. 61, č. 6 (2010), s. 1097-1110 ISSN 0044-2275 Institutional research plan: CEZ:AV0Z20760514 Keywords : power law fluid * uniform temperature * compressible fluid Subject RIV: BJ - Thermodynamics Impact factor: 1.290, year: 2010

  20. Temporal compressive sensing systems

    Science.gov (United States)

    Reed, Bryan W.

    2017-12-12

    Methods and systems for temporal compressive sensing are disclosed, where within each of one or more sensor array data acquisition periods, one or more sensor array measurement datasets comprising distinct linear combinations of time slice data are acquired, and where mathematical reconstruction allows for calculation of accurate representations of the individual time slice datasets.

  1. Compression of Infrared images

    DEFF Research Database (Denmark)

    Mantel, Claire; Forchhammer, Søren

    2017-01-01

    best for bits-per-pixel rates below 1.4 bpp, while HEVC obtains best performance in the range 1.4 to 6.5 bpp. The compression performance is also evaluated based on maximum errors. These results also show that HEVC can achieve a precision of 1°C with an average of 1.3 bpp....

  2. Gas compression infrared generator

    International Nuclear Information System (INIS)

    Hug, W.F.

    1980-01-01

    A molecular gas is compressed in a quasi-adiabatic manner to produce pulsed radiation during each compressor cycle when the pressure and temperature are sufficiently high, and part of the energy is recovered during the expansion phase, as defined in U.S. Pat. No. 3,751,666; characterized by the use of a cylinder with a reciprocating piston as a compressor.

  3. Nuclear reactors

    International Nuclear Information System (INIS)

    Prescott, R.F.

    1976-01-01

    A nuclear reactor containment vessel faced internally with a metal liner is provided with thermal insulation for the liner, comprising one or more layers of compressible material such as ceramic fiber, as would be conventional in an advanced gas-cooled reactor, and also a superposed layer of ceramic bricks or tiles in combination with retention means therefor, the retention means (comprising studs projecting from the liner, and bolts or nuts in threaded engagement with the studs) being themselves insulated from the vessel interior so that the coolant temperatures achieved in a High-Temperature Reactor or a Fast Reactor can be tolerated within the vessel. The layer(s) of compressible material is held under a degree of compression either by the ceramic bricks or tiles themselves or by cover plates held on the studs, in which case the bricks or tiles are preferably bedded on a yielding layer (for example of carbon fibers) rather than directly on the cover plates.

  4. Massive Star Burps, Then Explodes

    Science.gov (United States)

    2007-04-01

    Berkeley -- In a galaxy far, far away, a massive star suffered a nasty double whammy. On Oct. 20, 2004, Japanese amateur astronomer Koichi Itagaki saw the star let loose an outburst so bright that it was initially mistaken for a supernova. The star survived, but for only two years. On Oct. 11, 2006, professional and amateur astronomers witnessed the star actually blowing itself to smithereens as Supernova 2006jc. [Image: Swift UVOT image (Credit: NASA / Swift / S. Immler)] "We have never observed a stellar outburst and then later seen the star explode," says University of California, Berkeley, astronomer Ryan Foley. His group studied the event with ground-based telescopes, including the 10-meter (32.8-foot) W. M. Keck telescopes in Hawaii. Narrow helium spectral lines showed that the supernova's blast wave ran into a slow-moving shell of material, presumably the progenitor's outer layers ejected just two years earlier. If the spectral lines had been caused by the supernova's fast-moving blast wave, the lines would have been much broader. [Image: artistic rendering of two years in the life of a massive blue supergiant star, which burped and spewed a shell of gas, then, two years later, exploded; when the supernova slammed into the shell of gas, X-rays were produced (Credit: NASA / Sonoma State Univ. / A. Simonnet)] Another group, led by Stefan Immler of NASA's Goddard Space Flight Center, Greenbelt, Md., monitored SN 2006jc with NASA's Swift satellite and Chandra X-ray Observatory. By observing how the supernova brightened in X-rays, a result of the blast wave slamming into the outburst ejecta, they could measure the amount of gas blown off in the 2004 outburst: about 0.01 solar mass, the equivalent of about 10 Jupiters. "The beautiful aspect of our SN 2006jc observations is that although they were obtained in different parts of the electromagnetic spectrum, in the optical and in X-rays, they lead to the same conclusions," says Immler. "This

  5. An effective theory of massive gauge bosons

    International Nuclear Information System (INIS)

    Doria, R.M.; Helayel Neto, J.A.

    1986-01-01

    The coupling of a group-valued massive scalar field to a gauge field through a symmetric rank-2 field strength is studied. By considering energies very small compared with the mass of the scalar and invoking the decoupling theorem, one is left with a low-energy effective theory describing the dynamics of massive vector fields. (Author) [pt

  6. On the singularities of massive superstring amplitudes

    NARCIS (Netherlands)

    Foda, O.

    1987-01-01

    Superstring one-loop amplitudes with massive external states are shown to be in general ill-defined due to internal on-shell propagators. However, we argue that since any massive string state (in the uncompactified theory) has a finite lifetime to decay into massless particles, such amplitudes are

  7. Massive vector fields and black holes

    International Nuclear Information System (INIS)

    Frolov, V.P.

    1977-04-01

    A massive vector field inside the event horizon created by the static sources located outside the black hole is investigated. It is shown that the back reaction of such a field on the metric near r = 0 cannot be neglected. The possibility of the space-time structure changing near r = 0 due to the external massive field is discussed

  8. Management of massive haemoptysis | Adegboye | Nigerian Journal ...

    African Journals Online (AJOL)

    Background: This study compares two management techniques in the treatment of massive haemoptysis. Method: All patients with massive haemoptysis treated between January 1969 and December 1980 (group 1) were retrospectively reviewed and those prospectively treated between January 1981 and August 1999 ...

  9. Nitrogen chronology of massive main sequence stars

    NARCIS (Netherlands)

    Köhler, K.; Borzyszkowski, M.; Brott, I.; Langer, N.; de Koter, A.

    2012-01-01

    Context. Rotational mixing in massive main sequence stars is predicted to monotonically increase their surface nitrogen abundance with time. Aims. We use this effect to design a method for constraining the age and the inclination angle of massive main sequence stars, given their observed luminosity,

  10. Modeling basic creep in concrete at early-age under compressive and tensile loading

    Energy Technology Data Exchange (ETDEWEB)

    Hilaire, Adrien, E-mail: adrien.hilaire@ens-cachan.fr [ENS Cachan/CNRS UMR8535/UPMC/PRES UniverSud Paris, Cachan (France); Benboudjema, Farid; Darquennes, Aveline; Berthaud, Yves [ENS Cachan/CNRS UMR8535/UPMC/PRES UniverSud Paris, Cachan (France); Nahas, Georges [ENS Cachan/CNRS UMR8535/UPMC/PRES UniverSud Paris, Cachan (France); Institut de radioprotection et de sureté nucléaire, Fontenay-aux-Roses (France)

    2014-04-01

    A numerical model has been developed to predict early-age cracking for massive concrete structures, and especially concrete nuclear containment vessels. Major phenomena are included: hydration, heat diffusion, autogenous and thermal shrinkage, creep and cracking. Since the structures studied are massive, drying is not taken into account. Such modeling requires the identification of several material parameters. Literature data are used to validate the basic creep model. A massive wall, representative of a concrete nuclear containment, is simulated; the predicted cracking is consistent with observation and is found to be highly sensitive to the creep phenomenon.

  11. Data compression with applications to digital radiology

    International Nuclear Information System (INIS)

    Elnahas, S.E.

    1985-01-01

    The structure of arithmetic codes is defined in terms of source parsing trees. The theoretical derivations of algorithms for the construction of optimal and sub-optimal structures are presented. The software simulation results demonstrate how arithmetic coding outperforms variable-length to variable-length coding. Linear predictive coding is presented for the compression of digital diagnostic images from several imaging modalities including computed tomography, nuclear medicine, ultrasound, and magnetic resonance imaging. The problem of designing optimal predictors is formulated and alternative solutions are discussed. The results indicate that noiseless compression factors between 1.7 and 7.4 can be achieved. With nonlinear predictive coding, noisy and noiseless compression techniques are combined in a novel way that may have a potential impact on picture archiving and communication systems in radiology. Adaptive fast discrete cosine transform coding systems are used as nonlinear block predictors, and optimal delta modulation systems are used as nonlinear sequential predictors. The off-line storage requirements for archiving diagnostic images are reasonably reduced by the nonlinear block predictive coding. The online performance, however, seems to be bounded by that of the linear systems. The subjective quality of imperfect image reproductions from the cosine transform coding is promising and prompts future research on the compression of diagnostic images by transform coding systems and the clinical evaluation of these systems
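
    To illustrate why linear prediction enables noiseless compression (a generic first-order example, not the thesis's optimal predictors), the sketch below predicts each pixel from its left neighbor and compares the empirical entropy of the raw pixels with that of the residuals; the synthetic image is an assumption made for the demo.

```python
import numpy as np

def lpc_residuals(img):
    """Predict each pixel from its left neighbor and keep the integer residual."""
    img = img.astype(np.int32)
    res = img.copy()
    res[:, 1:] = img[:, 1:] - img[:, :-1]   # residuals are small where rows are smooth
    return res

def entropy_bits(a):
    """Empirical zeroth-order entropy in bits/symbol, a proxy for noiseless code length."""
    _, counts = np.unique(a, return_counts=True)
    p = counts / a.size
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
img = np.cumsum(rng.integers(-2, 3, size=(64, 64)), axis=1) + 128   # smooth synthetic "image"
print(entropy_bits(img), entropy_bits(lpc_residuals(img)))          # residuals need fewer bits
```

    A lossless entropy coder applied to the residuals rather than the raw pixels then realizes the compression, since the residual alphabet is far more concentrated.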

  12. Compressible Fluid Suspension Performance Testing

    National Research Council Canada - National Science Library

    Hoogterp, Francis

    2003-01-01

    ... compressible fluid suspension system that was designed and installed on the vehicle by DTI. The purpose of the tests was to evaluate the possible performance benefits of the compressible fluid suspension system...

  13. LZ-Compressed String Dictionaries

    OpenAIRE

    Arz, Julian; Fischer, Johannes

    2013-01-01

    We show how to compress string dictionaries using the Lempel-Ziv (LZ78) data compression algorithm. Our approach is validated experimentally on dictionaries of up to 1.5 GB of uncompressed text. We achieve compression ratios often outperforming the existing alternatives, especially on dictionaries containing many repeated substrings. Our query times remain competitive.
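
    For readers unfamiliar with LZ78, here is a minimal self-contained Python sketch of the phrase-dictionary idea the paper builds on (plain LZ78 only; this is not the authors' compressed-dictionary data structure):

      def lz78_compress(text):
          """Encode text as (phrase_index, next_char) pairs; index 0 is the empty phrase."""
          dictionary = {"": 0}
          phrase, output = "", []
          for ch in text:
              if phrase + ch in dictionary:
                  phrase += ch
              else:
                  output.append((dictionary[phrase], ch))
                  dictionary[phrase + ch] = len(dictionary)
                  phrase = ""
          if phrase:  # flush a trailing phrase that is already in the dictionary
              output.append((dictionary[phrase[:-1]], phrase[-1]))
          return output

      def lz78_decompress(pairs):
          phrases = [""]
          for idx, ch in pairs:
              phrases.append(phrases[idx] + ch)
          return "".join(phrases[1:])

      s = "abracadabra abracadabra"
      assert lz78_decompress(lz78_compress(s)) == s

    Repeated substrings become short (index, char) pairs, which is why dictionaries with many repeats compress especially well.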

  14. Tree compression with top trees

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Landau, Gad M.

    2013-01-01

    We introduce a new compression scheme for labeled trees based on top trees [3]. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

  15. Tree compression with top trees

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Landau, Gad M.

    2015-01-01

    We introduce a new compression scheme for labeled trees based on top trees. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

  16. Statistical Compression for Climate Model Output

    Science.gov (United States)

    Hammerling, D.; Guinness, J.; Soh, Y. J.

    2017-12-01

    Numerical climate model simulations run at high spatial and temporal resolutions generate massive quantities of data. As our computing capabilities continue to increase, storing all of the data is not sustainable, and thus it is important to develop methods for representing the full datasets by smaller compressed versions. We propose a statistical compression and decompression algorithm based on storing a set of summary statistics as well as a statistical model describing the conditional distribution of the full dataset given the summary statistics. We decompress the data by computing conditional expectations and conditional simulations from the model given the summary statistics. Conditional expectations represent our best estimate of the original data but are subject to oversmoothing in space and time. Conditional simulations introduce realistic small-scale noise so that the decompressed fields are neither too smooth nor too rough compared with the original data. Considerable attention is paid to accurately modeling the original dataset (one year of daily mean temperature data), particularly with regard to the inherent spatial nonstationarity in global fields, and to determining the statistics to be stored, so that the variation in the original data can be closely captured, while allowing for fast decompression and conditional emulation on modest computers.
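
    The core decompression step, conditional expectation and conditional simulation under a Gaussian model given stored summary statistics, can be sketched in a few lines of NumPy. The stationary covariance model, the block-mean summaries, and the sizes below are illustrative assumptions, not the authors' nonstationary global model:

      import numpy as np

      rng = np.random.default_rng(1)
      n, k = 200, 20                                  # field size, number of summaries
      t = np.arange(n)
      Sigma = np.exp(-np.abs(t[:, None] - t[None, :]) / 10.0)  # assumed covariance
      x = rng.multivariate_normal(np.zeros(n), Sigma)          # "original data"

      A = np.kron(np.eye(k), np.full((1, n // k), k / n))      # block-mean summaries
      s = A @ x                                       # the stored compressed statistics

      G = Sigma @ A.T @ np.linalg.inv(A @ Sigma @ A.T)
      x_hat = G @ s                                   # conditional expectation (smooth)
      Sigma_c = Sigma - G @ A @ Sigma                 # conditional covariance
      x_sim = x_hat + rng.multivariate_normal(np.zeros(n), Sigma_c, method="svd")
      # x_sim adds realistic small-scale variability, so the decompressed field
      # is neither too smooth nor too rough compared with the original.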

  17. Digital cinema video compression

    Science.gov (United States)

    Husak, Walter

    2003-05-01

    The Motion Picture Industry began a transition from film based distribution and projection to digital distribution and projection several years ago. Digital delivery and presentation offers the prospect to increase the quality of the theatrical experience for the audience, reduce distribution costs to the distributors, and create new business opportunities for the theater owners and the studios. Digital Cinema also presents an opportunity to provide increased flexibility and security of the movies for the content owners and the theater operators. Distribution of content via electronic means to theaters is unlike any of the traditional applications for video compression. The transition from film-based media to electronic media represents a paradigm shift in video compression techniques and applications that will be discussed in this paper.

  18. Fingerprints in compressed strings

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Cording, Patrick Hagge

    2017-01-01

    In this paper we show how to construct a data structure for a string S of size N compressed into a context-free grammar of size n that supports efficient Karp–Rabin fingerprint queries to any substring of S. That is, given indices i and j, the answer to a query is the fingerprint of the substring S[i,j]. We present the first O(n) space data structures that answer fingerprint queries without decompressing any characters. For Straight Line Programs (SLP) we get O(log N) query time, and for Linear SLPs (an SLP derivative that captures LZ78 compression and its variations) we get O(log log N) query time...
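
    As background, a Karp-Rabin fingerprint of any substring can be computed in O(1) from precomputed prefix fingerprints of the uncompressed string; the paper's contribution is answering the same queries directly on the grammar-compressed representation. A short Python primer on the uncompressed case (the base and modulus are arbitrary choices, not values from the paper):

      P = (1 << 61) - 1          # a Mersenne prime modulus (assumption)
      B = 2_000_003              # the fingerprint base (assumption)

      def prefix_tables(s):
          """pre[i] = fingerprint of s[0:i]; pw[i] = B**i mod P."""
          pre, pw = [0], [1]
          for ch in s:
              pre.append((pre[-1] * B + ord(ch)) % P)
              pw.append((pw[-1] * B) % P)
          return pre, pw

      def fingerprint(pre, pw, i, j):
          """Karp-Rabin fingerprint of s[i:j], computed in O(1)."""
          return (pre[j] - pre[i] * pw[j - i]) % P

      pre, pw = prefix_tables("mississippi")
      assert fingerprint(pre, pw, 1, 4) == fingerprint(pre, pw, 4, 7)  # "iss" == "iss"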

  19. WSNs Microseismic Signal Subsection Compression Algorithm Based on Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Zhouzhou Liu

    2015-01-01

    To address the low compression ratios and high communication energy consumption of wireless-network microseismic monitoring, this paper proposes a segmented compression algorithm based on the characteristics of microseismic signals and on compressive sensing (CS) theory, applied during the transmission process. The algorithm segments the collected data according to the number of nonzero elements; by reducing the number of combinations of nonzero elements within each segment it improves the accuracy of signal reconstruction, while exploiting compressive sensing to achieve a high compression ratio. Experimental results show that, with the quantum chaos immune clone reconstruction (Q-CSDR) algorithm used for reconstruction and a signal sparsity above 40, the signal can be compressed at a compression ratio of more than 0.4 with a mean square error of less than 0.01, prolonging the network lifetime by a factor of 2.

  20. Compressed sensing electron tomography

    International Nuclear Information System (INIS)

    Leary, Rowan; Saghi, Zineb; Midgley, Paul A.; Holland, Daniel J.

    2013-01-01

    The recent mathematical concept of compressed sensing (CS) asserts that a small number of well-chosen measurements can suffice to reconstruct signals that are amenable to sparse or compressible representation. In addition to powerful theoretical results, the principles of CS are being exploited increasingly across a range of experiments to yield substantial performance gains relative to conventional approaches. In this work we describe the application of CS to electron tomography (ET) reconstruction and demonstrate the efficacy of CS–ET with several example studies. Artefacts present in conventional ET reconstructions such as streaking, blurring of object boundaries and elongation are markedly reduced, and robust reconstruction is shown to be possible from far fewer projections than are normally used. The CS–ET approach enables more reliable quantitative analysis of the reconstructions as well as novel 3D studies from extremely limited data.
    Highlights:
    • Compressed sensing (CS) theory and its application to electron tomography (ET) is described.
    • The practical implementation of CS–ET is outlined and its efficacy demonstrated with examples.
    • High-fidelity tomographic reconstruction is possible from a small number of images.
    • The CS–ET reconstructions can be more reliably segmented and analysed quantitatively.
    • CS–ET is applicable to different image content by choice of an appropriate sparsifying transform.
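
    A generic sparse-recovery routine of the kind CS reconstruction rests on can be sketched with orthogonal matching pursuit, shown below as a simple greedy stand-in for the TV/wavelet-regularized solvers typically used in CS–ET; the matrix sizes and data are assumptions:

      import numpy as np

      def omp(A, y, k):
          """Greedy k-sparse solution of y ≈ A x (orthogonal matching pursuit)."""
          x = np.zeros(A.shape[1])
          residual, support = y.astype(float), []
          for _ in range(k):
              support.append(int(np.argmax(np.abs(A.T @ residual))))
              coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
              residual = y - A[:, support] @ coef
          x[support] = coef
          return x

      rng = np.random.default_rng(2)
      A = rng.standard_normal((40, 120)); A /= np.linalg.norm(A, axis=0)
      x0 = np.zeros(120); x0[[5, 40, 77]] = [1.0, -2.0, 1.5]
      x_rec = omp(A, A @ x0, k=3)   # recovers the 3-sparse signal from 40 measurements
      print(np.max(np.abs(x_rec - x0)))  # typically ≈ 0 for such well-conditioned cases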

  1. Nonsingular universe in massive gravity's rainbow

    Science.gov (United States)

    Hendi, S. H.; Momennia, M.; Eslam Panah, B.; Panahiyan, S.

    2017-06-01

    One of the fundamental open questions in cosmology is whether we can regard the evolution of the universe as being free of a singularity such as a Big Bang or a Big Rip. This challenging subject motivates considering a nonsingular universe in the far past with an arbitrarily large vacuum energy. Considering the high-energy regime of cosmic history, it is believed that Einstein gravity should be corrected to an effective energy-dependent theory, which can be obtained through gravity's rainbow. On the other hand, employing massive gravity has provided solutions to some of the long-standing fundamental problems of cosmology, such as the cosmological constant problem and the self-acceleration of the universe. Considering these aspects of gravity's rainbow and massive gravity, in this paper we initiate the study of FRW cosmology in the massive gravity's rainbow formalism. First, we show that although massive gravity modifies FRW cosmology, it does not by itself remove the Big Bang singularity. Then, we generalize massive gravity to the case of energy-dependent spacetime and find that massive gravity's rainbow can remove the early-universe singularity. We bring together all the essential conditions for having a nonsingular universe, and the effects of both the gravity's rainbow and massive gravity generalizations on such criteria are determined.

  2. Massive radiological releases profoundly differ from controlled releases

    International Nuclear Information System (INIS)

    Pascucci-Cahen, Ludivine; Patrick, Momal

    2012-11-01

    Preparing for a nuclear accident implies understanding potential consequences. While many specialized experts have been working on different particular aspects, surprisingly little effort has been dedicated to establishing the big picture and providing a global and balanced image of all major consequences. IRSN has been working on the cost of nuclear accidents, an exercise which must strive to be as comprehensive as possible since any omission obviously underestimates the cost. It therefore provides (ideally) an estimate of all cost components, thus revealing the structure of accident costs, and hence sketching a global picture. On a French PWR, it appears that controlled releases would cause an 'economical' accident with limited radiological consequences when compared to other costs; in contrast, massive releases would trigger a major crisis with strong radiological consequences. The two types of crises would confront managers with different types of challenges. (authors)

  3. Using massive digital libraries a LITA guide

    CERN Document Server

    Weiss, Andrew

    2014-01-01

    Some have viewed the ascendance of the digital library as a kind of existential apocalypse, nothing less than the beginning of the end for the traditional library. But Weiss, recognizing the concept of the library as a "big idea" that has been implemented in many ways over thousands of years, is not so gloomy. In this thought-provoking and unabashedly optimistic book, he explores how massive digital libraries are already adapting to society's needs, and looks ahead to the massive digital libraries of tomorrow, covering: the author's criteria for defining massive digital libraries; a history o

  4. A massive, dead disk galaxy in the early Universe.

    Science.gov (United States)

    Toft, Sune; Zabl, Johannes; Richard, Johan; Gallazzi, Anna; Zibetti, Stefano; Prescott, Moire; Grillo, Claudio; Man, Allison W S; Lee, Nicholas Y; Gómez-Guijarro, Carlos; Stockmann, Mikkel; Magdis, Georgios; Steinhardt, Charles L

    2017-06-21

    At redshift z = 2, when the Universe was just three billion years old, half of the most massive galaxies were extremely compact and had already exhausted their fuel for star formation. It is believed that they were formed in intense nuclear starbursts and that they ultimately grew into the most massive local elliptical galaxies seen today, through mergers with minor companions, but validating this picture requires higher-resolution observations of their centres than is currently possible. Magnification from gravitational lensing offers an opportunity to resolve the inner regions of galaxies. Here we report an analysis of the stellar populations and kinematics of a lensed z = 2.1478 compact galaxy, which, surprisingly, turns out to be a fast-spinning, rotationally supported disk galaxy. Its stars must have formed in a disk, rather than in a merger-driven nuclear starburst. The galaxy was probably fed by streams of cold gas, which were able to penetrate the hot halo gas until they were cut off by shock heating from the dark matter halo. This result confirms previous indirect indications that the first galaxies to cease star formation must have gone through major changes not just in their structure, but also in their kinematics, to evolve into present-day elliptical galaxies.

  5. The Compressed Baryonic Matter experiment

    Directory of Open Access Journals (Sweden)

    Seddiki Sélim

    2014-04-01

    The Compressed Baryonic Matter (CBM) experiment is a next-generation fixed-target detector which will operate at the future Facility for Antiproton and Ion Research (FAIR) in Darmstadt. The goal of this experiment is to explore the QCD phase diagram in the region of high net baryon densities using high-energy nucleus-nucleus collisions. Its research program includes the study of the equation of state of nuclear matter at high baryon densities, the search for the deconfinement and chiral phase transitions, and the search for the QCD critical point. The CBM detector is designed to measure both bulk observables with a large acceptance and rare diagnostic probes such as charm particles, multi-strange hyperons, and low-mass vector mesons in their di-leptonic decay. The physics program of CBM is summarized, followed by an overview of the detector concept, a selection of the expected physics performance, and the status of preparation of the experiment.

  6. Massive congenital tricuspid insufficiency in the newborn

    International Nuclear Information System (INIS)

    Bogren, H.G.; Ikeda, R.; Riemenschneider, T.A.; Merten, D.F.; Janos, G.G.

    1979-01-01

    Three cases of massive congenital tricuspid incompetence in the newborn are reported and discussed from diagnostic, pathologic and etiologic points of view. The diagnosis is important as cases have been reported with spontaneous resolution. (Auth.)

  7. Current management of massive hemorrhage in trauma

    DEFF Research Database (Denmark)

    Johansson, Pär I; Stensballe, Jakob; Ostrowski, Sisse R

    2012-01-01

    Hemorrhage remains a major cause of potentially preventable deaths. Trauma and massive transfusion are associated with coagulopathy secondary to tissue injury, hypoperfusion, dilution, and consumption of clotting factors and platelets. Concepts of damage control surgery have evolved...

  8. How I treat patients with massive hemorrhage

    DEFF Research Database (Denmark)

    Johansson, Pär I; Stensballe, Jakob; Oliveri, Roberto

    2014-01-01

    Massive hemorrhage is associated with coagulopathy and high mortality. The transfusion guidelines up to 2006 recommended that resuscitation of massive hemorrhage should occur in successive steps, using crystalloids, colloids and red blood cells (RBC) in the early phase, and plasma and platelets in the late phase. With the introduction of the cell-based model of hemostasis in the mid-1990s, our understanding of the hemostatic process and of coagulopathy has improved. This has contributed to a change in resuscitation strategy and transfusion therapy of massive hemorrhage, along with an acceptance... outcome, although final evidence on outcome from randomized controlled trials is lacking. We here present how we, in Copenhagen and Houston, today manage patients with massive hemorrhage.

  9. Compressive Transient Imaging

    KAUST Repository

    Sun, Qilin

    2017-04-01

    High-resolution transient/3D imaging technology is of high interest in both scientific research and commercial applications. Currently, all transient imaging methods suffer from low resolution or time-consuming mechanical scanning. We propose a new method based on TCSPC and compressive sensing that achieves high-resolution transient imaging with a capture process of several seconds. A picosecond laser sends a series of equally spaced pulses while the synchronized SPAD camera's detection gate window is given a precise phase delay in each cycle. After capturing enough points, we can assemble the whole signal. By inserting a DMD device into the system, we can modulate all frames of data with binary random patterns and later reconstruct a super-resolution transient/3D image. Because the low fill factor of the SPAD sensor makes the compressive sensing scenario ill-conditioned, we designed and fabricated a diffractive microlens array. We propose a new CS reconstruction algorithm that simultaneously denoises measurements suffering from Poisson noise. Instead of a single SPAD sensor, we chose a SPAD array because it drastically reduces the required number of measurements and the reconstruction time. Furthermore, it is not easy to reconstruct a high-resolution image with only a single sensor, whereas an array only needs to reconstruct small patches from a few measurements. In this thesis, we evaluated the reconstruction methods using both clean measurements and versions corrupted by Poisson noise. The results show how integration over the layers influences image quality, and that our algorithm works well when the measurements suffer from non-trivial Poisson noise. It is a breakthrough in the areas of both transient imaging and compressive sensing.

  10. Fast Compressive Tracking.

    Science.gov (United States)

    Zhang, Kaihua; Zhang, Lei; Yang, Ming-Hsuan

    2014-10-01

    It is a challenging task to develop effective and efficient appearance models for robust object tracking due to factors such as pose variation, illumination change, occlusion, and motion blur. Existing online tracking algorithms often update models with samples from observations in recent frames. Although much success has been demonstrated, numerous issues remain to be addressed. First, while these adaptive appearance models are data-dependent, there does not exist a sufficient amount of data for online algorithms to learn from at the outset. Second, online tracking algorithms often encounter the drift problem. As a result of self-taught learning, misaligned samples are likely to be added and to degrade the appearance models. In this paper, we propose a simple yet effective and efficient tracking algorithm with an appearance model based on features extracted from a multiscale image feature space with a data-independent basis. The proposed appearance model employs non-adaptive random projections that preserve the structure of the image feature space of objects. A very sparse measurement matrix is constructed to efficiently extract the features for the appearance model. We compress sample images of the foreground target and the background using the same sparse measurement matrix. The tracking task is formulated as a binary classification via a naive Bayes classifier with online update in the compressed domain. A coarse-to-fine search strategy is adopted to further reduce the computational complexity in the detection procedure. The proposed compressive tracking algorithm runs in real time and performs favorably against state-of-the-art methods on challenging sequences in terms of efficiency, accuracy and robustness.
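
    The measurement step the authors describe, compressing high-dimensional multiscale features with a very sparse, data-independent random matrix, can be sketched as follows. An Achlioptas-style sparse projection is used here as a stand-in, and the dimensions are assumptions:

      import numpy as np

      def sparse_projection(m, n, s=3, seed=0):
          """Very sparse random projection: entries are +sqrt(s) or -sqrt(s),
          each with probability 1/(2s), and 0 otherwise."""
          rng = np.random.default_rng(seed)
          u = rng.random((m, n))
          R = np.zeros((m, n))
          R[u < 1 / (2 * s)] = np.sqrt(s)
          R[u > 1 - 1 / (2 * s)] = -np.sqrt(s)
          return R

      R = sparse_projection(m=50, n=10_000)          # one row per compressed feature
      v = np.random.default_rng(1).random(10_000)    # stand-in multiscale feature vector
      z = R @ v   # 50 compressed features; a naive Bayes classifier operates on z

    Because most entries of R are zero, extracting the compressed features is cheap, which is what makes real-time tracking in the compressed domain feasible.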

  11. Comparative data compression techniques and multi-compression results

    International Nuclear Information System (INIS)

    Hasan, M R; Ibrahimy, M I; Motakabber, S M A; Ferdaus, M M; Khan, M N H

    2013-01-01

    Data compression is very necessary in business data processing because of the cost savings it offers and the large volume of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the data, the better the transmission speed and the greater the time saved. In communication, we always want to transmit data efficiently and without noise. This paper presents several techniques for lossless text-type data compression, together with comparative results for single and multiple compression, which help identify the better compression output and inform the development of compression algorithms.
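
    The single- versus multiple-compression comparison is easy to reproduce with standard lossless codecs; the codecs and the test data below are assumptions, not the paper's:

      import bz2, lzma, zlib

      data = b"business data processing " * 4000
      for name, f in [("zlib", zlib.compress), ("bz2", bz2.compress), ("lzma", lzma.compress)]:
          once = f(data)
          twice = f(once)              # compress the already-compressed bytes again
          print(f"{name}: raw={len(data)} once={len(once)} twice={len(twice)}")
      # A second pass rarely helps: the first pass has already removed most of the
      # redundancy, and the near-random output may even grow slightly.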

  12. Massive cerebellar infarction: a neurosurgical approach

    Directory of Open Access Journals (Sweden)

    Salazar Luis Rafael Moscote

    2015-12-01

    Cerebellar infarction is a challenge for the neurosurgeon; rapid recognition is crucial to avoid devastating consequences. Massive cerebellar infarction shows pseudotumoral behavior and affects at least one third of the volume of the cerebellum. The vascular supply of the cerebellum is anatomically diverse, favoring the appearance of atypical infarcts. Neurosurgical management is critical in massive cerebellar infarction. We present a review of the literature.

  13. Analysis by compression

    DEFF Research Database (Denmark)

    Meredith, David

    MEL is a geometric music encoding language designed to allow musical objects to be encoded parsimoniously as sets of points in pitch-time space, generated by performing geometric transformations on component patterns. MEL has been implemented in Java and coupled with the SIATEC pattern discovery algorithm to allow compact encodings to be generated automatically from in extenso note lists. The MEL-SIATEC system is founded on the belief that music analysis and music perception can be modelled as the compression of in extenso descriptions of musical objects.

  14. Compressive Fatigue in Wood

    DEFF Research Database (Denmark)

    Clorius, Christian Odin; Pedersen, Martin Bo Uhre; Hoffmeyer, Preben

    1999-01-01

    An investigation of fatigue failure in wood subjected to load cycles in compression parallel to grain is presented. Small clear specimens of spruce are taken to failure in square-wave fatigue loading at a stress excitation level corresponding to 80% of the short-term strength. Four frequencies ranging from 0.01 Hz to 10 Hz are used. The number of cycles to failure is found to be a poor measure of the fatigue performance of wood. Creep, maximum strain, stiffness and work are monitored throughout the fatigue tests. Accumulated creep is suggested to be identified with damage, and a correlation...

  15. Compressive full waveform lidar

    Science.gov (United States)

    Yang, Weiyi; Ke, Jun

    2017-05-01

    To avoid a high-bandwidth detector, a fast A/D converter, and a large memory, a compressive full-waveform LIDAR system, which uses a temporally modulated laser instead of a pulsed laser, is studied in this paper. Full-waveform data from NEON (National Ecological Observatory Network) are used. Random binary patterns are used to modulate the source. To achieve 0.15 m ranging resolution, a 100 MSPS A/D converter is assumed to make the measurements. The SPIRAL algorithm with a canonical basis is employed when Poisson noise is considered under the low-illumination condition.

  16. Metal Hydride Compression

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Terry A. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Bowman, Robert [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Smith, Barton [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Anovitz, Lawrence [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jensen, Craig [Hawaii Hydrogen Carriers LLC, Honolulu, HI (United States)

    2017-07-01

    Conventional hydrogen compressors often contribute over half of the cost of hydrogen stations, have poor reliability, and have insufficient flow rates for a mature FCEV market. Fatigue associated with their moving parts, including cracking of diaphragms and failure of seals, leads to failure in conventional compressors, which is exacerbated by the repeated starts and stops expected at fueling stations. Furthermore, the conventional lubrication of these compressors with oil is generally unacceptable at fueling stations due to potential fuel contamination. Metal hydride (MH) technology offers a very good alternative to both conventional (mechanical) and newly developed (electrochemical, ionic liquid piston) methods of hydrogen compression. Advantages of MH compression include simplicity in design and operation, absence of moving parts, compactness, safety and reliability, and the possibility of utilizing waste industrial heat to power the compressor. Beyond conventional H2 supplies from pipelines or tanker trucks, another attractive scenario is on-site generation, pressurization, and delivery of pure H2 at pressure (≥ 875 bar) for refueling vehicles at electrolysis, wind, or solar production facilities in distributed locations that are too remote or widely dispersed for cost-effective bulk transport. MH hydrogen compression utilizes a reversible heat-driven interaction of a hydride-forming metal alloy with hydrogen gas to form the MH phase, and is a promising process for hydrogen energy applications [1,2]. To deliver hydrogen continuously, each stage of the compressor must consist of multiple MH beds with synchronized hydrogenation and dehydrogenation cycles. Multistage pressurization allows achievement of greater compression ratios using reduced temperature swings compared to single-stage compressors. The objectives of this project are to investigate and demonstrate on a laboratory scale a two-stage MH hydrogen (H2) gas compressor with a
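
    The heat-driven compression principle follows from the van 't Hoff relation for the hydride plateau pressure. A small sketch with typical, assumed AB5-alloy enthalpy and entropy values (not this project's alloys) shows how a modest temperature swing multiplies the equilibrium pressure:

      import numpy as np

      R = 8.314                       # gas constant, J/(mol K)
      dH, dS = -28_000.0, -110.0      # formation enthalpy/entropy (assumed values)

      def p_eq(T):
          """Van 't Hoff plateau pressure in bar (1 bar reference state)."""
          return np.exp(dH / (R * T) - dS / R)

      for T in (300.0, 400.0):
          print(f"T = {T:.0f} K: P_eq ≈ {p_eq(T):6.1f} bar")
      # Heating the bed from 300 K to 400 K raises the plateau pressure roughly
      # 16-fold; chaining such stages is what yields high overall compression ratios.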

  17. The Evolution of Low-Metallicity Massive Stars

    Science.gov (United States)

    Szécsi, Dorottya

    2016-07-01

    Massive star evolution taking place in astrophysical environments consisting almost entirely of hydrogen and helium - in other words, low-metallicity environments - is responsible for some of the most intriguing and energetic cosmic phenomena, including supernovae, gamma-ray bursts and gravitational waves. This thesis aims to investigate the life and death of metal-poor massive stars, using theoretical simulations of the stellar structure and evolution. Evolutionary models of rotating, massive stars (9-600 Msun) with an initial metal composition appropriate for the low-metallicity dwarf galaxy I Zwicky 18 are presented and analyzed. We find that the fast rotating models (300 km/s) become a particular type of objects predicted only at low-metallicity: the so-called Transparent Wind Ultraviolet INtense (TWUIN) stars. TWUIN stars are fast rotating massive stars that are extremely hot (90 kK), very bright and as compact as Wolf-Rayet stars. However, as opposed to Wolf-Rayet stars, their stellar winds are optically thin. As these hot objects emit intense UV radiation, we show that they can explain the unusually high number of ionizing photons of the dwarf galaxy I Zwicky 18, an observational quantity that cannot be understood solely based on the normal stellar population of this galaxy. On the other hand, we find that the most massive, slowly rotating models become another special type of object predicted only at low-metallicity: core-hydrogen-burning cool supergiant stars. Having a slow but strong stellar wind, these supergiants may be important contributors in the chemical evolution of young galactic globular clusters. In particular, we suggest that the low mass stars observed today could form in a dense, massive and cool shell around these, now dead, supergiants. This scenario is shown to explain the anomalous surface abundances observed in these low mass stars, since the shell itself, having been made of the mass ejected by the supergiant’s wind, contains nuclear

  18. Free compression tube. Applications

    Science.gov (United States)

    Rusu, Ioan

    2012-11-01

    During flight, a vehicle's propulsion energy must overcome gravity, ensure the displacement of air masses along the vehicle's trajectory, and cover both the energy lost to friction between the solid surface and the air and the kinetic energy imparted to air masses reflected by impact with the flying vehicle. Optimizing flight by increasing speed and reducing fuel consumption has directed research toward aerodynamics. Flying-vehicle shapes obtained through wind-tunnel studies optimize the impact with air masses and the airflow along the vehicle. Through energy-balance studies of vehicles in flight, the author, Ioan Rusu, directed his research toward reducing the energy lost at vehicle impact with air masses. In this respect, as compared to classical solutions for building the aerodynamic surfaces of flight vehicles, which reduce the impact and friction with air masses, Ioan Rusu has invented a device he named the free compression tube for rockets, registered with the State Office for Inventions and Trademarks of Romania, OSIM, deposit f 2011 0352. Mounted in front of a flight vehicle, it significantly eliminates the impact and friction of air masses with the vehicle's solid surfaces: the air masses come into contact with the air inside the free compression tube, and air-solid friction is eliminated and replaced by air-to-air friction.

  19. Photon compression in cylinders

    International Nuclear Information System (INIS)

    Ensley, D.L.

    1977-01-01

    It has been shown theoretically that intense microwave radiation is absorbed non-classically, by a newly enunciated mechanism, when interacting with hydrogen plasma. Fields > 1 MG and wavelengths > 1 mm are within this regime. The predicted absorption, approximately P_rf v_θ^e, has not yet been experimentally confirmed. The applications of such a coupling are many. If microwave bursts of approximately > 5 x 10^14 watts, 5 ns can be generated, the net generation of power from pellet fusion as well as various military applications become feasible. The purpose, then, of considering gas-gun photon compression is to obtain the above experimental capability by converting the gas kinetic energy directly into microwave form. Energies of > 10^5 joules cm^-2 and powers of > 10^13 watts cm^-2 are potentially available for photon interaction experiments using presently available technology. The following topics are discussed: microwave modes in a finite cylinder, injection, compression, switchout operation, and system performance parameter scaling.

  20. Fingerprints in Compressed Strings

    DEFF Research Database (Denmark)

    Bille, Philip; Cording, Patrick Hagge; Gørtz, Inge Li

    2013-01-01

    The Karp-Rabin fingerprint of a string is a type of hash value that, due to its strong properties, has been used in many string algorithms. In this paper we show how to construct a data structure for a string S of size N compressed by a context-free grammar of size n that answers fingerprint queries. That is, given indices i and j, the answer to a query is the fingerprint of the substring S[i,j]. We present the first O(n) space data structures that answer fingerprint queries without decompressing any characters. For Straight Line Programs (SLP) we get O(log N) query time, and for Linear SLPs (an SLP derivative that captures LZ78 compression and its variations) we get O(log log N) query time. Hence, our data structures have the same time and space complexity as for random access in SLPs. We utilize the fingerprint data structures to solve the longest common extension problem in query time O(log N log ℓ) and O...

  1. Compressive sensing in medical imaging.

    Science.gov (United States)

    Graff, Christian G; Sidky, Emil Y

    2015-03-10

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed.
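
    A canonical sparsity-exploiting reconstruction of the kind surveyed there is l1-regularized least squares. Below is a generic iterative soft-thresholding (ISTA) sketch in NumPy; it is not any specific CT or MRI algorithm from the article, and A here stands for an assumed undersampled measurement operator applied to sparse coefficients x:

      import numpy as np

      def ista(A, y, lam=0.05, n_iter=300):
          """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
          L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              g = x - A.T @ (A @ x - y) / L    # gradient step on the quadratic term
              x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft shrinkage
          return x

    The shrinkage step is what enforces sparsity, allowing high-quality reconstruction from fewer measurements than conventional sampling would require.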

  2. Introduction to compressible fluid flow

    CERN Document Server

    Oosthuizen, Patrick H

    2013-01-01

    Introduction; The Equations of Steady One-Dimensional Compressible Flow; Some Fundamental Aspects of Compressible Flow; One-Dimensional Isentropic Flow; Normal Shock Waves; Oblique Shock Waves; Expansion Waves - Prandtl-Meyer Flow; Variable Area Flows; Adiabatic Flow with Friction; Flow with Heat Transfer; Linearized Analysis of Two-Dimensional Compressible Flows; Hypersonic and High-Temperature Flows; High-Temperature Gas Effects; Low-Density Flows; Bibliography; Appendices
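
    As a worked example of the one-dimensional isentropic flow relations covered by the book (standard perfect-gas results; γ = 1.4 is an assumed value for air):

      gamma, M = 1.4, 2.0
      T0_T = 1 + 0.5 * (gamma - 1) * M**2            # stagnation-to-static temperature
      p0_p = T0_T ** (gamma / (gamma - 1))           # stagnation-to-static pressure
      A_Astar = (1 / M) * ((2 / (gamma + 1)) * T0_T) ** ((gamma + 1) / (2 * (gamma - 1)))
      print(f"M={M}: T0/T={T0_T:.4f}, p0/p={p0_p:.4f}, A/A*={A_Astar:.4f}")
      # At Mach 2 in air: T0/T = 1.8, p0/p ≈ 7.82, A/A* = 1.6875 (tabulated values).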

  3. Mammographic compression in Asian women.

    Science.gov (United States)

    Lau, Susie; Abdul Aziz, Yang Faridah; Ng, Kwan Hoong

    2017-01-01

    To investigate: (1) the variability of mammographic compression parameters amongst Asian women; and (2) the effects of reducing compression force on image quality and mean glandular dose (MGD) in Asian women, based on a phantom study. We retrospectively collected 15818 raw digital mammograms from 3772 Asian women aged 35-80 years who underwent screening or diagnostic mammography between Jan 2012 and Dec 2014 at our center. The mammograms were processed using volumetric breast density (VBD) measurement software (Volpara) to assess compression force, compression pressure, compressed breast thickness (CBT), breast volume, VBD and MGD against breast contact area. The effects of reducing compression force on image quality and MGD were also evaluated based on measurements obtained from 105 Asian women, as well as using the RMI156 Mammographic Accreditation Phantom and polymethyl methacrylate (PMMA) slabs. Compression force, compression pressure, CBT, breast volume, VBD and MGD correlated significantly with breast contact area (p < ...) in Asian women. The median compression force should be about 8.1 daN compared to the current 12.0 daN. Decreasing compression force from 12.0 daN to 9.0 daN increased CBT by 3.3±1.4 mm, increased MGD by 6.2-11.0%, and caused no significant effects on image quality (p > 0.05). A force-standardized protocol led to widely variable compression parameters in Asian women. Based on the phantom study, it is feasible to reduce compression force by up to 32.5% with minimal effects on image quality and MGD.

  4. An improvement analysis on video compression using file segmentation

    Science.gov (United States)

    Sharma, Shubhankar; Singh, K. John; Priya, M.

    2017-11-01

    Over the past two decades, the extreme evolution of the Internet has led to a massive rise in video technology and, significantly, in video consumption over the Internet, which constitutes the bulk of data traffic in general. Because video accounts for so much of the data on the World Wide Web, reducing the burden on the Internet and the bandwidth consumed by video makes video data easier to access. To this end, many video codecs have been developed, such as HEVC/H.265 and V9. Even with codecs like these, there remains a dilemma as to which technology is the better in terms of rate distortion and coding standard. This paper addresses the difficulty of achieving low delay in video compression and video applications, e.g., ad hoc video conferencing/streaming or surveillance. It also describes a benchmark of the HEVC and V9 video compression techniques based on subjective estimations of high-definition video content played back in web browsers. Moreover, it presents the experimental approach of dividing the video file into several segments for compression and reassembling them afterwards, to improve the efficiency of video compression on the web as well as in offline mode.

  5. GPU Lossless Hyperspectral Data Compression System for Space Applications

    Science.gov (United States)

    Keymeulen, Didier; Aranki, Nazeeh; Hopson, Ben; Kiely, Aaron; Klimesh, Matthew; Benkrid, Khaled

    2012-01-01

    On-board lossless hyperspectral data compression reduces data volume in order to meet NASA and DoD limited downlink capabilities. At JPL, a novel, adaptive and predictive technique for lossless compression of hyperspectral data, named the Fast Lossless (FL) algorithm, was recently developed. This technique uses an adaptive filtering method and achieves state-of-the-art performance in both compression effectiveness and low complexity. Because of its outstanding performance and suitability for real-time onboard hardware implementation, the FL compressor is being formalized as the emerging CCSDS Standard for Lossless Multispectral & Hyperspectral image compression. The FL compressor is well-suited for parallel hardware implementation. A GPU hardware implementation was developed for FL targeting the current state-of-the-art GPUs from NVIDIA™. The GPU implementation on a NVIDIA™ GeForce™ GTX 580 achieves a throughput performance of 583.08 Mbits/sec (44.85 MSamples/sec) and an acceleration of at least 6 times a software implementation running on a 3.47 GHz single-core Intel™ Xeon™ processor. This paper describes the design and implementation of the FL algorithm on the GPU. The massively parallel implementation will provide in the future a fast and practical real-time solution for airborne and space applications.

  6. Macron Formed Liner Compression as a Practical Method for Enabling Magneto-Inertial Fusion

    Energy Technology Data Exchange (ETDEWEB)

    Slough, John

    2011-12-10

    The entry of fusion as a viable, competitive source of power has been stymied by the challenge of finding an economical way to provide for the confinement and heating of the plasma fuel. The main impediment for current nuclear fusion concepts is the complexity and large mass associated with the confinement systems. To take advantage of the smaller scale, higher density regime of magnetic fusion, an efficient method for achieving the compressional heating required to reach fusion gain conditions must be found. The very compact, high energy density plasmoid commonly referred to as a Field Reversed Configuration (FRC) provides for an ideal target for this purpose. To make fusion with the FRC practical, an efficient method for repetitively compressing the FRC to fusion gain conditions is required. A novel approach to be explored in this endeavor is to remotely launch a converging array of small macro-particles (macrons) that merge and form a more massive liner inside the reactor which then radially compresses and heats the FRC plasmoid to fusion conditions. The closed magnetic field in the target FRC plasmoid suppresses the thermal transport to the confining liner significantly lowering the imploding power needed to compress the target. With the momentum flux being delivered by an assemblage of low mass, but high velocity macrons, many of the difficulties encountered with the liner implosion power technology are eliminated. The undertaking to be described in this proposal is to evaluate the feasibility achieving fusion conditions from this simple and low cost approach to fusion. During phase I the design and testing of the key components for the creation of the macron formed liner have been successfully carried out. Detailed numerical calculations of the merging, formation and radial implosion of the Macron Formed Liner (MFL) were also performed. The phase II effort will focus on an experimental demonstration of the macron launcher at full power, and the demonstration

  7. Adiabatic compression and radiative compression of magnetic fields

    International Nuclear Information System (INIS)

    Woods, C.H.

    1980-01-01

    Flux is conserved during mechanical compression of magnetic fields for both nonrelativistic and relativistic compressors. However, the relativistic compressor generates radiation, which can carry up to twice the energy content of the magnetic field compressed adiabatically. The radiation may be either confined or allowed to escape
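
    The flux-conserving statement in this record corresponds to the elementary scaling below (standard magnetostatics, written in LaTeX; the fixed-length cylindrical geometry is an assumption for concreteness):

      \Phi = \int \mathbf{B}\cdot \mathrm{d}\mathbf{A} = B\,A = \mathrm{const}
      \;\Longrightarrow\; B \propto \frac{1}{A},
      \qquad
      U_B = \frac{B^2}{2\mu_0}\,A\,\ell \;\propto\; \frac{1}{A} \quad (\text{fixed length } \ell),

    so halving the cross-section doubles the stored field energy; per the abstract, a relativistic compressor additionally radiates up to twice the adiabatically compressed field energy.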

  8. Presupernova evolution of massive stars

    International Nuclear Information System (INIS)

    Weaver, T.A.; Zimmerman, G.B.; Woosley, S.E.

    1977-01-01

    Population I stars of 15 M☉ and 25 M☉ have been evolved from the zero-age main sequence through iron core collapse utilizing a numerical model that incorporates both implicit hydrodynamics and a detailed treatment of nuclear reactions. The stars end their presupernova evolution as red supergiants with photospheric radii of 3.9 x 10^13 cm and 6.7 x 10^13 cm, respectively, and density structures similar to those invoked to explain Type II supernova light curves on a strictly hydrodynamic basis. Both stars are found to form substantially neutronized "iron" cores of 1.56 M☉ and 1.61 M☉, and central electron abundances of 0.427 and 0.439 moles/g, respectively, during hydrostatic silicon burning. Just prior to collapse, the abundances of the elements in the 25 M☉ star (excluding the neutronized iron core) have ratios strikingly close to their solar system values over the mass range from oxygen to calcium, while the 15 M☉ star is characterized by large enhancements of Ne, Mg, and Si. It is pointed out on nucleosynthetic grounds that the mass of the neutronized core must represent a lower limit to the mass of the neutron star or black hole remnant that stars in this mass range can normally produce.

  9. Waves and compressible flow

    CERN Document Server

    Ockendon, Hilary

    2016-01-01

    Now in its second edition, this book continues to give readers a broad mathematical basis for modelling and understanding the wide range of wave phenomena encountered in modern applications.  New and expanded material includes topics such as elastoplastic waves and waves in plasmas, as well as new exercises.  Comprehensive collections of models are used to illustrate the underpinning mathematical methodologies, which include the basic ideas of the relevant partial differential equations, characteristics, ray theory, asymptotic analysis, dispersion, shock waves, and weak solutions. Although the main focus is on compressible fluid flow, the authors show how intimately gasdynamic waves are related to wave phenomena in many other areas of physical science.   Special emphasis is placed on the development of physical intuition to supplement and reinforce analytical thinking. Each chapter includes a complete set of carefully prepared exercises, making this a suitable textbook for students in applied mathematics, ...

  10. The Compressed Baryonic Matter Experiment at FAIR

    International Nuclear Information System (INIS)

    Heuser, Johann M.

    2013-01-01

    The Compressed Baryonic Matter (CBM) experiment will explore the phase diagram of strongly interacting matter in the region of high net baryon densities. The experiment is being laid out for nuclear collision rates from 0.1 to 10 MHz to access a unique wide spectrum of probes, including rarest particles like hadrons containing charm quarks, or multi-strange hyperons. The physics programme will be performed with ion beams of energies up to 45 GeV/nucleon. Those will be delivered by the SIS-300 synchrotron at the completed FAIR accelerator complex. Parts of the research programme can already be addressed with the SIS-100 synchrotron at the start of FAIR operation in 2018. The initial energy range of up to 11 GeV/nucleon for heavy nuclei, 14 GeV/nucleon for light nuclei, and 29 GeV for protons, allows addressing the equation of state of compressed nuclear matter, the properties of hadrons in a dense medium, the production and propagation of charm near the production threshold, and exploring the third, strange dimension of the nuclide chart. In this article we summarize the CBM physics programme, the preparation of the detector, and give an outline of the recently begun construction of the Facility for Antiproton and Ion Research

  11. Photonic compressive sensing enabled data efficient time stretch optical coherence tomography

    Science.gov (United States)

    Mididoddi, Chaitanya K.; Wang, Chao

    2018-03-01

    Photonic time stretch (PTS) has enabled real-time spectral-domain optical coherence tomography (OCT). However, this method generates a torrent of massive data at a GHz stream rate, which must be captured at the Nyquist rate. If the OCT interferogram signal is sparse in the Fourier domain, which is always true for samples with a limited number of layers, it can be captured at a lower (sub-Nyquist) acquisition rate following the compressive sensing method. In this work we report a data-compressed PTS-OCT system based on photonic compressive sensing, with 66% compression, a low acquisition rate of 50 MHz, and a measurement speed of 1.51 MHz per depth profile. A new method is also proposed to improve the system with all-optical random pattern generation, which completely avoids the electronic bottleneck of traditional pseudorandom binary sequence (PRBS) generators.

  12. Nuclear reactor

    International Nuclear Information System (INIS)

    Gibbons, J.F.; McLaughlin, D.J.

    1978-01-01

    In the pressure vessel of the water-cooled nuclear reactor an internal flange is provided, from which the one- or two-part core barrel hangs by means of an external flange. A cylinder extends from the reactor vessel closure downwards to a seat on the core support structure and serves as a compression element for the transmission of the clamping load from the closure head to the core barrel (upper guide structure). With the core barrel, subject to tensile stress, between the vessel's internal flange and its seat on the one hand, and the compression of the cylinder or hold-down element between the closure head and the seat on the other, a very strong, elastically sprung structure is obtained. (DG) [de

  13. Application specific compression : final report.

    Energy Technology Data Exchange (ETDEWEB)

    Melgaard, David Kennett; Byrne, Raymond Harry; Myers, Daniel S.; Harrison, Carol D.; Lee, David S.; Lewis, Phillip J.; Carlson, Jeffrey J.

    2008-12-01

    With the continuing development of more capable data gathering sensors, comes an increased demand on the bandwidth for transmitting larger quantities of data. To help counteract that trend, a study was undertaken to determine appropriate lossy data compression strategies for minimizing their impact on target detection and characterization. The survey of current compression techniques led us to the conclusion that wavelet compression was well suited for this purpose. Wavelet analysis essentially applies a low-pass and high-pass filter to the data, converting the data into the related coefficients that maintain spatial information as well as frequency information. Wavelet compression is achieved by zeroing the coefficients that pertain to the noise in the signal, i.e. the high frequency, low amplitude portion. This approach is well suited for our goal because it reduces the noise in the signal with only minimal impact on the larger, lower frequency target signatures. The resulting coefficients can then be encoded using lossless techniques with higher compression levels because of the lower entropy and significant number of zeros. No significant signal degradation or difficulties in target characterization or detection were observed or measured when wavelet compression was applied to simulated and real data, even when over 80% of the coefficients were zeroed. While the exact level of compression will be data set dependent, for the data sets we studied, compression factors over 10 were found to be satisfactory where conventional lossless techniques achieved levels of less than 3.
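
    The zero-the-small-coefficients strategy described above is easy to demonstrate with a one-level Haar transform, used here as a stand-in for the report's (unspecified) wavelet family; the signal and threshold are assumptions:

      import numpy as np

      def haar_step(x):
          """One level of the orthonormal Haar transform (x must have even length)."""
          a = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass (approximation)
          d = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass (detail)
          return a, d

      def inverse_haar_step(a, d):
          x = np.empty(2 * a.size)
          x[0::2] = (a + d) / np.sqrt(2)
          x[1::2] = (a - d) / np.sqrt(2)
          return x

      rng = np.random.default_rng(3)
      signal = np.sin(np.linspace(0, 8 * np.pi, 1024)) + 0.05 * rng.standard_normal(1024)
      a, d = haar_step(signal)
      d[np.abs(d) < 0.1] = 0.0                   # zero the small (noise) coefficients
      recon = inverse_haar_step(a, d)
      print(np.count_nonzero(d), np.max(np.abs(recon - signal)))
      # Many detail coefficients become zero (good for entropy coding) while the
      # larger, lower-frequency signal content is nearly untouched.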

  14. Streaming Compression of Hexahedral Meshes

    Energy Technology Data Exchange (ETDEWEB)

    Isenburg, M; Courbet, C

    2010-02-03

    We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra, our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream documents when vertices are referenced for the last time (i.e. when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. This means in practice that our coder has only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes - even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k-hexahedra 'blade' mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB), with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.

  15. Data Compression with Linear Algebra

    OpenAIRE

    Etler, David

    2015-01-01

    A presentation on the applications of linear algebra to image compression. Covers entropy, the discrete cosine transform, thresholding, quantization, and examples of images compressed with DCT. Given in Spring 2015 at Ocean County College as part of the honors program.
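
    A toy version of the DCT image compression covered by the presentation, using 8x8 blocks with simple thresholding in place of quantization (block size, threshold, and the random stand-in image are assumptions):

      import numpy as np
      from scipy.fft import dctn, idctn

      rng = np.random.default_rng(4)
      img = rng.random((64, 64))                 # stand-in for a real image
      out = np.empty_like(img)
      for i in range(0, 64, 8):
          for j in range(0, 64, 8):
              c = dctn(img[i:i+8, j:j+8], norm="ortho")   # 2D DCT of one block
              c[np.abs(c) < 0.1] = 0.0           # discard small coefficients
              out[i:i+8, j:j+8] = idctn(c, norm="ortho")  # reconstruct the block

    Zeroing small coefficients concentrates the block's energy in a few values, which an entropy coder then stores compactly; quantization in real codecs plays the same role more gracefully.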

  16. Critical N = (1, 1) general massive supergravity

    Science.gov (United States)

    Deger, Nihat Sadik; Moutsopoulos, George; Rosseel, Jan

    2018-04-01

    In this paper we study the supermultiplet structure of N = (1, 1) General Massive Supergravity at non-critical and critical points of its parameter space. To do this, we first linearize the theory around its maximally supersymmetric AdS3 vacuum and obtain the full linearized Lagrangian, including fermionic terms. At generic values, the linearized modes can be organized as two massless and two massive multiplets, where supersymmetry relates them in the standard way. At critical points logarithmic modes appear, and we find that at three of such points some of the supersymmetry transformations are non-invertible in the logarithmic multiplets. However, at the fourth critical point, there is a massive logarithmic multiplet with invertible supersymmetry transformations.

  17. HOW TO FIND YOUNG MASSIVE CLUSTER PROGENITORS

    Energy Technology Data Exchange (ETDEWEB)

    Bressert, E.; Longmore, S.; Testi, L. [European Southern Observatory, Karl Schwarzschild Str. 2, D-85748 Garching bei Muenchen (Germany); Ginsburg, A.; Bally, J.; Battersby, C. [Center for Astrophysics and Space Astronomy, University of Colorado, Boulder, CO 80309 (United States)

    2012-10-20

    We propose that bound, young massive stellar clusters form from dense clouds that have escape speeds greater than the sound speed in photo-ionized gas. In these clumps, radiative feedback in the form of gas ionization is bottled up, enabling star formation to proceed to sufficiently high efficiency so that the resulting star cluster remains bound even after gas removal. We estimate the observable properties of the massive proto-clusters (MPCs) for existing Galactic plane surveys and suggest how they may be sought in recent and upcoming extragalactic observations. These surveys will potentially provide a significant sample of MPC candidates that will allow us to better understand extreme star-formation and massive cluster formation in the Local Universe.

  18. Primordial inhomogeneities from massive defects during inflation

    Energy Technology Data Exchange (ETDEWEB)

    Firouzjahi, Hassan; Karami, Asieh; Rostami, Tahereh, E-mail: firouz@ipm.ir, E-mail: karami@ipm.ir, E-mail: t.rostami@ipm.ir [School of Astronomy, Institute for Research in Fundamental Sciences (IPM), P.O. Box 19395-5531, Tehran (Iran, Islamic Republic of)

    2016-10-01

    We consider the imprints of local massive defects, such as a black hole or a massive monopole, during inflation. The massive defect breaks the background homogeneity. We consider the limit that the physical Schwarzschild radius of the defect is much smaller than the inflationary Hubble radius so a perturbative analysis is allowed. The inhomogeneities induced in scalar and gravitational wave power spectrum are calculated. We obtain the amplitudes of dipole, quadrupole and octupole anisotropies in curvature perturbation power spectrum and identify the relative configuration of the defect to CMB sphere in which large observable dipole asymmetry can be generated. We observe a curious reflection symmetry in which the configuration where the defect is inside the CMB comoving sphere has the same inhomogeneous variance as its mirror configuration where the defect is outside the CMB sphere.

  19. Massive type IIA supergravity and E10

    International Nuclear Information System (INIS)

    Henneaux, M.; Kleinschmidt, A.; Persson, D.; Jamsin, E.

    2009-01-01

    In this talk we investigate the symmetry under E10 of Romans' massive type IIA supergravity. We show that the dynamics of a spinning particle in a non-linear sigma model on the coset space E10/K(E10) reproduces the bosonic and fermionic dynamics of massive IIA supergravity, in the standard truncation. In particular, we identify Romans' mass with a generator of E10 that is beyond the realm of the generators of E10 considered in the eleven-dimensional analysis, but using the same, undeformed sigma model. As a consequence, this work provides a dynamical unification of the massless and massive versions of type IIA supergravity inside E10. (Abstract Copyright [2009], Wiley Periodicals, Inc.)

  20. Massive stars and X-ray pulsars

    International Nuclear Information System (INIS)

    Henrichs, H.

    1982-01-01

    This thesis is a collection of 7 separate articles entitled: long term changes in ultraviolet lines in γ CAS, UV observations of γ CAS: intermittent mass-loss enhancement, episodic mass loss in γ CAS and in other early-type stars, spin-up and spin-down of accreting neutron stars, an excentric close binary model for the X Persei system, has a 97 minute periodicity in 4U 1700-37/HD 153919 really been discovered, and, mass loss and stellar wind in massive X-ray binaries. (Articles 1, 2, 5, 6 and 7 have been previously published). The first three articles are concerned with the irregular mass loss in massive stars. The fourth critically reviews thoughts since 1972 on the origin of the changes in periodicity shown by X-ray pulsars. The last articles indicate the relation between massive stars and X-ray pulsars. (C.F.)

  1. Compressive Sensing in Communication Systems

    DEFF Research Database (Denmark)

    Fyhn, Karsten

    2013-01-01

    The need for cheaper, smarter and more energy efficient wireless devices is greater now than ever. This thesis addresses this problem and concerns the application of the recently developed sampling theory of compressive sensing in communication systems. Compressive sensing is the merging of signal acquisition and compression. It allows for sampling a signal with a rate below the bound dictated by the celebrated Shannon-Nyquist sampling theorem. In some communication systems this necessary minimum sample rate, dictated by the Shannon-Nyquist sampling theorem, is so high it is at the limit of what... with using compressive sensing in communication systems. The main contribution of this thesis is two-fold: 1) a new compressive sensing hardware structure for spread spectrum signals, which is simpler than the current state-of-the-art, and 2) a range of algorithms for parameter estimation for the class...

  2. A Massively Parallel Face Recognition System

    Directory of Open Access Journals (Sweden)

    Lahdenoja Olli

    2007-01-01

    We present methods for processing LBPs (local binary patterns) with massively parallel hardware, especially the CNN-UM (cellular nonlinear network universal machine). In particular, we present a framework for implementing a massively parallel face recognition system, including a dedicated, highly accurate algorithm suitable for various types of platforms (e.g., CNN-UM and digital FPGA). We study in detail a dedicated mixed-mode implementation of the algorithm and estimate its implementation cost in view of its performance and accuracy restrictions.

  3. Massive gravity and Fierz-Pauli theory

    International Nuclear Information System (INIS)

    Blasi, Alberto; Maggiore, Nicola

    2017-01-01

    Linearized gravity is considered as an ordinary gauge field theory. This implies the need for gauge fixing in order to have well-defined propagators. Only after having achieved this, the most general mass term is added. The aim of this paper is to study of the degrees of freedom of the gauge fixed theory of linearized gravity with mass term. The main result is that, even outside the usual Fierz-Pauli constraint on the mass term, it is possible to choose a gauge fixing belonging to the Landau class, which leads to a massive theory of gravity with the five degrees of freedom of a spin-2 massive particle. (orig.)

  5. SALT Spectroscopy of Evolved Massive Stars

    Science.gov (United States)

    Kniazev, A. Y.; Gvaramadze, V. V.; Berdnikov, L. N.

    2017-06-01

    Long-slit spectroscopy with the Southern African Large Telescope (SALT) of central stars of mid-infrared nebulae detected with the Spitzer Space Telescope and Wide-Field Infrared Survey Explorer (WISE) led to the discovery of numerous candidate luminous blue variables (cLBVs) and other rare evolved massive stars. With the recent advent of the SALT fiber-fed high-resolution echelle spectrograph (HRS), a new perspective for the study of these interesting objects is appeared. Using the HRS we obtained spectra of a dozen newly identified massive stars. Some results on the recently identified cLBV Hen 3-729 are presented.

  7. Parallel Tensor Compression for Large-Scale Scientific Data.

    Energy Technology Data Exchange (ETDEWEB)

    Kolda, Tamara G. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ballard, Grey [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Austin, Woody Nathan [Univ. of Texas, Austin, TX (United States)

    2015-10-01

    As parallel computing trends towards the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data. By viewing the data as a dense five-way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 10000 on real-world data sets with negligible loss in accuracy. So that we can operate on such massive data, we present the first distributed-memory parallel implementation of the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments.
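
    A Tucker approximation of this kind reduces a dense tensor to a small core plus one factor matrix per mode. As a rough sequential illustration of the idea (not the authors' distributed implementation; the rank choices and array sizes below are hypothetical), a truncated higher-order SVD in plain NumPy:

        import numpy as np

        def mode_product(X, M, mode):
            # n-mode product: apply matrix M along the given tensor mode
            Xm = np.moveaxis(X, mode, 0)
            return np.moveaxis(np.tensordot(M, Xm, axes=(1, 0)), 0, mode)

        def hosvd_compress(X, ranks):
            # Truncated HOSVD: per-mode factors from each unfolding's leading
            # left singular vectors, then project X onto the reduced bases
            factors = []
            for mode, r in enumerate(ranks):
                unfolded = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
                U, _, _ = np.linalg.svd(unfolded, full_matrices=False)
                factors.append(U[:, :r])
            core = X
            for mode, U in enumerate(factors):
                core = mode_product(core, U.T, mode)
            return core, factors

        def hosvd_reconstruct(core, factors):
            X = core
            for mode, U in enumerate(factors):
                X = mode_product(X, U, mode)
            return X

        X = np.random.rand(32, 32, 32, 16, 8)               # toy five-way tensor
        core, factors = hosvd_compress(X, (8, 8, 8, 4, 4))
        ratio = X.size / (core.size + sum(U.size for U in factors))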

  8. The mechanical vapour compression process applied to seawater desalination

    International Nuclear Information System (INIS)

    Murat, F.; Tabourier, B.

    1984-01-01

    The authors present the mechanical vapour compression process applied to sea water desalination. As an example, the paper presents the largest unit so far constructed by SIDEM using this process: a 1,500 m3/day unit installed in the Nuclear Power Plant of Flamanville in France, which supplies high-quality process water to that plant. The authors outline the advantages of this process and also present the series of mechanical vapour compression units that SIDEM has developed in a size range between 25 m3/day and 2,500 m3/day

  9. MASSIVE BLACK HOLES IN STELLAR SYSTEMS: 'QUIESCENT' ACCRETION AND LUMINOSITY

    International Nuclear Information System (INIS)

    Volonteri, M.; Campbell, D.; Mateo, M.; Dotti, M.

    2011-01-01

    Only a small fraction of local galaxies harbor an accreting black hole, classified as an active galactic nucleus. However, many stellar systems are plausibly expected to host black holes, from globular clusters to nuclear star clusters, to massive galaxies. The mere presence of stars in the vicinity of a black hole provides a source of fuel via mass loss of evolved stars. In this paper, we assess the expected luminosities of black holes embedded in stellar systems of different sizes and properties, spanning a large range of masses. We model the distribution of stars and derive the amount of gas available to a central black hole through a geometrical model. We estimate the luminosity of the black holes under simple, but physically grounded, assumptions on the accretion flow. Finally, we discuss the detectability of 'quiescent' black holes in the local universe.
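
    For a back-of-the-envelope version of such a luminosity estimate (the paper's geometric fuelling model is more detailed), one can combine the classical Bondi accretion rate with an assumed radiative efficiency; all parameter values below are illustrative assumptions, not the paper's:

        import numpy as np

        G = 6.674e-8      # gravitational constant, CGS
        C = 2.998e10      # speed of light, cm/s
        MSUN = 1.989e33   # solar mass, g

        def bondi_luminosity(m_bh_msun, rho_gas, c_s, efficiency=0.1):
            # Bondi (1952) rate Mdot = 4 pi G^2 M^2 rho / c_s^3,
            # converted to a luminosity via L = efficiency * Mdot * c^2
            m = m_bh_msun * MSUN
            mdot = 4.0 * np.pi * G**2 * m**2 * rho_gas / c_s**3
            return efficiency * mdot * C**2   # erg/s

        # e.g. a 1e6 Msun black hole embedded in tenuous hot gas
        L = bondi_luminosity(1e6, rho_gas=1e-24, c_s=3e7)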

  10. Evaluation of mammogram compression efficiency

    International Nuclear Information System (INIS)

    Przelaskowski, A.; Surowski, P.; Kukula, A.

    2005-01-01

    Lossy image coding significantly improves performance over lossless methods, but a reliable control of diagnostic accuracy regarding compressed images is necessary. The acceptable range of compression ratios must be safe with respect to as many objective criteria as possible. This study evaluates the compression efficiency of digital mammograms in both a numerically lossless (reversible) and a lossy (irreversible) manner. Effective compression methods and concepts were examined to increase archiving and telediagnosis performance. Lossless compression as a primary applicable tool for medical applications was verified on a set of 131 mammograms. Moreover, nine radiologists participated in the evaluation of lossy compression of mammograms. Subjective rating of diagnostically important features brought a set of mean rates given for each test image. The lesion detection test resulted in binary decision data analyzed statistically. The radiologists rated and interpreted malignant and benign lesions, representative pathology symptoms, and other structures susceptible to compression distortions contained in 22 original and 62 reconstructed mammograms. Test mammograms were collected in two radiology centers for three years and then selected according to diagnostic content suitable for an evaluation of compression effects. Lossless compression efficiency of the tested coders varied, but CALIC, JPEG-LS, and SPIHT performed the best. The evaluation of lossy compression effects affecting detection ability was based on ROC-like analysis. Assuming a two-sided significance level of p=0.05, the null hypothesis that lower bit rate reconstructions are as useful for diagnosis as the originals was false in sensitivity tests with 0.04 bpp mammograms. However, verification of the same hypothesis with 0.1 bpp reconstructions suggested their acceptance. Moreover, the 1 bpp reconstructions were rated very similarly to the original mammograms in the diagnostic quality evaluation test, but the ...
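
    The bit rates quoted above (0.04, 0.1 and 1 bpp) are simply compressed size normalized by pixel count, and objective scores such as PSNR are often reported alongside observer studies like the ROC analysis used here. A minimal sketch of both quantities (the 12-bit peak value is an assumption about the digitizer depth):

        import numpy as np

        def bits_per_pixel(n_compressed_bytes, shape):
            # Compressed stream size, expressed in bits per image pixel
            return 8.0 * n_compressed_bytes / (shape[0] * shape[1])

        def psnr(original, reconstructed, peak=4095.0):
            # Peak signal-to-noise ratio; peak=4095 assumes 12-bit data
            mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
            return float('inf') if mse == 0 else 10.0 * np.log10(peak**2 / mse)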

  11. MRI assessment of bronchial compression in absent pulmonary valve syndrome and review of the syndrome

    International Nuclear Information System (INIS)

    Taragin, Benjamin H.; Berdon, Walter E.; Prinz, B.

    2006-01-01

    Absent pulmonary valve syndrome (APVS) is a rare cardiac malformation with massive pulmonary insufficiency that presents with short-term and long-term respiratory problems secondary to severe bronchial compression from enlarged central and hilar pulmonary arteries. Association with chromosome 22q11 deletions and DiGeorge syndrome is common. This historical review illustrates the airway disease with emphasis on assessment of the bronchial compression in patients with persistent respiratory difficulties after valve repair. Cases that had MRI for cardiac assessment are used to illustrate the pattern of airway disease. (orig.)

  12. Implementation of PHENIX trigger algorithms on massively parallel computers

    International Nuclear Information System (INIS)

    Petridis, A.N.; Wohn, F.K.

    1995-01-01

    The event selection requirements of contemporary high energy and nuclear physics experiments are met by the introduction of on-line trigger algorithms which identify potentially interesting events and reduce the data acquisition rate to levels that are manageable by the electronics. Such algorithms, being parallel in nature, can be simulated off-line using massively parallel computers. The PHENIX experiment intends to investigate the possible existence of a new phase of matter called the quark gluon plasma, which has been theorized to have existed in very early stages of the evolution of the universe, by studying collisions of heavy nuclei at ultra-relativistic energies. Such interactions can also reveal important information regarding the structure of the nucleus and mandate a thorough investigation of the simpler proton-nucleus collisions at the same energies. The complexity of PHENIX events and the need to analyze and also simulate them at rates similar to the data collection ones impose enormous computational demands. This work is a first effort to implement PHENIX trigger algorithms on parallel computers and to study the feasibility of using such machines to run the complex programs necessary for the simulation of the PHENIX detector response. Fine and coarse grain approaches have been studied and evaluated. Depending on the application, the performance of a massively parallel computer can be much better or much worse than that of a serial workstation. A comparison between single instruction and multiple instruction computers is also made and possible applications of the single instruction machines to high energy and nuclear physics experiments are outlined. copyright 1995 American Institute of Physics

  13. Massively Clustered CubeSats NCPS Demo Mission

    Science.gov (United States)

    Robertson, Glen A.; Young, David; Kim, Tony; Houts, Mike

    2013-01-01

    Technologies under development for the proposed Nuclear Cryogenic Propulsion Stage (NCPS) will require an un-crewed demonstration mission before they can be flight qualified over distances and time frames representative of a crewed Mars mission. In this paper, we describe a Massively Clustered CubeSats platform, possibly comprising hundreds of CubeSats, as the main payload of the NCPS demo mission. This platform would enable a mechanism for cost savings for the demo mission through shared support between NASA and other government agencies as well as leveraged commercial aerospace and academic community involvement. We believe a Massively Clustered CubeSats platform should be an obvious first choice for the NCPS demo mission when one considers that cost and risk of the payload can be spread across many CubeSat customers and that the NCPS demo mission can capitalize on using CubeSats developed by others for its own instrumentation needs. Moreover, a demo mission of the NCPS offers an unprecedented opportunity to invigorate the public on a global scale through direct individual participation coordinated through a web-based collaboration engine. The platform we describe would be capable of delivering CubeSats at various locations along a trajectory toward the primary mission destination, in this case Mars, permitting a variety of potential CubeSat-specific missions. Cameras on various CubeSats can also be used to provide multiple views of the space environment and the NCPS vehicle for video monitoring as well as allow the public to "ride along" as virtual passengers on the mission. This collaborative approach could even initiate a brand new Science, Technology, Engineering and Math (STEM) program for launching student developed CubeSat payloads beyond Low Earth Orbit (LEO) on future deep space technology qualification missions. Keywords: Nuclear Propulsion, NCPS, SLS, Mars, CubeSat.

  14. Compression etiology in tendinopathy.

    Science.gov (United States)

    Almekinders, Louis C; Weinhold, Paul S; Maffulli, Nicola

    2003-10-01

    Recent studies have emphasized that the etiology of tendinopathy is not as simple as was once thought. The etiology is likely to be multifactorial. Etiologic factors may include some of the traditional factors such as overuse, inflexibility, and equipment problems; however, other factors need to be considered as well, such as age-related tendon degeneration and biomechanical considerations as outlined in this article. More research is needed to determine the significance of stress-shielding and compression in tendinopathy. If they are confirmed to play a role, this finding may significantly alter our approach in both prevention and treatment through exercise therapy. The current biomechanical studies indicate that certain joint positions are more likely to place tensile stress on the area of the tendon commonly affected by tendinopathy. These joint positions seem to be different than the traditional positions for stretching exercises used for prevention and rehabilitation of tendinopathic conditions. Incorporation of different joint positions during stretching exercises may exert more uniform, controlled tensile stress on these affected areas of the tendon and avoid stress-shielding. These exercises may be able to better maintain the mechanical strength of that region of the tendon and thereby avoid injury. Alternatively, they could more uniformly stress a healing area of the tendon in a controlled manner, and thereby stimulate healing once an injury has occurred. Additional work will have to prove if a change in rehabilitation exercises is more efficacious than current techniques.

  15. Compressible Vortex Ring

    Science.gov (United States)

    Elavarasan, Ramasamy; Arakeri, Jayawant; Krothapalli, Anjaneyulu

    1999-11-01

    The interaction of a high-speed vortex ring with a shock wave is one of the fundamental issues as it is a source of sound in supersonic jets. The complex flow field induced by the vortex alters the propagation of the shock wave greatly. In order to understand the process, a compressible vortex ring is studied in detail using Particle Image Velocimetry (PIV) and shadowgraphic techniques. The high-speed vortex ring is generated from a shock tube, and the shock wave, which precedes the vortex, is reflected back by a plate and made to interact with the vortex. The shadowgraph images indicate that the reflected shock front is influenced by the non-uniform flow induced by the vortex and is decelerated while passing through the vortex. It appears that after the interaction the shock is "split" into two. The PIV measurements provided a clear picture of the evolution of the vortex at different time intervals. The centerline velocity traces show the maximum velocity to be around 350 m/s. The velocity field, unlike in incompressible rings, contains contributions from both the shock and the vortex ring. The velocity distribution across the vortex core, core diameter and circulation are also calculated from the PIV data.
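
    Quantities such as the circulation quoted from the PIV data follow directly from the measured velocity field: by Stokes' theorem, the line integral of velocity around the core equals the area integral of vorticity. A minimal sketch for a regular PIV grid (array names hypothetical):

        import numpy as np

        def circulation(u, v, dx, dy):
            # Out-of-plane vorticity omega = dv/dx - du/dy, integrated
            # over the measurement window: Gamma = sum(omega) * dA
            dvdx = np.gradient(v, dx, axis=1)
            dudy = np.gradient(u, dy, axis=0)
            omega = dvdx - dudy
            return np.sum(omega) * dx * dy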

  16. Advances in compressible turbulent mixing

    International Nuclear Information System (INIS)

    Dannevik, W.P.; Buckingham, A.C.; Leith, C.E.

    1992-01-01

    This volume includes some recent additions to original material prepared for the Princeton International Workshop on the Physics of Compressible Turbulent Mixing, held in 1988. Workshop participants were asked to emphasize the physics of the compressible mixing process rather than measurement techniques or computational methods. Actual experimental results and their meaning were given precedence over discussions of new diagnostic developments. Theoretical interpretations and understanding were stressed rather than the exposition of new analytical model developments or advances in numerical procedures. By design, compressibility influences on turbulent mixing were discussed--almost exclusively--from the perspective of supersonic flow field studies. The papers are arranged in three topical categories: Foundations, Vortical Domination, and Strongly Coupled Compressibility. The Foundations category is a collection of seminal studies that connect current study in compressible turbulent mixing with compressible, high-speed turbulent flow research that almost vanished about two decades ago. A number of contributions are included on flow instability initiation, evolution, and transition between the states of unstable flow onset through those descriptive of fully developed turbulence. The Vortical Domination category includes theoretical and experimental studies of coherent structures, vortex pairing, vortex-dynamics-influenced pressure focusing. In the Strongly Coupled Compressibility category the organizers included the high-speed turbulent flow investigations in which the interaction of shock waves could be considered an important source for production of new turbulence or for the enhancement of pre-existing turbulence. Individual papers are processed separately

  17. Mammography image compression using Wavelet

    International Nuclear Information System (INIS)

    Azuhar Ripin; Md Saion Salikin; Wan Hazlinda Ismail; Asmaliza Hashim; Norriza Md Isa

    2004-01-01

    Image compression plays an important role in many applications like medical imaging, televideo conferencing, remote sensing, document and facsimile transmission, which depend on the efficient manipulation, storage, and transmission of binary, gray scale, or color images. In medical imaging applications such as Picture Archiving and Communication Systems (PACS), the image size or image stream size is too large and requires a large amount of storage space or high bandwidth for communication. Image compression techniques are divided into two categories, namely lossy and lossless data compression. The wavelet method used in this project is a lossless compression method. In this method, the exact original mammography image data can be recovered. In this project, mammography images are digitized by using a Vider Sierra Plus digitizer. The digitized images are compressed by using this wavelet image compression technique. Interactive Data Language (IDL) numerical and visualization software is used to perform all of the calculations and to generate and display all of the compressed images. Results of this project are presented in this paper. (Author)
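
    As a rough illustration of wavelet transform coding (not the project's IDL code, and not its exact lossless chain, which would use a reversible integer wavelet plus entropy coding of all coefficients), the following sketch, assuming the PyWavelets package is available, keeps only the largest-magnitude coefficients:

        import numpy as np
        import pywt

        def wavelet_compress(image, wavelet='db4', level=3, keep=0.05):
            # Multi-level 2-D wavelet transform of the mammogram
            coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
            arr, slices = pywt.coeffs_to_array(coeffs)
            # Zero all but the largest `keep` fraction of coefficients
            thresh = np.quantile(np.abs(arr), 1.0 - keep)
            arr[np.abs(arr) < thresh] = 0.0
            coeffs = pywt.array_to_coeffs(arr, slices, output_format='wavedec2')
            return pywt.waverec2(coeffs, wavelet)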

  19. Massively dilated right atrium masquerading as a mediastinal tumor

    Directory of Open Access Journals (Sweden)

    Thomas Schroeter

    2011-04-01

    Severe tricuspid valve insufficiency causes right atrial dilatation, venous congestion, and reduced atrial contractility, and may eventually lead to right heart failure. We report a case of a patient with severe tricuspid valve insufficiency, right heart failure, and a massively dilated right atrium. The enormously dilated atrium compressed the right lung, resulting in a radiographic appearance of a mediastinal tumor. Tricuspid valve repair and reduction of the right atrium were performed. Follow-up examination revealed improvement of liver function, reduced peripheral edema and improved New York Heart Association (NYHA) class. The reduction of the atrial size and repair of the tricuspid valve resulted in a restoration of the conduit and reservoir function of the right atrium. Given the chronicity of the disease process and the long-standing atrial fibrillation, there is no impact of this operation on right atrial contraction. In combination with the reconstruction of the tricuspid valve, the reduction atrioplasty will reduce the risk of thromboembolic events and preserve the right ventricular function.

  20. Massive rectal bleeding from colonic diverticulosis

    African Journals Online (AJOL)

    ABEOLUGBENGAS

    Case report: We present the case of a 79-year-old man with massive rectal bleeding ... cause of overt lower gastrointestinal (GI) ... vessels into the intestinal lumen results in ... placed on a high fibre diet, and intravenous.

  1. Improved visibility computation on massive grid terrains

    NARCIS (Netherlands)

    Fishman, J.; Haverkort, H.J.; Toma, L.; Wolfson, O.; Agrawal, D.; Lu, C.-T.

    2009-01-01

    This paper describes the design and engineering of algorithms for computing visibility maps on massive grid terrains. Given a terrain T, specified by the elevations of points in a regular grid, and given a viewpoint v, the visibility map or viewshed of v is the set of grid points of T that are visible from v.
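
    The viewshed definition above reduces to a line-of-sight test repeated for every grid point; the paper's contribution is making the full computation I/O-efficient on grids too large for memory. A minimal in-memory sketch of just the core test (function and parameter names hypothetical):

        import numpy as np

        def visible(terrain, viewer, target, viewer_height=0.0):
            # Walk the ray from viewer to target; the target is visible
            # if no intermediate cell rises above the sight line
            (r0, c0), (r1, c1) = viewer, target
            z0 = terrain[r0, c0] + viewer_height
            n = max(abs(r1 - r0), abs(c1 - c0))
            if n == 0:
                return True
            for i in range(1, n):
                t = i / n
                r = int(round(r0 + t * (r1 - r0)))
                c = int(round(c0 + t * (c1 - c0)))
                z_ray = z0 + t * (terrain[r1, c1] - z0)
                if terrain[r, c] > z_ray:
                    return False
            return True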

  2. Facial transplantation for massive traumatic injuries.

    Science.gov (United States)

    Alam, Daniel S; Chi, John J

    2013-10-01

    This article describes the challenges of facial reconstruction and the role of facial transplantation in certain facial defects and injuries. This information is of value to surgeons assessing facial injuries with massive soft tissue loss or injury. Copyright © 2013 Elsevier Inc. All rights reserved.

  3. Difference equations in massive higher order calculations

    International Nuclear Information System (INIS)

    Bierenbaum, I.; Bluemlein, J.; Klein, S.; Schneider, C.

    2007-07-01

    The calculation of massive 2-loop operator matrix elements, required for the higher order Wilson coefficients for heavy flavor production in deeply inelastic scattering, leads to new types of multiple infinite sums over harmonic sums and related functions, which depend on the Mellin parameter N. We report on the solution of these sums through higher order difference equations using the summation package Sigma. (orig.)
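
    The harmonic sums mentioned here obey simple first-order difference equations in the Mellin parameter N, for example S_1(N) - S_1(N-1) = 1/N; solving such recurrences symbolically, for far more general nested sums, is what the Sigma package automates. A small numerical sketch (illustrative only):

        from fractions import Fraction

        def harmonic_sum(a, N):
            # S_a(N) = sum_{k=1..N} sign(a)^k / k^|a|
            sign = -1 if a < 0 else 1
            return sum(Fraction(sign**k, k**abs(a)) for k in range(1, N + 1))

        # S_1 satisfies the recurrence S_1(N) = S_1(N-1) + 1/N
        assert harmonic_sum(1, 4) == Fraction(25, 12)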

  4. FRW Cosmological Perturbations in Massive Bigravity

    CERN Document Server

    Comelli, D; Pilo, L

    2014-01-01

    Cosmological perturbations of FRW solutions in ghost-free massive bigravity, also including a second matter sector, are studied in detail. At early times, we find that sub-horizon exponential instabilities are unavoidable and that they lead to a premature departure from the perturbative regime of cosmological perturbations.

  5. Circular symmetry in topologically massive gravity

    International Nuclear Information System (INIS)

    Deser, S; Franklin, J

    2010-01-01

    We re-derive, compactly, a topologically massive gravity (TMG) decoupling theorem: source-free TMG separates into its Einstein and Cotton sectors for spaces with a hypersurface-orthogonal Killing vector, here concretely for circular symmetry. We then generalize the theorem to include matter; surprisingly, the single Killing symmetry also forces conformal invariance, requiring the sources to be null. (note)

  8. Massively parallel sequencing of forensic STRs

    DEFF Research Database (Denmark)

    Parson, Walther; Ballard, David; Budowle, Bruce

    2016-01-01

    The DNA Commission of the International Society for Forensic Genetics (ISFG) is reviewing factors that need to be considered ahead of the adoption by the forensic community of short tandem repeat (STR) genotyping by massively parallel sequencing (MPS) technologies. MPS produces sequence data that ...

  9. Context-Aware Image Compression.

    Directory of Open Access Journals (Sweden)

    Jacky C K Chan

    We describe a physics-based data compression method inspired by the photonic time stretch wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementations of warped stretch compression, here the decoding can be performed without the need for phase recovery. We present a rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling.

  10. Compressive sensing for urban radar

    CERN Document Server

    Amin, Moeness

    2014-01-01

    With the emergence of compressive sensing and sparse signal reconstruction, approaches to urban radar have shifted toward relaxed constraints on signal sampling schemes in time and space, and to effectively address logistic difficulties in data acquisition. Traditionally, these challenges have hindered high resolution imaging by restricting both bandwidth and aperture, and by imposing uniformity and bounds on sampling rates. Compressive Sensing for Urban Radar is the first book to focus on a hybrid of two key areas: compressive sensing and urban sensing. It explains how reliable imaging, tracking ...

  11. Stiffness, resilience, compressibility

    Energy Technology Data Exchange (ETDEWEB)

    Leu, Bogdan M. [Argonne National Laboratory, Advanced Photon Source (United States); Sage, J. Timothy, E-mail: jtsage@neu.edu [Northeastern University, Department of Physics and Center for Interdisciplinary Research on Complex Systems (United States)

    2016-12-15

    The flexibility of a protein is an important component of its functionality. We use nuclear resonance vibrational spectroscopy (NRVS) to quantify the flexibility of the heme iron environment in the electron-carrying protein cytochrome c by measuring the stiffness and the resilience. These quantities are sensitive to structural differences between the active sites of different proteins, as illustrated by a comparative analysis with myoglobin. The elasticity of the entire protein, on the other hand, can be probed quantitatively from NRVS and high energy-resolution inelastic X-ray scattering (IXS) measurements, an approach that we used to extract the bulk modulus of cytochrome c.

  12. Efficient Lossy Compression for Compressive Sensing Acquisition of Images in Compressive Sensing Imaging Systems

    Directory of Open Access Journals (Sweden)

    Xiangwei Li

    2014-12-01

    Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, the general image compression solution may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of the CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which can achieve better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the obtained CS measurements from CS acquisition without requiring any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4-2 dB compared with the current state-of-the-art, while maintaining a low computational complexity.
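
    A generic stand-in for the acquisition-plus-quantization pipeline described above (the paper's adaptive sensing matrix and universal quantizer are more elaborate; the Gaussian matrix and fixed quantizer step below are assumptions):

        import numpy as np

        rng = np.random.default_rng(0)

        def cs_acquire_quantize(x, m, step):
            # Random-projection CS acquisition followed by uniform
            # scalar quantization of the measurements
            n = x.size
            phi = rng.standard_normal((m, n)) / np.sqrt(m)   # sensing matrix
            y = phi @ x                                      # CS measurements
            q = step * np.round(y / step)                    # quantized values
            return phi, q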

  13. Sustainability of compressive residual stress by stress improvement processes

    International Nuclear Information System (INIS)

    Nishikawa, Satoru; Okita, Shigeru; Yamaguchi, Atsunori

    2013-01-01

    Stress improvement processes are countermeasures against stress corrosion cracking in nuclear power plant components. It is necessary to confirm whether the compressive residual stress induced by stress improvement processes can be sustained under the operating environment. In order to evaluate the stability of the compressive residual stress under 60-year operating conditions, 0.07% cyclic strains were applied 200 times at 593 K to the welded specimens, and then a thermal aging treatment for 1.66×10^6 s at 673 K was carried out. As a result, it was confirmed that the compressive residual stresses were sustained on both surfaces of the dissimilar welds of austenitic stainless steel (SUS316L) and nickel base alloy (NCF600 and alloy 182) processed by laser peening (LP), water jet peening (WJP), ultrasonic shot peening (USP), shot peening (SP) and polishing under 60-year operating conditions. (author)

  14. Compressed gas fuel storage system

    Science.gov (United States)

    Wozniak, John J.; Tiller, Dale B.; Wienhold, Paul D.; Hildebrand, Richard J.

    2001-01-01

    A compressed gas vehicle fuel storage system comprised of a plurality of compressed gas pressure cells supported by shock-absorbing foam positioned within a shape-conforming container. The container is dimensioned relative to the compressed gas pressure cells whereby a radial air gap surrounds each compressed gas pressure cell. The radial air gap allows pressure-induced expansion of the pressure cells without resulting in the application of pressure to adjacent pressure cells or physical pressure to the container. The pressure cells are interconnected by a gas control assembly including a thermally activated pressure relief device, a manual safety shut-off valve, and means for connecting the fuel storage system to a vehicle power source and a refueling adapter. The gas control assembly is enclosed by a protective cover attached to the container. The system is attached to the vehicle with straps to enable the chassis to deform as intended in a high-speed collision.

  15. Compressed sensing for distributed systems

    CERN Document Server

    Coluccia, Giulio; Magli, Enrico

    2015-01-01

    This book presents a survey of the state-of-the art in the exciting and timely topic of compressed sensing for distributed systems. It has to be noted that, while compressed sensing has been studied for some time now, its distributed applications are relatively new. Remarkably, such applications are ideally suited to exploit all the benefits that compressed sensing can provide. The objective of this book is to provide the reader with a comprehensive survey of this topic, from the basic concepts to different classes of centralized and distributed reconstruction algorithms, as well as a comparison of these techniques. This book collects different contributions on these aspects. It presents the underlying theory in a complete and unified way for the first time, presenting various signal models and their use cases. It contains a theoretical part collecting latest results in rate-distortion analysis of distributed compressed sensing, as well as practical implementations of algorithms obtaining performance close to...

  16. Nonlinear compression of optical solitons

    Indian Academy of Sciences (India)

    ... nonlinear pulse propagation is the nonlinear Schrödinger (NLS) equation [1]. There are ... Optical pulse compression finds important applications in optical fibres. The pulse com ...

  17. Trigenerative micro compressed air energy storage: Concept and thermodynamic assessment

    International Nuclear Information System (INIS)

    Facci, Andrea L.; Sánchez, David; Jannelli, Elio; Ubertini, Stefano

    2015-01-01

    Highlights: • The trigenerative-CAES concept is introduced. • The thermodynamic feasibility of the trigenerative-CAES is assessed. • The effects of the relevant parameters on the system performance are dissected. • Technological issues of the trigenerative-CAES are highlighted. - Abstract: Energy storage is a cutting-edge frontier for renewable and sustainable energy research. In fact, a massive exploitation of intermittent renewable sources, such as wind and sun, requires the introduction of effective mechanical energy storage systems. In this paper we introduce the concept of a trigenerative energy storage based on a compressed air system. The plant under study is a simplified design of the adiabatic compressed air energy storage and accumulates mechanical and thermal (both hot and cold) energy at the same time. We envisage the possibility of realizing a relatively small trigenerative compressed air energy storage to be placed close to the energy demand, according to the distributed generation paradigm. Here, we describe the plant concept and we identify all the relevant parameters influencing its thermodynamic behavior. Their effects are dissected through an accurate thermodynamic model. The most relevant technological issues, such as the guidelines for a proper choice of the compressor, expander and heat exchangers, are also addressed. Our results show that T-CAES may have an interesting potential as a distributed system that combines electricity storage with heat and cooling energy production. We also show that the performance is significantly influenced by some operating and design parameters, whose feasibility in real applications must be considered.
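
    The core of such an assessment is stage-by-stage modelling of the compression and expansion trains. A minimal single-stage sketch using ideal-gas relations with an assumed isentropic efficiency (all numbers illustrative, not the paper's parameters):

        def isentropic_compression(T1, p1, p2, gamma=1.4, cp=1005.0, eta=0.85):
            # Ideal-gas outlet temperature and specific work for one
            # compression stage; T in K, p in Pa, cp in J/(kg K)
            T2s = T1 * (p2 / p1) ** ((gamma - 1.0) / gamma)  # isentropic outlet
            T2 = T1 + (T2s - T1) / eta                       # real outlet
            w = cp * (T2 - T1)                               # work input, J/kg
            return T2, w

        # Compressing ambient air to 8 bar: the ~280 K temperature rise is
        # the heat that a trigenerative plant stores for later use
        T2, w = isentropic_compression(293.15, 1.0e5, 8.0e5)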

  18. Compressing climate model simulations: reducing storage burden while preserving information

    Science.gov (United States)

    Hammerling, Dorit; Baker, Allison; Xu, Haiying; Clyne, John; Li, Samuel

    2017-04-01

    Climate models, which are run at high spatial and temporal resolutions, generate massive quantities of data. As our computing capabilities continue to increase, storing all of the generated data is becoming a bottleneck, which negatively affects scientific progress. It is thus important to develop methods for representing the full datasets by smaller compressed versions, which still preserve all the critical information and, as an added benefit, allow for faster read and write operations during analysis work. Traditional lossy compression algorithms, as for example used for image files, are not necessarily ideally suited for climate data. While visual appearance is relevant, climate data has additional critical features such as the preservation of extreme values and spatial and temporal gradients. Developing alternative metrics to quantify information loss in a manner that is meaningful to climate scientists is an ongoing process still in its early stages. We will provide an overview of current efforts to develop such metrics to assess existing algorithms and to guide the development of tailored compression algorithms to address this pressing challenge.
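
    In the spirit of the metrics discussed above, a compression assessment can report not only an overall error but also how well extremes and gradients survive. A minimal sketch of such checks (illustrative; the metrics under development by the community are more extensive):

        import numpy as np

        def information_loss_metrics(original, reconstructed):
            # A few climate-relevant diagnostics beyond plain RMSE
            diff = reconstructed - original
            return {
                'rmse': float(np.sqrt(np.mean(diff**2))),
                'max_abs_err': float(np.max(np.abs(diff))),
                'extreme_shift': float(np.max(reconstructed) - np.max(original)),
                'gradient_rmse': float(np.sqrt(np.mean(
                    (np.gradient(reconstructed, axis=-1)
                     - np.gradient(original, axis=-1)) ** 2))),
            }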

  19. Uncertainties in s-process nucleosynthesis in massive stars determined by Monte Carlo variations

    Science.gov (United States)

    Nishimura, N.; Hirschi, R.; Rauscher, T.; St. J. Murphy, A.; Cescutti, G.

    2017-08-01

    The s-process in massive stars produces the weak component of the s-process (nuclei up to A ˜ 90), in amounts that match solar abundances. For heavier isotopes, such as barium, production through neutron capture is significantly enhanced in very metal-poor stars with fast rotation. However, detailed theoretical predictions for the resulting final s-process abundances have important uncertainties caused both by the underlying uncertainties in the nuclear physics (principally neutron-capture reaction and β-decay rates) as well as by the stellar evolution modelling. In this work, we investigated the impact of nuclear-physics uncertainties relevant to the s-process in massive stars. Using a Monte Carlo based approach, we performed extensive nuclear reaction network calculations that include newly evaluated upper and lower limits for the individual temperature-dependent reaction rates. We found that most of the uncertainty in the final abundances is caused by uncertainties in the neutron-capture rates, while β-decay rate uncertainties affect only a few nuclei near s-process branchings. The s-process in rotating metal-poor stars shows quantitatively different uncertainties and key reactions, although the qualitative characteristics are similar. We confirmed that our results do not significantly change at different metallicities for fast rotating massive stars in the very low metallicity regime. We highlight which of the identified key reactions are realistic candidates for improved measurement by future experiments.
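
    Operationally, the Monte Carlo variation described above amounts to drawing a multiplicative factor for each rate between its evaluated lower and upper limits and rerunning the network many times. A minimal sketch of the sampling step (a log-uniform draw is one common choice; the paper's limits are temperature dependent and rate specific):

        import numpy as np

        rng = np.random.default_rng(42)

        def sample_rate_factors(lo, hi, n_samples):
            # Multiplicative variation factors spanning rate*lo .. rate*hi
            return np.exp(rng.uniform(np.log(lo), np.log(hi), size=n_samples))

        # e.g. a "factor of 2" uncertainty on a neutron-capture rate
        factors = sample_rate_factors(0.5, 2.0, 10000)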

  20. A rare case of massive hepatosplenomegaly due to acute ...

    African Journals Online (AJOL)

    Causes of massive hepatosplenomegaly include chronic lymphoproliferative malignancies, infections (malaria, leishmaniasis) and glycogen storage diseases (Gaucher's disease).[4] In our case the probable causes of the massive hepatosplenomegaly were a combination of late presentation after symptom onset, leukaemic infiltration.

  1. Reappraising the concept of massive transfusion in trauma

    DEFF Research Database (Denmark)

    Stanworth, Simon J; Morris, Timothy P; Gaarder, Christine

    2010-01-01

    Abstract: Introduction: The massive-transfusion concept was introduced to recognize the dilutional complications resulting from large volumes of packed red blood cells (PRBCs). Definitions of massive transfusion vary and lack supporting clinical evidence. Damage-control resuscitation regimens o...

  2. Massive vulval oedema in multiple pregnancies at Bugando Medical ...

    African Journals Online (AJOL)

    In this report we describe two cases of massive vulval oedema seen in two ... passage of yellow-whitish discharge per vagina (Figure 1). Examination revealed massive oedema, and digital vaginal examination was difficult due to tenderness.

  3. Massively Parallel Algorithms for Solution of Schrodinger Equation

    Science.gov (United States)

    Fijany, Amir; Barhen, Jacob; Toomerian, Nikzad

    1994-01-01

    In this paper, massively parallel algorithms for the solution of the Schrödinger equation are developed. Our results clearly indicate that the Crank-Nicolson method, in addition to its excellent numerical properties, is also highly suitable for massively parallel computation.
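
    For reference, the Crank-Nicolson update being parallelized is the unconditionally stable, norm-preserving scheme (I + iΔt H/2ħ) ψ_{n+1} = (I − iΔt H/2ħ) ψ_n. A minimal serial sketch in one dimension (dense linear algebra for clarity; a massively parallel code would instead use distributed tridiagonal solvers):

        import numpy as np

        def crank_nicolson_step(psi, V, dx, dt, hbar=1.0, m=1.0):
            # Hamiltonian with a second-order finite-difference Laplacian
            n = psi.size
            lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
                   + np.diag(np.ones(n - 1), -1)) / dx**2
            H = -(hbar**2) / (2.0 * m) * lap + np.diag(V)
            A = np.eye(n) + 0.5j * dt / hbar * H   # implicit half step
            B = np.eye(n) - 0.5j * dt / hbar * H   # explicit half step
            return np.linalg.solve(A, B @ psi)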

  4. Massively Parallel Computing: A Sandia Perspective

    Energy Technology Data Exchange (ETDEWEB)

    Dosanjh, Sudip S.; Greenberg, David S.; Hendrickson, Bruce; Heroux, Michael A.; Plimpton, Steve J.; Tomkins, James L.; Womble, David E.

    1999-05-06

    The computing power available to scientists and engineers has increased dramatically in the past decade, due in part to progress in making massively parallel computing practical and available. The expectation for these machines has been great. The reality is that progress has been slower than expected. Nevertheless, massively parallel computing is beginning to realize its potential for enabling significant break-throughs in science and engineering. This paper provides a perspective on the state of the field, colored by the authors' experiences using large scale parallel machines at Sandia National Laboratories. We address trends in hardware, system software and algorithms, and we also offer our view of the forces shaping the parallel computing industry.

  5. Massive ovarian edema, due to adjacent appendicitis.

    Science.gov (United States)

    Callen, Andrew L; Illangasekare, Tushani; Poder, Liina

    2017-04-01

    Massive ovarian edema is a benign clinical entity, the imaging findings of which can mimic an adnexal mass or ovarian torsion. In the setting of acute abdominal pain, identifying massive ovarian edema is key to avoiding potentially fertility-threatening surgery in young women. In addition, it is important to consider other contributing pathology when ovarian edema is secondary to another process. We present a case of a young woman presenting with subacute abdominal pain, whose initial workup revealed a markedly enlarged right ovary. Further imaging, diagnostic tests, and eventually diagnostic laparoscopy revealed that the ovarian enlargement was secondary to subacute appendicitis, rather than a primary adnexal process. We review the classic ultrasound and MRI findings and pitfalls that relate to this diagnosis.

  6. Adapting algorithms to massively parallel hardware

    CERN Document Server

    Sioulas, Panagiotis

    2016-01-01

    In the recent years, the trend in computing has shifted from delivering processors with faster clock speeds to increasing the number of cores per processor. This marks a paradigm shift towards parallel programming in which applications are programmed to exploit the power provided by multi-cores. Usually there is gain in terms of the time-to-solution and the memory footprint. Specifically, this trend has sparked an interest towards massively parallel systems that can provide a large number of processors, and possibly computing nodes, as in the GPUs and MPPAs (Massively Parallel Processor Arrays). In this project, the focus was on two distinct computing problems: k-d tree searches and track seeding cellular automata. The goal was to adapt the algorithms to parallel systems and evaluate their performance in different cases.

  7. Stochastic spin-one massive field

    International Nuclear Information System (INIS)

    Lim, S.C.

    1984-01-01

    The stochastic quantization schemes of Nelson and of Parisi and Wu are applied to a spin-one massive field. Unlike the scalar case, Nelson's stochastic spin-one massive field cannot be identified with the corresponding euclidean field even if the fourth component of the euclidean coordinate is taken as equal to the real physical time. In the Parisi-Wu quantization scheme the stochastic Proca vector field has a property similar to that of the scalar field: it has an asymptotically stationary part and a transient part. The large equal-time limit of the expectation values of the stochastic Proca field is equal to the expectation values of the corresponding euclidean field. In the Stueckelberg formalism the Parisi-Wu scheme gives rise to a stochastic vector field which differs from the massless gauge field in that the gauge cannot be fixed by the choice of boundary condition. (orig.)

  8. Frontiers of massively parallel scientific computation

    International Nuclear Information System (INIS)

    Fischer, J.R.

    1987-07-01

    Practical applications using massively parallel computer hardware first appeared during the 1980s. Their development was motivated by the need for computing power orders of magnitude beyond that available today for tasks such as numerical simulation of complex physical and biological processes, generation of interactive visual displays, satellite image analysis, and knowledge based systems. Representative of the first generation of this new class of computers is the Massively Parallel Processor (MPP). A team of scientists was provided the opportunity to test and implement their algorithms on the MPP. The first results are presented. The research spans a broad variety of applications including Earth sciences, physics, signal and image processing, computer science, and graphics. The performance of the MPP was very good. Results obtained using the Connection Machine and the Distributed Array Processor (DAP) are presented

  9. M2M massive wireless access

    DEFF Research Database (Denmark)

    Zanella, Andrea; Zorzi, Michele; Santos, André F.

    2013-01-01

    In order to make the Internet of Things a reality, ubiquitous coverage and low-complexity connectivity are required. Cellular networks are hence the most straightforward and realistic solution to enable a massive deployment of always connected Machines around the globe. Nevertheless, a paradigm shift in the conception and design of future cellular networks is called for. Massive access attempts, low-complexity and cheap machines, sporadic transmission and correlated signals are among the main properties of this new reality, whose main consequence is the disruption of the development ... Access Reservation, Coded Random Access and the exploitation of multiuser detection in random access. Additionally, we will show how the properties of machine originated signals, such as sparsity and spatial/time correlation, can be exploited. The end goal of this paper is to provide motivation ...

  10. Massive Predictive Modeling using Oracle R Enterprise

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...
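
    The per-entity modelling pattern described in the talk can also be expressed outside Oracle R Enterprise. A hedged illustration in generic Python with pandas, standing in for ORE's embedded R execution (all column names hypothetical):

        import numpy as np
        import pandas as pd

        def fit_per_entity(df, entity_col, x_col, y_col):
            # One least-squares trend model per entity (e.g. per customer)
            models = {}
            for entity, group in df.groupby(entity_col):
                slope, intercept = np.polyfit(group[x_col], group[y_col], 1)
                models[entity] = (slope, intercept)
            return models

        df = pd.DataFrame({'customer': ['a'] * 5 + ['b'] * 5,
                           'week': list(range(5)) * 2,
                           'spend': np.random.rand(10)})
        models = fit_per_entity(df, 'customer', 'week', 'spend')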

  11. Impact analysis on a massively parallel computer

    International Nuclear Information System (INIS)

    Zacharia, T.; Aramayo, G.A.

    1994-01-01

    Advanced mathematical techniques and computer simulation play a major role in evaluating and enhancing the design of beverage cans, industrial, and transportation containers for improved performance. Numerical models are used to evaluate the impact requirements of containers used by the Department of Energy (DOE) for transporting radioactive materials. Many of these models are highly compute-intensive. An analysis may require several hours of computational time on current supercomputers despite the simplicity of the models being studied. As computer simulations and materials databases grow in complexity, massively parallel computers have become important tools. Massively parallel computational research at the Oak Ridge National Laboratory (ORNL) and its application to the impact analysis of shipping containers is briefly described in this paper

  12. Massive scalar field evolution in de Sitter

    Energy Technology Data Exchange (ETDEWEB)

    Markkanen, Tommi [Department of Physics, King’s College London,Strand, London WC2R 2LS (United Kingdom); Rajantie, Arttu [Department of Physics, Imperial College London,London SW7 2AZ (United Kingdom)

    2017-01-30

    The behaviour of a massive, non-interacting and non-minimally coupled quantised scalar field in an expanding de Sitter background is investigated by solving the field evolution for an arbitrary initial state. In this approach there is no need to choose a vacuum in order to provide a definition for particle states, nor to introduce an explicit ultraviolet regularization. We conclude that the expanding de Sitter space is a stable equilibrium configuration under small perturbations of the initial conditions. Depending on the initial state, the energy density can approach its asymptotic value from above or below, the latter of which implies a violation of the weak energy condition. The backreaction of the quantum corrections can therefore lead to a phase of super-acceleration also in the non-interacting massive case.

  13. How Massive Single Stars End Their Life

    Science.gov (United States)

    Heger, A.; Fryer, C. L.; Woosley, S. E.; Langer, N.; Hartmann, D. H.

    2003-01-01

    How massive stars die-what sort of explosion and remnant each produces-depends chiefly on the masses of their helium cores and hydrogen envelopes at death. For single stars, stellar winds are the only means of mass loss, and these are a function of the metallicity of the star. We discuss how metallicity, and a simplified prescription for its effect on mass loss, affects the evolution and final fate of massive stars. We map, as a function of mass and metallicity, where black holes and neutron stars are likely to form and where different types of supernovae are produced. Integrating over an initial mass function, we derive the relative populations as a function of metallicity. Provided that single stars rotate rapidly enough at death, we speculate on stellar populations that might produce gamma-ray bursts and jet-driven supernovae.

  14. Electromagnetic form factors of a massive neutrino

    International Nuclear Information System (INIS)

    Dvornikov, M.S.; Studenikin, A.I.

    2004-01-01

    Electromagnetic form factors of a massive neutrino are studied in a minimally extended standard model in an arbitrary R_ξ gauge, taking into account the dependence on the masses of all interacting particles. The contributions from all Feynman diagrams to the electric, magnetic, and anapole form factors, in which the dependence on the masses of all particles as well as on the gauge parameters is accounted for exactly, are obtained for the first time in explicit form. The asymptotic behavior of the magnetic form factor for large negative squares of the momentum of an external photon is analyzed and the expression for the anapole moment of a massive neutrino is derived. The results are generalized to the case of mixing between various flavors of the neutrino. Explicit expressions are obtained for the electric, magnetic, and electric dipole and anapole transitional form factors as well as for the transitional electric dipole moment.

  15. HII regions in collapsing massive molecular clouds

    International Nuclear Information System (INIS)

    Yorke, H.W.; Bodenheimer, P.; Tenorio-Tagle, G.

    1982-01-01

    Results of two-dimensional numerical calculations of the evolution of HII regions associated with self-gravitating, massive molecular clouds are presented. Depending on the location of the exciting star, a champagne flow can occur concurrently with the central collapse of a nonrotating cloud. Partial evaporation of the cloud at a rate of about 0.005 solar masses/yr results. When 100 O-stars are placed at the center of a freely falling cloud of 3×10^5 solar masses no evaporation takes place. Rotating clouds collapse to disks and the champagne flow can evaporate the cloud at a higher rate (0.01 solar masses/yr). It is concluded that massive clouds containing OB-stars have lifetimes of no more than 10^7 yr. (Auth.)

  16. Compression measurement in laser driven implosion experiments

    International Nuclear Information System (INIS)

    Attwood, D.T.; Cambell, E.M.; Ceglio, N.M.; Lane, S.L.; Larsen, J.T.; Matthews, D.M.

    1981-01-01

    This paper discusses the measurement of compression in the context of the Inertial Confinement Fusion Program's transition from thin-walled exploding pusher targets to thicker walled targets which are designed to lead the way towards ablative-type implosions that will result in higher fuel density and ρR at burn time. These experiments promote desirable reactor conditions but pose diagnostic problems because of reduced multi-kilovolt x-ray and reaction product emissions, as well as increasingly more difficult transport problems for these emissions as they pass through the thicker ρR pusher conditions. Solutions to these problems, pointing the way toward higher energy two-dimensional x-ray images, new reaction product imaging ideas and the use of seed gases for both x-ray spectroscopic and nuclear activation techniques, are identified.

  17. Massive radiological releases profoundly differ from controlled releases

    International Nuclear Information System (INIS)

    Pascucci-Cahen, Ludivine; Patrick, Momal

    2013-01-01

    In this article, the authors report the identification and assessment of the different types of costs associated with nuclear accidents. They first outline that these cost assessments must be as exhaustive as possible. While referring to past accidents, they define the different categories of costs: on-site costs (decontamination and dismantling, electricity not produced on the site), off-site costs (health costs, psychological costs, farming losses), image-related costs (impact on food and farm product exports, decrease of other exports), costs related to energy production, and costs related to contaminated areas (refugees, lands). They give an assessment of a severe nuclear accident (i.e. an accident with significant but controlled radiological releases) in France and outline that it would be a national catastrophe which could nevertheless be managed. They discuss the possible variations of the estimated costs. Then, they show that a major accident (i.e. an accident with massive radiological releases) in France would be an unmanageable European catastrophe because of the radiological consequences, the high economic costs, and the huge losses.

  18. Nucleosynthesis and remnants in massive stars of solar metallicity

    International Nuclear Information System (INIS)

    Woosley, S.E.; Heger, A.

    2007-01-01

    Hans Bethe contributed in many ways to our understanding of the supernovae that happen in massive stars, but, to this day, a first-principles model of how the explosion is energized is lacking. Nevertheless, a quantitative theory of nucleosynthesis is possible. We present a survey of the nucleosynthesis that occurs in 32 stars of solar metallicity in the mass range 12-120 M_sun. The most recent set of solar abundances, opacities, mass loss rates, and current estimates of nuclear reaction rates are employed. Restrictions on the mass cut and explosion energy of the supernovae based upon nucleosynthesis, measured neutron star masses, and light curves are discussed and applied. The nucleosynthetic results, when integrated over a Salpeter initial mass function (IMF), agree quite well with what is seen in the sun. We discuss in some detail the production of the long-lived radioactivities ^26Al and ^60Fe, and why recent model-based estimates of the ratio ^60Fe/^26Al are overly large compared with what satellites have observed. A major source of the discrepancy is the uncertain nuclear cross sections for the creation and destruction of these unstable isotopes.

  19. Massively parallel evolutionary computation on GPGPUs

    CERN Document Server

    Tsutsui, Shigeyoshi

    2013-01-01

    Evolutionary algorithms (EAs) are metaheuristics inspired by natural collective behavior and are applied to solve optimization problems in domains such as scheduling, engineering, bioinformatics, and finance. Such applications demand acceptable solutions with high-speed execution using finite computational resources. Therefore, there have been many attempts to develop platforms for running parallel EAs using multicore machines, massively parallel cluster machines, or grid computing environments. Recent advances in general-purpose computing on graphics processing units (GPGPU) have opened up ...

  20. FMFT. Fully massive four-loop tadpoles

    Energy Technology Data Exchange (ETDEWEB)

    Pikelner, Andrey [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik

    2017-07-15

    We present FMFT - a package written in FORM that evaluates four-loop fully massive tadpole Feynman diagrams. It is a successor of the MATAD package, which has been successfully used to calculate many renormalization group functions at three-loop order in a wide range of quantum field theories, especially in the Standard Model. We describe the internal structure of the package and provide some examples of its usage.

  1. Towards Massive Machine Type Cellular Communications

    OpenAIRE

    Dawy, Zaher; Saad, Walid; Ghosh, Arunabha; Andrews, Jeffrey G.; Yaacoub, Elias

    2015-01-01

    Cellular networks have been engineered and optimized to carry ever-increasing amounts of mobile data, but over the last few years, a new class of applications based on machine-centric communications has begun to emerge. Automated devices such as sensors, tracking devices, and meters - often referred to as machine-to-machine (M2M) or machine-type communications (MTC) - introduce an attractive revenue stream for mobile network operators, if a massive number of them can be efficiently supported ...

  2. Massive Schwinger model at finite θ

    Science.gov (United States)

    Azcoiti, Vicente; Follana, Eduardo; Royo-Amondarain, Eduardo; Di Carlo, Giuseppe; Vaquero Avilés-Casco, Alejandro

    2018-01-01

    Using the approach developed by V. Azcoiti et al. [Phys. Lett. B 563, 117 (2003), 10.1016/S0370-2693(03)00601-4], we are able to reconstruct the behavior of the massive one-flavor Schwinger model with a θ term and a quantized topological charge. We calculate the full dependence of the order parameter on θ. Our results at θ = π are compatible with Coleman's conjecture on the phase diagram of this model.

  3. Harmonic polylogarithms for massive Bhabha scattering

    International Nuclear Information System (INIS)

    Czakon, M.; Riemann, T.

    2005-08-01

    One- and two-dimensional harmonic polylogarithms, HPLs and GPLs, appear in calculations of multi-loop integrals. We discuss them in the context of analytical solutions for two-loop master integrals in the case of massive Bhabha scattering in QED. For the GPLs we discuss analytical representations, conformal transformations, and also their transformations corresponding to relations between master integrals in the s- and t-channel. (orig.)

  4. Massive Open Online Courses and economic sustainability

    OpenAIRE

    Liyanagunawardena, Tharindu R.; Lundqvist, Karsten O.; Williams, Shirley A.

    2015-01-01

    Millions of users around the world have registered on Massive Open Online Courses (MOOCs) offered by hundreds of universities (and other organizations) worldwide. Creating and offering these courses costs thousands of pounds. However, at present, the revenue generated by MOOCs is not sufficient to offset these costs. The sustainability of MOOCs is a pressing concern, as they incur not only upfront creation costs but also maintenance costs to keep content relevant, as well as on-going facilitation costs ...

  5. Weakly interacting massive particles and stellar structure

    International Nuclear Information System (INIS)

    Bouquet, A.

    1988-01-01

    The existence of weakly interacting massive particles (WIMPs) may solve both the dark matter problem and the solar neutrino problem. Such particles affect the energy transport in stellar cores and change the stellar structure. We present the results of an analytic approximation to compute these effects in a self-consistent way. These results can be applied to many different stars, but we focus on the decrease of the ^8B neutrino flux in the case of the Sun.

  6. Non Pauli-Fierz Massive Gravitons

    CERN Document Server

    Dvali, Gia; Redi, Michele

    2008-01-01

    We study general Lorentz invariant theories of massive gravitons. We show that, contrary to the standard lore, there exist consistent theories where the graviton mass term violates the Pauli-Fierz structure. For theories where the graviton is a resonance, this does not imply the existence of a scalar ghost if the deviation from Pauli-Fierz becomes sufficiently small at high energies. These types of mass terms are required by any consistent realization of the DGP model in higher dimensions.

  7. FMFT: fully massive four-loop tadpoles

    Science.gov (United States)

    Pikelner, Andrey

    2018-03-01

    We present FMFT - a package written in FORM that evaluates four-loop fully massive tadpole Feynman diagrams. It is a successor of the MATAD package that has been successfully used to calculate many renormalization group functions at three-loop order in a wide range of quantum field theories especially in the Standard Model. We describe an internal structure of the package and provide some examples of its usage.

  8. On 3D Minimal Massive Gravity

    CERN Document Server

    Alishahiha, Mohsen; Naseh, Ali; Shirzad, Ahmad

    2014-12-03

    We study the linearized equations of motion of the newly proposed three-dimensional gravity, known as minimal massive gravity, using its metric formulation. We observe that the resulting linearized equations are exactly the same as those of TMG under a redefinition of the parameters of the model. In particular, the model admits logarithmic modes at the critical points. We also study several vacuum solutions of the model, especially in a certain limit where the contribution of the Chern-Simons term vanishes.

  9. Magnetic fields and massive star formation

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Qizhou; Keto, Eric; Ho, Paul T. P.; Ching, Tao-Chung; Chen, How-Huan [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Qiu, Keping [School of Astronomy and Space Science, Nanjing University, 22 Hankou Road, Nanjing 210093 (China); Girart, Josep M.; Juárez, Carmen [Institut de Ciències de l' Espai, (CSIC-IEEC), Campus UAB, Facultat de Ciències, C5p 2, E-08193 Bellaterra, Catalonia (Spain); Liu, Hauyu; Tang, Ya-Wen; Koch, Patrick M.; Rao, Ramprasad; Lai, Shih-Ping [Academia Sinica Institute of Astronomy and Astrophysics, P.O. Box 23-141, Taipei 106, Taiwan (China); Li, Zhi-Yun [Department of Astronomy, University of Virginia, P.O. Box 400325, Charlottesville, VA 22904 (United States); Frau, Pau [Observatorio Astronómico Nacional, Alfonso XII, 3 E-28014 Madrid (Spain); Li, Hua-Bai [Department of Physics, The Chinese University of Hong Kong, Hong Kong (China); Padovani, Marco [Laboratoire de Radioastronomie Millimétrique, UMR 8112 du CNRS, École Normale Supérieure et Observatoire de Paris, 24 rue Lhomond, F-75231 Paris Cedex 05 (France); Bontemps, Sylvain [OASU/LAB-UMR5804, CNRS, Université Bordeaux 1, F-33270 Floirac (France); Csengeri, Timea, E-mail: qzhang@cfa.harvard.edu [Max Planck Institute for Radioastronomy, Auf dem Hügel 69, D-53121 Bonn (Germany)

    2014-09-10

    Massive stars (M > 8 M_sun) typically form in parsec-scale molecular clumps that collapse and fragment, leading to the birth of a cluster of stellar objects. We investigate the role of magnetic fields in this process through dust polarization at 870 μm obtained with the Submillimeter Array (SMA). The SMA observations reveal polarization at scales of ≲0.1 pc. The polarization pattern in these objects ranges from ordered hour-glass configurations to more chaotic distributions. By comparing the SMA data with single-dish data at parsec scales, we found that magnetic fields at dense-core scales are either aligned within 40° of, or perpendicular to, the parsec-scale magnetic fields. This finding indicates that magnetic fields play an important role during the collapse and fragmentation of massive molecular clumps and the formation of dense cores. We further compare magnetic fields in dense cores with the major axes of molecular outflows. Despite a limited number of outflows, we found that the outflow axis appears to be randomly oriented with respect to the magnetic field in the core. This result suggests that at the scale of accretion disks (≲10^3 AU), angular momentum and dynamic interactions, possibly due to close binary or multiple systems, dominate over magnetic fields. With this unprecedentedly large sample of massive clumps, we argue on a statistical basis that magnetic fields play an important role during the formation of dense cores at spatial scales of 0.01-0.1 pc in the context of massive star and cluster star formation.

  10. Comment on ''Topologically Massive Gauge Theories''

    International Nuclear Information System (INIS)

    Bezerra de Mello, E.R.

    1988-01-01

    In a recent paper by R. Pisarski and S. Rao concerning topologically massive quantum Yang-Mills theory, the expression for the P-even part of the non-Abelian gauge field self-energy at one-loop order is shown to obey a consistency condition, which is not fulfilled by the formula originally presented by S. Deser, R. Jackiw, and S. Templeton. In this comment, I present a recalculation which agrees with Pisarski and Rao. Copyright 1988 Academic Press, Inc.

  11. SUPERDENSE MASSIVE GALAXIES IN WINGS LOCAL CLUSTERS

    International Nuclear Information System (INIS)

    Valentinuzzi, T.; D'Onofrio, M.; Fritz, J.; Poggianti, B. M.; Bettoni, D.; Fasano, G.; Moretti, A.; Omizzolo, A.; Varela, J.; Cava, A.; Couch, W. J.; Dressler, A.; Moles, M.; Kjaergaard, P.; Vanzella, E.

    2010-01-01

    Massive quiescent galaxies at z > 1 have been found to have small physical sizes, and hence to be superdense. Several mechanisms, including minor mergers, have been proposed for increasing galaxy sizes from high- to low-z. We search for superdense massive galaxies in the WIde-field Nearby Galaxy-cluster Survey (WINGS) of X-ray selected galaxy clusters at 0.04 < z < 0.07. The superdense galaxies found, with stellar masses above 3 x 10^10 M_sun, are mostly S0 galaxies, have a median effective radius R_e = 1.61 ± 0.29 kpc, a median Sersic index n = 3.0 ± 0.6, and very old stellar populations with a median mass-weighted age of 12.1 ± 1.3 Gyr. We calculate a number density of 2.9 x 10^-2 Mpc^-3 for superdense galaxies in local clusters, and a hard lower limit of 1.3 x 10^-5 Mpc^-3 in the whole comoving volume between z = 0.04 and z = 0.07. We find a relation between mass, effective radius, and luminosity-weighted age in our cluster galaxies, which can mimic the claimed evolution of the radius with redshift if not properly taken into account. We compare our data with spectroscopic high-z surveys and find that, when stellar masses are considered, there is consistency with the local WINGS galaxy sizes out to z ∼ 2, while a discrepancy of a factor of 3 exists with the only spectroscopic z > 2 study. In contrast, there is strong evidence for a large evolution in radius for the most massive galaxies with M* > 4 x 10^11 M_sun compared to similarly massive galaxies in WINGS, i.e., the brightest cluster galaxies.

  12. EVOLUTION OF MASSIVE PROTOSTARS VIA DISK ACCRETION

    International Nuclear Information System (INIS)

    Hosokawa, Takashi; Omukai, Kazuyuki; Yorke, Harold W.

    2010-01-01

    Mass accretion onto (proto-)stars at high accretion rates Mdot* > 10^-4 M_sun/yr is expected in massive star formation. We study the evolution of massive protostars at such high rates by numerically solving the stellar structure equations. In this paper, we examine the evolution via disk accretion. We consider a limiting case of 'cold' disk accretion, whereby most of the stellar photosphere can radiate freely with negligible backwarming from the accretion flow, and the accreting material settles onto the star with the same specific entropy as the photosphere. We compare our results to the calculated evolution via spherically symmetric accretion, the opposite limit, whereby the material accreting onto the star contains the entropy produced in the accretion shock front. We examine how different accretion geometries affect the evolution of massive protostars. For cold disk accretion at 10^-3 M_sun/yr, the radius of a protostar is initially small, R* ≅ a few R_sun. After several solar masses have accreted, the protostar begins to bloat up, and for M* ≅ 10 M_sun the stellar radius attains its maximum of 30-400 R_sun. The large radius ∼100 R_sun is also a feature of spherically symmetric accretion at the same accreted mass and accretion rate. Hence, expansion to a large radius is a robust feature of accreting massive protostars. At later times, the protostar eventually begins to contract and reaches the zero-age main sequence (ZAMS) for M* ≅ 30 M_sun, independent of the accretion geometry. For accretion rates exceeding several 10^-3 M_sun/yr, the protostar never contracts to the ZAMS. The very large radius of several hundred R_sun results in the low effective temperature and low UV luminosity of the protostar. Such bloated protostars could well explain the existence of bright high-mass protostellar objects which lack detectable H II regions.

  13. 29 CFR 1917.154 - Compressed air.

    Science.gov (United States)

    2010-07-01

    29 CFR § 1917.154 - Compressed air (Labor: Marine Terminals - Related Terminal Operations and Equipment), 2010-07-01 edition: "Employees shall be ... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed a ..."

  14. Double-compression method for biomedical images

    Science.gov (United States)

    Antonenko, Yevhenii A.; Mustetsov, Timofey N.; Hamdi, Rami R.; Małecka-Massalska, Teresa; Orshubekov, Nurbek; DzierŻak, RóŻa; Uvaysova, Svetlana

    2017-08-01

    This paper describes a double compression method (DCM) for biomedical images. A comparison of the image compression factors achieved with JPEG, PNG, and the developed DCM was carried out. The main purpose of the DCM is the compression of medical images while preserving the key points that carry diagnostic information. To estimate the minimum compression factor, an analysis of the coding of a random-noise image is presented.

  15. Extensive tumor reconstruction with massive allograft

    International Nuclear Information System (INIS)

    Zulmi Wan

    1999-01-01

    Massive deep-frozen bone allografts were implanted in four patients after wide tumor resection. Two cases were solitary proximal femur metastases, secondary to thyroid cancer and breast cancer respectively, while the other two cases were primary in nature, i.e. chondrosarcoma of the proximal humerus and osteosarcoma of the proximal femur. All were treated with a cemented alloprosthesis, except in the upper limb where shoulder fusion was performed. Augmentation of these techniques was done with a free vascularised fibular composite graft segment to the proximal femur of the breast secondaries and to the proximal humerus chondrosarcoma; coverage of the wound of the latter was also achieved with a latissimus dorsi flap. The present investigation demonstrated that the massive bone allografts were intimately anchored by host bone and that there was no evidence of aseptic loosening at the graft-cement interface. This study showed that, with good effective tumor control, reconstructive surgery with massive allografts represents a good alternative to prosthetic implants in tumors of the limbs. No infection was seen in any of the four cases.

  16. Cosmology in general massive gravity theories

    International Nuclear Information System (INIS)

    Comelli, D.; Nesti, F.; Pilo, L.

    2014-01-01

    We study the cosmological FRW flat solutions generated in general massive gravity theories. Such models are obtained by adding to the Einstein general relativity action peculiar non-derivative potentials, functions of the metric components, that induce the propagation of five gravitational degrees of freedom. This large class of theories includes both the case with residual Lorentz invariance and the case with rotational invariance only. It turns out that the Lorentz-breaking case is selected as the only possibility. Moreover, perturbations around strict Minkowski or dS space turn out to be strongly coupled. The upshot is that even though dark energy can be simply accounted for by massive gravity modifications, its equation of state w_eff has to deviate from -1. Indeed, there is an explicit relation between the strong coupling scale of perturbations and the deviation of w_eff from -1. Taking into account current limits on w_eff and submillimeter tests of Newton's law as a limit on the possible strong coupling scale, we find that it is still possible to have a weakly coupled theory in a quasi-dS background. Future experimental improvements on short-distance tests of Newton's law may be used to tighten the deviation of w_eff from -1 in a weakly coupled massive gravity theory.

  17. Massive transfusion protocols: current best practice

    Directory of Open Access Journals (Sweden)

    Hsu YM

    2016-03-01

    Full Text Available Yen-Michael S Hsu,1 Thorsten Haas,2 Melissa M Cushing1 1Department of Pathology and Laboratory Medicine, Weill Cornell Medical College, New York, NY, USA; 2Department of Anesthesia, University Children's Hospital Zurich, Zurich, Switzerland Abstract: Massive transfusion protocols (MTPs) are established to provide rapid blood replacement in a setting of severe hemorrhage. Early optimal blood transfusion is essential to sustain organ perfusion and oxygenation. There are many variables to consider when establishing an MTP, and studies have prospectively evaluated different scenarios and patient populations to establish the best practices to attain improved patient outcomes. The establishment and utilization of an optimal MTP is challenging given the ever-changing patient status during resuscitation efforts. Much of the MTP literature comes from the trauma population, because massive hemorrhage is the leading cause of preventable trauma-related death. As we come to further understand the positive and negative clinical impacts of transfusion-related factors, massive transfusion practice can be further refined. This article will first discuss specific MTPs targeting different patient populations and current relevant international guidelines. Then, we will examine a wide selection of therapeutic products to support MTPs, including newly available products and the most suitable of the traditional products. Lastly, we will discuss the best design for an MTP, including ratio-based MTPs and MTPs based on the use of point-of-care coagulation diagnostic tools. Keywords: hemorrhage, MTP, antifibrinolytics, coagulopathy, trauma, ratio, logistics, guidelines, hemostatic

  18. Galaxy bispectrum from massive spinning particles

    Science.gov (United States)

    Moradinezhad Dizgah, Azadeh; Lee, Hayden; Muñoz, Julian B.; Dvorkin, Cora

    2018-05-01

    Massive spinning particles, if present during inflation, lead to a distinctive bispectrum of primordial perturbations, the shape and amplitude of which depend on the masses and spins of the extra particles. This signal, in turn, leaves an imprint in the statistical distribution of galaxies; in particular, as a non-vanishing galaxy bispectrum, which can be used to probe the masses and spins of these particles. In this paper, we present for the first time a new theoretical template for the bispectrum generated by massive spinning particles, valid for a general triangle configuration. We then proceed to perform a Fisher-matrix forecast to assess the potential of two next-generation spectroscopic galaxy surveys, EUCLID and DESI, to constrain the primordial non-Gaussianity sourced by these extra particles. We model the galaxy bispectrum using tree-level perturbation theory, accounting for redshift-space distortions and the Alcock-Paczynski effect, and forecast constraints on the primordial non-Gaussianity parameters marginalizing over all relevant biases and cosmological parameters. Our results suggest that these surveys would potentially be sensitive to any primordial non-Gaussianity with an amplitude larger than f_NL ≈ 1, for massive particles with spins 2, 3, and 4. Interestingly, if non-Gaussianities are present at that level, these surveys will be able to infer the masses of these spinning particles to within tens of percent. If detected, this would provide a very clear window into the particle content of our Universe during inflation.

  19. Effects of massive transfusion on oxygen availability

    Directory of Open Access Journals (Sweden)

    José Otávio Costa Auler Jr

    Full Text Available OBJECTIVE: To determine oxygen-derived parameters, hemodynamic data, and biochemical laboratory data (2,3-diphosphoglycerate, lactate, and blood gas analysis) in patients after cardiac surgery who received massive blood replacement. DESIGN: Prospective study. SETTING: Heart Institute (Instituto do Coração), Hospital das Clínicas, Faculdade de Medicina, Universidade de São Paulo, Brazil. PARTICIPANTS: Twelve patients after cardiac surgery who received massive transfusion replacement; six of them had a fatal outcome within the three-day postoperative follow-up. MEASUREMENTS AND MAIN RESULTS: The non-survivors group (n=6) presented high lactate levels and low P50 levels when compared to the survivors group (p<0.05). Both groups presented an increase in oxygen consumption and O2 extraction, and there were no significant differences between them regarding these parameters. The 2,3-DPG levels were slightly reduced in both groups. CONCLUSIONS: This study shows that patients who are massively transfused following cardiovascular surgery present cell oxygenation disturbances, probably as a result of O2 transport inadequacy.

  20. Emergent universe with wormholes in massive gravity

    Science.gov (United States)

    Paul, B. C.; Majumdar, A. S.

    2018-03-01

    An emergent universe (EU) scenario is proposed to obtain a universe free from the big-bang singularity. In this framework the present universe emerged from a static Einstein universe phase in the infinite past. A flat EU scenario is found to exist in Einstein's gravity with a non-linear equation of state (EoS). It has been shown subsequently that a physically realistic EU model can be obtained by considering a cosmic fluid composed of interacting fluids with a non-linear equation of state, which results in a viable cosmological model accommodating both the early inflationary and the present accelerating phases. In the present paper, the origin of the initial static Einstein universe needed in the EU model is explored in a massive gravity theory, from which it subsequently emerges as a dynamically evolving universe. A new gravitational instanton solution in a flat universe is obtained in the massive gravity theory; it is a dynamical wormhole that might play an important role in realizing the origin of the initial state of the emergent universe. The emergence of a Lorentzian universe from Euclidean gravity is understood via a Wick rotation τ = it. A universe with radiation at the beginning finally transits into the presently observed universe with a non-linear EoS as the interactions among the fluids set in. Thus a viable flat EU scenario, where the universe stretches back into time infinitely with no big bang, is permitted in massive gravity.

  1. Transcatheter embolization therapy of massive colonic bleeding

    International Nuclear Information System (INIS)

    Shin, G. H.; Oh, J. H.; Yoon, Y.

    1996-01-01

    To evaluate the efficacy and safety of emergent superselective transcatheter embolization for controlling massive colonic bleeding. Six of the seven patients who had symptoms of massive gastrointestinal bleeding underwent emergent transcatheter embolization for control of the bleeding. The gastrointestinal bleeding in these patients originated from various colonic diseases: rectal cancer (n=1), proctitis (n=1), benign ulcer (n=1), mucosal injury by ventriculoperitoneal shunt (n=1), and unknown (n=2). All patients except the one with rectal cancer were critically ill. Superselective embolization was performed using Gelfoam particles and/or coils. The vessels embolized were the ileocolic artery (n=1), superior rectal artery (n=2), inferior rectal artery (n=1), and middle and inferior rectal arteries (n=1). Hemostasis was immediately successful in all patients. Two patients underwent surgery, due to recurrent bleeding 3 days after the procedure (n=1) or in association with the underlying rectal cancer (n=1). In the surgical specimens of these two cases, there was no mucosal ischemic change. Transcatheter embolization is a safe and effective treatment method for the control of massive colonic bleeding.

  2. Perceptual Image Compression in Telemedicine

    Science.gov (United States)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth", will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications of these techniques to telemedicine.
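
    The quantization step that the first technique feeds into can be sketched in a few lines of Python. In the sketch below, the standard JPEG luminance table and a single scale factor k are hypothetical stand-ins for the viewing-distance/display formula, which the abstract does not spell out:

        import numpy as np
        from scipy.fft import dctn, idctn

        # Standard JPEG luminance quantization table (baseline).
        Q = np.array([
            [16, 11, 10, 16,  24,  40,  51,  61],
            [12, 12, 14, 19,  26,  58,  60,  55],
            [14, 13, 16, 24,  40,  57,  69,  56],
            [14, 17, 22, 29,  51,  87,  80,  62],
            [18, 22, 37, 56,  68, 109, 103,  77],
            [24, 35, 55, 64,  81, 104, 113,  92],
            [49, 64, 78, 87, 103, 121, 120, 101],
            [72, 92, 95, 98, 112, 100, 103,  99]], dtype=float)

        def quantize(block, k=1.0):
            # k > 1 quantizes more coarsely (e.g. for a larger viewing distance).
            return np.round(dctn(block, norm='ortho') / (k * Q))

        def reconstruct(coded, k=1.0):
            return idctn(coded * (k * Q), norm='ortho')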

  3. Early-age behaviour of concrete in massive structures, experimentation and modelling

    International Nuclear Information System (INIS)

    Zreiki, J.; Bouchelaghem, F.; Chaouche, M.

    2010-01-01

    This study is focused on the behaviour of concrete at early-age in massive structures, in relation with the prediction of both cracking risk and residual stresses, which is still a challenging task. In this paper, a 3D thermo-chemo-mechanical model has been developed, on the basis of complete material characterization experiments, in order to predict the early-age development of strains and residual stresses, and in order to assess the risk of cracking in massive concrete structures. The parameters of the proposed model were identified on two different concretes, High Performance Concrete and Fibrous Self-Compacted Concrete, from simple experiments in the laboratory: uniaxial tension and compression tests, dynamic Young's modulus measurements, free and autogenous shrinkages, and semi-adiabatic calorimetry. The proposed model has been implemented in a Finite Element code, and the numerical simulations of the laboratory tests have confirmed the consistency of the model. Furthermore, early-age experiments conducted on massive structures have also been simulated, in order to investigate the predictive capability of the model and to assess its performance in practical situations where varying temperatures are involved.

  4. Early-age behaviour of concrete in massive structures, experimentation and modelling

    Energy Technology Data Exchange (ETDEWEB)

    Zreiki, J., E-mail: zreiki@lmt.ens-cachan.f [ENS Cachan/CNRS UMR8535/UPMC/PRES UniverSud Paris, Cachan (France); Bouchelaghem, F. [ENS Cachan/CNRS UMR8535/UPMC/PRES UniverSud Paris, Cachan (France); UPMC Univ Paris 06 (France); Chaouche, M. [ENS Cachan/CNRS UMR8535/UPMC/PRES UniverSud Paris, Cachan (France)

    2010-10-15

    This study is focused on the behaviour of concrete at early-age in massive structures, in relation with the prediction of both cracking risk and residual stresses, which is still a challenging task. In this paper, a 3D thermo-chemo-mechanical model has been developed, on the basis of complete material characterization experiments, in order to predict the early-age development of strains and residual stresses, and in order to assess the risk of cracking in massive concrete structures. The parameters of the proposed model were identified on two different concretes, High Performance Concrete and Fibrous Self-Compacted Concrete, from simple experiments in the laboratory: uniaxial tension and compression tests, dynamic Young's modulus measurements, free and autogenous shrinkages, and semi-adiabatic calorimetry. The proposed model has been implemented in a Finite Element code, and the numerical simulations of the laboratory tests have confirmed the consistency of the model. Furthermore, early-age experiments conducted on massive structures have also been simulated, in order to investigate the predictive capability of the model and to assess its performance in practical situations where varying temperatures are involved.

  5. Evaluation of a new image compression technique

    International Nuclear Information System (INIS)

    Algra, P.R.; Kroon, H.M.; Noordveld, R.B.; DeValk, J.P.J.; Seeley, G.W.; Westerink, P.H.

    1988-01-01

    The authors present the evaluation of a new image compression technique, subband coding using vector quantization, on 44 CT examinations of the upper abdomen. Three independent radiologists reviewed the original images and the compressed versions. The compression ratios used were 16:1 and 20:1. Receiver operating characteristic analysis showed no difference in diagnostic content between the originals and their compressed versions. Subjective visibility of anatomic structures was equal. Except for a few 20:1 compressed images, the observers could not distinguish compressed versions from original images. They conclude that subband coding using vector quantization is a valuable method for data compression in CT scans of the abdomen.
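
    As a rough illustration of the vector-quantization half of such a codec (not the subband pipeline evaluated in the study), the following Python sketch learns a codebook of image patches and replaces each patch by a codebook index; the block size and codebook size are arbitrary choices here:

        import numpy as np
        from scipy.cluster.vq import kmeans, vq

        def vq_encode(img, block=4, codes=64):
            h, w = img.shape
            # Cut the image into non-overlapping block x block patches.
            patches = (img[:h - h % block, :w - w % block]
                       .reshape(h // block, block, -1, block)
                       .swapaxes(1, 2)
                       .reshape(-1, block * block)
                       .astype(float))
            codebook, _ = kmeans(patches, codes)   # learn representative patches
            indices, _ = vq(patches, codebook)     # nearest-codeword index per patch
            # Store codebook + indices: log2(codes) bits per patch instead of raw pixels.
            return codebook, indices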

  6. Harnessing Disorder in Compression Based Nanofabrication

    Science.gov (United States)

    Engel, Clifford John

    The future of nanotechnologies depends on the successful development of versatile, low-cost techniques for patterning micro- and nanoarchitectures. While most approaches to nanofabrication have focused primarily on making periodic structures at ever smaller length scales, with an ultimate goal of massively scaling their production, I have focused on introducing control into relatively disordered nanofabrication systems. Well-ordered patterns are increasingly unnecessary for a growing range of applications, from anti-biofouling coatings to light trapping to omniphobic surfaces. The ability to manipulate disorder, at will and over multiple length scales, starting with the nanoscale, can open new prospects for textured substrates and unconventional applications. Taking advantage of features previously considered defects, I have been able to develop nanofabrication techniques with potential for massive scalability and incorporation into a wide range of applications. This thesis first describes the manipulation of the non-Newtonian properties of liquid Ga and Ga alloys to confine the metal and metal alloys in gratings with sub-wavelength periodicities. Through a solid-to-liquid phase change, I was able to access the superior plasmonic properties of liquid Ga for the generation of surface plasmon polaritons (SPPs). The switching contrast between solid and liquid Ga confined in the nanogratings allowed for reversible manipulation of SPP properties through heating and cooling around the relatively low melting temperature of Ga (29.8 °C). The remaining chapters focus on the development and characterization of an all-polymer wrinkle material system. Wrinkles, spontaneous disordered features that are produced in response to compressive force, are ideal for a growing number of applications where fine feature control is no longer the main motivation. However, the mechanical limitations of many wrinkle systems have restricted the potential applications of wrinkled surfaces.

  7. Building indifferentiable compression functions from the PGV compression functions

    DEFF Research Database (Denmark)

    Gauravaram, P.; Bagheri, Nasour; Knudsen, Lars Ramkilde

    2016-01-01

    Preneel, Govaerts and Vandewalle (PGV) analysed the security of single-block-length block-cipher-based compression functions assuming that the underlying block cipher has no weaknesses. They showed that 12 out of 64 possible compression functions are collision and (second) preimage resistant. Black, Rogaway and Shrimpton formally proved this result in the ideal cipher model. However, in the indifferentiability security framework introduced by Maurer, Renner and Holenstein, all these 12 schemes are easily differentiable from a fixed input-length random oracle (FIL-RO) even when their underlying block ...

  8. Compression of Probabilistic XML Documents

    Science.gov (United States)

    Veldman, Irma; de Keijzer, Ander; van Keulen, Maurice

    Database techniques to store, query and manipulate data that contains uncertainty receive increasing research interest. Such UDBMSs can be classified according to their underlying data model: relational, XML, or RDF. We focus on uncertain XML DBMSs, with as representative example the Probabilistic XML model (PXML) of [10,9]. The size of a PXML document is obviously a factor in performance. There are PXML-specific techniques to reduce the size, such as a push-down mechanism that produces equivalent but more compact PXML documents. It can only be applied, however, where possibilities are dependent. For normal XML documents there also exist several techniques for compressing a document. Since Probabilistic XML is (a special form of) normal XML, it might benefit from these methods even more. In this paper, we show that existing compression mechanisms can be combined with PXML-specific compression techniques. We also show that the best compression rates are obtained with a combination of a PXML-specific technique and a rather simple generic DAG-compression technique.
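
    The generic DAG-compression technique mentioned can be sketched as hash-consing: identical subtrees are stored once and shared. A minimal Python sketch, assuming a tree given as (label, children) pairs and ignoring the PXML-specific probability annotations:

        def build_dag(node, table):
            # node = (label, [children]); identical subtrees receive the same id.
            label, children = node
            key = (label, tuple(build_dag(c, table) for c in children))
            if key not in table:
                table[key] = len(table)   # fresh id for an unseen subtree
            return table[key]

        table = {}
        leaf = ('b', [])
        tree = ('a', [('x', [leaf, leaf]), ('x', [leaf, leaf])])
        build_dag(tree, table)
        assert len(table) == 3   # 3 shared nodes replace 7 tree nodes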

  9. Plasma heating by adiabatic compression

    International Nuclear Information System (INIS)

    Ellis, R.A. Jr.

    1972-01-01

    These two lectures will cover the following three topics: (i) The application of adiabatic compression to toroidal devices is reviewed. The special case of adiabatic compression in tokamaks is considered in more detail, including a discussion of the equilibrium, scaling laws, and heating effects. (ii) The ATC (Adiabatic Toroidal Compressor) device, which was completed in May 1972, is described in detail. Compression of a tokamak plasma across a static toroidal field is studied in this device. The device is designed to produce a pre-compression plasma with a minor radius of 17 cm, a toroidal field of 20 kG, and a current of 90 kA. The compression leads to a plasma with a major radius of 38 cm and a minor radius of 10 cm. Scaling laws imply a density increase of a factor of 6, a temperature increase of a factor of 3, and a current increase of a factor of 2.4. An additional feature of ATC is that it is a large tokamak which operates without a copper shell. (iii) Data which show that the expected MHD behavior is largely observed are presented and discussed. (U.S.)
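
    The quoted factors follow from the textbook scaling laws for adiabatic compression in major radius (the relations below are the standard ones, not taken from the lectures themselves). Writing C = R_i/R_f for the compression ratio,

        \[
        \frac{n_f}{n_i} = C^{2}, \qquad \frac{T_f}{T_i} = C^{4/3}, \qquad \frac{I_f}{I_i} = C, \qquad \frac{a_f}{a_i} = C^{-1/2},
        \]

    a single value C ≈ 2.4 reproduces the quoted density factor (C^2 ≈ 5.8), temperature factor (C^{4/3} ≈ 3.2), and current factor, and takes the 17 cm pre-compression minor radius to 17/√2.4 ≈ 11 cm, close to the 10 cm quoted.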

  10. Concurrent data compression and protection

    International Nuclear Information System (INIS)

    Saeed, M.

    2009-01-01

    Data compression techniques involve transforming data of a given format, called the source message, to data of a smaller-sized format, called the codeword. The primary objective of data encryption is to ensure the security of data if it is intercepted by an eavesdropper. It transforms data of a given format, called plaintext, to another format, called ciphertext, using an encryption key or keys. Combining the processes of compression and encryption must therefore be done in this order, that is, compression followed by encryption, because all compression techniques rely heavily on the redundancies which are inherently a part of regular text or speech. The aim of this research is to combine the process of compression (using an existing scheme) with a new encryption scheme which is compatible with the encoding scheme embedded in the encoder. The technique proposed by the authors is new, unique, and highly secure. The deployment of a 'sentinel marker' enhances the security of the proposed TR-One algorithm from 2^44 ciphertexts to 2^44 + 2^20 ciphertexts, thus imposing extra challenges on intruders. (author)
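
    The compress-then-encrypt ordering the paper argues for is easy to demonstrate. A generic Python sketch, with zlib and Fernet (from the cryptography package) standing in for the paper's encoder and TR-One cipher, neither of which is publicly specified:

        import zlib
        from cryptography.fernet import Fernet

        key = Fernet.generate_key()
        cipher = Fernet(key)

        def protect(plaintext: bytes) -> bytes:
            # Compression must come first: ciphertext has no redundancy left to exploit.
            return cipher.encrypt(zlib.compress(plaintext))

        def recover(token: bytes) -> bytes:
            return zlib.decompress(cipher.decrypt(token))

        assert recover(protect(b'source message' * 100)) == b'source message' * 100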

  11. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Science.gov (United States)

    2010-07-01

    30 CFR § 75.1730 - Compressed air; general; compressed air systems (Mineral Resources), 2010-07-01 edition: "(a) All pressure vessels shall be constructed, installed ... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped with ..."

  12. Seeding magnetic fields for laser-driven flux compression in high-energy-density plasmas.

    Science.gov (United States)

    Gotchev, O V; Knauer, J P; Chang, P Y; Jang, N W; Shoup, M J; Meyerhofer, D D; Betti, R

    2009-04-01

    A compact, self-contained magnetic-seed-field generator (5 to 16 T) is the enabling technology for a novel flux-compression scheme in laser-driven targets. A magnetized target is directly irradiated by a kilojoule or megajoule laser to compress the preseeded magnetic field to thousands of teslas. A fast (300 ns), 80 kA current pulse delivered by a portable pulsed-power system is discharged into a low-mass coil that surrounds the laser target. A >15 T target field has been demonstrated using a hot spot of a compressed target. This can lead to the ignition of massive shells imploded with low velocity - a way of reaching higher gains than is possible with conventional ICF.
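
    The amplification at stake follows from flux conservation in the conducting shell of the compressed target; a back-of-envelope estimate (not the authors' simulations):

        \[
        \Phi = \pi R^{2} B = \text{const} \quad\Rightarrow\quad B_f = B_{\mathrm{seed}} \left(\frac{R_0}{R_f}\right)^{2},
        \]

    so a 15 T seed reaches about 1.5 kT at a radial convergence of R_0/R_f = 10, consistent with compression to thousands of teslas.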

  13. Reappraising the concept of massive transfusion in trauma

    DEFF Research Database (Denmark)

    Stanworth, Simon J; Morris, Timothy P; Gaarder, Christine

    2010-01-01

    ABSTRACT: INTRODUCTION: The massive-transfusion concept was introduced to recognize the dilutional complications resulting from large volumes of packed red blood cells (PRBCs). Definitions of massive transfusion vary and lack supporting clinical evidence. Damage-control resuscitation regimens of modern trauma care are targeted to the early correction of acute traumatic coagulopathy. The aim of this study was to identify a clinically relevant definition of trauma massive transfusion based on clinical outcomes. We also examined whether the concept was useful in the early prediction of massive transfusion ... Massive transfusion as a concept in trauma has limited utility, and emphasis should be placed on identifying patients with massive hemorrhage and acute traumatic coagulopathy.

  14. Thermodynamics inducing massive particles' tunneling and cosmic censorship

    International Nuclear Information System (INIS)

    Zhang, Baocheng; Cai, Qing-yu; Zhan, Ming-sheng

    2010-01-01

    By calculating the change of entropy, we prove that the first law of black hole thermodynamics leads to the tunneling probability of massive particles through the horizon, including the tunneling probability of massive charged particles from the Reissner-Nordstroem black hole and the Kerr-Newman black hole. Notably, we find that the trajectories of massive particles are close to those of massless particles near the horizon, although the trajectories of massive charged particles may be affected by electromagnetic forces. We show that Hawking radiation as massive particles tunneling does not lead to violation of the weak cosmic-censorship conjecture. (orig.)

  15. Thermodynamics inducing massive particles' tunneling and cosmic censorship

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Baocheng [Chinese Academy of Sciences, State Key Laboratory of Magnetic Resonances and Atomic and Molecular Physics, Wuhan Institute of Physics and Mathematics, Wuhan (China); Graduate University of Chinese Academy of Sciences, Beijing (China); Cai, Qing-yu [Chinese Academy of Sciences, State Key Laboratory of Magnetic Resonances and Atomic and Molecular Physics, Wuhan Institute of Physics and Mathematics, Wuhan (China); Zhan, Ming-sheng [Chinese Academy of Sciences, State Key Laboratory of Magnetic Resonances and Atomic and Molecular Physics, Wuhan Institute of Physics and Mathematics, Wuhan (China); Chinese Academy of Sciences, Center for Cold Atom Physics, Wuhan (China)

    2010-08-15

    By calculating the change of entropy, we prove that the first law of black hole thermodynamics leads to the tunneling probability of massive particles through the horizon, including the tunneling probability of massive charged particles from the Reissner-Nordstroem black hole and the Kerr-Newman black hole. Notably, we find that the trajectories of massive particles are close to those of massless particles near the horizon, although the trajectories of massive charged particles may be affected by electromagnetic forces. We show that Hawking radiation as massive particles tunneling does not lead to violation of the weak cosmic-censorship conjecture. (orig.)

  16. Rectal perforation by compressed air.

    Science.gov (United States)

    Park, Young Jin

    2017-07-01

    As the use of compressed air in industrial work has increased, so has the risk of associated pneumatic injury from its improper use. However, damage to the large intestine caused by compressed air is uncommon. Herein, a case of pneumatic rupture of the rectum is described. The patient was admitted to the emergency room complaining of abdominal pain and distension. A colleague had triggered a compressed-air nozzle over his buttock. On arrival, vital signs were stable, but physical examination revealed peritoneal irritation and marked distension of the abdomen. Computed tomography showed a large volume of air in the peritoneal cavity and subcutaneous emphysema at the perineum. A rectal perforation was found at laparotomy, and the Hartmann procedure was performed.

  17. Compact torus compression of microwaves

    International Nuclear Information System (INIS)

    Hewett, D.W.; Langdon, A.B.

    1985-01-01

    The possibility that a compact torus (CT) might be accelerated to large velocities has been suggested by Hartman and Hammer. If this is feasible, one application of these moving CTs might be to compress microwaves. The proposed mechanism is that a coaxial vacuum region in front of a CT is prefilled with a number of normal electromagnetic modes upon which the CT impinges. A crucial assumption of this proposal is that the CT excludes the microwaves and therefore compresses them. Should the microwaves penetrate the CT, compression efficiency is diminished and significant CT heating results. MFE applications in the same parameter regime have found electromagnetic radiation capable of penetrating, heating, and driving currents. We report here a cursory investigation of rf penetration using a 1-D version of a direct implicit PIC code.
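
    If the CT really does exclude the radiation, each trapped mode is compressed adiabatically, and the payoff can be estimated from the photon-number adiabatic invariant (standard reasoning, not a result of the paper):

        \[
        \frac{E}{\nu} = \text{const}, \qquad \nu \propto \frac{1}{L} \quad\Rightarrow\quad \frac{E_f}{E_i} = \frac{\nu_f}{\nu_i} = \frac{L_i}{L_f},
        \]

    so shrinking the prefilled coaxial region by a factor k up-shifts the mode frequency and multiplies the stored microwave energy by the same factor.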

  18. Premixed autoignition in compressible turbulence

    Science.gov (United States)

    Konduri, Aditya; Kolla, Hemanth; Krisman, Alexander; Chen, Jacqueline

    2016-11-01

    Prediction of chemical ignition delay in an autoignition process is critical in combustion systems like compression ignition engines and gas turbines. Often, ignition delay times measured in simple homogeneous experiments or homogeneous calculations are not representative of actual autoignition processes in complex turbulent flows. This is due to the presence of turbulent mixing, which results in fluctuations in thermodynamic properties as well as in chemical composition. In the present study, the effect of fluctuations of thermodynamic variables on the ignition delay is quantified with direct numerical simulations of compressible isotropic turbulence. A premixed syngas-air mixture is used to remove the effects of inhomogeneity in the chemical composition. Preliminary results show a significant spatial variation in the ignition delay time. We analyze the topology of autoignition kernels and identify the influence of extreme events resulting from compressibility and intermittency. The dependence of ignition delay time on Reynolds and turbulent Mach numbers is also quantified. Supported by Basic Energy Sciences, Dept of Energy, United States.

  19. Lossless Compression of Broadcast Video

    DEFF Research Database (Denmark)

    Martins, Bo; Eriksen, N.; Faber, E.

    1998-01-01

    We investigate several techniques for lossless and near-lossless compression of broadcast video. The emphasis is placed on the emerging international standard for compression of continuous-tone still images, JPEG-LS, due to its excellent compression performance and moderate complexity. Except for one ... cannot be expected to code losslessly at a rate of 125 Mbit/s. We investigate the rate and quality effects of quantization using standard JPEG-LS quantization and two new techniques: visual quantization and trellis quantization. Visual quantization is not part of baseline JPEG-LS, but is applicable in the framework of JPEG-LS. Visual tests show that this quantization technique gives much better quality than standard JPEG-LS quantization. Trellis quantization is a process by which the original image is altered in such a way as to make lossless JPEG-LS encoding more effective. For JPEG-LS and visual ...

  20. Efficient access of compressed data

    International Nuclear Information System (INIS)

    Eggers, S.J.; Shoshani, A.

    1980-06-01

    A compression technique is presented that allows a high degree of compression but requires only logarithmic access time. The technique is a constant suppression scheme, and is most applicable to stable databases whose distribution of constants is fairly clustered. Furthermore, repeated application of the technique permits the suppression of several different constants. Of particular interest is the application of the constant suppression technique to databases whose composite key is made up of an incomplete cross product of several attribute domains. The scheme for compressing the full cross-product composite key is well known. This paper, however, also handles the general, incomplete case by applying the constant suppression technique in conjunction with a composite key suppression scheme.
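
    The scheme's essentials fit in a few lines: store only the exceptions to the suppressed constant, sorted by index, and recover any element with a binary search. A minimal Python sketch under the simplest assumption of a single suppressed constant:

        import bisect

        class ConstantSuppressed:
            """Store only the exceptions to one dominant constant."""
            def __init__(self, values, constant=0):
                self.constant = constant
                self.idx = [i for i, v in enumerate(values) if v != constant]
                self.val = [values[i] for i in self.idx]

            def __getitem__(self, i):
                j = bisect.bisect_left(self.idx, i)    # O(log m) over m exceptions
                if j < len(self.idx) and self.idx[j] == i:
                    return self.val[j]
                return self.constant

        data = [0, 0, 7, 0, 0, 0, 3, 0]
        cs = ConstantSuppressed(data)
        assert [cs[i] for i in range(len(data))] == data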

  1. Compressibility of rotating black holes

    International Nuclear Information System (INIS)

    Dolan, Brian P.

    2011-01-01

    Interpreting the cosmological constant as a pressure, whose thermodynamically conjugate variable is a volume, modifies the first law of black hole thermodynamics. Properties of the resulting thermodynamic volume are investigated: the compressibility and the speed of sound of the black hole are derived in the case of nonpositive cosmological constant. The adiabatic compressibility vanishes for a nonrotating black hole and is maximal in the extremal case - comparable with, but still less than, that of a cold neutron star. A speed of sound v_s is associated with the adiabatic compressibility, which is equal to c for a nonrotating black hole and decreases as the angular momentum is increased. An extremal black hole has v_s^2 = 0.9 c^2 when the cosmological constant vanishes, and more generally v_s is bounded below by c/√2.
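
    The quantities involved are the standard adiabatic compressibility and its associated sound speed, here evaluated at fixed entropy and angular momentum and with ρ = M/V (standard thermodynamic definitions, quoted for orientation rather than from the paper):

        \[
        \kappa_{S} = -\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_{S,J}, \qquad v_{s}^{2} = \left(\frac{\partial P}{\partial \rho}\right)_{S,J} = \frac{1}{\rho\,\kappa_{S}}.
        \]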

  2. Compressive behavior of fine sand.

    Energy Technology Data Exchange (ETDEWEB)

    Martin, Bradley E. (Air Force Research Laboratory, Eglin, FL); Kabir, Md. E. (Purdue University, West Lafayette, IN); Song, Bo; Chen, Wayne (Purdue University, West Lafayette, IN)

    2010-04-01

    The compressive mechanical response of fine sand is experimentally investigated. The strain rate, initial density, stress state, and moisture level are systematically varied. A Kolsky bar was modified to obtain the uniaxial and triaxial compressive response at high strain rates. A controlled loading pulse allows the specimen to acquire stress equilibrium and constant strain rates. The results show that the compressive response of the fine sand is not sensitive to strain rate under the loading conditions in this study, but depends significantly on the moisture content, initial density, and lateral confinement. Partially saturated sand is more compliant than dry sand. Similar trends were reported in the quasi-static regime for experiments conducted at comparable specimen conditions. The sand becomes stiffer as the initial density and/or confinement pressure increases. The sand particle size becomes smaller after hydrostatic pressure and smaller still after dynamic axial loading.

  3. Correlations between quality indexes of chest compression.

    Science.gov (United States)

    Zhang, Feng-Ling; Yan, Li; Huang, Su-Fang; Bai, Xiang-Jun

    2013-01-01

    Cardiopulmonary resuscitation (CPR) is an emergency treatment for cardiopulmonary arrest, and chest compression is the most important and necessary part of CPR. The American Heart Association published the new Guidelines for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care in 2010, demanding better performance of chest compression, especially in compression depth and rate. The current study explored the relationships among the quality indexes of chest compression and identified the key points in chest compression training and practice. In total, 219 healthcare workers received chest compression training using the Laerdal ACLS advanced life support resuscitation model. The quality indexes of chest compression, including hand placement, compression rate, compression depth, and chest wall recoil, as well as self-reported fatigue time, were monitored by the Laerdal Computer Skills and Reporting System. The quality of chest compression was related to the gender of the compressor. The indexes in males, including self-reported fatigue time, accuracy of compression depth, compression rate, and accuracy of compression rate, were higher than those in females, whereas the accuracy of chest recoil was higher in females than in males. The quality indexes of chest compression were correlated with each other, and self-reported fatigue time was related to all the indexes except compression rate. It is necessary to offer CPR training courses regularly. In clinical practice, it might be better to change the practitioner before fatigue sets in, especially for female or weaker practitioners. In training projects, more attention should be paid to the control of compression rate, in order to delay fatigue, guarantee sufficient compression depth, and improve the quality of chest compression.

  4. The compression algorithm for the data acquisition system in HT-7 tokamak

    International Nuclear Information System (INIS)

    Zhu Lin; Luo Jiarong; Li Guiming; Yue Dongli

    2003-01-01

    The HT-7 superconducting tokamak at the Institute of Plasma Physics of the Chinese Academy of Sciences is an experimental device for fusion research in China. The main task of the HT-7 data acquisition system is to acquire, store, analyze and index the data, whose volume approaches hundreds of megabytes. Beyond hardware and software support, providing sufficient capacity for data storage, processing and transfer is the more important problem, and the key technology for addressing it is the data compression algorithm. In this paper, the data format in HT-7 is introduced first, and then the compression algorithm LZO, a portable lossless data compression library written in ANSI C, is analyzed. This algorithm, which fits well with data acquisition and distribution in a nuclear fusion experiment, offers fast compression and extremely fast decompression. Finally, a performance evaluation of the LZO application in HT-7 is given.
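
    The acquire-compress-store path described here is generic enough to sketch. In the Python sketch below, zlib from the standard library stands in for the LZO codec (whose Python bindings are a separate third-party package); the file layout and function names are illustrative only:

        import zlib

        def store_shot(raw: bytes, path: str) -> float:
            compressed = zlib.compress(raw, level=6)
            with open(path, 'wb') as f:
                f.write(compressed)
            return len(compressed) / len(raw)       # achieved compression ratio

        def load_shot(path: str) -> bytes:
            with open(path, 'rb') as f:
                return zlib.decompress(f.read())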

  5. Shock waves in relativistic nuclear matter, I

    International Nuclear Information System (INIS)

    Gleeson, A.M.; Raha, S.

    1979-02-01

    The relativistic Rankine-Hugoniot relations are developed for a 3-dimensional plane shock and a 3-dimensional oblique shock. Using these discontinuity relations together with various equations of state for nuclear matter, the temperatures and compressibilities attainable by shock compression are calculated for a wide range of laboratory kinetic energies of the projectile. 12 references.
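
    For orientation, the one-dimensional relativistic junction conditions being developed have the standard (Taub) form, with n the proper baryon density, w = e + p the proper enthalpy density, and u = γv the four-velocity component normal to the front (quoted from standard texts, not from the paper):

        \[
        n_{1} u_{1} = n_{2} u_{2}, \qquad w_{1} \gamma_{1} u_{1} = w_{2} \gamma_{2} u_{2}, \qquad w_{1} u_{1}^{2} + p_{1} = w_{2} u_{2}^{2} + p_{2}.
        \]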

  6. Pressure Infusion Cuff and Blood Warmer during Massive Transfusion: An Experimental Study About Hemolysis and Hypothermia.

    Science.gov (United States)

    Poder, Thomas G; Pruneau, Denise; Dorval, Josée; Thibault, Louis; Fisette, Jean-François; Bédard, Suzanne K; Jacques, Annie; Beauregard, Patrice

    2016-01-01

    Blood warmers were developed to reduce the risk of hypothermia associated with the infusion of cold blood products. During massive transfusion, these devices are used with a compression sleeve, which induces major stress on red blood cells. In this setting, the combination of a blood warmer and a compression sleeve could generate hemolysis and harm the patient. We conducted this study to compare the impact of different pressure rates on the hemolysis of packed red blood cells and on the outlet temperature when a blood warmer set at 41.5°C is used. The pressure rates tested were 150 and 300 mmHg. Ten packed red blood cell units were provided by Héma-Québec, and each unit was tested sequentially. We found no increase in hemolysis at either 150 or 300 mmHg. By contrast, we found that the blood warmer was not effective at warming the red blood cells to the specified temperature. At 150 mmHg, the outlet temperature reached 37.1°C, and at 300 mmHg, the temperature was 33.7°C. Using a blood warmer set at 41.5°C in conjunction with a compression sleeve at 150 or 300 mmHg does not generate hemolysis. At 300 mmHg, a blood warmer set at 41.5°C does not totally avoid the risk of hypothermia.

  7. Excessive chest compression rate is associated with insufficient compression depth in prehospital cardiac arrest

    NARCIS (Netherlands)

    Monsieurs, Koenraad G.; De Regge, Melissa; Vansteelandt, Kristof; De Smet, Jeroen; Annaert, Emmanuel; Lemoyne, Sabine; Kalmar, Alain F.; Calle, Paul A.

    2012-01-01

    Background and goal of study: The relationship between chest compression rate and compression depth is unknown. In order to characterise this relationship, we performed an observational study in prehospital cardiac arrest patients. We hypothesised that faster compressions are associated with decreased compression depth.

  8. A new method of on-line multiparameter amplitude analysis with compression

    International Nuclear Information System (INIS)

    Morhac, M.; Matousek, V.

    1996-01-01

    An algorithm for on-line multidimensional amplitude analysis with compression, using a fast adaptive orthogonal transform, is presented in the paper. The method is based on a direct modification of the multiplication coefficients of the signal flow graph of the fast Cooley-Tukey algorithm; the coefficients are modified according to a reference vector representing the processed data. The method has been tested by compressing three-parameter experimental nuclear data. The efficiency of the derived adaptive transform is compared with classical orthogonal transforms. (orig.)
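
    A minimal sketch of the general idea of transform-domain compression: transform the data with a fast orthogonal transform and keep only the largest coefficients (a plain FFT is used here for illustration; the paper's adaptive modification of the Cooley-Tukey flow graph is not reproduced):

        import numpy as np

        def transform_compress(x, keep=0.05):
            """Zero all but the largest `keep` fraction of FFT coefficients."""
            X = np.fft.rfft(x)
            k = max(1, int(keep * X.size))
            thresh = np.sort(np.abs(X))[-k]
            X[np.abs(X) < thresh] = 0.0  # in practice only kept indices/values are stored
            return X

        rng = np.random.default_rng(1)
        x = np.sin(np.linspace(0, 30, 4096)) + 0.05 * rng.normal(size=4096)
        x_rec = np.fft.irfft(transform_compress(x), n=x.size)
        print("rms reconstruction error:", np.sqrt(np.mean((x - x_rec) ** 2)))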

  9. Revealing evolved massive stars with Spitzer

    Science.gov (United States)

    Gvaramadze, V. V.; Kniazev, A. Y.; Fabrika, S.

    2010-06-01

    Massive evolved stars lose a large fraction of their mass via copious stellar wind or instant outbursts. During certain evolutionary phases, they can be identified by the presence of their circumstellar nebulae. In this paper, we present the results of a search for compact nebulae (reminiscent of circumstellar nebulae around evolved massive stars) using archival 24-μm data obtained with the Multiband Imaging Photometer for Spitzer. We have discovered 115 nebulae, most of which bear a striking resemblance to the circumstellar nebulae associated with luminous blue variables (LBVs) and late WN-type (WNL) Wolf-Rayet (WR) stars in the Milky Way and the Large Magellanic Cloud (LMC). We interpret this similarity as an indication that the central stars of detected nebulae are either LBVs or related evolved massive stars. Our interpretation is supported by follow-up spectroscopy of two dozen of these central stars, most of which turn out to be either candidate LBVs (cLBVs), blue supergiants or WNL stars. We expect that the forthcoming spectroscopy of the remaining objects from our list, accompanied by the spectrophotometric monitoring of the already discovered cLBVs, will further increase the known population of Galactic LBVs. This, in turn, will have profound consequences for better understanding the LBV phenomenon and its role in the transition between hydrogen-burning O stars and helium-burning WR stars. We also report on the detection of an arc-like structure attached to the cLBV HD 326823 and an arc associated with the LBV R99 (HD 269445) in the LMC. Partially based on observations collected at the German-Spanish Astronomical Centre, Calar Alto, jointly operated by the Max-Planck-Institut für Astronomie Heidelberg and the Instituto de Astrofísica de Andalucía (CSIC). E-mail: vgvaram@mx.iki.rssi.ru (VVG); akniazev@saao.ac.za (AYK); fabrika@sao.ru (SF)

  10. Russia's nuclear elite on rampage

    International Nuclear Information System (INIS)

    Popova, L.

    1993-01-01

    In July 1992, the Russian Ministry of Nuclear Industry began pressing the Russian government to adopt a plan to build new nuclear power plants. In mid-January 1993 the government announced that it would build at least 30 new nuclear power plants, and that the second stage of the building program would include construction of three fast-breeder reactors. In this article, the author addresses the rationale behind this massive building program, pursued despite the country's economic condition and public dread of another Chernobyl-type accident. The viewpoints of both the Russian Ministry of Nuclear Industry and opposing interests are discussed

  11. Widespread after-effects of nuclear war

    International Nuclear Information System (INIS)

    Teller, E.

    1984-01-01

    Radioactive fallout and depletion of the ozone layer, once believed to be catastrophic consequences of nuclear war, are now proved unimportant in comparison with immediate war damage. Today, ''nuclear winter'' is claimed to have apocalyptic effects. Uncertainties in massive smoke production and in meteorological phenomena give reason to doubt this conclusion. (author)

  12. A Massively Parallel Code for Polarization Calculations

    Science.gov (United States)

    Akiyama, Shizuka; Höflich, Peter

    2001-03-01

    We present an implementation, for massively parallel computers, of our Monte Carlo radiation transport method for rapidly expanding NLTE atmospheres; it utilizes both the distributed and shared memory models. This allows us to take full advantage of the fast communication and low latency inherent to nodes with multiple CPUs, and to stretch the limits of scalability with the number of nodes compared to a version based on the shared memory model alone. Test calculations on a local 20-node Beowulf cluster with dual CPUs showed an improved scalability by about 40%.

  13. Deflection of massive neutrinos by gravitational fields

    International Nuclear Information System (INIS)

    Fargion, D.

    1981-01-01

    The curvature undergone by massive neutrino trajectories passing a mass M at a distance b from the center of the body is examined. The calculation leads to the deflection angle δφ = (2GM / b β∞² c²)(1 + β∞²), where β∞ is the dimensionless velocity of the particle at infinity. The ultrarelativistic limit (β∞ = 1) gives δφ = 4GM/bc², which coincides with the usual massless deflection. Physical consequences are considered. (author)

  14. Body contouring following massive weight loss

    Directory of Open Access Journals (Sweden)

    Vijay Langer

    2011-01-01

    Obesity is a global disease of epidemic proportions. Bariatric surgery or modified lifestyles go a long way in mitigating the vast weight gain. Patients undergoing these interventions usually experience massive weight loss, which results in redundant tissues in various parts of the body. Loose skin causes increased morbidity and psychological trauma, demanding various body contouring procedures that are usually excisional. These procedures are complex and part of a painstaking process that needs a committed patient and an industrious plastic surgeon. As complications in these patients can be quite frequent, both the patient and the surgeon need to be aware of them and willing to deal with them.

  15. Non-Pauli-Fierz Massive Gravitons

    International Nuclear Information System (INIS)

    Dvali, Gia; Pujolas, Oriol; Redi, Michele

    2008-01-01

    We study general Lorentz invariant theories of massive gravitons. We show that, contrary to the standard lore, there exist consistent theories where the graviton mass term violates the Pauli-Fierz structure. For theories where the graviton is a resonance, this does not imply the existence of a scalar ghost if the deviation from the Pauli-Fierz structure becomes sufficiently small at high energies. Mass terms of this type are required by any consistent realization of the Dvali-Gabadadze-Porrati model in higher dimensions

  16. Massive Preperitoneal Hematoma after a Subcutaneous Injection

    Directory of Open Access Journals (Sweden)

    Hideki Katagiri

    2016-01-01

    Preperitoneal hematomas are rare and can develop after surgery or trauma. A 74-year-old woman receiving systemic anticoagulation developed a massive preperitoneal hematoma after a subcutaneous injection of teriparatide using a 32-gauge, 4 mm needle. In this patient, two factors were associated with development of the hematoma: the subcutaneous injection of teriparatide and systemic anticoagulation. These two factors are especially significant because both are widely used clinically. Although extremely rare, physicians must consider this potentially life-threatening complication after subcutaneous injections, especially in patients receiving anticoagulation.

  17. Hadroproduction of massive lepton pairs and QCD

    International Nuclear Information System (INIS)

    Berger, E.L.

    1979-04-01

    A survey is presented of some current issues of interest in attempts to describe the production of massive lepton pairs in hadronic collisions at high energies. I concentrate on the interpretation of data in terms of the parton model and on predictions derived from quantum-chromodynamics (QCD), their reliability and their confrontation with experiment. Among topics treated are the connection with deep-inelastic lepton scattering, universality of structure functions, and the behavior of cross-sections as a function of transverse momentum

  18. Discovery of massive neutral vector mesons

    International Nuclear Information System (INIS)

    Anon.

    1976-01-01

    Personal accounts of the discovery of massive neutral vector mesons (psi particles) are given by researchers S. Ting, G. Goldhaber, and B. Richter. The double-arm spectrometer and the Cherenkov effect are explained in a technical note, and the solenoidal magnetic detector is discussed in an explanatory note for nonspecialists. Reprints of three papers in Physical Review Letters which announced the discovery of the particles are given: Experimental observation of a heavy particle J, Discovery of a narrow resonance in e⁺e⁻ annihilation, and Discovery of a second narrow resonance in e⁺e⁻ annihilation. A discussion of subsequent developments and scientific biographies of the three authors are also presented. 25 figures

  19. Monopole Solutions in Topologically Massive Gauge Theory

    International Nuclear Information System (INIS)

    Teh, Rosy; Wong, Khai-Ming; Koh, Pin-Wai

    2010-01-01

    Monopoles in topologically massive SU(2) Yang-Mills-Higgs gauge theory in 2+1 dimensions with a Chern-Simons mass term were studied by Pisarski some years ago. He argued that there is a monopole solution that is regular everywhere, but found that it does not possess finite action; he presented no exact or numerical solutions. Hence it is our purpose to investigate this solution in more detail. We obtained numerical regular solutions that smoothly interpolate between the behavior at small and large distances, for different values of the Chern-Simons term strength and for several fixed values of the Higgs field strength.

  20. Massively parallel Fokker-Planck code ALLAp

    International Nuclear Information System (INIS)

    Batishcheva, A.A.; Krasheninnikov, S.I.; Craddock, G.G.; Djordjevic, V.

    1996-01-01

    The Fokker-Planck code ALLA, recently developed for workstations, simulates the temporal evolution of 1V, 2V and 1D2V collisional edge plasmas. In this work we present the results of parallelizing the code on the CRI T3D massively parallel platform (the ALLAp version). We also benchmark the 1D2V parallel version against an analytic self-similar solution of the collisional kinetic equation. This test is not trivial, as it demands a very strong spatial temperature and density variation within the simulation domain. (orig.)

  1. Massive Asynchronous Parallelization of Sparse Matrix Factorizations

    Energy Technology Data Exchange (ETDEWEB)

    Chow, Edmond [Georgia Inst. of Technology, Atlanta, GA (United States)

    2018-01-08

    Solving sparse problems is at the core of many DOE computational science applications. We focus on the challenge of developing sparse algorithms that can fully exploit the parallelism in extreme-scale computing systems, in particular systems with massive numbers of cores per node. Our approach is to express a sparse matrix factorization as a large number of bilinear constraint equations and then to solve these equations via an asynchronous iterative method. The unknowns in these equations are the matrix entries of the desired factorization.
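
    For an LU-type factorization the bilinear constraints read (LU)_ij = a_ij on the sparsity pattern, with unit diagonal in L. A minimal dense-matrix sketch of solving them by Jacobi-style fixed-point sweeps follows; each sweep is embarrassingly parallel over the entries, which is what makes the asynchronous formulation attractive, although the asynchrony itself is not modelled here and the test matrix is an assumption:

        import numpy as np

        def fixed_point_lu(A, sweeps=30):
            """Solve (L @ U)[i, j] = A[i, j] by fixed-point sweeps, L unit lower triangular."""
            n = A.shape[0]
            L, U = np.eye(n), np.triu(A).astype(float)   # simple initial guess
            for _ in range(sweeps):
                Lk, Uk = L.copy(), U.copy()              # Jacobi: read previous sweep only
                for i in range(n):
                    for j in range(n):
                        m = min(i, j)
                        s = Lk[i, :m] @ Uk[:m, j]        # partial inner product
                        if i > j:
                            L[i, j] = (A[i, j] - s) / Uk[j, j]
                        else:
                            U[i, j] = A[i, j] - s
            return L, U

        rng = np.random.default_rng(0)
        A = rng.normal(size=(6, 6)) + 6 * np.eye(6)      # diagonally dominant test matrix
        L, U = fixed_point_lu(A)
        print("max constraint residual:", np.max(np.abs(L @ U - A)))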

  2. The Black Hole Radiation in Massive Gravity

    Directory of Open Access Journals (Sweden)

    Ivan Arraut

    2018-02-01

    We apply Bogoliubov transformations in order to connect two different vacua, one located at past infinity and another located at future infinity around a black hole, within the scenario of the nonlinear theory of massive gravity. The presence of the extra degrees of freedom changes the behavior of the logarithmic singularity and, as a consequence, the relation between the two Bogoliubov coefficients. This has an effect on the number of particles or, equivalently, on the black hole temperature perceived by observers defining the time arbitrarily.

  3. The impact of chest compression rates on quality of chest compressions : a manikin study

    OpenAIRE

    Field, Richard A.; Soar, Jasmeet; Davies, Robin P.; Akhtar, Naheed; Perkins, Gavin D.

    2012-01-01

    Purpose: Chest compressions are often performed at a variable rate during cardiopulmonary resuscitation (CPR). The effect of compression rate on other chest compression quality variables (compression depth, duty-cycle, leaning, performance decay over time) is unknown. This randomised controlled cross-over manikin study examined the effect of different compression rates on the other chest compression quality variables. Methods: Twenty healthcare professionals performed two minutes of co...

  4. Massive computation methodology for reactor operation (MACRO)

    International Nuclear Information System (INIS)

    Gustavsson, Cecilia; Pomp, Stephan; Sjoestrand, Henrik; Wallin, Gustav; Oesterlund, Michael; Koning, Arjan; Rochman, Dimitri; Bejmer, Klaes-Hakan; Henriksson, Hans

    2010-01-01

    Today, nuclear data libraries do not handle uncertainties from nuclear data in a consistent manner, and reactor codes do not request uncertainties in their nuclear data input; thus, the output from these codes has unknown uncertainties. The plan is to use a method proposed by Koning and Rochman to investigate the propagation of nuclear data uncertainties into reactor physics codes and macroscopic parameters. A project (acronym MACRO) has started at Uppsala University in collaboration with A. Koning and with financial support from Vattenfall AB and the Swedish Research Council within the GENIUS (Generation IV research in universities of Sweden) project. In the proposed method the uncertainties in nuclear model parameters will be derived from theoretical considerations and from comparisons of nuclear model results with experimental cross-section data. Given the probability distribution of the model parameters, a large set of random, complete ENDF-formatted nuclear data libraries will be created using the TALYS code. The generated nuclear data libraries will then be used in neutron transport codes to obtain macroscopic reactor parameters, using models of reactor systems with proper geometry and elements. This will be done for all data libraries, and the variation of the final results will be regarded as a systematic uncertainty in the investigated reactor parameter. Understanding these systematic uncertainties is especially important for the design and intercomparison of new reactor concepts, i.e. Generation IV, and optimization applications for current-generation reactors are also envisaged. (authors)

  6. Compression-based aggregation model for medical web services.

    Science.gov (United States)

    Al-Shammary, Dhiah; Khalil, Ibrahim

    2010-01-01

    Many organizations such as hospitals have adopted Cloud Web services in their network services to avoid investing heavily in computing infrastructure. SOAP (Simple Object Access Protocol), the basic communication protocol of Cloud Web services, is an XML-based protocol. Web services often suffer congestion and bottlenecks as a result of the high network traffic caused by the large XML overhead, and the massive load on Cloud Web services, in terms of the large volume of client requests, produces the same problem. In this paper, two XML-aware aggregation techniques based on compression concepts are proposed in order to aggregate medical Web messages and achieve higher message size reduction.
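
    The aggregation idea exploits the fact that SOAP messages share most of their XML scaffolding, so compressing a batch of messages together removes far more redundancy than compressing each message alone. A minimal sketch with zlib and invented envelope contents (the paper's own XML-aware techniques are more elaborate):

        import zlib

        # Hypothetical SOAP-like envelopes that differ only in a small payload.
        msgs = [("<soap:Envelope><soap:Body><getRecord><id>%d</id>"
                 "</getRecord></soap:Body></soap:Envelope>") % i for i in range(100)]

        individually = sum(len(zlib.compress(m.encode())) for m in msgs)
        aggregated = len(zlib.compress("".join(msgs).encode()))
        print(individually, aggregated)  # the aggregated batch compresses far smaller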

  7. Thermophysical properties of shock compressed argon and xenon

    International Nuclear Information System (INIS)

    Fortov, V.E.; Gryaznov, V.K.; Mintsev, V.B.; Ternovoi, V.Ya.

    2001-01-01

    The problem of the nature of the thermodynamic properties and the high electrical conductivity of substances at high pressures and temperatures is one of the key issues of the physics of high energy densities. So-called pressure ionization is one of the most impressive demonstrations of strong coupling effects in plasma under compression. Noble gases are the simplest objects for studying these phenomena because of the absence of molecules and the spherical symmetry of their atoms. In the present paper we take a unified look, from the chemical plasma picture, at the whole body of available experimental data on Ar and Xe over a wide range of parameters: from gaseous densities of 0.01 g/cc and pressures of several kilobars up to the extremely high densities corresponding to the insulator-metal transition and the megabar pressure range. (orig.)

  8. Japan's long-term energy outlook to 2050: Estimation for the potential of massive CO2 mitigation

    Energy Technology Data Exchange (ETDEWEB)

    Komiyama, Ryoichi

    2010-09-15

    This paper analyzes Japan's energy outlook and CO2 emissions to 2050. Scenario analysis reveals that Japan's CO2 emissions in 2050 could potentially be reduced by 58% from the 2005 level. Achieving this massive mitigation requires reducing primary energy supply per GDP by 60% from the 2005 level and expanding the share of non-fossil fuels in total supply to 50% by 2050. In the power generation mix, nuclear would account for 60% and renewables for 30% in 2050. For massive CO2 abatement, Japan should tackle the technological and economic challenges of large-scale deployment of advanced technologies.

  9. Lossless compression of waveform data for efficient storage and transmission

    International Nuclear Information System (INIS)

    Stearns, S.D.; Tan, Li Zhe; Magotra, Neeraj

    1993-01-01

    Compression of waveform data is significant in many engineering and research areas, since it can reduce data storage and transmission bandwidth requirements. For example, seismic data are widely recorded and transmitted so that analysis can be performed on large amounts of data for numerous applications such as petroleum exploration, determination of the earth's core structure, seismic event detection, and discrimination of underground nuclear explosions. This paper describes a technique for lossless waveform data compression. The technique consists of two stages: the first stage is a modified form of linear prediction with discrete coefficients, and the second stage is bi-level sequence coding. The linear predictor generates an error, or residue, sequence in such a way that exact reconstruction of the original data sequence can be accomplished with a simple algorithm. The residue sequence is essentially white Gaussian for seismic and other similar waveform data. Bi-level sequence coding, in which two sample sizes are chosen and the residue sequence is encoded into subsequences that alternate from one level to the other, further compresses the residue sequence. The principal feature of the two-stage data compression algorithm is that it is lossless; that is, it allows exact, bit-for-bit recovery of the original data sequence. The performance of the lossless compression algorithm at each stage is analyzed. The advantages of using bi-level sequence coding in the second stage are its simplicity of implementation, its effectiveness on data with large amplitude variations, and its near-optimal performance in encoding Gaussian sequences. Applications of the two-stage technique to typical seismic data indicate that an average number of compressed bits per sample close to the lower bound is achievable in practical situations
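
    A minimal sketch of the first stage, linear prediction with discrete (integer) coefficients: because predictor and residue arithmetic are exact in integers, the decoder rebuilds the original samples bit for bit. The residue would then be entropy-coded; the paper's bi-level sequence coding is not reproduced here, and the predictor order and test signal are assumptions:

        import numpy as np

        def encode(x):
            # Second-order integer predictor: pred[n] = 2*x[n-1] - x[n-2].
            pred = np.zeros_like(x)
            pred[1] = x[0]
            pred[2:] = 2 * x[1:-1] - x[:-2]
            return x - pred                       # residue: small integers for smooth data

        def decode(r):
            x = np.empty_like(r)
            x[0] = r[0]
            x[1] = r[1] + x[0]
            for n in range(2, r.size):
                x[n] = r[n] + 2 * x[n - 1] - x[n - 2]
            return x

        x = (1000 * np.sin(np.linspace(0, 8, 5000))).astype(np.int32)
        r = encode(x)
        assert np.array_equal(decode(r), x)       # exact, bit-for-bit recovery
        print(x.std(), r.std())                   # residue variance is far smaller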

  10. Compressing Data Cube in Parallel OLAP Systems

    Directory of Open Access Journals (Sweden)

    Frank Dehne

    2007-03-01

    This paper proposes an efficient algorithm to compress the cubes during parallel data cube generation. This low-overhead compression mechanism provides block-by-block and record-by-record compression using tuple difference coding techniques, thereby maximizing the compression ratio and minimizing the decompression penalty at run-time. The experimental results demonstrate that the typical compression ratio is about 30:1 without sacrificing running time. The paper also demonstrates that the compression method is suitable for the Hilbert Space Filling Curve, a mechanism widely used in multi-dimensional indexing.
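
    Tuple difference coding stores each record as its difference from the previous record in sorted order; in a data cube the sorted dimension values change slowly, so most differences are zero or tiny and can be packed into few bits. A minimal record-by-record sketch (the block layout and bit packing of the paper are omitted, and the records are invented):

        # Hypothetical sorted cube records: (dimension1, dimension2, measure).
        records = [(1, 10, 500), (1, 10, 503), (1, 11, 497), (2, 10, 510)]

        def diff_encode(recs):
            prev, out = (0,) * len(recs[0]), []
            for r in recs:
                out.append(tuple(a - b for a, b in zip(r, prev)))
                prev = r
            return out

        def diff_decode(deltas):
            prev, out = (0,) * len(deltas[0]), []
            for d in deltas:
                prev = tuple(a + b for a, b in zip(prev, d))
                out.append(prev)
            return out

        deltas = diff_encode(records)
        assert diff_decode(deltas) == records
        print(deltas)  # (1, 10, 500), (0, 0, 3), (0, 1, -6), (1, -1, 13)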

  11. CEPRAM: Compression for Endurance in PCM RAM

    OpenAIRE

    González Alberquilla, Rodrigo; Castro Rodríguez, Fernando; Piñuel Moreno, Luis; Tirado Fernández, Francisco

    2017-01-01

    We deal with the endurance problem of Phase Change Memories (PCM) by proposing Compression for Endurance in PCM RAM (CEPRAM), a technique to extend the lifespan of PCM-based main memory through compression. We introduce a total of three compression schemes based on existing schemes, but targeting compression for PCM-based systems. We perform a two-level evaluation. First, we quantify the performance of the compression, in terms of compressed size, bit-flips and how they are affected by e...

  12. Entropy, Coding and Data Compression

    Indian Academy of Sciences (India)

    S Natarajan. General Article, Resonance – Journal of Science Education, Volume 6, Issue 9, September 2001, pp. 35-45. Full text: https://www.ias.ac.in/article/fulltext/reso/006/09/0035-0045

  13. Shock compression of synthetic opal

    International Nuclear Information System (INIS)

    Inoue, A; Okuno, M; Okudera, H; Mashimo, T; Omurzak, E; Katayama, S; Koyano, M

    2010-01-01

    The structural change of synthetic opal under shock-wave compression up to 38.1 GPa has been investigated using SEM, X-ray diffraction (XRD), infrared (IR) and Raman spectroscopies. The obtained information may indicate that dehydration and polymerization of surface silanol, driven by the high shock and residual temperatures, are very important factors in the structural evolution of synthetic opal under shock compression. Synthetic opal loses opalescence at shock pressures of 10.9 and 18.4 GPa. At 18.4 GPa, dehydration and polymerization of surface silanol and transformation of the network structure may occur simultaneously. The 4-membered rings of TO4 tetrahedra in as-synthesized opal may be relaxed to larger rings, such as 6-membered rings, by the high residual temperature; the residual temperature may therefore be significantly high even at 18.4 GPa of shock compression. At 23.9 GPa, the opal sample recovered its opalescence, the origin of which may be a layer structure produced by shock compression. Finally, the sample fuses at the very high residual temperature at 38.1 GPa and its structure approaches that of fused SiO2 glass; however, internal silanol groups still remain even at 38.1 GPa.

  14. Range Compressed Holographic Aperture Ladar

    Science.gov (United States)

    2017-06-01

    The entropy saturation behavior of the estimator is analytically described. Simultaneous range-compression and aperture synthesis is experimentally... (Contents fragments: 2.1 Circular and Inverse-Circular HAL; 2.3 Single Aperture, Multi-λ Imaging; 2.4 Simultaneous Range ...)

  15. Compression of Probabilistic XML documents

    NARCIS (Netherlands)

    Veldman, Irma

    2009-01-01

    Probabilistic XML (PXML) files resulting from data integration can become extremely large, which is undesired. For XML there are several techniques available to compress the document and since probabilistic XML is in fact (a special form of) XML, it might benefit from these methods even more. In

  18. Force balancing in mammographic compression

    International Nuclear Information System (INIS)

    Branderhorst, W.; Groot, J. E. de; Lier, M. G. J. T. B. van; Grimbergen, C. A.; Neeter, L. M. F. H.; Heeten, G. J. den; Neeleman, C.

    2016-01-01

    Purpose: In mammography, the height of the image receptor is adjusted to the patient before compressing the breast. An inadequate height setting can result in an imbalance between the forces applied by the image receptor and the paddle, causing the clamped breast to be pushed up or down relative to the body during compression. This leads to unnecessary stretching of the skin and other tissues around the breast, which can make the imaging procedure more painful for the patient. The goal of this study was to implement a method to measure and minimize the force imbalance, and to assess its feasibility as an objective and reproducible method of setting the image receptor height. Methods: A trial was conducted consisting of 13 craniocaudal mammographic compressions on a silicone breast phantom, each with the image receptor positioned at a different height. The image receptor height was varied over a range of 12 cm. In each compression, the force exerted by the compression paddle was increased up to 140 N in steps of 10 N. In addition to the paddle force, the authors measured the force exerted by the image receptor and the reaction force exerted on the patient body by the ground. The trial was repeated 8 times, with the phantom remounted at a slightly different orientation and position between the trials. Results: For a given paddle force, the obtained results showed that there is always exactly one image receptor height that leads to a balance of the forces on the breast. For the breast phantom, deviating from this specific height increased the force imbalance by 9.4 ± 1.9 N/cm (6.7%) for 140 N paddle force, and by 7.1 ± 1.6 N/cm (17.8%) for 40 N paddle force. The results also show that in situations where the force exerted by the image receptor is not measured, the craniocaudal force imbalance can still be determined by positioning the patient on a weighing scale and observing the changes in displayed weight during the procedure. Conclusions: In mammographic breast

  19. Effectiveness of compressed sensing and transmission in wireless sensor networks for structural health monitoring

    Science.gov (United States)

    Fujiwara, Takahiro; Uchiito, Haruki; Tokairin, Tomoya; Kawai, Hiroyuki

    2017-04-01

    For structural health monitoring (SHM) of seismic acceleration, wireless sensor networks (WSN) are a promising tool for low-cost monitoring, and compressed sensing and transmission schemes have been drawing attention as a way to achieve effective data collection in WSN. In particular, SHM systems installing massive numbers of WSN nodes require efficient data transmission because of their restricted communications capability. The dominant frequency band of seismic acceleration lies at 100 Hz or below; in addition, the response motions on the upper floors of a structure are excited at its natural frequencies, resulting in shaking within specified narrow bands. Focusing on these vibration characteristics of structures, we introduce data compression techniques for seismic acceleration monitoring in order to reduce the amount of transmitted data. We carry out a compressed sensing and transmission scheme based on band-pass filtering of the seismic acceleration data: the algorithm executes the discrete Fourier transform into the frequency domain and band-pass filtering for the compressed transmission. Assuming that the compressed data are transmitted through computer networks, the receiving node restores the data by the inverse Fourier transform. This paper discusses the evaluation of the compressed sensing of seismic acceleration by way of an average error. The results show that the average error was 0.06 or less for horizontal acceleration when the acceleration was compressed into 1/32; on the 4th floor in particular, the average error was as small as 0.02. These results indicate that the compressed sensing and transmission technique is effective at reducing the amount of data while maintaining a small average error.
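
    A minimal sketch of the scheme as described: transform an acceleration frame into the frequency domain, transmit only the low-band coefficients, and reconstruct by the inverse transform at the receiving node. The sampling rate, frame length and synthetic record are assumptions for illustration:

        import numpy as np

        fs, n = 1024.0, 4096                     # assumed sampling rate and frame size
        t = np.arange(n) / fs
        rng = np.random.default_rng(0)
        accel = np.sin(2 * np.pi * 3.5 * t) + 0.1 * rng.normal(size=n)  # response near 3.5 Hz

        X = np.fft.rfft(accel)
        k = X.size // 32                         # compress the frame into 1/32
        X_tx = X[:k]                             # transmit low-frequency coefficients only

        X_rx = np.zeros_like(X)                  # receiver: zero-pad and invert
        X_rx[:k] = X_tx
        reconstructed = np.fft.irfft(X_rx, n=n)
        print("average error:", np.mean(np.abs(accel - reconstructed)))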

  20. Adiabatic compression of ion rings

    International Nuclear Information System (INIS)

    Larrabee, D.A.; Lovelace, R.V.

    1982-01-01

    A study has been made of the compression of collisionless ion rings in an increasing external magnetic field, B_e = ẑ B_e(t), by numerically implementing a previously developed kinetic theory of ring compression. The theory is general in that there is no limitation on the ring geometry or the compression ratio, λ ≡ B_e(final)/B_e(initial) ≥ 1. However, the motion of a single particle in an equilibrium is assumed to be completely characterized by its energy H and canonical angular momentum P_θ, with the absence of a third constant of the motion. The present computational work assumes that plasma currents are negligible, as is appropriate for a low-temperature collisional plasma. For a variety of initial ring geometries and initial distribution functions (having a single value of P_θ), it is found that the parameters for ''fat'', small-aspect-ratio rings follow general scaling laws over a large range of compression ratios, 1 < λ < 10³: the ring radius varies as λ^(-1/2); the average single-particle energy as λ^0.72; the root-mean-square energy spread as λ^1.1; and the total current as λ^0.79. The field reversal parameter is found to saturate at values typically between 2 and 3. For large compression ratios the current density is found to ''hollow out''; this hollowing tends to improve the interchange stability of an embedded low-β plasma. The implications of these scaling laws for fusion reactor systems are discussed

  1. Nuclear deterrence: Inherent escalation?

    International Nuclear Information System (INIS)

    Bergbauer, J.R. Jr.

    1993-01-01

    Despite 40 years of peace between the super powers, there is increasing clamor to the effect that nuclear war between the super powers is imminent; or could occur through escalation from a minor conflict; or could result from harsh rhetoric (but only on the part of the U.S.) in the super power dialogue. The factor that is ignored is that a massive nuclear attack would be rational ONLY if that attack could inflict such damage that the other super power could not launch a significant retaliatory nuclear attack. ONLY in this circumstance would there be any profit in launching an initial Strategic Nuclear Attack. This First Strike capability is not now possessed, nor projected to be developed, by either super power. As long as ANY possible Strategic Nuclear Attack against the national territory of one super power would be insufficient to prevent an equally destructive retaliatory attack, a Strategic Nuclear Attack would inevitably result in the destruction of both and would be profitless, hence pointless. This situation describes Mutually Assured Destruction (MAD), the governing conflict paradigm applicable to both super powers. The only conventional attack that would even remotely rival the national-destruction potential of a Strategic Nuclear Attack, and could cause the attacked power to consider launching a retaliatory Strategic Nuclear Attack, would be a massive land-air invasion/occupation of one super power by the other. Since neither super power can successfully execute such a conventional invasion/occupation, this situation is moot. The geo-political environments of the two super powers are so asymmetrical, and their military positions so symmetrical, that the probability of ANY foreseeable situation resulting in their resorting to a Strategic Nuclear Exchange is vanishingly small. It is possible to escape the Chicken-Little syndrome and, instead, devote energy to ensuring the maintenance of this favorable, but fragile, world system

  2. Testing the Larson relations in massive clumps

    Science.gov (United States)

    Traficante, A.; Duarte-Cabral, A.; Elia, D.; Fuller, G. A.; Merello, M.; Molinari, S.; Peretto, N.; Schisano, E.; Di Giorgio, A.

    2018-06-01

    We tested the validity of the three Larson relations in a sample of 213 massive clumps selected from the Herschel infrared Galactic Plane (Hi-GAL) survey, also using data from the Millimetre Astronomy Legacy Team 90 GHz (MALT90) survey of 3-mm emission lines. The clumps are divided into five evolutionary stages so that we can also discuss the Larson relations as a function of evolution. We show that this ensemble does not follow the three Larson relations, regardless of the clump's evolutionary phase. A consequence of this breakdown is that the dependence of the virial parameter αvir on mass (and radius) is only a function of the gravitational energy, independent of the kinetic energy of the system; thus, αvir is not a good descriptor of clump dynamics. Our results suggest that clumps with clear signatures of infall motions are statistically indistinguishable from clumps with no such signatures. The observed non-thermal motions are not necessarily ascribed to turbulence acting to sustain the gravity, but they might be a result of the gravitational collapse at the clump scales. This seems to be particularly true for the most massive (M ≥ 1000 M⊙) clumps in the sample, where exceptionally high magnetic fields might not be enough to stabilize the collapse.

  3. Planckian Interacting Massive Particles as Dark Matter.

    Science.gov (United States)

    Garny, Mathias; Sandora, McCullen; Sloth, Martin S

    2016-03-11

    The standard model could be self-consistent up to the Planck scale according to the present measurements of the Higgs boson mass and top quark Yukawa coupling. It is therefore possible that new physics is only coupled to the standard model through Planck-suppressed higher-dimensional operators. In this case the weakly interacting massive particle miracle is a mirage, and instead minimality as dictated by Occam's razor would indicate that dark matter is related to the Planck scale, where quantum gravity is anyway expected to manifest itself. Assuming within this framework that dark matter is a Planckian interacting massive particle, we show that the most natural mass larger than 0.01 M_p is already ruled out by the absence of tensor modes in the cosmic microwave background (CMB). This also indicates that we expect tensor modes in the CMB to be observed soon for this type of minimal dark matter model. Finally, we touch upon the Kaluza-Klein graviton mode as a possible realization of this scenario within UV complete models, as well as further potential signatures and peculiar properties of this type of dark matter candidate. This paradigm therefore leads to a subtle connection between quantum gravity, the physics of primordial inflation, and the nature of dark matter.

  4. Massive neutrinos in almost-commutative geometry

    International Nuclear Information System (INIS)

    Stephan, Christoph A.

    2007-01-01

    In the noncommutative formulation of the standard model of particle physics by Chamseddine and Connes [Commun. Math. Phys. 182, 155 (1996), e-print hep-th/9606001], one of the three generations of fermions has to possess a massless neutrino [C. P. Martin et al., Phys. Rep. 29, 363 (1998), e-print hep-th/9605001]. This formulation is consistent with neutrino oscillation experiments and the known bounds of the Pontecorvo-Maki-Nakagawa-Sakata matrix (PMNS matrix). But future experiments which may be able to detect neutrino masses directly, and high-precision measurements of the PMNS matrix, might need massive neutrinos in all three generations. In this paper we present an almost-commutative geometry which allows for a standard model with massive neutrinos in all three generations. This model does not follow in a straightforward way from the version of Chamseddine and Connes, since it requires an internal algebra with four summands of matrix algebras, instead of three summands for the model with one massless neutrino

  5. MASSIVE PLEURAL EFFUSION: A CASE REPORT

    Directory of Open Access Journals (Sweden)

    Putu Bayu Dian Tresna Dewi

    2013-03-01

    Pleural effusion is an abnormal fluid accumulation in the pleural cavity between the parietal and visceral pleurae, either a transudate or an exudate. A 47-year-old female presented with dyspnea, cough, and decreased appetite. She had a history of a right lung tumor. Physical examination revealed asymmetric chest movement, with the right side lagging during breathing, decreased vocal fremitus over the right chest, dullness over the right chest, decreased vesicular breath sounds over the right chest, enlargement of the supraclavicular and right cervical lymph nodes, and hepatomegaly. A complete blood count showed leukocytosis. Clinical chemistry showed hypoalbuminemia and decreased liver function. Blood gas analysis showed hypoxemia. Pleural fluid analysis showed an exudate: a murky red fluid filled with erythrocytes and a large number of cells. Cytological examination showed a non-small cell carcinoma, tending toward the adeno type. Chest X-ray showed a massive right pleural effusion. Based on the history, physical examination and investigations, she was diagnosed with massive pleural effusion et causa suspected malignancy. She underwent pleural fluid evacuation and was treated with analgesics and antibiotics.

  6. Massive clot formation after tooth extraction

    Directory of Open Access Journals (Sweden)

    Santosh Hunasgi

    2015-01-01

    Oral surgical procedures, mainly tooth extraction, can be associated with extended hemorrhage owing to the nature of the process, which results in an "open wound." The aim of this paper is to present a case of massive postoperative clot formation after tooth extraction and to highlight the oral complications of surgical procedures. A 32-year-old male patient reported to the dental clinic for evaluation and extraction of a grossly decayed 46. Clinical evaluation of 46 revealed root stumps; extraction of the root stumps was performed and was uneventful. Hemostasis was achieved and postsurgical instructions were given to the patient. The patient returned to the clinic the very next morning with a complaint of bleeding at the extraction site. On clinical examination, bleeding was noted from the socket in relation to 46. To control the bleeding, the oral hemostatic drug Revici-E (ethamsylate 500 mg) was prescribed, and the bleeding stopped within 2 h. However, a massive clot had formed at the extraction site; this clot resolved on its own within one week. Even though dental extraction is considered a minor surgical procedure, some cases may present with life-threatening complications, including hemorrhage. Careful history taking and thorough physical and dental examinations prior to dental procedures are a must to avoid intraoperative and postoperative complications.

  7. One-loop calculations with massive particles

    International Nuclear Information System (INIS)

    Oldenborgh, G.J. van.

    1990-01-01

    In this thesis some techniques for performing one-loop calculations with massive particles are presented. Numerical techniques necessary for evaluating the one-loop integrals which occur in one-loop calculations of photon-photon scattering are given. The algorithms have been coded in FORTRAN (to evaluate the scalar integrals) and in the algebraic language FORM (to reduce the tensor integrals to scalar integrals). Applications are made in the theory of the strong interaction, QCD, where one-loop integrals with massive particles are handled in order to regulate the infinities encountered in this theory by mass parameters. Although this simplifies the computation considerably, the description of the proton structure functions has to be renormalized in order to obtain physical results. This renormalization is different from the published results for the gluon and thus has to be redone. The first physics results obtained with these new methods are presented. These concern heavy quark production in semi-leptonic interactions, for instance neutrino charm production and top production at the electron-proton (ep) collider HERA and the proposed LEP/LHC combination. Total and differential cross-sections for one-loop corrections to top production at the HERA and proposed LEP/LHC ep colliders are given, and structure functions for charmed quark production are compared with previously published results. (author). 58 refs.; 18 figs.; 5 tabs

  8. Dipolar dark matter with massive bigravity

    International Nuclear Information System (INIS)

    Blanchet, Luc; Heisenberg, Lavinia

    2015-01-01

    Massive gravity theories have been developed as viable IR modifications of gravity motivated by dark energy and the problem of the cosmological constant. On the other hand, modified gravity and modified dark matter theories were developed with the aim of solving the problems of standard cold dark matter at galactic scales. Here we propose to adapt the framework of ghost-free massive bigravity theories to reformulate the problem of dark matter at galactic scales. We investigate a promising alternative to dark matter called dipolar dark matter (DDM) in which two different species of dark matter are separately coupled to the two metrics of bigravity and are linked together by an internal vector field. We show that this model successfully reproduces the phenomenology of dark matter at galactic scales (i.e. MOND) as a result of a mechanism of gravitational polarisation. The model is safe in the gravitational sector, but because of the particular couplings of the matter fields and vector field to the metrics, a ghost in the decoupling limit is present in the dark matter sector. However, it might be possible to push the mass of the ghost beyond the strong coupling scale by an appropriate choice of the parameters of the model. Crucial questions to address in future work are the exact mass of the ghost, and the cosmological implications of the model

  9. Evolution of massive close binary stars

    International Nuclear Information System (INIS)

    Masevich, A.G.; Tutukov, A.V.

    1982-01-01

    Some problems of the evolution of massive close binary stars are discussed. Most of them are unevolved stars with components of similar mass. After the Roche lobe is filled and matter is exchanged between the components, a Wolf-Rayet star is formed. As a result of the supernova explosion, a neutron star or a black hole is formed in the system; the system does not disintegrate but acquires a high space velocity owing to the loss of the supernova envelope. The companion of the neutron star or black hole, a star of the O or B spectral class, loses about 10⁻⁶ solar masses per year. A disc of this matter forms around the compact component, and its accretion onto the compact star gives rise to X-ray emission. The neutron star cannot absorb the whole outflow of the widening component, and the binary system becomes submerged in a common envelope. As a result of the evolution of massive close binary systems, single neutron stars can appear, which after some time become radio pulsars. Radio pulsars with such high space velocities have been found in our Galaxy

  10. Effect of compressibility on the hypervelocity penetration

    Science.gov (United States)

    Song, W. J.; Chen, X. W.; Chen, P.

    2018-02-01

    We further consider the effect of rod strength by employing the compressible penetration model to study the effect of compressibility on hypervelocity penetration. We also define penetration efficiency in the various modified models and compare these efficiencies to identify the effects of the different factors in the compressible model. To discuss systematically the effect of compressibility for different metallic rod-target combinations, we construct three cases: penetration by a more compressible rod into a less compressible target, by a rod into an analogously compressible target, and by a less compressible rod into a more compressible target. The effects of volumetric strain, internal energy, and strength on the penetration efficiency are analyzed simultaneously. The analysis indicates that the compressibility of the rod and target increases the pressure at the rod/target interface. A more compressible rod or target has a larger volumetric strain and higher internal energy. Both the larger volumetric strain and the higher strength enhance the penetration or anti-penetration ability, while the higher internal energy weakens it. The two trends conflict, but the volumetric strain dominates the variation of the penetration efficiency, which would not approach the hydrodynamic limit if the rod and target are not analogously compressible. However, if the compressibility of the rod and target is analogous, it has little effect on the penetration efficiency.

  11. Formation of the First Star Clusters and Massive Star Binaries by Fragmentation of Filamentary Primordial Gas Clouds

    Science.gov (United States)

    Hirano, Shingo; Yoshida, Naoki; Sakurai, Yuya; Fujii, Michiko S.

    2018-03-01

    We perform a set of cosmological simulations of early structure formation incorporating baryonic streaming motions. We present a case where a significantly elongated gas cloud with ∼10⁴ solar masses (M⊙) is formed in a pre-galactic (∼10⁷ M⊙) dark halo. The gas streaming into the halo compresses and heats the massive filamentary cloud to a temperature of ∼10,000 Kelvin. The gas cloud cools rapidly by atomic hydrogen cooling, and then by molecular hydrogen cooling down to ∼400 Kelvin. The rapid decrease of the temperature, and hence of the Jeans mass, triggers fragmentation of the filament to yield multiple gas clumps with a few hundred solar masses. We estimate the mass of the primordial star formed in each fragment by adopting an analytic model based on a large set of radiation hydrodynamics simulations of protostellar evolution. The resulting stellar masses are in the range of ∼50–120 M⊙. The massive stars gravitationally attract each other and form a compact star cluster. We follow the dynamics of the star cluster using a hybrid N-body simulation. We show that massive star binaries are formed in a few million years through multi-body interactions at the cluster center. The eventual formation of the remnant black holes will leave a massive black hole binary, which can be a progenitor of strong gravitational wave sources similar to those recently detected by the Advanced Laser Interferometer Gravitational-Wave Observatory (LIGO).

  12. MASSIVE+: The Growth Histories of MASSIVE Survey Galaxies from their Globular Cluster Colors

    Science.gov (United States)

    Blakeslee, John

    2017-08-01

    The MASSIVE survey is targeting the 100 most massive galaxies within 108 Mpc that are visible in the northern sky. These most massive galaxies in the present-day universe reside in a surprisingly wide variety of environments, from rich clusters to fossil groups to near isolation. We propose to use WFC3/UVIS and ACS to carry out a deep imaging study of the globular cluster populations around a selected subset of the MASSIVE targets. Though much is known about GC systems of bright galaxies in rich clusters, we know surprisingly little about the effects of environment on these systems. The MASSIVE sample provides a golden opportunity to learn about the systematics of GC systems and what they can tell us about environmental drivers on the evolution of the highest mass galaxies. The most pressing questions to be addressed include: (1) Do isolated giants have the same constant mass fraction of GCs to total halo mass as BCGs of similar luminosity? (2) Do their GC systems show the same color (metallicity) distribution, which is an outcome of the mass spectrum of gas-rich halos during hierarchical growth? (3) Do the GCs in isolated high-mass galaxies follow the same radial distribution versus metallicity as in rich environments (a test of the relative importance of growth by accretion)? (4) Do the GCs of galaxies in sparse environments follow the same mass function? Our proposed second-band imaging will enable us to secure answers to these questions and add enormously to the legacy value of existing HST imaging of the highest mass galaxies in the universe.

  13. Flux compression generators as plasma compression power sources

    International Nuclear Information System (INIS)

    Fowler, C.M.; Caird, R.S.; Erickson, D.J.; Freeman, B.L.; Thomson, D.B.; Garn, W.B.

    1979-01-01

    A survey is made of applications where explosive-driven magnetic flux compression generators have been or can be used to directly power devices that produce dense plasmas. Representative examples are discussed that are specific to the theta pinch, the plasma gun, the dense plasma focus and the Z pinch. These examples are used to illustrate the high energy and power capabilities of explosive generators. An application employing a rocket-borne, generator-powered plasma gun emphasizes the size and weight potential of flux compression power supplies. Recent results from a local effort to drive a dense plasma focus are provided. Imploding liners are discussed in the context of both the theta and Z pinches

  14. ADVANCED RECIPROCATING COMPRESSION TECHNOLOGY (ARCT)

    Energy Technology Data Exchange (ETDEWEB)

    Danny M. Deffenbaugh; Klaus Brun; Ralph E. Harris; J. Pete Harrell; Robert J. Mckee; J. Jeffrey Moore; Steven J. Svedeman; Anthony J. Smalley; Eugene L. Broerman; Robert A Hart; Marybeth G. Nored; Ryan S. Gernentz; Shane P. Siebenaler

    2005-12-01

    The U.S. natural gas pipeline industry is facing the twin challenges of increased flexibility and capacity expansion. To meet these challenges, the industry requires improved choices in gas compression to address new construction and enhancement of the currently installed infrastructure. The current fleet of installed reciprocating compression is primarily slow-speed integral machines, while most new reciprocating compression is and will be large, high-speed separable units. The major challenges with the fleet of slow-speed integral machines are limited flexibility and a large range in performance. In an attempt to increase flexibility, many operators are choosing to single-act cylinders, which causes reduced reliability and integrity. While the best performing units in the fleet exhibit thermal efficiencies between 90% and 92%, the low performers run down to 50%, with the mean at about 80%. The major cause of this large disparity is installation losses in the pulsation control system; in the better performers, the losses are about evenly split between installation losses and valve losses. The major challenges for high-speed machines are cylinder nozzle pulsations, mechanical vibrations due to cylinder stretch, short valve life, and low thermal performance. To shift nozzle pulsation to higher orders, nozzles are shortened, and to dampen the amplitudes, orifices are added. The shortened nozzles result in mechanical coupling with the cylinder, thereby causing increased vibration due to the cylinder stretch mode. Valve life is even shorter than at slow speeds and can be on the order of a few months. The thermal efficiency is 10% to 15% lower than for slow-speed equipment, with the best performance in the 75% to 80% range. The goal of this advanced reciprocating compression program is to develop the technology for both high-speed and slow-speed compression that will expand unit flexibility, increase thermal efficiency, and increase reliability and integrity

  15. The task of control digital image compression

    OpenAIRE

    TASHMANOV E.B.; MAMATOV M.S.

    2014-01-01

    In this paper we consider the relationship between control tasks and the losses of image compression. The main idea of the approach is to extract the structural lines of a simplified image and then compress the selected data.

  16. Discrete Wigner Function Reconstruction and Compressed Sensing

    OpenAIRE

    Zhang, Jia-Ning; Fang, Lei; Ge, Mo-Lin

    2011-01-01

    A new reconstruction method for the Wigner function is reported for quantum tomography based on compressed sensing. By analogy with computed tomography, Wigner functions for some quantum states can be reconstructed with fewer measurements using this compressed-sensing-based method.
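    To make the recovery step that compressed sensing enables concrete, here is a minimal, self-contained sketch (Python/NumPy; an illustration, not the authors' method, and all names are made up) that recovers a sparse vector from fewer measurements than unknowns via orthogonal matching pursuit:

      import numpy as np

      def omp(A, y, k):
          """Greedy orthogonal matching pursuit: recover a k-sparse x from y = A @ x."""
          residual, support = y.copy(), []
          x = np.zeros(A.shape[1])
          for _ in range(k):
              # Pick the column most correlated with the current residual.
              j = int(np.argmax(np.abs(A.T @ residual)))
              support.append(j)
              # Least-squares refit on the current support.
              coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
              residual = y - A[:, support] @ coef
          x[support] = coef
          return x

      rng = np.random.default_rng(0)
      n, m, k = 128, 40, 5                        # ambient dim, measurements, sparsity
      x_true = np.zeros(n)
      x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
      A = rng.normal(size=(m, n)) / np.sqrt(m)    # Gaussian measurement matrix
      x_hat = omp(A, A @ x_true, k)               # recover from m < n measurements
      print("recovery error:", np.linalg.norm(x_hat - x_true))

    The same principle, random measurements plus a sparsity-exploiting solver, underlies the tomographic reconstruction described in the record, with the Wigner function playing the role of the sparse object.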

  17. Compressibility Analysis of the Tongue During Speech

    National Research Council Canada - National Science Library

    Unay, Devrim

    2001-01-01

    .... In this paper, 3D compression and expansion analysis of the tongue will be presented. Patterns of expansion and compression have been compared for different syllables and various repetitions of each syllable...

  18. Compressed normalized block difference for object tracking

    Science.gov (United States)

    Gao, Yun; Zhang, Dengzhuo; Cai, Donglan; Zhou, Hao; Lan, Ge

    2018-04-01

    Feature extraction is very important for robust and real-time tracking. Compressive sensing has provided technical support for real-time feature extraction. However, all existing compressive trackers have been based on the compressed Haar-like feature, and how to compress other, richer high-dimensional features is worth researching. In this paper, a novel compressed normalized block difference (CNBD) feature is proposed. To resist noise effectively in the high-dimensional normalized pixel difference (NPD) feature, the normalized block difference feature extends the two pixels in the original NPD formula to two blocks. A CNBD feature is then obtained by compressing the normalized block difference feature according to compressive sensing theory, with a sparse random Gaussian matrix as the measurement matrix. Comparative experiments with 7 trackers on 20 challenging sequences showed that the tracker based on the CNBD feature performs better than the other trackers, especially the FCT tracker based on the compressed Haar-like feature, in terms of AUC, SR and precision.
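    The compression step itself is simple to state: a fixed sparse random Gaussian matrix projects the high-dimensional feature down to a few numbers, computed once per frame. A hedged sketch (Python/NumPy; `x` stands in for any high-dimensional feature, since the NBD construction itself is not reproduced here):

      import numpy as np

      def sparse_gaussian_matrix(m, n, density=0.1, seed=0):
          """m x n measurement matrix: mostly zeros, Gaussian nonzeros."""
          rng = np.random.default_rng(seed)
          mask = rng.random((m, n)) < density
          return np.where(mask, rng.normal(size=(m, n)), 0.0)

      n, m = 10_000, 50                          # high-dimensional feature -> 50 numbers
      x = np.random.default_rng(1).random(n)     # placeholder feature vector
      R = sparse_gaussian_matrix(m, n)           # fixed for the whole sequence
      v = R @ x                                  # compressed feature, computed per frame
      print(v.shape)                             # (50,)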

  19. Nuclear power in our societies

    International Nuclear Information System (INIS)

    Fardeau, J.C.

    2011-01-01

    Hiroshima, Chernobyl and Fukushima Daiichi are the well-known, sad milestones on the path toward a broad development of nuclear energy. They are so well known that they have, certainly for a long time and in a very unfair way, blurred the positive image of nuclear energy in the public eye. The media appetite for disasters feeds this fear and pushes aside the achievements of nuclear sciences, such as nuclear medicine, and the assets of nuclear power, such as its near absence of greenhouse gas emissions and its massive capacity to produce electricity or heat. The only way to improve nuclear acceptance is to reduce the fear through a better public understanding of nuclear sciences. (A.C.)

  20. Neutron stars structure in the context of massive gravity

    Energy Technology Data Exchange (ETDEWEB)

    Hendi, S.H.; Bordbar, G.H.; Panah, B. Eslam; Panahiyan, S., E-mail: hendi@shirazu.ac.ir, E-mail: ghbordbar@shirazu.ac.ir, E-mail: behzad.eslampanah@gmail.com, E-mail: sh.panahiyan@gmail.com [Physics Department and Biruni Observatory, College of Sciences, Shiraz University, Shiraz 71454 (Iran, Islamic Republic of)

    2017-07-01

    Motivated by the recent interest in spin-2 massive gravitons, we study the structure of neutron stars in the context of massive gravity. The modifications of the TOV equation in the presence of massive gravity are explored in 4 and higher dimensions. Next, by considering the modern equation of state for neutron star matter (extracted by the lowest order constrained variational (LOCV) method with the AV18 potential), different physical properties of the neutron star (such as Le Chatelier's principle, stability and energy conditions) are investigated. It is shown that consideration of massive gravity has specific contributions to the structure of the neutron star and introduces new prescriptions for massive astrophysical objects. The mass-radius relation is examined and the effects of massive gravity on the Schwarzschild radius, average density, compactness, gravitational redshift and dynamical stability are studied. Finally, a relation between the mass and radius of the neutron star versus the Planck mass is extracted.

  1. Neutron stars structure in the context of massive gravity

    Science.gov (United States)

    Hendi, S. H.; Bordbar, G. H.; Eslam Panah, B.; Panahiyan, S.

    2017-07-01

    Motivated by the recent interest in spin-2 massive gravitons, we study the structure of neutron stars in the context of massive gravity. The modifications of the TOV equation in the presence of massive gravity are explored in 4 and higher dimensions. Next, by considering the modern equation of state for neutron star matter (extracted by the lowest order constrained variational (LOCV) method with the AV18 potential), different physical properties of the neutron star (such as Le Chatelier's principle, stability and energy conditions) are investigated. It is shown that consideration of massive gravity has specific contributions to the structure of the neutron star and introduces new prescriptions for massive astrophysical objects. The mass-radius relation is examined and the effects of massive gravity on the Schwarzschild radius, average density, compactness, gravitational redshift and dynamical stability are studied. Finally, a relation between the mass and radius of the neutron star versus the Planck mass is extracted.

  2. Neutron stars structure in the context of massive gravity

    International Nuclear Information System (INIS)

    Hendi, S.H.; Bordbar, G.H.; Panah, B. Eslam; Panahiyan, S.

    2017-01-01

    Motivated by the recent interest in spin-2 massive gravitons, we study the structure of neutron stars in the context of massive gravity. The modifications of the TOV equation in the presence of massive gravity are explored in 4 and higher dimensions. Next, by considering the modern equation of state for neutron star matter (extracted by the lowest order constrained variational (LOCV) method with the AV18 potential), different physical properties of the neutron star (such as Le Chatelier's principle, stability and energy conditions) are investigated. It is shown that consideration of massive gravity has specific contributions to the structure of the neutron star and introduces new prescriptions for massive astrophysical objects. The mass-radius relation is examined and the effects of massive gravity on the Schwarzschild radius, average density, compactness, gravitational redshift and dynamical stability are studied. Finally, a relation between the mass and radius of the neutron star versus the Planck mass is extracted.

  3. On Normalized Compression Distance and Large Malware

    OpenAIRE

    Borbely, Rebecca Schuller

    2015-01-01

    Normalized Compression Distance (NCD) is a popular tool that uses compression algorithms to cluster and classify data in a wide range of applications. Existing discussions of NCD's theoretical merit rely on certain theoretical properties of compression algorithms. However, we demonstrate that many popular compression algorithms don't seem to satisfy these theoretical properties. We explore the relationship between some of these properties and file size, demonstrating that this theoretical pro...
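    The statistic at issue is easy to reproduce. Given a real compressor C, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)); below is a minimal sketch with zlib standing in for C (the paper's point is precisely that real compressors only approximate the idealized properties this formula assumes, especially on large inputs):

      import zlib

      def C(data: bytes) -> int:
          return len(zlib.compress(data, 9))

      def ncd(x: bytes, y: bytes) -> float:
          cx, cy, cxy = C(x), C(y), C(x + y)
          return (cxy - min(cx, cy)) / max(cx, cy)

      a = b"the quick brown fox jumps over the lazy dog" * 50
      b = b"the quick brown fox leaps over the lazy cat" * 50
      print(ncd(a, a))   # near 0 for identical inputs (not exactly 0 in practice)
      print(ncd(a, b))   # small for similar inputs, near 1 for unrelated ones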

  4. Image quality (IQ) guided multispectral image compression

    Science.gov (United States)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, since it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the structural similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of 3 steps. The first step is to compress a set of images of interest at varying parameters and compute the IQ of each compressed image for each compression method. The second step is to create several regression models per compression method after analyzing IQ measurement versus compression parameter over a number of compressed images. The third step is to compress the given image at the specified IQ using the selected compression method (JPEG, JPEG 2000, BPG, or TIFF) according to the regression models. If the IQ is specified by a compression ratio (e.g., 100), we select the compression method with the highest IQ (SSIM or PSNR); if the IQ is specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) grayscale images showed very promising results.
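    The first step of the scenario (sweep a compression parameter, record the resulting IQ) is straightforward to sketch. The fragment below (Python with Pillow and NumPy; the input filename is hypothetical, and the regression models and BPG/JPEG 2000/TIFF branches are omitted) tabulates JPEG quality against PSNR and compression ratio:

      import io
      import numpy as np
      from PIL import Image

      def psnr(a: np.ndarray, b: np.ndarray) -> float:
          mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
          return float("inf") if mse == 0 else 10 * np.log10(255**2 / mse)

      img = Image.open("thermal_frame.png").convert("L")   # hypothetical 8-bit input
      ref = np.asarray(img)

      for quality in (10, 30, 50, 70, 90):
          buf = io.BytesIO()
          img.save(buf, format="JPEG", quality=quality)
          buf.seek(0)
          dec = np.asarray(Image.open(buf))
          ratio = ref.size / buf.getbuffer().nbytes        # 1 byte per pixel originally
          print(f"q={quality}: PSNR={psnr(ref, dec):.1f} dB, ratio={ratio:.1f}:1")

    Fitting a regression of PSNR against quality over such a table, and inverting it for a target IQ, reproduces steps two and three.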

  5. Speech Data Compression using Vector Quantization

    OpenAIRE

    H. B. Kekre; Tanuja K. Sarode

    2008-01-01

    Transforms are mostly used for speech data compression; these are lossy algorithms. Such algorithms are tolerable for speech data compression since the loss in quality is not perceived by the human ear. However, vector quantization (VQ) has the potential to give more data compression while maintaining the same quality. In this paper we propose a speech data compression algorithm using the vector quantization technique. We have used the VQ algorithms LBG, KPE and FCG. The results table s...
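    As a rough illustration of the VQ idea (not the authors' LBG/KPE/FCG implementations), the sketch below trains a codebook by LBG-style splitting plus Lloyd refinement, after which each speech frame is transmitted as a short codebook index:

      import numpy as np

      def lbg(vectors, codebook_size, iters=20, eps=1e-3):
          """Grow a codebook by splitting, refining with Lloyd iterations."""
          codebook = vectors.mean(axis=0, keepdims=True)
          while len(codebook) < codebook_size:
              # Split every codevector into a perturbed pair, then refine.
              codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
              for _ in range(iters):
                  d = ((vectors[:, None, :] - codebook[None]) ** 2).sum(-1)
                  nearest = d.argmin(axis=1)
                  for j in range(len(codebook)):
                      members = vectors[nearest == j]
                      if len(members):
                          codebook[j] = members.mean(axis=0)
          return codebook

      rng = np.random.default_rng(0)
      frames = rng.normal(size=(2000, 12))   # stand-in for 12-dim speech feature frames
      cb = lbg(frames, 64)                   # 64 entries -> 6 bits per frame
      idx = ((frames[:, None, :] - cb[None]) ** 2).sum(-1).argmin(axis=1)
      print(cb.shape, idx[:10])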

  6. Considerations and Algorithms for Compression of Sets

    DEFF Research Database (Denmark)

    Larsson, Jesper

    We consider compression of unordered sets of distinct elements. After a discussion of the general problem, we focus on compressing sets of fixed-length bitstrings in the presence of statistical information. We survey techniques from previous work, suggesting some adjustments, and propose a novel compression algorithm that allows transparent incorporation of various estimates for the probability distribution. Our experimental results allow the conclusion that set compression can benefit from incorporating statistics, using our method or variants of previously known techniques.

  7. A biological compression model and its applications.

    Science.gov (United States)

    Cao, Minh Duc; Dix, Trevor I; Allison, Lloyd

    2011-01-01

    A biological compression model, the expert model, is presented which is superior to existing compression algorithms in both compression performance and speed. The model is able to compress whole eukaryotic genomes. Most importantly, the model provides a framework for knowledge discovery from biological data. It can be used for repeat element discovery, sequence alignment and phylogenetic analysis. We demonstrate that the model can handle statistically biased sequences and distantly related sequences where conventional knowledge discovery tools often fail.

  8. FRESCO: Referential compression of highly similar sequences.

    Science.gov (United States)

    Wandelt, Sebastian; Leser, Ulf

    2013-01-01

    In many applications, sets of similar texts or sequences are of high importance. Prominent examples are revision histories of documents or genomic sequences. Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, have gained a lot of interest in this field. In this paper, we propose a general open-source framework to compress large amounts of biological sequence data called Framework for REferential Sequence COmpression (FRESCO). Our basic compression algorithm is shown to be one to two orders of magnitude faster than comparable related work, while achieving similar compression ratios. We also propose several techniques to further increase compression ratios, while still retaining the advantage in speed: 1) selecting a good reference sequence; and 2) rewriting a reference sequence to allow for better compression. In addition, we propose a new way of further boosting the compression ratios by applying referential compression to already referentially compressed files (second-order compression). This technique allows for compression ratios far beyond the state of the art, for instance, 4,000:1 and higher for human genomes. We evaluate our algorithms on a large data set from three different species (more than 1,000 genomes, more than 3 TB) and on a collection of versions of Wikipedia pages. Our results show that real-time compression of highly similar sequences at high compression ratios is possible on modern hardware.
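    The core of referential compression is compact enough to sketch. The toy encoder below (illustrative only; FRESCO uses an indexed reference for speed, not this linear scan) emits (position, length) matches against a reference plus literal characters for mismatches:

      def ref_compress(reference: str, target: str, min_match: int = 4):
          ops, i = [], 0
          while i < len(target):
              # Greedy: longest substring of target[i:] present in the reference.
              length, pos = len(target) - i, -1
              while length >= min_match:
                  pos = reference.find(target[i:i + length])
                  if pos != -1:
                      break
                  length -= 1
              if pos != -1:
                  ops.append(("match", pos, length))
                  i += length
              else:
                  ops.append(("literal", target[i]))
                  i += 1
          return ops

      ref = "ACGTACGTTGACCAGT" * 4
      tgt = ref[:30] + "T" + ref[31:]        # one substitution vs. the reference
      print(ref_compress(ref, tgt))          # a few ops instead of 64 characters

    Applying the same encoder to the concatenated output of a first referential pass is the essence of the second-order compression mentioned above.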

  9. Massive supermultiplets in four-dimensional superstring theory

    International Nuclear Information System (INIS)

    Feng Wanzhe; Lüst, Dieter; Schlotterer, Oliver

    2012-01-01

    We extend the discussion of Feng et al. (2011) on massive Regge excitations on the first mass level of four-dimensional superstring theory. For the lightest massive modes of the open string sector, universal supermultiplets common to all four-dimensional compactifications with N=1,2 and N=4 spacetime supersymmetry are constructed respectively - both their vertex operators and their supersymmetry variations. Massive spinor helicity methods shed light on the interplay between individual polarization states.

  10. Subjective evaluation of compressed image quality

    Science.gov (United States)

    Lee, Heesub; Rowberg, Alan H.; Frank, Mark S.; Choi, Hyung-Sik; Kim, Yongmin

    1992-05-01

    Lossy data compression generates distortion, or error, in the reconstructed image, and the distortion becomes visible as the compression ratio increases. Even at the same compression ratio, the distortion appears differently depending on the compression method used. Because of the nonlinearity of the human visual system and of lossy data compression methods, we have subjectively evaluated the quality of medical images compressed with two different methods: an intraframe and an interframe coding algorithm. The raw evaluation data were analyzed statistically to measure interrater reliability and the reliability of an individual reader. The analysis of variance was also used to identify which compression method is statistically better, and from what compression ratio the quality of a compressed image is evaluated as poorer than that of the original. Nine x-ray CT head images from three patients were used as test cases. Six radiologists participated in reading the 99 images (some were duplicates) compressed at four different levels: original, 5:1, 10:1, and 15:1. The six readers agreed more than by chance alone, and their agreement was statistically significant, but there were large variations among readers as well as within a reader. The displacement-estimated interframe coding algorithm gives significantly better quality than the 2-D block DCT at significance level 0.05. Also, images compressed 10:1 with the interframe coding algorithm do not show any significant differences from the original at level 0.05.
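    For readers who want to reproduce the reliability side of such a study, a minimal agreement statistic for two raters (Cohen's kappa; the ratings below are invented, and the actual study used analogous measures across six readers) can be computed as follows:

      import numpy as np

      def cohens_kappa(r1, r2, n_levels):
          """Agreement between two raters, corrected for chance."""
          conf = np.zeros((n_levels, n_levels))
          for a, b in zip(r1, r2):
              conf[a, b] += 1
          n = conf.sum()
          p_obs = np.trace(conf) / n
          p_exp = conf.sum(axis=0) @ conf.sum(axis=1) / n**2
          return (p_obs - p_exp) / (1 - p_exp)

      rater1 = [3, 2, 3, 1, 4, 3, 2, 2, 3, 1]   # hypothetical 0-4 quality scores
      rater2 = [3, 2, 2, 1, 4, 3, 3, 2, 3, 0]
      print(cohens_kappa(rater1, rater2, 5))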

  11. H.264/AVC Video Compression on Smartphones

    Science.gov (United States)

    Sharabayko, M. P.; Markov, N. G.

    2017-01-01

    In this paper, we studied the usage of H.264/AVC video compression tools by flagship smartphones. The results show that only a subset of the tools is used, meaning that there is still potential to achieve higher compression efficiency within the H.264/AVC standard, but the most advanced smartphones are already reaching the compression efficiency limit of H.264/AVC.

  12. Relationship between the edgewise compression strength of ...

    African Journals Online (AJOL)

    The results of this study were used to determine the linear regression constants in the Maltenfort model by correlating the measured board edgewise compression strength (ECT) with the predicted strength, using the paper components' compression strengths, measured with the short-span compression test (SCT) and the ...

  13. Compressible dynamic stall control using high momentum microjets

    Science.gov (United States)

    Beahan, James J.; Shih, Chiang; Krothapalli, Anjaneyulu; Kumar, Rajan; Chandrasekhara, Muguru S.

    2014-09-01

    Control of the dynamic stall process of a NACA 0015 airfoil undergoing periodic pitching motion is investigated experimentally at the NASA Ames compressible dynamic stall facility. Multiple microjet nozzles distributed uniformly over the first 12% chord from the airfoil's leading edge are used for the dynamic stall control. The point diffraction interferometry technique is used to characterize the control effectiveness, both qualitatively and quantitatively. The microjet control is found to be very effective in suppressing both the emergence of the dynamic stall vortex and the associated massive flow separation over the entire operating range of angles of attack. At the high Mach number (M = 0.4), the use of microjets appears to eliminate the shock structures that are responsible for triggering the shock-induced separation, establishing that the use of microjets is effective in controlling dynamic stall even with a strong compressibility effect. In general, microjet control has an overall positive effect in terms of maintaining leading edge suction pressure and preventing flow separation.

  14. FIRST INVESTIGATION OF THE COMBINED IMPACT OF IONIZING RADIATION AND MOMENTUM WINDS FROM A MASSIVE STAR ON A SELF-GRAVITATING CORE

    International Nuclear Information System (INIS)

    Ngoumou, Judith; Hubber, David; Dale, James E.; Burkert, Andreas

    2015-01-01

    Massive stars shape the surrounding interstellar matter (ISM) by emitting ionizing photons and ejecting material through stellar winds. To study the impact of the momentum from the wind of a massive star on the surrounding neutral or ionized material, we implemented a new HEALPix-based momentum-conserving wind scheme in the smoothed particle hydrodynamics (SPH) code SEREN. A qualitative study of the impact of the feedback from an O7.5-like star on a self-gravitating sphere shows that, on its own, the transfer of momentum from a wind onto cold surrounding gas has both a compressing and a dispersing effect. It mostly affects gas at low and intermediate densities. When combined with a stellar source's ionizing ultraviolet (UV) radiation, we find the momentum-driven wind to have little direct effect on the gas. We conclude that during a massive star's main sequence, the UV ionizing radiation is the main feedback mechanism shaping and compressing the cold gas. Overall, the wind's effects on the dense gas dynamics and on the triggering of star formation are very modest. The structures formed in the ionization-only simulation and in the combined feedback simulation are remarkably similar. However, in the combined feedback case, different SPH particles end up being compressed. This indicates that the microphysics of gas mixing differ between the two feedback simulations and that the winds can contribute to the localized redistribution and reshuffling of gas.

  15. Exact Solutions in 3D New Massive Gravity

    Science.gov (United States)

    Ahmedov, Haji; Aliev, Alikram N.

    2011-01-01

    We show that the field equations of new massive gravity (NMG) consist of a massive (tensorial) Klein-Gordon-type equation with a curvature-squared source term and a constraint equation. We also show that, for algebraic type D and N spacetimes, the field equations of topologically massive gravity (TMG) can be thought of as the “square root” of the massive Klein-Gordon-type equation. Using this fact, we establish a simple framework for mapping all types D and N solutions of TMG into NMG. Finally, we present new examples of types D and N solutions to NMG.

  16. Holographic heat engine within the framework of massive gravity

    Science.gov (United States)

    Mo, Jie-Xiong; Li, Gu-Qiang

    2018-05-01

    Heat engine models are constructed within the framework of massive gravity in this paper. For the four-dimensional charged black holes in massive gravity, it is shown that the existence of graviton mass improves the heat engine efficiency significantly. The situation is more complicated for the five-dimensional neutral black holes since the constant which corresponds to the third massive potential also contributes to the efficiency. It is also shown that the existence of graviton mass can improve the heat engine efficiency. Moreover, we probe how the massive gravity influences the behavior of the heat engine efficiency approaching the Carnot efficiency.

  17. Very massive runaway stars from three-body encounters

    Science.gov (United States)

    Gvaramadze, Vasilii V.; Gualandris, Alessia

    2011-01-01

    Very massive stars preferentially reside in the cores of their parent clusters and form binary or multiple systems. We study the role of tight very massive binaries in the origin of the field population of very massive stars. We performed numerical simulations of dynamical encounters between single (massive) stars and a very massive binary with parameters similar to those of the most massive known Galactic binaries, WR 20a and NGC 3603-A1. We found that these three-body encounters could be responsible for the origin of the high peculiar velocities (≥70 km s⁻¹) observed for some very massive (≥60-70 M⊙) runaway stars in the Milky Way and the Large Magellanic Cloud (e.g. λ Cep, BD+43°3654, Sk -67°22, BI 237, 30 Dor 016), which can hardly be explained within the framework of the binary-supernova scenario. The production of high-velocity massive stars via three-body encounters is accompanied by the recoil of the binary in the opposite direction to the ejected star. We show that the relative position of the very massive binary R145 and the runaway early B-type star Sk-69°206 on the sky is consistent with the possibility that both objects were ejected from the central cluster, R136, of the star-forming region 30 Doradus via the same dynamical event - a three-body encounter.

  18. Nuclear radiation and the properties of concrete

    International Nuclear Information System (INIS)

    Kaplan, M.F.

    1983-08-01

    Concrete is used for structures in which it is exposed to nuclear radiation. Exposure to nuclear radiation may affect the properties of concrete. The report describes the types of nuclear radiation and discusses radiation damage in concrete. Attention is also given to the effects of neutron and gamma radiation on the compressive and tensile strength of concrete. Finally, radiation shielding, the attenuation of nuclear radiation and the value of concrete as a shielding material are discussed.

  19. Using autoencoders for mammogram compression.

    Science.gov (United States)

    Tan, Chun Chet; Eswaran, Chikkannan

    2011-02-01

    This paper presents the results obtained for medical image compression using autoencoder neural networks. Since mammograms (medical images) are usually large, training autoencoders becomes extremely tedious and difficult if the whole image is used. We show in this paper that autoencoders can be trained successfully by using image patches instead of the whole image. The compression performances of different types of autoencoders are compared based on two parameters, namely mean square error and the structural similarity index. The experimental results show that the autoencoder which does not use Restricted Boltzmann Machine pre-training yields better results than those which use this pre-training method.
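    The patch-based trick is independent of the autoencoder flavor, so a tiny linear autoencoder suffices to illustrate it (Python/NumPy; a stand-in for the architectures compared in the paper, with a random array in place of a real mammogram):

      import numpy as np

      def extract_patches(image, size=8, step=8):
          h, w = image.shape
          return np.array([image[r:r + size, c:c + size].ravel()
                           for r in range(0, h - size + 1, step)
                           for c in range(0, w - size + 1, step)])

      rng = np.random.default_rng(0)
      image = rng.random((256, 256))            # placeholder for a mammogram
      X = extract_patches(image)                # (n_patches, 64): small training vectors
      X = X - X.mean(axis=0)

      n_in, n_hid, lr = X.shape[1], 16, 0.1     # 64 -> 16 bottleneck, 4:1 compression
      W1 = rng.normal(0, 0.1, (n_in, n_hid))    # encoder weights
      W2 = rng.normal(0, 0.1, (n_hid, n_in))    # decoder weights
      for epoch in range(200):                  # plain gradient descent on MSE
          H = X @ W1
          err = H @ W2 - X
          W2 -= lr * H.T @ err / len(X)
          W1 -= lr * X.T @ (err @ W2.T) / len(X)
      print("reconstruction MSE:", np.mean((X @ W1 @ W2 - X) ** 2))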

  20. Culture: copying, compression, and conventionality.

    Science.gov (United States)

    Tamariz, Mónica; Kirby, Simon

    2015-01-01

    Through cultural transmission, repeated learning by new individuals transforms cultural information, which tends to become increasingly compressible (Kirby, Cornish, & Smith; Smith, Tamariz, & Kirby). Existing diffusion chain studies include in their design two processes that could be responsible for this tendency: learning (storing patterns in memory) and reproducing (producing the patterns again). This paper manipulates the presence of learning in a simple iterated drawing design experiment. We find that learning seems to be the causal factor behind the increase in compressibility observed in the transmitted information, while reproducing is a source of random heritable innovations. Only a theory invoking these two aspects of cultural learning will be able to explain human culture's fundamental balance between stability and innovation.

  1. Instability of ties in compression

    DEFF Research Database (Denmark)

    Buch-Hansen, Thomas Cornelius

    2013-01-01

    Masonry cavity walls are loaded by wind pressure and vertical load from upper floors. These loads result in bending moments and compression forces in the ties connecting the outer and the inner wall of a cavity wall. Large cavity walls are furthermore loaded by differential movements from the temperature gradient between the outer and the inner wall, which results in a critical increase of the bending moments in the ties. Since the ties are loaded by combined compression and moment forces, the load-bearing capacity is derived from instability equilibrium equations. Most of them are iterative, since exact instability solutions are complex to derive, not to mention the extra complexity of introducing dimensional instability from the temperature gradients. Using an inverse variable substitution and comparing an exact theory with an analytical instability solution, a method to design tie...

  2. Diagnostic imaging of compression neuropathy

    International Nuclear Information System (INIS)

    Weishaupt, D.; Andreisek, G.

    2007-01-01

    Compression-induced neuropathy of peripheral nerves can cause severe pain of the foot and ankle. Early diagnosis is important to institute prompt treatment and to minimize potential injury. Although clinical examination combined with electrophysiological studies remain the cornerstone of the diagnostic work-up, in certain cases, imaging may provide key information with regard to the exact anatomic location of the lesion or aid in narrowing the differential diagnosis. In other patients with peripheral neuropathies of the foot and ankle, imaging may establish the etiology of the condition and provide information crucial for management and/or surgical planning. MR imaging and ultrasound provide direct visualization of the nerve and surrounding abnormalities. Bony abnormalities contributing to nerve compression are best assessed by radiographs and CT. Knowledge of the anatomy, the etiology, typical clinical findings, and imaging features of peripheral neuropathies affecting the peripheral nerves of the foot and ankle will allow for a more confident diagnosis. (orig.) [de

  3. [Medical image compression: a review].

    Science.gov (United States)

    Noreña, Tatiana; Romero, Eduardo

    2013-01-01

    Modern medicine is an increasingly complex, evidence-based activity; it relies on information from multiple sources: medical record text, sound recordings, and images and videos generated by a large number of devices. Medical imaging is one of the most important sources of information, since it offers comprehensive support of medical procedures for diagnosis and follow-up. However, the amount of information generated by image capturing gadgets quickly exceeds the storage available in radiology services, generating additional costs for devices with greater storage capacity. Besides, the current trend of developing applications in cloud computing has limitations: even though virtual storage is available from anywhere, connections are made through the internet. In these scenarios, the optimal use of information necessarily requires powerful compression algorithms adapted to the needs of medical activity. In this paper we present a review of compression techniques used for image storage, and a critical analysis of them from the point of view of their use in clinical settings.

  4. Compressed optimization of device architectures

    Energy Technology Data Exchange (ETDEWEB)

    Frees, Adam [Univ. of Wisconsin, Madison, WI (United States). Dept. of Physics; Gamble, John King [Microsoft Research, Redmond, WA (United States). Quantum Architectures and Computation Group; Ward, Daniel Robert [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States). Center for Computing Research; Blume-Kohout, Robin J [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States). Center for Computing Research; Eriksson, M. A. [Univ. of Wisconsin, Madison, WI (United States). Dept. of Physics; Friesen, Mark [Univ. of Wisconsin, Madison, WI (United States). Dept. of Physics; Coppersmith, Susan N. [Univ. of Wisconsin, Madison, WI (United States). Dept. of Physics

    2014-09-01

    Recent advances in nanotechnology have enabled researchers to control individual quantum mechanical objects with unprecedented accuracy, opening the door for both quantum and extreme-scale conventional computation applications. As these devices become more complex, designing for facility of control becomes a daunting and computationally infeasible task. Here, motivated by ideas from compressed sensing, we introduce a protocol for the Compressed Optimization of Device Architectures (CODA). It leads naturally to a metric for benchmarking and optimizing device designs, as well as an automatic device control protocol that reduces the operational complexity required to achieve a particular output. Because this protocol is both experimentally and computationally efficient, it is readily extensible to large systems. For this paper, we demonstrate both the benchmarking and device control protocol components of CODA through examples of realistic simulations of electrostatic quantum dot devices, which are currently being developed experimentally for quantum computation.

  5. Massively parallel Fokker-Planck calculations

    International Nuclear Information System (INIS)

    Mirin, A.A.

    1990-01-01

    This paper reports that the Fokker-Planck package FPPAC, which solves the complete nonlinear multispecies Fokker-Planck collision operator for a plasma in two-dimensional velocity space, has been rewritten for the Connection Machine 2. This has involved allocation of variables either to the front end or the CM2, minimization of data flow, and replacement of Cray-optimized algorithms with ones suitable for a massively parallel architecture. Calculations have been carried out on various Connection Machines throughout the country. Results and timings on these machines have been compared to each other and to those on the static memory Cray-2. For large problem sizes, the Connection Machine 2 is found to be cost-efficient.

  6. Large-group psychodynamics and massive violence

    Directory of Open Access Journals (Sweden)

    Vamik D. Volkan

    2006-06-01

    Beginning with Freud, psychoanalytic theories concerning large groups have mainly focused on individuals' perceptions of what their large groups psychologically mean to them. This chapter examines some aspects of large-group psychology in its own right and studies the psychodynamics of ethnic, national, religious or ideological groups, the membership of which originates in childhood. I will compare the mourning process in individuals with the mourning process in large groups to illustrate why we need to study large-group psychology as a subject in itself. As part of this discussion I will also describe signs and symptoms of large-group regression. When there is a threat against a large group's identity, massive violence may be initiated, and this violence in turn has an obvious impact on public health.

  7. Massive cortical reorganization in sighted Braille readers.

    Science.gov (United States)

    Siuda-Krzywicka, Katarzyna; Bola, Łukasz; Paplińska, Małgorzata; Sumera, Ewa; Jednoróg, Katarzyna; Marchewka, Artur; Śliwińska, Magdalena W; Amedi, Amir; Szwed, Marcin

    2016-03-15

    The brain is capable of large-scale reorganization in blindness or after massive injury. Such reorganization crosses the division into separate sensory cortices (visual, somatosensory...). As its result, the visual cortex of the blind becomes active during tactile Braille reading. Although the possibility of such reorganization in the normal, adult brain has been raised, definitive evidence has been lacking. Here, we demonstrate such extensive reorganization in normal, sighted adults who learned Braille while their brain activity was investigated with fMRI and transcranial magnetic stimulation (TMS). Subjects showed enhanced activity for tactile reading in the visual cortex, including the visual word form area (VWFA) that was modulated by their Braille reading speed and strengthened resting-state connectivity between visual and somatosensory cortices. Moreover, TMS disruption of VWFA activity decreased their tactile reading accuracy. Our results indicate that large-scale reorganization is a viable mechanism recruited when learning complex skills.

  8. Signatures of massive sgoldstinos at hadron colliders

    International Nuclear Information System (INIS)

    Perazzi, Elena; Ridolfi, Giovanni; Zwirner, Fabio

    2000-01-01

    In supersymmetric extensions of the Standard Model with a very light gravitino, the effective theory at the weak scale should contain not only the goldstino G-tilde, but also its supersymmetric partners, the sgoldstinos. In the simplest case, the goldstino is a gauge-singlet and its superpartners are two neutral spin-0 particles, S and P. We study possible signals of massive sgoldstinos at hadron colliders, focusing on those that are most relevant for the Tevatron. We show that inclusive production of sgoldstinos, followed by their decay into two photons, can lead to observable signals or to stringent combined bounds on the gravitino and sgoldstino masses. Sgoldstino decays into two gluon jets may provide a useful complementary signature

  9. Scalable Strategies for Computing with Massive Data

    Directory of Open Access Journals (Sweden)

    Michael Kane

    2013-11-01

    This paper presents two complementary statistical computing frameworks that address challenges in parallel processing and the analysis of massive data. First, the foreach package allows users of the R programming environment to define parallel loops that may be run sequentially on a single machine, in parallel on a symmetric multiprocessing (SMP) machine, or in cluster environments without platform-specific code. Second, the bigmemory package implements memory- and file-mapped data structures that provide (a) access to arbitrarily large data while retaining a look and feel that is familiar to R users and (b) data structures that are shared across processor cores in order to support efficient parallel computing techniques. Although these packages may be used independently, this paper shows how they can be used in combination to address challenges that have effectively been beyond the reach of researchers who lack specialized software development skills or expensive hardware.
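    Both ideas, a parallel loop over workers and a shared, file-backed data structure, have close analogs outside R. The sketch below (Python; an illustration of the concepts, not the foreach/bigmemory packages themselves, and the backing filename is hypothetical) pairs numpy.memmap with a process pool:

      import numpy as np
      from multiprocessing import Pool

      FNAME, SHAPE = "big_matrix.dat", (10_000, 100)   # hypothetical backing file

      def col_mean(j):
          # Each worker re-attaches to the same file-backed matrix (no copying).
          m = np.memmap(FNAME, dtype="float64", mode="r", shape=SHAPE)
          return float(m[:, j].mean())

      if __name__ == "__main__":
          m = np.memmap(FNAME, dtype="float64", mode="w+", shape=SHAPE)
          m[:] = np.random.default_rng(0).random(SHAPE)
          m.flush()
          with Pool(4) as pool:                        # plays the "foreach" role
              means = pool.map(col_mean, range(SHAPE[1]))
          print(means[:5])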

  10. Computational chaos in massively parallel neural networks

    Science.gov (United States)

    Barhen, Jacob; Gulati, Sandeep

    1989-01-01

    A fundamental issue which directly impacts the scalability of current theoretical neural network models to massively parallel embodiments, in both software as well as hardware, is the inherent and unavoidable concurrent asynchronicity of emerging fine-grained computational ensembles and the possible emergence of chaotic manifestations. Previous analyses attributed dynamical instability to the topology of the interconnection matrix, to parasitic components or to propagation delays. However, researchers have observed the existence of emergent computational chaos in a concurrently asynchronous framework, independent of the network topology. The researchers present a methodology enabling the effective asynchronous operation of large-scale neural networks. Necessary and sufficient conditions guaranteeing concurrent asynchronous convergence are established in terms of contracting operators. Lyapunov exponents are computed formally to characterize the underlying nonlinear dynamics. Simulation results are presented to illustrate network convergence to the correct results, even in the presence of large delays.

  11. Substructure of Highly Boosted Massive Jets

    Energy Technology Data Exchange (ETDEWEB)

    Alon, Raz [Weizmann Inst. of Science, Rehovot (Israel)

    2012-10-01

    Modern particle accelerators enable researchers to study new high energy frontiers which have never been explored before. This realm opens possibilities to further examine known fields such as Quantum Chromodynamics. In addition, it allows searching for new physics and setting new limits on its existence. This study examined the substructure of highly boosted massive jets measured by the CDF II detector. Events from 1.96 TeV proton-antiproton collisions at the Fermilab Tevatron Collider were collected out of a total integrated luminosity of 5.95 fb⁻¹. They were selected to have at least one jet with transverse momentum above 400 GeV/c. The jet mass, angularity, and planar flow were measured and compared with predictions of perturbative Quantum Chromodynamics, and were found to be consistent with the theory. A search for boosted top quarks was conducted and resulted in an upper limit on the production cross section of such top quarks.

  12. The Search for Stable, Massive, Elementary Particles

    International Nuclear Information System (INIS)

    Kim, Peter C.

    2001-01-01

    In this paper we review the experimental and observational searches for stable, massive, elementary particles other than the electron and proton. The particles may be neutral, may have unit charge or may have fractional charge. They may interact through the strong, electromagnetic, weak or gravitational forces or through some unknown force. The purpose of this review is to provide a guide for future searches--what is known, what is not known, and what appear to be the most fruitful areas for new searches. A variety of experimental and observational methods such as accelerator experiments, cosmic ray studies, searches for exotic particles in bulk matter and searches using astrophysical observations is included in this review

  13. Planckian Interacting Massive Particles as Dark Matter

    DEFF Research Database (Denmark)

    Garny, Mathias; Sandora, McCullen; Sloth, Martin S.

    2016-01-01

    In this case the WIMP miracle is a mirage, and instead minimality as dictated by Occam's razor would indicate that dark matter is related to the Planck scale, where quantum gravity is anyway expected to manifest itself. Assuming within this framework that dark matter is a Planckian Interacting Massive Particle, we show that the most natural mass larger than 0.01 M_p is already ruled out by the absence of tensor modes in the CMB. This also indicates that we expect tensor modes in the CMB to be observed soon for this type of minimal dark matter model. Finally, we touch upon the KK graviton mode as a possible realization of this scenario within UV complete models, as well as further potential signatures and peculiar properties of this type of dark matter candidate. This paradigm therefore leads to a subtle connection between quantum gravity, the physics of primordial inflation, and the nature of dark...

  14. Effect of massive disks on bulge isophotes

    International Nuclear Information System (INIS)

    Monet, D.G.; Richstone, D.O.; Schechter, P.L.

    1981-01-01

    Massive disks produce flattened equipotentials. Unless the stars in a galaxy bulge are preferentially hotter in the z direction than in the plane, the isophotes will be at least as flat as the equipotentials. The comparison of two galaxy models having flat rotation curves with the available surface photometry for five external galaxies does not restrict the mass fraction which might reside in the disk. However, star counts in our own Galaxy indicate that unless the disk terminates close to the solar circle, no more than half the mass within that circle lies in the disk. The remaining half must lie either in the bulge or, more probably, in a third dark, round, dynamically distinct component

  15. Neural nets for massively parallel optimization

    Science.gov (United States)

    Dixon, Laurence C. W.; Mills, David

    1992-07-01

    To apply massively parallel processing systems to the solution of large scale optimization problems it is desirable to be able to evaluate any function f(z), z ∈ ℝⁿ, in a parallel manner. The theorem of Cybenko, Hecht-Nielsen, Hornik, Stinchcombe and White, and Funahashi shows that this can be achieved by a neural network with one hidden layer. In this paper we address the problem of the number of nodes required in the layer to achieve a given accuracy in the function and gradient values at all points within a given n-dimensional interval. The type of activation function needed to obtain nonsingular Hessian matrices is described, and a strategy for obtaining accurate minimal networks is presented.

  16. Climate models on massively parallel computers

    International Nuclear Information System (INIS)

    Vitart, F.; Rouvillois, P.

    1993-01-01

    First results obtained on massively parallel computers (Multiple Instruction Multiple Data and Single Instruction Multiple Data) make it possible to consider building coupled models with high resolutions. This would make possible the simulation of thermohaline circulation and other interaction phenomena between atmosphere and ocean. The increase in computer power, and with it the improvement of resolution, will lead us to revise our approximations. The hydrostatic approximation (in ocean circulation) will no longer be valid when the grid mesh is smaller than a few kilometers: we shall have to find other models. The expertise gained in numerical analysis at the Center of Limeil-Valenton (CEL-V) will be used again to devise global models taking into account atmosphere, ocean, ice floe and biosphere, allowing climate simulation down to a regional scale.

  17. Compressed air energy storage system

    Science.gov (United States)

    Ahrens, Frederick W.; Kartsounes, George T.

    1981-01-01

    An internal combustion reciprocating engine is operable as a compressor during slack demand periods utilizing excess power from a power grid to charge air into an air storage reservoir and as an expander during peak demand periods to feed power into the power grid utilizing air obtained from the air storage reservoir together with combustible fuel. Preferably the internal combustion reciprocating engine is operated at high pressure and a low pressure turbine and compressor are also employed for air compression and power generation.

  18. Compressing spatio-temporal trajectories

    DEFF Research Database (Denmark)

    Gudmundsson, Joachim; Katajainen, Jyrki; Merrick, Damian

    2009-01-01

    such that the most common spatio-temporal queries can still be answered approximately after the compression has taken place. In the process, we develop an implementation of the Douglas–Peucker path-simplification algorithm which works efficiently even in the case where the polygonal path given as input is allowed to self-intersect. For a polygonal path of size n, the processing time is O(n log^k n) for k = 2 or k = 3, depending on the type of simplification.
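    The Douglas–Peucker routine at the heart of the paper is short enough to sketch (Python/NumPy; this plain recursive version omits the paper's handling of self-intersections and time annotations):

      import numpy as np

      def point_segment_dist(p, a, b):
          ab, ap = b - a, p - a
          if np.allclose(a, b):
              return np.linalg.norm(ap)
          t = np.clip(ap @ ab / (ab @ ab), 0.0, 1.0)
          return np.linalg.norm(ap - t * ab)

      def douglas_peucker(points, eps):
          if len(points) < 3:
              return points
          d = [point_segment_dist(p, points[0], points[-1]) for p in points[1:-1]]
          i = int(np.argmax(d)) + 1
          if d[i - 1] <= eps:
              return [points[0], points[-1]]    # whole span within tolerance
          left = douglas_peucker(points[:i + 1], eps)
          right = douglas_peucker(points[i:], eps)
          return left[:-1] + right              # drop the duplicated split point

      t = np.linspace(0, 4 * np.pi, 500)
      path = [np.array(p) for p in zip(t, np.sin(t))]
      print(len(path), "->", len(douglas_peucker(path, eps=0.05)))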

  19. Massive Outflows Associated with ATLASGAL Clumps

    Science.gov (United States)

    Yang, A. Y.; Thompson, M. A.; Urquhart, J. S.; Tian, W. W.

    2018-03-01

    We have undertaken the largest survey for outflows within the Galactic plane using simultaneously observed ¹³CO and C¹⁸O data. Out of a total of 919 ATLASGAL clumps, 325 have data suitable to identify outflows, and 225 (69% ± 3%) show high-velocity outflows. The clumps with detected outflows show significantly higher clump masses (M_clump), bolometric luminosities (L_bol), luminosity-to-mass ratios (L_bol/M_clump), and peak H₂ column densities (N_H2) compared to those without outflows. Outflow activity has been detected within the youngest quiescent clump (i.e., 70 μm weak) in this sample, and we find that the outflow detection rate increases with M_clump, L_bol, L_bol/M_clump, and N_H2, approaching 90% in some cases (UC H II regions = 93% ± 3%; masers = 86% ± 4%; HC H II regions = 100%). This high detection rate suggests that outflows are a ubiquitous phenomenon of massive star formation (MSF). The mean outflow mass entrainment rate implies a mean accretion rate of ∼10⁻⁴ M_⊙ yr⁻¹, in full agreement with the accretion rate predicted by theoretical models of MSF. Outflow properties are tightly correlated with M_clump, L_bol, and L_bol/M_clump, and show the strongest relation with the bolometric clump luminosity. This suggests that outflows might be driven by the most massive and luminous source within the clump. The correlations are similar for both low-mass and high-mass outflows over 7 orders of magnitude, indicating that they may share a similar outflow mechanism. Outflow energy is comparable to the turbulent energy within the clump; however, we find no evidence that outflows increase the level of clump turbulence as the clumps evolve. This implies that the origin of turbulence within clumps is fixed before the onset of star formation.

  20. Modular action on the massive algebra

    International Nuclear Information System (INIS)

    Saffary, T.

    2005-12-01

    The subject of this thesis is the modular group of automorphisms (σ_t^m)_{t ∈ ℝ}, m > 0, acting on the massive algebra of local observables M_m(O) having their support in O ⊂ ℝ⁴. After a compact introduction to micro-local analysis and the theory of one-parameter groups of automorphisms, which are used extensively throughout the investigation, we are concerned with modular theory and its consequences in mathematics, e.g., Connes' cocycle theorem and the classification of type III factors and Jones' index theory, as well as in physics, e.g., the determination of local von Neumann algebras to be hyperfinite factors of type III_1, the formulation of thermodynamic equilibrium states for infinite-dimensional quantum systems (KMS states) and the discovery of modular action as geometric transformations. However, our main focus is its applications in physics, in particular the modular action as Lorentz boosts on the Rindler wedge, as dilations on the forward light cone and as conformal mappings on the double cone. Subsequently, their most important implications in local quantum physics are discussed. The purpose of this thesis is to shed more light on the transition from the known massless modular action to the wanted massive one in the case of double cones. First of all, the infinitesimal generator δ_m of the group (σ_t^m)_{t ∈ ℝ} is investigated; in particular, some assumptions on its structure are verified explicitly for the first time for two concrete examples. Then, two strategies for the calculation of σ_t^m itself are discussed. Some formalisms and results from operator theory and the method of second quantisation used in this thesis are made available in the appendix. (orig.)

  1. METHYL CYANIDE OBSERVATIONS TOWARD MASSIVE PROTOSTARS

    Energy Technology Data Exchange (ETDEWEB)

    Rosero, V.; Hofner, P. [Physics Department, New Mexico Tech, 801 Leroy Place, Socorro, NM 87801 (United States); Kurtz, S. [Centro de Radioastronomia y Astrofisica, Universidad Nacional Autonoma de Mexico, Morelia 58090 (Mexico); Bieging, J. [Department of Astronomy and Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721 (United States); Araya, E. D. [Physics Department, Western Illinois University, 1 University Circle, Macomb, IL 61455 (United States)

    2013-07-01

    We report the results of a survey in the CH₃CN J = 12 → 11 transition toward a sample of massive protostellar candidates. The observations were carried out with the 10 m Submillimeter Telescope on Mount Graham, AZ. We detected this molecular line in 9 out of 21 observed sources. In six cases this is the first detection of this transition. We also obtained full beam sampled cross-scans for five sources which show that the lower K-components can be extended on the arcminute angular scale. The higher K-components, however, are always found to be compact with respect to our 36'' beam. A Boltzmann population diagram analysis of the central spectra indicates CH₃CN column densities of about 10¹⁴ cm⁻², and rotational temperatures above 50 K, which confirms these sources as hot molecular cores. Independent fits to line velocity and width for the individual K-components resulted in the detection of an increasing blueshift with increasing line excitation for four sources. Comparison with mid-infrared (mid-IR) images from the SPITZER GLIMPSE/IRAC archive for six sources shows that the CH₃CN emission is generally coincident with a bright mid-IR source. Our data clearly show that the CH₃CN J = 12 → 11 transition is a good probe of the hot molecular gas near massive protostars, and provide the basis for future interferometric studies.

  2. MASSIVE INFANT STARS ROCK THEIR CRADLE

    Science.gov (United States)

    2002-01-01

    Extremely intense radiation from newly born, ultra-bright stars has blown a glowing spherical bubble in the nebula N83B, also known as NGC 1748. A new NASA Hubble Space Telescope image has helped to decipher the complex interplay of gas and radiation of a star-forming region in a nearby galaxy. The image graphically illustrates just how these massive stars sculpt their environment by generating powerful winds that alter the shape of the parent gaseous nebula. These processes are also seen in our Milky Way in regions like the Orion Nebula. The Hubble telescope is famous for its contribution to our knowledge about star formation in very distant galaxies. Although most of the stars in the Universe were born several billions of years ago, when the Universe was young, star formation still continues today. This new Hubble image shows a very compact star-forming region in a small part of one of our neighboring galaxies - the Large Magellanic Cloud. This galaxy lies only 165,000 light-years from our Milky Way and can easily be seen with the naked eye from the Southern Hemisphere. Young, massive, ultra-bright stars are seen here just as they are born and emerge from the shelter of their pre-natal molecular cloud. Catching these hefty stars at their birthplace is not as easy as it may seem. Their high mass means that the young stars evolve very rapidly and are hard to find at this critical stage. Furthermore, they spend a good fraction of their youth hidden from view, shrouded by large quantities of dust in a molecular cloud. The only chance is to observe them just as they start to emerge from their cocoon - and then only with very high-resolution telescopes. Astronomers from France, the U.S., and Germany have used Hubble to study the fascinating interplay between gas, dust, and radiation from the newly born stars in this nebula. Its peculiar and turbulent structure has been revealed for the first time. This high-resolution study has also uncovered several individual stars

  3. Nuclear radiation in water

    International Nuclear Information System (INIS)

    Abrams, H.L.

    1989-01-01

    The manifestations of acute radiation sickness in the post-nuclear-attack period must be recognized and understood in order to apply therapeutic measures appropriately. The syndromes observed (hematopoietic, gastrointestinal, central nervous system) are dose dependent and vary in the degree of patient impairment and lethality. Estimates of mortality and morbidity following a massive exchange vary profoundly, depending on the targeting scenarios, the modes employed, and the meteorologic conditions anticipated. Even the LD-50 dose remains the subject of controversy. Using a US Government model of such an exchange, an estimated 23 million survivors would have radiation sickness, frequently complicated by trauma and burns. Among these survivors, an overriding consideration will be the presence and extent of infection, associated with alterations in the immune system, malnutrition, dehydration, exposure and hardship. Triage and treatment will be extraordinarily complex, requiring patient relocation, massive fluid replacement, antibiotics, a sterile environment, and many other measures. Massive disparities between supply and demand for physicians, nurses, other health workers, hospital beds, supplies and equipment, antibiotics, and other pharmaceutical agents will render a coherent physician response virtually impossible. Such disparities will be compounded by the destruction of transport systems and intolerably high radiation levels in many areas. If it is true that the meliorative efforts of physicians in post-attack radiation damage will be incapable of addressing this massive health care problem meaningfully, then clearly their most effective role is to prevent the threat from materializing. (authors)

  4. [Compression treatment for burned skin].

    Science.gov (United States)

    Jaafar, Fadhel; Lassoued, Mohamed A; Sahnoun, Mahdi; Sfar, Souad; Cheikhrouhou, Morched

    2012-02-01

    The regularity of a compressive knit is defined as its ability to perform its function on burned skin. This property is essential to avoid rejection of the material or toxicity problems. Objective: to make knits biocompatible with severely burned human skin. We fabricated knits of elastic material; to ensure good adhesion to the skin, we used a tight-loop knitted construction. The length of yarn absorbed per stitch and the raw material were changed for each sample. The physical properties of each sample were measured and compared. Surface modifications were made to these samples by impregnation with microcapsules based on jojoba oil. The knits are compressive, elastic in all directions, light, thin, comfortable, and washable for reasons of hygiene; in addition, they recover their compressive properties after washing. The jojoba oil microcapsules hydrate the burned skin; this moisturizing contributes to the firmness of the wound and gives flexibility to the skin. Compressive knits are thus biocompatible with burned skin. The mixture of natural and synthetic fibers is irreplaceable in terms of comfort and regularity.

  5. Compressibility effects on turbulent mixing

    Science.gov (United States)

    Panickacheril John, John; Donzis, Diego

    2016-11-01

    We investigate the effect of compressibility on passive scalar mixing in isotropic turbulence, with a focus on the fundamental mechanisms responsible for such effects, using a large Direct Numerical Simulation (DNS) database. The database includes simulations with Taylor Reynolds number (Rλ) up to 100, turbulent Mach number (Mt) between 0.1 and 0.6, and Schmidt number (Sc) from 0.5 to 1.0. We present several measures of mixing efficiency on different canonical flows to robustly identify compressibility effects. We find that, as in shear layers, mixing is reduced as Mach number increases. However, the data also reveal a non-monotonic trend with Mt. To assess directly the effect of dilatational motions, we also present results with both dilatational and solenoidal forcing. Analysis suggests that a small fraction of dilatational forcing decreases mixing time at higher Mt. Scalar spectra collapse when normalized by Batchelor variables, which suggests that a compressive mechanism similar to Batchelor mixing in incompressible flows might be responsible for better mixing at high Mt and with dilatational forcing compared to pure solenoidal mixing. We also present results on scalar budgets, in particular on production and dissipation. Support from NSF is gratefully acknowledged.

  6. Image compression of bone images

    International Nuclear Information System (INIS)

    Hayrapetian, A.; Kangarloo, H.; Chan, K.K.; Ho, B.; Huang, H.K.

    1989-01-01

    This paper reports a receiver operating characteristic (ROC) experiment conducted to compare the diagnostic performance of a compressed bone image with the original. The compression was done on custom hardware that implements an algorithm based on full-frame cosine transform. The compression ratio in this study is approximately 10:1, which was decided after a pilot experiment. The image set consisted of 45 hand images, including normal images and images containing osteomalacia and osteitis fibrosa. Each image was digitized with a laser film scanner to 2,048 x 2,048 x 8 bits. Six observers, all board-certified radiologists, participated in the experiment. For each ROC session, an independent ROC curve was constructed and the area under that curve calculated. The image set was randomized for each session, as was the order for viewing the original and reconstructed images. Analysis of variance was used to analyze the data and derive statistically significant results. The preliminary results indicate that the diagnostic quality of the reconstructed image is comparable to that of the original image
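
    The full-frame cosine-transform approach lends itself to a compact illustration. Below is a minimal sketch, in Python, of transform-domain compression by coefficient truncation; the study's custom hardware, quantization, and entropy coding are not reproduced, and the 10:1 figure here refers only to the fraction of retained coefficients.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def fullframe_dct_roundtrip(image: np.ndarray, keep: float = 0.1) -> np.ndarray:
        """Zero all but the largest-magnitude `keep` fraction of DCT coefficients."""
        coeffs = dctn(image, norm="ortho")                 # full-frame 2D cosine transform
        cutoff = np.quantile(np.abs(coeffs), 1.0 - keep)
        coeffs[np.abs(coeffs) < cutoff] = 0.0              # discard small coefficients
        return idctn(coeffs, norm="ortho")                 # reconstructed image

    # Synthetic stand-in for a digitized radiograph (the study used 2,048 x 2,048 x 8 bits)
    img = np.random.default_rng(0).integers(0, 256, size=(512, 512)).astype(float)
    rec = fullframe_dct_roundtrip(img, keep=0.1)
    rmse = float(np.sqrt(np.mean((img - rec) ** 2)))       # objective quality measure
    ```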

  7. Isoscalar compression modes in relativistic random phase approximation

    International Nuclear Information System (INIS)

    Ma, Zhong-yu; Van Giai, Nguyen.; Wandelt, A.; Vretenar, D.; Ring, P.

    2001-01-01

    Monopole and dipole compression modes in nuclei are analyzed in the framework of a fully consistent relativistic random phase approximation (RRPA), based on effective mean-field Lagrangians with nonlinear meson self-interaction terms. The large effect of Dirac sea states on isoscalar strength distribution functions is illustrated for the monopole mode. The main contribution of Fermi and Dirac sea pair states arises through the exchange of the scalar meson. The effect of vector meson exchange is much smaller. For the monopole mode, RRPA results are compared with constrained relativistic mean-field calculations. A comparison between experimental and calculated energies of isoscalar giant monopole resonances points to a value of 250-270 MeV for the nuclear matter incompressibility. A large discrepancy remains between theoretical predictions and experimental data for the dipole compression mode

  8. Compressing DNA sequence databases with coil

    Directory of Open Access Journals (Sweden)

    Hendy Michael D

    2008-05-01

    Background: Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression, an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results: We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression; the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion: coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.
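
    For scale, the conventional baseline the authors improve upon (Lempel-Ziv compression of a flat file) is easy to measure. A minimal sketch with a hypothetical file name; coil itself is a separate tool whose edit-tree coding is not reproduced here.

    ```python
    import gzip
    from pathlib import Path

    def gzip_ratio(path: str) -> float:
        """Compression ratio achieved by standard Lempel-Ziv (gzip) on a flat file."""
        raw = Path(path).read_bytes()
        return len(raw) / len(gzip.compress(raw, compresslevel=9))

    # print(gzip_ratio("est_records.fasta"))  # plain DNA text typically lands near 4:1
    ```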

  9. The use of ZFP lossy floating point data compression in tornado-resolving thunderstorm simulations

    Science.gov (United States)

    Orf, L.

    2017-12-01

    In the field of atmospheric science, numerical models are used to produce forecasts of weather and climate and serve as virtual laboratories for scientists studying atmospheric phenomena. In both operational and research arenas, atmospheric simulations exploiting modern supercomputing hardware can produce a tremendous amount of data. During model execution, the transfer of floating point data from memory to the file system is often a significant bottleneck where I/O can dominate wallclock time. One way to reduce the I/O footprint is to compress the floating point data, which reduces the amount of data saved to the file system. In this presentation we introduce LOFS, a file system developed specifically for use in three-dimensional numerical weather models that are run on massively parallel supercomputers. LOFS utilizes the core (in-memory buffered) HDF5 driver and includes compression options such as ZFP, a lossy floating point data compression algorithm. ZFP offers several mechanisms for specifying the amount of lossy compression to be applied to floating point data, including the ability to specify the maximum absolute error allowed in each compressed 3D array. We explore different maximum error tolerances in a tornado-resolving supercell thunderstorm simulation for model variables including cloud and precipitation, temperature, wind velocity and vorticity magnitude. We find that average compression ratios exceeding 20:1 in scientifically interesting regions of the simulation domain produce visually identical results to uncompressed data in visualizations and plots. Since LOFS splits the model domain across many files, compression ratios for a given error tolerance can be compared across different locations within the model domain. We find that regions of high spatial variability (which tend to be where scientifically interesting things are occurring) show the lowest compression ratios, whereas regions of the domain with little spatial variability compress
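
    ZFP's fixed-accuracy mode, which the presentation explores through maximum-error tolerances, can be exercised directly from Python. A minimal sketch, assuming the zfpy bindings distributed with the ZFP library; LOFS and the actual model fields are not reproduced here.

    ```python
    import numpy as np
    import zfpy  # Python bindings shipped with ZFP

    # A smooth synthetic 3D field; low spatial variability is where ZFP compresses best
    x = np.linspace(0.0, 1.0, 64, dtype=np.float32)
    field = np.sin(4 * np.pi * x)[:, None, None] * np.cos(2 * np.pi * x)[None, :, None] * x

    tol = 1e-3                                      # maximum absolute error per value
    buf = zfpy.compress_numpy(field, tolerance=tol)
    back = zfpy.decompress_numpy(buf)

    ratio = field.nbytes / len(buf)                 # compression ratio (high for smooth data)
    assert np.max(np.abs(field - back)) <= tol      # fixed-accuracy guarantee
    ```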

  10. Nuclear law - Nuclear safety

    International Nuclear Information System (INIS)

    Pontier, Jean-Marie; Roux, Emmanuel; Leger, Marc; Deguergue, Maryse; Vallar, Christian; Pissaloux, Jean-Luc; Bernie-Boissard, Catherine; Thireau, Veronique; Takahashi, Nobuyuki; Spencer, Mary; Zhang, Li; Park, Kyun Sung; Artus, J.C.

    2012-01-01

    This book contains the contributions presented during a one-day seminar. The authors propose a framework for a legal approach to nuclear safety, a discussion of the 2009/71/EURATOM directive which establishes a European framework for nuclear safety in nuclear installations, a comment on nuclear safety and environmental governance, a discussion of the relationship between citizenship and nuclear power, some thoughts about the Nuclear Safety Authority, an overview of safety in nuclear waste burial, a comment on the NOME law with respect to electricity prices and nuclear safety, a comment on the legal consequences of the Fukushima accident for nuclear safety in Japanese law, a presentation of US nuclear regulation, an overview of nuclear safety in China, and a discussion of nuclear safety in the medical sector

  11. Envera Variable Compression Ratio Engine

    Energy Technology Data Exchange (ETDEWEB)

    Charles Mendler

    2011-03-15

    Aggressive engine downsizing, variable compression ratio and use of the Atkinson cycle are being combined to improve fuel economy by up to 40 percent relative to port fuel injected gasoline engines, while maintaining full engine power. Approach: Engine downsizing is viewed by US and foreign automobile manufacturers as one of the best options for improving fuel economy. While this strategy has already demonstrated a degree of success, downsizing and fuel economy gains are currently limited. With new variable compression ratio technology, however, the degree of engine downsizing and fuel economy improvement can be greatly increased. A small variable compression ratio (VCR) engine has the potential to return significantly higher vehicle fuel economy while also providing high power. Affordability and potential for near term commercialization are key attributes of the Envera VCR engine. VCR Technology: To meet torque and power requirements, a smaller engine needs to do more work per stroke. This is typically accomplished by boosting the incoming charge with either a turbo or supercharger so that more energy is present in the cylinder per stroke to do the work. With current production engines the degree of engine boosting (which correlates to downsizing) is limited by detonation (combustion knock) at high boost levels. Additionally, the turbo or supercharger needs to be responsive and efficient while providing the needed boost. VCR technology eliminates the limitation of engine knock at high load levels by reducing compression ratio to approximately 9:1 (or whatever level is appropriate) when high boost pressures are needed. By reducing the compression ratio during high load demand periods there is increased volume in the cylinder at top dead center (TDC), which allows more charge (or energy) to be present in the cylinder without increasing the peak pressure. Cylinder pressure is thus kept below the level at which the engine would begin to knock. When loads on the engine are low
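
    The knock argument can be made quantitative with a simple idealization. The sketch below assumes an isentropic ideal-gas compression stroke with a polytropic exponent of about 1.3; real peak pressures also depend on combustion phasing and heat transfer, which are ignored here.

    ```python
    def peak_compression_pressure(boost_bar_abs: float, cr: float, gamma: float = 1.3) -> float:
        """Idealized cylinder pressure at top dead center, in bar (isentropic compression)."""
        return boost_bar_abs * cr ** gamma

    p_fixed_cr = peak_compression_pressure(2.0, 12.0)  # ~51 bar: high boost at a fixed high CR, knock-prone
    p_vcr      = peak_compression_pressure(2.0,  9.0)  # ~35 bar: same boost with CR backed off to ~9:1
    ```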

  12. Climatic Consequences of Nuclear Conflict

    Science.gov (United States)

    Robock, A.

    2011-12-01

    A nuclear war between Russia and the United States could still produce nuclear winter, even using the reduced arsenals of about 4000 total nuclear weapons that will result by 2017 in response to the New START treaty. A nuclear war between India and Pakistan, with each country using 50 Hiroshima-sized atom bombs as airbursts on urban areas, could produce climate change unprecedented in recorded human history. This scenario, using much less than 1% of the explosive power of the current global nuclear arsenal, would produce so much smoke from the resulting fires that it would plunge the planet to temperatures colder than those of the Little Ice Age of the 16th to 19th centuries, shortening the growing season around the world and threatening the global food supply. Crop model studies of agriculture in the U.S. and China show massive crop losses, even for this regional nuclear war scenario. Furthermore, there would be massive ozone depletion with enhanced ultraviolet radiation reaching the surface. These surprising conclusions are the result of recent research (see URL) by a team of scientists including those who produced the pioneering work on nuclear winter in the 1980s, using the NASA GISS ModelE and NCAR WACCM GCMs. The soot is self-lofted into the stratosphere, and the effects of regional and global nuclear war would last for more than a decade, much longer than previously thought. Nuclear proliferation continues, with nine nuclear states now, and more working to develop or acquire nuclear weapons. The continued environmental threat of the use of even a small number of nuclear weapons must be considered in nuclear policy deliberations in Russia, the U.S., and the rest of the world.

  13. Nuclear propulsion apparatus with alternate reactor segments

    International Nuclear Information System (INIS)

    Szekely, T.

    1979-01-01

    Nuclear propulsion apparatus comprising: (a) means for compressing incoming air; (b) nuclear fission reactor means for heating said air; (c) means for expanding a portion of the heated air to drive said compressing means; (d) said nuclear fission reactor means being divided into a plurality of radially extending segments; (e) means for directing a portion of the compressed air for heating through alternate segments of said reactor means and another portion of the compressed air for heating through the remaining segments of said reactor means; and (f) means for further expanding the heated air from said drive means and the remaining heated air from said reactor means through nozzle means to effect reactive thrust on said apparatus. 12 claims

  14. JPEG and wavelet compression of ophthalmic images

    Science.gov (United States)

    Eikelboom, Robert H.; Yogesan, Kanagasingam; Constable, Ian J.; Barry, Christopher J.

    1999-05-01

    This study was designed to determine the degree and methods of digital image compression that produce ophthalmic images of sufficient quality for transmission and diagnosis. The photographs of 15 subjects, which included eyes with normal, subtle and distinct pathologies, were digitized to produce 1.54 MB images and compressed to five different levels. Image quality was assessed in three ways: (i) objectively, by calculating the RMS error between the uncompressed and compressed images; (ii) semi-subjectively, by assessing the visibility of blood vessels; and (iii) subjectively, by asking a number of experienced observers to assess the images for quality and clinical interpretation. Results showed that, as a function of compressed image size, wavelet-compressed images produced less RMS error than JPEG-compressed images. Blood vessel branching could be observed to a greater extent after wavelet compression, and wavelet compression produced better images than JPEG compression for a given image size. Overall, it was shown that images had to be compressed to below 2.5 percent for JPEG and 1.7 percent for wavelet compression before fine detail was lost, or image quality was too poor to make a reliable diagnosis.
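
    The objective arm of the study (RMS error as a function of compressed size) is straightforward to reproduce in outline. A minimal sketch using Pillow's JPEG encoder on a synthetic grayscale image; the study's wavelet codec and 1.54 MB ophthalmic photographs are not reproduced here.

    ```python
    import io
    import numpy as np
    from PIL import Image

    def jpeg_rms_and_size(img: np.ndarray, quality: int) -> tuple[float, int]:
        """RMS reconstruction error and compressed byte count at one JPEG quality."""
        buf = io.BytesIO()
        Image.fromarray(img).save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        rec = np.asarray(Image.open(buf), dtype=float)
        rms = float(np.sqrt(np.mean((img.astype(float) - rec) ** 2)))
        return rms, buf.getbuffer().nbytes

    img = np.random.default_rng(1).integers(0, 256, (512, 512), dtype=np.uint8)
    for q in (90, 50, 10):   # lower quality -> smaller file, larger RMS error
        print(q, jpeg_rms_and_size(img, q))
    ```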

  15. Non-nuclear energies

    International Nuclear Information System (INIS)

    Nifenecker, H.

    2007-01-01

    The different meanings of the word 'energy', as understood by economists, are reviewed and explained. Present rates of consumption of fossil and nuclear fuels are given, as well as the corresponding reserves and resources. The time left before exhaustion of these reserves is calculated for different energy consumption scenarios. One finds that coal and nuclear only allow us to reach the end of this century. Without specific dispositions, the predicted massive use of coal is not compatible with any admissible value of global heating. Thus, we discuss the clean coal techniques, including carbon dioxide capture and storage. One proceeds with the discussion of the availability and feasibility of renewable energies, with special attention to electricity production. One distinguishes controllable renewable energies from those which are intermittent. Among the first we find hydroelectricity, biomass, and geothermal energy, and among the second, wind and solar. At the world level, hydroelectricity will most probably remain the main renewable contributor to electricity production. Photovoltaics is extremely promising for supplying remote villages deprived of access to a centralized network. Biomass should be an important source of bio-fuels. Geothermal energy should be an interesting source of low temperature heat. Development of wind energy will be inhibited by the lack of cheap and massive electricity storage; its contribution should not exceed 10% of electricity production. Its present development is totally dependent upon massive public support. A large part of this paper follows chapters of the monograph 'L'energie de demain: technique, environnement, economie', EDP Sciences, 2005. (author)

  16. Non-nuclear energies

    International Nuclear Information System (INIS)

    Nifenecker, Herve

    2006-01-01

    The different meanings of the word 'energy', as understood by economists, are reviewed and explained. Present rates of consumption of fossil and nuclear fuels are given, as well as the corresponding reserves and resources. The time left before exhaustion of these reserves is calculated for different energy consumption scenarios. One finds that coal and nuclear only allow us to reach the end of this century. Without specific dispositions, the predicted massive use of coal is not compatible with any admissible value of global heating. Thus, we discuss the clean coal techniques, including carbon dioxide capture and storage. One proceeds with the discussion of the availability and feasibility of renewable energies, with special attention to electricity production. One distinguishes controllable renewable energies from those which are intermittent. Among the first we find hydroelectricity, biomass, and geothermal energy, and among the second, wind and solar. At the world level, hydroelectricity will most probably remain the main renewable contributor to electricity production. Photovoltaics is extremely promising for supplying remote villages deprived of access to a centralized network. Biomass should be an important source of biofuels. Geothermal energy should be an interesting source of low temperature heat. Development of wind energy will be inhibited by the lack of cheap and massive electricity storage; its contribution should not exceed 10% of electricity production. Its present development is totally dependent upon massive public support. (author)

  17. Effect of high image compression on the reproducibility of cardiac Sestamibi reporting

    International Nuclear Information System (INIS)

    Thomas, P.; Allen, L.; Beuzeville, S.

    1999-01-01

    Full text: Compression algorithms have been mooted to minimize storage space and transmission times of digital images. We assessed the impact of high-level lossy compression using JPEG and wavelet algorithms on image quality and reporting accuracy of cardiac Sestamibi studies. Twenty stress/rest Sestamibi cardiac perfusion studies were reconstructed into horizontal short, vertical long and horizontal long axis slices using conventional methods. Each of these six sets of slices was aligned for reporting and saved (uncompressed) as a bitmap. This bitmap was then compressed using JPEG compression, then decompressed and saved as a bitmap for later viewing. This process was repeated using the original bitmap and wavelet compression. Finally, a second copy of the original bitmap was made. All 80 bitmaps were randomly coded to ensure blind reporting. The bitmaps were read blind and by consensus of two experienced nuclear medicine physicians using a 5-point scale and 25 cardiac segments. Subjective image quality was also reported using a 3-point scale. Samples of the compressed images were also subtracted from the original bitmap for visual comparison of differences. Results showed an average compression ratio of 23:1 for wavelet and 13:1 for JPEG. Image subtraction showed only very minor discordance between the original and compressed images. There was no significant difference in subjective quality between the compressed and uncompressed images. There was no significant difference in reporting reproducibility of the identical bitmap copy, the JPEG image and the wavelet image compared with the original bitmap. Use of the high compression algorithms described had no significant impact on reporting reproducibility and subjective image quality of cardiac Sestamibi perfusion studies

  18. The compressed baryonic matter experiment at FAIR

    International Nuclear Information System (INIS)

    Senger, Peter

    2015-01-01

    Substantial experimental and theoretical efforts worldwide are devoted to explore the phase diagram of strongly interacting matter. At top RHIC and LHC energies, the QCD phase diagram is studied at very high temperatures and very low net-baryon densities. These conditions presumably existed in the early universe about a microsecond after the big bang. For larger net-baryon densities and lower temperatures, it is expected that the QCD phase diagram exhibits a rich structure such as a critical point, a first order phase transition between hadronic and partonic matter, or new phases like quarkyonic matter. The experimental discovery of these prominent landmarks of the QCD phase diagram would be a major breakthrough in our understanding of the properties of nuclear matter. The Compressed Baryonic Matter (CBM) experiment will be one of the major scientific pillars of the future Facility for Antiproton and Ion Research (FAIR) in Darmstadt. The goal of the CBM research program is to explore the QCD phase diagram in the region of high baryon densities using high-energy nucleus-nucleus collisions. This includes the study of the equation-of-state of nuclear matter at neutron star core densities, and the search for the deconfinement and chiral phase transitions. The CBM detector is designed to measure rare diagnostic probes such as multi-strange hyperons, charmed particles and vector mesons decaying into lepton pairs with unprecedented precision and statistics. Most of these particles will be studied for the first time in the FAIR energy range. In order to achieve the required precision, the measurements will be performed at very high reaction rates of 100 kHz to 10 MHz. This requires very fast and radiation-hard detectors, and a novel data read-out and analysis concept based on free streaming front-end electronics and a high-performance computing cluster for online event selection. The layout, the physics performance, and the status of the proposed CBM experimental facility

  19. Curtain-Lifting Winds Allow Rare Glimpse into Massive Star Factory

    Science.gov (United States)

    2003-06-01

    that lead to the formation of heavy stars [1] is currently one of the most contested areas in stellar astrophysics. While many details related to the formation and early evolution of low-mass stars like the Sun are now well understood, the basic scenario that leads to the formation of high-mass stars still remains a mystery. It is not even known whether the same characterizing observational criteria used to identify and distinguish the individual stages of young low-mass stars (mainly colours measured at near- and mid-infrared wavelengths) can also be used in the case of massive stars. Two possible scenarios for the formation of massive stars are currently being studied. In the first, such stars form by accretion of large amounts of circumstellar material; the infall onto the nascent star varies with time. Another possibility is formation by collision (coalescence) of protostars of intermediate masses, increasing the stellar mass in "jumps". Both scenarios impose strong limitations on the final mass of the young star. On one side, the accretion process must somehow overcome the outward radiation pressure that builds up, following the ignition of the first nuclear processes (e.g., deuterium/hydrogen burning) in the star's interior, once the temperature has risen above the critical value near 10 million degrees. On the other hand, growth by collisions can only be effective in a dense star cluster environment in which a reasonably high probability for close encounters and collisions of stars is guaranteed. Which of these two possibilities is then the more likely one? Massive stars are born in seclusion. There are three good reasons that we know so little about the earliest phases of high-mass stars: First, the formation sites of such stars are in general much more distant (many thousands of light-years) than the sites of low-mass star formation. This means that it is much more difficult to observe details in those areas (lack of angular resolution). Next, in all stages, also

  20. Video-Assisted Minithoracotomy for Pulmonary Laceration with a Massive Hemothorax

    Directory of Open Access Journals (Sweden)

    Hideki Ota

    2014-01-01

    Severe intrathoracic hemorrhage from pulmonary parenchyma is the most serious complication of pulmonary laceration after blunt trauma, requiring immediate surgical hemostasis through open thoracotomy. The safety and efficacy of video-assisted thoracoscopic surgery (VATS) techniques for this life-threatening condition have not been fully evaluated yet. We report a case of pulmonary laceration with a massive hemothorax after blunt trauma successfully treated using a combination of muscle-sparing minithoracotomy with VATS techniques (video-assisted minithoracotomy). A 22-year-old man was transferred to our department after a falling accident. A diagnosis of right-sided pneumothorax was made on physical examination and urgent chest decompression was performed with a tube thoracostomy. Chest computed tomographic scan revealed pulmonary laceration with hematoma in the right lung. The pulmonary hematoma, extending along the segmental pulmonary artery in the hilum of the middle lobe, ruptured suddenly into the thoracic cavity, resulting in hemorrhagic shock on the fourth day after admission. Emergency right middle lobectomy was performed through video-assisted minithoracotomy. We used two cotton dissectors as chopsticks for achieving compression hemostasis during surgery. The patient recovered satisfactorily. Video-assisted minithoracotomy can be an alternative approach for the treatment of pulmonary lacerations with a massive hemothorax in hemodynamically unstable patients.

  1. A spin-4 analog of 3D massive gravity

    NARCIS (Netherlands)

    Bergshoeff, Eric A.; Kovacevic, Marija; Rosseel, Jan; Townsend, Paul K.; Yin, Yihao

    2011-01-01

    A sixth-order, but ghost-free, gauge-invariant action is found for a fourth-rank symmetric tensor potential in a three-dimensional (3D) Minkowski spacetime. It propagates two massive modes of spin 4 that are interchanged by parity and is thus a spin-4 analog of linearized 'new massive gravity'. Also

  2. Collaborative Calibrated Peer Assessment in Massive Open Online Courses

    Science.gov (United States)

    Boudria, Asma; Lafifi, Yacine; Bordjiba, Yamina

    2018-01-01

    The free nature and open access of courses in Massive Open Online Courses (MOOCs) facilitate the dissemination of information to a large number of participants. However, the "massive" property can generate many pedagogical problems, such as the assessment of learners, which is considered the major difficulty facing the…

  3. Massive weight loss-induced mechanical plasticity in obese gait

    NARCIS (Netherlands)

    Hortobagyi, Tibor; Herring, Cortney; Pories, Walter J.; Rider, Patrick; DeVita, Paul

    2011-01-01

    Hortobagyi T, Herring C, Pories WJ, Rider P, DeVita P. Massive weight loss-induced mechanical plasticity in obese gait. J Appl Physiol 111: 1391-1399, 2011. First published August 18, 2011; doi:10.1152/japplphysiol.00291.2011.-We examined the hypothesis that metabolic surgery-induced massive weight

  4. On massive gravitons in 2+1 dimensions

    NARCIS (Netherlands)

    Bergshoeff, Eric; Hohm, Olaf; Townsend, Paul; Lazkoz, R; Vera, R

    2010-01-01

    The Fierz-Pauli (FP) free field theory for massive spin-2 particles can be extended, in a spacetime of (1+2) dimensions (3D), to a generally covariant parity-preserving interacting field theory, in at least two ways. One is "new massive gravity" (NMG), with an action that involves curvature-squared

  5. Limiting Accretion onto Massive Stars by Fragmentation-Induced Starvation

    Energy Technology Data Exchange (ETDEWEB)

    Peters, Thomas; /ZAH, Heidelberg; Klessen, Ralf S.; /ZAH, Heidelberg /KIPAC, Menlo Park; Mac Low, Mordecai-Mark; /Amer. Museum Natural Hist.; Banerjee, Robi; /ZAH, Heidelberg

    2010-08-25

    Massive stars influence their surroundings through radiation, winds, and supernova explosions far out of proportion to their small numbers. However, the physical processes that initiate and govern the birth of massive stars remain poorly understood. Two widely discussed models are monolithic collapse of molecular cloud cores and competitive accretion. To learn more about massive star formation, we perform simulations of the collapse of rotating, massive, cloud cores including radiative heating by both non-ionizing and ionizing radiation using the FLASH adaptive mesh refinement code. These simulations show fragmentation from gravitational instability in the enormously dense accretion flows required to build up massive stars. Secondary stars form rapidly in these flows and accrete mass that would have otherwise been consumed by the massive star in the center, in a process that we term fragmentation-induced starvation. This explains why massive stars are usually found as members of high-order stellar systems that themselves belong to large clusters containing stars of all masses. The radiative heating does not prevent fragmentation, but does lead to a higher Jeans mass, resulting in fewer and more massive stars than would form without the heating. This mechanism reproduces the observed relation between the total stellar mass in the cluster and the mass of the largest star. It predicts strong clumping and filamentary structure in the center of collapsing cores, as has recently been observed. We speculate that a similar mechanism will act during primordial star formation.

  6. Complicated Massive Choledochal Cyst: A Case Report | Okoromah ...

    African Journals Online (AJOL)

    Choledochal cysts are rare congenital anomalies resulting from congenital dilatations of the common bile duct (CBD) and usually they present during infancy with cholestatic jaundice. This report is on a massive-sized choledochal cyst associated with massive abdominal distention, respiratory embarrassment, postprandial ...

  7. Reappraising the concept of massive transfusion in trauma

    NARCIS (Netherlands)

    Stanworth, Simon J.; Morris, Timothy P.; Gaarder, Christine; Goslings, J. Carel; Maegele, Marc; Cohen, Mitchell J.; König, Thomas C.; Davenport, Ross A.; Pittet, Jean-Francois; Johansson, Pär I.; Allard, Shubha; Johnson, Tony; Brohi, Karim

    2010-01-01

    The massive-transfusion concept was introduced to recognize the dilutional complications resulting from large volumes of packed red blood cells (PRBCs). Definitions of massive transfusion vary and lack supporting clinical evidence. Damage-control resuscitation regimens of modern trauma care are

  8. The coupling between pulsation and mass loss in massive stars

    OpenAIRE

    Townsend, Rich

    2007-01-01

    To what extent can pulsational instabilities resolve the mass-loss problem of massive stars? How important is pulsation in structuring and modulating the winds of these stars? What role does pulsation play in redistributing angular momentum in massive stars? Although I cannot offer answers to these questions, I hope at the very least to explain how they come to be asked.

  9. An Alternative Technique in the Control of Massive Presacral Rectal ...

    African Journals Online (AJOL)

    Bleeding control was provided by a GORE-TEX® graft. We conclude that fixation of a GORE-TEX® aortic patch should be kept in mind for uncontrolled massive presacral bleeding. KEYWORDS: GORE-TEX® graft, presacral bleeding, rectal cancer.

  10. The VLT-FLAMES survey of massive stars

    NARCIS (Netherlands)

    Evans, C.; Langer, N.; Brott, I.; Hunter, I.; Smartt, S.J.; Lennon, D.J.

    2008-01-01

    The VLT-FLAMES Survey of Massive Stars was an ESO Large Programme to understand rotational mixing and stellar mass loss in different metallicity environments, in order to better constrain massive star evolution. We gathered high-quality spectra of over 800 stars in the Galaxy and in the Magellanic

  11. Massive Splenomegaly in Children: Laparoscopic Versus Open Splenectomy

    OpenAIRE

    Hassan, Mohamed E.; Al Ali, Khalid

    2014-01-01

    Background and Objectives: Laparoscopic splenectomy for massive splenomegaly is still a controversial procedure as compared with open splenectomy. We aimed to compare the feasibility of laparoscopic splenectomy versus open splenectomy for massive splenomegaly from different surgical aspects in children. Methods: The data of children aged

  12. Interactions between massive dark halos and warped disks

    NARCIS (Netherlands)

    Kuijken, K; Persic, M; Salucci, P

    1997-01-01

    The normal mode theory for warping of galaxy disks, in which disks are assumed to be tilted with respect to the equator of a massive, flattened dark halo, assumes a rigid, fixed halo. However, consideration of the back-reaction by a misaligned disk on a massive particle halo shows there to be strong

  13. Datafile: [nuclear power in] Japan

    International Nuclear Information System (INIS)

    Anon.

    1989-01-01

    Japan is third after the USA and France in terms of the Western World's installed nuclear capacity, but it has by far the largest forward programme. Great effort is also being put into the fuel cycle and advanced reactors. There is close co-operation between the government, utilities and manufacturers, but Japan has not sought to export reactors. The government has responded to the growing public opposition to nuclear power with a massive increase in its budget for public relations. Details of the nuclear power programme are given. (author)

  14. FastBit: Interactively Searching Massive Data

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kesheng; Ahern, Sean; Bethel, E. Wes; Chen, Jacqueline; Childs, Hank; Cormier-Michel, Estelle; Geddes, Cameron; Gu, Junmin; Hagen, Hans; Hamann, Bernd; Koegler, Wendy; Lauret, Jerome; Meredith, Jeremy; Messmer, Peter; Otoo, Ekow; Perevoztchikov, Victor; Poskanzer, Arthur; Prabhat,; Rubel, Oliver; Shoshani, Arie; Sim, Alexander; Stockinger, Kurt; Weber, Gunther; Zhang, Wei-Ming

    2009-06-23

    As scientific instruments and computer simulations produce more and more data, the task of locating the essential information to gain insight becomes increasingly difficult. FastBit is an efficient software tool to address this challenge. In this article, we present a summary of the key underlying technologies, namely bitmap compression, encoding, and binning. Together these techniques enable FastBit to answer structured (SQL) queries orders of magnitude faster than popular database systems. To illustrate how FastBit is used in applications, we present three examples involving a high-energy physics experiment, a combustion simulation, and an accelerator simulation. In each case, FastBit significantly reduces the response time and enables interactive exploration on terabytes of data.
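
    The bitmap-index idea at FastBit's core is compact enough to sketch. The toy below builds one bitmap per distinct column value and answers a conjunctive query with bitwise ANDs; FastBit's WAH compression of the bitmaps and its binning strategies are omitted, and the event data are hypothetical.

    ```python
    import numpy as np

    def bitmap_index(column: np.ndarray) -> dict:
        """Equality encoding: one boolean bitmap per distinct value in the column."""
        return {v: column == v for v in np.unique(column)}

    # Hypothetical event table; answer "energy == 5 AND detector == 2" without a scan
    energy   = np.array([5, 3, 5, 7, 5, 3])
    detector = np.array([2, 2, 1, 2, 2, 1])
    e_idx, d_idx = bitmap_index(energy), bitmap_index(detector)
    hits = e_idx[5] & d_idx[2]        # bitwise AND of two bitmaps
    rows = np.flatnonzero(hits)       # -> array([0, 4])
    ```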

  15. Nuclear expert web search and crawler algorithm

    International Nuclear Information System (INIS)

    Reis, Thiago; Barroso, Antonio C.O.; Baptista, Benedito Filho D.

    2013-01-01

    In this paper we present preliminary research on web search and crawling algorithms applied specifically to nuclear-related web information. We designed a web-based nuclear-oriented expert system, guided by a web crawler algorithm and a neural network, able to search and retrieve nuclear-related hypertextual web information in an autonomous and massive fashion. Preliminary experimental results show a retrieval precision of 80% for web pages related to any nuclear theme and a retrieval precision of 72% for web pages related only to the nuclear power theme. (author)
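
    In outline, a focused crawler of the kind described can be sketched as below; a simple keyword count stands in for the paper's neural-network relevance classifier, and the seed URL is hypothetical.

    ```python
    import urllib.request
    from html.parser import HTMLParser
    from urllib.parse import urljoin

    KEYWORDS = ("nuclear", "reactor", "radiation", "isotope")

    class LinkParser(HTMLParser):
        """Collect href targets from anchor tags."""
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                href = dict(attrs).get("href")
                if href:
                    self.links.append(href)

    def crawl(seed: str, max_pages: int = 20):
        frontier, seen, relevant = [seed], set(), []
        while frontier and len(seen) < max_pages:
            url = frontier.pop(0)
            if url in seen:
                continue
            seen.add(url)
            try:
                html = urllib.request.urlopen(url, timeout=5).read().decode("utf-8", "ignore")
            except Exception:
                continue
            score = sum(html.lower().count(k) for k in KEYWORDS)  # stand-in relevance score
            if score > 10:
                relevant.append((url, score))
            parser = LinkParser()
            parser.feed(html)
            frontier.extend(urljoin(url, h) for h in parser.links)
        return relevant

    # crawl("https://example.org/nuclear-index.html")  # hypothetical seed
    ```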

  16. Nuclear expert web search and crawler algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Reis, Thiago; Barroso, Antonio C.O.; Baptista, Benedito Filho D., E-mail: thiagoreis@usp.br, E-mail: barroso@ipen.br, E-mail: bdbfilho@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2013-07-01

    In this paper we present preliminary research on web search and crawling algorithms applied specifically to nuclear-related web information. We designed a web-based nuclear-oriented expert system, guided by a web crawler algorithm and a neural network, able to search and retrieve nuclear-related hypertextual web information in an autonomous and massive fashion. Preliminary experimental results show a retrieval precision of 80% for web pages related to any nuclear theme and a retrieval precision of 72% for web pages related only to the nuclear power theme. (author)

  17. Transient compressible flows in porous media

    International Nuclear Information System (INIS)

    Morrison, F.A. Jr.

    1975-09-01

    Transient compressible flow in porous media was investigated analytically. The major portion of the investigation was directed toward improving the understanding of dispersion in these flows and developing rapid, accurate numerical techniques for predicting the extent of dispersion. The results are of interest in the containment of underground nuclear experiments. The transient one-dimensional transport of a trace component in a gas flow is analyzed. A conservation equation accounting for the effects of convective transport, dispersive transport, and decay is developed. This relation, as well as a relation governing the fluid flow, is used to predict trace component concentration as a function of position and time. A detailed analysis of transport associated with the isothermal flow of an ideal gas is performed. Because the governing equations are nonlinear, numerical calculations are performed. The ideal gas flow is calculated using a highly stable implicit iterative procedure with an Eulerian mesh. In order to avoid problems of anomalous dispersion associated with finite-difference calculations, trace component convection and dispersion are calculated using a Lagrangian mesh. Details of the Eulerian-Lagrangian numerical technique are presented. Computer codes have been developed and implemented on the Lawrence Livermore Laboratory computer system
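
    In standard notation, the trace-component balance described (convection, dispersion, and decay) plausibly takes the one-dimensional form below; this is a sketch, since the report's exact notation is not reproduced.

    ```latex
    \frac{\partial c}{\partial t}
      + \frac{\partial (u\,c)}{\partial x}
      = \frac{\partial}{\partial x}\!\left( D\,\frac{\partial c}{\partial x} \right)
      - \lambda c
    ```

    where c is the trace-component concentration, u the gas velocity, D the dispersion coefficient, and λ the decay constant.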

  18. Massive Born--Infeld and Other Dual Pairs

    CERN Document Server

    Ferrara, S

    2015-01-01

    We consider massive dual pairs of p-forms and (D-p-1)-forms described by non-linear Lagrangians, where non-linear curvature terms in one theory translate into non-linear mass-like terms in the dual theory. In particular, for D=2p and p even the two non-linear structures coincide when the non-linear massless theory is self-dual. This state of affairs finds a natural realization in the four-dimensional massive N=1 supersymmetric Born-Infeld action, which describes either a massive vector multiplet or a massive linear (tensor) multiplet with a Born-Infeld mass-like term. These systems should play a role for the massive gravitino multiplet obtained from a partial super-Higgs in N=2 Supergravity.

  19. The surface compression of nuclei in relativistic mean-field approach

    International Nuclear Information System (INIS)

    Sharma, M.M.

    1991-01-01

    The surface compression properties of nuclei have been studied in the framework of the relativistic non-linear σ-ω model. Using the Thomas-Fermi approximation for semi-infinite nuclear matter, it is shown that by varying the σ-meson mass one can change the surface compression relative to the bulk compression. This fact is in contrast with the known properties of the phenomenological Skyrme interactions, where the ratio of the surface to the bulk incompressibility (-K_S/K_V) is nearly 1 in the scaling mode of compression. The results suggest that the relativistic mean-field model may provide an interaction with essential ingredients different from those of the Skyrme interactions. (author) 23 refs., 2 figs., 1 tab

  20. Strength and deformation behaviors of veined marble specimens after vacuum heat treatment under conventional triaxial compression

    Science.gov (United States)

    Su, Haijian; Jing, Hongwen; Yin, Qian; Yu, Liyuan; Wang, Yingchao; Wu, Xingjie

    2017-10-01

    The mechanical behaviors of rocks affected by high temperature and stress are generally believed to be significant for the stability of certain projects involving rock, such as nuclear waste storage and geothermal resource exploitation. In this paper, veined marble specimens were subjected to high-temperature treatment and then used in conventional triaxial compression tests to investigate the effect of temperature, confining pressure, and vein angle on strength and deformation behaviors. The results show that the strength and deformation parameters of the veined marble specimens changed with temperature, exhibiting a critical temperature of 600 °C. The triaxial compression strength of a horizontal vein (β = 90°) is notably larger than that of a vertical vein (β = 0°). The triaxial compression strength, elasticity modulus, and secant modulus have an approximately linear relation to the confining pressure. Finally, the Mohr-Coulomb and Hoek-Brown criteria were used to analyze the effect of confining pressure on triaxial compression strength.
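
    For reference, the two strength criteria named above are conventionally written in principal-stress form as follows (standard textbook forms; the paper's fitted parameter values are not reproduced here):

    ```latex
    \sigma_1 = \frac{1+\sin\varphi}{1-\sin\varphi}\,\sigma_3
             + \frac{2c\cos\varphi}{1-\sin\varphi}
    \qquad \text{(Mohr-Coulomb)}

    \sigma_1 = \sigma_3 + \sigma_{ci}\left( m_b\,\frac{\sigma_3}{\sigma_{ci}} + s \right)^{a}
    \qquad \text{(Hoek-Brown)}
    ```

    where σ1 and σ3 are the major and minor principal stresses, c and φ the cohesion and friction angle, and σ_ci, m_b, s, a the Hoek-Brown intact strength and fit parameters.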