WorldWideScience

Sample records for header compression rohc

  1. RObust header compression (ROHC) performance for multimedia transmission over 3G/4G wireless networks

    DEFF Research Database (Denmark)

    Fitzek, Frank; Rein, S.; Seeling, P.

    2005-01-01

    Robust Header Compression (ROHC) has recently been proposed to reduce the large protocol header overhead when transmitting voice and other continuous media over IP based protocol stacks in wireless networks. In this paper we evaluate the real-time transmission of GSM encoded voice and H.26L encod...

  2. Packet Header Compression for the Internet of Things

    Directory of Open Access Journals (Sweden)

    Pekka KOSKELA

    2016-01-01

    Full Text Available Due to the extensive growth of the Internet of Things (IoT), the number of wireless devices connected to the Internet is forecast to grow to 26 billion units installed in 2020. This will challenge both the energy efficiency of wireless battery-powered devices and the bandwidth of wireless networks. One solution to both challenges could be to utilize packet header compression. This paper reviews different packet compression methods, especially packet header compression, and studies the performance of Robust Header Compression (ROHC) in low-speed radio networks such as XBee, and in high-speed radio networks such as LTE and WLAN. In all networks, the compressing and decompressing processing causes extra delay and power consumption, but in low-speed networks, energy can still be saved due to the shorter transmission time.
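The energy argument in this abstract (compression costs CPU time but shortens airtime, a net win on slow radios) can be sketched with a first-order model. All numbers below are illustrative assumptions for demonstration, not values from the paper:

```python
# Hedged sketch: first-order energy model for ROHC on a battery-powered link.
# All parameter values are illustrative assumptions, not measurements.

def tx_energy_joules(payload_bytes, header_bytes, bitrate_bps,
                     tx_power_w, cpu_cost_j=0.0):
    """Energy to send one packet: airtime * radio power + CPU cost."""
    bits = (payload_bytes + header_bytes) * 8
    return bits / bitrate_bps * tx_power_w + cpu_cost_j

# 40-byte IP/UDP/RTP header vs. a ~3-byte ROHC steady-state header
# (3 bytes is a typical ROHC figure, assumed here).
payload = 40
uncompressed = tx_energy_joules(payload, 40, bitrate_bps=250_000, tx_power_w=0.1)
compressed   = tx_energy_joules(payload,  3, bitrate_bps=250_000, tx_power_w=0.1,
                                cpu_cost_j=2e-5)  # assumed ROHC CPU overhead

# On a slow radio (XBee-class), the shorter airtime outweighs the CPU cost:
print(compressed < uncompressed)  # True on these assumptions
```

On a fast link the airtime term shrinks while the CPU term stays fixed, which is the paper's point about LTE/WLAN versus low-speed radios.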

  3. A Proxy Architecture to Enhance the Performance of WAP 2.0 by Data Compression

    Directory of Open Access Journals (Sweden)

    Yin Zhanping

    2005-01-01

    Full Text Available This paper presents a novel proxy architecture for the wireless application protocol (WAP) 2.0 employing an advanced data compression scheme. Though optional in WAP 2.0, a proxy can isolate the wireless from the wired domain to prevent error propagation and to eliminate wireless session delays (WSD) by enabling long-lived connections between the proxy and wireless terminals. The proposed data compression scheme combines content compression with robust header compression (ROHC), which minimizes the air-interface traffic data and thus significantly reduces the wireless access time. By applying content compression at the transport layer, it also enables TLS tunneling, which overcomes the end-to-end security problem of WAP 1.x. Performance evaluations show that while WAP 1.x is optimized for narrowband wireless channels, WAP 2.0 utilizing TCP/IP outperforms WAP 1.x over wideband wireless channels even without compression. The proposed data compression scheme reduces the wireless access time of WAP 2.0 by over 45% in CDMA2000 1XRTT channels and, in low-speed IS-95 channels, substantially reduces access time to give performance comparable to WAP 1.x. The performance enhancement is mainly attributable to the reply content compression, with ROHC offering further enhancements.

  4. Enabling IP Header Compression in COTS Routers via Frame Relay on a Simplex Link

    Science.gov (United States)

    Nguyen, Sam P.; Pang, Jackson; Clare, Loren P.; Cheng, Michael K.

    2010-01-01

    NASA is moving toward a network-centric communications architecture and, in particular, is building toward use of the Internet Protocol (IP) in space. The use of IP is motivated by its ubiquitous application in many communications networks and in available commercial off-the-shelf (COTS) technology. The Constellation Program intends to fit two or more voice (over IP) channels on both the forward link to, and the return link from, the Orion Crew Exploration Vehicle (CEV) during all mission phases. Efficient bandwidth utilization of the links is key for voice applications. In Voice over IP (VoIP), the IP packets are kept small to keep voice latency at a minimum. The common voice codec used in VoIP is G.729. This algorithm produces voice audio at 8 kbps in packets of 10-millisecond duration. Constellation has designed the VoIP communications stack to use the combination of IP/UDP/RTP protocols, where IP carries a 20-byte header, UDP (User Datagram Protocol) carries an 8-byte header, and RTP (Real-time Transport Protocol) carries a 12-byte header. The protocol headers total 40 bytes and are equal in length to a 40-byte G.729 payload, doubling the VoIP latency. Since much of the IP/UDP/RTP header information does not change from IP packet to IP packet, IP/UDP/RTP header compression can avoid transmitting much redundant data as well as reduce VoIP latency. The benefits of IP header compression are more pronounced on low-data-rate links such as the forward and return links during CEV launch. IP/UDP/RTP header compression codecs are well supported by many COTS routers. A common interface to the COTS routers is frame relay. However, enabling IP header compression over frame relay according to the industry standard (Frame Relay IP Header Compression Implementation Agreement, FRF.20) requires a duplex link and negotiations between the compressor router and the decompressor router. In Constellation, each forward link to and return link from the CEV in space is treated
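The overhead arithmetic in this abstract is easy to check directly. The 20 + 8 + 12 byte header stack equals the 40-byte G.729 payload, so headers are half of every packet; a sketch (the ~3-byte ROHC steady-state header is a typical figure assumed here, not stated in the abstract):

```python
# Header overhead arithmetic from the abstract (header sizes per the text).
IP_HDR, UDP_HDR, RTP_HDR = 20, 8, 12        # bytes
G729_PAYLOAD = 40                           # bytes, per the abstract

headers = IP_HDR + UDP_HDR + RTP_HDR        # 40 bytes of protocol headers
overhead_ratio = headers / (headers + G729_PAYLOAD)
print(headers, overhead_ratio)              # 40 0.5 -- headers double the packet

# With header compression shrinking IP/UDP/RTP to a few bytes (assume 3),
# the overhead fraction drops dramatically:
rohc_hdr = 3
print(round(rohc_hdr / (rohc_hdr + G729_PAYLOAD), 3))  # 0.07
```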

  5. A Proxy Architecture to Enhance the Performance of WAP 2.0 by Data Compression

    Directory of Open Access Journals (Sweden)

    Yin Zhanping

    2005-01-01

    Full Text Available This paper presents a novel proxy architecture for the wireless application protocol (WAP) 2.0 employing an advanced data compression scheme. Though optional in WAP 2.0, a proxy can isolate the wireless from the wired domain to prevent error propagation and to eliminate wireless session delays (WSD) by enabling long-lived connections between the proxy and wireless terminals. The proposed data compression scheme combines content compression with robust header compression (ROHC), which minimizes the air-interface traffic data and thus significantly reduces the wireless access time. By applying content compression at the transport layer, it also enables TLS tunneling, which overcomes the end-to-end security problem of WAP 1.x. Performance evaluations show that while WAP 1.x is optimized for narrowband wireless channels, WAP 2.0 utilizing TCP/IP outperforms WAP 1.x over wideband wireless channels even without compression. The proposed data compression scheme reduces the wireless access time of WAP 2.0 by over 45% in CDMA2000 1XRTT channels and, in low-speed IS-95 channels, substantially reduces access time to give performance comparable to WAP 1.x. The performance enhancement is mainly attributable to the reply content compression, with ROHC offering further enhancements.

  6. Design and Evaluation of IP Header Compression for Cellular-Controlled P2P Networks

    DEFF Research Database (Denmark)

    Madsen, T.K.; Zhang, Qi; Fitzek, F.H.P.

    2007-01-01

    In this paper we advocate exploiting terminal cooperation to stabilize IP communication using header compression. The terminal cooperation is based on direct communication between terminals using short range communication while simultaneously being connected to the cellular service access point....... The short range link is then used to provide first aid information to heal the decompressor state of the neighboring node in case of a packet loss on the cellular link. IP header compression schemes are used to increase the spectral and power efficiency at the cost of losing robustness of the communication compared...

  7. Predicting the fidelity of JPEG2000 compressed CT images using DICOM header information

    International Nuclear Information System (INIS)

    Kim, Kil Joong; Kim, Bohyoung; Lee, Hyunna; Choi, Hosik; Jeon, Jong-June; Ahn, Jeong-Hwan; Lee, Kyoung Ho

    2011-01-01

    Purpose: To propose multiple logistic regression (MLR) and artificial neural network (ANN) models, constructed using Digital Imaging and Communications in Medicine (DICOM) header information, for predicting the fidelity of Joint Photographic Experts Group (JPEG) 2000 compressed abdomen computed tomography (CT) images. Methods: Our institutional review board approved this study and waived informed patient consent. Using a JPEG2000 algorithm, 360 abdomen CT images were compressed reversibly (n = 48, as negative controls) or irreversibly (n = 312) to one of several compression ratios (CRs) ranging from 4:1 to 10:1. Five radiologists independently determined whether the original and compressed images were distinguishable or indistinguishable. The 312 irreversibly compressed images were divided randomly into training (n = 156) and testing (n = 156) sets. The MLR and ANN models were constructed using the DICOM header information as independent variables and the radiologists' pooled responses as the dependent variable. As independent variables, we selected the CR (DICOM tag number: 0028, 2112), effective tube current-time product (0018, 9332), section thickness (0018, 0050), and field of view (0018, 0090) among the DICOM tags. Using the training set, an optimal subset of independent variables was determined by backward stepwise selection in a four-fold cross-validation scheme. The MLR and ANN models were constructed with the determined independent variables using the training set. The models were then evaluated on the testing set by receiver-operating-characteristic (ROC) analysis, with the radiologists' pooled responses as the reference standard, and by measuring the Spearman rank correlation between the model prediction and the number of radiologists who rated the two images as distinguishable. Results: The CR and section thickness were determined to be the optimal independent variables. The areas under the ROC curve for the MLR and ANN predictions were 0.91 (95% CI; 0
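Applying such a fitted MLR model amounts to evaluating a logistic function of the DICOM-derived predictors (CR and section thickness, the study's selected variables). This sketch is purely illustrative: the coefficients below are invented, not the study's fitted values.

```python
# Illustrative only: an MLR fidelity predictor over two DICOM-derived
# predictors. Coefficients b0, b_cr, b_th are made-up placeholders.
import math

def mlr_distinguishable_prob(cr, thickness_mm, b0=-8.0, b_cr=1.2, b_th=-0.5):
    """P(readers can distinguish the compressed image from the original)."""
    z = b0 + b_cr * cr + b_th * thickness_mm
    return 1.0 / (1.0 + math.exp(-z))

# With these assumed signs, a higher compression ratio raises the chance
# of visible degradation at a fixed section thickness:
print(mlr_distinguishable_prob(10, 5) > mlr_distinguishable_prob(4, 5))  # True
```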

  8. Header integrity assessment

    Energy Technology Data Exchange (ETDEWEB)

    Rotvel, F [ELSAMPROJEKT, Fredericia (Denmark); Sampietri, C [ENEL, Milano (Italy); Verelst, L [LABORELEC, Linkebeek (Belgium); Wortel, H van [TNO, Apeldoorn (Netherlands); Zhi, Li Ying [KEMA, Arnhem (Netherlands)

    1999-12-31

    In the late eighties, creep cracks in the nozzle-to-header welds of high-temperature headers became internationally recognized as a problem in older steam power plants. To study the problem, a 2 1/4Cr1Mo service-exposed header, which had been scrapped due to creep damage, was made available for testing. A full-scale model was fabricated with partly repaired nozzle-to-header welds and then tested at increased temperature. Loads included internal pressure and system loads. Damage accumulation and creep crack initiation and growth were predicted and experimentally verified. Conclusions and the practical implications for power plant operation are described. (orig.) 7 refs.

  9. Header integrity assessment

    Energy Technology Data Exchange (ETDEWEB)

    Rotvel, F. [ELSAMPROJEKT, Fredericia (Denmark); Sampietri, C. [ENEL, Milano (Italy); Verelst, L. [LABORELEC, Linkebeek (Belgium); Wortel, H. van [TNO, Apeldoorn (Netherlands); Li Ying Zhi [KEMA, Arnhem (Netherlands)

    1998-12-31

    In the late eighties, creep cracks in the nozzle-to-header welds of high-temperature headers became internationally recognized as a problem in older steam power plants. To study the problem, a 2 1/4Cr1Mo service-exposed header, which had been scrapped due to creep damage, was made available for testing. A full-scale model was fabricated with partly repaired nozzle-to-header welds and then tested at increased temperature. Loads included internal pressure and system loads. Damage accumulation and creep crack initiation and growth were predicted and experimentally verified. Conclusions and the practical implications for power plant operation are described. (orig.) 7 refs.

  10. TETRA Backhauling via Satellite: Improving Call Setup Times and Saving Bandwidth

    Directory of Open Access Journals (Sweden)

    Anton Donner

    2014-01-01

    Full Text Available In disaster management scenarios with seriously damaged, nonexistent, or saturated communication infrastructure, satellite communications can be an ideal means of providing connectivity with unaffected remote terrestrial trunked radio (TETRA) core networks. However, the propagation delay imposed by the satellite link affects the signalling protocols. This paper discusses the suitability of using a satellite link for TETRA backhauling, introducing two different architectures. In order to cope with the signal delay of the satellite link, the paper proposes and analyzes a suitable solution based on the use of a performance enhancing proxy (PEP). Additionally, robust header compression (ROHC) is discussed as a suitable technology for transmitting TETRA voice via IP-based satellite networks.

  11. Dose surveys in two digital mammography units using DICOM headers

    International Nuclear Information System (INIS)

    Tsalafoutas, I.; Michalaki, C.; Papagiannopoulou, C.; Efstathopoulos, E.

    2012-01-01

    Background and objective: Digital mammography units store images in DICOM format. Thus, data regarding the acquisition parameters are available within the DICOM headers, including, among others, the anode/filter combination, tube potential, tube current-exposure time product, compressed breast thickness, entrance surface air kerma (ESAK) and mean glandular dose (MGD). However, manual extraction of these data for verifying the accuracy of the displayed values and for dose survey purposes is time consuming. Our objective was to develop a method that enables the automation of such procedures. Materials and methods: Two hundred mammographic examinations (800 mammograms) performed on two digital units (GE, Essential) were recorded on CD-ROMs. Using appropriate software (DICOM Info Extractor), all dose-related DICOM headers were extracted into a Microsoft Excel-based spreadsheet containing embedded algorithms for the calculation of ESAK and MGD according to the methodology of Dance et al. (Phys. Med. Biol. 45, 2000). Results: The ESAK and MGD values stored in the DICOM headers were compared with those calculated and in most cases were within ±10%. The basic difference between the two mammographic units is that the older one calculates MGD assuming a breast composition of 50% glandular and 50% adipose tissue, while the newer one calculates the actual breast glandularity and stores this value in a DICOM header. The average MGD values were 1.21 mGy and 1.38 mGy, respectively. Conclusion: For the units studied, the ESAK and MGD values stored in the DICOM headers are reliable. Utilizing tools for their automatic extraction provides an easy way to perform dose surveys. (authors)
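The verification step described above reduces to comparing header-stored dose values against independently calculated ones and flagging deviations beyond ±10%. A minimal sketch with fabricated example records (not survey data):

```python
# Sketch of the +/-10% consistency check between DICOM-stored and
# independently calculated MGD values. The records are illustrative.

def within_tolerance(stored, calculated, tol=0.10):
    """True if the stored value is within tol of the calculated value."""
    return abs(stored - calculated) <= tol * calculated

exams = [
    {"id": "mg001", "mgd_stored": 1.21, "mgd_calc": 1.18},   # agrees
    {"id": "mg002", "mgd_stored": 1.38, "mgd_calc": 1.60},   # deviates
]
flagged = [e["id"] for e in exams
           if not within_tolerance(e["mgd_stored"], e["mgd_calc"])]
print(flagged)  # ['mg002']
```

In practice the stored values would come from a DICOM tag extractor such as the one the authors used, rather than hand-entered dictionaries.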

  12. Control and Non-Payload Communications (CNPC) Prototype Radio - Generation 2 Security Architecture Lab Test Report

    Science.gov (United States)

    Iannicca, Dennis C.; McKim, James H.; Stewart, David H.; Thadhani, Suresh K.; Young, Daniel P.

    2015-01-01

    NASA Glenn Research Center, in cooperation with Rockwell Collins, is working to develop a prototype Control and Non-Payload Communications (CNPC) radio platform as part of NASA Integrated Systems Research Program's (ISRP) Unmanned Aircraft Systems (UAS) Integration in the National Airspace System (NAS) project. A primary focus of the project is to work with the FAA and industry standards bodies to build and demonstrate a safe, secure, and efficient CNPC architecture that can be used by industry to evaluate the feasibility of deploying a system using these technologies in an operational capacity. GRC has been working in conjunction with these groups to assess threats, identify security requirements, and to develop a system of standards-based security controls that can be applied to the current GRC prototype CNPC architecture as a demonstration platform. The security controls were integrated into a lab test bed mock-up of the Mobile IPv6 architecture currently being used for NASA flight testing, and a series of network tests were conducted to evaluate the security overhead of the controls compared to the baseline CNPC link without any security. The aim of testing was to evaluate the performance impact of the additional security control overhead when added to the Mobile IPv6 architecture in various modes of operation. The statistics collected included packet captures at points along the path to gauge packet size as the sample data traversed the CNPC network, round trip latency, jitter, and throughput. The effort involved a series of tests of the baseline link, a link with Robust Header Compression (ROHC) and without security controls, a link with security controls and without ROHC, and finally a link with both ROHC and security controls enabled. The effort demonstrated that ROHC is both desirable and necessary to offset the additional expected overhead of applying security controls to the CNPC link.

  13. Conceptual design of a chickpea harvesting header

    Directory of Open Access Journals (Sweden)

    H. Golpira

    2013-07-01

    Full Text Available Interest in the development of stripper headers is growing owing to the excessive losses of combine harvesters and the cost of manually harvesting chickpeas. The design of a new concept can enhance the mechanized process of chickpea harvesting. A modified stripper platform was designed in which passive fingers with V-shaped slots remove the pods from the anchored plant. The floating platform was accompanied by a reel to complete the harvesting header. Black-box modeling was used to redesign the functional operators of the header, followed by an investigation of the system behavior. Physical models of the platform and reel were modified to determine the crucial variables of the header arrangement during field trials. The slot width was fixed at 40 mm, finger length at 40 mm, keyhole diameter at 10 mm and entrance width at 6 mm; the batted reel had a peripheral diameter of 700 mm and a speed of 50 rpm. A tractor-mounted experimental harvester was built to evaluate the work quality of the stripper header. The performance of the prototype was tested with respect to losses, and the results confirmed the efficiency of the modified stripper header for chickpea harvesting. Furthermore, the header, with a 1.4 m working width, produced a spot work rate of 0.42 ha h-1.

  14. Multi-protocol header generation system

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, David A.; Ignatowski, Michael; Jayasena, Nuwan; Loh, Gabriel

    2017-09-05

    A communication device includes a data source that generates data for transmission over a bus, and a data encoder that receives and encodes outgoing data. An encoder system receives outgoing data from a data source and stores the outgoing data in a first queue. An encoder encodes outgoing data with a header type that is based upon a header type indication from a controller and stores the encoded data that may be a packet or a data word with at least one layered header in a second queue for transmission. The device is configured to receive at a payload extractor, a packet protocol change command from the controller and to remove the encoded data and to re-encode the data to create a re-encoded data packet and placing the re-encoded data packet in the second queue for transmission.
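The two-queue flow in this patent abstract (outgoing data waits in a first queue, is encoded with the controller-selected header type into a second queue, and a payload extractor can re-encode queued packets when the protocol changes) can be sketched minimally. The header formats and class names below are invented for illustration, not taken from the patent:

```python
# Hedged sketch of the two-queue header-generation flow described in the
# abstract. HEADERS and all identifiers are illustrative assumptions.
from collections import deque

HEADERS = {"typeA": b"\xAA\x01", "typeB": b"\xBB\x02"}

class Encoder:
    def __init__(self):
        self.inq, self.outq = deque(), deque()   # first and second queues
        self.header_type = "typeA"               # set by the "controller"

    def submit(self, payload: bytes):
        self.inq.append(payload)                 # data source -> first queue

    def encode_one(self):
        payload = self.inq.popleft()             # encode with current header
        self.outq.append(HEADERS[self.header_type] + payload)

    def change_protocol(self, new_type: str):
        # Payload extractor: strip the old header from queued packets,
        # re-encode with the new header type, and requeue for transmission.
        old_len = len(HEADERS[self.header_type])
        self.header_type = new_type
        self.outq = deque(HEADERS[new_type] + pkt[old_len:]
                          for pkt in self.outq)

enc = Encoder()
enc.submit(b"data")
enc.encode_one()
enc.change_protocol("typeB")
print(enc.outq[0])  # b'\xbb\x02data'
```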

  15. Testing Header Component of Electricity Power Industry Boiler

    International Nuclear Information System (INIS)

    Soedardjo, S.A; Andryansyah, B; Artahari, Dewi; Natsir, Muhammad; Triyadi, Ari; Farokhi

    2000-01-01

    Testing of a header component of the Suralaya Unit II electricity power plant by the replication method has been carried out. The header component is a crossover pipe interconnecting the primary superheater outlet header and the secondary superheater outlet header, with an operation time of over 14 years. The main composition of the crossover pipe is 2 1/4Cr 1Mo, frequently specified as ferritic steel. The replication testing showed that the damage on the crossover pipe falls in class A according to the failure classification of Neubauer and Wedel. A simple calculation gives a remaining lifetime of the crossover pipe of about 16.5 years.

  16. Threats and surprises behind IPv6 extension headers

    NARCIS (Netherlands)

    Hendriks, Luuk; Velan, Petr; de Oliveira Schmidt, Ricardo; De Boer, Pieter Tjerk; Pras, Aiko

    2017-01-01

    The concept of Extension Headers, newly introduced with IPv6, is elusive and enables new types of threats on the Internet. Simply dropping all traffic containing any Extension Header - a current practice by operators - seemingly is an effective solution, but at the cost of possibly dropping legitimate
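The elusiveness comes from the chained layout: each extension header only names the *next* header, so a filter must walk the whole chain to find the upper-layer protocol. A minimal chain walker following the standard Next Header / Hdr Ext Len layout (RFC 8200), with a synthetic packet for illustration:

```python
# Hedged sketch: walking an IPv6 extension-header chain the way a filter
# or monitor would, per the RFC 8200 layout. The sample packet is synthetic.
import struct

NEXT_HEADER_NAMES = {0: "Hop-by-Hop", 43: "Routing", 44: "Fragment",
                     60: "Dest-Options", 6: "TCP", 17: "UDP"}
EXTENSION_HEADERS = {0, 43, 44, 60}

def walk_chain(first_nh, payload):
    """Yield header names until the upper-layer protocol is reached."""
    nh, off = first_nh, 0
    while nh in EXTENSION_HEADERS:
        yield NEXT_HEADER_NAMES[nh]
        next_nh, hdr_ext_len = struct.unpack_from("BB", payload, off)
        if nh == 44:                    # Fragment header is fixed 8 bytes
            off += 8
        else:                           # length in 8-octet units, excl. first 8
            off += (hdr_ext_len + 1) * 8
        nh = next_nh
    yield NEXT_HEADER_NAMES.get(nh, str(nh))

# One 8-byte Hop-by-Hop header (next header = TCP), then the TCP payload:
pkt = bytes([6, 0]) + b"\x00" * 6
print(list(walk_chain(0, pkt)))  # ['Hop-by-Hop', 'TCP']
```

Dropping at the first unexpected extension header, as some operators do, avoids this walk entirely, which is exactly the blunt practice the paper questions.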

  17. A proven twin header design for small PWRs

    International Nuclear Information System (INIS)

    Davidov, Maurice

    1987-01-01

    A unique design of PWR steam generator, developed by Foster Wheeler in Britain more than 30 years ago, avoids the problem of tubesheet sludge accumulation. The twin header steam generator uses a vertical, inverted U-tube bundle connected to cylindrical inlet and outlet headers. The advantages of the design and operating experience are outlined. (author)

  18. Remaining life assessment of carbon steel boiler headers by repeated creep testing

    Energy Technology Data Exchange (ETDEWEB)

    Drew, M. [ANSTO, Materials and Engineering Science, New Illawarra Road, Lucas Heights, PMB 1 Menai, NSW 2234 (Australia)]. E-mail: michael.drew@ansto.gov.au; Humphries, S. [ANSTO, Materials and Engineering Science, New Illawarra Road, Lucas Heights, PMB 1 Menai, NSW 2234 (Australia); Thorogood, K. [ANSTO, Materials and Engineering Science, New Illawarra Road, Lucas Heights, PMB 1 Menai, NSW 2234 (Australia); Barnett, N. [BlueScope Steel, P.O. Box 1854, Wollongong, NSW (Australia)

    2006-05-15

    The condition of carbon steel boiler headers that have been in service for over 25 years has been assessed periodically by NDT, dimensional measurements, replication and accelerated creep testing. Historical temperature records were limited, so estimates of effective header temperatures were made from replicas. These estimates were compared with header stub thermocouple readings. At about 280,000 service hours, samples were chain-drilled from the headers for accelerated creep testing. These test results indicated that the headers had satisfactory remaining life. Nine years after the original samples were taken, additional samples were removed from one header at 337,000 service hours. The creep rupture properties measured from the repeated tests were almost identical to the initial results. A mild degree of random, nodular graphite was found in the samples and its effect on creep properties is discussed.

  19. Creep-fatigue monitoring system for header ligaments of fossil power plants

    International Nuclear Information System (INIS)

    Chen, K.L.; Deardorf, A.F.; Copeland, J.F.; Pflasterer, R.; Beckerdite, G.

    1993-01-01

    The cracking of headers (primary and secondary superheater outlet, and reheater outlet headers) at ligament locations is an important issue for fossil power plants. A model for crack initiation and growth has been developed, based on creep-fatigue damage mechanisms. This cracking model is included in a creep-fatigue monitoring system to assess header structural integrity under high temperature operating conditions. The following principal activities are required to achieve this goal: (1) the development of transfer functions and (2) the development of a ligament cracking model. The first task is to develop stress transfer functions to convert measured (monitored) temperatures, pressures and flow rates into stresses to be used to compute damage. Elastic three-dimensional finite element analyses were performed to study transient thermal stress behavior. The sustained pressure stress redistribution due to high temperature creep was studied by nonlinear finite element analyses. The preceding results are used to derive Green's functions and pressure stress gradient transfer functions for monitoring at the juncture of the tube with the header inner surface, and for crack growth at the ligaments. The virtual crack closure method is applied to derive a stress intensity factor K solution for a corner crack at the tube/header juncture. Similarly, using the reference stress method, the steady-state creep crack growth parameter C* is derived for a header corner crack. The C* solution for a small corner crack in a header can be inserted directly into the available Ct solution, along with K, to provide the complete transient creep solution

  20. Lightweight SIP/SDP compression scheme (LSSCS)

    Science.gov (United States)

    Wu, Jian J.; Demetrescu, Cristian

    2001-10-01

    In UMTS, new IP-based services with tight delay constraints, such as IP multimedia and interactive services, will be deployed over the W-CDMA air interface. To integrate wireline and wireless IP services, the 3GPP standards forum adopted the Session Initiation Protocol (SIP) as the call control protocol for UMTS Release 5, which will implement next-generation, all-IP networks for real-time QoS services. In its current form the SIP protocol is not suitable for wireless transmission due to its large message size, which would require either a large radio pipe for transmission or far longer transmission times than the current GSM Call Control (CC) message sequence. In this paper we present a novel compression algorithm called the Lightweight SIP/SDP Compression Scheme (LSSCS), which acts at the SIP application layer and therefore removes the information redundancy before it is sent to the network and transport layers. A binary octet-aligned header is added to the compressed SIP/SDP message before sending it to the network layer. The receiver uses this binary header as well as the pre-cached information to regenerate the original SIP/SDP message. The key features of the LSSCS compression scheme are presented in this paper along with implementation examples. It is shown that this compression algorithm makes SIP transmission efficient over the radio interface without losing SIP generality and flexibility.
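The core idea (the receiver already holds pre-cached information, so the sender only transmits what the receiver cannot predict) can be illustrated with a shared-dictionary compressor. This is only a stand-in for LSSCS, which the paper defines differently; the dictionary contents and message are invented:

```python
# Illustrative stand-in for dictionary-based SIP compression: zlib with a
# shared preset dictionary both ends agree on. Not the actual LSSCS scheme.
import zlib

# Pre-cached static SIP vocabulary (illustrative):
shared_dict = (b"INVITE sip: SIP/2.0 Via: Max-Forwards: To: From: "
               b"Call-ID: CSeq: Contact: Content-Type: application/sdp")

msg = (b"INVITE sip:bob@example.com SIP/2.0\r\n"
       b"Max-Forwards: 70\r\n"
       b"To: sip:bob@example.com\r\n")

comp = zlib.compressobj(zdict=shared_dict)
wire = comp.compress(msg) + comp.flush()        # what crosses the radio link

decomp = zlib.decompressobj(zdict=shared_dict)
restored = decomp.decompress(wire)              # receiver regenerates message

print(restored == msg)  # True
```

Common SIP tokens compress to back-references into the dictionary, so only the caller-specific parts cost bytes on the air interface, which is the redundancy-removal effect LSSCS targets at the application layer.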

  1. Technology Corner: Analysing E-mail Headers For Forensic Investigation

    Directory of Open Access Journals (Sweden)

    M. Tariq Banday

    2011-06-01

    Full Text Available Electronic Mail (E-Mail), one of the most widely used applications of the Internet, has become a global communication infrastructure service. However, security loopholes in it enable cybercriminals to misuse it by forging its headers or by sending it anonymously for illegitimate purposes, leading to e-mail forgeries. E-mail messages include transit handling envelope and trace information in the form of structured fields which are not stripped after messages are delivered, leaving a detailed record of e-mail transactions. A detailed header analysis can be used to map the networks traversed by messages, including information on the messaging software and patching policies of clients and gateways, etc. Cyber-forensic e-mail analysis is employed to collect credible evidence to bring criminals to justice. This paper projects the need for e-mail forensic investigation and lists various methods and tools used for its realization. A detailed header analysis of a multiple-tactic spoofed e-mail message is carried out in this paper. It also discusses various possibilities for detection of spoofed headers and identification of their originator. Further, difficulties that may be faced by investigators during forensic investigation of an e-mail message are discussed along with their possible solutions.
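The trace-field analysis described above starts by extracting the `Received` headers, which each relay prepends, and reading them bottom-up to reconstruct the path. A sketch with Python's standard `email` library and a fabricated message:

```python
# Sketch of the first step of header analysis: list an e-mail's Received
# hops in chronological order. The raw message below is fabricated.
from email import message_from_string

raw = (
    "Received: from gw.example.org (gw.example.org [203.0.113.7])\n"
    "\tby mx.example.com; Mon, 6 Jun 2011 10:00:02 +0000\n"
    "Received: from client.lan (unknown [198.51.100.9])\n"
    "\tby gw.example.org; Mon, 6 Jun 2011 10:00:00 +0000\n"
    "From: alice@example.org\n"
    "To: bob@example.com\n"
    "Subject: test\n"
    "\n"
    "body\n"
)

msg = message_from_string(raw)
hops = msg.get_all("Received")
# Each relay prepends its header, so reverse for transit order:
for hop in reversed(hops):
    print(hop.split(";")[0].strip())
```

Inconsistencies between adjacent hops (a relay that never names the host the next hop claims to have received from) are one of the spoofing indicators the paper discusses.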

  2. Simulation of neuro-fuzzy model for optimization of combine header setting

    Directory of Open Access Journals (Sweden)

    S Zareei

    2016-09-01

    Full Text Available Introduction A noticeable proportion of wheat losses occurs during the production and consumption steps, and the loss due to harvesting with a combine harvester is regarded as one of the main factors. A grain combine harvester consists of different sets of equipment, and one of the most important parts is the header, which accounts for more than 50% of the entire harvesting losses. Some researchers have presented regression equations to estimate the grain loss of combine harvesters. The results of their studies indicated that grain moisture content, reel index, cutter bar speed, service life of the cutter bar, tine spacing, tine clearance over the cutter bar, and stem length were the major parameters affecting the losses. On the other hand, several studies have used a variety of artificial intelligence methods on different aspects of combine harvesters. In neuro-fuzzy control systems, membership functions and if-then rules are defined through neural networks. A Sugeno-type fuzzy inference model was applied to generate fuzzy rules from a given input-output data set due to its less time-consuming and mathematically tractable defuzzification operation for sample-data-based fuzzy modeling. In this study, a neuro-fuzzy model was applied to develop forecasting models which can predict the combine header loss for each set of header parameter adjustments related to site-specific information and therefore can minimize the header loss. Materials and Methods The field experiment was conducted during the harvesting season of 2011 at the research station of the Faculty of Agriculture, Shiraz University, Shiraz, Iran. The wheat field (cv. Shiraz) was harvested with a Claas Lexion-510 combine harvester. The factors selected as the main factors influencing header performance were three levels of reel index (RI) (forward speed of the combine harvester divided by the peripheral speed of the reel) (1, 1.2, 1.5), three levels of cutting height (CH) (25, 30, 35 cm), three

  3. Orthogonal transformations for change detection, Matlab code (ENVI-like headers)

    DEFF Research Database (Denmark)

    2007-01-01

    Matlab code to do (iteratively reweighted) multivariate alteration detection (MAD) analysis, maximum autocorrelation factor (MAF) analysis, canonical correlation analysis (CCA) and principal component analysis (PCA) on image data; accommodates ENVI (like) header files.
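The record's tooling is Matlab; as a minimal stand-in for the simplest of the listed transformations, here is principal component analysis of a 3-band image via the covariance eigendecomposition, on synthetic data:

```python
# Hedged sketch: PCA of a 3-band image (pixels as rows, bands as columns)
# by eigendecomposition of the band covariance matrix. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
# 1000 pixels, 3 correlated "bands" (mixing matrix is arbitrary):
pixels = rng.normal(size=(1000, 3)) @ np.array([[3.0, 0.0, 0.0],
                                                [1.0, 1.0, 0.0],
                                                [0.5, 0.2, 0.1]])

X = pixels - pixels.mean(axis=0)          # center each band
cov = X.T @ X / (len(X) - 1)              # band covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)    # eigh returns ascending order
order = eigvals.argsort()[::-1]           # sort by decreasing variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
scores = X @ eigvecs                      # principal-component "images"

# Sanity check: PC variances equal the eigenvalues.
print(np.allclose(scores.var(axis=0, ddof=1), eigvals))  # True
```

MAD, MAF and CCA replace this single eigenproblem with generalized eigenproblems over difference, spatial-lag, or cross-set covariances, but the pixels-times-loadings structure of the output is the same.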

  4. Secured Hash Based Burst Header Authentication Design for Optical Burst Switched Networks

    Science.gov (United States)

    Balamurugan, A. M.; Sivasubramanian, A.; Parvathavarthini, B.

    2017-12-01

    Optical burst switching (OBS) is a promising technology that could meet fast-growing network demand, featuring the ability to meet the requirements of bandwidth-intensive applications. OBS proves to be a satisfactory technology for tackling huge bandwidth constraints, but suffers from security vulnerabilities. The objective of this proposed work is to design a faster and more efficient burst header authentication algorithm for core nodes. There are two important key features in this work, viz., header encryption and authentication. Since the burst header is important in an optical burst switched network, it has to be encrypted; otherwise it is prone to attack. The proposed MD5- and RC4-4S-based burst header authentication algorithm runs 20.75 ns faster than conventional algorithms. The modification suggested in the proposed RC4-4S algorithm gives better security and solves the correlation problems between the publicly known outputs during the key generation phase. The modified MD5 recommended in this work provides a 7.81% better avalanche effect than the conventional algorithm. The device utilization result also shows the suitability of the proposed algorithm for header authentication in real-time applications.
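The paper's modified MD5 and RC4-4S are not reproduced here; this sketch only illustrates the general pattern of authenticating a burst header with a keyed digest, using standard MD5 from `hashlib` as a stand-in and an invented header format:

```python
# Illustrative keyed-digest header authentication. Standard MD5 stands in
# for the paper's modified MD5; the key and header bytes are made up.
import hashlib

def tag_header(header: bytes, key: bytes) -> bytes:
    """Keyed digest to append to the burst header (illustrative pattern)."""
    return hashlib.md5(key + header).digest()

def verify(header: bytes, tag: bytes, key: bytes) -> bool:
    """Recompute the digest at the core node and compare."""
    return tag_header(header, key) == tag

key = b"core-node-shared-key"
hdr = b"\x01len=1500;dst=node7;offset=20us"
tag = tag_header(hdr, key)

print(verify(hdr, tag, key))                 # True: untampered header
print(verify(hdr + b"tampered", tag, key))   # False: modification detected
```

The avalanche effect the paper measures is the property that flipping one input bit flips roughly half the digest bits, which is what makes the tampered header's digest disagree completely rather than slightly.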

  5. METHOD AND APPARATUS FOR INSPECTION OF COMPRESSED DATA PACKAGES

    DEFF Research Database (Denmark)

    2008-01-01

    to be transferred over the data network. The method comprises the steps of: a) extracting payload data from the payload part of the package, b) appending the extracted payload data to a stream of data, c) probing the data package header so as to determine the compression scheme that is applied to the payload data...
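Steps a)-c) of the claimed method can be sketched as follows. The packet layout (a dict with `payload` and `encoding` fields) is a stand-in assumption, since the abstract does not specify the wire format:

```python
import gzip, zlib

def inspect(packets, pattern):
    # a) + b): extract each package's payload and append it to a stream.
    stream = b"".join(pkt["payload"] for pkt in packets)
    # c): probe the package header to determine the compression scheme.
    scheme = packets[0]["encoding"]
    if scheme == "gzip":
        data = gzip.decompress(stream)
    elif scheme == "deflate":
        data = zlib.decompress(stream)
    else:
        data = stream  # uncompressed traffic is scanned as-is
    # Scan the inflated content, e.g. for a malware signature.
    return pattern in data
```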

  6. Low-complexity Compression of High Dynamic Range Infrared Images with JPEG compatibility

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    2017-01-01

    Then we compress each image by a JPEG baseline encoder and include the residual image bit stream into the application part of the JPEG header of the base image. As a result, the base image can be reconstructed by a JPEG baseline decoder. If the JPEG bit stream size of the residual image is higher than the raw data size, then we include the raw residual image instead. If the residual image contains only zero values or the quality factor for it is 0, then we do not include the residual image in the header. Experimental results show that compared with JPEG-XT Part 6 with 'global Reinhard' tone-mapping...
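The embedding trick, carrying the residual bit stream in an application segment that baseline decoders skip, can be sketched at the byte level. The choice of the APP11 marker and the helper names are assumptions for illustration; JPEG-XT defines its own APP-marker layout:

```python
import struct

APP11 = b"\xff\xeb"  # application marker picked here for illustration

def embed_residual(base_jpeg, residual):
    # Insert the residual bit stream as APP11 segments right after SOI
    # (0xFFD8). The 2-byte length field counts itself plus the payload,
    # so at most 65533 payload bytes fit in one segment.
    assert base_jpeg[:2] == b"\xff\xd8", "not a JPEG stream"
    out = bytearray(base_jpeg[:2])
    for i in range(0, len(residual), 65533):
        chunk = residual[i:i + 65533]
        out += APP11 + struct.pack(">H", len(chunk) + 2) + chunk
    out += base_jpeg[2:]
    return bytes(out)

def extract_residual(jpeg):
    # Walk the marker segments and concatenate APP11 payloads. A baseline
    # decoder skips unknown APPn segments, so the base image still decodes
    # normally. Simplified: assumes only length-carrying segments occur
    # before start-of-scan (no RSTn/EOI in the header area).
    pos, res = 2, b""
    while pos + 4 <= len(jpeg) and jpeg[pos] == 0xFF:
        marker = jpeg[pos:pos + 2]
        length = struct.unpack(">H", jpeg[pos + 2:pos + 4])[0]
        if marker == APP11:
            res += jpeg[pos + 4:pos + 2 + length]
        pos += 2 + length
        if marker == b"\xff\xda":  # start of scan: entropy data follows
            break
    return res
```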

  7. Phishtest: Measuring the Impact of Email Headers on the Predictive Accuracy of Machine Learning Techniques

    Science.gov (United States)

    Tout, Hicham

    2013-01-01

    The majority of documented phishing attacks have been carried by email, yet few studies have measured the impact of email headers on the predictive accuracy of machine learning techniques in detecting email phishing attacks. Research has shown that the inclusion of a limited subset of email headers as features in training machine learning…

  8. 40 Gbit/s NRZ Packet-Length Insensitive Header Extraction for Optical Label Switching Networks

    DEFF Research Database (Denmark)

    Seoane, Jorge; Kehayas, E; Avramopoulos, H.

    2006-01-01

    A simple method for 40 Gbit/s NRZ header extraction based on envelope detection for optical label switching networks is presented. The scheme is insensitive to packet length and spacing and can be single-chip integrated cost-effectively.

  9. Mechanical design of the hot steam headers of the THTR-300 steam generators

    International Nuclear Information System (INIS)

    Blumer, U.; Stumpf, M.

    1988-01-01

    The high pressure steam headers of the THTR steam generators have been subject to special attention during the design phase for the following reasons: these components are the pressure retaining parts with the heaviest wall thickness in the region of the steam generators and are therefore sensitive to thermal transient conditions; they are operated in the elevated temperature regime, where creep effects cannot be neglected; and there is almost no service experience from fossil steam generators with this type of material (Alloy 800). Safety considerations have therefore been rather extensive and have focused on two main areas which will be treated in this paper: 1. Analytical investigations of the cyclic material behaviour under all specified operating conditions, taking into account the non-elastic response of the material. 2. Limitation of the consequences of a header rupture by installation of heavy whip restraints. Elastic-plastic-creep analyses: The analyses were performed in different stages and are explained in the corresponding order: evaluation of the critical location on the header and establishment of a simplified model of a nozzle region for further analysis; preliminary thermal analyses of all specified transient conditions using simplified procedures, in order to establish a severity ranking of the conditions; establishment of representative loading blocks; evaluation of the material properties for thermal and structural, especially non-elastic, behaviour; detailed thermal analyses; detailed structural analyses of the non-elastic cyclic response; extrapolation for all cycles and assessment of the results against design codes; discussion of the results. Header whip restraint design: In addition to the above analysis efforts, heavy whip restraints were provided to assure limitation of the effects of a header failure. This paper shows the measures that were taken to restrain the movement in case of longitudinal and transverse breaks: The anti-whip designs are

  10. Experience with compressed air system of Dhruva and Cirus

    International Nuclear Information System (INIS)

    Shelar, V.G.; Patil, U.D.; Singh, V.K.; Zope, A.K.; Kharpate, A.V.

    2006-01-01

    Dhruva and Cirus reactors have independent compressed air plants with provision for sharing. Dhruva has reciprocating oil-free air compressors, whereas Cirus has oil-lubricated compressors. Over the years, several improvements have been made to the equipment to combat various problems; among these, noise mitigation in Dhruva, measures to extend the life of the compressors in Cirus, and an incident of the discharge header catching fire are of particular interest. This paper details these experiences. (author)

  11. Detection and Repair of Ligament Cracks in a 109mm Thick Superheater Outlet Header

    International Nuclear Information System (INIS)

    Day, Peter

    2006-01-01

    Conventional thermal power station boilers are constructed of drums and a series of headers which are interconnected by many hundreds of tubes. Typically, feed water enters the boiler at about 250 deg C at a pressure of around 250 bar, with steam outlet temperatures of 540 deg C and a pressure of 170 bar. Superheater outlet headers may be subjected to quite arduous conditions during service. Not only are they exposed to high pressure stresses but also to high thermal stresses due to varying thermal gradients through the section thickness, particularly at start-up and during two-shift operation. The area that is exposed to the greatest thermal gradients is the narrow ligament that exists between the tube hole penetrations in the header bore. In the mid-1980s, industry-wide surveys found cracking in a large percentage (25-50%) of headers after 15 years of service. Detection and sizing of ligament cracking and estimates of the rate of growth are therefore a major consideration, especially in plant that is two-shifted. In order to manage the risk, both remote visual and ultrasonic inspection are performed during each major unit overhaul. Conclusion: Ultrasonic techniques used for this inspection need to be carefully evaluated with respect to their effectiveness. Conventional pulse echo is capable of detection, but using, for example, a technique such as AS2207 level 1 will not show the defect size. Time of flight diffraction has shown itself to be effective in accurately sizing ligament cracking. However, the complex geometry of header ligaments appears to cause a narrowing of the beam, with the effect that crack tip responses can be concentrated at the centre of the ligament. Therefore great care needs to be taken during data interrogation because errors in sizing can occur. Wherever possible, both 'B' and 'D' scan data should be collected. It appears that the greatest accuracy with respect to defect growth is obtained from the B scan image. With respect to the welding a

  12. Qualifying the use of RIS data for patient dose by comparison with DICOM header data

    International Nuclear Information System (INIS)

    Wilde, R.; Charnock, P.; McDonald, S.; Moores, B. M.

    2011-01-01

    A system was developed in 2008 to calculate patient doses using Radiology Information System (RIS) data and presents these data as a patient dose audit. One of the issues with this system was the quality of user-entered data. It has been shown that Digital Imaging and Communication in Medicine (DICOM) header data can be used to perform dose audits with a high level of data accuracy. This study aims to show that using RIS data for dose audits is not only a viable alternative to using DICOM header data, but that it has advantages. A new system was developed to pull header data from DICOM images easily and was installed on a workstation within a hospital department. Data were recovered for a common set of examinations using both RIS and DICOM header data. The data were compared on a result-by-result basis to check for consistency of common fields between RIS and DICOM, as well as assessing the value of data fields uncommon to both systems. The study shows that whilst RIS is not as accurate as DICOM, it does provide enough accurate data and that it has other advantages over using a DICOM approach. These results suggest that a 'best of both worlds' may be achievable using Modality Performed Procedure Step (MPPS). (authors)

  13. Novel Scheme for Packet Forwarding without Header Modifications in Optical Networks

    DEFF Research Database (Denmark)

    Wessing, Henrik; Christiansen, Henrik Lehrmann; Fjelde, Tina

    2002-01-01

    We present a novel scheme for packet forwarding in optical packet-switched networks and we further demonstrate its good scalability through simulations. The scheme requires neither header modification nor any label distribution protocol, thus reducing component cost while simplifying network...

  14. A LOCA analysis for AHWR caused by ECCS header rupture

    International Nuclear Information System (INIS)

    Chatterjee, B.; Gawai, Amol; Gupta, S.K.; Kushwaha, H.S.

    2000-01-01

    Loss of coolant accident (LOCA) analyses for the proposed 750 MWth Advanced Heavy Water Reactor (AHWR), initiated by the rupture of the 8 inch NB ECCS header, have been carried out. This paper presents a description of the AHWR and its associated ECCS, the postulated scenario for which the analyses were carried out, and the results, discussion and conclusions

  15. A computational fluid dynamics and effectiveness-NTU based co-simulation approach for flow mal-distribution analysis in microchannel heat exchanger headers

    International Nuclear Information System (INIS)

    Huang, Long; Lee, Moon Soo; Saleh, Khaled; Aute, Vikrant; Radermacher, Reinhard

    2014-01-01

    Refrigerant flow mal-distribution is a practical challenge in most microchannel heat exchanger (MCHX) applications. Geometry design, uneven heat transfer and pressure drop in the different microchannel tubes are three main reasons leading to the flow mal-distribution. To efficiently and accurately account for these three effects, a new MCHX co-simulation approach is proposed in this paper. The proposed approach combines a detailed header simulation based on computational fluid dynamics (CFD) and a robust effectiveness-based finite volume tube-side heat transfer and refrigerant flow modeling tool. The co-simulation concept is demonstrated on a ten-tube MCHX case study. The gravity effect and uneven airflow effect were numerically analyzed using both water and condensing R134a as the working fluids. The approach was validated against experimental data for an automotive R134a condenser. The inlet header was cut open after the experimental data had been collected, and the detailed header geometry was reproduced using the proposed CFD header model. Good prediction accuracy was achieved compared to the experimental data. The presented co-simulation approach is capable of predicting detailed refrigerant flow behavior while accurately predicting the overall heat exchanger performance. - Highlights: •MCHX header flow distribution is analyzed by a co-simulation approach. •The proposed method is capable of simulating both single-phase and two-phase flow. •An actual header geometry is reproduced in the CFD header model. •The modeling work is experimentally validated with good accuracy. •Gravity effect and air side mal-distribution are accounted for
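The effectiveness-NTU half of the co-simulation can be illustrated for a condensing refrigerant, where the capacity-rate ratio is zero and tube effectiveness reduces to 1 - exp(-NTU) regardless of flow arrangement. The numbers and function names below are illustrative, not the paper's model:

```python
from math import exp

def tube_heat_duty(UA, m_air, cp_air, t_ref, t_air_in):
    # epsilon-NTU for an isothermal (condensing) refrigerant: C_ratio = 0,
    # so effectiveness = 1 - exp(-NTU) with NTU = UA / (m_air * cp_air).
    ntu = UA / (m_air * cp_air)
    eff = 1.0 - exp(-ntu)
    return eff * m_air * cp_air * (t_ref - t_air_in)

def mchx_duty(tube_UAs, air_flows, cp_air, t_ref, t_air_in):
    # Sum tube-by-tube duties; an uneven air_flows list stands in for
    # air-side mal-distribution across the tubes of the case study.
    return sum(tube_heat_duty(ua, m, cp_air, t_ref, t_air_in)
               for ua, m in zip(tube_UAs, air_flows))
```

Because duty is concave in the per-tube air flow, skewing the same total airflow across tubes lowers the total capacity, which is the performance penalty the paper quantifies.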

  16. Daily Press Headers as a Reinforcement to Brand Identity in Spanish Sport Newspapers both Print and Online

    Directory of Open Access Journals (Sweden)

    Belén PUEBLA MARTÍNEZ

    2015-06-01

    Full Text Available Press headers are to daily newspapers what brands are to products, because a newspaper is an object in itself which participates in a double design process, from information design and from product design; in both cases the purpose is the same: that the reader feels attracted and comes back for more every day. On that trip, whether to the newsstand or to the computer, becoming visible and unique is of paramount importance, and the header is the element that best identifies not only the publication but also the tone of the language that the reader expects to find in it. This study examines Spanish sport daily press headers, both print and digital, to establish how newspapers achieve their intended brand identity.

  17. Optimising residual stresses at a repair in a steam header to tubeplate weld

    International Nuclear Information System (INIS)

    Soanes, T.P.T.; Bell, W.; Vibert, A.J.

    2005-01-01

    Following the discovery of incorrect weld metal in the steam side shell to tubeplate weld in a type 316H stainless steel superheater steam header, a repair strategy had to be determined. The strategy adopted was to remove the incorrect weld material, which extended around the full circumference, by machining from the inside of the header, followed by rewelding from the inside using an automatic welding process and localised post-weld heat treatment. Due to concern over potential reheat cracking of the repair after return to service, a considerable amount of residual stress modelling was carried out to support the development and optimisation of a successful repair and heat treatment strategy and thus underwrite the safety case for return to service

  18. Summary report on underground road header environmental control.

    CSIR Research Space (South Africa)

    Belle, BK

    2002-01-01

    Full Text Available and on monitoring should be reassessed to take into consideration the recent findings and current international trends. No conclusive results were obtained with regard to the use of a wet cutter head in conjunction with the Bank 2000 Road Header dust... Since this response time interval is lower than the T90 response time, a good indication of the methane gas trends can be obtained. To protect the methane sensors from the harsh environment around an active RH, the 22 Custodian sensors were placed in polycarbonate...

  19. 20% inlet header break analysis of Advanced Heavy Water Reactor

    International Nuclear Information System (INIS)

    Srivastava, A.; Gupta, S.K.; Venkat Raj, V.; Singh, R.; Iyer, K.

    2001-01-01

    The proposed Advanced Heavy Water Reactor (AHWR) is a 750 MWt vertical pressure tube type boiling light water cooled and heavy water moderated reactor. A passive design feature of this reactor is that the heat removal is achieved through natural circulation of primary coolant at all power levels, with no primary coolant pumps. Loss of coolant due to failure of inlet header results in depressurization of primary heat transport (PHT) system and containment pressure rise. Depressurization activates various protective and engineered safety systems like reactor trip, isolation condenser and advanced accumulator, limiting the consequences of the event. This paper discusses the thermal hydraulic transient analysis for evaluating the safety of the reactor, following 20% inlet header break using RELAP5/MOD3.2. For the analysis, the system is discretized appropriately to simulate possible flow reversal in one of the core paths during the transient. Various modeling aspects are discussed in this paper and predictions are made for different parameters like pressure, temperature, steam quality and flow in different parts of the Primary Heat Transport (PHT) system. Flow and energy discharges into the containment are also estimated for use in containment analysis. (author)

  20. Control and Non-Payload Communications (CNPC) Prototype Radio - Generation 2 Security Flight Test Report

    Science.gov (United States)

    Iannicca, Dennis C.; Ishac, Joseph A.; Shalkhauser, Kurt A.

    2015-01-01

    NASA Glenn Research Center (GRC), in cooperation with Rockwell Collins, is working to develop a prototype Control and Non-Payload Communications (CNPC) radio platform as part of NASA Integrated Systems Research Program's (ISRP) Unmanned Aircraft Systems (UAS) Integration in the National Airspace System (NAS) project. A primary focus of the project is to work with the Federal Aviation Administration (FAA) and industry standards bodies to build and demonstrate a safe, secure, and efficient CNPC architecture that can be used by industry to evaluate the feasibility of deploying a system using these technologies in an operational capacity. GRC has been working in conjunction with these groups to assess threats, identify security requirements, and to develop a system of standards-based security controls that can be applied to the GRC prototype CNPC architecture as a demonstration platform. The proposed security controls were integrated into the GRC flight test system aboard our S-3B Viking surrogate aircraft and several network tests were conducted during a flight on November 15th, 2014 to determine whether the controls were working properly within the flight environment. The flight test was also the first to integrate Robust Header Compression (ROHC) as a means of reducing the additional overhead introduced by the security controls and Mobile IPv6. The effort demonstrated the complete end-to-end secure CNPC link in a relevant flight environment.

  1. Researching of Covert Timing Channels Based on HTTP Cache Headers in Web API

    Directory of Open Access Journals (Sweden)

    Denis Nikolaevich Kolegov

    2015-12-01

    Full Text Available In this paper, it is shown how covert timing channels based on HTTP cache headers can be implemented using different Web APIs of the Google Drive, Dropbox and Facebook Internet services.

  2. Multiple-output all-optical header processing technique based on two-pulse correlation principle

    NARCIS (Netherlands)

    Calabretta, N.; Liu, Y.; Waardt, de H.; Hill, M.T.; Khoe, G.D.; Dorren, H.J.S.

    2001-01-01

    A serial all-optical header processing technique based on a two-pulse correlation principle in a semiconductor laser amplifier in a loop mirror (SLALOM) configuration that can have a large number of output ports is presented. The operation is demonstrated experimentally at a 10Gbit/s Manchester

  3. Asynchronous broadcast for ordered delivery between compute nodes in a parallel computing system where packet header space is limited

    Science.gov (United States)

    Kumar, Sameer

    2010-06-15

    Disclosed is a mechanism on receiving processors in a parallel computing system for providing order to data packets received from a broadcast call and for distinguishing data packets received at nodes from several incoming asynchronous broadcast messages where header space is limited. In the present invention, processors at the lower leaves of a tree do not need to obtain a broadcast message by directly accessing the data in a root processor's buffer. Instead, each intermediate node's rank id is squeezed into the software header of each packet. In turn, the entire broadcast message is not transferred from the root processor to each processor in a communicator; it is replicated on several intermediate nodes, which then replicate the message to nodes in lower leaves. Hence, the intermediate compute nodes become "virtual root compute nodes" for the purpose of replicating the broadcast message to lower levels of the tree.
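A toy rendition of the "virtual root" replication over a fanout tree might look like this. The dict-based software header and the k-ary child numbering are assumptions for illustration, not the patent's wire format:

```python
def children(rank, fanout, n_nodes):
    # k-ary tree numbering: node r's children are r*k+1 .. r*k+k.
    return [c for c in range(rank * fanout + 1, rank * fanout + fanout + 1)
            if c < n_nodes]

def broadcast(payload, fanout, n_nodes):
    # Each packet's compact software header carries only the rank of the
    # node that replicated it, not the true root's rank, so intermediate
    # nodes act as "virtual roots" for the subtree below them.
    delivered = {}
    pending = [(0, {"src_rank": 0})]  # (node, software header)
    while pending:
        node, hdr = pending.pop()
        delivered[node] = (hdr["src_rank"], payload)
        for c in children(node, fanout, n_nodes):
            pending.append((c, {"src_rank": node}))
    return delivered
```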

  4. The effect of the flow direction inside the header on two-phase flow distribution in parallel vertical channels

    International Nuclear Information System (INIS)

    Marchitto, A.; Fossa, M.; Guglielmini, G.

    2012-01-01

    Uniform fluid distribution is essential for efficient operation of chemical-processing equipment such as contactors, reactors, mixers and burners, and of most refrigeration equipment in which two phases act together. To obtain optimum distribution, proper consideration must be given to flow behaviour in the distributor, flow conditions upstream and downstream of the distributor, and the distribution requirements (fluid or phase) of the equipment. Even though the principles of single-phase distribution have been well developed for more than three decades, they are frequently not properly taken into account by equipment designers when a mixture is present, and a significant fraction of process equipment consequently suffers from maldistribution. The experimental investigation presented in this paper is aimed at understanding the main mechanisms which drive the flow distribution inside a two-phase horizontal header, in order to design improved distributors and to optimise the flow distribution inside compact heat exchangers. Experimentation was devoted to establishing the influence of the inlet conditions and of the channel/distributor geometry on the phase/mass distribution into parallel vertical channels. The study is carried out with air–water mixtures and is based on the measurement of component flow rates in individual channels and on pressure drops across the distributor. The effects of the operating conditions, the header geometry and the inlet port nozzle were investigated in the ranges of liquid and gas superficial velocities of 0.2–1.2 and 1.5–16.5 m/s, respectively. In order to control the main flow direction inside the header, different fitting devices were tested; the insertion of a co-axial, multi-hole distributor inside the header has confirmed the possibility of greatly improving the liquid and gas flow distribution by the proper selection of the position, diameter and number of the flow openings between the supplying distributor and the system of

  5. Continuous monitoring of variations in the 235U enrichment of uranium in the header pipework of a centrifuge enrichment plant

    International Nuclear Information System (INIS)

    Packer, T.W.

    1991-01-01

    Non-destructive assay equipment based on gamma-ray spectrometry and X-ray fluorescence analysis has previously been developed for confirming the presence of low enriched uranium in the header pipework of UF6 gas centrifuge enrichment plants. However, inspections can only be carried out occasionally on a limited number of pipes. With the development of centrifuge enrichment technology it has been suggested that more frequent, or ideally continuous, measurements should be made in order to improve safeguards assurance between inspections. For this purpose we have developed non-destructive assay equipment based on continuous gamma-ray spectrometry and X-ray transmission measurements. This equipment is suitable for detecting significant changes in the 235U enrichment of uranium in the header pipework of new centrifuge enrichment plants. Results are given in this paper of continuous measurements made in the laboratory and also on the header pipework of a centrifuge enrichment plant at Capenhurst

  6. Innovative hyperchaotic encryption algorithm for compressed video

    Science.gov (United States)

    Yuan, Chun; Zhong, Yuzhuo; Yang, Shiqiang

    2002-12-01

    It is accepted that a stream cryptosystem, which implements encryption by selecting a few parts of the block data and header information of the compressed video stream, can achieve good real-time performance and flexibility. A chaotic random number generator, for example the Logistic Map, is a comparatively promising substitute, but it is easily attacked by nonlinear dynamic forecasting and geometric information extraction. In this paper, we present a hyperchaotic cryptography scheme to encrypt compressed video, which integrates the Logistic Map with a Z(2^32 - 1) field linear congruential algorithm to strengthen the security of mono-chaotic cryptography while maintaining the real-time performance and flexibility of chaotic sequence cryptography. It also integrates dissymmetrical public-key cryptography and implements encryption and identity authentication of control parameters at the initialization phase. In accordance with the importance of the data in the compressed video stream, encryption is performed in a layered scheme. In the innovative hyperchaotic cryptography, the value and the updating frequency of the control parameters can be changed online to satisfy the requirements of network quality, processor capability and security. The innovative hyperchaotic cryptography proves robust security by cryptanalysis and shows good real-time performance and flexible implementation capability through arithmetic evaluation and testing.
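The layered keystream idea, a logistic map strengthened by a linear congruential stage and applied only to selected bytes, can be sketched as follows. The constants, byte quantization, and function names are illustrative assumptions, not the paper's exact construction:

```python
def logistic_keystream(x0, r, n):
    # Iterate x -> r*x*(1-x) and quantize each state to a byte. A single
    # logistic map is weak on its own (the paper adds a linear
    # congruential stage); this only illustrates the principle.
    x, out = x0, bytearray()
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

def lcg_mix(stream, seed):
    # Second stage: XOR with a linear congruential sequence, standing in
    # for the paper's Z(2^32 - 1) field construction.
    s, out = seed, bytearray()
    for b in stream:
        s = (1103515245 * s + 12345) % (2**32 - 1)
        out.append(b ^ (s & 0xFF))
    return bytes(out)

def encrypt_selected(data, positions, x0=0.3141, r=3.99, seed=42):
    # Stream-encrypt only selected byte positions (e.g. headers and key
    # syntax elements of the compressed video), leaving the rest intact.
    ks = lcg_mix(logistic_keystream(x0, r, len(positions)), seed)
    out = bytearray(data)
    for k, p in zip(ks, positions):
        out[p] ^= k
    return bytes(out)
```

Since the cipher is a plain XOR stream, applying the same function twice with the same parameters decrypts.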

  7. A high-speed lossless data compression system for space applications

    Science.gov (United States)

    Miko, Joe; Fong, Wai; Miller, Warner

    1993-01-01

    This paper reports on the integration of a lossless data compression/decompression chipset into a space data system architecture. For its compression engine, the data system incorporates the Universal Source Encoder (USE) designed for the NASA/Goddard Space Flight Center. Currently, the data compression testbed generates video frames consisting of 512 lines of 512 pixels having 8-bit resolution. Each image is passed through the USE where the lines are internally partitioned into 16-word blocks. These blocks are adaptively encoded across widely varying entropy levels using a Rice 12-option set coding algorithm. The current system operates at an Input/Output rate of 10 Msamples/s or 80 Mbits/s for each buffered input line. Frame and line synchronization for each image are maintained through the use of uniquely decodable command words. Length information of each variable length compressed image line is also included in the output stream. The data and command information are passed to the next stage of the system architecture through a serial fiber-optic transmitter. The initial segment of this stage consists of packetizer hardware which adds an appropriate CCSDS header to the received source data. An uncompressed mode is optionally available to pass image lines directly to the packetizer hardware. A data decompression testbed has also been developed to confirm the data compression operation.
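The Rice coding at the heart of the USE can be sketched for a single fixed parameter k; the chip adaptively selects among the 12 options per 16-sample block, which is omitted here for clarity:

```python
def rice_encode(samples, k):
    # Rice-code each nonnegative sample: quotient n >> k in unary
    # (terminated by 0), then the k-bit remainder, MSB first.
    bits = []
    for n in samples:
        q, r = n >> k, n & ((1 << k) - 1)
        bits += [1] * q + [0]
        bits += [(r >> i) & 1 for i in range(k - 1, -1, -1)]
    return bits

def rice_decode(bits, count, k):
    out, pos = [], 0
    for _ in range(count):
        q = 0
        while bits[pos] == 1:
            q, pos = q + 1, pos + 1
        pos += 1  # skip the terminating 0
        r = 0
        for _ in range(k):
            r, pos = (r << 1) | bits[pos], pos + 1
        out.append((q << k) | r)
    return out
```

In practice the encoder codes prediction residuals (mapped to nonnegative integers), which keeps the quotients short across widely varying entropy levels.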

  8. Twin header bore welded steam generator for pressurized water reactors

    International Nuclear Information System (INIS)

    Davies, R.J.; Hirst, B.

    1979-01-01

    A description is given of a pressurized water reactor (PWR) steam generator concept, several examples of which have been in service for up to fourteen years. Details are given of the highly successful service record of this equipment and the features which have been incorporated to minimize corrosion and deposition pockets. The design employs a vertical U tube bundle carried off two horizontal headers to which the tubes are welded by the Foster Wheeler Power Products (FWPP) bore welding process. The factors to be considered in uprating the design to meet the current operating conditions for a 1000 MW unit are discussed. (author)

  9. Hot steam header of a high temperature reactor as a benchmark problem

    International Nuclear Information System (INIS)

    Demierre, J.

    1990-01-01

    The International Atomic Energy Agency (IAEA) initiated a Coordinated Research Programme (CRP) on ''Design Codes for Gas-Cooled Reactor Components''. The specialists proposed to start with a benchmark design of a hot steam header in order to get a better understanding of the methods used in the participating countries. The contribution of Switzerland was carried out by Sulzer. The following report summarizes the detailed calculations of the dimensioning procedure and analysis. (author). 5 refs, 2 figs, 2 tabs

  10. A Study on Thermal Performance of a Novel All-Glass Evacuated Tube Solar Collector Manifold Header with an Inserted Tube

    Directory of Open Access Journals (Sweden)

    Jichun Yang

    2015-01-01

    Full Text Available A novel all-glass evacuated tube collector manifold header with an inserted tube is proposed in this paper, which forces the water in the all-glass evacuated solar collector tube to circulate and thereby improves the performance of the solar collector. A dynamic numerical model is presented for the water heater system based on the novel manifold header, and a test rig was built for model validation and for comparison with a traditional all-glass evacuated tube collector. The experimental results show that the efficiency of the solar water heater with the novel collector manifold header is higher than that of the traditional all-glass evacuated tube collector by about 5% and that the heat transfer model of the water heater system is valid. Based on the model, the relationship between the average temperature of the water tank and the inserted tube diameter (water mass flow) has been studied. The results show that the optimal diameter of the inserted tube is 32 mm for an inner glass tube diameter of 47 mm and that the water mass flow should be less than 1.6 kg/s.

  11. Novel scheme for efficient and cost-effective forwarding of packets in optical networks without header modification

    DEFF Research Database (Denmark)

    Wessing, Henrik; Fjelde, Tina; Christiansen, Henrik Lehrmann

    2001-01-01

    We present a novel scheme for addressing the outputs in optical packet switches and demonstrate its good scalability. The scheme requires neither header modification nor distribution of routing information to the packet switches, thus reducing optical component count while simplifying network...

  12. Fabrication of an improved tube-to-pipe header heat exchanger for the Fuel Failure Mockup (FFM) Facility

    International Nuclear Information System (INIS)

    Prislinger, J.J.; Jones, R.H.

    1977-05-01

    The procedure used in fabricating an improved tube-to-pipe header heat exchanger for the Fuel Failure Mockup (FFM) Facility is described. Superior performance is accomplished at reduced cost with adherence to the ASME Boiler and Pressure Vessel Code. The techniques used and the method of fabrication are described in detail

  13. Experimental use of road header (AM-50) as face cutting machine for extraction of coal in longwall panel

    Energy Technology Data Exchange (ETDEWEB)

    Passi, K.K.; Kumar, C.R.; Prasad, P. [DGMS, Dhanbad (India)

    2001-07-01

    The scope of this paper is limited to the use of available machines and techniques for attaining higher and more efficient production in underground coal mines. Under certain strata conditions and higher degrees of gassiness, the longwall method with hydraulic sand stowing is the only appropriate method for extraction of a thick seam. In Moonidih Jitpur Colliery of M/S IISCO, No. 14 seam, a Degree III gassy seam 9.07 m thick, is extracted in a multi-lift system with hydraulic sand stowing. In general, the bottom lift is extracted by a Single Ended Ranging Arm Shearer and the middle and top lifts are extracted by the conventional method. However, in one of the panels a spare road header machine was used as the face cutting machine in the bottom lift, on an experimental basis. This paper presents a successful case study of extraction of bottom-lift coal by the longwall method with hydraulic sand stowing, using a road header (AM-50) as the face cutting machine. 9 figs.

  14. LHCb: Dynamically Adaptive Header Generator and Front-End Source Emulator for a 100 Gbps FPGA Based DAQ

    CERN Multimedia

    Srikanth, S

    2014-01-01

    The proposed upgrade for the LHCb experiment envisages a system of 500 Data sources each generating data at 100 Gbps, the acquisition and processing of which is a big challenge even for the current state of the art FPGAs. This requires an FPGA DAQ module that not only handles the data generated by the experiment but also is versatile enough to dynamically adapt to potential inadequacies of other components like the network and PCs. Such a module needs to maintain real time operation while at the same time maintaining system stability and overall data integrity. This also creates a need for a Front-end source Emulator capable of generating the various data patterns, that acts as a testbed to validate the functionality and performance of the Header Generator. The rest of the abstract briefly describes these modules and their implementation. The Header Generator is used to packetize the streaming data from the detectors before it is sent to the PCs for further processing. This is achieved by continuously scannin...

  15. Air Emission Projections During Acid Cleaning of F-Canyon Waste Header No.2

    International Nuclear Information System (INIS)

    CHOI, ALEXANDER

    2004-01-01

    The purpose of this study was to develop air emission projections for the maintenance operation to dissolve and flush out the scale material inside the F-Canyon Waste Header No. 2. The chemical agent used for the dissolution is a concentrated nitric acid solution, so the pollutant of concern is nitric acid vapor. Under the very conservative operating scenarios considered in this study, it was determined that the highest possible rate of nitric acid emission during the acid flush would be 0.048 lb per hr. This worst-case air emission projection is just below the current exemption limit of 0.05 lb per hr for permit applications.

  16. Block selective redaction for minimizing loss during de-identification of burned in text in irreversibly compressed JPEG medical images.

    Science.gov (United States)

    Clunie, David A; Gebow, Dan

    2015-01-01

    Deidentification of medical images requires attention to both the header information and the pixel data itself, in which burned-in text may be present. If the pixel data to be deidentified is stored in a compressed form, traditionally it is decompressed, identifying text is redacted, and, if necessary, the pixel data are recompressed. Decompression without recompression may result in images of excessive or intractable size. Recompression with an irreversible scheme is undesirable because it may cause additional loss in the diagnostically relevant regions of the images. The irreversible (lossy) JPEG compression scheme works on small blocks of the image independently; hence, redaction can be selectively confined to only those blocks containing identifying text, leaving all other blocks unchanged. An open source implementation of selective redaction and a demonstration of its applicability to multiframe color ultrasound images are described. The process can be applied either to standalone JPEG images or to JPEG bit streams encapsulated in other formats, which, in the case of medical images, is usually DICOM.
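    The block-wise locality that makes selective redaction possible can be illustrated with a small sketch (illustrative only, not the paper's open source implementation): given a pixel-space redaction rectangle, only the 8x8 coefficient blocks it overlaps need to be re-encoded; every other block can be copied through unchanged, so no additional loss occurs outside the redacted region.

```python
def blocks_to_redact(rect, block=8):
    """Map a pixel-space redaction rectangle onto the JPEG 8x8 block grid.

    rect is (x0, y0, x1, y1) with exclusive upper bounds, in pixels.
    Returns inclusive ranges of block columns and rows intersecting the
    rectangle; blocks outside these ranges are copied through bit-for-bit.
    """
    x0, y0, x1, y1 = rect
    cols = (x0 // block, (x1 - 1) // block)
    rows = (y0 // block, (y1 - 1) // block)
    return cols, rows

# A text overlay spanning pixels x=10..29 in the top row of blocks touches
# only block columns 1..3 of block row 0.
print(blocks_to_redact((10, 0, 30, 8)))
```

(The subsampled chroma components of a real JPEG use larger MCUs, so a full implementation maps the rectangle per component.)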

  17. Thermal-hydraulic analysis of Ignalina NPP compartments response to group distribution header rupture using RALOC4 code

    International Nuclear Information System (INIS)

    Urbonavicius, E.

    2000-01-01

    The Accident Localisation System (ALS) of the Ignalina NPP is a containment of the pressure suppression type, designed to protect the environment from the dangerous impact of radioactivity. A failure of the ALS could lead to contamination of the environment, and prescribed public radiation doses could be exceeded. The purpose of the presented analysis is to perform a long-term thermal-hydraulic analysis of the compartments' response to a Group Distribution Header rupture and to verify that the design pressure values are not exceeded. (authors)

  18. Design and analysis of reactor headers for Narora Atomic Power Project

    International Nuclear Information System (INIS)

    Danak, M.R.

    1975-01-01

    The reactor header for the Narora Atomic Power Reactor is a 400 mm O.D., 10 metre long pressure vessel in the primary coolant circuit connecting 153 feeders to PHT pumps or steam generators. The vessel dimensions are restricted by containment philosophy. The outlet connections for pumps or steam generators are to be of the size of the vessel diameter, and the DO/t ratio for the vessel is approximately 10. The design and the induced stresses meet the code requirements, except that at times it is difficult to get precise stress values in the absence of certain data and for lack of code provisions or available literature giving a practical approach to the problem. It can be seen that the 400 mm equal tees used as part of the vessel cannot be penetrated in the light of code reinforcement requirements. However, if the tees have to be penetrated to retain the established feeder layout, it should be established experimentally, or by detailed stress analysis, that this will meet the intent of the code. (author)

  19. Damage distribution and remnant life assessment of a super-heater outlet header used for long time

    Energy Technology Data Exchange (ETDEWEB)

    Hiroyuki, Okamura [Science Univ. of Tokyo (Japan); Ryuichi, Ohotani [Kyoto Univ. (Japan); Kazuya, Fujii [Japan Power Engineering and Inspection Corp., Tokyo (Japan); Masashi, Nakashiro; Fumio, Takemasa; Hideo, Umaki; Tomiyasu, Masumura [Ishikawajima-Harima Heavy Industries Co. Ltd., Tokyo (Japan)

    1998-11-01

    This paper presents the results of an investigation into evaluating the damage distribution in base metals and welded joints through the thickness direction, and into evaluating damage on ligaments. The thick-walled test sample was a superheater outlet header component that had seen long-term service under high-pressure, high-temperature conditions in a thermal power plant. Simulated unused steel of the component material was prepared from the sample by suitable heat treatment, and the extent of damage was assessed by comparing nondestructive and destructive test results between the simulated unused and aged samples. Damage evaluation was also made by FEM structural stress analysis. (orig./MM)

  20. Pacemaker syndrome with sub-acute left ventricular systolic dysfunction in a patient with a dual-chamber pacemaker: consequence of lead switch at the header.

    Science.gov (United States)

    Khurwolah, Mohammad Reeaze; Vezi, Brian Zwelethini

    In the daily practice of pacemaker insertion, the occurrence of an atrial and ventricular lead switch at the pacemaker box header is a rare and unintentional phenomenon, with fewer than five cases reported in the literature. The lead switch may have dire consequences, depending on the indication for the pacemaker. One of these consequences is pacemaker syndrome, in which the normal sequence of atrial and ventricular activation is impaired, leading to sub-optimal ventricular filling and cardiac output. It is important for the attending physician to recognise any worsening of symptoms in a patient who has recently had a permanent pacemaker inserted. In the case of a dual-chamber pacemaker, switching of the atrial and ventricular leads at the pacemaker box header should be strongly suspected. We present an unusual case of pacemaker syndrome and right ventricular-only pacing-induced left ventricular systolic dysfunction in a patient with a dual-chamber pacemaker.

  1. CFD simulation for thermal mixing of a SMART flow mixing header assembly

    International Nuclear Information System (INIS)

    Kim, Young In; Bae, Youngmin; Chung, Young Jong; Kim, Keung Koo

    2015-01-01

    Highlights: • Thermal mixing performance of a FMHA installed in SMART is investigated numerically. • Effects of operating condition and discharge hole configuration are examined. • FMHA performance satisfies the design requirements under various abnormal conditions. - Abstract: A flow mixing header assembly (FMHA) is installed in the system-integrated modular advanced reactor (SMART) to enhance the thermal mixing capability and create a uniform core flow distribution under both normal operation and accident conditions. In this study, the thermal mixing characteristics of the FMHA are investigated for various steam generator conditions using a commercial CFD code. The simulations include investigations of the effects of FMHA discharge flow rate differences, turbulence models, and steam generator conditions. The results of the analysis show that the FMHA works effectively for thermal mixing in various conditions and noticeably decreases the temperature difference at the core inlet. We verified that the mixing capability of the FMHA is excellent and satisfies the design requirement in all simulation cases tested here.

  2. Investigating a reduced size real-time transport protocol for low-bandwidth networks

    CSIR Research Space (South Africa)

    Kakande, JN

    2011-09-01

    A reduced-size real-time transport protocol for low-bandwidth networks, referred to in this work as RTP-Lite, requires investigation. A cyclical approach to compression of the RTP headers was used, with different compression cycle patterns for Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) transport. Measurements over...
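    The cyclical compression approach can be sketched as follows (a hypothetical scheme for illustration, not the paper's exact RTP-Lite format): every N-th packet carries a full header so the decompressor can resynchronise after loss, and intermediate packets carry only the fields that changed since the running context.

```python
def compress_stream(headers, cycle=8):
    """Cyclic header compression sketch: emit a full header every `cycle`
    packets and, in between, only the fields that differ from the context."""
    out, ctx = [], {}
    for i, h in enumerate(headers):
        if i % cycle == 0:
            out.append(("FULL", dict(h)))   # refresh the decompressor context
            ctx = dict(h)
        else:
            delta = {k: v for k, v in h.items() if ctx.get(k) != v}
            out.append(("DELTA", delta))    # typically just seq/timestamp deltas
            ctx.update(delta)
    return out

# RTP-like headers where only the sequence number changes between packets:
pkts = [{"ssrc": 0xCAFE, "seq": i} for i in range(4)]
compressed = compress_stream(pkts, cycle=4)
```

On UDP the refresh cycle must be short enough to bound the damage from a lost full header, whereas TCP's reliable delivery allows a longer cycle; this is the kind of trade-off the different cycle patterns address.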

  3. DNABIT Compress - Genome compression algorithm.

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences, based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that "DNABIT Compress" is the best among the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the best existing methods could not achieve a ratio below 1.72 bits/base.
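    The 1.58 bits/base figure can be put in context against the naive baseline: with only four symbols, each base fits in two bits, giving a fixed 2 bits/base before any repeat-specific codes are assigned. A minimal sketch of that baseline packing (not the DNABIT Compress bit-assignment itself):

```python
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack_2bit(seq):
    """Pack a DNA string into bytes at exactly 2 bits/base (naive baseline;
    DNABIT Compress improves on this with unique codes for repeat fragments)."""
    out, acc, nbits = bytearray(), 0, 0
    for base in seq:
        acc = (acc << 2) | CODE[base]
        nbits += 2
        if nbits == 8:
            out.append(acc)
            acc, nbits = 0, 0
    if nbits:                        # pad the final partial byte with zeros
        out.append(acc << (8 - nbits))
    return bytes(out)

# 16 bases pack into 4 bytes, i.e. 2 bits/base.
packed = pack_2bit("ACGTACGTACGTACGT")
```

Beating 2 bits/base, as the algorithms compared in the record do, requires exploiting structure such as exact and reverse repeats rather than coding each base independently.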

  4. Relap5/Mod3.1 analysis of main steam header rupture in VVER- 440/213 NPP

    Energy Technology Data Exchange (ETDEWEB)

    Kral, P. [Nuclear Research Inst. Rez (Czech Republic)]

    1995-12-31

    The presentation is focused on two main topics. First, the applied modelling of the PGV-4 steam generator for the RELAP5 code is described. The results of a steady-state calculation under reference conditions are compared against measured data. The problem of longitudinal subdivision of the SG tubes is analysed and evaluated. Second, a best-estimate analysis of a main steam header (MSH) rupture accident in a WWER-440/213 NPP is presented. The low reliability of initiation of the ESFAS signal `MSH Rupture` leads in this accident to a large loss of secondary coolant, full depressurization of the main steam system, extremely fast cool-down of both the secondary and primary systems, opening of the PRZ SV-bypass valve with later liquid outflow, and the potential reaching of secondary criticality on failure of the HPIS. 7 refs.

  7. Power raise through improved reactor inlet header temperature measurement at Bruce A Nuclear Generation Station

    International Nuclear Information System (INIS)

    Basu, S.; Bruggemn, D.

    1997-01-01

    The Reactor Inlet Header (RIH) temperature has become a factor limiting the performance of the Ontario Hydro Bruce A units. Specifically, the RIH temperature is one of several parameters preventing the Bruce A units from returning to 94% power operation. The RIH temperature is one of several parameters which affect the critical heat flux in the reactor channel, and hence the integrity of the fuel. Ideally, the RIH temperature should be lowered, but this cannot be done without improving the heat transfer performance of the boilers and feedwater pre-heaters. Unfortunately, the physical performance of the boilers and pre-heaters has decayed, and continues to decay, over time; as a result, the RIH temperature has been rising and approaching its defined limit. Based on an understanding of the current RIH temperature measurement loop and the methods available to improve it, a solution to reduce the measurement uncertainty is presented.

  9. A Document-Based EHR System That Controls the Disclosure of Clinical Documents Using an Access Control List File Based on the HL7 CDA Header.

    Science.gov (United States)

    Takeda, Toshihiro; Ueda, Kanayo; Nakagawa, Akito; Manabe, Shirou; Okada, Katsuki; Mihara, Naoki; Matsumura, Yasushi

    2017-01-01

    Electronic health record (EHR) systems are necessary for the sharing of medical information between care delivery organizations (CDOs). We developed a document-based EHR system in which all of the PDF documents that are stored in our electronic medical record system can be disclosed to selected target CDOs. An access control list (ACL) file was designed based on the HL7 CDA header to manage the information that is disclosed.

  11. Experimental and numerical study of two-phase flows at the inlet of evaporators in vapour compression cycles

    International Nuclear Information System (INIS)

    Ahmad, M.

    2007-09-01

    Maldistribution of liquid-vapour two-phase flows causes a significant decrease in the thermal and hydraulic performance of evaporators in thermodynamic vapour compression cycles. A first experimental installation was used to visualize the two-phase flow evolution between the expansion valve and the evaporator inlet. A second experimental set-up, simulating a compact heat exchanger, was designed to identify the functional and geometrical parameters creating the best distribution of the two phases in the different channels. An analysis and understanding of the relation between the geometrical and functional parameters, the flow pattern inside the header, and the two-phase distribution has been established. Numerical simulations of a stratified flow and a stratified jet flow have been carried out using two CFD codes: FLUENT and NEPTUNE. In the case of a fragmented jet configuration, a global definition of the interfacial area concentration for a separated-phases and dispersed-phases flow has been established, and a model calculating the fragmented mass fraction has been developed. (author)

  12. Evaluation of intergranular cracks on the ring header cross at Grand Gulf Unit No. 1

    International Nuclear Information System (INIS)

    Czajkowski, C.J.

    1987-01-01

    A metallurgical investigation was performed on a sample of cracked ring header cross material from the Grand Gulf Unit No. 1 Nuclear Power Station. The cracks were located in a 6-7 in (15-17.5 cm) wide band running circumferentially below the cross-to-cap weld, with a similar band above the cross-to-discharge-pipe weld. The indications were up to 19 mm in length and 6.0 mm in depth. This particular sample was cut from a cross which had not seen actual service but which had been used to qualify the induction heating stress improvement (IHSI) technique for the Grand Gulf units. The base material was SA 182 material manufactured to SA 403-type WP 304 stainless steel. The investigation consisted of visual/dye-penetrant examination, chemical analysis, hardness testing, optical microscopy, scanning electron microscopy, and energy dispersive spectroscopy. The evaluated cracks were intergranular and initiated on the forging's exterior surface. The grain size of the material was larger than ASTM 00, and no definitive corrosive species were found by energy dispersive spectroscopy (EDS). The cracking is considered to be the result of the forging having been overheated/burned during manufacture. (author)

  13. Comparative data compression techniques and multi-compression results

    International Nuclear Information System (INIS)

    Hasan, M R; Ibrahimy, M I; Motakabber, S M A; Ferdaus, M M; Khan, M N H

    2013-01-01

    Data compression is necessary in business data processing because of the cost savings it offers and the large volume of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the data, the better the transmission speed and the greater the time saved. In communication, we always want to transmit data efficiently and without noise. This paper provides some compression techniques for lossless text-type data compression and comparative results for multiple versus single compression, which will help to identify better compression output and to develop compression algorithms.
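    The single-versus-multiple comparison described above is easy to reproduce with stock lossless codecs (a sketch using Python's standard library, not the paper's own test set): a second compression pass over already-compressed output usually adds overhead rather than saving space, because the first pass has removed the exploitable redundancy.

```python
import bz2
import lzma
import zlib

def compare(data):
    """Compressed sizes for three lossless codecs on the same input, plus a
    double (multi-) compression pass with zlib for comparison."""
    sizes = {
        "zlib": len(zlib.compress(data, 9)),
        "bz2": len(bz2.compress(data)),
        "lzma": len(lzma.compress(data)),
    }
    # Multi-compression: run zlib again over its own output.
    sizes["zlib twice"] = len(zlib.compress(zlib.compress(data, 9), 9))
    return sizes

text = b"business data processing, business data processing. " * 100
sizes = compare(text)
```

On highly repetitive text like this, each single-pass codec shrinks the input dramatically, while the second zlib pass sees near-random bytes and can only add container overhead.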

  14. Plataformas de colheita e colheita manual com trilha mecânica sobre a qualidade de sementes de arroz ( Oryza sativa, L. Harvest header and manual harvest with mechanical strip on rice ( Oryza sativa, L. seeds quality

    Directory of Open Access Journals (Sweden)

    Daniel Fernandez Franco

    1999-06-01

    During irrigated rice harvesting, losses and physical and physiological damage to the seeds occur. In the late 1980s, stripper headers appeared, which strip the grain instead of cutting the panicle; however, little is known about the physical and physiological damage this header system may cause to seeds. The objective of this work was to evaluate the mechanical damage caused to seeds of the rice cultivars BR-IRGA 409 and BR-IRGA 410 by three harvesting methods: (a) manual harvesting with mechanical threshing; (b) cutterbar header harvesting; and (c) stripper header harvesting. When harvesting was mechanical, samples were collected directly from the grain tank. The experimental design was randomized blocks with six replications. The results demonstrated that the rice seeds of the studied cultivars showed no significant differences in their physical and physiological qualities when harvested with the cutterbar header or the stripper header. These two harvesting methods, however, caused significantly greater damage when compared to manual harvesting with mechanical threshing.

  15. Efficient Lossy Compression for Compressive Sensing Acquisition of Images in Compressive Sensing Imaging Systems

    Directory of Open Access Journals (Sweden)

    Xiangwei Li

    2014-12-01

    Compressive Sensing Imaging (CSI) is a new framework for image acquisition which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, general image compression solutions may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images that considers the distinctive features of CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which achieves better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the CS measurements obtained from CS acquisition, without knowing any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4-2 dB compared with the current state of the art, while maintaining a low computational complexity.

  16. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Science.gov (United States)

    2010-07-01

    § 75.1730 Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped with...

  17. SeqCompress: an algorithm for biological sequence compression.

    Science.gov (United States)

    Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan

    2014-10-01

    The growth of Next Generation Sequencing technologies presents significant research challenges, specifically in designing bioinformatics tools that handle massive amounts of data efficiently. Biological sequence data storage has become a noticeable proportion of the total cost of generation and analysis. In particular, the increase in the DNA sequencing rate is significantly outstripping the rate of increase in disk storage capacity, and may exceed available storage capacity. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm has better compression gain than other existing algorithms.

  18. Radiological Image Compression

    Science.gov (United States)

    Lo, Shih-Chung Benedict

    The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of from 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested, and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, including CT head and body images and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square-error (NMSE) on the difference image, defined as the difference between the original image and the image reconstructed at a given compression ratio, is used as a global measurement of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.

  19. WSNs Microseismic Signal Subsection Compression Algorithm Based on Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Zhouzhou Liu

    2015-01-01

    To address wireless-network microseismic monitoring and the problems of low compression ratio and high communication energy consumption, this paper proposes a segmented compression algorithm based on the characteristics of microseismic signals and on compressive sensing (CS) theory applied in the transmission process. The algorithm segments the collected data on the basis of the number of nonzero elements, improving the accuracy of signal reconstruction by reducing the number of combinations of nonzero elements within each segment, while taking advantage of compressive sensing theory to achieve a high compression ratio for the signal. Experimental results show that, with the quantum chaos immune clone refactoring (Q-CSDR) algorithm as the reconstruction algorithm, under the condition of a signal sparsity degree higher than 40, the signal can be compressed at a compression ratio above 0.4 with a mean square error of less than 0.01, prolonging the network lifetime by a factor of two.

  20. Dual compression is not an uncommon type of iliac vein compression syndrome.

    Science.gov (United States)

    Shi, Wan-Yin; Gu, Jian-Ping; Liu, Chang-Jian; Lou, Wen-Sheng; He, Xu

    2017-09-01

    Typical iliac vein compression syndrome (IVCS) is characterized by compression of the left common iliac vein (LCIV) by the overlying right common iliac artery (RCIA). We describe an underestimated type of IVCS with dual compression by the right and left common iliac arteries (LCIA) simultaneously. Thirty-one patients with IVCS were retrospectively included. All patients received trans-catheter venography and computed tomography (CT) examinations for diagnosing and evaluating IVCS. Late venography and reconstructed CT were used to evaluate the anatomical relationship among the LCIV, RCIA and LCIA. Imaging manifestations as well as demographic data were collected and evaluated by two experienced radiologists. Sole and dual compression were found in 32.3% (n = 10) and 67.7% (n = 21) of the 31 patients, respectively. No statistical differences existed between them in terms of age, gender, LCIV diameter at the maximum compression point, pressure gradient across the stenosis, or the percentage of compression level. On CT and venography, sole compression commonly presented as a longitudinal compression at the orifice of the LCIV, while dual compression usually presented as one of two types: one had a lengthy stenosis along the upper side of the LCIV, and the other was manifested by a longitudinal compression near the orifice of the external iliac vein. The presence of dual compression was significantly correlated with a tortuous LCIA (p = 0.006). The left common iliac vein can present with dual compression. This type of compression has typical manifestations on late venography and CT.

  1. Optimization of Error-Bounded Lossy Compression for Hard-to-Compress HPC Data

    Energy Technology Data Exchange (ETDEWEB)

    Di, Sheng; Cappello, Franck

    2018-01-01

    Since today's scientific applications are producing vast amounts of data, compressing them before storage/transmission is critical. Results of existing compressors show two types of HPC data sets: highly compressible and hard to compress. In this work, we carefully design and optimize error-bounded lossy compression for hard-to-compress scientific data. We propose an optimized algorithm that can adaptively partition the HPC data into best-fit consecutive segments, each having mutually close data values, such that the compression condition can be optimized. Another significant contribution is the optimization of the shifting offset, such that the XOR-leading-zero length between two consecutive unpredictable data points can be maximized. We finally devise an adaptive method to select the best-fit compressor at runtime to maximize the compression factor. We evaluate our solution using 13 benchmarks based on real-world scientific problems, and we compare it with 9 other state-of-the-art compressors. Experiments show that our compressor can always guarantee the compression errors within the user-specified error bounds. Most importantly, our optimization can improve the compression factor effectively, by up to 49% for hard-to-compress data sets, with similar compression/decompression time cost.
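    The XOR-leading-zero quantity being maximized can be computed directly from the IEEE-754 encodings of consecutive values (a small illustrative sketch; the function name is ours, not from the paper): the more leading bits two neighbouring values share, the fewer bits are needed to encode the second one.

```python
import struct

def xor_leading_zeros(a, b):
    """Leading-zero count of the XOR of two doubles' 64-bit IEEE-754
    encodings, i.e. how many leading bits the two neighbours share."""
    ua = struct.unpack(">Q", struct.pack(">d", a))[0]
    ub = struct.unpack(">Q", struct.pack(">d", b))[0]
    return 64 - (ua ^ ub).bit_length()   # int(0).bit_length() == 0 -> 64

# Close values agree in many leading bits; a sign flip destroys the sharing.
print(xor_leading_zeros(1.0, 1.5))    # high agreement
print(xor_leading_zeros(1.0, -1.0))   # sign bit differs
```

Shifting both values by a well-chosen offset can align their encodings so that this count grows, which is exactly the lever the paper's shifting-offset optimization pulls for unpredictable data points.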

  2. The impact of chest compression rates on quality of chest compressions - a manikin study.

    Science.gov (United States)

    Field, Richard A; Soar, Jasmeet; Davies, Robin P; Akhtar, Naheed; Perkins, Gavin D

    2012-03-01

    Chest compressions are often performed at a variable rate during cardiopulmonary resuscitation (CPR). The effect of compression rate on other chest compression quality variables (compression depth, duty-cycle, leaning, performance decay over time) is unknown. This randomised controlled cross-over manikin study examined the effect of different compression rates on the other chest compression quality variables. Twenty healthcare professionals performed 2 min of continuous compressions on an instrumented manikin at rates of 80, 100, 120, 140 and 160 min(-1) in a random order. An electronic metronome was used to guide compression rate. Compression data were analysed by repeated measures ANOVA and are presented as mean (SD). Non-parametric data were analysed by the Friedman test. At faster compression rates there were significant improvements in the number of compressions delivered (160(2) at 80 min(-1) vs. 312(13) compressions at 160 min(-1), P<0.001) and in compression duty-cycle (43(6)% at 80 min(-1) vs. 50(7)% at 160 min(-1), P<0.001). This came at the cost of a significant reduction in compression depth (39.5(10) mm at 80 min(-1) vs. 34.5(11) mm at 160 min(-1), P<0.001) and earlier decay in compression quality (median decay point 120 s at 80 min(-1) vs. 40 s at 160 min(-1), P<0.001). Additionally, not all participants achieved the target rate (100% at 80 min(-1) vs. 70% at 160 min(-1)). Rates above 120 min(-1) had the greatest impact on reducing chest compression quality. For Guidelines 2005 trained rescuers, a chest compression rate of 100-120 min(-1) for 2 min is feasible whilst maintaining adequate chest compression quality in terms of depth, duty-cycle, leaning, and decay in compression performance. Further studies are needed to assess the impact of the Guidelines 2010 recommendation for deeper and faster chest compressions. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  3. Excessive chest compression rate is associated with insufficient compression depth in prehospital cardiac arrest.

    Science.gov (United States)

    Monsieurs, Koenraad G; De Regge, Melissa; Vansteelandt, Kristof; De Smet, Jeroen; Annaert, Emmanuel; Lemoyne, Sabine; Kalmar, Alain F; Calle, Paul A

    2012-11-01

    BACKGROUND AND GOAL OF STUDY: The relationship between chest compression rate and compression depth is unknown. In order to characterise this relationship, we performed an observational study in prehospital cardiac arrest patients. We hypothesised that faster compressions are associated with decreased depth. In patients undergoing prehospital cardiopulmonary resuscitation by health care professionals, chest compression rate and depth were recorded using an accelerometer (E-series monitor-defibrillator, Zoll, U.S.A.). Compression depth was compared for rates 120/min. A difference in compression depth ≥0.5 cm was considered clinically significant. Mixed models with repeated measurements of chest compression depth and rate (level 1) nested within patients (level 2) were used with compression rate as a continuous and as a categorical predictor of depth. Results are reported as means and standard error (SE). One hundred and thirty-three consecutive patients were analysed (213,409 compressions). Of all compressions 2% were 120/min, 36% were 5 cm. In 77 out of 133 (58%) patients a statistically significant lower depth was observed for rates >120/min compared to rates 80-120/min, in 40 out of 133 (30%) this difference was also clinically significant. The mixed models predicted that the deepest compression (4.5 cm) occurred at a rate of 86/min, with progressively lower compression depths at higher rates. Rates >145/min would result in a depth compression depth for rates 80-120/min was on average 4.5 cm (SE 0.06) compared to 4.1 cm (SE 0.06) for compressions >120/min (mean difference 0.4 cm, Pcompression rates and lower compression depths. Avoiding excessive compression rates may lead to more compressions of sufficient depth. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  4. First application of a partially automated road header at Prosper-Haniel colliery; Ersteinsatz einer teilautomatisierten Teilschnittmaschine auf dem Bergwerk Prosper-Haniel

    Energy Technology Data Exchange (ETDEWEB)

    Reinewardt, Klaus-Juergen [Bergwerk Prosper-Haniel, Bottrop (Germany). Betrieb Produktion; Achilles, Peter [RAG Deutsche Steinkohle AG, Herne (Germany). Abt. PPE-V Vorleistungstechnik

    2010-09-15

    Mechanical road heading in the RAG Deutsche Steinkohle mines makes use of AM 105 road headers. Within the scope of an EU-subsidised R and D project the machine has been subjected to automation of its control features and integration of sensors for seam position identification and for navigation. The focal points of the automation are: - the scheduled performance of automated cutting operations, - adherence to a defined loading height by means of seam position identification, and - the incorporation of first auxiliary functions for navigation (position sensing). In addition, the machine is expected to determine its functional state and recognise potential functional faults by itself, and - subject to its present load condition - to deliver the basis for a maintenance scheme geared to its current condition. This paper describes the above-mentioned development steps and reports on the experience gathered in the underground use of the machine in the Prosper-Haniel colliery. (orig.)

  5. Adiabatic compression and radiative compression of magnetic fields

    International Nuclear Information System (INIS)

    Woods, C.H.

    1980-01-01

    Flux is conserved during mechanical compression of magnetic fields for both nonrelativistic and relativistic compressors. However, the relativistic compressor generates radiation, which can carry up to twice the energy content of the magnetic field compressed adiabatically. The radiation may be either confined or allowed to escape.

  6. The impact of chest compression rates on quality of chest compressions : a manikin study

    OpenAIRE

    Field, Richard A.; Soar, Jasmeet; Davies, Robin P.; Akhtar, Naheed; Perkins, Gavin D.

    2012-01-01

    Purpose\\ud Chest compressions are often performed at a variable rate during cardiopulmonary resuscitation (CPR). The effect of compression rate on other chest compression quality variables (compression depth, duty-cycle, leaning, performance decay over time) is unknown. This randomised controlled cross-over manikin study examined the effect of different compression rates on the other chest compression quality variables.\\ud Methods\\ud Twenty healthcare professionals performed two minutes of co...

  7. Compression stockings

    Science.gov (United States)

    Call your health insurance or prescription plan: Find out if they pay for compression stockings. Ask if your durable medical equipment benefit pays for compression stockings. Get a prescription from your doctor. Find a medical equipment store where they can ...

  8. Compression for radiological images

    Science.gov (United States)

    Wilson, Dennis L.

    1992-07-01

    The viewing of radiological images has peculiarities that must be taken into account in the design of a compression technique. The images may be manipulated on a workstation to change the contrast, to change the center of the brightness levels that are viewed, and even to invert the images. Because of the possible consequences of losing information in a medical application, bit-preserving compression is used for the images used for diagnosis. However, for archiving the images may be compressed to 10% of their original size. A compression technique based on the Discrete Cosine Transform (DCT) takes the viewing factors into account by compressing the changes in the local brightness levels. The compression technique is a variation of the CCITT JPEG compression that suppresses the blocking of the DCT except in areas of very high contrast.
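
    The transform coding described above can be sketched with a toy 1-D DCT (the real scheme operates on 2-D blocks with perceptual quantisation; this is only an illustration): a smooth row of pixel values concentrates its energy in the low-frequency coefficients, so the small high-frequency coefficients can be dropped with little reconstruction error.

```python
import math

def dct(block):
    """Orthonormal DCT-II of a 1-D block (the JPEG transform, per axis)."""
    N = len(block)
    out = []
    for k in range(N):
        c = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        s = sum(x * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n, x in enumerate(block))
        out.append(c * s)
    return out

def idct(coeffs):
    """Inverse transform (DCT-III), exact up to float rounding."""
    N = len(coeffs)
    out = []
    for n in range(N):
        s = 0.0
        for k, X in enumerate(coeffs):
            c = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
            s += c * X * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
        out.append(s)
    return out

# A smooth image row: energy piles up in the low-frequency coefficients,
# so zeroing small high-frequency ones loses little.
row = [100, 102, 104, 107, 110, 112, 113, 115]
coeffs = dct(row)
coeffs_q = [c if abs(c) > 1.0 else 0.0 for c in coeffs]  # crude threshold
approx = idct(coeffs_q)
```

Each dropped coefficient has magnitude at most 1.0, so the per-pixel reconstruction error stays within a few grey levels while most coefficients become zero and compress away.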

  9. Extreme compression for extreme conditions: pilot study to identify optimal compression of CT images using MPEG-4 video compression.

    Science.gov (United States)

    Peterson, P Gabriel; Pak, Sung K; Nguyen, Binh; Jacobs, Genevieve; Folio, Les

    2012-12-01

    This study aims to evaluate the utility of compressed computed tomography (CT) studies (to expedite transmission) using Motion Pictures Experts Group, Layer 4 (MPEG-4) movie formatting in combat hospitals when guiding major treatment regimens. This retrospective analysis was approved by the Walter Reed Army Medical Center institutional review board with a waiver for the informed consent requirement. Twenty-five CT chest, abdomen, and pelvis exams were converted from Digital Imaging and Communications in Medicine to MPEG-4 movie format at various compression ratios. Three board-certified radiologists reviewed various levels of compression on emergent CT findings on 25 combat casualties and compared with the interpretation of the original series. A Universal Trauma Window was selected at -200 HU level and 1,500 HU width, then compressed at three lossy levels. Sensitivities and specificities for each reviewer were calculated along with 95% confidence intervals using the method of general estimating equations. The compression ratios compared were 171:1, 86:1, and 41:1 with combined sensitivities of 90% (95% confidence interval, 79-95), 94% (87-97), and 100% (93-100), respectively. Combined specificities were 100% (85-100), 100% (85-100), and 96% (78-99), respectively. The introduction of CT in combat hospitals with increasing detectors and image data in recent military operations has increased the need for effective teleradiology, mandating compression technology. Image compression is currently used to transmit images from combat hospitals to tertiary care centers with subspecialists, and our study demonstrates MPEG-4 technology as a reasonable means of achieving such compression.
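
    The sensitivities and specificities reported above come from confusion-matrix proportions; the sketch below shows how such numbers are formed, using simple Wald-style intervals rather than the paper's general-estimating-equation method, and hypothetical counts (not the study's data).

```python
import math

def sens_spec_ci(tp, fn, tn, fp, z=1.96):
    """Sensitivity/specificity with simple Wald 95% CIs.

    A simplified stand-in for GEE-based intervals, just to show
    how the reported percentages and intervals are formed.
    """
    def prop_ci(k, n):
        p = k / n
        half = z * math.sqrt(p * (1 - p) / n)
        return p, max(0.0, p - half), min(1.0, p + half)

    sens = prop_ci(tp, tp + fn)  # true positives / all actual positives
    spec = prop_ci(tn, tn + fp)  # true negatives / all actual negatives
    return sens, spec

# Hypothetical counts for one compression ratio and one reader.
(sens, lo_s, hi_s), (spec, lo_sp, hi_sp) = sens_spec_ci(tp=45, fn=5, tn=23, fp=2)
```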

  10. Mammographic compression in Asian women.

    Science.gov (United States)

    Lau, Susie; Abdul Aziz, Yang Faridah; Ng, Kwan Hoong

    2017-01-01

    To investigate: (1) the variability of mammographic compression parameters amongst Asian women; and (2) the effects of reducing compression force on image quality and mean glandular dose (MGD) in Asian women based on phantom study. We retrospectively collected 15818 raw digital mammograms from 3772 Asian women aged 35-80 years who underwent screening or diagnostic mammography between Jan 2012 and Dec 2014 at our center. The mammograms were processed using a volumetric breast density (VBD) measurement software (Volpara) to assess compression force, compression pressure, compressed breast thickness (CBT), breast volume, VBD and MGD against breast contact area. The effects of reducing compression force on image quality and MGD were also evaluated based on measurement obtained from 105 Asian women, as well as using the RMI156 Mammographic Accreditation Phantom and polymethyl methacrylate (PMMA) slabs. Compression force, compression pressure, CBT, breast volume, VBD and MGD correlated significantly with breast contact area (pAsian women. The median compression force should be about 8.1 daN compared to the current 12.0 daN. Decreasing compression force from 12.0 daN to 9.0 daN increased CBT by 3.3±1.4 mm, MGD by 6.2-11.0%, and caused no significant effects on image quality (p>0.05). Force-standardized protocol led to widely variable compression parameters in Asian women. Based on phantom study, it is feasible to reduce compression force up to 32.5% with minimal effects on image quality and MGD.

  11. Thermofluidic compression effects to achieve combustion in a low-compression scramjet engine

    Science.gov (United States)

    Moura, A. F.; Wheatley, V.; Jahn, I.

    2017-12-01

    The compression provided by a scramjet inlet is an important parameter in its design. It must be low enough to limit thermal and structural loads and stagnation pressure losses, but high enough to provide the conditions favourable for combustion. Inlets are typically designed to achieve sufficient compression without accounting for the fluidic, and subsequently thermal, compression provided by the fuel injection, which can enable robust combustion in a low-compression engine. This is investigated using Reynolds-averaged Navier-Stokes numerical simulations of a simplified scramjet engine designed to have insufficient compression to auto-ignite fuel in the absence of thermofluidic compression. The engine was designed with a wide rectangular combustor and a single centrally located injector, in order to reduce three-dimensional effects of the walls on the fuel plume. By varying the injected mass flow rate of hydrogen fuel (equivalence ratios of 0.22, 0.17, and 0.13), it is demonstrated that higher equivalence ratios lead to earlier ignition and more rapid combustion, even though mean conditions in the combustor change by no more than 5% for pressure and 3% for temperature with higher equivalence ratio. By supplementing the lower equivalence ratio with helium to achieve a higher mass flow rate, it is confirmed that these benefits are primarily due to the local compression provided by the extra injected mass. Investigation of the conditions around the fuel plume indicated two connected mechanisms. The higher mass flow rate for higher equivalence ratios generated a stronger injector bow shock that compresses the free-stream gas, increasing OH radical production and promoting ignition. This was observed both in the higher equivalence ratio case and in the case with helium. This earlier ignition led to increased temperature and pressure downstream and, consequently, stronger combustion. 
The heat release from combustion provided thermal compression in the combustor, further

  12. Compressive laser ranging.

    Science.gov (United States)

    Babbitt, Wm Randall; Barber, Zeb W; Renner, Christoffer

    2011-12-15

    Compressive sampling has been previously proposed as a technique for sampling radar returns and determining sparse range profiles with a reduced number of measurements compared to conventional techniques. By employing modulation on both transmission and reception, compressive sensing in ranging is extended to the direct measurement of range profiles without intermediate measurement of the return waveform. This compressive ranging approach enables the use of pseudorandom binary transmit waveforms and return modulation, along with low-bandwidth optical detectors to yield high-resolution ranging information. A proof-of-concept experiment is presented. With currently available compact, off-the-shelf electronics and photonics, such as high data rate binary pattern generators and high-bandwidth digital optical modulators, compressive laser ranging can readily achieve subcentimeter resolution in a compact, lightweight package.

  13. Mining compressing sequential problems

    NARCIS (Netherlands)

    Hoang, T.L.; Mörchen, F.; Fradkin, D.; Calders, T.G.K.

    2012-01-01

    Compression based pattern mining has been successfully applied to many data mining tasks. We propose an approach based on the minimum description length principle to extract sequential patterns that compress a database of sequences well. We show that mining compressing patterns is NP-Hard and
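
    The minimum description length principle mentioned above can be sketched as follows: a pattern is worth keeping only if the bits spent storing it in a pattern table are repaid by the shortening of the database rewritten in terms of it. This toy scoring function illustrates the idea, not the paper's actual encoding.

```python
import math

def description_length(db, patterns):
    """Toy MDL score: cost of the pattern table plus the database
    rewritten with one new symbol per pattern (greedy, non-overlapping).

    A sketch of the MDL idea behind compressing pattern mining,
    not the encoding used in the paper.
    """
    symbols = {s for seq in db for s in seq} | set(range(-1, -len(patterns) - 1, -1))
    bits_per_symbol = math.log2(max(2, len(symbols)))
    total = sum(len(p) for p in patterns) * bits_per_symbol  # pattern table cost
    for seq in db:
        enc = list(seq)
        for pid, pat in enumerate(patterns):
            out, i = [], 0
            while i < len(enc):
                if enc[i:i + len(pat)] == list(pat):
                    out.append(-(pid + 1))   # one symbol replaces the pattern
                    i += len(pat)
                else:
                    out.append(enc[i])
                    i += 1
            enc = out
        total += len(enc) * bits_per_symbol
    return total

db = [["a", "b", "c", "x", "a", "b", "c"], ["a", "b", "c", "y"]]
no_pattern = description_length(db, [])
with_pattern = description_length(db, [("a", "b", "c")])
```

A pattern that recurs often, like `("a", "b", "c")` here, lowers the total description length; a rare one would raise it and be rejected.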

  14. Microbunching and RF Compression

    International Nuclear Information System (INIS)

    Venturini, M.; Migliorati, M.; Ronsivalle, C.; Ferrario, M.; Vaccarezza, C.

    2010-01-01

    Velocity bunching (or RF compression) represents a promising technique complementary to magnetic compression to achieve the high peak current required in the linac drivers for FELs. Here we report on recent progress aimed at characterizing the RF compression from the point of view of the microbunching instability. We emphasize the development of a linear theory for the gain function of the instability and its validation against macroparticle simulations that represents a useful tool in the evaluation of the compression schemes for FEL sources.

  15. Optical pulse compression

    International Nuclear Information System (INIS)

    Glass, A.J.

    1975-01-01

    The interest in using large lasers to achieve a very short and intense pulse for generating fusion plasma has provided a strong impetus to reexamine the possibilities of optical pulse compression at high energy. Pulse compression allows one to generate pulses of long duration (minimizing damage problems) and subsequently compress optical pulses to achieve the short pulse duration required for specific applications. The ideal device for carrying out this program has not been developed. Of the two approaches considered, the Gires--Tournois approach is limited by the fact that the bandwidth and compression are intimately related, so that the group delay dispersion times the square of the bandwidth is about unity for all simple Gires--Tournois interferometers. The Treacy grating pair does not suffer from this limitation, but is inefficient because diffraction generally occurs in several orders and is limited by the problem of optical damage to the grating surfaces themselves. Nonlinear and parametric processes were explored. Some pulse compression was achieved by these techniques; however, they are generally difficult to control and are not very efficient. (U.S.)

  16. Compressed sensing & sparse filtering

    CERN Document Server

    Carmi, Avishy Y; Godsill, Simon J

    2013-01-01

    This book is aimed at presenting concepts, methods and algorithms able to cope with undersampled and limited data. One such trend that recently gained popularity and to some extent revolutionised signal processing is compressed sensing. Compressed sensing builds upon the observation that many signals in nature are nearly sparse (or compressible, as they are normally referred to) in some domain, and consequently they can be reconstructed to within high accuracy from far fewer observations than traditionally held to be necessary. Apart from compressed sensing this book contains other related app

  17. LZ-Compressed String Dictionaries

    OpenAIRE

    Arz, Julian; Fischer, Johannes

    2013-01-01

    We show how to compress string dictionaries using the Lempel-Ziv (LZ78) data compression algorithm. Our approach is validated experimentally on dictionaries of up to 1.5 GB of uncompressed text. We achieve compression ratios often outperforming the existing alternatives, especially on dictionaries containing many repeated substrings. Our query times remain competitive.
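
    A minimal LZ78 coder of the kind the approach builds on can be sketched as follows (the phrase-dictionary construction is the standard textbook algorithm; the paper's contribution is applying it to string dictionaries):

```python
def lz78_compress(text):
    """LZ78: emit (dictionary_index, next_char) pairs; index 0 = empty phrase."""
    dictionary = {"": 0}
    result, phrase = [], ""
    for ch in text:
        if phrase + ch in dictionary:
            phrase += ch           # keep extending the longest known phrase
        else:
            result.append((dictionary[phrase], ch))
            dictionary[phrase + ch] = len(dictionary)
            phrase = ""
    if phrase:
        result.append((dictionary[phrase], ""))  # flush trailing phrase
    return result

def lz78_decompress(pairs):
    """Rebuild the phrase dictionary on the fly; no table is transmitted."""
    phrases = [""]
    out = []
    for idx, ch in pairs:
        phrase = phrases[idx] + ch
        out.append(phrase)
        phrases.append(phrase)
    return "".join(out)

data = "abracadabra abracadabra"
packed = lz78_compress(data)
```

Repeated substrings, which dominate the dictionaries in the experiments above, collapse into single `(index, char)` pairs.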

  18. Structure and Properties of Silica Glass Densified in Cold Compression and Hot Compression

    Science.gov (United States)

    Guerette, Michael; Ackerson, Michael R.; Thomas, Jay; Yuan, Fenglin; Bruce Watson, E.; Walker, David; Huang, Liping

    2015-10-01

    Silica glass has been shown in numerous studies to possess significant capacity for permanent densification under pressure at different temperatures to form high density amorphous (HDA) silica. However, it is unknown to what extent the processes leading to irreversible densification of silica glass in cold-compression at room temperature and in hot-compression (e.g., near glass transition temperature) are common in nature. In this work, a hot-compression technique was used to quench silica glass from high temperature (1100 °C) and high pressure (up to 8 GPa) conditions, which leads to density increase of ~25% and Young’s modulus increase of ~71% relative to that of pristine silica glass at ambient conditions. Our experiments and molecular dynamics (MD) simulations provide solid evidences that the intermediate-range order of the hot-compressed HDA silica is distinct from that of the counterpart cold-compressed at room temperature. This explains the much higher thermal and mechanical stability of the former than the latter upon heating and compression as revealed in our in-situ Brillouin light scattering (BLS) experiments. Our studies demonstrate the limitation of the resulting density as a structural indicator of polyamorphism, and point out the importance of temperature during compression in order to fundamentally understand HDA silica.

  19. Compressing DNA sequence databases with coil

    Directory of Open Access Journals (Sweden)

    Hendy Michael D

    2008-05-01

    Background: Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression – an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results: We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression – the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion: coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.
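
    As a baseline for why DNA is compressible at all, a fixed-rate 2-bit-per-base packing can be sketched (this is not coil's edit-tree coding, which additionally exploits redundancy between sequences; it only shows the per-base saving over 8-bit text):

```python
def pack_dna(seq):
    """Pack an A/C/G/T string at 2 bits per base into one integer.

    A sentinel bit is prepended so leading 'A's (encoded as zeros)
    survive the round trip.
    """
    code = {"A": 0, "C": 1, "G": 2, "T": 3}
    bits = 0
    for base in seq:
        bits = (bits << 2) | code[base]
    return (1 << (2 * len(seq))) | bits

def unpack_dna(packed):
    """Peel off 2 bits at a time until only the sentinel remains."""
    bases = "ACGT"
    out = []
    while packed > 1:
        out.append(bases[packed & 3])
        packed >>= 2
    return "".join(reversed(out))

seq = "GATTACA"
packed = pack_dna(seq)
```

This alone gives a fixed 4:1 ratio over ASCII; dictionary- and edit-based coders like coil gain further by referencing near-duplicate sequences.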

  20. Excessive chest compression rate is associated with insufficient compression depth in prehospital cardiac arrest

    NARCIS (Netherlands)

    Monsieurs, Koenraad G.; De Regge, Melissa; Vansteelandt, Kristof; De Smet, Jeroen; Annaert, Emmanuel; Lemoyne, Sabine; Kalmar, Alain F.; Calle, Paul A.

    2012-01-01

    Background and goal of study: The relationship between chest compression rate and compression depth is unknown. In order to characterise this relationship, we performed an observational study in prehospital cardiac arrest patients. We hypothesised that faster compressions are associated with

  1. Compressive sensing in medical imaging.

    Science.gov (United States)

    Graff, Christian G; Sidky, Emil Y

    2015-03-10

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed.

  2. Mammography image compression using Wavelet

    International Nuclear Information System (INIS)

    Azuhar Ripin; Md Saion Salikin; Wan Hazlinda Ismail; Asmaliza Hashim; Norriza Md Isa

    2004-01-01

    Image compression plays an important role in many applications like medical imaging, televideo conferencing, remote sensing, document and facsimile transmission, which depend on the efficient manipulation, storage, and transmission of binary, gray scale, or color images. In medical imaging applications such as Picture Archiving and Communication Systems (PACS), the image size or image stream size is too large and requires a large amount of storage space or high bandwidth for communication. Image compression techniques are divided into two categories, namely lossy and lossless compression. The wavelet method used in this project is a lossless compression method. In this method, the exact original mammography image data can be recovered. In this project, mammography images are digitized by using a Vidar Sierra Plus digitizer. The digitized images are compressed by using this wavelet image compression technique. Interactive Data Language (IDL) numerical and visualization software is used to perform all of the calculations and to generate and display all of the compressed images. Results of this project are presented in this paper. (Author)
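
    A minimal example of a lossless (integer-to-integer) wavelet step is the Haar transform implemented with lifting. This sketch illustrates the exact reversibility that lossless wavelet coders rely on; it is not the specific filter bank used in the project.

```python
def haar_forward(data):
    """One level of the integer Haar (S) transform via lifting.

    Integer-to-integer, hence exactly invertible: the basis of
    lossless wavelet coding. Input length must be even.
    """
    approx, detail = [], []
    for i in range(0, len(data), 2):
        a, b = data[i], data[i + 1]
        d = b - a             # detail (difference)
        s = a + (d >> 1)      # approximation (floored mean)
        approx.append(s)
        detail.append(d)
    return approx, detail

def haar_inverse(approx, detail):
    """Undo the lifting steps in reverse order; recovers ints exactly."""
    out = []
    for s, d in zip(approx, detail):
        a = s - (d >> 1)
        out.extend([a, a + d])
    return out

row = [52, 55, 61, 66, 70, 61, 64, 73]
approx, detail = haar_forward(row)
```

The detail coefficients of smooth image rows are small integers, which entropy-code far more compactly than the raw pixels, while the inverse transform reproduces the pixels bit-for-bit.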

  3. Streaming Compression of Hexahedral Meshes

    Energy Technology Data Exchange (ETDEWEB)

    Isenburg, M; Courbet, C

    2010-02-03

    We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra, our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream documents when vertices are referenced for the last time (i.e. when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. This means in practice that our coder has only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes - even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k-hexahedra 'blade' mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB) with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.

  4. Evaluation of mammogram compression efficiency

    International Nuclear Information System (INIS)

    Przelaskowski, A.; Surowski, P.; Kukula, A.

    2005-01-01

    Lossy image coding significantly improves performance over lossless methods, but a reliable control of diagnostic accuracy regarding compressed images is necessary. The acceptable range of compression ratios must be safe with respect to as many objective criteria as possible. This study evaluates the compression efficiency of digital mammograms in both numerically lossless (reversible) and lossy (irreversible) manners. Effective compression methods and concepts were examined to increase archiving and telediagnosis performance. Lossless compression, as a primary applicable tool for medical applications, was verified on a set of 131 mammograms. Moreover, nine radiologists participated in the evaluation of lossy compression of mammograms. Subjective rating of diagnostically important features brought a set of mean rates given for each test image. The lesion detection test resulted in binary decision data analyzed statistically. The radiologists rated and interpreted malignant and benign lesions, representative pathology symptoms, and other structures susceptible to compression distortions contained in 22 original and 62 reconstructed mammograms. Test mammograms were collected in two radiology centers over three years and then selected according to diagnostic content suitable for an evaluation of compression effects. Lossless compression efficiency of the tested coders varied, but CALIC, JPEG-LS, and SPIHT performed the best. The evaluation of lossy compression effects affecting detection ability was based on ROC-like analysis. Assuming a two-sided significance level of p=0.05, the null hypothesis that lower bit rate reconstructions are as useful for diagnosis as the originals was false in sensitivity tests with 0.04 bpp mammograms. However, verification of the same hypothesis with 0.1 bpp reconstructions suggested their acceptance. Moreover, the 1 bpp reconstructions were rated very similarly to the original mammograms in the diagnostic quality evaluation test, but the

  5. Watermark Compression in Medical Image Watermarking Using Lempel-Ziv-Welch (LZW) Lossless Compression Technique.

    Science.gov (United States)

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohd; Ali, Mushtaq

    2016-04-01

    In teleradiology, image contents may be altered due to noisy communication channels and hacker manipulation. Medical image data are very sensitive and cannot tolerate any illegal change; analysis based on illegally changed images could result in wrong medical decisions. Digital watermarking can be used to authenticate images and to detect, as well as recover, illegal changes made to teleradiology images. Watermarking of medical images with heavy-payload watermarks causes perceptual degradation of the image, which directly affects medical diagnosis. To maintain the perceptual and diagnostic quality of the image during watermarking, the watermark should be losslessly compressed. This paper focuses on watermarking of ultrasound medical images with Lempel-Ziv-Welch (LZW) lossless-compressed watermarks. Lossless compression reduces the watermark payload without data loss. In this research work, the watermark is the combination of a defined region of interest (ROI) and the image watermarking secret key. The performance of the LZW compression technique was compared with other conventional compression methods based on compression ratio. LZW was found better and was used for lossless watermark compression in ultrasound medical image watermarking. Tabulated results show the reduction in watermark bits, and image watermarking with effective tamper detection and lossless recovery.
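
    The LZW scheme used here for the watermark payload can be sketched as follows (a textbook implementation, not the authors' code): the coder grows a phrase dictionary on the fly and emits one code per longest known phrase, and the decoder rebuilds the same dictionary without it ever being transmitted.

```python
def lzw_compress(data):
    """LZW over bytes: start from all single-byte phrases, grow greedily."""
    dictionary = {bytes([i]): i for i in range(256)}
    phrase, codes = b"", []
    for byte in data:
        candidate = phrase + bytes([byte])
        if candidate in dictionary:
            phrase = candidate
        else:
            codes.append(dictionary[phrase])
            dictionary[candidate] = len(dictionary)
            phrase = bytes([byte])
    if phrase:
        codes.append(dictionary[phrase])
    return codes

def lzw_decompress(codes):
    """Mirror the coder's dictionary growth; handles the one tricky case
    where a code refers to the entry currently being built."""
    if not codes:
        return b""
    dictionary = {i: bytes([i]) for i in range(256)}
    prev = dictionary[codes[0]]
    out = [prev]
    for code in codes[1:]:
        entry = dictionary.get(code, prev + prev[:1])
        out.append(entry)
        dictionary[len(dictionary)] = prev + entry[:1]
        prev = entry
    return b"".join(out)

payload = b"TOBEORNOTTOBEORTOBEORNOT"
codes = lzw_compress(payload)
```

For a repetitive payload like the ROI-plus-key watermark, the code count drops well below the byte count, which is exactly the payload reduction the paper exploits.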

  6. Comparison of the effectiveness of compression stockings and layer compression systems in venous ulceration treatment

    Science.gov (United States)

    Jawień, Arkadiusz; Cierzniakowska, Katarzyna; Cwajda-Białasik, Justyna; Mościcka, Paulina

    2010-01-01

    Introduction: The aim of the research was to compare the dynamics of venous ulcer healing when treated with the use of compression stockings as well as original two- and four-layer bandage systems. Material and methods: A group of 46 patients suffering from venous ulcers was studied. This group consisted of 36 (78.3%) women and 10 (21.7%) men aged between 41 and 88 years (the average age was 66.6 years and the median was 67). Patients were randomized into three groups, for treatment with the ProGuide two-layer system, Profore four-layer compression, and with the use of compression stockings class II. In the case of multi-layer compression, compression ensuring 40 mmHg blood pressure at ankle level was used. Results: In all patients, independently of the type of compression therapy, statistically significant changes in ulceration area over time were observed (Student’s t test for matched pairs, p ulceration area in each of the successive measurements was observed in patients treated with the four-layer system – on average 0.63 cm2/per week. The smallest loss of ulceration area was observed in patients using compression stockings – on average 0.44 cm2/per week. However, the observed differences were not statistically significant (Kruskal-Wallis test H = 4.45, p > 0.05). Conclusions: A systematic compression therapy, applied with preliminary blood pressure of 40 mmHg, is an effective method of conservative treatment of venous ulcers. Compression stockings and prepared systems of multi-layer compression were characterized by similar clinical effectiveness. PMID:22419941

  7. Correlations between quality indexes of chest compression.

    Science.gov (United States)

    Zhang, Feng-Ling; Yan, Li; Huang, Su-Fang; Bai, Xiang-Jun

    2013-01-01

    Cardiopulmonary resuscitation (CPR) is an emergency treatment for cardiopulmonary arrest, and chest compression is the most important and necessary part of CPR. The American Heart Association published the new Guidelines for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care in 2010, calling for better performance of chest compression practice, especially in compression depth and rate. The current study aimed to explore the relationships among quality indexes of chest compression and to identify the key points in chest compression training and practice. A total of 219 healthcare workers received chest compression training using the Laerdal ACLS advanced life support resuscitation model. The quality indexes of chest compression, including compression hands placement, compression rate, compression depth, and chest wall recoil, as well as self-reported fatigue time, were monitored by the Laerdal Computer Skills and Reporting System. The quality of chest compression was related to the gender of the compressor. The indexes in males, including self-reported fatigue time, the accuracy of compression depth, the compression rate, and the accuracy of compression rate, were higher than those in females; however, the accuracy of chest recoil was higher in females than in males. The quality indexes of chest compression were correlated with each other, and the self-reported fatigue time was related to all the indexes except the compression rate. It is necessary to offer CPR training courses regularly. In clinical practice, it might be better to change the practitioner before fatigue sets in, especially for female or weaker practitioners. In training projects, more attention should be paid to the control of compression rate, in order to delay fatigue, guarantee sufficient compression depth and improve the quality of chest compression.

  8. Does the quality of chest compressions deteriorate when the chest compression rate is above 120/min?

    Science.gov (United States)

    Lee, Soo Hoon; Kim, Kyuseok; Lee, Jae Hyuk; Kim, Taeyun; Kang, Changwoo; Park, Chanjong; Kim, Joonghee; Jo, You Hwan; Rhee, Joong Eui; Kim, Dong Hoon

    2014-08-01

    The quality of chest compressions along with defibrillation is the cornerstone of cardiopulmonary resuscitation (CPR), which is known to improve the outcome of cardiac arrest. We aimed to investigate the relationship between the compression rate and other CPR quality parameters including compression depth and recoil. A conventional CPR training for lay rescuers was performed 2 weeks before the 'CPR contest'. CPR Anytime training kits were distributed to the participants for self-training in their own time. The participants were tested for two-person CPR in pairs. The quantitative and qualitative data regarding the quality of CPR were collected from a standardised check list and SkillReporter, and compared by compression rate. A total of 161 teams consisting of 322 students, including 116 men and 206 women, participated in the CPR contest. The mean depth and rate for chest compression were 49.0±8.2 mm and 110.2±10.2/min. Significantly deeper chest compressions were noted at rates over 120/min than at any other rates (47.0±7.4, 48.8±8.4, 52.3±6.7 mm, p=0.008). Chest compression depth was proportional to chest compression rate (r=0.206, p<0.001), and there were significant differences in the quality of chest compression, including chest compression depth and chest recoil, by chest compression rate. Further evaluation regarding the upper limit of the chest compression rate is needed to ensure complete full chest wall recoil while maintaining an adequate chest compression depth. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  9. Subjective evaluation of compressed image quality

    Science.gov (United States)

    Lee, Heesub; Rowberg, Alan H.; Frank, Mark S.; Choi, Hyung-Sik; Kim, Yongmin

    1992-05-01

    Lossy data compression generates distortion or error in the reconstructed image, and the distortion becomes visible as the compression ratio increases. Even at the same compression ratio, the distortion appears differently depending on the compression method used. Because of the nonlinearity of the human visual system and of lossy data compression methods, we have subjectively evaluated the quality of medical images compressed with two different methods: an intraframe and an interframe coding algorithm. The evaluated raw data were analyzed statistically to measure interrater reliability and the reliability of an individual reader. Also, analysis of variance was used to identify which compression method is statistically better, and from what compression ratio the quality of a compressed image is evaluated as poorer than that of the original. Nine x-ray CT head images from three patients were used as test cases. Six radiologists participated in reading the 99 images (some were duplicates) compressed at four different compression ratios: original, 5:1, 10:1, and 15:1. The six readers agreed more than by chance alone and their agreement was statistically significant, but there were large variations among readers as well as within a reader. The displacement estimated interframe coding algorithm is significantly better in quality than the 2-D block DCT at significance level 0.05. Also, 10:1 compressed images with the interframe coding algorithm do not show any significant differences from the original at level 0.05.

  10. Comparison of chest compression quality between the modified chest compression method with the use of smartphone application and the standardized traditional chest compression method during CPR.

    Science.gov (United States)

    Park, Sang-Sub

    2014-01-01

    The purpose of this study is to determine the difference in chest compression quality between a modified chest compression method using a smartphone application and the standardized traditional chest compression method. Participants were 64 people, after excluding 6 absentees from the 70 who agreed to participate on completing the CPR curriculum. Participants using the modified chest compression method formed the smartphone group (33 people); those using the standardized method formed the traditional group (31 people). Both groups used the same practice manikin and evaluation manikin, while the smartphone group ran the application on the Android and iOS operating systems (OS) of 2 smartphone products (G, i). Measurements were conducted from September 25th to 26th, 2012, and the data were analyzed with the SPSS WIN 12.0 program. As a result, compression depth was more appropriate (p < 0.01) in the traditional group (53.77 mm) than in the smartphone group (48.35 mm), and the proportion of proper chest compressions (%) was also higher (p < 0.05) in the traditional group (73.96%) than in the smartphone group (60.51%). The traditional group (3.83 points) likewise reported higher awareness of chest compression accuracy (p < 0.001) than the smartphone group (2.32 points). In an additional one-question survey administered only to the smartphone group, the main reasons given against the modified method were hand-back pain of the rescuer (48.5%) and unstable posture (21.2%).

  11. Wellhead compression

    Energy Technology Data Exchange (ETDEWEB)

    Harrington, Joe [Sertco Industries, Inc., Okemah, OK (United States); Vazquez, Daniel [Hoerbiger Service Latin America Inc., Deerfield Beach, FL (United States); Jacobs, Denis Richard [Hoerbiger do Brasil Industria de Equipamentos, Cajamar, SP (Brazil)

    2012-07-01

    Over time, all wells experience a natural decline in oil and gas production. In gas wells, the major problems are liquid loading and low downhole differential pressures, which negatively impact total gas production. As a form of artificial lift, wellhead compressors help reduce the tubing pressure, resulting in gas velocities above the critical velocity needed to surface water, oil and condensate, regaining lost production and increasing recoverable reserves. Best results come from reservoirs with high porosity, high permeability, high initial flow rates, low decline rates and high total cumulative production. In oil wells, excessive annulus gas pressure tends to inhibit both oil and gas production. Wellhead compression packages can provide a cost-effective solution to these problems by reducing the system pressure in the tubing or annulus, allowing for an immediate increase in production rates. Wells furthest from the gathering compressor typically benefit the most from wellhead compression due to system pressure drops. Downstream compressors also benefit from higher suction pressures, reducing overall compression horsepower requirements. Special care must be taken in selecting the best equipment for these applications. The successful implementation of wellhead compression from an economical standpoint hinges on the testing, installation and operation of the equipment. Key challenges, suggested equipment features designed to combat those challenges, and successful case histories throughout Latin America are discussed below. (author)

  12. Reactive scattering with row-orthonormal hyperspherical coordinates. 4. Four-dimensional-space Wigner rotation function for pentaatomic systems.

    Science.gov (United States)

    Kuppermann, Aron

    2011-05-14

    The row-orthonormal hyperspherical coordinate (ROHC) approach to calculating state-to-state reaction cross sections and bound state levels of N-atom systems requires the use of angular momentum tensors and Wigner rotation functions in a space of dimension N - 1. The properties of those tensors and functions are discussed for arbitrary N and determined for N = 5 in terms of the 6 Euler angles involved in 4-dimensional space.

  13. Tree compression with top trees

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Landau, Gad M.

    2013-01-01

    We introduce a new compression scheme for labeled trees based on top trees [3]. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

  14. Tree compression with top trees

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Landau, Gad M.

    2015-01-01

    We introduce a new compression scheme for labeled trees based on top trees. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

  15. Generalized massive optimal data compression

    Science.gov (United States)

    Alsing, Justin; Wandelt, Benjamin

    2018-05-01

    In this paper, we provide a general procedure for optimally compressing N data down to n summary statistics, where n is equal to the number of parameters of interest. We show that compression to the score function - the gradient of the log-likelihood with respect to the parameters - yields n compressed statistics that are optimal in the sense that they preserve the Fisher information content of the data. Our method generalizes earlier work on linear Karhunen-Loève compression for Gaussian data whilst recovering both lossless linear compression and quadratic estimation as special cases when they are optimal. We give a unified treatment that also includes the general non-Gaussian case as long as mild regularity conditions are satisfied, producing optimal non-linear summary statistics when appropriate. As a worked example, we derive explicitly the n optimal compressed statistics for Gaussian data in the general case where both the mean and covariance depend on the parameters.
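For the Gaussian worked example mentioned in this record, the score compression can be written explicitly. With data $d$, mean $\mu(\theta)$ and covariance $C(\theta)$, each compressed statistic is a component of the score evaluated at fiducial parameters $\theta_*$ (a sketch of the standard Gaussian log-likelihood gradient, not reproduced from the paper):

```latex
t_\alpha = \left.\frac{\partial \ln \mathcal{L}}{\partial \theta_\alpha}\right|_{\theta_*}
= \frac{\partial \mu^{\mathsf T}}{\partial \theta_\alpha}\, C^{-1} (d - \mu)
+ \frac{1}{2} (d - \mu)^{\mathsf T} C^{-1} \frac{\partial C}{\partial \theta_\alpha} C^{-1} (d - \mu)
- \frac{1}{2} \operatorname{tr}\!\left( C^{-1} \frac{\partial C}{\partial \theta_\alpha} \right)
```

This gives one statistic per parameter, n in total, and by construction the set preserves the Fisher information of the original N data points.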

  16. Experimental and numerical study of two-phase flows at the inlet of evaporators in vapour compression cycles; Etude experimentale et numerique d'ecoulements diphasiques a l'entree des evaporateurs de cycles thermodynamiques

    Energy Technology Data Exchange (ETDEWEB)

    Ahmad, M

    2007-09-15

    Maldistribution of liquid-vapour two-phase flows causes a significant decrease in the thermal and hydraulic performance of evaporators in thermodynamic vapour compression cycles. A first experimental installation was used to visualize the two-phase flow evolution between the expansion valve and the evaporator inlet. A second experimental set-up simulating a compact heat exchanger was designed to identify the functional and geometrical parameters creating the best distribution of the two phases in the different channels. An analysis and understanding of the relation between the geometrical and functional parameters, the flow pattern inside the header, and the two-phase distribution has been established. Numerical simulations of a stratified flow and a stratified jet flow have been carried out using two CFD codes: FLUENT and NEPTUNE. In the case of a fragmented jet configuration, a global definition of the interfacial area concentration for separated-phase and dispersed-phase flows has been established and a model calculating the fragmented mass fraction has been developed. (author)

  17. 29 CFR 1917.154 - Compressed air.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 7 2010-07-01 2010-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed a...

  18. Image quality (IQ) guided multispectral image compression

    Science.gov (United States)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT, discrete cosine transform), JPEG 2000 (DWT, discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW, Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image will be measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and Structural Similarity (SSIM) index. Given an image and a specified IQ, we will investigate how to select a compression method and its parameters to achieve an expected compression. Our scenario consists of 3 steps. The first step is to compress a set of interested images by varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ-measurement versus compression-parameter relation from a number of compressed images. The third step is to compress the given image at the specified IQ using the selected compression method (JPEG, JPEG2000, BPG, or TIFF) according to the regressed models. The IQ may be specified by a compression ratio (e.g., 100), in which case we will select the compression method of the highest IQ (SSIM, or PSNR); or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we will select the compression method of the highest compression ratio. Our experiments on thermal (long-wave infrared) images (in gray scale) showed very promising results.
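Of the IQ metrics listed in this record, PSNR is the simplest to compute from the per-pixel mean squared error. A minimal sketch (operating on flat pixel lists with an 8-bit peak value; not tied to the paper's pipeline):

```python
import math

def psnr(img_a, img_b, peak=255.0):
    """Peak signal-to-noise ratio between two equal-sized pixel sequences.

    PSNR = 10 * log10(peak^2 / MSE); identical images give infinity.
    """
    n = len(img_a)
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / n
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(peak * peak / mse)
```

For example, two images that differ everywhere by 10 gray levels have MSE = 100 and hence PSNR = 10 log10(255^2/100) ≈ 28.13 dB, which is how a target like "PSNR = 50" in the text maps back to an allowed error level.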

  19. Application specific compression : final report.

    Energy Technology Data Exchange (ETDEWEB)

    Melgaard, David Kennett; Byrne, Raymond Harry; Myers, Daniel S.; Harrison, Carol D.; Lee, David S.; Lewis, Phillip J.; Carlson, Jeffrey J.

    2008-12-01

    With the continuing development of more capable data gathering sensors comes an increased demand on the bandwidth for transmitting larger quantities of data. To help counteract that trend, a study was undertaken to determine appropriate lossy data compression strategies for minimizing their impact on target detection and characterization. The survey of current compression techniques led us to the conclusion that wavelet compression was well suited for this purpose. Wavelet analysis essentially applies a low-pass and a high-pass filter to the data, converting the data into related coefficients that maintain spatial information as well as frequency information. Wavelet compression is achieved by zeroing the coefficients that pertain to the noise in the signal, i.e. the high-frequency, low-amplitude portion. This approach is well suited for our goal because it reduces the noise in the signal with only minimal impact on the larger, lower-frequency target signatures. The resulting coefficients can then be encoded using lossless techniques with higher compression levels because of the lower entropy and significant number of zeros. No significant signal degradation or difficulties in target characterization or detection were observed or measured when wavelet compression was applied to simulated and real data, even when over 80% of the coefficients were zeroed. While the exact level of compression will be data-set dependent, for the data sets we studied, compression factors over 10 were found to be satisfactory where conventional lossless techniques achieved levels of less than 3.
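The low-pass/high-pass split and coefficient zeroing described in this record can be illustrated with a single-level Haar transform (a toy sketch; the report's actual wavelet family and threshold choice are not stated here):

```python
def haar_forward(x):
    """Single-level Haar split: pairwise averages (low-pass)
    and pairwise half-differences (high-pass). Length must be even."""
    avg = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
    diff = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
    return avg, diff

def haar_inverse(avg, diff):
    """Exact inverse of haar_forward."""
    out = []
    for a, d in zip(avg, diff):
        out.extend([a + d, a - d])
    return out

def threshold(coeffs, eps):
    """Zero the small (noise-like) coefficients, as the study describes."""
    return [0.0 if abs(c) < eps else c for c in coeffs]
```

Zeroing the small high-pass coefficients removes low-amplitude, high-frequency content while the low-pass averages, which carry the larger target-like features, are kept intact; the resulting runs of zeros are what makes the subsequent lossless encoding effective.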

  20. Compressibility of the protein-water interface

    Science.gov (United States)

    Persson, Filip; Halle, Bertil

    2018-06-01

    The compressibility of a protein relates to its stability, flexibility, and hydrophobic interactions, but the measurement, interpretation, and computation of this important thermodynamic parameter present technical and conceptual challenges. Here, we present a theoretical analysis of protein compressibility and apply it to molecular dynamics simulations of four globular proteins. Using additively weighted Voronoi tessellation, we decompose the solution compressibility into contributions from the protein and its hydration shells. We find that positively cross-correlated protein-water volume fluctuations account for more than half of the protein compressibility that governs the protein's pressure response, while the self correlations correspond to small (˜0.7%) fluctuations of the protein volume. The self compressibility is nearly the same as for ice, whereas the total protein compressibility, including cross correlations, is ˜45% of the bulk-water value. Taking the inhomogeneous solvent density into account, we decompose the experimentally accessible protein partial compressibility into intrinsic, hydration, and molecular exchange contributions and show how they can be computed with good statistical accuracy despite the dominant bulk-water contribution. The exchange contribution describes how the protein solution responds to an applied pressure by redistributing water molecules from lower to higher density; it is negligibly small for native proteins, but potentially important for non-native states. Because the hydration shell is an open system, the conventional closed-system compressibility definitions yield a pseudo-compressibility. We define an intrinsic shell compressibility, unaffected by occupation number fluctuations, and show that it approaches the bulk-water value exponentially with a decay "length" of one shell, less than the bulk-water compressibility correlation length. In the first hydration shell, the intrinsic compressibility is 25%-30% lower than in bulk water.
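The isothermal compressibility analyzed in this record is computed in simulations from volume fluctuations; the standard fluctuation relation (a textbook identity, not reproduced from the paper) is:

```latex
\kappa_T = -\frac{1}{\langle V \rangle}\left(\frac{\partial \langle V \rangle}{\partial P}\right)_T
= \frac{\langle \delta V^2 \rangle}{k_B T \, \langle V \rangle},
\qquad \delta V = V - \langle V \rangle
```

Splitting $\delta V$ into protein and hydration-shell parts is what turns the variance $\langle \delta V^2 \rangle$ into the self and cross-correlation contributions the abstract discusses.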

  1. Compressibility of the protein-water interface.

    Science.gov (United States)

    Persson, Filip; Halle, Bertil

    2018-06-07

    The compressibility of a protein relates to its stability, flexibility, and hydrophobic interactions, but the measurement, interpretation, and computation of this important thermodynamic parameter present technical and conceptual challenges. Here, we present a theoretical analysis of protein compressibility and apply it to molecular dynamics simulations of four globular proteins. Using additively weighted Voronoi tessellation, we decompose the solution compressibility into contributions from the protein and its hydration shells. We find that positively cross-correlated protein-water volume fluctuations account for more than half of the protein compressibility that governs the protein's pressure response, while the self correlations correspond to small (∼0.7%) fluctuations of the protein volume. The self compressibility is nearly the same as for ice, whereas the total protein compressibility, including cross correlations, is ∼45% of the bulk-water value. Taking the inhomogeneous solvent density into account, we decompose the experimentally accessible protein partial compressibility into intrinsic, hydration, and molecular exchange contributions and show how they can be computed with good statistical accuracy despite the dominant bulk-water contribution. The exchange contribution describes how the protein solution responds to an applied pressure by redistributing water molecules from lower to higher density; it is negligibly small for native proteins, but potentially important for non-native states. Because the hydration shell is an open system, the conventional closed-system compressibility definitions yield a pseudo-compressibility. We define an intrinsic shell compressibility, unaffected by occupation number fluctuations, and show that it approaches the bulk-water value exponentially with a decay "length" of one shell, less than the bulk-water compressibility correlation length. In the first hydration shell, the intrinsic compressibility is 25%-30% lower than in bulk water.

  2. Cosmological Particle Data Compression in Practice

    Science.gov (United States)

    Zeyen, M.; Ahrens, J.; Hagen, H.; Heitmann, K.; Habib, S.

    2017-12-01

    In cosmological simulations trillions of particles are handled and several terabytes of unstructured particle data are generated in each time step. Transferring this data directly from memory to disk in an uncompressed way results in a massive load on I/O and storage systems. Hence, one goal of domain scientists is to compress the data before storing it to disk while minimizing the loss of information. To prevent reading back uncompressed data from disk, this can be done in an in-situ process. Since the simulation continuously generates data, the available time for the compression of one time step is limited. Therefore, the evaluation of compression techniques has shifted from only focusing on compression rates to include run-times and scalability. In recent years several compression techniques for cosmological data have become available. These techniques can be either lossy or lossless. For both cases, this study aims to evaluate and compare the state of the art compression techniques for unstructured particle data. This study focuses on the techniques available in the Blosc framework with its multi-threading support, the XZ Utils toolkit with the LZMA algorithm that achieves high compression rates, and the widespread FPZIP and ZFP methods for lossy compression. For the investigated compression techniques, quantitative performance indicators such as compression rates, run-time/throughput, and reconstruction errors are measured. Based on these factors, this study offers a comprehensive analysis of the individual techniques and discusses their applicability for in-situ compression. In addition, domain specific measures are evaluated on the reconstructed data sets, and the relative error rates and statistical properties are analyzed and compared. Based on this study, future challenges and directions in the compression of unstructured cosmological particle data were identified.
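The compression-rate vs. run-time trade-off this record describes can be probed directly with stdlib codecs: zlib is DEFLATE-based, and LZMA is the algorithm behind the XZ Utils toolkit named above (a quick sketch, not the study's benchmark harness):

```python
import lzma
import zlib

def compression_ratios(raw: bytes) -> dict:
    """Compression ratio (original size / compressed size) for two codecs.

    Higher presets/levels trade run-time for ratio, which is exactly the
    in-situ tension the study evaluates.
    """
    return {
        "zlib": len(raw) / len(zlib.compress(raw, level=9)),
        "lzma": len(raw) / len(lzma.compress(raw, preset=6)),
    }
```

On structured or repetitive byte streams both ratios are well above 1, with LZMA usually higher at a higher CPU cost; timing the same calls (e.g. with `time.perf_counter`) reproduces the throughput side of the comparison.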

  3. EFFECTIVENESS OF ADJUVANT USE OF POSTERIOR MANUAL COMPRESSION WITH GRADED COMPRESSION IN THE SONOGRAPHIC DIAGNOSIS OF ACUTE APPENDICITIS

    Directory of Open Access Journals (Sweden)

    Senthilnathan V

    2018-01-01

    Full Text Available BACKGROUND Diagnosing appendicitis by graded compression ultrasonogram is a difficult task because of limiting factors such as operator-dependent technique, retrocaecal location of the appendix and patient obesity. The posterior manual compression technique visualizes the appendix better in the grey-scale ultrasonogram. The aim of this study is to determine the accuracy of ultrasound in detecting or excluding acute appendicitis and to evaluate the usefulness of the adjuvant use of the posterior manual compression technique in visualization of the appendix and in the diagnosis of acute appendicitis. MATERIALS AND METHODS This prospective study involved a total of 240 patients of all age groups and both sexes. All these patients underwent USG for suspected appendicitis. Ultrasonography was performed with transverse and longitudinal graded compression sonography. If the appendix was not visualized on graded compression sonography, the posterior manual compression technique was used to further improve the detection of the appendix. RESULTS The vermiform appendix was visualized in 185 patients (77.1%) out of 240 patients with graded compression alone. The 55 out of 240 patients whose appendix could not be visualized by graded compression alone were subjected to graded compression followed by the posterior manual compression technique; among these, the appendix was visualized in 43 patients (78.2% of the 55) with the posterior manual compression technique, and could not be visualized in the remaining 12 patients (21.8%). CONCLUSION The combined method of graded compression with the posterior manual compression technique is better than the graded compression technique alone in diagnostic accuracy and detection rate of the vermiform appendix.

  4. A statistical–mechanical view on source coding: physical compression and data compression

    International Nuclear Information System (INIS)

    Merhav, Neri

    2011-01-01

    We draw a certain analogy between the classical information-theoretic problem of lossy data compression (source coding) of memoryless information sources and the statistical–mechanical behavior of a certain model of a chain of connected particles (e.g. a polymer) that is subjected to a contracting force. The free energy difference pertaining to such a contraction turns out to be proportional to the rate-distortion function in the analogous data compression model, and the contracting force is proportional to the derivative of this function. Beyond the fact that this analogy may be interesting in its own right, it may provide a physical perspective on the behavior of optimum schemes for lossy data compression (and perhaps also an information-theoretic perspective on certain physical system models). Moreover, it triggers the derivation of lossy compression performance for systems with memory, using analysis tools and insights from statistical mechanics
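As a concrete anchor for the analogy in this record, the rate-distortion function that plays the role of the free energy is, for a memoryless Gaussian source of variance $\sigma^2$ under squared-error distortion (a textbook result, not derived in the abstract):

```latex
R(D) = \max\!\left( \tfrac{1}{2} \log_2 \frac{\sigma^2}{D},\; 0 \right)
```

In the chain-of-particles picture, the free energy difference of the contraction is proportional to $R(D)$ and the contracting force to its derivative $R'(D)$, which for $D < \sigma^2$ is $-1/(2D \ln 2)$ per unit distortion.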

  5. Nonlinear viscoelasticity of pre-compressed layered polymeric composite under oscillatory compression

    KAUST Repository

    Xu, Yangguang

    2018-05-03

    Describing nonlinear viscoelastic properties of polymeric composites when subjected to dynamic loading is essential for development of practical applications of such materials. An efficient and easy method to analyze nonlinear viscoelasticity remains elusive because the dynamic moduli (storage modulus and loss modulus) are not very convenient when the material falls into nonlinear viscoelastic range. In this study, we utilize two methods, Fourier transform and geometrical nonlinear analysis, to quantitatively characterize the nonlinear viscoelasticity of a pre-compressed layered polymeric composite under oscillatory compression. We discuss the influences of pre-compression, dynamic loading, and the inner structure of polymeric composite on the nonlinear viscoelasticity. Furthermore, we reveal the nonlinear viscoelastic mechanism by combining with other experimental results from quasi-static compressive tests and microstructural analysis. From a methodology standpoint, it is proved that both Fourier transform and geometrical nonlinear analysis are efficient tools for analyzing the nonlinear viscoelasticity of a layered polymeric composite. From a material standpoint, we consequently posit that the dynamic nonlinear viscoelasticity of polymeric composites with complicated inner structures can also be well characterized using these methods.
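The Fourier-transform method this record uses quantifies nonlinearity through higher harmonics in the stress response to a sinusoidal compression: a linear viscoelastic material responds only at the driving frequency, while odd higher harmonics signal nonlinearity. A minimal DFT sketch (naive and assuming the signal is sampled over exactly one period; not the authors' analysis code):

```python
import cmath
import math

def harmonic_magnitudes(signal, n_harm=5):
    """Magnitudes of the first n_harm Fourier harmonics of a periodic
    signal sampled uniformly over exactly one period (naive DFT)."""
    n = len(signal)
    mags = []
    for k in range(1, n_harm + 1):
        s = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        mags.append(2 * abs(s) / n)  # scale so a unit sine gives 1.0
    return mags
```

For an oscillatory test, the ratio of the third-harmonic to first-harmonic magnitude (often called I3/I1) is a common scalar measure of the nonlinear viscoelasticity discussed above.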

  6. Effect of compressibility on the hypervelocity penetration

    Science.gov (United States)

    Song, W. J.; Chen, X. W.; Chen, P.

    2018-02-01

    We further consider the effect of rod strength by employing the compressible penetration model to study the effect of compressibility on hypervelocity penetration. Meanwhile, we define different instances of penetration efficiency in various modified models and compare these penetration efficiencies to identify the effects of different factors in the compressible model. To systematically discuss the effect of compressibility in different metallic rod-target combinations, we construct three cases, i.e., the penetrations by the more compressible rod into the less compressible target, rod into the analogously compressible target, and the less compressible rod into the more compressible target. The effects of volumetric strain, internal energy, and strength on the penetration efficiency are analyzed simultaneously. It indicates that the compressibility of the rod and target increases the pressure at the rod/target interface. The more compressible rod/target has larger volumetric strain and higher internal energy. Both the larger volumetric strain and higher strength enhance the penetration or anti-penetration ability. On the other hand, the higher internal energy weakens the penetration or anti-penetration ability. The two trends conflict, but the volumetric strain dominates in the variation of the penetration efficiency, which would not approach the hydrodynamic limit if the rod and target are not analogously compressible. However, if the compressibility of the rod and target is analogous, it has little effect on the penetration efficiency.
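The hydrodynamic limit referred to in this record is the incompressible Bernoulli result for long-rod penetration, in which the penetration efficiency depends only on the rod and target densities (a standard relation, stated here for context):

```latex
\frac{P}{L} = \sqrt{\frac{\rho_p}{\rho_t}}
```

Here $P$ is the penetration depth, $L$ the rod length, and $\rho_p$, $\rho_t$ the rod and target densities; compressibility, strength, and internal energy shift the efficiency away from this limit in the ways the abstract describes.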

  7. On-Chip Neural Data Compression Based On Compressed Sensing With Sparse Sensing Matrices.

    Science.gov (United States)

    Zhao, Wenfeng; Sun, Biao; Wu, Tong; Yang, Zhi

    2018-02-01

    On-chip neural data compression is an enabling technique for wireless neural interfaces that suffer from insufficient bandwidth and power budgets to transmit the raw data. The data compression algorithm and its implementation should be power and area efficient and functionally reliable over different datasets. Compressed sensing is an emerging technique that has been applied to compress various neurophysiological data. However, the state-of-the-art compressed sensing (CS) encoders leverage random but dense binary measurement matrices, which incur substantial implementation costs on both power and area that could offset the benefits from the reduced wireless data rate. In this paper, we propose two CS encoder designs based on sparse measurement matrices that could lead to efficient hardware implementation. Specifically, two different approaches for the construction of sparse measurement matrices, i.e., the deterministic quasi-cyclic array code (QCAC) matrix and the -sparse random binary matrix [-SRBM], are exploited. We demonstrate that the proposed CS encoders lead to comparable recovery performance, and efficient VLSI architecture designs are proposed for the QCAC-CS and -SRBM encoders with reduced area and total power consumption.
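A sparse binary sensing matrix of the kind this record advocates can be sketched as a matrix with a fixed small number of ones per column, so that each sample contributes to only a few measurements (illustrative only; the paper's deterministic QCAC construction is not reproduced here):

```python
import random

def sparse_binary_matrix(m, n, d, seed=0):
    """m x n binary measurement matrix with exactly d ones per column.

    Sparsity (d << m) is what makes the hardware encoder cheap: each
    input sample is accumulated into only d measurement registers.
    """
    rng = random.Random(seed)
    phi = [[0] * n for _ in range(m)]
    for j in range(n):
        for i in rng.sample(range(m), d):  # choose d distinct rows
            phi[i][j] = 1
    return phi

def measure(phi, x):
    """Compressed measurement y = phi @ x (plain Python dot products)."""
    return [sum(p * xv for p, xv in zip(row, x)) for row in phi]
```

With binary entries the encoder needs only additions, no multipliers, which is the power/area argument the abstract makes against dense random matrices.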

  8. FRESCO: Referential compression of highly similar sequences.

    Science.gov (United States)

    Wandelt, Sebastian; Leser, Ulf

    2013-01-01

In many applications, sets of similar texts or sequences are of high importance. Prominent examples are revision histories of documents or genomic sequences. Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, have gained a lot of interest in this field. In this paper, we propose a general open-source framework to compress large amounts of biological sequence data called Framework for REferential Sequence COmpression (FRESCO). Our basic compression algorithm is shown to be one to two orders of magnitude faster than comparable related work, while achieving similar compression ratios. We also propose several techniques to further increase compression ratios while still retaining the advantage in speed: 1) selecting a good reference sequence; and 2) rewriting a reference sequence to allow for better compression. In addition, we propose a new way of further boosting the compression ratios by applying referential compression to already referentially compressed files (second-order compression). This technique allows for compression ratios far beyond the state of the art, for instance, 4,000:1 and higher for human genomes. We evaluate our algorithms on a large data set from three different species (more than 1,000 genomes, more than 3 TB) and on a collection of versions of Wikipedia pages. Our results show that real-time compression of highly similar sequences at high compression ratios is possible on modern hardware.
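
The core of referential compression is encoding the input as copy operations into a reference plus occasional literals. The following minimal greedy sketch illustrates that idea only; the function names are hypothetical, and FRESCO itself uses indexed reference lookup and further optimizations rather than this naive O(n·m) search.

```python
def ref_compress(target, reference, min_match=4):
    """Greedy referential compression: encode `target` as ('M', pos, length)
    matches into `reference`, plus ('L', char) literals."""
    ops, i = [], 0
    while i < len(target):
        best_pos, best_len = -1, 0
        for p in range(len(reference)):     # naive longest-match search
            l = 0
            while (p + l < len(reference) and i + l < len(target)
                   and reference[p + l] == target[i + l]):
                l += 1
            if l > best_len:
                best_pos, best_len = p, l
        if best_len >= min_match:
            ops.append(("M", best_pos, best_len))
            i += best_len
        else:
            ops.append(("L", target[i]))
            i += 1
    return ops

def ref_decompress(ops, reference):
    """Rebuild the target from match/literal operations and the reference."""
    out = []
    for op in ops:
        if op[0] == "M":
            _, p, l = op
            out.append(reference[p:p + l])
        else:
            out.append(op[1])
    return "".join(out)
```

For highly similar sequences almost everything becomes a handful of long matches, which is why compression ratios in the thousands become possible.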

  9. Comparing biological networks via graph compression

    Directory of Open Access Journals (Sweden)

    Hayashida Morihiro

    2010-09-01

    Full Text Available Abstract Background Comparison of various kinds of biological data is one of the main problems in bioinformatics and systems biology. Data compression methods have been applied to comparison of large sequence data and protein structure data. Since it is still difficult to compare global structures of large biological networks, it is reasonable to try to apply data compression methods to comparison of biological networks. In existing compression methods, the uniqueness of compression results is not guaranteed because there is some ambiguity in selection of overlapping edges. Results This paper proposes novel efficient methods, CompressEdge and CompressVertices, for comparing large biological networks. In the proposed methods, an original network structure is compressed by iteratively contracting identical edges and sets of connected edges. Then, the similarity of two networks is measured by a compression ratio of the concatenated networks. The proposed methods are applied to comparison of metabolic networks of several organisms, H. sapiens, M. musculus, A. thaliana, D. melanogaster, C. elegans, E. coli, S. cerevisiae, and B. subtilis, and are compared with an existing method. These results suggest that our methods can efficiently measure the similarities between metabolic networks. Conclusions Our proposed algorithms, which compress node-labeled networks, are useful for measuring the similarity of large biological networks.
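
The similarity-by-compression idea — compress each object and their concatenation, then compare sizes — can be illustrated generically with the normalized compression distance over byte strings. This sketch uses zlib on sequences purely as an analogy; the paper's methods instead contract identical edges in node-labeled networks and measure the compression ratio of the concatenated networks.

```python
import os
import zlib

def csize(data: bytes) -> int:
    """Compressed size of a byte string at maximum zlib level."""
    return len(zlib.compress(data, 9))

def ncd(a: bytes, b: bytes) -> float:
    """Normalized compression distance: near 0 for similar inputs,
    near 1 for unrelated ones."""
    ca, cb, cab = csize(a), csize(b), csize(a + b)
    return (cab - min(ca, cb)) / max(ca, cb)

a = b"ATGATGATGCCC" * 50
similar = b"ATGATGATGCCC" * 48 + b"ATGATGATGGGG" * 2
unrelated = os.urandom(len(a))              # incompressible noise
```

If `a + b` compresses almost as well as `a` alone, the two inputs share most of their structure, so the distance is small.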

  10. Fixed-Rate Compressed Floating-Point Arrays.

    Science.gov (United States)

    Lindstrom, Peter

    2014-12-01

Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.
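
The fixed-rate property comes from giving every small block the same bit budget regardless of its content. The toy sketch below shows only that idea, via shared-scale quantization of a 1-D block of 4 values; the actual codec uses an orthogonal block transform and embedded coding, neither of which is modeled here.

```python
import numpy as np

def compress_block(block, bits):
    """Fixed-rate toy coder: one shared scale plus len(block) signed
    integers of `bits` bits each, regardless of the data."""
    scale = float(np.max(np.abs(block))) or 1.0   # avoid div-by-zero on all-zero blocks
    qmax = 2 ** (bits - 1) - 1
    q = np.round(block / scale * qmax).astype(np.int32)
    return scale, q

def decompress_block(scale, q, bits):
    """Invert the quantization; error is bounded by scale / (2^(bits-1) - 1)."""
    qmax = 2 ** (bits - 1) - 1
    return q.astype(np.float64) / qmax * scale

rng = np.random.default_rng(1)
block = rng.standard_normal(4)            # one 4-value block (the d = 1 case)
scale, q = compress_block(block, bits=12)
approx = decompress_block(scale, q, 12)
```

Because every block occupies the same number of bits, the offset of any block in the compressed stream is a simple multiplication, which is what enables random access.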

  11. JPEG and wavelet compression of ophthalmic images

    Science.gov (United States)

    Eikelboom, Robert H.; Yogesan, Kanagasingam; Constable, Ian J.; Barry, Christopher J.

    1999-05-01

This study was designed to determine the degree and methods of digital image compression that produce ophthalmic images of sufficient quality for transmission and diagnosis. The photographs of 15 subjects, which included eyes with normal, subtle and distinct pathologies, were digitized to produce 1.54 MB images and compressed by JPEG and wavelet methods to five different sizes. Image quality was assessed in three ways: (i) objectively, by calculating the RMS error between the uncompressed and compressed images; (ii) semi-subjectively, by assessing the visibility of blood vessels; and (iii) subjectively, by asking a number of experienced observers to assess the images for quality and clinical interpretation. Results showed that, as a function of compressed image size, wavelet-compressed images produced less RMS error than JPEG-compressed images. Blood vessel branching could be observed to a greater extent after wavelet compression; for a given image size, wavelet compression produced better images than JPEG compression. Overall, it was shown that images had to be compressed to below 2.5 percent for JPEG and 1.7 percent for wavelet compression before fine detail was lost, or before image quality was too poor to make a reliable diagnosis.
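
The objective criterion used in the study, RMS error between the uncompressed and compressed images, is straightforward to compute; a minimal sketch on toy arrays:

```python
import numpy as np

def rms_error(original, compressed):
    """Root-mean-square error between two images of equal shape."""
    diff = original.astype(np.float64) - compressed.astype(np.float64)
    return np.sqrt(np.mean(diff ** 2))

a = np.zeros((4, 4), dtype=np.uint8)
b = np.full((4, 4), 2, dtype=np.uint8)
print(rms_error(a, b))  # prints 2.0
```

Casting to float before subtracting matters: differencing unsigned 8-bit images directly would wrap around instead of going negative.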

  12. Double-compression method for biomedical images

    Science.gov (United States)

    Antonenko, Yevhenii A.; Mustetsov, Timofey N.; Hamdi, Rami R.; Małecka-Massalska, Teresa; Orshubekov, Nurbek; DzierŻak, RóŻa; Uvaysova, Svetlana

    2017-08-01

This paper describes a double compression method (DCM) for biomedical images. A comparison of image compression factors for JPEG, PNG and the developed DCM was carried out. The main purpose of the DCM is compression of medical images while maintaining the key points that carry diagnostic information. To estimate the minimum compression factor, an analysis of the coding of a random-noise image is presented.

  13. A Compressive Superresolution Display

    KAUST Repository

    Heide, Felix; Gregson, James; Wetzstein, Gordon; Raskar, Ramesh; Heidrich, Wolfgang

    2014-01-01

    In this paper, we introduce a new compressive display architecture for superresolution image presentation that exploits co-design of the optical device configuration and compressive computation. Our display allows for superresolution, HDR, or glasses-free 3D presentation.

  15. Compression evaluation of surgery video recordings retaining diagnostic credibility (compression evaluation of surgery video)

    Science.gov (United States)

    Duplaga, M.; Leszczuk, M. I.; Papir, Z.; Przelaskowski, A.

    2008-12-01

Wider dissemination of medical digital video libraries is affected by two correlated factors: resource-effective content compression and the diagnostic credibility it directly influences. It has been shown that these contradictory requirements can be met halfway for long-lasting, low-motion surgery recordings at compression ratios close to 100 (bronchoscopic procedures were the case study investigated). The main supporting assumption is that the content can be compressed as far as clinicians are not able to sense a loss of video diagnostic fidelity (visually lossless compression). Different market codecs were inspected by means of combined subjective and objective tests of their usability in medical video libraries. Subjective tests involved a panel of clinicians who had to classify compressed bronchoscopic video content according to its quality under the bubble sort algorithm. For objective tests, two metrics (hybrid vector measure and Hosaka plots) were calculated frame by frame and averaged over the whole sequence.

  16. Compression experiments on the TOSKA tokamak

    International Nuclear Information System (INIS)

    Cima, G.; McGuire, K.M.; Robinson, D.C.; Wootton, A.J.

    1980-10-01

    Results from minor radius compression experiments on a tokamak plasma in TOSCA are reported. The compression is achieved by increasing the toroidal field up to twice its initial value in 200μs. Measurements show that particles and magnetic flux are conserved. When the initial energy confinement time is comparable with the compression time, energy gains are greater than for an adiabatic change of state. The total beta value increases. Central beta values approximately 3% are measured when a small major radius compression is superimposed on a minor radius compression. Magnetic field fluctuations are affected: both the amplitude and period decrease. Starting from low energy confinement times, approximately 200μs, increases in confinement times up to approximately 1 ms are measured. The increase in plasma energy results from a large reduction in the power losses during the compression. When the initial energy confinement time is much longer than the compression time, the parameter changes are those expected for an adiabatic change of state. (author)

  17. Context-Aware Image Compression.

    Directory of Open Access Journals (Sweden)

    Jacky C K Chan

Full Text Available We describe a physics-based data compression method inspired by the photonic time stretch, wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementations of warped stretch compression, here the decoding can be performed without the need for phase recovery. We present a rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling.

  18. Compressive Sensing in Communication Systems

    DEFF Research Database (Denmark)

    Fyhn, Karsten

    2013-01-01

    . The need for cheaper, smarter and more energy efficient wireless devices is greater now than ever. This thesis addresses this problem and concerns the application of the recently developed sampling theory of compressive sensing in communication systems. Compressive sensing is the merging of signal...... acquisition and compression. It allows for sampling a signal with a rate below the bound dictated by the celebrated Shannon-Nyquist sampling theorem. In some communication systems this necessary minimum sample rate, dictated by the Shannon-Nyquist sampling theorem, is so high it is at the limit of what...... with using compressive sensing in communication systems. The main contribution of this thesis is two-fold: 1) a new compressive sensing hardware structure for spread spectrum signals, which is simpler than the current state-of-the-art, and 2) a range of algorithms for parameter estimation for the class...

  19. Building indifferentiable compression functions from the PGV compression functions

    DEFF Research Database (Denmark)

    Gauravaram, P.; Bagheri, Nasour; Knudsen, Lars Ramkilde

    2016-01-01

    Preneel, Govaerts and Vandewalle (PGV) analysed the security of single-block-length block cipher based compression functions assuming that the underlying block cipher has no weaknesses. They showed that 12 out of 64 possible compression functions are collision and (second) preimage resistant. Black......, Rogaway and Shrimpton formally proved this result in the ideal cipher model. However, in the indifferentiability security framework introduced by Maurer, Renner and Holenstein, all these 12 schemes are easily differentiable from a fixed input-length random oracle (FIL-RO) even when their underlying block...

  20. CEPRAM: Compression for Endurance in PCM RAM

    OpenAIRE

    González Alberquilla, Rodrigo; Castro Rodríguez, Fernando; Piñuel Moreno, Luis; Tirado Fernández, Francisco

    2017-01-01

    We deal with the endurance problem of Phase Change Memories (PCM) by proposing Compression for Endurance in PCM RAM (CEPRAM), a technique to elongate the lifespan of PCM-based main memory through compression. We introduce a total of three compression schemes based on already existent schemes, but targeting compression for PCM-based systems. We do a two-level evaluation. First, we quantify the performance of the compression, in terms of compressed size, bit-flips and how they are affected by e...

  1. Evaluation of a new image compression technique

    International Nuclear Information System (INIS)

    Algra, P.R.; Kroon, H.M.; Noordveld, R.B.; DeValk, J.P.J.; Seeley, G.W.; Westerink, P.H.

    1988-01-01

    The authors present the evaluation of a new image compression technique, subband coding using vector quantization, on 44 CT examinations of the upper abdomen. Three independent radiologists reviewed the original images and compressed versions. The compression ratios used were 16:1 and 20:1. Receiver operating characteristic analysis showed no difference in the diagnostic contents between originals and their compressed versions. Subjective visibility of anatomic structures was equal. Except for a few 20:1 compressed images, the observers could not distinguish compressed versions from original images. They conclude that subband coding using vector quantization is a valuable method for data compression in CT scans of the abdomen

  2. The Distinction of Hot Herbal Compress, Hot Compress, and Topical Diclofenac as Myofascial Pain Syndrome Treatment.

    Science.gov (United States)

    Boonruab, Jurairat; Nimpitakpong, Netraya; Damjuti, Watchara

    2018-01-01

This randomized controlled trial aimed to investigate the differences in outcome after treatment among hot herbal compress, hot compress, and topical diclofenac. Participants were divided equally into three groups receiving hot herbal compress, hot compress, or topical diclofenac, the last serving as the control group. After the treatment courses, the Visual Analog Scale and the 36-Item Short Form Health Survey were used to establish the level of pain intensity and quality of life, respectively. In addition, cervical range of motion and pressure pain threshold were examined to identify effects on motion. All treatments showed significantly decreased pain intensity and increased cervical range of motion, while the intervention groups outperformed the topical diclofenac group in pressure pain threshold and quality of life. In summary, hot herbal compress holds promise as an efficacious treatment comparable to hot compress and topical diclofenac.

  3. Compression of the digitized X-ray images

    International Nuclear Information System (INIS)

    Terae, Satoshi; Miyasaka, Kazuo; Fujita, Nobuyuki; Takamura, Akio; Irie, Goro; Inamura, Kiyonari.

    1987-01-01

Medical images occupy an increasing amount of space in hospitals, yet are not easily accessed. Suitable data filing systems and precise data compression are therefore needed. Image quality was evaluated before and after image data compression, using a local filing system (MediFile 1000, NEC Co.) and forty-seven modes of compression parameters. For this study, X-ray images of 10 plain radiographs and 7 contrast examinations were digitized using a CCD-sensor film reader in MediFile 1000. Those images were compressed into forty-seven kinds of image data, saved on an optical disc, and then reconstructed. Each reconstructed image was compared with the non-compressed image in several regions of interest by four radiologists. Compression and extension of radiological images were performed promptly by the local filing system. Image quality was affected much more by the data compression ratio than by the parameter mode itself; in other words, the higher the compression ratio, the worse the image quality. However, image quality was not significantly degraded until the compression ratio reached about 15:1 on plain radiographs and about 8:1 on contrast studies. Image compression by this technique should be acceptable for diagnostic radiology. (author)

  4. Introduction to compressible fluid flow

    CERN Document Server

    Oosthuizen, Patrick H

    2013-01-01

Introduction; The Equations of Steady One-Dimensional Compressible Flow; Some Fundamental Aspects of Compressible Flow; One-Dimensional Isentropic Flow; Normal Shock Waves; Oblique Shock Waves; Expansion Waves - Prandtl-Meyer Flow; Variable Area Flows; Adiabatic Flow with Friction; Flow with Heat Transfer; Linearized Analysis of Two-Dimensional Compressible Flows; Hypersonic and High-Temperature Flows; High-Temperature Gas Effects; Low-Density Flows; Bibliography; Appendices

  5. Development and assessment of compression technique for medical images using neural network. I. Assessment of lossless compression

    International Nuclear Information System (INIS)

    Fukatsu, Hiroshi

    2007-01-01

This paper describes assessment of the lossless mode of a new, efficient compression technique (the JIS system), based on a neural network, that the author and co-workers have recently developed. First, the theory of encoding and decoding the data is explained. Assessment is done on 55 images each of chest digital roentgenography, digital mammography, 64-row multi-slice CT, 1.5 Tesla MRI, positron emission tomography (PET) and digital subtraction angiography, which are lossless-compressed by the JIS system to determine the compression rate and loss. For comparison, the data are also JPEG lossless-compressed. The personal computer (PC) is an Apple MacBook Pro configured with Boot Camp for a Windows environment. The JIS system is found to be more than 4 times as efficient as the usual compression methods, compressing the file volume to only 1/11 on average, and is thus an important response to the growing volume of medical imaging data. (R.T.)

  6. A comparative experimental study on engine operating on premixed charge compression ignition and compression ignition mode

    Directory of Open Access Journals (Sweden)

    Bhiogade Girish E.

    2017-01-01

Full Text Available New combustion concepts have recently been developed to tackle the high emission levels of traditional direct injection Diesel engines. A good example is premixed charge compression ignition combustion, a strategy in which early injection causes the fuel to burn under premixed conditions. In compression ignition engines, soot (particulate matter) and NOx emissions remain a largely unsolved issue. Premixed charge compression ignition is one of the most promising solutions: it combines the advantages of both spark ignition and compression ignition combustion modes, giving thermal efficiency close to compression ignition engines while simultaneously resolving the associated issues of high NOx and particulate matter. Preparing the premixed air-fuel charge is the challenging part of achieving premixed charge compression ignition combustion. In the present experimental study a diesel vaporizer is used to achieve it: vaporized diesel fuel was mixed with air to form a premixed charge and inducted into the cylinder during the intake stroke. Low diesel volatility remains the main obstacle in preparing the premixed air-fuel mixture. Exhaust gas recirculation can be used to control the rate of heat release. The objective of this study is to reduce exhaust emission levels while maintaining thermal efficiency close to that of a compression ignition engine.

  7. Pulsed Compression Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Roestenberg, T. [University of Twente, Enschede (Netherlands)

    2012-06-07

The advantages of the Pulsed Compression Reactor (PCR) over internal combustion engine-type chemical reactors are briefly discussed. Over the last four years a project concerning the fundamentals of PCR technology has been performed by the University of Twente, Enschede, Netherlands. To assess the feasibility of applying the PCR principle to the conversion of methane to syngas, several fundamental questions needed to be answered. Two important questions that relate to the applicability of the PCR for any process are: how large is the heat transfer rate from a rapidly compressed and expanded volume of gas, and how does this heat transfer rate compare to the energy contained in the compressed gas? And: can stable operation with a completely free piston, as intended for the PCR, be achieved?

  8. Compressing Data Cube in Parallel OLAP Systems

    Directory of Open Access Journals (Sweden)

    Frank Dehne

    2007-03-01

Full Text Available This paper proposes an efficient algorithm to compress the cubes during parallel data cube generation. This low-overhead compression mechanism provides block-by-block and record-by-record compression using tuple difference coding techniques, thereby maximizing the compression ratio and minimizing the decompression penalty at run-time. The experimental results demonstrate that the typical compression ratio is about 30:1 without sacrificing running time. This paper also demonstrates that the compression method is suitable for the Hilbert space-filling curve, a mechanism widely used in multi-dimensional indexing.
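
Tuple difference coding exploits the fact that sorted cube records differ little from their predecessors, so the per-field deltas are small and cheap to store. The round-trip sketch below shows only that core step; the paper's scheme additionally lays the deltas out block-by-block, which is not modeled here.

```python
def delta_encode(tuples):
    """Difference-code a sorted list of equal-length integer tuples:
    store the first tuple, then per-field deltas for each successor."""
    if not tuples:
        return []
    out = [tuple(tuples[0])]
    for prev, cur in zip(tuples, tuples[1:]):
        out.append(tuple(c - p for p, c in zip(prev, cur)))
    return out

def delta_decode(encoded):
    """Rebuild the original tuples by accumulating the deltas."""
    if not encoded:
        return []
    out = [tuple(encoded[0])]
    for deltas in encoded[1:]:
        out.append(tuple(p + d for p, d in zip(out[-1], deltas)))
    return out
```

On sorted input most deltas are zeros and small integers, which a variable-length integer code can then store in very few bits.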

  9. Composite Techniques Based Color Image Compression

    Directory of Open Access Journals (Sweden)

    Zainab Ibrahim Abood

    2017-03-01

Full Text Available Compression of color images is now necessary for transmission and storage in databases. Since color gives a pleasing, natural appearance to any object, three composite techniques for color image compression are implemented to achieve high compression, no loss of the original image, better performance and good image quality. These techniques are the composite stationary wavelet technique (S), the composite wavelet technique (W) and the composite multi-wavelet technique (M). For the high-energy sub-band of the 3rd level of each composite transform in each technique, the compression parameters are calculated. The best composite transform among the 27 types is the three-level multi-wavelet transform (MMM) in the M technique, which has the highest values of energy (En) and compression ratio (CR) and the lowest values of bits per pixel (bpp), time (T) and rate distortion R(D). The values of the compression parameters of the color image are also nearly the same as the average values of the compression parameters of the three bands of the same image.

  10. Atomic effect algebras with compression bases

    International Nuclear Information System (INIS)

    Caragheorgheopol, Dan; Tkadlec, Josef

    2011-01-01

    Compression base effect algebras were recently introduced by Gudder [Demonstr. Math. 39, 43 (2006)]. They generalize sequential effect algebras [Rep. Math. Phys. 49, 87 (2002)] and compressible effect algebras [Rep. Math. Phys. 54, 93 (2004)]. The present paper focuses on atomic compression base effect algebras and the consequences of atoms being foci (so-called projections) of the compressions in the compression base. Part of our work generalizes results obtained in atomic sequential effect algebras by Tkadlec [Int. J. Theor. Phys. 47, 185 (2008)]. The notion of projection-atomicity is introduced and studied, and several conditions that force a compression base effect algebra or the set of its projections to be Boolean are found. Finally, we apply some of these results to sequential effect algebras and strengthen a previously established result concerning a sufficient condition for them to be Boolean.

  11. Speech Data Compression using Vector Quantization

    OpenAIRE

    H. B. Kekre; Tanuja K. Sarode

    2008-01-01

Mostly, transforms are used for speech data compression; these are lossy algorithms. Such algorithms are tolerable for speech data compression since the loss in quality is not perceived by the human ear. However, vector quantization (VQ) has the potential to give more data compression while maintaining the same quality. In this paper we propose a speech data compression algorithm using the vector quantization technique. We have used the VQ algorithms LBG, KPE and FCG. The results table s...
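
VQ compresses by replacing each frame of samples with the index of its nearest codebook vector, so a trained codebook of 16 entries turns every 8-sample frame into a 4-bit code. The sketch below is a plain Lloyd/k-means refinement on random data; the LBG, KPE and FCG algorithms named in the paper differ in how they initialize and search the codebook, which is not modeled here.

```python
import numpy as np

def train_codebook(vectors, size, iters=20, rng=None):
    """Lloyd-style VQ codebook training: assign each vector to its nearest
    codeword, then move each codeword to the centroid of its cluster."""
    rng = rng or np.random.default_rng(0)
    codebook = vectors[rng.choice(len(vectors), size, replace=False)].copy()
    for _ in range(iters):
        dist = ((vectors[:, None, :] - codebook[None]) ** 2).sum(-1)
        idx = np.argmin(dist, axis=1)
        for k in range(size):
            if np.any(idx == k):            # skip empty cells
                codebook[k] = vectors[idx == k].mean(axis=0)
    return codebook

def quantize(vectors, codebook):
    """Map each vector to the index of its nearest codeword."""
    return np.argmin(((vectors[:, None, :] - codebook[None]) ** 2).sum(-1), axis=1)

rng = np.random.default_rng(0)
frames = rng.standard_normal((500, 8))      # e.g. 8-sample speech frames
cb = train_codebook(frames, size=16, rng=rng)
codes = quantize(frames, cb)                # each frame becomes a 4-bit index
```

The decoder only needs the codebook and the index stream: reconstruction is `cb[codes]`, and the loss is the distance from each frame to its chosen codeword.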

  12. Advances in compressible turbulent mixing

    International Nuclear Information System (INIS)

    Dannevik, W.P.; Buckingham, A.C.; Leith, C.E.

    1992-01-01

    This volume includes some recent additions to original material prepared for the Princeton International Workshop on the Physics of Compressible Turbulent Mixing, held in 1988. Workshop participants were asked to emphasize the physics of the compressible mixing process rather than measurement techniques or computational methods. Actual experimental results and their meaning were given precedence over discussions of new diagnostic developments. Theoretical interpretations and understanding were stressed rather than the exposition of new analytical model developments or advances in numerical procedures. By design, compressibility influences on turbulent mixing were discussed--almost exclusively--from the perspective of supersonic flow field studies. The papers are arranged in three topical categories: Foundations, Vortical Domination, and Strongly Coupled Compressibility. The Foundations category is a collection of seminal studies that connect current study in compressible turbulent mixing with compressible, high-speed turbulent flow research that almost vanished about two decades ago. A number of contributions are included on flow instability initiation, evolution, and transition between the states of unstable flow onset through those descriptive of fully developed turbulence. The Vortical Domination category includes theoretical and experimental studies of coherent structures, vortex pairing, vortex-dynamics-influenced pressure focusing. In the Strongly Coupled Compressibility category the organizers included the high-speed turbulent flow investigations in which the interaction of shock waves could be considered an important source for production of new turbulence or for the enhancement of pre-existing turbulence. Individual papers are processed separately

  14. Study of CSR longitudinal bunch compression cavity

    International Nuclear Information System (INIS)

    Yin Dayu; Li Peng; Liu Yong; Xie Qingchun

    2009-01-01

The scheme of the longitudinal bunch compression cavity for the Cooling Storage Ring (CSR) is an important issue. Plasma physics experiments require a high-density heavy ion beam and a short pulsed bunch, which can be produced by non-adiabatic compression of the bunch, implemented as a fast compression with a 90 degree rotation in the longitudinal phase space. The phase space rotation in fast compression is initiated by a fast jump of the RF-voltage amplitude. For this purpose, the CSR longitudinal bunch compression cavity, loaded with FINEMET-FT-1M, is studied and simulated with the MAFIA code. In this paper, the CSR longitudinal bunch compression cavity is simulated, and the initial bunch length of 238U72+ at 250 MeV/u will be compressed from 200 ns to 50 ns. The construction and RF properties of the cavity are also simulated and calculated with the MAFIA code. The operation frequency of the cavity is 1.15 MHz with a peak voltage of 80 kV, and the cavity can be used to compress heavy ions in the CSR. (authors)

  15. Flux compression generators as plasma compression power sources

    International Nuclear Information System (INIS)

    Fowler, C.M.; Caird, R.S.; Erickson, D.J.; Freeman, B.L.; Thomson, D.B.; Garn, W.B.

    1979-01-01

A survey is made of applications where explosive-driven magnetic flux compression generators have been or can be used to directly power devices that produce dense plasmas. Representative examples are discussed that are specific to the theta pinch, the plasma gun, the dense plasma focus and the Z pinch. These examples are used to illustrate the high energy and power capabilities of explosive generators. An application employing a rocket-borne, generator-powered plasma gun emphasizes the size and weight potential of flux compression power supplies. Recent results from a local effort to drive a dense plasma focus are provided. Imploding liners are discussed in the context of both the theta and Z pinches.

  16. Compression of Probabilistic XML Documents

    Science.gov (United States)

    Veldman, Irma; de Keijzer, Ander; van Keulen, Maurice

    Database techniques to store, query and manipulate data that contains uncertainty receive increasing research interest. Such UDBMSs can be classified according to their underlying data model: relational, XML, or RDF. We focus on uncertain XML DBMSs, with the Probabilistic XML model (PXML) of [10,9] as a representative example. The size of a PXML document is obviously a factor in performance. There are PXML-specific techniques to reduce the size, such as a push-down mechanism that produces equivalent but more compact PXML documents. It can only be applied, however, where possibilities are dependent. For normal XML documents there also exist several techniques for compressing a document. Since Probabilistic XML is (a special form of) normal XML, it might benefit from these methods even more. In this paper, we show that existing compression mechanisms can be combined with PXML-specific compression techniques. We also show that the best compression rates are obtained with a combination of a PXML-specific technique and a rather simple generic DAG-compression technique.
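The generic DAG-compression idea mentioned above can be sketched in a few lines; this is a minimal illustration of sharing identical subtrees, not the paper's exact algorithm, and the tuple-based tree encoding is an assumption for the example:

```python
# DAG compression: detect identical subtrees bottom-up and store each
# distinct subtree once, turning the tree into a directed acyclic graph.

def dag_compress(node, pool=None):
    """node = (tag, children-tuple). Returns the canonical shared node."""
    if pool is None:
        pool = {}
    tag, children = node
    shared = tuple(dag_compress(c, pool) for c in children)
    key = (tag, tuple(id(c) for c in shared))  # identity of shared children
    if key not in pool:
        pool[key] = (tag, shared)
    return pool[key]

# Two structurally identical <item><name/></item> subtrees under one root:
item = ("item", (("name", ()),))
root = ("root", (item, ("item", (("name", ()),))))
compressed = dag_compress(root)

# Both children now reference the same stored object (one copy kept):
print(compressed[1][0] is compressed[1][1])  # True
```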

  17. Anisotropic Concrete Compressive Strength

    DEFF Research Database (Denmark)

    Gustenhoff Hansen, Søren; Jørgensen, Henrik Brøner; Hoang, Linh Cao

    2017-01-01

    When the load-carrying capacity of existing concrete structures is (re-)assessed, it is often based on the compressive strength of cores drilled out from the structure. Existing studies show that the core compressive strength is anisotropic; i.e. it depends on whether the cores are drilled parallel...

  18. Experiments with automata compression

    NARCIS (Netherlands)

    Daciuk, J.; Yu, S; Daley, M; Eramian, M G

    2001-01-01

    Several compression methods of finite-state automata are presented and evaluated. Most compression methods used here are already described in the literature. However, their impact on the size of automata has not been described yet. We fill that gap, presenting results of experiments carried out on

  19. Limiting density ratios in piston-driven compressions

    International Nuclear Information System (INIS)

    Lee, S.

    1985-07-01

    By using global energy and pressure balance applied to a shock model it is shown that for a piston-driven fast compression, the maximum compression ratio is not dependent on the absolute magnitude of the piston power, but rather on the power pulse shape. Specific cases are considered and a maximum density compression ratio of 27 is obtained for a square-pulse power compressing a spherical pellet with specific heat ratio of 5/3. Double pulsing enhances the density compression ratio to 1750 in the case of linearly rising compression pulses. Using this method further enhancement by multiple pulsing becomes obvious. (author)

  20. Compressibility, turbulence and high speed flow

    CERN Document Server

    Gatski, Thomas B

    2013-01-01

    Compressibility, Turbulence and High Speed Flow introduces the reader to the field of compressible turbulence and compressible turbulent flows across a broad speed range, through a unique complementary treatment of both the theoretical foundations and the measurement and analysis tools currently used. The book provides the reader with the necessary background and current trends in the theoretical and experimental aspects of compressible turbulent flows and compressible turbulence. Detailed derivations of the pertinent equations describing the motion of such turbulent flows are provided, and an extensive discussion of the various approaches used in predicting both free shear and wall-bounded flows is presented. Experimental measurement techniques common to the compressible flow regime are introduced, with particular emphasis on the unique challenges presented by high speed flows. Both experimental and numerical simulation work is supplied throughout to provide the reader with an overall perspective of current tre...

  1. Compressed normalized block difference for object tracking

    Science.gov (United States)

    Gao, Yun; Zhang, Dengzhuo; Cai, Donglan; Zhou, Hao; Lan, Ge

    2018-04-01

    Feature extraction is very important for robust, real-time tracking. Compressive sensing provides technical support for real-time feature extraction. However, all existing compressive trackers are based on compressed Haar-like features, and how to compress other excellent high-dimensional features is worth researching. In this paper, a novel compressed normalized block difference (CNBD) feature is proposed. To resist noise effectively in the high-dimensional normalized pixel difference (NPD) feature, the normalized block difference feature extends the two pixels in the original NPD formula to two blocks. A CNBD feature is obtained by compressing a normalized block difference feature based on compressive sensing theory, with a sparse random Gaussian matrix as the measurement matrix. Comparative experiments with 7 trackers on 20 challenging sequences showed that the tracker based on the CNBD feature performs better than the other trackers, and in particular better than the FCT tracker based on compressed Haar-like features, in terms of AUC, SR and Precision.
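The compression step described above can be sketched as a random projection; the matrix density, dimensions, and the stand-in feature vector below are assumptions for illustration, not the paper's exact parameters:

```python
import numpy as np

# Compressive measurement of a high-dimensional feature vector with a
# sparse random Gaussian-style matrix, as used by compressive trackers.
rng = np.random.default_rng(42)
d, m = 10_000, 50                     # original and compressed dimensions

# Sparse random matrix: most entries zero, the rest Gaussian-distributed
# (a common sparse surrogate for a dense Gaussian measurement matrix).
mask = rng.random((m, d)) < 0.01      # ~1% nonzero entries
R = mask * rng.standard_normal((m, d))

feature = rng.random(d)               # stand-in for a normalized block-difference vector
compressed = R @ feature              # m-dimensional compressed feature
print(compressed.shape)               # (50,)
```

The sparsity keeps the projection cheap enough for per-frame use, which is the practical point of compressive feature extraction in tracking.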

  2. 30 CFR 77.412 - Compressed air systems.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Compressed air systems. 77.412 Section 77.412... for Mechanical Equipment § 77.412 Compressed air systems. (a) Compressors and compressed-air receivers... involving the pressure system of compressors, receivers, or compressed-air-powered equipment shall not be...

  3. Two divergent paths: compression vs. non-compression in deep venous thrombosis and post thrombotic syndrome

    Directory of Open Access Journals (Sweden)

    Eduardo Simões Da Matta

    Full Text Available Abstract Use of compression therapy to reduce the incidence of post-thrombotic syndrome among patients with deep venous thrombosis is a controversial subject, and there is no consensus on the use of elastic versus inelastic compression or on the levels and duration of compression. Inelastic devices with a higher static stiffness index combine a relatively small and comfortable pressure at rest with a standing pressure strong enough to restore the “valve mechanism” generated by plantar flexion and dorsiflexion of the foot. Since the static stiffness index depends on the rigidity of the compression system and the muscle strength within the bandaged area, improvement of muscle mass with muscle-strengthening programs and endurance training should be encouraged. Therefore, in the acute phase of deep venous thrombosis events, anticoagulation combined with inelastic compression therapy can reduce the extension of the thrombus. Nevertheless, prospective studies evaluating the effectiveness of inelastic therapy in deep venous thrombosis and post-thrombotic syndrome are needed.

  4. Application of content-based image compression to telepathology

    Science.gov (United States)

    Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace

    2002-05-01

    Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique which exploits knowledge of the slide's diagnostic content. This 'content-based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context-sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression, the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression, it can provide more information for a given amount of compression. The precise gain in compression performance depends on the application (e.g. database archive or second-opinion consultation) and the diagnostic content of the images.

  5. Theoretical models for describing longitudinal bunch compression in the neutralized drift compression experiment

    Directory of Open Access Journals (Sweden)

    Adam B. Sefkow

    2006-09-01

    Full Text Available Heavy ion drivers for warm dense matter and heavy ion fusion applications use intense charge bunches which must undergo transverse and longitudinal compression in order to meet the requisite high current densities and short pulse durations desired at the target. The neutralized drift compression experiment (NDCX at the Lawrence Berkeley National Laboratory is used to study the longitudinal neutralized drift compression of a space-charge-dominated ion beam, which occurs due to an imposed longitudinal velocity tilt and subsequent neutralization of the beam’s space charge by background plasma. Reduced theoretical models have been used in order to describe the realistic propagation of an intense charge bunch through the NDCX device. A warm-fluid model is presented as a tractable computational tool for investigating the nonideal effects associated with the experimental acceleration gap geometry and voltage waveform of the induction module, which acts as a means to pulse shape both the velocity and line density profiles. Self-similar drift compression solutions can be realized in order to transversely focus the entire charge bunch to the same focal plane in upcoming simultaneous transverse and longitudinal focusing experiments. A kinetic formalism based on the Vlasov equation has been employed in order to show that the peaks in the experimental current profiles are a result of the fact that only the central portion of the beam contributes effectively to the main compressed pulse. Significant portions of the charge bunch reside in the nonlinearly compressing part of the ion beam because of deviations between the experimental and ideal velocity tilts. Those regions form a pedestal of current around the central peak, thereby decreasing the amount of achievable longitudinal compression and increasing the pulse durations achieved at the focal plane. 
A hybrid fluid-Vlasov model which retains the advantages of both the fluid and kinetic approaches has been

  6. Poor chest compression quality with mechanical compressions in simulated cardiopulmonary resuscitation: a randomized, cross-over manikin study.

    Science.gov (United States)

    Blomberg, Hans; Gedeborg, Rolf; Berglund, Lars; Karlsten, Rolf; Johansson, Jakob

    2011-10-01

    Mechanical chest compression devices are being implemented as an aid in cardiopulmonary resuscitation (CPR), despite lack of evidence of improved outcome. This manikin study evaluates the CPR-performance of ambulance crews, who had a mechanical chest compression device implemented in their routine clinical practice 8 months previously. The objectives were to evaluate time to first defibrillation, no-flow time, and estimate the quality of compressions. The performance of 21 ambulance crews (ambulance nurse and emergency medical technician) with the authorization to perform advanced life support was studied in an experimental, randomized cross-over study in a manikin setup. Each crew performed two identical CPR scenarios, with and without the aid of the mechanical compression device LUCAS. A computerized manikin was used for data sampling. There were no substantial differences in time to first defibrillation or no-flow time until first defibrillation. However, the fraction of adequate compressions in relation to total compressions was remarkably low in LUCAS-CPR (58%) compared to manual CPR (88%) (95% confidence interval for the difference: 13-50%). Only 12 out of the 21 ambulance crews (57%) applied the mandatory stabilization strap on the LUCAS device. The use of a mechanical compression aid was not associated with substantial differences in time to first defibrillation or no-flow time in the early phase of CPR. However, constant but poor chest compressions due to failure in recognizing and correcting a malposition of the device may counteract a potential benefit of mechanical chest compressions. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  7. Efficient predictive algorithms for image compression

    CERN Document Server

    Rosário Lucas, Luís Filipe; Maciel de Faria, Sérgio Manuel; Morais Rodrigues, Nuno Miguel; Liberal Pagliari, Carla

    2017-01-01

    This book discusses efficient prediction techniques for the current state-of-the-art High Efficiency Video Coding (HEVC) standard, focusing on the compression of a wide range of video signals, such as 3D video, Light Fields and natural images. The authors begin with a review of the state-of-the-art predictive coding methods and compression technologies for both 2D and 3D multimedia contents, which provides a good starting point for new researchers in the field of image and video compression. New prediction techniques that go beyond the standardized compression technologies are then presented and discussed. In the context of 3D video, the authors describe a new predictive algorithm for the compression of depth maps, which combines intra-directional prediction, with flexible block partitioning and linear residue fitting. New approaches are described for the compression of Light Field and still images, which enforce sparsity constraints on linear models. The Locally Linear Embedding-based prediction method is in...

  8. Medullary compression syndrome

    International Nuclear Information System (INIS)

    Barriga T, L.; Echegaray, A.; Zaharia, M.; Pinillos A, L.; Moscol, A.; Barriga T, O.; Heredia Z, A.

    1994-01-01

    The authors made a retrospective study of 105 patients treated in the Radiotherapy Department of the National Institute of Neoplastic Diseases from 1973 to 1992. The objective of this evaluation was to determine the influence of radiotherapy in patients with medullary compression syndrome with respect to pain palliation and improvement of functional impairment. Treatment sheets of patients with medullary compression were reviewed: 32 out of 39 patients (82%) came to hospital by their own means and continued walking after treatment; 8 out of 66 patients (12%) who came in a wheelchair or were bedridden could mobilize on their own after treatment; 41 patients (64%) had partial alleviation of pain after treatment. Functional improvement was also observed in those who came by their own means and whose status did not change. It is concluded that radiotherapy offers palliative benefit in patients with medullary compression syndrome. (authors). 20 refs., 5 figs., 6 tabs

  9. Comparison of changes in tidal volume associated with expiratory rib cage compression and expiratory abdominal compression in patients on prolonged mechanical ventilation.

    Science.gov (United States)

    Morino, Akira; Shida, Masahiro; Tanaka, Masashi; Sato, Kimihiro; Seko, Toshiaki; Ito, Shunsuke; Ogawa, Shunichi; Takahashi, Naoaki

    2015-07-01

    [Purpose] This study was designed to compare and clarify the relationship between expiratory rib cage compression and expiratory abdominal compression in patients on prolonged mechanical ventilation, with a focus on tidal volume. [Subjects and Methods] The subjects were 18 patients on prolonged mechanical ventilation, who had undergone tracheostomy. Each patient received expiratory rib cage compression and expiratory abdominal compression; the order of implementation was randomized. Subjects were positioned in a 30° lateral recumbent position, and a 2-kgf compression was applied. For expiratory rib cage compression, the rib cage was compressed unilaterally; for expiratory abdominal compression, the area directly above the navel was compressed. Tidal volume values were the actual measured values divided by body weight. [Results] Tidal volume values were as follows: at rest, 7.2 ± 1.7 mL/kg; during expiratory rib cage compression, 8.3 ± 2.1 mL/kg; during expiratory abdominal compression, 9.1 ± 2.2 mL/kg. There was a significant difference between the tidal volume during expiratory abdominal compression and that at rest. The tidal volume in expiratory rib cage compression was strongly correlated with that in expiratory abdominal compression. [Conclusion] These results indicate that expiratory abdominal compression may be an effective alternative to the manual breathing assist procedure.

  10. Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing

    Science.gov (United States)

    Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong

    2016-08-01

    Most image encryption algorithms based on low-dimensional chaotic systems bear security risks and suffer from encrypted-data expansion when nonlinear transformations are adopted directly. To overcome these weaknesses and reduce the possible transmission burden, an efficient image compression-encryption scheme based on a hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and the resulting image is then re-encrypted by a cycle shift operation controlled by a hyper-chaotic system. The cycle shift operation can change the values of the pixels efficiently. As a nonlinear encryption system, the proposed cryptosystem decreases the volume of data to be transmitted and simplifies key distribution. Simulation results verify the validity and reliability of the proposed algorithm, with acceptable compression and security performance.
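The two ingredients of the scheme can be sketched as follows; the image size, measurement matrices, and the fixed key array standing in for the hyper-chaotic sequence are all illustrative assumptions:

```python
import numpy as np

# 1) Measure an N x N image along both directions with two measurement
#    matrices, compressing it to M x M (compression + encryption at once).
# 2) Re-encrypt with per-row cycle shifts driven by a key stream.
rng = np.random.default_rng(1)
N, M = 64, 32
X = rng.integers(0, 256, (N, N)).astype(float)   # "plain" image

Phi1 = rng.standard_normal((M, N)) / np.sqrt(M)  # row-direction measurement
Phi2 = rng.standard_normal((M, N)) / np.sqrt(M)  # column-direction measurement
Y = Phi1 @ X @ Phi2.T                            # measured image, M x M

key = rng.integers(0, M, M)                      # stand-in chaotic key stream
C = np.stack([np.roll(Y[i], key[i]) for i in range(M)])   # per-row cycle shift
D = np.stack([np.roll(C[i], -key[i]) for i in range(M)])  # shifts invert exactly
print(np.allclose(D, Y))  # True
```

The cycle-shift stage is exactly invertible given the key, while recovering X from Y requires a compressive-sensing reconstruction (omitted here).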

  11. MP3 compression of Doppler ultrasound signals.

    Science.gov (United States)

    Poepping, Tamie L; Gill, Jeremy; Fenster, Aaron; Holdsworth, David W

    2003-01-01

    The effect of lossy MP3 compression on spectral parameters derived from Doppler ultrasound (US) signals was investigated. Compression was tested on signals acquired from two sources: (1) phase quadrature and (2) stereo audio directional output. A total of eleven 10-s acquisitions of the Doppler US signal were collected from each source at three sites in a flow phantom. Doppler signals were digitized at 44.1 kHz and compressed using four grades of MP3 compression (in kilobits per second, kbps; compression ratios in brackets): 1400 kbps (uncompressed), 128 kbps (11:1), 64 kbps (22:1) and 32 kbps (44:1). Doppler spectra were characterized by peak velocity, mean velocity, spectral width, integrated power and the ratio of spectral power between negative and positive velocities. The results suggest that MP3 compression of digital Doppler US signals is feasible at 128 kbps, with a resulting 11:1 compression ratio, without compromising clinically relevant information. Higher compression ratios led to significant differences for both signal sources when compared with the uncompressed signals. Copyright 2003 World Federation for Ultrasound in Medicine & Biology
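The quoted compression ratios follow directly from the bitrates, as a quick check shows:

```python
# Uncompressed PCM rate divided by the MP3 target bitrate reproduces the
# bracketed ratios (rounded to the nearest integer).
uncompressed_kbps = 1400
ratios = {kbps: round(uncompressed_kbps / kbps) for kbps in (128, 64, 32)}
print(ratios)  # {128: 11, 64: 22, 32: 44}
```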

  12. Plasma heating by adiabatic compression

    International Nuclear Information System (INIS)

    Ellis, R.A. Jr.

    1972-01-01

    These two lectures will cover the following three topics: (i) The application of adiabatic compression to toroidal devices is reviewed. The special case of adiabatic compression in tokamaks is considered in more detail, including a discussion of the equilibrium, scaling laws, and heating effects. (ii) The ATC (Adiabatic Toroidal Compressor) device which was completed in May 1972, is described in detail. Compression of a tokamak plasma across a static toroidal field is studied in this device. The device is designed to produce a pre-compression plasma with a major radius of 17 cm, toroidal field of 20 kG, and current of 90 kA. The compression leads to a plasma with major radius of 38 cm and minor radius of 10 cm. Scaling laws imply a density increase of a factor 6, temperature increase of a factor 3, and current increase of a factor 2.4. An additional feature of ATC is that it is a large tokamak which operates without a copper shell. (iii) Data which show that the expected MHD behavior is largely observed is presented and discussed. (U.S.)

  13. Concurrent data compression and protection

    International Nuclear Information System (INIS)

    Saeed, M.

    2009-01-01

    Data compression techniques involve transforming data of a given format, called the source message, to data of a smaller-sized format, called the codeword. The primary objective of data encryption is to ensure the security of data if it is intercepted by an eavesdropper. It transforms data of a given format, called plaintext, to another format, called ciphertext, using an encryption key or keys. Thus, combining the processes of compression and encryption must be done in this order, that is, compression followed by encryption, because all compression techniques rely heavily on the redundancies which are inherently part of regular text or speech. The aim of this research is to combine the two processes: compression (using an existing scheme) and a new encryption scheme compatible with the encoding scheme embedded in the encoder. The novel technique proposed by the authors is unique and highly secure. The deployment of a 'sentinel marker' enhances the security of the proposed TR-One algorithm from 2^44 ciphertexts to 2^44 + 2^20 ciphertexts, thus imposing extra challenges on intruders. (author)
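The ordering argument (compress first, then encrypt, because ciphertext has no exploitable redundancy left) can be demonstrated with a toy keystream cipher; `toy_encrypt` is a deterministic stand-in for illustration only, not the TR-One scheme and not a real cipher:

```python
import random
import zlib

def toy_encrypt(data: bytes, seed: int = 7) -> bytes:
    """XOR the data with a deterministic pseudo-random keystream.
    Applying it twice with the same seed recovers the original bytes."""
    rng = random.Random(seed)
    return bytes(b ^ rng.randrange(256) for b in data)

plaintext = b"the quick brown fox " * 200          # highly redundant source

compress_then_encrypt = toy_encrypt(zlib.compress(plaintext))
encrypt_then_compress = zlib.compress(toy_encrypt(plaintext))

print(len(compress_then_encrypt), len(encrypt_then_compress))
# compress-then-encrypt stays small; encrypt-then-compress barely shrinks
```

Because the keystream destroys the byte-level patterns the compressor relies on, reversing the order forfeits essentially all the compression gain.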

  14. Compressible Fluid Suspension Performance Testing

    National Research Council Canada - National Science Library

    Hoogterp, Francis

    2003-01-01

    ... compressible fluid suspension system that was designed and installed on the vehicle by DTI. The purpose of the tests was to evaluate the possible performance benefits of the compressible fluid suspension system...

  15. Systolic Compression of Epicardial Coronary and Intramural Arteries

    Science.gov (United States)

    Mohiddin, Saidi A.; Fananapazir, Lameh

    2002-01-01

    It has been suggested that systolic compression of epicardial coronary arteries is an important cause of myocardial ischemia and sudden death in children with hypertrophic cardiomyopathy. We examined the associations between sudden death, systolic coronary compression of intra- and epicardial arteries, myocardial perfusion abnormalities, and severity of hypertrophy in children with hypertrophic cardiomyopathy. We reviewed the angiograms from 57 children with hypertrophic cardiomyopathy for the presence of coronary and septal artery compression; coronary compression was present in 23 (40%). The left anterior descending artery was most often affected, and multiple sites were found in 4 children. Myocardial perfusion abnormalities were more frequently present in children with coronary compression than in those without (94% vs 47%, P = 0.002). Coronary compression was also associated with more severe septal hypertrophy and greater left ventricular outflow gradient. Septal branch compression was present in 65% of the children and was significantly associated with coronary compression, severity of septal hypertrophy, and outflow obstruction. Multivariate analysis showed that septal thickness and septal branch compression, but not coronary compression, were independent predictors of perfusion abnormalities. Coronary compression was not associated with symptom severity, ventricular tachycardia, or a worse prognosis. We conclude that compression of coronary arteries and their septal branches is common in children with hypertrophic cardiomyopathy and is related to the magnitude of left ventricular hypertrophy. Our findings suggest that coronary compression does not make an important contribution to myocardial ischemia in hypertrophic cardiomyopathy; however, left ventricular hypertrophy and compression of intramural arteries may contribute significantly. (Tex Heart Inst J 2002;29:290–8) PMID:12484613

  16. Insertion profiles of 4 headless compression screws.

    Science.gov (United States)

    Hart, Adam; Harvey, Edward J; Lefebvre, Louis-Philippe; Barthelat, Francois; Rabiei, Reza; Martineau, Paul A

    2013-09-01

    In practice, the surgeon must rely on screw position (insertion depth) and tactile feedback from the screwdriver (insertion torque) to gauge compression. In this study, we identified the relationship between interfragmentary compression and these 2 factors. The Acutrak Standard, Acutrak Mini, Synthes 3.0, and Herbert-Whipple implants were tested using a polyurethane foam scaphoid model. A specialized testing jig simultaneously measured compression force, insertion torque, and insertion depth at half-screw-turn intervals until failure occurred. The peak compression occurs at an insertion depth of -3.1 mm, -2.8 mm, 0.9 mm, and 1.5 mm for the Acutrak Mini, Acutrak Standard, Herbert-Whipple, and Synthes screws, respectively (insertion depth is positive when the screw is proud above the bone and negative when buried). The compression and insertion torque at a depth of -2 mm were found to be 113 ± 18 N and 0.348 ± 0.052 Nm for the Acutrak Standard, 104 ± 15 N and 0.175 ± 0.008 Nm for the Acutrak Mini, 78 ± 9 N and 0.245 ± 0.006 Nm for the Herbert-Whipple, and 67 ± 2 N and 0.233 ± 0.010 Nm for the Synthes headless compression screws. All 4 screws generated a sizable amount of compression (> 60 N) over a wide range of insertion depths. The compression at the commonly recommended insertion depth of -2 mm was not significantly different between screws; thus, implant selection should not be based on the compression profile alone. Conically shaped screws (Acutrak) generated their peak compression when they were fully buried in the foam, whereas the shanked screws (Synthes and Herbert-Whipple) reached peak compression before they were fully inserted. Because insertion torque correlated poorly with compression, surgeons should avoid using tactile judgment of torque as a proxy for compression. Knowledge of the insertion profile may improve our understanding of the implants, provide a better basis for comparing screws, and enable the surgeon to optimize compression. Copyright

  17. Energy Conservation In Compressed Air Systems

    International Nuclear Information System (INIS)

    Yusuf, I.Y.; Dewu, B.B.M.

    2004-01-01

    Compressed air is an essential utility that accounts for a substantial part of the electricity consumption (bill) in most industrial plants. Although the general saying 'air is free of charge' is not true for compressed air, the utility's cost is not accorded its rightful importance by most industries. The paper will show that the cost of 1 unit of energy in the form of compressed air is at least 5 times the cost of the electricity (energy input) required to produce it. The paper will also provide energy conservation tips for compressed air systems.

  18. Compressed Data Structures for Range Searching

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Vind, Søren Juhl

    2015-01-01

    matrices and web graphs. Our contribution is twofold. First, we show how to compress geometric repetitions that may appear in standard range searching data structures (such as K-D trees, Quad trees, Range trees, R-trees, Priority R-trees, and K-D-B trees), and how to implement subsequent range queries...... on the compressed representation with only a constant factor overhead. Secondly, we present a compression scheme that efficiently identifies geometric repetitions in point sets, and produces a hierarchical clustering of the point sets, which combined with the first result leads to a compressed representation...

  19. Compression therapy after ankle fracture surgery

    DEFF Research Database (Denmark)

    Winge, R; Bayer, L; Gottlieb, H

    2017-01-01

    PURPOSE: The main purpose of this systematic review was to investigate the effect of compression treatment on the perioperative course of ankle fractures and describe its effect on edema, pain, ankle joint mobility, wound healing complication, length of stay (LOS) and time to surgery (TTS). The aim...... undergoing surgery, testing either intermittent pneumatic compression, compression bandage and/or compression stocking and reporting its effect on edema, pain, ankle joint mobility, wound healing complication, LOS and TTS. To conclude on data a narrative synthesis was performed. RESULTS: The review included...

  20. Effect of Kollidon VA®64 particle size and morphology as directly compressible excipient on tablet compression properties.

    Science.gov (United States)

    Chaudhary, R S; Patel, C; Sevak, V; Chan, M

    2018-01-01

    The study evaluates the use of Kollidon VA ® 64 and a combination of Kollidon VA ® 64 with Kollidon VA ® 64 Fine as excipients in a direct compression tablet process. The combination of the two grades of material is evaluated for capping, lamination and excessive friability. Inter-particulate void space is higher for such an excipient due to the hollow structure of the Kollidon VA ® 64 particles. During tablet compression, air remains trapped in the blend, giving poor compression and compromised physical properties of the tablets. The composition of Kollidon VA ® 64 and Kollidon VA ® 64 Fine is evaluated by design of experiments (DoE). Scanning electron microscopy (SEM) of the two grades of Kollidon VA ® 64 shows morphological differences between the coarse and fine grades. The tablet compression process is evaluated with a mix consisting entirely of Kollidon VA ® 64 and two mixes containing Kollidon VA ® 64 and Kollidon VA ® 64 Fine in ratios of 77:23 and 65:35. Statistical modeling of the results from the DoE trials identified the optimum composition for direct tablet compression as the combination of Kollidon VA ® 64 and Kollidon VA ® 64 Fine in a ratio of 77:23. This combination, compressed with the predicted parameters based on the statistical modeling, applying a main compression force between 5 and 15 kN, a pre-compression force between 2 and 3 kN, a feeder speed fixed at 25 rpm and a compression range of 45-49 rpm, produced tablets with hardness ranging between 19 and 21 kp, with no friability, capping, or lamination issues.

  1. Isentropic Compression of Argon

    International Nuclear Information System (INIS)

    Oona, H.; Solem, J.C.; Veeser, L.R.; Ekdahl, C.A.; Rodriquez, P.J.; Younger, S.M.; Lewis, W.; Turley, W.D.

    1997-01-01

    We are studying the transition of argon from an insulator to a conductor by compressing the frozen gas isentropically to pressures at which neighboring atomic orbitals overlap sufficiently to allow some electron motion between atoms. Argon and the other rare gases have closed electron shells and therefore remain monatomic, even when they solidify. Their simple structure makes it likely that any measured change in conductivity is due to changes in the atomic structure, not in molecular configuration. As the crystal is compressed the band gap closes, allowing increased conductivity. We have begun research to determine the conductivity at high pressures, and it is our intention to determine the compression at which the crystal becomes a metal.

  2. Effects of errors in velocity tilt on maximum longitudinal compression during neutralized drift compression of intense beam pulses: I. general description

    Energy Technology Data Exchange (ETDEWEB)

    Kaganovich, Igor D.; Massidda, Scott; Startsev, Edward A.; Davidson, Ronald C.; Vay, Jean-Luc; Friedman, Alex

    2012-06-21

    Neutralized drift compression offers an effective means for particle beam pulse compression and current amplification. In neutralized drift compression, a linear longitudinal velocity tilt (head-to-tail gradient) is applied to the non-relativistic beam pulse, so that the beam pulse compresses as it drifts in the focusing section. The beam current can increase by more than a factor of 100 in the longitudinal direction. We have performed an analytical study of how errors in the velocity tilt acquired by the beam in the induction bunching module limit the maximum longitudinal compression. It is found that the compression ratio is determined by the relative errors in the velocity tilt. That is, one-percent errors may limit the compression to a factor of one hundred. However, a part of the beam pulse where the errors are small may compress to much higher values, which are determined by the initial thermal spread of the beam pulse. It is also shown that sharp jumps in the compressed current density profile can be produced due to overlaying of different parts of the pulse near the focal plane. Examples of slowly varying and rapidly varying errors compared to the beam pulse duration are studied. For beam velocity errors given by a cubic function, the compression ratio can be described analytically. In this limit, a significant portion of the beam pulse is located in the broad wings of the pulse and is poorly compressed. The central part of the compressed pulse is determined by the thermal spread. The scaling law for maximum compression ratio is derived. In addition to a smooth variation in the velocity tilt, fast-changing errors during the pulse may appear in the induction bunching module if the voltage pulse is formed by several pulsed elements. Different parts of the pulse compress nearly simultaneously at the target and the compressed profile may have many peaks. The maximum compression is a function of both thermal spread and the velocity errors. The effects of the
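
    The quoted scaling (one-percent tilt errors limiting compression to roughly one hundred) follows from a simple kinematic estimate. The notation below is assumed for illustration and is not taken from the paper:

```latex
% Pulse of length l_b with a linear head-to-tail velocity tilt \Delta v:
v(z) = v_0 + \Delta v \, \frac{z}{l_b}, \qquad 0 \le z \le l_b ,
% so every slice converges on the focal plane after the drift time
t_f = \frac{l_b}{\Delta v} .
% A tilt error \delta v smears the focus over \delta z \simeq \delta v \, t_f,
% giving a maximum compression ratio of order
C_{\max} \simeq \frac{l_b}{\delta z} = \frac{\Delta v}{\delta v} ,
% i.e. a relative tilt error \delta v / \Delta v = 10^{-2} limits C to ~100.
```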

  3. Compressive multi-mode superresolution display

    KAUST Repository

    Heide, Felix

    2014-01-01

    Compressive displays are an emerging technology exploring the co-design of new optical device configurations and compressive computation. Previously, research has shown how to improve the dynamic range of displays and facilitate high-quality light field or glasses-free 3D image synthesis. In this paper, we introduce a new multi-mode compressive display architecture that supports switching between 3D and high dynamic range (HDR) modes as well as a new super-resolution mode. The proposed hardware consists of readily-available components and is driven by a novel splitting algorithm that computes the pixel states from a target high-resolution image. In effect, the display pixels present a compressed representation of the target image that is perceived as a single, high resolution image. © 2014 Optical Society of America.

  4. Confounding compression: the effects of posture, sizing and garment type on measured interface pressure in sports compression clothing.

    Science.gov (United States)

    Brophy-Williams, Ned; Driller, Matthew William; Shing, Cecilia Mary; Fell, James William; Halson, Shona Leigh

    2015-01-01

    The purpose of this investigation was to measure the interface pressure exerted by lower body sports compression garments, in order to assess the effect of garment type, size and posture in athletes. Twelve national-level boxers were fitted with sports compression garments (tights and leggings), each in three different sizes (undersized, recommended size and oversized). Interface pressure was assessed across six landmarks on the lower limb (ranging from the medial malleolus to the upper thigh) as athletes assumed sitting, standing and supine postures. Sports compression leggings exerted a significantly higher mean pressure than sports compression tights, and the pressure exerted by sports compression garments is significantly affected by garment type, size and the posture assumed by the wearer.

  5. Selecting a general-purpose data compression algorithm

    Science.gov (United States)

    Mathews, Gary Jason

    1995-01-01

    The National Space Science Data Center's Common Data Format (CDF) is capable of storing many types of data, such as scalar data items, vectors, and multidimensional arrays of bytes, integers, or floating point values. However, regardless of the dimensionality and data type, the data break down into a sequence of bytes that can be fed into a data compression function to reduce the amount of data without losing data integrity, thus remaining fully reconstructible. Because of the diversity of data types and high performance speed requirements, a general-purpose, fast, simple data compression algorithm is required to incorporate data compression into CDF. The questions to ask are how to evaluate and compare compression algorithms, and which compression algorithm meets all requirements. The objective of this paper is to address these questions and determine the most appropriate compression algorithm to use within the CDF data management package, one that would also be applicable to other software packages with similar data compression needs.
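
    The evaluation question posed above can be made concrete with a small benchmark harness. The sketch below is illustrative only (it is not the CDF code, and the codec choice is an assumption); it compares three Python standard-library compressors on compression ratio and wall-clock time:

```python
import bz2
import lzma
import time
import zlib

# Candidate general-purpose codecs, keyed by name.
CODECS = {"zlib": zlib.compress, "bz2": bz2.compress, "lzma": lzma.compress}

def benchmark(data: bytes) -> dict:
    """Map codec name -> (compression ratio, seconds to compress)."""
    results = {}
    for name, compress in CODECS.items():
        start = time.perf_counter()
        compressed = compress(data)
        elapsed = time.perf_counter() - start
        results[name] = (len(compressed) / len(data), elapsed)
    return results
```

    For a use case like CDF, where speed matters as much as ratio, such a harness would be run over representative scalar, vector, and array payloads rather than a single corpus.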

  6. Compression force behaviours: An exploration of the beliefs and values influencing the application of breast compression during screening mammography

    International Nuclear Information System (INIS)

    Murphy, Fred; Nightingale, Julie; Hogg, Peter; Robinson, Leslie; Seddon, Doreen; Mackay, Stuart

    2015-01-01

    This research project investigated the compression behaviours of practitioners during screening mammography. The study sought to provide a qualitative understanding of ‘how’ and ‘why’ practitioners apply compression force. Given a clear conflict in the existing literature and little scientific evidence to support the reasoning behind the application of compression force, this research project investigated the application of compression using a phenomenological approach. Following ethical approval, six focus group interviews were conducted at six different breast screening centres in England. A sample of 41 practitioners was interviewed within the focus groups, together with six one-to-one interviews of mammography educators or clinical placement co-ordinators. The findings revealed two broad categories, humanistic and technological, consisting of 10 themes: client empowerment, white lies, time for interactions, uncertainty of own practice, culture, power, compression controls, digital technology, dose audit safety nets, and numerical scales. All of these themes were derived from 28 units of significant meaning (USM). The results demonstrate a wide variation in the application of compression force, thus offering a possible explanation for the differences between practitioner compression forces found in quantitative studies. Compression force was applied in many different ways owing to individual practitioner experience and behaviour. Furthermore, the culture and practice of the units themselves influenced practitioners' beliefs and attitudes about compression force application. The strongest recommendation to emerge from this study was the need for peer observation, enabling practitioners to observe and compare their own compression force practice with that of their colleagues. The findings are significant for clinical practice in order to understand how and why compression force is applied

  7. Memory hierarchy using row-based compression

    Science.gov (United States)

    Loh, Gabriel H.; O'Connor, James M.

    2016-10-25

    A system includes a first memory and a device coupleable to the first memory. The device includes a second memory to cache data from the first memory. The second memory includes a plurality of rows, each row including a corresponding set of compressed data blocks of non-uniform sizes and a corresponding set of tag blocks. Each tag block represents a corresponding compressed data block of the row. The device further includes decompression logic to decompress data blocks accessed from the second memory. The device further includes compression logic to compress data blocks to be stored in the second memory.

  8. Compressed Sensing with Rank Deficient Dictionaries

    DEFF Research Database (Denmark)

    Hansen, Thomas Lundgaard; Johansen, Daniel Højrup; Jørgensen, Peter Bjørn

    2012-01-01

    In compressed sensing it is generally assumed that the dictionary matrix constitutes a (possibly overcomplete) basis of the signal space. In this paper we consider dictionaries that do not span the signal space, i.e. rank deficient dictionaries. We show that in this case the signal-to-noise ratio...... (SNR) in the compressed samples can be increased by selecting the rows of the measurement matrix from the column space of the dictionary. As an example application of compressed sensing with a rank deficient dictionary, we present a case study of compressed sensing applied to the Coarse Acquisition (C...

  9. Effects of errors in velocity tilt on maximum longitudinal compression during neutralized drift compression of intense beam pulses: II. Analysis of experimental data of the Neutralized Drift Compression eXperiment-I (NDCX-I)

    International Nuclear Information System (INIS)

    Massidda, Scott; Kaganovich, Igor D.; Startsev, Edward A.; Davidson, Ronald C.; Lidia, Steven M.; Seidl, Peter; Friedman, Alex

    2012-01-01

    Neutralized drift compression offers an effective means for particle beam focusing and current amplification with applications to heavy ion fusion. In the Neutralized Drift Compression eXperiment-I (NDCX-I), a non-relativistic ion beam pulse is passed through an inductive bunching module that produces a longitudinal velocity modulation. Due to the applied velocity tilt, the beam pulse compresses during neutralized drift. The ion beam pulse can be compressed by a factor of more than 100; however, errors in the velocity modulation affect the compression ratio in complex ways. We have performed a study of how the longitudinal compression of a typical NDCX-I ion beam pulse is affected by the initial errors in the acquired velocity modulation. Without any voltage errors, an ideal compression is limited only by the initial energy spread of the ion beam, ΔE_b. In the presence of large voltage errors, δU ≫ ΔE_b, the maximum compression ratio is found to be inversely proportional to the geometric mean of the relative error in velocity modulation and the relative intrinsic energy spread of the beam ions. Although small parts of a beam pulse can achieve high local values of compression ratio, the acquired velocity errors cause these parts to compress at different times, limiting the overall compression of the ion beam pulse.

  10. On the characterisation of the dynamic compressive behaviour of silicon carbides subjected to isentropic compression experiments

    Directory of Open Access Journals (Sweden)

    Zinszner Jean-Luc

    2015-01-01

    Ceramic materials are commonly used as protective materials, particularly due to their very high hardness and compressive strength. However, the microstructure of a ceramic has a great influence on its compressive strength and on its ballistic efficiency. To study the influence of microstructural parameters on the dynamic compressive behaviour of silicon carbides, isentropic compression experiments have been performed on two silicon carbide grades using a high pulsed power generator called GEPI. Contrary to plate impact experiments, the use of the GEPI device and of Lagrangian analysis allows the whole loading path to be determined. The two SiC grades studied present different Hugoniot elastic limits (HEL) due to their different microstructures. For these materials, the experimental technique allowed evaluation of the evolution of the equivalent stress during the dynamic compression. It has been observed that the two grades exhibit more or less pronounced work hardening beyond the HEL. The densification of the material seems to have more influence on the HEL than the grain size.

  11. Perceptual Image Compression in Telemedicine

    Science.gov (United States)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth", will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two to the realm of wavelet compression. Together these techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications

  12. Radiologic image compression -- A review

    International Nuclear Information System (INIS)

    Wong, S.; Huang, H.K.; Zaremba, L.; Gooden, D.

    1995-01-01

    The objective of radiologic image compression is to reduce the data volume of radiologic images and to achieve a low bit rate in their digital representation without perceived loss of image quality. However, the demand for transmission bandwidth and storage space in the digital radiology environment, especially picture archiving and communication systems (PACS) and teleradiology, and the proliferating use of various imaging modalities, such as magnetic resonance imaging, computed tomography, ultrasonography, nuclear medicine, computed radiography, and digital subtraction angiography, continue to outstrip the capabilities of existing technologies. The availability of lossy coding techniques for clinical diagnoses further raises many complex legal and regulatory issues. This paper reviews the recent progress of lossless and lossy radiologic image compression and presents the legal challenges of using lossy compression of medical records. To do so, the authors first describe the fundamental concepts of radiologic imaging and digitization. Then, the authors examine current compression technology in the field of medical imaging and discuss important regulatory policies and legal questions facing the use of compression in this field. The authors conclude with a summary of future challenges and research directions. 170 refs

  13. Selective encryption for H.264/AVC video coding

    Science.gov (United States)

    Shi, Tuo; King, Brian; Salama, Paul

    2006-02-01

    Due to the ease with which digital data can be manipulated and due to the ongoing advancements that have brought us closer to pervasive computing, the secure delivery of video and images has become a challenging problem. Despite the advantages and opportunities that digital video provides, illegal copying and distribution as well as plagiarism of digital audio, images, and video is still ongoing. In this paper we describe two techniques for securing H.264 coded video streams. The first technique, SEH264Algorithm1, groups the data into the following blocks: (1) a block that contains the sequence parameter set and the picture parameter set; (2) a block containing a compressed intra coded frame; (3) a block containing the slice header of a P slice, all the macroblock headers within the same P slice, and all the luma and chroma DC coefficients belonging to all the macroblocks within that slice; (4) a block containing all the AC coefficients; and (5) a block containing all the motion vectors. The first three are encrypted whereas the last two are not. The second method, SEH264Algorithm2, relies on the use of multiple slices per coded frame. The algorithm searches the compressed video sequence for start codes (0x000001) and then encrypts the next N bits of data.
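
    The start-code search at the heart of SEH264Algorithm2 is easy to sketch. The fragment below is illustrative only: the function names are hypothetical and the XOR keystream merely stands in for whatever cipher the paper actually uses. It locates every 0x000001 prefix and transforms the bytes that follow:

```python
import hashlib

START_CODE = b"\x00\x00\x01"

def find_start_codes(stream: bytes) -> list:
    """Return the byte offset of every 0x000001 start-code prefix."""
    offsets, i = [], stream.find(START_CODE)
    while i != -1:
        offsets.append(i)
        i = stream.find(START_CODE, i + 1)
    return offsets

def xor_after_start_codes(stream: bytes, key: bytes, n_bytes: int = 4) -> bytes:
    """Placeholder cipher: XOR the n_bytes following each start code with a
    keystream derived from the key and the offset (NOT the paper's cipher)."""
    out = bytearray(stream)
    for off in find_start_codes(stream):
        begin = off + len(START_CODE)
        pad = hashlib.sha256(key + off.to_bytes(8, "big")).digest()
        for j in range(begin, min(begin + n_bytes, len(out))):
            out[j] ^= pad[j - begin]
    return bytes(out)
```

    Because the start codes themselves are left untouched, a decoder holding the key can re-locate them and invert the XOR. A real implementation would also account for H.264 emulation-prevention bytes (0x03), which this sketch ignores.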

  14. Compressive sensing for urban radar

    CERN Document Server

    Amin, Moeness

    2014-01-01

    With the emergence of compressive sensing and sparse signal reconstruction, approaches to urban radar have shifted toward relaxed constraints on signal sampling schemes in time and space, and to effectively address logistic difficulties in data acquisition. Traditionally, these challenges have hindered high resolution imaging by restricting both bandwidth and aperture, and by imposing uniformity and bounds on sampling rates.Compressive Sensing for Urban Radar is the first book to focus on a hybrid of two key areas: compressive sensing and urban sensing. It explains how reliable imaging, tracki

  15. On Normalized Compression Distance and Large Malware

    OpenAIRE

    Borbely, Rebecca Schuller

    2015-01-01

    Normalized Compression Distance (NCD) is a popular tool that uses compression algorithms to cluster and classify data in a wide range of applications. Existing discussions of NCD's theoretical merit rely on certain theoretical properties of compression algorithms. However, we demonstrate that many popular compression algorithms don't seem to satisfy these theoretical properties. We explore the relationship between some of these properties and file size, demonstrating that this theoretical pro...
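
    For reference, NCD is defined in terms of a real compressor C(·) as NCD(x, y) = (C(xy) − min(C(x), C(y))) / max(C(x), C(y)). A minimal sketch using zlib, one of the popular compressors such discussions typically rely on (function names are mine, not the paper's):

```python
import zlib

def c_size(x: bytes) -> int:
    """Compressed size of x under zlib at maximum compression level."""
    return len(zlib.compress(x, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: near 0 for similar inputs,
    near 1 for unrelated ones (in the idealized theory)."""
    cx, cy, cxy = c_size(x), c_size(y), c_size(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)
```

    One practical wrinkle appears at scale: zlib's 32 KB window means C(x + x) stops tracking C(x) once x is large, which is exactly the kind of size-dependent deviation from the theoretical properties that the paper explores for large malware samples.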

  16. A hybrid data compression approach for online backup service

    Science.gov (United States)

    Wang, Hua; Zhou, Ke; Qin, MingKang

    2009-08-01

    With the popularity of SaaS (Software as a Service), backup service has become a hot topic in storage applications. Given the large number of backup users, reducing the massive data load is a key problem for system designers, and data compression provides a good solution. Traditional data compression applications tend to adopt a single method, which has limitations in some respects: data stream compression can only realize intra-file compression, while de-duplication is used to eliminate inter-file redundant data, so compression efficiency cannot meet the needs of backup service software. This paper proposes a novel hybrid compression approach with two levels: global compression and block compression. The former eliminates redundant inter-file copies across different users; the latter adopts data stream compression technology to realize intra-file de-duplication. Several compression algorithms were adopted to measure the compression ratio and CPU time, and the suitability of different algorithms in particular situations is also analyzed. The performance analysis shows that great improvement is made through the hybrid compression policy.
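
    The two levels described, global de-duplication across users plus stream compression inside each file, can be sketched with a toy chunk store. This is a sketch under assumed names (fixed-size chunks, SHA-256 content addressing, zlib for the stream stage), not the paper's system:

```python
import hashlib
import zlib

class BackupStore:
    """Toy two-level store: chunk-hash dedup (global) + zlib per chunk (local)."""

    def __init__(self, chunk_size: int = 1024):
        self.chunk_size = chunk_size
        self.chunks = {}  # sha256 hex digest -> compressed chunk bytes

    def put(self, data: bytes) -> list:
        """Store data; return the list of chunk hashes (the file's 'recipe')."""
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in self.chunks:            # global (inter-file) dedup
                self.chunks[digest] = zlib.compress(chunk)  # local compression
            recipe.append(digest)
        return recipe

    def get(self, recipe: list) -> bytes:
        """Reassemble a file from its recipe."""
        return b"".join(zlib.decompress(self.chunks[h]) for h in recipe)
```

    Re-uploading an identical file adds no new chunks, while the zlib stage shrinks whatever unique chunks remain; a production system would use content-defined chunking rather than fixed offsets.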

  17. Induction of a shorter compression phase is correlated with a deeper chest compression during metronome-guided cardiopulmonary resuscitation: a manikin study.

    Science.gov (United States)

    Chung, Tae Nyoung; Bae, Jinkun; Kim, Eui Chung; Cho, Yun Kyung; You, Je Sung; Choi, Sung Wook; Kim, Ok Jun

    2013-07-01

    Recent studies have shown that there may be an interaction between duty cycle and other factors related to the quality of chest compression. The duty cycle represents the fraction of each cycle occupied by the compression phase. We aimed to investigate the effect of a shorter compression phase on average chest compression depth during metronome-guided cardiopulmonary resuscitation. Senior medical students performed 12 sets of chest compressions following guiding sounds, with three down-stroke patterns (normal, fast and very fast) and four rates (80, 100, 120 and 140 compressions/min) in random sequence. Repeated-measures analysis of variance was used to compare the average chest compression depth and duty cycle among the trials. The average chest compression depth increased and the duty cycle decreased in a linear fashion as the down-stroke pattern shifted from normal to very fast. Induction of a shorter compression phase is thus correlated with a deeper chest compression during metronome-guided cardiopulmonary resuscitation.

  18. Compression Characteristics of Solid Wastes as Backfill Materials

    OpenAIRE

    Meng Li; Jixiong Zhang; Rui Gao

    2016-01-01

    A self-made large-diameter steel compression chamber and a SANS material testing machine were chosen to perform a series of compression tests in order to fully understand the compression characteristics of differently graded filling gangue samples. The relationships between stress and deformation modulus and between stress and compression degree were analyzed comparatively. The results showed that, during compression, the deformation modulus of gangue grew linearly with stress; the overall relationship bet...

  19. Exploring compression techniques for ROOT IO

    Science.gov (United States)

    Zhang, Z.; Bockelman, B.

    2017-10-01

    ROOT provides a flexible format used throughout the HEP community. The number of use cases - from an archival data format to end-stage analysis - has required a number of tradeoffs to be exposed to the user. For example, a high “compression level” in the traditional DEFLATE algorithm will result in a smaller file (saving disk space) at the cost of slower decompression (costing CPU time when read). At the scale of the LHC experiments, poor design choices can result in terabytes of wasted space or wasted CPU time. We explore and attempt to quantify some of these tradeoffs. Specifically, we explore: the use of alternative compression algorithms to optimize for read performance; an alternative method of compressing individual events to allow efficient random access; and a new approach to whole-file compression. Quantitative results are given, as well as guidance on how to make compression decisions for different use cases.
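
    Two of the tradeoffs listed, compression level versus file size and per-event versus whole-buffer compression, can be quantified with a few lines of standard-library code. This is a sketch for intuition, not ROOT's IO layer:

```python
import zlib

def size_at_levels(data: bytes, levels=(1, 6, 9)) -> dict:
    """Compressed size (bytes) of one buffer at several DEFLATE levels."""
    return {level: len(zlib.compress(data, level)) for level in levels}

def whole_vs_per_event(events) -> tuple:
    """Compare compressing one concatenated buffer against compressing
    each event separately (the latter permits random access)."""
    whole = len(zlib.compress(b"".join(events), 6))
    per_event = sum(len(zlib.compress(e, 6)) for e in events)
    return whole, per_event
```

    Per-event compression pays a per-record container cost but lets a reader decompress one event without touching its neighbours, which is precisely the random-access tradeoff the paper examines.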

  20. Stress analysis of shear/compression test

    International Nuclear Information System (INIS)

    Nishijima, S.; Okada, T.; Ueno, S.

    1997-01-01

    Stress analysis has been made of glass fiber reinforced plastics (GFRP) subjected to combined shear and compression stresses by means of the finite element method. Two types of experimental set-up were analyzed, namely the parallel and series methods, in which the specimens were compressed by tilted jigs that enable the combined stresses to be applied. The modified Tsai-Hill criterion was employed to judge failure under the combined stresses, that is, the shear strength under compressive stress. Different failure envelopes were obtained for the two set-ups. In the parallel system the shear strength first increased with compressive stress and then decreased. On the contrary, in the series system the shear strength decreased monotonically with compressive stress. The difference is caused by the different stress distributions due to the different constraint conditions. The basic parameters controlling failure under the combined stresses will be discussed

  1. Wavelet-based audio embedding and audio/video compression

    Science.gov (United States)

    Mendenhall, Michael J.; Claypoole, Roger L., Jr.

    2001-12-01

    Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.

  2. Prevention of deep vein thrombosis in potential neurosurgical patients. A randomized trial comparing graduated compression stockings alone or graduated compression stockings plus intermittent pneumatic compression with control

    International Nuclear Information System (INIS)

    Turpie, A.G.; Hirsh, J.; Gent, M.; Julian, D.; Johnson, J.

    1989-01-01

    In a randomized trial of neurosurgical patients, groups wearing graduated compression stockings alone (group 1) or graduated compression stockings plus intermittent pneumatic compression (IPC) (group 2) were compared with an untreated control group in the prevention of deep vein thrombosis (DVT). In both active treatment groups, the graduated compression stockings were continued for 14 days or until hospital discharge, if earlier. In group 2, IPC was continued for seven days. All patients underwent DVT surveillance with iodine 125-labeled fibrinogen leg scanning and impedance plethysmography. Venography was carried out if either test became abnormal. Deep vein thrombosis occurred in seven (8.8%) of 80 patients in group 1, in seven (9.0%) of 78 patients in group 2, and in 16 (19.8%) of 81 patients in the control group. The observed differences among these rates are statistically significant. The results of this study indicate that graduated compression stockings alone or in combination with IPC are effective methods of preventing DVT in neurosurgical patients

  3. Compression-absorption (resorption) refrigerating machinery. Modeling of reactors; Machine frigorifique a compression-absorption (resorption). Modelisation des reacteurs

    Energy Technology Data Exchange (ETDEWEB)

    Lottin, O; Feidt, M; Benelmir, R [LEMTA-UHP Nancy-1, 54 - Vandoeuvre-les-Nancy (France)

    1998-12-31

    This paper is a series of transparencies presenting a comparative study of the thermal performances of different types of refrigerating machineries: di-thermal with vapor compression, tri-thermal with moto-compressor, with ejector, with free piston, adsorption-type, resorption-type, absorption-type, compression-absorption-type. A prototype of ammonia-water compression-absorption heat pump is presented and modeled. (J.S.)

  4. Compression-absorption (resorption) refrigerating machinery. Modeling of reactors; Machine frigorifique a compression-absorption (resorption). Modelisation des reacteurs

    Energy Technology Data Exchange (ETDEWEB)

    Lottin, O.; Feidt, M.; Benelmir, R. [LEMTA-UHP Nancy-1, 54 - Vandoeuvre-les-Nancy (France)

    1997-12-31

    This paper is a series of transparencies presenting a comparative study of the thermal performances of different types of refrigerating machineries: di-thermal with vapor compression, tri-thermal with moto-compressor, with ejector, with free piston, adsorption-type, resorption-type, absorption-type, compression-absorption-type. A prototype of ammonia-water compression-absorption heat pump is presented and modeled. (J.S.)

  5. Data Compression with Linear Algebra

    OpenAIRE

    Etler, David

    2015-01-01

    A presentation on the applications of linear algebra to image compression. Covers entropy, the discrete cosine transform, thresholding, quantization, and examples of images compressed with DCT. Given in Spring 2015 at Ocean County College as part of the honors program.
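
    The DCT-and-thresholding pipeline the presentation covers can be sketched in a few lines (an illustration under assumed naming, not the presentation's own material): build an orthonormal DCT-II basis, transform a block, discard small coefficients, and invert.

```python
import numpy as np

def dct_matrix(n: int = 8) -> np.ndarray:
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] /= np.sqrt(2)
    return M * np.sqrt(2 / n)

def compress_block(block: np.ndarray, keep: int = 10) -> np.ndarray:
    """2-D DCT, keep (at least) the `keep` largest-magnitude coefficients
    (ties at the threshold are retained), then invert."""
    D = dct_matrix(block.shape[0])
    coeffs = D @ block @ D.T              # forward 2-D DCT
    thresh = np.sort(np.abs(coeffs).ravel())[-keep]
    coeffs[np.abs(coeffs) < thresh] = 0.0  # thresholding step
    return D.T @ coeffs @ D               # inverse 2-D DCT
```

    Because the DCT concentrates the energy of smooth blocks into a handful of coefficients, aggressive thresholding changes such blocks very little, which is the basis of DCT image compression.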

  6. Tokamak plasma variations under rapid compression

    International Nuclear Information System (INIS)

    Holmes, J.A.; Peng, Y.K.M.; Lynch, S.J.

    1980-04-01

    Changes in plasmas undergoing large, rapid compressions are examined numerically over the following ranges of aspect ratio A: 3 ≥ A ≥ 1.5 for major radius compressions of circular, elliptical, and D-shaped cross sections; and 3 ≤ A ≤ 6 for minor radius compressions of circular and D-shaped cross sections. The numerical approach combines the computation of fixed-boundary MHD equilibria with single-fluid, flux-surface-averaged energy balance, particle balance, and magnetic flux diffusion equations. It is found that the dependences of plasma current I_p and poloidal beta β_p on the compression ratio C differ significantly in major radius compressions from those proposed by Furth and Yoshikawa. The present interpretation is that compression to small A dramatically increases the plasma current, which lowers β_p and makes the plasma more paramagnetic. Despite large values of toroidal beta β_T (≥ 30% with q_axis ≈ 1, q_edge ≈ 3), this tends to concentrate more toroidal flux near the magnetic axis, which means that a reduced minor radius is required to preserve the continuity of the toroidal flux function F at the plasma edge. Minor radius compressions to large aspect ratio agree well with the Furth-Yoshikawa scaling laws

  7. Benign compression fractures of the spine: signal patterns

    International Nuclear Information System (INIS)

    Ryu, Kyung Nam; Choi, Woo Suk; Lee, Sun Wha; Lim, Jae Hoon

    1992-01-01

    Fifteen patients with 38 compression fractures of the spine underwent magnetic resonance (MR) imaging. We retrospectively evaluated the MR images of these benign compression fractures. T1-weighted images showed four patterns: normal signal (21), band-like low signal (8), low signal with preservation of the peripheral portion of the body (8), and diffuse low signal throughout the vertebral body (1). The low-signal portions changed to high signal intensity on T2-weighted images. In 7 of the 15 patients (11 compression fractures) there was a history of trauma; the remaining 8 patients (27 compression fractures) had no history of trauma. Benign compression fractures of the spine reveal variable signal intensities on MR imaging. These patterns of benign compression fractures may be useful in the interpretation of MR images of the spine

  8. HVS-based medical image compression

    Energy Technology Data Exchange (ETDEWEB)

    Kai Xie [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)]. E-mail: xie_kai2001@sjtu.edu.cn; Jie Yang [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China); Min Zhuyue [CREATIS-CNRS Research Unit 5515 and INSERM Unit 630, 69621 Villeurbanne (France); Liang Lixiao [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)

    2005-07-01

    Introduction: With the promotion and application of digital imaging technology in the medical domain, the volume of medical images has grown rapidly. However, commonly used compression methods do not yield satisfying results. Methods: In this paper, building on existing experiments and conclusions, the lifting scheme is used for wavelet decomposition. The physical and anatomical structure of human vision is taken into account, and the contrast sensitivity function (CSF) is introduced as the central element of the human visual system (HVS) model; the main design points of the HVS model are then presented. On the basis of the multi-resolution analysis of the wavelet transform, the paper applies the HVS model, including its CSF characteristics, to the decorrelating transform and quantization of the image, and proposes a new HVS-based medical image compression model. Results: Experiments were carried out on medical images including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, the performance of SPIHT with respect to the PSNR metric is significantly higher than that of our algorithm, but the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates, and its coding/decoding time is less than that of SPIHT. Conclusions: The results show that under common objective conditions, our compression algorithm achieves better subjective visual quality and outperforms SPIHT in terms of compression ratio and coding/decoding time.

  9. HVS-based medical image compression

    International Nuclear Information System (INIS)

    Kai Xie; Jie Yang; Min Zhuyue; Liang Lixiao

    2005-01-01

    Introduction: With the promotion and application of digital imaging technology in the medical domain, the volume of medical images has grown rapidly. However, commonly used compression methods do not yield satisfying results. Methods: In this paper, building on existing experiments and conclusions, the lifting scheme is used for wavelet decomposition. The physical and anatomical structure of human vision is taken into account, and the contrast sensitivity function (CSF) is introduced as the central element of the human visual system (HVS) model; the main design points of the HVS model are then presented. On the basis of the multi-resolution analysis of the wavelet transform, the paper applies the HVS model, including its CSF characteristics, to the decorrelating transform and quantization of the image, and proposes a new HVS-based medical image compression model. Results: Experiments were carried out on medical images including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, the performance of SPIHT with respect to the PSNR metric is significantly higher than that of our algorithm, but the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates, and its coding/decoding time is less than that of SPIHT. Conclusions: The results show that under common objective conditions, our compression algorithm achieves better subjective visual quality and outperforms SPIHT in terms of compression ratio and coding/decoding time

  10. An efficient compression scheme for bitmap indices

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie

    2004-04-13

    When using an out-of-core indexing method to answer a query, it is generally assumed that the I/O cost dominates the overall query response time. Because of this, most research on indexing methods concentrates on reducing the sizes of indices. For bitmap indices, compression has been used for this purpose. However, in most cases, operations on these compressed bitmaps, mostly bitwise logical operations such as AND, OR, and NOT, spend more time in CPU than in I/O. To speed up these operations, a number of specialized bitmap compression schemes have been developed, the best known of which is the byte-aligned bitmap code (BBC). They are usually faster in performing logical operations than general-purpose compression schemes, but the time spent in CPU still dominates the total query response time. To reduce the query response time, we designed a CPU-friendly scheme named the word-aligned hybrid (WAH) code. In this paper, we prove that the sizes of WAH-compressed bitmap indices are about two words per row for a large range of attributes. This size is smaller than typical sizes of commonly used indices, such as a B-tree. Therefore, WAH-compressed indices are appropriate not only for low-cardinality attributes but also for high-cardinality attributes. In the worst case, the time to operate on compressed bitmaps is proportional to the total size of the bitmaps involved. The total size of the bitmaps required to answer a query on one attribute is proportional to the number of hits. These indicate that WAH-compressed bitmap indices are optimal. To verify their effectiveness, we generated bitmap indices for four different datasets and measured the response time of many range queries. Tests confirm that sizes of compressed bitmap indices are indeed smaller than B-tree indices, and query processing with WAH-compressed indices is much faster than with BBC-compressed indices, projection indices and B-tree indices. 
In addition, we also verified that the average query response time
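
    The run-length idea behind WAH can be illustrated with a toy encoder. This is a sketch, not the authors' implementation: real WAH packs fills and literals into 32-bit machine words, while this version keeps them as Python tuples over 31-bit groups, and the function name `wah_encode` is illustrative.

    ```python
    def wah_encode(bits, w=31):
        """Simplified word-aligned hybrid encoding of a bit string.

        The string is cut into w-bit groups; a run of identical all-0 or
        all-1 groups collapses into one ('fill', bit, run_length) word,
        anything else is kept verbatim as a ('lit', group) word."""
        chunks = [bits[i:i + w] for i in range(0, len(bits), w)]
        words, i = [], 0
        while i < len(chunks):
            c = chunks[i]
            if len(c) == w and c in ('0' * w, '1' * w):
                j = i
                while j < len(chunks) and chunks[j] == c:
                    j += 1
                words.append(('fill', c[0], j - i))  # whole run -> one word
                i = j
            else:
                words.append(('lit', c))
                i += 1
        return words
    ```

    Because logical operations can then proceed run-by-run instead of bit-by-bit, sparse bitmaps (the low-cardinality case) cost CPU time proportional to the number of runs, not the number of rows.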

  11. A New Approach for Fingerprint Image Compression

    Energy Technology Data Exchange (ETDEWEB)

    Mazieres, Bertrand

    1997-12-01

    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to about 2000 terabytes of information. Moreover, without compression, transmitting a 10 MB card over a 9600 baud connection takes 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a compression ratio better than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore in 1993 the FBI chose a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI's published standard specifies only the decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, the number of Huffman tables used for entropy coding. The first encoder used 9/7 filters for the wavelet transform and performed the bit allocation under a high-rate assumption. Since the transform is made into 64 subbands, quite a lot of bands receive only a few bits even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we discuss a new approach to the bit allocation that seems to make more sense where theory is concerned. Then we discuss some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we compare the performance of the new encoder to that of the first encoder.
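
    As a rough illustration of the scalar quantization stage in a WSQ-style coder, the sketch below implements a dead-zone uniform quantizer. The function names and the exact bin rule are illustrative assumptions, not the FBI specification's formulas.

    ```python
    def deadzone_quantize(x, step, z):
        """Dead-zone uniform scalar quantizer: coefficients with |x| < z/2
        map to bin 0 (the dead zone); the rest are uniformly binned with
        width `step`."""
        if abs(x) < z / 2:
            return 0
        sign = 1 if x > 0 else -1
        return sign * (int((abs(x) - z / 2) / step) + 1)

    def dequantize(q, step, z):
        """Reconstruct at the centre of the selected bin."""
        if q == 0:
            return 0.0
        sign = 1 if q > 0 else -1
        return sign * (z / 2 + (abs(q) - 0.5) * step)
    ```

    The step size per subband is what the bit-allocation procedure chooses: smaller steps in visually important subbands, larger steps (many zeros for the entropy coder) elsewhere.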

  12. A biological compression model and its applications.

    Science.gov (United States)

    Cao, Minh Duc; Dix, Trevor I; Allison, Lloyd

    2011-01-01

    A biological compression model, the expert model, is presented, which is superior to existing compression algorithms in both compression performance and speed. The model is able to compress whole eukaryotic genomes. Most importantly, the model provides a framework for knowledge discovery from biological data. It can be used for repeat element discovery, sequence alignment and phylogenetic analysis. We demonstrate that the model can handle statistically biased sequences and distantly related sequences where conventional knowledge discovery tools often fail.

  13. Optimisation algorithms for ECG data compression.

    Science.gov (United States)

    Haugland, D; Heber, J G; Husøy, J H

    1997-07-01

    The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a cubic dynamic programming algorithm. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one-half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.
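
    The sample-selection problem described above can be written as a shortest-path-style dynamic program. The sketch below uses hypothetical names and a plain O(m·n²) formulation rather than the paper's exact network model: it keeps m samples, always including both endpoints, so as to minimize the total squared linear-interpolation error.

    ```python
    def interp_error(x, a, b):
        """Total squared error of reconstructing x[a+1..b-1] by linear
        interpolation between the kept samples x[a] and x[b]."""
        err = 0.0
        for t in range(a + 1, b):
            interp = x[a] + (x[b] - x[a]) * (t - a) / (b - a)
            err += (x[t] - interp) ** 2
        return err

    def best_subset(x, m):
        """Keep m of the n samples (including both endpoints) minimizing
        total interpolation error; cost[i][j] is the cheapest way to keep
        i+1 samples with the (i+1)-th at index j."""
        n = len(x)
        INF = float('inf')
        cost = [[INF] * n for _ in range(m)]
        back = [[-1] * n for _ in range(m)]
        cost[0][0] = 0.0
        for i in range(1, m):
            for j in range(i, n):
                for k in range(i - 1, j):
                    c = cost[i - 1][k] + interp_error(x, k, j)
                    if c < cost[i][j]:
                        cost[i][j], back[i][j] = c, k
        # backtrack from the last sample
        idx, j = [n - 1], n - 1
        for i in range(m - 1, 0, -1):
            j = back[i][j]
            idx.append(j)
        return idx[::-1], cost[m - 1][n - 1]
    ```

    Unlike the heuristics it replaces, the DP provably returns the minimum-distortion subset for the chosen budget m, which is exactly the guarantee the abstract refers to.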

  14. The compressed word problem for groups

    CERN Document Server

    Lohrey, Markus

    2014-01-01

    The Compressed Word Problem for Groups provides a detailed exposition of known results on the compressed word problem, emphasizing efficient algorithms for the compressed word problem in various groups. The author presents the necessary background along with the most recent results on the compressed word problem to create a cohesive self-contained book accessible to computer scientists as well as mathematicians. Readers will quickly reach the frontier of current research, which makes the book especially appealing for students looking for a currently active research topic at the intersection of group theory and computer science. The word problem, introduced in 1910 by Max Dehn, is one of the most important decision problems in group theory. For many groups, highly efficient algorithms for the word problem exist. In recent years a new technique, based on data compression, has been developed to provide more efficient algorithms for word problems, by representing long words over group generators in a compres...

  15. ERGC: an efficient referential genome compression algorithm.

    Science.gov (United States)

    Saha, Subrata; Rajasekaran, Sanguthevar

    2015-11-01

    Genome sequencing has become faster and more affordable. Consequently, the number of available complete genomic sequences is increasing rapidly. As a result, the cost to store, process, analyze and transmit the data is becoming a bottleneck for research and future medical applications. So, the need for devising efficient data compression and data reduction techniques for biological sequencing data is growing by the day. Although a number of standard data compression algorithms exist, they are not efficient at compressing biological data. These generic algorithms do not exploit inherent properties of the sequencing data while compressing. To exploit statistical and information-theoretic properties of genomic sequences, we need specialized compression algorithms. Five different next-generation sequencing data compression problems have been identified and studied in the literature. We propose a novel algorithm for one of these problems known as reference-based genome compression. We have done extensive experiments using five real sequencing datasets. The results on real genomes show that our proposed algorithm is indeed competitive and performs better than the best known algorithms for this problem. It achieves compression ratios that are better than those of the currently best performing algorithms. The time to compress and decompress the whole genome is also very promising. The implementations are freely available for non-commercial purposes. They can be downloaded from http://engr.uconn.edu/∼rajasek/ERGC.zip. rajasek@engr.uconn.edu. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
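
    The general shape of reference-based compression can be sketched as follows. This toy encoder is not the ERGC algorithm itself (which uses hashing and handles divergence far more cleverly); it just greedily replaces target substrings with (position, length) pointers into the reference, which is the core idea such schemes exploit.

    ```python
    def ref_compress(target, reference, min_match=4):
        """Greedy reference-based encoder: substrings of the target found
        in the reference become ('M', pos, length) pointers; everything
        else is emitted as single ('L', char) literals.  Naive O(n*m)
        search, for illustration only."""
        out, i = [], 0
        while i < len(target):
            best_len, best_pos = 0, -1
            for p in range(len(reference)):
                l = 0
                while (i + l < len(target) and p + l < len(reference)
                       and target[i + l] == reference[p + l]):
                    l += 1
                if l > best_len:
                    best_len, best_pos = l, p
            if best_len >= min_match:
                out.append(('M', best_pos, best_len))
                i += best_len
            else:
                out.append(('L', target[i]))
                i += 1
        return out

    def ref_decompress(ops, reference):
        """Inverse mapping: expand pointers against the same reference."""
        parts = []
        for op in ops:
            parts.append(reference[op[1]:op[1] + op[2]] if op[0] == 'M' else op[1])
        return ''.join(parts)
    ```

    Because two genomes of the same species are overwhelmingly similar, almost the entire target collapses into a handful of long pointers, which is why reference-based schemes beat generic compressors on this data.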

  16. Nonpainful wide-area compression inhibits experimental pain.

    Science.gov (United States)

    Honigman, Liat; Bar-Bachar, Ofrit; Yarnitsky, David; Sprecher, Elliot; Granovsky, Yelena

    2016-09-01

    Compression therapy, a well-recognized treatment for lymphoedema and venous disorders, pressurizes limbs and generates massive non-noxious afferent sensory barrages. The aim of this study was to study whether such afferent activity has an analgesic effect when applied on the lower limbs, hypothesizing that larger compression areas will induce stronger analgesic effects, and whether this effect correlates with conditioned pain modulation (CPM). Thirty young healthy subjects received painful heat and pressure stimuli (47°C for 30 seconds, forearm; 300 kPa for 15 seconds, wrist) before and during 3 compression protocols of either SMALL (up to ankles), MEDIUM (up to knees), or LARGE (up to hips) compression areas. Conditioned pain modulation (heat pain conditioned by noxious cold water) was tested before and after each compression protocol. The LARGE protocol induced more analgesia for heat than the SMALL protocol (P < 0.001). The analgesic effect interacted with gender (P = 0.015). The LARGE protocol was more efficient for females, whereas the MEDIUM protocol was more efficient for males. Pressure pain was reduced by all protocols (P < 0.001) with no differences between protocols and no gender effect. Conditioned pain modulation was more efficient than the compression-induced analgesia. For the LARGE protocol, precompression CPM efficiency positively correlated with compression-induced analgesia. Large body area compression exerts an area-dependent analgesic effect on experimental pain stimuli. The observed correlation with pain inhibition in response to robust non-noxious sensory stimulation may suggest that compression therapy shares similar mechanisms with inhibitory pain modulation assessed through CPM.

  17. Density ratios in compressions driven by radiation pressure

    International Nuclear Information System (INIS)

    Lee, S.

    1988-01-01

    It has been suggested that in the cannonball scheme of laser compression the pellet may be considered to be compressed by the 'brute force' of the radiation pressure. For such a radiation-driven compression, an energy balance method is applied to give an equation fixing the radius compression ratio K, which is a key parameter for such intense compressions. A shock model is used to yield specific results. For a square-pulse driving power compressing a spherical pellet with a specific heat ratio of 5/3, a density compression ratio Γ of 27 is computed. Double (stepped) pulsing with linearly rising power enhances Γ to 1750. The value of Γ is not dependent on the absolute magnitude of the piston power, as long as this is large enough. Further enhancement of compression by multiple (stepped) pulsing becomes obvious. The enhanced compression increases the energy gain factor G for a 100 μm DT pellet driven by radiation power of 10^16 W from 6 for a square pulse power with 0.5 MJ absorbed energy to 90 for a double (stepped) linearly rising pulse with absorbed energy of 0.4 MJ, assuming perfect coupling efficiency. (author)
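
    For a spherical pellet, mass conservation ties the density compression ratio Γ to the radius compression ratio K. This one-line relation is standard kinematics, not the paper's full energy-balance derivation, but it is consistent with the quoted numbers:

    ```latex
    \rho_0 a_0^3 = \rho\, a^3 , \qquad
    K \equiv \frac{a_0}{a} , \qquad
    \Gamma \equiv \frac{\rho}{\rho_0} = K^3
    % e.g. a radius compression ratio K = 3 gives \Gamma = 3^3 = 27,
    % the value computed above for the square-pulse drive.
    ```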

  18. High Bit-Depth Medical Image Compression With HEVC.

    Science.gov (United States)

    Parikh, Saurin S; Ruiz, Damian; Kalva, Hari; Fernandez-Escribano, Gerardo; Adzic, Velibor

    2018-03-01

    Efficient storing and retrieval of medical images has direct impact on reducing costs and improving access in cloud-based health care services. JPEG 2000 is currently the commonly used compression format for medical images shared using the DICOM standard. However, new formats such as high efficiency video coding (HEVC) can provide better compression efficiency compared to JPEG 2000. Furthermore, JPEG 2000 is not suitable for efficiently storing image series and 3-D imagery. Using HEVC, a single format can support all forms of medical images. This paper presents the use of HEVC for diagnostically acceptable medical image compression, focusing on compression efficiency compared to JPEG 2000. Diagnostically acceptable lossy compression and complexity of high bit-depth medical image compression are studied. Based on an established medically acceptable compression range for JPEG 2000, this paper establishes acceptable HEVC compression range for medical imaging applications. Experimental results show that using HEVC can increase the compression performance, compared to JPEG 2000, by over 54%. Along with this, a new method for reducing computational complexity of HEVC encoding for medical images is proposed. Results show that HEVC intra encoding complexity can be reduced by over 55% with negligible increase in file size.

  19. Compression of surface myoelectric signals using MP3 encoding.

    Science.gov (United States)

    Chan, Adrian D C

    2011-01-01

    The potential of MP3 compression of surface myoelectric signals is explored in this paper. MP3 compression is a perceptual-based encoder scheme, used traditionally to compress audio signals. The ubiquity of MP3 compression (e.g., portable consumer electronics and internet applications) makes it an attractive option for remote monitoring and telemedicine applications. The effects of muscle site and contraction type are examined at different MP3 encoding bitrates. Results demonstrate that MP3 compression is sensitive to the myoelectric signal bandwidth, with larger signal distortion associated with myoelectric signals that have higher bandwidths. Compared to other myoelectric signal compression techniques reported previously (embedded zero-tree wavelet compression and adaptive differential pulse code modulation), MP3 compression demonstrates superior performance (i.e., lower percent residual differences for the same compression ratios).

  20. Compression and fast retrieval of SNP data.

    Science.gov (United States)

    Sambo, Francesco; Di Camillo, Barbara; Toffolo, Gianna; Cobelli, Claudio

    2014-11-01

    The increasing interest in rare genetic variants and epistatic genetic effects on complex phenotypic traits is currently pushing genome-wide association study design towards datasets of increasing size, both in the number of studied subjects and in the number of genotyped single nucleotide polymorphisms (SNPs). This, in turn, is leading to a compelling need for new methods for compression and fast retrieval of SNP data. We present a novel algorithm and file format for compressing and retrieving SNP data, specifically designed for large-scale association studies. Our algorithm is based on two main ideas: (i) compress linkage disequilibrium blocks in terms of differences with a reference SNP and (ii) compress reference SNPs exploiting information on their call rate and minor allele frequency. Tested on two SNP datasets and compared with several state-of-the-art software tools, our compression algorithm is shown to be competitive in terms of compression rate and to outperform all tools in terms of time to load compressed data. Our compression and decompression algorithms are implemented in a C++ library, are released under the GNU General Public License and are freely downloadable from http://www.dei.unipd.it/~sambofra/snpack.html. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
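
    Idea (i) above, compressing a linkage-disequilibrium block as differences against a reference SNP, can be sketched as follows. Function names are illustrative (not the released C++ API), and genotypes are assumed coded 0/1/2.

    ```python
    def diff_encode_block(block):
        """Encode an LD block of genotype vectors as the reference SNP
        (the block's first row) plus, for each other SNP, only the
        (subject, genotype) pairs where it differs from the reference."""
        ref = block[0]
        diffs = []
        for snp in block[1:]:
            diffs.append([(i, g) for i, (g, r) in enumerate(zip(snp, ref)) if g != r])
        return ref, diffs

    def diff_decode_block(ref, diffs):
        """Rebuild every SNP by copying the reference and patching diffs."""
        block = [list(ref)]
        for d in diffs:
            snp = list(ref)
            for i, g in d:
                snp[i] = g
            block.append(snp)
        return block
    ```

    In strong LD, neighbouring SNPs differ at very few subjects, so the diff lists are short; decoding a single SNP needs only the reference row and its own diff list, which is what makes retrieval fast.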

  1. Relationship between medical compression and intramuscular pressure as an explanation of a compression paradox.

    Science.gov (United States)

    Uhl, J-F; Benigni, J-P; Cornu-Thenard, A; Fournier, J; Blin, E

    2015-06-01

    Using standing magnetic resonance imaging (MRI), we recently showed that medical compression providing an interface pressure (IP) of 22 mmHg significantly compressed the deep veins of the leg but not, paradoxically, superficial varicose veins. To provide an explanation for this compression paradox, we studied the correlation between the IP exerted by medical compression and the intramuscular pressure (IMP). In 10 legs of five healthy subjects, we studied the effects of different IPs on the IMP of the medial gastrocnemius muscle. The IP produced by a cuff manometer was verified by a Picopress® device. The IMP was measured with a 21G needle connected to a manometer. Pressure data were recorded in the prone and standing positions with cuff manometer pressures from 0 to 50 mmHg. In the prone position, an IP of less than 20 mmHg did not significantly change the IMP. In contrast, a perfect linear correlation with the IMP (r = 0.99) was observed with an IP from 20 to 50 mmHg. We found the same correlation in the standing position, where an IP of 22 mmHg produced a significant IMP increase from 32 to 54 mmHg. At the same time, the subcutaneous pressure is only provided by the compression device, on healthy subjects. In other words, the subcutaneous pressure plus the IP is only a little higher than 22 mmHg, a pressure too low to reduce the caliber of the superficial veins. This is in accordance with our standing MRI 3D anatomical study, which showed that, paradoxically, when applying low pressures (IP), the deep veins are compressed while the superficial veins are not. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  2. Highly Efficient Compression Algorithms for Multichannel EEG.

    Science.gov (United States)

    Shaw, Laxmi; Rahman, Daleef; Routray, Aurobinda

    2018-05-01

    The difficulty associated with processing and understanding the high dimensionality of electroencephalogram (EEG) data requires developing efficient and robust compression algorithms. In this paper, different lossless compression techniques of single and multichannel EEG data, including Huffman coding, arithmetic coding, Markov predictor, linear predictor, context-based error modeling, multivariate autoregression (MVAR), and a low complexity bivariate model have been examined and their performances have been compared. Furthermore, a high compression algorithm named general MVAR and a modified context-based error modeling for multichannel EEG have been proposed. The resulting compression algorithm produces a higher relative compression ratio of 70.64% on average compared with the existing methods, and in some cases, it goes up to 83.06%. The proposed methods are designed to compress a large amount of multichannel EEG data efficiently so that the data storage and transmission bandwidth can be effectively used. These methods have been validated using several experimental multichannel EEG recordings of different subjects and publicly available standard databases. The satisfactory parametric measures of these methods, namely percent-root-mean square distortion, peak signal-to-noise ratio, root-mean-square error, and cross correlation, show their superiority over the state-of-the-art compression methods.
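
    A minimal example of the predictive-coding idea underlying such lossless schemes, using a generic order-2 linear predictor rather than the proposed MVAR model: the predictor shrinks the residual alphabet, so the entropy coder that follows needs fewer bits per sample.

    ```python
    import math
    from collections import Counter

    def predict_residuals(x):
        """Order-2 linear predictor x_hat[n] = 2*x[n-1] - x[n-2]; returns
        the residuals.  The first two entries restart the recursion, so
        the mapping is exactly invertible (lossless)."""
        res = list(x[:1])
        if len(x) > 1:
            res.append(x[1] - x[0])
        for n in range(2, len(x)):
            res.append(x[n] - (2 * x[n - 1] - x[n - 2]))
        return res

    def reconstruct(res):
        """Exact inverse of predict_residuals."""
        x = list(res[:1])
        if len(res) > 1:
            x.append(res[1] + x[0])
        for n in range(2, len(res)):
            x.append(res[n] + 2 * x[n - 1] - x[n - 2])
        return x

    def empirical_entropy(vals):
        """Shannon entropy (bits/sample) of the empirical distribution,
        a lower bound on what a zeroth-order entropy coder can achieve."""
        counts = Counter(vals)
        n = len(vals)
        return -sum(c / n * math.log2(c / n) for c in counts.values())
    ```

    The multichannel methods in the paper extend this by also predicting each channel from its neighbours, shrinking the residuals further.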

  3. Halftoning processing on a JPEG-compressed image

    Science.gov (United States)

    Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent

    2003-12-01

    Digital image processing algorithms are usually designed for the raw format, that is, for an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; the result of the processing is then re-compressed for further transfer or storage. This change of data representation is resource-consuming in terms of computation, time and memory usage. In the wide-format printing industry, this becomes an important issue: e.g. a 1 m² input color image scanned at 600 dpi exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation on the compressed format. This paper presents an innovative application of the halftoning operation by screening, applied to a JPEG-compressed image. This compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain. The algorithm is illustrated by examples for different halftone masks. A pre-sharpening operation applied to a JPEG-compressed low-quality image is also described; it de-noises the image and enhances its contours.
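
    For reference, raw-domain screening, the operation the paper moves into the DCT domain, is just a per-pixel comparison against a tiled threshold mask. A minimal sketch (illustrative name; images as nested lists of 0-255 values):

    ```python
    def screen_halftone(img, mask):
        """Classical screening: tile the threshold mask over the image and
        binarize each pixel against the co-located threshold value."""
        mh, mw = len(mask), len(mask[0])
        return [[1 if img[y][x] > mask[y % mh][x % mw] else 0
                 for x in range(len(img[0]))]
                for y in range(len(img))]
    ```

    The paper's contribution is computing an equivalent of this threshold test on DCT coefficients, so the JPEG stream never has to be fully decompressed; that step is not reproduced here.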

  4. Considerations and Algorithms for Compression of Sets

    DEFF Research Database (Denmark)

    Larsson, Jesper

    We consider compression of unordered sets of distinct elements. After a discussion of the general problem, we focus on compressing sets of fixed-length bitstrings in the presence of statistical information. We survey techniques from previous work, suggesting some adjustments, and propose a novel compression algorithm that allows transparent incorporation of various estimates for the probability distribution. Our experimental results support the conclusion that set compression can benefit from incorporating statistics, using our method or variants of previously known techniques.
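
    One classical way to exploit the unordered-set structure, sketched here under simple assumptions (elements treated as non-negative integers, i.e. fixed-length bitstrings): since a set carries no order, sorting is free, and coding the gaps between consecutive elements is cheaper than coding the raw values. The names are illustrative, not the paper's algorithm.

    ```python
    def encode_set(elems):
        """Sort the set and store the first element followed by the gaps
        between consecutive elements; gaps are small numbers, so a
        subsequent entropy coder spends fewer bits on them."""
        xs = sorted(elems)
        return [xs[0]] + [b - a for a, b in zip(xs, xs[1:])]

    def decode_set(gaps):
        """Rebuild the set by accumulating the gaps."""
        out, acc = set(), 0
        for g in gaps:
            acc += g
            out.add(acc)
        return out
    ```

    Statistical information enters when the gaps are entropy-coded: an estimate of the element distribution induces a gap distribution, which is the kind of estimate the proposed method incorporates transparently.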

  5. Crystal and Particle Engineering Strategies for Improving Powder Compression and Flow Properties to Enable Continuous Tablet Manufacturing by Direct Compression.

    Science.gov (United States)

    Chattoraj, Sayantan; Sun, Changquan Calvin

    2018-04-01

    Continuous manufacturing of tablets has many advantages, including batch size flexibility, demand-adaptive scale-up or scale-down, consistent product quality, a small operational footprint, and increased manufacturing efficiency. Its simplicity makes direct compression the most suitable process for continuous tablet manufacturing. However, deficiencies in powder flow and compression of active pharmaceutical ingredients (APIs) limit the range of drug loading that can routinely be considered for direct compression. For the widespread adoption of continuous direct compression, effective API engineering strategies to address powder flow and compression problems are needed. Appropriate implementation of these strategies would facilitate the design of high-quality robust drug products, as stipulated by the Quality-by-Design framework. Here, several crystal and particle engineering strategies for improving powder flow and compression properties are summarized. The focus is on the underlying materials science, which is the foundation for effective API engineering to enable successful continuous manufacturing by the direct compression process. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  6. Compressed gas fuel storage system

    Science.gov (United States)

    Wozniak, John J.; Tiller, Dale B.; Wienhold, Paul D.; Hildebrand, Richard J.

    2001-01-01

    A compressed gas vehicle fuel storage system comprised of a plurality of compressed gas pressure cells supported by shock-absorbing foam positioned within a shape-conforming container. The container is dimensioned relative to the compressed gas pressure cells whereby a radial air gap surrounds each compressed gas pressure cell. The radial air gap allows pressure-induced expansion of the pressure cells without resulting in the application of pressure to adjacent pressure cells or physical pressure to the container. The pressure cells are interconnected by a gas control assembly including a thermally activated pressure relief device, a manual safety shut-off valve, and means for connecting the fuel storage system to a vehicle power source and a refueling adapter. The gas control assembly is enclosed by a protective cover attached to the container. The system is attached to the vehicle with straps to enable the chassis to deform as intended in a high-speed collision.

  7. Compressed sensing for distributed systems

    CERN Document Server

    Coluccia, Giulio; Magli, Enrico

    2015-01-01

    This book presents a survey of the state-of-the art in the exciting and timely topic of compressed sensing for distributed systems. It has to be noted that, while compressed sensing has been studied for some time now, its distributed applications are relatively new. Remarkably, such applications are ideally suited to exploit all the benefits that compressed sensing can provide. The objective of this book is to provide the reader with a comprehensive survey of this topic, from the basic concepts to different classes of centralized and distributed reconstruction algorithms, as well as a comparison of these techniques. This book collects different contributions on these aspects. It presents the underlying theory in a complete and unified way for the first time, presenting various signal models and their use cases. It contains a theoretical part collecting latest results in rate-distortion analysis of distributed compressed sensing, as well as practical implementations of algorithms obtaining performance close to...

  8. A review on compressed pattern matching

    Directory of Open Access Journals (Sweden)

    Surya Prakash Mishra

    2016-09-01

    Compressed pattern matching (CPM) refers to the task of locating all occurrences of a pattern (or a set of patterns) in the body of compressed text. In this type of matching, the pattern itself may or may not be compressed. CPM is very useful in handling large volumes of data, especially over the network. It has many applications in computational biology, where it is useful in finding similar trends in DNA sequences; network intrusion detection; big data analytics; etc. Various solutions have been provided by researchers in which the pattern is matched directly over uncompressed text. Such solutions require a lot of space and consume a lot of time when handling big data. Many researchers have proposed efficient solutions for compression, but very few exist for pattern matching over compressed text. Considering the future trend, where data size is increasing exponentially day by day, CPM has become a desirable task. This paper presents a critical review of recent techniques for compressed pattern matching. The covered techniques include word-based Huffman codes, word-based tagged codes, and wavelet-tree-based indexing. We present a comparative analysis of all the techniques mentioned above and highlight their advantages and disadvantages.
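
    The word-based tagged codes mentioned above allow searching directly in the compressed text: because the first byte of every codeword carries a tag bit, a compressed pattern can never match at a codeword-misaligned position. A toy sketch with fixed two-byte codewords and hypothetical names (real tagged codes use variable-length codewords assigned by word frequency):

    ```python
    def build_code(words):
        """Assign each distinct word a two-byte codeword whose first byte
        has the high bit set (the 'tag'); second bytes never carry the
        tag, so codeword boundaries are self-evident in the byte stream."""
        vocab = {}
        for w in words:
            if w not in vocab:
                i = len(vocab)
                vocab[w] = bytes([0x80 | (i >> 7), i & 0x7F])  # toy: < 2**14 words
        return vocab

    def compress(words, vocab):
        return b''.join(vocab[w] for w in words)

    def search_compressed(text_words, pattern_words):
        """Compress the pattern with the text's vocabulary and scan the
        compressed text directly; tag bits rule out misaligned matches."""
        vocab = build_code(text_words)
        if any(w not in vocab for w in pattern_words):
            return False  # a pattern word never occurs in the text
        ctext = compress(text_words, vocab)
        cpat = compress(pattern_words, vocab)
        return ctext.find(cpat) != -1
    ```

    The pattern is thus searched without ever decompressing the text, which is the property that makes these codes attractive for CPM over large collections.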

  9. 30 CFR 57.13020 - Use of compressed air.

    Science.gov (United States)

    2010-07-01

    30 CFR (Mineral Resources), Mine Safety and Health: Safety and Health Standards for Underground Metal and Nonmetal Mines, Compressed Air and Boilers, § 57.13020 Use of compressed air. At no time shall compressed air be directed toward a...

  10. Relationship between the edgewise compression strength of ...

    African Journals Online (AJOL)

    The results of this study were used to determine the linear regression constants in the Maltenfort model by correlating the measured board edgewise compression strength (ECT) with the predicted strength, using the paper components' compression strengths, measured with the short-span compression test (SCT) and the ...

  11. ROI-based DICOM image compression for telemedicine

    Indian Academy of Sciences (India)

    ground and reconstruct the image portions losslessly. The compressed image can ... If the image is compressed by 8:1 compression without any perceptual distortion, the ... Figure 2. Cross-sectional view of medical image (statistical representation). ... The Integer Wavelet Transform (IWT) is used to have lossless processing.

  12. Quasi-isentropic compression using compressed water flow generated by underwater electrical explosion of a wire array

    Science.gov (United States)

    Gurovich, V.; Virozub, A.; Rososhek, A.; Bland, S.; Spielman, R. B.; Krasik, Ya. E.

    2018-05-01

    A major experimental research area in material equation-of-state today involves the use of off-Hugoniot measurements rather than shock experiments that give only Hugoniot data. There is a wide range of applications using quasi-isentropic compression of matter including the direct measurement of the complete isentrope of materials in a single experiment and minimizing the heating of flyer plates for high-velocity shock measurements. We propose a novel approach to generating quasi-isentropic compression of matter. Using analytical modeling and hydrodynamic simulations, we show that a working fluid composed of compressed water, generated by an underwater electrical explosion of a planar wire array, might be used to efficiently drive the quasi-isentropic compression of a copper target to pressures ~2 × 10^11 Pa without any complex target designs.

  13. Eccentric crank variable compression ratio mechanism

    Science.gov (United States)

    Lawrence, Keith Edward [Kobe, JP; Moser, William Elliott [Peoria, IL; Roozenboom, Stephan Donald [Washington, IL; Knox, Kevin Jay [Peoria, IL

    2008-05-13

    A variable compression ratio mechanism for an internal combustion engine that has an engine block and a crankshaft is disclosed. The variable compression ratio mechanism has a plurality of eccentric disks configured to support the crankshaft. Each of the plurality of eccentric disks has at least one cylindrical portion annularly surrounded by the engine block. The variable compression ratio mechanism also has at least one actuator configured to rotate the plurality of eccentric disks.

  14. How Wage Compression Affects Job Turnover

    OpenAIRE

    Heyman, Fredrik

    2008-01-01

    I use Swedish establishment-level panel data to test Bertola and Rogerson’s (1997) hypothesis of a positive relation between the degree of wage compression and job reallocation. Results indicate that the effect of wage compression on job turnover is positive and significant in the manufacturing sector. The wage compression effect is stronger on job destruction than on job creation, consistent with downward wage rigidity. Further results include a strong positive relationship between the fract...

  15. CoGI: Towards Compressing Genomes as an Image.

    Science.gov (United States)

    Xie, Xiaojing; Zhou, Shuigeng; Guan, Jihong

    2015-01-01

    Genomic science is now facing an explosive increase of data thanks to the fast development of sequencing technology. This situation poses serious challenges to genomic data storage and transferring. It is desirable to compress data to reduce storage and transferring cost, and thus to boost data distribution and utilization efficiency. Up to now, a number of algorithms / tools have been developed for compressing genomic sequences. Unlike the existing algorithms, most of which treat genomes as one-dimensional text strings and compress them based on dictionaries or probability models, this paper proposes a novel approach called CoGI (the abbreviation of Compressing Genomes as an Image) for genome compression, which transforms the genomic sequences to a two-dimensional binary image (or bitmap), then applies a rectangular partition coding algorithm to compress the binary image. CoGI can be used as either a reference-based compressor or a reference-free compressor. For the former, we develop two entropy-based algorithms to select a proper reference genome. Performance evaluation is conducted on various genomes. Experimental results show that the reference-based CoGI significantly outperforms two state-of-the-art reference-based genome compressors GReEn and RLZ-opt in both compression ratio and compression efficiency. It also achieves comparable compression ratio but two orders of magnitude higher compression efficiency in comparison with XM--one state-of-the-art reference-free genome compressor. Furthermore, our approach performs much better than Gzip--a general-purpose and widely-used compressor, in both compression speed and compression ratio. So, CoGI can serve as an effective and practical genome compressor. The source code and other related documents of CoGI are available at: http://admis.fudan.edu.cn/projects/cogi.htm.
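
    The bitmap transformation at the heart of CoGI can be sketched in a few lines. The 2-bit base encoding, the row width, and the run-length coder standing in for the paper's rectangular partition coder are all illustrative assumptions, not the exact published scheme:

```python
# Sketch of the CoGI idea: map a genomic sequence to a two-dimensional
# binary image, then compress the bitmap. The 2-bit base code and the
# run-length coder below are illustrative stand-ins.

BASE_BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}

def genome_to_bitmap(seq, width=8):
    """Encode each base as 2 bits and fold the bit string into rows."""
    bits = "".join(BASE_BITS[b] for b in seq)
    bits += "0" * (-len(bits) % width)          # pad the last row
    return [bits[i:i + width] for i in range(0, len(bits), width)]

def run_length_encode(bitmap):
    """Toy stand-in for the partition coder: RLE over the flat bitmap."""
    flat = "".join(bitmap)
    runs, count = [], 1
    for prev, cur in zip(flat, flat[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append((prev, count))
            count = 1
    runs.append((flat[-1], count))
    return runs

bitmap = genome_to_bitmap("ACGTACGTAAAA")
runs = run_length_encode(bitmap)
```

    Long homogeneous regions of the genome become large uniform areas of the image, which is what makes the subsequent binary-image coding effective.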

  16. 30 CFR 56.13020 - Use of compressed air.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Use of compressed air. 56.13020 Section 56... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES Compressed Air and Boilers § 56.13020 Use of compressed air. At no time shall compressed air be directed toward a person...

  17. Cloud Optimized Image Format and Compression

    Science.gov (United States)

    Becker, P.; Plesea, L.; Maurer, T.

    2015-04-01

    Cloud based image storage and processing requires re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assumed fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These concepts no longer truly hold in cloud based elastic storage and computation environments. This paper will provide details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volumes stored and reduces the data transferred, but the reduced data size must be balanced with the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include a simple-to-implement algorithm that enables it to be accessed efficiently using JavaScript. Combining this new cloud based image storage format and compression will help resolve some of the challenges of big image data on the internet.

  18. An efficient adaptive arithmetic coding image compression technology

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Yun Jiao-Jiao; Zhang Yong-Lei

    2011-01-01

    This paper proposes an efficient lossless image compression scheme for still images based on an adaptive arithmetic coding compression algorithm. The algorithm increases the image coding compression rate and ensures the quality of the decoded image by combining an adaptive probability model with predictive coding. Using an adaptive model for each encoded image block dynamically estimates that block's symbol probabilities, and the decoded image block can accurately recover the encoded image according to the code book information. We adopt an adaptive arithmetic coding algorithm for image compression that greatly improves the image compression rate. The results show that it is an effective compression technology.
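
    The adaptive probability model that drives such a coder can be sketched as follows. The Laplace-style initialization and the binary alphabet are illustrative assumptions, and the arithmetic coder's interval-narrowing step is omitted for brevity:

```python
# Sketch of an adaptive probability model: per-block symbol counts are
# updated as symbols are coded, so the estimated probabilities track the
# statistics of the block being encoded.
from collections import Counter

class AdaptiveModel:
    def __init__(self, alphabet):
        # Start every symbol at count 1 so no probability is ever zero.
        self.counts = Counter({s: 1 for s in alphabet})

    def prob(self, symbol):
        return self.counts[symbol] / sum(self.counts.values())

    def update(self, symbol):
        self.counts[symbol] += 1

model = AdaptiveModel("01")
p_before = model.prob("0")
for s in "0000":            # a block dominated by zeros
    model.update(s)
p_after = model.prob("0")   # probability adapts upward for this block
```

    Because encoder and decoder update the model identically after each symbol, no probability table needs to be transmitted.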

  19. Modeling the mechanical and compression properties of polyamide/elastane knitted fabrics used in compression sportswear

    NARCIS (Netherlands)

    Maqsood, Muhammad

    2016-01-01

    A compression sportswear fabric should have excellent stretch and recovery properties in order to improve the performance of the sportsman. The objective of this study was to investigate the effect of elastane linear density and loop length on the stretch, recovery, and compression properties of the

  20. An analysis of the efficacy of bag-valve-mask ventilation and chest compression during different compression-ventilation ratios in manikin-simulated paediatric resuscitation.

    Science.gov (United States)

    Kinney, S B; Tibballs, J

    2000-01-01

    The ideal chest compression and ventilation ratio for children during performance of cardiopulmonary resuscitation (CPR) has not been determined. The efficacy of chest compression and ventilation during compression-ventilation ratios of 5:1, 10:2 and 15:2 was examined. Eighteen nurses, working in pairs, were instructed to provide chest compression and bag-valve-mask ventilation for 1 min with each ratio in random order on a child-sized manikin. The subjects had been previously taught paediatric CPR within the last 3 or 5 months. The efficacy of ventilation was assessed by measurement of the expired tidal volume and the number of breaths provided. The rate of chest compression was guided by a metronome set at 100/min. The efficacy of chest compressions was assessed by measurement of the rate and depth of compression. There was no significant difference in the mean tidal volume or the percentage of effective chest compressions delivered for each compression-ventilation ratio. The number of breaths delivered was greatest with the ratio of 5:1. The percentage of effective chest compressions was equal with all three methods but the number of effective chest compressions was greatest with a ratio of 5:1. This study supports the use of a compression-ventilation ratio of 5:1 during two-rescuer paediatric cardiopulmonary resuscitation.

  1. A worked example using the SP249 advanced assessment route: the carregado unit 6 final superheater outlet header

    Energy Technology Data Exchange (ETDEWEB)

    Brear, J.M.; Jarvis, P.; Jones, G.T. [ERA Technology (United Kingdom); Jovanovic, A.S.; Friemann, M.; Kluttig, B.; Ober, M. [Stuttgart Univ. (Germany). Staatliche Materialpruefungsanstalt; Batista, A. [EDP-PROET (Portugal); Araujo, C.L. de; Pires, A. [ISQ (Portugal)

    1995-12-31

    As a key part of its information resource, the SP249 Project contains a number of case studies, drawn from the collective experience of the partners and from the literature. The user of the system may search this database by component type and material or by assessment method, to find a practical example close to his own current problem. He can thus draw upon past experience as well as state-of-the-art knowledge to obtain advice. To facilitate this, a set of keywords has been defined to create links between the case studies and the overall assessment methodology. These relate to damage and failure types and causes as well as to techniques of investigation and assessment. For demonstration, validation and didactic purposes, certain of these case studies (one per end-user utility in the project) have been chosen for full elaboration as 'worked examples'. These real component evaluations are worked through by an expert group from the project team so as to provide the utility staff with 'hands-on' training in both the practical techniques of component life assessment and the use of the knowledge-based system. The exercise also provides a valuable opportunity for feedback, allowing refinement of the technology package and the software. Amongst these worked examples, an assessment of EDP's Carregado Unit 6 Final Superheater Outlet Header has been chosen for special attention, as the operators have kindly allowed direct access to the component during two outages. This article summarises the Carregado case study. It is intended to serve as a demonstration of how the Advanced Assessment Route (AAR) is used in practice. The actions performed and results obtained are summarised.

  2. Task-oriented lossy compression of magnetic resonance images

    Science.gov (United States)

    Anderson, Mark C.; Atkins, M. Stella; Vaisey, Jacques

    1996-04-01

    A new task-oriented image quality metric is used to quantify the effects of distortion introduced into magnetic resonance images by lossy compression. This metric measures the similarity between a radiologist's manual segmentation of pathological features in the original images and the automated segmentations performed on the original and compressed images. The images are compressed using a general wavelet-based lossy image compression technique, embedded zerotree coding, and segmented using a three-dimensional stochastic model-based tissue segmentation algorithm. The performance of the compression system is then enhanced by compressing different regions of the image volume at different bit rates, guided by prior knowledge about the location of important anatomical regions in the image. Application of the new system to magnetic resonance images is shown to produce compression results superior to the conventional methods, both subjectively and with respect to the segmentation similarity metric.

  3. Light-weight reference-based compression of FASTQ data.

    Science.gov (United States)

    Zhang, Yongpeng; Li, Linsen; Yang, Yanli; Yang, Xiao; He, Shan; Zhu, Zexuan

    2015-06-09

    The exponential growth of next generation sequencing (NGS) data has posed big challenges to data storage, management and archive. Data compression is one of the effective solutions, where reference-based compression strategies can typically achieve superior compression ratios compared to the ones not relying on any reference. This paper presents a lossless light-weight reference-based compression algorithm namely LW-FQZip to compress FASTQ data. The three components of any given input, i.e., metadata, short reads and quality score strings, are first parsed into three data streams in which the redundancy information are identified and eliminated independently. Particularly, well-designed incremental and run-length-limited encoding schemes are utilized to compress the metadata and quality score streams, respectively. To handle the short reads, LW-FQZip uses a novel light-weight mapping model to fast map them against external reference sequence(s) and produce concise alignment results for storage. The three processed data streams are then packed together with some general purpose compression algorithms like LZMA. LW-FQZip was evaluated on eight real-world NGS data sets and achieved compression ratios in the range of 0.111-0.201. This is comparable or superior to other state-of-the-art lossless NGS data compression algorithms. LW-FQZip is a program that enables efficient lossless FASTQ data compression. It contributes to the state of art applications for NGS data storage and transmission. LW-FQZip is freely available online at: http://csse.szu.edu.cn/staff/zhuzx/LWFQZip.

  4. Physics-Based Modeling of Compressible Turbulence

    Science.gov (United States)

    2016-11-07

    AFRL-AFOSR-VA-TR-2016-0345: Physics-Based Modeling of Compressible Turbulence. Parviz Moin, Leland Stanford Junior University, CA. Final report (09/13/2016) on the AFOSR project (FA9550-11-1-0111) entitled Physics-Based Modeling of Compressible Turbulence; the period of performance began June 15, 2011.

  5. Comparison of compression properties of stretchable knitted fabrics and bi-stretch woven fabrics for compression garments

    NARCIS (Netherlands)

    Maqsood, Muhammad

    2017-01-01

    Stretchable fabrics have diverse applications ranging from casual apparel to performance sportswear and compression therapy. Compression therapy is the universally accepted treatment for the management of hypertrophic scarring after severe burns. Mostly stretchable knitted fabrics are used in

  6. Compressed Air/Vacuum Transportation Techniques

    Science.gov (United States)

    Guha, Shyamal

    2011-03-01

    General theory of compressed air/vacuum transportation will be presented. In this transportation, a vehicle (such as an automobile or a rail car) is powered either by compressed air or by air at near-vacuum pressure. Four versions of such transportation are feasible. In all versions, a "c-shaped" plastic or ceramic pipe lies buried a few inches under the ground surface. This pipe carries compressed air or air at near-vacuum pressure. In type I transportation, a vehicle draws compressed air (or vacuum) from this buried pipe. Using a turbine or a reciprocating air cylinder, mechanical power is generated from the compressed air (or from the vacuum). This mechanical power, transferred to the wheels of an automobile (or a rail car), drives the vehicle. In type II-IV transportation techniques, a horizontal force is generated inside the plastic (or ceramic) pipe. A set of vertical and horizontal steel bars is used to transmit this force to the automobile on the road (or to a rail car on a rail track). The proposed transportation system has the following merits: it is virtually accident-free, highly energy-efficient and pollution-free, and it will not contribute to carbon dioxide emissions. Some developmental work on this transportation will be needed before it can be used by the travelling public. The entire transportation system could be computer controlled.

  7. Wave energy devices with compressible volumes.

    Science.gov (United States)

    Kurniawan, Adi; Greaves, Deborah; Chaplin, John

    2014-12-08

    We present an analysis of wave energy devices with air-filled compressible submerged volumes, where variability of volume is achieved by means of a horizontal surface free to move up and down relative to the body. An analysis of bodies without power take-off (PTO) systems is first presented to demonstrate the positive effects a compressible volume could have on the body response. Subsequently, two compressible device variations are analysed. In the first variation, the compressible volume is connected to a fixed volume via an air turbine for PTO. In the second variation, a water column separates the compressible volume from another volume, which is fitted with an air turbine open to the atmosphere. Both floating and bottom-fixed axisymmetric configurations are considered, and linear analysis is employed throughout. Advantages and disadvantages of each device are examined in detail. Some configurations with displaced volumes less than 2000 m^3 and with constant turbine coefficients are shown to be capable of achieving 80% of the theoretical maximum absorbed power over a wave period range of about 4 s.

  8. Isostatic compression of buffer blocks. Middle scale

    International Nuclear Information System (INIS)

    Ritola, J.; Pyy, E.

    2012-01-01

    Manufacturing of buffer components using isostatic compression method has been studied in small scale in 2008 (Laaksonen 2010). These tests included manufacturing of buffer blocks using different bentonite materials and different compression pressures. Isostatic mould technology was also tested, along with different methods to fill the mould, such as vibration and partial vacuum, as well as a stepwise compression of the blocks. The development of manufacturing techniques has continued with small-scale (30 %) blocks (diameter 600 mm) in 2009. This was done in a separate project: Isostatic compression, manufacturing and testing of small scale (D = 600 mm) buffer blocks. The research on the isostatic compression method continued in 2010 in a project aimed to test and examine the isostatic manufacturing process of buffer blocks at 70 % scale (block diameter 1200 to 1300 mm), and the aim was to continue in 2011 with full-scale blocks (diameter 1700 mm). A total of nine bentonite blocks were manufactured at 70 % scale, of which four were ring-shaped and the rest were cylindrical. It is currently not possible to manufacture full-scale blocks, because there is no sufficiently large isostatic press available. However, such a compression unit is expected to be possible to use in the near future. The test results of bentonite blocks, produced with an isostatic pressing method at different presses and at different sizes, suggest that the technical characteristics, for example bulk density and strength values, are somewhat independent of the size of the block, and that the blocks have fairly homogenous characteristics. Water content and compression pressure are the two most important properties determining the characteristics of the compressed blocks. By adjusting these two properties it is fairly easy to produce blocks at a desired density. The commonly used compression pressure in the manufacturing of bentonite blocks is 100 MPa, which compresses bentonite to approximately

  9. Isostatic compression of buffer blocks. Middle scale

    Energy Technology Data Exchange (ETDEWEB)

    Ritola, J.; Pyy, E. [VTT Technical Research Centre of Finland, Espoo (Finland)

    2012-01-15

    Manufacturing of buffer components using isostatic compression method has been studied in small scale in 2008 (Laaksonen 2010). These tests included manufacturing of buffer blocks using different bentonite materials and different compression pressures. Isostatic mould technology was also tested, along with different methods to fill the mould, such as vibration and partial vacuum, as well as a stepwise compression of the blocks. The development of manufacturing techniques has continued with small-scale (30 %) blocks (diameter 600 mm) in 2009. This was done in a separate project: Isostatic compression, manufacturing and testing of small scale (D = 600 mm) buffer blocks. The research on the isostatic compression method continued in 2010 in a project aimed to test and examine the isostatic manufacturing process of buffer blocks at 70 % scale (block diameter 1200 to 1300 mm), and the aim was to continue in 2011 with full-scale blocks (diameter 1700 mm). A total of nine bentonite blocks were manufactured at 70 % scale, of which four were ring-shaped and the rest were cylindrical. It is currently not possible to manufacture full-scale blocks, because there is no sufficiently large isostatic press available. However, such a compression unit is expected to be possible to use in the near future. The test results of bentonite blocks, produced with an isostatic pressing method at different presses and at different sizes, suggest that the technical characteristics, for example bulk density and strength values, are somewhat independent of the size of the block, and that the blocks have fairly homogenous characteristics. Water content and compression pressure are the two most important properties determining the characteristics of the compressed blocks. By adjusting these two properties it is fairly easy to produce blocks at a desired density. The commonly used compression pressure in the manufacturing of bentonite blocks is 100 MPa, which compresses bentonite to approximately

  10. Fast lossless compression via cascading Bloom filters.

    Science.gov (United States)

    Rozov, Roye; Shamir, Ron; Halperin, Eran

    2014-01-01

    Data from large Next Generation Sequencing (NGS) experiments present challenges both in terms of costs associated with storage and in time required for file transfer. It is sometimes possible to store only a summary relevant to particular applications, but generally it is desirable to keep all information needed to revisit experimental results in the future. Thus, the need for efficient lossless compression methods for NGS reads arises. It has been shown that NGS-specific compression schemes can improve results over generic compression methods, such as the Lempel-Ziv algorithm, Burrows-Wheeler transform, or Arithmetic Coding. When a reference genome is available, effective compression can be achieved by first aligning the reads to the reference genome, and then encoding each read using the alignment position combined with the differences in the read relative to the reference. These reference-based methods have been shown to compress better than reference-free schemes, but the alignment step they require demands several hours of CPU time on a typical dataset, whereas reference-free methods can usually compress in minutes. We present a new approach that achieves highly efficient compression by using a reference genome, but completely circumvents the need for alignment, affording a great reduction in the time needed to compress. In contrast to reference-based methods that first align reads to the genome, we hash all reads into Bloom filters to encode, and decode by querying the same Bloom filters using read-length subsequences of the reference genome. Further compression is achieved by using a cascade of such filters. Our method, called BARCODE, runs an order of magnitude faster than reference-based methods, while compressing an order of magnitude better than reference-free methods, over a broad range of sequencing coverage. In high coverage (50-100 fold), compared to the best tested compressors, BARCODE saves 80-90% of the running time while only increasing space
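
    The alignment-free decoding idea in this record can be sketched with a single Bloom filter (the method cascades several). The hash construction, filter size, and toy reference sequence below are illustrative assumptions:

```python
# Sketch: hash reads into a Bloom filter to "encode"; decode by sliding a
# read-length window over the reference genome and querying the filter.
import hashlib

class BloomFilter:
    def __init__(self, size=1 << 16, hashes=3):
        self.size, self.hashes = size, hashes
        self.bits = bytearray(size // 8)

    def _positions(self, item):
        # Derive several bit positions from salted SHA-256 digests.
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

reference = "ACGTTGCAACGTGGCA"
reads = ["ACGTT", "GCAAC", "GTGGC"]     # reads drawn from the reference

bf = BloomFilter()
for r in reads:
    bf.add(r)

# Decoding: recover candidate reads by querying every reference window.
k = 5
recovered = {reference[i:i + k] for i in range(len(reference) - k + 1)
             if reference[i:i + k] in bf}
```

    Querying windows of the reference replaces the expensive alignment step; the cascade of filters in the full method exists to weed out the false positives a single filter admits.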

  11. An unusual case: right proximal ureteral compression by the ovarian vein and distal ureteral compression by the external iliac vein

    Directory of Open Access Journals (Sweden)

    Halil Ibrahim Serin

    2015-12-01

    Full Text Available A 32-year-old woman presented to the emergency room of Bozok University Research Hospital with right renal colic. Multidetector computed tomography (MDCT) showed compression of the proximal ureter by the right ovarian vein and compression of the right distal ureter by the right external iliac vein. To the best of our knowledge, right proximal ureteral compression by the ovarian vein together with distal ureteral compression by the external iliac vein has not been reported in the literature. Ovarian vein and external iliac vein compression should be considered in patients presenting to the emergency room with renal colic or low back pain and a dilated collecting system.

  12. Quantization Distortion in Block Transform-Compressed Data

    Science.gov (United States)

    Boden, A. F.

    1995-01-01

    The popular JPEG image compression standard is an example of a block transform-based compression scheme; the image is systematically subdivided into blocks that are individually transformed, quantized, and encoded. The compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. A generic block transform model is introduced.
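
    The transform-then-quantize step described above can be sketched as follows. The 1-D DCT and the uniform quantization step are illustrative simplifications, not JPEG's actual 2-D DCT and quantization tables:

```python
# Toy block-transform quantization: transform a pixel block with a 1-D
# DCT-II (unnormalized), then quantize coefficients with a uniform step.
# Quantization is where the (lossy) entropy reduction happens.
import math

def dct(block):
    n = len(block)
    return [sum(x * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i, x in enumerate(block)) for k in range(n)]

def quantize(coeffs, step=10.0):
    return [round(c / step) for c in coeffs]

block = [52, 55, 61, 66, 70, 61, 64, 73]    # one row of a pixel block
q = quantize(dct(block))                    # small coefficients collapse to 0
```

    After quantization most high-frequency coefficients become zero, which is exactly the redundancy the subsequent entropy coder exploits.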

  13. Compressed Air Production Using Vehicle Suspension

    OpenAIRE

    Ninad Arun Malpure; Sanket Nandlal Bhansali

    2015-01-01

    Abstract Generally, compressed air is produced using different types of air compressors, which consume a lot of electric energy and are noisy. In this paper an innovative idea is put forth for the production of compressed air using the movement of the vehicle suspension, which is normally wasted. The conversion of the force energy into compressed air is carried out by a mechanism which consists of the vehicle suspension system, hydraulic cylinder, non-return valve, air compressor and air receiver. We are co...

  14. Images compression in nuclear medicine

    International Nuclear Information System (INIS)

    Rebelo, M.S.; Furuie, S.S.; Moura, L.

    1992-01-01

    The performance of two methods for image compression in nuclear medicine was evaluated. The LZW (exact) and cosine transform (approximate) methods were analyzed. The results obtained show that the approximate method produced images of agreeable quality for visual analysis, with compression rates considerably higher than the exact method. (C.G.C.)
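
    The exact (lossless) LZW method mentioned in this record can be sketched with the classic dictionary-building loop; this is the textbook algorithm, not any specific nuclear-medicine implementation:

```python
# Classic LZW compression: grow a dictionary of byte sequences on the fly
# and emit one code per longest known sequence. Fully reversible, so it
# suits medical images where no distortion is acceptable.
def lzw_compress(data):
    table = {bytes([i]): i for i in range(256)}   # all single bytes
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                                # extend the current match
        else:
            out.append(table[w])                  # emit code for the match
            table[wc] = len(table)                # learn the new sequence
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

codes = lzw_compress(b"ABABABA")
```

    Repetitive pixel runs, common in background regions of nuclear medicine images, collapse into few codes, which is where the lossless gain comes from.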

  15. Huffman-based code compression techniques for embedded processors

    KAUST Repository

    Bonny, Mohamed Talal

    2010-09-01

    The size of embedded software is increasing at a rapid pace. It is often challenging and time consuming to fit the required software functionality within a given hardware resource budget. Code compression is a means to alleviate the problem by providing substantial savings in terms of code size. In this article we introduce a novel and efficient hardware-supported compression technique based on Huffman coding. Our technique reduces the size of the generated decoding table, which takes a large portion of the memory. It combines our previous techniques, the Instruction Splitting Technique and the Instruction Re-encoding Technique, into a new one called the Combined Compression Technique, improving the final compression ratio by taking advantage of both. The Instruction Splitting Technique is instruction set architecture (ISA)-independent. It splits the instructions into portions of varying size (called patterns) before Huffman coding is applied. This technique improves the final compression ratio by more than 20% compared to other known schemes based on Huffman coding. The average compression ratios achieved using this technique are 48% and 50% for ARM and MIPS, respectively. The Instruction Re-encoding Technique is ISA-dependent. It investigates the benefits of re-encoding unused bits (we call them re-encodable bits) in the instruction format for a specific application to improve the compression ratio. Re-encoding those bits can reduce the size of decoding tables by up to 40%. Using this technique, we improve the final compression ratios to 46% and 45% for ARM and MIPS, respectively (including all overhead incurred). The Combined Compression Technique improves the compression ratio to 45% and 42% for ARM and MIPS, respectively. In our compression technique, we have conducted evaluations using a representative set of applications and have applied each technique to two major embedded processor architectures.
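
    The Huffman-coding core that these techniques build on can be sketched over a toy instruction stream. The symbol set is illustrative, and the decoding-table reduction that is the article's actual contribution is not shown:

```python
# Huffman coding over instruction "patterns": frequent patterns receive
# shorter codes, which is the source of the code-size savings.
import heapq
from collections import Counter

def huffman_codes(freqs):
    """Build a Huffman code table from symbol frequencies."""
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tick = len(heap)                      # tie-breaker for equal weights
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tick, merged))
        tick += 1
    return heap[0][2]

instructions = ["MOV", "MOV", "MOV", "ADD", "ADD", "JMP"]
codes = huffman_codes(Counter(instructions))
encoded = "".join(codes[i] for i in instructions)
```

    Here the most frequent pattern gets a one-bit code, so the six-symbol stream encodes in nine bits instead of a fixed-width twelve.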

  16. LSP Simulations of the Neutralized Drift Compression Experiment

    CERN Document Server

    Thoma, Carsten H; Gilson, Erik P; Henestroza, Enrique; Roy, Prabir K; Welch, Dale; Yu, Simon

    2005-01-01

    The Neutralized Drift Compression Experiment (NDCX) at Lawrence Berkeley National Laboratory involves the longitudinal compression of a singly-stripped K ion beam with a mean energy of 250 keV in a meter long plasma. We present simulation results of compression of the NDCX beam using the PIC code LSP. The NDCX beam encounters an acceleration gap with a time-dependent voltage that decelerates the front and accelerates the tail of a 500 ns pulse which is to be compressed 110 cm downstream. The simulations model both ideal and experimental voltage waveforms. Results show good longitudinal compression without significant emittance growth.

  17. Rectal perforation by compressed air.

    Science.gov (United States)

    Park, Young Jin

    2017-07-01

    As the use of compressed air in industrial work has increased, so has the risk of associated pneumatic injury from its improper use. However, damage to the large intestine caused by compressed air is uncommon. Herein a case of pneumatic rupture of the rectum is described. The patient was admitted to the Emergency Room complaining of abdominal pain and distension. His colleague had triggered a compressed-air nozzle over his buttock. On arrival, vital signs were stable but physical examination revealed peritoneal irritation and marked distension of the abdomen. Computed tomography showed a large volume of air in the peritoneal cavity and subcutaneous emphysema at the perineum. A rectal perforation was found at laparotomy and the Hartmann procedure was performed.

  18. Comparison of Open-Hole Compression Strength and Compression After Impact Strength on Carbon Fiber/Epoxy Laminates for the Ares I Composite Interstage

    Science.gov (United States)

    Hodge, Andrew J.; Nettles, Alan T.; Jackson, Justin R.

    2011-01-01

    Notched (open hole) composite laminates were tested in compression. The effect on strength of various sizes of through holes was examined. Results were compared to the average stress criterion model. Additionally, laminated sandwich structures were damaged from low-velocity impact with various impact energy levels and different impactor geometries. The compression strength relative to damage size was compared to the notched compression result strength. Open-hole compression strength was found to provide a reasonable bound on compression after impact.

  19. Mathematical transforms and image compression: A review

    Directory of Open Access Journals (Sweden)

    Satish K. Singh

    2010-07-01

It is well known that images, often used in a variety of computer and other scientific and engineering applications, are difficult to store and transmit due to their sizes. One possible solution to overcome this problem is to use an efficient digital image compression technique, where an image is viewed as a matrix and operations are then performed on the matrix. All contemporary digital image compression systems use various mathematical transforms for compression. The compression performance is closely related to the performance of these mathematical transforms, in terms of energy compaction and spatial frequency isolation, achieved by exploiting inter-pixel redundancies present in the image data. Through this paper, a comprehensive literature survey has been carried out, and the pros and cons of various transform-based image compression models have also been discussed.
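The energy-compaction property mentioned above can be illustrated with a short sketch (my illustration, not from the paper; SciPy's 2-D DCT stands in for the surveyed transforms). For a smooth image, almost all of the signal energy lands in a small low-frequency corner of the transform, so discarding the rest barely changes the reconstruction:

```python
import numpy as np
from scipy.fft import dctn, idctn

# A smooth synthetic "image": most energy should land in low frequencies.
img = np.outer(np.linspace(0.0, 1.0, 32), np.linspace(0.0, 1.0, 32))

coef = dctn(img, norm="ortho")      # 2-D DCT of the image matrix

# Keep only the 8x8 low-frequency corner (1/16 of the coefficients).
mask = np.zeros_like(coef)
mask[:8, :8] = 1.0
kept = coef * mask

# Energy compaction: fraction of total energy retained by the kept corner.
energy_fraction = np.sum(kept**2) / np.sum(coef**2)

rec = idctn(kept, norm="ortho")     # reconstruct from the truncated transform
max_err = np.max(np.abs(rec - img))
```

For this smooth test image the retained corner holds well over 99% of the energy, which is exactly the behavior transform coders exploit.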

  20. Logarithmic compression methods for spectral data

    Science.gov (United States)

    Dunham, Mark E.

    2003-01-01

    A method is provided for logarithmic compression, transmission, and expansion of spectral data. A log Gabor transformation is made of incoming time series data to output spectral phase and logarithmic magnitude values. The output phase and logarithmic magnitude values are compressed by selecting only magnitude values above a selected threshold and corresponding phase values to transmit compressed phase and logarithmic magnitude values. A reverse log Gabor transformation is then performed on the transmitted phase and logarithmic magnitude values to output transmitted time series data to a user.
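The threshold-then-transmit idea in this record can be mimicked in a few lines (my sketch: a plain FFT is used instead of the log Gabor transform, and the threshold value is arbitrary). Only bins whose magnitude exceeds the threshold are kept, as log magnitude plus phase, and the inverse transform expands them back:

```python
import numpy as np

def compress_spectrum(x, threshold):
    """Keep only spectral bins whose magnitude exceeds a threshold.

    Returns bin indices, log magnitudes, phases, and the bin count.
    """
    X = np.fft.rfft(x)
    mag = np.abs(X)
    keep = mag > threshold
    idx = np.nonzero(keep)[0]
    return idx, np.log(mag[keep]), np.angle(X[keep]), len(X)

def expand_spectrum(idx, log_mag, phase, n_bins, n_samples):
    """Rebuild the time series from the retained bins."""
    X = np.zeros(n_bins, dtype=complex)
    X[idx] = np.exp(log_mag) * np.exp(1j * phase)
    return np.fft.irfft(X, n=n_samples)

# A signal with a sparse spectrum survives aggressive thresholding intact.
t = np.linspace(0.0, 1.0, 1024, endpoint=False)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
idx, lm, ph, nb = compress_spectrum(x, threshold=10.0)
x_rec = expand_spectrum(idx, lm, ph, nb, len(x))
```

Here only two of the 513 spectral bins clear the threshold, yet the signal is reconstructed essentially exactly.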

  1. A checkpoint compression study for high-performance computing systems

    Energy Technology Data Exchange (ETDEWEB)

    Ibtesham, Dewan [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Computer Science; Ferreira, Kurt B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States). Scalable System Software Dept.; Arnold, Dorian [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Computer Science

    2015-02-17

As high-performance computing systems continue to increase in size and complexity, higher failure rates and increased overheads for checkpoint/restart (CR) protocols have raised concerns about the practical viability of CR protocols for future systems. Previously, compression has proven to be a viable approach for reducing checkpoint data volumes and, thereby, reducing CR protocol overhead leading to improved application performance. In this article, we further explore compression-based CR optimization by exploring its baseline performance and scaling properties, evaluating whether improved compression algorithms might lead to even better application performance, and comparing checkpoint compression against and alongside other software- and hardware-based optimizations. The highlights of our results are: (1) compression is a very viable CR optimization; (2) generic, text-based compression algorithms appear to perform near-optimally for checkpoint data compression, and faster compression algorithms will not lead to better application performance; (3) compression-based optimizations fare well against and alongside other software-based optimizations; and (4) while hardware-based optimizations outperform software-based ones, they are not as cost-effective.

  2. Dynamic Relative Compression, Dynamic Partial Sums, and Substring Concatenation

    DEFF Research Database (Denmark)

    Bille, Philip; Christiansen, Anders Roy; Cording, Patrick Hagge

    2017-01-01

Given a static reference string R and a source string S, a relative compression of S with respect to R is an encoding of S as a sequence of references to substrings of R. Relative compression schemes are a classic model of compression and have recently proved very successful for compressing highly-repetitive massive data sets such as genomes and web-data. We initiate the study of relative compression in a dynamic setting where the compressed source string S is subject to edit operations. The goal is to maintain the compressed representation compactly, while supporting edits and allowing efficient random access to the (uncompressed) source string. We present new data structures that achieve optimal time for updates and queries while using space linear in the size of the optimal relative compression, for nearly all combinations of parameters. We also present solutions for restricted and extended sets of operations.
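The static version of relative compression is easy to sketch (my toy illustration; the paper's contribution is the far harder dynamic data structures, not this encoding). S is greedily encoded as (position, length) references into R, with single-character literals as a fallback:

```python
def relative_compress(S, R, k=2):
    """Greedily encode S as (pos, length) references into R,
    falling back to single-character literals when no k-mer matches."""
    # Index every k-mer of the reference for fast match lookup.
    index = {}
    for i in range(len(R) - k + 1):
        index.setdefault(R[i:i + k], []).append(i)

    ops, i = [], 0
    while i < len(S):
        best = None
        for pos in index.get(S[i:i + k], []):
            # Extend the k-mer match as far as it goes.
            length = k
            while (i + length < len(S) and pos + length < len(R)
                   and S[i + length] == R[pos + length]):
                length += 1
            if best is None or length > best[1]:
                best = (pos, length)
        if best:
            ops.append(best)       # reference into R
            i += best[1]
        else:
            ops.append(S[i])       # literal character
            i += 1
    return ops

def relative_decompress(ops, R):
    """Expand the references back into the source string."""
    return "".join(R[p:p + n] if isinstance(op, tuple) else op
                   for op in ops for p, n in [op if isinstance(op, tuple) else (0, 0)])

# The comprehension above is terse; an explicit loop is clearer:
def relative_decompress(ops, R):
    out = []
    for op in ops:
        if isinstance(op, tuple):
            pos, length = op
            out.append(R[pos:pos + length])
        else:
            out.append(op)
    return "".join(out)
```

When S shares long substrings with R, the operation list is much shorter than S itself.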

  3. Dynamic Relative Compression, Dynamic Partial Sums, and Substring Concatenation

    DEFF Research Database (Denmark)

    Bille, Philip; Cording, Patrick Hagge; Gørtz, Inge Li

    2016-01-01

Given a static reference string R and a source string S, a relative compression of S with respect to R is an encoding of S as a sequence of references to substrings of R. Relative compression schemes are a classic model of compression and have recently proved very successful for compressing highly-repetitive massive data sets such as genomes and web-data. We initiate the study of relative compression in a dynamic setting where the compressed source string S is subject to edit operations. The goal is to maintain the compressed representation compactly, while supporting edits and allowing efficient random access to the (uncompressed) source string. We present new data structures that achieve optimal time for updates and queries while using space linear in the size of the optimal relative compression, for nearly all combinations of parameters. We also present solutions for restricted and extended sets of operations.

  4. H.264/AVC Video Compression on Smartphones

    Science.gov (United States)

    Sharabayko, M. P.; Markov, N. G.

    2017-01-01

    In this paper, we studied the usage of H.264/AVC video compression tools by the flagship smartphones. The results show that only a subset of tools is used, meaning that there is still a potential to achieve higher compression efficiency within the H.264/AVC standard, but the most advanced smartphones are already reaching the compression efficiency limit of H.264/AVC.

  5. Lossless medical image compression with a hybrid coder

    Science.gov (United States)

    Way, Jing-Dar; Cheng, Po-Yuen

    1998-10-01

The volume of medical image data is expected to increase dramatically in the next decade due to the widespread use of radiological images for medical diagnosis. The economics of distributing medical images dictate that data compression is essential. While lossy image compression exists, the medical image must be recorded and transmitted losslessly before it reaches the users, to avoid misdiagnosis due to lost image data. Therefore, a low-complexity, high-performance lossless compression scheme that can approach the theoretic bound and operate in near real-time is needed. In this paper, we propose a hybrid image coder to compress the digitized medical image without any data loss. The hybrid coder consists of two key components: an embedded wavelet coder and a lossless run-length coder. In this system, the medical image is first compressed with the lossy wavelet coder, and the residual image between the original and the compressed one is further compressed with the run-length coder. Several optimization schemes have been used in these coders to increase the coding performance. It is shown that the proposed algorithm achieves a higher compression ratio than entropy coders such as arithmetic, Huffman, and Lempel-Ziv coders.
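The lossy-base-plus-residual idea behind such hybrid coders can be sketched as follows (my illustration: a coarse quantizer stands in for the embedded wavelet coder and zlib for the run-length coder). Because the residual is stored exactly, the reconstruction is bit-exact even though the base layer is lossy:

```python
import zlib
import numpy as np

def hybrid_compress(img, q=16):
    """Lossless hybrid coding: lossy base layer + exactly coded residual."""
    lossy = (img // q) * q                # stand-in for the lossy wavelet stage
    residual = img - lossy                # small values in [0, q): compress well
    return (zlib.compress(lossy.astype(np.uint8).tobytes()),
            zlib.compress(residual.astype(np.uint8).tobytes()),
            img.shape, q)

def hybrid_decompress(blob):
    base_z, res_z, shape, q = blob
    base = np.frombuffer(zlib.decompress(base_z), dtype=np.uint8)
    res = np.frombuffer(zlib.decompress(res_z), dtype=np.uint8)
    return (base + res).reshape(shape)    # exact reconstruction

# Smooth 8-bit test image: base layer is highly redundant, residual low-entropy.
x = np.arange(64, dtype=np.uint8)
img = np.add.outer(x, x) // 2             # values 0..63
restored = hybrid_decompress(hybrid_compress(img))
```

The split pays off because the base layer is full of long runs and the residual has a narrow value range, so both halves compress far better than the raw image would.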

  6. Parallel Algorithm for Wireless Data Compression and Encryption

    Directory of Open Access Journals (Sweden)

    Qin Jiancheng

    2017-01-01

As the wireless network has limited bandwidth and insecure shared media, data compression and encryption are very useful for the broadcast transport of big data in IoT (Internet of Things). However, the traditional techniques of compression and encryption are neither competent nor efficient. In order to solve this problem, this paper presents a combined parallel algorithm named "CZ algorithm" which can compress and encrypt big data efficiently. CZ algorithm uses a parallel pipeline, mixes the coding of compression and encryption, and supports data windows up to 1 TB (or larger). Moreover, CZ algorithm can encrypt the big data as a chaotic cryptosystem without decreasing the compression speed. Meanwhile, a shareware named "ComZip" has been developed based on CZ algorithm. The experimental results show that ComZip on a 64-bit system can achieve a better compression ratio than WinRAR and 7-zip, and it can be faster than 7-zip in big data compression. In addition, ComZip encrypts the big data without extra consumption of computing resources.

  7. NRGC: a novel referential genome compression algorithm.

    Science.gov (United States)

    Saha, Subrata; Rajasekaran, Sanguthevar

    2016-11-15

Next-generation sequencing techniques produce millions to billions of short reads. The procedure is not only very cost-effective but can also be done in a laboratory environment. The state-of-the-art sequence assemblers then construct the whole genomic sequence from these reads. Current cutting-edge computing technology makes it possible to build genomic sequences from the billions of reads at minimal cost and time. As a consequence, we see an explosion of biological sequences in recent times. In turn, the cost of storing the sequences in physical memory or transmitting them over the internet is becoming a major bottleneck for research and future medical applications. Data compression techniques are one of the most important remedies in this context. We are in need of suitable data compression algorithms that can exploit the inherent structure of biological sequences. Although standard data compression algorithms are prevalent, they are not suitable for compressing biological sequencing data effectively. In this article, we propose a novel referential genome compression algorithm (NRGC) to effectively and efficiently compress genomic sequences. We have done rigorous experiments to evaluate NRGC on a set of real human genomes. The simulation results show that our algorithm is indeed an effective genome compression algorithm that performs better than the best-known algorithms in most of the cases. Compression and decompression times are also very impressive. The implementations are freely available for non-commercial purposes. They can be downloaded from: http://www.engr.uconn.edu/~rajasek/NRGC.zip. Contact: rajasek@engr.uconn.edu. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  8. Adiabatic liquid piston compressed air energy storage

    Energy Technology Data Exchange (ETDEWEB)

    Petersen, Tage [Danish Technological Institute, Aarhus (Denmark); Elmegaard, B. [Technical Univ. of Denmark. DTU Mechanical Engineering, Kgs. Lyngby (Denmark); Schroeder Pedersen, A. [Technical Univ. of Denmark. DTU Energy Conversion, Risoe Campus, Roskilde (Denmark)

    2013-01-15

This project investigates the potential of a Compressed Air Energy Storage system (CAES system). CAES systems are used to store mechanical energy in the form of compressed air. The systems use electricity to drive the compressor at times of low electricity demand, with the purpose of converting the mechanical energy into electricity at times of high electricity demand. Two such systems are currently in operation: one in Germany (Huntorf) and one in the USA (McIntosh, Alabama). In both cases, an underground cavern is used as a pressure vessel for the storage of the compressed air. Both systems are in the range of 100 MW electrical power output with several hours of production stored as compressed air. In this range, enormous volumes are required, which make underground caverns the only economical way to design the pressure vessel. Both systems use axial turbine compressors to compress air when charging the system. The compression leads to a significant increase in temperature, and the heat generated is dumped into the ambient. This energy loss results in a low efficiency of the system, and when expanding the air, the expansion leads to a temperature drop reducing the mechanical output of the expansion turbines. To overcome this, fuel is burned to heat up the air prior to expansion. The fuel consumption causes a significant cost for the storage. Several suggestions have been made to store compression heat for later use during expansion and thereby avoid the use of fuel (so-called Adiabatic CAES units), but no such units are in operation at present. The CAES system investigated in this project uses a different approach to avoid compression heat loss. The system uses a pre-compressed pressure vessel full of air. A liquid is pumped into the bottom of the vessel when charging, and the same liquid is withdrawn through a turbine when discharging. In this case, the liquid effectively works as a piston compressing the gas in the vessel, hence the name 'liquid piston'.

  9. Sudden viscous dissipation in compressing plasma turbulence

    Science.gov (United States)

    Davidovits, Seth; Fisch, Nathaniel

    2015-11-01

    Compression of a turbulent plasma or fluid can cause amplification of the turbulent kinetic energy, if the compression is fast compared to the turnover and viscous dissipation times of the turbulent eddies. The consideration of compressing turbulent flows in inviscid fluids has been motivated by the suggestion that amplification of turbulent kinetic energy occurred on experiments at the Weizmann Institute of Science Z-Pinch. We demonstrate a sudden viscous dissipation mechanism whereby this amplified turbulent kinetic energy is rapidly converted into thermal energy, which further increases the temperature, feeding back to further enhance the dissipation. Application of this mechanism in compression experiments may be advantageous, if the plasma can be kept comparatively cold during much of the compression, reducing radiation and conduction losses, until the plasma suddenly becomes hot. This work was supported by DOE through contract 67350-9960 (Prime # DOE DE-NA0001836) and by the DTRA.

  10. Compressed Baryonic Matter of Astrophysics

    OpenAIRE

    Guo, Yanjun; Xu, Renxin

    2013-01-01

Baryonic matter in the core of a massive and evolved star is compressed significantly to form a supra-nuclear object, and compressed baryonic matter (CBM) is then produced after the supernova. The state of cold matter at a few times the nuclear density is pedagogically reviewed, with significant attention paid to a possible quark-cluster state conjectured from an astrophysical point of view.

  11. Efficient access of compressed data

    International Nuclear Information System (INIS)

    Eggers, S.J.; Shoshani, A.

    1980-06-01

A compression technique is presented that allows a high degree of compression but requires only logarithmic access time. The technique is a constant suppression scheme, and is most applicable to stable databases whose distribution of constants is fairly clustered. Furthermore, the repeated use of the technique permits the suppression of a multiple number of different constants. Of particular interest is the application of the constant suppression technique to databases whose composite key is made up of an incomplete cross product of several attribute domains. The scheme for compressing the full cross product composite key is well known. This paper, however, also handles the general, incomplete case by applying the constant suppression technique in conjunction with a composite key suppression scheme.
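The constant-suppression idea with logarithmic access can be sketched in a few lines (my illustration, not the paper's encoding): store the non-constant values plus the runs of the suppressed constant, and answer random-access queries with a binary search over the run starts:

```python
from bisect import bisect_right
from itertools import accumulate

def suppress(values, const):
    """Split `values` into runs of `const` (stored as (start, length))
    and the remaining kept values, in original order."""
    runs, kept, i = [], [], 0
    while i < len(values):
        if values[i] == const:
            j = i
            while j < len(values) and values[j] == const:
                j += 1
            runs.append((i, j - i))
            i = j
        else:
            kept.append(values[i])
            i += 1
    starts = [s for s, _ in runs]
    # prefix[r] = total suppressed entries in runs[0..r]
    prefix = list(accumulate(length for _, length in runs))
    return runs, starts, prefix, kept

def access(runs, starts, prefix, kept, const, idx):
    """Random access into the original sequence in O(log #runs)."""
    r = bisect_right(starts, idx) - 1       # last run starting at or before idx
    if r >= 0:
        s, length = runs[r]
        if idx < s + length:
            return const                    # idx falls inside a constant run
        return kept[idx - prefix[r]]        # skip the suppressed entries before idx
    return kept[idx]                        # idx precedes every run
```

The clustering assumption in the abstract matters here: the fewer, longer the runs of the constant, the smaller the run table and the cheaper the binary search.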

  12. Beam dynamics of the Neutralized Drift Compression Experiment-II (NDCX-II),a novel pulse-compressing ion accelerator

    International Nuclear Information System (INIS)

    Friedman, A.; Barnard, J.J.; Cohen, R.H.; Grote, D.P.; Lund, S.M.; Sharp, W.M.; Faltens, A.; Henestroza, E.; Jung, J.-Y.; Kwan, J.W.; Lee, E.P.; Leitner, M.A.; Logan, B.G.; Vay, J.-L.; Waldron, W.L.; Davidson, R.C.; Dorf, M.; Gilson, E.P.; Kaganovich, I.D.

    2009-01-01

    Intense beams of heavy ions are well suited for heating matter to regimes of emerging interest. A new facility, NDCX-II, will enable studies of warm dense matter at ∼1 eV and near-solid density, and of heavy-ion inertial fusion target physics relevant to electric power production. For these applications the beam must deposit its energy rapidly, before the target can expand significantly. To form such pulses, ion beams are temporally compressed in neutralizing plasma; current amplification factors of ∼50-100 are routinely obtained on the Neutralized Drift Compression Experiment (NDCX) at LBNL. In the NDCX-II physics design, an initial non-neutralized compression renders the pulse short enough that existing high-voltage pulsed power can be employed. This compression is first halted and then reversed by the beam's longitudinal space-charge field. Downstream induction cells provide acceleration and impose the head-to-tail velocity gradient that leads to the final neutralized compression onto the target. This paper describes the discrete-particle simulation models (1-D, 2-D, and 3-D) employed and the space-charge-dominated beam dynamics being realized.

  13. On Compression of a Heavy Compressible Layer of an Elastoplastic or Elastoviscoplastic Medium

    Science.gov (United States)

    Kovtanyuk, L. V.; Panchenko, G. L.

    2017-11-01

The problem of deformation of a horizontal plane layer of a compressible material is solved in the framework of the theory of small strains. The upper boundary of the layer is under the action of shear and compressing loads, and the no-slip condition is satisfied on the lower boundary of the layer. The loads increase in absolute value with time, then become constant, and then decrease to zero. Various plasticity conditions are considered with regard to the material compressibility, namely, the Coulomb-Mohr plasticity condition, the von Mises-Schleicher plasticity condition, and the same conditions with the viscous properties of the material taken into account. To solve the system of partial differential equations for the components of irreversible strains, a finite-difference scheme is developed for a spatial domain increasing with time. The laws of motion of elastoplastic boundaries are presented, the stresses, strains, rates of strain, and displacements are calculated, and the residual stresses and strains are found.

  14. Compressible turbulent flows: aspects of prediction and analysis

    Energy Technology Data Exchange (ETDEWEB)

    Friedrich, R. [TU Muenchen, Garching (Germany). Fachgebiet Stroemungsmechanik

    2007-03-15

Compressible turbulent flows are an important element of high-speed flight. Boundary layers developing along fuselage and wings of an aircraft and along engine compressor and turbine blades are compressible and mostly turbulent. The high-speed flow around rockets and through rocket nozzles involves compressible turbulence and flow separation. Turbulent mixing and combustion in scramjet engines is another example where compressibility dominates the flow physics. Although compressible turbulent flows have attracted researchers since the fifties of the last century, they are not completely understood. Especially interactions between compressible turbulence and combustion lead to challenging, yet unsolved problems. Direct numerical simulation (DNS) and large-eddy simulation (LES) represent modern powerful research tools which allow to mimic such flows in great detail and to analyze underlying physical mechanisms, even those which cannot be accessed by the experiment. The present lecture provides a short description of these tools and some of their numerical characteristics. It then describes DNS and LES results of fully-developed channel and pipe flow and highlights effects of compressibility on the turbulence structure. The analysis of pressure fluctuations in such flows with isothermal cooled walls leads to the conclusion that the pressure-strain correlation tensor decreases in the wall layer and that the turbulence anisotropy increases, since the mean density falls off relative to the incompressible flow case. Similar increases in turbulence anisotropy due to compressibility are observed in inert and reacting temporal mixing layers. The nature of the pressure fluctuations is however two-faceted. While inert compressible mixing layers reveal wave-propagation effects in the pressure and density fluctuations, compressible reacting mixing layers seem to generate pressure fluctuations that are controlled by the time-rate of change of heat release and mean density.

  15. High-speed and high-ratio referential genome compression.

    Science.gov (United States)

    Liu, Yuansheng; Peng, Hui; Wong, Limsoon; Li, Jinyan

    2017-11-01

The rapidly increasing number of genomes generated by high-throughput sequencing platforms and assembly algorithms is accompanied by problems in data storage, compression and communication. Traditional compression algorithms are unable to meet the demand of high compression ratio due to the intrinsic challenging features of DNA sequences such as small alphabet size, frequent repeats and palindromes. Reference-based lossless compression, by which only the differences between two similar genomes are stored, is a promising approach with high compression ratio. We present a high-performance referential genome compression algorithm named HiRGC. It is based on a 2-bit encoding scheme and an advanced greedy-matching search on a hash table. We compare the performance of HiRGC with four state-of-the-art compression methods on a benchmark dataset of eight human genomes. HiRGC compresses the roughly 21 gigabytes of each of the seven target genomes into 96-260 megabytes, achieving compression ratios of 82 to 217 times. This performance is at least 1.9 times better than that of the best competing algorithm on its best case. Our compression speed is also at least 2.9 times faster. HiRGC is stable and robust in dealing with different reference genomes. In contrast, the competing methods' performance varies widely on different reference genomes. More experiments on 100 human genomes from the 1000 Genomes Project and on genomes of several other species again demonstrate that HiRGC's performance is consistently excellent. The C++ and Java source codes of our algorithm are freely available for academic and non-commercial use. They can be downloaded from https://github.com/yuansliu/HiRGC. Contact: jinyan.li@uts.edu.au. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
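The 2-bit encoding step mentioned in the abstract is straightforward to sketch (my illustration of the general idea, not HiRGC's exact on-disk format): since the DNA alphabet has only four symbols, four bases pack into each byte:

```python
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
BASE = "ACGT"

def pack_2bit(seq):
    """Pack a DNA string into bytes, four bases per byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        b = 0
        for j, ch in enumerate(seq[i:i + 4]):
            b |= CODE[ch] << (2 * j)    # base j occupies bits 2j..2j+1
        out.append(b)
    return bytes(out), len(seq)

def unpack_2bit(data, n):
    """Recover the DNA string from its 2-bit packed form."""
    seq = []
    for i in range(n):
        byte = data[i // 4]
        seq.append(BASE[(byte >> (2 * (i % 4))) & 3])
    return "".join(seq)
```

This alone gives a fixed 4:1 reduction over one-byte-per-base text; the large ratios reported above come from the referential matching applied on top of it.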

16. A 172 μW Compressively Sampled Photoplethysmographic (PPG) Readout ASIC With Heart Rate Estimation Directly From Compressively Sampled Data.

    Science.gov (United States)

    Pamula, Venkata Rajesh; Valero-Sarmiento, Jose Manuel; Yan, Long; Bozkurt, Alper; Hoof, Chris Van; Helleputte, Nick Van; Yazicioglu, Refet Firat; Verhelst, Marian

    2017-06-01

A compressive sampling (CS) photoplethysmographic (PPG) readout with embedded feature extraction to estimate heart rate (HR) directly from compressively sampled data is presented. It integrates a low-power analog front end together with a digital back end to perform feature extraction to estimate the average HR over a 4 s interval directly from compressively sampled PPG data. The application-specific integrated circuit (ASIC) supports a uniform sampling mode (1x compression) as well as CS modes with compression ratios of 8x, 10x, and 30x. CS is performed by nonuniformly subsampling the PPG signal, while feature extraction is performed using least-squares spectral fitting through the Lomb-Scargle periodogram. The ASIC consumes 172 μW of power from a 1.2 V supply while reducing the relative LED driver power consumption by up to 30 times without significant loss of relevant information for accurate HR estimation.
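The HR-from-subsampled-data step can be mimicked in a few lines (my sketch on synthetic data; SciPy's `lombscargle` plays the role of the on-chip spectral fit, and the signal, rates, and 10x subsampling factor are illustrative assumptions):

```python
import numpy as np
from scipy.signal import lombscargle

fs = 125.0                                   # nominal PPG sampling rate, Hz
t = np.arange(0, 4.0, 1.0 / fs)              # 4 s analysis window
ppg = np.sin(2 * np.pi * 1.2 * t)            # surrogate PPG beating at 72 bpm

# Nonuniform subsampling: keep ~1 in 10 samples (about 10x "compression").
rng = np.random.default_rng(0)
keep = np.sort(rng.choice(t.size, size=t.size // 10, replace=False))
ts, ys = t[keep], ppg[keep]

# Lomb-Scargle periodogram over the plausible HR band (30-180 bpm).
freqs_hz = np.linspace(0.5, 3.0, 501)
pgram = lombscargle(ts, ys - ys.mean(), 2 * np.pi * freqs_hz)

hr_bpm = 60.0 * freqs_hz[np.argmax(pgram)]   # peak frequency -> heart rate
```

The Lomb-Scargle periodogram is the natural fit here because, unlike the FFT, it handles irregularly spaced samples directly, so the HR peak can be located without first reconstructing the full signal.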

  17. Fluffy dust forms icy planetesimals by static compression

    Science.gov (United States)

    Kataoka, Akimasa; Tanaka, Hidekazu; Okuzumi, Satoshi; Wada, Koji

    2013-09-01

Context. Several barriers have been proposed in planetesimal formation theory: bouncing, fragmentation, and radial drift problems. Understanding the structure evolution of dust aggregates is a key in planetesimal formation. Dust grains become fluffy by coagulation in protoplanetary disks. However, once they are fluffy, they are not sufficiently compressed by collisional compression to form compact planetesimals. Aims: We aim to reveal the pathway of dust structure evolution from dust grains to compact planetesimals. Methods: Using the compressive strength formula, we analytically investigate how fluffy dust aggregates are compressed by static compression due to ram pressure of the disk gas and self-gravity of the aggregates in protoplanetary disks. Results: We reveal the pathway of the porosity evolution from dust grains via fluffy aggregates to form planetesimals, circumventing the barriers in planetesimal formation. The aggregates are compressed by the disk gas to a density of 10⁻³ g/cm³ in coagulation, which is more compact than is the case with collisional compression. Then, they are compressed more by self-gravity to 10⁻¹ g/cm³ when the radius is 10 km. Although the gas compression decelerates the growth, the aggregates grow rapidly enough to avoid the radial drift barrier when the orbital radius is ≲6 AU in a typical disk. Conclusions: We propose a fluffy dust growth scenario from grains to planetesimals. It enables icy planetesimal formation in a wide range beyond the snowline in protoplanetary disks. This result proposes a concrete initial condition of planetesimals for the later stages of the planet formation.

  18. Shock compression of synthetic opal

    International Nuclear Information System (INIS)

    Inoue, A; Okuno, M; Okudera, H; Mashimo, T; Omurzak, E; Katayama, S; Koyano, M

    2010-01-01

Structural change of synthetic opal by shock-wave compression up to 38.1 GPa has been investigated by using SEM, the X-ray diffraction method (XRD), and infrared (IR) and Raman spectroscopies. The obtained information may indicate that the dehydration and polymerization of surface silanol due to high shock and residual temperature are very important factors in the structural evolution of synthetic opal under shock compression. Synthetic opal loses opalescence at shock pressures of 10.9 and 18.4 GPa. At 18.4 GPa, dehydration and polymerization of surface silanol and transformation of the network structure may occur simultaneously. The 4-membered rings of TO₄ tetrahedra in as-synthesized opal may be relaxed to larger rings, such as 6-membered rings, by the high residual temperature. Therefore, the residual temperature may be significantly high even at 18.4 GPa of shock compression. At 23.9 GPa, the opal sample recovered its opalescence. The origin of this opalescence may be its layer structure formed by shock compression. Finally, the sample fuses due to the very high residual temperature at 38.1 GPa and the structure approaches that of fused SiO₂ glass. However, internal silanol groups still remain even at 38.1 GPa.

  19. Shock compression of synthetic opal

    Science.gov (United States)

    Inoue, A.; Okuno, M.; Okudera, H.; Mashimo, T.; Omurzak, E.; Katayama, S.; Koyano, M.

    2010-03-01

Structural change of synthetic opal by shock-wave compression up to 38.1 GPa has been investigated by using SEM, the X-ray diffraction method (XRD), and infrared (IR) and Raman spectroscopies. The obtained information may indicate that the dehydration and polymerization of surface silanol due to high shock and residual temperature are very important factors in the structural evolution of synthetic opal under shock compression. Synthetic opal loses opalescence at shock pressures of 10.9 and 18.4 GPa. At 18.4 GPa, dehydration and polymerization of surface silanol and transformation of the network structure may occur simultaneously. The 4-membered rings of TO₄ tetrahedra in as-synthesized opal may be relaxed to larger rings, such as 6-membered rings, by the high residual temperature. Therefore, the residual temperature may be significantly high even at 18.4 GPa of shock compression. At 23.9 GPa, the opal sample recovered its opalescence. The origin of this opalescence may be its layer structure formed by shock compression. Finally, the sample fuses due to the very high residual temperature at 38.1 GPa and the structure approaches that of fused SiO₂ glass. However, internal silanol groups still remain even at 38.1 GPa.

  20. Shock compression of synthetic opal

    Energy Technology Data Exchange (ETDEWEB)

    Inoue, A; Okuno, M; Okudera, H [Department of Earth Sciences, Kanazawa University Kanazawa, Ishikawa, 920-1192 (Japan); Mashimo, T; Omurzak, E [Shock Wave and Condensed Matter Research Center, Kumamoto University, Kumamoto, 860-8555 (Japan); Katayama, S; Koyano, M, E-mail: okuno@kenroku.kanazawa-u.ac.j [JAIST, Nomi, Ishikawa, 923-1297 (Japan)

    2010-03-01

Structural change of synthetic opal by shock-wave compression up to 38.1 GPa has been investigated by using SEM, the X-ray diffraction method (XRD), and infrared (IR) and Raman spectroscopies. The obtained information may indicate that the dehydration and polymerization of surface silanol due to high shock and residual temperature are very important factors in the structural evolution of synthetic opal under shock compression. Synthetic opal loses opalescence at shock pressures of 10.9 and 18.4 GPa. At 18.4 GPa, dehydration and polymerization of surface silanol and transformation of the network structure may occur simultaneously. The 4-membered rings of TO₄ tetrahedra in as-synthesized opal may be relaxed to larger rings, such as 6-membered rings, by the high residual temperature. Therefore, the residual temperature may be significantly high even at 18.4 GPa of shock compression. At 23.9 GPa, the opal sample recovered its opalescence. The origin of this opalescence may be its layer structure formed by shock compression. Finally, the sample fuses due to the very high residual temperature at 38.1 GPa and the structure approaches that of fused SiO₂ glass. However, internal silanol groups still remain even at 38.1 GPa.

  1. Football Equipment Removal Improves Chest Compression and Ventilation Efficacy.

    Science.gov (United States)

    Mihalik, Jason P; Lynall, Robert C; Fraser, Melissa A; Decoster, Laura C; De Maio, Valerie J; Patel, Amar P; Swartz, Erik E

    2016-01-01

Airway access recommendations in potential catastrophic spine injury scenarios advocate for facemask removal, while keeping the helmet and shoulder pads in place for ensuing emergency transport. The anecdotal evidence to support these recommendations assumes that maintaining the helmet and shoulder pads assists inline cervical stabilization and that facial access guarantees adequate airway access. Our objective was to determine the effect of football equipment interference on performing chest compressions and delivering adequate ventilations on patient simulators. We hypothesized that conditions with more football equipment would decrease chest compression and ventilation efficacy. Thirty-two certified athletic trainers were block randomized to participate in six different compression conditions and six different ventilation conditions using human patient simulators. Data for chest compression (mean compression depth, compression rate, percentage of correctly released compressions, and percentage of adequate compressions) and ventilation (total ventilations, mean ventilation volume, and percentage of ventilations delivering adequate volume) conditions were analyzed across all conditions. The fully equipped athlete resulted in the lowest mean compression depth (F(5,154) = 22.82). Emergency medical personnel should remove the helmet and shoulder pads from all football athletes who require cardiopulmonary resuscitation, while maintaining appropriate cervical spine stabilization when injury is suspected. Further research is needed to confirm our findings supporting full equipment removal for chest compression and ventilation delivery.

  2. Performance of target detection algorithm in compressive sensing miniature ultraspectral imaging compressed sensing system

    Science.gov (United States)

    Gedalin, Daniel; Oiknine, Yaniv; August, Isaac; Blumberg, Dan G.; Rotman, Stanley R.; Stern, Adrian

    2017-04-01

    Compressive sensing theory was proposed to deal with the large number of measurements demanded by traditional hyperspectral systems. Recently, a compressive spectral imaging technique dubbed compressive sensing miniature ultraspectral imaging (CS-MUSI) was presented. This system uses a voltage-controlled liquid crystal device to create multiplexed hyperspectral cubes. We evaluate the utility of the data captured using the CS-MUSI system for the task of target detection. Specifically, we compare the performance of the matched filter target detection algorithm on traditional hyperspectral data and on CS-MUSI multiplexed hyperspectral cubes. We found that the target detection algorithm performs similarly in both cases, despite the fact that the amount of CS-MUSI data is up to an order of magnitude smaller than that of conventional hyperspectral cubes. Moreover, target detection is approximately an order of magnitude faster on CS-MUSI data.
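    The matched filter used in the comparison above is a standard hyperspectral detector. As a rough illustration (on synthetic data, not the CS-MUSI system), it scores each pixel spectrum against a known target signature using the background mean and covariance estimated from the cube itself:

```python
import numpy as np

def matched_filter_scores(cube, target, eps=1e-6):
    """Classic matched-filter detector: score each pixel spectrum x by
    (t - mu)^T C^-1 (x - mu), with mu and C the background mean and
    covariance estimated from the data (eps regularizes the inverse)."""
    pixels = cube.reshape(-1, cube.shape[-1])          # (N, bands)
    mu = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False) + eps * np.eye(pixels.shape[1])
    w = np.linalg.solve(cov, target - mu)              # C^-1 (t - mu)
    return (pixels - mu) @ w

# Synthetic demo: a 20x20 cube with 10 bands and one planted target pixel.
rng = np.random.default_rng(0)
cube = rng.normal(size=(20, 20, 10))
target = np.ones(10) * 3.0
cube[5, 7] = target                                    # plant the target
scores = matched_filter_scores(cube, target)
best = np.unravel_index(int(np.argmax(scores)), (20, 20))
print(best)  # the planted pixel should score highest
```

    The same detector applies unchanged to multiplexed measurements once they are mapped back to a spectral cube, which is what makes the comparison in the record meaningful.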

  3. Data compression considerations for detectors with local intelligence

    International Nuclear Information System (INIS)

    Garcia-Sciveres, M; Wang, X

    2014-01-01

    This note summarizes the outcome of discussions about how data compression considerations apply to tracking detectors with local intelligence. The method for analyzing data compression efficiency is taken from a previous publication and applied to module characteristics from the WIT2014 workshop. We explore local intelligence and coupled-layer structures in the language of data compression. In this context the original intelligent-tracker concept of correlating hits to find matches of interest and discard others is just a form of lossy data compression. We now explore how these features (intelligence and coupled layers) can be exploited for lossless compression, which could enable full readout at higher trigger rates than previously envisioned, or even triggerless operation.

  4. Efficiency of Compressed Air Energy Storage

    DEFF Research Database (Denmark)

    Elmegaard, Brian; Brix, Wiebke

    2011-01-01

    The simplest type of Compressed Air Energy Storage (CAES) facility would be an adiabatic process consisting only of a compressor, a storage and a turbine: air is compressed into a container when storing and expanded when producing. This type of CAES would be adiabatic and, if the machines were reversible, would have a storage efficiency of 100%. In practice, however, due to the specific capacity of the storage and the construction materials, the air is cooled during and after compression, making the CAES process diabatic. The cooling involves exergy losses and thus lowers the efficiency of the storage significantly. The efficiency of CAES as an electricity storage may be defined in several ways; we discuss these and find that the exergetic efficiencies of compression, storage and production together determine the efficiency of CAES. In the paper we find that the efficiency of the practical CAES...
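    The exergy argument can be made concrete with a back-of-the-envelope calculation (illustrative numbers, not the paper's): compress air adiabatically, cool it back to ambient for storage, and account for the exergy carried away with the rejected heat.

```python
import math

# Toy numbers, not from the paper: why cooling the air after adiabatic
# compression destroys exergy and lowers CAES storage efficiency.
gamma, cp = 1.4, 1004.5                # dry air, cp in J/(kg K)
T1 = 293.15                            # ambient inlet temperature, K
pr = 70.0                              # storage pressure ratio (assumed)

# Reversible adiabatic compression: T2 = T1 * pr^((gamma-1)/gamma)
T2 = T1 * pr ** ((gamma - 1.0) / gamma)
w_comp = cp * (T2 - T1)                # specific compression work, J/kg

# Cooling the stored air back to T1 rejects q = cp*(T2 - T1) as heat;
# the exergy lost with that heat (relative to ambient T1) is
# q - T1 * delta_s, with delta_s = cp * ln(T2/T1) at constant pressure.
q = cp * (T2 - T1)
ex_lost = q - T1 * cp * math.log(T2 / T1)

print(f"T2 = {T2:.0f} K, work in = {w_comp/1e3:.0f} kJ/kg")
print(f"exergy destroyed by cooling = {ex_lost/1e3:.0f} kJ/kg "
      f"({100*ex_lost/w_comp:.0f}% of the input work)")
```

    With these assumed numbers roughly half of the compression work is destroyed as exergy by the cooling step, which is the qualitative point the abstract makes about diabatic CAES.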

  5. Compressibility, turbulence and high speed flow

    CERN Document Server

    Gatski, Thomas B

    2009-01-01

    This book introduces the reader to the field of compressible turbulence and compressible turbulent flows across a broad speed range through a unique complementary treatment of both the theoretical foundations and the measurement and analysis tools currently used. For the computation of turbulent compressible flows, current methods of averaging and filtering are presented so that the reader is exposed to a consistent development of applicable equation sets for both the mean or resolved fields as well as the transport equations for the turbulent stress field. For the measurement of turbulent compressible flows, current techniques ranging from hot-wire anemometry to PIV are evaluated and limitations assessed. Characterizing dynamic features of free shear flows, including jets, mixing layers and wakes, and wall-bounded flows, including shock-turbulence and shock boundary-layer interactions, obtained from computations, experiments and simulations are discussed. Key features: * Describes prediction methodologies in...

  6. FRC translation into a compression coil

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1985-01-01

    Several features of the problem of FRC translation into a compression coil are considered. First, the magnitude of the guide field is calculated and found to exceed that which would be applied to a flux conserver. Second, energy conservation is applied to FRC translation from a flux conserver into a compression coil. It is found that a significant temperature decrease is required for translation to be energetically possible. The temperature change depends on the external inductance in the compression circuit. An analogous case is that of a compression region composed of a compound magnet; in this case the temperature change depends on the ratio of inner and outer coil radii. Finally, the kinematics of intermediate translation states are calculated using an ''abrupt transition'' model. It is found, in this model, that the FRC must overcome a potential hill during translation, which requires a small initial velocity

  7. Breast compression in mammography: how much is enough?

    Science.gov (United States)

    Poulos, Ann; McLean, Donald; Rickard, Mary; Heard, Robert

    2003-06-01

    The amount of breast compression that is applied during mammography potentially influences image quality and the discomfort experienced. The aim of this study was to determine the relationship between applied compression force, breast thickness, reported discomfort and image quality. Participants were women attending routine breast screening by mammography at BreastScreen New South Wales Central and Eastern Sydney. During the mammographic procedure, an 'extra' craniocaudal (CC) film was taken at a reduced level of compression ranging from 10 to 30 Newtons. Breast thickness measurements were recorded for both the normal and the extra CC film. Details of discomfort experienced, cup size, menstrual status, existing breast pain and breast problems were also recorded. Radiologists were asked to compare the image quality of the normal and manipulated film. The results indicated that 24% of women did not experience a difference in thickness when the compression was reduced. This is an important new finding because the aim of breast compression is to reduce breast thickness. If breast thickness is not reduced when compression force is applied then discomfort is increased with no benefit in image quality. This has implications for mammographic practice when determining how much breast compression is sufficient. Radiologists found a decrease in contrast resolution within the fatty area of the breast between the normal and the extra CC film, confirming a decrease in image quality due to insufficient applied compression force.

  8. Fundamental study of compression for movie files of coronary angiography

    Science.gov (United States)

    Ando, Takekazu; Tsuchiya, Yuichiro; Kodera, Yoshie

    2005-04-01

    When network distribution of movie files is considered, lossy-compressed movie files with small file sizes can be useful. We chose three kinds of coronary stenosis movies with different motion speeds as test objects: movies with slow, normal and fast heart rates. MPEG-1, DivX5.11, WMV9 (Windows Media Video 9) and WMV9-VCM (Windows Media Video 9-Video Compression Manager) movies were made from the three kinds of AVI-format movies with different motion speeds. Five kinds of movies, the four compressed formats plus the uncompressed AVI (used instead of the DICOM format), were evaluated by Thurstone's method. The evaluation factors were sharpness, granularity, contrast and comprehensive evaluation. In the simulated bradycardia movie, AVI received the best evaluation on all factors except granularity. In the simulated normal-rate movie, a different compression technique excelled on each evaluation factor. In the simulated tachycardia movie, MPEG-1 received the best evaluation on all factors except contrast. The best compression format thus depends on the speed of the movie, because the compression algorithms differ; this is thought to reflect the influence of compression between frames. A movie compression algorithm combines inter-frame compression with intra-frame compression, and since each method influences the image differently, it is necessary to examine the relation between the compression algorithm and our results.

  9. Free-beam soliton self-compression in air

    Science.gov (United States)

    Voronin, A. A.; Mitrofanov, A. V.; Sidorov-Biryukov, D. A.; Fedotov, A. B.; Pugžlys, A.; Panchenko, V. Ya; Shumakova, V.; Ališauskas, S.; Baltuška, A.; Zheltikov, A. M.

    2018-02-01

    We identify a physical scenario whereby soliton transients generated in freely propagating laser beams within the regions of anomalous dispersion in air can be compressed as a part of their free-beam spatiotemporal evolution to yield few-cycle mid- and long-wavelength-infrared field waveforms, whose peak power is substantially higher than the peak power of the input pulses. We show that this free-beam soliton self-compression scenario does not require ionization or laser-induced filamentation, enabling high-throughput self-compression of mid- and long-wavelength-infrared laser pulses within a broad range of peak powers from tens of gigawatts up to the terawatt level. We also demonstrate that this method of pulse compression can be extended to long-range propagation, providing self-compression of high-peak-power laser pulses in atmospheric air within propagation ranges as long as hundreds of meters, suggesting new ways towards longer-range standoff detection and remote sensing.

  10. Interactive computer graphics applications for compressible aerodynamics

    Science.gov (United States)

    Benson, Thomas J.

    1994-01-01

    Three computer applications have been developed to solve inviscid compressible fluids problems using interactive computer graphics. The first application is a compressible flow calculator which solves for isentropic flow, normal shocks, and oblique shocks or centered expansions produced by two dimensional ramps. The second application couples the solutions generated by the first application to a more graphical presentation of the results to produce a desk top simulator of three compressible flow problems: 1) flow past a single compression ramp; 2) flow past two ramps in series; and 3) flow past two opposed ramps. The third application extends the results of the second to produce a design tool which solves for the flow through supersonic external or mixed compression inlets. The applications were originally developed to run on SGI or IBM workstations running GL graphics. They are currently being extended to solve additional types of flow problems and modified to operate on any X-based workstation.
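    The core of the first application, the normal-shock relations for isentropic flow past ramps, reduces to a handful of textbook formulas. A minimal sketch (standard relations for a calorically perfect gas; this is not NASA's actual code):

```python
import math

def normal_shock(M1, gamma=1.4):
    """Normal-shock jump relations for a calorically perfect gas
    (gamma = 1.4 for air): downstream Mach number and the pressure,
    density and temperature ratios across the shock."""
    if M1 <= 1.0:
        raise ValueError("normal shock requires supersonic upstream flow")
    M2 = math.sqrt((1 + 0.5 * (gamma - 1) * M1**2) /
                   (gamma * M1**2 - 0.5 * (gamma - 1)))
    p_ratio = 1 + 2 * gamma / (gamma + 1) * (M1**2 - 1)
    rho_ratio = (gamma + 1) * M1**2 / ((gamma - 1) * M1**2 + 2)
    T_ratio = p_ratio / rho_ratio
    return M2, p_ratio, rho_ratio, T_ratio

M2, p21, r21, T21 = normal_shock(2.0)
print(f"M2={M2:.4f} p2/p1={p21:.3f} rho2/rho1={r21:.3f} T2/T1={T21:.3f}")
# M2=0.5774 p2/p1=4.500 rho2/rho1=2.667 T2/T1=1.688
```

    The oblique-shock and Prandtl-Meyer expansion cases used for the ramp problems follow the same pattern with the Mach number replaced by its normal component.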

  11. Adiabatic compression of ion rings

    International Nuclear Information System (INIS)

    Larrabee, D.A.; Lovelace, R.V.

    1982-01-01

    A study has been made of the compression of collisionless ion rings in an increasing external magnetic field, B_e = ẑ B_e(t), by numerically implementing a previously developed kinetic theory of ring compression. The theory is general in that there is no limitation on the ring geometry or the compression ratio, λ ≡ B_e(final)/B_e(initial) ≥ 1. However, the motion of a single particle in an equilibrium is assumed to be completely characterized by its energy H and canonical angular momentum P_θ, with the absence of a third constant of the motion. The present computational work assumes that plasma currents are negligible, as is appropriate for a low-temperature collisional plasma. For a variety of initial ring geometries and initial distribution functions (having a single value of P_θ), it is found that the parameters for ''fat'', small-aspect-ratio rings follow general scaling laws over a large range of compression ratios, 1 < λ < 10³: the ring radius varies as λ^(-1/2); the average single-particle energy as λ^0.72; the root-mean-square energy spread as λ^1.1; and the total current as λ^0.79. The field-reversal parameter is found to saturate at values typically between 2 and 3. For large compression ratios the current density is found to ''hollow out''. This hollowing tends to improve the interchange stability of an embedded low-β plasma. The implications of these scaling laws for fusion reactor systems are discussed
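    The quoted scaling laws are simple power laws, so the ring parameters at any compression ratio follow directly from the initial values. A tiny helper (exponents copied from the abstract; the normalizations R0, E0, dE0, I0 are arbitrary placeholders):

```python
# Scaling laws quoted in the abstract: radius ~ lambda^-1/2,
# mean energy ~ lambda^0.72, rms energy spread ~ lambda^1.1,
# total current ~ lambda^0.79 (valid for "fat", small-aspect-ratio rings).
def ring_scaling(lam, R0=1.0, E0=1.0, dE0=1.0, I0=1.0):
    return {
        "radius":  R0 * lam ** -0.5,
        "energy":  E0 * lam ** 0.72,
        "spread": dE0 * lam ** 1.1,
        "current": I0 * lam ** 0.79,
    }

print(ring_scaling(100.0))
```

    Note that the energy spread grows faster than the mean energy (exponent 1.1 versus 0.72), which is consistent with the reported hollowing of the current density at large compression ratios.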

  12. POLYCOMP: Efficient and configurable compression of astronomical timelines

    Science.gov (United States)

    Tomasi, M.

    2016-07-01

    This paper describes the implementation of polycomp, an open-source, publicly available program for compressing one-dimensional data series in tabular format. The program is particularly suited for compressing smooth, noiseless streams of data, such as pointing information, as one of the algorithms it implements applies a combination of least-squares polynomial fitting and discrete Chebyshev transforms that can achieve a compression ratio Cr of up to ≈ 40 in the examples discussed in this work. This performance comes at the expense of a loss of information, whose upper bound is configured by the user. I show two areas in which the usage of polycomp is interesting. In the first example, I compress the ephemeris table of an astronomical object (Ganymede), obtaining Cr ≈ 20, with a compression error on the x, y, z coordinates smaller than 1 m. In the second example, I compress the publicly available timelines recorded by the Low Frequency Instrument (LFI), an array of microwave radiometers onboard the ESA Planck spacecraft. The compression reduces the needed storage from ∼ 6.5 TB to ≈ 0.75 TB (Cr ≈ 9), thus making them small enough to be kept in a portable hard drive.
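    The core idea, fitting each chunk of a smooth series with a low-degree Chebyshev polynomial and keeping only the coefficients whenever the reconstruction error stays under the user's bound, can be sketched as follows (a toy version; polycomp's on-disk format and options differ):

```python
import numpy as np

def poly_compress(series, chunk=32, deg=5, max_err=1e-3):
    """Chunked lossy compression: store Chebyshev coefficients when the
    fit error is below max_err, otherwise fall back to the raw samples."""
    out = []
    for i in range(0, len(series), chunk):
        y = series[i:i + chunk]
        x = np.linspace(-1.0, 1.0, len(y))
        c = np.polynomial.chebyshev.chebfit(x, y, deg)
        err = np.abs(np.polynomial.chebyshev.chebval(x, c) - y).max()
        out.append(("poly", len(y), c) if err <= max_err else ("raw", len(y), y))
    return out

def poly_decompress(blocks):
    parts = []
    for kind, n, data in blocks:
        if kind == "poly":
            x = np.linspace(-1.0, 1.0, n)
            parts.append(np.polynomial.chebyshev.chebval(x, data))
        else:
            parts.append(data)
    return np.concatenate(parts)

# A smooth pointing-like signal: every 32-sample chunk is replaced by
# just 6 polynomial coefficients, within the configured error bound.
t = np.linspace(0.0, 1.0, 256)
sig = np.sin(2 * np.pi * t) + 0.1 * t
blocks = poly_compress(sig)
rec = poly_decompress(blocks)
print(len(blocks), float(np.max(np.abs(rec - sig))))
```

    The raw fallback is what makes the error bound a hard guarantee rather than an average: chunks the polynomial cannot represent are simply stored as-is.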

  13. Graph Compression by BFS

    Directory of Open Access Journals (Sweden)

    Alberto Apostolico

    2009-08-01

    Full Text Available The Web Graph is a large-scale graph that does not fit in main memory, so that lossless compression methods have been proposed for it. This paper introduces a compression scheme that combines efficient storage with fast retrieval of the information in a node. The scheme exploits the properties of the Web Graph without assuming an ordering of the URLs, so that it may be applied to more general graphs. Tests on some datasets in common use achieve space savings of about 10% over existing methods.
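    The general recipe behind such schemes can be illustrated in two steps (a toy sketch, not the paper's exact method): relabel the nodes in BFS order so that neighbours receive nearby ids, then store each sorted adjacency list as a first id followed by small gaps, which a variable-length integer coder would encode compactly.

```python
from collections import deque

def bfs_order(adj):
    """Relabel nodes in BFS order so that neighbours get nearby ids,
    making the gaps in the adjacency lists small and cheap to encode."""
    order, seen = [], set()
    for start in adj:
        if start in seen:
            continue
        seen.add(start)
        q = deque([start])
        while q:
            u = q.popleft()
            order.append(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    q.append(v)
    return {old: new for new, old in enumerate(order)}

def gap_encode(neighbours):
    """Store sorted neighbour ids as the first id plus successive gaps."""
    s = sorted(neighbours)
    return ([s[0]] + [b - a for a, b in zip(s, s[1:])]) if s else []

adj = {0: [3, 5], 3: [0, 5, 7], 5: [0, 3], 7: [3]}
relabel = bfs_order(adj)
new_adj = {relabel[u]: [relabel[v] for v in vs] for u, vs in adj.items()}
print({u: gap_encode(vs) for u, vs in sorted(new_adj.items())})
```

    BFS is attractive here precisely because it needs no URL ordering: locality comes from the traversal itself, which is why the technique transfers to general graphs.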

  14. Compression device for feeding a waste material to a reactor

    Science.gov (United States)

    Williams, Paul M.; Faller, Kenneth M.; Bauer, Edward J.

    2001-08-21

    A compression device for feeding a waste material to a reactor includes a waste material feed assembly having a hopper, a supply tube and a compression tube. Each of the supply and compression tubes includes feed-inlet and feed-outlet ends. A feed-discharge valve assembly is located between the feed-outlet end of the compression tube and the reactor. A feed auger-screw extends axially in the supply tube between the feed-inlet and feed-outlet ends thereof. A compression auger-screw extends axially in the compression tube between the feed-inlet and feed-outlet ends thereof. The compression tube is sloped downwardly towards the reactor to drain fluid from the waste material to the reactor and is oriented generally at a right angle to the supply tube such that the feed-outlet end of the supply tube is adjacent to the feed-inlet end of the compression tube. A programmable logic controller is provided for controlling the rotational speed of the feed and compression auger-screws for selectively varying the compression of the waste material and for overcoming jamming conditions within either the supply tube or the compression tube.

  15. Prechamber Compression-Ignition Engine Performance

    Science.gov (United States)

    Moore, Charles S; Collins, John H , Jr

    1938-01-01

    Single-cylinder compression-ignition engine tests were made to investigate the performance characteristics of the prechamber type of cylinder head. Certain fundamental variables influencing engine performance -- clearance distribution; size, shape, and direction of the passage connecting the cylinder and prechamber; shape of prechamber; cylinder clearance; compression ratio; and boosting -- were independently tested. Results of motoring and of power tests, including several typical indicator cards, are presented.

  16. Spectral Compressive Sensing with Polar Interpolation

    DEFF Research Database (Denmark)

    Fyhn, Karsten; Dadkhahi, Hamid; F. Duarte, Marco

    2013-01-01

    In this paper, we introduce a greedy recovery algorithm that leverages a band-exclusion function and a polar interpolation function to address these two issues in spectral compressive sensing. Our algorithm is geared towards line spectral estimation from compressive measurements and outperforms most existing...

  17. Recognizable or Not: Towards Image Semantic Quality Assessment for Compression

    Science.gov (United States)

    Liu, Dong; Wang, Dandan; Li, Houqiang

    2017-12-01

    Traditionally, image compression was optimized for the pixel-wise fidelity or the perceptual quality of the compressed images given a bit-rate budget. But recently, compressed images have been increasingly utilized for automatic semantic analysis tasks such as recognition and retrieval. For these tasks, we argue that the optimization target of compression is no longer perceptual quality, but the utility of the compressed images in the given automatic semantic analysis task. Accordingly, we propose to evaluate the quality of the compressed images neither at pixel level nor at perceptual level, but at semantic level. In this paper, we make preliminary efforts towards image semantic quality assessment (ISQA), focusing on the task of optical character recognition (OCR) from compressed images. We propose a full-reference ISQA measure by comparing the features extracted from text regions of original and compressed images. We then propose to integrate the ISQA measure into an image compression scheme. Experimental results show that our proposed ISQA measure is much better than PSNR and SSIM in evaluating the semantic quality of compressed images; accordingly, adopting our ISQA measure to optimize compression for OCR leads to significant bit-rate saving compared to using PSNR or SSIM. Moreover, we perform a subjective test about text recognition from compressed images, and observe that our ISQA measure has high consistency with subjective recognizability. Our work explores new dimensions in image quality assessment, and demonstrates a promising direction to achieve higher compression ratios for specific semantic analysis tasks.

  18. Comparison of changes in tidal volume associated with expiratory rib cage compression and expiratory abdominal compression in patients on prolonged mechanical ventilation

    OpenAIRE

    Morino, Akira; Shida, Masahiro; Tanaka, Masashi; Sato, Kimihiro; Seko, Toshiaki; Ito, Shunsuke; Ogawa, Shunichi; Takahashi, Naoaki

    2015-01-01

    [Purpose] This study was designed to compare and clarify the relationship between expiratory rib cage compression and expiratory abdominal compression in patients on prolonged mechanical ventilation, with a focus on tidal volume. [Subjects and Methods] The subjects were 18 patients on prolonged mechanical ventilation, who had undergone tracheostomy. Each patient received expiratory rib cage compression and expiratory abdominal compression; the order of implementation was randomized. Subjects ...

  19. Thermo-fluid dynamic analysis of wet compression process

    Energy Technology Data Exchange (ETDEWEB)

    Mohan, Abhay; Kim, Heuy Dong [School of Mechanical Engineering, Andong National University, Andong (Korea, Republic of); Chidambaram, Palani Kumar [FMTRC, Daejoo Machinery Co. Ltd., Daegu (Korea, Republic of); Suryan, Abhilash [Dept. of Mechanical Engineering, College of Engineering Trivandrum, Kerala (India)

    2016-12-15

    Wet compression systems increase the useful power output of a gas turbine by reducing the compressor work through the reduction of air temperature inside the compressor. The actual wet compression process differs from the conventional single-phase compression process due to the latent heat absorbed by the evaporating water droplets; thus the wet compression process cannot be assumed isentropic. In the current investigation, the gas-liquid two-phase mixture has been modeled as air containing dispersed water droplets inside a simple cylinder-piston system. The piston moves in the axial direction inside the cylinder to achieve wet compression. Effects on the thermodynamic properties such as temperature, pressure and relative humidity are investigated in detail for different parameters such as compression speeds and overspray. An analytical model is derived and the requisite thermodynamic curves are generated. The deviations of the generated thermodynamic curves from the dry isentropic curves (PV^γ = constant) are analyzed.
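    The effect the abstract describes can be estimated with a crude energy balance (illustrative only, not the paper's analytical model): compare the dry isentropic end temperature with a wet case in which the evaporating spray absorbs its latent heat from the air's sensible enthalpy rise.

```python
# Toy comparison of dry isentropic compression with a crude wet-compression
# estimate. Assumed values: air properties, an approximate latent heat of
# water, and an arbitrary pressure ratio and overspray fraction.
gamma, cp = 1.4, 1004.5          # air: heat-capacity ratio, cp in J/(kg K)
h_fg = 2.45e6                    # latent heat of water, J/kg (approx.)
T1, pr = 300.0, 10.0             # inlet temperature (K), pressure ratio

# Dry isentropic compression: T2/T1 = pr^((gamma-1)/gamma)
T2_dry = T1 * pr ** ((gamma - 1.0) / gamma)

# Overspray f = kg of water evaporated per kg of air; assume it all
# evaporates and take the latent heat out of the sensible enthalpy rise.
f = 0.02
T2_wet = T2_dry - f * h_fg / cp

print(f"dry: {T2_dry:.0f} K, wet ({f*100:.0f}% overspray): {T2_wet:.0f} K")
```

    Even this rough estimate shows a temperature drop of several tens of kelvin for a couple of percent overspray, which is why the wet process departs from the PV^γ = constant isentrope.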

  20. Thermo-fluid dynamic analysis of wet compression process

    International Nuclear Information System (INIS)

    Mohan, Abhay; Kim, Heuy Dong; Chidambaram, Palani Kumar; Suryan, Abhilash

    2016-01-01

    Wet compression systems increase the useful power output of a gas turbine by reducing the compressor work through the reduction of air temperature inside the compressor. The actual wet compression process differs from the conventional single-phase compression process due to the latent heat absorbed by the evaporating water droplets; thus the wet compression process cannot be assumed isentropic. In the current investigation, the gas-liquid two-phase mixture has been modeled as air containing dispersed water droplets inside a simple cylinder-piston system. The piston moves in the axial direction inside the cylinder to achieve wet compression. Effects on the thermodynamic properties such as temperature, pressure and relative humidity are investigated in detail for different parameters such as compression speeds and overspray. An analytical model is derived and the requisite thermodynamic curves are generated. The deviations of the generated thermodynamic curves from the dry isentropic curves (PV^γ = constant) are analyzed.

  1. Statistical conditional sampling for variable-resolution video compression.

    Directory of Open Access Journals (Sweden)

    Alexander Wong

    Full Text Available In this study, we investigate a variable-resolution approach to video compression based on conditional random fields (CRF) and statistical conditional sampling, in order to further improve the compression rate while maintaining high-quality video. In the proposed approach, representative key-frames within a video shot are identified and stored at full resolution. The remaining frames within the video shot are stored and compressed at a reduced resolution. At the decompression stage, a region-based dictionary is constructed from the key-frames and used to restore the reduced-resolution frames to the original resolution via statistical conditional sampling. The sampling is based on the conditional probability of the CRF model, using the constructed dictionary. Experimental results show that the proposed variable-resolution approach via statistical conditional sampling has potential for improving compression rates when compared to compressing the video at full resolution, while achieving higher video quality when compared to compressing the video at reduced resolution.

  2. Modelling for Fuel Optimal Control of a Variable Compression Engine

    OpenAIRE

    Nilsson, Ylva

    2007-01-01

    Variable compression engines are a mean to meet the demand on lower fuel consumption. A high compression ratio results in high engine efficiency, but also increases the knock tendency. On conventional engines with fixed compression ratio, knock is avoided by retarding the ignition angle. The variable compression engine offers an extra dimension in knock control, since both ignition angle and compression ratio can be adjusted. The central question is thus for what combination of compression ra...

  3. Effects of Instantaneous Multiband Dynamic Compression on Speech Intelligibility

    Directory of Open Access Journals (Sweden)

    Herzke Tobias

    2005-01-01

    Full Text Available The recruitment phenomenon, that is, the reduced dynamic range between threshold and uncomfortable level, is attributed to the loss of instantaneous dynamic compression on the basilar membrane. Despite this, hearing aids commonly use slow-acting dynamic compression for its compensation, because this was found to be the most successful strategy in terms of speech quality and intelligibility rehabilitation. Former attempts to use fast-acting compression gave ambiguous results, raising the question as to whether auditory-based recruitment compensation by instantaneous compression is in principle applicable in hearing aids. This study thus investigates instantaneous multiband dynamic compression based on an auditory filterbank. Instantaneous envelope compression is performed in each frequency band of a gammatone filterbank, which provides a combination of time and frequency resolution comparable to the normal healthy cochlea. The gain characteristics used for dynamic compression are deduced from categorical loudness scaling. In speech intelligibility tests, the instantaneous dynamic compression scheme was compared against a linear amplification scheme, which used the same filterbank for frequency analysis, but employed constant gain factors that restored the sound level for medium perceived loudness in each frequency band. In subjective comparisons, five of nine subjects preferred the linear amplification scheme and would not accept the instantaneous dynamic compression in hearing aids. Four of nine subjects did not perceive any quality differences. A sentence intelligibility test in noise (Oldenburg sentence test showed little to no negative effects of the instantaneous dynamic compression, compared to linear amplification. A word intelligibility test in quiet (one-syllable rhyme test showed that the subjects benefit from the larger amplification at low levels provided by instantaneous dynamic compression. 
Further analysis showed that the increase

  4. Emittance Growth during Bunch Compression in the CTF-II

    Energy Technology Data Exchange (ETDEWEB)

    Raubenheimer, Tor O

    1999-02-26

    Measurements of the beam emittance during bunch compression in the CLIC Test Facility (CTF-II) are described. The measurements were made with different beam charges and different energy correlations versus the bunch compressor settings which were varied from no compression through the point of full compression and to over-compression. Significant increases in the beam emittance were observed with the maximum emittance occurring near the point of full (maximal) compression. Finally, evaluation of possible emittance dilution mechanisms indicate that coherent synchrotron radiation was the most likely cause.

  5. Type-I cascaded quadratic soliton compression in lithium niobate: Compressing femtosecond pulses from high-power fiber lasers

    DEFF Research Database (Denmark)

    Bache, Morten; Wise, Frank W.

    2010-01-01

    The output pulses of a commercial high-power femtosecond fiber laser or amplifier are typically around 300–500 fs with wavelengths of approximately 1030 nm and tens of microjoules of pulse energy. Here, we present a numerical study of cascaded quadratic soliton compression of such pulses in LiNbO3. However, the strong group-velocity dispersion implies that the pulses can achieve moderate compression to durations of less than 130 fs in available crystal lengths. Most of the pulse energy is conserved because the compression is moderate. The effects of diffraction and spatial walk-off are addressed, and in particular the latter could become an issue when compressing in such long crystals (around 10 cm long). We finally show that the second harmonic contains a short pulse locked to the pump and a long multi-picosecond red-shifted detrimental component. The latter is caused by the nonlocal effects...

  6. Compact torus compression of microwaves

    International Nuclear Information System (INIS)

    Hewett, D.W.; Langdon, A.B.

    1985-01-01

    The possibility that a compact torus (CT) might be accelerated to large velocities has been suggested by Hartman and Hammer. If this is feasible, one application of these moving CTs might be to compress microwaves. The proposed mechanism is that a coaxial vacuum region in front of a CT is prefilled with a number of normal electromagnetic modes on which the CT impinges. A crucial assumption of this proposal is that the CT excludes the microwaves and therefore compresses them. Should the microwaves penetrate the CT, compression efficiency is diminished and significant CT heating results. MFE applications in the same parameter regime have found electromagnetic radiation capable of penetrating, heating, and driving currents. We report here a cursory investigation of rf penetration using a 1-D version of a direct implicit PIC code

  7. Compressibility characteristics of Sabak Bernam Marine Clay

    Science.gov (United States)

    Lat, D. C.; Ali, N.; Jais, I. B. M.; Baharom, B.; Yunus, N. Z. M.; Salleh, S. M.; Azmi, N. A. C.

    2018-04-01

    This study is carried out to determine the geotechnical properties and compressibility characteristics of marine clay collected at Sabak Bernam. The compressibility characteristics of this soil are determined from a 1-D consolidation test and verified against existing correlations by other researchers. No literature has been found on the compressibility characteristics of Sabak Bernam Marine Clay. It is important to carry out this study since this type of marine clay covers a large coastal area of the west coast of Malaysia. This type of marine clay was found along the main road connecting Klang to Perak, and the road keeps experiencing undulation and uneven settlement, which jeopardises the safety of the road users. The soil is indicated in the Generalised Soil Map of Peninsular Malaysia as a CLAY with alluvial soil on recent marine and riverine alluvium. Based on the British Standard Soil Classification and Plasticity Chart, the soil is classified as a CLAY with very high plasticity (CV). Results from laboratory tests on physical properties and compressibility parameters show that Sabak Bernam Marine Clay (SBMC) is highly compressible, has low permeability and poor drainage characteristics. The compressibility parameters obtained for SBMC are in good agreement with the findings of other researchers in the same field.

  8. Compressing bitmap indexes for faster search operations

    International Nuclear Information System (INIS)

    Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie

    2002-01-01

    In this paper, we study the effects of compression on bitmap indexes. The main operations on the bitmaps during query processing are bitwise logical operations such as AND, OR, NOT, etc. Using the general purpose compression schemes, such as gzip, the logical operations on the compressed bitmaps are much slower than on the uncompressed bitmaps. Specialized compression schemes, like the byte-aligned bitmap code(BBC), are usually faster in performing logical operations than the general purpose schemes, but in many cases they are still orders of magnitude slower than the uncompressed scheme. To make the compressed bitmap indexes operate more efficiently, we designed a CPU-friendly scheme which we refer to as the word-aligned hybrid code (WAH). Tests on both synthetic and real application data show that the new scheme significantly outperforms well-known compression schemes at a modest increase in storage space. Compared to BBC, a scheme well-known for its operational efficiency, WAH performs logical operations about 12 times faster and uses only 60 percent more space. Compared to the uncompressed scheme, in most test cases WAH is faster while still using less space. We further verified with additional tests that the improvement in logical operation speed translates to similar improvement in query processing speed
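    The word-aligned idea can be sketched in a few lines (a simplified model of WAH that keeps the fill/literal distinction but ignores the real packing of both word types into 32-bit machine words): the bitmap is cut into 31-bit groups, and runs of all-0 or all-1 groups collapse into a single counted fill word.

```python
def wah_encode(bits, w=31):
    """Simplified word-aligned hybrid (WAH) coding: homogeneous 31-bit
    groups become ('fill', bit, run-length-in-groups) words and merge
    with a preceding fill of the same bit; mixed groups stay literal."""
    words = []
    for i in range(0, len(bits), w):
        g = bits[i:i + w]
        if len(g) == w and len(set(g)) == 1:           # homogeneous group
            if words and words[-1][0] == "fill" and words[-1][1] == g[0]:
                words[-1] = ("fill", g[0], words[-1][2] + 1)
            else:
                words.append(("fill", g[0], 1))
        else:
            words.append(("literal", tuple(g)))
    return words

# 200 zeros, an alternating stretch, then 100 ones -> five code words.
bitmap = [0] * 200 + [1, 0] * 10 + [1] * 100
print(wah_encode(bitmap))
```

    Because fills stay word-aligned, a bitwise AND or OR of two encoded bitmaps can skip whole runs with integer arithmetic instead of decompressing them, which is where the reported speed advantage over byte-aligned schemes like BBC comes from.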

  10. Development and evaluation of a novel lossless image compression method (AIC: artificial intelligence compression method) using neural networks as artificial intelligence

    International Nuclear Information System (INIS)

    Fukatsu, Hiroshi; Naganawa, Shinji; Yumura, Shinnichiro

    2008-01-01

This study aimed to validate the performance of a novel image compression method using a neural network to achieve lossless compression. The encoding consists of the following blocks: a prediction block; a residual data calculation block; a transformation and quantization block; an organization and modification block; and an entropy encoding block. The predicted image is divided into four macro-blocks using the original image for training, and then redivided into sixteen sub-blocks. The predicted image is compared to the original image to create the residual image. The spatial and frequency data of the residual image are compared and transformed. Chest radiography, computed tomography (CT), magnetic resonance imaging, positron emission tomography, radioisotope mammography, ultrasonography, and digital subtraction angiography images were compressed using the AIC lossless compression method, and the compression rates were calculated. The compression rates were around 15:1 for chest radiography and mammography, 12:1 for CT, and around 6:1 for the other images. This method thus enables greater lossless compression than conventional methods and should improve the efficiency of handling the increasing volume of medical imaging data. (author)
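The first two blocks of such a pipeline, prediction followed by residual calculation, are the classic backbone of lossless image coding: when the predictor is good, residuals cluster near zero and compress far better than raw pixels. The sketch below does not reproduce the paper's neural predictor; it substitutes a trivial left-neighbor predictor purely to show the prediction/residual principle.

```python
import numpy as np

def residuals(img):
    """Left-neighbor prediction: residual[r, c] = img[r, c] - img[r, c-1].

    The first column is predicted as 0, so it carries the raw values.
    Returned as int16 so negative differences are preserved losslessly.
    """
    pred = np.zeros(img.shape, dtype=np.int16)
    pred[:, 1:] = img[:, :-1]
    return img.astype(np.int16) - pred

def reconstruct(res):
    """Invert the predictor by cumulative summation along each row."""
    return np.cumsum(res, axis=1).astype(np.uint8)
```

On smooth medical images the residual array is dominated by small values, which is exactly what the subsequent transform/quantization and entropy-coding blocks exploit.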

  11. Shock absorbing properties of toroidal shells under compression, 3

    International Nuclear Information System (INIS)

    Sugita, Yuji

    1985-01-01

The author has previously presented the static load-deflection relations of a toroidal shell subjected to axisymmetric compression between rigid plates and those of its outer half when subjected to lateral compression. In both these cases, the analytical method was based on the incremental Rayleigh-Ritz method. In this paper, the effects of compression angle and strain rate on the load-deflection relations of the toroidal shell are investigated for its use as a shock absorber for the radioactive material shipping cask, which must keep its structural integrity even after accidental falls at any angle. Static compression tests have been carried out at four angles of compression, 10°, 20°, 50°, and 90°, and the applications of the preceding analytical method have been discussed. Dynamic compression tests have also been performed using a free-falling drop hammer. The results are compared with those of the static compression tests. (author)

  12. Correlation and image compression for limited-bandwidth CCD.

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, Douglas G.

    2005-07-01

    As radars move to Unmanned Aerial Vehicles with limited-bandwidth data downlinks, the amount of data stored and transmitted with each image becomes more significant. This document gives the results of a study to determine the effect of lossy compression in the image magnitude and phase on Coherent Change Detection (CCD). We examine 44 lossy compression types, plus lossless zlib compression, and test each compression method with over 600 CCD image pairs. We also derive theoretical predictions for the correlation for most of these compression schemes, which compare favorably with the experimental results. We recommend image transmission formats for limited-bandwidth programs having various requirements for CCD, including programs which cannot allow performance degradation and those which have stricter bandwidth requirements at the expense of CCD performance.

  13. The Fermilab Main Injector dipole and quadrupole cooling design and bus connections

    International Nuclear Information System (INIS)

    Satti, J.A.

    1995-06-01

The proposed system for connecting the low conductivity water (LCW) and the electrical power to the magnets is explained. This system requires minimum maintenance. Stainless steel headers supply LCW to local, secondary manifolds which regulate the flow to the dipole and to the copper bus which conducts both power and cooling water to the quadrupole. A combination of ceramic feedthroughs and thermoplastic hoses insulates the piping electrically from the copper bus system. The utilities for the Main Injector are grouped together at the outside wall of the tunnel, leaving most of the enclosure space for servicing. Space above the headers is available for future accelerator expansion. The new dipoles have bolted electrical connections with flexible copper jumpers. Separate compression fittings are used for the water connections. Each dipole magnet has two water circuits in parallel designed to minimize thermal stresses and the number of insulators. Two electrical insulators are used in series because this design has been shown to minimize electrolysis problems and copper ion deposits inside the insulators. The design value of the temperature gradient of the LCW is 8 degrees C.

  14. Large Eddy Simulation for Compressible Flows

    CERN Document Server

    Garnier, E; Sagaut, P

    2009-01-01

    Large Eddy Simulation (LES) of compressible flows is still a widely unexplored area of research. The authors, whose books are considered the most relevant monographs in this field, provide the reader with a comprehensive state-of-the-art presentation of the available LES theory and application. This book is a sequel to "Large Eddy Simulation for Incompressible Flows", as most of the research on LES for compressible flows is based on variable density extensions of models, methods and paradigms that were developed within the incompressible flow framework. The book addresses both the fundamentals and the practical industrial applications of LES in order to point out gaps in the theoretical framework as well as to bridge the gap between LES research and the growing need to use it in engineering modeling. After introducing the fundamentals on compressible turbulence and the LES governing equations, the mathematical framework for the filtering paradigm of LES for compressible flow equations is established. Instead ...

  15. Compression and archiving of digital images

    International Nuclear Information System (INIS)

    Huang, H.K.

    1988-01-01

This paper describes the application of a full-frame bit-allocation image compression technique to a hierarchical digital image archiving system consisting of magnetic disks, optical disks and an optical disk library. The digital archiving system without compression has been in clinical operation in Pediatric Radiology for more than half a year. The database in the system consists of all pediatric inpatients, including all images from computed radiography, digitized x-ray films, CT, MR, and US. The rate of image accumulation is approximately 1,900 megabytes per week. The hardware design of the compression module is based on a Motorola 68020 microprocessor, a VME bus, a 16 megabyte image buffer memory board, and three Motorola digital signal processing 56001 chips on a VME board for performing the two-dimensional cosine transform and the quantization. The clinical evaluation of the compression module with the image archiving system is expected to be in February 1988.

  16. 46 CFR 112.50-7 - Compressed air starting.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 4 2010-10-01 2010-10-01 false Compressed air starting. 112.50-7 Section 112.50-7... air starting. A compressed air starting system must meet the following: (a) The starting, charging... air compressors addressed in paragraph (c)(3)(i) of this section. (b) The compressed air starting...

  17. Bitshuffle: Filter for improving compression of typed binary data

    Science.gov (United States)

    Masui, Kiyoshi

    2017-12-01

Bitshuffle rearranges typed, binary data for improving compression; the algorithm is implemented in a python/C package within the Numpy framework. The library can be used alongside HDF5 to compress and decompress datasets and is integrated through the dynamically loaded filters framework. Algorithmically, Bitshuffle is closely related to HDF5's Shuffle filter except it operates at the bit level instead of the byte level. Arranging a typed data array into a matrix with the elements as the rows and the bits within the elements as the columns, Bitshuffle "transposes" the matrix, such that all the least-significant bits are in one row, the next-significant bits in the following row, and so on. This transposition is performed within blocks of data roughly 8 kB long; it does not in itself compress data, but rearranges it for more efficient compression, so a compression library is necessary to perform the actual compression. This scheme has been used for compression of radio data in high performance computing.

  18. Heterogeneous Compression of Large Collections of Evolutionary Trees.

    Science.gov (United States)

    Matthews, Suzanne J

    2015-01-01

Compressing heterogeneous collections of trees is an open problem in computational phylogenetics. In a heterogeneous tree collection, each tree can contain a unique set of taxa. An ideal compression method would allow for the efficient archival of large tree collections and enable scientists to identify common evolutionary relationships over disparate analyses. In this paper, we extend TreeZip to compress heterogeneous collections of trees. TreeZip is the most efficient algorithm for compressing homogeneous tree collections. To the best of our knowledge, no other domain-based compression algorithm exists for large heterogeneous tree collections or enables their rapid analysis. Our experimental results indicate that TreeZip averages 89.03 percent (72.69 percent) space savings on unweighted (weighted) collections of trees when the level of heterogeneity in a collection is moderate. The organization of the TRZ file allows for efficient computations over heterogeneous data; for example, consensus trees can be computed in mere seconds. Lastly, combining the TreeZip compressed (TRZ) file with general-purpose compression yields average space savings of 97.34 percent (81.43 percent) on unweighted (weighted) collections of trees. Our results lead us to believe that TreeZip will prove invaluable in the efficient archival of tree collections and will enable scientists to develop novel methods for relating heterogeneous collections of trees.

  19. Lossy image compression for digital medical imaging systems

    Science.gov (United States)

    Wilhelm, Paul S.; Haynor, David R.; Kim, Yongmin; Nelson, Alan C.; Riskin, Eve A.

    1990-07-01

Image compression at rates of 10:1 or greater could make PACS much more responsive and economically attractive. This paper describes a protocol for subjective and objective evaluation of the fidelity of compressed/decompressed images to the originals and presents the results of its application to four representative and promising compression methods. The methods examined are predictive pruned tree-structured vector quantization, fractal compression, the discrete cosine transform with equal weighting of block bit allocation, and the discrete cosine transform with human visual system weighting of block bit allocation. Vector quantization is theoretically capable of producing the best compressed images, but has proven difficult to implement effectively. It has the advantage that it can reconstruct images quickly through a simple lookup table. Disadvantages are that codebook training is required, the method is computationally intensive, and achieving optimum performance would require prohibitively long vector dimensions. Fractal compression is a relatively new technique, but has produced satisfactory results while being computationally simple. It is fast at both image compression and image reconstruction. Discrete cosine transform techniques reproduce images well, but have traditionally been hampered by the need for intensive computing to compress and decompress images. A protocol was developed for side-by-side observer comparison of reconstructed images with originals. Three 1024 x 1024 CR (computed radiography) images and two 512 x 512 X-ray CT images were viewed at six bit rates (0.2, 0.4, 0.6, 0.9, 1.2, and 1.5 bpp for CR, and 1.0, 1.3, 1.6, 1.9, 2.2, 2.5 bpp for X-ray CT) by nine radiologists at the University of Washington Medical Center. The CR images were viewed on a Pixar II Megascan (2560 x 2048) monitor and the CT images on a Sony (1280 x 1024) monitor. The radiologists' subjective evaluations of image fidelity were compared to

  20. Thermal compression modulus of polarized neutron matter

    International Nuclear Information System (INIS)

    Abd-Alla, M.

    1990-05-01

We applied the equation of state for pure polarized neutron matter at finite temperature, calculated previously, to calculate the compression modulus. The compression modulus of pure neutron matter at zero temperature is very large and reflects the stiffness of the equation of state. It has little temperature dependence. Introducing the spin excess parameter into the equation of state calculations is important because it has a significant effect on the compression modulus. (author). 25 refs, 2 tabs

  1. Image splitting and remapping method for radiological image compression

    Science.gov (United States)

    Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.

    1990-07-01

    A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.

  2. Dual pathology proximal median nerve compression of the forearm.

    LENUS (Irish Health Repository)

    Murphy, Siun M

    2013-12-01

We report an unusual case of synchronous pathology in the forearm: the coexistence of a large lipoma of the median nerve together with an osteochondroma of the proximal ulna, giving rise to a dual proximal median nerve compression. Proximal median nerve compression neuropathies in the forearm are uncommon compared to the prevalence of distal compression neuropathies (e.g. carpal tunnel syndrome). Both neural fibrolipomas (Refs. 1,2) and osteochondromas of the proximal ulna (Ref. 3) in isolation are rare but well documented. Unlike a distal compression, a proximal compression of the median nerve will often have a definite cause. Neural fibrolipomas, also called fibrolipomatous hamartomas, are rare, slow-growing, benign tumours of peripheral nerves, most often occurring in the median nerve of younger patients. To our knowledge, this is the first report of such dual pathology in the same forearm, giving rise to a severe proximal compression of the median nerve. In this case, the nerve was being pushed anteriorly by the osteochondroma and compressed from within by the intraneural lipoma. This unusual case highlights the advantage of preoperative imaging as part of the workup of proximal median nerve compression.

  3. Compression of FASTQ and SAM format sequencing data.

    Directory of Open Access Journals (Sweden)

    James K Bonfield

Storage and transmission of the data produced by modern DNA sequencing instruments has become a major concern, which prompted the Pistoia Alliance to pose the SequenceSqueeze contest for compression of FASTQ files. We present several compression entries from the competition, Fastqz and Samcomp/Fqzcomp, including the winning entry. These are compared against existing algorithms for both reference-based compression (CRAM, Goby) and non-reference-based compression (DSRC, BAM), as well as other recently published competition entries (Quip, SCALCE). The tools are shown to be the new Pareto frontier for FASTQ compression, offering state-of-the-art ratios at affordable CPU costs. All programs are freely available on SourceForge. Fastqz: https://sourceforge.net/projects/fastqz/, fqzcomp: https://sourceforge.net/projects/fqzcomp/, and samcomp: https://sourceforge.net/projects/samcomp/.
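A trick shared by FASTQ-specific compressors is stream separation: read names, bases, and quality scores have very different statistics, so splitting them into homogeneous streams before entropy coding improves ratios. The snippet below is a minimal, generic illustration of that idea, not the on-disk format of any of the tools named above.

```python
import zlib

def split_fastq(text):
    """Split FASTQ text into (names, sequences, qualities) streams.

    A FASTQ record is four lines: @name, sequence, '+', qualities.
    """
    lines = text.strip().split("\n")
    assert len(lines) % 4 == 0, "truncated FASTQ record"
    names = "\n".join(lines[0::4])
    seqs = "\n".join(lines[1::4])
    quals = "\n".join(lines[3::4])   # line 3 of each record is the '+' separator
    return names, seqs, quals

def compressed_size(streams):
    """Total zlib size when each stream is compressed independently."""
    return sum(len(zlib.compress(s.encode())) for s in streams)
```

Each stream can then be fed to a model suited to it: an arithmetic coder over a small DNA alphabet for the bases, a delta model for the highly repetitive names, and a context model for the qualities.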

  4. Plant for compacting compressible radioactive waste

    International Nuclear Information System (INIS)

    Baatz, H.; Rittscher, D.; Lueer, H.J.; Ambros, R.

    1983-01-01

The waste is filled into auxiliary barrels made of sheet steel and compressed with the auxiliary barrels into steel jackets. These can be stacked in storage barrels. A hydraulic press is included in the plant, which has a horizontal compression chamber and a horizontal pressure piston, which works against a counter-bearing slider. There is a filling and emptying device for the pressure chamber behind the counter-bearing slider. The auxiliary barrels can be introduced into the compression chamber by the filling and emptying device. The pressure piston also pushes out the steel jackets formed, so that they are taken to the filling and emptying device. (orig./HP)

  5. Survived ileocecal blowout from compressed air.

    Science.gov (United States)

    Weber, Marco; Kolbus, Frank; Dressler, Jan; Lessig, Rüdiger

    2011-03-01

Industrial accidents in which compressed air enters the gastro-intestinal tract often end fatally. The pressures usually far exceed those used in medical applications such as colonoscopy and lead to vast injuries of the intestines with high mortality. The case described in this report is of a 26-year-old man who was harmed by compressed air that entered through the anus. He survived because of a fast emergency operation. This case underlines the necessity of explicit instruction on the hazards of handling compressed-air devices to maintain safety at work. Further, our observations support the hypothesis that the mucosa is the most elastic layer of the intestinal wall.

  6. Lagrangian statistics in compressible isotropic homogeneous turbulence

    Science.gov (United States)

    Yang, Yantao; Wang, Jianchun; Shi, Yipeng; Chen, Shiyi

    2011-11-01

In this work we conducted a Direct Numerical Simulation (DNS) of forced compressible isotropic homogeneous turbulence and investigated the flow statistics from the Lagrangian point of view, namely statistics computed following passive tracer trajectories. The numerical method combined the Eulerian field solver developed by Wang et al. (2010, J. Comp. Phys., 229, 5257-5279) and a Lagrangian module for tracking the tracers and recording the data. The Lagrangian probability density functions (p.d.f.'s) have then been calculated for both kinetic and thermodynamic quantities. In order to isolate the shearing part from the compressing part of the flow, we employed the Helmholtz decomposition to decompose the flow field (mainly the velocity field) into solenoidal and compressive parts. The solenoidal part was compared with the incompressible case, while the compressibility effect showed up in the compressive part. The Lagrangian structure functions and cross-correlations between various quantities are also discussed. This work was supported in part by China's Turbulence Program under Grant No. 2009CB724101.

  7. Lossless Compression of Broadcast Video

    DEFF Research Database (Denmark)

    Martins, Bo; Eriksen, N.; Faber, E.

    1998-01-01

We investigate several techniques for lossless and near-lossless compression of broadcast video. The emphasis is placed on the emerging international standard for compression of continuous-tone still images, JPEG-LS, due to its excellent compression performance and moderate complexity. Except for one...... cannot be expected to code losslessly at a rate of 125 Mbit/s. We investigate the rate and quality effects of quantization using standard JPEG-LS quantization and two new techniques: visual quantization and trellis quantization. Visual quantization is not part of baseline JPEG-LS, but is applicable...... in the framework of JPEG-LS. Visual tests show that this quantization technique gives much better quality than standard JPEG-LS quantization. Trellis quantization is a process by which the original image is altered in such a way as to make lossless JPEG-LS encoding more effective. For JPEG-LS and visual...

  8. Compressibility of rotating black holes

    International Nuclear Information System (INIS)

    Dolan, Brian P.

    2011-01-01

Interpreting the cosmological constant as a pressure, whose thermodynamically conjugate variable is a volume, modifies the first law of black hole thermodynamics. Properties of the resulting thermodynamic volume are investigated: the compressibility and the speed of sound of the black hole are derived in the case of nonpositive cosmological constant. The adiabatic compressibility vanishes for a nonrotating black hole and is maximal in the extremal case, comparable with, but still less than, that of a cold neutron star. A speed of sound v_s is associated with the adiabatic compressibility, which is equal to c for a nonrotating black hole and decreases as the angular momentum is increased. An extremal black hole has v_s^2 = 0.9 c^2 when the cosmological constant vanishes, and more generally v_s is bounded below by c/√2.

  9. Compressive behavior of fine sand.

    Energy Technology Data Exchange (ETDEWEB)

    Martin, Bradley E. (Air Force Research Laboratory, Eglin, FL); Kabir, Md. E. (Purdue University, West Lafayette, IN); Song, Bo; Chen, Wayne (Purdue University, West Lafayette, IN)

    2010-04-01

The compressive mechanical response of fine sand is experimentally investigated. The strain rate, initial density, stress state, and moisture level are systematically varied. A Kolsky bar was modified to obtain the uniaxial and triaxial compressive response at high strain rates. A controlled loading pulse allows the specimen to acquire stress equilibrium and constant strain rates. The results show that the compressive response of the fine sand is not sensitive to strain rate under the loading conditions in this study, but is significantly dependent on the moisture content, initial density and lateral confinement. Partially saturated sand is more compliant than dry sand. Similar trends were reported in the quasi-static regime for experiments conducted at comparable specimen conditions. The sand becomes stiffer as initial density and/or confinement pressure increases. The sand particle size becomes smaller after hydrostatic pressure and smaller still after dynamic axial loading.

  10. Rupture of esophagus by compressed air.

    Science.gov (United States)

    Wu, Jie; Tan, Yuyong; Huo, Jirong

    2016-11-01

Currently, beverages containing compressed air such as cola and champagne are widely used in our daily life. Improper ways of unscrewing the bottle, usually with the teeth, can lead to an injury, even a rupture of the esophagus. This letter to the editor describes a case of esophageal rupture caused by compressed air.

  11. Compressibility Analysis of the Tongue During Speech

    National Research Council Canada - National Science Library

    Unay, Devrim

    2001-01-01

    .... In this paper, 3D compression and expansion analysis of the tongue will be presented. Patterns of expansion and compression have been compared for different syllables and various repetitions of each syllable...

  12. Compression garments and exercise: no influence of pressure applied.

    Science.gov (United States)

    Beliard, Samuel; Chauveau, Michel; Moscatiello, Timothée; Cros, François; Ecarnot, Fiona; Becker, François

    2015-03-01

Compression garments on the lower limbs are increasingly popular among athletes who wish to improve performance, reduce exercise-induced discomfort, and reduce the risk of injury. However, the beneficial effects of compression garments have not been clearly established. We performed a review of the literature for prospective, randomized, controlled studies using quantified lower limb compression in order to (1) describe the beneficial effects that have been identified with compression garments, and in which conditions; and (2) investigate whether there is a relation between the pressure applied and the reported effects. The pressures delivered were measured either in laboratory conditions on garments identical to those used in the studies, or derived from publication data. Twenty-three original articles were selected for inclusion in this review. The effects of wearing compression garments during exercise are controversial, as most studies failed to demonstrate a beneficial effect on immediate or performance recovery, or on delayed-onset muscle soreness. There was a trend towards a beneficial effect of compression garments worn during recovery: performance recovery was found to be improved in the five studies in which this was investigated, and delayed-onset muscle soreness was reportedly reduced in three of these five studies. There is no apparent relation between the effects of compression garments worn during or after exercise and the pressures applied, since beneficial effects were obtained with both low and high pressures. Wearing compression garments during recovery from exercise seems to be beneficial for performance recovery and delayed-onset muscle soreness, but the factors explaining this efficacy remain to be elucidated. Key points: We observed no relationship between the effects of compression and the pressures applied. The pressure applied at the level of the lower limb by compression garments destined for use by athletes varies widely between

  13. Modeling DPOAE input/output function compression: comparisons with hearing thresholds.

    Science.gov (United States)

    Bhagat, Shaum P

    2014-09-01

    Basilar membrane input/output (I/O) functions in mammalian animal models are characterized by linear and compressed segments when measured near the location corresponding to the characteristic frequency. A method of studying basilar membrane compression indirectly in humans involves measuring distortion-product otoacoustic emission (DPOAE) I/O functions. Previous research has linked compression estimates from behavioral growth-of-masking functions to hearing thresholds. The aim of this study was to compare compression estimates from DPOAE I/O functions and hearing thresholds at 1 and 2 kHz. A prospective correlational research design was performed. The relationship between DPOAE I/O function compression estimates and hearing thresholds was evaluated with Pearson product-moment correlations. Normal-hearing adults (n = 16) aged 22-42 yr were recruited. DPOAE I/O functions (L₂ = 45-70 dB SPL) and two-interval forced-choice hearing thresholds were measured in normal-hearing adults. A three-segment linear regression model applied to DPOAE I/O functions supplied estimates of compression thresholds, defined as breakpoints between linear and compressed segments and the slopes of the compressed segments. Pearson product-moment correlations between DPOAE compression estimates and hearing thresholds were evaluated. A high correlation between DPOAE compression thresholds and hearing thresholds was observed at 2 kHz, but not at 1 kHz. Compression slopes also correlated highly with hearing thresholds only at 2 kHz. The derivation of cochlear compression estimates from DPOAE I/O functions provides a means to characterize basilar membrane mechanics in humans and elucidates the role of compression in tone detection in the 1-2 kHz frequency range. American Academy of Audiology.

  14. On-board image compression for the RAE lunar mission

    Science.gov (United States)

    Miller, W. H.; Lynch, T. J.

    1976-01-01

    The requirements, design, implementation, and flight performance of an on-board image compression system for the lunar orbiting Radio Astronomy Explorer-2 (RAE-2) spacecraft are described. The image to be compressed is a panoramic camera view of the long radio astronomy antenna booms used for gravity-gradient stabilization of the spacecraft. A compression ratio of 32 to 1 is obtained by a combination of scan line skipping and adaptive run-length coding. The compressed imagery data are convolutionally encoded for error protection. This image compression system occupies about 1000 cu cm and consumes 0.4 W.
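Run-length coding, the second stage mentioned above, replaces constant stretches of a scan line with (value, count) pairs. The RAE system's adaptive variant is not specified in this summary; the fixed-parameter sketch below shows only the basic mechanism.

```python
def rle_encode(samples, max_run=255):
    """Collapse repeated samples into (value, run_length) pairs."""
    pairs = []
    for s in samples:
        if pairs and pairs[-1][0] == s and pairs[-1][1] < max_run:
            pairs[-1][1] += 1            # extend the current run
        else:
            pairs.append([s, 1])         # start a new run
    return [(v, n) for v, n in pairs]

def rle_decode(pairs):
    """Expand (value, run_length) pairs back into the sample list."""
    return [v for v, n in pairs for _ in range(n)]
```

For a camera view dominated by dark sky around the antenna booms, long constant background runs are exactly where such a coder wins; combined with transmitting only every Nth scan line, large overall ratios like the reported 32:1 become plausible.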

  15. High-quality compressive ghost imaging

    Science.gov (United States)

    Huang, Heyan; Zhou, Cheng; Tian, Tian; Liu, Dongqi; Song, Lijun

    2018-04-01

We propose a high-quality compressive ghost imaging method based on projected Landweber regularization and a guided filter, which effectively reduces the undersampling noise and improves the resolution. In our scheme, the original object is reconstructed by decomposition into regularization and denoising steps instead of solving a minimization problem in the compressive reconstruction process. The simulation and experimental results show that our method can obtain high ghost imaging quality in terms of PSNR and visual observation.
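Projected Landweber regularization itself is a standard iteration: a gradient step on ||y - Ax||^2 followed by projection onto a constraint set. The generic sketch below assumes a linear measurement model y = Ax with box constraints; the paper's ghost-imaging measurement matrix and its guided-filter denoising step are not reproduced here.

```python
import numpy as np

def projected_landweber(A, y, n_iter=500, lo=0.0, hi=1.0):
    """Minimize ||y - A x||^2 subject to lo <= x <= hi (elementwise)."""
    tau = 1.0 / np.linalg.norm(A, 2) ** 2     # step size from the spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + tau * A.T @ (y - A @ x)       # Landweber gradient step
        x = np.clip(x, lo, hi)                # projection onto the box
    return x
```

The decomposition the abstract refers to corresponds to alternating such a data-consistency step with a denoising step (here, the projection; in the paper, a guided filter), rather than solving one monolithic minimization.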

  16. After microvascular decompression to treat trigeminal neuralgia, both immediate pain relief and recurrence rates are higher in patients with arterial compression than with venous compression.

    Science.gov (United States)

    Shi, Lei; Gu, Xiaoyan; Sun, Guan; Guo, Jun; Lin, Xin; Zhang, Shuguang; Qian, Chunfa

    2017-07-04

We explored differences in postoperative pain relief achieved through decompression of the trigeminal nerve compressed by arteries and veins. Clinical characteristics, intraoperative findings, and postoperative curative effects were analyzed in 72 patients with trigeminal neuralgia who were treated by microvascular decompression. The patients were divided into arterial and venous compression groups based on intraoperative findings. Surgical curative effects included immediate relief, delayed relief, obvious reduction, and invalid result. Among the 40 patients in the arterial compression group, 32 had immediate relief of pain (80.0%), 5 had delayed relief (12.5%), and 3 had an obvious reduction (7.5%). In the venous compression group, 12 patients had immediate relief of pain (37.5%), 13 had delayed relief (40.6%), and 7 had an obvious reduction (21.9%). During the 2-year follow-up period, 6 patients in the arterial compression group experienced recurrence of trigeminal neuralgia, but there were no recurrences in the venous compression group. Simple artery compression was followed by early relief of trigeminal neuralgia more often than simple venous compression. However, the trigeminal neuralgia recurrence rate was higher in the artery compression group than in the venous compression group.

  17. Results of subscale MTF compression experiments

    Science.gov (United States)

    Howard, Stephen; Mossman, A.; Donaldson, M.; Fusion Team, General

    2016-10-01

    In magnetized target fusion (MTF), a magnetized plasma torus is compressed in a time shorter than its own energy confinement time, thereby heating to fusion conditions. Understanding plasma behavior and scaling laws is needed to advance toward a reactor-scale demonstration. General Fusion is conducting a sequence of subscale experiments of compact toroid (CT) plasmas being compressed by chemically driven implosion of an aluminum liner, providing data on several key questions. CT plasmas are formed by a coaxial Marshall gun, with magnetic fields supported by internal plasma currents and eddy currents in the wall. Configurations that have been compressed so far include decaying and sustained spheromaks and an ST that is formed into a pre-existing toroidal field. Diagnostics measure B, ne, visible and x-ray emission, Ti and Te. Before compression the CT has an energy of 10 kJ magnetic, 1 kJ thermal, with Te of 100-200 eV and ne of 5x10^20 m^-3. Plasma was stable during a compression factor R0/R > 3 on best shots. A reactor-scale demonstration would require 10x higher initial B and ne but similar Te. Liner improvements have minimized ripple, tearing and ejection of micro-debris. Plasma-facing surfaces have included plasma-sprayed tungsten, bare Cu and Al, and gettering with Ti and Li.

  18. The task of control digital image compression

    OpenAIRE

    TASHMANOV E.B.; MAMATOV M.S.

    2014-01-01

    In this paper we consider the relationship between control tasks and image compression losses. The main idea of this approach is to extract the structural lines of a simplified image and then compress the selected data.

  19. A Posteriori Restoration of Block Transform-Compressed Data

    Science.gov (United States)

    Brown, R.; Boden, A. F.

    1995-01-01

    The Galileo spacecraft will use lossy data compression for the transmission of its science imagery over the low-bandwidth communication system. The technique chosen for image compression is a block transform technique based on the Integer Cosine Transform, a derivative of the JPEG image compression standard. Considered here are two known a posteriori enhancement techniques, which are adapted.

  20. Data compression of digital X-ray images from a clinical viewpoint

    International Nuclear Information System (INIS)

    Ando, Yutaka

    1992-01-01

    For a PACS (picture archiving and communication system), large-capacity storage media and a fast data transfer network are necessary. When a PACS is in operation, these technology requirements become a large problem, so we need image data compression to improve the effective capacity of recording media and the transmission ratio. There are two kinds of data compression methods: reversible and irreversible. With reversible methods, the compressed-expanded image is exactly equal to the original image; the achievable compression ratio is between 1/2 and 1/3. With irreversible compression, the compressed-expanded image is distorted, but a high compression ratio can be achieved. In the medical field, the discrete cosine transform (DCT) method is popular because of its low distortion and fast performance; compression ratios of 1/10 to 1/20 are achieved in practice. It is important to choose the compression ratio according to the purpose and modality of the image: the suitable ratio differs depending on whether the image is used for education, clinical diagnosis, or reference. (author)
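    A minimal numpy sketch (ours, not the author's) of the DCT-based irreversible compression this record describes: transform an 8x8 block, quantize coarsely, and observe that a smooth block leaves mostly zero coefficients, which is where ratios like 1/10 to 1/20 come from. The block data and quantization step are invented for illustration.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis: C[k, m] = s(k) * cos(pi * (2m + 1) * k / (2n)).
    k = np.arange(n).reshape(-1, 1)
    m = np.arange(n).reshape(1, -1)
    c = np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] *= 1 / np.sqrt(2)
    return c * np.sqrt(2 / n)

C = dct_matrix()
block = np.outer(np.arange(8), np.ones(8)) * 16.0  # smooth ramp "image" block
coeffs = C @ block @ C.T                           # 2-D DCT (transform step)
q = 32.0
quantized = np.round(coeffs / q)                   # coarse, lossy quantization
restored = C.T @ (quantized * q) @ C               # inverse 2-D DCT

# Smooth blocks compress well: most quantized coefficients are zero,
# and the reconstruction error stays small.
sparsity = float(np.mean(quantized == 0))
max_err = float(np.abs(block - restored).max())
```

    Entropy-coding the sparse quantized coefficients (as JPEG does) is what turns this sparsity into an actual bit-rate reduction.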

  1. Chest compression rates and survival following out-of-hospital cardiac arrest.

    Science.gov (United States)

    Idris, Ahamed H; Guffey, Danielle; Pepe, Paul E; Brown, Siobhan P; Brooks, Steven C; Callaway, Clifton W; Christenson, Jim; Davis, Daniel P; Daya, Mohamud R; Gray, Randal; Kudenchuk, Peter J; Larsen, Jonathan; Lin, Steve; Menegazzi, James J; Sheehan, Kellie; Sopko, George; Stiell, Ian; Nichol, Graham; Aufderheide, Tom P

    2015-04-01

    Guidelines for cardiopulmonary resuscitation recommend a chest compression rate of at least 100 compressions/min. A recent clinical study reported optimal return of spontaneous circulation with rates between 100 and 120/min during cardiopulmonary resuscitation for out-of-hospital cardiac arrest. However, the relationship between compression rate and survival is still undetermined. Prospective, observational study. Data are from the Resuscitation Outcomes Consortium Prehospital Resuscitation IMpedance threshold device and Early versus Delayed analysis clinical trial. Adults with out-of-hospital cardiac arrest treated by emergency medical service providers. None. Data were abstracted from monitor-defibrillator recordings for the first five minutes of emergency medical service cardiopulmonary resuscitation. Multiple logistic regression assessed the odds ratio for survival by compression rate category, adjusting for covariates including chest compression fraction and depth, first rhythm, and study site. Compression rate data were available for 10,371 patients; 6,399 also had chest compression fraction and depth data. Age (mean±SD) was 67±16 years. Chest compression rate was 111±19 per minute, compression fraction was 0.70±0.17, and compression depth was 42±12 mm. Circulation was restored in 34%; 9% survived to hospital discharge. After adjustment for covariates without chest compression depth and fraction (n=10,371), a global test found no significant relationship between compression rate and survival (p=0.19). However, after adjustment for covariates including chest compression depth and fraction (n=6,399), the global test found a significant relationship between compression rate and survival (p=0.02), with the reference group (100-119 compressions/min) having the greatest likelihood for survival. After adjustment for chest compression fraction and depth, compression rates between 100 and 120 per minute were associated with greatest survival to hospital discharge.

  2. ADVANCED RECIPROCATING COMPRESSION TECHNOLOGY (ARCT)

    Energy Technology Data Exchange (ETDEWEB)

    Danny M. Deffenbaugh; Klaus Brun; Ralph E. Harris; J. Pete Harrell; Robert J. Mckee; J. Jeffrey Moore; Steven J. Svedeman; Anthony J. Smalley; Eugene L. Broerman; Robert A Hart; Marybeth G. Nored; Ryan S. Gernentz; Shane P. Siebenaler

    2005-12-01

    The U.S. natural gas pipeline industry is facing the twin challenges of increased flexibility and capacity expansion. To meet these challenges, the industry requires improved choices in gas compression to address new construction and enhancement of the currently installed infrastructure. The current fleet of installed reciprocating compression is primarily slow-speed integral machines. Most new reciprocating compression is and will be large, high-speed separable units. The major challenges with the fleet of slow-speed integral machines are limited flexibility and a large range in performance. In an attempt to increase flexibility, many operators are choosing to single-act cylinders, which reduces reliability and integrity. While the best performing units in the fleet exhibit thermal efficiencies between 90% and 92%, the low performers run down to 50%, with the mean at about 80%. The major cause of this large disparity is installation losses in the pulsation control system. In the better performers, the losses are about evenly split between installation losses and valve losses. The major challenges for high-speed machines are cylinder nozzle pulsations, mechanical vibrations due to cylinder stretch, short valve life, and low thermal performance. To shift nozzle pulsation to higher orders, nozzles are shortened, and to dampen the amplitudes, orifices are added. The shortened nozzles result in mechanical coupling with the cylinder, thereby causing increased vibration due to the cylinder stretch mode. Valve life is even shorter than for slow-speed machines and can be on the order of a few months. The thermal efficiency is 10% to 15% lower than slow-speed equipment, with the best performance in the 75% to 80% range.
The goal of this advanced reciprocating compression program is to develop the technology for both high-speed and low-speed compression that will expand unit flexibility, increase thermal efficiency, and increase reliability and integrity.

  3. Computer calculations of compressibility of natural gas

    Energy Technology Data Exchange (ETDEWEB)

    Abou-Kassem, J.H.; Mattar, L.; Dranchuk, P.M.

    An alternative method for the calculation of pseudo reduced compressibility of natural gas is presented. The method is incorporated into the routines by adding a single FORTRAN statement before the RETURN statement. The method is suitable for computer and hand-held calculator applications. It produces the same reduced compressibility as other available methods but is computationally superior. Tabular definitions of coefficients and comparisons of predicted pseudo reduced compressibility using different methods are presented, along with appended FORTRAN subroutines. 7 refs., 2 tabs.
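    The quantity those routines return can be sketched directly from its definition, c_pr = 1/P_pr - (1/z)(dz/dP_pr). In the hedged Python sketch below, the z-factor function is an invented placeholder (not the correlation used in the paper), and a central finite difference stands in for the analytic derivative that the paper's single added FORTRAN statement computes.

```python
def z_factor(p_pr):
    # Placeholder z(P_pr): dips below 1 then recovers, qualitatively gas-like.
    # NOT the Dranchuk et al. correlation from the paper.
    return 1.0 - 0.06 * p_pr + 0.007 * p_pr ** 2

def pseudo_reduced_compressibility(p_pr, h=1e-6):
    # c_pr = 1/P_pr - (1/z) * dz/dP_pr, with dz/dP_pr by central difference.
    z = z_factor(p_pr)
    dz_dp = (z_factor(p_pr + h) - z_factor(p_pr - h)) / (2.0 * h)
    return 1.0 / p_pr - dz_dp / z

c_pr = pseudo_reduced_compressibility(2.0)
# Dimensional gas compressibility then follows as c_g = c_pr / P_pc,
# where P_pc is the pseudo-critical pressure.
```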

  4. Subband Coding Methods for Seismic Data Compression

    Science.gov (United States)

    Kiely, A.; Pollara, F.

    1995-01-01

    This paper presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The compression technique described could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
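    The progressive-transmission idea can be illustrated with a single-level Haar subband split (a toy example of ours; the paper's filter bank and coder are more elaborate): the low-pass band gives the seismologist the coarse first view at half the data volume, and the high-pass band is the refinement requested afterwards.

```python
import numpy as np

def haar_split(x):
    # One-level orthonormal Haar analysis: coarse (lo) and detail (hi) bands.
    x = np.asarray(x, dtype=float)
    lo = (x[0::2] + x[1::2]) / np.sqrt(2)
    hi = (x[0::2] - x[1::2]) / np.sqrt(2)
    return lo, hi

def haar_merge(lo, hi):
    # Exact inverse of haar_split.
    x = np.empty(2 * len(lo))
    x[0::2] = (lo + hi) / np.sqrt(2)
    x[1::2] = (lo - hi) / np.sqrt(2)
    return x

rng = np.random.default_rng(0)
seismic = np.sin(np.linspace(0, 6 * np.pi, 64)) + 0.1 * rng.standard_normal(64)

lo, hi = haar_split(seismic)
coarse = haar_merge(lo, np.zeros_like(hi))  # first pass: half the samples sent
exact = haar_merge(lo, hi)                  # after requesting the refinement
```

    Repeating the split on the `lo` band yields the multi-level refinement hierarchy that a progressive coder transmits band by band.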

  5. Evolution Of Nonlinear Waves in Compressing Plasma

    International Nuclear Information System (INIS)

    Schmit, P.F.; Dodin, I.Y.; Fisch, N.J.

    2011-01-01

    Through particle-in-cell simulations, the evolution of nonlinear plasma waves is examined in one-dimensional collisionless plasma undergoing mechanical compression. Unlike linear waves, whose wavelength decreases proportionally to the system length L(t), nonlinear waves, such as solitary electron holes, conserve their characteristic size Δ during slow compression. This leads to a substantially stronger adiabatic amplification as well as rapid collisionless damping when L approaches Δ. On the other hand, cessation of compression halts the wave evolution, yielding a stable mode.

  6. Evolution Of Nonlinear Waves in Compressing Plasma

    Energy Technology Data Exchange (ETDEWEB)

    P.F. Schmit, I.Y. Dodin, and N.J. Fisch

    2011-05-27

    Through particle-in-cell simulations, the evolution of nonlinear plasma waves is examined in one-dimensional collisionless plasma undergoing mechanical compression. Unlike linear waves, whose wavelength decreases proportionally to the system length L(t), nonlinear waves, such as solitary electron holes, conserve their characteristic size {Delta} during slow compression. This leads to a substantially stronger adiabatic amplification as well as rapid collisionless damping when L approaches {Delta}. On the other hand, cessation of compression halts the wave evolution, yielding a stable mode.

  7. Effect of Compression Garments on Physiological Responses After Uphill Running.

    Science.gov (United States)

    Struhár, Ivan; Kumstát, Michal; Králová, Dagmar Moc

    2018-03-01

    Limited practical recommendations related to wearing compression garments for athletes can be drawn from the literature at the present time. We aimed to identify the effects of compression garments on physiological and perceptual measures of performance and recovery after uphill running with different pressures and distributions of applied compression. In a randomized, double-blinded study, 10 trained male runners undertook three 8 km treadmill runs at a 6% elevation rate, with an intensity of 75% VO2max, while wearing low grade compression garments, medium grade compression garments, or high reverse grade compression garments. In all the trials, compression garments were worn during 4 hours post run. Creatine kinase, measurements of muscle soreness, ankle strength of plantar/dorsal flexors and mean performance time were then measured. The best mean performance time was observed in the medium grade compression garments with the time difference being: medium grade compression garments vs. high reverse grade compression garments. A positive trend in increasing peak torque of plantar flexion (60º·s-1, 120º·s-1) was found in the medium grade compression garments: a difference between 24 and 48 hours post run. The highest pain tolerance shift in the gastrocnemius muscle was in the medium grade compression garments, 24 hours post run, with the shift being +11.37% for the lateral head and 6.63% for the medial head. In conclusion, a beneficial trend in the promotion of running performance and decreasing muscle soreness within 24 hours post exercise was apparent in medium grade compression garments.

  8. Effect of Compression Garments on Physiological Responses After Uphill Running

    Directory of Open Access Journals (Sweden)

    Struhár Ivan

    2018-03-01

    Full Text Available Limited practical recommendations related to wearing compression garments for athletes can be drawn from the literature at the present time. We aimed to identify the effects of compression garments on physiological and perceptual measures of performance and recovery after uphill running with different pressures and distributions of applied compression. In a randomized, double-blinded study, 10 trained male runners undertook three 8 km treadmill runs at a 6% elevation rate, with an intensity of 75% VO2max, while wearing low grade compression garments, medium grade compression garments, or high reverse grade compression garments. In all the trials, compression garments were worn during 4 hours post run. Creatine kinase, measurements of muscle soreness, ankle strength of plantar/dorsal flexors and mean performance time were then measured. The best mean performance time was observed in the medium grade compression garments with the time difference being: medium grade compression garments vs. high reverse grade compression garments. A positive trend in increasing peak torque of plantar flexion (60º·s-1, 120º·s-1) was found in the medium grade compression garments: a difference between 24 and 48 hours post run. The highest pain tolerance shift in the gastrocnemius muscle was in the medium grade compression garments, 24 hours post run, with the shift being +11.37% for the lateral head and 6.63% for the medial head. In conclusion, a beneficial trend in the promotion of running performance and decreasing muscle soreness within 24 hours post exercise was apparent in medium grade compression garments.

  9. Compressibility and thermal expansion of cubic silicon nitride

    DEFF Research Database (Denmark)

    Jiang, Jianzhong; Lindelov, H.; Gerward, Leif

    2002-01-01

    The compressibility and thermal expansion of the cubic silicon nitride (c-Si3N4) phase have been investigated by performing in situ x-ray powder-diffraction measurements using synchrotron radiation, complemented with computer simulations by means of first-principles calculations. The bulk compressibility of the c-Si3N4 phase originates from the average of both Si-N tetrahedral and octahedral compressibilities, where the octahedral polyhedra are less compressible than the tetrahedral ones. The origin of the unit cell expansion is revealed to be due to the increase of the octahedral Si-N and N-N bond...

  10. Estimates of post-acceleration longitudinal bunch compression

    International Nuclear Information System (INIS)

    Judd, D.L.

    1977-01-01

    A simple analytic method is developed, based on physical approximations, for treating transient implosive longitudinal compression of bunches of heavy ions in an accelerator system for ignition of inertial-confinement fusion pellet targets. Parametric dependences of attainable compressions and of beam path lengths and times during compression are indicated for ramped pulsed-gap lines, rf systems in storage and accumulator rings, and composite systems, including sections of free drift. It appears that for high-confidence pellets in a plant producing 1000 MW of electric power the needed pulse lengths cannot be obtained with rings alone unless an unreasonably large number of them are used, independent of choice of rf harmonic number. In contrast, pulsed-gap lines alone can meet this need. The effects of an initial inward compressive drift and of longitudinal emittance are included

  11. Real-time lossless compression of depth streams

    KAUST Repository

    Schneider, Jens

    2017-08-17

    Various examples are provided for lossless compression of data streams. In one example, a Z-lossless (ZLS) compression method includes generating compacted depth information by condensing information of a depth image and a compressed binary representation of the depth image using histogram compaction and decorrelating the compacted depth information to produce bitplane slicing of residuals by spatial prediction. In another example, an apparatus includes imaging circuitry that can capture one or more depth images and processing circuitry that can generate compacted depth information by condensing information of a captured depth image and a compressed binary representation of the captured depth image using histogram compaction; decorrelate the compacted depth information to produce bitplane slicing of residuals by spatial prediction; and generate an output stream based upon the bitplane slicing.
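    The two stages named in this record can be sketched in a toy example (our reading of the terms, not the actual claimed implementation, and it omits the spatial-prediction decorrelation step): histogram compaction remaps the depth values that actually occur to a dense index range, and the compacted indices are then sliced into bitplanes.

```python
import numpy as np

# Tiny depth image with only 3 distinct values out of a 16-bit range.
depth = np.array([[500, 500, 1200],
                  [1200, 7000, 7000]], dtype=np.uint16)

# Stage 1 - histogram compaction: dense indices 0..K-1 plus a lookup table.
values = np.unique(depth)                 # table of values actually present
compact = np.searchsorted(values, depth)  # 16-bit depths -> 2-bit indices here

# Stage 2 - bitplane slicing: split the compacted indices into binary planes.
n_planes = max(1, int(compact.max()).bit_length())
planes = [(compact >> b) & 1 for b in range(n_planes)]

# Decoder side: rebuild indices from the planes, then look depths back up.
rebuilt = sum(p << b for b, p in enumerate(planes))
restored = values[rebuilt]
```

    The payoff in this sketch: three sparse depth values need only 2 bitplanes instead of 16, and the reconstruction is exact, i.e. lossless.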

  12. Real-time lossless compression of depth streams

    KAUST Repository

    Schneider, Jens

    2017-01-01

    Various examples are provided for lossless compression of data streams. In one example, a Z-lossless (ZLS) compression method includes generating compacted depth information by condensing information of a depth image and a compressed binary representation of the depth image using histogram compaction and decorrelating the compacted depth information to produce bitplane slicing of residuals by spatial prediction. In another example, an apparatus includes imaging circuitry that can capture one or more depth images and processing circuitry that can generate compacted depth information by condensing information of a captured depth image and a compressed binary representation of the captured depth image using histogram compaction; decorrelate the compacted depth information to produce bitplane slicing of residuals by spatial prediction; and generate an output stream based upon the bitplane slicing.

  13. BIND – An algorithm for loss-less compression of nucleotide ...

    Indian Academy of Sciences (India)

    constituting the FNA data set. Supplementary table 2. Original and compressed file sizes (obtained using various compression algorithms) for 2679 files constituting the FFN data set. Supplementary table 3. Original and compressed file sizes (obtained using various compression algorithms) for 25 files constituting the ...

  14. Force balancing in mammographic compression

    International Nuclear Information System (INIS)

    Branderhorst, W.; Groot, J. E. de; Lier, M. G. J. T. B. van; Grimbergen, C. A.; Neeter, L. M. F. H.; Heeten, G. J. den; Neeleman, C.

    2016-01-01

    Purpose: In mammography, the height of the image receptor is adjusted to the patient before compressing the breast. An inadequate height setting can result in an imbalance between the forces applied by the image receptor and the paddle, causing the clamped breast to be pushed up or down relative to the body during compression. This leads to unnecessary stretching of the skin and other tissues around the breast, which can make the imaging procedure more painful for the patient. The goal of this study was to implement a method to measure and minimize the force imbalance, and to assess its feasibility as an objective and reproducible method of setting the image receptor height. Methods: A trial was conducted consisting of 13 craniocaudal mammographic compressions on a silicone breast phantom, each with the image receptor positioned at a different height. The image receptor height was varied over a range of 12 cm. In each compression, the force exerted by the compression paddle was increased up to 140 N in steps of 10 N. In addition to the paddle force, the authors measured the force exerted by the image receptor and the reaction force exerted on the patient body by the ground. The trial was repeated 8 times, with the phantom remounted at a slightly different orientation and position between the trials. Results: For a given paddle force, the obtained results showed that there is always exactly one image receptor height that leads to a balance of the forces on the breast. For the breast phantom, deviating from this specific height increased the force imbalance by 9.4 ± 1.9 N/cm (6.7%) for 140 N paddle force, and by 7.1 ± 1.6 N/cm (17.8%) for 40 N paddle force. The results also show that in situations where the force exerted by the image receptor is not measured, the craniocaudal force imbalance can still be determined by positioning the patient on a weighing scale and observing the changes in displayed weight during the procedure. Conclusions: In mammographic breast

  15. Lossless quantum data compression and variable-length coding

    International Nuclear Information System (INIS)

    Bostroem, Kim; Felbinger, Timo

    2002-01-01

    In order to compress quantum messages without loss of information it is necessary to allow the length of the encoded messages to vary. We develop a general framework for variable-length quantum messages in close analogy to the classical case and show that lossless compression is only possible if the message to be compressed is known to the sender. The lossless compression of an ensemble of messages is bounded from below by its von Neumann entropy. We show that it is possible to reduce the number of qubits passing through a quantum channel even below the von Neumann entropy by adding a classical side channel. We give an explicit communication protocol that realizes lossless and instantaneous quantum data compression and apply it to a simple example. This protocol can be used for both online quantum communication and storage of quantum data.
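    The lower bound mentioned in this record can be checked numerically. A small sketch (the ensemble is chosen arbitrarily by us) computes the von Neumann entropy S(rho) = -Tr(rho log2 rho) of a mixture of non-orthogonal states, which falls strictly below the 1 bit of Shannon entropy of the preparation labels.

```python
import numpy as np

def von_neumann_entropy(rho):
    # S(rho) = -sum_i lambda_i log2 lambda_i over nonzero eigenvalues.
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

ket0 = np.array([1.0, 0.0])
ket_plus = np.array([1.0, 1.0]) / np.sqrt(2)

# Equal mixture of the non-orthogonal states |0> and |+>.
rho = 0.5 * np.outer(ket0, ket0) + 0.5 * np.outer(ket_plus, ket_plus)
S = von_neumann_entropy(rho)
# S is about 0.6, below the 1 bit of label entropy: non-orthogonality is
# exactly what makes quantum compression below 1 qubit/message possible.
```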

  16. Compresso: Efficient Compression of Segmentation Data for Connectomics

    KAUST Repository

    Matejek, Brian

    2017-09-03

    Recent advances in segmentation methods for connectomics and biomedical imaging produce very large datasets with labels that assign object classes to image pixels. The resulting label volumes are bigger than the raw image data and need compression for efficient storage and transfer. General-purpose compression methods are less effective because the label data consists of large low-frequency regions with structured boundaries unlike natural image data. We present Compresso, a new compression scheme for label data that outperforms existing approaches by using a sliding window to exploit redundancy across border regions in 2D and 3D. We compare our method to existing compression schemes and provide a detailed evaluation on eleven biomedical and image segmentation datasets. Our method provides a factor of 600–2200x compression for label volumes, with running times suitable for practice.
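    The redundancy this scheme exploits is easy to see in a toy example. The run-length coding below is our simplification, not Compresso's sliding-window scheme: it only illustrates that label volumes are piecewise constant, so nearly all the information sits at region boundaries.

```python
import numpy as np

def rle_encode(row):
    # (value, run_length) pairs for one row of labels.
    runs, start = [], 0
    for i in range(1, len(row) + 1):
        if i == len(row) or row[i] != row[start]:
            runs.append((int(row[start]), i - start))
            start = i
    return runs

def rle_decode(runs):
    out = []
    for value, length in runs:
        out.extend([value] * length)
    return np.array(out)

# A 4x100 label image with two straight borders, like segmentation output.
labels = np.zeros((4, 100), dtype=np.int64)
labels[:, 40:] = 7
labels[2:, 70:] = 9

encoded = [rle_encode(r) for r in labels]
decoded = np.vstack([rle_decode(r) for r in encoded])

n_runs = sum(len(r) for r in encoded)  # 10 runs describe all 400 pixels
```

    Exploiting the same redundancy across rows and slices, rather than per row, is where window-based schemes such as Compresso gain their much larger factors.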

  17. Homogeneous Charge Compression Ignition Combustion of Dimethyl Ether

    DEFF Research Database (Denmark)

    Pedersen, Troels Dyhr

    This thesis is based on experimental and numerical studies on the use of dimethyl ether (DME) in the homogeneous charge compression ignition (HCCI) combustion process. The first paper in this thesis was published in 2007 and describes HCCI combustion of pure DME in a small diesel engine. The tests were designed to investigate the effect of engine speed, compression ratio and equivalence ratio on the combustion timing and the engine performance. It was found that the required compression ratio depended on the equivalence ratio used. A lower equivalence ratio requires a higher compression ratio before the fuel is burned completely, due to lower in-cylinder temperatures and lower reaction rates. The study provided some insight into the importance of operating at the correct compression ratio, as well as the operational limitations and emission characteristics of HCCI combustion...

  18. Determination of Optimum Compression Ratio: A Tribological Aspect

    Directory of Open Access Journals (Sweden)

    L. Yüksek

    2013-12-01

    Full Text Available Internal combustion engines are the primary energy conversion machines both in industry and transportation. Modern technologies are being implemented in engines to fulfill today's low fuel consumption demand. Friction energy consumed by the rubbing parts of an engine is becoming an important parameter for higher fuel efficiency. The rate of friction loss is primarily affected by sliding speed and the load acting upon the rubbing surfaces. Compression ratio is the main parameter that increases the peak cylinder pressure and hence the normal load on components. The aim of this study is to investigate the effect of compression ratio on the total friction loss of a diesel engine. A variable compression ratio diesel engine was operated at four different compression ratios: 12.96, 15.59, 18.03, and 20.17. Brake power and speed were kept constant at predefined values while measuring the in-cylinder pressure. Friction mean effective pressure (FMEP) data were obtained from the in-cylinder pressure curves for each compression ratio. The ratio of friction power to indicated power of the engine increased from 22.83% to 37.06% as the compression ratio varied from 12.96 to 20.17. Considering the thermal efficiency, FMEP, and maximum in-cylinder pressure, the optimum compression ratio interval of the test engine was determined as 18.8 ÷ 19.6.
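    The friction-to-indicated-power ratio quoted in this record follows from mean effective pressures, with FMEP = IMEP - BMEP at constant brake load. In the sketch below the IMEP/BMEP values are invented for illustration; only the two percentage endpoints (22.83% and 37.06%) come from the abstract.

```python
def friction_ratio(imep_bar, bmep_bar):
    # FMEP = IMEP - BMEP; the ratio of friction to indicated power
    # equals FMEP / IMEP when both powers share the same speed and geometry.
    fmep = imep_bar - bmep_bar
    return fmep / imep_bar

# At constant brake load (constant BMEP), a higher compression ratio raises
# rubbing losses, so IMEP must rise to deliver the same brake power:
low_cr = friction_ratio(imep_bar=7.77, bmep_bar=6.0)   # ~22.8%, as at CR 12.96
high_cr = friction_ratio(imep_bar=9.53, bmep_bar=6.0)  # ~37.0%, as at CR 20.17
```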

  19. Mechanical compression attenuates normal human bronchial epithelial wound healing

    Directory of Open Access Journals (Sweden)

    Malavia Nikita

    2009-02-01

    Full Text Available Abstract Background Airway narrowing associated with chronic asthma results in the transmission of injurious compressive forces to the bronchial epithelium and promotes the release of pro-inflammatory mediators and the denudation of the bronchial epithelium. While the individual effects of compression or denudation are well characterized, there are no data to elucidate how these cells respond to the application of mechanical compression in the presence of a compromised epithelial layer. Methods Accordingly, differentiated normal human bronchial epithelial cells were exposed to one of four conditions: 1) unperturbed control cells, 2) a single scrape wound only, 3) static compression (6 hours of 30 cmH2O), and 4) 6 hours of static compression after a scrape wound. Following treatment, the wound closure rate was recorded, media were assayed for mediator content, and the cytoskeletal network was fluorescently labeled. Results We found that mechanical compression and scrape injury increase TGF-β2 and endothelin-1 secretion, while EGF content in the media is attenuated with both injury modes. The application of compression after a pre-existing scrape wound augmented these observations, and also decreased PGE2 media content. Compression stimulated depolymerization of the actin cytoskeleton and significantly attenuated wound healing. Closure rate was partially restored with the addition of exogenous PGE2, but not EGF. Conclusion Our results suggest that mechanical compression reduces the capacity of the bronchial epithelium to close wounds, and is, in part, mediated by PGE2 and a compromised cytoskeleton.

  20. Mechanical behavior of silicon carbide nanoparticles under uniaxial compression

    Energy Technology Data Exchange (ETDEWEB)

    He, Qiuxiang; Fei, Jing; Tang, Chao; Zhong, Jianxin; Meng, Lijun, E-mail: ljmeng@xtu.edu.cn [Xiangtan University, Hunan Key Laboratory for Micro-Nano Energy Materials and Devices, Faculty of School of Physics and Optoelectronics (China)

    2016-03-15

    The mechanical behavior of SiC nanoparticles under uniaxial compression was investigated using an atomic-level compression simulation technique. The results revealed that the mechanical deformation of SiC nanocrystals is highly dependent on compression orientation, particle size, and temperature. A structural transformation from the original zinc-blende to a rock-salt phase is identified for SiC nanoparticles compressed along the [001] direction at low temperature. However, the rock-salt phase is not observed for SiC nanoparticles compressed along the [110] and [111] directions irrespective of size and temperature. The high-pressure-generated rock-salt phase strongly affects the mechanical behavior of the nanoparticles, including their hardness and deformation process. The hardness of [001]-compressed nanoparticles decreases monotonically as their size increases, different from that of [110] and [111]-compressed nanoparticles, which reaches a maximal value at a critical size and then decreases. Additionally, a temperature-dependent mechanical response was observed for all simulated SiC nanoparticles regardless of compression orientation and size. Interestingly, the hardness of SiC nanocrystals with a diameter of 8 nm compressed in [001]-orientation undergoes a steep decrease at 0.1–200 K and then a gradual decline from 250 to 1500 K. This trend can be attributed to different deformation mechanisms related to phase transformation and dislocations. Our results will be useful for practical applications of SiC nanoparticles under high pressure.

  1. Role of Compressibility on Tsunami Propagation

    Science.gov (United States)

    Abdolali, Ali; Kirby, James T.

    2017-12-01

    In the present paper, we aim to reduce the discrepancies between tsunami arrival times evaluated from tsunami models and real measurements by considering the role of ocean compressibility. We perform qualitative studies to reveal the phase speed reduction rate via a modified version of the Mild Slope Equation for Weakly Compressible fluid (MSEWC) proposed by Sammarco et al. (2013). The model is validated against a 3-D computational model. Physical properties of surface gravity waves are studied and compared with those for waves evaluated from an incompressible flow solver over realistic geometry for the 2011 Tohoku-oki event, revealing a reduction in phase speed. Plain Language Summary: Submarine earthquakes and submarine mass failures (SMFs) can generate long gravitational waves (or tsunamis) that propagate at the free surface. Tsunami waves can travel long distances and are known for their dramatic effects on coastal areas. Nowadays, numerical models are used to reconstruct tsunamigenic events for many scientific and socioeconomic purposes, e.g., tsunami early warning systems, inundation mapping, and risk and hazard analysis. A number of typically neglected parameters in these models cause discrepancies between model outputs and observations. Most tsunami models predict tsunami arrival times at distant stations slightly early in comparison to observations. In this study, we show how ocean compressibility affects the tsunami wave propagation speed. In this framework, an efficient two-dimensional model equation for the weakly compressible ocean has been developed, validated and tested for simplified and real cases against three-dimensional and incompressible solvers. Taking compressibility into account, the phase speed of surface gravity waves is reduced compared to that of an incompressible fluid. We then used the model for the case of the devastating 2011 Tohoku-oki tsunami event, improving the model accuracy. This study sheds light on future model development.

  2. Developing a dynamic control system for mine compressed air networks

    OpenAIRE

    Van Heerden, S.W.; Pelzer, R.; Marais, J.H.

    2014-01-01

    Mines in general, make use of compressed air systems for daily operational activities. Compressed air on mines is traditionally distributed via compressed air ring networks where multiple shafts are supplied with compressed air from an integral system. These compressed air networks make use of a number of compressors feeding the ring from various locations in the network. While these mines have sophisticated control systems to control these compressors, they are not dynamic systems. Compresso...

  3. Dual pathology proximal median nerve compression of the forearm.

    Science.gov (United States)

    Murphy, Siun M; Browne, Katherine; Tuite, David J; O'Shaughnessy, Michael

    2013-12-01

    We report an unusual case of synchronous pathology in the forearm: the coexistence of a large lipoma of the median nerve together with an osteochondroma of the proximal ulna, giving rise to a dual proximal median nerve compression. Proximal median nerve compression neuropathies in the forearm are uncommon compared to the prevalence of distal compression neuropathies (e.g., carpal tunnel syndrome). Both neural fibrolipomas (Refs. 1, 2) and osteochondromas of the proximal ulna (Ref. 3) in isolation are rare but well documented. Unlike a distal compression, a proximal compression of the median nerve will often have a definite cause. Neural fibrolipomas, also called fibrolipomatous hamartomas, are rare, slow-growing, benign tumours of peripheral nerves, most often occurring in the median nerve of younger patients. To our knowledge, this is the first report of such dual pathology in the same forearm, giving rise to a severe proximal compression of the median nerve. In this case, the nerve was being pushed anteriorly by the osteochondroma and compressed from within by the intraneural lipoma. This unusual case highlights the advantage of preoperative imaging as part of the workup of proximal median nerve compression. Copyright © 2013 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  4. Wavelet-based compression of pathological images for telemedicine applications

    Science.gov (United States)

    Chen, Chang W.; Jiang, Jianfei; Zheng, Zhiyong; Wu, Xue G.; Yu, Lun

    2000-05-01

    In this paper, we present the performance evaluation of wavelet-based coding techniques as applied to the compression of pathological images for application in an Internet-based telemedicine system. We first study how well suited the wavelet-based coding is as it applies to the compression of pathological images, since these images often contain fine textures that are often critical to the diagnosis of potential diseases. We compare the wavelet-based compression with the DCT-based JPEG compression in the DICOM standard for medical imaging applications. Both objective and subjective measures have been studied in the evaluation of compression performance. These studies are performed in close collaboration with expert pathologists who have conducted the evaluation of the compressed pathological images and communication engineers and information scientists who designed the proposed telemedicine system. These performance evaluations have shown that the wavelet-based coding is suitable for the compression of various pathological images and can be integrated well with the Internet-based telemedicine systems. A prototype of the proposed telemedicine system has been developed in which the wavelet-based coding is adopted for the compression to achieve bandwidth efficient transmission and therefore speed up the communications between the remote terminal and the central server of the telemedicine system.
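    The DCT-vs-wavelet comparison above rests on the wavelet transform's energy compaction: in smooth regions most coefficients land near zero and can be coded cheaply. A minimal one-level 2-D Haar transform (the simplest wavelet, shown only as an illustration, not the codec the authors evaluated) makes this visible; all names here are illustrative:

```python
# One-level 2-D Haar transform and its inverse, pure Python.
# Illustrative sketch only; wavelet image coders use deeper
# decompositions, better filters, and entropy coding on top.

def haar_1d(row):
    """Averages followed by differences (one Haar level)."""
    half = len(row) // 2
    avg = [(row[2 * i] + row[2 * i + 1]) / 2.0 for i in range(half)]
    dif = [(row[2 * i] - row[2 * i + 1]) / 2.0 for i in range(half)]
    return avg + dif

def ihaar_1d(row):
    half = len(row) // 2
    out = []
    for a, d in zip(row[:half], row[half:]):
        out.extend([a + d, a - d])
    return out

def haar_2d(img):
    rows = [haar_1d(r) for r in img]                  # transform rows
    cols = [haar_1d(list(c)) for c in zip(*rows)]     # then columns
    return [list(r) for r in zip(*cols)]

def ihaar_2d(coeffs):
    cols = [ihaar_1d(list(c)) for c in zip(*coeffs)]  # undo column pass
    rows = [list(r) for r in zip(*cols)]
    return [ihaar_1d(r) for r in rows]                # undo row pass

# Smooth 4x4 "image": after the transform, the energy sits in the
# low-pass quadrant and the detail coefficients are exactly zero.
img = [[10, 10, 12, 12],
       [10, 10, 12, 12],
       [20, 20, 22, 22],
       [20, 20, 22, 22]]
coeffs = haar_2d(img)
rec = ihaar_2d(coeffs)  # lossless round trip
```

    Compression comes from quantizing or thresholding the small detail coefficients; on textured pathological images those detail bands can carry diagnostic information, which is why the paper pairs objective metrics with evaluation by expert pathologists.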

  5. Time-space trade-offs for lempel-ziv compressed indexing

    DEFF Research Database (Denmark)

    Bille, Philip; Ettienne, Mikko Berggren; Gørtz, Inge Li

    2017-01-01

    Given a string S, the compressed indexing problem is to preprocess S into a compressed representation that supports fast substring queries. The goal is to use little space relative to the compressed size of S while supporting fast queries. We present a compressed index based on the Lempel-Ziv 1977...... compression scheme. Let n, and z denote the size of the input string, and the compressed LZ77 string, respectively. We obtain the following time-space trade-offs. Given a pattern string P of length m, we can solve the problem in (i) O (m + occ lg lg n) time using O(z lg(n/z) lg lg z) space, or (ii) (m (1...... best space bound, but has a leading term in the query time of O(m(1 + lgϵ z/lg(n/z))). However, for any polynomial compression ratio, i.e., z = O(n1-δ), for constant δ > 0, this becomes O(m). Our index also supports extraction of any substring of length ℓ in O(ℓ + lg(n/z)) time. Technically, our...
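    For readers unfamiliar with the underlying scheme: LZ77 parses the string into z phrases, each copying a substring of what has already been emitted plus one literal character. A toy parser and decoder (just the parsing the index is built on, not the paper's data structure) might look like:

```python
# Toy LZ77: each phrase is (offset, length, next_char).
# Illustrative sketch; real implementations use efficient match-finding
# and bit-level encoding of the triples.

def lz77_compress(s, window=255):
    phrases = []
    i = 0
    while i < len(s):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):
            k = 0
            # Overlapping matches (j + k may reach past i) are allowed.
            while i + k < len(s) - 1 and s[j + k] == s[i + k]:
                k += 1
            if k > best_len:
                best_len, best_off = k, i - j
        phrases.append((best_off, best_len, s[i + best_len]))
        i += best_len + 1
    return phrases

def lz77_decompress(phrases):
    out = []
    for off, length, nxt in phrases:
        for _ in range(length):
            out.append(out[-off])  # copy one char at a time (handles overlap)
        out.append(nxt)
    return "".join(out)

s = "abracadabra abracadabra"
phrases = lz77_compress(s)
```

    A compressed index stores the z phrases plus auxiliary structures so that substring queries run on the parse directly, without the full decompression done here.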

  6. Vacancy behavior in a compressed fcc Lennard-Jones crystal

    International Nuclear Information System (INIS)

    Beeler, J.R. Jr.

    1981-12-01

    This computer experiment study concerns the determination of the stable vacancy configuration in a compressed fcc Lennard-Jones crystal and the migration of this defect in a compressed crystal. Isotropic and uniaxial compression stress conditions were studied. The isotropic and uniaxial compression magnitudes employed were 0.94 less than or equal to eta less than or equal to 1.5, and 1.0 less than or equal to eta less than or equal to 1.5, respectively. The site-centered vacancy (SCV) was the stable vacancy configuration whenever cubic symmetry was present. This includes all of the isotropic compression cases and the particular uniaxial compression case (eta = √2) that give a bcc structure. In addition, the SCV was the stable configuration for uniaxial compression eta 1.20, the SV-OP is an extended defect and, therefore, a saddle point for SV-OP migration could not be determined. The mechanism for the transformation from the SCV to the SV-OP as the stable form at eta = 1.29 appears to be an alternating sign [101] and/or [011] shear process

  7. FRC translation into a compression coil

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1986-01-01

    The equilibrium and translational kinematics of Field-Reversed Configurations (FRCs) in a cylindrical coil which does not conserve flux are problems that arise in connection with adiabatic compressional heating. In this paper, they consider several features of the problem of FRC translation into a compression coil. First, the magnitude of the guide field is calculated and found to exceed that which would be applied to a flux conserver. Second, energy conservation is applied to FRC translation from a flux conserver into a compression coil. It is found that a significant temperature decrease is required for translation to be energetically possible. The temperature change depends on the external inductance in the compression circuit. An analogous case is that of a compression region composed of a compound magnet; in this case the temperature change depends on the ratio of inner and outer coil radii. Finally, the kinematics of intermediate translation states are calculated using an abrupt transition model. It is found, in this model, that the FRC must overcome a potential hill during translation, which requires a small initial velocity

  8. Compressive buckling of black phosphorene nanotubes: an atomistic study

    Science.gov (United States)

    Nguyen, Van-Trang; Le, Minh-Quy

    2018-04-01

    Using the molecular dynamics finite element method with a Stillinger-Weber potential, we investigate the uniaxial compression of armchair and zigzag black phosphorene nanotubes. We focus especially on the effects of the tube's diameter at a fixed length-to-diameter ratio, the effects of the tube's length for a pair of armchair and zigzag tubes of equal diameter, and the effects of the tube's diameter at fixed length. Young's modulus, critical compressive stress and critical compressive strain are studied and discussed for these three case studies. Compressive buckling was clearly observed in the armchair nanotubes, whereas local bond breaking near the boundary occurred in the zigzag ones under compression.

  9. Header [nuclear battery

    International Nuclear Information System (INIS)

    Goslee, D.E.; Barr, H.N.

    1976-01-01

    The invention relates to a nuclear-powered microwatt thermoelectric generator, small in size, efficient in operation, and which will last for a considerable period of time. The generator is suitable for implanting within the human body for powering devices such as cardiac pacemakers

  10. Compressive Load Resistance Characteristics of Rice Grain

    OpenAIRE

    Sumpun Chaitep; Chaiy R. Metha Pathawee; Pipatpong Watanawanyoo

    2008-01-01

    An investigation was made to observe the compressive load properties of rice grain, both rough rice and brown rice. Six rice varieties (indica and japonica) were examined at a moisture content of 10-12%. Compressive loading with reference to a principal axis normal to the thickness of the grain was conducted at selected inclined angles of 0°, 15°, 30°, 45°, 60° and 70°. The results showed the compressive load resistance of rice grain based on its characteristic of yield s...

  11. Compressed Air Production Using Vehicle Suspension

    Directory of Open Access Journals (Sweden)

    Ninad Arun Malpure

    2015-08-01

    Generally, compressed air is produced using different types of air compressors, which consume a lot of electric energy and are noisy. In this paper an innovative idea is put forth for the production of compressed air using the movement of the vehicle suspension, which is normally wasted. The conversion of the force energy into compressed air is carried out by a mechanism consisting of the vehicle suspension system, a hydraulic cylinder, a non-return valve, an air compressor and an air receiver. Air is collected in the cylinder and the energy is stored in the tank simply by driving the vehicle. This method is non-conventional, as no fuel input is required, and is least polluting.

  12. smallWig: parallel compression of RNA-seq WIG files.

    Science.gov (United States)

    Wang, Zhiying; Weissman, Tsachy; Milenkovic, Olgica

    2016-01-15

    We developed a new lossless compression method for WIG data, named smallWig, offering the best known compression rates for RNA-seq data and featuring random access functionalities that enable visualization, summary statistics analysis and fast queries from the compressed files. Our approach results in order of magnitude improvements compared with bigWig and ensures compression rates only a fraction of those produced by cWig. The key features of the smallWig algorithm are statistical data analysis and a combination of source coding methods that ensure high flexibility and make the algorithm suitable for different applications. Furthermore, for general-purpose file compression, the compression rate of smallWig approaches the empirical entropy of the tested WIG data. For compression with random query features, smallWig uses a simple block-based compression scheme that introduces only a minor overhead in the compression rate. For archival or storage space-sensitive applications, the method relies on context mixing techniques that lead to further improvements of the compression rate. Implementations of smallWig can be executed in parallel on different sets of chromosomes using multiple processors, thereby enabling desirable scaling for future transcriptome Big Data platforms. The development of next-generation sequencing technologies has led to a dramatic decrease in the cost of DNA/RNA sequencing and expression profiling. RNA-seq has emerged as an important and inexpensive technology that provides information about whole transcriptomes of various species and organisms, as well as different organs and cellular communities. The vast volume of data generated by RNA-seq experiments has significantly increased data storage costs and communication bandwidth requirements. Current compression tools for RNA-seq data such as bigWig and cWig either use general-purpose compressors (gzip) or suboptimal compression schemes that leave significant room for improvement. 
To substantiate
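    The random-access feature described above can be approximated in a very small sketch: compress fixed-size blocks independently and keep an index of block offsets, so a query decompresses only one block. Here stdlib zlib stands in for smallWig's source coders; the block size and all names are illustrative, not from the smallWig implementation:

```python
# Block-compressed storage with an offset index for random access.
import zlib

BLOCK = 4096  # records per independently compressed block (illustrative)

def compress_blocks(values):
    """Compress a list of strings in independent blocks; return (data, index)."""
    blobs, index, pos = [], [], 0
    for i in range(0, len(values), BLOCK):
        blob = zlib.compress("\n".join(values[i:i + BLOCK]).encode())
        index.append(pos)  # byte offset of this block within data
        blobs.append(blob)
        pos += len(blob)
    return b"".join(blobs), index

def query(data, index, k):
    """Return record k, decompressing only the block that holds it."""
    b = k // BLOCK
    end = index[b + 1] if b + 1 < len(index) else len(data)
    block = zlib.decompress(data[index[b]:end]).decode().split("\n")
    return block[k % BLOCK]

vals = [str(i * i % 97) for i in range(10000)]
data, index = compress_blocks(vals)
```

    The trade-off the abstract mentions is visible here: smaller blocks mean faster queries but a higher per-block overhead in the compression rate; archival modes drop the blocking and use stronger (context-mixing) coders instead.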

  13. Mammography parameters: compression, dose, and discomfort

    International Nuclear Information System (INIS)

    Blanco, S.; Di Risio, C.; Andisco, D.; Rojas, R.R.; Rojas, R.M.

    2017-01-01

    Objective: To confirm the importance of compression in mammography and relate it to the discomfort expressed by the patients. Materials and methods: Two samples of 402 and 268 mammographies were obtained from two diagnostic centres that use the same mammographic equipment, but different compression techniques. The patient age range was from 21 to 50 years old. (authors) [es

  14. Application of PDF methods to compressible turbulent flows

    Science.gov (United States)

    Delarue, B. J.; Pope, S. B.

    1997-09-01

    A particle method applying the probability density function (PDF) approach to turbulent compressible flows is presented. The method is applied to several turbulent flows, including the compressible mixing layer, and good agreement is obtained with experimental data. The PDF equation is solved using a Lagrangian/Monte Carlo method. To accurately account for the effects of compressibility on the flow, the velocity PDF formulation is extended to include thermodynamic variables such as the pressure and the internal energy. The mean pressure, the determination of which has been the object of active research over the last few years, is obtained directly from the particle properties. It is therefore not necessary to link the PDF solver with a finite-volume type solver. The stochastic differential equations (SDE) which model the evolution of particle properties are based on existing second-order closures for compressible turbulence, limited in application to low turbulent Mach number flows. Tests are conducted in decaying isotropic turbulence to compare the performances of the PDF method with the Reynolds-stress closures from which it is derived, and in homogeneous shear flows, at which stage comparison with direct numerical simulation (DNS) data is conducted. The model is then applied to the plane compressible mixing layer, reproducing the well-known decrease in the spreading rate with increasing compressibility. It must be emphasized that the goal of this paper is not as much to assess the performance of models of compressibility effects, as it is to present an innovative and consistent PDF formulation designed for turbulent inhomogeneous compressible flows, with the aim of extending it further to deal with supersonic reacting flows.

  15. [Application value of magnetic compression anastomosis in digestive tract reconstruction].

    Science.gov (United States)

    Du, Xilin; Fan, Chao; Zhang, Hongke; Lu, Jianguo

    2014-05-01

    Magnetic compression anastomosis can compress tissues together and restore continuity. Magnetic compression anastomosis has mainly gone through three stages: the magnetic ring; the magnetic ring and column; and smart self-assembling magnets for endoscopy (SAMSEN). Nowadays, magnetic compression anastomosis has been applied in vascular and various digestive tract surgeries, especially complex surgery such as anastomotic stenosis of biliary ducts after liver transplantation or congenital esophageal stenosis. Although only case reports are available at present, the advantages of magnetic compression anastomosis include lower cost, simplicity, individualization, good efficacy, safety, and minimal invasiveness. We are building a better technical platform to make magnetic compression anastomosis more advanced and widespread.

  16. Layered compression for high-precision depth data.

    Science.gov (United States)

    Miao, Dan; Fu, Jingjing; Lu, Yan; Li, Shipeng; Chen, Chang Wen

    2015-12-01

    With the development of depth data acquisition technologies, access to high-precision depth with more than 8-b depths has become much easier and determining how to efficiently represent and compress high-precision depth is essential for practical depth storage and transmission systems. In this paper, we propose a layered high-precision depth compression framework based on an 8-b image/video encoder to achieve efficient compression with low complexity. Within this framework, considering the characteristics of the high-precision depth, a depth map is partitioned into two layers: 1) the most significant bits (MSBs) layer and 2) the least significant bits (LSBs) layer. The MSBs layer provides rough depth value distribution, while the LSBs layer records the details of the depth value variation. For the MSBs layer, an error-controllable pixel domain encoding scheme is proposed to exploit the data correlation of the general depth information with sharp edges and to guarantee the data format of LSBs layer is 8 b after taking the quantization error from MSBs layer. For the LSBs layer, standard 8-b image/video codec is leveraged to perform the compression. The experimental results demonstrate that the proposed coding scheme can achieve real-time depth compression with satisfactory reconstruction quality. Moreover, the compressed depth data generated from this scheme can achieve better performance in view synthesis and gesture recognition applications compared with the conventional coding schemes because of the error control algorithm.
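    The layer split itself is simple bit arithmetic. A hedged sketch of dividing 16-bit depth values into 8-bit MSB and LSB layers (omitting the paper's error-controllable pixel-domain coding and the 8-b codec stage; names are illustrative):

```python
# Split 16-bit depth into two 8-bit layers and merge them back.
# Each layer fits the value range of a standard 8-bit image codec.

def split_layers(depth16):
    msb = [d >> 8 for d in depth16]    # most significant bits: coarse structure
    lsb = [d & 0xFF for d in depth16]  # least significant bits: fine variation
    return msb, lsb

def merge_layers(msb, lsb):
    return [(m << 8) | l for m, l in zip(msb, lsb)]

depth = [0, 257, 4095, 30000, 65535]
msb, lsb = split_layers(depth)
rec = merge_layers(msb, lsb)  # lossless round trip
```

    In the actual framework the MSB layer is coded with error control precisely so that, after quantization error is folded into the LSB layer, the LSB data still fits the 8-b format.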

  17. Strength properties of interlocking compressed earth brick units

    Science.gov (United States)

    Saari, S.; Bakar, B. H. Abu; Surip, N. A.

    2017-10-01

    This study presents a laboratory investigation on the properties of interlocking compressed earth brick (ICEB) units. Compressive strength, which is one of the most important properties in masonry structures, is used to determine masonry performance. The compressive strength of the ICEB units was determined by applying a compressive strength test for 340 units from four types of ICEB. To analyze the strength of the ICEB units, each unit was capped by a steel plate at the top and bottom to create a flat surface, and then ICEB was loaded until failure. The average compressive strength of the corresponding ICEB units are as follows: wall brick, 19.15 N/mm2; beam brick, 16.99 N/mm2; column brick, 13.18 N/mm2; and half brick, 11.79 N/mm2. All the ICEB units had compressive strength of over 5 N/mm2, which is the minimum strength for a load-bearing brick. This study proves that ICEB units may be used as load-bearing bricks. The strength of ICEBs is equal to that of other common bricks and blocks that are currently available in the market.

  18. Algorithms and data structures for grammar-compressed strings

    DEFF Research Database (Denmark)

    Cording, Patrick Hagge

    Textual databases for e.g. biological or web-data are growing rapidly, and it is often only feasible to store the data in compressed form. However, compressing the data comes at a price. Traditional algorithms for e.g. pattern matching requires all data to be decompressed - a computationally...... demanding task. In this thesis we design data structures for accessing and searching compressed data efficiently. Our results can be divided into two categories. In the first category we study problems related to pattern matching. In particular, we present new algorithms for counting and comparing...... substrings, and a new algorithm for finding all occurrences of a pattern in which we may insert gaps. In the other category we deal with accessing and decompressing parts of the compressed string. We show how to quickly access a single character of the compressed string, and present a data structure...
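    The single-character access mentioned above works by descending the grammar's derivation tree using precomputed expansion lengths, never expanding the whole string. A sketch on a hand-made straight-line program (the thesis's structures add substantial machinery for better time bounds; this grammar and all names are illustrative):

```python
# Straight-line program (SLP): each nonterminal derives exactly two
# symbols; symbols not in `rules` are terminals. "S" expands to "ababab".
rules = {
    "A": ("a", "b"),   # A -> ab
    "B": ("A", "A"),   # B -> abab
    "S": ("B", "A"),   # S -> ababab
}

_len_memo = {}

def expansion_len(sym):
    """Length of sym's expansion, memoized (computed once per rule)."""
    if sym not in rules:
        return 1                       # terminal: a single character
    if sym not in _len_memo:
        left, right = rules[sym]
        _len_memo[sym] = expansion_len(left) + expansion_len(right)
    return _len_memo[sym]

def access(sym, i):
    """Character i of sym's expansion, in time proportional to grammar depth."""
    while sym in rules:
        left, right = rules[sym]
        if i < expansion_len(left):
            sym = left                 # answer lies in the left subtree
        else:
            i -= expansion_len(left)   # skip the left expansion
            sym = right
    return sym
```

    The compressed string is never materialized: each access walks one root-to-leaf path, which is what makes pattern matching and substring extraction on the grammar feasible.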

  19. Failure behaviour of carbon/carbon composite under compression

    Energy Technology Data Exchange (ETDEWEB)

    Tushtev, K.; Grathwohl, G. [Universitaet Bremen, Advanced Ceramics, Bremen (Germany); Koch, D. [Deutsches Zentrum fuer Luft- und Raumfahrt, Institut fuer Bauweisen- und Konstruktionsforschung, Keramische Verbundstrukturen, Stuttgart (Germany); Horvath, J.

    2012-11-15

    In this work, the properties of carbon/carbon material under quasi-static compression are investigated and characterized with models. The investigated material was produced by pyrolysis of a carbon/carbon composite of bidirectionally reinforced fabric layers. For the compression tests, a device to prevent additional bending stress was built. The stress-strain behaviour of this material has been reported in various publications; here the fracture behaviour is discussed and the experimental results from the compression tests are compared with the characteristics of tensile and shear tests. Differences between compressive and tensile stiffness, Poisson's ratio, and strength were assessed. Differences between tensile and compressive behaviour arise in on-axis tests from micro-buckling and crack closure, and in off-axis tests from superimposed compressive normal stresses that lead to increased shear friction. (Copyright © 2012 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  20. Mechanics of the Compression Wood Response: II. On the Location, Action, and Distribution of Compression Wood Formation.

    Science.gov (United States)

    Archer, R R; Wilson, B F

    1973-04-01

    A new method for simulation of cross-sectional growth provided detailed information on the location of normal wood and compression wood increments in two tilted white pine (Pinus strobus L.) leaders. These data were combined with data on stiffness, slope, and curvature changes over a 16-week period to make the mechanical analysis. The location of compression wood changed from the under side to a flank side and then to the upper side of the leader as the geotropic stimulus decreased, owing to compression wood action. Its location shifted back to a flank side when the direction of movement of the leader reversed. A model for this action, based on elongation strains, was developed and predicted the observed curvature changes with elongation strains of 0.3 to 0.5%, or a maximal compressive stress of 60 to 300 kilograms per square centimeter. After tilting, new wood formation was distributed so as to maintain consistent strain levels along the leaders in bending under gravitational loads. The computed effective elastic moduli were about the same for the two leaders throughout the season.

  1. Fast Detection of Compressively Sensed IR Targets Using Stochastically Trained Least Squares and Compressed Quadratic Correlation Filters

    KAUST Repository

    Millikan, Brian; Dutta, Aritra; Sun, Qiyu; Foroosh, Hassan

    2017-01-01

    Target detection of potential threats at night can be deployed on a costly infrared focal plane array with high resolution. Due to the compressibility of infrared image patches, the high resolution requirement could be reduced with target detection capability preserved. For this reason, a compressive midwave infrared imager (MWIR) with a low-resolution focal plane array has been developed. As the most probable coefficient indices of the support set of the infrared image patches could be learned from the training data, we develop stochastically trained least squares (STLS) for MWIR image reconstruction. Quadratic correlation filters (QCF) have been shown to be effective for target detection and there are several methods for designing a filter. Using the same measurement matrix as in STLS, we construct a compressed quadratic correlation filter (CQCF) employing filter designs for compressed infrared target detection. We apply CQCF to the U.S. Army Night Vision and Electronic Sensors Directorate dataset. Numerical simulations show that the recognition performance of our algorithm matches that of the standard full reconstruction methods, but at a fraction of the execution time.

  3. Green and early age compressive strength of extruded cement mortar monitored with compression tests and ultrasonic techniques

    International Nuclear Information System (INIS)

    Voigt, Thomas; Malonn, Tim; Shah, Surendra P.

    2006-01-01

    Knowledge about the early age compressive strength development of cementitious materials is an important factor for the progress and safety of many construction projects. This paper uses cylindrical mortar specimens produced with a ram extruder to investigate the transition of the mortar from plastic and deformable to hardened state. In addition, wave transmission and reflection measurements with P- and S-waves were conducted to obtain further information about the microstructural changes during the setting and hardening process. The experiments have shown that uniaxial compression tests conducted on extruded mortar cylinders are a useful tool to evaluate the green strength as well as the initiation and further development of the compressive strength of the tested material. The propagation of P-waves was found to be indicative of the internal structure of the tested mortars as influenced, for example, by the addition of fine clay particles. S-waves used in transmission and reflection mode proved to be sensitive to the inter-particle bonding caused by the cement hydration and expressed by an increase in compressive strength

  4. Mechanical versus manual chest compressions for cardiac arrest.

    Science.gov (United States)

    Brooks, Steven C; Hassan, Nizar; Bigham, Blair L; Morrison, Laurie J

    2014-02-27

    This is the first update of the Cochrane review on mechanical chest compression devices published in 2011 (Brooks 2011). Mechanical chest compression devices have been proposed to improve the effectiveness of cardiopulmonary resuscitation (CPR). To assess the effectiveness of mechanical chest compressions versus standard manual chest compressions with respect to neurologically intact survival in patients who suffer cardiac arrest. We searched the Cochrane Central Register of Controlled Studies (CENTRAL; 2013, Issue 12), MEDLINE Ovid (1946 to 2013 January Week 1), EMBASE (1980 to 2013 January Week 2), Science Citation abstracts (1960 to 18 November 2009), Science Citation Index-Expanded (SCI-EXPANDED) (1970 to 11 January 2013) on Thomson Reuters Web of Science, biotechnology and bioengineering abstracts (1982 to 18 November 2009), conference proceedings Citation Index-Science (CPCI-S) (1990 to 11 January 2013) and clinicaltrials.gov (2 August 2013). We applied no language restrictions. Experts in the field of mechanical chest compression devices and manufacturers were contacted. We included randomised controlled trials (RCTs), cluster RCTs and quasi-randomised studies comparing mechanical chest compressions versus manual chest compressions during CPR for patients with atraumatic cardiac arrest. Two review authors abstracted data independently; disagreement between review authors was resolved by consensus and by a third review author if consensus could not be reached. The methodologies of selected studies were evaluated by a single author for risk of bias. The primary outcome was survival to hospital discharge with good neurological outcome. We planned to use RevMan 5 (Version 5.2. The Nordic Cochrane Centre) and the DerSimonian & Laird method (random-effects model) to provide a pooled estimate for risk ratio (RR) with 95% confidence intervals (95% CIs), if data allowed. Two new studies were included in this update. Six trials in total, including data from 1166

  5. Bit-wise arithmetic coding for data compression

    Science.gov (United States)

    Kiely, A. B.

    1994-01-01

    This article examines the problem of compressing a uniformly quantized independent and identically distributed (IID) source. We present a new compression technique, bit-wise arithmetic coding, that assigns fixed-length codewords to the quantizer output and uses arithmetic coding to compress the codewords, treating the codeword bits as independent. We examine the performance of this method and evaluate the overhead required when used block-adaptively. Simulation results are presented for Gaussian and Laplacian sources. This new technique could be used as the entropy coder in a transform or subband coding system.
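    The cost of treating the codeword bits as independent, which this method accepts for simplicity, can be quantified: the achievable rate is the sum of per-bit entropies, which can never fall below the true symbol entropy. A small numeric sketch with an illustrative nonuniform source (the distribution and all names are made up for the demonstration):

```python
# Compare the bit-wise coder's ideal rate (sum of per-bit entropies)
# against the symbol entropy for a 3-bit quantizer output.
import math
import random
from collections import Counter

random.seed(0)
LEVELS, BITS = 8, 3  # 3-bit fixed-length codewords

# Peaked, nonuniform symbol distribution (half-normal-ish, illustrative).
samples = [min(LEVELS - 1, int(abs(random.gauss(0, 1.5)))) for _ in range(20000)]
n = len(samples)

def entropy(counts):
    """Empirical entropy in bits of a Counter over n samples."""
    return -sum(c / n * math.log2(c / n) for c in counts.values())

joint = entropy(Counter(samples))           # rate of an ideal symbol coder

bitwise = 0.0
for b in range(BITS):
    bits = Counter((s >> b) & 1 for s in samples)
    bitwise += entropy(bits)                # each bit coded independently
```

    The gap `bitwise - joint` is the overhead of ignoring dependence between bit positions; the article's point is that for well-behaved quantized sources this gap stays small while the coder stays simple.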

  6. Radial and axial compression of pure electron

    International Nuclear Information System (INIS)

    Park, Y.; Soga, Y.; Mihara, Y.; Takeda, M.; Kamada, K.

    2013-01-01

    Experimental studies are carried out on compression of the density distribution of a pure electron plasma confined in a Malmberg-Penning trap at Kanazawa University. A more than six-fold increase of the on-axis density is observed under application of an external rotating electric field that couples to low-order Trivelpiece-Gould modes. Axial compression of the density distribution, shortening the axial length by a factor of two, is achieved by controlling the confining potential at both ends of the plasma. A substantial increase of the axial kinetic energy is observed during the axial compression. (author)

  7. How compressible is recombinant battery separator mat?

    Energy Technology Data Exchange (ETDEWEB)

    Pendry, C. [Hollingsworth and Vose, Postlip Mills Winchcombe (United Kingdom)

    1999-03-01

    In the past few years, the recombinant battery separator mat (RBSM) for valve-regulated lead/acid (VRLA) batteries has become the focus of much attention. Compression, and the ability of microglass separators to maintain a level of 'springiness', have helped reduce premature capacity loss. As higher compressions are reached, we need to determine what damage, if any, can be caused during the assembly process. This paper reviews the findings when RBSM materials with different surface areas are compressed under forces up to 500 kPa in the dry state. (orig.)

  8. Discrete Wigner Function Reconstruction and Compressed Sensing

    OpenAIRE

    Zhang, Jia-Ning; Fang, Lei; Ge, Mo-Lin

    2011-01-01

    A new reconstruction method for the Wigner function is reported for quantum tomography based on compressed sensing. By analogy with computed tomography, Wigner functions for some quantum states can be reconstructed with fewer measurements using this compressed-sensing-based method.

  9. Magnetic surface compression heating in the heliotron device

    International Nuclear Information System (INIS)

    Uo, K.; Motojima, O.

    1982-01-01

    The slow adiabatic compression of the plasma in the heliotron device is examined. It has the prominent characteristic that a plasma equilibrium always exists at each stage of the compression. The heating efficiency is calculated. We show a possible route to fusion. A large amount of the initial investment for the heating system (NBI or RF) is reduced by using magnetic surface compression heating. (author)

  10. Relationship between chest compression rates and outcomes from cardiac arrest.

    Science.gov (United States)

    Idris, Ahamed H; Guffey, Danielle; Aufderheide, Tom P; Brown, Siobhan; Morrison, Laurie J; Nichols, Patrick; Powell, Judy; Daya, Mohamud; Bigham, Blair L; Atkins, Dianne L; Berg, Robert; Davis, Dan; Stiell, Ian; Sopko, George; Nichol, Graham

    2012-06-19

    Guidelines for cardiopulmonary resuscitation recommend a chest compression rate of at least 100 compressions per minute. Animal and human studies have reported that blood flow is greatest with chest compression rates near 120/min, but few have reported rates used during out-of-hospital (OOH) cardiopulmonary resuscitation or the relationship between rate and outcome. The purpose of this study was to describe chest compression rates used by emergency medical services providers to resuscitate patients with OOH cardiac arrest and to determine the relationship between chest compression rate and outcome. Included were patients aged ≥ 20 years with OOH cardiac arrest treated by emergency medical services providers participating in the Resuscitation Outcomes Consortium. Data were abstracted from monitor-defibrillator recordings during cardiopulmonary resuscitation. Multiple logistic regression analysis assessed the association between chest compression rate and outcome. From December 2005 to May 2007, 3098 patients with OOH cardiac arrest were included in this study. Mean age was 67 ± 16 years, and 8.6% survived to hospital discharge. Mean compression rate was 112 ± 19/min. A curvilinear association between chest compression rate and return of spontaneous circulation was found in cubic spline models after multivariable adjustment (P=0.012). Return of spontaneous circulation rates peaked at a compression rate of ≈ 125/min and then declined. Chest compression rate was not significantly associated with survival to hospital discharge in multivariable categorical or cubic spline models. Chest compression rate was associated with return of spontaneous circulation but not with survival to hospital discharge in OOH cardiac arrest.
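
The curvilinear rate-outcome relationship described above can be illustrated on synthetic data. The sketch below is a hedged stand-in, not the Resuscitation Outcomes Consortium analysis: it uses a quadratic term in the logit rather than a full cubic-spline basis, simulated rather than registry data, and an assumed "true" peak at 125/min, then fits a logistic regression by Newton's method (IRLS) and recovers the peak rate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
rate = rng.normal(112, 19, n)                 # compression rates (mean/SD as in the study)
z = (rate - 112) / 19                         # standardize for numerical stability

# Assumed "true" model: ROSC log-odds peaking at 125 compressions/min
logit_true = -0.5 - 0.001 * (rate - 125) ** 2
rosc = rng.random(n) < 1 / (1 + np.exp(-logit_true))

# Logistic regression with a quadratic term, fitted by Newton's method (IRLS)
X = np.column_stack([np.ones(n), z, z ** 2])
b = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ b))
    W = p * (1 - p)
    b += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (rosc - p))

z_peak = -b[1] / (2 * b[2])                   # vertex of the fitted quadratic logit
rate_peak = 112 + 19 * z_peak
print(rate_peak)
```

With a negative fitted quadratic coefficient, the vertex of the parabola in the logit estimates the rate at which the modeled ROSC probability is maximal.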

  11. Venous Leg Ulcers: Effectiveness of new compression therapy/moist ...

    African Journals Online (AJOL)

    (Cutimed Sorbact) and compression bandages (Comprilan, Tensoplast) in the initial oedema phase, followed by a compression stocking system delivering 40 mmHg (JOBST UlcerCARE). Due to their high stiffness characteristics, these compression products exert a high working pressure during walking and a comfortably ...

  12. A measurement method for piezoelectric material properties under longitudinal compressive stress - a compression test method for thin piezoelectric materials

    International Nuclear Information System (INIS)

    Kang, Lae-Hyong; Lee, Dae-Oen; Han, Jae-Hung

    2011-01-01

    We introduce a new compression test method for piezoelectric materials to investigate changes in piezoelectric properties under compressive stress. Until now, compression tests of piezoelectric materials have generally been conducted using bulky piezoelectric ceramics and a pressure block. The conventional method using the pressure block for thin piezoelectric patches, which are used in unimorph or bimorph actuators, is prone to unwanted bending and buckling. In addition, due to the constrained boundaries at both ends, the observed piezoelectric behavior contains boundary effects. In order to avoid these problems, the proposed method employs two guide plates with initial longitudinal tensile stress. By removing the tensile stress after bonding a piezoelectric material between the guide layers, longitudinal compressive stress is induced in the piezoelectric layer. Using the compression test specimens, two important properties that govern the actuation performance of the piezoelectric material, the piezoelectric strain coefficient d31 and the elastic modulus, are measured to evaluate the effects of applied electric fields and re-poling. The results show that the piezoelectric strain coefficient d31 increases and the elastic modulus decreases when high voltage is applied to PZT5A, and that compression in the longitudinal direction decreases the piezoelectric strain coefficient d31 but does not affect the elastic modulus. We also found that re-poling of the piezoelectric material increases the elastic modulus, but the piezoelectric strain coefficient d31 is not changed much (slightly increased) by re-poling.

  13. Powder compression mechanics of spray-dried lactose nanocomposites.

    Science.gov (United States)

    Hellrup, Joel; Nordström, Josefina; Mahlin, Denny

    2017-02-25

    The aim of this study was to investigate the structural impact of the nanofiller incorporation on the powder compression mechanics of spray-dried lactose. The lactose was co-spray-dried with three different nanofillers, that is, cellulose nanocrystals, sodium montmorillonite and fumed silica, which led to lower micron-sized nanocomposite particles with varying structure and morphology. The powder compression mechanics of the nanocomposites and physical mixtures of the neat spray-dried components were evaluated by a rational evaluation method with compression analysis as a tool, using the Kawakita equation and the Shapiro-Konopicky-Heckel equation. Particle rearrangement dominated the initial compression profiles due to the small particle size of the materials. The strong contribution of particle rearrangement in the materials with fumed silica continued throughout the whole compression profile, which prohibited an in-depth material characterization. However, the lactose/cellulose nanocrystals and the lactose/sodium montmorillonite nanocomposites demonstrated high yield pressure compared with the physical mixtures, indicating increased particle hardness upon composite formation. This increase likely has to do with a reinforcement of the nanocomposite particles by skeleton formation of the nanoparticles. In summary, the rational evaluation of mechanical properties done by applying powder compression analysis proved to be a valuable tool for mechanical evaluation for this type of spray-dried composite materials, unless they demonstrate particle rearrangement throughout the whole compression profile. Copyright © 2016 Elsevier B.V. All rights reserved.
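
As a hedged illustration of the compression analysis mentioned above: the Kawakita equation C = abP/(1 + bP) relates applied pressure P to the degree of volume reduction C, and its linearized form P/C = P/a + 1/(ab) allows a and b to be read off a straight-line fit. The parameters and pressure range below are invented for the sketch, not taken from the study.

```python
import numpy as np

# Synthetic compression data from an assumed Kawakita model C = abP/(1 + bP),
# where C is the degree of volume reduction and P the applied pressure (MPa).
a_true, b_true = 0.60, 0.05                 # hypothetical Kawakita parameters
P = np.linspace(5.0, 300.0, 30)
C = a_true * b_true * P / (1 + b_true * P)

# Linearized Kawakita form: P/C = P/a + 1/(a*b)  ->  straight-line fit
slope, intercept = np.polyfit(P, P / C, 1)
a_fit = 1.0 / slope
b_fit = 1.0 / (intercept * a_fit)
print(a_fit, b_fit)
```

Because P/C is exactly linear in P under the model, the fit recovers the assumed parameters; with real powder data the residuals of this line diagnose how well the Kawakita description holds.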

  14. Caracterização das publicações periódicas em fonoaudiologia e neurociências: estudo sobre os tipos e temas de artigos e visibilidade na área de linguagem Periodicals' profile in speech-language and hearing pathology and neurosciences: study on types and headers of the language area articles, and their visibility

    Directory of Open Access Journals (Sweden)

    Sandrelli Virginio de Vasconcelos

    2009-03-01

    Full Text Available BACKGROUND: periodicals' profile in speech-language and hearing pathology and neurosciences: a study on the types and headers of articles in the language area, and their visibility. PURPOSE: to characterize periodicals in Speech-Language and Hearing Pathology by studying the articles of the Language area related to Neurosciences in the period from 2002 to 2006. CONCLUSION: a growing number of publications in Language and in Neurosciences over the last five years is evident. However, the number of publications on certain headers, such as dyslexia, Alzheimer's disease and Attention-Deficit/Hyperactivity Disorder, is still limited.

  15. Efficient traveltime compression for 3D prestack Kirchhoff migration

    KAUST Repository

    Alkhalifah, Tariq

    2010-12-13

    Kirchhoff 3D prestack migration, as part of its execution, usually requires repeated access to a large traveltime table data base. Access to this data base implies either a memory intensive or I/O bounded solution to the storage problem. Proper compression of the traveltime table allows efficient 3D prestack migration without relying on the usually slow access to the computer hard drive. Such compression also allows for faster access to desirable parts of the traveltime table. Compression is applied to the traveltime field for each source location on the surface on a regular grid using 3D Chebyshev polynomial or cosine transforms of the traveltime field represented in the spherical coordinates or the Celerity domain. We obtain practical compression levels up to and exceeding 20 to 1. In fact, because of the smaller size traveltime table, we obtain exceptional traveltime extraction speed during migration that exceeds conventional methods. Additional features of the compression include better interpolation of traveltime tables and more stable estimates of amplitudes from traveltime curvatures. Further compression is achieved using bit encoding, by representing compression parameters values with fewer bits. © 2010 European Association of Geoscientists & Engineers.
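
The core idea above, representing a smooth traveltime field by a few cosine-transform coefficients, can be sketched in one dimension. The sizes, the 1-D setting, and the truncation level below are illustrative assumptions, not the paper's 3-D Chebyshev/Celerity-domain implementation.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II matrix: row k is the k-th cosine basis vector
    k = np.arange(n)[:, None]          # frequency index
    j = np.arange(n)[None, :]          # sample index
    M = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j + 1) * k / (2 * n))
    M[0, :] /= np.sqrt(2.0)
    return M

n = 64
x = np.linspace(0.0, 1.0, n)
t = np.sqrt(0.04 + x ** 2)             # smooth, traveltime-like 1-D field

D = dct_matrix(n)
coeffs = D @ t
kept = 16                              # keep 16 of 64 coefficients -> 4:1 compression
truncated = np.zeros_like(coeffs)
truncated[:kept] = coeffs[:kept]
t_rec = D.T @ truncated                # orthonormal transform: inverse = transpose

err = np.max(np.abs(t_rec - t))
print(err)
```

Because the field is smooth, its cosine coefficients decay rapidly and a small leading block reconstructs it to a small maximum error; the same principle, applied per source in 3-D with bit encoding of the retained coefficients, is what yields the 20:1 ratios reported above.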

  16. Premixed autoignition in compressible turbulence

    Science.gov (United States)

    Konduri, Aditya; Kolla, Hemanth; Krisman, Alexander; Chen, Jacqueline

    2016-11-01

    Prediction of chemical ignition delay in an autoignition process is critical in combustion systems like compression ignition engines and gas turbines. Often, ignition delay times measured in simple homogeneous experiments or homogeneous calculations are not representative of actual autoignition processes in complex turbulent flows. This is due to the presence of turbulent mixing, which results in fluctuations in thermodynamic properties as well as chemical composition. In the present study the effect of fluctuations of thermodynamic variables on the ignition delay is quantified with direct numerical simulations of compressible isotropic turbulence. A premixed syngas-air mixture is used to remove the effects of inhomogeneity in the chemical composition. Preliminary results show a significant spatial variation in the ignition delay time. We analyze the topology of autoignition kernels and identify the influence of extreme events resulting from compressibility and intermittency. The dependence of ignition delay time on Reynolds and turbulent Mach numbers is also quantified. Supported by Basic Energy Sciences, Dept of Energy, United States.

  17. Effect of the rate of chest compression familiarised in previous training on the depth of chest compression during metronome-guided cardiopulmonary resuscitation: a randomised crossover trial.

    Science.gov (United States)

    Bae, Jinkun; Chung, Tae Nyoung; Je, Sang Mo

    2016-02-12

    To assess how the quality of metronome-guided cardiopulmonary resuscitation (CPR) was affected by the chest compression rate familiarised by training before the performance, and to determine a possible mechanism for any effect shown. Prospective crossover trial of a simulated, one-person, chest-compression-only CPR. Participants were recruited from a medical school and two paramedic schools of South Korea. 42 senior students of a medical school and two paramedic schools were enrolled but five dropped out due to physical restraints. Senior medical and paramedic students performed 1 min of metronome-guided CPR with chest compressions only at a speed of 120 compressions/min after training for chest compression with three different rates (100, 120 and 140 compressions/min). Friedman's test was used to compare average compression depths based on the different rates used during training. Average compression depths were significantly different according to the rate used in training, with significant differences between compressions performed after training at a speed of 100 compressions/min and those after training at speeds of 120 and 140 compressions/min. The quality of metronome-guided CPR is affected by the relative difference between the rate of metronome guidance and the chest compression rate practised in previous training. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  18. 5 CFR 532.513 - Flexible and compressed work schedules.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Flexible and compressed work schedules... REGULATIONS PREVAILING RATE SYSTEMS Premium Pay and Differentials § 532.513 Flexible and compressed work schedules. Federal Wage System employees who are authorized to work flexible and compressed work schedules...

  19. Wavelet/scalar quantization compression standard for fingerprint images

    Energy Technology Data Exchange (ETDEWEB)

    Brislawn, C.M.

    1996-06-12

    US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.
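
A heavily simplified sketch of the standard's two core ingredients, a wavelet subband decomposition followed by uniform scalar quantization, is given below. A one-level 2-D Haar transform stands in for the FBI's actual filter bank, and the tile size and step sizes are assumptions for illustration only.

```python
import numpy as np

def haar2(x):
    # One level of a separable, orthonormal 2-D Haar transform (rows, then columns)
    L = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
    H = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    y = np.hstack([L, H])
    L2 = (y[0::2, :] + y[1::2, :]) / np.sqrt(2)
    H2 = (y[0::2, :] - y[1::2, :]) / np.sqrt(2)
    return np.vstack([L2, H2])

def ihaar2(y):
    # Exact inverse: undo the column step, then the row step
    n, m = y.shape[0] // 2, y.shape[1] // 2
    z = np.empty_like(y)
    z[0::2, :] = (y[:n, :] + y[n:, :]) / np.sqrt(2)
    z[1::2, :] = (y[:n, :] - y[n:, :]) / np.sqrt(2)
    x = np.empty_like(y)
    x[:, 0::2] = (z[:, :m] + z[:, m:]) / np.sqrt(2)
    x[:, 1::2] = (z[:, :m] - z[:, m:]) / np.sqrt(2)
    return x

rng = np.random.default_rng(0)
img = rng.normal(128, 10, (16, 16))     # stand-in for an 8-bit fingerprint tile

Y = haar2(img)
step = np.full_like(Y, 8.0)             # coarse step for the detail subbands
step[:8, :8] = 1.0                      # finer step for the low-frequency subband
indices = np.round(Y / step)            # uniform scalar quantization -> integer indices
rec = ihaar2(indices * step)            # dequantize and invert the transform

mse = np.mean((rec - img) ** 2)
print(mse)
```

In the real standard the integer indices are then entropy-coded; because the transform is orthonormal, the per-coefficient quantization error bound (step/2) directly bounds the image-domain mean squared error.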

  20. Picosecond chirped pulse compression in single-mode fibers

    International Nuclear Information System (INIS)

    Wenhua Cao; Youwei Zhang

    1995-01-01

    In this paper, the nonlinear propagation of picosecond chirped pulses in single-mode fibers has been investigated both analytically and numerically. Results show that down-chirped pulses can be compressed owing to normal group-velocity dispersion. The compression ratio depends both on the initial peak power and on the initial frequency chirp of the input pulse: while the compression ratio increases with the negative frequency chirp, it decreases with the initial peak power of the input pulse. This is because self-phase modulation induces a nonlinear frequency chirp that is linear and positive (up-chirp) over a large central region of the pulse and tends to cancel the initial negative chirp of the pulse. It is also shown that, as the negatively chirped pulse compresses temporally, it synchronously experiences spectral narrowing.

  1. Microdamage in polycrystalline ceramics under dynamic compression and tension

    International Nuclear Information System (INIS)

    Zhang, K.S.; Zhang, D.; Feng, R.; Wu, M.S.

    2005-01-01

    In-grain microplasticity and intergranular microdamage in polycrystalline hexagonal-structure ceramics subjected to a sequence of dynamic compression and tension are studied computationally using the Voronoi polycrystal model, by which the topological heterogeneity and material anisotropy of the crystals are simulated explicitly. The constitutive modeling considers crystal plasticity by basal slip, intergranular shear damage during compression, and intergranular mode-I cracking during tension. The model parameters are calibrated with the available shock compression and spall strength data on polycrystalline α-6H silicon carbide. The numerical results show that microplasticity is a more plausible micromechanism for the inelastic response of the material under shock compression. On the other hand, the spallation behavior of the shocked material can be well predicted by intergranular mode-I microcracking during load reversal from dynamic compression to tension. The failure process and the resulting spall strength are, however, affected strongly by the intensity of local release heterogeneity induced by heterogeneous microplasticity, and by the grain-boundary shear damage during compression

  2. Temporal compression in episodic memory for real-life events.

    Science.gov (United States)

    Jeunehomme, Olivier; Folville, Adrien; Stawarczyk, David; Van der Linden, Martial; D'Argembeau, Arnaud

    2018-07-01

    Remembering an event typically takes less time than experiencing it, suggesting that episodic memory represents past experience in a temporally compressed way. Little is known, however, about how the continuous flow of real-life events is summarised in memory. Here we investigated the nature and determinants of temporal compression by directly comparing memory contents with the objective timing of events as measured by a wearable camera. We found that episodic memories consist of a succession of moments of prior experience that represent events with varying compression rates, such that the density of retrieved information is modulated by goal processing and perceptual changes. Furthermore, the results showed that temporal compression rates remain relatively stable over one week and increase after a one-month delay, particularly for goal-related events. These data shed new light on temporal compression in episodic memory and suggest that compression rates are adaptively modulated to maintain current goal-relevant information.

  3. Application of Compressive Sensing to Gravitational Microlensing Experiments

    Science.gov (United States)

    Korde-Patel, Asmita; Barry, Richard K.; Mohsenin, Tinoosh

    2016-01-01

    Compressive Sensing is an emerging technology for data compression and simultaneous data acquisition. This is an enabling technique for significant reduction in data bandwidth, and transmission power and hence, can greatly benefit spaceflight instruments. We apply this process to detect exoplanets via gravitational microlensing. We experiment with various impact parameters that describe microlensing curves to determine the effectiveness and uncertainty caused by Compressive Sensing. Finally, we describe implications for spaceflight missions.
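
A minimal sense of the premise above, recovering a sparse signal from far fewer random measurements than signal samples, is given below using Orthogonal Matching Pursuit. The sizes, the ±-free unit-magnitude signal, and the OMP solver are illustrative assumptions, not the authors' instrument pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 128, 40, 3                    # signal length, measurements, sparsity (toy sizes)

x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = 1.0     # k-sparse toy signal

A = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))     # random Gaussian sensing matrix
y = A @ x                                         # m << n compressed measurements

# Orthogonal Matching Pursuit: greedily add the column most correlated with
# the residual, then re-fit the coefficients by least squares on that support.
support, resid = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ resid))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    resid = y - A[:, support] @ coef

x_rec = np.zeros(n)
x_rec[support] = coef
print(np.max(np.abs(x_rec - x)))
```

With noiseless measurements and m comfortably larger than k·log(n), the greedy recovery is essentially exact; in an instrument setting the same reconstruction would run on the ground from the compressed downlink.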

  4. Management-oriented analysis of sediment yield time compression

    Science.gov (United States)

    Smetanova, Anna; Le Bissonnais, Yves; Raclot, Damien; Nunes, João P.; Licciardello, Feliciana; Le Bouteiller, Caroline; Latron, Jérôme; Rodríguez Caballero, Emilio; Mathys, Nicolle; Klotz, Sébastien; Mekki, Insaf; Gallart, Francesc; Solé Benet, Albert; Pérez Gallego, Nuria; Andrieux, Patrick; Moussa, Roger; Planchon, Olivier; Marisa Santos, Juliana; Alshihabi, Omran; Chikhaoui, Mohamed

    2016-04-01

    The understanding of inter- and intra-annual variability of sediment yield is important for land use planning and management decisions for sustainable landscapes. It is of particular importance in regions where the annual sediment yield is often highly dependent on the occurrence of a few large events which produce the majority of sediments, such as the Mediterranean. This phenomenon is referred to as time compression, and the relevance of its consideration grows with the increase in magnitude and frequency of extreme events due to climate change in many other regions. So far, time compression has been studied mainly on event datasets, which provide high resolution but demand analysis in terms of data amount, required data precision and methods. In order to provide an alternative simplified approach, the monthly and yearly time compressions were evaluated in eight Mediterranean catchments (of the R-OSMed network), representing a wide range of Mediterranean landscapes. The annual sediment yield varied between 0 and ~27100 Mg•km-2•a-1, and the monthly sediment yield between 0 and ~11600 Mg•km-2•month-1. The catchments' sediment yield was unequally distributed at inter- and intra-annual scale, and large differences were observed between the catchments. Two types of time compression were distinguished: (i) inter-annual (based on annual values) and (ii) intra-annual (based on monthly values). Four different rainfall-runoff-sediment yield time compression patterns were observed: (i) no time compression of rainfall, runoff, or sediment yield, (ii) low time compression of rainfall and runoff, but high compression of sediment yield, (iii) low compression of rainfall and high compression of runoff and sediment yield, and (iv) low, medium and high compression of rainfall, runoff and sediment yield, respectively. All four patterns were present at the inter-annual scale, while at the intra-annual scale only the two latter were present. This implies that high sediment yields occurred in

  5. SCALCE: boosting sequence compression algorithms using locally consistent encoding.

    Science.gov (United States)

    Hach, Faraz; Numanagic, Ibrahim; Alkan, Can; Sahinalp, S Cenk

    2012-12-01

    The high throughput sequencing (HTS) platforms generate unprecedented amounts of data that introduce challenges for the computational infrastructure. Data management, storage and analysis have become major logistical obstacles for those adopting the new platforms. The requirement for large investment for this purpose almost signalled the end of the Sequence Read Archive hosted at the National Center for Biotechnology Information (NCBI), which holds most of the sequence data generated world wide. Currently, most HTS data are compressed through general purpose algorithms such as gzip. These algorithms are not designed for compressing data generated by the HTS platforms; for example, they do not take advantage of the specific nature of genomic sequence data, that is, limited alphabet size and high similarity among reads. Fast and efficient compression algorithms designed specifically for HTS data should be able to address some of the issues in data management, storage and communication. Such algorithms would also help with analysis provided they offer additional capabilities such as random access to any read and indexing for efficient sequence similarity search. Here we present SCALCE, a 'boosting' scheme based on the Locally Consistent Parsing technique, which reorganizes the reads in a way that results in a higher compression speed and compression rate, independent of the compression algorithm in use and without using a reference genome. Our tests indicate that SCALCE can improve the compression rate achieved through gzip by a factor of 4.19 when the goal is to compress the reads alone. In fact, on SCALCE-reordered reads, gzip running time can improve by a factor of 15.06 on a standard PC with a single core and 6 GB memory. Interestingly, even the running time of SCALCE + gzip improves on that of gzip alone by a factor of 2.09. When compared with the recently published BEETL, which aims to sort the (inverted) reads in lexicographic order for improving bzip2, SCALCE + gzip
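
The boosting idea, reordering reads so that similar ones sit near each other before handing the stream to a generic compressor, can be demonstrated with a much cruder stand-in for Locally Consistent Parsing: plain lexicographic sorting on synthetic reads. The read lengths, locus counts and zlib settings are assumptions for the sketch.

```python
import random
import zlib

random.seed(0)
ref = ''.join(random.choice('ACGT') for _ in range(100_000))      # fake genome
starts = [random.randrange(len(ref) - 100) for _ in range(100)]   # 100 sampled loci
reads = [ref[s:s + 100] for s in random.choices(starts, k=5000)]  # highly redundant reads
random.shuffle(reads)                                             # "as sequenced" order

# Same compressor, two orderings: reordering clusters near-identical reads
# inside the compressor's window, so matches become short and cheap.
as_sequenced = zlib.compress('\n'.join(reads).encode(), 9)
reordered = zlib.compress('\n'.join(sorted(reads)).encode(), 9)
print(len(as_sequenced), len(reordered))
```

SCALCE's grouping by shared core substrings is far more effective than sorting on real FASTQ data, but the mechanism is the same: the compressor is unchanged and only the input order improves.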

  6. Compressive Strength of Compacted Clay-Sand Mixes

    Directory of Open Access Journals (Sweden)

    Faseel Suleman Khan

    2014-01-01

    Full Text Available The use of sand to improve the strength of natural clays provides a viable alternative for civil infrastructure construction involving earthwork. The main objective of this note was to investigate the compressive strength of compacted clay-sand mixes. A natural clay of high plasticity was mixed with 20% and 40% sand (SP), and their compaction and strength properties were determined. Results indicated that the investigated materials exhibited a brittle behaviour on the dry side of optimum and a ductile behaviour on the wet side of optimum. For each material, the compressive strength increased with an increase in density following a power law function. Conversely, the compressive strength increased with decreasing water content of the material following a similar function. Finally, the compressive strength decreased with an increase in sand content because of increased material heterogeneity and loss of sand grains from the sides during shearing.

  7. Compression force-depth relationship during out-of-hospital cardiopulmonary resuscitation.

    Science.gov (United States)

    Tomlinson, A E; Nysaether, J; Kramer-Johansen, J; Steen, P A; Dorph, E

    2007-03-01

    Recent clinical studies have reported a high frequency of inadequate chest compression depth in certain patients. Using a specially designed monitor/defibrillator equipped with a sternal pad fitted with an accelerometer and a pressure sensor, compression force and depth were measured during CPR in 91 adult out-of-hospital cardiac arrest patients. There was a strong non-linear relationship between the force of compression and the depth achieved. Mean applied force for all patients was 30.3+/-8.2 kg and mean absolute compression depth 42+/-8 mm. For 87 of 91 patients, a compression depth of 38 mm was obtained with less than 50 kg. Stiffer chests were compressed more forcefully than softer chests, and softer chests were compressed more deeply than stiffer chests (p=0.001). The force needed to reach 38 mm compression depth (F38) and the mean compression force were higher for males than for females: 29.8+/-14.5 kg versus 22.5+/-10.2 kg. There was no significant change in compression depth with age, but a significant 1.5 kg mean decrease in applied force for each 10 years increase in age. Average residual force during decompression was 1.7+/-1.0 kg, corresponding to an average residual depth of 3+/-2 mm. In most out-of-hospital cardiac arrest victims adequate chest compression depth can be achieved by a force of less than 50 kg, indicating that an average sized and fit rescuer should be able to perform effective CPR in most adult patients.

  8. Generation new MP3 data set after compression

    Science.gov (United States)

    Atoum, Mohammed Salem; Almahameed, Mohammad

    2016-02-01

    The success of audio steganography techniques depends on ensuring the imperceptibility of the embedded secret message in the stego file and on withstanding any form of intentional or unintentional degradation of the secret message (robustness). Crucial to that is the use of digital audio files such as MP3, which come at different compression rates; research studies have shown that performing steganography in the MP3 format after compression is the most suitable approach. Unfortunately, until now researchers have not been able to test and implement their algorithms because no standard data set of MP3 files after compression has been generated. This paper therefore focuses on generating a standard data set with different compression ratios and different genres to help researchers implement their algorithms.

  9. 3:1 compression to ventilation ratio versus continuous chest compression with asynchronous ventilation in a porcine model of neonatal resuscitation.

    Science.gov (United States)

    Schmölzer, Georg M; O'Reilly, Megan; Labossiere, Joseph; Lee, Tze-Fun; Cowan, Shaun; Nicoll, Jessica; Bigam, David L; Cheung, Po-Yin

    2014-02-01

    In contrast to the resuscitation guidelines of children and adults, guidelines on neonatal resuscitation recommend synchronized 90 chest compressions with 30 manual inflations (3:1) per minute in newborn infants. The study aimed to determine if chest compression with asynchronous ventilation improves the recovery of bradycardic asphyxiated newborn piglets compared to 3:1 Compression:Ventilation cardiopulmonary resuscitation (CPR). Term newborn piglets (n=8/group) were anesthetized, intubated, instrumented and exposed to 45-min normocapnic hypoxia followed by asphyxia. Protocolized resuscitation was initiated when heart rate decreased to 25% of baseline. Piglets were randomized to receive resuscitation with either 3:1 compressions to ventilations (3:1C:V CPR group) or chest compressions with asynchronous ventilations (CCaV) or sham. Continuous respiratory parameters (Respironics NM3(®)), cardiac output, mean systemic and pulmonary artery pressures, and regional blood flows were measured. Piglets in 3:1C:V CPR and CCaV CPR groups had similar time to return of spontaneous circulation, survival rates, hemodynamic and respiratory parameters during CPR. The systemic and regional hemodynamic recovery in the subsequent 4h was similar in both groups and significantly lower compared to sham-operated piglets. Newborn piglets resuscitated by CCaV had similar return of spontaneous circulation, survival, and hemodynamic recovery compared to those piglets resuscitated by 3:1 Compression:Ventilation ratio. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  10. Effects on MR images compression in tissue classification quality

    International Nuclear Information System (INIS)

    Santalla, H; Meschino, G; Ballarin, V

    2007-01-01

    It is known that image compression is required to optimize storage in memory; moreover, transmission speed can be significantly improved. Lossless compression is used without controversy in medicine, though its benefits are limited. If we compress images lossily, the image cannot be totally recovered; we can only recover an approximation. At this point the definition of 'quality' is essential. What do we understand by 'quality'? How can we evaluate a compressed image? Quality in images is an attribute with several definitions and interpretations, which ultimately depend on the posterior use we want to give them. This work proposes a quantitative analysis of quality for lossy compressed Magnetic Resonance (MR) images, and of their influence on automatic tissue classification accomplished with these images.
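
One common quantitative starting point for the 'quality' question raised above is the MSE/PSNR between the original image and its lossy reconstruction. The sketch below uses coarse quantization as a stand-in for a lossy codec; the image content and quantization step are purely illustrative, not MR data.

```python
import numpy as np

rng = np.random.default_rng(0)
original = rng.integers(0, 256, (64, 64)).astype(float)   # hypothetical 8-bit image
lossy = np.round(original / 16) * 16                      # stand-in for lossy compression

mse = np.mean((original - lossy) ** 2)                    # mean squared error
psnr = 10 * np.log10(255.0 ** 2 / mse)                    # peak signal-to-noise ratio (dB)
print(round(psnr, 1))
```

For the tissue-classification use case discussed in the abstract, such pixel-level metrics would be complemented by task-level ones, e.g. the fraction of pixels whose assigned tissue class changes after compression.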

  11. Lossless Compression of Classification-Map Data

    Science.gov (United States)

    Hua, Xie; Klimesh, Matthew

    2009-01-01

    A lossless image-data-compression algorithm intended specifically for application to classification-map data is based on prediction, context modeling, and entropy coding. The algorithm was formulated, in consideration of the differences between classification maps and ordinary images of natural scenes, so as to be capable of compressing classification-map data more effectively than do general-purpose image-data-compression algorithms. Classification maps are typically generated from remote-sensing images acquired by instruments aboard aircraft (see figure) and spacecraft. A classification map is a synthetic image that summarizes information derived from one or more original remote-sensing image(s) of a scene. The value assigned to each pixel in such a map is the index of a class that represents some type of content deduced from the original image data, for example a type of vegetation, a mineral, or a body of water at the corresponding location in the scene. When classification maps are generated onboard the aircraft or spacecraft, it is desirable to compress the classification-map data in order to reduce the volume of data that must be transmitted to a ground station.
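
The ingredients named above, prediction followed by entropy coding of the residuals, can be loosely illustrated on a synthetic patchy class map. Left-neighbor prediction plus zlib is a crude stand-in (not the NASA algorithm, which uses context modeling as well), but it shows why class maps compress so well: residuals are zero everywhere except at class boundaries.

```python
import zlib

import numpy as np

rng = np.random.default_rng(0)
base = rng.integers(0, 5, (8, 8), dtype=np.uint8)
cmap = np.kron(base, np.ones((16, 16), dtype=np.uint8))   # 128x128 map of class patches

# Predict each pixel from its left neighbor (first column predicted as 0);
# residuals are zero inside patches and nonzero only at class boundaries.
pred = np.hstack([np.zeros((cmap.shape[0], 1), dtype=np.uint8), cmap[:, :-1]])
residual = (cmap.astype(np.int16) - pred.astype(np.int16)) % 256

# zlib stands in for the context-modeled entropy coder
packed = zlib.compress(residual.astype(np.uint8).tobytes(), 9)

# Lossless check: a cumulative sum of residuals along each row restores the map
restored = (np.cumsum(residual, axis=1) % 256).astype(np.uint8)
print(len(packed), cmap.nbytes)
```

The decoder inverts the prediction exactly, so the scheme is lossless by construction, which is essential since a class index, unlike a natural-image intensity, cannot tolerate approximation.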

  12. Mammographic compression – A need for mechanical standardization

    Energy Technology Data Exchange (ETDEWEB)

    Branderhorst, Woutjan, E-mail: w.branderhorst@amc.nl [Academic Medical Center, Department of Biomedical Engineering & Physics, P.O. Box 22660, 1100 DD Amsterdam (Netherlands); Sigmascreening B.V., Meibergdreef 45, 1105 BA Amsterdam (Netherlands); Groot, Jerry E. de, E-mail: jerry.degroot@sigmascreening.com [Academic Medical Center, Department of Biomedical Engineering & Physics, P.O. Box 22660, 1100 DD Amsterdam (Netherlands); Highnam, Ralph, E-mail: ralph.highnam@volparasolutions.com [Volpara Solutions Limited, P.O. Box 24404, Manners St Central, Wellington 6142 (New Zealand); Chan, Ariane, E-mail: ariane.chan@volparasolutions.com [Volpara Solutions Limited, P.O. Box 24404, Manners St Central, Wellington 6142 (New Zealand); Böhm-Vélez, Marcela, E-mail: marcelabvelez@gmail.com [Weinstein Imaging Associates, 5850 Centre Avenue, Pittsburgh, PA 15206 (United States); Broeders, Mireille J.M., E-mail: mireille.broeders@radboudumc.nl [Radboud University Medical Center, Department for Health Evidence, P.O. Box 9101, 6500 HB Nijmegen (Netherlands); LRCB Dutch Reference Center for Screening, P.O. Box 6873, 6503 GJ Nijmegen (Netherlands); Heeten, Gerard J. den, E-mail: g.denheeten@lrcb.nl [Academic Medical Center, Department of Radiology, P.O. Box 22660, 1100 DD Amsterdam (Netherlands); LRCB Dutch Reference Center for Screening, P.O. Box 6873, 6503 GJ Nijmegen (Netherlands); Grimbergen, Cornelis A., E-mail: c.a.grimbergen@amc.uva.nl [Academic Medical Center, Department of Biomedical Engineering & Physics, P.O. Box 22660, 1100 DD Amsterdam (Netherlands); Sigmascreening B.V., Meibergdreef 45, 1105 BA Amsterdam (Netherlands)

    2015-04-15

    Highlights: •We studied mechanical breast compression practice in two different clinical sites. •We visualized the distributions of not only the applied force but also the pressure. •The applied pressure was highly variable, both within and between the data sets. •The average applied pressure and the variation were higher for smaller breasts. •A proposal for improved individualization, by standardizing pressure, is discussed. -- Abstract: Background: A lack of consistent guidelines regarding mammographic compression has led to wide variation in its technical execution. Breast compression is accomplished by means of a compression paddle, resulting in a certain contact area between the paddle and the breast. This procedure is associated with varying levels of discomfort or pain. On current mammography systems, the only mechanical parameter available in estimating the degree of compression is the physical entity of force (daN). Recently, researchers have suggested that pressure (kPa), resulting from a specific force divided by contact area on a breast, might be a more appropriate parameter for standardization. Software has now become available which enables device-independent cross-comparisons of key mammographic metrics, such as applied compression pressure (force divided by contact area), breast density and radiation dose, between patient populations. Purpose: To compare the current compression practice in mammography between different imaging sites in the Netherlands and the United States from a mechanical point of view, and to investigate whether the compression protocols in these countries can be improved by standardization of pressure (kPa) as an objective mechanical parameter. Materials and methods: We retrospectively studied the available parameters of a set of 37,518 mammographic compressions (9188 women) from the Dutch national breast cancer screening programme (NL data set) and of another set of 7171 compressions (1851 women) from a breast imaging
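The pressure parameter the authors propose is simple arithmetic: applied force divided by the paddle-breast contact area. A minimal sketch of the unit conversion, assuming force in daN and contact area in cm² as inputs (the numbers in the example are illustrative, not from the study):

```python
def compression_pressure_kpa(force_dan, contact_area_cm2):
    """Pressure (kPa) = force / contact area.
    1 daN = 10 N; 1 cm^2 = 1e-4 m^2; 1 kPa = 1000 N/m^2."""
    force_n = force_dan * 10.0
    area_m2 = contact_area_cm2 * 1e-4
    return force_n / area_m2 / 1000.0

# e.g. 12 daN spread over a 120 cm^2 contact area -> 10 kPa
print(compression_pressure_kpa(12, 120))
```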

  13. Normalized compression distance of multisets with applications

    NARCIS (Netherlands)

    Cohen, A.R.; Vitányi, P.M.B.

    Pairwise normalized compression distance (NCD) is a parameter-free, feature-free, alignment-free, similarity metric based on compression. We propose an NCD of multisets that is also metric. Previously, attempts to obtain such an NCD failed. For classification purposes it is superior to the pairwise
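The pairwise NCD is computed as NCD(x,y) = (C(xy) − min(C(x), C(y))) / max(C(x), C(y)), where C is the compressed length under a real compressor. A minimal pairwise sketch using zlib (the multiset extension proposed in the record is more involved):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Pairwise normalized compression distance:
    NCD(x,y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    with zlib approximating the (uncomputable) ideal compressor C."""
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)

x1 = b"the quick brown fox jumps over the lazy dog" * 20
x2 = b"jackdaws love my big sphinx of quartz" * 20
# A string is far closer to itself than to an unrelated string.
print(ncd(x1, x1) < ncd(x1, x2))
```

Because zlib is not an ideal compressor, real NCD values can slightly exceed 1; the metric is used for relative comparisons.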

  14. Compressed Gas Safety for Experimental Fusion Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Lee C. Cadwallader

    2004-09-01

    Experimental fusion facilities present a variety of hazards to the operators and staff. There are unique or specialized hazards, including magnetic fields, cryogens, radio frequency emissions, and vacuum reservoirs. There are also more general industrial hazards: a wide variety of electrical power, pressurized air, and cooling water systems in use; crane and hoist loads; working at height; and handling compressed gas cylinders. This paper outlines the projectile hazard associated with compressed gas cylinders and methods of treatment to provide for compressed gas safety. This information should be of interest to personnel at both magnetic and inertial fusion experiments.

  15. Combined Sparsifying Transforms for Compressive Image Fusion

    Directory of Open Access Journals (Sweden)

    ZHAO, L.

    2013-11-01

    Full Text Available In this paper, we present a new compressive image fusion method based on combined sparsifying transforms. First, the framework of compressive image fusion is introduced briefly. Then, combined sparsifying transforms are presented to enhance the sparsity of images. Finally, a reconstruction algorithm based on the nonlinear conjugate gradient is presented to get the fused image. The simulations demonstrate that by using the combined sparsifying transforms better results can be achieved in terms of both the subjective visual effect and the objective evaluation indexes than using only a single sparsifying transform for compressive image fusion.

  16. Fingerprints in compressed strings

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Cording, Patrick Hagge

    2017-01-01

    In this paper we show how to construct a data structure for a string S of size N compressed into a context-free grammar of size n that supports efficient Karp–Rabin fingerprint queries to any substring of S. That is, given indices i and j, the answer to a query is the fingerprint of the substring S......[i,j]. We present the first O(n) space data structures that answer fingerprint queries without decompressing any characters. For Straight Line Programs (SLP) we get O(log⁡N) query time, and for Linear SLPs (an SLP derivative that captures LZ78 compression and its variations) we get O(log⁡log⁡N) query time...
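For intuition, here is what a Karp–Rabin fingerprint query looks like on an uncompressed string, using O(N) prefix hashes to answer any substring fingerprint in O(1); the point of the paper is to support the same query directly on the grammar-compressed representation in O(n) space without decompressing. The base and modulus below are illustrative choices, not from the paper.

```python
P = (1 << 61) - 1  # Mersenne prime modulus (illustrative choice)
B = 256            # base

def prefix_hashes(s: bytes):
    """h[k] = fingerprint of s[:k]; pw[k] = B^k mod P."""
    h = [0] * (len(s) + 1)
    pw = [1] * (len(s) + 1)
    for k, ch in enumerate(s):
        h[k + 1] = (h[k] * B + ch) % P
        pw[k + 1] = (pw[k] * B) % P
    return h, pw

def fingerprint(h, pw, i, j):
    """Karp-Rabin fingerprint of s[i:j], answered in O(1) from the tables."""
    return (h[j] - h[i] * pw[j - i]) % P

s = b"abracadabra"
h, pw = prefix_hashes(s)
# Equal substrings ("abra" at positions 0 and 7) have equal fingerprints.
print(fingerprint(h, pw, 0, 4) == fingerprint(h, pw, 7, 11))
```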

  17. Iris Recognition: The Consequences of Image Compression

    Directory of Open Access Journals (Sweden)

    Bishop, Daniel A.

    2010-01-01

    Full Text Available Iris recognition for human identification is one of the most accurate biometrics, and its employment is expanding globally. The use of portable iris systems, particularly in law enforcement applications, is growing. In many of these applications, the portable device may be required to transmit an iris image or template over a narrow-bandwidth communication channel. Typically, a full resolution image (e.g., VGA) is desired to ensure sufficient pixels across the iris to be confident of accurate recognition results. To minimize the time to transmit a large amount of data over a narrow-bandwidth communication channel, image compression can be used to reduce the file size of the iris image. In other applications, such as the Registered Traveler program, an entire iris image is stored on a smart card, but only 4 kB is allowed for the iris image. For this type of application, image compression is also the solution. This paper investigates the effects of image compression on recognition system performance using a commercial version of the Daugman iris2pi algorithm along with JPEG-2000 compression, and links these to image quality. Using the ICE 2005 iris database, we find that even in the face of significant compression, recognition performance is minimally affected.

  18. Iris Recognition: The Consequences of Image Compression

    Science.gov (United States)

    Ives, Robert W.; Bishop, Daniel A.; Du, Yingzi; Belcher, Craig

    2010-12-01

    Iris recognition for human identification is one of the most accurate biometrics, and its employment is expanding globally. The use of portable iris systems, particularly in law enforcement applications, is growing. In many of these applications, the portable device may be required to transmit an iris image or template over a narrow-bandwidth communication channel. Typically, a full resolution image (e.g., VGA) is desired to ensure sufficient pixels across the iris to be confident of accurate recognition results. To minimize the time to transmit a large amount of data over a narrow-bandwidth communication channel, image compression can be used to reduce the file size of the iris image. In other applications, such as the Registered Traveler program, an entire iris image is stored on a smart card, but only 4 kB is allowed for the iris image. For this type of application, image compression is also the solution. This paper investigates the effects of image compression on recognition system performance using a commercial version of the Daugman iris2pi algorithm along with JPEG-2000 compression, and links these to image quality. Using the ICE 2005 iris database, we find that even in the face of significant compression, recognition performance is minimally affected.

  19. Compressing Aviation Data in XML Format

    Science.gov (United States)

    Patel, Hemil; Lau, Derek; Kulkarni, Deepak

    2003-01-01

    Design, operations and maintenance activities in aviation involve analysis of a variety of aviation data. This data is typically in disparate formats, making it difficult to use with different software packages. Use of a self-describing and extensible standard called XML provides a solution to this interoperability problem. XML provides a standardized language for describing the contents of an information stream, performing the same kind of definitional role for Web content as a database schema performs for relational databases. XML data can be easily customized for display using Extensible Style Sheets (XSL). While the self-describing nature of XML makes it easy to reuse, it also increases the size of data significantly. Therefore, transferring a dataset in XML form can decrease throughput and increase data transfer time significantly. It also increases storage requirements significantly. A natural solution to the problem is to compress the data using a suitable algorithm and transfer it in the compressed form. We found that XML-specific compressors such as Xmill and XMLPPM generally outperform traditional compressors. However, optimal use of Xmill requires discovery of the optimal options to use while running Xmill. This, in turn, depends on the nature of the data used. Manual discovery of optimal settings can require an engineer to experiment for weeks. We have devised an XML compression advisory tool that can analyze sample data files and recommend which compression tool would work best for the data and the optimal settings to use with an XML compression tool.
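The size penalty of XML's verbosity, and how readily a compressor recovers it, can be seen with a toy example. zlib is used here as a generic stand-in; the XML-specific tools named above (Xmill, XMLPPM) typically do better by separating structure from content. The record schema below is invented for illustration.

```python
import zlib

# Verbose, self-describing XML records: tag names dominate the byte count,
# so a general-purpose compressor recovers most of the overhead.
record = "<flight><id>{}</id><alt>30000</alt><speed>450</speed></flight>"
xml = "<log>" + "".join(record.format(i) for i in range(500)) + "</log>"

raw = xml.encode()
packed = zlib.compress(raw, 9)
print(len(raw), "->", len(packed), "bytes")
```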

  20. AIRMaster: Compressed air system audit software

    International Nuclear Information System (INIS)

    Wheeler, G.M.; Bessey, E.G.; McGill, R.D.; Vischer, K.

    1997-01-01

    The project goal was to develop a software tool, AIRMaster, and a methodology for performing compressed air system audits. AIRMaster and supporting manuals are designed for general auditors or plant personnel to evaluate compressed air system operation with simple instrumentation during a short-term audit. AIRMaster provides a systematic approach to compressed air system audits, analyzing collected data, and reporting results. AIRMaster focuses on inexpensive Operation and Maintenance (O and M) measures, such as fixing air leaks and improving controls, that can significantly improve performance and reliability of the compressed air system without significant risk to production. An experienced auditor can perform an audit, analyze collected data, and produce results in 2--3 days. AIRMaster reduces the cost of an audit, thus freeing funds to implement recommendations. The AIRMaster package includes an Audit Manual, Software and User's manual, Analysis Methodology Manual, and a Case Studies summary report. It also includes a Self-Guided Tour booklet to help users quickly screen a plant for efficiency improvement potential, and an Industrial Compressed Air Systems Energy Efficiency Guidebook. AIRMaster proved to be a fast and effective audit tool. In several audits AIRMaster identified energy savings of 4,056,000 kWh, or 49.2% of annual compressor energy use, for a cost savings of $152,000. Total implementation costs were $94,700 for a project payback period of 0.6 years. Available airflow increased between 11% and 51% of plant compressor capacity, leading to potential capital benefits from 40% to 230% of first year energy savings
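The payback figure quoted in the record is straightforward arithmetic (implementation cost divided by first-year savings); a one-line check using the numbers above:

```python
def simple_payback_years(implementation_cost, annual_savings):
    """Simple payback period: cost recovered / savings per year."""
    return implementation_cost / annual_savings

# Figures from the audit summary: $94,700 cost, $152,000/yr savings.
print(round(simple_payback_years(94700, 152000), 1))  # -> 0.6
```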

  1. Manual compression and reflex syncope in native renal biopsy.

    Science.gov (United States)

    Takeuchi, Yoichi; Ojima, Yoshie; Kagaya, Saeko; Aoki, Satoshi; Nagasawa, Tasuku

    2018-03-14

    Complications associated with diagnostic native percutaneous renal biopsy (PRB) must be minimized. While life-threatening major complications have been extensively investigated, there is little discussion regarding minor bleeding complications, such as a transient hypotension, which directly affect patients' quality of life. There is also little evidence supporting the need for conventional manual compression following PRB. Therefore, this study evaluated the relationship between minor and major complications incidence in patients following PRB with or without compression. This single-center, retrospective study included 456 patients (compression group: n = 71; observation group: n = 385). The compression group completed 15 min of manual compression and 4 h of subsequent strict bed rest with abdominal bandage. The observation group completed 2 h of strict bed rest only. The primary outcome of interest was transient symptomatic hypotension (minor event). Of the 456 patients, 26 patients encountered intraoperative and postoperative transient hypotension, which were considered reflex syncope without tachycardia. Univariate analysis showed that symptomatic transient hypotension was significantly associated with compression. This association remained significant, even after adjustment of covariates using multivariate logistic regression analysis (adjusted odds ratio 3.27; 95% confidence interval 1.36-7.82; P = 0.0078). Manual compression and abdominal bandage significantly increased the frequency of reflex syncope during native PRB. It is necessary to consider the potential benefit and risk of compression maneuvers for each patient undergoing this procedure.

  2. Compressed Data Transmission Among Nodes in BigData

    OpenAIRE

    Thirunavukarasu B; Sudhahar V M; VasanthaKumar U; Dr Kalaikumaran T; Dr Karthik S

    2014-01-01

    Many organizations are now dealing with large amounts of data. Traditionally they used relational data, but nowadays they are supposed to use structured and semi-structured data. To work effectively, these organizations use virtualization, parallel processing, compression, etc., of which compression is the most effective. Transmission of high-volume data usually incurs long transmission times. This compression of unstructured data is immediately done when the data is being trans...

  3. Accidental fatal lung injury by compressed air: a case report.

    Science.gov (United States)

    Rayamane, Anand Parashuram; Pradeepkumar, M V

    2015-03-01

    Compressed air is being used extensively as a source of energy at industries and in daily life. A variety of fatal injuries are caused by improper and ignorant use of compressed air equipment. Many types of injuries due to compressed air are reported in the literature, such as colorectal injury, orbital injury, surgical emphysema, and so on. Most of these injuries are accidental in nature. It is documented that 40 pounds per square inch pressure causes fatal injuries to the ear, eyes, lungs, stomach, and intestine. Openings of the body are vulnerable to injuries by compressed air. Death due to compressed air injuries is rarely reported. Many cases are treated successfully by conservative or surgical management. An extensive survey of the literature revealed no reports of fatal injury to the upper respiratory tract and lungs caused by compressed air. Here, we are reporting a fatal event of accidental death after insertion of a compressed air pipe into the mouth. The postmortem findings are corroborated with the history and discussed in detail.

  4. Compression in Working Memory and Its Relationship With Fluid Intelligence.

    Science.gov (United States)

    Chekaf, Mustapha; Gauvrit, Nicolas; Guida, Alessandro; Mathy, Fabien

    2018-06-01

    Working memory has been shown to be strongly related to fluid intelligence; however, our goal is to shed further light on the process of information compression in working memory as a determining factor of fluid intelligence. Our main hypothesis was that compression in working memory is an excellent indicator for studying the relationship between working-memory capacity and fluid intelligence because both depend on the optimization of storage capacity. Compressibility of memoranda was estimated using an algorithmic complexity metric. The results showed that compressibility can be used to predict working-memory performance and that fluid intelligence is well predicted by the ability to compress information. We conclude that the ability to compress information in working memory is the reason why both manipulation and retention of information are linked to intelligence. This result offers a new concept of intelligence based on the idea that compression and intelligence are equivalent problems. Copyright © 2018 Cognitive Science Society, Inc.
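The paper estimates the compressibility of memoranda with an algorithmic complexity metric. A crude but commonly used proxy replaces that metric with a real compressor's ratio; zlib is an assumption here, not the authors' metric:

```python
import zlib

def compressibility(seq: bytes) -> float:
    """Crude proxy for algorithmic complexity: compressed size / raw size.
    Lower values indicate a more regular, more compressible sequence."""
    return len(zlib.compress(seq, 9)) / len(seq)

# A patterned sequence is far more compressible than an irregular one.
regular = b"ABABABABABABABABABABABABABABABAB" * 8
irregular = bytes([(i * 97 + 31) % 251 for i in range(256)])
print(compressibility(regular) < compressibility(irregular))
```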

  5. Spatial compression algorithm for the analysis of very large multivariate images

    Science.gov (United States)

    Keenan, Michael R [Albuquerque, NM

    2008-07-15

    A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.
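A toy illustration of the idea, assuming a one-level Haar transform and hard thresholding; the patented algorithm's wavelet choice, block processing, and coefficient selection are more sophisticated than this sketch.

```python
def haar2d(img):
    """One level of the 2D Haar transform: rows then columns.
    Returns a same-sized matrix of coefficients (LL|LH / HL|HH layout)."""
    def step(v):
        half = len(v) // 2
        avg = [(v[2 * i] + v[2 * i + 1]) / 2 for i in range(half)]
        dif = [(v[2 * i] - v[2 * i + 1]) / 2 for i in range(half)]
        return avg + dif
    rows = [step(list(r)) for r in img]
    cols = [step(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

def threshold(coeffs, thresh):
    """Zero out small wavelet coefficients; return matrix and kept count."""
    out = [[c if abs(c) >= thresh else 0.0 for c in row] for row in coeffs]
    kept = sum(1 for row in out for c in row if c != 0.0)
    return out, kept

# A smooth image concentrates its energy in few wavelet coefficients,
# so most of the transform can be discarded with little information loss.
img = [[float(x + y) for x in range(8)] for y in range(8)]
coeffs = haar2d(img)
packed, kept = threshold(coeffs, 0.6)
print(kept, "of 64 coefficients retained")
```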

  6. Addition of Audiovisual Feedback During Standard Compressions Is Associated with Improved Ability

    Directory of Open Access Journals (Sweden)

    Nicholas Asakawa

    2018-02-01

    Full Text Available Introduction: A benefit of in-hospital cardiac arrest is the opportunity for rapid initiation of “high-quality” chest compressions as defined by current American Heart Association (AHA) adult guidelines as a depth of 2–2.4 inches, full chest recoil, rate 100–120 per minute, and minimal interruptions with a chest compression fraction (CCF) ≥ 60%. The goal of this study was to assess the effect of audiovisual feedback on the ability to maintain high-quality chest compressions as per 2015 updated guidelines. Methods: Ninety-eight participants were randomized into four groups. Participants were randomly assigned to perform chest compressions with or without use of audiovisual feedback (+/− AVF). Participants were further assigned to perform either standard compressions with a ventilation ratio of 30:2 to simulate cardiopulmonary resuscitation (CPR) without an advanced airway, or continuous chest compressions to simulate CPR with an advanced airway. The primary outcome measured was the ability to maintain high-quality chest compressions as defined by current 2015 AHA guidelines. Results: Overall comparisons between continuous and standard chest compressions (n=98) were without significant differences in chest compression dynamics (p’s >0.05). Overall comparisons between +/− AVF (n = 98) were significant for differences in average rate of compressions per minute (p = 0.0241) and proportion of chest compressions within guideline rate recommendations (p = 0.0084). There was a significant difference in the proportion of high-quality chest compressions favoring AVF (p = 0.0399). Comparisons between chest compression strategy groups +/− AVF were significant for differences in compression dynamics favoring AVF (p’s < 0.05). Conclusion: Overall, AVF is associated with greater ability to maintain high-quality chest compressions per most-recent AHA guidelines. Specifically, AVF was associated with a greater proportion of compressions within ideal rate with

  7. Neutralized drift compression experiments with a high-intensity ion beam

    International Nuclear Information System (INIS)

    Roy, P.K.; Yu, S.S.; Waldron, W.L.; Anders, A.; Baca, D.; Barnard, J.J.; Bieniosek, F.M.; Coleman, J.; Davidson, R.C.; Efthimion, P.C.; Eylon, S.; Friedman, A.; Gilson, E.P.; Greenway, W.G.; Henestroza, E.; Kaganovich, I.; Leitner, M.; Logan, B.G.; Sefkow, A.B.; Seidl, P.A.; Sharp, W.M.; Thoma, C.; Welch, D.R.

    2007-01-01

    To create high-energy density matter and fusion conditions, high-power drivers, such as lasers, ion beams, and X-ray drivers, may be employed to heat targets with short pulses compared to hydro-motion. Both high-energy density physics and ion-driven inertial fusion require the simultaneous transverse and longitudinal compression of an ion beam to achieve high intensities. We have previously studied the effects of plasma neutralization for transverse beam compression. The scaled experiment, the Neutralized Transport Experiment (NTX), demonstrated that an initially un-neutralized beam can be compressed transversely to ∼1 mm radius when charge neutralization by background plasma electrons is provided. Here, we report longitudinal compression of a velocity-tailored, intense, neutralized 25 mA K+ beam at 300 keV. The compression takes place in a 1-2 m drift section filled with plasma to provide space-charge neutralization. An induction cell produces a head-to-tail velocity ramp that longitudinally compresses the neutralized beam, enhances the beam peak current by a factor of 50 and produces a pulse duration of about 3 ns. The physics of longitudinal compression, experimental procedure, and the results of the compression experiments are presented

  8. Superconductivity under uniaxial compression in β-(BDA-TTP) salts

    International Nuclear Information System (INIS)

    Suzuki, T.; Onari, S.; Ito, H.; Tanaka, Y.

    2009-01-01

    In order to clarify the mechanism of the organic superconductor β-(BDA-TTP) salts, we study the superconductivity under uniaxial compression with a non-dimerized two-band Hubbard model. We have calculated the uniaxial compression dependence of T c by solving the Eliashberg equation using the fluctuation exchange (FLEX) approximation. The transfer integrals under uniaxial compression were estimated by the extended Huckel method. We have found that the non-monotonic behavior of T c observed experimentally under uniaxial compression is understood by taking spin frustration and spin fluctuation into account.

  9. Superconductivity under uniaxial compression in β-(BDA-TTP) salts

    Science.gov (United States)

    Suzuki, T.; Onari, S.; Ito, H.; Tanaka, Y.

    2009-10-01

    In order to clarify the mechanism of the organic superconductor β-(BDA-TTP) salts, we study the superconductivity under uniaxial compression with a non-dimerized two-band Hubbard model. We have calculated the uniaxial compression dependence of T c by solving the Eliashberg equation using the fluctuation exchange (FLEX) approximation. The transfer integrals under uniaxial compression were estimated by the extended Huckel method. We have found that the non-monotonic behavior of T c observed experimentally under uniaxial compression is understood by taking spin frustration and spin fluctuation into account.

  10. Superconductivity under uniaxial compression in beta-(BDA-TTP) salts

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, T., E-mail: suzuki@rover.nuap.nagoya-u.ac.j [Department of Applied Physics and JST, TRIP, Nagoya University, Chikusa, Nagoya 464-8603 (Japan); Onari, S.; Ito, H.; Tanaka, Y. [Department of Applied Physics and JST, TRIP, Nagoya University, Chikusa, Nagoya 464-8603 (Japan)

    2009-10-15

    In order to clarify the mechanism of the organic superconductor beta-(BDA-TTP) salts, we study the superconductivity under uniaxial compression with a non-dimerized two-band Hubbard model. We have calculated the uniaxial compression dependence of T{sub c} by solving the Eliashberg equation using the fluctuation exchange (FLEX) approximation. The transfer integrals under uniaxial compression were estimated by the extended Huckel method. We have found that the non-monotonic behavior of T{sub c} observed experimentally under uniaxial compression is understood by taking spin frustration and spin fluctuation into account.

  11. Torque Modeling and Control of a Variable Compression Engine

    OpenAIRE

    Bergström, Andreas

    2003-01-01

    The SAAB variable compression engine is a new engine concept that enables the fuel consumption to be radically cut by varying the compression ratio. A challenge with this new engine concept is that the compression ratio has a direct influence on the output torque, which means that a change in compression ratio also leads to a change in the torque. A torque change may be felt as a jerk in the movement of the car, and this is an undesirable effect since the driver has no control over the compre...

  12. The effect of hydraulic bed movement on the quality of chest compressions.

    Science.gov (United States)

    Park, Maeng Real; Lee, Dae Sup; In Kim, Yong; Ryu, Ji Ho; Cho, Young Mo; Kim, Hyung Bin; Yeom, Seok Ran; Min, Mun Ki

    2017-08-01

    The hydraulic height control systems of hospital beds provide convenience and shock absorption. However, movements in a hydraulic bed may reduce the effectiveness of chest compressions. This study investigated the effects of hydraulic bed movement on chest compressions. Twenty-eight participants were recruited for this study. All participants performed chest compressions for 2 min on a manikin and three surfaces: the floor (Day 1), a firm plywood bed (Day 2), and a hydraulic bed (Day 3). We considered the 28 participants on Day 1 as controls and the 28 participants on each of Days 2 and 3 as study subjects. The compression rates, depths, and good compression ratios (>5-cm compressions/all compressions) were compared between the three surfaces. When we compared the three surfaces, we did not detect a significant difference in the speed of chest compressions (p=0.582). However, significantly lower values were observed on the hydraulic bed in terms of compression depth (p=0.001) and the good compression ratio (p=0.003) compared to floor compressions. When we compared the plywood and hydraulic beds, we did not detect significant differences in compression depth (p=0.351) and the good compression ratio (p=0.391). These results indicate that the movements in our hydraulic bed were associated with a non-statistically significant trend towards lower-quality chest compressions. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Compressed sensing electron tomography

    International Nuclear Information System (INIS)

    Leary, Rowan; Saghi, Zineb; Midgley, Paul A.; Holland, Daniel J.

    2013-01-01

    The recent mathematical concept of compressed sensing (CS) asserts that a small number of well-chosen measurements can suffice to reconstruct signals that are amenable to sparse or compressible representation. In addition to powerful theoretical results, the principles of CS are being exploited increasingly across a range of experiments to yield substantial performance gains relative to conventional approaches. In this work we describe the application of CS to electron tomography (ET) reconstruction and demonstrate the efficacy of CS–ET with several example studies. Artefacts present in conventional ET reconstructions such as streaking, blurring of object boundaries and elongation are markedly reduced, and robust reconstruction is shown to be possible from far fewer projections than are normally used. The CS–ET approach enables more reliable quantitative analysis of the reconstructions as well as novel 3D studies from extremely limited data. - Highlights: • Compressed sensing (CS) theory and its application to electron tomography (ET) is described. • The practical implementation of CS–ET is outlined and its efficacy demonstrated with examples. • High fidelity tomographic reconstruction is possible from a small number of images. • The CS–ET reconstructions can be more reliably segmented and analysed quantitatively. • CS–ET is applicable to different image content by choice of an appropriate sparsifying transform

  14. Fast Compressive Tracking.

    Science.gov (United States)

    Zhang, Kaihua; Zhang, Lei; Yang, Ming-Hsuan

    2014-10-01

    It is a challenging task to develop effective and efficient appearance models for robust object tracking due to factors such as pose variation, illumination change, occlusion, and motion blur. Existing online tracking algorithms often update models with samples from observations in recent frames. Although much success has been demonstrated, numerous issues remain to be addressed. First, while these adaptive appearance models are data-dependent, there does not exist a sufficient amount of data for online algorithms to learn at the outset. Second, online tracking algorithms often encounter drift problems. As a result of self-taught learning, misaligned samples are likely to be added and degrade the appearance models. In this paper, we propose a simple yet effective and efficient tracking algorithm with an appearance model based on features extracted from a multiscale image feature space with data-independent basis. The proposed appearance model employs non-adaptive random projections that preserve the structure of the image feature space of objects. A very sparse measurement matrix is constructed to efficiently extract the features for the appearance model. We compress sample images of the foreground target and the background using the same sparse measurement matrix. The tracking task is formulated as a binary classification via a naive Bayes classifier with online update in the compressed domain. A coarse-to-fine search strategy is adopted to further reduce the computational complexity in the detection procedure. The proposed compressive tracking algorithm runs in real-time and performs favorably against state-of-the-art methods on challenging sequences in terms of efficiency, accuracy and robustness.
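The very sparse measurement matrix can be illustrated with the classic Achlioptas/Li construction; the exact sparsity, scaling, and Haar-like features used by the tracker are not reproduced here, so treat this as a generic sketch of random-projection feature compression.

```python
import random

def sparse_measurement_matrix(m, n, s=3, seed=0):
    """Very sparse random projection (Achlioptas/Li style): each entry is
    +sqrt(s) with probability 1/(2s), -sqrt(s) with probability 1/(2s),
    and 0 otherwise, so most multiplications are skipped in practice."""
    rng = random.Random(seed)
    scale = s ** 0.5
    matrix = []
    for _ in range(m):
        row = []
        for _ in range(n):
            u = rng.random()
            if u < 1 / (2 * s):
                row.append(scale)
            elif u < 1 / s:
                row.append(-scale)
            else:
                row.append(0.0)
        matrix.append(row)
    return matrix

def project(matrix, x):
    """Compress an n-dimensional feature vector to m dimensions."""
    return [sum(mij * xj for mij, xj in zip(row, x)) for row in matrix]

M = sparse_measurement_matrix(m=50, n=1000)
x = [1.0] * 1000
z = project(M, x)
print(len(z))  # 50 compressed features
```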

  15. Dynamic compression and sound quality of music

    NARCIS (Netherlands)

    Lieshout, van R.A.J.M.; Wagenaars, W.M.; Houtsma, A.J.M.; Stikvoort, E.F.

    1984-01-01

    Amplitude compression is often used to match the dynamic range of music to a particular playback situation in order to ensure, e.g., continuous audibility in a noisy environment or unobtrusiveness if the music is intended as a quiet background. Since amplitude compression is a nonlinear process,

  16. Subjective evaluation of dynamic compression in music

    NARCIS (Netherlands)

    Wagenaars, W.M.; Houtsma, A.J.M.; Lieshout, van R.A.J.M.

    1986-01-01

    Amplitude compression is often used to match the dynamic range of music to a particular playback situation so as to ensure continuous audibility in a noisy environment. Since amplitude compression is a nonlinear process, it is potentially very damaging to sound quality. Three physical parameters of
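For readers unfamiliar with the nonlinearity involved, a minimal static-curve compressor shows why dynamics processing can alter sound quality. The attack/release smoothing and make-up gain present in real compressors are omitted; threshold and ratio values are illustrative.

```python
def compress_dynamics(samples, threshold=0.5, ratio=4.0):
    """Static-curve compressor sketch: amplitudes above `threshold` are
    reduced by `ratio`; below the threshold the signal passes unchanged.
    This input-output curve is nonlinear, which is the source of the
    potential sound-quality degradation discussed above."""
    out = []
    for s in samples:
        a = abs(s)
        if a > threshold:
            a = threshold + (a - threshold) / ratio
        out.append(a if s >= 0 else -a)
    return out

quiet, loud = 0.3, 0.9
y = compress_dynamics([quiet, loud])
print(y)  # quiet sample untouched, loud sample pulled toward the threshold
```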

  17. Roofbolters with compressed-air rotators

    Science.gov (United States)

    Lantsevich, MA; Repin, AA; Klishin, VI; Kokoulin, DI

    2018-03-01

    The specifications of the most popular roofbolters of domestic and foreign manufacture currently in operation in coal mines are discussed. Compressed-air roofbolters SAP and SAP2 designed at the Institute of Mining are capable of drilling in hard rocks. The authors describe the compressed-air rotator of the SAP2 roofbolter with alternate motion rotors. A comparative analysis of the characteristics of the SAP and SAP2 roofbolters shows that the combination of high-frequency axial and rotary impacts on the drilling tool in SAP2 ensures efficient drilling in rocks with strengths up to 160 MPa.

  18. Music analysis and point-set compression

    DEFF Research Database (Denmark)

    Meredith, David

    2015-01-01

    COSIATEC, SIATECCompress and Forth’s algorithm are point-set compression algorithms developed for discovering repeated patterns in music, such as themes and motives that would be of interest to a music analyst. To investigate their effectiveness and versatility, these algorithms were evaluated...... on three analytical tasks that depend on the discovery of repeated patterns: classifying folk song melodies into tune families, discovering themes and sections in polyphonic music, and discovering subject and countersubject entries in fugues. Each algorithm computes a compressed encoding of a point......-set representation of a musical object in the form of a list of compact patterns, each pattern being given with a set of vectors indicating its occurrences. However, the algorithms adopt different strategies in their attempts to discover encodings that maximize compression. The best-performing algorithm on the folk...

  19. Compressive strength improvement for recycled concrete aggregate

    Directory of Open Access Journals (Sweden)

    Mohammed Dhiyaa

    2018-01-01

Full Text Available The increasing amount of construction waste, and of concrete remnants in particular, poses a serious problem. Concrete waste exists in large amounts, does not decay and needs a long time to disintegrate. Therefore, in this work old demolished concrete is crushed and recycled to produce recycled concrete aggregate, which can be reused in new concrete production. The effect of using recycled aggregate on concrete compressive strength has been experimentally investigated; a silica fume admixture is also used to improve the compressive strength of recycled aggregate concrete. The main parameters in this study are the recycled aggregate content, ranging from 0% to 100%, and the silica fume content, ranging from 0% to 10%. The experimental results show that the average concrete compressive strength decreases from 30.85 MPa to 17.58 MPa as the recycled aggregate percentage increases from 0% to 100%, while adding silica fume raises the compressive strength back to 29.2 MPa for samples with 100% recycled aggregate.

  20. Adiabatic Liquid Piston Compressed Air Energy Storage

    DEFF Research Database (Denmark)

    Petersen, Tage; Elmegaard, Brian; Pedersen, Allan Schrøder

… the system. The compression leads to a significant increase in temperature, and the heat generated is dumped into the ambient. This energy loss results in a low efficiency of the system, and when expanding the air, the expansion leads to a temperature drop, reducing the mechanical output of the expansion …, but no such units are in operation at present. The CAES system investigated in this project uses a different approach to avoid compression heat loss. The system uses a pre-compressed pressure vessel full of air. A liquid is pumped into the bottom of the vessel when charging and the same liquid is withdrawn through … -CAES system is significantly higher than existing CAES systems due to a low or nearly absent compression heat loss. Furthermore, pumps/turbines, which use a liquid as a medium, are more efficient than air/gas compressors/turbines. In addition, no fuel is demanded during expansion. • The energy …

  1. 2D-RBUC for efficient parallel compression of residuals

    Science.gov (United States)

    Đurđević, Đorđe M.; Tartalja, Igor I.

    2018-02-01

In this paper, we present a method for lossless compression of residuals with efficient SIMD-parallel decompression. The residuals originate from lossy or near-lossless compression of height fields, which are commonly used to represent terrain models. The algorithm is founded on the existing RBUC method for compression of non-uniform data sources. We have adapted the method to capture the 2D spatial locality of height fields, and developed a data decompression algorithm for modern GPU architectures, already present even in home computers. In combination with the point-level SIMD-parallel lossless/lossy height field compression method HFPaC, characterized by fast progressive decompression and a seamlessly reconstructed surface, the newly proposed method trades a small efficiency degradation for a non-negligible compression ratio benefit (measured at up to 91%).
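A minimal one-level sketch of the block-packing idea behind RBUC-style residual coding (the real RBUC applies the same idea recursively to the block widths themselves; the block size and 5-bit width field here are illustrative assumptions):

```python
def pack_blocks(values, block=4):
    """Split residuals into blocks; record each block's bit width once."""
    out = []                                  # list of (width, block values)
    for i in range(0, len(values), block):
        chunk = values[i:i + block]
        width = max(v.bit_length() for v in chunk) or 1
        out.append((width, chunk))
    return out

def packed_bits(encoding, width_field=5):
    """Total encoded size: a width field per block plus width bits per value."""
    return sum(width_field + w * len(chunk) for w, chunk in encoding)

residuals = [1, 0, 2, 1, 3, 2, 1, 0, 250, 251, 249, 250]
enc = pack_blocks(residuals)
assert [w for w, _ in enc] == [2, 2, 8]       # small residuals -> narrow blocks
assert packed_bits(enc) < 8 * len(residuals)  # beats one byte per residual
```

Because the width is stored per block rather than per value, spatially coherent (mostly small) residuals pack into a few bits each, which is what makes the scheme attractive for height-field residuals.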

  2. Low-Complexity Lossless and Near-Lossless Data Compression Technique for Multispectral Imagery

    Science.gov (United States)

    Xie, Hua; Klimesh, Matthew A.

    2009-01-01

    This work extends the lossless data compression technique described in Fast Lossless Compression of Multispectral- Image Data, (NPO-42517) NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26. The original technique was extended to include a near-lossless compression option, allowing substantially smaller compressed file sizes when a small amount of distortion can be tolerated. Near-lossless compression is obtained by including a quantization step prior to encoding of prediction residuals. The original technique uses lossless predictive compression and is designed for use on multispectral imagery. A lossless predictive data compression algorithm compresses a digitized signal one sample at a time as follows: First, a sample value is predicted from previously encoded samples. The difference between the actual sample value and the prediction is called the prediction residual. The prediction residual is encoded into the compressed file. The decompressor can form the same predicted sample and can decode the prediction residual from the compressed file, and so can reconstruct the original sample. A lossless predictive compression algorithm can generally be converted to a near-lossless compression algorithm by quantizing the prediction residuals prior to encoding them. In this case, since the reconstructed sample values will not be identical to the original sample values, the encoder must determine the values that will be reconstructed and use these values for predicting later sample values. The technique described here uses this method, starting with the original technique, to allow near-lossless compression. The extension to allow near-lossless compression adds the ability to achieve much more compression when small amounts of distortion are tolerable, while retaining the low complexity and good overall compression effectiveness of the original algorithm.
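The near-lossless scheme described above can be sketched as follows, assuming a simple previous-sample predictor and a uniform quantizer of step 2·delta+1; the actual NASA predictor and entropy coder are more elaborate:

```python
def compress(samples, delta):
    """Encode samples as quantized prediction residuals; delta is the max
    allowed per-sample error (delta=0 gives lossless compression)."""
    step = 2 * delta + 1                 # uniform quantizer bin width
    residuals, prediction = [], 0
    for s in samples:
        r = s - prediction
        # quantize the residual to the nearest multiple of step
        q = (r + delta) // step if r >= 0 else -((-r + delta) // step)
        residuals.append(q)
        prediction += q * step           # track the decoder-visible reconstruction
    return residuals

def decompress(residuals, delta):
    step = 2 * delta + 1
    out, prediction = [], 0
    for q in residuals:
        prediction += q * step
        out.append(prediction)
    return out

data = [10, 12, 15, 15, 14, 200, 201]
dec = decompress(compress(data, delta=1), delta=1)
assert all(abs(a - b) <= 1 for a, b in zip(data, dec))   # near-lossless bound
```

Note how the encoder predicts from the reconstructed values rather than the originals, exactly as the abstract requires, so encoder and decoder never drift apart.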

  3. Medical image compression and its application to TDIS-FILE equipment

    International Nuclear Information System (INIS)

    Tsubura, Shin-ichi; Nishihara, Eitaro; Iwai, Shunsuke

    1990-01-01

    In order to compress medical images for filing and communication, we have developed a compression algorithm which compresses images with remarkable quality using a high-pass filtering method. Hardware for this compression algorithm was also developed and applied to TDIS (total digital imaging system)-FILE equipment. In the future, hardware based on this algorithm will be developed for various types of diagnostic equipment and PACS. This technique has the following characteristics: (1) significant reduction of artifacts; (2) acceptable quality for clinical evaluation at 15:1 to 20:1 compression ratio; and (3) high-speed processing and compact hardware. (author)

  4. Compression and decompression of digital seismic waveform data for storage and communication

    International Nuclear Information System (INIS)

    Bhadauria, Y.S.; Kumar, Vijai

    1991-01-01

Two different classes of data compression schemes, namely physical data compression schemes and logical data compression schemes, are examined for their use in the storage and communication of digital seismic waveform data. In physical data compression schemes, the physical size of the waveform is reduced; one therefore gets only a broad picture of the original waveform when the data are retrieved and the waveform is reconstituted. The correlation between the original and decompressed waveform varies inversely with the data compression ratio. In logical data compression schemes, the data are stored in a logically encoded form, avoiding the storage of unnecessary characters such as blank spaces. On decompression, the original data are retrieved and the compression error is nil. Three algorithms of the logical data compression class have been developed and studied: (1) an optimum formatting scheme, (2) a differential bit reduction scheme, and (3) a six bit compression scheme. Results of these three logical compression algorithms are compared with those of physical compression schemes reported in the literature. It is found that for all types of data, the six bit compression scheme gives the highest data compression ratio. (author). 6 refs., 8 figs., 1 appendix, 2 tabs
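The six bit compression scheme can be illustrated as follows: characters from a restricted 64-symbol alphabet are re-encoded in 6 bits instead of 8, a lossless reduction to 75% of the original size. The alphabet below is an assumption for illustration, since the authors' exact character set is not given here:

```python
import string

ALPHABET = string.digits + string.ascii_letters + "+-"   # 64 distinct symbols
CODE = {ch: i for i, ch in enumerate(ALPHABET)}

def pack6(text):
    """Re-encode each character in 6 bits, packed into bytes."""
    acc = bits = 0
    buf = bytearray()
    for ch in text:
        acc = (acc << 6) | CODE[ch]
        bits += 6
        while bits >= 8:
            bits -= 8
            buf.append((acc >> bits) & 0xFF)
    if bits:                                  # flush the final partial byte
        buf.append((acc << (8 - bits)) & 0xFF)
    return bytes(buf), len(text)

def unpack6(data, n):
    """Recover the original n characters from the packed bytes."""
    acc = bits = 0
    out = []
    for byte in data:
        acc = (acc << 8) | byte
        bits += 8
        while bits >= 6 and len(out) < n:
            bits -= 6
            out.append(ALPHABET[(acc >> bits) & 0x3F])
    return "".join(out)

sample = "P1234E5"                           # e.g. a formatted waveform sample
packed, n = pack6(sample)
assert unpack6(packed, n) == sample          # logical compression: error is nil
assert len(packed) == 6                      # 7 chars x 6 bits = 42 bits -> 6 bytes
```

Round-tripping exactly, with a fixed 6/8 size ratio, is what distinguishes this logical scheme from the lossy physical schemes in the abstract.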

  5. Curvelet-based compressive sensing for InSAR raw data

    Science.gov (United States)

    Costa, Marcello G.; da Silva Pinho, Marcelo; Fernandes, David

    2015-10-01

The aim of this work is to evaluate the compression performance of SAR raw data for interferometry applications, collected by airborne platforms from BRADAR (the Brazilian SAR system operating in X and P bands), using a new approach based on compressive sensing (CS) to achieve effective recovery with good phase preservation. For this framework, real-time capability is desirable, so that the collected data can be compressed to reduce onboard storage and the bandwidth required for transmission. In CS theory, a sparse unknown signal can be recovered from a small number of random or pseudo-random measurements by sparsity-promoting nonlinear recovery algorithms; the original signal volume can therefore be significantly reduced. To achieve a sparse representation of the SAR signal, a curvelet transform was applied. The curvelets constitute a directional frame that allows an optimal sparse representation of objects with discontinuities along smooth curves, as observed in raw data, and provides advanced denoising optimization. For the tests, a scene of 8192 x 2048 samples in range and azimuth, in X-band with 2 m resolution, was made available. The sparse representation was compressed using low-dimension measurement matrices in each curvelet subband. An iterative CS reconstruction method based on IST (iterative soft/shrinkage thresholding) was then adjusted to recover the curvelet coefficients and hence the original signal. To evaluate the compression performance, the compression ratio (CR) and signal-to-noise ratio (SNR) were computed; because interferometry applications require higher reconstruction accuracy, phase parameters such as the standard deviation of the phase (PSD) and the mean phase error (MPE) were also computed. Moreover, in the image domain, a single-look complex image was generated to evaluate the compression effects. All results were analyzed in terms of sparsity to provide efficient compression and quality recovery appropriate for InSAR applications.
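The IST recovery step can be sketched in a few lines; the matrix sizes, regularization weight `lam`, and step size are illustrative assumptions, not the paper's values, and a Gaussian measurement matrix stands in for the curvelet-subband measurements:

```python
import numpy as np

def soft(v, t):
    """Soft/shrinkage thresholding operator."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ist(A, y, lam=0.02, n_iter=2000):
    """Recover a sparse x from y = A @ x by iterative soft thresholding."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # step <= 1/||A||_2^2 for convergence
    for _ in range(n_iter):
        x = soft(x + step * A.T @ (y - A @ x), lam * step)
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 96, 5                         # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 3.0 * rng.standard_normal(k)
y = A @ x_true                               # compressive measurements
x_hat = ist(A, y)
assert np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true) < 0.3
```

Each iteration takes a gradient step toward data consistency and then shrinks small coefficients toward zero, which is the sparsity-promoting mechanism the abstract relies on.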

  6. An efficient and extensible approach for compressing phylogenetic trees.

    Science.gov (United States)

    Matthews, Suzanne J; Williams, Tiffani L

    2011-10-18

Biologists require new algorithms to efficiently compress and store their large collections of phylogenetic trees. Our previous work showed that TreeZip is a promising approach for compressing phylogenetic trees. In this paper, we extend our TreeZip algorithm to handle trees with weighted branches. Furthermore, by using the compressed TreeZip file as input, we have designed an extensible decompressor that can extract subcollections of trees, compute majority and strict consensus trees, and merge tree collections using set operations such as union, intersection, and set difference. On unweighted phylogenetic trees, TreeZip is able to compress Newick files in excess of 98%. On weighted phylogenetic trees, TreeZip is able to compress a Newick file by at least 73%. TreeZip can be combined with 7zip with little overhead, allowing space savings in excess of 99% (unweighted) and 92% (weighted). Unlike TreeZip, 7zip is not immune to branch rotations, and performs worse as the level of variability in the Newick string representation increases. Finally, since the TreeZip compressed text (TRZ) file contains all the semantic information in a collection of trees, we can easily filter and decompress a subset of trees of interest (such as the set of unique trees), or build the resulting consensus tree in a matter of seconds. We also show the ease with which set operations can be performed on TRZ files, at speeds quicker than those achieved on Newick or 7zip-compressed Newick files, and without loss of space savings. TreeZip is an efficient approach for compressing large collections of phylogenetic trees. The semantic and compact nature of the TRZ file allows it to be operated upon directly and quickly, without a need to decompress the original Newick file. We believe that TreeZip will be vital for compressing and archiving trees in the biological community.
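The semantic operations TreeZip enables can be illustrated with a toy model in which each tree is represented by its set of splits (clades), so that merging collections and computing strict consensus reduce to plain set algebra; `tree` below is a hypothetical stand-in for actual Newick parsing:

```python
def tree(*clades):
    """Hypothetical stand-in for Newick parsing: a tree as its set of clades."""
    return frozenset(frozenset(c) for c in clades)

t1 = tree(("A", "B"), ("A", "B", "C"))
t2 = tree(("A", "B"), ("D", "E"))
t3 = tree(("A", "B"), ("A", "B", "C"))      # same topology as t1

collection1 = {t1, t2}
collection2 = {t2, t3}

# set operations on whole-tree collections (t1 == t3, so duplicates merge)
union = collection1 | collection2
intersection = collection1 & collection2
difference = collection1 - collection2

# strict consensus of a collection: the clades present in every tree
strict = frozenset.intersection(*collection1)
assert frozenset({"A", "B"}) in strict
assert len(union) == 2 and len(intersection) == 2 and not difference
```

Because equal topologies hash to the same set of splits regardless of how the Newick string rotates its branches, this representation is immune to branch rotations in the way the abstract describes for TRZ files.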

  7. 41 CFR 50-204.8 - Use of compressed air.

    Science.gov (United States)

    2010-07-01

    ... 41 Public Contracts and Property Management 1 2010-07-01 2010-07-01 true Use of compressed air. 50-204.8 Section 50-204.8 Public Contracts and Property Management Other Provisions Relating to Public... General Safety and Health Standards § 50-204.8 Use of compressed air. Compressed air shall not be used for...

  8. Dynamics of heavy ion beams during longitudinal compression

    International Nuclear Information System (INIS)

    Ho, D.D.M.; Bangerter, R.O.; Lee, E.P.; Brandon, S.; Mark, J.W.K.

    1987-01-01

    Heavy ion beams with initially uniform line charge density can be compressed longitudinally by an order of magnitude in such a way that the compressed beam has uniform line charge density and velocity-tilt profiles. There are no envelope mismatch oscillations during compression. Although the transverse temperature varies along the beam and also varies with time, no substantial longitudinal and transverse emittance growth has been observed. Scaling laws for beam radius and transport system parameters are given

  9. Treatment of fully enclosed FSI using artificial compressibility

    CSIR Research Space (South Africa)

    Bogaers, Alfred EJ

    2013-07-01

Full Text Available … artificial compressibility (AC), whereby the fluid equations are modified to allow for compressibility, which internally incorporates an approximation of the system volume change as a function of pressure …

  10. Three-dimensional numerical simulation for plastic injection-compression molding

    Science.gov (United States)

    Zhang, Yun; Yu, Wenjie; Liang, Junjie; Lang, Jianlin; Li, Dequn

    2018-03-01

Compared with conventional injection molding, injection-compression molding can mold optical parts with higher precision and lower residual flow stress. However, the melt flow process in a closed cavity becomes more complex because of the moving cavity boundary during compression and the nonlinear behavior of the non-Newtonian polymer melt. In this study, a 3D simulation method was developed for injection-compression molding. In this method, an arbitrary Lagrangian-Eulerian formulation was introduced to model the moving-boundary flow problem in the compression stage. The non-Newtonian characteristics and compressibility of the polymer melt were considered. The melt flow and pressure distribution in the cavity were investigated by using the proposed simulation method and compared with those of injection molding. Results reveal that the fountain flow effect becomes significant when the cavity thickness increases during compression. The back flow also plays an important role in the flow pattern and redistribution of cavity pressure. Unlike in injection molding, where the pressure decreases monotonically along the flow path, the discrepancy in pressures at different points along the flow path is complicated.

  11. Hyperspectral image compressing using wavelet-based method

    Science.gov (United States)

    Yu, Hui; Zhang, Zhi-jie; Lei, Bo; Wang, Chen-sheng

    2017-10-01

Hyperspectral imaging sensors can acquire images in hundreds of continuous narrow spectral bands, so each object present in the image can be identified from its spectral response. However, this kind of imaging produces a huge amount of data, which requires transmission, processing, and storage resources for both airborne and spaceborne imaging. Due to the high volume of hyperspectral image data, the exploration of compression strategies has received a lot of attention in recent years. Compression of hyperspectral data cubes is an effective solution to these problems. Lossless compression of hyperspectral data usually results in a low compression ratio, which may not meet the available resources; on the other hand, lossy compression may give the desired ratio, but with a significant degradation of object identification performance on the hyperspectral data. Moreover, most hyperspectral data compression techniques exploit similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we explored the spectral cross-correlation between different bands and proposed an adaptive band selection method to obtain the spectral bands that contain most of the information of the acquired hyperspectral data cube. The proposed method mainly consists of three steps: first, the algorithm decomposes the original hyperspectral imagery into a series of subspaces based on the inter-band correlation matrix of the hyperspectral images; then the wavelet-based algorithm is applied to each subspace; finally, the PCA method is applied to the wavelet coefficients to produce the chosen number of components. The performance of the proposed method was tested by using the ISODATA classification method.
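The band-grouping step can be sketched as follows, assuming a simple greedy rule that keeps adjacent bands in the same subspace while their correlation stays above a threshold; the threshold and the synthetic cube are illustrative only:

```python
import numpy as np

def band_groups(cube, threshold=0.95):
    """cube: (bands, rows, cols). Greedily group contiguous correlated bands."""
    bands = cube.shape[0]
    flat = cube.reshape(bands, -1).astype(float)
    corr = np.corrcoef(flat)                 # bands x bands correlation matrix
    groups, current = [], [0]
    for b in range(1, bands):
        if corr[b, b - 1] >= threshold:      # still correlated with previous band
            current.append(b)
        else:
            groups.append(current)
            current = [b]
    groups.append(current)
    return groups

rng = np.random.default_rng(1)
base1 = rng.standard_normal((8, 8))
base2 = rng.standard_normal((8, 8))
# bands 0-2 are noisy copies of one spectrum, bands 3-5 of another
cube = np.stack([base1 + 0.01 * rng.standard_normal((8, 8)) for _ in range(3)] +
                [base2 + 0.01 * rng.standard_normal((8, 8)) for _ in range(3)])
assert band_groups(cube) == [[0, 1, 2], [3, 4, 5]]
```

Each resulting group is a candidate subspace to which the wavelet transform and PCA steps of the abstract would then be applied.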

  12. Signal compression in radar using FPGA

    OpenAIRE

Escamilla Hernández, Enrique; Kravchenko, Víctor; Ponomaryov, Volodymyr; Duchen Sánchez, Gonzalo; Hernández Sánchez, David

    2010-01-01

We present the hardware implementation of radar real-time processing procedures using a simple, fast technique based on an FPGA (Field Programmable Gate Array) architecture. This processing includes different window procedures during pulse compression in synthetic aperture radar (SAR). The radar signal compression is realized using a matched filter and classical and novel window functions, where we focus on better solutions for minimizing sidelobe levels. The proposed architecture expl...

  13. Effect of JPEG2000 mammogram compression on microcalcifications segmentation

    International Nuclear Information System (INIS)

    Georgiev, V.; Arikidis, N.; Karahaliou, A.; Skiadopoulos, S.; Costaridou, L.

    2012-01-01

The purpose of this study is to investigate the effect of mammographic image compression on the automated segmentation of individual microcalcifications. The dataset consisted of individual microcalcifications of 105 clusters originating from mammograms of the Digital Database for Screening Mammography. A JPEG2000 wavelet-based compression algorithm was used for compressing mammograms at 7 compression ratios (CRs): 10:1, 20:1, 30:1, 40:1, 50:1, 70:1 and 100:1. A gradient-based active contours segmentation algorithm was employed for segmentation of microcalcifications as depicted on original and compressed mammograms. The performance of the microcalcification segmentation algorithm on original and compressed mammograms was evaluated by means of the area overlap measure (AOM) and distance differentiation metrics (d_mean and d_max) by comparing automatically derived microcalcification borders to manually defined ones by an expert radiologist. The AOM monotonically decreased as CR increased, while the d_mean and d_max metrics monotonically increased with CR increase. The performance of the segmentation algorithm on original mammograms was (mean ± standard deviation): AOM = 0.91±0.08, d_mean = 0.06±0.05 and d_max = 0.45±0.20, while on 40:1 compressed images the algorithm's performance was: AOM = 0.69±0.15, d_mean = 0.23±0.13 and d_max = 0.92±0.39. Mammographic image compression deteriorates the performance of the segmentation algorithm, influencing the quantification of individual microcalcification morphological properties and subsequently affecting computer-aided diagnosis of microcalcification clusters. (authors)
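The area overlap measure can be computed on binary segmentation masks; intersection over union is used here as one common definition, though the paper's exact normalization may differ:

```python
import numpy as np

def area_overlap(auto_mask, manual_mask):
    """AOM as intersection over union of two binary masks."""
    inter = np.logical_and(auto_mask, manual_mask).sum()
    union = np.logical_or(auto_mask, manual_mask).sum()
    return inter / union if union else 1.0

manual = np.zeros((8, 8), dtype=bool)
manual[2:6, 2:6] = True           # 16-pixel manually defined border region
auto = np.zeros((8, 8), dtype=bool)
auto[3:7, 2:6] = True             # automatic segmentation shifted by one row
assert area_overlap(auto, manual) == 12 / 20
```

A perfect segmentation gives AOM = 1, and the one-row shift above drops it to 0.6, mirroring how compression-induced border errors reduce the reported AOM values.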

  14. Compressed air massage hastens healing of the diabetic foot.

    Science.gov (United States)

    Mars, M; Desai, Y; Gregory, M A

    2008-02-01

The management of diabetic foot ulcers remains a problem. A treatment modality that uses compressed air massage has been developed as a supplement to standard surgical and medical treatment. Compressed air massage is thought to improve local tissue oxygenation around ulcers. The aim of this study was to determine whether the addition of compressed air massage influences the rate of healing of diabetic ulcers. Sixty consecutive patients with diabetes, admitted to one hospital for urgent surgical management of diabetic foot ulcers, were randomized into two groups. Both groups received standard medical and surgical management of their diabetes and ulcer. In addition, one group received 15-20 min of compressed air massage, at 1 bar pressure, daily, for 5 days a week, to the foot and the tissue around the ulcer. Healing time was calculated as the time from admission to the time of re-epithelialization. Fifty-seven patients completed the trial; 28 received compressed air massage. There was no difference in the mean age, Wagner score, ulcer size, pulse status, or peripheral sensation in the two groups. The time to healing in the compressed air massage group was significantly reduced: 58.1 ± 22.3 days (95% confidence interval: 49.5-66.6) versus 82.7 ± 30.7 days (95% confidence interval: 70.0-94.3) (P = 0.001). No adverse effects in response to compressed air massage were noted. The addition of compressed air massage to standard medical and surgical management of diabetic ulcers appears to enhance ulcer healing. Further studies with this new treatment modality are warranted.

  15. A new hyperspectral image compression paradigm based on fusion

    Science.gov (United States)

    Guerra, Raúl; Melián, José; López, Sebastián.; Sarmiento, Roberto

    2016-10-01

The on-board compression of remote sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed on the satellite that carries the hyperspectral sensor; hence, this process must be performed by space-qualified hardware, with area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first is to spatially degrade the remote sensed hyperspectral image to obtain a low resolution hyperspectral image. The second is to spectrally degrade the remote sensed hyperspectral image to obtain a high resolution multispectral image. These two degraded images are then sent to the earth surface, where they must be fused using a fusion algorithm for hyperspectral and multispectral images, in order to recover the remote sensed hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on-board, becomes very simple, while the fusion process used to reconstruct the image is the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image. The results obtained corroborate the benefits of the proposed methodology.

  16. Thermodynamic and aerodynamic meanline analysis of wet compression in a centrifugal compressor

    International Nuclear Information System (INIS)

    Kang, Jeong Seek; Cha, Bong Jun; Yang, Soo Seok

    2006-01-01

Wet compression means the injection of water droplets into the compressor of a gas turbine. This method decreases the compression work and increases the turbine output by decreasing the compressor exit temperature through the evaporation of water droplets inside the compressor. Research on wet compression has so far focused on thermodynamic analysis, demonstrating the decrease in exit flow temperature and compression work. This paper provides a thermodynamic and aerodynamic analysis of wet compression in a centrifugal compressor for a microturbine. The meanline dry-compression performance analysis of the centrifugal compressor is coupled with the thermodynamic equations of wet compression to obtain the meanline performance under wet compression. The most influential parameter in the analysis is the evaporative rate of the water droplets. It is found that the impeller exit flow temperature and the compression work decrease as the evaporative rate increases, and that the exit flow angle also decreases as the evaporative rate increases.
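The thermodynamic benefit can be illustrated with a back-of-the-envelope polytropic-work comparison, in which evaporation lowers the effective polytropic exponent below the dry adiabatic value of 1.4; all numbers below are illustrative assumptions, not the paper's data:

```python
# Illustrative gas-property values (assumptions, not the paper's data)
R = 287.0     # J/(kg K), specific gas constant of air
T1 = 300.0    # K, compressor inlet temperature
pr = 4.0      # total pressure ratio

def compression_work(n):
    """Polytropic compression work per kg: w = n R T1/(n-1) * (pr^((n-1)/n) - 1)."""
    return n * R * T1 / (n - 1) * (pr ** ((n - 1) / n) - 1)

w_dry = compression_work(1.4)    # dry adiabatic compression
w_wet = compression_work(1.25)   # evaporation absorbs part of the compression heat
assert w_wet < w_dry             # wet compression reduces the specific work
```

The same expression also gives a lower exit temperature for the smaller exponent, which is the mechanism behind the reduced compression work reported in the abstract.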

  17. n-Gram-Based Text Compression

    Science.gov (United States)

    Duong, Hieu N.; Snasel, Vaclav

    2016-01-01

We propose an efficient method for compressing Vietnamese text using n-gram dictionaries. It achieves a significant compression ratio in comparison with state-of-the-art methods on the same dataset. Given a text, the proposed method first splits it into n-grams and then encodes them based on n-gram dictionaries. In the encoding phase, we use a sliding window with a size ranging from bigram to five-gram to obtain the best encoding stream. Each n-gram is encoded in two to four bytes according to its corresponding n-gram dictionary. We collected a 2.5 GB text corpus from Vietnamese news agencies to build n-gram dictionaries from unigram to five-gram, yielding dictionaries with a total size of 12 GB. To evaluate our method, we collected a testing set of 10 text files of different sizes. The experimental results indicate that our method achieves a compression ratio of around 90% and outperforms state-of-the-art methods. PMID:27965708
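The sliding-window encoding phase can be sketched as a greedy longest-match over n-gram dictionaries; the toy dictionaries and the (length, code) output pairs below are illustrative, whereas the real method emits 2-4 byte codes from corpus-built dictionaries:

```python
def encode(words, dictionaries):
    """dictionaries: dict mapping n -> {ngram-tuple: code}. Greedy longest match."""
    out, i = [], 0
    while i < len(words):
        for n in range(5, 0, -1):                 # prefer five-grams down to unigrams
            gram = tuple(words[i:i + n])
            if n <= len(words) - i and gram in dictionaries.get(n, {}):
                out.append((n, dictionaries[n][gram]))
                i += n
                break
        else:
            raise KeyError(f"out-of-vocabulary word: {words[i]}")
    return out

# hypothetical toy dictionaries for illustration
dictionaries = {
    1: {("xin",): 0, ("chao",): 1, ("ban",): 2},
    2: {("xin", "chao"): 0, ("cac", "ban"): 1},
}
tokens = "xin chao cac ban".split()
assert encode(tokens, dictionaries) == [(2, 0), (2, 1)]
```

Matching the longest available n-gram first is what lets frequent multi-word phrases collapse into single short codes, driving the high compression ratio.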

  18. [Compression treatment for burned skin].

    Science.gov (United States)

    Jaafar, Fadhel; Lassoued, Mohamed A; Sahnoun, Mahdi; Sfar, Souad; Cheikhrouhou, Morched

    2012-02-01

The regularity of a compressive knit is defined as its ability to perform its function on burned skin. This property is essential to avoid rejection of the material or toxicity problems. Objective: to make knits biocompatible with severely burned human skin. We fabricated knits from elastic material. To ensure good adhesion to the skin, the elastic material was knitted with a tight loop. The length of yarn absorbed per stitch and the raw material were changed for each sample. The physical properties of each sample were measured and compared. Surface modifications were made to these samples by impregnation with microcapsules based on jojoba oil. The knits are compressive, elastic in all directions, light, thin, comfortable, and washable for hygiene reasons; moreover, they regain their compressive properties after washing. The jojoba oil microcapsules hydrate the burned human skin. This moisturizing contributes to the firmness of the wound and gives flexibility to the skin. Compressive knits are biocompatible with burned skin. The mixture of natural and synthetic fibers is irreplaceable in terms of comfort and regularity.

  19. n-Gram-Based Text Compression

    Directory of Open Access Journals (Sweden)

    Vu H. Nguyen

    2016-01-01

Full Text Available We propose an efficient method for compressing Vietnamese text using n-gram dictionaries. It achieves a significant compression ratio in comparison with state-of-the-art methods on the same dataset. Given a text, the proposed method first splits it into n-grams and then encodes them based on n-gram dictionaries. In the encoding phase, we use a sliding window with a size ranging from bigram to five-gram to obtain the best encoding stream. Each n-gram is encoded in two to four bytes according to its corresponding n-gram dictionary. We collected a 2.5 GB text corpus from Vietnamese news agencies to build n-gram dictionaries from unigram to five-gram, yielding dictionaries with a total size of 12 GB. To evaluate our method, we collected a testing set of 10 text files of different sizes. The experimental results indicate that our method achieves a compression ratio of around 90% and outperforms state-of-the-art methods.

  20. Image compression of bone images

    International Nuclear Information System (INIS)

    Hayrapetian, A.; Kangarloo, H.; Chan, K.K.; Ho, B.; Huang, H.K.

    1989-01-01

    This paper reports a receiver operating characteristic (ROC) experiment conducted to compare the diagnostic performance of a compressed bone image with the original. The compression was done on custom hardware that implements an algorithm based on full-frame cosine transform. The compression ratio in this study is approximately 10:1, which was decided after a pilot experiment. The image set consisted of 45 hand images, including normal images and images containing osteomalacia and osteitis fibrosa. Each image was digitized with a laser film scanner to 2,048 x 2,048 x 8 bits. Six observers, all board-certified radiologists, participated in the experiment. For each ROC session, an independent ROC curve was constructed and the area under that curve calculated. The image set was randomized for each session, as was the order for viewing the original and reconstructed images. Analysis of variance was used to analyze the data and derive statistically significant results. The preliminary results indicate that the diagnostic quality of the reconstructed image is comparable to that of the original image

  1. Digital cinema video compression

    Science.gov (United States)

    Husak, Walter

    2003-05-01

    The Motion Picture Industry began a transition from film based distribution and projection to digital distribution and projection several years ago. Digital delivery and presentation offers the prospect to increase the quality of the theatrical experience for the audience, reduce distribution costs to the distributors, and create new business opportunities for the theater owners and the studios. Digital Cinema also presents an opportunity to provide increased flexibility and security of the movies for the content owners and the theater operators. Distribution of content via electronic means to theaters is unlike any of the traditional applications for video compression. The transition from film-based media to electronic media represents a paradigm shift in video compression techniques and applications that will be discussed in this paper.

  2. 1999 Annual report: compression + power + service

    International Nuclear Information System (INIS)

    2000-01-01

    Enerflex manufactures, services and leases compression systems for the production and processing of natural gas and gas-fueled power generation systems. Design, engineering, project management, financing, installation commissioning and after-sales service are also part of Enerflex's arsenal of tools to ensure innovation, and high standards of quality and service. In 1999, Enerflex suffered an 18 per cent decline in revenues from $315 million in 1998 to $257 million in 1999, entirely due to lower sales of big ticket compression equipment in Canada. At the same time, revenues from international sales and service increased to $ 61.8 million in 1999, from $ 53 million in 1998. The company successfully completed the move to a new 328,000 sq. ft state-of-the-art manufacturing facility, and made its first significant sale to the United States in 1999 in the form of delivering a coal bed methane project in the Powder River area of Wyoming, and power generation equipment to Massachusetts. Although in the short term unusually warm average temperatures, industry cash flows, and access to capital may determine demand for the company's products and services, the long-term fundamentals are positive and demand for compression equipment and power generation systems is likely to grow. Indeed, in the fourth quarter of 1999, market conditions improved significantly and the company recorded its highest quarterly revenues and earnings during the last quarter. The annual review provides further details about the operations of the company's various divisions, (Compression and Power Systems, Parts and Compression Services, Leasing and Financing), management's review of the company's overall operations and finances, audited financial statements, and shareholders' information

  3. Experimental study on compressive strength of sediment brick masonry

    Science.gov (United States)

    Woen, Ean Lee; Malek, Marlinda Abdul; Mohammed, Bashar S.; Chao-Wei, Tang; Tamunif, Muhammad Thaqif

    2018-02-01

    The effects of pre-wetting unit bricks, mortar type and prism slenderness ratio on the compressive strength and failure mode of a newly developed sediment brick have been evaluated and compared to clay and cement-sand bricks. The results show that pre-wetted sediment brick masonry exhibits up to 20% higher compressive strength than dry sediment masonry. Although cement-lime mortar itself has lower compressive strength than cement mortar, sediment brick masonry built with cement-lime mortar exhibits higher compressive strength than masonry built with cement mortar. More diagonal shear cracks were observed in the failure mode of the sediment brick masonry, whereas clay and cement-sand brick masonry showed mostly vertical cracks and crushing. The sediment unit bricks display compressive strength between those of clay and cement-sand bricks.

  4. Ohmic ignition of Neo-Alcator tokamak with adiabatic compression

    International Nuclear Information System (INIS)

    Inoue, Nobuyuki; Ogawa, Yuichi

    1992-01-01

    The on-axis Ohmic ignition condition of a DT tokamak plasma heated by minor-radius and major-radius adiabatic compression is studied, assuming parabolic profiles for the plasma parameters, an elliptic plasma cross section, and Neo-Alcator confinement scaling. Notably, magnetic compression reduces the total plasma current necessary for an Ohmic ignition device. Typically, in a compact ignition tokamak with a minor radius of 0.47 m, a major radius of 1.5 m and an on-axis toroidal field of 20 T, a plasma current of 6.8 MA is sufficient for the compressed plasma, whereas 11.7 MA is required without compression. Another example with a larger major radius is also described. In such a device the large flux swing of the Ohmic transformer is available for a long burn. Application of magnetic compression saves flux swing and thereby extends the burn time. (author)

  5. Microbuckling compression failure of a radiation-induced wood/polymer composite

    International Nuclear Information System (INIS)

    Boey, F.Y.C.

    1990-01-01

    A wood/polymer composite was produced by impregnating Ramin wood with methyl methacrylate monomer and subsequently polymerizing it by gamma irradiation. To assess the improvement in compression strength of the wood caused by the polymer impregnation, a microbuckling compression failure mechanism was used to model the compression failure of the composite. Such a mechanism was found to predict a linear relationship between the compression strength and the percentage polymer impregnation (by weight). Uniaxial compression test results at 45(±5)% and 90(±5)% relative humidity levels, after being statistically analysed, showed that such a linear relationship was valid for up to 100% polymer impregnation. (author)

  6. Acceptable levels of digital image compression in chest radiology

    International Nuclear Information System (INIS)

    Smith, I.

    2000-01-01

    The introduction of picture archival and communications systems (PACS) and teleradiology has prompted an examination of techniques that optimize the storage capacity and speed of digital storage and distribution networks. The general acceptance of the move to replace conventional screen-film capture with computed radiography (CR) is an indication that clinicians within the radiology community are willing to accept images that have been 'compressed'. The question to be answered, therefore, is what level of compression is acceptable. The purpose of the present study is to provide an assessment of the ability of a group of imaging professionals to determine whether an image has been compressed. To undertake this study a single mobile chest image, selected for the presence of some subtle pathology in the form of a number of septal lines in both costophrenic angles, was compressed to levels of 10:1, 20:1 and 30:1. These images were randomly ordered and shown to the observers for interpretation. Analysis of the responses indicates that in general it was not possible to distinguish the original image from its compressed counterparts. Furthermore, a preference appeared to be shown for images that have undergone low levels of compression. This preference can most likely be attributed to the 'de-noising' effect of the compression algorithm at low levels. Copyright (1999) Blackwell Science Pty. Ltd

  7. COxSwAIN: Compressive Sensing for Advanced Imaging and Navigation

    Science.gov (United States)

    Kurwitz, Richard; Pulley, Marina; LaFerney, Nathan; Munoz, Carlos

    2015-01-01

    The COxSwAIN project focuses on building an image and video compression scheme that can be implemented on a small or low-power satellite. To do this, we used compressive sensing, where the compression is performed by matrix multiplications on the satellite and the signal is reconstructed on the ground. Our paper explains our methodology and demonstrates that the scheme achieves high-quality image compression that is robust to noise and corruption.
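
    The pipeline the abstract describes, compression by a matrix multiplication on the satellite and reconstruction on the ground, can be sketched with generic compressive-sensing machinery. The Gaussian measurement matrix and the orthogonal-matching-pursuit solver below are illustrative stand-ins; the paper does not specify its matrices or reconstruction algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 128, 60, 5                     # signal length, measurements, sparsity

# A k-sparse test signal standing in for a (vectorized) image patch.
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.normal(size=k)

# "Compression" on the satellite: one matrix multiplication keeps m << n numbers.
Phi = rng.normal(size=(m, n)) / np.sqrt(m)
y = Phi @ x

def omp(Phi, y, k):
    """Ground-side reconstruction via orthogonal matching pursuit."""
    residual, idx = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))   # best-correlated column
        if j not in idx:
            idx.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
        residual = y - Phi[:, idx] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(Phi, y, k)                   # recover x from fewer than half the samples
```

    Here 60 linear measurements stand in for a 128-sample signal; the same least-squares recovery degrades gracefully when noise is added to y, which is the robustness property the abstract refers to.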

  8. Relating working memory to compression parameters in clinically fit hearing aids.

    Science.gov (United States)

    Souza, Pamela E; Sirow, Lynn

    2014-12-01

    Several laboratory studies have demonstrated that working memory may influence response to compression speed in controlled (i.e., laboratory) comparisons of compression. In this study, the authors explored whether the same relationship would occur under less controlled conditions, as might occur in a typical audiology clinic. Participants included 27 older adults who sought hearing care in a private practice audiology clinic. Working memory was measured for each participant using a reading span test. The authors examined the relationship between working memory and aided speech recognition in noise, using clinically fit hearing aids with a range of compression speeds. Working memory, amount of hearing loss, and age each contributed to speech recognition, but the contribution depended on the speed of the compression processor. For fast-acting compression, the best performance was obtained by patients with high working memory. For slow-acting compression, speech recognition was affected by age and amount of hearing loss but was not affected by working memory. Despite the expectation of greater variability from differences in compression implementation, number of compression channels, or attendant signal processing, the relationship between working memory and compression speed showed a similar pattern as results from more controlled, laboratory-based studies.

  9. Economic and environmental evaluation of compressed-air cars

    International Nuclear Information System (INIS)

    Creutzig, Felix; Kammen, Daniel M; Papson, Andrew; Schipper, Lee

    2009-01-01

    Climate change and energy security require a reduction in travel demand, a modal shift, and technological innovation in the transport sector. Through a series of press releases and demonstrations, a car using energy stored in compressed air produced by a compressor has been suggested as an environmentally friendly vehicle of the future. We analyze the thermodynamic efficiency of a compressed-air car powered by a pneumatic engine and consider the merits of compressed air versus chemical storage of potential energy. Even under highly optimistic assumptions the compressed-air car is significantly less efficient than a battery electric vehicle and produces more greenhouse gas emissions than a conventional gas-powered car with a coal-intensive power mix. However, a pneumatic-combustion hybrid is technologically feasible, inexpensive and could eventually compete with hybrid electric vehicles.
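
    To give a rough sense of the energy storage at stake, the isothermal ideal-gas expansion work W = p1·V1·ln(p1/p0) is a convenient upper bound on what a compressed-air tank can deliver; the 300 L / 300 bar tank below is a hypothetical example, not a figure from the paper.

```python
import math

# Isothermal expansion work of an ideal gas from tank pressure p1 to ambient p0:
# W = p1 * V1 * ln(p1 / p0).  Real pneumatic engines recover much less.
p0 = 1.0e5        # ambient pressure, Pa
p1 = 300.0e5      # tank pressure, Pa (hypothetical 300 bar tank)
V1 = 0.300        # tank volume, m^3 (hypothetical 300 L)

W = p1 * V1 * math.log(p1 / p0)   # joules
kWh = W / 3.6e6                   # convert J -> kWh
```

    This bound works out to roughly 51 MJ (about 14 kWh), which is why even an ideal compressed-air store compares unfavorably with chemical storage of similar mass and volume.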

  10. Indentation of elastically soft and plastically compressible solids

    DEFF Research Database (Denmark)

    Needleman, A.; Tvergaard, Viggo; Van der Giessen, E.

    2015-01-01

    The effect of soft elasticity, i.e., a relatively small value of the ratio of Young's modulus to yield strength, and of plastic compressibility on the indentation of isotropically hardening elastic-viscoplastic solids is investigated. Calculations are carried out for indentation of a perfectly sticking rigid sharp indenter into a cylinder, modeling indentation of a half space. The material is characterized by a finite strain elastic-viscoplastic constitutive relation that allows for plastic as well as elastic compressibility. Both soft elasticity and plastic compressibility significantly reduce the indentation hardness, which falls rapidly for small deviations from plastic incompressibility and then decreases rather slowly for values of the plastic Poisson's ratio less than 0.25. For both soft elasticity and plastic compressibility, the main reason for the lower values of indentation hardness is related to the reduction...

  11. Design, Implementation and Evaluation of an Operating System for a Network of Transputers.

    Science.gov (United States)

    1987-06-01

    WHILE TRUE                -- listen to link1
      SEQ
        -- receiving the header
        BYTE.SLICE.INPUT (link1, header1, 1, header.size)
        -- decoding the block size
        block.size[0] := ...
        -- I'm done
        BYTE.SLICE.OUTPUT (screen[0], header0, 3, 1)

  12. Compression of Short Text on Embedded Systems

    DEFF Research Database (Denmark)

    Rein, S.; Gühmann, C.; Fitzek, Frank

    2006-01-01

    The paper details a scheme for lossless compression of a short data series larger than 50 bytes. The method uses arithmetic coding and context modelling with a low-complexity data model. A data model that takes 32 kBytes of RAM already cuts the data size in half. The compression scheme just takes...
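
    The effect of a shared static data model can be illustrated with zlib's preset-dictionary feature, used here as a stand-in for the paper's arithmetic coder and context model; the message and dictionary contents are invented for the example.

```python
import zlib

text = b"meet me at the station at 5 pm"   # a short message, ~30 bytes

# Plain Deflate barely helps at this size: stream overhead dominates.
plain = zlib.compress(text, 9)

# A shared static "data model": a preset dictionary both sides keep in RAM,
# standing in for the paper's 32 kByte context model (contents are made up).
model = b"meet me at the station I will be at the meeting at pm am"

c = zlib.compressobj(level=9, zdict=model)
with_model = c.compress(text) + c.flush()

# The receiver must hold the same dictionary to decode.
d = zlib.decompressobj(zdict=model)
restored = d.decompress(with_model)
```

    Both sides pay the RAM cost of the model once, and every short message afterwards is encoded against it, which is exactly the trade-off the abstract describes for embedded systems.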

  13. System using data compression and hashing adapted for use for multimedia encryption

    Science.gov (United States)

    Coffland, Douglas R [Livermore, CA

    2011-07-12

    A system and method is disclosed for multimedia encryption. Within the system of the present invention, a data compression module receives and compresses a media signal into a compressed data stream. A data acquisition module receives and selects a set of data from the compressed data stream. And, a hashing module receives and hashes the set of data into a keyword. The method of the present invention includes the steps of compressing a media signal into a compressed data stream; selecting a set of data from the compressed data stream; and hashing the set of data into a keyword.
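
    A minimal sketch of the three modules, with zlib and SHA-256 as hypothetical stand-ins for the compression and hashing algorithms, which the record does not name:

```python
import hashlib
import zlib

def derive_keyword(media_bytes: bytes, sample_size: int = 64) -> str:
    """Compress a media signal, select a subset of the stream, hash it to a keyword."""
    compressed = zlib.compress(media_bytes, 9)   # data compression module
    sample = compressed[:sample_size]            # data acquisition module: select a set
    return hashlib.sha256(sample).hexdigest()    # hashing module -> keyword

kw = derive_keyword(b"\x00\x01" * 4096)          # dummy "media signal"
```

    The keyword is deterministic for a given signal, so both endpoints can derive the same value, but it changes whenever the underlying media changes.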

  14. Configuring and Characterizing X-Rays for Laser-Driven Compression Experiments at the Dynamic Compression Sector

    Science.gov (United States)

    Li, Y.; Capatina, D.; D'Amico, K.; Eng, P.; Hawreliak, J.; Graber, T.; Rickerson, D.; Klug, J.; Rigg, P. A.; Gupta, Y. M.

    2017-06-01

    Coupling laser-driven compression experiments to the x-ray beam at the Dynamic Compression Sector (DCS) at the Advanced Photon Source (APS) of Argonne National Laboratory requires state-of-the-art x-ray focusing, pulse isolation, and diagnostics capabilities. The 100J UV pulsed laser system can be fired once every 20 minutes so precise alignment and focusing of the x-rays on each new sample must be fast and reproducible. Multiple Kirkpatrick-Baez (KB) mirrors are used to achieve a focal spot size as small as 50 μm at the target, while the strategic placement of scintillating screens, cameras, and detectors allows for fast diagnosis of the beam shape, intensity, and alignment of the sample to the x-ray beam. In addition, a series of x-ray choppers and shutters are used to ensure that the sample is exposed to only a single x-ray pulse ( 80ps) during the dynamic compression event and require highly precise synchronization. Details of the technical requirements, layout, and performance of these instruments will be presented. Work supported by DOE/NNSA.

  15. Envera Variable Compression Ratio Engine

    Energy Technology Data Exchange (ETDEWEB)

    Charles Mendler

    2011-03-15

    Aggressive engine downsizing, variable compression ratio and use of the Atkinson cycle are being combined to improve fuel economy by up to 40 percent relative to port fuel injected gasoline engines, while maintaining full engine power. Approach: Engine downsizing is viewed by US and foreign automobile manufacturers as one of the best options for improving fuel economy. While this strategy has already demonstrated a degree of success, downsizing and fuel economy gains are currently limited. With new variable compression ratio technology, however, the degree of engine downsizing and fuel economy improvement can be greatly increased. A small variable compression ratio (VCR) engine has the potential to return significantly higher vehicle fuel economy while also providing high power. Affordability and potential for near-term commercialization are key attributes of the Envera VCR engine. VCR Technology: To meet torque and power requirements, a smaller engine needs to do more work per stroke. This is typically accomplished by boosting the incoming charge with either a turbo or supercharger so that more energy is present in the cylinder per stroke to do the work. With current production engines the degree of engine boosting (which correlates to downsizing) is limited by detonation (combustion knock) at high boost levels. Additionally, the turbo or supercharger needs to be responsive and efficient while providing the needed boost. VCR technology eliminates the limitation of engine knock at high load levels by reducing the compression ratio to ~9:1 (or whatever level is appropriate) when high boost pressures are needed. By reducing the compression ratio during high-load periods there is increased volume in the cylinder at top dead center (TDC), which allows more charge (or energy) to be present in the cylinder without increasing the peak pressure. Cylinder pressure is thus kept below the level at which the engine would begin to knock. When loads on the engine are low
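
    The geometry behind this is CR = (Vd + Vc)/Vc, so lowering the compression ratio means enlarging the clearance volume Vc at top dead center. A small worked example (the 400 cc cylinder is hypothetical, not an Envera figure):

```python
# Geometric compression ratio: CR = (Vd + Vc) / Vc  =>  Vc = Vd / (CR - 1)
Vd = 400.0                               # displacement volume per cylinder, cc

def clearance_volume(Vd, cr):
    """Clearance volume at TDC needed for a given compression ratio."""
    return Vd / (cr - 1.0)

vc_12 = clearance_volume(Vd, 12.0)       # high-efficiency, low-boost setting
vc_9 = clearance_volume(Vd, 9.0)         # knock-limited, high-boost setting
extra_tdc_volume = vc_9 - vc_12          # added charge volume at top dead center
```

    Dropping from 12:1 to 9:1 enlarges the TDC volume from about 36 cc to 50 cc in this example, which is the extra room for boosted charge that keeps peak pressure below the knock limit.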

  16. SU-D-BRA-06: Duodenal Interfraction Motion with Abdominal Compression

    International Nuclear Information System (INIS)

    Witztum, A; Holyoake, D; Warren, S; Partridge, M; Hawkins, M

    2016-01-01

    Purpose: To quantify the effect of abdominal compression on duodenal motion during pancreatic radiotherapy. Methods: Seven patients treated for pancreatic cancer were selected for analysis. Four patients were treated with abdominal compression and three without. The duodenum was contoured by the same physician on each CBCT (five CBCTs for patients with compression, four for non-compression patients). CBCTs were rigidly registered using a soft tissue match and contours were copied to the delivered plans which were all radical (BED > 50 Gy). The distance between the duodenum on the planning CT and each CBCT was quantified by calculating the root mean square (RMS) distance. The DVHs of each abdominal compression patient was converted to an EQD2 DVH (alpha/beta = 10) using an in-house tool and volumes receiving at least 25, 35, 45, and 50 Gy were recorded. Results: The maximum variation in duodenal volumes on the CBCTs for the four abdominal compression patients were 19.1 cm³ (32.8%), 19.1 cm³ (20.6%), 19.9 cm³ (14.3%), and 12.9 cm³ (27.3%) compared to 15.2 cm³ (17.6%), 34.7 cm³ (83.4%), and 56 cm³ (60.2%) for non-compression patients. The average RMS distance between the duodenum on the planning CT and each CBCT for all abdominal compression patients was 0.3 cm compared to 0.7 cm for non-compressed patients. The largest (and average) difference between the planning CT and CBCTs in volume of duodenum receiving more than 25, 35, 45 and 50 Gy for abdominal compression patients was 11% (5%), 9% (3%), 9% (2%), and 6% (1%). Conclusion: Abdominal compression reduces variation in volume and absolute position of the duodenum throughout treatment. This is seen as an improvement but does not eliminate the need to consider dosimetric effects of motion. Abdominal compression is particularly useful in SBRT when only a few fractions are delivered. Alon Witztum is supported by an MRC/Gray Institute DPhil Studentship. Daniel Holyoake is supported by a CRUK/Nuffield Clinical

  17. SU-D-BRA-06: Duodenal Interfraction Motion with Abdominal Compression

    Energy Technology Data Exchange (ETDEWEB)

    Witztum, A; Holyoake, D; Warren, S; Partridge, M; Hawkins, M [CRUK/MRC Oxford Institute for Radiation Oncology, Department of Oncology, University of Oxford, Oxford (United Kingdom)

    2016-06-15

    Purpose: To quantify the effect of abdominal compression on duodenal motion during pancreatic radiotherapy. Methods: Seven patients treated for pancreatic cancer were selected for analysis. Four patients were treated with abdominal compression and three without. The duodenum was contoured by the same physician on each CBCT (five CBCTs for patients with compression, four for non-compression patients). CBCTs were rigidly registered using a soft tissue match and contours were copied to the delivered plans which were all radical (BED > 50 Gy). The distance between the duodenum on the planning CT and each CBCT was quantified by calculating the root mean square (RMS) distance. The DVHs of each abdominal compression patient was converted to an EQD2 DVH (alpha/beta = 10) using an in-house tool and volumes receiving at least 25, 35, 45, and 50 Gy were recorded. Results: The maximum variation in duodenal volumes on the CBCTs for the four abdominal compression patients were 19.1 cm³ (32.8%), 19.1 cm³ (20.6%), 19.9 cm³ (14.3%), and 12.9 cm³ (27.3%) compared to 15.2 cm³ (17.6%), 34.7 cm³ (83.4%), and 56 cm³ (60.2%) for non-compression patients. The average RMS distance between the duodenum on the planning CT and each CBCT for all abdominal compression patients was 0.3 cm compared to 0.7 cm for non-compressed patients. The largest (and average) difference between the planning CT and CBCTs in volume of duodenum receiving more than 25, 35, 45 and 50 Gy for abdominal compression patients was 11% (5%), 9% (3%), 9% (2%), and 6% (1%). Conclusion: Abdominal compression reduces variation in volume and absolute position of the duodenum throughout treatment. This is seen as an improvement but does not eliminate the need to consider dosimetric effects of motion. Abdominal compression is particularly useful in SBRT when only a few fractions are delivered. Alon Witztum is supported by an MRC/Gray Institute DPhil Studentship. Daniel Holyoake is
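
    The EQD2 conversion mentioned in both records follows the standard linear-quadratic form EQD2 = D·(d + α/β)/(2 + α/β) with α/β = 10 Gy; a minimal sketch:

```python
def eqd2(total_dose, n_fractions, alpha_beta=10.0):
    """Equivalent dose in 2 Gy fractions (linear-quadratic model)."""
    d = total_dose / n_fractions                     # dose per fraction, Gy
    return total_dose * (d + alpha_beta) / (2.0 + alpha_beta)
```

    For example, a hypothetical 30 Gy in 5 fractions (6 Gy per fraction) converts to an EQD2 of 40 Gy, while a schedule already delivered in 2 Gy fractions is unchanged by the conversion.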

  18. Understanding Turbulence in Compressing Plasmas and Its Exploitation or Prevention

    Science.gov (United States)

    Davidovits, Seth

    Unprecedented densities and temperatures are now achieved in compressions of plasma, by lasers and by pulsed power, in major experimental facilities. These compressions, carried out at the largest scale at the National Ignition Facility and at the Z Pulsed Power Facility, have important applications, including fusion, X-ray production, and materials research. Several experimental and simulation results suggest that the plasma in some of these compressions is turbulent. In fact, measurements suggest that in certain laboratory plasma compressions the turbulent energy is a dominant energy component. Similarly, turbulence is dominant in some compressing astrophysical plasmas, such as in molecular clouds. Turbulence need not be dominant to be important; even small quantities could greatly influence experiments that are sensitive to mixing of non-fuel into fuel, such as compressions seeking fusion ignition. Despite its important role in major settings, bulk plasma turbulence under compression is insufficiently understood to answer or even to pose some of the most fundamental questions about it. This thesis both identifies and answers key questions in compressing turbulent motion, while providing a description of the behavior of three-dimensional, isotropic compressions of homogeneous turbulence with a plasma viscosity. This description includes a simple, but successful, new model for the turbulent energy of plasma undergoing compression. The unique features of compressing turbulence with a plasma viscosity are shown, including the sensitivity of the turbulence to plasma ionization, and a "sudden viscous dissipation" effect which rapidly converts plasma turbulent energy into thermal energy. This thesis then examines turbulence in both laboratory compression experiments and molecular clouds. It importantly shows: the possibility of exploiting turbulence to make fusion or X-ray production more efficient; conditions under which hot-spot turbulence can be prevented; and a

  19. Contribution of collagen fibers to the compressive stiffness of cartilaginous tissues.

    Science.gov (United States)

    Römgens, Anne M; van Donkelaar, Corrinus C; Ito, Keita

    2013-11-01

    Cartilaginous tissues such as the intervertebral disk are predominantly loaded in compression. Yet they contain abundant collagen fibers, which are generally assumed to contribute to tensile loading only. Fiber tension is thought to originate from swelling of the proteoglycan-rich nucleus. However, in aged or degenerate disks, proteoglycans are depleted, whereas collagen content changes little. The question then arises to what extent the collagen may contribute to the compressive stiffness of the tissue. We hypothesized that this contribution is significant at high strain magnitudes and that the effect depends on fiber orientation. In addition, we aimed to determine the compression of the solid matrix. Bovine inner and outer annulus fibrosus specimens were subjected to incremental confined compression tests up to 60% strain in the radial and circumferential directions. The compressive aggregate modulus was determined per 10% strain increment. The biochemical composition of the compressed specimens and of uncompressed adjacent tissue was determined to compute solid matrix compression. The stiffness of all specimens increased nonlinearly with strain. The collagen-rich outer annulus was significantly stiffer than the inner annulus above 20% compressive strain. Orientation influenced the modulus in the collagen-rich outer annulus. Finally, it was shown that the solid matrix was significantly compressed above 30% strain. We therefore concluded that collagen fibers contribute significantly to the compressive stiffness of the intervertebral disk at high strains. This is valuable for understanding the compressive behavior of collagen-reinforced tissues in general, and may be particularly relevant for aging or degenerate disks, which become more fibrous and less hydrated.
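
    The per-increment aggregate modulus is simply the stress change divided by the strain change over each 10% step; a sketch with invented stress-strain data (not the paper's measurements):

```python
import numpy as np

# Hypothetical confined-compression record: strain in steps of 10% up to 60%,
# with stresses (MPa) invented to show the nonlinear stiffening the paper reports.
strain = np.arange(0.0, 0.7, 0.1)                              # 0.0 ... 0.6
stress = np.array([0.0, 0.02, 0.06, 0.14, 0.30, 0.60, 1.10])   # MPa, made up

# Aggregate modulus per 10% strain increment: H_A = delta(sigma) / delta(epsilon)
H_A = np.diff(stress) / np.diff(strain)                        # MPa
```

    With data like these the modulus rises monotonically from increment to increment, which is the nonlinear strain dependence the abstract describes.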

  20. Effect of feedback on delaying deterioration in quality of compressions during 2 minutes of continuous chest compressions

    DEFF Research Database (Denmark)

    Lyngeraa, Tobias S; Hjortrup, Peter Buhl; Wulff, Nille B

    2012-01-01

    ...delays deterioration of quality of compressions. METHODS: Participants attending a national one-day conference on cardiac arrest and CPR in Denmark were randomized to perform single-rescuer BLS with (n = 26) or without (n = 28) verbal and visual feedback on a manikin using a ZOLL AED Plus. Data were analyzed using RescueNet Code Review. Blinding of participants was not possible, but allocation concealment was performed. Primary outcome was the proportion of delivered compressions within target depth, compared over a 2-minute period within and between the groups. Secondary outcome was the proportion of delivered compressions within target rate, compared over a 2-minute period within and between the groups. Performance variables for 30-second intervals were analyzed and compared. RESULTS: 24 (92%) and 23 (82%) had CPR experience in the group with and without feedback respectively. 14

  1. Recognition of VLSI Module Isomorphism

    Science.gov (United States)

    1990-03-01

    forth = forth->next;
    else {
        prev4 = prev4->next;
        forth = forth->next;
    }
    if (header->newn->tail == third) {
        header->newn->tail = prev3;
        prev3->next = NULL;
        end = TRUE;
    }
    if (header->newn->head == third) {
        header->newn->head = third->next;
    }
    if ((third != prev3) && (finished != TRUE)) {
        prev3->next = prev3->next->next;
    }

  2. Recoil Experiments Using a Compressed Air Cannon

    Science.gov (United States)

    Taylor, Brett

    2006-01-01

    Ping-Pong vacuum cannons, potato guns, and compressed air cannons are popular and dramatic demonstrations for lecture and lab. Students enjoy them for the spectacle, but they can also be used effectively to teach physics. Recently we have used a student-built compressed air cannon as a laboratory activity to investigate impulse, conservation of…

  3. Accurate nonlocal theory for cascaded quadratic soliton compression

    DEFF Research Database (Denmark)

    Bache, Morten; Bang, Ole; Moses, Jeffrey

    2007-01-01

    We study soliton compression in bulk quadratic nonlinear materials at 800 nm, where group-velocity mismatch dominates. We develop a nonlocal theory showing that efficient compression depends strongly on characteristic nonlocal time scales related to pulse dispersion....

  4. Performance evaluation of breast image compression techniques

    International Nuclear Information System (INIS)

    Anastassopoulos, G.; Lymberopoulos, D.; Panayiotakis, G.; Bezerianos, A.

    1994-01-01

    Novel diagnosis-oriented teleworking systems manipulate, store, and process medical data through real-time communication and conferencing schemes. One of the most important factors affecting the performance of these systems is image handling. Compression algorithms can be applied to the medical images in order to minimize: (a) the volume of data to be stored in the database, (b) the bandwidth demanded from the network, and (c) the transmission costs, and to maximize the speed of data transmission. In this paper an estimation of all the factors of the process that affect the presentation of breast images is made, from the time the images are produced by a modality until the compressed images are stored or transmitted over a broadband network (e.g. B-ISDN). The images used were scanned images of the TOR(MAX) Leeds breast phantom, as well as typical breast images. A comparison of seven compression techniques has been made, based on objective criteria such as Mean Square Error (MSE), resolution, contrast, etc. The user can choose the appropriate compression ratio in order to achieve the desired image quality. (authors)

  5. The compression dome concept: the restorative implications.

    Science.gov (United States)

    Milicich, Graeme

    2017-01-01

    Evidence now supports the concept that the enamel on a tooth acts like a compression dome, much like the dome of a cathedral. With an overlying enamel compression dome, the underlying dentin is protected from damaging tensile forces. Disruption of a compression system leads to significant shifts in load pathways. The clinical restorative implications are significant and far-reaching. Cutting the wrong areas of a tooth exposes the underlying dentin to tensile forces that exceed natural design parameters. These forces lead to crack propagation, causing flexural pain and eventual fracture and loss of tooth structure. Improved understanding of the microanatomy of tooth structure and where it is safe to cut teeth has led to a revolution in dentistry that is known by several names, including microdentistry, minimally invasive dentistry, biomimetic dentistry, and bioemulation dentistry. These treatment concepts have developed due to a coalescence of principles of tooth microanatomy, material science, adhesive dentistry, and reinforcing techniques that, when applied together, will allow dentists to repair a compromised compression dome so that it more closely replicates the structure of the healthy tooth.

  6. Metal Hydride Compression

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Terry A. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Bowman, Robert [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Smith, Barton [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Anovitz, Lawrence [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jensen, Craig [Hawaii Hydrogen Carriers LLC, Honolulu, HI (United States)

    2017-07-01

    Conventional hydrogen compressors often contribute over half of the cost of hydrogen stations, have poor reliability, and have insufficient flow rates for a mature FCEV market. Fatigue of their moving parts, including cracking of diaphragms and seal failures, leads to breakdowns in conventional compressors, which is exacerbated by the repeated starts and stops expected at fueling stations. Furthermore, the conventional lubrication of these compressors with oil is generally unacceptable at fueling stations due to potential fuel contamination. Metal hydride (MH) technology offers a very good alternative to both conventional (mechanical) and newly developed (electrochemical, ionic liquid piston) methods of hydrogen compression. Advantages of MH compression include simplicity in design and operation, absence of moving parts, compactness, safety and reliability, and the possibility of utilizing waste industrial heat to power the compressor. Beyond conventional H2 supplies of pipelines or tanker trucks, another attractive scenario is on-site generation, pressurization, and delivery of pure H2 at pressure (≥ 875 bar) for refueling vehicles at electrolysis, wind, or solar production facilities in distributed locations that are too remote or widely distributed for cost-effective bulk transport. MH hydrogen compression utilizes a reversible heat-driven interaction of a hydride-forming metal alloy with hydrogen gas to form the MH phase and is a promising process for hydrogen energy applications [1,2]. To deliver hydrogen continuously, each stage of the compressor must consist of multiple MH beds with synchronized hydrogenation and dehydrogenation cycles. Multistage pressurization allows achievement of greater compression ratios using reduced temperature swings compared to single-stage compressors. The objectives of this project are to investigate and demonstrate on a laboratory scale a two-stage MH hydrogen (H2) gas compressor with a
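
    The benefit of multistage pressurization follows from the overall compression ratio being the product of the stage ratios; a two-stage sketch using a hypothetical 15 bar inlet and the ≥ 875 bar delivery target from the abstract:

```python
# Overall compression ratio is the product of the stage ratios, so splitting
# the work across two equal stages keeps each stage's ratio (and hence its
# required temperature swing) modest.  The 15 bar inlet is hypothetical.
p_in, p_out = 15.0, 875.0           # bar: assumed inlet, target delivery pressure

overall = p_out / p_in              # single-stage ratio would be ~58:1
per_stage = overall ** 0.5          # equal split across two stages, ~7.6:1 each
p_mid = p_in * per_stage            # interstage pressure, bar
```

    A single stage would need a ratio near 58:1, whereas each of the two stages only needs about 7.6:1, which is the design point that lets each MH bed operate with a smaller temperature swing.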

  7. Sparse representations and compressive sensing for imaging and vision

    CERN Document Server

    Patel, Vishal M

    2013-01-01

    Compressed sensing or compressive sensing is a new concept in signal processing where one measures a small number of non-adaptive linear combinations of the signal. These measurements are usually much fewer than the number of samples that define the signal. From this small number of measurements, the signal is then reconstructed by a non-linear procedure. Compressed sensing has recently emerged as a powerful tool for efficiently processing data in non-traditional ways. In this book, we highlight some of the key mathematical insights underlying sparse representation and compressed sensing and illustrate the role of these theories in classical vision, imaging and biometrics problems.

  8. Application of digital compression techniques to optical surveillance systems

    International Nuclear Information System (INIS)

    Johnson, C.S.

    1991-01-01

    There are many benefits to handling video images electronically, however, the amount of digital data in a normal video image is a major obstacle. The solution is to remove the high frequency and redundant information in a process that is referred to as compression. Compression allows the number of digital bits required for a given image to be reduced for more efficient storage or transmission of images. The next question is how much compression can be done without impairing the image quality beyond its usefulness for a given application. This paper discusses image compression that might be applied to provide useful images in unattended nuclear facility surveillance applications

  9. Encryption of Stereo Images after Compression by Advanced Encryption Standard (AES)

    Directory of Open Access Journals (Sweden)

    Marwah k Hussien

    2018-04-01

    Full Text Available New partial encryption schemes are proposed, in which a secure encryption algorithm is used to encrypt only part of the compressed data. Partial encryption is applied after the image compression algorithm. Only 0.0244%-25% of the original data is encrypted for two pairs of different grayscale images with the size 256 × 256 pixels. As a result, we see a significant reduction of time in the stages of encryption and decryption. In the compression step, the Orthogonal Search Algorithm (OSA) for motion estimation (the difference between stereo images) is used. The resulting disparity vector and the remaining image were compressed by Discrete Cosine Transform (DCT), quantization, and arithmetic encoding. The compressed image was encrypted by the Advanced Encryption Standard (AES). The images were then decoded and compared with the original images. Experimental results showed good results in terms of Peak Signal-to-Noise Ratio (PSNR), Compression Ratio (CR), and processing time. The proposed partial encryption schemes are fast, secure, and do not reduce the compression performance of the underlying selected compression methods.
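    The structure of compress-then-partially-encrypt can be sketched as follows. As stdlib-only stand-ins, zlib replaces the DCT/arithmetic-coding pipeline and a SHA-256 counter-mode keystream replaces AES, so this shows only the shape of the scheme, not the paper's actual algorithms:

```python
import hashlib, zlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Stand-in stream cipher built from SHA-256 in counter mode; the paper
    # uses AES -- this substitute only keeps the sketch stdlib-only.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def partial_encrypt(data: bytes, key: bytes, fraction: float) -> bytes:
    # Encrypt only the leading fraction of the *compressed* stream; the
    # rest is unusable without the encrypted prefix it depends on.
    compressed = zlib.compress(data)  # stand-in for DCT + arithmetic coding
    k = max(1, int(len(compressed) * fraction))
    ks = keystream(key, b"demo-nonce", k)
    return bytes(c ^ s for c, s in zip(compressed[:k], ks)) + compressed[k:]

def partial_decrypt(blob: bytes, key: bytes, fraction: float) -> bytes:
    k = max(1, int(len(blob) * fraction))
    ks = keystream(key, b"demo-nonce", k)
    return zlib.decompress(bytes(c ^ s for c, s in zip(blob[:k], ks)) + blob[k:])

image = bytes(range(256)) * 64      # toy stand-in for image data
blob = partial_encrypt(image, b"secret", 0.05)
assert partial_decrypt(blob, b"secret", 0.05) == image
```

Because only ~5% of the compressed bytes pass through the cipher, encryption cost shrinks accordingly while the compression ratio is untouched.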

  10. Wavelet compression algorithm applied to abdominal ultrasound images

    International Nuclear Information System (INIS)

    Lin, Cheng-Hsun; Pan, Su-Feng; LU, Chin-Yuan; Lee, Ming-Che

    2006-01-01

    We sought to investigate acceptable compression ratios of lossy wavelet compression on 640 x 480 x 8 abdominal ultrasound (US) images. We acquired 100 abdominal US images with normal and abnormal findings from the view station of a 932-bed teaching hospital. The US images were then compressed at quality factors (QFs) of 3, 10, 30, and 50, following the outcomes of a pilot study. This was equal to average compression ratios of 4.3:1, 8.5:1, 20:1 and 36.6:1, respectively. Four objective measurements were carried out to examine and compare the image degradation between original and compressed images. Receiver operating characteristic (ROC) analysis was also introduced for subjective assessment. Five experienced and qualified radiologists, as reviewers blinded to corresponding pathological findings, analysed 400 paired, randomly ordered images with two 17-inch thin film transistor/liquid crystal display (TFT/LCD) monitors. At ROC analysis, the average area under the curve (Az) for US abdominal images was 0.874 at the ratio of 36.6:1. The compressed image size was only 2.7% of the US original at this ratio. The objective parameters showed that the higher the mean squared error (MSE) or root mean squared error (RMSE) values, the poorer the image quality. Higher signal-to-noise ratio (SNR) or peak signal-to-noise ratio (PSNR) values indicated better image quality. The average RMSE and PSNR at 36.6:1 for US were 4.84 ± 0.14 and 35.45 dB, respectively. This finding suggests that, on the basis of the patient sample, wavelet compression of abdominal US to a ratio of 36.6:1 did not adversely affect diagnostic performance or evaluation error for radiologists' interpretation so as to risk affecting diagnosis.
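    The RMSE and PSNR figures quoted above follow from the standard definitions; a minimal sketch (the sample pixel values are made up):

```python
import math

def rmse_psnr(original, degraded, max_value=255.0):
    """RMSE and PSNR between two equal-length 8-bit pixel sequences.
    PSNR = 20*log10(MAX / RMSE); higher PSNR means better fidelity."""
    mse = sum((a - b) ** 2 for a, b in zip(original, degraded)) / len(original)
    rmse = math.sqrt(mse)
    psnr = float("inf") if rmse == 0 else 20 * math.log10(max_value / rmse)
    return rmse, psnr

# Made-up pixel values, only to exercise the formulas.
orig = [10, 20, 30, 40]
comp = [11, 19, 33, 38]
rmse, psnr = rmse_psnr(orig, comp)
print(f"RMSE={rmse:.3f}, PSNR={psnr:.2f} dB")
```

An RMSE near 4.84, as reported for the 36.6:1 ratio, corresponds to a PSNR of roughly 20·log10(255/4.84) ≈ 34 dB, consistent with the quoted 35.45 dB.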

  11. Compressed beam directed particle nuclear energy generator

    International Nuclear Information System (INIS)

    Salisbury, W.W.

    1985-01-01

    This invention relates to the generation of energy from the fusion of atomic nuclei which are caused to travel towards each other along collision courses, orbiting in common paths having common axes and equal radii. High velocity fusible ion beams are directed along head-on circumferential collision paths in an annular zone wherein beam compression by electrostatic focusing greatly enhances head-on fusion-producing collisions. In one embodiment, a steady radial electric field is imposed on the beams to compress the beams and reduce the radius of the spiral paths for enhancing the particle density. Beam compression is achieved through electrostatic focusing to establish and maintain two opposing beams in a reaction zone

  12. XPath Node Selection over Grammar-Compressed Trees

    Directory of Open Access Journals (Sweden)

    Sebastian Maneth

    2013-11-01

    Full Text Available XML document markup is highly repetitive and therefore well compressible using grammar-based compression. Downward, navigational XPath can be executed over grammar-compressed trees in PTIME: the query is translated into an automaton which is executed in one pass over the grammar. This result is well-known and has been mentioned before. Here we present precise bounds on the time complexity of this problem, in terms of big-O notation. For a given grammar and XPath query, we consider three different tasks: (1) to count the number of nodes selected by the query, (2) to materialize the pre-order numbers of the selected nodes, and (3) to serialize the subtrees at the selected nodes.
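    Task (1), counting selected nodes in one pass over the grammar, can be sketched on a toy DAG-compressed tree (the rules and labels below are invented for illustration; each rule is evaluated once per label thanks to memoization, so the cost tracks the grammar size, not the expanded tree size):

```python
from functools import lru_cache

# Toy grammar/DAG-compressed tree: each rule maps to (label, child rules).
# Shared rules encode repeated subtrees, so the expanded tree can be far
# larger than the grammar itself.
rules = {
    "Leaf": ("item", []),
    "Pair": ("div", ["Leaf", "Leaf"]),
    "Quad": ("div", ["Pair", "Pair"]),
    "Root": ("html", ["Quad", "Quad"]),
}

@lru_cache(maxsize=None)
def count(rule: str, label: str) -> int:
    """Count nodes matching //label (task 1), visiting each grammar rule
    at most once per label thanks to memoization."""
    node_label, children = rules[rule]
    return (node_label == label) + sum(count(c, label) for c in children)

print(count("Root", "item"), count("Root", "div"))
```

The expanded tree has 8 `item` leaves and 6 `div` nodes, yet the computation touches only the four grammar rules.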

  13. Inelastic response of silicon to shock compression.

    Science.gov (United States)

    Higginbotham, A; Stubley, P G; Comley, A J; Eggert, J H; Foster, J M; Kalantar, D H; McGonegle, D; Patel, S; Peacock, L J; Rothman, S D; Smith, R F; Suggit, M J; Wark, J S

    2016-04-13

    The elastic and inelastic response of [001] oriented silicon to laser compression has been a topic of considerable discussion for well over a decade, yet there has been little progress in understanding the basic behaviour of this apparently simple material. We present experimental x-ray diffraction data showing complex elastic strain profiles in laser compressed samples on nanosecond timescales. We also present molecular dynamics and elasticity code modelling which suggests that a pressure induced phase transition is the cause of the previously reported 'anomalous' elastic waves. Moreover, this interpretation allows for measurement of the kinetic timescales for transition. This model is also discussed in the wider context of the reported deformation of silicon under rapid compression in the literature.

  14. Composition-Structure-Property Relations of Compressed Borosilicate Glasses

    Science.gov (United States)

    Svenson, Mouritz N.; Bechgaard, Tobias K.; Fuglsang, Søren D.; Pedersen, Rune H.; Tjell, Anders Ø.; Østergaard, Martin B.; Youngman, Randall E.; Mauro, John C.; Rzoska, Sylwester J.; Bockowski, Michal; Smedskjaer, Morten M.

    2014-08-01

    Hot isostatic compression is an interesting method for modifying the structure and properties of bulk inorganic glasses. However, the structural and topological origins of the pressure-induced changes in macroscopic properties are not yet well understood. In this study, we report on the pressure and composition dependences of density and micromechanical properties (hardness, crack resistance, and brittleness) of five soda-lime borosilicate glasses with constant modifier content, covering the extremes from Na-Ca borate to Na-Ca silicate end members. Compression experiments are performed at pressures ≤1.0 GPa at the glass transition temperature in order to allow processing of large samples with relevance for industrial applications. In line with previous reports, we find an increasing fraction of tetrahedral boron, density, and hardness but a decreasing crack resistance and brittleness upon isostatic compression. Interestingly, a strong linear correlation between plastic (irreversible) compressibility and initial trigonal boron content is demonstrated, as the trigonal boron units are the ones most disposed for structural and topological rearrangements upon network compaction. A linear correlation is also found between plastic compressibility and the relative change in hardness with pressure, which could indicate that the overall network densification is responsible for the increase in hardness. Finally, we find that the micromechanical properties exhibit significantly different composition dependences before and after pressurization. The findings have important implications for tailoring microscopic and macroscopic structures of glassy materials and thus their properties through the hot isostatic compression method.

  15. Superplastic boronizing of duplex stainless steel under dual compression method

    International Nuclear Information System (INIS)

    Jauhari, I.; Yusof, H.A.M.; Saidan, R.

    2011-01-01

    Highlights: → Superplastic boronizing. → A dual compression method has been developed. → Hard boride layer. → Bulk deformation significantly thickened the boronized layer. → New data on boronizing could expand the application of DSS in industries. - Abstract: In this work, SPB of duplex stainless steel (DSS) under a compression method is studied with the objective of producing an ultra-hard and thick boronized layer using a minimal amount of boron powder and at a much faster boronizing time compared to the conventional process. SPB is conducted under dual compression methods. In the first method, DSS is boronized using a minimal amount of boron powder under a fixed pre-strained compression condition throughout the process. The compression strain is controlled in such a way that plastic deformation is restricted to the surface asperities of the substrate in contact with the boron powder. In the second method, the boronized specimen taken from the first method is compressed superplastically up to a certain compressive strain under a certain strain-rate condition. The process in the second method is conducted without the presence of boron powder. Compared with the conventional boronizing process, through this SPB under dual compression methods, a much harder and thicker boronized layer can be produced using a minimal amount of boron powder.

  16. Superplastic boronizing of duplex stainless steel under dual compression method

    Energy Technology Data Exchange (ETDEWEB)

    Jauhari, I., E-mail: iswadi@um.edu.my [Department of Mechanical Engineering, Faculty of Engineering, University of Malaya, 50603 Kuala Lumpur (Malaysia); Yusof, H.A.M.; Saidan, R. [Department of Mechanical Engineering, Faculty of Engineering, University of Malaya, 50603 Kuala Lumpur (Malaysia)

    2011-10-25

    Highlights: → Superplastic boronizing. → A dual compression method has been developed. → Hard boride layer. → Bulk deformation significantly thickened the boronized layer. → New data on boronizing could expand the application of DSS in industries. - Abstract: In this work, SPB of duplex stainless steel (DSS) under a compression method is studied with the objective of producing an ultra-hard and thick boronized layer using a minimal amount of boron powder and at a much faster boronizing time compared to the conventional process. SPB is conducted under dual compression methods. In the first method, DSS is boronized using a minimal amount of boron powder under a fixed pre-strained compression condition throughout the process. The compression strain is controlled in such a way that plastic deformation is restricted to the surface asperities of the substrate in contact with the boron powder. In the second method, the boronized specimen taken from the first method is compressed superplastically up to a certain compressive strain under a certain strain-rate condition. The process in the second method is conducted without the presence of boron powder. Compared with the conventional boronizing process, through this SPB under dual compression methods, a much harder and thicker boronized layer can be produced using a minimal amount of boron powder.

  17. A Novel Range Compression Algorithm for Resolution Enhancement in GNSS-SARs

    Directory of Open Access Journals (Sweden)

    Yu Zheng

    2017-06-01

    Full Text Available In this paper, a novel range compression algorithm for enhancing the range resolution of a passive Global Navigation Satellite System-based Synthetic Aperture Radar (GNSS-SAR) is proposed. In the proposed algorithm, within each azimuth bin, range compression is first carried out by correlating a reflected GNSS intermediate frequency (IF) signal with a synchronized direct GNSS base-band signal in the range domain. Thereafter, spectrum equalization is applied to the compressed results to suppress side lobes and obtain a final range-compressed signal. Both theoretical analysis and simulation results have demonstrated that significant range resolution improvement in GNSS-SAR images can be achieved by the proposed range compression algorithm, compared to the conventional range compression algorithm.
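    The correlation-based range compression step can be sketched with a toy simulation (pure Python, time-domain correlation only; the paper's spectrum-equalization step is omitted, and the code, delay, and noise level are arbitrary):

```python
import random

random.seed(3)
N = 128
# Stand-in for a GNSS ranging code: a +/-1 pseudo-random sequence.
code = [random.choice((-1.0, 1.0)) for _ in range(N)]
delay = 37
# Reflected signal: a circularly delayed copy of the code plus noise.
reflected = [code[(n - delay) % N] + random.gauss(0, 0.1) for n in range(N)]

# Range compression: correlate the reflected signal with the direct
# reference code within one azimuth bin; the peak lag is the range bin.
corr = [sum(reflected[n] * code[(n - k) % N] for n in range(N))
        for k in range(N)]
print("estimated range bin:", corr.index(max(corr)))
```

The correlation concentrates the spread code energy into a single peak at the true delay; the paper's contribution is the subsequent equalization that sharpens this peak further.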

  18. Bystander fatigue and CPR quality by older bystanders: a randomized crossover trial comparing continuous chest compressions and 30:2 compressions to ventilations.

    Science.gov (United States)

    Liu, Shawn; Vaillancourt, Christian; Kasaboski, Ann; Taljaard, Monica

    2016-11-01

    This study sought to measure bystander fatigue and cardiopulmonary resuscitation (CPR) quality after five minutes of CPR using the continuous chest compression (CCC) versus the 30:2 chest compression to ventilation method in older lay persons, the population most likely to perform CPR on cardiac arrest victims. This randomized crossover trial took place at three tertiary care hospitals and a seniors' center. Participants were aged ≥55 years without significant physical limitations (frailty score ≤3/7). They completed two 5-minute CPR sessions (using 30:2 and CCC) on manikins; sessions were separated by a rest period. We used concealed block randomization to determine CPR method order. Metronome feedback maintained a compression rate of 100/minute. We measured heart rate (HR), mean arterial pressure (MAP), and the Borg Exertion Scale. CPR quality measures included the total number of compressions and the number of adequate compressions (depth ≥5 cm). Sixty-three participants were enrolled: mean age 70.8 years, female 66.7%, past CPR training 60.3%. Bystander fatigue was similar between CPR methods: mean difference in HR -0.59 (95% CI -3.51-2.33), MAP 1.64 (95% CI -0.23-3.50), and Borg 0.46 (95% CI 0.07-0.84). Compared to 30:2, participants using CCC performed more chest compressions (480.0 v. 376.3, mean difference 107.7). CPR quality decreased significantly faster when performing CCC compared to 30:2. However, performing CCC produced more adequate compressions overall, with a similar level of fatigue compared to the 30:2 method.
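    The reported compression counts are consistent with simple timing arithmetic; in this back-of-envelope model the ~5 s ventilation pause per 30:2 cycle is an assumption, not a value measured in the study:

```python
# Back-of-envelope model of the compression counts reported above. The
# ~5 s ventilation pause per 30:2 cycle is an assumed value, not a
# measurement from the study.
RATE = 100            # metronome-paced compressions per minute
DURATION_S = 5 * 60   # 5-minute CPR session

# Continuous chest compressions (CCC): no pauses at all.
ccc_total = RATE * DURATION_S / 60          # 500 ideal vs. 480.0 observed

# 30:2 method: 30 compressions (18 s at 100/min), then a ventilation pause.
pause_s = 5.0                               # assumed pause per cycle
cycle_s = 30 / RATE * 60 + pause_s          # 23 s per full cycle
cv_total = 30 * DURATION_S / cycle_s        # ~391 vs. 376.3 observed
print(round(ccc_total), round(cv_total))
```

Even this crude model reproduces the roughly 100-compression gap between the two methods over five minutes.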

  19. An efficient and extensible approach for compressing phylogenetic trees

    KAUST Repository

    Matthews, Suzanne J

    2011-01-01

    Background: Biologists require new algorithms to efficiently compress and store their large collections of phylogenetic trees. Our previous work showed that TreeZip is a promising approach for compressing phylogenetic trees. In this paper, we extend our TreeZip algorithm by handling trees with weighted branches. Furthermore, by using the compressed TreeZip file as input, we have designed an extensible decompressor that can extract subcollections of trees, compute majority and strict consensus trees, and merge tree collections using set operations such as union, intersection, and set difference. Results: On unweighted phylogenetic trees, TreeZip is able to compress Newick files in excess of 98%. On weighted phylogenetic trees, TreeZip is able to compress a Newick file by at least 73%. TreeZip can be combined with 7zip with little overhead, allowing space savings in excess of 99% (unweighted) and 92% (weighted). Unlike TreeZip, 7zip is not immune to branch rotations, and performs worse as the level of variability in the Newick string representation increases. Finally, since the TreeZip compressed text (TRZ) file contains all the semantic information in a collection of trees, we can easily filter and decompress a subset of trees of interest (such as the set of unique trees), or build the resulting consensus tree in a matter of seconds. We also show the ease with which set operations can be performed on TRZ files, at speeds quicker than on Newick or 7zip-compressed Newick files, and without loss of space savings. Conclusions: TreeZip is an efficient approach for compressing large collections of phylogenetic trees. The semantic and compact nature of the TRZ file allows it to be operated upon directly and quickly, without a need to decompress the original Newick file. We believe that TreeZip will be vital for compressing and archiving trees in the biological community. © 2011 Matthews and Williams; licensee BioMed Central Ltd.
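    The set operations described above can be sketched by representing each tree as its set of bipartitions (clades); the toy clade sets below are invented, and this encoding is only a conceptual stand-in for the TRZ format:

```python
from collections import Counter

# Toy stand-in for operating on a compressed tree collection: each tree
# is represented by its set of bipartitions (clades), so uniqueness,
# strict consensus, and majority consensus become plain set algebra.
t1 = frozenset({frozenset({"A", "B"}), frozenset({"A", "B", "C"})})
t2 = frozenset({frozenset({"A", "B"}), frozenset({"B", "C"})})
t3 = frozenset({frozenset({"A", "B"}), frozenset({"A", "B", "C"})})  # == t1

collection = [t1, t2, t3]
unique_trees = set(collection)                          # duplicates collapse
strict_consensus = frozenset.intersection(*collection)  # clades in every tree

clade_counts = Counter(c for t in collection for c in t)
majority = {c for c, n in clade_counts.items() if n > len(collection) / 2}
print(len(unique_trees), len(strict_consensus), len(majority))
```

Because the operations act on clade sets directly, no Newick string ever needs to be rebuilt, which mirrors the paper's claim about working on TRZ files without decompression.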

  20. Investigation on wind energy-compressed air power system.

    Science.gov (United States)

    Jia, Guang-Zheng; Wang, Xuan-Yin; Wu, Gen-Mao

    2004-03-01

    Wind energy is a pollution-free and renewable resource widely distributed over China. Aimed at protecting the environment and enlarging the application of wind energy, a new approach to applying wind energy by using compressed air power, to some extent instead of electricity, is put forward. This includes: explaining the working principles and characteristics of the wind energy-compressed air power system; discussing the compatibility of wind energy and compressor capacity; and presenting the theoretical model and computational simulation of the system. The obtained compressor capacity vs. wind power relationship in a certain wind velocity range can be helpful in the design of the wind power-compressed air system. Results of investigations on the application of high-pressure compressed air for pressure reduction led to the conclusion that pressure reduction with an expander is better than with a throttle regulator in terms of energy saving.
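    The expander-versus-throttle conclusion can be illustrated with ideal-gas work arithmetic (the pressures and temperature below are illustrative, not taken from the paper):

```python
import math

# Hedged sketch: why an expander beats a throttle for pressure reduction.
# A throttle (isenthalpic) recovers no work; an ideal isothermal expander
# recovers up to W = n*R*T*ln(P_high/P_low) per mole. Numbers are made up.
R = 8.314                    # J/(mol*K)
T = 293.15                   # K, ambient temperature
p_high, p_low = 30e5, 5e5    # Pa, upstream and downstream pressures

w_expander = R * T * math.log(p_high / p_low)   # J per mol of air
w_throttle = 0.0                                # isenthalpic: nothing recovered
print(f"recoverable work: {w_expander:.0f} J/mol vs {w_throttle:.0f} J/mol")
```

Every mole of air dropped from 30 bar to 5 bar through a throttle wastes the several kJ that an ideal expander could return to the system.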