WorldWideScience

Sample records for artificial compressibility method

  1. Development and evaluation of a novel lossless image compression method (AIC: artificial intelligence compression method) using neural networks as artificial intelligence

    International Nuclear Information System (INIS)

    Fukatsu, Hiroshi; Naganawa, Shinji; Yumura, Shinnichiro

    2008-01-01

This study aimed to validate the performance of a novel image compression method that uses a neural network to achieve lossless compression. The encoding consists of the following blocks: a prediction block; a residual data calculation block; a transformation and quantization block; an organization and modification block; and an entropy encoding block. The predicted image is divided into four macro-blocks, using the original image for teaching, and then redivided into sixteen sub-blocks. The predicted image is compared to the original image to create the residual image. The spatial and frequency data of the residual image are compared and transformed. Chest radiography, computed tomography (CT), magnetic resonance imaging, positron emission tomography, radioisotope mammography, ultrasonography, and digital subtraction angiography images were compressed using the AIC lossless compression method, and the compression rates were calculated. The compression rates were around 15:1 for chest radiography and mammography, around 12:1 for CT, and around 6:1 for the other images. This method thus enables greater lossless compression than conventional methods and should improve the efficiency of handling the increasing volume of medical imaging data. (author)
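The prediction/residual/entropy-coding pipeline described above can be illustrated with a deliberately minimal sketch. The left-neighbor predictor and the synthetic image below are inventions of this sketch, not the paper's neural-network predictor; the point is only that a good predictor leaves a low-entropy residual for the entropy coder.

```python
import numpy as np

def predict_left_neighbor(img):
    """Toy predictor: each pixel is predicted by its left neighbor.
    (The paper's predictor is a trained neural network; this stand-in
    only illustrates the pipeline.)"""
    pred = np.empty_like(img)
    pred[:, 0] = img[:, 0]        # first column predicts itself
    pred[:, 1:] = img[:, :-1]     # left-neighbor prediction
    return pred

def entropy_bits_per_pixel(a):
    """Shannon entropy of the value histogram: a lower bound on the
    bits/pixel an ideal entropy coder would need."""
    _, counts = np.unique(a, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Smooth synthetic "image": prediction leaves a small, low-entropy residual.
row = np.linspace(0, 255, 64)
img = np.tile(row, (64, 1)).astype(np.int16)
residual = img - predict_left_neighbor(img)
print(entropy_bits_per_pixel(img), entropy_bits_per_pixel(residual))
```

Because the residual entropy is far below the raw-image entropy, the entropy-coding stage compresses losslessly; decoding simply reverses the chain (entropy decode, then add back the prediction).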

  2. Treatment of fully enclosed FSI using artificial compressibility

    CSIR Research Space (South Africa)

    Bogaers, Alfred EJ

    2013-07-01

Full Text Available artificial compressibility (AC), whereby the fluid equations are modified to allow for compressibility, internally incorporating an approximation of the system volume change as a function of pressure....

  3. Stability of Bifurcating Stationary Solutions of the Artificial Compressible System

    Science.gov (United States)

    Teramoto, Yuka

    2018-02-01

The artificial compressible system gives a compressible approximation of the incompressible Navier-Stokes system. The latter system is obtained from the former in the zero limit of the artificial Mach number ɛ, which is a singular limit. The sets of stationary solutions of the two systems coincide. It is known that if a stationary solution of the incompressible system is asymptotically stable and the velocity field of the stationary solution satisfies an energy-type stability criterion, then it is also stable as a solution of the artificial compressible system for sufficiently small ɛ. In general, the range of ɛ shrinks as the spectrum of the linearized operator for the incompressible system approaches the imaginary axis, which can happen when a stationary bifurcation occurs. It is proved that when a stationary bifurcation from a simple eigenvalue occurs, the range of ɛ can be taken uniformly near the bifurcation point, establishing the stability of the bifurcating solution as a solution of the artificial compressible system.
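For context, the artificial compressible system referred to here is usually written as follows (this is Chorin's classical formulation; the paper's scaling may differ in detail):

```latex
\begin{aligned}
  \varepsilon^{2}\,\partial_{t} p + \operatorname{div} v &= 0, \\
  \partial_{t} v + (v \cdot \nabla) v - \nu \Delta v + \nabla p &= g .
\end{aligned}
```

Formally letting the artificial Mach number \(\varepsilon \to 0\) removes the time derivative of the pressure and recovers the incompressible constraint \(\operatorname{div} v = 0\), which is why the limit is singular even though the stationary solutions coincide for every \(\varepsilon > 0\).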

  4. Extending the robustness and efficiency of artificial compressibility for partitioned fluid-structure interactions

    CSIR Research Space (South Africa)

    Bogaers, Alfred EJ

    2015-01-01

Full Text Available In this paper we introduce the idea of combining artificial compressibility (AC) with quasi-Newton (QN) methods to solve strongly coupled, fully or quasi-enclosed fluid-structure interaction (FSI) problems. Partitioned, incompressible FSI based...
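As a toy illustration of the AC idea shared by this record and record 2, the sketch below drives a 1D periodic velocity field to a divergence-free pseudo-time steady state. The friction term gamma*u is an assumption added so the bare explicit iteration converges; production AC/FSI solvers rely instead on viscous terms, implicit pseudo-time stepping and, in the paper above, QN coupling.

```python
import numpy as np

def ac_relax(u, dx, beta=1.0, gamma=1.0, dtau=0.02, iters=2000):
    """Damped artificial-compressibility pseudo-time iteration on a
    1D periodic grid:
        dp/dtau = -beta * du/dx
        du/dtau = -dp/dx - gamma * u
    The friction gamma*u is an artifice of this sketch so that the
    explicit iteration converges; it is not part of the AC method proper."""
    p = np.zeros_like(u)
    for _ in range(iters):
        dudx = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
        p = p - dtau * beta * dudx
        dpdx = (np.roll(p, -1) - np.roll(p, 1)) / (2 * dx)
        u = u - dtau * (dpdx + gamma * u)
    return u, p

n = 32
dx = 2 * np.pi / n
x = np.arange(n) * dx
u, p = ac_relax(np.sin(x), dx)      # start from a strongly divergent field
div = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
print(np.abs(div).max())            # ~0: the pseudo-time steady state is divergence-free
```

At steady state the pressure derivative vanishes, so the divergence constraint is recovered: this is the "internal" compressibility approximation the truncated abstract of record 2 alludes to.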

  5. Survey of numerical methods for compressible fluids

    Energy Technology Data Exchange (ETDEWEB)

    Sod, G A

    1977-06-01

The finite difference methods of Godunov, Hyman, Lax-Wendroff (two-step), MacCormack, and Rusanov, the upwind scheme, the hybrid scheme of Harten and Zwas, the antidiffusion method of Boris and Book, and the artificial compression method of Harten are compared with the random choice method, known as Glimm's method. The methods are used to integrate the one-dimensional equations of gas dynamics for an inviscid fluid. The results are compared and demonstrate that Glimm's method has several advantages. 16 figs., 4 tables.
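The Sod shock-tube problem from this survey is easy to reproduce. The sketch below integrates the 1D Euler equations with the Rusanov scheme, among the most dissipative of the compared class, using a crude fixed CFL bound (an assumption of this sketch rather than the survey's setup):

```python
import numpy as np

GAMMA = 1.4

def euler_flux(q):
    """Flux of the 1D Euler equations; q rows are [rho, rho*u, E]."""
    rho, mom, E = q
    u = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u**2)
    return np.array([mom, mom * u + p, (E + p) * u])

def rusanov_step(q, dx, dt):
    """One explicit step of the Rusanov scheme (local max-speed dissipation)."""
    rho, mom, E = q
    u = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u**2)
    a = np.abs(u) + np.sqrt(GAMMA * p / rho)        # wave-speed bound per cell
    f = euler_flux(q)
    amax = np.maximum(a[:-1], a[1:])
    fhat = 0.5 * (f[:, :-1] + f[:, 1:]) - 0.5 * amax * (q[:, 1:] - q[:, :-1])
    qn = q.copy()
    qn[:, 1:-1] -= dt / dx * (fhat[:, 1:] - fhat[:, :-1])
    return qn

# Sod's shock-tube initial data on x in [0, 1], diaphragm at x = 0.5
n = 200
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx
rho = np.where(x < 0.5, 1.0, 0.125)
p = np.where(x < 0.5, 1.0, 0.1)
q = np.array([rho, np.zeros(n), p / (GAMMA - 1.0)])
t = 0.0
while t < 0.2 - 1e-12:
    dt = min(0.9 * dx / 3.0, 0.2 - t)   # 3.0 safely bounds the max wave speed here
    q = rusanov_step(q, dx, dt)
    t += dt
```

By t = 0.2 the density profile shows the classic rarefaction/contact/shock pattern, heavily smeared by the scheme's dissipation, which is precisely the behaviour the survey quantifies against Glimm's method.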

  6. A relaxation-projection method for compressible flows. Part II: Artificial heat exchanges for multiphase shocks

    International Nuclear Information System (INIS)

    Petitpas, Fabien; Franquet, Erwin; Saurel, Richard; Le Metayer, Olivier

    2007-01-01

The relaxation-projection method developed in Saurel et al. [R. Saurel, E. Franquet, E. Daniel, O. Le Metayer, A relaxation-projection method for compressible flows. Part I: The numerical equation of state for the Euler equations, J. Comput. Phys. (2007) 822-845] is extended to the non-conservative hyperbolic multiphase flow model of Kapila et al. [A.K. Kapila, Menikoff, J.B. Bdzil, S.F. Son, D.S. Stewart, Two-phase modeling of deflagration to detonation transition in granular materials: reduced equations, Physics of Fluids 13(10) (2001) 3002-3024]. This model has the ability to treat multi-temperature mixtures evolving with a single pressure and velocity and is particularly interesting for the computation of interface problems with compressible materials as well as wave propagation in heterogeneous mixtures. The non-conservative character of this model, however, poses computational challenges in the presence of shocks. The first issue is related to the resolution of the Riemann problem, which necessitates shock jump conditions. Thanks to the Rankine-Hugoniot relations proposed and validated in Saurel et al. [R. Saurel, O. Le Metayer, J. Massoni, S. Gavrilyuk, Shock jump conditions for multiphase mixtures with stiff mechanical relaxation, Shock Waves 16 (3) (2007) 209-232], exact and approximate two-shock Riemann solvers are derived. However, the Riemann solver is only a part of a numerical scheme, and non-conservative variables pose extra difficulties for the projection or cell average of the solution. It is shown that conventional Godunov schemes are unable to converge to the exact solution for strong multiphase shocks. This is due to the incorrect partition of the energies or entropies in the cell-averaged mixture. To circumvent this difficulty a specific Lagrangian scheme is developed. The correct partition of the energies is achieved by using an artificial heat exchange in the shock layer. With the help of an asymptotic analysis this heat exchange takes a similar form as...
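The multiphase jump conditions themselves are beyond a short sketch, but the single-phase Rankine-Hugoniot relations they generalize can be checked mechanically. The following sketch verifies that mass, momentum and energy fluxes balance across a normal shock in the shock frame:

```python
import numpy as np

GAMMA = 1.4  # single-phase ideal gas, the simplest setting for RH relations

def post_shock_state(rho1, u1, p1, mach):
    """Classical normal-shock Rankine-Hugoniot relations for upstream
    Mach number `mach` (right-running shock)."""
    c1 = np.sqrt(GAMMA * p1 / rho1)
    s = u1 + mach * c1                                  # shock speed
    m2 = mach**2
    rho2 = rho1 * (GAMMA + 1) * m2 / ((GAMMA - 1) * m2 + 2)
    p2 = p1 * (2 * GAMMA * m2 - (GAMMA - 1)) / (GAMMA + 1)
    u2 = s - rho1 * (s - u1) / rho2                     # from mass conservation
    return rho2, u2, p2, s

def shock_frame_fluxes(rho, u, p, s):
    """[mass, momentum, energy] fluxes in the frame moving with the shock."""
    w = u - s
    H = GAMMA * p / ((GAMMA - 1) * rho) + 0.5 * w**2    # total enthalpy
    return np.array([rho * w, rho * w**2 + p, rho * w * H])

rho2, u2, p2, s = post_shock_state(1.0, 0.0, 1.0, mach=2.0)
jump = shock_frame_fluxes(1.0, 0.0, 1.0, s) - shock_frame_fluxes(rho2, u2, p2, s)
print(jump)   # ~[0 0 0]: all three fluxes balance across the shock
```

In the non-conservative multiphase model such algebraic closure is exactly what is missing, which is why the paper's artificial heat exchange is needed to distribute the energies correctly inside the shock layer.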

  7. A relaxation-projection method for compressible flows. Part II: Artificial heat exchanges for multiphase shocks

    Science.gov (United States)

    Petitpas, Fabien; Franquet, Erwin; Saurel, Richard; Le Metayer, Olivier

    2007-08-01

The relaxation-projection method developed in Saurel et al. [R. Saurel, E. Franquet, E. Daniel, O. Le Metayer, A relaxation-projection method for compressible flows. Part I: The numerical equation of state for the Euler equations, J. Comput. Phys. (2007) 822-845] is extended to the non-conservative hyperbolic multiphase flow model of Kapila et al. [A.K. Kapila, Menikoff, J.B. Bdzil, S.F. Son, D.S. Stewart, Two-phase modeling of deflagration to detonation transition in granular materials: reduced equations, Physics of Fluids 13(10) (2001) 3002-3024]. This model has the ability to treat multi-temperature mixtures evolving with a single pressure and velocity and is particularly interesting for the computation of interface problems with compressible materials as well as wave propagation in heterogeneous mixtures. The non-conservative character of this model, however, poses computational challenges in the presence of shocks. The first issue is related to the resolution of the Riemann problem, which necessitates shock jump conditions. Thanks to the Rankine-Hugoniot relations proposed and validated in Saurel et al. [R. Saurel, O. Le Metayer, J. Massoni, S. Gavrilyuk, Shock jump conditions for multiphase mixtures with stiff mechanical relaxation, Shock Waves 16 (3) (2007) 209-232], exact and approximate two-shock Riemann solvers are derived. However, the Riemann solver is only a part of a numerical scheme, and non-conservative variables pose extra difficulties for the projection or cell average of the solution. It is shown that conventional Godunov schemes are unable to converge to the exact solution for strong multiphase shocks. This is due to the incorrect partition of the energies or entropies in the cell-averaged mixture. To circumvent this difficulty a specific Lagrangian scheme is developed. The correct partition of the energies is achieved by using an artificial heat exchange in the shock layer. With the help of an asymptotic analysis this heat exchange takes a similar form as...

  8. [Artificial muscle and its prospect in application for direct cardiac compression assist].

    Science.gov (United States)

    Dong, Jing; Yang, Ming; Zheng, Zhejun; Yan, Guozheng

    2008-12-01

The artificial heart is an effective device for addressing the shortage of native donor hearts for transplantation, and the research and application of novel actuators play an important role in the development of artificial hearts. In this paper, artificial muscle is introduced as an actuator for direct cardiac compression assist, and some of its parameters are compared with those of native heart muscle. Open problems are also discussed.

  9. Preconditioned characteristic boundary conditions based on artificial compressibility method for solution of incompressible flows

    Science.gov (United States)

    Hejranfar, Kazem; Parseh, Kaveh

    2017-09-01

The preconditioned characteristic boundary conditions based on the artificial compressibility (AC) method are implemented at artificial boundaries for the solution of two- and three-dimensional incompressible viscous flows in generalized curvilinear coordinates. The compatibility equations and the corresponding characteristic variables (or Riemann invariants) are mathematically derived and then applied as suitable boundary conditions in a high-order accurate incompressible flow solver. The spatial discretization of the resulting system of equations is carried out by a fourth-order compact finite-difference (FD) scheme. In the preconditioning applied here, the value of the AC parameter in the flow field, and also at the far-field boundary, is automatically calculated based on the local flow conditions to enhance the robustness and performance of the solution algorithm. The code is fully parallelized using the Concurrency Runtime standard and Parallel Patterns Library (PPL), and its performance on a multi-core CPU is analyzed. The incompressible viscous flows around a 2-D circular cylinder, a 2-D NACA0012 airfoil and a 3-D wavy cylinder are simulated, and the accuracy and performance of the preconditioned characteristic boundary conditions applied at the far-field boundaries are evaluated in comparison to simplified boundary conditions and non-preconditioned characteristic boundary conditions. The results indicate that the preconditioned characteristic boundary conditions considerably improve the convergence rate of the solution of incompressible flows compared to the other boundary conditions, and that the computational costs are significantly decreased.
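In one space dimension, and before any preconditioning, the AC system underlying these characteristic boundary conditions can be sketched as (a standard form; the paper works in generalized curvilinear coordinates with preconditioning):

```latex
\partial_{\tau} p + \beta\,\partial_{x} u = 0, \qquad
\partial_{\tau} u + \partial_{x}\!\left(u^{2} + p\right) = 0, \qquad
\lambda_{\pm} = u \pm \sqrt{u^{2} + \beta}.
```

The pseudo-time characteristic speeds \(\lambda_{\pm}\) are real for every \(\beta > 0\), so the number of conditions to impose at an artificial boundary follows from the signs of \(\lambda_{\pm}\) there, exactly as for a genuinely compressible flow.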

  10. Bacterial DNA Sequence Compression Models Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Armando J. Pinho

    2013-08-01

Full Text Available It is widely accepted that the advances in DNA sequencing techniques have contributed to an unprecedented growth of genomic data. This fact has increased the interest in DNA compression, not only from the information theory and biology points of view, but also from a practical perspective, since such sequences require storage resources. Several compression methods exist, and, in particular, those using finite-context models (FCMs) have received increasing attention, as they have been proven to compress DNA sequences effectively with low bits-per-base as well as low encoding/decoding time-per-base. However, the amount of run-time memory required to store high-order finite-context models may become impractical, since a context order as low as 16 requires a maximum of 17.2 x 10^9 memory entries. This paper presents a method to reduce this memory requirement through a novel application of artificial neural networks (ANNs), which build such probabilistic models in a compact way, and shows how to use them to estimate the probabilities. Such a system was implemented, and its performance was compared against state-of-the-art compressors, such as XM-DNA (expert model) and FCM-Mx (mixture of finite-context models), as well as against general-purpose compressors. Using a combination of an order-10 FCM and an ANN, encoding results similar to those of FCMs up to order 16 are obtained using only 17 megabytes of memory, whereas the latter, even employing hash tables, use several hundreds of megabytes.
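The finite-context-model component is easy to sketch. The adaptive order-k model below accumulates the ideal code length (in bits per base) that an arithmetic coder driven by its probabilities would achieve; the paper's contribution, not shown here, is replacing the count table with a compact neural network.

```python
import math
from collections import defaultdict

def fcm_bits_per_base(seq, order=3, alpha=1.0):
    """Adaptive finite-context model: P(next base | previous `order`
    bases) from running counts with additive smoothing. The summed
    -log2 P is the code length an arithmetic coder driven by these
    probabilities would produce."""
    counts = defaultdict(lambda: defaultdict(int))
    alphabet = sorted(set(seq))
    bits = 0.0
    for i in range(order, len(seq)):
        ctx, sym = seq[i - order:i], seq[i]
        total = sum(counts[ctx].values())
        prob = (counts[ctx][sym] + alpha) / (total + alpha * len(alphabet))
        bits -= math.log2(prob)
        counts[ctx][sym] += 1
    return bits / (len(seq) - order)

print(fcm_bits_per_base("ACGT" * 500))   # highly predictable: far below 2 bits/base
```

An order-16 model over {A, C, G, T} indexes 4^16 (about 4.3 x 10^9) contexts, each with four counters, which is where the memory figure quoted above comes from and what the ANN is designed to avoid.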

  11. Prediction of compressibility parameters of the soils using artificial neural network.

    Science.gov (United States)

    Kurnaz, T Fikret; Dagdeviren, Ugur; Yildiz, Murat; Ozkan, Ozhan

    2016-01-01

The compression index and the recompression index are among the important compressibility parameters for settlement calculations in fine-grained soil layers. These parameters can be determined by carrying out laboratory oedometer tests on undisturbed samples; however, the test is quite time-consuming and expensive. Therefore, many empirical formulas based on regression analysis have been presented to estimate the compressibility parameters from soil index properties. In this paper, an artificial neural network (ANN) model is suggested for the prediction of compressibility parameters from basic soil properties. For this purpose, the input parameters are selected as the natural water content, initial void ratio, liquid limit and plasticity index. In this model, two output parameters, the compression index and the recompression index, are predicted in a combined network structure. As a result of the study, the proposed ANN model is successful for the prediction of the compression index; however, the predicted recompression index values are less satisfactory than those for the compression index.
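A minimal stand-in for such a combined network can be written in a few lines of NumPy. The data below are synthetic and the linear "ground truth" relation is invented purely for illustration (it is not a published soil correlation); the sketch only shows the two-output regression structure the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: (natural water content, initial void ratio,
# liquid limit, plasticity index) -> (Cc, Cr). The linear "ground truth"
# below is invented for illustration; it is NOT a published correlation.
X = rng.uniform([10, 0.5, 20, 5], [60, 1.5, 80, 40], size=(200, 4))
true_W = np.array([[0.004, 0.0005],
                   [0.3,   0.03],
                   [0.005, 0.0006],
                   [0.002, 0.0002]])
Y = X @ true_W + rng.normal(0.0, 0.01, (200, 2))

# One hidden tanh layer, two linear outputs (Cc and Cr), plain gradient descent.
Xn = (X - X.mean(0)) / X.std(0)
W1 = 0.3 * rng.standard_normal((4, 8)); b1 = np.zeros(8)
W2 = 0.3 * rng.standard_normal((8, 2)); b2 = np.zeros(2)
lr = 0.1
for _ in range(5000):
    H = np.tanh(Xn @ W1 + b1)
    P = H @ W2 + b2
    G = 2.0 * (P - Y) / len(Y)              # gradient of the MSE w.r.t. P
    GH = (G @ W2.T) * (1.0 - H**2)          # backprop through tanh
    W2 -= lr * (H.T @ G);  b2 -= lr * G.sum(0)
    W1 -= lr * (Xn.T @ GH); b1 -= lr * GH.sum(0)

mse = ((np.tanh(Xn @ W1 + b1) @ W2 + b2 - Y) ** 2).mean()
print(mse)   # well below the variance of Y: both indices are learned jointly
```

The combined two-output structure means both indices share the hidden representation, which is the design choice the paper investigates.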

  12. Estimation of mechanical properties of nanomaterials using artificial intelligence methods

    Science.gov (United States)

    Vijayaraghavan, V.; Garg, A.; Wong, C. H.; Tai, K.

    2014-09-01

Computational modeling tools such as molecular dynamics (MD), ab initio, finite element modeling or continuum mechanics models have been extensively applied to study the properties of carbon nanotubes (CNTs) based on given input variables such as temperature, geometry and defects. Artificial intelligence techniques can be used to further complement the application of numerical methods in characterizing the properties of CNTs. In this paper, we introduce the application of multi-gene genetic programming (MGGP) and support vector regression to formulate the mathematical relationship between the compressive strength of CNTs and input variables such as temperature and diameter. The predictions of compressive strength made by these models are compared to those generated using MD simulations. The results indicate that the MGGP method can be deployed as a powerful method for predicting the compressive strength of carbon nanotubes.

  13. Assessment of high-resolution methods for numerical simulations of compressible turbulence with shock waves

    International Nuclear Information System (INIS)

    Johnsen, Eric; Larsson, Johan; Bhagatwala, Ankit V.; Cabot, William H.; Moin, Parviz; Olson, Britton J.; Rawat, Pradeep S.; Shankar, Santhosh K.; Sjoegreen, Bjoern; Yee, H.C.; Zhong Xiaolin; Lele, Sanjiva K.

    2010-01-01

Flows in which shock waves and turbulence are present and interact dynamically occur in a wide range of applications, including inertial confinement fusion, supernova explosions, and scramjet propulsion. Accurate simulations of such problems are challenging because of the contradictory requirements of numerical methods used to simulate turbulence, which must minimize any numerical dissipation that would otherwise overwhelm the small scales, and shock-capturing schemes, which introduce numerical dissipation to stabilize the solution. The objective of the present work is to evaluate the performance of several numerical methods capable of simultaneously handling turbulence and shock waves. A comprehensive range of high-resolution methods (WENO, hybrid WENO/central difference, artificial diffusivity, adaptive characteristic-based filter, and shock fitting) and a suite of test cases (Taylor-Green vortex, Shu-Osher problem, shock-vorticity/entropy wave interaction, Noh problem, compressible isotropic turbulence) relevant to problems with shocks and turbulence are considered. The results indicate that the WENO methods provide sharp shock profiles but overwhelm the physical dissipation. The hybrid method is minimally dissipative and leads to sharp shocks and well-resolved broadband turbulence, but relies on an appropriate shock sensor. Artificial diffusivity methods in which the artificial bulk viscosity is based on the magnitude of the strain-rate tensor resolve vortical structures well but damp dilatational modes in compressible turbulence; dilatation-based artificial bulk viscosity methods significantly improve this behavior. For well-defined shocks, the shock fitting approach yields good results.
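The mechanism behind the WENO observations above is visible in the nonlinear weights themselves. The sketch below evaluates the classical Jiang-Shu WENO5 weights on a smooth stencil and on one containing a jump:

```python
import numpy as np

def weno5_weights(f):
    """Nonlinear WENO5 weights for the reconstruction at the interface
    between f[2] and f[3], from the Jiang-Shu smoothness indicators of
    the three candidate substencils."""
    eps = 1e-6
    b0 = 13/12*(f[0] - 2*f[1] + f[2])**2 + 1/4*(f[0] - 4*f[1] + 3*f[2])**2
    b1 = 13/12*(f[1] - 2*f[2] + f[3])**2 + 1/4*(f[1] - f[3])**2
    b2 = 13/12*(f[2] - 2*f[3] + f[4])**2 + 1/4*(3*f[2] - 4*f[3] + f[4])**2
    d = np.array([0.1, 0.6, 0.3])               # ideal (linear) weights
    a = d / (eps + np.array([b0, b1, b2]))**2
    return a / a.sum()

smooth = weno5_weights(np.array([1.0, 1.1, 1.2, 1.3, 1.4]))
shock = weno5_weights(np.array([1.0, 1.0, 1.0, 10.0, 10.0]))
print(smooth)   # ~[0.1, 0.6, 0.3]: fifth-order behaviour on smooth data
print(shock)    # ~[1, 0, 0]: all weight on the substencil avoiding the jump
```

On smooth data the nonlinear weights revert to the ideal ones (fifth-order accuracy); near the jump the scheme effectively drops to the one smooth substencil, which keeps the shock sharp but injects the extra dissipation the study measures.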

  14. Artificial neural network does better spatiotemporal compressive sampling

    Science.gov (United States)

    Lee, Soo-Young; Hsu, Charles; Szu, Harold

    2012-06-01

Spatiotemporal sparseness is generated naturally by the human visual system, modeled here with an artificial-neural-network associative memory. Sparseness means nothing more and nothing less than that compressive sensing achieves information concentration. To concentrate the information, one uses spatial correlation, the spatial FFT, the DWT, or, best of all, an adaptive wavelet transform (cf. NUS, Shen Shawei). However, for higher-dimensional spatiotemporal information concentration, mathematics cannot be as flexible as a living human sensory system, evidently for survival reasons. The rest of the story is given in the paper.
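The "information concentration" the abstract appeals to can be made concrete with any orthonormal wavelet transform. The sketch below uses a plain (non-adaptive) Haar transform as a stand-in and measures how much signal energy the largest 5% of coefficients retain:

```python
import numpy as np

def haar_dwt(x, levels=5):
    """Multi-level orthonormal Haar transform: a fixed, non-adaptive
    stand-in for the transforms discussed above."""
    c = np.asarray(x, dtype=float)
    out = []
    for _ in range(levels):
        avg = (c[0::2] + c[1::2]) / np.sqrt(2)
        det = (c[0::2] - c[1::2]) / np.sqrt(2)
        out.append(det)
        c = avg
    out.append(c)
    return np.concatenate(out[::-1])

# Piecewise-smooth signal: half a sine wave, then a constant.
n = 1024
t = np.linspace(0, 1, n)
x = np.where(t < 0.5, np.sin(2 * np.pi * t), 0.2)
w = haar_dwt(x)
k = n // 20                                    # keep only the largest 5%
top = np.sort(np.abs(w))[::-1][:k]
energy_kept = (top**2).sum() / (w**2).sum()
print(energy_kept)                             # > 0.99: strong concentration
```

The same 5% of raw samples would retain far less energy; this gap between the two domains is exactly the concentration that makes sparse representation, and hence compressive sampling, possible.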

  15. Prediction of compression strength of high performance concrete using artificial neural networks

    International Nuclear Information System (INIS)

    Torre, A; Moromi, I; Garcia, F; Espinoza, P; Acuña, L

    2015-01-01

High-strength concrete is undoubtedly one of the most innovative materials in construction. Its manufacture is simple and starts from essential components (water, cement, fine and coarse aggregates) and a number of additives, whose proportions have a strong influence on the final strength of the product. These relations do not seem to follow a mathematical formula, and yet knowledge of them is crucial to optimize the quantities of raw materials used in the manufacture of concrete. Of all mechanical properties, concrete compressive strength at 28 days is most often used for quality control. Therefore, it would be important to have a tool to numerically model such relationships, even before processing. In this respect, artificial neural networks have proven to be a powerful modeling tool, especially when the relationships between the variables involved in the process are not explicitly known. This research designed an artificial neural network to model the compressive strength of concrete from its manufacturing parameters, obtaining correlations of the order of 0.94.

  16. Methods for Sampling and Measurement of Compressed Air Contaminants

    Energy Technology Data Exchange (ETDEWEB)

    Stroem, L

    1976-10-15

In order to improve the technique for measuring oil and water entrained in a compressed air stream, a laboratory study has been made of some methods for sampling and measurement. For this purpose, water or oil as artificial contaminants was injected in thin streams into a test loop carrying dry compressed air. Sampling was performed in a vertical run, downstream of the injection point. Wall-attached liquid, coarse droplet flow, and fine droplet flow were sampled separately. The results were compared with two-phase flow theory and direct observation of liquid behaviour. In a study of sample transport through narrow tubes, it was observed that, below a certain liquid loading, the sample did not move, the liquid remaining stationary on the tubing wall. The basic analysis of the collected samples was made by gravimetric methods. Adsorption tubes were used with success to measure water vapour. A humidity meter with a sensor of the aluminium oxide type was found to be unreliable. Oil could be measured selectively by a flame ionization detector, the sample being pretreated in an evaporation-condensation unit.

  17. Methods for Sampling and Measurement of Compressed Air Contaminants

    International Nuclear Information System (INIS)

    Stroem, L.

    1976-10-01

In order to improve the technique for measuring oil and water entrained in a compressed air stream, a laboratory study has been made of some methods for sampling and measurement. For this purpose, water or oil as artificial contaminants was injected in thin streams into a test loop carrying dry compressed air. Sampling was performed in a vertical run, downstream of the injection point. Wall-attached liquid, coarse droplet flow, and fine droplet flow were sampled separately. The results were compared with two-phase flow theory and direct observation of liquid behaviour. In a study of sample transport through narrow tubes, it was observed that, below a certain liquid loading, the sample did not move, the liquid remaining stationary on the tubing wall. The basic analysis of the collected samples was made by gravimetric methods. Adsorption tubes were used with success to measure water vapour. A humidity meter with a sensor of the aluminium oxide type was found to be unreliable. Oil could be measured selectively by a flame ionization detector, the sample being pretreated in an evaporation-condensation unit.

  18. Robustly Fitting and Forecasting Dynamical Data With Electromagnetically Coupled Artificial Neural Network: A Data Compression Method.

    Science.gov (United States)

    Wang, Ziyin; Liu, Mandan; Cheng, Yicheng; Wang, Rubin

    2017-06-01

In this paper, a dynamical recurrent artificial neural network (ANN) is proposed and studied. Inspired by recent research in neuroscience, we introduce nonsynaptic coupling to form a dynamical component of the network. We mathematically prove that, with adequate neurons provided, this dynamical ANN model is capable of approximating any continuous dynamic system with an arbitrarily small error over a limited time interval. Its extremely concise Jacobian matrix makes the local stability easy to control. We designed this ANN for fitting and forecasting dynamic data and obtained satisfactory results in simulation. The fitting performance is also compared with those of both the classic dynamic ANN and state-of-the-art models; sufficient trials and the statistical results indicate that our model is superior to those compared. Moreover, we propose a robust approximation problem, which asks the ANN to approximate a cluster of input-output data pairs over large ranges and to forecast the output of the system under previously unseen input. The model and learning scheme proposed in this paper successfully solve this problem, making the approximation much more robust and adaptive to noise, perturbation, and low-order harmonic waves. This approach is in effect an efficient method for compressing massive external data of a dynamic system into the weights of the ANN.
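The closing idea, compressing a long data record into a small set of learned weights, can be illustrated far more modestly with a linear surrogate. Least-squares fitting of x_{t+1} = A x_t below is a drastic simplification of the paper's recurrent ANN, but it shows the same fit-then-forecast loop:

```python
import numpy as np

# Damped-rotation dynamics: 401 observed states, only 4 unknown weights.
theta = 0.1
A_true = 0.999 * np.array([[np.cos(theta), -np.sin(theta)],
                           [np.sin(theta),  np.cos(theta)]])
xs = [np.array([1.0, 0.0])]
for _ in range(400):
    xs.append(A_true @ xs[-1])
X = np.stack(xs)

# "Compress" the record into the weights: least-squares fit of x_{t+1} = x_t A.
A_fit, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)

# Forecast 50 steps beyond the training record and compare with the truth.
x = X[-1]
for _ in range(50):
    x = x @ A_fit
err = np.linalg.norm(x - np.linalg.matrix_power(A_true, 50) @ X[-1])
print(err)   # ~0: the whole record is captured by the 4 fitted weights
```

Note the row-vector convention: x @ A_fit applies the learned map, so A_fit converges to the transpose of A_true.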

  19. Determination of deformation and strength characteristics of artificial geomaterial having step-shaped discontinuities under uniaxial compression

    Science.gov (United States)

    Tsoy, PA

    2018-03-01

In order to determine the empirical relationship between the linear dimensions of step-shaped macrocracks in geomaterials and the deformation and strength characteristics of the geomaterials (ultimate strength, modulus of deformation) under uniaxial compression, artificial flat alabaster specimens with through discontinuities were manufactured and subjected to a series of physical tests.

  20. [Research progress on mechanical performance evaluation of artificial intervertebral disc].

    Science.gov (United States)

    Li, Rui; Wang, Song; Liao, Zhenhua; Liu, Weiqiang

    2018-03-01

The mechanical properties of an artificial intervertebral disc (AID) are related to the long-term reliability of the prosthesis. Three testing approaches, based on different tools, are involved in the mechanical performance evaluation of AID: testing with a mechanical simulator, in vitro specimen testing, and finite element analysis. In this study, the testing standards, testing equipment and materials for AID are first introduced. Then the present status of AID static mechanical property tests (static axial compression, static axial compression-shear), dynamic mechanical property tests (dynamic axial compression, dynamic axial compression-shear), creep and stress relaxation tests, device push-out tests, core push-out tests, subsidence tests, etc. is reviewed. The experimental techniques of in vitro specimen testing and the test results for available artificial discs are summarized, as are the experimental methods and research status of finite element analysis. Finally, research trends in AID mechanical performance evaluation are forecast: the simulator, load, dynamic cycle, motion mode, specimen and test standard will be important research fields in the future.

  1. Double-compression method for biomedical images

    Science.gov (United States)

Antonenko, Yevhenii A.; Mustetsov, Timofey N.; Hamdi, Rami R.; Małecka-Massalska, Teresa; Orshubekov, Nurbek; Dzierżak, Róża; Uvaysova, Svetlana

    2017-08-01

This paper describes a double compression method (DCM) for biomedical images. A comparison of image compression factors among JPEG, PNG and the developed DCM was carried out. The main purpose of the DCM is compression of medical images while maintaining the key points that carry diagnostic information. To estimate the minimum compression factor, an analysis of the coding of a random-noise image is presented.

  2. Comparison of chest compression quality between the modified chest compression method with the use of smartphone application and the standardized traditional chest compression method during CPR.

    Science.gov (United States)

    Park, Sang-Sub

    2014-01-01

The purpose of this study was to assess the difference in chest compression quality between a modified chest compression method using a smartphone application and the standardized traditional chest compression method. Of the 70 people who agreed to participate after completing the CPR curriculum, 64 took part (6 were absent). Participants using the modified method formed the smartphone group (33 people); those using the standardized method formed the traditional group (31 people). Both groups used the same practice and evaluation manikins; the smartphone group ran the application on two smartphone products (G, i) under the Android and iOS operating systems (OS). Measurements were conducted on September 25-26, 2012, and the data were analyzed with the SPSS WIN 12.0 program. Compression depth was more appropriate (p< 0.01) in the traditional group (53.77 mm) than in the smartphone group (48.35 mm), and the proportion of proper chest compressions was also higher (p< 0.05) in the traditional group (73.96%) than in the smartphone group (60.51%). Awareness of chest compression accuracy was likewise higher (p< 0.001) in the traditional group (3.83 points) than in the smartphone group (2.32 points). In an additional questionnaire administered only to the smartphone group, the main reasons given against the modified method were the occurrence of hand back pain in the rescuer (48.5%) and unstable posture (21.2%).

  3. Using an artificial neural network to predict carbon dioxide compressibility factor at high pressure and temperature

    Energy Technology Data Exchange (ETDEWEB)

Mohagheghian, Erfan [Memorial University of Newfoundland, St. John's (Canada); Zafarian-Rigaki, Habiballah; Motamedi-Ghahfarrokhi, Yaser; Hemmati-Sarapardeh, Abdolhossein [Amirkabir University of Technology, Tehran (Iran, Islamic Republic of)

    2015-10-15

Carbon dioxide injection, which is widely used as an enhanced oil recovery (EOR) method, has the potential of being coupled with CO{sub 2} sequestration and reducing the emission of greenhouse gas. Hence, knowing the compressibility factor of carbon dioxide is of vital significance. The compressibility factor (Z-factor) is traditionally measured through time-consuming, expensive and cumbersome experiments; developing a fast, robust and accurate model for its estimation is therefore necessary. In this study, a new reliable model on the basis of feed-forward artificial neural networks is presented to predict the CO{sub 2} compressibility factor. Reduced temperature and pressure were selected as the input parameters of the proposed model. To evaluate and compare the results of the developed model with pre-existing models, both statistical and graphical error analyses were employed. The results indicated that the proposed model is more reliable and accurate than pre-existing models over a wide range of temperature (up to 1,273.15 K) and pressure (up to 140 MPa). Furthermore, by employing the relevancy factor, the effect of pressure and temperature on the Z-factor of CO{sub 2} was compared below and above the critical pressure of CO{sub 2}, and the physically expected trends were observed. Finally, to identify probable outliers and the applicability domain of the proposed ANN model, both numerical and graphical techniques based on the Leverage approach were performed. The results illustrated that only 1.75% of the experimental data points were located outside the applicability domain of the proposed model. As a result, the developed model is reliable for the prediction of the CO{sub 2} compressibility factor.
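The Leverage approach mentioned at the end is a standard construction. The sketch below computes hat-matrix leverages for synthetic reduced-temperature/reduced-pressure points; the data and the warning cutoff 3(p+1)/n are conventional illustrative choices, not the paper's data set.

```python
import numpy as np

def leverages(X):
    """Diagonal of the hat matrix H = X (X^T X)^{-1} X^T (with intercept).
    High-leverage points lie outside the model's applicability domain."""
    Xa = np.column_stack([np.ones(len(X)), X])
    return np.diag(Xa @ np.linalg.inv(Xa.T @ Xa) @ Xa.T)

rng = np.random.default_rng(1)
# Synthetic reduced-temperature / reduced-pressure points, plus one
# operating point far outside the training envelope.
X = rng.uniform([1.0, 0.5], [4.0, 20.0], size=(60, 2))
X = np.vstack([X, [12.0, 300.0]])
h = leverages(X)
cutoff = 3 * (X.shape[1] + 1) / len(X)         # conventional warning leverage
print(np.flatnonzero(h > cutoff))              # the far-out point is flagged
```

The leverages always sum to the number of fitted parameters, which is a convenient sanity check on the computation.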

  4. Thermal characteristics of highly compressed bentonite

    International Nuclear Information System (INIS)

    Sueoka, Tooru; Kobayashi, Atsushi; Imamura, S.; Ogawa, Terushige; Murata, Shigemi.

    1990-01-01

In the disposal of high-level radioactive wastes in strata, it is planned to protect the canisters enclosing the wastes with buffer materials such as overpacks and clay; the examination of artificial barrier materials is therefore an important problem. The concept of disposal in strata and the soil mechanics characteristics of highly compressed bentonite as an artificial barrier material were already reported. In this study, a basic experiment on the thermal characteristics of highly compressed bentonite was carried out and is reported. The thermal conductivity of buffer materials is important because it may well determine the temperature of the solidified bodies and canisters, and because the buffer materials may degrade thermally at high temperature. Thermophysical properties are roughly divided into thermodynamic properties, transport properties and optical properties. The basic principles of measuring thermal conductivity and thermal diffusivity, the kinds of measuring methods and so on are explained. For the measurement of the thermal conductivity of highly compressed bentonite, the experimental setup, the procedure, the samples and the results are reported. (K.I.)

  5. Logarithmic compression methods for spectral data

    Science.gov (United States)

    Dunham, Mark E.

    2003-01-01

    A method is provided for logarithmic compression, transmission, and expansion of spectral data. A log Gabor transformation is made of incoming time series data to output spectral phase and logarithmic magnitude values. The output phase and logarithmic magnitude values are compressed by selecting only magnitude values above a selected threshold and corresponding phase values to transmit compressed phase and logarithmic magnitude values. A reverse log Gabor transformation is then performed on the transmitted phase and logarithmic magnitude values to output transmitted time series data to a user.
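    The compress/transmit/expand pipeline described above can be sketched with an ordinary FFT standing in for the log Gabor transform (an assumption for brevity): transform, keep only the bins whose log-magnitude clears a threshold, transmit (index, log-magnitude, phase) triples, then rebuild the sparse spectrum and invert.

```python
import numpy as np

def compress_spectrum(x, threshold_db):
    """Keep only spectral bins whose log-magnitude exceeds a threshold,
    returning (indices, log-magnitudes, phases, n_bins). An FFT stands
    in for the paper's log Gabor transform."""
    X = np.fft.rfft(x)
    log_mag = 20 * np.log10(np.abs(X) + 1e-12)
    phase = np.angle(X)
    keep = np.nonzero(log_mag > threshold_db)[0]
    return keep, log_mag[keep], phase[keep], len(X)

def expand_spectrum(idx, log_mag, phase, n_bins, n_samples):
    """Inverse: rebuild the sparse spectrum and invert the transform."""
    X = np.zeros(n_bins, dtype=complex)
    X[idx] = 10 ** (log_mag / 20) * np.exp(1j * phase)
    return np.fft.irfft(X, n=n_samples)

# A pure tone survives the thresholding essentially intact.
t = np.linspace(0, 1, 256, endpoint=False)
x = np.sin(2 * np.pi * 10 * t)
idx, lm, ph, nb = compress_spectrum(x, threshold_db=0.0)
x_rec = expand_spectrum(idx, lm, ph, nb, len(x))
```

    The threshold value and the dB scaling here are illustrative; the patent's selection rule may differ.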

  6. Hyperspectral image compressing using wavelet-based method

    Science.gov (United States)

    Yu, Hui; Zhang, Zhi-jie; Lei, Bo; Wang, Chen-sheng

    2017-10-01

    Hyperspectral imaging sensors can acquire images in hundreds of continuous narrow spectral bands, so each object presented in the image can be identified from its spectral response. However, this kind of imaging produces a huge amount of data, which requires transmission, processing, and storage resources for both airborne and spaceborne imaging. Due to the high volume of hyperspectral image data, the exploration of compression strategies has received a lot of attention in recent years, and compression of hyperspectral data cubes is an effective solution to these problems. Lossless compression of hyperspectral data usually results in a low compression ratio, which may not meet the available resources; on the other hand, lossy compression may give the desired ratio, but with a significant degradation of the object identification performance of the hyperspectral data. Moreover, most hyperspectral data compression techniques exploit the similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we explored the spectral cross-correlation between different bands and proposed an adaptive band selection method to obtain the spectral bands which contain most of the information of the acquired hyperspectral data cube. The proposed method mainly consists of three steps: first, the algorithm decomposes the original hyperspectral imagery into a series of subspaces based on the inter-band correlation matrix of the hyperspectral images; then a wavelet-based algorithm is applied to each subspace; finally, the PCA method is applied to the wavelet coefficients to produce the chosen number of components. The performance of the proposed method was tested using the ISODATA classification method.
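    A hedged sketch of the band-grouping step: adjacent bands are merged into one subspace while their cross-correlation stays above a threshold. The 0.95 threshold and the neighbour-only comparison are simplifying assumptions; the paper works from the full inter-band correlation matrix.

```python
import numpy as np

def partition_bands(cube, corr_threshold=0.95):
    """Group adjacent spectral bands of an (H, W, bands) cube into
    subspaces: start a new group whenever the correlation between
    neighbouring bands drops below the threshold."""
    h, w, n_bands = cube.shape
    flat = cube.reshape(-1, n_bands)          # one column per band
    groups, current = [], [0]
    for b in range(1, n_bands):
        r = np.corrcoef(flat[:, b - 1], flat[:, b])[0, 1]
        if r >= corr_threshold:
            current.append(b)
        else:
            groups.append(current)
            current = [b]
    groups.append(current)
    return groups

# Two synthetic "materials" yield two clearly separated subspaces.
rng = np.random.default_rng(1)
base1, base2 = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
cube = np.stack([base1, 2 * base1, base1 + 0.01,
                 base2, 3 * base2, base2 - 1], axis=-1)
groups = partition_bands(cube)
```

    Each resulting group would then be handed to the wavelet stage independently.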

  7. Quality by design approach: application of artificial intelligence techniques of tablets manufactured by direct compression.

    Science.gov (United States)

    Aksu, Buket; Paradkar, Anant; de Matas, Marcel; Ozer, Ozgen; Güneri, Tamer; York, Peter

    2012-12-01

    The publication of the International Conference on Harmonisation (ICH) Q8, Q9, and Q10 guidelines paved the way for the standardization of quality after the Food and Drug Administration issued current Good Manufacturing Practices guidelines in 2003. "Quality by Design", described in the ICH Q8 guideline, offers a better scientific understanding of critical process and product qualities using knowledge obtained during the life cycle of a product. In this scope, the "knowledge space" is a summary of all process knowledge obtained during product development, and the "design space" is the area in which a product can be manufactured within acceptable limits. To create these spaces, artificial neural networks (ANNs) can be used to capture the multidimensional interactions of the input variables and to bind these variables closely to a design space. This helps guide the experimental design process to include interactions among the input variables, along with modeling and optimization of pharmaceutical formulations. The objective of this study was to develop an integrated multivariate approach, using ANNs and genetic programming, to obtain a quality product based on an understanding of the cause-effect relationships between formulation ingredients and product properties for ramipril tablets prepared by the direct compression method. The data were generated through the systematic application of design of experiments (DoE) principles and optimization studies using artificial neural network and neurofuzzy logic programs.

  8. Application of PDF methods to compressible turbulent flows

    Science.gov (United States)

    Delarue, B. J.; Pope, S. B.

    1997-09-01

    A particle method applying the probability density function (PDF) approach to turbulent compressible flows is presented. The method is applied to several turbulent flows, including the compressible mixing layer, and good agreement is obtained with experimental data. The PDF equation is solved using a Lagrangian/Monte Carlo method. To accurately account for the effects of compressibility on the flow, the velocity PDF formulation is extended to include thermodynamic variables such as the pressure and the internal energy. The mean pressure, the determination of which has been the object of active research over the last few years, is obtained directly from the particle properties. It is therefore not necessary to link the PDF solver with a finite-volume type solver. The stochastic differential equations (SDE) which model the evolution of particle properties are based on existing second-order closures for compressible turbulence, limited in application to low turbulent Mach number flows. Tests are conducted in decaying isotropic turbulence to compare the performances of the PDF method with the Reynolds-stress closures from which it is derived, and in homogeneous shear flows, at which stage comparison with direct numerical simulation (DNS) data is conducted. The model is then applied to the plane compressible mixing layer, reproducing the well-known decrease in the spreading rate with increasing compressibility. It must be emphasized that the goal of this paper is not as much to assess the performance of models of compressibility effects, as it is to present an innovative and consistent PDF formulation designed for turbulent inhomogeneous compressible flows, with the aim of extending it further to deal with supersonic reacting flows.
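    To illustrate the particle/SDE machinery (not the paper's compressible closure, which also evolves pressure and internal energy), the sketch below advances an ensemble of particle velocities with the simplified Langevin model du = -(u/T_L) dt + sqrt(C0*eps) dW using Euler-Maruyama steps; the parameter values are illustrative.

```python
import numpy as np

def langevin_step(u, dt, T_L, C0, eps, rng):
    """One Euler-Maruyama step of the simplified Langevin model for
    particle velocities: drift toward zero on timescale T_L plus a
    Wiener increment scaled by sqrt(C0 * eps)."""
    dW = rng.normal(scale=np.sqrt(dt), size=u.shape)
    return u - (u / T_L) * dt + np.sqrt(C0 * eps) * dW

rng = np.random.default_rng(0)
u = rng.normal(size=10_000)          # ensemble of particle velocities
for _ in range(200):
    u = langevin_step(u, dt=0.01, T_L=1.0, C0=2.1, eps=1.0, rng=rng)
# The ensemble relaxes toward a stationary variance of C0*eps*T_L/2.
```

    In a PDF solver, mean fields such as the mean pressure are estimated from ensembles like this one rather than from a separate finite-volume grid.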

  9. Prediction of shear wave velocity using empirical correlations and artificial intelligence methods

    Science.gov (United States)

    Maleki, Shahoo; Moradzadeh, Ali; Riabi, Reza Ghavami; Gholami, Raoof; Sadeghzadeh, Farhad

    2014-06-01

    A good understanding of the mechanical properties of rock formations is essential during the development and production phases of a hydrocarbon reservoir. Conventionally, these properties are estimated from petrophysical logs, with compressional and shear sonic data being the main inputs to the correlations. In many cases, however, shear sonic data are not acquired during well logging, often for cost-saving purposes. In such cases, shear wave velocity is estimated using the empirical correlations or artificial intelligence methods proposed during the last few decades. In this paper, petrophysical logs from a well drilled in the southern part of Iran were used to estimate the shear wave velocity using empirical correlations as well as two robust artificial intelligence methods known as Support Vector Regression (SVR) and Back-Propagation Neural Network (BPNN). Although the results obtained by SVR seem to be reliable, the estimated values are not very precise, and considering the importance of shear sonic data as input to different models, this study suggests acquiring shear sonic data during well logging. The benefits of having reliable shear sonic data for estimating the mechanical properties of rock formations will compensate for the possible additional cost of acquiring a shear log.
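    As an example of the empirical-correlation route mentioned above, Castagna's "mudrock line" for water-saturated clastic rocks relates shear to compressional velocity linearly. The coefficients below are the commonly quoted ones and are lithology-dependent, so this is illustrative rather than the specific correlation used in the paper.

```python
def vs_castagna(vp_kms):
    """Castagna et al. (1985) mudrock line for water-saturated
    clastics: Vs = 0.8621 * Vp - 1.1724, velocities in km/s.
    Coefficients vary with lithology; values here are the
    commonly quoted ones."""
    return 0.8621 * vp_kms - 1.1724

vs = vs_castagna(3.0)   # shear velocity estimate for Vp = 3.0 km/s
```

    Machine-learning estimators such as SVR or a BPNN replace this fixed linear form with a model fitted to the available log suite.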

  10. Prediction of shear wave velocity using empirical correlations and artificial intelligence methods

    Directory of Open Access Journals (Sweden)

    Shahoo Maleki

    2014-06-01

    Full Text Available A good understanding of the mechanical properties of rock formations is essential during the development and production phases of a hydrocarbon reservoir. Conventionally, these properties are estimated from petrophysical logs, with compressional and shear sonic data being the main inputs to the correlations. In many cases, however, shear sonic data are not acquired during well logging, often for cost-saving purposes. In such cases, shear wave velocity is estimated using the empirical correlations or artificial intelligence methods proposed during the last few decades. In this paper, petrophysical logs from a well drilled in the southern part of Iran were used to estimate the shear wave velocity using empirical correlations as well as two robust artificial intelligence methods known as Support Vector Regression (SVR) and Back-Propagation Neural Network (BPNN). Although the results obtained by SVR seem to be reliable, the estimated values are not very precise, and considering the importance of shear sonic data as input to different models, this study suggests acquiring shear sonic data during well logging. The benefits of having reliable shear sonic data for estimating the mechanical properties of rock formations will compensate for the possible additional cost of acquiring a shear log.

  11. A measurement method for piezoelectric material properties under longitudinal compressive stress – a compression test method for thin piezoelectric materials

    International Nuclear Information System (INIS)

    Kang, Lae-Hyong; Lee, Dae-Oen; Han, Jae-Hung

    2011-01-01

    We introduce a new compression test method for piezoelectric materials to investigate changes in piezoelectric properties under compressive stress. Until now, compression tests of piezoelectric materials have generally been conducted using bulky piezoelectric ceramics and a pressure block. The conventional method using the pressure block for thin piezoelectric patches, which are used in unimorph or bimorph actuators, is prone to unwanted bending and buckling. In addition, due to the constrained boundaries at both ends, the observed piezoelectric behavior contains boundary effects. In order to avoid these problems, the proposed method employs two guide plates with initial longitudinal tensile stress. By removing the tensile stress after bonding a piezoelectric material between the guide layers, longitudinal compressive stress is induced in the piezoelectric layer. Using the compression test specimens, two important properties which govern the actuation performance of the piezoelectric material, the piezoelectric strain coefficient and the elastic modulus, are measured to evaluate the effects of applied electric fields and re-poling. The results show that the piezoelectric strain coefficient d31 increases and the elastic modulus decreases when high voltage is applied to PZT5A, and that compression in the longitudinal direction decreases the piezoelectric strain coefficient d31 but does not affect the elastic modulus. We also found that re-poling of the piezoelectric material increases the elastic modulus, but the piezoelectric strain coefficient d31 is changed little (slightly increased) by re-poling.

  12. Image splitting and remapping method for radiological image compression

    Science.gov (United States)

    Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.

    1990-07-01

    A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.

  13. Superplastic boronizing of duplex stainless steel under dual compression method

    International Nuclear Information System (INIS)

    Jauhari, I.; Yusof, H.A.M.; Saidan, R.

    2011-01-01

    Highlights: → Superplastic boronizing. → A dual compression method has been developed. → Hard boride layer. → Bulk deformation significantly thickened the boronized layer. → New data on boronizing could expand the application of DSS in industries. - Abstract: In this work, superplastic boronizing (SPB) of duplex stainless steel (DSS) under compression is studied with the objective of producing an ultra-hard and thick boronized layer using a minimal amount of boron powder and in a much shorter boronizing time than the conventional process. SPB is conducted under a dual compression method. In the first method, DSS is boronized using a minimal amount of boron powder under a fixed pre-strained compression condition maintained throughout the process. The compression strain is controlled in such a way that plastic deformation is restricted to the surface asperities of the substrate in contact with the boron powder. In the second method, the boronized specimen from the first method is compressed superplastically up to a certain compressive strain under a certain strain-rate condition. The second method is conducted without the presence of boron powder. Compared with the conventional boronizing process, this SPB under the dual compression method produces a much harder and thicker boronized layer using a minimal amount of boron powder.

  14. Superplastic boronizing of duplex stainless steel under dual compression method

    Energy Technology Data Exchange (ETDEWEB)

    Jauhari, I., E-mail: iswadi@um.edu.my [Department of Mechanical Engineering, Faculty of Engineering, University of Malaya, 50603 Kuala Lumpur (Malaysia); Yusof, H.A.M.; Saidan, R. [Department of Mechanical Engineering, Faculty of Engineering, University of Malaya, 50603 Kuala Lumpur (Malaysia)

    2011-10-25

    Highlights: → Superplastic boronizing. → A dual compression method has been developed. → Hard boride layer. → Bulk deformation significantly thickened the boronized layer. → New data on boronizing could expand the application of DSS in industries. - Abstract: In this work, superplastic boronizing (SPB) of duplex stainless steel (DSS) under compression is studied with the objective of producing an ultra-hard and thick boronized layer using a minimal amount of boron powder and in a much shorter boronizing time than the conventional process. SPB is conducted under a dual compression method. In the first method, DSS is boronized using a minimal amount of boron powder under a fixed pre-strained compression condition maintained throughout the process. The compression strain is controlled in such a way that plastic deformation is restricted to the surface asperities of the substrate in contact with the boron powder. In the second method, the boronized specimen from the first method is compressed superplastically up to a certain compressive strain under a certain strain-rate condition. The second method is conducted without the presence of boron powder. Compared with the conventional boronizing process, this SPB under the dual compression method produces a much harder and thicker boronized layer using a minimal amount of boron powder.

  15. A modified compressible smoothed particle hydrodynamics method and its application on the numerical simulation of low and high velocity impacts

    International Nuclear Information System (INIS)

    Amanifard, N.; Haghighat Namini, V.

    2012-01-01

    In this study a Modified Compressible Smoothed Particle Hydrodynamics method is introduced which is applicable to problems involving shock wave structures and elastic-plastic deformations of solids. The algorithm discretizes the momentum equation into three parts, solves each part separately, and calculates their effects on the velocity field and displacement of the particles. The most distinctive feature of the method is that it removes the artificial viscosity from the formulation entirely while showing good agreement with other established numerical methods, exhibiting no spurious numerical fractures or tensile instabilities and requiring no extra modifications. Two types of problems involving elastic-plastic deformations and shock waves are presented to demonstrate the capability of the method in simulating such problems and its ability to capture shocks: low- and high-velocity impacts between aluminum projectiles and semi-infinite aluminum beams. An elastic-perfectly plastic constitutive model is chosen for the aluminum, and the simulation results are compared with other established studies of these cases.

  16. Word aligned bitmap compression method, data structure, and apparatus

    Science.gov (United States)

    Wu, Kesheng; Shoshani, Arie; Otoo, Ekow

    2004-12-14

    The Word-Aligned Hybrid (WAH) bitmap compression method and data structure is a relatively efficient method for searching and performing logical, counting, and pattern location operations upon large datasets. The technique is comprised of a data structure and methods that are optimized for computational efficiency by using the WAH compression method, which typically takes advantage of the target computing system's native word length. WAH is particularly apropos to infrequently varying databases, including those found in the on-line analytical processing (OLAP) industry, due to the increased computational efficiency of the WAH compressed bitmap index. Some commercial database products already include some version of a bitmap index, which could possibly be replaced by the WAH bitmap compression techniques for potentially increased operation speed, as well as increased efficiencies in constructing compressed bitmaps. Combined together, this technique may be particularly useful for real-time business intelligence. Additional WAH applications may include scientific modeling, such as climate and combustion simulations, to minimize search time for analysis and subsequent data visualization.
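    The run/literal structure of WAH can be sketched as follows. This toy version groups the bit vector into 31-bit groups and collapses runs of uniform groups into fill records; real WAH packs fills and literals into machine words with a flag bit, which is replaced by explicit tuples here for clarity.

```python
def wah_compress(bits, w=31):
    """Toy Word-Aligned-Hybrid-style compression: chop the bit vector
    into w-bit groups; runs of all-0 or all-1 groups collapse into a
    single ('fill', bit, run_length) record, other groups are stored
    as ('lit', bits) records."""
    words = []
    for i in range(0, len(bits), w):
        g = bits[i:i + w]
        if len(g) == w and len(set(g)) == 1:      # uniform full group
            kind = g[0]
            if words and words[-1][0] == 'fill' and words[-1][1] == kind:
                words[-1] = ('fill', kind, words[-1][2] + 1)
            else:
                words.append(('fill', kind, 1))
        else:                                     # mixed or partial group
            words.append(('lit', tuple(g)))
    return words

def wah_decompress(words, w=31):
    bits = []
    for rec in words:
        if rec[0] == 'fill':
            bits.extend([rec[1]] * (w * rec[2]))
        else:
            bits.extend(rec[1])
    return bits

# 127 bits compress to three records: one fill covering 93 zeros
# and two literals.
bits = [0] * 93 + [1, 0, 1] + [1] * 31
words = wah_compress(bits)
```

    Because fills store run lengths, bitwise AND/OR and bit counting can operate directly on the compressed records, which is the source of WAH's query speed.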

  17. Artificial Neural Network Model for Predicting Compressive

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

    Full Text Available Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement; therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural-network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20%, and 88% of the output results have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results showed that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.

  18. Subband Coding Methods for Seismic Data Compression

    Science.gov (United States)

    Kiely, A.; Pollara, F.

    1995-01-01

    This paper presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The compression technique described could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
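    Subband coding with progressive refinement can be illustrated with a single Haar analysis/synthesis stage (an assumption for brevity; the paper's filter bank is more elaborate): the low band gives the coarse waveform transmitted first, and the high band carries the refinement requested later.

```python
import numpy as np

def haar_split(x):
    """One level of Haar subband analysis: pairwise averages form the
    low (coarse) band, pairwise half-differences the high band."""
    lo = (x[0::2] + x[1::2]) / 2
    hi = (x[0::2] - x[1::2]) / 2
    return lo, hi

def haar_merge(lo, hi):
    """Perfect-reconstruction synthesis: interleave lo+hi and lo-hi."""
    x = np.empty(2 * len(lo))
    x[0::2] = lo + hi
    x[1::2] = lo - hi
    return x

x = np.array([1.0, 2, 3, 4, 5, 6, 7, 8])
lo, hi = haar_split(x)      # send lo first; hi refines on request
x_rec = haar_merge(lo, hi)
```

    Rate-distortion trade-offs come from how many bits each band's coefficients receive, not from the split itself, which is lossless.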

  19. A streamlined artificial variable free version of simplex method.

    Directory of Open Access Journals (Sweden)

    Syed Inayatullah

    Full Text Available This paper proposes a streamlined form of the simplex method which provides some great benefits over the traditional simplex method. For instance, it does not need any kind of artificial variables or artificial constraints, and it can start with any feasible or infeasible basis of an LP. This method follows the same pivoting sequence as simplex phase 1 without any explicit description of artificial variables, which also makes it space efficient. Later in this paper, a dual version of the new method is also presented, which provides a way to easily implement phase 1 of the traditional dual simplex method. For a problem having an initial basis which is both primal and dual infeasible, our methods give the user full freedom to start with either the primal or the dual artificial-free version without any reformulation of the LP structure. Last but not least, it provides a teaching aid for teachers who want to teach feasibility achievement as a separate topic before teaching optimality achievement.

  20. A streamlined artificial variable free version of simplex method.

    Science.gov (United States)

    Inayatullah, Syed; Touheed, Nasir; Imtiaz, Muhammad

    2015-01-01

    This paper proposes a streamlined form of the simplex method which provides some great benefits over the traditional simplex method. For instance, it does not need any kind of artificial variables or artificial constraints, and it can start with any feasible or infeasible basis of an LP. This method follows the same pivoting sequence as simplex phase 1 without any explicit description of artificial variables, which also makes it space efficient. Later in this paper, a dual version of the new method is also presented, which provides a way to easily implement phase 1 of the traditional dual simplex method. For a problem having an initial basis which is both primal and dual infeasible, our methods give the user full freedom to start with either the primal or the dual artificial-free version without any reformulation of the LP structure. Last but not least, it provides a teaching aid for teachers who want to teach feasibility achievement as a separate topic before teaching optimality achievement.

  1. An Enhanced Run-Length Encoding Compression Method for Telemetry Data

    Directory of Open Access Journals (Sweden)

    Shan Yanhu

    2017-09-01

    Full Text Available Telemetry data are essential in evaluating the performance of aircraft and diagnosing failures. This work combines oversampling technology with a run-length encoding compression algorithm with an error factor to further enhance the compression performance of telemetry data in a multichannel acquisition system. Compression of the telemetry data is carried out on FPGAs. Pulse signals and vibration signals are used in the experiments. The proposed method is compared with two existing methods. The experimental results indicate that the compression ratio, precision, and distortion degree of the telemetry data are improved significantly compared with those obtained by the existing methods. The implementation and measurement of the proposed telemetry data compression method show its effectiveness when used in a high-precision, high-capacity multichannel acquisition system.
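    A plain-Python sketch of the core idea above, run-length encoding with an error factor: samples within ±error of the run's representative value are folded into one (value, count) pair. The exact tolerance rule used on the FPGA is not specified in the abstract, so the rule below is an assumption.

```python
def rle_compress(samples, error):
    """Run-length encoding with an error factor: successive samples
    within +/- error of the run's first value join that run; anything
    else starts a new run."""
    runs = []
    for s in samples:
        if runs and abs(s - runs[-1][0]) <= error:
            runs[-1][1] += 1
        else:
            runs.append([s, 1])
    return [(v, c) for v, c in runs]

def rle_expand(runs):
    """Lossy inverse: each run expands to count copies of its value."""
    out = []
    for v, c in runs:
        out.extend([v] * c)
    return out

# Slowly varying telemetry collapses to a few runs.
samples = [10, 10.2, 9.9, 50, 50.1, 50, 3]
runs = rle_compress(samples, error=0.5)
```

    A larger error factor trades distortion for compression ratio, which is the tuning knob the paper evaluates.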

  2. Word aligned bitmap compression method, data structure, and apparatus

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kesheng; Shoshani, Arie; Otoo, Ekow

    2004-12-14

    The Word-Aligned Hybrid (WAH) bitmap compression method and data structure is a relatively efficient method for searching and performing logical, counting, and pattern location operations upon large datasets. The technique is comprised of a data structure and methods that are optimized for computational efficiency by using the WAH compression method, which typically takes advantage of the target computing system's native word length. WAH is particularly apropos to infrequently varying databases, including those found in the on-line analytical processing (OLAP) industry, due to the increased computational efficiency of the WAH compressed bitmap index. Some commercial database products already include some version of a bitmap index, which could possibly be replaced by the WAH bitmap compression techniques for potentially increased operation speed, as well as increased efficiencies in constructing compressed bitmaps. Combined together, this technique may be particularly useful for real-time business intelligence. Additional WAH applications may include scientific modeling, such as climate and combustion simulations, to minimize search time for analysis and subsequent data visualization.

  3. A Streamlined Artificial Variable Free Version of Simplex Method

    OpenAIRE

    Inayatullah, Syed; Touheed, Nasir; Imtiaz, Muhammad

    2015-01-01

    This paper proposes a streamlined form of the simplex method which provides some great benefits over the traditional simplex method. For instance, it does not need any kind of artificial variables or artificial constraints, and it can start with any feasible or infeasible basis of an LP. This method follows the same pivoting sequence as simplex phase 1 without any explicit description of artificial variables, which also makes it space efficient. Later in this paper, a dual version of the new ...

  4. Blind compressed sensing image reconstruction based on alternating direction method

    Science.gov (United States)

    Liu, Qinan; Guo, Shuxu

    2018-04-01

    In order to solve the problem of reconstructing the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on existing blind compressed sensing theory, the optimal solution is obtained by an alternating minimization method. The proposed method addresses the difficulty of representing the sparse basis in compressed sensing, suppresses noise, and improves the quality of the reconstructed image. It ensures that the blind compressed sensing problem has a unique solution and can recover the original image signal from a complex environment with strong adaptability. The experimental results show that the proposed blind compressed sensing image reconstruction algorithm can recover high-quality image signals under under-sampled conditions.

  5. Structural Dynamic Response Compressing Technique in Bridges using a Cochlea-inspired Artificial Filter Bank (CAFB)

    International Nuclear Information System (INIS)

    Heo, G; Jeon, J; Son, B; Kim, C; Jeon, S; Lee, C

    2016-01-01

    In this study, a cochlea-inspired artificial filter bank (CAFB) was developed to efficiently obtain the dynamic response of a structure, and a dynamic response measurement on a cable-stayed bridge model was carried out to evaluate the performance of the developed CAFB. The CAFB uses a band-pass filter optimizing algorithm (BOA) and a peak-picking algorithm (PPA) to select and compress the dynamic response signal while retaining sufficient modal information. The CAFB was optimized for the El Centro earthquake record, which is often used in construction research, and the software implementation of the CAFB was embedded in a unified structural management system (USMS). For the evaluation, a real-time dynamic response experiment was performed on a cable-stayed bridge model, and the response was measured using both a traditional wired system and the developed CAFB-based USMS. The experimental results showed that the compressed dynamic response acquired by the CAFB-based USMS matched that of the traditional wired system closely while still carrying sufficient modal information about the cable-stayed bridge. (paper)

  6. Prediction of thermophysical properties of mixed refrigerants using artificial neural network

    International Nuclear Information System (INIS)

    Sencan, Arzu; Koese, Ismail Ilke; Selbas, Resat

    2011-01-01

    The determination of the thermophysical properties of refrigerants is very important for the thermodynamic analysis of vapor compression refrigeration systems. In this paper, an artificial neural network (ANN) is proposed to determine properties such as heat conduction coefficient, dynamic viscosity, kinematic viscosity, thermal diffusivity, density, and specific heat capacity of refrigerants. Five alternative refrigerants are considered: R413A, R417A, R422A, R422D and R423A. The training and validation were performed with good accuracy. The thermophysical properties of the refrigerants are formulated using the ANN methodology, so the liquid and vapor thermophysical properties can easily be estimated from the new formulation. The proposed method offers more flexibility and therefore considerably simplifies the thermodynamic analysis of vapor compression refrigeration systems.

  7. Numerical study of turbulent heat transfer from confined impinging jets using a pseudo-compressibility method

    Energy Technology Data Exchange (ETDEWEB)

    Rahman, M.; Rautaheimo, P.; Siikonen, T.

    1997-12-31

    A numerical investigation is carried out to predict the turbulent fluid flow and heat transfer characteristics of two-dimensional single and triple impinging slot jets. Two low-Reynolds-number κ-ε models, namely the classical model of Chien and the explicit algebraic stress model of Gatski and Speziale, are considered in the simulation. A cell-centered finite-volume scheme combined with an artificial compressibility approach is employed to solve the flow equations, using a diagonally dominant alternating direction implicit (DDADI) time integration method. Fully upwinded second-order spatial differencing is adopted to approximate the convective terms, and Roe's damping term is used to calculate the flux on the cell face. A multigrid method is utilized to accelerate convergence. On average, the heat transfer coefficients predicted by both models show good agreement with the experimental results. (orig.) 17 refs.
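    The artificial compressibility (pseudo-compressibility) idea used by the solver above can be demonstrated on a periodic 2-D grid: a pressure equation dp/dτ = -β ∇·u is marched in pseudo-time together with the velocity until the divergence vanishes. The explicit scheme, central differences, and the small viscous term standing in for the upwind scheme's dissipation are all simplifying assumptions, not the paper's DDADI solver.

```python
import numpy as np

def ac_iterate(u, v, beta=1.0, nu=0.02, dtau=0.01, steps=2000):
    """March the artificial-compressibility system
        dp/dtau = -beta * div(u),  du/dtau = -grad(p) + nu * lap(u)
    in pseudo-time on a periodic unit square; at convergence the
    velocity field satisfies div(u) ~ 0."""
    n = u.shape[0]
    h = 1.0 / n
    p = np.zeros_like(u)

    def ddx(f): return (np.roll(f, -1, 0) - np.roll(f, 1, 0)) / (2 * h)
    def ddy(f): return (np.roll(f, -1, 1) - np.roll(f, 1, 1)) / (2 * h)
    def lap(f): return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                        np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f) / h**2

    for _ in range(steps):
        p -= dtau * beta * (ddx(u) + ddy(v))   # pressure from divergence
        u += dtau * (-ddx(p) + nu * lap(u))    # pseudo-time momentum
        v += dtau * (-ddy(p) + nu * lap(v))
    return u, v, p

# A purely divergent initial field is driven toward zero divergence.
x = np.arange(16) / 16
u0 = np.sin(2 * np.pi * np.tile(x[:, None], (1, 16)))
u, v, p = ac_iterate(u0.copy(), np.zeros((16, 16)))
```

    The parameter β sets the artificial sound speed of the pseudo-time system and so controls how fast pressure responds to divergence.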

  8. A new method of artificial latent fingerprint creation using artificial sweat and inkjet printer.

    Science.gov (United States)

    Hong, Sungwook; Hong, Ingi; Han, Aleum; Seo, Jin Yi; Namgung, Juyoung

    2015-12-01

    In order to study fingerprinting in the field of forensic science, it is very important to have two or more latent fingerprints with identical chemical composition and intensity. However, it is impossible to obtain identical fingerprints in reality, because fingerprints come out slightly differently every time. A previous research study had proposed an artificial fingerprint creation method in which inkjet ink was replaced with a solution of amino acids and sodium chloride: the components of human sweat. However, this method had some drawbacks: divalent cations were not added when formulating the artificial sweat solution, and diluted solutions were used for creating weakly deposited latent fingerprints. In this study, a method was developed for overcoming the drawbacks of the methods used in the previous study. Several divalent cations were added in this study because the amino acid-ninhydrin (or some of its analogues) complex is known to react with divalent cations to produce a photoluminescent product; and, similarly, the amino acid-1,2-indanedione complex is known to be catalyzed by a small amount of zinc ions to produce a highly photoluminescent product. Also, in this study, a new technique was developed which makes it possible to adjust the intensity when printing the latent fingerprint patterns. In this method, image processing software is used to control the intensity of the master fingerprint patterns, which adjusts the printing intensity of the latent fingerprints. This new method opened the way to produce more realistic artificial fingerprints at various strengths with one artificial sweat working solution. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  9. The study of diagnostic accuracy of chest nodules by using different compression methods

    International Nuclear Information System (INIS)

    Liang Zhigang; Kuncheng, L.I.; Zhang Jinghong; Liu Shuliang

    2005-01-01

    Background: The purpose of this study was to compare the diagnostic accuracy for small nodules in the chest by using different compression methods. Method: Two radiologists, each with 5 years of experience, interpreted 39 chest images twice, once with the lossless compression method and once with the lossy compression method, with a 3-week interval between readings. The image browser used the Unisight software provided by the Atlastiger Company in Shanghai. The interpretation results were analyzed with the ROCKIT software, and the ROC curves were plotted with Excel 2002. Results: In studies of receiver operating characteristics for scoring the presence or absence of nodules, the images with the lossy compression method showed no statistical difference compared with the images with the lossless compression method. Conclusion: The diagnostic accuracy for chest nodules using the lossless and lossy compression methods had no significant difference; the lossy compression method can therefore be used to transmit and archive chest images with nodules.

  10. On the estimation method of compressed air consumption during pneumatic caisson sinking

    OpenAIRE

    平川, 修治; ヒラカワ, シュウジ; Shuji, HIRAKAWA

    1990-01-01

    There are several methods for estimating compressed air consumption during pneumatic caisson sinking, and it is necessary to compare the estimates produced by these methods under the same conditions. In this paper, methods are proposed which are able to estimate accurately the compressed air consumption during pneumatic caisson sinking.

  11. [Evaluation of artificial digestion method on inspection of meat for Trichinella spiralis contamination and influence of the method on muscle larvae recovery].

    Science.gov (United States)

    Wang, Guo-Ying; Du, Jing-Fang; Dun, Guo-Qing; Sun, Wei-Li; Wang, Jin-Xi

    2011-04-01

    To evaluate the effect of the artificial digestion method on the inspection of meat for Trichinella spiralis contamination and its influence on the activity and infectivity of muscle larvae. The mice were inoculated orally with 100 muscle larvae of T. spiralis and sacrificed on the 30th day following the infection. The muscle larvae of T. spiralis were recovered by three different test protocols employing variations of the artificial digestion method, i.e. the first test protocol evaluating digestion for 2 hours (magnetic stirrer method), the second test protocol evaluating digestion for 12 hours, and the third test protocol evaluating digestion for 20 hours. Each test group included ten samples, each of which included 300 encapsulated larvae. Meanwhile, the activity of the recovered muscle larvae was also assessed. Forty mice were randomly divided into a control group and three digestion groups, giving 4 groups in total (10 mice per group). In the control group, each mouse was orally inoculated with 100 encapsulated larvae of T. spiralis. In all of the digestion test groups, each mouse was orally inoculated with 100 muscle larvae of T. spiralis. The larvae were then recovered from the three different test groups by the artificial digestion protocol variations. All the infected mice were sacrificed on the 30th day following the infection, and the muscle larvae of T. spiralis were examined respectively by the diaphragm compression method and the magnetic stirrer method. The muscle larvae detection rates were 78.47%, 76.73%, and 68.63%, the death rates were 0.59%, 4.60%, and 7.43%, and the reduction rates were 60.56%, 61.94%, and 73.07%, in Test Group One (2-hour digestion), Test Group Two (12-hour digestion) and Test Group Three (20-hour digestion), respectively. The magnetic stirrer method (2-hour digestion method) is superior to both the 12-hour and 20-hour digestion methods when assessed by the detection rate, activity and infectivity of muscle larvae.

  12. A novel ECG data compression method based on adaptive Fourier decomposition

    Science.gov (United States)

    Tan, Chunyu; Zhang, Liming

    2017-12-01

    This paper presents a novel electrocardiogram (ECG) compression method based on adaptive Fourier decomposition (AFD). AFD is a newly developed signal decomposition approach which can decompose a signal with fast convergence and hence reconstruct ECG signals with high fidelity. Unlike most high-performance algorithms, our method does not apply any preprocessing operation before compression. Huffman coding is employed for further compression. Validated with 48 ECG recordings from the MIT-BIH arrhythmia database, the proposed method achieves a compression ratio (CR) of 35.53 and a percentage root mean square difference (PRD) of 1.47% on average with N = 8 decomposition times and a robust PRD-CR relationship. The results demonstrate that the proposed method performs well compared with state-of-the-art ECG compressors.
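The two figures of merit quoted in this record, CR and PRD, are the standard metrics for ECG compressors. A sketch of how they are computed follows; the signal here is synthetic, not MIT-BIH data, and the bit counts are made-up numbers.

```python
import math

# Standard ECG compression metrics: compression ratio (CR) and percentage
# root-mean-square difference (PRD).  All numbers below are illustrative.

def compression_ratio(original_bits, compressed_bits):
    return original_bits / compressed_bits

def prd(original, reconstructed):
    num = sum((x - y) ** 2 for x, y in zip(original, reconstructed))
    den = sum(x ** 2 for x in original)
    return 100.0 * math.sqrt(num / den)

x = [math.sin(0.1 * n) for n in range(1000)]   # stand-in "ECG" signal
xr = [v + 0.001 for v in x]                    # reconstruction with a tiny bias

print(compression_ratio(1000 * 11, 310))       # e.g. 11-bit samples -> ~35.5
print(prd(x, xr))                              # small distortion, well under 1%
```

A "robust PRD-CR relationship" means that as the coder is pushed to higher CR, the PRD grows smoothly rather than erratically.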

  13. Statistical Analysis of Compression Methods for Storing Binary Image for Low-Memory Systems

    Directory of Open Access Journals (Sweden)

    Roman Slaby

    2013-01-01

    Full Text Available The paper is focused on the statistical comparison of selected compression methods used for the compression of binary images. The aim is to assess which of the presented compression methods requires the smallest number of bytes of memory on a low-memory system. To assess the success rate of converting the input image to a binary image, correlation functions are used. The correlation function is one of the methods of the OCR algorithm used for the digitization of printed symbols. The use of compression methods is necessary for systems based on low-power micro-controllers. For such systems with limited memory, saving on the data stream is very important, as is the time required for decoding the compressed data. The success rate of the selected compression algorithms is evaluated using the basic characteristics of exploratory analysis. The examined samples represent the number of bytes needed to compress the test images, which represent alphanumeric characters.
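A typical candidate in comparisons of this kind is run-length encoding, which suits binary images of printed symbols on micro-controllers because both the encoder and decoder need almost no working memory. A minimal sketch on our own toy scan line (not the paper's benchmark set):

```python
# Run-length encoding for a binary image row: store alternating run lengths,
# starting with the count of leading 0-pixels.  One byte per run; runs longer
# than 255 are split, which is what matters on a low-memory system.

def rle_encode(bits):
    runs, current, count = [], 0, 0
    for b in bits:
        if b == current:
            count += 1
        else:
            runs.append(count)
            current, count = b, 1
        if count == 255:          # cap each run at one byte
            runs.append(255)
            current, count = 1 - current, 0
    runs.append(count)
    return bytes(runs)

def rle_decode(data):
    bits, value = [], 0
    for run in data:              # runs alternate 0-pixels / 1-pixels
        bits.extend([value] * run)
        value = 1 - value
    return bits

row = [0] * 20 + [1] * 300 + [0] * 80     # a sparse 400-pixel scan line
encoded = rle_encode(row)
assert rle_decode(encoded) == row
print(len(row) // 8, "bytes raw vs", len(encoded), "bytes RLE")
```

For character-like images with long runs the encoded size is far below the packed bitmap; for noisy images it can be larger, which is exactly what a statistical comparison across test images reveals.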

  14. The boundary data immersion method for compressible flows with application to aeroacoustics

    Energy Technology Data Exchange (ETDEWEB)

    Schlanderer, Stefan C., E-mail: stefan.schlanderer@unimelb.edu.au [Faculty for Engineering and the Environment, University of Southampton, SO17 1BJ Southampton (United Kingdom); Weymouth, Gabriel D., E-mail: G.D.Weymouth@soton.ac.uk [Faculty for Engineering and the Environment, University of Southampton, SO17 1BJ Southampton (United Kingdom); Sandberg, Richard D., E-mail: richard.sandberg@unimelb.edu.au [Department of Mechanical Engineering, University of Melbourne, Melbourne VIC 3010 (Australia)

    2017-03-15

    This paper introduces a virtual boundary method for compressible viscous fluid flow that is capable of accurately representing moving bodies in flow and aeroacoustic simulations. The method is the compressible extension of the boundary data immersion method (BDIM; Maertens & Weymouth, 2015). The BDIM equations for the compressible Navier–Stokes equations are derived and the accuracy of the method for the hydrodynamic representation of solid bodies is demonstrated with challenging test cases, including a fully turbulent boundary layer flow and a supersonic instability wave. In addition we show that the compressible BDIM is able to accurately represent noise radiation from moving bodies and flow-induced noise generation without any penalty in allowable time step.

  15. Investigating low-frequency compression using the Grid method

    DEFF Research Database (Denmark)

    Fereczkowski, Michal; Dau, Torsten; MacDonald, Ewen

    2016-01-01

    There is an ongoing discussion about whether the amount of cochlear compression in humans at low frequencies (below 1 kHz) is as high as that at higher frequencies. It is controversial whether the compression affects the slope of the off-frequency forward masking curves at those frequencies. Here, the Grid method with a 2-interval 1-up 3-down tracking rule was applied to estimate forward masking curves at two characteristic frequencies: 500 Hz and 4000 Hz. The resulting curves and the corresponding basilar membrane input-output (BM I/O) functions were found to be comparable to those reported in the literature. Moreover, slopes of the low-level portions of the BM I/O functions estimated at 500 Hz were examined, to determine whether the 500-Hz off-frequency forward masking curves were affected by compression. Overall, the collected data showed a trend confirming the compressive behaviour.

  16. Analysis of a discrete element method and coupling with a compressible fluid flow method

    International Nuclear Information System (INIS)

    Monasse, L.

    2011-01-01

    This work aims at the numerical simulation of compressible fluid/deformable structure interactions. In particular, we have developed a partitioned coupling algorithm between a Finite Volume method for the compressible fluid and a Discrete Element method capable of taking into account fractures in the solid. A survey of existing fictitious domain methods and partitioned algorithms led us to choose an Embedded Boundary method and an explicit coupling scheme. We first showed that the Discrete Element method used for the solid yielded the correct macroscopic behaviour and that the symplectic time-integration scheme ensured the preservation of energy. We then developed an explicit coupling algorithm between a compressible inviscid fluid and an undeformable solid. Mass, momentum and energy conservation and consistency properties were proved for the coupling scheme. The algorithm was then extended to the coupling with a deformable solid, in the form of a semi-implicit scheme. Finally, we applied this method to unsteady inviscid flows around moving structures: comparisons with existing numerical and experimental results demonstrate the excellent accuracy of our method. (author) [fr

  17. Robust steganographic method utilizing properties of MJPEG compression standard

    Directory of Open Access Journals (Sweden)

    Jakub Oravec

    2015-06-01

    Full Text Available This article presents the design of a steganographic method which uses a video container as cover data. The video track was recorded by a webcam and further encoded with the MJPEG compression standard. The proposed method also takes into account the effects of lossy compression. The embedding process is realized by switching the places of transform coefficients computed by the Discrete Cosine Transform. The article discusses the possibilities, the techniques used, and the advantages and drawbacks of the chosen solution. The results are presented at the end of the article.
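The embedding principle described here, switching the places of transform coefficients, can be sketched without a full MJPEG codec: per block, pick a fixed pair of mid-frequency DCT coefficients and enforce an ordering that encodes one bit, swapping the pair when the ordering disagrees. The coefficient values, pair positions, and block layout below are our own illustrations, not the paper's parameters.

```python
# One bit per block, hidden in the ordering of a coefficient pair:
#   bit 0  =>  c[A] <= c[B],    bit 1  =>  c[A] > c[B].
# If the current ordering disagrees with the bit, the two coefficients are
# swapped.  Both values already occur in the block, so the visual impact is
# small, and the ordering survives quantization that treats both positions
# alike.  (A real embedder must skip pairs with equal values.)

A, B = 3, 4          # indices of a mid-frequency coefficient pair (illustrative)

def embed_bit(coeffs, bit):
    c = list(coeffs)
    if (c[A] > c[B]) != bool(bit):
        c[A], c[B] = c[B], c[A]
    return c

def extract_bit(coeffs):
    return int(coeffs[A] > coeffs[B])

block = [52, -3, 7, 2, 5, -1, 0, 0]      # stand-in for one block's DCT coefficients
message = [1, 0, 1, 1]
stego = [embed_bit(block, bit) for bit in message]
print([extract_bit(c) for c in stego])   # -> [1, 0, 1, 1]
```

Because MJPEG compresses each frame independently, every frame offers a fresh set of blocks, so capacity scales with the video length.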

  18. Artificial urinary conduit construction using tissue engineering methods.

    Science.gov (United States)

    Kloskowski, Tomasz; Pokrywczyńska, Marta; Drewa, Tomasz

    2015-01-01

    Incontinent urinary diversion using an ileal conduit is the most popular method used by urologists after bladder cystectomy for muscle-invasive bladder cancer. The use of gastrointestinal tissue is associated with a series of complications, and the necessary extension of the surgical procedure increases the operating time. Regenerative medicine together with tissue engineering techniques gives hope for constructing an artificial urinary conduit de novo without affecting the ileum. In this review we analyze the history of urinary diversion together with current attempts at urinary conduit construction using tissue engineering methods. Based on the literature and our own experience, we present future perspectives related to artificial urinary conduit construction. The small number of papers in the field of tissue-engineered urinary conduit construction indicates that this topic requires more attention. Three main factors can be distinguished as key to resolving this topic: proper scaffold construction, along with proper regeneration of both the urothelium and smooth muscle layers. An artificial urinary conduit has a great chance to become the first commercially available product in urology constructed by regenerative medicine methods.

  19. Development of a diagnostic expert system for eddy current data analysis using applied artificial intelligence methods

    International Nuclear Information System (INIS)

    Upadhyaya, B.R.; Yan, W.; Henry, G.

    1999-01-01

    A diagnostic expert system that integrates database management methods, artificial neural networks, and decision-making using fuzzy logic has been developed for the automation of steam generator eddy current test (ECT) data analysis. The new system, known as EDDYAI, considers the following key issues: (1) digital eddy current test data calibration, compression, and representation; (2) development of robust neural networks with low probability of misclassification for flaw depth estimation; (3) flaw detection using fuzzy logic; (4) development of an expert system for database management, compilation of a trained neural network library, and a decision module; and (5) evaluation of the integrated approach using eddy current data. The implementation to field test data includes the selection of proper feature vectors for ECT data analysis, development of a methodology for large eddy current database management, artificial neural networks for flaw depth estimation, and a fuzzy logic decision algorithm for flaw detection. A large eddy current inspection database from the Electric Power Research Institute NDE Center is being utilized in this research towards the development of an expert system for steam generator tube diagnosis. The integration of ECT data pre-processing as part of the data management, fuzzy logic flaw detection technique, and tube defect parameter estimation using artificial neural networks are the fundamental contributions of this research. (orig.)

  20. Development of a diagnostic expert system for eddy current data analysis using applied artificial intelligence methods

    Energy Technology Data Exchange (ETDEWEB)

    Upadhyaya, B.R.; Yan, W. [Tennessee Univ., Knoxville, TN (United States). Dept. of Nuclear Engineering; Behravesh, M.M. [Electric Power Research Institute, Palo Alto, CA (United States); Henry, G. [EPRI NDE Center, Charlotte, NC (United States)

    1999-09-01

    A diagnostic expert system that integrates database management methods, artificial neural networks, and decision-making using fuzzy logic has been developed for the automation of steam generator eddy current test (ECT) data analysis. The new system, known as EDDYAI, considers the following key issues: (1) digital eddy current test data calibration, compression, and representation; (2) development of robust neural networks with low probability of misclassification for flaw depth estimation; (3) flaw detection using fuzzy logic; (4) development of an expert system for database management, compilation of a trained neural network library, and a decision module; and (5) evaluation of the integrated approach using eddy current data. The implementation to field test data includes the selection of proper feature vectors for ECT data analysis, development of a methodology for large eddy current database management, artificial neural networks for flaw depth estimation, and a fuzzy logic decision algorithm for flaw detection. A large eddy current inspection database from the Electric Power Research Institute NDE Center is being utilized in this research towards the development of an expert system for steam generator tube diagnosis. The integration of ECT data pre-processing as part of the data management, fuzzy logic flaw detection technique, and tube defect parameter estimation using artificial neural networks are the fundamental contributions of this research. (orig.)

  1. Evaluation of the distortions of the digital chest image caused by the data compression

    International Nuclear Information System (INIS)

    Ando, Yutaka; Kunieda, Etsuo; Ogawa, Koichi; Tukamoto, Nobuhiro; Hashimoto, Shozo; Aoki, Makoto; Kurotani, Kenichi.

    1988-01-01

    The image data compression methods using orthogonal transforms (discrete cosine transform, discrete Fourier transform, Hadamard transform, Haar transform, slant transform) were analyzed. In terms of the error and the speed of the data conversion, the discrete cosine transform (DCT) method is superior to the other methods. Block quantization by the DCT was applied to the digital chest image. The quality of the compressed and reconstructed images was examined by score analysis and ROC curve analysis. A chest image with esophageal cancer and metastatic lung tumors was evaluated at 17 checkpoints (the tumor, the vascular markings, the borders of the heart and ribs, the mediastinal structures, and others). By our score analysis, compression ratios of 1/5 and 1/10 were satisfactory. ROC analysis was performed using normal chest images superimposed with artificial coin lesions. The ROC curve at the 1/5 compression ratio is almost the same as that of the original. To summarize our study, the image data compression method using the DCT is thought to be useful for clinical use, and the 1/5 compression ratio is a tolerable ratio. (author)
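Block quantization with the DCT, as used in this record, relies on the transform packing most of a smooth block's energy into a few low-frequency coefficients, so the rest can be discarded or coarsely quantized. A 1-D sketch with a pure-Python DCT-II on one 8-sample block; the block values and the keep-3-of-8 rule are our own toy choices, not the paper's quantization tables.

```python
import math

# 1-D orthonormal DCT-II on an 8-sample block; keep only the K lowest-frequency
# coefficients (a crude stand-in for block quantization), then invert.
N = 8

def dct(x):                      # orthonormal DCT-II
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
        scale = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(scale * s)
    return out

def idct(c):                     # inverse (DCT-III with matching scaling)
    out = []
    for n in range(N):
        s = c[0] / math.sqrt(N)
        s += sum(math.sqrt(2.0 / N) * c[k] * math.cos(math.pi * (n + 0.5) * k / N)
                 for k in range(1, N))
        out.append(s)
    return out

block = [100, 104, 109, 113, 118, 122, 127, 131]   # smooth image scan line
coeffs = dct(block)
kept = coeffs[:3] + [0.0] * (N - 3)                # drop 5 of 8 coefficients
approx = idct(kept)
err = max(abs(a - b) for a, b in zip(block, approx))
print(err)    # small: most energy sits in the low-frequency coefficients
```

The 2-D case used for chest images applies the same transform along rows and columns of (typically) 8x8 blocks; the 1/5 and 1/10 ratios come from how many coefficients survive quantization and entropy coding.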

  2. Evaluation of the distortions of the digital chest image caused by the data compression

    Energy Technology Data Exchange (ETDEWEB)

    Ando, Yutaka; Kunieda, Etsuo; Ogawa, Koichi; Tukamoto, Nobuhiro; Hashimoto, Shozo; Aoki, Makoto; Kurotani, Kenichi

    1988-08-01

    The image data compression methods using orthogonal transforms (discrete cosine transform, discrete Fourier transform, Hadamard transform, Haar transform, slant transform) were analyzed. In terms of the error and the speed of the data conversion, the discrete cosine transform (DCT) method is superior to the other methods. Block quantization by the DCT was applied to the digital chest image. The quality of the compressed and reconstructed images was examined by score analysis and ROC curve analysis. A chest image with esophageal cancer and metastatic lung tumors was evaluated at 17 checkpoints (the tumor, the vascular markings, the borders of the heart and ribs, the mediastinal structures, and others). By our score analysis, compression ratios of 1/5 and 1/10 were satisfactory. ROC analysis was performed using normal chest images superimposed with artificial coin lesions. The ROC curve at the 1/5 compression ratio is almost the same as that of the original. To summarize our study, the image data compression method using the DCT is thought to be useful for clinical use, and the 1/5 compression ratio is a tolerable ratio.

  3. Fluid-driven origami-inspired artificial muscles

    Science.gov (United States)

    Li, Shuguang; Vogt, Daniel M.; Rus, Daniela; Wood, Robert J.

    2017-12-01

    Artificial muscles hold promise for safe and powerful actuation for myriad common machines and robots. However, the design, fabrication, and implementation of artificial muscles are often limited by their material costs, operating principle, scalability, and single-degree-of-freedom contractile actuation motions. Here we propose an architecture for fluid-driven origami-inspired artificial muscles. This concept requires only a compressible skeleton, a flexible skin, and a fluid medium. A mechanical model is developed to explain the interaction of the three components. A fabrication method is introduced to rapidly manufacture low-cost artificial muscles using various materials and at multiple scales. The artificial muscles can be programed to achieve multiaxial motions including contraction, bending, and torsion. These motions can be aggregated into systems with multiple degrees of freedom, which are able to produce controllable motions at different rates. Our artificial muscles can be driven by fluids at negative pressures (relative to ambient). This feature makes actuation safer than most other fluidic artificial muscles that operate with positive pressures. Experiments reveal that these muscles can contract over 90% of their initial lengths, generate stresses of ˜600 kPa, and produce peak power densities over 2 kW/kg—all equal to, or in excess of, natural muscle. This architecture for artificial muscles opens the door to rapid design and low-cost fabrication of actuation systems for numerous applications at multiple scales, ranging from miniature medical devices to wearable robotic exoskeletons to large deployable structures for space exploration.

  4. Review of Artificial Abrasion Test Methods for PV Module Technology

    Energy Technology Data Exchange (ETDEWEB)

    Miller, David C. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Muller, Matt T. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Simpson, Lin J. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-08-01

    This review is intended to identify the method or methods--and the basic details of those methods--that might be used to develop an artificial abrasion test. Methods used in the PV literature were compared with their closest implementation in existing standards. Also, meetings of the International PV Quality Assurance Task Force Task Group 12-3 (TG12-3, which is concerned with coated glass) were used to identify established test methods. Feedback from the group, which included many of the authors from the PV literature, included insights not explored within the literature itself. The combined experience and examples from the literature are intended to provide an assessment of present industry practices and an informed path forward. Recommendations toward artificial abrasion test methods are then identified based on the experiences in the literature and feedback from the PV community. The review here is strictly focused on abrasion. Assessment methods, including optical performance (e.g., transmittance or reflectance), surface energy, and verification of chemical composition, were not examined. Methods of artificially soiling PV modules or other specimens were not examined. The weathering of artificial or naturally soiled specimens (which may ultimately include combined temperature and humidity, thermal cycling and ultraviolet light) was also not examined. A sense of the purpose or application of an abrasion test method within the PV industry should, however, be evident from the literature.

  5. Separation prediction in two dimensional boundary layer flows using artificial neural networks

    International Nuclear Information System (INIS)

    Sabetghadam, F.; Ghomi, H.A.

    2003-01-01

    In this article, the ability of artificial neural networks to predict separation in steady two-dimensional boundary layer flows is studied. Data for network training are extracted from the numerical solution of an ODE obtained from the Von Karman integral equation with the approximate one-parameter Pohlhausen velocity profile. As an appropriate neural network, a two-layer radial basis generalized regression artificial neural network is used. The results show good agreement between the overall behavior of the flow fields predicted by the artificial neural network and the actual flow fields for some cases. The method can easily be extended to unsteady separation and to turbulent as well as compressible boundary layer flows. (author)
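A radial basis generalized regression network of the kind used in this record is, at its core, a Gaussian-kernel weighted average of the stored training targets. A minimal sketch on made-up data; the training pairs and smoothing width σ are our own toy values, not the boundary-layer dataset.

```python
import math

# Generalized regression neural network (GRNN): the prediction at x is a
# Gaussian-kernel weighted average of the stored training targets.

def grnn_predict(x, train_x, train_y, sigma=0.3):
    weights = [math.exp(-((x - xi) ** 2) / (2 * sigma ** 2)) for xi in train_x]
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)

# Toy stand-in for "flow parameter -> separation indicator" training pairs.
train_x = [0.0, 0.25, 0.5, 0.75, 1.0]
train_y = [0.0, 0.1, 0.4, 0.8, 1.0]     # 1.0 ~ separated, 0.0 ~ attached

print(grnn_predict(0.6, train_x, train_y))   # lies between 0.4 and 0.8
```

There is no iterative training: the network "memorizes" the samples, and σ trades off smoothness against fidelity, which is why GRNNs are attractive when the training set comes from a deterministic ODE solution.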

  6. LEARNING ALGORITHM EFFECT ON MULTILAYER FEED FORWARD ARTIFICIAL NEURAL NETWORK PERFORMANCE IN IMAGE CODING

    Directory of Open Access Journals (Sweden)

    OMER MAHMOUD

    2007-08-01

    Full Text Available One of the essential factors that affect the performance of Artificial Neural Networks is the learning algorithm. The performance of Multilayer Feed Forward Artificial Neural Networks in image compression using different learning algorithms is examined in this paper. Based on Gradient Descent, Conjugate Gradient and Quasi-Newton techniques, three different error back-propagation algorithms have been developed for use in training two types of neural networks: a single-hidden-layer network and a three-hidden-layer network. The essence of this study is to investigate the most efficient and effective training methods for use in image compression and its subsequent applications. The obtained results show that the Quasi-Newton based algorithm has better performance compared to the other two algorithms.

  7. Compressive sampling by artificial neural networks for video

    Science.gov (United States)

    Szu, Harold; Hsu, Charles; Jenkins, Jeffrey; Reinhardt, Kitt

    2011-06-01

    We describe a smart surveillance strategy for handling novelty changes. Current sensors seem to keep everything, redundant or not. The Human Visual System's Hubel-Wiesel (wavelet) edge detection mechanism pays attention to changes in movement, which naturally produce organized sparseness because a stagnant edge is not reported to the brain's visual cortex by retinal neurons. Sparseness is defined as an ordered set of ones (movement or not) relative to zeros that can be pseudo-orthogonal among themselves and thus suited for fault-tolerant storage and retrieval by means of Associative Memory (AM). The firing is sparse at the change locations. Unlike the purely random sparse masks adopted in medical Compressive Sensing, these organized ones have the additional benefit of using the image changes to build retrievable graphical indexes. We call this organized sparseness Compressive Sampling: sensing, but skipping over redundancy without altering the original image. We thus illustrate with video the survival tactics that animals roaming the Earth use daily: they acquire nothing but the space-time changes that are important to satisfy specific prey-predator relationships. We have noticed a similarity between mathematical Compressive Sensing and this biological mechanism used for survival. We have designed a hardware implementation of the Human Visual System's Compressive Sampling scheme. To speed up further, our mixed-signal circuit design builds frame differencing into on-chip processing hardware. A CMOS trans-conductance amplifier is designed here to generate a linear current output from a pair of differential input voltages taken from two photon detectors for change detection, one for the previous value and the other for the subsequent value ("write" synaptic weights by Hebbian outer products; "read" by inner products and a pointwise nonlinear threshold), in order to localize and track the threat targets.
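The frame-differencing idea above, reporting only the pixel locations that change between consecutive frames, can be sketched in software as follows. The frames and the change threshold are our own toy values; the record's actual implementation is a mixed-signal circuit, not code.

```python
# Compressive sampling by frame differencing: keep only the locations where
# consecutive frames differ by more than a threshold -- an organized-sparse
# change mask instead of the full redundant frame.

def change_mask(prev, curr, threshold=10):
    return [(i, j)
            for i, row in enumerate(curr)
            for j, v in enumerate(row)
            if abs(v - prev[i][j]) > threshold]

prev = [[10, 10, 10, 10],
        [10, 10, 10, 10],
        [10, 10, 10, 10]]
curr = [[10, 10, 10, 10],
        [10, 90, 90, 10],       # a small bright object has moved in
        [10, 10, 10, 10]]

events = change_mask(prev, curr)
print(events)                   # -> [(1, 1), (1, 2)]: only the changes are kept
```

A stagnant scene produces an empty mask, which is exactly the "stagnant edge is not reported" behaviour the record attributes to retinal neurons.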

  8. Artificial muscles for a novel simulator in minimally invasive spine surgery.

    Science.gov (United States)

    Hollensteiner, Marianne; Fuerst, David; Schrempf, Andreas

    2014-01-01

    Vertebroplasty and kyphoplasty are commonly used minimally invasive methods to treat vertebral compression fractures. Novice surgeons gather surgical skills in different ways, mainly by "learning by doing" or by training on models, specimens or simulators. Currently a new training modality, an augmented reality simulator for minimally invasive spine surgeries, is being developed. An important step in building this simulator is the accurate design of artificial tissues; in particular, vertebrae and muscles that reproduce comparable haptic feedback during tool insertion are necessary. Two artificial tissues were developed to imitate natural muscle tissue. The axial insertion force was used as the validation parameter, since it captures the mechanical properties of both artificial and natural muscles. Validation was performed by comparing insertion measurement data from fifteen artificial muscle tissues with measurement data from human muscles. Based on the forces arising during needle insertion into human muscles, a suitable material composition for manufacturing artificial muscles was found.

  9. Faster tissue interface analysis from Raman microscopy images using compressed factorisation

    Science.gov (United States)

    Palmer, Andrew D.; Bannerman, Alistair; Grover, Liam; Styles, Iain B.

    2013-06-01

    The structure of an artificial ligament was examined using Raman microscopy in combination with novel data analysis. Basis approximation and compressed principal component analysis are shown to provide efficient compression of confocal Raman microscopy images, alongside powerful methods for unsupervised analysis. This scheme accelerates data mining such as principal component analysis, since it can be performed on the compressed data representation, decreasing the factorisation time for a single image from five minutes to under a second. Using this workflow, the interface region between a chemically engineered ligament construct and a bone-mimic anchor was examined. Natural ligament contains a striated interface between the bone and the tissue that provides improved mechanical load tolerance; a similar interface was found in the ligament construct.

  10. A MODIFIED EMBEDDED ZERO-TREE WAVELET METHOD FOR MEDICAL IMAGE COMPRESSION

    Directory of Open Access Journals (Sweden)

    T. Celine Therese Jenny

    2010-11-01

    Full Text Available The Embedded Zero-tree Wavelet (EZW) is a lossy compression method that allows for progressive transmission of a compressed image. By exploiting the natural zero-trees found in a wavelet-decomposed image, the EZW algorithm is able to encode large portions of the insignificant regions of a still image with a minimal number of bits. The upshot of this encoding is an algorithm that is able to achieve relatively high peak signal-to-noise ratios (PSNR) at high compression levels. Vector Quantization (VQ) can be performed as a post-processing step to reduce the coded file size; it reduces the redundancy of the image data so that the data can be stored or transmitted in an efficient form. Experimental results demonstrate that the proposed method outperforms several well-known lossless image compression techniques for still images that contain 256 colors or fewer.

  11. A method of loss free compression for the data of nuclear spectrum

    International Nuclear Information System (INIS)

    Sun Mingshan; Wu Shiying; Chen Yantao; Xu Zurun

    2000-01-01

    A new method of loss-free compression based on the features of nuclear spectrum data is provided, from which a practicable algorithm is successfully derived. A compression rate varying from 0.50 to 0.25 is obtained, and the distribution of the processed data becomes even more suitable for reprocessing by another compression method, such as Huffman coding, to further improve the compression rate.
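The idea of exploiting channel-to-channel correlation and then handing the residuals to an entropy coder can be sketched as follows; the spectrum shape, the 16-bit raw format and the delta-plus-Huffman pipeline are illustrative assumptions, not the paper's exact algorithm:

```python
import heapq
from collections import Counter
from itertools import count

def huffman_code(symbols):
    """Build a prefix-free Huffman code {symbol: bitstring}."""
    freq = Counter(symbols)
    if len(freq) == 1:                        # degenerate one-symbol input
        return {next(iter(freq)): "0"}
    tiebreak = count()                        # avoids comparing tree nodes
    heap = [(f, next(tiebreak), s) for s, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)
        f2, _, b = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tiebreak), (a, b)))
    code = {}
    def walk(node, prefix):
        if isinstance(node, tuple):           # internal node: recurse
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                                 # leaf: emit its codeword
            code[node] = prefix
    walk(heap[0][2], "")
    return code

# A smooth, peaked spectrum: neighbouring channels are correlated, so
# first differences cluster near zero and Huffman-code very compactly.
spectrum = [50 + int(1000 * 2.718 ** (-((ch - 512) / 40.0) ** 2))
            for ch in range(1024)]
deltas = [spectrum[0]] + [b - a for a, b in zip(spectrum, spectrum[1:])]
code = huffman_code(deltas)
bits = sum(len(code[d]) for d in deltas)
rate = bits / (16 * len(spectrum))            # vs. assumed 16-bit raw counts
```

For smooth spectra the rate lands well below 1, in the same spirit as the 0.25-0.50 range the abstract quotes.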

  12. Fluid-driven origami-inspired artificial muscles.

    Science.gov (United States)

    Li, Shuguang; Vogt, Daniel M; Rus, Daniela; Wood, Robert J

    2017-12-12

    Artificial muscles hold promise for safe and powerful actuation for myriad common machines and robots. However, the design, fabrication, and implementation of artificial muscles are often limited by their material costs, operating principle, scalability, and single-degree-of-freedom contractile actuation motions. Here we propose an architecture for fluid-driven origami-inspired artificial muscles. This concept requires only a compressible skeleton, a flexible skin, and a fluid medium. A mechanical model is developed to explain the interaction of the three components. A fabrication method is introduced to rapidly manufacture low-cost artificial muscles using various materials and at multiple scales. The artificial muscles can be programmed to achieve multiaxial motions including contraction, bending, and torsion. These motions can be aggregated into systems with multiple degrees of freedom, which are able to produce controllable motions at different rates. Our artificial muscles can be driven by fluids at negative pressures (relative to ambient). This feature makes actuation safer than most other fluidic artificial muscles that operate with positive pressures. Experiments reveal that these muscles can contract over 90% of their initial lengths, generate stresses of ∼600 kPa, and produce peak power densities over 2 kW/kg, all equal to, or in excess of, natural muscle. This architecture for artificial muscles opens the door to rapid design and low-cost fabrication of actuation systems for numerous applications at multiple scales, ranging from miniature medical devices to wearable robotic exoskeletons to large deployable structures for space exploration. Copyright © 2017 the Author(s). Published by PNAS.

  13. Novel 3D Compression Methods for Geometry, Connectivity and Texture

    Science.gov (United States)

    Siddeq, M. M.; Rodrigues, M. A.

    2016-06-01

    A large number of applications in medical visualization, games, engineering design, entertainment, heritage, e-commerce and so on require the transmission of 3D models over the Internet or over local networks. 3D data compression is an important requirement for fast data storage, access and transmission within bandwidth limitations. The Wavefront OBJ (object) file format is commonly used to share models due to its clear, simple design. Normally each OBJ file contains a large amount of data (e.g. vertices and triangulated faces, normals, texture coordinates and other parameters) describing the mesh surface. In this paper we introduce a new method to compress geometry, connectivity and texture coordinates by a novel Geometry Minimization Algorithm (GM-Algorithm) in connection with arithmetic coding. First, each vertex's (x, y, z) coordinates are encoded to a single value by the GM-Algorithm. Second, triangle faces are encoded by computing the differences between two adjacent vertex locations, which are compressed by arithmetic coding together with texture coordinates. We demonstrate the method on large data sets, achieving compression ratios between 87 and 99% without reduction in the number of reconstructed vertices and triangle faces. The decompression step is based on a Parallel Fast Matching Search Algorithm (Parallel-FMS) to recover the structure of the 3D mesh. A comparative analysis of compression ratios is provided with a number of commonly used 3D file formats such as VRML, OpenCTM and STL, highlighting the performance and effectiveness of the proposed method.
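A loose sketch of the two encoding ideas is given below: quantising each vertex's (x, y, z) into a single integer key (a stand-in for the GM-Algorithm's single-value encoding) and difference-coding the face index stream before entropy coding. The quantisation step and helper names are assumptions:

```python
import numpy as np

def pack_vertices(verts, step=1e-4):
    """Quantise (x, y, z) and pack each vertex into one integer key
    (a stand-in for the GM-Algorithm's single-value encoding)."""
    q = np.round(np.asarray(verts) / step).astype(np.int64)
    offset = q.min(axis=0)                 # shift to a non-negative range
    q = q - offset
    span = q.max(axis=0) + 1               # per-axis key radix
    keys = (q[:, 0] * span[1] + q[:, 1]) * span[2] + q[:, 2]
    return keys, span, offset, step

def unpack_vertices(keys, span, offset, step):
    """Invert pack_vertices up to the quantisation error."""
    z = keys % span[2]
    rem = keys // span[2]
    y, x = rem % span[1], rem // span[1]
    return (np.stack([x, y, z], axis=1) + offset) * step

def delta_code(indices):
    """Difference-code a face index stream; the small deltas are what an
    arithmetic coder would then compress."""
    idx = np.asarray(indices).ravel()
    return np.diff(idx, prepend=0)

verts = np.random.default_rng(1).uniform(-1, 1, size=(100, 3))
keys, span, offset, step = pack_vertices(verts)
faces = np.array([[0, 1, 2], [1, 2, 3], [2, 3, 4]])
deltas = delta_code(faces)
```

The keys and deltas are what would be fed to the arithmetic coder; decoding reverses both steps exactly, apart from the chosen quantisation step.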

  14. Convergence of a residual based artificial viscosity finite element method

    KAUST Repository

    Nazarov, Murtazo

    2013-02-01

    We present a residual-based artificial viscosity finite element method to solve conservation laws. The Galerkin approximation is stabilized only by residual-based artificial viscosity, without any least-squares, SUPG, or streamline diffusion terms. We prove convergence of the method, applied to a scalar conservation law in two space dimensions, toward the unique entropy solution for implicit time stepping schemes. © 2012 Elsevier B.V. All rights reserved.
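The principle, a viscosity scaled by the PDE residual and capped by a first-order bound, can be illustrated on 1D Burgers' equation with finite differences; the paper itself uses finite elements and an entropy residual, so the constants and discretisation below are illustrative only:

```python
import numpy as np

# u_t + (u^2/2)_x = 0 on a periodic domain, central differences plus a
# residual-based artificial viscosity: small where the solution is smooth,
# capped at a first-order level where the residual (shock) is large.
N = 200
h = 1.0 / N
dt = 0.002
x = np.arange(N) * h
u = np.sin(2 * np.pi * x)                   # smooth data that steepens
u_old = u.copy()
c_max, c_e = 0.5, 1.0                       # illustrative tuning constants

for _ in range(150):
    f = 0.5 * u ** 2
    fx = (np.roll(f, -1) - np.roll(f, 1)) / (2 * h)
    R = (u - u_old) / dt + fx               # discrete PDE residual
    nu = np.minimum(c_max * h * np.abs(u),
                    c_e * h ** 2 * np.abs(R) / (np.abs(u).max() + 1e-12))
    uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / h ** 2
    u_old = u.copy()
    u = u + dt * (-fx + nu * uxx)
```

Where the solution is smooth the residual, and hence the added viscosity, is of high order, while at the forming shock the cap delivers roughly upwind-level diffusion, which is exactly the design goal of residual-based stabilization.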

  15. Control Systems for Hyper-Redundant Robots Based on Artificial Potential Method

    Directory of Open Access Journals (Sweden)

    Mihaela Florescu

    2015-06-01

    Full Text Available This paper presents the control method for hyper-redundant robots based on the artificial potential approach. The principles of this method are shown and a suggestive example is offered. Then, the artificial potential method is applied to the case of a tentacle robot, starting from the dynamic model of the robot. In addition, a series of results obtained through simulation is presented.
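The artificial potential approach can be sketched for a point robot with the classic attractive/repulsive potentials; the gains, ranges and scenario below are illustrative, not the paper's tentacle-robot dynamics:

```python
import numpy as np

def potential_grad(p, goal, obstacles, k_att=1.0, k_rep=0.5, rho0=1.0):
    """Gradient of the classic attractive + repulsive artificial potential."""
    grad = k_att * (p - goal)                      # attractive term
    for obs in obstacles:
        d = p - obs
        rho = np.linalg.norm(d)
        if 1e-9 < rho < rho0:                      # repulsion acts within rho0
            grad += k_rep * (1.0 / rho0 - 1.0 / rho) / rho ** 3 * d
    return grad

# Steepest-descent "controller": step the robot against the gradient.
p = np.array([0.0, 0.0])
goal = np.array([5.0, 5.0])
obstacles = [np.array([2.5, 2.0])]                 # slightly off the path
for _ in range(2000):
    p = p - 0.01 * potential_grad(p, goal, obstacles)
```

The robot detours around the obstacle and settles at the goal, the global minimum of the combined potential; collinear start-obstacle-goal configurations can instead trap the descent in a local minimum, the well-known limitation of the method.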

  16. NON-COHESIVE SOILS’ COMPRESSIBILITY AND UNEVEN GRAIN-SIZE DISTRIBUTION RELATION

    Directory of Open Access Journals (Sweden)

    Anatoliy Mirnyy

    2016-03-01

    Full Text Available This paper presents the results of a laboratory investigation of soil compression phases with consideration of various granulometric compositions. Materials and Methods: An experimental soil box with microscale video recording for studying compression phases is described. Photo and video materials showing the differences in microscale particle movements were obtained for non-cohesive soils with different grain-size distributions. Results: The analysis of the compression test results and the separation of elastic and plastic deformations allow each compression phase to be identified. It is shown that soil density correlates with deformability parameters only for the same grain-size distribution. Based on the test results, the authors suggest that the compaction ratio is not sufficient for estimating deformability unless the grain-size distribution is taken into account. Discussion and Conclusions: Considering grain-size distribution allows refining the technological requirements for artificial soil structures, backfills, and sand beds. Further studies could be used for developing standard documents, SP 45.13330.2012 in particular.

  17. The Diagonal Compression Field Method using Circular Fans

    DEFF Research Database (Denmark)

    Hansen, Thomas

    2005-01-01

    This paper presents a new design method, which is a modification of the diagonal compression field method, the modification consisting of the introduction of circular fan stress fields. The traditional method does not allow changes of the concrete compression direction throughout a given beam if equilibrium is strictly required. This is conservative, since it is not possible to fully utilize the concrete strength in regions with low shear stresses. The larger the inclination of the uniaxial concrete stress (the smaller the cot θ-value, where θ is the angle relative to the beam axis), the more transverse shear reinforcement is needed; hence it would be optimal if the cot θ-value for a given beam could be set to a low value in regions with high shear stresses and thereafter increased in regions with low shear stresses. Thus the shear reinforcement would be reduced and the concrete strength would be utilized in a better way. In the paper it is shown how circular fan stress...

  18. [On the preparation and mechanical properties of PVA hydrogel bionic cartilage/bone composite artificial articular implants].

    Science.gov (United States)

    Meng, Haoye; Zheng, Yudong; Huang, Xiaoshan; Yue, Bingqing; Xu, Hong; Wang, Yingjun; Chen, Xiaofeng

    2010-10-01

    In view of the problems that conventional artificial cartilage lacks bioactivity and is prone to peeling off in repeated use as a result of insufficient strength of its bond with the subchondral bone, we designed and prepared a novel kind of PVA-BG composite hydrogel as a bionic artificial articular cartilage/bone composite implant. The effects of the preparation processes and conditions on the mechanical properties of the implant were explored. In addition, the relationships between compression strain rate, BG content, PVA hydrogel thickness and compressive tangent modulus were also elucidated. We also analyzed the effects of cancellous bone aperture, BG and PVA content on the shear strength of the bonding interface between the artificial articular cartilage and cancellous bone. Meanwhile, the bonding interface of the artificial articular cartilage and cancellous bone was characterized by scanning electron microscopy. It was revealed that the compressive modulus of the composite implants increased correspondingly with the addition of BG content and the augmentation of PVA hydrogel thickness. The compressive modulus and bonding interface were both related to the apertures of the cancellous bone. The compressive modulus of the composite implants was 1.6-2.23 MPa and the shear strength of the bonding interface was 0.63-1.21 MPa. These results demonstrated that the connection between the artificial articular cartilage and cancellous bone was adequately firm.

  19. Quinary excitation method for pulse compression ultrasound measurements.

    Science.gov (United States)

    Cowell, D M J; Freear, S

    2008-04-01

    A novel switched excitation method for linear frequency modulated excitation of ultrasonic transducers in pulse compression systems is presented that is simple to realise, yet provides reduced signal sidelobes at the output of the matched filter compared to bipolar pseudo-chirp excitation. Pulse compression signal sidelobes are reduced through the use of simple amplitude tapering at the beginning and end of the excitation duration. Amplitude tapering using switched excitation is realised through the use of intermediate voltage switching levels, half that of the main excitation voltages. In total five excitation voltages are used creating a quinary excitation system. The absence of analogue signal generation and power amplifiers renders the excitation method attractive for applications with requirements such as a high channel count or low cost per channel. A systematic study of switched linear frequency modulated excitation methods with simulated and laboratory based experimental verification is presented for 2.25 MHz non-destructive testing immersion transducers. The signal to sidelobe noise level of compressed waveforms generated using quinary and bipolar pseudo-chirp excitation are investigated for transmission through a 0.5m water and kaolin slurry channel. Quinary linear frequency modulated excitation consistently reduces signal sidelobe power compared to bipolar excitation methods. Experimental results for transmission between two 2.25 MHz transducers separated by a 0.5m channel of water and 5% kaolin suspension shows improvements in signal to sidelobe noise power in the order of 7-8 dB. The reported quinary switched method for linear frequency modulated excitation provides improved performance compared to pseudo-chirp excitation without the need for high performance excitation amplifiers.
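A rough sketch of the scheme is easy to write down: a linear frequency modulated reference, a switched approximation restricted to the five levels {-1, -0.5, 0, +0.5, +1} with half-amplitude tapering at the ends, and matched filtering for pulse compression. The sample rate, sweep band and taper length are illustrative assumptions:

```python
import numpy as np

fs = 100e6                        # sample rate, Hz (assumed)
n = 1000                          # 10 us excitation at fs
t = np.arange(n) / fs
f0, f1, T = 1.5e6, 3.0e6, n / fs  # sweep band around a 2.25 MHz transducer
chirp = np.sin(2 * np.pi * (f0 * t + (f1 - f0) / (2 * T) * t ** 2))

# Switched approximation: the sign of the chirp gives bipolar pseudo-chirp
# excitation; tapering the first and last eighth to half amplitude yields
# the five levels {-1, -0.5, 0, +0.5, +1} of quinary excitation.
taper = np.ones(n)
taper[: n // 8] = 0.5
taper[-(n // 8):] = 0.5
quinary = taper * np.sign(chirp)

# Pulse compression: matched filter against the ideal chirp reference.
compressed = np.correlate(quinary, chirp, mode="full")
```

Comparing the sidelobes of `compressed` with those obtained from the untapered bipolar waveform reproduces the qualitative effect the abstract reports: amplitude tapering at the edges lowers sidelobe power at the matched-filter output.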

  20. Compressed Sensing Methods in Radio Receivers Exposed to Noise and Interference

    DEFF Research Database (Denmark)

    Pierzchlewski, Jacek

    , there is a problem of interference, which makes digitization of radio receivers even more difficult. High-order low-pass filters are needed to remove interfering signals and secure a high-quality reception. In the mid-2000s a new method of signal acquisition, called compressed sensing, emerged. Compressed sensing...... the downconverted baseband signal and interference, may be replaced by low-order filters. Additional digital signal processing is a price to pay for this feature. Hence, the signal processing is moved from the analog to the digital domain. Filtering compressed sensing, which is a new application of compressed sensing...

  1. Alteration of blue pigment in artificial iris in ocular prosthesis: effect of paint, drying method and artificial aging.

    Science.gov (United States)

    Goiato, Marcelo Coelho; Fernandes, Aline Úrsula Rocha; dos Santos, Daniela Micheline; Hadadd, Marcela Filié; Moreno, Amália; Pesqueira, Aldiéris Alves

    2011-02-01

    The artificial iris is the structure responsible for the dissimulation and aesthetics of an ocular prosthesis. The objective of the present study was to evaluate the color stability of the artificial iris of microwave-polymerized ocular prostheses as a function of paint type, drying method and accelerated aging. A total of 40 discs of microwave-polymerized acrylic resin were fabricated and divided according to the blue paint type (n = 5): hydrosoluble acrylic, nitrocellulose automotive, hydrosoluble gouache and oil paints. Paints were dried either naturally or under an infrared light bulb. Each specimen consisted of one disc in colorless acrylic resin and another colored with a basic sclera pigment. Painting was performed on one surface of one of the discs. The specimens were submitted to an artificial aging chamber under ultraviolet light for 1008 h. A reflective spectrophotometer was used to evaluate color changes. Data were evaluated by 3-way repeated-measures ANOVA and the Tukey HSD test (α = 0.05). All paints suffered color alteration. The oil paint presented the highest color resistance to artificial aging regardless of drying method. Copyright © 2010 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.

  2. An ROI multi-resolution compression method for 3D-HEVC

    Science.gov (United States)

    Ti, Chunli; Guan, Yudong; Xu, Guodong; Teng, Yidan; Miao, Xinyuan

    2017-09-01

    3D High Efficiency Video Coding (3D-HEVC) offers significant potential for increasing the compression ratio of multi-view RGB-D videos. However, the bit rate still rises dramatically as video resolution improves, which brings challenges to the transmission network, especially the mobile network. This paper proposes an ROI multi-resolution compression method for 3D-HEVC to better preserve the information in the ROI under limited bandwidth. This is realized primarily through ROI extraction and the compression of multi-resolution preprocessed video as alternative data according to the network conditions. First, semantic contours are detected by modified structured forests to restrain the color textures inside objects. The ROI is then determined from the contour neighborhood along with the face region and foreground area of the scene. Second, the RGB-D videos are divided into slices and compressed via 3D-HEVC at different resolutions for selection by audiences and applications. Afterwards, the reconstructed low-resolution videos from the 3D-HEVC encoder are up-sampled directly via Laplace transformation and used to replace the non-ROI areas of the high-resolution videos. Finally, the ROI multi-resolution compressed slices are obtained by compressing the ROI-preprocessed videos with 3D-HEVC. The temporal and spatial details of the non-ROI areas are reduced in the low-resolution videos, so the ROI is better preserved by the encoder automatically. Experiments indicate that the proposed method can keep the key high-frequency information with subjective significance while the bit rate is reduced.

  3. High-strength mineralized collagen artificial bone

    Science.gov (United States)

    Qiu, Zhi-Ye; Tao, Chun-Sheng; Cui, Helen; Wang, Chang-Ming; Cui, Fu-Zhai

    2014-03-01

    Mineralized collagen (MC) is a biomimetic material that mimics natural bone matrix in terms of both chemical composition and microstructure. The biomimetic MC possesses good biocompatibility and osteogenic activity, and is capable of guiding bone regeneration when used for bone defect repair. However, the mechanical strength of existing MC artificial bone is too low to provide effective support at human load-bearing sites, so it can only be used for repair at non-load-bearing sites, such as bone defect filling, bone graft augmentation, and so on. In the present study, a high-strength MC artificial bone material was developed by using collagen as the template for the biomimetic mineralization of calcium phosphate, followed by a cold compression molding process at a certain pressure. The appearance and density of the dense MC were similar to those of natural cortical bone, and the phase composition conformed to that of animal cortical bone, as demonstrated by XRD. Mechanical properties were tested and the results showed that the compressive strength was comparable to human cortical bone, while the compressive modulus was as low as human cancellous bone. Such high strength is able to provide effective mechanical support for bone defect repair at human load-bearing sites, and the low compressive modulus can help avoid stress shielding in the application of bone regeneration. Both in vitro cell experiments and an in vivo implantation assay demonstrated good biocompatibility of the material, and in vivo stability evaluation indicated that this high-strength MC artificial bone could provide long-term effective mechanical support at human load-bearing sites.

  4. The Basic Principles and Methods of the System Approach to Compression of Telemetry Data

    Science.gov (United States)

    Levenets, A. V.

    2018-01-01

    The task of compressing measurement data remains urgent for information-measurement systems. In this paper, the basic principles necessary for designing highly effective systems for the compression of telemetric information are offered. The basis of the offered principles is the representation of a telemetric frame as a whole information space in which existing correlations can be found. The methods of data transformation and the compressing algorithms realizing the offered principles are described. The compression ratio for the offered compression algorithm is about 1.8 times higher than for a classic algorithm. Thus, the results of the research show the good prospects of these methods and algorithms.

  5. The Effects of Different Curing Methods on the Compressive Strength of Terracrete

    Directory of Open Access Journals (Sweden)

    O. Alake

    2009-01-01

    Full Text Available This research evaluated the effects of different curing methods on the compressive strength of terracrete. Several tests, including sieve analysis, were carried out on the constituents of terracrete (granite and laterite) to determine their particle size distribution, and performance criteria tests were used to determine the compressive strength of terracrete cubes over 7 to 35 days of curing. Sand, foam-soaked, tank and open methods of curing were used, and the study was carried out under controlled temperature. Sixty 100 × 100 × 100 mm cubes were cast using a mix ratio of 1 part cement, 1½ parts laterite and 3 parts coarse aggregate (granite), proportioned by weight, with a water-cement ratio of 0.62. The results showed that, of the four curing methods, the open method of curing was the best because those cubes gained the highest average compressive strength of 10.3 N/mm2 by the 35th day.

  6. Image-Based Compression Method of Three-Dimensional Range Data with Texture

    OpenAIRE

    Chen, Xia; Bell, Tyler; Zhang, Song

    2017-01-01

    Recently, high speed and high accuracy three-dimensional (3D) scanning techniques and commercially available 3D scanning devices have made real-time 3D shape measurement and reconstruction possible. The conventional mesh representation of 3D geometry, however, results in large file sizes, causing difficulties for storage and transmission. Methods for compressing scanned 3D data are therefore desired. This paper proposes a novel compression method which stores 3D range data within the c...

  7. On mathematical modelling and numerical simulation of transient compressible flow across open boundaries

    Energy Technology Data Exchange (ETDEWEB)

    Rian, Kjell Erik

    2003-07-01

    In numerical simulations of turbulent reacting compressible flows, artificial boundaries are needed to obtain a finite computational domain when an unbounded physical domain is given. Artificial boundaries which fluids are free to cross are called open boundaries. When calculating such flows, non-physical reflections at the open boundaries may occur. These reflections can pollute the solution severely, leading to inaccurate results, and the generation of spurious fluctuations may even cause the numerical simulation to diverge. Thus, a proper treatment of the open boundaries in numerical simulations of turbulent reacting compressible flows is required to obtain a reliable solution for realistic conditions. A local quasi-one-dimensional characteristic-based open-boundary treatment for the Favre-averaged governing equations for time-dependent three-dimensional multi-component turbulent reacting compressible flow is presented. A k-ε model for turbulent compressible flow and Magnussen's EDC model for turbulent combustion are included in the analysis. The notion of physical boundary conditions is incorporated in the method, and the conservation equations themselves are applied on the boundaries to complement the set of physical boundary conditions. A two-dimensional finite-difference-based computational fluid dynamics code featuring high-order accurate numerical schemes was developed for the numerical simulations. Transient numerical simulations of the well-known one-dimensional shock-tube problem, a two-dimensional pressure-tower problem in a decaying turbulence field, and a two-dimensional turbulent reacting compressible flow problem have been performed. Flow- and combustion-generated pressure waves seem to be well treated by the non-reflecting subsonic open-boundary conditions. Limitations of the present open-boundary treatment are demonstrated and discussed. The simple and solid physical basis of the method makes it both favourable and relatively easy to

  8. Technical note: New table look-up lossless compression method ...

    African Journals Online (AJOL)

    Technical note: New table look-up lossless compression method based on binary index archiving. ... International Journal of Engineering, Science and Technology ... This paper intends to present a common use archiver, made up following the dictionary technique and using the index archiving method as a simple and ...

  9. DNABIT Compress - Genome compression algorithm.

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that "DNABIT Compress" is the best among the existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio less than 1.72 bits/base.
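The fixed two-bits-per-base core of such schemes is easy to sketch (DNABIT Compress itself goes further, assigning variable bit codes to exact and reverse repeat fragments, which is how it gets below 2 bits/base):

```python
def dna_pack(seq):
    """Pack A/C/G/T at 2 bits per base, i.e. 4x smaller than 8-bit text.
    (A baseline only; repeat-aware codes push below 2 bits/base.)"""
    code = {"A": 0, "C": 1, "G": 2, "T": 3}
    bits = 0
    for base in seq:
        bits = (bits << 2) | code[base]
    return bits, len(seq)

def dna_unpack(bits, n):
    """Recover the sequence from its packed integer form."""
    inv = "ACGT"
    out = []
    for _ in range(n):
        out.append(inv[bits & 3])
        bits >>= 2
    return "".join(reversed(out))

packed, n = dna_pack("GATTACA")
```

The round trip is exact, so this baseline is lossless at a flat 2 bits/base; everything below that figure must come from exploiting repeats.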

  10. Artificial intelligence methods applied for quantitative analysis of natural radioactive sources

    International Nuclear Information System (INIS)

    Medhat, M.E.

    2012-01-01

    Highlights: ► Basic description of artificial neural networks. ► Natural gamma-ray sources and the problem of their detection. ► Application of a neural network for peak detection and activity determination. - Abstract: Artificial neural networks (ANNs) represent one of the artificial intelligence methods used for modeling and uncertainty quantification in different applications. The objective of the proposed work was to apply ANNs to identify isotopes and to predict the uncertainties of their activities for some natural radioactive sources. The method was tested by analyzing gamma-ray spectra emitted from natural radionuclides in soil samples, detected by high-resolution gamma-ray spectrometry based on HPGe (high-purity germanium). The principle of the suggested method is described, including the definition of the relevant input parameters, input data scaling and network training. It is clear that there is satisfactory agreement between the obtained and predicted results using the neural network.

  11. A new method of on-line multiparameter amplitude analysis with compression

    International Nuclear Information System (INIS)

    Morhac, M.; Matousek, V.

    1996-01-01

    An algorithm of on-line multidimensional amplitude analysis with compression using a fast adaptive orthogonal transform is presented in the paper. The method is based on a direct modification of the multiplication coefficients of the signal flow graph of the fast Cooley-Tukey algorithm. The coefficients are modified according to a reference vector representing the processed data. The method has been tested by compressing three-parameter experimental nuclear data. The efficiency of the derived adaptive transform is compared with classical orthogonal transforms. (orig.)

  12. Methods of compression of digital holograms, based on 1-level wavelet transform

    International Nuclear Information System (INIS)

    Kurbatova, E A; Cheremkhin, P A; Evtikhiev, N N

    2016-01-01

    To reduce the memory required for storing information about 3D scenes and to decrease the rate of hologram transmission, digital hologram compression can be used. Compression of digital holograms by wavelet transforms is among the most powerful methods. In the paper the most popular wavelet transforms are considered and applied to digital hologram compression. The obtained values of reconstruction quality and hologram diffraction efficiency are compared. (paper)
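A 1-level 2D Haar transform, the simplest wavelet considered in such comparisons, can be written in a few lines of NumPy; compression then amounts to discarding small detail coefficients. The image and threshold below are illustrative:

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform -> (LL, LH, HL, HH) subbands."""
    a = img[0::2, :] + img[1::2, :]          # row-pair sums
    d = img[0::2, :] - img[1::2, :]          # row-pair differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    H, W = LL.shape
    out = np.empty((2 * H, 2 * W))
    out[0::2, 0::2] = (LL + LH + HL + HH) / 2
    out[0::2, 1::2] = (LL - LH + HL - HH) / 2
    out[1::2, 0::2] = (LL + LH - HL - HH) / 2
    out[1::2, 1::2] = (LL - LH - HL + HH) / 2
    return out

# "Compress" by zeroing small detail coefficients (threshold assumed);
# the LL subband keeps the coarse hologram structure.
img = np.random.default_rng(0).normal(size=(64, 64))
LL, LH, HL, HH = haar2d(img)
thr = 0.5
LHt, HLt, HHt = (np.where(np.abs(b) > thr, b, 0.0) for b in (LH, HL, HH))
approx = ihaar2d(LL, LHt, HLt, HHt)
```

With no thresholding the transform is perfectly invertible; the reconstruction quality versus the fraction of retained coefficients is exactly the trade-off the paper measures across different wavelets.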

  13. [The Identification of the Origin of Chinese Wolfberry Based on Infrared Spectral Technology and the Artificial Neural Network].

    Science.gov (United States)

    Li, Zhong; Liu, Ming-de; Ji, Shou-xiang

    2016-03-01

    Fourier Transform Infrared Spectroscopy (FTIR) is established to determine the geographic origins of Chinese wolfberry quickly. In this paper, 45 samples of Chinese wolfberry from different places in Qinghai Province were surveyed by FTIR. The original FTIR data matrix was pretreated with common preprocessing and the wavelet transform. Compared with common window-shifting smoothing preprocessing, standard normal variate correction and multiplicative scatter correction, the wavelet transform is an effective spectral data preprocessing method. Before establishing the model with artificial neural networks, the spectral variables were compressed by means of the wavelet transform so as to increase the training speed of the artificial neural networks, and the related parameters of the artificial neural network model are also discussed in detail. The survey shows that even if the infrared spectroscopy data are compressed to 1/8 of their original size, the spectral information and analytical accuracy do not deteriorate. The compressed spectral variables were used as input parameters of the back-propagation artificial neural network (BP-ANN) model, and the geographic origins of Chinese wolfberry were used as output parameters. A three-layer neural network model was built to predict the 10 unknown samples, using the MATLAB neural network toolbox to design an error back-propagation network. The number of hidden-layer neurons was 5, and the number of output-layer neurons was 1. The transfer function of the hidden layer was tansig, while the transfer function of the output layer was purelin. The network training function was trainlm and the learning function for weights and thresholds was learngdm, with net.trainParam.epochs = 1000 and net.trainParam.goal = 0.001. A recognition rate of 100% was achieved. It can be concluded that the method is quite suitable for the quick discrimination of the producing areas of Chinese wolfberry.

  14. Wellhead compression

    Energy Technology Data Exchange (ETDEWEB)

    Harrington, Joe [Sertco Industries, Inc., Okemah, OK (United States); Vazquez, Daniel [Hoerbiger Service Latin America Inc., Deerfield Beach, FL (United States); Jacobs, Denis Richard [Hoerbiger do Brasil Industria de Equipamentos, Cajamar, SP (Brazil)

    2012-07-01

    Over time, all wells experience a natural decline in oil and gas production. In gas wells, the major problems are liquid loading and low downhole differential pressures which negatively impact total gas production. As a form of artificial lift, wellhead compressors help reduce the tubing pressure resulting in gas velocities above the critical velocity needed to surface water, oil and condensate regaining lost production and increasing recoverable reserves. Best results come from reservoirs with high porosity, high permeability, high initial flow rates, low decline rates and high total cumulative production. In oil wells, excessive annulus gas pressure tends to inhibit both oil and gas production. Wellhead compression packages can provide a cost effective solution to these problems by reducing the system pressure in the tubing or annulus, allowing for an immediate increase in production rates. Wells furthest from the gathering compressor typically benefit the most from wellhead compression due to system pressure drops. Downstream compressors also benefit from higher suction pressures reducing overall compression horsepower requirements. Special care must be taken in selecting the best equipment for these applications. The successful implementation of wellhead compression from an economical standpoint hinges on the testing, installation and operation of the equipment. Key challenges and suggested equipment features designed to combat those challenges and successful case histories throughout Latin America are discussed below.(author)

  15. Coupled double-distribution-function lattice Boltzmann method for the compressible Navier-Stokes equations.

    Science.gov (United States)

    Li, Q; He, Y L; Wang, Y; Tao, W Q

    2007-11-01

    A coupled double-distribution-function lattice Boltzmann method is developed for the compressible Navier-Stokes equations. Different from existing thermal lattice Boltzmann methods, this method can recover the compressible Navier-Stokes equations with a flexible specific-heat ratio and Prandtl number. In the method, a density distribution function based on a multispeed lattice is used to recover the compressible continuity and momentum equations, while the compressible energy equation is recovered by an energy distribution function. The energy distribution function is then coupled to the density distribution function via the thermal equation of state. In order to obtain an adjustable specific-heat ratio, a constant related to the specific-heat ratio is introduced into the equilibrium energy distribution function. Two different coupled double-distribution-function lattice Boltzmann models are also proposed in the paper. Numerical simulations are performed for the Riemann problem, the double-Mach-reflection problem, and the Couette flow with a range of specific-heat ratios and Prandtl numbers. The numerical results are found to be in excellent agreement with analytical and/or other solutions.

  16. Generalised synchronisation of spatiotemporal chaos using feedback control method and phase compression

    International Nuclear Information System (INIS)

    Xing-Yuan, Wang; Na, Zhang

    2010-01-01

    Coupled map lattices are taken as examples to study the synchronisation of spatiotemporal chaotic systems. First, a generalised synchronisation of two coupled map lattices is realised through selecting an appropriate feedback function and appropriate range of feedback parameter. Based on this method we use the phase compression method to extend the range of the parameter. So, we integrate the feedback control method with the phase compression method to implement the generalised synchronisation and obtain an exact range of feedback parameter. This technique is simple to implement in practice. Numerical simulations show the effectiveness and the feasibility of the proposed program. (general)
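
    A minimal sketch of drive-response synchronisation of a coupled map lattice by linear feedback; the feedback function, gain value and lattice parameters below are illustrative assumptions, not the paper's specific choices, and the phase compression step is omitted:

    ```python
    import random

    def logistic(x, a=4.0):
        return a * x * (1.0 - x)

    def cml_step(lat, eps=0.1):
        """One update of a diffusively coupled logistic map lattice (periodic)."""
        n = len(lat)
        f = [logistic(v) for v in lat]
        return [(1 - eps) * f[i] + 0.5 * eps * (f[i - 1] + f[(i + 1) % n])
                for i in range(n)]

    random.seed(1)
    n, k, steps = 32, 0.9, 200          # k: feedback gain (assumed value)
    drive = [random.random() for _ in range(n)]
    resp  = [random.random() for _ in range(n)]
    for _ in range(steps):
        drive_next = cml_step(drive)
        # linear feedback pulls the free-running response toward the drive;
        # with (1 - k) * max|f'| = 0.4 < 1 the sync error contracts every step
        resp = [(1 - k) * r + k * d for r, d in zip(cml_step(resp), drive_next)]
        drive = drive_next
    err = max(abs(d - r) for d, r in zip(drive, resp))
    ```

    The feedback parameter range follows from the contraction condition: here the logistic slope is at most 4, so any k > 0.75 guarantees synchronisation.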

  17. The Diagonal Compression Field Method using Circular Fans

    DEFF Research Database (Denmark)

    Hansen, Thomas

    2006-01-01

    is a modification of the traditional method, the modification consisting of the introduction of circular fan stress fields. To ensure proper behaviour for the service load, the chosen value of cot θ (where θ is the angle of the uniaxial concrete compression relative to the beam axis) should not be too large...

  18. A Novel ECG Data Compression Method Using Adaptive Fourier Decomposition With Security Guarantee in e-Health Applications.

    Science.gov (United States)

    Ma, JiaLi; Zhang, TanTan; Dong, MingChui

    2015-05-01

    This paper presents a novel electrocardiogram (ECG) compression method for e-health applications, hybridizing an adaptive Fourier decomposition (AFD) algorithm with a symbol substitution (SS) technique. The compression consists of two stages: the first stage, AFD, executes efficient lossy compression with high fidelity; the second stage, SS, performs lossless compression enhancement and built-in data encryption, which is pivotal for e-health. Validated with 48 ECG records from the MIT-BIH arrhythmia benchmark database, the proposed method achieves an average compression ratio (CR) of 17.6-44.5 and a percentage root mean square difference (PRD) of 0.8-2.0% with a highly linear and robust PRD-CR relationship, pushing the compression performance into a previously unexploited region. As such, this paper provides an attractive candidate ECG compression method for pervasive e-health applications.
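
    The two figures of merit quoted above can be computed as follows; the PRD definition without baseline subtraction is an assumption, and the sizes fed to the CR are illustrative:

    ```python
    def prd_percent(x, x_hat):
        """Percentage root-mean-square difference (assumed form:
        no baseline subtraction in the denominator)."""
        num = sum((a - b) ** 2 for a, b in zip(x, x_hat))
        den = sum(a * a for a in x)
        return 100.0 * (num / den) ** 0.5

    def compression_ratio(original_size, compressed_size):
        return original_size / compressed_size

    x     = [1.0, 2.0, 3.0, 4.0]
    x_hat = [1.0, 2.1, 2.9, 4.0]          # reconstruction after the lossy stage
    prd = prd_percent(x, x_hat)           # small distortion -> low PRD
    cr  = compression_ratio(7200, 400)    # 18:1, inside the reported CR range
    ```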

  19. Soft computing methods for estimating the uniaxial compressive strength of intact rock from index tests

    Czech Academy of Sciences Publication Activity Database

    Mishra, A. Deepak; Srigyan, M.; Basu, A.; Rokade, P. J.

    2015-01-01

    Roč. 80, December 2015 (2015), s. 418-424 ISSN 1365-1609 Institutional support: RVO:68145535 Keywords : uniaxial compressive strength * rock indices * fuzzy inference system * artificial neural network * adaptive neuro-fuzzy inference system Subject RIV: DH - Mining, incl. Coal Mining Impact factor: 2.010, year: 2015 http://ac.els-cdn.com/S1365160915300708/1-s2.0-S1365160915300708-main.pdf?_tid=318a7cec-8929-11e5-a3b8-00000aacb35f&acdnat=1447324752_2a9d947b573773f88da353a16f850eac

  20. A novel signal compression method based on optimal ensemble empirical mode decomposition for bearing vibration signals

    Science.gov (United States)

    Guo, Wei; Tse, Peter W.

    2013-01-01

    Today, remote machine condition monitoring is popular due to the continuous advancement in wireless communication. Bearings are among the most frequently failed components in many rotating machines. To accurately identify the type of bearing fault, large amounts of vibration data need to be collected. However, the volume of transmitted data cannot be too high because the bandwidth of wireless communication is limited. To solve this problem, the data are usually compressed before transmitting to a remote maintenance center. This paper proposes a novel signal compression method that can substantially reduce the amount of data that need to be transmitted without sacrificing the accuracy of fault identification. The proposed signal compression method is based on ensemble empirical mode decomposition (EEMD), which is an effective method for adaptively decomposing the vibration signal into different bands of signal components, termed intrinsic mode functions (IMFs). An optimization method was designed to automatically select appropriate EEMD parameters for the analyzed signal, and in particular to select the appropriate level of the added white noise in the EEMD method. An index termed the relative root-mean-square error was used to evaluate the decomposition performances under different noise levels to find the optimal level. After applying the optimal EEMD method to a vibration signal, the IMF relating to the bearing fault can be extracted from the original vibration signal. Compressing this signal component yields a much smaller proportion of data samples to be retained for transmission and later reconstruction. The proposed compression method was also compared with the popular wavelet compression method.
Experimental results demonstrate that the optimization procedure automatically finds appropriate EEMD parameters for the analyzed signals, and that the IMF-based compression method provides a higher compression ratio, while retaining the bearing defect
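
    The relative root-mean-square error index used to rank candidate noise levels can be sketched as below; the exact definition and the candidate index values are assumptions for illustration:

    ```python
    def relative_rmse(signal, reconstruction):
        """Relative root-mean-square error (assumed form: RMS of the
        residual divided by RMS of the original signal)."""
        n = len(signal)
        rms_err = (sum((s - r) ** 2 for s, r in zip(signal, reconstruction)) / n) ** 0.5
        rms_sig = (sum(s * s for s in signal) / n) ** 0.5
        return rms_err / rms_sig

    # Pick the added-noise level whose decomposition reconstructs the signal
    # with the smallest index (the index values below are purely illustrative).
    index_by_noise = {0.1: 0.08, 0.2: 0.03, 0.4: 0.11}
    best_noise = min(index_by_noise, key=index_by_noise.get)
    ```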

  1. New patient-controlled abdominal compression method in radiography: radiation dose and image quality.

    Science.gov (United States)

    Piippo-Huotari, Oili; Norrman, Eva; Anderzén-Carlsson, Agneta; Geijer, Håkan

    2018-05-01

    The radiation dose for patients can be reduced with many methods and one way is to use abdominal compression. In this study, the radiation dose and image quality for a new patient-controlled compression device were compared with conventional compression and compression in the prone position. To compare radiation dose and image quality of patient-controlled compression with conventional and prone compression in general radiography, an experimental design with a quantitative approach was used. After obtaining the approval of the ethics committee, a consecutive sample of 48 patients was examined with the standard clinical urography protocol. The radiation doses were measured as dose-area product and analyzed with a paired t-test. The image quality was evaluated by visual grading analysis. Four radiologists evaluated each image individually by scoring nine criteria modified from the European quality criteria for diagnostic radiographic images. There was no significant difference in radiation dose or image quality between conventional and patient-controlled compression. Prone position resulted in both higher dose and inferior image quality. Patient-controlled compression gave similar dose levels as conventional compression and lower than prone compression. Image quality was similar with both patient-controlled and conventional compression and was judged to be better than in the prone position.

  2. DNABIT Compress – Genome compression algorithm

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the “DNABIT Compress” algorithm performs best among existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base. PMID:21383923
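
    For contrast with the sub-2-bit rates reported above, a naive fixed 2-bit packing (a baseline for illustration, not the DNABIT algorithm itself) looks like this:

    ```python
    # A fixed 2-bit code is the natural baseline; DNABIT's variable bit
    # assignment for repeat fragments is what pushes the rate below 2 bits/base.
    CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

    def pack(seq):
        """Pack a DNA string into bytes at 2 bits per base (MSB first)."""
        out, acc, nbits = bytearray(), 0, 0
        for base in seq:
            acc = (acc << 2) | CODE[base]
            nbits += 2
            if nbits == 8:
                out.append(acc)
                acc, nbits = 0, 0
        if nbits:                       # left-align any trailing bits
            out.append(acc << (8 - nbits))
        return bytes(out)

    packed = pack("ACGTACGTAC")
    bits_per_base = 8 * len(packed) / 10   # 2.4 here, padding included
    ```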

  3. Application of artificial intelligence methods for prediction of steel mechanical properties

    Directory of Open Access Journals (Sweden)

    Z. Jančíková

    2008-10-01

    Full Text Available The target of the contribution is to outline possibilities of applying artificial neural networks for the prediction of mechanical steel properties after heat treatment and to judge their prospective use in this field. The achieved models enable the prediction of final mechanical material properties on the basis of the decisive parameters influencing these properties. By applying artificial intelligence methods in combination with mathematical-physical analysis methods it will be possible to create facilities for designing a system of the continuous rationalization of existing and also newly developing industrial technologies.

  4. Semi-implicit method for three-dimensional compressible MHD simulation

    International Nuclear Information System (INIS)

    Harned, D.S.; Kerner, W.

    1984-03-01

    A semi-implicit method for solving the full compressible MHD equations in three dimensions is presented. The method is unconditionally stable with respect to the fast compressional modes. The time step is instead limited by the slower shear Alfven motion. The computing time required for one time step is essentially the same as for explicit methods. Linear stability limits are derived and verified by three-dimensional tests on linear waves in slab geometry. (orig.)
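
    Why implicit treatment removes the fast-mode time-step limit can be seen on a single Fourier mode du/dt = iωu; the two generic schemes below are illustrations of the principle, not the paper's semi-implicit MHD operator:

    ```python
    def gain_explicit(omega_dt):
        """Forward-Euler amplification factor for du/dt = i*omega*u."""
        return abs(1 + 1j * omega_dt)

    def gain_implicit(omega_dt):
        """Backward-Euler amplification factor: bounded for any time step."""
        return abs(1 / (1 - 1j * omega_dt))

    fast = 50.0                    # omega*dt far beyond the explicit limit
    g_exp = gain_explicit(fast)    # > 1: the explicit scheme blows up
    g_imp = gain_implicit(fast)    # < 1: the implicit scheme damps the fast mode
    ```

    Treating only the fast compressional wave this way leaves the time step limited by the slower shear Alfven motion, as the abstract states.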

  5. Image Signal Transfer Method in Artificial Retina using Laser

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, I.Y.; Lee, B.H.; Kim, S.J. [Seoul National University, Seoul (Korea)

    2002-05-01

    Recently, research on artificial retinas for the blind has been active. In this paper a new optical link method for retinal prostheses is proposed. A laser diode system was chosen to transfer the image into the eye, and a new optical system was designed and evaluated. The use of a laser diode array in an artificial retina system keeps the system simple, since no signal-processing part is needed inside the eyeball. The designed optical system is sufficient to focus the laser diode array onto the photodiode array in a 20×20 application. (author). 11 refs., 7 figs., 2 tabs.

  6. Combustion engine variable compression ratio apparatus and method

    Science.gov (United States)

    Lawrence,; Keith, E [Peoria, IL; Strawbridge, Bryan E [Dunlap, IL; Dutart, Charles H [Washington, IL

    2006-06-06

    An apparatus and method for varying a compression ratio of an engine having a block and a head mounted thereto. The apparatus and method includes a cylinder having a block portion and a head portion, a piston linearly movable in the block portion of the cylinder, a cylinder plug linearly movable in the head portion of the cylinder, and a valve located in the cylinder plug and operable to provide controlled fluid communication with the block portion of the cylinder.

  7. [Preparation of nano-nacre artificial bone].

    Science.gov (United States)

    Chen, Jian-ting; Tang, Yong-zhi; Zhang, Jian-gang; Wang, Jian-jun; Xiao, Ying

    2008-12-01

    To assess the improvements in the properties of nano-nacre artificial bone prepared on the basis of nacre/polylactide composite artificial bone and its potential for clinical use. The compound of nano-scale nacre powder and poly-D,L-lactide (PDLLA) was used to prepare the cylindrical hollow artificial bone, whose properties including raw material powder scale, pore size, porosity and biomechanical characteristics were compared with another artificial bone made of micron-scale nacre powder and PDLLA. Scanning electron microscopy showed that the average particle size of the nano-nacre powder was 50.4±12.4 nm, and the average pore size of the artificial bone prepared using nano-nacre powder was 215.7±77.5 μm, as compared with the particle size of the micron-scale nacre powder of 5.0±3.0 μm and the pore size of the resultant artificial bone of 205.1±72.0 μm. The porosities of the nano-nacre artificial bone and the micron-nacre artificial bone were (65.4±2.9)% and (53.4±2.2)%, respectively, and the two artificial bones had comparable compressive strength and Young's modulus, but the flexural strength of the nano-nacre artificial bone was lower than that of the micron-nacre artificial bone. The nano-nacre artificial bone allows better biodegradability and possesses appropriate pore size, porosity and biomechanical properties for use as a promising material in bone tissue engineering.

  8. The impact of mineral composition on compressibility of saturated soils

    OpenAIRE

    Dolinar, Bojana

    2012-01-01

    This article analyses the impact of soils' mineral composition on their compressibility. Physical and chemical properties of minerals which influence the quantity of intergrain water in soils and, consequently, the compressibility of soils are established by considering the previous theoretical findings. Test results obtained on artificially prepared samples are used to determine the analytical relationship between the water content and stress state, depending on the mineralogical properties ...

  9. About a method for compressing x-ray computed microtomography data

    Science.gov (United States)

    Mancini, Lucia; Kourousias, George; Billè, Fulvio; De Carlo, Francesco; Fidler, Aleš

    2018-04-01

    The management of scientific data is of high importance, especially for experimental techniques that produce big data volumes. Such a technique is x-ray computed tomography (CT), and its community has introduced advanced data formats which allow for better management of experimental data. Rather than the organization of the data and the associated meta-data, the main topic of this work is data compression and its applicability to experimental data collected from a synchrotron-based CT beamline at the Elettra-Sincrotrone Trieste facility (Italy); the study considers images acquired from various types of samples. This study covers parallel beam geometry, but it could easily be extended to a cone-beam one. The reconstruction workflow used is the one currently in operation at the beamline. Contrary to standard image compression studies, this manuscript proposes a systematic framework and workflow for the critical examination of different compression techniques and does so by applying it to experimental data. Beyond the methodology framework, this study presents and examines the use of JPEG-XR in combination with HDF5 and TIFF formats, providing insights and strategies on data compression and image quality issues that can be used and implemented at other synchrotron facilities and laboratory systems. In conclusion, projection data compression using JPEG-XR appears as a promising, efficient method to reduce data file size and thus to facilitate data handling and image reconstruction.
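
    The evaluation workflow (compress, measure the ratio, verify the round trip) can be sketched with stdlib `zlib` as a generic lossless stand-in, since JPEG-XR itself is not in the standard library; the synthetic projection data below are purely illustrative:

    ```python
    import zlib

    # Synthetic 16-bit "projection" with piecewise-constant structure,
    # stored as raw little-endian bytes (real data would sit in HDF5/TIFF).
    raw = bytearray()
    for i in range(4096):
        v = (i // 16) % 256          # slow ramp, hence highly compressible
        raw += bytes((v, 0))
    raw = bytes(raw)

    packed = zlib.compress(raw, level=9)
    ratio = len(raw) / len(packed)              # file-size reduction factor
    lossless = zlib.decompress(packed) == raw   # verify the round trip
    ```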

  10. Efficient Lossy Compression for Compressive Sensing Acquisition of Images in Compressive Sensing Imaging Systems

    Directory of Open Access Journals (Sweden)

    Xiangwei Li

    2014-12-01

    Full Text Available Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, general image compression solutions may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which achieves better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the CS measurements obtained from CS acquisition without any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4–2 dB compared with the current state of the art, while maintaining a low computational complexity.

  11. A study on measurement on artificial radiation dose rate using the response matrix method

    International Nuclear Information System (INIS)

    Kidachi, Hiroshi; Ishikawa, Yoichi; Konno, Tatsuya

    2004-01-01

    We examined the accuracy and stability of the estimated artificial dose contribution, which is distinguished from the natural background gamma-ray dose rate using the Response Matrix method. Irradiation experiments using artificial gamma-ray sources indicated that there was a linear relationship between the observed dose rate and the estimated artificial dose contribution when the irradiated artificial gamma-ray dose rate was higher than about 2 nGy/h. Statistical and time-series analyses of long-term data made it clear that the estimated artificial contribution showed almost constant values under no artificial influence from the nuclear power plants. However, variations of the estimated artificial dose contribution were infrequently observed due to rainfall, detector maintenance operations and occurrences of calibration error. Some considerations on the factors behind these variations are given. (author)
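
    The response-matrix idea can be caricatured as solving a small linear system that separates the natural and artificial contributions from window counts; the 2×2 matrix and intensities below are invented for illustration, not the station's actual response:

    ```python
    def solve2(R, y):
        """Solve the 2x2 linear system R @ s = y by Cramer's rule."""
        (a, b), (c, d) = R
        det = a * d - b * c
        return ((d * y[0] - b * y[1]) / det, (a * y[1] - c * y[0]) / det)

    # Rows: counts in two energy windows per unit intensity of the natural
    # and artificial components (numbers are purely illustrative).
    R = [[0.9, 0.2],
         [0.1, 0.8]]
    true_nat, true_art = 40.0, 3.0
    observed = [R[0][0] * true_nat + R[0][1] * true_art,
                R[1][0] * true_nat + R[1][1] * true_art]
    nat, art = solve2(R, observed)    # recover the two contributions
    ```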

  12. A method of vehicle license plate recognition based on PCANet and compressive sensing

    Science.gov (United States)

    Ye, Xianyi; Min, Feng

    2018-03-01

    The manually extracted features used in traditional vehicle license plate recognition are not robust to diverse conditions, and the high feature dimension extracted with a Principal Component Analysis Network (PCANet) leads to low classification efficiency. To solve these problems, a method of vehicle license plate recognition based on PCANet and compressive sensing is proposed. First, PCANet is used to extract features from the images of characters. Then, a sparse measurement matrix satisfying the Restricted Isometry Property (RIP) condition of compressed sensing is used to reduce the dimension of the extracted features. Finally, a Support Vector Machine (SVM) is used to train on and recognize the dimension-reduced features. Experimental results demonstrate that the proposed method outperforms a Convolutional Neural Network (CNN) in recognition accuracy and runtime. Compared with omitting compressive sensing, the proposed method has a lower feature dimension and is correspondingly more efficient.
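
    A sparse random measurement matrix of the kind alluded to above can be sketched as follows; the Achlioptas-style ±√3 construction is a common RIP-friendly choice assumed here for illustration, and the dimensions are invented:

    ```python
    import random

    def sparse_projection(m, n, seed=0):
        """Very sparse random matrix: entries +sqrt(3), 0, -sqrt(3) with
        probabilities 1/6, 2/3, 1/6 (Achlioptas-style construction)."""
        rng = random.Random(seed)
        s = 3.0 ** 0.5
        return [[rng.choice((s, -s, 0.0, 0.0, 0.0, 0.0)) for _ in range(n)]
                for _ in range(m)]

    def project(phi, x):
        """y = (1/sqrt(m)) * phi @ x, so norms are preserved in expectation."""
        scale = 1.0 / len(phi) ** 0.5
        return [scale * sum(row[j] * x[j] for j in range(len(x))) for row in phi]

    n, m = 512, 64                        # e.g. 512-dim features -> 64 dims
    phi = sparse_projection(m, n)
    x = [1.0 if i % 7 == 0 else 0.0 for i in range(n)]   # sparse feature vector
    y = project(phi, x)                   # reduced vector handed to the SVM
    ```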

  13. A nonlinear relaxation/quasi-Newton algorithm for the compressible Navier-Stokes equations

    Science.gov (United States)

    Edwards, Jack R.; Mcrae, D. S.

    1992-01-01

    A highly efficient implicit method for the computation of steady, two-dimensional compressible Navier-Stokes flowfields is presented. The discretization of the governing equations is hybrid in nature, with flux-vector splitting utilized in the streamwise direction and central differences with flux-limited artificial dissipation used for the transverse fluxes. Line Jacobi relaxation is used to provide a suitable initial guess for a new nonlinear iteration strategy based on line Gauss-Seidel sweeps. The applicability of quasi-Newton methods as convergence accelerators for this and other line relaxation algorithms is discussed, and efficient implementations of such techniques are presented. Convergence histories and comparisons with experimental data are presented for supersonic flow over a flat plate and for several high-speed compression corner interactions. Results indicate a marked improvement in computational efficiency over more conventional upwind relaxation strategies, particularly for flowfields containing large pockets of streamwise subsonic flow.
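
    The quasi-Newton idea can be illustrated in one dimension by the secant iteration, which replaces the exact derivative with an update built from previous residuals; this is a toy stand-in for intuition, not the paper's relaxation/quasi-Newton algorithm:

    ```python
    def secant(f, x0, x1, tol=1e-12, itmax=60):
        """Quasi-Newton iteration in 1-D: the derivative is approximated
        from the last two residual evaluations instead of being recomputed."""
        f0, f1 = f(x0), f(x1)
        for _ in range(itmax):
            if abs(f1) < tol:
                break
            x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # secant update
            x0, f0 = x1, f1
            x1, f1 = x2, f(x2)
        return x1

    # A scalar steady "residual" R(u) = 0 stands in for the discretized
    # flow equations; quasi-Newton drives it to zero superlinearly.
    root = secant(lambda u: u * u - 2.0, 1.0, 2.0)
    ```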

  14. A REVIEW OF VIBRATION MACHINE DIAGNOSTICS BY USING ARTIFICIAL INTELLIGENCE METHODS

    Directory of Open Access Journals (Sweden)

    Grover Zurita

    2016-09-01

    Full Text Available In the industry, gears and rolling bearings failures are one of the foremost causes of breakdown in rotating machines, reducing availability time of the production and resulting in costly systems downtime. Therefore, there are growing demands for vibration condition based monitoring of gears and bearings, and any method in order to improve the effectiveness, reliability, and accuracy of the bearing faults diagnosis ought to be evaluated. In order to perform machine diagnosis efficiently, researchers have extensively investigated different advanced digital signal processing techniques and artificial intelligence methods to accurately extract fault characteristics from vibration signals. The main goal of this article is to present the state-of-the-art development in vibration analysis for machine diagnosis based on artificial intelligence methods.

  15. Novel approach to the fabrication of an artificial small bone using a combination of sponge replica and electrospinning methods

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yang-Hee; Lee, Byong-Taek, E-mail: lbt@sch.ac.kr [Department of Biomedical Engineering and Materials, School of Medicine, Soonchunhyang University 366-1, Ssangyong-dong, Cheonan, Chungnam 330-090 (Korea, Republic of)

    2011-06-15

    In this study, a novel artificial small bone consisting of ZrO{sub 2}-biphasic calcium phosphate/polymethylmethacrylate-polycaprolactone-hydroxyapatite (ZrO{sub 2}-BCP/PMMA-PCL-HAp) was fabricated using a combination of sponge replica and electrospinning methods. To mimic the cancellous bone, the ZrO{sub 2}/BCP scaffold was composed of three layers, ZrO{sub 2}, ZrO{sub 2}/BCP and BCP, fabricated by the sponge replica method. The PMMA-PCL fibers loaded with HAp powder were wrapped around the ZrO{sub 2}/BCP scaffold using the electrospinning process. To imitate the Haversian canal region of the bone, HAp-loaded PMMA-PCL fibers were wrapped around a steel wire of 0.3 mm diameter. As a result, the bundles of fiber wrapped around the wires imitated the osteon structure of the cortical bone. Finally, the ZrO{sub 2}/BCP scaffold was surrounded by HAp-loaded PMMA-PCL composite bundles. After removal of the steel wires, the ZrO{sub 2}/BCP scaffold and bundles of HAp-loaded PMMA-PCL formed an interconnected structure resembling the human bone. Its diameter, compressive strength and porosity were approximately 12 mm, 5 MPa and 70%, respectively, and the viability of MG-63 osteoblast-like cells was determined to be over 90% by the MTT (3-(4, 5-dimethylthiazol-2-yl)-2, 5-diphenyltetrazolium bromide) assay. This artificial bone shows excellent cytocompatibility and is a promising bone regeneration material.

  16. An Improved Ghost-cell Immersed Boundary Method for Compressible Inviscid Flow Simulations

    KAUST Repository

    Chi, Cheng

    2015-05-01

    This study presents an improved ghost-cell immersed boundary approach to represent a solid body in compressible flow simulations. In contrast to the commonly used approaches, in the present work ghost cells are mirrored through the boundary described using a level-set method to farther image points, incorporating a higher-order extra/interpolation scheme for the ghost cell values. In addition, a shock sensor is introduced to deal with image points near the discontinuities in the flow field. Adaptive mesh refinement (AMR) is used to improve the representation of the geometry efficiently. The improved ghost-cell method is validated against five test cases: (a) double Mach reflections on a ramp, (b) supersonic flows in a wind tunnel with a forward-facing step, (c) supersonic flows over a circular cylinder, (d) smooth Prandtl-Meyer expansion flows, and (e) steady shock-induced combustion over a wedge. It is demonstrated that the improved ghost-cell method can reach second-order accuracy in the L1 norm and higher than first order in the L∞ norm. Direct comparisons against the cut-cell method demonstrate that the improved ghost-cell method is almost equally accurate, with better efficiency for boundary representation in high-fidelity compressible flow simulations. Implementation of the improved ghost-cell method in reacting Euler flows further validates its general applicability for compressible flow simulations.
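
    The core ghost-cell construction can be sketched in one dimension with linear interpolation to a single image point; the paper's scheme is more elaborate (level-set boundary description, farther image points, higher-order extra/interpolation), so this is only a minimal illustration:

    ```python
    def fill_ghost(u, dx, x_wall, u_wall):
        """Set the first ghost cell behind a wall at x_wall (1-D, Dirichlet BC).

        The ghost node is mirrored through the boundary to an image point in
        the fluid, the fluid value there is linearly interpolated, and the
        ghost value is chosen so the wall value is the average of the two.
        """
        centers = [(i + 0.5) * dx for i in range(len(u))]
        i_g = next(i for i, x in enumerate(centers) if x >= x_wall)  # ghost index
        x_img = 2.0 * x_wall - centers[i_g]      # image point inside the fluid
        j = int(x_img / dx - 0.5)                # left neighbour of image point
        w = (x_img - centers[j]) / dx            # linear-interpolation weight
        u_img = (1.0 - w) * u[j] + w * u[j + 1]
        u[i_g] = 2.0 * u_wall - u_img            # mirror through the wall value
        return u[i_g]

    dx = 0.1
    u = [3.0 * (i + 0.5) * dx + 1.0 for i in range(10)]  # linear field u(x)=3x+1
    g = fill_ghost(u, dx, x_wall=0.68, u_wall=3.0 * 0.68 + 1.0)
    ```

    For a linear field the mirrored ghost value is exact, which is the sanity check used here.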

  17. Uniaxial Compressive Strength and Fracture Mode of Lake Ice at Moderate Strain Rates Based on a Digital Speckle Correlation Method for Deformation Measurement

    Directory of Open Access Journals (Sweden)

    Jijian Lian

    2017-05-01

    Full Text Available A better understanding of the complex mechanical properties of ice is the foundation for predicting the ice failure process and avoiding potential ice threats. In the present study, the uniaxial compressive strength and fracture mode of natural lake ice are investigated over the moderate strain-rate range of 0.4–10 s−1 at −5 °C and −10 °C. The digital speckle correlation method (DSCM) is used for deformation measurement, an artificial speckle pattern being constructed on the ice sample surface in advance, and two dynamic load cells are employed to measure the dynamic load for monitoring the equilibrium of the forces at the two ends under high-speed loading. The relationships between uniaxial compressive strength and strain-rate, temperature, loading direction, and air porosity are investigated, and the fracture mode of ice at moderate rates is also discussed. The experimental results show that there exists a significant difference between the true strain-rate and the nominal strain-rate derived from actuator displacement under dynamic loading conditions. Over the employed strain-rate range, the dynamic uniaxial compressive strength of lake ice shows positive strain-rate sensitivity and decreases with increasing temperature. Ice shows greater strength when it has lower air porosity and is loaded vertically. The fracture mode of ice at these rates appears to be a combination of splitting failure and crushing failure.
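
    The subset matching at the heart of DSCM can be caricatured in one dimension as maximizing cross-correlation over candidate integer displacements; the speckle pattern and shift below are synthetic, and real DSCM works on 2-D subsets with sub-pixel refinement:

    ```python
    import random

    def match_offset(ref_img, cur_img, x0, n, max_d):
        """Integer-pixel displacement of an n-pixel subset starting at x0,
        found by maximizing cross-correlation over candidate offsets."""
        subset = ref_img[x0:x0 + n]
        def score(d):
            window = cur_img[x0 + d:x0 + d + n]
            return sum(a * b for a, b in zip(subset, window))
        return max(range(-max_d, max_d + 1), key=score)

    random.seed(3)
    pattern = [random.random() - 0.5 for _ in range(80)]  # zero-mean speckle
    pattern[35] += 5.0                      # one distinctive speckle feature
    true_d = 4
    reference = pattern + [0.0] * true_d
    current = [0.0] * true_d + pattern      # whole pattern shifted by true_d
    d = match_offset(reference, current, x0=20, n=30, max_d=8)
    ```

    Strains then follow from the spatial gradient of the recovered displacement field.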

  18. Comparison between Two Methods for Diagnosis of Trichinellosis: Trichinoscopy and Artificial Digestion

    Directory of Open Access Journals (Sweden)

    María Laura Vignau

    1997-09-01

    Full Text Available Two direct methods for the diagnosis of trichinellosis were compared: trichinoscopy and artificial digestion. Muscles from 17 Wistar rats, orally infected with 500 Trichinella spiralis encysted larvae, were examined. From 1 g samples of each of the following muscles: diaphragm, tongue, masseters, intercostals, triceps brachialis and quadriceps femoralis, a total of 648,440 larvae were recovered. The linear correlation between trichinoscopy and artificial digestion was very high and significant (r=0.94, p<0.0001), showing that the two methods for the detection of muscular larvae did not differ significantly. In both methods, significant differences were found in the distribution of larvae per gramme of muscle.

  19. A GPU-accelerated implicit meshless method for compressible flows

    Science.gov (United States)

    Zhang, Jia-Le; Ma, Zhi-Hua; Chen, Hong-Quan; Cao, Cheng

    2018-05-01

    This paper develops a recently proposed GPU based two-dimensional explicit meshless method (Ma et al., 2014) by devising and implementing an efficient parallel LU-SGS implicit algorithm to further improve the computational efficiency. The capability of the original 2D meshless code is extended to deal with 3D complex compressible flow problems. To resolve the inherent data dependency of the standard LU-SGS method, which causes thread-racing conditions destabilizing numerical computation, a generic rainbow coloring method is presented and applied to organize the computational points into different groups by painting neighboring points with different colors. The original LU-SGS method is modified and parallelized accordingly to perform calculations in a color-by-color manner. The CUDA Fortran programming model is employed to develop the key kernel functions to apply boundary conditions, calculate time steps, evaluate residuals as well as advance and update the solution in the temporal space. A series of two- and three-dimensional test cases including compressible flows over single- and multi-element airfoils and a M6 wing are carried out to verify the developed code. The obtained solutions agree well with experimental data and other computational results reported in the literature. Detailed analysis on the performance of the developed code reveals that the developed CPU based implicit meshless method is at least four to eight times faster than its explicit counterpart. The computational efficiency of the implicit method could be further improved by ten to fifteen times on the GPU.
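
    The rainbow (greedy) coloring that removes the LU-SGS data dependency can be sketched as follows; the toy connectivity graph is illustrative, standing in for the meshless point cloud's neighbour lists:

    ```python
    def rainbow_coloring(adjacency):
        """Greedy point coloring: each point receives the smallest color not
        used by any already-colored neighbour, so points of one color never
        neighbour each other and each color class can be swept in parallel."""
        colors = {}
        for p in sorted(adjacency):
            used = {colors[q] for q in adjacency[p] if q in colors}
            c = 0
            while c in used:
                c += 1
            colors[p] = c
        return colors

    # toy 2x3 structured point cloud with nearest-neighbour connectivity
    adj = {0: [1, 3], 1: [0, 2, 4], 2: [1, 5],
           3: [0, 4], 4: [1, 3, 5], 5: [2, 4]}
    colors = rainbow_coloring(adj)
    conflict_free = all(colors[p] != colors[q] for p in adj for q in adj[p])
    ```

    The LU-SGS sweep then proceeds color by color: points within one color carry no mutual data dependency, so each color class maps onto GPU threads without races.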

  20. Numerical simulation of compressible two-phase flow using a diffuse interface method

    International Nuclear Information System (INIS)

    Ansari, M.R.; Daramizadeh, A.

    2013-01-01

    Highlights: ► Simulations of compressible two-phase gas–gas and gas–liquid flows are conducted. ► Interface conditions contain shock waves and cavitation. ► A high-resolution diffuse interface method is investigated. ► The numerical results exhibit very good agreement with experimental results. -- Abstract: In this article, a high-resolution diffuse interface method is investigated for the simulation of compressible two-phase gas–gas and gas–liquid flows, both in the presence of shock waves and in flows with strong rarefaction waves similar to cavitation. A Godunov method and HLLC Riemann solver are used for discretization of the Kapila five-equation model, and a modified Schmidt equation of state (EOS) is used to simulate the cavitation regions. This method is applied successfully to several one- and two-dimensional compressible two-phase flows with interface conditions that contain shock waves and cavitation. The numerical results obtained in this attempt exhibit very good agreement with experimental results, as well as with previous numerical results presented by other researchers based on other numerical methods. In particular, the algorithm can capture the complex flow features of transient shocks, such as material discontinuities and interfacial instabilities, without any oscillation or additional diffusion. Numerical examples show that the results of the method presented here compare well with those of other sophisticated modeling methods like adaptive mesh refinement (AMR) and local mesh refinement (LMR) for one- and two-dimensional problems

  1. METHOD AND APPARATUS FOR INSPECTION OF COMPRESSED DATA PACKAGES

    DEFF Research Database (Denmark)

    2008-01-01

    to be transferred over the data network. The method comprises the steps of: a) extracting payload data from the payload part of the package, b) appending the extracted payload data to a stream of data, c) probing the data package header so as to determine the compression scheme that is applied to the payload data...

  2. Methods for determining the carrying capacity of eccentrically compressed concrete elements

    Directory of Open Access Journals (Sweden)

    Starishko Ivan Nikolaevich

    2014-04-01

    Full Text Available The author presents the results of calculations of eccentrically compressed elements in the ultimate limit state of bearing capacity, taking into account all possible stresses in the longitudinal reinforcement, from the R to the R, caused by different values of the eccentricity of the longitudinal force. The method of calculation is based on the simultaneous solution of the equilibrium equations of the longitudinal and internal forces together with the equilibrium equations of bending moments in the ultimate limit state of the normal sections. Simultaneous solution of these equations, along with additional equations reflecting the stress-strain limit state of the elements, leads to a cubic equation with respect to the height of the uncracked concrete zone, or with respect to the carrying capacity. According to the author this is a significant advantage over the existing methods, in which the equilibrium equations of longitudinal forces yield one value of this height and the equilibrium equations of bending moments another. The author's theoretical studies and worked examples showed that as the eccentricity of the longitudinal force decreases in the limiting state of eccentrically compressed concrete elements, the height of the uncracked concrete zone increases, the stress in the longitudinal reinforcement of the tension area gradually (not abruptly) passes from tension to compression, and the load-bearing capacity of the elements increases, which is also confirmed by the experimental results. The calculations developed by the author cover 4 cases of eccentric compression, instead of the 2 set out in the regulations, and thus fully cover the entire spectrum of possible stress-strain limit states of elements complying with the European standards for reinforced concrete, in particular Eurocode 2 (2003).

  3. Hybrid digital signal processing and neural networks for automated diagnostics using NDE methods

    International Nuclear Information System (INIS)

    Upadhyaya, B.R.; Yan, W.

    1993-11-01

    The primary purpose of the current research was to develop an integrated approach by combining information compression methods and artificial neural networks for the monitoring of plant components using nondestructive examination data. Specifically, data from eddy current inspection of heat exchanger tubing were utilized to evaluate this technology. The focus of the research was to develop and test various data compression methods (for eddy current data) and the performance of different neural network paradigms for defect classification and defect parameter estimation. Feedforward, fully-connected neural networks, that use the back-propagation algorithm for network training, were implemented for defect classification and defect parameter estimation using a modular network architecture. A large eddy current tube inspection database was acquired from the Metals and Ceramics Division of ORNL. These data were used to study the performance of artificial neural networks for defect type classification and for estimating defect parameters. A PC-based data preprocessing and display program was also developed as part of an expert system for data management and decision making. The results of the analysis showed that for effective (low-error) defect classification and estimation of parameters, it is necessary to identify proper feature vectors using different data representation methods. The integration of data compression and artificial neural networks for information processing was established as an effective technique for automation of diagnostics using nondestructive examination methods

  4. Biometric and Emotion Identification: An ECG Compression Based Method.

    Science.gov (United States)

    Brás, Susana; Ferreira, Jacqueline H T; Soares, Sandra C; Pinho, Armando J

    2018-01-01

    We present an innovative and robust solution to both biometric and emotion identification using the electrocardiogram (ECG). The ECG represents the electrical signal that comes from the contraction of the heart muscles and indirectly represents the flow of blood inside the heart; it is known to convey a key that allows biometric identification. Moreover, due to its relationship with the nervous system, it also varies as a function of the emotional state. The use of information-theoretic data models, associated with data compression algorithms, allowed us to effectively compare ECG records and infer the person's identity, as well as the emotional state at the time of data collection. The proposed method does not require ECG wave delineation or alignment, which reduces preprocessing error. The method is divided into three steps: (1) conversion of the real-valued ECG record into a symbolic time-series, using a quantization process; (2) conditional compression of the symbolic representation of the ECG, using the symbolic ECG records stored in the database as reference; (3) identification of the ECG record class, using a 1-NN (nearest neighbor) classifier. We obtained over 98% accuracy in biometric identification, whereas in emotion recognition we attained over 90%. Therefore, the method adequately identifies the person and his/her emotion. Also, the proposed method is flexible and may be adapted to different problems by the alteration of the templates used for training the model.
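
    The three-step pipeline described above (quantization, conditional compression against stored templates, 1-NN classification) can be sketched generically. A minimal illustration in Python, assuming zlib as the compressor, synthetic periodic signals in place of real ECG records, and C(reference + target) - C(reference) as a surrogate for conditional compression; none of these choices are the authors' actual implementation:

```python
import zlib
import numpy as np

def quantize(ecg, levels=8):
    # Step 1: map the real-valued record onto a small symbolic alphabet
    lo, hi = ecg.min(), ecg.max()
    idx = np.minimum(((ecg - lo) / (hi - lo + 1e-12) * levels).astype(int),
                     levels - 1)
    return bytes(idx.tolist())

def conditional_size(target, reference):
    # Step 2: approximate the conditional compression C(target | reference)
    # by C(reference + target) - C(reference), a standard surrogate
    return (len(zlib.compress(reference + target, 9))
            - len(zlib.compress(reference, 9)))

def classify(target, templates):
    # Step 3: 1-NN, i.e. the class whose template "explains" the target best
    return min(templates, key=lambda k: conditional_size(target, templates[k]))

# toy stand-ins for two subjects' ECG records (periodic, distinct shapes)
t = np.linspace(0.0, 20.0, 2000)
subject_a = np.sin(2 * np.pi * 1.2 * t)
subject_b = np.sign(np.sin(2 * np.pi * 0.7 * t))

templates = {"A": quantize(subject_a[:1000]), "B": quantize(subject_b[:1000])}
probe = quantize(subject_a[1000:])   # an unseen segment from subject A
predicted = classify(probe, templates)
```

    Because subject A's template shares structure with the probe, the conditional compressed size is smaller for class A, so the 1-NN rule picks it without any waveform delineation or alignment.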

  5. A Finite Element Method for Simulation of Compressible Cavitating Flows

    Science.gov (United States)

    Shams, Ehsan; Yang, Fan; Zhang, Yu; Sahni, Onkar; Shephard, Mark; Oberai, Assad

    2016-11-01

    This work focuses on a novel approach for finite element simulations of multi-phase flows that involve an evolving interface with phase change. Modeling problems such as cavitation requires addressing multiple challenges, including the compressibility of the vapor phase and the interface physics driven by mass, momentum, and energy fluxes. We have developed a mathematically consistent and robust computational approach to address these problems. We use stabilized finite element methods on unstructured meshes to solve the compressible Navier-Stokes equations. An arbitrary Lagrangian-Eulerian formulation is used to handle the interface motion. Our method uses a mesh adaptation strategy to preserve the quality of the volumetric mesh while the interface mesh moves along with the interface. The interface jump conditions are accurately represented using a discontinuous Galerkin method on the conservation laws. Condensation and evaporation rates at the interface are modeled thermodynamically to determine the interface velocity. We will present initial results on bubble cavitation and the behavior of an attached cavitation zone in a separated boundary layer. We acknowledge the support from the Army Research Office (ARO) under ARO Grant W911NF-14-1-0301.

  6. Comparative data compression techniques and multi-compression results

    International Nuclear Information System (INIS)

    Hasan, M R; Ibrahimy, M I; Motakabber, S M A; Ferdaus, M M; Khan, M N H

    2013-01-01

    Data compression is very necessary in business data processing because of the cost savings it offers and the large volume of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the data, the better the transmission speed and the greater the time savings. In communication, we always want to transmit data efficiently and free of noise. This paper presents several techniques for lossless compression of text-type data, together with comparative results for multiple versus single compression, which will help to identify better compression outputs and to develop compression algorithms

  7. AN ENCODING METHOD FOR COMPRESSING GEOGRAPHICAL COORDINATES IN 3D SPACE

    Directory of Open Access Journals (Sweden)

    C. Qian

    2017-09-01

    Full Text Available This paper proposes an encoding method for compressing geographical coordinates in 3D space. By reducing the length of geographical coordinates, it helps to lessen the storage size of geometry information. In addition, the encoding algorithm subdivides the whole space according to octree rules, which enables progressive transmission and loading. Three main steps are included in this method: (1) subdividing the whole 3D geographic space based on an octree structure, (2) resampling all the vertices in the 3D models, and (3) encoding the coordinates of the vertices with a combination of Cube Index Code (CIC) and Geometry Code. A series of geographical 3D models were used to evaluate the encoding method. The results showed that this method reduced the storage size of most test data by 90% or more at acceptable encoding and decoding speeds. In conclusion, this method achieves a remarkable compression rate in vertex bit size with a steerable precision loss. It should be of positive value for web 3D map storage and transmission.
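
    The octree subdivision behind this kind of coordinate code can be illustrated with a toy encoder: each level records which of the eight child cells contains the vertex, so a depth-d code costs only 3d bits, and decoding recovers the coordinate to within half a leaf cell. The function names and exact code layout below are hypothetical, not the paper's CIC format:

```python
def octree_code(point, lower, upper, depth):
    # Encode a 3D point as one octant index (0-7) per subdivision level.
    lo, hi = list(lower), list(upper)
    code = []
    for _ in range(depth):
        octant = 0
        for axis in range(3):
            mid = 0.5 * (lo[axis] + hi[axis])
            if point[axis] >= mid:
                octant |= 1 << axis   # upper half along this axis
                lo[axis] = mid
            else:
                hi[axis] = mid
        code.append(octant)
    return code

def decode_center(code, lower, upper):
    # Decode back to the center of the leaf cell (quantized coordinate).
    lo, hi = list(lower), list(upper)
    for octant in code:
        for axis in range(3):
            mid = 0.5 * (lo[axis] + hi[axis])
            if octant >> axis & 1:
                lo[axis] = mid
            else:
                hi[axis] = mid
    return [0.5 * (l + h) for l, h in zip(lo, hi)]

code = octree_code((3.2, 7.9, 1.1), (0, 0, 0), (10, 10, 10), depth=10)
approx = decode_center(code, (0, 0, 0), (10, 10, 10))
```

    At depth 10 the leaf cells of a 10-unit cube are about 0.01 units wide, so 30 bits per vertex already give centimeter-scale precision, which is the "steerable precision loss" trade-off.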

  8. Lagrangian particle method for compressible fluid dynamics

    Science.gov (United States)

    Samulyak, Roman; Wang, Xingyu; Chen, Hsin-Chiang

    2018-06-01

    A new Lagrangian particle method for solving Euler equations for compressible inviscid fluid or gas flows is proposed. Similar to smoothed particle hydrodynamics (SPH), the method represents fluid cells with Lagrangian particles and is suitable for the simulation of complex free surface/multiphase flows. The main contributions of our method, which is different from SPH in all other aspects, are (a) significant improvement of approximation of differential operators based on a polynomial fit via weighted least squares approximation and the convergence of prescribed order, (b) a second-order particle-based algorithm that reduces to the first-order upwind method at local extremal points, providing accuracy and long term stability, and (c) more accurate resolution of entropy discontinuities and states at free interfaces. While the method is consistent and convergent to a prescribed order, the conservation of momentum and energy is not exact and depends on the convergence order. The method is generalizable to coupled hyperbolic-elliptic systems. Numerical verification tests demonstrating the convergence order are presented as well as examples of complex multiphase flows.
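
    The weighted least-squares derivative approximation that distinguishes this method from SPH can be sketched in one dimension: fit a local polynomial to neighboring particle values and read off the slope. A minimal sketch with Gaussian weights and a linear fit (both are assumptions for illustration; the method described above uses higher-order fits in more dimensions):

```python
import numpy as np

def wls_derivative(x_p, neighbors, values, value_p):
    # Estimate du/dx at particle x_p from scattered neighbor values via a
    # weighted least-squares linear fit constrained through (x_p, value_p).
    dx = neighbors - x_p
    w = np.exp(-(dx / dx.std()) ** 2)      # Gaussian weights (an assumption)
    # minimize sum w * (value_p + a*dx - values)^2 over the slope a
    return np.sum(w * dx * (values - value_p)) / np.sum(w * dx * dx)

rng = np.random.default_rng(1)
xp = 0.3
nbrs = xp + rng.uniform(-0.05, 0.05, 20)   # irregular particle neighborhood
u = np.sin(nbrs)                           # smooth field sampled at particles
slope = wls_derivative(xp, nbrs, u, np.sin(xp))
```

    Unlike SPH kernel sums, the fitted slope converges to the true derivative (here cos 0.3) at the order of the polynomial as the neighborhood shrinks, even for irregular particle distributions.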

  9. An evaluation of an organically bound tritium measurement method in artificial and natural urine

    International Nuclear Information System (INIS)

    Trivedi, A.; Duong, T.

    1993-03-01

    The accurate measurement of tritium in urine in the form of tritiated water (HTO) as well as in organic forms (organically bound tritium (OBT)) is an essential step in assessing tritium exposures correctly. Exchange between HTO and OBT, arising intrinsically in the separation of HTO from urine samples, is a source of error in determining the concentration of OBT using the low-temperature distillation (LTD) bioassay method. The accuracy and precision of OBT measurements using the LTD method were investigated using spiked natural and artificial urine samples. The relative bias for most of the measurements was less than 25%. The choice of testing matrix, artificial urine versus human urine, made little difference: the precisions for each urine type were similar. The appropriateness of the use of artificial urine for testing purposes was judged using a ratio of performance indices. Based on this evaluation, artificial urine is a suitable test matrix for intercomparisons of OBT-in-urine measurements. It is further concluded that the LTD method is reliable for measuring OBT in urine samples. (author). 7 refs., 6 tabs

  10. An evaluation of an organically bound tritium measurement method in artificial and natural urine

    Energy Technology Data Exchange (ETDEWEB)

    Trivedi, A; Duong, T

    1993-03-01

    The accurate measurement of tritium in urine in the form of tritiated water (HTO) as well as in organic forms (organically bound tritium (OBT)) is an essential step in assessing tritium exposures correctly. Exchange between HTO and OBT, arising intrinsically in the separation of HTO from urine samples, is a source of error in determining the concentration of OBT using the low-temperature distillation (LTD) bioassay method. The accuracy and precision of OBT measurements using the LTD method were investigated using spiked natural and artificial urine samples. The relative bias for most of the measurements was less than 25%. The choice of testing matrix, artificial urine versus human urine, made little difference: the precisions for each urine type were similar. The appropriateness of the use of artificial urine for testing purposes was judged using a ratio of performance indices. Based on this evaluation, artificial urine is a suitable test matrix for intercomparisons of OBT-in-urine measurements. It is further concluded that the LTD method is reliable for measuring OBT in urine samples. (author). 7 refs., 6 tabs.

  11. A blended pressure/density based method for the computation of incompressible and compressible flows

    International Nuclear Information System (INIS)

    Rossow, C.-C.

    2003-01-01

    An alternative method to low speed preconditioning for the computation of nearly incompressible flows with compressible methods is developed. For this approach the leading terms of the flux difference splitting (FDS) approximate Riemann solver are analyzed in the incompressible limit. In combination with the requirement of the velocity field to be divergence-free, an elliptic equation to solve for a pressure correction to enforce the divergence-free velocity field on the discrete level is derived. The pressure correction equation established is shown to be equivalent to classical methods for incompressible flows. In order to allow the computation of flows at all speeds, a blending technique for the transition from the incompressible, pressure based formulation to the compressible, density based formulation is established. It is found necessary to use preconditioning with this blending technique to account for a remaining 'compressible' contribution in the incompressible limit, and a suitable matrix directly applicable to conservative residuals is derived. Thus, a coherent framework is established to cover the discretization of both incompressible and compressible flows. Compared with standard preconditioning techniques, the blended pressure/density based approach showed improved robustness for high lift flows close to separation

  12. Technical Note: Validation of two methods to determine contact area between breast and compression paddle in mammography

    NARCIS (Netherlands)

    Branderhorst, Woutjan; de Groot, Jerry E.; van Lier, Monique G. J. T. B.; Highnam, Ralph P.; den Heeten, Gerard J.; Grimbergen, Cornelis A.

    2017-01-01

    Purpose: To assess the accuracy of two methods of determining the contact area between the compression paddle and the breast in mammography. An accurate method to determine the contact area is essential to accurately calculate the average compression pressure applied by the paddle. Methods: For a

  13. Applicability of finite element method to collapse analysis of steel connection under compression

    International Nuclear Information System (INIS)

    Zhou, Zhiguang; Nishida, Akemi; Kuwamura, Hitoshi

    2010-01-01

    It is often necessary to study the collapse behavior of steel connections. In this study, the limit load of a steel pyramid-to-tube socket connection subjected to uniform compression was investigated by means of FEM and experiment. The steel connection was modeled using 4-node shell elements. Three kinds of analysis were conducted: linear buckling, nonlinear buckling, and modified Riks method analysis. For the linear buckling analysis, a linear eigenvalue analysis was performed. For the nonlinear buckling analysis, an eigenvalue analysis was performed for the buckling load in a nonlinear manner based on incremental stiffness matrices, taking nonlinear material properties and large displacements into account. For the modified Riks method analysis, the compressive load was applied using the modified Riks method, again considering nonlinear material properties and large displacements. The results of the FEM analyses were compared with the experimental results. They show that the nonlinear buckling and modified Riks method analyses are more accurate than linear buckling analysis because they employ nonlinear, large-deflection analysis to estimate buckling loads. Moreover, the limit loads calculated by the nonlinear buckling and modified Riks method analyses are close. It can be concluded that the modified Riks method analysis is more effective for collapse analysis of steel connections under compression. Finally, the modified Riks method analysis was used for parametric studies of the thickness of the pyramid. (author)

  14. Comparative Study on Theoretical and Machine Learning Methods for Acquiring Compressed Liquid Densities of 1,1,1,2,3,3,3-Heptafluoropropane (R227ea) via Song and Mason Equation, Support Vector Machine, and Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Hao Li

    2016-01-01

    Full Text Available 1,1,1,2,3,3,3-Heptafluoropropane (R227ea) is a good refrigerant that reduces greenhouse effects and ozone depletion. In practical applications, we usually have to know the compressed liquid densities at different temperatures and pressures. However, the measurement requires a series of complex apparatus and operations, wasting too much manpower and resources. To solve these problems, the Song and Mason equation, support vector machine (SVM), and artificial neural networks (ANNs) were used here to develop theoretical and machine learning models, respectively, to predict the compressed liquid densities of R227ea with only temperatures and pressures as inputs. Results show that, compared with the Song and Mason equation, appropriate machine learning models trained with precise experimental samples give better predictions, with lower root mean square errors (RMSEs) (e.g., the RMSE of the SVM trained with data provided by Fedele et al. [1] is 0.11, while the RMSE of the Song and Mason equation is 196.26). Compared to advanced conventional measurements, knowledge-based machine learning models prove to be more time-saving and user-friendly.
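
    The evaluation criterion above, the RMSE of a data-driven model predicting density from temperature and pressure only, can be illustrated with a linear least-squares surrogate on synthetic data. The data and the linear model are stand-ins for illustration; the study itself uses SVM and ANN models fitted to measured R227ea densities:

```python
import numpy as np

rng = np.random.default_rng(3)
# synthetic stand-in data: density as a smooth function of T (K) and P (MPa)
T = rng.uniform(270, 360, 200)
P = rng.uniform(1, 30, 200)
rho = 1500.0 - 2.5 * (T - 270.0) + 4.0 * P + rng.normal(0.0, 1.0, 200)

# least-squares surrogate with only T and P as inputs, scored by RMSE
A = np.column_stack([np.ones_like(T), T, P])
coef, *_ = np.linalg.lstsq(A, rho, rcond=None)
pred = A @ coef
rmse = float(np.sqrt(np.mean((rho - pred) ** 2)))
```

    With enough precise samples, the fitted model's RMSE approaches the measurement noise level, which is why well-trained data-driven models can beat a mis-specified theoretical equation on this metric.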

  15. Intelligent condition monitoring method for bearing faults from highly compressed measurements using sparse over-complete features

    Science.gov (United States)

    Ahmed, H. O. A.; Wong, M. L. D.; Nandi, A. K.

    2018-01-01

    Condition classification of rolling element bearings in rotating machines is important to prevent the breakdown of industrial machinery. A considerable amount of literature has been published on bearing fault classification. These studies aim to determine automatically the current status of a roller element bearing. Of these studies, methods based on compressed sensing (CS) have received some attention recently due to their ability to allow sampling below the Nyquist rate. This technology has many possible uses in machine condition monitoring and has been investigated as a possible approach for fault detection and classification in the compressed domain, i.e., without reconstructing the original signal. However, previous CS-based methods have been found to be too weak for highly compressed data. The present paper explores computationally, for the first time, the effects of sparse autoencoder-based over-complete sparse representations on the classification performance of highly compressed measurements of bearing vibration signals. For this study, the CS method was used to produce highly compressed measurements of the original bearing dataset. Then, an effective deep neural network (DNN) with an unsupervised feature learning algorithm based on a sparse autoencoder is used for learning over-complete sparse representations of these compressed datasets. Finally, the fault classification is achieved in two stages: pre-training classification based on a stacked autoencoder and a softmax regression layer forms the deep net stage (the first stage), and re-training classification based on the backpropagation (BP) algorithm forms the fine-tuning stage (the second stage). The experimental results show that the proposed method is able to achieve high levels of accuracy even with extremely compressed measurements compared with the existing techniques.
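
    The core premise, that classification can work directly on compressed measurements because random projections approximately preserve distances, can be shown without the autoencoder stages. A sketch with a random Gaussian measurement matrix and a nearest-template classifier on synthetic vibration signals (all signal parameters are invented for illustration; this is not the paper's DNN pipeline):

```python
import numpy as np

rng = np.random.default_rng(42)
n, m = 1024, 128                       # original length, compressed length
phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix

# two synthetic "vibration" classes with different spectral content
t = np.arange(n)
healthy = np.sin(2 * np.pi * 5 * t / n)
faulty = np.sin(2 * np.pi * 5 * t / n) + 0.8 * np.sin(2 * np.pi * 57 * t / n)

# compressed measurements: 12.5% of the Nyquist-rate samples
y_h, y_f = phi @ healthy, phi @ faulty

# classify a noisy probe in the compressed domain, without reconstruction
probe = faulty + 0.1 * rng.standard_normal(n)
y_p = phi @ probe
label = ("healthy" if np.linalg.norm(y_p - y_h) < np.linalg.norm(y_p - y_f)
         else "faulty")
```

    Because the random projection approximately preserves Euclidean distances, the compressed probe stays closer to the template of its true class, so the decision can be made on 128 numbers instead of 1024.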

  16. Characterization of synthetic foam structures used to manufacture artificial vertebral trabecular bone.

    Science.gov (United States)

    Fürst, David; Senck, Sascha; Hollensteiner, Marianne; Esterer, Benjamin; Augat, Peter; Eckstein, Felix; Schrempf, Andreas

    2017-07-01

    Artificial materials reflecting the mechanical properties of human bone are essential for valid and reliable implant testing and design. They also are of great benefit for realistic simulation of surgical procedures. The objective of this study was therefore to characterize two groups of self-developed synthetic foam structures by static compressive testing and by microcomputed tomography. Two mineral fillers and varying amounts of a blowing agent were used to create different expansion behavior of the synthetic open-cell foams. The resulting compressive and morphometric properties thus differed within and also slightly between both groups. Apart from the structural anisotropy, the compressive and morphometric properties of the synthetic foam materials were shown to mirror the respective characteristics of human vertebral trabecular bone in good approximation. In conclusion, the artificial materials created can be used to manufacture valid synthetic bones for surgical training. Further, they provide novel possibilities for studying the relationship between trabecular bone microstructure and biomechanical properties. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Flux Limiter Lattice Boltzmann for Compressible Flows

    International Nuclear Information System (INIS)

    Chen Feng; Li Yingjun; Xu Aiguo; Zhang Guangcai

    2011-01-01

    In this paper, a new flux limiter scheme with a splitting technique is successfully incorporated into a multiple-relaxation-time lattice Boltzmann (LB) model for shocked compressible flows. The proposed flux limiter scheme is efficient in decreasing the artificial oscillations and numerical diffusion around the interface. Due to its kinetic nature, some interface problems that are difficult to handle at the macroscopic level can be modeled more naturally through the LB method. Numerical simulations of the Richtmyer-Meshkov instability show that with the new model the computed interfaces are smoother and more consistent with physical analysis. The growth rates of bubble and spike present a satisfying agreement with the theoretical predictions and other numerical simulations.

  18. Biometric and Emotion Identification: An ECG Compression Based Method

    Directory of Open Access Journals (Sweden)

    Susana Brás

    2018-04-01

    Full Text Available We present an innovative and robust solution to both biometric and emotion identification using the electrocardiogram (ECG). The ECG represents the electrical signal that comes from the contraction of the heart muscles and indirectly represents the flow of blood inside the heart; it is known to convey a key that allows biometric identification. Moreover, due to its relationship with the nervous system, it also varies as a function of the emotional state. The use of information-theoretic data models, associated with data compression algorithms, allowed us to effectively compare ECG records and infer the person's identity, as well as the emotional state at the time of data collection. The proposed method does not require ECG wave delineation or alignment, which reduces preprocessing error. The method is divided into three steps: (1) conversion of the real-valued ECG record into a symbolic time-series, using a quantization process; (2) conditional compression of the symbolic representation of the ECG, using the symbolic ECG records stored in the database as reference; (3) identification of the ECG record class, using a 1-NN (nearest neighbor) classifier. We obtained over 98% accuracy in biometric identification, whereas in emotion recognition we attained over 90%. Therefore, the method adequately identifies the person and his/her emotion. Also, the proposed method is flexible and may be adapted to different problems by the alteration of the templates used for training the model.

  19. Biometric and Emotion Identification: An ECG Compression Based Method

    Science.gov (United States)

    Brás, Susana; Ferreira, Jacqueline H. T.; Soares, Sandra C.; Pinho, Armando J.

    2018-01-01

    We present an innovative and robust solution to both biometric and emotion identification using the electrocardiogram (ECG). The ECG represents the electrical signal that comes from the contraction of the heart muscles and indirectly represents the flow of blood inside the heart; it is known to convey a key that allows biometric identification. Moreover, due to its relationship with the nervous system, it also varies as a function of the emotional state. The use of information-theoretic data models, associated with data compression algorithms, allowed us to effectively compare ECG records and infer the person's identity, as well as the emotional state at the time of data collection. The proposed method does not require ECG wave delineation or alignment, which reduces preprocessing error. The method is divided into three steps: (1) conversion of the real-valued ECG record into a symbolic time-series, using a quantization process; (2) conditional compression of the symbolic representation of the ECG, using the symbolic ECG records stored in the database as reference; (3) identification of the ECG record class, using a 1-NN (nearest neighbor) classifier. We obtained over 98% accuracy in biometric identification, whereas in emotion recognition we attained over 90%. Therefore, the method adequately identifies the person and his/her emotion. Also, the proposed method is flexible and may be adapted to different problems by the alteration of the templates used for training the model. PMID:29670564

  20. Method of controlling coherent synchroton radiation-driven degradation of beam quality during bunch length compression

    Science.gov (United States)

    Douglas, David R [Newport News, VA; Tennant, Christopher D [Williamsburg, VA

    2012-07-10

    A method of avoiding CSR-induced beam quality defects in free-electron laser operation by (a) controlling the rate of compression and (b) using a novel means of integrating the compression with the remainder of the transport system; both are accomplished by means of dispersion modulation. A large dispersion is created in the penultimate dipole magnet of the compression region, leading to rapid compression; this large dispersion is demagnified and dispersion suppression is performed in a final small dipole. As a result, the bunch is short for only a small angular extent of the transport, and the resulting CSR excitation is small.

  1. Predicting the compressive strength of concretes made with granite

    African Journals Online (AJOL)

    2013-03-01

    Mar 1, 2013 ... computational model based on artificial neural networks for the determination of the compressive strength of concrete ... Strength being the most important property of con- ... to cut corners use low quality concrete materials in .... manner of operation of natural neurons in the human body. ... the output ai.

  2. A novel method for estimating soil precompression stress from uniaxial confined compression tests

    DEFF Research Database (Denmark)

    Lamandé, Mathieu; Schjønning, Per; Labouriau, Rodrigo

    2017-01-01

    The concept of precompression stress is used for estimating soil strength of relevance to field traffic. It represents the maximum stress experienced by the soil. The most recently developed fitting method to estimate precompression stress (Gompertz) is based on the assumption of an S-shaped stress-strain curve. Stress-strain curves were obtained by performing uniaxial, confined compression tests on undisturbed soil cores for three soil types at three soil water potentials. The new method performed better than the Gompertz fitting method in estimating precompression stress. The values of precompression stress obtained from the new method were linearly related to the maximum stress experienced by the soil samples prior to the uniaxial, confined compression test at each soil condition, with a slope close to 1. Precompression stress determined with the new method was not related to soil type or dry bulk density
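
    A precompression-stress estimate can be illustrated with a simple two-line fit to a strain versus log-stress curve: fit straight lines to the recompression and virgin-compression branches and intersect them. This is a generic alternative for illustration, not the new method proposed in the paper or the Gompertz fit:

```python
import numpy as np

def precompression_stress(log_stress, strain):
    # Find the split point whose two-line fit has the smallest residual,
    # then return the stress at the intersection of the two lines.
    best = None
    for k in range(2, len(log_stress) - 2):
        a1, b1 = np.polyfit(log_stress[:k + 1], strain[:k + 1], 1)
        a2, b2 = np.polyfit(log_stress[k:], strain[k:], 1)
        resid = (np.sum((np.polyval((a1, b1), log_stress[:k + 1])
                         - strain[:k + 1]) ** 2)
                 + np.sum((np.polyval((a2, b2), log_stress[k:])
                           - strain[k:]) ** 2))
        if best is None or resid < best[0]:
            best = (resid, (b2 - b1) / (a1 - a2))
    return 10.0 ** best[1]

# synthetic curve: nearly flat below 60 kPa, steep virgin line above
stresses = np.array([10, 20, 30, 45, 60, 90, 135, 200, 300, 450], float)
ls = np.log10(stresses)
pc = np.log10(60.0)
strain = np.where(ls < pc, 0.01 * ls, 0.01 * ls + 0.12 * (ls - pc))
sigma_pc = precompression_stress(ls, strain)
```

    On this idealized bilinear curve the estimate recovers the 60 kPa break point exactly; on real data the branch fits absorb measurement scatter.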

  3. Artificial neural networks for prediction of percentage of water ...

    Indian Academy of Sciences (India)

    have high compressive strengths in comparison with concrete specimens ... presenting a suitable model based on artificial neural networks (ANNs) to ... by experimental ones to evaluate the software power for predicting the ... Figure 7. Correlation of measured and predicted percentage of water absorption values.

  4. HPLC-QTOF-MS method for quantitative determination of active compounds in an anti-cellulite herbal compress

    Directory of Open Access Journals (Sweden)

    Ngamrayu Ngamdokmai

    2017-08-01

    Full Text Available A herbal compress used in Thai massage has been modified for use in cellulite treatment. Its main active ingredients are ginger, black pepper, java long pepper, tea, and coffee. The objective of this study was to develop and validate an HPLC-QTOF-MS method for determining its active compounds, i.e., caffeine, 6-gingerol, and piperine, in raw materials as well as in the formulation, together with the flavouring agent camphor. The four compounds were chromatographically separated. The analytical method was validated for selectivity, intra- and inter-day precision, accuracy, and matrix effect. The results showed that the herbal compress contained caffeine (2.16 mg/g), camphor (106.15 mg/g), 6-gingerol (0.76 mg/g), and piperine (4.19 mg/g). The chemical stability study revealed that herbal compresses retained >80% of their active compounds after 1 month of storage at ambient conditions. Our method can be used for quality control of the herbal compress and its raw materials.

  5. Application of Minimally Invasive Treatment of Locking Compression Plate in Schatzker Ⅰ-Ⅲ Tibial Plateau Fracture

    OpenAIRE

    Guohui Zhao

    2014-01-01

    Objective: To investigate the clinical effect of minimally invasive treatment with a locking compression plate (LCP) in Schatzker Ⅰ-Ⅲ tibial plateau fractures. Methods: Thirty-eight patients with Schatzker Ⅰ-Ⅲ tibial plateau fractures in our hospital were given minimally invasive treatment with an LCP, and artificial bone was transplanted to the depressed bone. Adverse responses, wound healing time, and clinical efficacy were observed. Results: All patients were followed up for 14-20 months, and the...

  6. Method for compression molding of thermosetting plastics utilizing a temperature gradient across the plastic to cure the article

    Science.gov (United States)

    Heier, W. C. (Inventor)

    1974-01-01

    A method is described for compression molding of thermosetting plastics compositions. Heat is applied to the compressed load in a mold cavity and adjusted to hold the molding temperature at the interface of the cavity surface and the compressed compound, producing a thermal front. This thermal front advances into the evacuated compound at approximately right angles to the compression load and toward a thermal fence formed at the opposite surface of the compressed compound.

  7. Image quality enhancement in low-light-level ghost imaging using modified compressive sensing method

    Science.gov (United States)

    Shi, Xiaohui; Huang, Xianwei; Nan, Suqin; Li, Hengxing; Bai, Yanfeng; Fu, Xiquan

    2018-04-01

    Detector noise has a significantly negative impact on ghost imaging at low light levels, especially for existing recovery algorithms. Based on the characteristics of additive detector noise, a method named modified compressive sensing ghost imaging is proposed to reduce the background imposed by the randomly distributed detector noise in the signal path. Experimental results show that, with an appropriate choice of threshold value, the modified compressive sensing ghost imaging algorithm can dramatically enhance the contrast-to-noise ratio of the object reconstruction compared with traditional ghost imaging and compressive sensing ghost imaging methods. The relationship between the contrast-to-noise ratio of the reconstructed image and the intensity ratio (namely, the ratio of average signal intensity to average noise intensity) for the three reconstruction algorithms is also discussed. This noise-suppression imaging technique will have great applications in remote sensing and security areas.
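
    The baseline that the modified algorithm improves on, traditional correlation ghost imaging from random patterns and noisy bucket values, can be sketched as follows. The object, pattern statistics, and noise level are invented for illustration, and the proposed thresholding step is not implemented here:

```python
import numpy as np

rng = np.random.default_rng(7)
npix, nshots = 16 * 16, 6000

obj = np.zeros((16, 16))
obj[5:11, 5:11] = 1.0                 # simple square object
x = obj.ravel()

patterns = rng.random((nshots, npix))  # random speckle illumination patterns
# single-pixel "bucket" detector with additive detector noise
bucket = patterns @ x + 0.5 * rng.standard_normal(nshots)

# traditional ghost imaging: intensity-fluctuation correlation
g = (bucket - bucket.mean()) @ (patterns - patterns.mean(0)) / nshots
recon = g.reshape(16, 16)

# correlation with the ground truth as a simple quality score
score = np.corrcoef(recon.ravel(), x)[0, 1]
```

    The detector noise raises the background of this correlation estimate; the paper's modification suppresses that background by thresholding the noisy signal-path samples before reconstruction.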

  8. Comparing biological networks via graph compression

    Directory of Open Access Journals (Sweden)

    Hayashida Morihiro

    2010-09-01

    Full Text Available Background: Comparison of various kinds of biological data is one of the main problems in bioinformatics and systems biology. Data compression methods have been applied to the comparison of large sequence data and protein structure data. Since it is still difficult to compare the global structures of large biological networks, it is reasonable to apply data compression methods to the comparison of biological networks. In existing compression methods, the uniqueness of the compression results is not guaranteed, because there is some ambiguity in the selection of overlapping edges. Results: This paper proposes novel efficient methods, CompressEdge and CompressVertices, for comparing large biological networks. In the proposed methods, an original network structure is compressed by iteratively contracting identical edges and sets of connected edges. Then, the similarity of two networks is measured by the compression ratio of the concatenated networks. The proposed methods are applied to the comparison of metabolic networks of several organisms (H. sapiens, M. musculus, A. thaliana, D. melanogaster, C. elegans, E. coli, S. cerevisiae, and B. subtilis) and are compared with an existing method. The results suggest that our methods can efficiently measure the similarities between metabolic networks. Conclusions: Our proposed algorithms, which compress node-labeled networks, are useful for measuring the similarity of large biological networks.
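
    Measuring network similarity by a compression ratio of concatenated representations can be illustrated with a generic normalized compression distance over serialized edge lists. This is a stand-in using zlib, not the CompressEdge or CompressVertices algorithms:

```python
import zlib

def edges_bytes(edges):
    # canonical text serialization of an undirected edge list
    return "\n".join(sorted(f"{min(u, v)}-{max(u, v)}"
                            for u, v in edges)).encode()

def c(data):
    return len(zlib.compress(data, 9))

def ncd(g1, g2):
    # normalized compression distance: small when the two networks share
    # structure, so their concatenation compresses well
    a, b = edges_bytes(g1), edges_bytes(g2)
    return (c(a + b) - min(c(a), c(b))) / max(c(a), c(b))

ring = [(i, (i + 1) % 30) for i in range(30)]
ring2 = [(i, (i + 1) % 30) for i in range(29)]   # near-identical network
star = [(0, i) for i in range(1, 30)]            # structurally different

d_similar = ncd(ring, ring2)
d_different = ncd(ring, star)
```

    Two nearly identical networks concatenate into something barely larger than either alone, giving a small distance, while structurally different networks share little and score higher; the paper's contraction-based compressors apply the same principle directly to node-labeled graph structure.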

  9. Hemodynamic deterioration during extracorporeal membrane oxygenation weaning in a patient with a total artificial heart.

    Science.gov (United States)

    Hosseinian, Leila; Levin, Matthew A; Fischer, Gregory W; Anyanwu, Anelechi C; Torregrossa, Gianluca; Evans, Adam S

    2015-01-01

    The Total Artificial Heart (Syncardia, Tucson, AZ) is approved for use as a bridge-to-transplant or destination therapy in patients who have irreversible end-stage biventricular heart failure. We present a unique case, in which compression of the inferior vena cava by a total artificial heart was masked for days by the concurrent placement of an extracorporeal membrane oxygenation cannula. This is the case of a 33-year-old man admitted to our institution with recurrent episodes of ventricular tachycardia requiring emergent total artificial heart and venovenous extracorporeal membrane oxygenation placement. This interesting scenario highlights the importance for critical care physicians of understanding the exact anatomical localization of a total artificial heart, of extracorporeal membrane oxygenation, and of their potential interactions. In total artificial heart patients with hemodynamic compromise or reduced device filling, consideration should always be given to venous inflow compression, particularly in those with smaller body surface area. Transesophageal echocardiography is a readily available diagnostic tool that must be considered standard of care, not only in the operating room but also in the ICU, when dealing with this complex subpopulation of cardiac patients.

  10. Thermoeconomic optimization of subcooled and superheated vapor compression refrigeration cycle

    International Nuclear Information System (INIS)

    Selbas, Resat; Kizilkan, Onder; Sencan, Arzu

    2006-01-01

    An exergy-based thermoeconomic optimization is applied to a subcooled and superheated vapor compression refrigeration system. The advantage of using the exergy method of thermoeconomic optimization is that the various elements of the system (i.e., condenser, evaporator, subcooling and superheating heat exchangers) can be optimized on their own. The application consists of determining the optimum heat exchanger areas with the corresponding optimum subcooling and superheating temperatures. A cost function is specified for the optimum conditions. All calculations are made for three refrigerants: R22, R134a, and R407c. Thermodynamic properties of the refrigerants are formulated using the artificial neural network methodology.

  11. An improved ghost-cell immersed boundary method for compressible flow simulations

    KAUST Repository

    Chi, Cheng

    2016-05-20

    This study presents an improved ghost-cell immersed boundary approach to represent a solid body in compressible flow simulations. In contrast to the commonly used approaches, in the present work ghost cells are mirrored through the boundary, described using a level-set method, to farther image points, incorporating a higher-order extrapolation/interpolation scheme for the ghost cell values. A sensor is introduced to deal with image points near discontinuities in the flow field. Adaptive mesh refinement (AMR) is used to improve the representation of the geometry efficiently in the Cartesian grid system. The improved ghost-cell method is validated against four test cases: (a) double Mach reflections on a ramp, (b) smooth Prandtl-Meyer expansion flows, (c) supersonic flows in a wind tunnel with a forward-facing step, and (d) supersonic flows over a circular cylinder. It is demonstrated that the improved ghost-cell method can reach second-order accuracy in the L1 norm and better than first-order accuracy in the L∞ norm. Direct comparisons against the cut-cell method demonstrate that the improved ghost-cell method is almost equally accurate, with better efficiency for boundary representation in high-fidelity compressible flow simulations. Copyright © 2016 John Wiley & Sons, Ltd.
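The mirroring idea can be reduced to one dimension: a ghost point behind the wall takes its value from the image point reflected through the boundary, with the sign flipped to enforce no penetration. This is a drastically simplified sketch (no level set, no higher-order reconstruction, no sensor, no AMR), and the function name is illustrative.

```python
def ghost_value(xg, x_wall, x, u):
    """Mirror a ghost point xg through the wall to its image point, linearly
    interpolate the field u (sampled on grid x) there, and reflect the value
    to impose a no-penetration condition on the normal velocity."""
    xi = 2.0 * x_wall - xg          # image point, mirrored through the wall
    for j in range(len(x) - 1):
        if x[j] <= xi <= x[j + 1]:
            t = (xi - x[j]) / (x[j + 1] - x[j])
            ui = (1 - t) * u[j] + t * u[j + 1]
            return -ui              # reflected normal velocity for the ghost cell
    raise ValueError("image point outside grid")
```

The paper's improvement is essentially to push the image point farther from the boundary and replace this linear interpolation with a higher-order one.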

  12. A novel full-field experimental method to measure the local compressibility of gas diffusion media

    Energy Technology Data Exchange (ETDEWEB)

    Lai, Yeh-Hung; Li, Yongqiang [Electrochemical Energy Research Lab, GM R&D, Honeoye Falls, NY 14472 (United States); Rock, Jeffrey A. [GM Powertrain, Honeoye Falls, NY 14472 (United States)

    2010-05-15

    The gas diffusion medium (GDM) in a proton exchange membrane (PEM) fuel cell needs to simultaneously satisfy the requirements of transporting reactant gases, removing product water, conducting electrons and heat, and providing mechanical support to the membrane electrode assembly (MEA). Concerning the localized over-compression which may force carbon fibers and other conductive debris into the membrane to cause fuel cell failure by electronically shorting through the membrane, we have developed a novel full-field experimental method to measure the local thickness and compressibility of GDM. Applying a uniform air pressure upon a thin polyimide film bonded on the top surface of the GDM with support from the bottom by a flat metal substrate and measuring the thickness change using the 3-D digital image correlation technique with an out-of-plane displacement resolution less than 0.5 μm, we have determined the local thickness and compressive stress/strain behavior in the GDM. Using the local thickness and compressibility data over an area of 11.2 mm × 11.2 mm, we numerically construct the nominal compressive response of a commercial Toray™ TGP-H-060 based GDM subjected to compression by flat platens. Good agreement in the nominal stress/strain curves from the numerical construction and direct experimental flat-platen measurement confirms the validity of the methodology proposed in this article. The result shows that a nominal pressure of 1.4 MPa compressed between two flat platens can introduce localized compressive stress concentration of more than 3 MPa in up to 1% of the total area at various locations from several hundred micrometers to 1 mm in diameter. We believe that this full-field experimental method can be useful in GDM material and process development to reduce the local hard spots and help to mitigate the membrane shorting failure in PEM fuel cells. (author)

  13. A novel full-field experimental method to measure the local compressibility of gas diffusion media

    Science.gov (United States)

    Lai, Yeh-Hung; Li, Yongqiang; Rock, Jeffrey A.

    The gas diffusion medium (GDM) in a proton exchange membrane (PEM) fuel cell needs to simultaneously satisfy the requirements of transporting reactant gases, removing product water, conducting electrons and heat, and providing mechanical support to the membrane electrode assembly (MEA). Concerning the localized over-compression which may force carbon fibers and other conductive debris into the membrane to cause fuel cell failure by electronically shorting through the membrane, we have developed a novel full-field experimental method to measure the local thickness and compressibility of GDM. Applying a uniform air pressure upon a thin polyimide film bonded on the top surface of the GDM with support from the bottom by a flat metal substrate and measuring the thickness change using the 3-D digital image correlation technique with an out-of-plane displacement resolution less than 0.5 μm, we have determined the local thickness and compressive stress/strain behavior in the GDM. Using the local thickness and compressibility data over an area of 11.2 mm × 11.2 mm, we numerically construct the nominal compressive response of a commercial Toray™ TGP-H-060 based GDM subjected to compression by flat platens. Good agreement in the nominal stress/strain curves from the numerical construction and direct experimental flat-platen measurement confirms the validity of the methodology proposed in this article. The result shows that a nominal pressure of 1.4 MPa compressed between two flat platens can introduce localized compressive stress concentration of more than 3 MPa in up to 1% of the total area at various locations from several hundred micrometers to 1 mm in diameter. We believe that this full-field experimental method can be useful in GDM material and process development to reduce the local hard spots and help to mitigate the membrane shorting failure in PEM fuel cells.

  14. Development of Compressive Failure Strength for Composite Laminate Using Regression Analysis Method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Myoung Keon [Agency for Defense Development, Daejeon (Korea, Republic of); Lee, Jeong Won; Yoon, Dong Hyun; Kim, Jae Hoon [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)

    2016-10-15

    This paper provides the compressive failure strength values of composite laminates developed by using the regression analysis method. The composite material in this document is a carbon/epoxy unidirectional (UD) tape prepreg (Cycom G40-800/5276-1) cured at 350°F (177°C). The operating temperature range is -60°F to +200°F (-55°C to +95°C). A total of 56 compression tests were conducted on specimens from eight (8) distinct laminates that were laid up with standard angle layers (0°, +45°, -45° and 90°). The ASTM D 6484 standard was used as the test method. The regression analysis was performed with the response variable being the laminate ultimate fracture strength and the regressor variables being the two ply orientations (0° and ±45°).

  15. Development of Compressive Failure Strength for Composite Laminate Using Regression Analysis Method

    International Nuclear Information System (INIS)

    Lee, Myoung Keon; Lee, Jeong Won; Yoon, Dong Hyun; Kim, Jae Hoon

    2016-01-01

    This paper provides the compressive failure strength values of composite laminates developed by using the regression analysis method. The composite material in this document is a carbon/epoxy unidirectional (UD) tape prepreg (Cycom G40-800/5276-1) cured at 350°F (177°C). The operating temperature range is -60°F to +200°F (-55°C to +95°C). A total of 56 compression tests were conducted on specimens from eight (8) distinct laminates that were laid up with standard angle layers (0°, +45°, -45° and 90°). The ASTM D 6484 standard was used as the test method. The regression analysis was performed with the response variable being the laminate ultimate fracture strength and the regressor variables being the two ply orientations (0° and ±45°).
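The regression setup described above (ultimate strength as the response, the 0° and ±45° ply contents as regressors) can be sketched as an ordinary least-squares fit via the normal equations. The coefficients and sample data below are invented for illustration and are not the paper's values.

```python
def fit_linear(X, y):
    """Least-squares fit y ~= b0 + b1*x1 + b2*x2 via the normal equations,
    solved with Gaussian elimination and partial pivoting."""
    rows = [[1.0] + list(xi) for xi in X]       # prepend intercept column
    p = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    c = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(p)]
    for k in range(p):
        piv = max(range(k, p), key=lambda r: abs(A[r][k]))
        A[k], A[piv] = A[piv], A[k]
        c[k], c[piv] = c[piv], c[k]
        for r in range(k + 1, p):
            f = A[r][k] / A[k][k]
            for j in range(k, p):
                A[r][j] -= f * A[k][j]
            c[r] -= f * c[k]
    b = [0.0] * p
    for k in reversed(range(p)):
        b[k] = (c[k] - sum(A[k][j] * b[j] for j in range(k + 1, p))) / A[k][k]
    return b  # [intercept, coefficient for x1, coefficient for x2]
```

Here x1 and x2 would be the percentages of 0° and ±45° plies in a laminate, and y its measured ultimate compressive strength.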

  16. Artificial neural network intelligent method for prediction

    Science.gov (United States)

    Trifonov, Roumen; Yoshinov, Radoslav; Pavlova, Galya; Tsochev, Georgi

    2017-09-01

    Accounting and financial classification and prediction problems are highly challenging, and researchers use different methods to solve them. Methods and instruments for short-term prediction of financial operations using artificial neural networks are considered. The methods used for prediction of financial data, as well as the developed forecasting system with a neural network, are described in the paper. The architecture of a neural network that uses four different technical indicators, based on the raw data, together with the current day of the week is presented. The network developed is used for forecasting movement of stock prices one day ahead and consists of an input layer, one hidden layer and an output layer. The training method is the error backpropagation algorithm. The main advantage of the developed system is self-determination of the optimal topology of the neural network, due to which it becomes flexible and more precise. The proposed system with a neural network is universal and can be applied to various financial instruments using only basic technical indicators as input data.
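A minimal version of the described architecture (input layer, one hidden layer, output layer, trained by error backpropagation) can be sketched in a few lines. This toy fits a simple function rather than stock prices; the layer sizes, learning rate and target are arbitrary choices, not the paper's.

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class TinyNet:
    """1 input -> H sigmoid hidden units -> 1 linear output,
    trained by error backpropagation with plain gradient descent."""
    def __init__(self, hidden=4, seed=0):
        rnd = random.Random(seed)
        self.w1 = [rnd.uniform(-1, 1) for _ in range(hidden)]
        self.b1 = [0.0] * hidden
        self.w2 = [rnd.uniform(-1, 1) for _ in range(hidden)]
        self.b2 = 0.0

    def forward(self, x):
        h = [sigmoid(w * x + b) for w, b in zip(self.w1, self.b1)]
        return h, sum(w * hj for w, hj in zip(self.w2, h)) + self.b2

    def train_step(self, x, y, lr=0.1):
        h, yhat = self.forward(x)
        err = yhat - y                              # dLoss/dyhat for 0.5*(yhat-y)^2
        for j in range(len(h)):
            dh = err * self.w2[j] * h[j] * (1 - h[j])   # backprop through sigmoid
            self.w2[j] -= lr * err * h[j]
            self.b1[j] -= lr * dh
            self.w1[j] -= lr * dh * x
        self.b2 -= lr * err
        return 0.5 * err * err                      # sample loss before the update
```

The paper's system additionally searches over topologies; here the topology is fixed for brevity.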

  17. A method of automatic control of the process of compressing pyrogas in olefin production

    Energy Technology Data Exchange (ETDEWEB)

    Podval' niy, M.L.; Bobrovnikov, N.R.; Kotler, L.D.; Shib, L.M.; Tuchinskiy, M.R.

    1982-01-01

    In the known method of automatically controlling the process of compressing pyrogas in olefin production, the supply of cooling agents to the interstage coolers of the compression unit is regulated depending on the flow of hydrocarbons to the compression unit. To raise performance by lowering the deposition of polymers on the flow-through surfaces of the equipment, the coolant supply is also regulated as a function of the flows of hydrocarbons from the upper and lower parts of the demethanizer and the bottoms of the stripping tower. The coolant supply is regulated in proportion to the difference between the flow of stripping tower bottoms and the ratio of the hydrocarbon flow from the upper and lower parts of the demethanizer to the hydrocarbon flow in the compression unit. With an increase in the proportion of light hydrocarbons (the sum of the upper and lower demethanizer products) in the total flow of pyrogas going to compression, the flow of coolant to the compression unit is reduced. Condensation of the given fractions in the separators, and their amount in the condensate going through the piping to the stripping tower, are thereby reduced. With a reduction in the proportion of light hydrocarbons in the pyrogas, the flow of coolant is increased, thus improving condensation of heavy hydrocarbons in the separators and removing them from the compression unit in the bottoms of the stripping tower.
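The control law described above can be rendered as a one-line setpoint calculation. All names, the gain, and the base flow below are hypothetical; the patent abstract gives only the proportionality, not the actual tuning.

```python
def coolant_setpoint(f_bottoms, f_demeth_top, f_demeth_bottom, f_pyrogas,
                     gain=1.0, base=0.0):
    """Hypothetical rendering of the control law: coolant flow proportional to
    the difference between the stripping-tower-bottoms flow and the ratio of
    light demethanizer products to the total pyrogas flow. A larger share of
    light hydrocarbons therefore lowers the coolant demand."""
    light_ratio = (f_demeth_top + f_demeth_bottom) / f_pyrogas
    return base + gain * (f_bottoms - light_ratio)
```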

  18. A surface capturing method for the efficient computation of steady water waves

    NARCIS (Netherlands)

    Wackers, J.; Koren, B.

    2008-01-01

    A surface capturing method is developed for the computation of steady water–air flow with gravity. Fluxes are based on artificial compressibility and the method is solved with a multigrid technique and a line Gauss–Seidel smoother. A test on a channel flow with a bottom bump shows the accuracy of the method.
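The artificial-compressibility idea behind these fluxes replaces the incompressibility constraint with a pseudo-time pressure equation, p_τ + β ∇·u = 0, and marches to a steady state where the divergence vanishes. The 1-D Lax–Friedrichs toy below (uniform inflow, fixed outlet pressure) sketches only this principle; it has none of the paper's multigrid machinery, and all parameters are arbitrary.

```python
def artificial_compressibility_1d(n=20, beta=1.0, dt=0.04, iters=4000):
    """Pseudo-time march of u_t + p_x = 0, p_t + beta*u_x = 0 on [0, 1] with a
    Lax-Friedrichs update; at convergence u_x -> 0, recovering the
    incompressible constraint."""
    dx = 1.0 / n
    u = [1.0] + [0.0] * n          # inflow u = 1, interior initially stagnant
    p = [0.0] * (n + 1)
    for _ in range(iters):
        un, pn = u[:], p[:]
        for i in range(1, n):
            u[i] = 0.5 * (un[i - 1] + un[i + 1]) \
                - dt * (pn[i + 1] - pn[i - 1]) / (2 * dx)
            p[i] = 0.5 * (pn[i - 1] + pn[i + 1]) \
                - beta * dt * (un[i + 1] - un[i - 1]) / (2 * dx)
        u[n] = u[n - 1]            # outflow: extrapolate velocity
        p[0], p[n] = p[1], 0.0     # inlet: extrapolate pressure; outlet p = 0
    return u, p
```

At convergence the velocity is uniform (divergence-free) and the pseudo-pressure settles to the outlet reference value.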

  19. Selection of bi-level image compression method for reduction of communication energy in wireless visual sensor networks

    Science.gov (United States)

    Khursheed, Khursheed; Imran, Muhammad; Ahmad, Naeem; O'Nils, Mattias

    2012-06-01

    A Wireless Visual Sensor Network (WVSN) is an emerging field which combines an image sensor, an on-board computation unit, a communication component and an energy source. Compared to a traditional wireless sensor network, which operates on one-dimensional data such as temperature or pressure values, a WVSN operates on two-dimensional data (images), which requires higher processing power and communication bandwidth. Normally, WVSNs are deployed in areas where the installation of wired solutions is not feasible. The energy budget in these networks is limited to batteries, because of the wireless nature of the application. Due to the limited availability of energy, the processing at Visual Sensor Nodes (VSN) and the communication from VSN to server should consume as little energy as possible. Transmitting raw images wirelessly consumes a lot of energy and requires high communication bandwidth. Data compression methods reduce data efficiently and hence are effective in reducing communication cost in WVSN. In this paper, we have compared the compression efficiency and complexity of six well-known bi-level image compression methods. The focus is to determine the compression algorithms which can efficiently compress bi-level images and whose computational complexity is suitable for the computational platform used in WVSNs. These results can be used as a road map for the selection of compression methods for different sets of constraints in WVSN.
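As a concrete flavor of the trade-off being weighed, run-length encoding is the simplest bi-level scheme: it exploits the long uniform runs typical of bi-level images at a tiny computational cost. This generic sketch is not one of the six methods compared in the paper, and the 9-bit run cost is an illustrative assumption.

```python
def rle_encode(bits):
    """Run-length encode a bi-level scan line into [value, count] runs."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return runs

def rle_decode(runs):
    out = []
    for b, n in runs:
        out.extend([b] * n)
    return out

def rle_ratio(bits, run_cost_bits=9):
    """Rough compression ratio, assuming 1 value bit + 8 count bits per run."""
    return len(bits) / (len(rle_encode(bits)) * run_cost_bits)
```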

  20. Handwritten Javanese Character Recognition Using Several Artificial Neural Network Methods

    Directory of Open Access Journals (Sweden)

    Gregorius Satia Budhi

    2015-07-01

    Full Text Available Javanese characters are traditional characters that are used to write the Javanese language. The Javanese language is spoken by many people on the island of Java, Indonesia. The use of Javanese characters is diminishing because of the difficulty of learning the characters themselves. The Javanese character set consists of basic characters, numbers, complementary characters, and so on. In this research we have developed a system to recognize Javanese characters. Input for the system is a digital image containing several handwritten Javanese characters. Preprocessing and segmentation are performed on the input image to extract each character. For each character, feature extraction is done using the ICZ-ZCZ method. The output of feature extraction then becomes the input for an artificial neural network. We used several artificial neural networks, namely a bidirectional associative memory network, a counterpropagation network, an evolutionary network, a backpropagation network, and a backpropagation network combined with chi2. The experimental results show that the combination of chi2 and backpropagation achieved better recognition accuracy than the other methods.
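Zone-based centroid-distance features in the spirit of the ICZ part of the ICZ-ZCZ method can be sketched as follows: split the binary character image into zones and, for each zone, average the distance of its foreground pixels to the whole-image centroid. This is a rough sketch of the general technique, not the authors' exact feature definition.

```python
import math

def icz_features(img, zones=2):
    """Split a binary image into zones x zones blocks; the feature for each
    zone is the mean distance of its foreground pixels to the image centroid
    (Image Centroid Zone style)."""
    h, w = len(img), len(img[0])
    fg = [(r, c) for r in range(h) for c in range(w) if img[r][c]]
    if not fg:
        return [0.0] * (zones * zones)
    cy = sum(r for r, _ in fg) / len(fg)
    cx = sum(c for _, c in fg) / len(fg)
    feats = [[0.0, 0] for _ in range(zones * zones)]
    for r, c in fg:
        z = (r * zones // h) * zones + (c * zones // w)
        feats[z][0] += math.hypot(r - cy, c - cx)
        feats[z][1] += 1
    return [s / n if n else 0.0 for s, n in feats]
```

The resulting fixed-length vector is what would be fed to the neural network classifiers listed above.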

  1. Mammography image compression using Wavelet

    International Nuclear Information System (INIS)

    Azuhar Ripin; Md Saion Salikin; Wan Hazlinda Ismail; Asmaliza Hashim; Norriza Md Isa

    2004-01-01

    Image compression plays an important role in many applications such as medical imaging, videoconferencing, remote sensing, and document and facsimile transmission, which depend on the efficient manipulation, storage, and transmission of binary, gray-scale, or color images. In medical imaging applications such as the Picture Archiving and Communication System (PACS), the image size or image stream size is too large and requires a large amount of storage space or high bandwidth for communication. Image compression techniques are divided into two categories, namely lossy and lossless compression. The wavelet method used in this project is a lossless compression method: the exact original mammography image data can be recovered. In this project, mammography images were digitized using a Vider Sierra Plus digitizer. The digitized images were then compressed using this wavelet image compression technique. Interactive Data Language (IDL) numerical and visualization software was used to perform all of the calculations and to generate and display all of the compressed images. Results of this project are presented in this paper. (Author)
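The losslessness claim rests on integer wavelet transforms, which are exactly invertible. Below is a one-level integer Haar (S-transform) sketch illustrating this property; it is not the project's actual code, and it assumes an even number of samples.

```python
def haar_forward(x):
    """One level of the integer (lifting) Haar transform.
    Assumes len(x) is even. Returns (approximations, details)."""
    s = [(a + b) >> 1 for a, b in zip(x[::2], x[1::2])]  # floor of pair mean
    d = [a - b for a, b in zip(x[::2], x[1::2])]         # pair difference
    return s, d

def haar_inverse(s, d):
    """Exact inverse: the floor lost in the forward mean is recovered from
    the parity of the detail coefficient, so no information is lost."""
    out = []
    for si, di in zip(s, d):
        a = si + ((di + 1) >> 1)
        b = a - di
        out.extend([a, b])
    return out
```

Perfect reconstruction is what distinguishes this from the lossy wavelet coding used in JPEG 2000's irreversible mode.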

  2. The impact of chest compression rates on quality of chest compressions : a manikin study

    OpenAIRE

    Field, Richard A.; Soar, Jasmeet; Davies, Robin P.; Akhtar, Naheed; Perkins, Gavin D.

    2012-01-01

    Purpose: Chest compressions are often performed at a variable rate during cardiopulmonary resuscitation (CPR). The effect of compression rate on other chest compression quality variables (compression depth, duty-cycle, leaning, performance decay over time) is unknown. This randomised controlled cross-over manikin study examined the effect of different compression rates on the other chest compression quality variables. Methods: Twenty healthcare professionals performed two minutes of compressions…

  3. Image quality (IQ) guided multispectral image compression

    Science.gov (United States)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, since it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT, discrete cosine transform), JPEG 2000 (DWT, discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW, Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image will be measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the Structural Similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve an expected compression. Our scenario consists of three steps. The first step is to compress a set of images of interest by varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus the compression parameter from a number of compressed images. The third step is to compress the given image with the specified IQ using the selected compression method (JPEG, JPEG 2000, BPG, or TIFF) according to the regressed models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR). Or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) images (in gray scale) showed very promising results.
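The selection step can be sketched by collapsing the scenario's first two steps into per-method quality curves and interpolating them at the target compression ratio. Piecewise-linear interpolation stands in for the paper's regression models, and the sample curves below are invented for illustration.

```python
def interp(samples, x):
    """Piecewise-linear interpolation of (x, y) samples, clamped at the ends."""
    pts = sorted(samples)
    if x <= pts[0][0]:
        return pts[0][1]
    if x >= pts[-1][0]:
        return pts[-1][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)

def select_method(curves, target_ratio):
    """curves: {method: [(compression_ratio, quality), ...]} measured offline.
    Pick the method whose predicted quality at target_ratio is highest."""
    return max(curves, key=lambda m: interp(curves[m], target_ratio))
```

The converse query (given a quality target, pick the method with the highest ratio) would interpolate the inverse of each curve instead.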

  4. A practical method for estimating maximum shear modulus of cemented sands using unconfined compressive strength

    Science.gov (United States)

    Choo, Hyunwook; Nam, Hongyeop; Lee, Woojin

    2017-12-01

    The composition of naturally cemented deposits is very complicated; thus, estimating the maximum shear modulus (Gmax, or shear modulus at very small strains) of cemented sands using the previous empirical formulas is very difficult. The purpose of this experimental investigation is to evaluate the effects of particle size and cement type on the Gmax and unconfined compressive strength (qucs) of cemented sands, with the ultimate goal of estimating Gmax of cemented sands using qucs. Two sands were artificially cemented using Portland cement or gypsum under varying cement contents (2%-9%) and relative densities (30%-80%). Unconfined compression tests and bender element tests were performed, and the results from previous studies of two cemented sands were incorporated in this study. The results of this study demonstrate that the effect of particle size on the qucs and Gmax of four cemented sands is insignificant, and the variation of qucs and Gmax can be captured by the ratio between volume of void and volume of cement. qucs and Gmax of sand cemented with Portland cement are greater than those of sand cemented with gypsum. However, the relationship between qucs and Gmax of the cemented sand is not affected by the void ratio, cement type and cement content, revealing that Gmax of the complex naturally cemented soils with unknown in-situ void ratio, cement type and cement content can be estimated using qucs.
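One simple way to turn such an observed qucs–Gmax relationship into an estimator is a power-law fit in log-log space. The functional form and the synthetic data in the test are illustrative assumptions, not the paper's calibration.

```python
import math

def fit_power_law(q, g):
    """Fit G = a * q**b by ordinary least squares on log-transformed data,
    e.g. estimating Gmax from unconfined compressive strength qucs."""
    n = len(q)
    lx = [math.log(v) for v in q]
    ly = [math.log(v) for v in g]
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) \
        / sum((x - mx) ** 2 for x in lx)
    a = math.exp(my - b * mx)
    return a, b
```

With (a, b) calibrated on laboratory pairs, Gmax of a field sample could then be estimated as `a * qucs**b`.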

  5. Compressed sensing of ECG signal for wireless system with new fast iterative method.

    Science.gov (United States)

    Tawfic, Israa; Kayhan, Sema

    2015-12-01

    Recent experiments in wireless body area networks (WBAN) show that compressive sensing (CS) is a promising tool to compress the electrocardiogram (ECG) signal. The performance of CS depends on the algorithms used to reconstruct the original signal exactly or approximately. In this paper, we present two methods that work in the absence and presence of noise: Least Support Orthogonal Matching Pursuit (LS-OMP) and Least Support Denoising-Orthogonal Matching Pursuit (LSD-OMP). The algorithms achieve correct support recovery without requiring sparsity knowledge. We derive improved restricted isometry property (RIP) based conditions over the best known results. The basic procedures are carried out by observation and analysis of different ECG signals downloaded from the PhysioBank ATM. Experimental results show that significant performance in terms of reconstruction quality and compression rate can be obtained by these two new proposed algorithms, which can help the specialist gather the necessary information from the patient in less time, as in Magnetic Resonance Imaging (MRI) applications, or reconstruct the patient data after sending it through the network. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
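The greedy-pursuit family these algorithms belong to can be illustrated with plain matching pursuit, a simpler cousin of the OMP variants above (it skips the least-squares re-projection step that gives OMP its name). The sketch assumes unit-norm dictionary atoms and is not the paper's LS-OMP/LSD-OMP.

```python
def matching_pursuit(y, atoms, k):
    """Greedy sparse recovery: repeatedly pick the atom most correlated with
    the residual and subtract its contribution. atoms: unit-norm vectors."""
    residual = list(y)
    coeffs = [0.0] * len(atoms)
    for _ in range(k):
        corr = [sum(r * a for r, a in zip(residual, atom)) for atom in atoms]
        j = max(range(len(atoms)), key=lambda i: abs(corr[i]))
        coeffs[j] += corr[j]
        residual = [r - corr[j] * a for r, a in zip(residual, atoms[j])]
    return coeffs
```

OMP additionally re-solves a least-squares problem over all selected atoms at each step, and the paper's least-support variants refine how the support set is grown.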

  6. Subjective evaluation of compressed image quality

    Science.gov (United States)

    Lee, Heesub; Rowberg, Alan H.; Frank, Mark S.; Choi, Hyung-Sik; Kim, Yongmin

    1992-05-01

    Lossy data compression generates distortion or error in the reconstructed image, and the distortion becomes visible as the compression ratio increases. Even at the same compression ratio, the distortion appears differently depending on the compression method used. Because of the nonlinearity of the human visual system and of lossy data compression methods, we have subjectively evaluated the quality of medical images compressed with two different methods, an intraframe and an interframe coding algorithm. The raw evaluation data were analyzed statistically to measure interrater reliability and the reliability of an individual reader. Also, analysis of variance was used to identify which compression method is statistically better, and from what compression ratio the quality of a compressed image is evaluated as poorer than that of the original. Nine x-ray CT head images from three patients were used as test cases. Six radiologists participated in reading the 99 images (some were duplicates) at four levels: original, and compressed at ratios of 5:1, 10:1, and 15:1. The six readers agreed more than by chance alone and their agreement was statistically significant, but there were large variations among readers as well as within a reader. The displacement-estimated interframe coding algorithm is significantly better in quality than the 2-D block DCT at significance level 0.05. Also, 10:1 compressed images with the interframe coding algorithm do not show any significant differences from the original at level 0.05.

  7. An activated energy approach for accelerated testing of the deformation of UHMWPE in artificial joints.

    Science.gov (United States)

    Galetz, Mathias Christian; Glatzel, Uwe

    2010-05-01

    The deformation behavior of ultrahigh molecular weight polyethylene (UHMWPE) is studied in the temperature range of 23-80 degrees C. Samples are examined in quasi-static compression, tensile and creep tests to determine the accelerated deformation of UHMWPE at elevated temperatures. The deformation mechanisms under compression load can be described by a single strain-rate- and temperature-dependent Eyring process. The activation energy and volume of that process do not change between 23 degrees C and 50 degrees C. This suggests that the deformation mechanism under compression remains stable within this temperature range. Tribological tests are conducted to transfer this activated-energy approach to the deformation behavior under loading typical of artificial knee joints. While this approach does not cover the wear mechanisms close to the surface, testing at higher temperatures is shown to have significant potential to reduce the testing time for lifetime predictions in terms of the macroscopic creep and deformation behavior of artificial joints. Copyright 2010. Published by Elsevier Ltd.
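The accelerated-testing logic behind a thermally activated (Eyring-type) process can be made concrete with the standard Arrhenius-style acceleration factor between a use temperature and an elevated test temperature. The activation energy used in the test is an illustrative placeholder, not a value from the paper.

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def acceleration_factor(Q, t_use_c, t_test_c):
    """Acceleration factor for a thermally activated process with activation
    energy Q (J/mol) when testing at t_test_c instead of t_use_c (deg C).
    A factor of, say, 10 means one test hour stands in for ten use hours."""
    T_use = t_use_c + 273.15
    T_test = t_test_c + 273.15
    return math.exp(Q / R * (1.0 / T_use - 1.0 / T_test))
```

This is exactly why a stable activation energy between 23 °C and 50 °C matters: it licenses extrapolating elevated-temperature results back to body temperature.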

  8. Iterative methods for compressible Navier-Stokes and Euler equations

    Energy Technology Data Exchange (ETDEWEB)

    Tang, W.P.; Forsyth, P.A.

    1996-12-31

    This workshop will focus on methods for the solution of the compressible Navier-Stokes and Euler equations. In particular, attention will be focused on the interaction between the methods used to solve the non-linear algebraic equations (e.g. full Newton or first-order Jacobian) and the resulting large sparse systems. Various types of block and incomplete LU factorization will be discussed, as well as stability issues and the use of Newton-Krylov methods. These techniques will be demonstrated on a variety of model transonic and supersonic airfoil problems. Applications to industrial CFD problems will also be presented. Experience with the use of C++ for the solution of large-scale problems will also be discussed. The format for this workshop will be four fifteen-minute talks, followed by a roundtable discussion.

  9. Stabilization study on a wet-granule tableting method for a compression-sensitive benzodiazepine receptor agonist.

    Science.gov (United States)

    Fujita, Megumi; Himi, Satoshi; Iwata, Motokazu

    2010-03-01

    SX-3228, 6-benzyl-3-(5-methoxy-1,3,4-oxadiazol-2-yl)-5,6,7,8-tetrahydro-1,6-naphthyridin-2(1H)-one, is a newly-synthesized benzodiazepine receptor agonist intended to be developed as a tablet preparation. This compound, however, becomes chemically unstable due to decreased crystallinity when it undergoes mechanical treatments such as grinding and compression. A wet-granule tableting method, where wet granules are compressed before being dried, was therefore investigated as it has the advantage of producing tablets of sufficient hardness at quite low compression pressures. The results of the stability testing showed that the drug substance was chemically considerably more stable in wet-granule compression tablets compared to conventional tablets. Furthermore, the drug substance was found to be relatively chemically stable in wet-granule compression tablets even when high compression pressure was used and the effect of this pressure was small. After investigating the reason for this excellent stability, it became evident that near-isotropic pressure was exerted on the crystals of the drug substance because almost all the empty spaces in the tablets were occupied with water during the wet-granule compression process. Decreases in crystallinity of the drug substance were thus small, making the drug substance chemically stable in the wet-granule compression tablets. We believe that this novel approach could be useful for many other compounds that are destabilized by mechanical treatments.

  10. Technical Note: Validation of two methods to determine contact area between breast and compression paddle in mammography.

    Science.gov (United States)

    Branderhorst, Woutjan; de Groot, Jerry E; van Lier, Monique G J T B; Highnam, Ralph P; den Heeten, Gerard J; Grimbergen, Cornelis A

    2017-08-01

    To assess the accuracy of two methods of determining the contact area between the compression paddle and the breast in mammography. An accurate method to determine the contact area is essential to accurately calculate the average compression pressure applied by the paddle. For a set of 300 breast compressions, we measured the contact areas between breast and paddle, both capacitively using a transparent foil with indium-tin-oxide (ITO) coating attached to the paddle, and retrospectively from the obtained mammograms using image processing software (Volpara Enterprise, algorithm version 1.5.2). A gold standard was obtained from video images of the compressed breast. During each compression, the breast was illuminated from the sides in order to create a dark shadow on the video image where the breast was in contact with the compression paddle. We manually segmented the shadows captured at the time of x-ray exposure and measured their areas. We found a strong correlation between the manual segmentations and the capacitive measurements [r = 0.989, 95% CI (0.987, 0.992)] and between the manual segmentations and the image processing software [r = 0.978, 95% CI (0.972, 0.982)]. Bland-Altman analysis showed a bias of -0.0038 dm² for the capacitive measurement [SD 0.0658, 95% limits of agreement (-0.1329, 0.1252)] and -0.0035 dm² for the image processing software [SD 0.0962, 95% limits of agreement (-0.1921, 0.1850)]. The size of the contact area between the paddle and the breast can be determined accurately and precisely, both in real-time using the capacitive method, and retrospectively using image processing software. This result is beneficial for scientific research, data analysis and quality control systems that depend on one of these two methods for determining the average pressure on the breast during mammographic compression. © 2017 Sigmascreening B.V. Medical Physics published by Wiley Periodicals, Inc. on behalf of the American Association of Physicists in Medicine.
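The Bland-Altman quantities reported above (bias, SD of the paired differences, and 95% limits of agreement) follow from a short calculation. This generic sketch reproduces the statistics, not the study's data.

```python
def bland_altman(a, b):
    """Bias and 95% limits of agreement (bias +/- 1.96 * SD) for paired
    measurements of the same quantity by two methods."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```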

  11. Meshless Method for Simulation of Compressible Flow

    Science.gov (United States)

    Nabizadeh Shahrebabak, Ebrahim

    In the present age, rapid development in computing technology and high-speed supercomputers has made numerical analysis and computational simulation more practical than ever before for large and complex cases. Numerical simulations have also become an essential means of analyzing engineering problems and cases where experimental analysis is not practical. Many sophisticated and accurate numerical schemes exist to perform these simulations. The finite difference method (FDM) has been used to solve differential equation systems for decades. Additional numerical methods based on finite volume and finite element techniques are widely used in solving problems with complex geometry. All of these methods are mesh-based techniques. Mesh generation is an essential preprocessing step to discretize the computational domain for these conventional methods. However, when dealing with complex geometries, these conventional mesh-based techniques can become troublesome, difficult to implement, and prone to inaccuracies. In this study, a more robust, yet simple numerical approach is used to simulate even complex problems in an easier manner. The meshless, or meshfree, method is one such development that has become the focus of much research in recent years. The biggest advantage of meshfree methods is that they circumvent mesh generation. Many algorithms have now been developed to help make this method more popular and understandable for everyone. These algorithms have been employed over a wide range of problems in computational analysis with various levels of success. Since there is no connectivity between the nodes in this method, the challenge is considerable. The most fundamental issue is the lack of conservation, which can be a source of unpredictable errors in the solution process. This problem is particularly evident in the presence of steep gradient regions and discontinuities, such as the shocks that frequently occur in high-speed compressible flow.

  12. Compression method of anastomosis of large intestines by implants with memory of shape: alternative to traditional sutures

    Directory of Open Access Journals (Sweden)

    F. Sh. Aliev

    2015-01-01

    Research objective. To prove experimentally the possibility of forming compression colonic anastomoses using nickel-titanium devices, in comparison with traditional methods of anastomosis. Materials and methods. In experimental studies, the quality of compression anastomoses of the colon was compared with that of sutured and stapled anastomoses. Three experimental groups of mongrel dogs were formed: in the 1st series (n = 30), compression anastomoses with nickel-titanium implants were formed; in the 2nd (n = 25), circular stapled anastomoses; in the 3rd (n = 25), ligature anastomoses by the Mateshuk–Lambert method. In the experiment, the physical durability, elasticity, biological tightness, and morphogenesis of the colonic anastomoses were studied. Results. The optimal sizes of the compression devices are 32 × 18 and 28 × 15 mm with a wire diameter of 2.2 mm; the winding compression force was 740 ± 180 g/mm². The compression suture has a higher physical durability than stapled (W = –33.0; p < 0.05) and sutured (W = –28.0; p < 0.05) anastomoses, higher elasticity (p < 0.05) at all test times, and better biological tightness from day 3 (p < 0.001) after surgery. The regularities of morphogenesis of colonic anastomoses were divided into 4 periods of regeneration of the intestinal suture. Conclusion. The experimental data obtained on compression anastomoses of the colon formed with nickel-titanium devices provide convincing arguments for their clinical application.

  13. Comparative Survey of Ultrasound Images Compression Methods Dedicated to a Tele-Echography Robotic System

    National Research Council Canada - National Science Library

    Delgorge, C

    2001-01-01

    For the purpose of this work, we selected seven compression methods: Fourier Transform, Discrete Cosine Transform, Wavelets, Quadtrees Transform, Fractals, Histogram Thresholding, and Run Length Coding...

  14. Experiments with automata compression

    NARCIS (Netherlands)

    Daciuk, J.; Yu, S; Daley, M; Eramian, M G

    2001-01-01

    Several compression methods of finite-state automata are presented and evaluated. Most compression methods used here are already described in the literature. However, their impact on the size of automata has not been described yet. We fill that gap, presenting results of experiments carried out on

  15. A parallel finite-volume finite-element method for transient compressible turbulent flows with heat transfer

    International Nuclear Information System (INIS)

    Masoud Ziaei-Rad

    2010-01-01

    In this paper, a two-dimensional numerical scheme is presented for the simulation of turbulent, viscous, transient compressible flows in the simultaneously developing hydraulic and thermal boundary layer region. The numerical procedure is a finite-volume-based finite-element method applied to unstructured grids. This combination together with a new method applied for the boundary conditions allows for accurate computation of the variables in the entrance region and for a wide range of flow fields from subsonic to transonic. The Roe-Riemann solver is used for the convective terms, whereas the standard Galerkin technique is applied for the viscous terms. A modified κ-ε model with a two-layer equation for the near-wall region combined with a compressibility correction is used to predict the turbulent viscosity. Parallel processing is also employed to divide the computational domain among the different processors to reduce the computational time. The method is applied to some test cases in order to verify the numerical accuracy. The results show significant differences between incompressible and compressible flows in the friction coefficient, Nusselt number, shear stress and the ratio of the compressible turbulent viscosity to the molecular viscosity along the developing region. A transient flow generated after an accidental rupture in a pipeline was also studied as a test case. The results show that the present numerical scheme is stable, accurate and efficient enough to solve the problem of transient wall-bounded flow.
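The paper couples a Roe-type Riemann solver for the convective terms with a Galerkin treatment of the viscous terms; reproducing that is beyond a listing, but the finite-volume update at its core can be sketched with the simpler (and more diffusive) Rusanov flux on the classic Sod shock tube. Grid size, CFL number, and initial states below are illustrative choices, not the paper's.

```python
import numpy as np

def euler_flux(U, gamma=1.4):
    """Physical flux of the 1D Euler equations for U = (rho, rho*u, E)."""
    rho, mom, E = U
    u = mom / rho
    p = (gamma - 1.0) * (E - 0.5 * rho * u ** 2)
    return np.array([mom, mom * u + p, (E + p) * u])

def max_speed(U, gamma=1.4):
    """Largest characteristic speed |u| + c in a cell."""
    rho, mom, E = U
    u = mom / rho
    p = (gamma - 1.0) * (E - 0.5 * rho * u ** 2)
    return abs(u) + np.sqrt(gamma * p / rho)

def rusanov(UL, UR, gamma=1.4):
    """Local Lax-Friedrichs flux: central average plus max-wave-speed dissipation."""
    s = max(max_speed(UL, gamma), max_speed(UR, gamma))
    return 0.5 * (euler_flux(UL, gamma) + euler_flux(UR, gamma)) - 0.5 * s * (UR - UL)

# Sod shock tube: density/pressure jump at the midpoint of a unit domain.
N, gamma = 100, 1.4
U = np.zeros((3, N))
U[:, : N // 2] = [[1.0], [0.0], [1.0 / (gamma - 1.0)]]
U[:, N // 2 :] = [[0.125], [0.0], [0.1 / (gamma - 1.0)]]
dx, t, cfl = 1.0 / N, 0.0, 0.4
while t < 0.1:
    smax = max(max_speed(U[:, i], gamma) for i in range(N))
    dt = cfl * dx / smax
    F = np.array([rusanov(U[:, i], U[:, i + 1], gamma) for i in range(N - 1)]).T
    U[:, 1:-1] -= dt / dx * (F[:, 1:] - F[:, :-1])   # conservative cell update
    t += dt
```

The interface dissipation is what lets the scheme capture the shock without oscillation; a Roe solver replaces the crude max-wave-speed term with a characteristic-by-characteristic upwinding and is correspondingly sharper.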

  16. Parallel spectral methods and applications to simulations of compressible mixing layers

    OpenAIRE

    Male , Jean-Michel; Fezoui , Loula ,

    1993-01-01

    Solving the Navier-Stokes equations with spectral methods for compressible flows can be quite demanding in computation time. We therefore study the parallelization of such an algorithm and its implementation on a massively parallel machine, the Connection Machine CM-2. The spectral method adapts well to the demands of massive parallelism, but one of its basic tools, the fast Fourier transform (when it must be applied along the two dime...

  17. Novel method to load multiple genes onto a mammalian artificial chromosome.

    Directory of Open Access Journals (Sweden)

    Anna Tóth

    Mammalian artificial chromosomes are natural chromosome-based vectors that may carry a vast amount of genetic material in terms of both size and number. They are reasonably stable and segregate well in both mitosis and meiosis. A platform artificial chromosome expression system (ACEs) was earlier described with multiple loading sites for a modified lambda-integrase enzyme. It has been shown that this ACEs is suitable for high-level industrial protein production and the treatment of a mouse model for a devastating human disorder, Krabbe's disease. ACEs-treated mutant mice carrying a therapeutic gene lived more than four times longer than untreated counterparts. This novel gene therapy method is called combined mammalian artificial chromosome-stem cell therapy. At present, this method suffers from the limitation that a new selection marker gene should be present for each therapeutic gene loaded onto the ACEs. Complex diseases require the cooperative action of several genes for treatment, but only a limited number of selection marker genes are available and there is also a risk of serious side-effects caused by the unwanted expression of these marker genes in mammalian cells, organs and organisms. We describe here a novel method to load multiple genes onto the ACEs by using only two selectable marker genes. These markers may be removed from the ACEs before therapeutic application. This novel technology could revolutionize gene therapeutic applications targeting the treatment of complex disorders and cancers. It could also speed up cell therapy by allowing researchers to engineer a chromosome with a predetermined set of genetic factors to differentiate adult stem cells, embryonic stem cells and induced pluripotent stem (iPS) cells into cell types of therapeutic value. It is also a suitable tool for the investigation of complex biochemical pathways in basic science by producing an ACEs with several genes from a signal transduction pathway of interest.

  18. Novel method to load multiple genes onto a mammalian artificial chromosome.

    Science.gov (United States)

    Tóth, Anna; Fodor, Katalin; Praznovszky, Tünde; Tubak, Vilmos; Udvardy, Andor; Hadlaczky, Gyula; Katona, Robert L

    2014-01-01

    Mammalian artificial chromosomes are natural chromosome-based vectors that may carry a vast amount of genetic material in terms of both size and number. They are reasonably stable and segregate well in both mitosis and meiosis. A platform artificial chromosome expression system (ACEs) was earlier described with multiple loading sites for a modified lambda-integrase enzyme. It has been shown that this ACEs is suitable for high-level industrial protein production and the treatment of a mouse model for a devastating human disorder, Krabbe's disease. ACEs-treated mutant mice carrying a therapeutic gene lived more than four times longer than untreated counterparts. This novel gene therapy method is called combined mammalian artificial chromosome-stem cell therapy. At present, this method suffers from the limitation that a new selection marker gene should be present for each therapeutic gene loaded onto the ACEs. Complex diseases require the cooperative action of several genes for treatment, but only a limited number of selection marker genes are available and there is also a risk of serious side-effects caused by the unwanted expression of these marker genes in mammalian cells, organs and organisms. We describe here a novel method to load multiple genes onto the ACEs by using only two selectable marker genes. These markers may be removed from the ACEs before therapeutic application. This novel technology could revolutionize gene therapeutic applications targeting the treatment of complex disorders and cancers. It could also speed up cell therapy by allowing researchers to engineer a chromosome with a predetermined set of genetic factors to differentiate adult stem cells, embryonic stem cells and induced pluripotent stem (iPS) cells into cell types of therapeutic value. It is also a suitable tool for the investigation of complex biochemical pathways in basic science by producing an ACEs with several genes from a signal transduction pathway of interest.

  19. Nonlinear viscoelasticity of pre-compressed layered polymeric composite under oscillatory compression

    KAUST Repository

    Xu, Yangguang

    2018-05-03

    Describing the nonlinear viscoelastic properties of polymeric composites subjected to dynamic loading is essential for the development of practical applications of such materials. An efficient and easy method to analyze nonlinear viscoelasticity remains elusive because the dynamic moduli (storage modulus and loss modulus) are not very convenient when the material falls into the nonlinear viscoelastic range. In this study, we utilize two methods, Fourier transform and geometrical nonlinear analysis, to quantitatively characterize the nonlinear viscoelasticity of a pre-compressed layered polymeric composite under oscillatory compression. We discuss the influences of pre-compression, dynamic loading, and the inner structure of the polymeric composite on the nonlinear viscoelasticity. Furthermore, we reveal the nonlinear viscoelastic mechanism by combining these results with other experimental results from quasi-static compressive tests and microstructural analysis. From a methodology standpoint, it is proved that both Fourier transform and geometrical nonlinear analysis are efficient tools for analyzing the nonlinear viscoelasticity of a layered polymeric composite. From a material standpoint, we consequently posit that the dynamic nonlinear viscoelasticity of polymeric composites with complicated inner structures can also be well characterized using these methods.

  20. Three dimensional simulation of compressible and incompressible flows through the finite element method

    International Nuclear Information System (INIS)

    Costa, Gustavo Koury

    2004-11-01

    Although incompressible fluid flows can be regarded as a particular case of a general problem, the numerical methods and mathematical formulations aimed at solving compressible and incompressible flows have their own peculiarities, in such a way that it is generally not possible to handle both regimes with a single approach. In this work, we start from a typically compressible formulation, slightly modified to make use of pressure variables, and, through augmenting the stabilising parameters, we end up with a simplified model which is able to deal with a wide range of flow regimes, from supersonic to low-speed gas flows. The resulting methodology is flexible enough to allow for the simulation of liquid flows as well. Examples using conservative and pressure variables are shown and the results are compared to those published in the literature, in order to validate the method. (author)

  1. Development of a geopolymer solidification method for radioactive wastes by compression molding and heat curing

    International Nuclear Information System (INIS)

    Shimoda, Chiaki; Matsuyama, Kanae; Okabe, Hirofumi; Kaneko, Masaaki; Miyamoto, Shinya

    2017-01-01

    Geopolymer solidification is a good method for managing waste because it is inexpensive compared with vitrification and has a reduced risk of hydrogen generation. In general, when geopolymers are made, water is added to the geopolymer raw materials, and then the slurry is mixed, poured into a mold, and cured. However, it is difficult to control the reaction because, depending on the types of materials, the viscosity can increase immediately after mixing. Geopolymer slurries easily adhere to the agitating blade of the mixer and easily clog the plumbing during transport. Moreover, during long-term storage of solidified wastes containing concentrated radionuclides in a sealed container without vents, the hydrogen concentration in the container increases over time. Therefore, a simple method using as little water as possible is needed. In this work, geopolymer solidification by compression molding was studied. Compared with the usual methods, it provides a simple and stable way of preparing waste for long-term storage. Investigations performed before and after solidification by compression molding showed that the crystal structure changed. From this result, it was concluded that the geopolymer reaction proceeded during compression molding. This method (1) reduces the energy needed for drying, (2) has good workability, (3) reduces the overall volume, and (4) reduces hydrogen generation. (author)

  2. Uniaxial compression tests on diesel contaminated frozen silty soil specimens

    International Nuclear Information System (INIS)

    Chenaf, D.; Stampli, N.; Bathurst, R.; Chapuis, R.P.

    1999-01-01

    Results of uniaxial, unconfined compression tests on artificially diesel-contaminated and uncontaminated frozen silty soils are discussed. The testing program involved 59 specimens. The results show that, for the same fluid content, diesel contamination reduced the strength of the frozen specimens by increasing the unfrozen water content. For example, in specimens containing 50 per cent diesel oil of the fluid content by weight, the maximum strength was reduced by 95 per cent compared to the strength of an uncontaminated specimen. Diesel contamination was also shown to contribute to slippage between soil particles by acting as a lubricant, thus accelerating the loss of compressive strength. 13 refs., 18 figs.

  3. Backtracking-Based Iterative Regularization Method for Image Compressive Sensing Recovery

    Directory of Open Access Journals (Sweden)

    Lingjun Liu

    2017-01-01

    This paper presents a variant of the iterative shrinkage-thresholding (IST) algorithm, called backtracking-based adaptive IST (BAIST), for image compressive sensing (CS) reconstruction. As iterations increase, IST usually yields a smoothing of the solution and converges prematurely. To add back more details, the BAIST method backtracks to the previous noisy image using L2 norm minimization, i.e., minimizing the Euclidean distance between the current solution and the previous one. Through this modification, the BAIST method achieves superior performance while maintaining the low complexity of IST-type methods. BAIST also adopts a nonlocal regularization with an adaptive regularizer to automatically detect the sparsity level of an image. Experimental results show that our algorithm outperforms the original IST method and several excellent CS techniques.
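BAIST's backtracking step and adaptive nonlocal regularizer go beyond a short sketch, but the plain IST iteration it modifies is compact: a gradient step on the data term followed by soft thresholding. A minimal version on a synthetic sparse-recovery problem (random sensing matrix and 3-sparse signal are hypothetical, not from the paper):

```python
import numpy as np

def ista(A, y, lam=0.05, iters=1000):
    """Plain iterative shrinkage-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - y))   # gradient step on the data-fidelity term
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))           # 40 measurements of a length-100 signal
x_true = np.zeros(100)
x_true[[3, 30, 70]] = [1.5, -2.0, 1.0]       # 3-sparse ground truth
y = A @ x_true
x_hat = ista(A, y)
```

The soft-threshold shrinkage is exactly the smoothing the abstract describes; BAIST's contribution is to backtrack and re-regularize so that detail lost to it is recovered.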

  4. Compressed sensing method for human activity recognition using tri-axis accelerometer on mobile phone

    Institute of Scientific and Technical Information of China (English)

    Song Hui; Wang Zhongmin

    2017-01-01

    The diversity in phone placements in different mobile users' daily lives increases the difficulty of recognizing human activities from mobile phone accelerometer data. To solve this problem, a compressed sensing method to recognize human activities, based on compressed sensing theory and utilizing both raw mobile phone accelerometer data and phone placement information, is proposed. First, an over-complete dictionary matrix is constructed using sufficient raw tri-axis acceleration data labeled with phone placement information. Then, the sparse coefficient is evaluated for the samples to be tested by solving an L1 minimization. Finally, residual values are calculated, and the minimum value is selected as the indicator to obtain the recognition result. Experimental results show that this method can achieve a recognition accuracy of 89.86%, which is higher than that of a recognition method that does not adopt phone placement information in the recognition process. The recognition accuracy of the proposed method is thus effective and satisfactory.
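The dictionary-plus-residual classification described above can be sketched compactly. For brevity this sketch substitutes per-class least squares for the abstract's L1 minimization, and the "walking"/"sitting" dictionaries are random toy data, not accelerometer recordings:

```python
import numpy as np

def min_residual_label(dicts, sample):
    """Classify a sample by the smallest reconstruction residual over
    per-class sub-dictionaries (least squares stands in for L1 coding)."""
    best_label, best_res = None, np.inf
    for label, D in dicts.items():
        coef, *_ = np.linalg.lstsq(D, sample, rcond=None)
        res = np.linalg.norm(sample - D @ coef)   # residual for this class
        if res < best_res:
            best_label, best_res = label, res
    return best_label

rng = np.random.default_rng(1)
walk = rng.standard_normal((30, 5)) + 2.0   # toy "walking" acceleration atoms
sit = rng.standard_normal((30, 5)) - 2.0    # toy "sitting" acceleration atoms
dicts = {"walking": walk, "sitting": sit}
sample = walk @ np.array([0.4, 0.1, 0.3, 0.1, 0.1])   # lies in the walking subspace
label = min_residual_label(dicts, sample)
```

The minimum-residual rule is the same as in the abstract; what the L1 step adds in the real method is a sparse code over the *whole* over-complete dictionary, which is more robust when classes overlap.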

  5. Methods for compressible multiphase flows and their applications

    Science.gov (United States)

    Kim, H.; Choe, Y.; Kim, H.; Min, D.; Kim, C.

    2018-06-01

    This paper presents an efficient and robust numerical framework to deal with multiphase real-fluid flows and their broad spectrum of engineering applications. A homogeneous mixture model incorporated with a real-fluid equation of state and a phase change model is considered to calculate complex multiphase problems. As robust and accurate numerical methods to handle multiphase shocks and phase interfaces over a wide range of flow speeds, the AUSMPW+_N and RoeM_N schemes with a system preconditioning method are presented. These methods are assessed by extensive validation problems with various types of equation of state and phase change models. Representative realistic multiphase phenomena, including the flow inside a thermal vapor compressor, pressurization in a cryogenic tank, and unsteady cavitating flow around a wedge, are then investigated as application problems. With appropriate physical modeling followed by robust and accurate numerical treatments, compressible multiphase flow physics such as phase changes, shock discontinuities, and their interactions are well captured, confirming the suitability of the proposed numerical framework to wide engineering applications.

  6. A theoretical global optimization method for vapor-compression refrigeration systems based on entransy theory

    International Nuclear Information System (INIS)

    Xu, Yun-Chao; Chen, Qun

    2013-01-01

    Vapor-compression refrigeration systems have long been among the essential energy conversion systems for humankind, and they consume huge amounts of energy nowadays. Many effectual optimization methods exist to promote the energy efficiency of these systems, but they rely mainly on engineering experience and computer simulations rather than theoretical analysis, owing to the complex and vague physical essence of the processes involved. We attempt to propose a theoretical global optimization method based on in-depth physical analysis of the involved physical processes, i.e. heat transfer analysis for the condenser and evaporator, by introducing the entransy theory, and thermodynamic analysis for the compressor and expansion valve. The integration of heat transfer and thermodynamic analyses forms the overall physical optimization model for such systems, describing the relation between all the unknown parameters and the known conditions, which makes theoretical global optimization possible. With the aid of mathematical conditional-extremum solutions, an optimization equation group and the optimal configuration of all the unknown parameters are analytically obtained. Finally, via the optimization of a typical vapor-compression refrigeration system under various working conditions to minimize the total heat transfer area of the heat exchangers, the validity and superiority of the newly proposed optimization method are proved. - Highlights: • A global optimization method for vapor-compression systems is proposed. • Integrating heat transfer and thermodynamic analyses forms the optimization model. • A mathematical relation between design parameters and requirements is derived. • Entransy dissipation is introduced into heat transfer analysis. • The validity of the method is proved via optimization of practical cases

  7. Analysis of time integration methods for the compressible two-fluid model for pipe flow simulations

    NARCIS (Netherlands)

    B. Sanderse (Benjamin); I. Eskerud Smith (Ivar); M.H.W. Hendrix (Maurice)

    2017-01-01

    In this paper we analyse different time integration methods for the two-fluid model and propose the BDF2 method as the preferred choice to simulate transient compressible multiphase flow in pipelines. Compared to the prevailing Backward Euler method, the BDF2 scheme has a significantly
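The record is truncated, but the BDF2 scheme it advocates is easy to sketch. On the linear test equation y' = λy, the two-step formula (3yₙ₊₁ − 4yₙ + yₙ₋₁)/(2h) = λyₙ₊₁ can be solved explicitly for yₙ₊₁; one Backward Euler step bootstraps the method. The step size and λ below are illustrative:

```python
import numpy as np

def bdf2_linear(lam, y0, h, n_steps):
    """BDF2 for y' = lam*y; a single Backward Euler step starts the 2-step scheme."""
    ys = [y0, y0 / (1.0 - h * lam)]                       # BE startup step
    for _ in range(n_steps - 1):
        # (3*y_new - 4*y_n + y_{n-1}) / (2h) = lam * y_new, solved for y_new
        y_new = (4.0 * ys[-1] - ys[-2]) / (3.0 - 2.0 * h * lam)
        ys.append(y_new)
    return np.array(ys)

h, lam = 0.01, -1.0
ys = bdf2_linear(lam, 1.0, h, 100)                        # integrate to t = 1
exact = np.exp(lam * h * np.arange(101))
err = np.max(np.abs(ys - exact))
```

BDF2 is second-order accurate and A-stable, which is why it improves on first-order Backward Euler for stiff transient pipe-flow problems without sacrificing damping of fast modes.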

  8. Images compression in nuclear medicine

    International Nuclear Information System (INIS)

    Rebelo, M.S.; Furuie, S.S.; Moura, L.

    1992-01-01

    The performance of two methods for image compression in nuclear medicine was evaluated: the exact (lossless) LZW method and the approximate (lossy) cosine-transform method. The results showed that the approximate method produced images of agreeable quality for visual analysis, with compression rates considerably higher than those of the exact method. (C.G.C.)
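The lossless half of that comparison, LZW, is a dictionary coder: it emits the code of the longest already-seen prefix and grows the dictionary as it goes. A minimal sketch (byte input, integer codes; real codecs also pack the codes into variable-width bits):

```python
def lzw_compress(data: bytes) -> list[int]:
    """Minimal LZW: emit the dictionary code of the longest known prefix,
    then add that prefix extended by one byte to the dictionary."""
    table = {bytes([i]): i for i in range(256)}   # seed with all single bytes
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                                # keep extending the match
        else:
            out.append(table[w])
            table[wc] = len(table)                # new dictionary entry
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

codes = lzw_compress(b"ABABABAB")                 # repetitive input compresses
```

On repetitive data such as the flat background of a scintigraphic image, ever-longer dictionary entries are matched, which is the source of LZW's lossless gain.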

  9. Artificially lengthened and constricted vocal tract in vocal training methods.

    Science.gov (United States)

    Bele, Irene Velsvik

    2005-01-01

    It is common practice in vocal training to make use of vocal exercise techniques that involve partial occlusion of the vocal tract. Various techniques are used; some of them form an occlusion within the front part of the oral cavity or at the lips. Another vocal exercise technique involves lengthening the vocal tract; for example, the method of phonation into small tubes. This essay presents some studies made on the effects of various vocal training methods that involve an artificially lengthened and constricted vocal tract. The influence of sufficient acoustic impedance on vocal fold vibration and economical voice production is presented.

  10. Selectively Lossy, Lossless, and/or Error Robust Data Compression Method

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Lossless compression techniques provide efficient compression of hyperspectral satellite data. The present invention combines the advantages of a clustering with...

  11. Retrofit device and method to improve humidity control of vapor compression cooling systems

    Science.gov (United States)

    Roth, Robert Paul; Hahn, David C.; Scaringe, Robert P.

    2016-08-16

    A method and device for improving moisture removal capacity of a vapor compression system is disclosed. The vapor compression system is started up with the evaporator blower initially set to a high speed. A relative humidity in a return air stream is measured with the evaporator blower operating at the high speed. If the measured humidity is above the predetermined high relative humidity value, the evaporator blower speed is reduced from the initially set high speed to the lowest possible speed. The device is a control board connected with the blower and uses a predetermined change in measured relative humidity to control the blower motor speed.
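The control logic described in the abstract is simple enough to sketch directly. The RH limit and the speed values below are hypothetical placeholders, not figures from the patent:

```python
def blower_speed_after_startup(measured_rh_pct, rh_limit_pct, speeds):
    """Sketch of the retrofit logic: run the evaporator blower at the highest
    speed on startup, then drop to the lowest speed if return-air relative
    humidity is still above the limit (a colder coil condenses more moisture)."""
    return min(speeds) if measured_rh_pct > rh_limit_pct else max(speeds)

# Hypothetical readings: 65% RH against a 55% limit forces the low speed.
speed = blower_speed_after_startup(65.0, 55.0, [400, 800, 1200])
```

Slowing the blower lengthens the air's contact time with the evaporator coil and lowers the coil surface temperature, which is what shifts the system's capacity from sensible toward latent (moisture) removal.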

  12. Finite Element Analysis of Increasing Column Section and CFRP Reinforcement Method under Different Axial Compression Ratio

    Science.gov (United States)

    Jinghai, Zhou; Tianbei, Kang; Fengchi, Wang; Xindong, Wang

    2017-11-01

    Eight frame joints with fewer stirrups in the core area are simulated with the ABAQUS finite element software. The composite reinforcement method combines carbon fiber sheets with an increased column section; the axial compression ratios of the reinforced specimens are 0.3, 0.45 and 0.6, respectively. Analysis of the load-displacement curves, ductility and stiffness shows that the axial compression ratio has a great influence on the bearing capacity of the increased-column-section strengthening method, and little influence on the carbon fiber reinforcement method. The different strengthening schemes improve the ultimate bearing capacity and ductility of the frame joints to a certain extent: the composite reinforcement method gives the most significant improvement, followed by the increased column section, while carbon fiber reinforcement alone gives the smallest increase.

  13. A time-domain method to generate artificial time history from a given reference response spectrum

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Gang Sik [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of); Song, Oh Seop [Dept. of Mechanical Engineering, Chungnam National University, Daejeon (Korea, Republic of)

    2016-06-15

    Seismic qualification by test is widely used as a way to show the integrity and functionality of equipment that is related to the overall safety of nuclear power plants. Another means of seismic qualification is by direct integration analysis. Both approaches require a series of time histories as an input. However, in most cases, the possibility of using real earthquake data is limited. Thus, artificial time histories are widely used instead. In many cases, however, response spectra are given. Thus, most of the artificial time histories are generated from the given response spectra. Obtaining the response spectrum from a given time history is straightforward. However, the procedure for generating artificial time histories from a given response spectrum is difficult and complex to understand. Thus, this paper presents a simple time-domain method for generating a time history from a given response spectrum; the method was shown to satisfy conditions derived from nuclear regulatory guidance.
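The time-domain idea sketched below is one common way such generation works: superpose sinusoids at the spectrum's control frequencies, then iteratively rescale each amplitude by the ratio of target to computed spectral ordinate. The frequencies, target ordinates, damping ratio, and the simple SDOF time-stepper are all illustrative assumptions, not the paper's procedure:

```python
import numpy as np

def response_spectrum(acc, dt, freqs, zeta=0.05):
    """Pseudo-acceleration spectrum: peak SDOF response per frequency
    (semi-implicit Euler stepping; adequate for this illustration)."""
    sa = []
    for f in freqs:
        wn = 2.0 * np.pi * f
        u = v = peak = 0.0
        for ag in acc:                       # u'' + 2*zeta*wn*u' + wn^2*u = -ag
            v += dt * (-ag - 2.0 * zeta * wn * v - wn * wn * u)
            u += dt * v
            peak = max(peak, abs(u))
        sa.append(wn * wn * peak)            # Sa = wn^2 * max|u|
    return np.array(sa)

def match_spectrum(target_sa, freqs, dur=8.0, dt=0.005, iters=6, zeta=0.05, seed=0):
    """Superpose fixed-phase sinusoids; rescale amplitudes by target/computed Sa."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, dur, dt)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(freqs))
    target_sa = np.asarray(target_sa, float)
    amps = target_sa.copy()
    for _ in range(iters):
        acc = sum(a * np.sin(2*np.pi*f*t + p) for a, f, p in zip(amps, freqs, phases))
        amps *= target_sa / response_spectrum(acc, dt, freqs, zeta)
    acc = sum(a * np.sin(2*np.pi*f*t + p) for a, f, p in zip(amps, freqs, phases))
    return t, acc

freqs = [1.0, 2.0, 5.0]
target = [1.0, 1.5, 1.0]                     # hypothetical target Sa ordinates
t, acc = match_spectrum(target, freqs)
sa = response_spectrum(acc, 0.005, freqs)
```

The multiplicative update converges quickly because each ordinate is dominated by the resonant sinusoid at its own frequency; regulatory checks then verify enveloping of the target spectrum and other conditions the paper discusses.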

  14. A time-domain method to generate artificial time history from a given reference response spectrum

    International Nuclear Information System (INIS)

    Shin, Gang Sik; Song, Oh Seop

    2016-01-01

    Seismic qualification by test is widely used as a way to show the integrity and functionality of equipment that is related to the overall safety of nuclear power plants. Another means of seismic qualification is by direct integration analysis. Both approaches require a series of time histories as an input. However, in most cases, the possibility of using real earthquake data is limited. Thus, artificial time histories are widely used instead. In many cases, however, response spectra are given. Thus, most of the artificial time histories are generated from the given response spectra. Obtaining the response spectrum from a given time history is straightforward. However, the procedure for generating artificial time histories from a given response spectrum is difficult and complex to understand. Thus, this paper presents a simple time-domain method for generating a time history from a given response spectrum; the method was shown to satisfy conditions derived from nuclear regulatory guidance

  15. Method for Calculation of Steam-Compression Heat Transformers

    Directory of Open Access Journals (Sweden)

    S. V. Zditovetckaya

    2012-01-01

    The paper considers a method for joint numerical analysis of the cycle parameters and heat-exchange equipment of a steam-compression heat transformer contour that takes into account non-stationary operating modes and irreversible losses in the devices and pipeline contour. The method has been implemented as a software package and can be used for the design or selection of a heat transformer, with due account of the coolant and the actual equipment included in its structure. The paper presents investigation results revealing the influence of pressure losses in the evaporator and condenser on the coolant side, caused by friction and local resistances, on the power efficiency of the heat transformer operating as a refrigerating and heating installation and as a heat pump. The actual operational parameters of the heat pump in nominal and off-design operating modes depend on the structure of the concrete contour equipment.

  16. Mechanical properties of silorane-based and methacrylate-based composite resins after artificial aging.

    Science.gov (United States)

    de Castro, Denise Tornavoi; Lepri, César Penazzo; Valente, Mariana Lima da Costa; dos Reis, Andréa Cândido

    2016-01-01

    The aim of this study was to compare the compressive strength of a silorane-based composite resin (Filtek P90) to that of conventional composite resins (Charisma, Filtek Z250, Fill Magic, and NT Premium) before and after accelerated artificial aging (AAA). For each composite resin, 16 cylindrical specimens were prepared and divided into 2 groups. One group underwent analysis of compressive strength in a universal testing machine 24 hours after preparation, and the other was subjected first to 192 hours of AAA and then to the compressive strength test. Data were analyzed by analysis of variance, followed by the Tukey HSD post hoc test (α = 0.05). Some statistically significant differences in compressive strength were found among the commercial brands (P < .05), both before and after aging. Comparison of each material before and after AAA revealed that the aging process did not influence the compressive strength of the tested resins (P = 0.785).

  17. Method for data compression by associating complex numbers with files of data values

    Science.gov (United States)

    Feo, John Thomas; Hanks, David Carlton; Kraay, Thomas Arthur

    1998-02-10

    A method for compressing data for storage or transmission. Given a complex polynomial and a value assigned to each root, a root generated data file (RGDF) is created, one entry at a time. Each entry is mapped to a point in a complex plane. An iterative root finding technique is used to map the coordinates of the point to the coordinates of one of the roots of the polynomial. The value associated with that root is assigned to the entry. An equational data compression (EDC) method reverses this procedure. Given a target data file, the EDC method uses a search algorithm to calculate a set of m complex numbers and a value map that will generate the target data file. The error between a simple target data file and generated data file is typically less than 10%. Data files can be transmitted or stored without loss by transmitting the m complex numbers, their associated values, and an error file whose size is at most one-tenth of the size of the input data file.
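The root-basin mapping described above can be sketched with a Newton iteration (a standard root-finding choice; the patent does not specify which iterative technique is used, and the roots, values, and starting point below are purely illustrative):

```python
import numpy as np

def newton_root_index(z, roots, iters=50):
    """Map a complex starting point to the index of the root its
    Newton iteration converges to (an illustrative reconstruction of
    the root-finding step; not the patented algorithm itself)."""
    coeffs = np.poly(roots)           # polynomial having the given roots
    dcoeffs = np.polyder(coeffs)
    for _ in range(iters):
        f = np.polyval(coeffs, z)
        df = np.polyval(dcoeffs, z)
        if df == 0:
            break
        z = z - f / df
    # assign the entry the value attached to the nearest root
    return int(np.argmin(np.abs(np.asarray(roots) - z)))

# toy example: 3 roots (approximate cube roots of unity), each carrying a value
roots = [1 + 0j, -0.5 + 0.866j, -0.5 - 0.866j]
values = [10, 20, 30]
entry = newton_root_index(0.9 + 0.1j, roots)
print(values[entry])   # starting point lies in the basin of root 0 -> 10
```

Each point of the complex plane thus yields one data value; transmitting only the roots and their values regenerates the file.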

  18. Stabilization of Gob-Side Entry with an Artificial Side for Sustaining Mining Work

    Directory of Open Access Journals (Sweden)

    Hong-sheng Wang

    2016-07-01

    Full Text Available A concrete artificial side (AS) is introduced to stabilize a gob-side entry (GSE). To evaluate the stability of the AS, a uniaxial compression failure experiment was conducted with large- and small-scale specimens. The distribution characteristics of the shear stress were obtained from a numerical simulation. Based on the failure characteristics and the variation of the shear stress, a failure criterion was determined and implemented in the strengthening method for the artificial side. In the experimental test, the distribution pattern of the maximum shear stress showed an X shape, which contributed to the failure shape of the specimen. The shear stress distribution and failure shape are induced by a combination of two sets of shear stresses, which implies that failure of the AS follows the twin shear strength theory. The use of anchor bolts, bolts, and anchor bars enhances the shear strength of the artificial side. When this side is stable, the components can constrain the lateral deformation as well as improve the internal friction angle and cohesion. When the AS is damaged, the components prevent the sliding of broken blocks along the shear failure plane and improve the residual strength of the artificial side. When reinforced with an anchor bar, the AS remains stable even after three years of mining operations.

  19. Neural Network for Principal Component Analysis with Applications in Image Compression

    Directory of Open Access Journals (Sweden)

    Luminita State

    2007-04-01

    Full Text Available Classical feature extraction and data projection methods have been extensively investigated in the pattern recognition and exploratory data analysis literature. Feature extraction and multivariate data projection help avoid the "curse of dimensionality", improve the generalization ability of classifiers, and significantly reduce the computational requirements of pattern classifiers. During the past decade a large number of artificial neural networks and learning algorithms have been proposed for solving feature extraction problems, most of them adaptive in nature and well suited to the many real environments where an adaptive approach is required. Principal Component Analysis, also called the Karhunen-Loeve transform, is a well-known statistical method for feature extraction, data compression and multivariate data projection, and so far it has been broadly used in a large series of signal and image processing, pattern recognition and data analysis applications.
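As a minimal illustration of PCA/Karhunen-Loeve compression (computed here via SVD, the standard numerical route, rather than the neural network the paper discusses; the image, sizes, and rank are arbitrary):

```python
import numpy as np

# a synthetic low-rank-ish "image" to compress
rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64)) @ rng.standard_normal((64, 64))

# keep only the top-k principal components
U, s, Vt = np.linalg.svd(img, full_matrices=False)
k = 8
approx = U[:, :k] * s[:k] @ Vt[:k, :]

# stored numbers: k*(64 + 64 + 1) instead of 64*64
ratio = (64 * 64) / (k * (64 + 64 + 1))
err = np.linalg.norm(img - approx) / np.linalg.norm(img)
print(round(ratio, 2))
```

The reconstruction error `err` shrinks as `k` grows, trading compression ratio against fidelity.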

  20. Radiological Image Compression

    Science.gov (United States)

    Lo, Shih-Chung Benedict

    The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression techniques: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, which include CT head and body images and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square-error (NMSE) on the difference image, defined as the difference between the original and the image reconstructed at a given compression ratio, is used as a global measurement of the quality of the reconstructed image. The NMSE's of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
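The NMSE figure of merit used above can be sketched as follows (one common normalization, summed squared difference over summed squared original; the dissertation's exact definition may differ, and the sample arrays are illustrative):

```python
import numpy as np

def nmse(original, reconstructed):
    """Normalized mean-square error: energy of the difference image
    divided by the energy of the original image."""
    original = np.asarray(original, dtype=float)
    diff = original - np.asarray(reconstructed, dtype=float)
    return np.sum(diff ** 2) / np.sum(original ** 2)

# toy 2x2 "images": original vs a slightly distorted reconstruction
a = np.array([[100.0, 50.0], [25.0, 75.0]])
b = np.array([[ 99.0, 52.0], [24.0, 74.0]])
print(round(nmse(a, b), 6))   # small value -> reconstruction close to original
```

A lossless coder would give an NMSE of exactly zero; irreversible coders trade NMSE against compression ratio.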

  1. Artificial life and Piaget.

    Science.gov (United States)

    Mueller, Ulrich; Grobman, K H.

    2003-04-01

    Artificial life provides important theoretical and methodological tools for the investigation of Piaget's developmental theory. This new method uses artificial neural networks to simulate living phenomena in a computer. A recent study by Parisi and Schlesinger suggests that artificial life might reinvigorate the Piagetian framework. We contrast artificial life with traditional cognitivist approaches, discuss the role of innateness in development, and examine the relation between physiological and psychological explanations of intelligent behaviour.

  2. Optimization of Error-Bounded Lossy Compression for Hard-to-Compress HPC Data

    Energy Technology Data Exchange (ETDEWEB)

    Di, Sheng; Cappello, Franck

    2018-01-01

    Since today’s scientific applications are producing vast amounts of data, compressing them before storage/transmission is critical. Results of existing compressors show two types of HPC data sets: highly compressible and hard to compress. In this work, we carefully design and optimize the error-bounded lossy compression for hard-to-compress scientific data. We propose an optimized algorithm that can adaptively partition the HPC data into best-fit consecutive segments each having mutually close data values, such that the compression condition can be optimized. Another significant contribution is the optimization of shifting offset such that the XOR-leading-zero length between two consecutive unpredictable data points can be maximized. We finally devise an adaptive method to select the best-fit compressor at runtime for maximizing the compression factor. We evaluate our solution using 13 benchmarks based on real-world scientific problems, and we compare it with 9 other state-of-the-art compressors. Experiments show that our compressor can always guarantee the compression errors within the user-specified error bounds. Most importantly, our optimization can improve the compression factor effectively, by up to 49% for hard-to-compress data sets with similar compression/decompression time cost.
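The XOR-leading-zero idea above can be illustrated on IEEE-754 doubles (a toy sketch, not the paper's optimized algorithm): two nearby values share their high-order bits, so their XOR begins with a long run of zero bits that need not be stored.

```python
import struct

def leading_zero_bits(a, b):
    """Number of leading zero bits in the XOR of the 64-bit
    representations of two doubles."""
    ia = struct.unpack('<Q', struct.pack('<d', a))[0]
    ib = struct.unpack('<Q', struct.pack('<d', b))[0]
    x = ia ^ ib
    return 64 if x == 0 else 64 - x.bit_length()

print(leading_zero_bits(1.0, 1.0))              # identical values: all 64 bits agree
print(leading_zero_bits(1.0, 1.0000001) > 20)   # close values: long shared prefix
```

Shifting one operand (as the paper proposes) aims to lengthen exactly this shared prefix between consecutive unpredictable points.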

  3. Highly Efficient Compression Algorithms for Multichannel EEG.

    Science.gov (United States)

    Shaw, Laxmi; Rahman, Daleef; Routray, Aurobinda

    2018-05-01

    The difficulty associated with processing and understanding the high dimensionality of electroencephalogram (EEG) data requires developing efficient and robust compression algorithms. In this paper, different lossless compression techniques of single and multichannel EEG data, including Huffman coding, arithmetic coding, Markov predictor, linear predictor, context-based error modeling, multivariate autoregression (MVAR), and a low complexity bivariate model have been examined and their performances have been compared. Furthermore, a high compression algorithm named general MVAR and a modified context-based error modeling for multichannel EEG have been proposed. The resulting compression algorithm produces a higher relative compression ratio of 70.64% on average compared with the existing methods, and in some cases, it goes up to 83.06%. The proposed methods are designed to compress a large amount of multichannel EEG data efficiently so that the data storage and transmission bandwidth can be effectively used. These methods have been validated using several experimental multichannel EEG recordings of different subjects and publicly available standard databases. The satisfactory parametric measures of these methods, namely percent-root-mean square distortion, peak signal-to-noise ratio, root-mean-square error, and cross correlation, show their superiority over the state-of-the-art compression methods.
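Huffman coding, the first of the lossless building blocks listed above, can be sketched as follows (illustrative only; the paper's multichannel predictors and error models are far more elaborate, and the sample data are hypothetical quantized amplitudes):

```python
import heapq
from collections import Counter

def huffman_code(data):
    """Build a prefix code: repeatedly merge the two least frequent
    nodes; left children get '0', right children get '1'."""
    heap = [[w, i, [sym, ""]] for i, (sym, w) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0], i] + lo[2:] + hi[2:])
        i += 1
    return dict(heap[0][2:])

samples = [0, 0, 0, 0, 1, 1, 2, 3]      # toy quantized EEG samples
code = huffman_code(samples)
bits = sum(len(code[s]) for s in samples)
print(bits)                              # fewer than the 16 bits of a fixed 2-bit code
```

Skewed sample distributions, common in prediction residuals, are exactly where such entropy coders pay off.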

  4. SOLVING TRANSPORT LOGISTICS PROBLEMS IN A VIRTUAL ENTERPRISE THROUGH ARTIFICIAL INTELLIGENCE METHODS

    OpenAIRE

    PAVLENKO, Vitaliy; PAVLENKO, Tetiana; MOROZOVA, Olga; KUZNETSOVA, Anna; VOROPAI, Olena

    2017-01-01

    The paper offers a solution to the problem of material flow allocation within a virtual enterprise by using artificial intelligence methods. The research is based on the use of fuzzy relations when planning for optimal transportation modes to deliver components for manufactured products. The Fuzzy Logic Toolbox is used to determine the optimal route for transportation of components for manufactured products. The methods offered have been exemplified in the present research. The authors have b...

  5. The Use of Compressed Air for Micro-Jet Cooling After MIG Welding

    Directory of Open Access Journals (Sweden)

    Hadryś D.

    2016-09-01

    Full Text Available The material selected for this investigation was a low-alloy steel weld metal deposit (WMD) after MIG welding with micro-jet cooling. The investigation was aimed at the following tasks: obtaining WMD with various amounts of acicular ferrite, and analyzing the impact toughness of the WMD in terms of the amount of acicular ferrite in it. For the first time, WMD was produced by MIG welding with micro-jet cooling using compressed air and a gas mixture of argon and air; until then, only argon, helium and nitrogen had been tested as micro-jet gases for MIG/MAG processes. Methods of artificial intelligence can play an important role in the interpretation of the results.

  6. A method for predicting the impact velocity of a projectile fired from a compressed air gun facility

    International Nuclear Information System (INIS)

    Attwood, G.J.

    1988-03-01

    This report describes the development and use of a method for calculating the velocity at impact of a projectile fired from a compressed air gun. The method is based on a simple but effective approach which has been incorporated into a computer program. The method was developed principally for use with the Horizontal Impact Facility at AEE Winfrith but has been adapted so that it can be applied to any compressed air gun of a similar design. The method has been verified by comparison of predicted velocities with test data and the program is currently being used in a predictive manner to specify test conditions for the Horizontal Impact Facility at Winfrith. (author)
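A sketch of the kind of simple model such a program might use, assuming isothermal ideal-gas expansion down the barrel and neglecting friction and air resistance (the function, its parameters, and all numbers below are illustrative assumptions, not values from the report or the Winfrith facility):

```python
import math

def muzzle_velocity(p0, v0, area, barrel_len, mass, p_atm=101325.0):
    """Impact/muzzle velocity from energy balance: isothermal gas
    expansion work minus the work done against atmospheric pressure.
    p0, v0: reservoir pressure (Pa) and volume (m^3); area: bore (m^2);
    barrel_len: barrel length (m); mass: projectile mass (kg)."""
    v1 = v0 + area * barrel_len
    work = p0 * v0 * math.log(v1 / v0) - p_atm * area * barrel_len
    return math.sqrt(max(2.0 * work / mass, 0.0))

v = muzzle_velocity(p0=2.0e6, v0=0.05, area=0.01, barrel_len=10.0, mass=50.0)
print(round(v, 1))   # m/s
```

Verification against test data, as done in the report, would calibrate such a model's loss terms.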

  7. Compressed sensing & sparse filtering

    CERN Document Server

    Carmi, Avishy Y; Godsill, Simon J

    2013-01-01

    This book is aimed at presenting concepts, methods and algorithms able to cope with undersampled and limited data. One such trend that recently gained popularity, and to some extent revolutionised signal processing, is compressed sensing. Compressed sensing builds upon the observation that many signals in nature are nearly sparse (or compressible, as they are normally referred to) in some domain, and consequently they can be reconstructed to within high accuracy from far fewer observations than traditionally held to be necessary. Apart from compressed sensing this book contains other related app

  8. Giant negative linear compression positively coupled to massive thermal expansion in a metal-organic framework.

    Science.gov (United States)

    Cai, Weizhao; Katrusiak, Andrzej

    2014-07-04

    Materials with negative linear compressibility are sought for various technological applications. Such effects have been reported mainly in framework materials. When heated, these typically contract along the same direction that exhibits negative linear compressibility. Here we show that this common inverse-relationship rule does not apply to a three-dimensional metal-organic framework crystal, [Ag(ethylenediamine)]NO3. In this material, the direction of the largest intrinsic negative linear compressibility yet observed in metal-organic frameworks coincides with the strongest positive thermal expansion. In the perpendicular direction, the large negative linear thermal expansion and the strongest crystal compressibility are collinear. This seemingly irrational positive relationship between temperature and pressure effects is explained, and the mechanism coupling compressibility with expansivity is presented. The positive coupling between compression and thermal expansion in this material enhances its piezo-mechanical response in adiabatic processes, which may be used for designing new artificial composites and ultrasensitive measuring devices.

  9. Increasing Lift by Releasing Compressed Air on Suction Side of Airfoil

    Science.gov (United States)

    Seewald, F

    1927-01-01

    The investigation was limited chiefly to the region of high angles of attack since it is only in this region that any considerable change in the character of the flow can be expected from such artificial aids. The slot, through which compressed air was blown, was formed by two pieces of sheet steel connected by screws at intervals of about 5 cm. It was intended to regulate the width of the slot by means of these screws. Much more compressed air was required than was originally supposed, hence all the delivery pipes were much too small. This experiment, therefore, is to be regarded as only a preliminary one.

  10. Research of Block-Based Motion Estimation Methods for Video Compression

    Directory of Open Access Journals (Sweden)

    Tropchenko Andrey

    2016-08-01

    Full Text Available This work is a review of the block-based algorithms used for motion estimation in video compression. It examines different types of block-based algorithms, ranging from the simplest, Full Search, to fast adaptive algorithms such as Hierarchical Search. The algorithms evaluated in this paper are widely accepted by the video compression community and have been used in implementing various standards, such as MPEG-4 Visual and H.264. The work also presents a very brief introduction to the overall flow of video compression.
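The baseline Full Search algorithm mentioned above exhaustively tests every candidate displacement in a search window and keeps the one minimizing a matching cost, here the sum of absolute differences (SAD); block size, search radius, and frames below are toy choices:

```python
import numpy as np

def full_search(ref, cur, top, left, bsize=4, radius=2):
    """Exhaustive block matching: find the displacement (dy, dx) of the
    current block within +/-radius in the reference frame, minimizing SAD."""
    block = cur[top:top + bsize, left:left + bsize]
    best, best_sad = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue  # candidate window falls outside the frame
            sad = np.abs(ref[y:y + bsize, x:x + bsize].astype(int)
                         - block.astype(int)).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best

ref = np.zeros((16, 16), dtype=np.uint8)
ref[4:8, 4:8] = 255                              # bright block in the reference frame
cur = np.roll(ref, shift=(1, 2), axis=(0, 1))    # frame shifted down 1, right 2
print(full_search(ref, cur, top=5, left=6))      # recovered motion vector
```

Fast algorithms such as Hierarchical Search prune this exhaustive scan, trading a little accuracy for large speedups.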

  11. Key technical issues associated with a method of pulse compression. Final technical report

    International Nuclear Information System (INIS)

    Hunter, R.O. Jr.

    1980-06-01

    Key technical issues for angular multiplexing as a method of pulse compression in a 100 kJ KrF laser have been studied. Environmental issues studied include seismic vibrations, man-made vibrations, air propagation, turbulence, and thermal gradient-induced density fluctuations. These studies have been incorporated in the design of mirror mounts and an alignment system, both of which are reported. A design study and performance analysis of the final amplifier have been undertaken. The pulse compression optical train has been designed and assessed as to its performance. Individual components are described, and analytical relationships between optical component size, surface quality, damage threshold, and final-focus properties are derived. The optical train's primary aberrations are obtained and a method for aberration minimization is presented. Cost algorithms for the mirrors, mounts, and electrical hardware are integrated into a cost model to determine system costs as a function of pulse length, aperture size, and spot size.

  12. A Novel CAE Method for Compression Molding Simulation of Carbon Fiber-Reinforced Thermoplastic Composite Sheet Materials

    Directory of Open Access Journals (Sweden)

    Yuyang Song

    2018-06-01

    Full Text Available High specific strength and stiffness at lower cost make discontinuous fiber-reinforced thermoplastic (FRT) materials an ideal choice for lightweight applications in the automotive industry. Compression molding is one of the preferred manufacturing processes for such materials as it offers the opportunity to maintain a longer fiber length and higher-volume production. In the past, we have demonstrated that compression molding of FRT in bulk form can be simulated by treating the melt flow as a continuum using the conservation of mass and momentum equations. However, the compression molding of such materials in sheet form using a similar approach does not work well. The assumption of melt flow as a continuum does not hold for such deformation processes. To address this challenge, we have developed a novel simulation approach. First, the draping of the sheet was simulated as a structural deformation using the explicit finite element approach. Next, the draped shape was compressed using fluid mechanics equations. The proposed method was verified by building a physical part and comparing the predicted fiber orientation and warpage with measurements performed on the physical parts. The developed method and tools are expected to help expedite the development of FRT parts, which will help achieve lightweight targets in the automotive industry.

  13. Evaluation of mammogram compression efficiency

    International Nuclear Information System (INIS)

    Przelaskowski, A.; Surowski, P.; Kukula, A.

    2005-01-01

    Lossy image coding significantly improves performance over lossless methods, but reliable control of the diagnostic accuracy of compressed images is necessary. The acceptable range of compression ratios must be safe with respect to as many objective criteria as possible. This study evaluates the compression efficiency of digital mammograms in both numerically lossless (reversible) and lossy (irreversible) manners. Effective compression methods and concepts were examined to increase archiving and telediagnosis performance. Lossless compression, as a primary applicable tool for medical applications, was verified on a set of 131 mammograms. Moreover, nine radiologists participated in the evaluation of lossy compression of mammograms. Subjective rating of diagnostically important features produced a set of mean rates for each test image. The lesion detection test resulted in binary decision data that were analyzed statistically. The radiologists rated and interpreted malignant and benign lesions, representative pathology symptoms, and other structures susceptible to compression distortions contained in 22 original and 62 reconstructed mammograms. Test mammograms were collected in two radiology centers over three years and then selected according to diagnostic content suitable for an evaluation of compression effects. Lossless compression efficiency of the tested coders varied, but CALIC, JPEG-LS, and SPIHT performed the best. The evaluation of lossy compression effects on detection ability was based on ROC-like analysis. Assuming a two-sided significance level of p=0.05, the null hypothesis that lower-bit-rate reconstructions are as useful for diagnosis as the originals was rejected in sensitivity tests with 0.04 bpp mammograms. However, verification of the same hypothesis with 0.1 bpp reconstructions suggested their acceptance. Moreover, the 1 bpp reconstructions were rated very similarly to the original mammograms in the diagnostic quality evaluation test, but the

  14. Fast lossless compression via cascading Bloom filters.

    Science.gov (United States)

    Rozov, Roye; Shamir, Ron; Halperin, Eran

    2014-01-01

    Data from large Next Generation Sequencing (NGS) experiments present challenges both in terms of costs associated with storage and in time required for file transfer. It is sometimes possible to store only a summary relevant to particular applications, but generally it is desirable to keep all information needed to revisit experimental results in the future. Thus, the need for efficient lossless compression methods for NGS reads arises. It has been shown that NGS-specific compression schemes can improve results over generic compression methods, such as the Lempel-Ziv algorithm, Burrows-Wheeler transform, or Arithmetic Coding. When a reference genome is available, effective compression can be achieved by first aligning the reads to the reference genome, and then encoding each read using the alignment position combined with the differences in the read relative to the reference. These reference-based methods have been shown to compress better than reference-free schemes, but the alignment step they require demands several hours of CPU time on a typical dataset, whereas reference-free methods can usually compress in minutes. We present a new approach that achieves highly efficient compression by using a reference genome, but completely circumvents the need for alignment, affording a great reduction in the time needed to compress. In contrast to reference-based methods that first align reads to the genome, we hash all reads into Bloom filters to encode, and decode by querying the same Bloom filters using read-length subsequences of the reference genome. Further compression is achieved by using a cascade of such filters. Our method, called BARCODE, runs an order of magnitude faster than reference-based methods, while compressing an order of magnitude better than reference-free methods, over a broad range of sequencing coverage. In high coverage (50-100 fold), compared to the best tested compressors, BARCODE saves 80-90% of the running time while only increasing space
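The core Bloom-filter encode/query mechanism that BARCODE builds on can be sketched as follows (vastly simplified; the real tool cascades several filters, decodes by sliding read-length windows over the reference, and records false positives separately; sizes and reads below are toy values):

```python
import hashlib

class Bloom:
    """Minimal Bloom filter: k hash positions per item in an m-bit array."""
    def __init__(self, m=1 << 16, k=3):
        self.m, self.k, self.bits = m, k, bytearray(m // 8)

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

bf = Bloom()
for read in ["ACGTACGT", "TTGACCA", "GGGTTTA"]:   # toy sequencing reads
    bf.add(read)
print("ACGTACGT" in bf)   # an added read is always found
# a read that was never added is rejected with overwhelming probability
```

Storing only the bit array (plus an error stream for the rare false positives) is what makes the approach so compact and alignment-free.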

  15. Triaxial- and uniaxial-compression testing methods developed for extraction of pore water from unsaturated tuff, Yucca Mountain, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    Mower, T.E.; Higgins, J.D. [Colorado School of Mines, Golden, CO (USA). Dept. of Geology and Geological Engineering; Yang, I.C. [Geological Survey, Denver, CO (USA). Water Resources Div.

    1989-12-31

    To support the study of the hydrologic system in the unsaturated zone at Yucca Mountain, Nevada, two extraction methods were examined for obtaining representative, uncontaminated pore-water samples from unsaturated tuff. Results indicate that triaxial compression, which uses a standard cell, can remove pore water from nonwelded tuff that has an initial moisture content greater than 11% by weight; uniaxial compression, which uses a specially fabricated cell, can extract pore water from nonwelded tuff that has an initial moisture content greater than 8% and from welded tuff that has an initial moisture content greater than 6.5%. For the ambient moisture conditions of Yucca Mountain tuffs, uniaxial compression is the more efficient method of pore-water extraction. 12 refs., 7 figs., 2 tabs.

  16. Compressible flow modelling in unstructured mesh topologies using numerical methods developed for incompressible flows

    International Nuclear Information System (INIS)

    Caruso, A.; Mechitoua, N.; Duplex, J.

    1995-01-01

    The R and D thermal hydraulic codes, notably the finite difference codes Melodie (2D) and ESTET (3D) and the 2D and 3D versions of the finite element code N3S, were initially developed for incompressible, possibly dilatable, turbulent flows, i.e. those where density is not pressure-dependent. Subsequent minor modifications to these finite difference code algorithms enabled extension of their scope to subsonic compressible flows. The first applications, in both single-phase and two-phase flow contexts, have now been completed. This paper presents the techniques used to adapt these algorithms for the processing of compressible flows in an N3S-type finite element code, whereby complex geometries normally difficult to model in finite difference meshes can be successfully dealt with. The development of version 3.0 of the N3S code led to dilatable flow calculations at lower cost. On this basis, a 2D prototype version of N3S was programmed, tested and validated, drawing maximum benefit from Cray vectorization possibilities and from physical, numerical and data processing experience with other fluid dynamics codes, such as Melodie, ESTET or TELEMAC. The algorithms are the same as those used in finite difference codes, but their formulation is variational. The first part of the paper deals with the fundamental equations involved, expressed in basic form, together with the associated numerical method. The modifications to the k-epsilon turbulence model extended to compressible flows are also described. The second part presents the algorithm used, indicating the additional terms required by the extension. The third part presents the equations in integral form and the associated matrix systems. The solutions adopted for calculation of the compressibility-related terms are indicated. Finally, a few representative applications and test cases are discussed. These include subsonic, but also transonic and supersonic cases, showing the shock responses of the numerical method.
The application of

  17. Nuclear power plant monitoring and fault diagnosis methods based on the artificial intelligence technique

    International Nuclear Information System (INIS)

    Yoshikawa, S.; Saiki, A.; Ugolini, D.; Ozawa, K.

    1996-01-01

    The main objective of this paper is to develop an advanced diagnosis system based on artificial intelligence techniques to monitor the operation and improve the operational safety of nuclear power plants. Three different methods have been elaborated in this study: an artificial neural network local diagnosis (NNds) scheme that, acting at the component level, discriminates between normal and abnormal transients; a model-based diagnostic reasoning mechanism built on a physical causal network model; and a knowledge compiler (KC) that generates applicable diagnostic rules from widely accepted physical knowledge. Although the three methods have been developed and verified independently, they are highly correlated and, when connected together, form an effective and robust diagnosis and monitoring tool. (authors)

  18. Computer calculations of compressibility of natural gas

    Energy Technology Data Exchange (ETDEWEB)

    Abou-Kassem, J.H.; Mattar, L.; Dranchuk, P.M

    An alternative method for the calculation of pseudo reduced compressibility of natural gas is presented. The method is incorporated into the routines by adding a single FORTRAN statement before the RETURN statement. The method is suitable for computer and hand-held calculator applications. It produces the same reduced compressibility as other available methods but is computationally superior. Tabular definitions of coefficients and comparisons of predicted pseudo reduced compressibility using different methods are presented, along with appended FORTRAN subroutines. 7 refs., 2 tabs.
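The quantity being computed is the pseudo-reduced compressibility, commonly defined as c_pr = 1/p_pr - (1/z)(dz/dp_pr). A hedged sketch of that relation, using a numerical derivative of a user-supplied z-factor correlation in place of the paper's single analytic FORTRAN statement (the toy correlation below is illustrative, not Dranchuk's fit):

```python
def pseudo_reduced_compressibility(z_of_ppr, ppr, h=1e-5):
    """c_pr = 1/p_pr - (1/z) * dz/dp_pr, with dz/dp_pr taken by a
    central finite difference of the supplied z-factor correlation."""
    z = z_of_ppr(ppr)
    dz_dppr = (z_of_ppr(ppr + h) - z_of_ppr(ppr - h)) / (2 * h)
    return 1.0 / ppr - dz_dppr / z

# toy linear z-factor correlation (illustrative only)
z_toy = lambda ppr: 1.0 - 0.06 * ppr
print(round(pseudo_reduced_compressibility(z_toy, 2.0), 4))
```

The actual compressibility then follows by dividing by the pseudo-critical pressure, c = c_pr / p_pc.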

  19. Estimating Penetration Resistance in Agricultural Soils of Ardabil Plain Using Artificial Neural Network and Regression Methods

    Directory of Open Access Journals (Sweden)

    Gholam Reza Sheykhzadeh

    2017-02-01

    Full Text Available Introduction: Penetration resistance is one of the criteria for evaluating soil compaction. It correlates with several soil properties such as vehicle trafficability, resistance to root penetration, seedling emergence, and soil compaction by farm machinery. Direct measurement of penetration resistance is time-consuming and difficult because of high temporal and spatial variability. Therefore, many different regression and artificial neural network pedotransfer functions have been proposed to estimate penetration resistance from readily available soil variables such as particle size distribution, bulk density (Db) and gravimetric water content (θm). The lands of Ardabil Province are one of the main potato production regions of Iran; thus, obtaining the soil penetration resistance in these regions helps with the management of potato production. The objective of this research was to derive pedotransfer functions, using regression and artificial neural networks, to predict penetration resistance from some soil variables in the agricultural soils of the Ardabil plain, and to compare the performance of the artificial neural networks with the regression models. Materials and methods: Disturbed and undisturbed soil samples (n = 105) were systematically taken from the 0-10 cm soil depth at roughly 3000 m spacing in the agricultural lands of the Ardabil plain (lat 38°15' to 38°40' N, long 48°16' to 48°61' E). The contents of sand, silt and clay (hydrometer method), CaCO3 (titration method), bulk density (cylinder method), particle density Dp (pycnometer method), organic carbon (wet oxidation method), total porosity (calculated from Db and Dp), and saturated (θs) and field (θf) soil water contents (gravimetric method) were measured in the laboratory. Mean geometric diameter (dg) and standard deviation (σg) of soil particles were computed using the percentages of sand, silt and clay. Penetration resistance was measured in situ using a cone penetrometer (analog model) at 10
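The geometric mean diameter dg and geometric standard deviation σg mentioned above are commonly computed from the texture fractions as follows (a Shirazi-Boersma-style sketch; the representative particle diameters are assumptions here, and the paper's constants may differ):

```python
import math

def dg_sigma(sand, silt, clay, d=(1.025, 0.026, 0.001)):
    """Geometric mean diameter (mm) and geometric standard deviation of
    soil particles from sand/silt/clay percentages, using assumed
    representative diameters d for the three fractions (mm)."""
    f = [sand / 100, silt / 100, clay / 100]          # mass fractions
    a = sum(fi * math.log(di) for fi, di in zip(f, d))
    b = sum(fi * math.log(di) ** 2 for fi, di in zip(f, d)) - a ** 2
    return math.exp(a), math.exp(math.sqrt(b))

dg, sg = dg_sigma(sand=40, silt=40, clay=20)          # a loam-like texture
print(round(dg, 4), round(sg, 2))
```

Such derived variables are convenient inputs for both the regression and neural network pedotransfer functions.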

  20. Unidirectional Expiratory Valve Method to Assess Maximal Inspiratory Pressure in Individuals without Artificial Airway.

    Directory of Open Access Journals (Sweden)

    Samantha Torres Grams

    Full Text Available Maximal Inspiratory Pressure (MIP) is considered an effective method to estimate the strength of the inspiratory muscles, but still leads to false positive diagnoses. Although MIP assessment with the unidirectional expiratory valve method has been used in patients undergoing mechanical ventilation, no previous studies investigated the application of this method in subjects without an artificial airway. This study aimed to compare the MIP values assessed by the standard method (MIPsta) and by the unidirectional expiratory valve method (MIPuni) in spontaneously breathing subjects without an artificial airway. MIPuni reproducibility was also evaluated. This was a crossover design study, and 31 subjects performed MIPsta and MIPuni in random order. MIPsta measured MIP by maintaining negative pressure for at least one second after forceful expiration. MIPuni evaluated MIP using a unidirectional expiratory valve attached to a face mask and was conducted by two evaluators (A and B) at two moments (Tests 1 and 2) to determine the interobserver and intraobserver reproducibility of the MIP values. The intraclass correlation coefficient (ICC[2,1]) was used to determine intraobserver and interobserver reproducibility. The mean values for MIPuni were 14.3% higher (-117.3 ± 24.8 cmH2O) than the mean values for MIPsta (-102.5 ± 23.9 cmH2O) (p<0.001). Interobserver reproducibility assessment showed very high correlation for Test 1 (ICC[2,1] = 0.91) and high correlation for Test 2 (ICC[2,1] = 0.88). The assessment of intraobserver reproducibility showed high correlation for evaluator A (ICC[2,1] = 0.86) and evaluator B (ICC[2,1] = 0.77). MIPuni presented higher values than MIPsta and proved to be reproducible in spontaneously breathing subjects without an artificial airway.

  1. A Modified SPH Method for Dynamic Failure Simulation of Heterogeneous Material

    Directory of Open Access Journals (Sweden)

    G. W. Ma

    2014-01-01

    Full Text Available A modified smoothed particle hydrodynamics (SPH) method is applied to simulate the failure process of heterogeneous materials. An elastoplastic damage model based on an extension of the unified twin shear strength (UTSS) criterion is adopted. Polycrystalline modeling is introduced to generate the artificial microstructure of the specimen for dynamic simulations of the Brazilian splitting test and the uniaxial compression test. The strain rate effect on the predicted dynamic tensile and compressive strength is discussed. The final failure patterns and the dynamic strength increments show good agreement with experimental results. It is illustrated that the polycrystalline modeling approach combined with the SPH method is promising for simulating more complex failure processes of heterogeneous materials.

  2. A practical discrete-adjoint method for high-fidelity compressible turbulence simulations

    International Nuclear Information System (INIS)

    Vishnampet, Ramanathan; Bodony, Daniel J.; Freund, Jonathan B.

    2015-01-01

    Methods and computing hardware advances have enabled accurate predictions of complex compressible turbulence phenomena, such as the generation of jet noise that motivates the present effort. However, limited understanding of underlying physical mechanisms restricts the utility of such predictions since they do not, by themselves, indicate a route to design improvements. Gradient-based optimization using adjoints can circumvent the flow complexity to guide designs, though this is predicated on the availability of a sufficiently accurate solution of the forward and adjoint systems. These are challenging to obtain, since both the chaotic character of the turbulence and the typical use of discretizations near their resolution limits in order to efficiently represent its smaller scales will amplify any approximation errors made in the adjoint formulation. Formulating a practical exact adjoint that avoids such errors is especially challenging if it is to be compatible with state-of-the-art simulation methods used for the turbulent flow itself. Automatic differentiation (AD) can provide code to calculate a nominally exact adjoint, but existing general-purpose AD codes are inefficient to the point of being prohibitive for large-scale turbulence simulations. Here, we analyze the compressible flow equations as discretized using the same high-order workhorse methods used for many high-fidelity compressible turbulence simulations, and formulate a practical space–time discrete-adjoint method without changing the basic discretization. A key step is the definition of a particular discrete analog of the continuous norm that defines our cost functional; our selection leads directly to an efficient Runge–Kutta-like scheme, though it would be just first-order accurate if used outside the adjoint formulation for time integration, with finite-difference spatial operators for the adjoint system. Its computational cost only modestly exceeds that of the flow equations. We confirm that
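
    The phrase "discrete adjoint" can be made concrete on a toy problem. The sketch below is not the paper's space-time scheme; it is a minimal linear-map illustration, in which the adjoint recursion runs the transposed step backwards and reproduces the exact gradient of the discrete cost:

```python
import numpy as np

def forward(A, x0, nsteps):
    """Forward recursion x_{k+1} = A x_k; returns the whole trajectory."""
    xs = [x0]
    for _ in range(nsteps):
        xs.append(A @ xs[-1])
    return xs

def cost(A, x0, nsteps):
    """Discrete cost functional J = 0.5 * ||x_N||^2."""
    xN = forward(A, x0, nsteps)[-1]
    return 0.5 * float(xN @ xN)

def adjoint_gradient(A, x0, nsteps):
    """Exact discrete gradient dJ/dx0: each adjoint step mirrors a forward step."""
    xs = forward(A, x0, nsteps)
    lam = xs[-1]              # dJ/dx_N for J = 0.5 ||x_N||^2
    for _ in range(nsteps):
        lam = A.T @ lam       # transposed (adjoint) step, run in reverse
    return lam
```

    Because the adjoint is derived from the discretization itself (not from discretizing a continuous adjoint PDE), it matches finite differences of the discrete cost to machine precision, which is the property the abstract is after.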

  3. Comparison of three artificial digestion methods for detection of non-encapsulated Trichinella pseudospiralis larvae in pork

    DEFF Research Database (Denmark)

    Nockler, K.; Reckinger, S.; Szabo, I.

    2009-01-01

    In a ring trial involving five laboratories (A, B, C, D, and E), three different methods of artificial digestion were compared for the detection of non-encapsulated Trichinella pseudospiralis larvae in minced meat. Each sample panel consisted of ten 1 g minced pork samples. All samples in each panel...... were derived from a bulk meat preparation with a nominal value of either 7 or 17 larvae per g (lpg). Samples were tested for the number of muscle larvae using the magnetic stirrer method (labs A, B, and E), the stomacher method (lab B), and Trichomatic 35 (R) (labs C and D). T. pseudospiralis larvae were...... by using the magnetic stirrer method (22%), followed by the stomacher method (25%), and Trichomatic 35 (R) (30%). Results revealed that T. pseudospiralis larvae in samples with a nominal value of 7 and 17 lpg can be detected by all three methods of artificial digestion....

  4. Monitoring of operation with artificial intelligence methods; Betriebsueberwachung mit Verfahren der Kuenstlichen Intelligenz

    Energy Technology Data Exchange (ETDEWEB)

    Bruenninghaus, H. [DMT-Gesellschaft fuer Forschung und Pruefung mbH, Essen (Germany). Geschaeftsbereich Systemtechnik

    1999-03-11

    Taking the applications `early detection of fires` and `reduction of bursts of messages` as examples, the usability of artificial intelligence (AI) methods in the monitoring of operations was examined in an R and D project. The contribution describes the concept, development and evaluation of solutions to the specified problems. As a basis for the project, a platform had to be created which made it possible to investigate different AI methods (in particular artificial neural networks). At the same time, ventilation data had to be acquired and processed by the networks for the classification. (orig.) [German original: Using the application cases `early detection of fires` and `reduction of bursts of messages` as examples, the usability of artificial intelligence (AI) methods in the monitoring of operations was examined within an R and D project. The contribution presents the concept, development and evaluation of solution approaches for the specified tasks. As a basis for the project, AI methods (specifically, artificial neural networks) were applied to acquired and processed ventilation data to classify the relationships between the ventilation measuring points along the air path. (orig.)]

  5. Comparison of the effectiveness of compression stockings and layer compression systems in venous ulceration treatment

    Science.gov (United States)

    Jawień, Arkadiusz; Cierzniakowska, Katarzyna; Cwajda-Białasik, Justyna; Mościcka, Paulina

    2010-01-01

    Introduction The aim of the research was to compare the dynamics of venous ulcer healing when treated with the use of compression stockings as well as original two- and four-layer bandage systems. Material and methods A group of 46 patients suffering from venous ulcers was studied. This group consisted of 36 (78.3%) women and 10 (21.7%) men aged between 41 and 88 years (the average age was 66.6 years and the median was 67). Patients were randomized into three groups, for treatment with the ProGuide two-layer system, Profore four-layer compression, and compression stockings class II. In the case of multi-layer compression, compression ensuring 40 mmHg pressure at ankle level was used. Results In all patients, independently of the type of compression therapy, statistically significant changes of the ulceration area in time were observed (Student’s t test for matched pairs, p < 0.05). The largest reduction of the ulceration area in each of the successive measurements was observed in patients treated with the four-layer system – on average 0.63 cm2 per week. The smallest loss of ulceration area was observed in patients using compression stockings – on average 0.44 cm2 per week. However, the observed differences were not statistically significant (Kruskal-Wallis test H = 4.45, p > 0.05). Conclusions A systematic compression therapy, applied with a preliminary pressure of 40 mmHg, is an effective method of conservative treatment of venous ulcers. Compression stockings and the prepared systems of multi-layer compression were characterized by similar clinical effectiveness. PMID:22419941

  6. An artificial nonlinear diffusivity method for supersonic reacting flows with shocks

    Science.gov (United States)

    Fiorina, B.; Lele, S. K.

    2007-03-01

    A computational approach for modeling interactions between shock waves, contact discontinuities and reaction zones with a high-order compact scheme is investigated. To prevent the formation of spurious oscillations around shocks, an artificial nonlinear viscosity [A.W. Cook, W.H. Cabot, A high-wavenumber viscosity for high resolution numerical method, J. Comput. Phys. 195 (2004) 594-601] based on a high-order derivative of the strain rate tensor is used. To capture temperature and species discontinuities a nonlinear diffusivity based on the entropy gradient is added. It is shown that the damping of 'wiggles' is controlled by the model constants and is largely independent of the mesh size and the shock strength. The same holds for the numerical shock thickness, which allows a determination of the L2 error. In the shock tube problem, with fluids of different initial entropy separated by the diaphragm, an artificial diffusivity is required to accurately capture the contact surface. Finally, the method is applied to a shock wave propagating into a medium with non-uniform density/entropy and to a CJ detonation wave. A multi-dimensional formulation of the model is presented and is illustrated by a 2D oblique wave reflection from an inviscid wall, by a 2D supersonic blunt body flow and by a Mach reflection problem.
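
    The key idea, a diffusivity that switches on only where a high-order derivative is large, can be sketched in one dimension. This is an illustrative reading of the approach, not the paper's formulation: the constant C, the dimensional scaling, and the simple 5-point smoothing below are arbitrary stand-ins for the truncated-Gaussian filter used in practice.

```python
import numpy as np

def artificial_diffusivity(u, dx, C=1.0):
    """Localized artificial diffusivity: large near discontinuities in u,
    negligible where u is smooth (sensor = 4th-derivative magnitude)."""
    # Discrete 4th derivative with periodic boundaries
    d4 = (np.roll(u, 2) - 4 * np.roll(u, 1) + 6 * u
          - 4 * np.roll(u, -1) + np.roll(u, -2)) / dx**4
    sensor = np.abs(d4)
    # Crude symmetric smoothing in place of a truncated-Gaussian filter
    kernel = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
    kernel /= kernel.sum()
    smoothed = sum(w * np.roll(sensor, s) for w, s in zip(kernel, range(-2, 3)))
    return C * dx**4 * smoothed   # scaling chosen for this sketch only
```

    On a piecewise-constant profile the sensor is exactly zero away from the jumps, so the added dissipation acts only in a few cells around each discontinuity, which is why the scheme's resolution of smooth turbulence is unaffected.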

  7. A Comparative Study of Compression Methods and the Development of CODEC Program of Biological Signal for Emergency Telemedicine Service

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, T.S.; Kim, J.S. [Changwon National University, Changwon (Korea); Lim, Y.H. [Visionite Co., Ltd., Seoul (Korea); Yoo, S.K. [Yonsei University, Seoul (Korea)

    2003-05-01

    In an emergency telemedicine system such as the High-quality Multimedia based Real-time Emergency Telemedicine (HMRET) service, it is very important to examine the status of the patient continuously using the multimedia data including the biological signals (ECG, BP, respiration, SpO2) of the patient. In order to transmit these data in real time through communication means that have limited transmission capacity, it is also necessary to compress the biological data besides the other multimedia data. For this purpose, we investigate and compare ECG compression techniques in the time domain and in the wavelet transform domain, and present an effective lossless compression method for the biological signals using the JPEG Huffman table for an emergency telemedicine system. And, for the HMRET service, we developed the lossless compression and reconstruction program of the biological signals in MSVC++ 6.0 using the DPCM method and the JPEG Huffman table, and tested it in an internet environment. (author). 15 refs., 17 figs., 7 tabs.
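
    The DPCM stage of such a codec is simple to illustrate: transmit first-difference residuals (which cluster near zero and entropy-code well) and reconstruct the signal exactly by cumulative summation. A minimal sketch, not the authors' MSVC++ implementation, and without the Huffman stage:

```python
import numpy as np

def dpcm_encode(signal):
    """DPCM residuals: first sample kept verbatim, then first differences."""
    residuals = np.empty_like(signal)
    residuals[0] = signal[0]
    residuals[1:] = np.diff(signal)
    return residuals

def dpcm_decode(residuals):
    """Exact (lossless) reconstruction by cumulative summation of residuals."""
    return np.cumsum(residuals)
```

    For a slowly varying biosignal the residuals occupy a much narrower range than the raw samples, which is what makes the subsequent Huffman coding effective.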

  8. JPEG and wavelet compression of ophthalmic images

    Science.gov (United States)

    Eikelboom, Robert H.; Yogesan, Kanagasingam; Constable, Ian J.; Barry, Christopher J.

    1999-05-01

    This study was designed to determine the degree and methods of digital image compression that produce ophthalmic images of sufficient quality for transmission and diagnosis. The photographs of 15 subjects, which included eyes with normal, subtle and distinct pathologies, were digitized to produce 1.54 MB images and compressed to five different levels. The images were then assessed in three ways: (i) objectively, by calculating the RMS error between the uncompressed and compressed images, (ii) semi-subjectively, by assessing the visibility of blood vessels, and (iii) subjectively, by asking a number of experienced observers to assess the images for quality and clinical interpretation. Results showed that, as a function of compressed image size, wavelet compressed images produced less RMS error than JPEG compressed images. Blood vessel branching could be observed to a greater extent after wavelet compression than after JPEG compression for a given image size. Overall, it was shown that images had to be compressed to below 2.5 percent for JPEG and 1.7 percent for wavelet compression before fine detail was lost, or image quality became too poor to make a reliable diagnosis.
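
    The objective criterion used above is the RMS pixel error between the uncompressed and compressed images. A sketch of that metric (the exact normalisation used in the study is not stated here):

```python
import numpy as np

def rms_error(original, compressed):
    """Root-mean-square pixel error between two equally sized images."""
    a = np.asarray(original, dtype=float)
    b = np.asarray(compressed, dtype=float)
    return float(np.sqrt(np.mean((a - b) ** 2)))
```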

  9. The direct Discontinuous Galerkin method for the compressible Navier-Stokes equations on arbitrary grids

    Science.gov (United States)

    Yang, Xiaoquan; Cheng, Jian; Liu, Tiegang; Luo, Hong

    2015-11-01

    The direct discontinuous Galerkin (DDG) method based on a traditional discontinuous Galerkin (DG) formulation is extended and implemented for solving the compressible Navier-Stokes equations on arbitrary grids. Compared to the widely used second Bassi-Rebay (BR2) scheme for the discretization of diffusive fluxes, the DDG method has two attractive features: first, it is simple to implement as it is directly based on the weak form, and therefore there is no need for any local or global lifting operator; second, it can deliver comparable results, if not better than BR2 scheme, in a more efficient way with much less CPU time. Two approaches to perform the DDG flux for the Navier- Stokes equations are presented in this work, one is based on conservative variables, the other is based on primitive variables. In the implementation of the DDG method for arbitrary grid, the definition of mesh size plays a critical role as the formation of viscous flux explicitly depends on the geometry. A variety of test cases are presented to demonstrate the accuracy and efficiency of the DDG method for discretizing the viscous fluxes in the compressible Navier-Stokes equations on arbitrary grids.

  10. Artificial Evolution for the Detection of Group Identities in Complex Artificial Societies

    DEFF Research Database (Denmark)

    Grappiolo, Corrado; Togelius, Julian; Yannakakis, Georgios N.

    2013-01-01

    This paper aims at detecting the presence of group structures in complex artificial societies by solely observing and analysing the interactions occurring among the artificial agents. Our approach combines: (1) an unsupervised method for clustering interactions into two possible classes, namely in...

  11. The production of fully deacetylated chitosan by compression method

    Directory of Open Access Journals (Sweden)

    Xiaofei He

    2016-03-01

    Full Text Available Chitosan’s activities are significantly affected by its degree of deacetylation (DDA), while fully deacetylated chitosan is difficult to produce on a large scale. Therefore, this paper introduces a compression method for preparing 100% deacetylated chitosan with less environmental pollution. The product is characterized by XRD, FT-IR, UV and HPLC. The 100% fully deacetylated chitosan is produced under low-concentration alkali and high-pressure conditions, which only require a 15% alkali solution and a 1:10 chitosan powder to NaOH solution ratio under 0.11–0.12 MPa for 120 min. When the alkali concentration is varied from 5% to 15%, chitosan with an ultra-high DDA value (up to 95%) is produced.

  12. Incorporation of omics analyses into artificial gravity research for space exploration countermeasure development.

    Science.gov (United States)

    Schmidt, Michael A; Goodwin, Thomas J; Pelligra, Ralph

    The next major steps in human spaceflight include flyby, orbital, and landing missions to the Moon, Mars, and near earth asteroids. The first crewed deep space mission is expected to launch in 2022, which affords less than 7 years to address the complex question of whether and how to apply artificial gravity to counter the effects of prolonged weightlessness. Various phenotypic changes are demonstrated during artificial gravity experiments. However, the molecular dynamics (genotype and molecular phenotypes) that underlie these morphological, physiological, and behavioral phenotypes are far more complex than previously understood. Thus, targeted molecular assessment of subjects under various G conditions can be expected to miss important patterns of molecular variance that inform the more general phenotypes typically being measured. Use of omics methods can help detect changes across broad molecular networks, as various G-loading paradigms are applied. This will be useful in detecting off-target, or unanticipated effects of the different gravity paradigms applied to humans or animals. Insights gained from these approaches may eventually be used to inform countermeasure development or refine the deployment of existing countermeasures. This convergence of the omics and artificial gravity research communities may be critical if we are to develop the proper artificial gravity solutions under the severely compressed timelines currently established. Thus, the omics community may offer a unique ability to accelerate discovery, provide new insights, and benefit deep space missions in ways that have not been previously considered.

  13. A comparative analysis of the cryo-compression and cryo-adsorption hydrogen storage methods

    Energy Technology Data Exchange (ETDEWEB)

    Petitpas, G [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Benard, P [Universite du Quebec a Trois-Rivieres (Canada); Klebanoff, L E [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Xiao, J [Universite du Quebec a Trois-Rivieres (Canada); Aceves, S M [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-07-01

    While conventional low-pressure LH₂ dewars have existed for decades, advanced methods of cryogenic hydrogen storage have recently been developed. These advanced methods are cryo-compression and cryo-adsorption hydrogen storage, which operate best in the temperature range 30–100 K. We present a comparative analysis of both approaches for cryogenic hydrogen storage, examining how pressure and/or sorbent materials are used to effectively increase onboard H₂ density and dormancy. We start by reviewing some basic aspects of LH₂ properties and conventional means of storing it. From there we describe the cryo-compression and cryo-adsorption hydrogen storage methods, and then explore the relationship between them, clarifying the materials science and physics of the two approaches in trying to solve the same hydrogen storage task (~5–8 kg H₂, typical of light duty vehicles). Assuming that the balance of plant and the available volume for the storage system in the vehicle are identical for both approaches, the comparison focuses on how the respective storage capacities, vessel weight and dormancy vary as a function of temperature, pressure and type of cryo-adsorption material (especially, powder MOF-5 and MIL-101). By performing a comparative analysis, we clarify the science of each approach individually, identify the regimes where the attributes of each can be maximized, elucidate the properties of these systems during refueling, and probe the possible benefits of a combined “hybrid” system with both cryo-adsorption and cryo-compression phenomena operating at the same time. In addition the relationships found between onboard H₂ capacity, pressure vessel and/or sorbent mass and dormancy as a function of rated pressure, type of sorbent material and fueling conditions are useful as general designing guidelines in future engineering efforts using these two hydrogen storage approaches.

  14. Reasoning methods in medical consultation systems: artificial intelligence approaches.

    Science.gov (United States)

    Shortliffe, E H

    1984-01-01

    It has been argued that the problem of medical diagnosis is fundamentally ill-structured, particularly during the early stages when the number of possible explanations for presenting complaints can be immense. This paper discusses the process of clinical hypothesis evocation, contrasts it with the structured decision making approaches used in traditional computer-based diagnostic systems, and briefly surveys the more open-ended reasoning methods that have been used in medical artificial intelligence (AI) programs. The additional complexity introduced when an advice system is designed to suggest management instead of (or in addition to) diagnosis is also emphasized. Example systems are discussed to illustrate the key concepts.

  15. Efficient predictive algorithms for image compression

    CERN Document Server

    Rosário Lucas, Luís Filipe; Maciel de Faria, Sérgio Manuel; Morais Rodrigues, Nuno Miguel; Liberal Pagliari, Carla

    2017-01-01

    This book discusses efficient prediction techniques for the current state-of-the-art High Efficiency Video Coding (HEVC) standard, focusing on the compression of a wide range of video signals, such as 3D video, Light Fields and natural images. The authors begin with a review of the state-of-the-art predictive coding methods and compression technologies for both 2D and 3D multimedia contents, which provides a good starting point for new researchers in the field of image and video compression. New prediction techniques that go beyond the standardized compression technologies are then presented and discussed. In the context of 3D video, the authors describe a new predictive algorithm for the compression of depth maps, which combines intra-directional prediction, with flexible block partitioning and linear residue fitting. New approaches are described for the compression of Light Field and still images, which enforce sparsity constraints on linear models. The Locally Linear Embedding-based prediction method is in...

  16. Probabilistic machine learning and artificial intelligence.

    Science.gov (United States)

    Ghahramani, Zoubin

    2015-05-28

    How can a machine learn from experience? Probabilistic modelling provides a framework for understanding what learning is, and has therefore emerged as one of the principal theoretical and practical approaches for designing machines that learn from data acquired through experience. The probabilistic framework, which describes how to represent and manipulate uncertainty about models and predictions, has a central role in scientific data analysis, machine learning, robotics, cognitive science and artificial intelligence. This Review provides an introduction to this framework, and discusses some of the state-of-the-art advances in the field, namely, probabilistic programming, Bayesian optimization, data compression and automatic model discovery.

  17. Probabilistic machine learning and artificial intelligence

    Science.gov (United States)

    Ghahramani, Zoubin

    2015-05-01

    How can a machine learn from experience? Probabilistic modelling provides a framework for understanding what learning is, and has therefore emerged as one of the principal theoretical and practical approaches for designing machines that learn from data acquired through experience. The probabilistic framework, which describes how to represent and manipulate uncertainty about models and predictions, has a central role in scientific data analysis, machine learning, robotics, cognitive science and artificial intelligence. This Review provides an introduction to this framework, and discusses some of the state-of-the-art advances in the field, namely, probabilistic programming, Bayesian optimization, data compression and automatic model discovery.

  18. Assessing artificial neural networks and statistical methods for infilling missing soil moisture records

    Science.gov (United States)

    Dumedah, Gift; Walker, Jeffrey P.; Chik, Li

    2014-07-01

    Soil moisture information is critically important for water management operations including flood forecasting, drought monitoring, and groundwater recharge estimation. While an accurate and continuous record of soil moisture is required for these applications, the available soil moisture data, in practice, is typically fraught with missing values. There are a wide range of methods available to infilling hydrologic variables, but a thorough inter-comparison between statistical methods and artificial neural networks has not been made. This study examines 5 statistical methods including monthly averages, weighted Pearson correlation coefficient, a method based on temporal stability of soil moisture, and a weighted merging of the three methods, together with a method based on the concept of rough sets. Additionally, 9 artificial neural networks are examined, broadly categorized into feedforward, dynamic, and radial basis networks. These 14 infilling methods were used to estimate missing soil moisture records and subsequently validated against known values for 13 soil moisture monitoring stations for three different soil layer depths in the Yanco region in southeast Australia. The evaluation results show that the top three highest performing methods are the nonlinear autoregressive neural network, rough sets method, and monthly replacement. A high estimation accuracy (root mean square error (RMSE) of about 0.03 m/m) was found in the nonlinear autoregressive network, due to its regression based dynamic network which allows feedback connections through discrete-time estimation. An equally high accuracy (0.05 m/m RMSE) in the rough sets procedure illustrates the important role of temporal persistence of soil moisture, with the capability to account for different soil moisture conditions.
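
    Among the statistical baselines compared above, monthly replacement is the easiest to make concrete: a gap is filled with the mean of observed values from the same calendar month, and the estimate is scored by RMSE against withheld known values. A toy sketch under assumed data, not the study's code:

```python
import numpy as np

def rmse(estimate, truth):
    """Root-mean-square error between infilled estimates and known values."""
    return float(np.sqrt(np.mean((np.asarray(estimate) - np.asarray(truth)) ** 2)))

def infill_monthly_mean(series, months, missing):
    """Fill each missing index with the mean of observed values from the same month."""
    series = np.asarray(series, dtype=float)
    filled = series.copy()
    missing = set(missing)
    for i in missing:
        donors = [series[j] for j in range(len(series))
                  if months[j] == months[i] and j not in missing]
        filled[i] = np.mean(donors)
    return filled
```

    The higher-performing methods in the study (the nonlinear autoregressive network, rough sets) improve on this by exploiting temporal persistence rather than the climatological mean alone.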

  19. Short-Time Structural Stability of Compressible Vortex Sheets with Surface Tension

    Science.gov (United States)

    Stevens, Ben

    2016-11-01

    Assume we start with an initial vortex-sheet configuration which consists of two inviscid fluids with density bounded below flowing smoothly past each other, where a strictly positive fixed coefficient of surface tension produces a surface tension force across the common interface, balanced by the pressure jump. We model the fluids by the compressible Euler equations in three space dimensions with a very general equation of state relating the pressure, entropy and density such that the sound speed is positive. We prove that, for a short time, there exists a unique solution of the equations with the same structure. The mathematical approach consists of introducing a carefully chosen artificial viscosity-type regularisation which allows one to linearise the system so as to obtain a collection of transport equations for the entropy, pressure and curl together with a parabolic-type equation for the velocity which becomes fairly standard after rotating the velocity according to the interface normal. We prove a high order energy estimate for the non-linear equations that is independent of the artificial viscosity parameter which allows us to send it to zero. This approach loosely follows that introduced by Shkoller et al. in the setting of a compressible liquid-vacuum interface. Although already considered by Coutand et al. [10] and Lindblad [17], we also make some brief comments on the case of a compressible liquid-vacuum interface, which is obtained from the vortex sheets problem by replacing one of the fluids by vacuum, where it is possible to obtain a structural stability result even without surface tension.

  20. A Schur complement method for compressible two-phase flow models

    International Nuclear Information System (INIS)

    Dao, Thu-Huyen; Ndjinga, Michael; Magoules, Frederic

    2014-01-01

    In this paper, we will report our recent efforts to apply a Schur complement method for nonlinear hyperbolic problems. We use the finite volume method and an implicit version of the Roe approximate Riemann solver. With the interface variable introduced in [4] in the context of single phase flows, we are able to simulate two-fluid models ([12]) with various schemes such as upwind, centered or Rusanov. Moreover, we introduce a scaling strategy to improve the condition number of both the interface system and the local systems. Numerical results for the isentropic two-fluid model and the compressible Navier-Stokes equations in various 2D and 3D configurations and various schemes show that our method is robust and efficient. The scaling strategy considerably reduces the number of GMRES iterations in both interface system and local system resolutions. Comparisons of performances with classical distributed computing with up to 218 processors are also reported. (authors)
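
    For a block system, the Schur complement reduces the global solve to a smaller interface problem plus local back-substitution. A dense-algebra sketch of the idea (the paper applies it to distributed finite-volume systems with GMRES and scaling, which is not reproduced here):

```python
import numpy as np

def schur_solve(A, B, C, D, f, g):
    """Solve [[A, B], [C, D]] [x; y] = [f; g] via the Schur complement of A.

    Interface system: (D - C A^{-1} B) y = g - C A^{-1} f, then back-substitute x.
    """
    Ainv_B = np.linalg.solve(A, B)
    Ainv_f = np.linalg.solve(A, f)
    S = D - C @ Ainv_B                      # Schur complement (interface operator)
    y = np.linalg.solve(S, g - C @ Ainv_f)  # small interface solve
    x = Ainv_f - Ainv_B @ y                 # local back-substitution
    return x, y
```

    In a domain-decomposition setting, A collects the local subdomain blocks (solved in parallel) and S couples only the interface unknowns, which is why conditioning of the interface system dominates iteration counts.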

  1. A discrete fibre dispersion method for excluding fibres under compression in the modelling of fibrous tissues.

    Science.gov (United States)

    Li, Kewei; Ogden, Ray W; Holzapfel, Gerhard A

    2018-01-01

    Recently, micro-sphere-based methods derived from the angular integration approach have been used for excluding fibres under compression in the modelling of soft biological tissues. However, recent studies have revealed that many of the widely used numerical integration schemes over the unit sphere are inaccurate for large deformation problems even without excluding fibres under compression. Thus, in this study, we propose a discrete fibre dispersion model based on a systematic method for discretizing a unit hemisphere into a finite number of elementary areas, such as spherical triangles. Over each elementary area, we define a representative fibre direction and a discrete fibre density. Then, the strain energy of all the fibres distributed over each elementary area is approximated based on the deformation of the representative fibre direction weighted by the corresponding discrete fibre density. A summation of fibre contributions over all elementary areas then yields the resultant fibre strain energy. This treatment allows us to exclude fibres under compression in a discrete manner by evaluating the tension-compression status of the representative fibre directions only. We have implemented this model in a finite-element programme and illustrate it with three representative examples, including simple tension and simple shear of a unit cube, and non-homogeneous uniaxial extension of a rectangular strip. The results of all three examples are consistent and accurate compared with the previously developed continuous fibre dispersion model, and that is achieved with a substantial reduction of computational cost. © 2018 The Author(s).

  2. EFFECTIVENESS OF ADJUVANT USE OF POSTERIOR MANUAL COMPRESSION WITH GRADED COMPRESSION IN THE SONOGRAPHIC DIAGNOSIS OF ACUTE APPENDICITIS

    Directory of Open Access Journals (Sweden)

    Senthilnathan V

    2018-01-01

    Full Text Available BACKGROUND Diagnosing appendicitis by graded compression ultrasonogram is a difficult task because of limiting factors such as the operator-dependent technique, retrocaecal location of the appendix and patient obesity. The posterior manual compression technique visualizes the appendix better in the grey-scale ultrasonogram. The aim of this study is to determine the accuracy of ultrasound in detecting or excluding acute appendicitis and to evaluate the usefulness of the adjuvant use of the posterior manual compression technique in visualization of the appendix and in the diagnosis of acute appendicitis. MATERIALS AND METHODS This prospective study involved a total of 240 patients of all age groups and both sexes. All these patients underwent USG for suspected appendicitis. Ultrasonography was performed with transverse and longitudinal graded compression sonography. If the appendix was not visualized on graded compression sonography, the posterior manual compression technique was used to further improve the detection of the appendix. RESULTS The vermiform appendix was visualized in 185 patients (77.1%) out of 240 with graded compression alone. The 55 out of 240 patients whose appendix could not be visualized by graded compression alone were subjected to graded compression followed by the posterior manual compression technique; the appendix was visualized in 43 of these patients on the posterior manual compression technique, amounting to 78.2% of cases, and could not be visualized in the remaining 12 patients (21.8% out of 55). CONCLUSION The combined method of graded compression with the posterior manual compression technique is better than the graded compression technique alone in diagnostic accuracy and detection rate of the vermiform appendix.

  3. Proposed Sandia frequency shift for anti-islanding detection method based on artificial immune system

    Directory of Open Access Journals (Sweden)

    A.Y. Hatata

    2018-03-01

    Full Text Available Sandia frequency shift (SFS) is one of the active anti-islanding detection methods that depend on frequency drift to detect an islanding condition for inverter-based distributed generation. The non-detection zone (NDZ) of the SFS method depends to a great extent on its parameters; improper adjustment of these parameters may cause the method to fail. This paper presents a proposed artificial immune system (AIS)-based technique to obtain optimal parameters of the SFS anti-islanding detection method. The immune system is highly distributed, highly adaptive, and self-organizing in nature; it maintains a memory of past encounters and has the ability to continually learn about new ones. The proposed method generates less total harmonic distortion (THD) than the conventional SFS, which results in faster islanding detection and a smaller non-detection zone. The performance of the proposed method is derived analytically and simulated using Matlab/Simulink. Two case studies are used to verify the proposed method: the first includes a photovoltaic (PV) system connected to the grid and the second a wind turbine connected to the grid. The deduced optimized parameter setting helps to achieve the “non-islanding inverter” with the least potential adverse impact on power quality. Keywords: Anti-islanding detection, Sandia frequency shift (SFS), Non-detection zone (NDZ), Total harmonic distortion (THD), Artificial immune system (AIS), Clonal selection algorithm
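
    The SFS positive-feedback mechanism can be sketched in a few lines. This is the textbook drift law cf = cf0 + k(f - f_grid), not the AIS-optimized parameters of the paper; the loop gain, trip threshold, and initial frequency below are illustrative assumptions:

```python
def sfs_chopping_fraction(f_meas, f_grid=50.0, cf0=0.05, k=0.1):
    """Sandia frequency shift chopping fraction: cf = cf0 + k * (f - f_grid)."""
    return cf0 + k * (f_meas - f_grid)

def islanding_detect(f0=50.2, f_grid=50.0, cf0=0.05, k=0.1,
                     load_gain=0.5, f_trip=51.5, max_steps=100):
    """Toy islanded loop: with the grid gone, the chopping fraction drags the
    frequency through positive feedback until it crosses the over-frequency
    trip threshold. Returns the detection time in steps, or None."""
    f = f0
    for step in range(max_steps):
        cf = sfs_chopping_fraction(f, f_grid, cf0, k)
        f = f + load_gain * cf            # islanded frequency drifts with cf
        if f > f_trip:
            return step + 1
    return None
```

    In a grid-connected state the drift is counteracted by the stiff grid frequency, which is why a well-tuned cf0 and k trade NDZ size against THD injected during normal operation.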

  4. Chapter 22: Compressed Air Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    Energy Technology Data Exchange (ETDEWEB)

    Kurnik, Charles W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Benton, Nathanael [Nexant, Inc., San Francisco, CA (United States); Burns, Patrick [Nexant, Inc., San Francisco, CA (United States)

    2017-10-18

    Compressed-air systems are used widely throughout industry for many operations, including pneumatic tools, packaging and automation equipment, conveyors, and other industrial process operations. Compressed-air systems are defined as a group of subsystems composed of air compressors, air treatment equipment, controls, piping, pneumatic tools, pneumatically powered machinery, and process applications using compressed air. A compressed-air system has three primary functional subsystems: supply, distribution, and demand. Air compressors are the primary energy consumers in a compressed-air system and are the primary focus of this protocol. The two compressed-air energy efficiency measures specifically addressed in this protocol are: High-efficiency/variable speed drive (VSD) compressor replacing modulating, load/unload, or constant-speed compressor; and Compressed-air leak survey and repairs. This protocol provides direction on how to reliably verify savings from these two measures using a consistent approach for each.
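
    For the leak-survey measure, the first-order savings arithmetic is simple. The sketch below is not taken from the protocol: the specific power figure (kW per 100 cfm delivered) is an assumed typical value for a rotary screw compressor, and continuous operation is assumed:

```python
def leak_savings_kwh(leak_cfm, specific_power_kw_per_100cfm=18.0,
                     annual_hours=8760):
    """First-order energy savings from repairing leaks: the power needed to
    supply the leaked flow, times annual operating hours."""
    kw = leak_cfm * specific_power_kw_per_100cfm / 100.0
    return kw * annual_hours
```

    A verified protocol would instead derive the compressor's actual specific power from measured amp or kW data at the observed load points.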

  5. Mechanical Characterization and Numerical Modelling of Rubber Shockpads in 3G Artificial Turf

    Directory of Open Access Journals (Sweden)

    David Cole

    2018-02-01

    Full Text Available The use of third-generation (3G) artificial turf systems in sporting applications is increasingly prolific. These multi-component systems comprise a range of polymeric and elastomeric materials that exhibit non-linear and strain-rate-dependent behaviours under the complex loads applied by players and equipment. To further study and better understand the behaviour of these systems, the development of a numerical model that accurately predicts the behaviour of individual layers as well as the overall system response under different loading conditions is necessary. The purpose of this study was to characterise and model the mechanical behaviour of a rubber shockpad found in 3G artificial surfaces for vertical shock absorption using finite element analysis. A series of uniaxial compression tests was performed to characterise the mechanical behaviour of the shockpad. Compression loading was performed at 0.9 Hz to match human walking speeds. A Microfoam material model was selected from the PolyUMod library and optimised using MCalibration software before being imported into ABAQUS for analysis. A finite element model of the shockpad was created in ABAQUS and a compressive load applied to match that of the experimental data. Friction coefficients were altered to observe the effect on the loading response. The accuracy of the model was assessed using a series of comparative measures, including the energy loss and root mean square error.

  6. Calculation of the energy provided by a PV generator. Comparative study: Conventional methods vs. artificial neural networks

    International Nuclear Information System (INIS)

    Almonacid, F.; Rus, C.; Perez-Higueras, P.; Hontoria, L.

    2011-01-01

    The use of photovoltaics for electricity generation purposes has recorded one of the largest increases in the field of renewable energies. The energy production of a grid-connected PV system depends on various factors. In a broad sense, the annual energy provided by a generator is considered directly proportional to the annual radiation incident on the plane of the generator and to the installed nominal power. However, a range of factors reduces the energy actually generated below this expectation. The aim of this study is to compare the results of four different methods for estimating the annual energy produced by a PV generator: three of them are classical methods and the fourth is based on an artificial neural network developed by the R and D Group for Solar and Automatic Energy at the University of Jaen. The results obtained show that the method based on an artificial neural network provides better results than the alternative classical methods under study, mainly because it also takes into account some second-order effects, such as low-irradiance, angular and spectral effects. -- Research highlights: → It is considered that the annual energy provided by a PV generator is directly proportional to the annual radiation incident on the plane of the generator and to the installed nominal power. → A range of factors reduces the energy actually generated (mismatch losses, dirt and dust, Ohmic losses, etc.). → The aim of this study is to compare the results of four different methods for estimating the annual energy produced by a PV generator: three of them are classical methods and the fourth is based on an artificial neural network. → The results obtained show that the method based on an artificial neural network provides better results than the alternative classical methods under study: while the classical methods only take temperature losses into account, the neural-network-based method also accounts for low-irradiance, angular and spectral effects.

  7. Context-Aware Image Compression.

    Directory of Open Access Journals (Sweden)

    Jacky C K Chan

    Full Text Available We describe a physics-based data compression method inspired by the photonic time stretch, wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementations of warped stretch compression, here the decoding can be performed without the need for phase recovery. We present rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling.
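
    The core idea of feature-dependent sampling can be sketched digitally. This is a conceptual Python toy, not the photonic implementation: sampling density is made proportional to the local gradient magnitude (a crude proxy for information content), and the floor value 1e-3 is an assumption to keep the density strictly positive:

```python
import numpy as np

def warped_downsample(signal, n_out):
    """Sample densely where the local gradient is large, sparsely elsewhere,
    by warping a uniform grid through the cumulative density function."""
    x = np.arange(len(signal), dtype=float)
    density = np.abs(np.gradient(signal)) + 1e-3       # info-rich -> high density
    cdf = np.cumsum(density)
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])          # normalize to [0, 1]
    targets = np.linspace(0.0, 1.0, n_out)
    xs = np.interp(targets, cdf, x)                    # non-uniform sample positions
    return xs, np.interp(xs, x, signal)
```

    Decoding in this toy amounts to interpolating back onto a uniform grid from the known sample positions, which is why no phase recovery is involved.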

  8. Compresso: Efficient Compression of Segmentation Data for Connectomics

    KAUST Repository

    Matejek, Brian

    2017-09-03

    Recent advances in segmentation methods for connectomics and biomedical imaging produce very large datasets with labels that assign object classes to image pixels. The resulting label volumes are bigger than the raw image data and need compression for efficient storage and transfer. General-purpose compression methods are less effective because the label data consists of large low-frequency regions with structured boundaries unlike natural image data. We present Compresso, a new compression scheme for label data that outperforms existing approaches by using a sliding window to exploit redundancy across border regions in 2D and 3D. We compare our method to existing compression schemes and provide a detailed evaluation on eleven biomedical and image segmentation datasets. Our method provides a factor of 600–2200x compression for label volumes, with running times suitable for practice.

  9. Reinforcement Learning Based Artificial Immune Classifier

    Directory of Open Access Journals (Sweden)

    Mehmet Karakose

    2013-01-01

    Full Text Available Artificial immune systems are among the widely used methods for classification, a decision-making process. Based on the natural immune system, they can be successfully applied to classification, optimization, recognition, and learning in real-world problems. In this study, a reinforcement-learning-based artificial immune classifier is proposed as a new approach. This approach uses reinforcement learning to find better antibodies with immune operators. The proposed approach offers several advantages over other methods in the literature, such as effectiveness, fewer memory cells, high accuracy, speed, and data adaptability. Its performance is demonstrated by simulation and experimental results using real data in Matlab and on an FPGA. Benchmark data and remote image data are used for the experimental results. Comparative results with supervised/unsupervised artificial immune systems, a negative selection classifier, and a resource-limited artificial immune classifier are given to demonstrate the effectiveness of the proposed new method.

  10. Prediction of zeolite-cement-sand unconfined compressive strength using polynomial neural network

    Science.gov (United States)

    MolaAbasi, H.; Shooshpasha, I.

    2016-04-01

    The improvement of local soils with cement and zeolite can provide great benefits, including strengthening slopes in slope-stability problems, stabilizing problematic soils and preventing soil liquefaction. Recently, dosage methodologies have been developed for improved soils based on a rational criterion, as exists in concrete technology. Numerous earlier studies have shown that Unconfined Compressive Strength (UCS) and cemented sand (CS) parameters (voids/cement ratio) can be related by power-function fits. Since the existing equations are incapable of estimating the UCS of zeolite-cemented sand mixtures (ZCS) well, artificial intelligence methods are used for forecasting it. A polynomial-type neural network is applied to estimate the UCS from more simply determined index properties such as zeolite and cement content, porosity, and curing time. In order to assess the merits of the proposed approach, a total of 216 unconfined compression tests were carried out. A comparison between the experimentally measured UCS values and the predictions is used to evaluate the performance of the current method. The results demonstrate that the generalized polynomial-type neural network has a great ability to predict the UCS. Finally, a sensitivity analysis of the polynomial model is performed to study the influence of the input parameters on the model output; it reveals that cement and zeolite content have a significant influence on the predicted UCS.
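
    The power-function baseline that the neural network is compared against reduces to a linear regression in log-log space. A minimal sketch with synthetic data (the coefficients below are illustrative, not from the study):

```python
import numpy as np

def fit_power_law(ratio, ucs):
    """Fit UCS = a * (voids/cement)^b by linear regression of log(UCS)
    against log(ratio); returns the coefficients (a, b)."""
    b, log_a = np.polyfit(np.log(ratio), np.log(ucs), 1)
    return np.exp(log_a), b
```

    The paper's point is that a single power law in the voids/cement ratio cannot absorb the extra inputs (zeolite content, curing time), which is where the polynomial network helps.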

  11. An Object-Based Image Analysis Method for Monitoring Land Conversion by Artificial Sprawl Use of RapidEye and IRS Data

    Directory of Open Access Journals (Sweden)

    Maud Balestrat

    2012-02-01

    Full Text Available In France, in the peri-urban context, urban sprawl dynamics are particularly strong, with huge population growth as well as a land crisis. The increase and spreading of built-up areas from the city centre towards the periphery take place to the detriment of natural and agricultural spaces. The conversion of land with agricultural potential is all the more worrying as it is usually irreversible. The French Ministry of Agriculture therefore needs reliable and repeatable spatial-temporal methods to locate and quantify loss of land at both local and national scales. The main objective of this study was to design a repeatable method to monitor land conversion characterized by artificial sprawl: (i) we used object-based image analysis to extract artificial areas from satellite images; (ii) we built an artificial patch by aggregating all the peripheral areas that characterize artificial areas (the “artificialized” patch concept is an innovative extension of the urban patch concept, but differs in the nature of its components and in the continuity distance applied); (iii) the diachronic analysis of artificial patch maps enables characterization of artificial sprawl. The method was applied at the scale of four departments (similar to provinces) along the coast of Languedoc-Roussillon, in the South of France, based on two satellite datasets, one acquired in 1996–1997 (Indian Remote Sensing) and the other in 2009 (RapidEye). In the four departments, we measured an increase in artificial areas from 113,000 ha in 1997 to 133,000 ha in 2009, i.e., an 18% increase in 12 years. The output is a map valid at a scale of 1/15,000, usable at the scale of a commune (the smallest territorial division used for administrative purposes in France) and adaptable to departmental and regional scales. The method is reproducible in homogeneous spatial-temporal terms, so it could be used periodically to assess changes in land conversion.

  12. Extreme compression for extreme conditions: pilot study to identify optimal compression of CT images using MPEG-4 video compression.

    Science.gov (United States)

    Peterson, P Gabriel; Pak, Sung K; Nguyen, Binh; Jacobs, Genevieve; Folio, Les

    2012-12-01

    This study aims to evaluate the utility of compressed computed tomography (CT) studies (to expedite transmission) using Motion Pictures Experts Group, Layer 4 (MPEG-4) movie formatting in combat hospitals when guiding major treatment regimens. This retrospective analysis was approved by the Walter Reed Army Medical Center institutional review board with a waiver of the informed consent requirement. Twenty-five CT chest, abdomen, and pelvis exams were converted from Digital Imaging and Communications in Medicine to MPEG-4 movie format at various compression ratios. Three board-certified radiologists reviewed various levels of compression of emergent CT findings on 25 combat casualties and compared them with the interpretation of the original series. A Universal Trauma Window was selected at a -200 HU level and 1,500 HU width, then compressed at three lossy levels. Sensitivities and specificities for each reviewer were calculated along with 95% confidence intervals using the method of generalized estimating equations. The compression ratios compared were 171:1, 86:1, and 41:1, with combined sensitivities of 90% (95% confidence interval, 79-95), 94% (87-97), and 100% (93-100), respectively. Combined specificities were 100% (85-100), 100% (85-100), and 96% (78-99), respectively. The introduction of CT in combat hospitals, with increasing detectors and image data in recent military operations, has increased the need for effective teleradiology, mandating compression technology. Image compression is currently used to transmit images from combat hospitals to tertiary care centers with subspecialists, and our study demonstrates MPEG-4 technology as a reasonable means of achieving such compression.
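
    The reader-study metrics reported above follow from standard 2x2 counts. A minimal sketch (the counts in the usage example are illustrative, not the study's data):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP), the quantities
    compared per reviewer and per compression level against the original
    interpretation."""
    return tp / (tp + fn), tn / (tn + fp)
```

    The confidence intervals in the study come from generalized estimating equations, which account for the correlation between readings of the same cases by multiple reviewers; that step is beyond this sketch.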

  13. Investigation of GDL compression effects on the performance of a PEM fuel cell cathode by lattice Boltzmann method

    Science.gov (United States)

    Molaeimanesh, G. R.; Nazemian, M.

    2017-08-01

    Proton exchange membrane (PEM) fuel cells, with great potential for application in vehicle propulsion systems, have a promising future. However, to overcome the existing challenges against their wider commercialization, further fundamental research is inevitable. The effects of gas diffusion layer (GDL) compression on the performance of a PEM fuel cell are not well recognized, especially via pore-scale simulation techniques that capture the fibrous microstructure of the GDL. In the current investigation, a stochastic microstructure reconstruction method is proposed that can capture GDL microstructure changes under compression. Afterwards, the lattice Boltzmann pore-scale simulation technique is adopted to simulate the reactive gas flow through 10 different cathode electrodes with dissimilar carbon-paper GDLs produced from five different compression levels and two different carbon fiber diameters. The distributions of oxygen mole fraction, water vapor mole fraction and current density for the simulated cases are presented and analyzed. The simulation results demonstrate that when the fiber diameter is 9 μm, adding compression leads to lower average current density, while when the fiber diameter is 7 μm the compression effect is not monotonic.

  14. Stress analysis of shear/compression test

    International Nuclear Information System (INIS)

    Nishijima, S.; Okada, T.; Ueno, S.

    1997-01-01

    Stress analysis has been performed, by means of the finite element method, on glass fiber reinforced plastics (GFRP) subjected to combined shear and compression stresses. Two types of experimental setup were analyzed, namely the parallel and series methods, in which the specimens were compressed by tilted jigs that enable the combined stresses to be applied to the specimen. A modified Tsai-Hill criterion was employed to judge failure under the combined stresses, that is, the shear strength under compressive stress. Different failure envelopes were obtained for the two setups. In the parallel system the shear strength first increased with compressive stress and then decreased. In the series system, by contrast, the shear strength decreased monotonically with compressive stress. The difference is caused by the different stress distributions due to the different constraint conditions. The basic parameters which control failure under the combined stresses are discussed.

  15. 2D-RBUC for efficient parallel compression of residuals

    Science.gov (United States)

    Đurđević, Đorđe M.; Tartalja, Igor I.

    2018-02-01

    In this paper, we present a method for lossless compression of residuals with efficient SIMD-parallel decompression. The residuals originate from lossy or near-lossless compression of height fields, which are commonly used to represent terrain models. The algorithm is founded on the existing RBUC method for compression of non-uniform data sources. We have adapted the method to capture the 2D spatial locality of height fields, and developed a data decompression algorithm for modern GPU architectures already present even in home computers. In combination with the point-level SIMD-parallel lossless/lossy height field compression method HFPaC, characterized by fast progressive decompression and a seamlessly reconstructed surface, the newly proposed method trades a small efficiency degradation for a non-negligible compression ratio benefit (measured at up to 91%).
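
    The principle behind RBUC-style coding, storing each block of residuals at the minimum bit width that holds the block's largest magnitude, can be sketched in a few lines. This is a simplified flat scheme (one width header per block; the 5-bit header size and sign-bit handling are assumptions), not the hierarchical RBUC layout or its 2D adaptation:

```python
import numpy as np

def rbuc_block_bits(residuals, block=16):
    """Total bits for per-block fixed-width coding: each block stores a 5-bit
    width header plus its samples at the minimum signed width for the block."""
    total = 0
    for i in range(0, len(residuals), block):
        chunk = np.abs(residuals[i:i + block])
        width = int(chunk.max()).bit_length() + 1   # +1 for the sign bit
        total += 5 + width * len(chunk)
    return total
```

    Because most residual blocks are near zero while a few are large, per-block widths beat any single global width; fixed widths within a block are also what makes SIMD-parallel decompression straightforward.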

  16. A Space-Frequency Data Compression Method for Spatially Dense Laser Doppler Vibrometer Measurements

    Directory of Open Access Journals (Sweden)

    José Roberto de França Arruda

    1996-01-01

    Full Text Available When spatially dense mobility shapes are measured with scanning laser Doppler vibrometers, it is often impractical to use phase-separation modal parameter estimation methods due to the excessive number of highly coupled modes and to the prohibitive computational cost of processing huge amounts of data. To deal with this problem, a data compression method using Chebychev polynomial approximation in the frequency domain and two-dimensional discrete Fourier series approximation in the spatial domain is proposed in this article. The proposed space-frequency regressive approach was implemented and verified using a numerical simulation of a free-free-free-free suspended rectangular aluminum plate. To make the simulation more realistic, the mobility shapes were synthesized by modal superposition using mode shapes obtained experimentally with a scanning laser Doppler vibrometer. A reduced and smoothed model, which takes advantage of the sinusoidal spatial pattern and the polynomial frequency-domain pattern of structural mobility shapes, is obtained. From the reduced model, smoothed curves with any desired frequency and spatial resolution can be produced whenever necessary. The procedure can be used either to generate nonmodal models or to compress the measured data prior to modal parameter extraction.

  17. Comparison of transform coding methods with an optimal predictor for the data compression of digital elevation models

    Science.gov (United States)

    Lewis, Michael

    1994-01-01

    Statistical encoding techniques enable the reduction of the number of bits required to encode a set of symbols, and are derived from their probabilities. Huffman encoding is an example of statistical encoding that has been used for error-free data compression. The degree of compression given by Huffman encoding in this application can be improved by the use of prediction methods. These replace the set of elevations by a set of corrections that have a more advantageous probability distribution. In particular, the method of Lagrange Multipliers for minimization of the mean square error has been applied to local geometrical predictors. Using this technique, an 8-point predictor achieved about a 7 percent improvement over an existing simple triangular predictor.
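
    The predictor-fitting step, minimizing the mean square error of a local geometric predictor, is equivalent to solving the least-squares normal equations. The sketch below fits an assumed 3-point predictor (north, west, north-west neighbours) rather than the paper's 8-point one, and uses a synthetic plane rather than real DEM data:

```python
import numpy as np

def fit_predictor(dem):
    """Least-squares fit of z[i,j] ~ a*z[i-1,j] + b*z[i,j-1] + c*z[i-1,j-1];
    returns the coefficients and the prediction residuals (the corrections
    that would then be Huffman-encoded)."""
    z = dem.astype(float)
    X = np.stack([z[:-1, 1:].ravel(),        # north neighbour
                  z[1:, :-1].ravel(),        # west neighbour
                  z[:-1, :-1].ravel()], 1)   # north-west neighbour
    y = z[1:, 1:].ravel()
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, y - X @ coef
```

    A sloped plane is predicted exactly by N + W - NW, so the residuals collapse to zero; on real terrain the residuals are merely concentrated around zero, which is what sharpens the Huffman code's probability distribution.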

  18. Compressing Data Cube in Parallel OLAP Systems

    Directory of Open Access Journals (Sweden)

    Frank Dehne

    2007-03-01

    Full Text Available This paper proposes an efficient algorithm to compress the cubes in the progress of the parallel data cube generation. This low overhead compression mechanism provides block-by-block and record-by-record compression by using tuple difference coding techniques, thereby maximizing the compression ratio and minimizing the decompression penalty at run-time. The experimental results demonstrate that the typical compression ratio is about 30:1 without sacrificing running time. This paper also demonstrates that the compression method is suitable for Hilbert Space Filling Curve, a mechanism widely used in multi-dimensional indexing.
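
    Record-by-record tuple difference coding can be sketched as delta-encoding each record against its predecessor; small deltas then pack into few bits. This is a minimal illustration of the idea, not the block-structured, Hilbert-curve-ordered scheme of the paper:

```python
def diff_encode(tuples):
    """Store the first record in full, each later record as componentwise
    deltas from its predecessor."""
    out = [tuple(tuples[0])]
    for prev, cur in zip(tuples, tuples[1:]):
        out.append(tuple(c - p for p, c in zip(prev, cur)))
    return out

def diff_decode(encoded):
    """Invert diff_encode by cumulative summation."""
    recs = [tuple(encoded[0])]
    for delta in encoded[1:]:
        recs.append(tuple(p + d for p, d in zip(recs[-1], delta)))
    return recs
```

    Ordering records along a space-filling curve keeps consecutive tuples close in every dimension, which is precisely what makes the deltas small.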

  19. Color matching of fabric blends: hybrid Kubelka-Munk + artificial neural network based method

    Science.gov (United States)

    Furferi, Rocco; Governi, Lapo; Volpe, Yary

    2016-11-01

    Color matching of fabric blends is a key issue for the textile industry, mainly due to the rising need to create high-quality products for the fashion market. The process of mixing together differently colored fibers to match a desired color is usually performed by using some historical recipes, skillfully managed by company colorists. More often than desired, the first attempt in creating a blend is not satisfactory, thus requiring the experts to spend efforts in changing the recipe with a trial-and-error process. To confront this issue, a number of computer-based methods have been proposed in the last decades, roughly classified into theoretical and artificial neural network (ANN)-based approaches. Inspired by the above literature, the present paper provides a method for accurate estimation of spectrophotometric response of a textile blend composed of differently colored fibers made of different materials. In particular, the performance of the Kubelka-Munk (K-M) theory is enhanced by introducing an artificial intelligence approach to determine a more consistent value of the nonlinear function relationship between the blend and its components. Therefore, a hybrid K-M+ANN-based method capable of modeling the color mixing mechanism is devised to predict the reflectance values of a blend.
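
    The classical K-M baseline that the ANN enhances can be sketched directly: the blend's K/S value is the concentration-weighted sum of the components' K/S values. The linear mixing rule below is the standard single-constant Kubelka-Munk treatment; the example reflectance values are illustrative:

```python
import numpy as np

def ks(reflectance):
    """Kubelka-Munk function: K/S = (1 - R)^2 / (2 R)."""
    return (1.0 - reflectance) ** 2 / (2.0 * reflectance)

def ks_to_r(ks_val):
    """Invert K/S back to reflectance: R = 1 + K/S - sqrt((K/S)^2 + 2 K/S)."""
    return 1.0 + ks_val - np.sqrt(ks_val ** 2 + 2.0 * ks_val)

def blend_reflectance(fractions, component_refl):
    """Linear K-M mixing over wavelengths: blend K/S is the mass-weighted sum
    of component K/S spectra (shape: components x wavelengths)."""
    f = np.asarray(fractions, float)
    r = np.asarray(component_refl, float)
    return ks_to_r(np.sum(f[:, None] * ks(r), axis=0))
```

    The paper's contribution is to replace this fixed linear relation between blend and component K/S values with one learned by the network, which absorbs fibre- and material-dependent deviations.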

  20. Effect of the Fineness of Artificial Pozzolana (Sarooj) on the Properties of Lime-Pozzolana Mixes

    Directory of Open Access Journals (Sweden)

    Abdel Wahid Hago

    2002-06-01

    Full Text Available Strength development of lime-pozzolana mortars is affected by the fineness of the pozzolana. This paper examines the effect of the fineness of artificial pozzolana on the strength development of lime-pozzolana mixtures. An artificial pozzolana produced by calcining clays from Oman was used in this study. It is locally known as “Sarooj”, and is currently being used in a large project for the restoration of historical monuments undertaken by the Omani Ministry of National Heritage and Culture. The artificial pozzolana was ground to various degrees of fineness, blended with hydrated lime in a ratio of 3:1, and the resulting mortar was used to make hardened mortar cubes. The strength of the mortar cubes was measured at ages of 7, 14, 28 and 90 days after casting. The experimental results show that good artificial pozzolanas exhibit a linear correlation between the Blaine fineness of the pozzolana and the compressive strength, but no such relationship exists for weak-type pozzolanas. The fineness of the artificial pozzolana has its most significant effect on delayed strength gain, with a more pronounced effect for good-type pozzolanas.

  1. Discrete Wigner Function Reconstruction and Compressed Sensing

    OpenAIRE

    Zhang, Jia-Ning; Fang, Lei; Ge, Mo-Lin

    2011-01-01

    A new reconstruction method for the Wigner function is reported for quantum tomography based on compressed sensing. By analogy with computed tomography, Wigner functions for some quantum states can be reconstructed with fewer measurements using this compressed-sensing-based method.

  2. Artificial Intelligence in Civil Engineering

    Directory of Open Access Journals (Sweden)

    Pengzhen Lu

    2012-01-01

    Full Text Available Artificial intelligence is a branch of computer science involved in the research, design, and application of intelligent computers. Traditional methods for modeling and optimizing complex structural systems require huge amounts of computing resources, and artificial-intelligence-based solutions can often provide valuable alternatives for efficiently solving problems in civil engineering. This paper summarizes recently developed methods and theories in the application of artificial intelligence in civil engineering, including evolutionary computation, neural networks, fuzzy systems, expert systems, reasoning, classification, and learning, as well as others such as chaos theory, cuckoo search, the firefly algorithm, knowledge-based engineering, and simulated annealing. The main research trends are also pointed out at the end. The paper provides an overview of the advances of artificial intelligence applied in civil engineering.

  3. Schlieren method diagnostics of plasma compression in front of coaxial gun

    International Nuclear Information System (INIS)

    Kravarik, J.; Kubes, P.; Hruska, J.; Bacilek, J.

    1983-01-01

    The schlieren method, employing a movable knife edge placed in the focal plane of a laser beam, was used for the diagnostics of plasma produced by a coaxial plasma gun. Compared with the previously reported interferometric method, the spatial resolution was improved by more than one order of magnitude. In the determination of electron density near the gun orifice, spherical symmetry of the current sheath inhomogeneities and cylindrical symmetry of the compression maximum were assumed. The radial variation of electron density could be reconstructed from photometric measurements of the transversal variation of schlieren light intensity. Due to the small plasma dimensions, electron density was determined directly from the knife-edge shift necessary for shadowing the corresponding part of the picture. (J.U.)

  4. Continuous surveillance of transformers using artificial intelligence methods; Surveillance continue des transformateurs: application des methodes d'intelligence artificielle

    Energy Technology Data Exchange (ETDEWEB)

    Schenk, A.; Germond, A. [Ecole Polytechnique Federale de Lausanne, Lausanne (Switzerland); Boss, P.; Lorin, P. [ABB Secheron SA, Geneve (Switzerland)

    2000-07-01

    The article describes a new method for the continuous surveillance of power transformers based on the application of artificial intelligence (AI) techniques. An experimental pilot project on a specially equipped, strategically important power transformer is described. Traditional surveillance methods and the use of mathematical models for the prediction of faults are described. The article describes the monitoring equipment used in the pilot project and the AI principles such as self-organising maps that are applied. The results obtained from the pilot project and methods for their graphical representation are discussed.

  5. A comparative study of laser induced breakdown spectroscopy analysis for element concentrations in aluminum alloy using artificial neural networks and calibration methods

    International Nuclear Information System (INIS)

    Inakollu, Prasanthi; Philip, Thomas; Rai, Awadhesh K.; Yueh Fangyu; Singh, Jagdish P.

    2009-01-01

    A comparative study of analysis methods (the traditional calibration method and an artificial neural network (ANN) prediction method) for laser-induced breakdown spectroscopy (LIBS) data of different Al alloy samples was performed. In the calibration method, the intensities of the analyte lines obtained from different samples are plotted against their concentrations to form calibration curves for different elements, from which the concentrations of unknown elements are deduced by comparing their LIBS signals with the calibration curves. In the ANN approach, an artificial neural network model is trained with a set of input data from samples of known composition. The trained neural network is then used to predict the elemental concentrations from the test spectra. The present results reveal that artificial neural networks are capable of predicting values better than the traditional method in most cases.
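
    The traditional calibration-curve step reduces to fitting intensity against concentration and inverting the fit. A minimal sketch assuming a linear calibration I = m*c + q (real LIBS curves may be nonlinear; the values below are synthetic):

```python
import numpy as np

def calibration_curve(concentrations, intensities):
    """Fit a linear calibration curve and return a function that maps a
    measured line intensity back to a predicted concentration."""
    m, q = np.polyfit(concentrations, intensities, 1)
    return lambda intensity: (intensity - q) / m
```

    The ANN alternative replaces this per-element univariate inversion with a single multivariate mapping from the whole spectrum to all element concentrations, which is how it absorbs matrix effects.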

  6. n-Gram-Based Text Compression

    Science.gov (United States)

    Duong, Hieu N.; Snasel, Vaclav

    2016-01-01

    We propose an efficient method for compressing Vietnamese text using n-gram dictionaries. It has a significant compression ratio in comparison with those of state-of-the-art methods on the same dataset. Given a text, first, the proposed method splits it into n-grams and then encodes them based on n-gram dictionaries. In the encoding phase, we use a sliding window with a size that ranges from bigram to five grams to obtain the best encoding stream. Each n-gram is encoded by two to four bytes accordingly based on its corresponding n-gram dictionary. We collected 2.5 GB text corpus from some Vietnamese news agencies to build n-gram dictionaries from unigram to five grams and achieve dictionaries with a size of 12 GB in total. In order to evaluate our method, we collected a testing set of 10 different text files with different sizes. The experimental results indicate that our method achieves compression ratio around 90% and outperforms state-of-the-art methods. PMID:27965708

  7. n-Gram-Based Text Compression

    Directory of Open Access Journals (Sweden)

    Vu H. Nguyen

    2016-01-01

    Full Text Available We propose an efficient method for compressing Vietnamese text using n-gram dictionaries. It has a significant compression ratio in comparison with those of state-of-the-art methods on the same dataset. Given a text, first, the proposed method splits it into n-grams and then encodes them based on n-gram dictionaries. In the encoding phase, we use a sliding window with a size that ranges from bigram to five grams to obtain the best encoding stream. Each n-gram is encoded by two to four bytes accordingly based on its corresponding n-gram dictionary. We collected 2.5 GB text corpus from some Vietnamese news agencies to build n-gram dictionaries from unigram to five grams and achieve dictionaries with a size of 12 GB in total. In order to evaluate our method, we collected a testing set of 10 different text files with different sizes. The experimental results indicate that our method achieves compression ratio around 90% and outperforms state-of-the-art methods.
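
    The sliding-window encoding phase described above can be sketched as a greedy longest-match coder. This is a simplified illustration with a toy dictionary mapping n-gram tuples to integer codes; the real method's per-dictionary 2-to-4-byte code layout and its choice among candidate windows are omitted:

```python
def ngram_encode(words, dictionary, max_n=5):
    """Greedy coder: at each position emit the code of the longest n-gram
    (up to five-grams) found in the dictionary, else the raw word."""
    codes, i = [], 0
    while i < len(words):
        for n in range(min(max_n, len(words) - i), 0, -1):
            gram = tuple(words[i:i + n])
            if gram in dictionary:
                codes.append(dictionary[gram])
                i += n
                break
        else:
            codes.append(words[i])   # literal fallback for unknown words
            i += 1
    return codes

def ngram_decode(codes, dictionary):
    """Invert ngram_encode using the reverse dictionary."""
    inverse = {v: k for k, v in dictionary.items()}
    out = []
    for c in codes:
        out.extend(inverse[c] if c in inverse else [c])
    return out
```

    Compression comes from frequent multi-word n-grams collapsing to short fixed-size codes; greedy longest-match is only an approximation to the best encoding stream the paper searches for.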

  8. Shock absorbing properties of toroidal shells under compression, 3

    International Nuclear Information System (INIS)

    Sugita, Yuji

    1985-01-01

    The author has previously presented the static load-deflection relations of a toroidal shell subjected to axisymmetric compression between rigid plates, and those of its outer half subjected to lateral compression. In both cases, the analytical method was based on the incremental Rayleigh-Ritz method. In this paper, the effects of compression angle and strain rate on the load-deflection relations of the toroidal shell are investigated for its use as a shock absorber in radioactive material shipping casks, which must keep their structural integrity even after accidental falls at any angle. Static compression tests have been carried out at four compression angles, 10°, 20°, 50° and 90°, and the applicability of the preceding analytical method has been discussed. Dynamic compression tests have also been performed using a free-falling drop hammer. The results are compared with those of the static compression tests. (author)

  9. Evaluation of a new image compression technique

    International Nuclear Information System (INIS)

    Algra, P.R.; Kroon, H.M.; Noordveld, R.B.; DeValk, J.P.J.; Seeley, G.W.; Westerink, P.H.

    1988-01-01

    The authors present the evaluation of a new image compression technique, subband coding using vector quantization, on 44 CT examinations of the upper abdomen. Three independent radiologists reviewed the original images and the compressed versions. The compression ratios used were 16:1 and 20:1. Receiver operating characteristic analysis showed no difference in diagnostic content between the originals and their compressed versions. Subjective visibility of anatomic structures was equal. Except for a few 20:1 compressed images, the observers could not distinguish compressed versions from original images. They conclude that subband coding using vector quantization is a valuable method for data compression in CT scans of the abdomen.

  10. SOLVING TRANSPORT LOGISTICS PROBLEMS IN A VIRTUAL ENTERPRISE THROUGH ARTIFICIAL INTELLIGENCE METHODS

    Directory of Open Access Journals (Sweden)

    Vitaliy PAVLENKO

    2017-06-01

    Full Text Available The paper offers a solution to the problem of material flow allocation within a virtual enterprise by using artificial intelligence methods. The research is based on the use of fuzzy relations when planning optimal transportation modes to deliver components for manufactured products. The Fuzzy Logic Toolbox is used to determine the optimal route for transporting components for manufactured products. The proposed methods are illustrated with worked examples. The authors have built a simulation model for component transportation and delivery for manufactured products using the Simulink graphical modelling environment.

  11. Lightweight, compressible and electrically conductive polyurethane sponges coated with synergistic multiwalled carbon nanotubes and graphene for piezoresistive sensors.

    Science.gov (United States)

    Ma, Zhonglei; Wei, Ajing; Ma, Jianzhong; Shao, Liang; Jiang, Huie; Dong, Diandian; Ji, Zhanyou; Wang, Qian; Kang, Songlei

    2018-04-19

    Lightweight, compressible and highly sensitive pressure/strain sensing materials are highly desirable for the development of health monitoring, wearable devices and artificial intelligence. Herein, a very simple, low-cost and solution-based approach is presented to fabricate versatile piezoresistive sensors based on conductive polyurethane (PU) sponges coated with synergistic multiwalled carbon nanotubes (MWCNTs) and graphene. These sensor materials are fabricated by convenient dip-coating layer-by-layer (LBL) electrostatic assembly followed by in situ reduction without using any complicated microfabrication processes. The resultant conductive MWCNT/RGO@PU sponges exhibit very low densities (0.027-0.064 g cm-3), outstanding compressibility (up to 75%) and high electrical conductivity benefiting from the porous PU sponges and synergistic conductive MWCNT/RGO structures. In addition, the MWCNT/RGO@PU sponges present larger relative resistance changes and superior sensing performances under external applied pressures (0-5.6 kPa) and a wide range of strains (0-75%) compared with the RGO@PU and MWCNT@PU sponges, due to the synergistic effect of multiple mechanisms: "disconnect-connect" transition of nanogaps, microcracks and fractured skeletons at low compression strain and compressive contact of the conductive skeletons at high compression strain. The electrical and piezoresistive properties of MWCNT/RGO@PU sponges are strongly associated with the dip-coating cycle, suspension concentration, and the applied pressure and strain. Fully functional applications of MWCNT/RGO@PU sponge-based piezoresistive sensors in lighting LED lamps and detecting human body movements are demonstrated, indicating their excellent potential for emerging applications such as health monitoring, wearable devices and artificial intelligence.

  12. Isostatic compression of buffer blocks. Middle scale

    International Nuclear Information System (INIS)

    Ritola, J.; Pyy, E.

    2012-01-01

    Manufacturing of buffer components using isostatic compression method has been studied in small scale in 2008 (Laaksonen 2010). These tests included manufacturing of buffer blocks using different bentonite materials and different compression pressures. Isostatic mould technology was also tested, along with different methods to fill the mould, such as vibration and partial vacuum, as well as a stepwise compression of the blocks. The development of manufacturing techniques has continued with small-scale (30 %) blocks (diameter 600 mm) in 2009. This was done in a separate project: Isostatic compression, manufacturing and testing of small scale (D = 600 mm) buffer blocks. The research on the isostatic compression method continued in 2010 in a project aimed to test and examine the isostatic manufacturing process of buffer blocks at 70 % scale (block diameter 1200 to 1300 mm), and the aim was to continue in 2011 with full-scale blocks (diameter 1700 mm). A total of nine bentonite blocks were manufactured at 70 % scale, of which four were ring-shaped and the rest were cylindrical. It is currently not possible to manufacture full-scale blocks, because there is no sufficiently large isostatic press available. However, such a compression unit is expected to be possible to use in the near future. The test results of bentonite blocks, produced with an isostatic pressing method at different presses and at different sizes, suggest that the technical characteristics, for example bulk density and strength values, are somewhat independent of the size of the block, and that the blocks have fairly homogenous characteristics. Water content and compression pressure are the two most important properties determining the characteristics of the compressed blocks. By adjusting these two properties it is fairly easy to produce blocks at a desired density. The commonly used compression pressure in the manufacturing of bentonite blocks is 100 MPa, which compresses bentonite to approximately

  13. Isostatic compression of buffer blocks. Middle scale

    Energy Technology Data Exchange (ETDEWEB)

    Ritola, J.; Pyy, E. [VTT Technical Research Centre of Finland, Espoo (Finland)

    2012-01-15

    Manufacturing of buffer components using isostatic compression method has been studied in small scale in 2008 (Laaksonen 2010). These tests included manufacturing of buffer blocks using different bentonite materials and different compression pressures. Isostatic mould technology was also tested, along with different methods to fill the mould, such as vibration and partial vacuum, as well as a stepwise compression of the blocks. The development of manufacturing techniques has continued with small-scale (30 %) blocks (diameter 600 mm) in 2009. This was done in a separate project: Isostatic compression, manufacturing and testing of small scale (D = 600 mm) buffer blocks. The research on the isostatic compression method continued in 2010 in a project aimed to test and examine the isostatic manufacturing process of buffer blocks at 70 % scale (block diameter 1200 to 1300 mm), and the aim was to continue in 2011 with full-scale blocks (diameter 1700 mm). A total of nine bentonite blocks were manufactured at 70 % scale, of which four were ring-shaped and the rest were cylindrical. It is currently not possible to manufacture full-scale blocks, because there is no sufficiently large isostatic press available. However, such a compression unit is expected to be possible to use in the near future. The test results of bentonite blocks, produced with an isostatic pressing method at different presses and at different sizes, suggest that the technical characteristics, for example bulk density and strength values, are somewhat independent of the size of the block, and that the blocks have fairly homogenous characteristics. Water content and compression pressure are the two most important properties determining the characteristics of the compressed blocks. By adjusting these two properties it is fairly easy to produce blocks at a desired density. The commonly used compression pressure in the manufacturing of bentonite blocks is 100 MPa, which compresses bentonite to approximately

  14. Graph Compression by BFS

    Directory of Open Access Journals (Sweden)

    Alberto Apostolico

    2009-08-01

    Full Text Available The Web Graph is a large-scale graph that does not fit in main memory, so lossless compression methods have been proposed for it. This paper introduces a compression scheme that combines efficient storage with fast retrieval of the information in a node. The scheme exploits the properties of the Web Graph without assuming an ordering of the URLs, so it may be applied to more general graphs. Tests on datasets in common use achieve space savings of about 10% over existing methods.
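
    A minimal sketch of the general idea behind BFS-based graph compression: relabel vertices in breadth-first visit order so that neighbor ids become close together, then store each adjacency list as a first id plus small gaps. This is an illustrative reduction of the paper's scheme; the function names and the toy graph are assumptions.

```python
from collections import deque

def bfs_order(adj, root=0):
    """Relabel vertices in BFS visit order to improve neighbor locality."""
    order, seen, q = [], {root}, deque([root])
    while q:
        u = q.popleft()
        order.append(u)
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                q.append(v)
    return {old: new for new, old in enumerate(order)}

def gap_encode(adj, label):
    """Store each sorted neighbor list as its first id plus small deltas."""
    out = {}
    for u, nbrs in adj.items():
        ns = sorted(label[v] for v in nbrs)
        out[label[u]] = [ns[0]] + [b - a for a, b in zip(ns, ns[1:])]
    return out

adj = {0: [2, 3], 1: [3], 2: [0], 3: [0, 1]}   # tiny undirected graph
label = bfs_order(adj)
codes = gap_encode(adj, label)
```

    The small deltas would then be fed to a variable-length integer coder, which is where the actual space saving comes from.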

  15. A comparison of sputum induction methods: ultrasonic vs compressed-air nebulizer and hypertonic vs isotonic saline inhalation.

    Science.gov (United States)

    Loh, L C; Eg, K P; Puspanathan, P; Tang, S P; Yip, K S; Vijayasingham, P; Thayaparan, T; Kumar, S

    2004-03-01

    Airway inflammation can be demonstrated by the modern method of sputum induction using an ultrasonic nebulizer and hypertonic saline. We studied whether a compressed-air nebulizer and isotonic saline, which are commonly available and cost less, are as effective in inducing sputum in normal adult subjects as the above-mentioned tools. Sixteen subjects underwent weekly sputum induction in the following manner: ultrasonic nebulizer (Medix Sonix 2000, Clement Clarke, UK) using hypertonic saline, ultrasonic nebulizer using isotonic saline, compressed-air nebulizer (BestNeb, Taiwan) using hypertonic saline, and compressed-air nebulizer using isotonic saline. Overall, the use of an ultrasonic nebulizer and hypertonic saline yielded significantly higher total sputum cell counts and a higher percentage of cell viability than compressed-air nebulizers and isotonic saline. With the latter, there was a trend towards squamous cell contamination. The proportion of various sputum cell types was not significantly different between the groups, and the reproducibility in sputum macrophages and neutrophils was high (intraclass correlation coefficient, r [95% CI]: 0.65 [0.30-0.91] and 0.58 [0.22-0.89]). We conclude that in normal subjects, although both nebulizer and saline types can induce sputum with a reproducible cellular profile, ultrasonic nebulizers and hypertonic saline are more effective but less well tolerated.

  16. Artificial intelligence methods for diagnostic

    International Nuclear Information System (INIS)

    Dourgnon-Hanoune, A.; Porcheron, M.; Ricard, B.

    1996-01-01

    To assist in the diagnosis of its nuclear power plants, the Research and Development Division of Electricite de France has been developing skills in Artificial Intelligence for about a decade. Several diagnostic expert systems have been designed, among them SILEX for control-rod cabinet troubleshooting, DIVA for turbine generator diagnosis, and DIAPO for reactor coolant pump diagnosis. This know-how in expert knowledge modeling and acquisition is a direct result of the experience gained during these developments and of a more general reflection on knowledge-based system development. We have been able to reuse these results for other developments, such as a guide for the diagnosis of auxiliary rotating machines. (authors)

  17. Diffuse-Interface Capturing Methods for Compressible Two-Phase Flows

    Science.gov (United States)

    Saurel, Richard; Pantano, Carlos

    2018-01-01

    Simulation of compressible flows became a routine activity with the appearance of shock-/contact-capturing methods. These methods can determine all waves, particularly discontinuous ones. However, additional difficulties may appear in two-phase and multimaterial flows due to the abrupt variation of thermodynamic properties across the interfacial region, with discontinuous thermodynamical representations at the interfaces. To overcome this difficulty, researchers have developed augmented systems of governing equations to extend the capturing strategy. These extended systems, reviewed here, are termed diffuse-interface models, because they are designed to compute flow variables correctly in numerically diffused zones surrounding interfaces. In particular, they facilitate coupling the dynamics on both sides of the (diffuse) interfaces and tend to the proper pure fluid-governing equations far from the interfaces. This strategy has become efficient for contact interfaces separating fluids that are governed by different equations of state, in the presence or absence of capillary effects, and with phase change. More sophisticated materials than fluids (e.g., elastic-plastic materials) have been considered as well.

  18. Comparison of two solution ways of district heating control: Using analysis methods, using artificial intelligence methods

    Energy Technology Data Exchange (ETDEWEB)

    Balate, J.; Sysala, T. [Technical Univ., Zlin (Czech Republic). Dept. of Automation and Control Technology

    1997-12-31

    The District Heating Systems - DHS (Centralized Heat Supply Systems - CHSS) are being developed in large cities in accordance with their growth. The systems are formed by enlarging the networks of heat distribution to consumers while gradually interconnecting the heat sources being built. Heat is distributed to consumers through circular networks that are supplied by several cooperating heat sources, that is, by combined power and heating plants and by heating plants. The complicated process of heat production and supply requires a system approach when designing the concept of automated control. The paper compares a solution using analysis methods with one using artificial intelligence methods. (orig.)

  19. Development of a compressive surface capturing formulation for modelling free-surface flow by using the volume-of-fluid approach

    CSIR Research Space (South Africa)

    Heyns, Johan A

    2012-06-01

    Full Text Available The approach combines a blended higher-resolution scheme with the addition of an artificial compressive term to the volume-of-fluid equation. This reduces the numerical smearing of the interface associated with explicit higher-resolution schemes while limiting...

  20. The research of optimal selection method for wavelet packet basis in compressing the vibration signal of a rolling bearing in fans and pumps

    International Nuclear Information System (INIS)

    Hao, W; Jinji, G

    2012-01-01

    Compressing the vibration signal of a rolling bearing is of great significance for the wireless monitoring and remote diagnosis of the fans and pumps widely used in the petrochemical industry. In this paper, according to the characteristics of the vibration signal of a rolling bearing, a compression method based on the optimal selection of the wavelet packet basis is proposed. We analyze several main attributes of wavelet packet bases and their effect on the compression of the vibration signal of a rolling bearing using the wavelet packet transform at various compression ratios, and propose a method to select a wavelet packet basis precisely. From an actual signal, we conclude that an orthogonal wavelet packet basis with a low vanishing moment should be used to compress the vibration signal of a rolling bearing in order to obtain an accurate energy proportion between the feature bands in the spectrum of the reconstructed signal. Among these low-vanishing-moment orthogonal wavelet packet bases, the 'coif' basis obtains the best signal-to-noise ratio at a given compression ratio owing to its superior symmetry.

  1. High-quality compressive ghost imaging

    Science.gov (United States)

    Huang, Heyan; Zhou, Cheng; Tian, Tian; Liu, Dongqi; Song, Lijun

    2018-04-01

    We propose a high-quality compressive ghost imaging method based on projected Landweber regularization and a guided filter, which effectively reduces the undersampling noise and improves the resolution. In our scheme, the original object is reconstructed by decomposing the compressive reconstruction process into regularization and denoising steps instead of solving a minimization problem. The simulation and experimental results show that our method obtains high ghost imaging quality in terms of PSNR and visual observation.
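
    The projected Landweber step at the core of such schemes can be sketched in a few lines: a gradient step toward consistency with the bucket measurements, followed by projection onto the nonnegative orthant. The guided-filter denoising stage is omitted here, and the matrix sizes, step size and sparse test object are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 16, 12                            # 16-pixel object, 12 bucket measurements
A = rng.standard_normal((m, n))          # random illumination patterns
x_true = np.zeros(n)
x_true[[3, 7]] = 1.0                     # sparse two-pixel "object"
y = A @ x_true                           # noiseless bucket values

lam = 1.0 / np.linalg.norm(A, 2) ** 2    # step size <= 1/||A||^2 for stability
x = np.zeros(n)
for _ in range(2000):
    x = x + lam * A.T @ (y - A @ x)      # Landweber gradient step
    x = np.clip(x, 0.0, None)            # project: intensities are nonnegative
```

    A full compressive ghost imaging pipeline would interleave a denoising step (here, the guided filter) between such iterations.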

  2. Streaming Compression of Hexahedral Meshes

    Energy Technology Data Exchange (ETDEWEB)

    Isenburg, M; Courbet, C

    2010-02-03

    We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra, our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream documents when vertices are referenced for the last time (i.e. when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. In practice, this means our coder holds only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes - even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k-hexahedra 'blade' mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB), with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.

  3. Limiting density ratios in piston-driven compressions

    International Nuclear Information System (INIS)

    Lee, S.

    1985-07-01

    By using global energy and pressure balance applied to a shock model it is shown that for a piston-driven fast compression, the maximum compression ratio is not dependent on the absolute magnitude of the piston power, but rather on the power pulse shape. Specific cases are considered and a maximum density compression ratio of 27 is obtained for a square-pulse power compressing a spherical pellet with specific heat ratio of 5/3. Double pulsing enhances the density compression ratio to 1750 in the case of linearly rising compression pulses. Using this method further enhancement by multiple pulsing becomes obvious. (author)

  4. Data compression of digital X-ray images from a clinical viewpoint

    International Nuclear Information System (INIS)

    Ando, Yutaka

    1992-01-01

    For PACS (picture archiving and communication systems), storage media of large capacity and a fast data transfer network are necessary, and meeting these technology requirements becomes a large problem once a PACS is in operation. We therefore need image data compression to obtain higher recording efficiency on the media and an improved transmission ratio. There are two kinds of data compression methods: reversible and irreversible. With reversible methods, the compressed-and-expanded image is exactly equal to the original image; the achievable compression ratio is between about 1/2 and 1/3. With irreversible compression, on the other hand, the compressed-and-expanded image is distorted, but a high compression ratio can be achieved. In the medical field, the discrete cosine transform (DCT) method is popular because of its low distortion and fast performance; its compression ratio is typically from 1/10 to 1/20. It is important to choose the compression ratio according to the purpose and modality of the image. We must select the ratio carefully, because the suitable compression ratio differs when images are used for education, clinical diagnosis, or reference. (author)
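
    A toy numpy sketch of the irreversible DCT pathway described above: transform an 8x8 block, discard small coefficients, and invert. The threshold and the smooth test block are illustrative assumptions, not a clinical codec; real systems add quantization tables and entropy coding.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= np.sqrt(1.0 / n)
    C[1:] *= np.sqrt(2.0 / n)
    return C

n = 8
C = dct_matrix(n)
block = np.outer(np.linspace(0, 255, n), np.ones(n))  # smooth 8x8 tile
coeff = C @ block @ C.T                               # forward 2-D DCT
coeff[np.abs(coeff) < 1.0] = 0.0                      # drop tiny coefficients
recon = C.T @ coeff @ C                               # inverse 2-D DCT
kept = int(np.count_nonzero(coeff))                   # surviving coefficients
```

    Smooth blocks concentrate their energy in a few low-frequency coefficients, which is why ratios of 1/10 to 1/20 remain visually acceptable.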

  5. Methods for compressible fluid simulation on GPUs using high-order finite differences

    Science.gov (United States)

    Pekkilä, Johannes; Väisälä, Miikka S.; Käpylä, Maarit J.; Käpylä, Petri J.; Anjum, Omer

    2017-08-01

    We focus on implementing and optimizing a sixth-order finite-difference solver for simulating compressible fluids on a GPU using third-order Runge-Kutta integration. Because graphics processing units perform well in data-parallel tasks, they are an attractive platform for fluid simulation. However, high-order stencil computation is memory-intensive with respect to both main memory and the caches of the GPU. We present two approaches for simulating compressible fluids using 55-point and 19-point stencils. We seek to reduce the requirements for memory bandwidth and cache size in our methods by using cache blocking and decomposing a latency-bound kernel into several bandwidth-bound kernels. Our fastest implementation is bandwidth-bound and integrates 343 million grid points per second on a Tesla K40t GPU, achieving a 3.6x speedup over a comparable hydrodynamics solver benchmarked on two Intel Xeon E5-2690v3 processors. Our alternative GPU implementation is latency-bound and achieves the rate of 168 million updates per second.
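
    The sixth-order first-derivative stencil underlying such solvers can be sketched with the standard central-difference weights. This is a periodic-grid toy in numpy, not the authors' GPU kernels; the helper name is an assumption.

```python
import numpy as np

# Standard sixth-order central-difference weights for the first derivative.
c = np.array([-1/60, 3/20, -3/4, 0.0, 3/4, -3/20, 1/60])

def ddx(f, h):
    """First derivative of a periodic sample array via the 7-point stencil."""
    out = np.zeros_like(f)
    for k, w in zip(range(-3, 4), c):
        out += w * np.roll(f, -k)        # roll(f, -k)[i] == f[(i + k) % N]
    return out / h
```

    On a GPU, each output point reads seven neighbors per axis, which is exactly the memory-traffic pressure the abstract's cache-blocking strategy targets.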

  6. Exploiting of the Compression Methods for Reconstruction of the Antenna Far-Field Using Only Amplitude Near-Field Measurements

    Directory of Open Access Journals (Sweden)

    J. Puskely

    2010-06-01

    Full Text Available The novel approach exploits the principle of conventional two-plane amplitude measurements for the reconstruction of the unknown electric field distribution on the antenna aperture. The method combines a global optimization with a compression method. The global optimization (GO) method is used to minimize the functional, and the compression method is used to reduce the number of unknown variables. The algorithm employs the Real Coded Genetic Algorithm (RCGA) as the global optimization approach. The Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) are applied to reduce the number of unknown variables. The pros and cons of both methods for solving the problem are investigated and reported. In order to make the algorithm faster, exploitation of amplitudes from a single scanning plane is also discussed. First, the algorithm is used to obtain an initial estimate. Subsequently, the common Fourier iterative algorithm is used to reach the global minimum with sufficient accuracy. The method is examined by measuring a dish antenna.

  7. Interpolation decoding method with variable parameters for fractal image compression

    International Nuclear Information System (INIS)

    He Chuanjiang; Li Gaoping; Shen Xiaona

    2007-01-01

    The interpolation fractal decoding method introduced by [He C, Yang SX, Huang X. Progressive decoding method for fractal image compression. IEE Proc Vis Image Signal Process 2004;3:207-13] generates the decoded image progressively by means of an interpolation iterative procedure with a constant parameter. It is well known that the majority of image details are added in the first steps of iteration in conventional fractal decoding; hence the constant parameter of the interpolation decoding method must be set to a small value in order to achieve good progressive decoding. However, this requires an extremely large number of iterations to converge. For some applications it is thus reasonable to slow down the iterative process in the first stages of decoding and then accelerate it afterwards (e.g., at any desired iteration). To achieve this goal, this paper proposes an interpolation decoding scheme with variable (iteration-dependent) parameters and proves the convergence of the decoding process mathematically. Experimental results demonstrate that the proposed scheme achieves the above-mentioned goal.
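
    The idea of an iteration-dependent interpolation parameter can be illustrated on a generic contractive decoder T(x) = Ax + b, standing in for the fractal operator; the schedule, sizes and random operator below are assumptions, not the paper's scheme. A small alpha slows the early stages, and alpha = 1 recovers the plain fixed-point iteration.

```python
import numpy as np

# Stand-in contractive decoder T(x) = A x + b (||A||_inf <= 0.5), whose
# fixed point plays the role of the fully decoded image.
rng = np.random.default_rng(3)
A = 0.5 * rng.uniform(-1.0, 1.0, (8, 8)) / 8.0
b = rng.uniform(0.0, 1.0, 8)
x_fix = np.linalg.solve(np.eye(8) - A, b)

# Interpolation decoding with an iteration-dependent parameter:
# x_{k+1} = (1 - alpha_k) x_k + alpha_k T(x_k).
x = np.zeros(8)
for k in range(60):
    alpha = min(0.1 + 0.05 * k, 1.0)     # hypothetical increasing schedule
    x = (1.0 - alpha) * x + alpha * (A @ x + b)
```

    Because every interpolated map shares the same fixed point, varying alpha changes only the speed of approach, which is the property the paper's convergence proof formalizes.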

  8. Improved artificial bee colony algorithm based gravity matching navigation method.

    Science.gov (United States)

    Gao, Wei; Zhao, Bo; Zhou, Guang Tao; Wang, Qiu Ying; Yu, Chun Yang

    2014-07-18

    Gravity matching navigation algorithms are one of the key technologies for gravity-aided inertial navigation systems. With the development of intelligent algorithms, the powerful search ability of the Artificial Bee Colony (ABC) algorithm makes it applicable to the gravity matching navigation field. However, the search mechanisms of existing basic ABC algorithms cannot meet the need for high accuracy in gravity-aided navigation. Firstly, proper modifications are proposed to improve the performance of the basic ABC algorithm. Secondly, a new search mechanism based on an improved ABC algorithm using external speed information is presented. Finally, the modified Hausdorff distance is introduced to screen the possible matching results. Both simulations and ocean experiments verify the feasibility of the method, and the results show that the matching rate is high enough to obtain a precise matching position.
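
    For readers unfamiliar with the baseline, a compact sketch of the basic ABC algorithm (employed, onlooker and scout phases with a global-best memory) is shown below on a toy 2-D minimization; the paper's navigation-specific modifications and speed-information mechanism are not reproduced, and all parameter values are illustrative.

```python
import random

def abc_minimize(f, bounds, n_food=10, limit=20, iters=200, seed=1):
    """Barebones Artificial Bee Colony with a global-best memory."""
    rng = random.Random(seed)
    dim = len(bounds)

    def rand_food():
        return [rng.uniform(lo, hi) for lo, hi in bounds]

    foods = [rand_food() for _ in range(n_food)]
    fit = [f(x) for x in foods]
    trials = [0] * n_food
    best_x, best_v = None, float("inf")

    def try_move(i):
        j = rng.randrange(n_food - 1)
        j = j if j < i else j + 1                  # partner source != i
        d = rng.randrange(dim)
        cand = foods[i][:]
        cand[d] += rng.uniform(-1, 1) * (foods[i][d] - foods[j][d])
        lo, hi = bounds[d]
        cand[d] = min(max(cand[d], lo), hi)
        fc = f(cand)
        if fc < fit[i]:                            # greedy selection
            foods[i], fit[i], trials[i] = cand, fc, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                    # employed bees
            try_move(i)
        total = sum(1.0 / (1.0 + v) for v in fit)  # roulette weights (f >= 0)
        for _ in range(n_food):                    # onlooker bees
            r, acc = rng.uniform(0.0, total), 0.0
            for i in range(n_food):
                acc += 1.0 / (1.0 + fit[i])
                if acc >= r:
                    try_move(i)
                    break
        for i in range(n_food):                    # scouts abandon stale sources
            if trials[i] > limit:
                foods[i] = rand_food()
                fit[i], trials[i] = f(foods[i]), 0
        i0 = min(range(n_food), key=lambda i: fit[i])
        if fit[i0] < best_v:                       # memorize global best
            best_x, best_v = foods[i0][:], fit[i0]

    return best_x, best_v
```

    In gravity matching, the objective would score a candidate track against the gravity anomaly map rather than a test function.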

  9. Watermark Compression in Medical Image Watermarking Using Lempel-Ziv-Welch (LZW) Lossless Compression Technique.

    Science.gov (United States)

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohd; Ali, Mushtaq

    2016-04-01

    In teleradiology, image contents may be altered due to noisy communication channels and hacker manipulation. Medical image data is very sensitive and cannot tolerate any illegal change. Analysis based on illegally changed images could result in wrong medical decisions. Digital watermarking techniques can be used to authenticate images and to detect as well as recover illegal changes made to teleradiology images. Watermarking of medical images with heavy-payload watermarks causes image perceptual degradation, which directly affects medical diagnosis. To maintain the standard image perceptual and diagnostic qualities during watermarking, the watermark should be losslessly compressed. This paper focuses on watermarking of ultrasound medical images with Lempel-Ziv-Welch (LZW) lossless-compressed watermarks. The lossless compression reduces the watermark payload without data loss. In this research work, the watermark is the combination of a defined region of interest (ROI) and an image watermarking secret key. The performance of the LZW compression technique was compared with other conventional compression methods based on compression ratio. LZW was found better and was therefore used for lossless watermark compression in ultrasound medical image watermarking. Tabulated results show the reduction in watermark bits and image watermarking with effective tamper detection and lossless recovery.
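
    The LZW building block itself is standard and fits in a few lines. This is a textbook byte-oriented version for illustration, without the ROI extraction or embedding stages of the paper's pipeline.

```python
def lzw_compress(data):
    """Textbook LZW over bytes: grow a phrase table, emit integer codes."""
    table = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            table[wc] = len(table)       # register the new phrase
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

def lzw_decompress(codes):
    """Rebuild the phrase table on the fly (handles the KwKwK corner case)."""
    table = {i: bytes([i]) for i in range(256)}
    w = table[codes[0]]
    out = [w]
    for c in codes[1:]:
        entry = table.get(c, w + w[:1])  # KwKwK: code not yet in table
        out.append(entry)
        table[len(table)] = w + entry[:1]
        w = entry
    return b"".join(out)
```

    Because the round trip is exact, the watermark payload shrinks with no data loss, which is the property the paper relies on.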

  10. Generalized massive optimal data compression

    Science.gov (United States)

    Alsing, Justin; Wandelt, Benjamin

    2018-05-01

    In this paper, we provide a general procedure for optimally compressing N data down to n summary statistics, where n is equal to the number of parameters of interest. We show that compression to the score function - the gradient of the log-likelihood with respect to the parameters - yields n compressed statistics that are optimal in the sense that they preserve the Fisher information content of the data. Our method generalizes earlier work on linear Karhunen-Loève compression for Gaussian data whilst recovering both lossless linear compression and quadratic estimation as special cases when they are optimal. We give a unified treatment that also includes the general non-Gaussian case as long as mild regularity conditions are satisfied, producing optimal non-linear summary statistics when appropriate. As a worked example, we derive explicitly the n optimal compressed statistics for Gaussian data in the general case where both the mean and covariance depend on the parameters.
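
    For the simplest concrete case (Gaussian data with unknown mean and known variance), the score compresses N numbers to a single sufficient statistic, as this numpy sketch shows; the fiducial values and sample size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
mu0, sigma = 0.0, 1.0                     # fiducial mean, known variance
x = rng.normal(1.3, sigma, size=1000)     # N = 1000 data points

# Score of the Gaussian log-likelihood at the fiducial point:
# t = d lnL/d mu |_{mu0} = sum(x - mu0) / sigma^2, a single number.
t = float(np.sum(x - mu0)) / sigma**2

# Here the score is sufficient: the maximum-likelihood mean is
# recovered from t alone, so no information on mu is lost.
mu_hat = mu0 + t * sigma**2 / len(x)
```

    With n parameters the same construction yields n such statistics, one per component of the gradient.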

  11. Role of Artificial Intelligence Techniques (Automatic Classifiers) in Molecular Imaging Modalities in Neurodegenerative Diseases.

    Science.gov (United States)

    Cascianelli, Silvia; Scialpi, Michele; Amici, Serena; Forini, Nevio; Minestrini, Matteo; Fravolini, Mario Luca; Sinzinger, Helmut; Schillaci, Orazio; Palumbo, Barbara

    2017-01-01

    Artificial Intelligence (AI) is a very active Computer Science research field aiming to develop systems that mimic human intelligence and is helpful in many human activities, including Medicine. In this review we presented some examples of the exploiting of AI techniques, in particular automatic classifiers such as Artificial Neural Network (ANN), Support Vector Machine (SVM), Classification Tree (ClT) and ensemble methods like Random Forest (RF), able to analyze findings obtained by positron emission tomography (PET) or single-photon emission tomography (SPECT) scans of patients with Neurodegenerative Diseases, in particular Alzheimer's Disease. We also focused our attention on techniques applied in order to preprocess data and reduce their dimensionality via feature selection or projection in a more representative domain (Principal Component Analysis - PCA - or Partial Least Squares - PLS - are examples of such methods); this is a crucial step while dealing with medical data, since it is necessary to compress patient information and retain only the most useful in order to discriminate subjects into normal and pathological classes. Main literature papers on the application of these techniques to classify patients with neurodegenerative disease extracting data from molecular imaging modalities are reported, showing that the increasing development of computer aided diagnosis systems is very promising to contribute to the diagnostic process.

  12. Artificial Intelligence in Civil Engineering

    OpenAIRE

    Lu, Pengzhen; Chen, Shengyong; Zheng, Yujun

    2012-01-01

    Artificial intelligence is a branch of computer science, involved in the research, design, and application of intelligent computer. Traditional methods for modeling and optimizing complex structure systems require huge amounts of computing resources, and artificial-intelligence-based solutions can often provide valuable alternatives for efficiently solving problems in the civil engineering. This paper summarizes recently developed methods and theories in the developing direction for applicati...

  13. Based on Short Motion Paths and Artificial Intelligence Method for Chinese Chess Game

    Directory of Open Access Journals (Sweden)

    Chien-Ming Hung

    2017-08-01

    Full Text Available The article develops decision rules for winning each set of the Chinese chess game using an evaluation algorithm and artificial intelligence methods, uses mobile robots in place of the chess pieces, and presents movement scenarios using the shortest motion paths for the mobile robots. A player can play the Chinese chess game according to the game rules against the supervising computer. The supervising computer decides the optimal motion path to win the set using artificial intelligence methods, and controls the mobile robots according to the programmed motion paths of the assigned pieces moving on the platform via a wireless RF interface. We use an enhanced A* search algorithm to solve the shortest-path problem for an assigned piece, and solve the collision problems of the motion paths for two mobile robots moving on the platform simultaneously. We implement a famous set called "wild horses run in farm" using the proposed method. First we use simulation to display the motion paths of the assigned pieces for the player and the supervising computer. Then the supervising computer implements the simulation results on the chessboard platform using mobile robots. The mobile robots move on the chessboard platform according to the programmed motion paths, are guided along the centre line of the corridor, avoid obstacles (chess pieces), and detect the cross points of the platform using three reflective IR modules.
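
    The A* component can be sketched independently of the robotics: a textbook 4-connected grid search with an admissible Manhattan heuristic, where blocked cells stand in for chess pieces. The grid and helper names are illustrative, not the authors' enhanced variant.

```python
import heapq

def astar(grid, start, goal):
    """Textbook A* on a 4-connected grid; 1 marks a blocked cell."""
    rows, cols = len(grid), len(grid[0])

    def h(p):                              # admissible Manhattan heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_q = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while open_q:
        _, g, cur, path = heapq.heappop(open_q)
        if cur == goal:
            return path                    # first pop of the goal is optimal
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and not grid[nxt[0]][nxt[1]]
                    and g + 1 < best_g.get(nxt, float("inf"))):
                best_g[nxt] = g + 1
                heapq.heappush(open_q, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None                            # goal unreachable
```

    The robot-collision handling described in the abstract would add constraints on top of this single-piece search.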

  14. Experimental investigation and empirical modelling of FDM process for compressive strength improvement

    Directory of Open Access Journals (Sweden)

    Anoop K. Sood

    2012-01-01

    Full Text Available Fused deposition modelling (FDM) is gaining a distinct advantage in manufacturing industries because of its ability to manufacture parts with complex shapes without any tooling requirement or human intervention. The properties of FDM-built parts depend strongly on the process parameters and can be improved by setting these parameters at suitable levels. The anisotropic and brittle nature of built parts makes it important to study the effect of process parameters on the resistance to compressive loading, so as to enhance the service life of functional parts. Hence, the present work focuses on an extensive study of the effect of five important parameters, namely layer thickness, part build orientation, raster angle, raster width and air gap, on the compressive stress of test specimens. The study not only provides insight into the complex dependency of compressive stress on process parameters but also develops a statistically validated predictive equation. The equation is used to find the optimal parameter setting through quantum-behaved particle swarm optimization (QPSO). As the FDM process is highly complex and the process parameters influence the responses in a nonlinear manner, compressive stress is also predicted using an artificial neural network (ANN) and compared with the predictive equation.

  15. A multiscale method for compressible liquid-vapor flow with surface tension*

    Directory of Open Access Journals (Sweden)

    Jaegle Felix

    2013-01-01

    Full Text Available Discontinuous Galerkin methods have become a powerful tool for approximating the solution of compressible flow problems. Their direct use for two-phase flow problems with phase transformation is not straightforward, because this type of flow requires a detailed tracking of the phase front. In this contribution we treat the fronts as sharp interfaces and propose a novel multiscale approach. It combines an efficient high-order Discontinuous Galerkin solver for the computation in the bulk phases on the macro-scale with the use of a generalized Riemann solver on the micro-scale. The Riemann solver takes into account the effects of moderate surface tension via the curvature of the sharp interface, as well as phase transformation. First numerical experiments in three space dimensions underline the overall performance of the method.

  16. Augmented Lagrangian Method and Compressible Visco-plastic Flows: Applications to Shallow Dense Avalanches

    Science.gov (United States)

    Bresch, D.; Fernández-Nieto, E. D.; Ionescu, I. R.; Vigneaux, P.

    In this paper we propose a well-balanced finite volume/augmented Lagrangian method for compressible visco-plastic models focusing on a compressible Bingham type system with applications to dense avalanches. For the sake of completeness we also present a method showing that such a system may be derived for a shallow flow of a rigid-viscoplastic incompressible fluid, namely for incompressible Bingham type fluid with free surface. When the fluid is relatively shallow and spreads slowly, lubrication-style asymptotic approximations can be used to build reduced models for the spreading dynamics, see for instance [N.J. Balmforth et al., J. Fluid Mech (2002)]. When the motion is a little bit quicker, shallow water theory for non-Newtonian flows may be applied, for instance assuming a Navier type boundary condition at the bottom. We start from the variational inequality for an incompressible Bingham fluid and derive a shallow water type system. In the case where Bingham number and viscosity are set to zero we obtain the classical Shallow Water or Saint-Venant equations obtained for instance in [J.F. Gerbeau, B. Perthame, DCDS (2001)]. For numerical purposes, we focus on the one-dimensional in space model: We study associated static solutions with sufficient conditions that relate the slope of the bottom with the Bingham number and domain dimensions. We also propose a well-balanced finite volume/augmented Lagrangian method. It combines well-balanced finite volume schemes for spatial discretization with the augmented Lagrangian method to treat the associated optimization problem. Finally, we present various numerical tests.

  17. Artificial Intelligence in Cardiology.

    Science.gov (United States)

    Johnson, Kipp W; Torres Soto, Jessica; Glicksberg, Benjamin S; Shameer, Khader; Miotto, Riccardo; Ali, Mohsin; Ashley, Euan; Dudley, Joel T

    2018-06-12

    Artificial intelligence and machine learning are poised to influence nearly every aspect of the human condition, and cardiology is not an exception to this trend. This paper provides a guide for clinicians on relevant aspects of artificial intelligence and machine learning, reviews selected applications of these methods in cardiology to date, and identifies how cardiovascular medicine could incorporate artificial intelligence in the future. In particular, the paper first reviews predictive modeling concepts relevant to cardiology such as feature selection and frequent pitfalls such as improper dichotomization. Second, it discusses common algorithms used in supervised learning and reviews selected applications in cardiology and related disciplines. Third, it describes the advent of deep learning and related methods collectively called unsupervised learning, provides contextual examples both in general medicine and in cardiovascular medicine, and then explains how these methods could be applied to enable precision cardiology and improve patient outcomes. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  18. Studies on improvement of diagnostic ability of computed tomography (CT) in the parenchymatous organs in the upper abdomen, 1. Study on the upper abdominal compression method

    Energy Technology Data Exchange (ETDEWEB)

    Kawata, Ryo [Gifu Univ. (Japan). Faculty of Medicine

    1982-07-01

    1) The upper abdominal compression method was easily applicable for CT examination in practically all the patients. It caused no harm and considerably improved CT diagnosis. 2) The materials used for compression were foamed polystyrene, the Mix-Dp and a water bag. When CT examination was performed to diagnose such lesions as a circumscribed tumor, compression with the Mix-Dp was most useful, and when it was performed for screening examination of upper abdominal diseases, compression with a water bag was most effective. 3) Improvement in the contour-depicting ability of CT by the compression method was most marked at the body of the pancreas, followed by the head of the pancreas and the posterior surface of the left lobe of the liver. Slight improvement was also seen at the tail of the pancreas and the left adrenal gland. 4) Improvement in the organ-depicting ability of CT by the compression method was estimated by a 4-category classification method. The improvement was most marked at the body and the head of the pancreas. Considerable improvement was also observed at the left lobe of the liver and both adrenal glands. Little improvement was obtained at the spleen. When contrast enhancement was combined with the compression method, the improvement was promoted at organs liable to be enhanced, such as the liver and the adrenal glands, while the organ-depicting ability decreased at the pancreas. 5) By comparing CT images with and without compression, continuous infiltration of gastric cancer into the body and tail of the pancreas in 2 cases and retroperitoneal infiltration of a pancreatic tumor in 1 case were diagnosed preoperatively.

  19. Reaction kinetics, reaction products and compressive strength of ternary activators activated slag designed by Taguchi method

    NARCIS (Netherlands)

    Yuan, B.; Yu, Q.L.; Brouwers, H.J.H.

    2015-01-01

    This study investigates the reaction kinetics, the reaction products and the compressive strength of slag activated by ternary activators, namely waterglass, sodium hydroxide and sodium carbonate. Nine mixtures are designed by the Taguchi method considering the factors of sodium carbonate content

  20. DNABIT Compress – Genome compression algorithm

    OpenAIRE

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our ...
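    The core idea of assigning short binary codes to DNA bases can be sketched with plain two-bit packing (an illustration only; DNABIT Compress's segment-wise bit assignment for repetitive and non-repetitive regions is more elaborate than this):

```python
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack(seq):
    """Pack a DNA string at 2 bits per base (4:1 versus one byte per char)."""
    bits = 0
    for base in seq:
        bits = (bits << 2) | CODE[base]
    nbytes = (2 * len(seq) + 7) // 8  # round up to whole bytes
    return len(seq), bits.to_bytes(nbytes, "big")

def unpack(n, data):
    """Recover the base string from the packed form."""
    bits = int.from_bytes(data, "big")
    return "".join("ACGT"[(bits >> (2 * (n - 1 - i))) & 0b11]
                   for i in range(n))
```

    A 10-base string packs into 3 bytes; the base count is stored alongside the bytes so leading `A` (code 00) bases survive the round trip.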

  1. Comparison of artificial intelligence methods and empirical equations to estimate daily solar radiation

    Science.gov (United States)

    Mehdizadeh, Saeid; Behmanesh, Javad; Khalili, Keivan

    2016-08-01

    In the present research, three artificial intelligence methods, namely Gene Expression Programming (GEP), Artificial Neural Networks (ANN) and the Adaptive Neuro-Fuzzy Inference System (ANFIS), as well as 48 empirical equations (10 temperature-based, 12 sunshine-based and 26 based on other meteorological parameters) were used to estimate daily solar radiation in Kerman, Iran over the period 1992-2009. To develop the GEP, ANN and ANFIS models, depending on the empirical equations used, various combinations of minimum air temperature, maximum air temperature, mean air temperature, extraterrestrial radiation, actual sunshine duration, maximum possible sunshine duration, sunshine duration ratio, relative humidity and precipitation were considered as inputs to the intelligent methods. To compare the accuracy of the empirical equations and intelligent models, the root mean square error (RMSE), mean absolute error (MAE), mean absolute relative error (MARE) and determination coefficient (R2) indices were used. The results showed that, in general, the sunshine-based and meteorological-parameters-based scenarios of the ANN and ANFIS models were more accurate than the empirical equations. Moreover, the most accurate method in the studied region was the ANN11 scenario with five inputs; the values of the RMSE, MAE, MARE and R2 indices for this model were 1.850 MJ m-2 day-1, 1.184 MJ m-2 day-1, 9.58% and 0.935, respectively.
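    The four comparison indices used above are standard and easy to state in code (a minimal sketch; `obs` and `pred` are hypothetical observed and predicted series, and R2 is computed with the common residual-sum-of-squares definition):

```python
import math

def error_metrics(obs, pred):
    """RMSE, MAE, MARE (%) and R^2 between observed and predicted series."""
    n = len(obs)
    mean_obs = sum(obs) / n
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / n)
    mae = sum(abs(o - p) for o, p in zip(obs, pred)) / n
    mare = 100.0 * sum(abs(o - p) / o for o, p in zip(obs, pred)) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    r2 = 1.0 - ss_res / ss_tot
    return rmse, mae, mare, r2
```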

  2. Risk assessment for pipelines with active defects based on artificial intelligence methods

    Energy Technology Data Exchange (ETDEWEB)

    Anghel, Calin I. [Department of Chemical Engineering, Faculty of Chemistry and Chemical Engineering, University ' Babes-Bolyai' , Cluj-Napoca (Romania)], E-mail: canghel@chem.ubbcluj.ro

    2009-07-15

    The paper provides another insight into pipeline risk assessment for in-service pressure piping containing defects. Besides the traditional analytical approximation methods and sampling-based methods, the safety index and failure probability of pressure piping containing defects are obtained from a novel type of support vector machine developed in a minimax manner. The safety index or failure probability is computed through a binary classification approach. The procedure, named the classification reliability procedure, involving a link between artificial intelligence and reliability methods, was developed as a user-friendly computer program in the MATLAB language. To reveal the capacity of the proposed procedure, two comparative numerical examples replicating a previous related work and predicting the failure probabilities of pressurized pipelines with defects are presented.

  3. Risk assessment for pipelines with active defects based on artificial intelligence methods

    International Nuclear Information System (INIS)

    Anghel, Calin I.

    2009-01-01

    The paper provides another insight into pipeline risk assessment for in-service pressure piping containing defects. Besides the traditional analytical approximation methods and sampling-based methods, the safety index and failure probability of pressure piping containing defects are obtained from a novel type of support vector machine developed in a minimax manner. The safety index or failure probability is computed through a binary classification approach. The procedure, named the classification reliability procedure, involving a link between artificial intelligence and reliability methods, was developed as a user-friendly computer program in the MATLAB language. To reveal the capacity of the proposed procedure, two comparative numerical examples replicating a previous related work and predicting the failure probabilities of pressurized pipelines with defects are presented.

  4. Macron Formed Liner Compression as a Practical Method for Enabling Magneto-Inertial Fusion

    Energy Technology Data Exchange (ETDEWEB)

    Slough, John

    2011-12-10

    The entry of fusion as a viable, competitive source of power has been stymied by the challenge of finding an economical way to provide for the confinement and heating of the plasma fuel. The main impediment for current nuclear fusion concepts is the complexity and large mass associated with the confinement systems. To take advantage of the smaller-scale, higher-density regime of magnetic fusion, an efficient method for achieving the compressional heating required to reach fusion gain conditions must be found. The very compact, high-energy-density plasmoid commonly referred to as a Field Reversed Configuration (FRC) provides an ideal target for this purpose. To make fusion with the FRC practical, an efficient method for repetitively compressing the FRC to fusion gain conditions is required. A novel approach to be explored in this endeavor is to remotely launch a converging array of small macro-particles (macrons) that merge and form a more massive liner inside the reactor, which then radially compresses and heats the FRC plasmoid to fusion conditions. The closed magnetic field in the target FRC plasmoid suppresses the thermal transport to the confining liner, significantly lowering the imploding power needed to compress the target. With the momentum flux being delivered by an assemblage of low-mass but high-velocity macrons, many of the difficulties encountered with liner implosion power technology are eliminated. The undertaking described in this proposal is to evaluate the feasibility of achieving fusion conditions with this simple and low-cost approach. During phase I, the design and testing of the key components for the creation of the macron formed liner were successfully carried out. Detailed numerical calculations of the merging, formation and radial implosion of the Macron Formed Liner (MFL) were also performed. The phase II effort will focus on an experimental demonstration of the macron launcher at full power, and the demonstration

  5. Diagnostic methods and interpretation of the experiments on microtarget compression in the Iskra-4 device

    International Nuclear Information System (INIS)

    Kochemasov, G.G.

    1992-01-01

    Studies on the problem of laser fusion, mainly based on experiments conducted in the Iskra-4 device, are reviewed. Different approaches to solving the problem of DT-fuel ignition, and methods for diagnosing the characteristics of the laser radiation and of the plasma produced during microtarget heating and compression, are considered.

  6. A compressed sensing based method with support refinement for impulse noise cancelation in DSL

    KAUST Repository

    Quadeer, Ahmed Abdul

    2013-06-01

    This paper presents a compressed sensing based method to suppress impulse noise in digital subscriber line (DSL). The proposed algorithm exploits the sparse nature of the impulse noise and utilizes the carriers, already available in all practical DSL systems, for its estimation and cancelation. Specifically, compressed sensing is used for a coarse estimate of the impulse positions, an a priori information based maximum a posteriori probability (MAP) metric for their refinement, followed by least squares (LS) or minimum mean square error (MMSE) estimation of the impulse amplitudes. Simulation results show that the proposed scheme achieves a higher rate compared to other known sparse estimation algorithms in the literature. The paper also demonstrates the superior performance of the proposed scheme compared to the ITU-T G992.3 standard, which utilizes RS-coding for impulse noise refinement in DSL signals. © 2013 IEEE.

  7. Analysis and development of adjoint-based h-adaptive direct discontinuous Galerkin method for the compressible Navier-Stokes equations

    Science.gov (United States)

    Cheng, Jian; Yue, Huiqiang; Yu, Shengjiao; Liu, Tiegang

    2018-06-01

    In this paper, an adjoint-based high-order h-adaptive direct discontinuous Galerkin method is developed and analyzed for the two-dimensional steady-state compressible Navier-Stokes equations. Particular emphasis is devoted to the analysis of adjoint consistency for three different direct discontinuous Galerkin discretizations: the original direct discontinuous Galerkin method (DDG), the direct discontinuous Galerkin method with interface correction (DDG(IC)) and the symmetric direct discontinuous Galerkin method (SDDG). Theoretical analysis shows that the extra interface correction term adopted in the DDG(IC) and SDDG methods plays a key role in preserving adjoint consistency. To be specific, for the model problem considered in this work, we prove that the original DDG method is not adjoint consistent, while the DDG(IC) and SDDG methods can be adjoint consistent with appropriate treatment of boundary conditions and correct modifications of the underlying output functionals. The performance of the three DDG methods is carefully investigated and evaluated through typical test cases. Based on the theoretical analysis, an adjoint-based h-adaptive DDG(IC) method is further developed and evaluated; numerical experiments show its potential in adjoint-based adaptation for simulating compressible flows.

  8. Algorithms and architectures of artificial intelligence

    CERN Document Server

    Tyugu, E

    2007-01-01

    This book gives an overview of methods developed in artificial intelligence for search, learning, problem solving and decision-making. It gives an overview of algorithms and architectures of artificial intelligence that have reached the degree of maturity when a method can be presented as an algorithm, or when a well-defined architecture is known, e.g. in neural nets and intelligent agents. It can be used as a handbook for a wide audience of application developers who are interested in using artificial intelligence methods in their software products. Parts of the text are rather independent, so that one can look into the index and go directly to a description of a method presented in the form of an abstract algorithm or an architectural solution. The book can be used also as a textbook for a course in applied artificial intelligence. Exercises on the subject are added at the end of each chapter. Neither programming skills nor specific knowledge in computer science are expected from the reader. However, some p...

  9. Raman study of radiation-damaged zircon under hydrostatic compression

    Science.gov (United States)

    Nasdala, Lutz; Miletich, Ronald; Ruschel, Katja; Váczi, Tamás

    2008-12-01

    Pressure-induced changes of Raman band parameters of four natural, gem-quality zircon samples with different degrees of self-irradiation damage, and synthetic ZrSiO4 without radiation damage, have been studied under hydrostatic compression in a diamond anvil cell up to ~10 GPa. Radiation-damaged zircon shows similar up-shifts of internal SiO4 stretching modes at elevated pressures as non-damaged ZrSiO4. Only minor changes of band-widths were observed in all cases. This makes it possible to estimate the degree of radiation damage from the width of the ν3(SiO4) band of zircon inclusions in situ, almost independent from potential “fossilized pressures” or compressive strain acting on the inclusions. An application is the non-destructive analysis of gemstones such as corundum or spinel: broadened Raman bands are a reliable indicator of self-irradiation damage in zircon inclusions, whose presence allows one to exclude artificial color enhancement by high-temperature treatment of the specimen.

  10. An Applied Method for Predicting the Load-Carrying Capacity in Compression of Thin-Wall Composite Structures with Impact Damage

    Science.gov (United States)

    Mitrofanov, O.; Pavelko, I.; Varickis, S.; Vagele, A.

    2018-03-01

    The necessity for considering both strength criteria and postbuckling effects in calculating the load-carrying capacity in compression of thin-wall composite structures with impact damage is substantiated. An original applied method ensuring solution of these problems with an accuracy sufficient for practical design tasks is developed. The main advantage of the method is its applicability in terms of computing resources and the set of initial data required. The results of application of the method to solution of the problem of compression of fragments of thin-wall honeycomb panel damaged by impacts of various energies are presented. After a comparison of calculation results with experimental data, a working algorithm for calculating the reduction in the load-carrying capacity of a composite object with impact damage is adopted.

  11. Multilevel local refinement and multigrid methods for 3-D turbulent flow

    Energy Technology Data Exchange (ETDEWEB)

    Liao, C.; Liu, C. [UCD, Denver, CO (United States); Sung, C.H.; Huang, T.T. [David Taylor Model Basin, Bethesda, MD (United States)

    1996-12-31

    A numerical approach based on multigrid, multilevel local refinement, and preconditioning methods for solving the incompressible Reynolds-averaged Navier-Stokes equations is presented. 3-D turbulent flow around an underwater vehicle is computed. Three multigrid levels and two local refinement grid levels are used. The global grid is 24 x 8 x 12; the first patch is 40 x 16 x 20 and the second patch is 72 x 32 x 36. Fourth-order artificial dissipation is used for numerical stability. The conservative artificial compressibility method is used to further improve convergence. To improve the accuracy at the coarse/fine grid interface of the local refinement, a flux interpolation method at the refined-grid boundary is used. The numerical results are in good agreement with experimental data. The local refinement improves the prediction accuracy significantly, and the flux interpolation method maintains conservation on the composite grid, further improving the prediction accuracy.
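    The artificial compressibility idea named in this record (and in this collection's title) replaces the incompressibility constraint with a pseudo-time pressure equation dp/dτ + β ∇·u = 0 that is marched to steady state alongside the momentum equation. A minimal 1-D periodic sketch follows (illustrative only, not the conservative scheme used in the paper; the grid size, β, ν, time step and forcing are all assumed values):

```python
import math

def solve_ac(n=16, beta=1.0, nu=0.1, dt=0.01, steps=4000):
    """March  u_t = nu*u_xx - p_x + f,  p_t = -beta*u_x  in pseudo-time
    on a periodic 1-D grid with central differences; at steady state the
    divergence u_x vanishes and the pressure gradient balances f."""
    dx = 1.0 / n
    f = [math.cos(2 * math.pi * (i + 0.5) * dx) for i in range(n)]
    u = [0.0] * n
    p = [0.0] * n
    for _ in range(steps):
        du = [nu * (u[(i + 1) % n] - 2 * u[i] + u[i - 1]) / dx ** 2
              - (p[(i + 1) % n] - p[i - 1]) / (2 * dx) + f[i]
              for i in range(n)]
        dp = [-beta * (u[(i + 1) % n] - u[i - 1]) / (2 * dx)
              for i in range(n)]
        u = [ui + dt * d for ui, d in zip(u, du)]
        p = [pi + dt * d for pi, d in zip(p, dp)]
    div = max(abs((u[(i + 1) % n] - u[i - 1]) / (2 * dx)) for i in range(n))
    return u, p, div
```

    With these assumed parameters the acoustic-like transients introduced by the artificial pressure equation are damped by the viscosity, and the divergence residual decays to negligible levels within the pseudo-time horizon.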

  12. Acceleration methods for multi-physics compressible flow

    Science.gov (United States)

    Peles, Oren; Turkel, Eli

    2018-04-01

    In this work we investigate the Runge-Kutta (RK)/Implicit smoother scheme as a convergence accelerator for complex multi-physics flow problems including turbulent, reactive and also two-phase flows. The flows considered are subsonic, transonic and supersonic flows in complex geometries, and can be either steady or unsteady. All of these problems are considered to be very stiff. We then introduce an acceleration method for the compressible Navier-Stokes equations. We start with the multigrid method for pure subsonic flow, including reactive flows. We then add the Rossow-Swanson-Turkel RK/Implicit smoother that enables performing all these complex flow simulations with a reasonable CFL number. We next discuss the RK/Implicit smoother for time-dependent problems and also for low Mach numbers. The preconditioner includes an intrinsic low-Mach-number treatment inside the smoother operator. We also develop a modified Roe scheme with a corresponding flux Jacobian matrix. We then give the extension of the method to real gas and reactive flow. Reactive flows are governed by a system of inhomogeneous Navier-Stokes equations with very stiff source terms, and the extension of the RK/Implicit smoother requires an approximation of the source-term Jacobian. The properties of the Jacobian are very important for the stability of the method; we discuss what the theory of chemical kinetics tells us about the mathematical properties of the Jacobian matrix, focusing on the implication of Le Chatelier's principle for the sign of its diagonal entries. We present the implementation of the method for turbulent flow, using two RANS turbulence models: the one-equation Spalart-Allmaras model and the two-equation k-ω SST model. The last extension is to two-phase flows with a gas as the main phase and an Eulerian representation of a dispersed particle phase (EDP). We present some examples of such flow computations inside a ballistic evaluation

  13. Using the Maturity Method in Predicting the Compressive Strength of Vinyl Ester Polymer Concrete at an Early Age

    Directory of Open Access Journals (Sweden)

    Nan Ji Jin

    2017-01-01

    Full Text Available The compressive strength of vinyl ester polymer concrete is predicted using the maturity method. The compressive strength increased rapidly until the curing age of 24 h and thereafter slowly until the curing age of 72 h. As the MMA content increased, the compressive strength decreased. Furthermore, as the curing temperature decreased, the compressive strength decreased. For vinyl ester polymer concrete, the datum temperature, ranging from −22.5 to −24.6°C, decreased as the MMA content increased. The maturity index equation for cement concrete cannot be applied to polymer concrete, and the maturity of vinyl ester polymer concrete can only be estimated through control of the time interval Δt. Thus, this study introduced a suitable scaled-down factor (n) for determining the maturity of polymer concrete, and a factor of 0.3 was the most suitable. Also, among the dose-response models, the DR-HILL model was found to be applicable for predicting the compressive strength of vinyl ester polymer concrete. For the parameters of the prediction model, combining all data obtained from the three different MMA contents was deemed acceptable. The study results could be useful for the quality control of vinyl ester polymer concrete and for nondestructive prediction of early-age strength.
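    The underlying Nurse-Saul-style maturity index, with a datum temperature and the scaled-down factor n applied to the time interval as described above, can be sketched as follows (the DR-HILL strength model itself is not reproduced, and the datum value and inputs are illustrative assumptions):

```python
def maturity(temps_c, dt_hours, datum_c=-23.0, n=0.3):
    """Nurse-Saul-style maturity index M = sum (T - T0) * (n * dt) over the
    curing history, with each time interval scaled down by the factor n."""
    return sum((t - datum_c) * (n * dt_hours) for t in temps_c)
```

    For instance, 24 hourly readings at a constant 20°C with an assumed mid-range datum of -23°C give M = 43 * 0.3 * 24 = 309.6 °C·h.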

  14. Using a Bayesian Network to Predict L5/S1 Spinal Compression Force from Posture, Hand Load, Anthropometry, and Disc Injury Status

    Directory of Open Access Journals (Sweden)

    Richard E. Hughes

    2017-01-01

    Full Text Available Stochastic biomechanical modeling has become a useful tool most commonly implemented using Monte Carlo simulation, advanced mean value theorem, or Markov chain modeling. Bayesian networks are a novel method for probabilistic modeling in artificial intelligence, risk modeling, and machine learning. The purpose of this study was to evaluate the suitability of Bayesian networks for biomechanical modeling using a static biomechanical model of spinal forces during lifting. A 20-node Bayesian network model was used to implement a well-established static two-dimensional biomechanical model for predicting L5/S1 compression and shear forces. The model was also implemented as a Monte Carlo simulation in MATLAB. Mean L5/S1 spinal compression force estimates differed by 0.8%, and shear force estimates were the same. The model was extended to incorporate evidence about disc injury, which can modify the prior probability estimates to provide posterior probability estimates of spinal compression force. An example showed that changing disc injury status from false to true increased the estimate of mean L5/S1 compression force by 14.7%. This work shows that Bayesian networks can be used to implement a whole-body biomechanical model used in occupational biomechanics and incorporate disc injury.
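    The Monte Carlo baseline such a Bayesian network is compared against can be sketched with a toy single-equivalent-muscle statics calculation (purely illustrative; the moment arms, load distributions and the model form are assumptions, not the validated two-dimensional model used in the study):

```python
import random
import statistics

def l5s1_compression(load_n, load_arm_m, torso_w_n=350.0, torso_arm_m=0.12,
                     muscle_arm_m=0.05):
    """Toy single-equivalent-muscle statics: the extensor muscle force
    balances the moments of the hand load and upper-body weight about
    L5/S1, and compression is that force plus the axial gravitational
    load (all values in newtons and metres; assumed, not from the paper)."""
    muscle_f = (load_n * load_arm_m + torso_w_n * torso_arm_m) / muscle_arm_m
    return muscle_f + load_n + torso_w_n

def monte_carlo_mean(n=20000, seed=1):
    """Propagate normally distributed hand load and load moment arm
    through the toy model; return the mean compression estimate."""
    rng = random.Random(seed)
    return statistics.mean(
        l5s1_compression(rng.gauss(100.0, 10.0), rng.gauss(0.40, 0.05))
        for _ in range(n))
```

    A Bayesian network replaces this forward sampling with inference over a graph of the same variables, which is what lets evidence such as disc injury status update the compression estimate.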

  15. High-speed photographic methods for compression dynamics investigation of laser irradiated shell target

    International Nuclear Information System (INIS)

    Basov, N.G.; Kologrivov, A.A.; Krokhin, O.N.; Rupasov, A.A.; Shikanov, A.S.

    1979-01-01

    Three methods are described for high-speed diagnostics of the compression dynamics of shell targets spherically heated by laser on the installation ''Kal'mar''. The first method is based on direct investigation of the space-time evolution of the critical-density region for Nd-laser emission (N_e ≈ 10^21 cm^-3) by streak photography of the plasma image in second-harmonic light. The second method involves investigation of the time evolution of the second-harmonic spectral distribution by means of a spectrograph coupled with a streak camera. The third method is based on irradiating the shell targets with a special laser pulse having two time-separated intensity maxima, and analyzing the resulting X-ray pin-hole pictures. (author)

  16. Investigation of Surface Pre-Treatment Methods for Wafer-Level Cu-Cu Thermo-Compression Bonding

    Directory of Open Access Journals (Sweden)

    Koki Tanaka

    2016-12-01

    Full Text Available To increase the yield of the wafer-level Cu-Cu thermo-compression bonding method, surface pre-treatment methods are studied that allow the Cu to be exposed to the atmosphere before bonding. To inhibit re-oxidation under atmospheric conditions, the reduced pure Cu surface is treated by H2/Ar plasma, NH3 plasma or a thiol solution, and is thereby covered by Cu hydride, Cu nitride or a self-assembled monolayer (SAM), respectively. A pair of treated wafers is then bonded by the thermo-compression bonding method and evaluated by tensile testing. Results show that the bond strengths of the wafers treated by NH3 plasma and SAM are insufficient, because of surface protection layers such as Cu nitride and SAMs remaining from the pre-treatment. In contrast, the H2/Ar plasma-treated wafer showed the same strength as one treated with formic acid vapor, even when exposed to the atmosphere for 30 min. In thermal desorption spectroscopy (TDS) measurements of the H2/Ar plasma-treated Cu sample, the total amount of detected H2 was 3.1 times greater than for the citric acid-treated one. The TDS results indicate that the modified Cu surface is terminated by chemisorbed hydrogen atoms, which leads to high bonding strength.

  17. Lossy image compression for digital medical imaging systems

    Science.gov (United States)

    Wilhelm, Paul S.; Haynor, David R.; Kim, Yongmin; Nelson, Alan C.; Riskin, Eve A.

    1990-07-01

    Image compression at rates of 10:1 or greater could make PACS much more responsive and economically attractive. This paper describes a protocol for subjective and objective evaluation of the fidelity of compressed/decompressed images to the originals and presents the results of its application to four representative and promising compression methods. The methods examined are predictive pruned tree-structured vector quantization, fractal compression, the discrete cosine transform with equal weighting of block bit allocation, and the discrete cosine transform with human-visual-system weighting of block bit allocation. Vector quantization is theoretically capable of producing the best compressed images, but has proven difficult to implement effectively. It has the advantage that it can reconstruct images quickly through a simple lookup table. Disadvantages are that codebook training is required, the method is computationally intensive, and achieving the optimum performance would require prohibitively long vector dimensions. Fractal compression is a relatively new technique, but has produced satisfactory results while being computationally simple, and it is fast at both image compression and image reconstruction. Discrete cosine transform techniques reproduce images well, but have traditionally been hampered by the need for intensive computing to compress and decompress images. A protocol was developed for side-by-side observer comparison of reconstructed images with originals. Three 1024 x 1024 CR (computed radiography) images and two 512 x 512 X-ray CT images were viewed at six bit rates (0.2, 0.4, 0.6, 0.9, 1.2, and 1.5 bpp for CR, and 1.0, 1.3, 1.6, 1.9, 2.2, 2.5 bpp for X-ray CT) by nine radiologists at the University of Washington Medical Center. The CR images were viewed on a Pixar II Megascan (2560 x 2048) monitor and the CT images on a Sony (1280 x 1024) monitor. The radiologists' subjective evaluations of image fidelity were compared to

  18. First North American 50 cc Total Artificial Heart Experience: Conversion from a 70 cc Total Artificial Heart.

    Science.gov (United States)

    Khalpey, Zain; Kazui, Toshinobu; Ferng, Alice S; Connell, Alana; Tran, Phat L; Meyer, Mark; Rawashdeh, Badi; Smith, Richard G; Sweitzer, Nancy K; Friedman, Mark; Lick, Scott; Slepian, Marvin J; Copeland, Jack G

    2016-01-01

    The 70 cc total artificial heart (TAH) has been utilized as a bridge to transplant (BTT) for biventricular failure. However, use of the 70 cc TAH has been limited to larger patients because of the low output caused by pulmonary and systemic vein compression after chest closure. Therefore, the 50 cc TAH was developed by SynCardia (Tucson, AZ) to accommodate smaller chest cavities. We report the first TAH exchange from a 70 cc to a 50 cc device due to a fit difficulty. The patient's chest could not be closed over a 70 cc TAH, although the patient met the conventional 70 cc TAH fit criteria. We successfully closed the chest with a 50 cc TAH.

  19. Considerations and Algorithms for Compression of Sets

    DEFF Research Database (Denmark)

    Larsson, Jesper

    We consider compression of unordered sets of distinct elements. After a discussion of the general problem, we focus on compressing sets of fixed-length bitstrings in the presence of statistical information. We survey techniques from previous work, suggesting some adjustments, and propose a novel compression algorithm that allows transparent incorporation of various estimates for the probability distribution. Our experimental results support the conclusion that set compression can benefit from incorporating statistics, using our method or variants of previously known techniques.
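
    The gap-plus-variable-length-integer idea that underlies much set compression can be sketched as follows; this is an illustrative baseline under simple assumptions, not the record's statistics-aware algorithm.

```python
# Sketch: compressing a set of distinct non-negative integers by sorting
# and encoding the gaps between consecutive elements as LEB128 varints.
# Illustrative only -- the surveyed techniques additionally exploit
# statistical information about the gap distribution.

def varint_encode(n: int) -> bytes:
    """LEB128-style variable-length encoding of a non-negative integer."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

def compress_set(elements: set) -> bytes:
    """Encode a set of non-negative integers as varint-coded gaps."""
    ordered = sorted(elements)
    out = bytearray(varint_encode(len(ordered)))
    prev = 0
    for x in ordered:
        out += varint_encode(x - prev)   # gaps are small for dense sets
        prev = x
    return bytes(out)

def decompress_set(data: bytes) -> set:
    pos = 0
    def read_varint():
        nonlocal pos
        shift = result = 0
        while True:
            byte = data[pos]
            pos += 1
            result |= (byte & 0x7F) << shift
            if not byte & 0x80:
                return result
            shift += 7
    count = read_varint()
    out, prev = set(), 0
    for _ in range(count):
        prev += read_varint()
        out.add(prev)
    return out
```

    Sorting turns the unordered set into a sequence of small gaps, which the varint coder stores in few bytes; a dense set of 100 consecutive integers compresses to roughly one byte per element here.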

  20. The Effects of Design Strength, Fly Ash Content and Curing Method on Compressive Strength of High Volume Fly Ash Concrete: A Design of Experimental

    Directory of Open Access Journals (Sweden)

    Solikin Mochamad

    2017-01-01

    Full Text Available High volume fly ash concrete has become one of the alternatives for producing green concrete, as it uses waste material and significantly reduces the use of Portland cement in concrete production. Although it uses less cement, its compressive strength is comparable to that of ordinary Portland cement (hereafter OPC) concrete and its durability increases significantly. This paper reports an investigation of the effect of design strength, fly ash content and curing method on the compressive strength of high volume fly ash concrete. The experiment and data analysis were prepared using Minitab, a statistical package for design of experiments. The specimens were concrete cylinders with a diameter of 15 cm and a height of 30 cm, tested for compressive strength at 56 days. The results demonstrate that high volume fly ash concrete can achieve a compressive strength that meets the OPC design strength, especially for high strength concrete. In addition, the best mix proportion to achieve the design strength is the combination of high strength concrete and 50% fly ash content. Moreover, the spraying method is still recommended for on-site curing, as it does not significantly reduce the resulting compressive strength.

  1. A stereo remote sensing feature selection method based on artificial bee colony algorithm

    Science.gov (United States)

    Yan, Yiming; Liu, Pigang; Zhang, Ye; Su, Nan; Tian, Shu; Gao, Fengjiao; Shen, Yi

    2014-05-01

    To improve the efficiency of stereo information for remote sensing classification, a stereo remote sensing feature selection method based on the artificial bee colony algorithm is proposed in this paper. Remote sensing stereo information can be described by a digital surface model (DSM) and an optical image, which contain information on the three-dimensional structure and the optical characteristics, respectively. Firstly, the three-dimensional structure characteristics can be analyzed by 3D-Zernike descriptors (3DZD). However, different parameters of 3DZD describe different complexities of three-dimensional structure, and they need to be optimally selected for the various objects on the ground. Secondly, the features representing the optical characteristics also need to be optimized. If not properly handled, a stereo feature vector composed of 3DZD and image features would contain a great deal of redundant information, which may not improve the classification accuracy and can even have adverse effects. To reduce information redundancy while maintaining or improving the classification accuracy, an optimization framework for this stereo feature selection problem is created, and the artificial bee colony algorithm is introduced to solve the optimization problem. Experimental results show that the proposed method can effectively improve both the computational efficiency and the classification accuracy.
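
    A minimal sketch of binary artificial-bee-colony search over feature masks. The fitness function, neighbour move, and parameter values below are illustrative assumptions, not the record's formulation (which scores 3DZD/image-feature subsets by classification accuracy and redundancy).

```python
# Binary artificial bee colony (ABC) for feature selection: food sources
# are 0/1 masks over features; employed bees do local search, onlookers
# reinforce good sources, scouts replace exhausted ones.
import random

def abc_feature_select(n_features, fitness, colony=10, limit=5, iters=50, seed=0):
    rng = random.Random(seed)
    def random_mask():
        return [rng.randint(0, 1) for _ in range(n_features)]
    def neighbour(mask):
        m = mask[:]
        m[rng.randrange(n_features)] ^= 1       # flip one feature in/out
        return m
    foods = [random_mask() for _ in range(colony)]
    scores = [fitness(f) for f in foods]
    trials = [0] * colony
    best, best_score = max(zip(foods, scores), key=lambda p: p[1])
    for _ in range(iters):
        # employed-bee phase: local search around each food source
        for i in range(colony):
            cand = neighbour(foods[i])
            s = fitness(cand)
            if s > scores[i]:
                foods[i], scores[i], trials[i] = cand, s, 0
            else:
                trials[i] += 1
        # onlooker phase: roulette selection proportional to fitness
        total = sum(scores) or 1.0
        for _ in range(colony):
            r, acc = rng.uniform(0, total), 0.0
            for i in range(colony):
                acc += scores[i]
                if acc >= r:
                    break
            cand = neighbour(foods[i])
            s = fitness(cand)
            if s > scores[i]:
                foods[i], scores[i], trials[i] = cand, s, 0
        # scout phase: abandon sources that stopped improving
        for i in range(colony):
            if trials[i] > limit:
                foods[i] = random_mask()
                scores[i] = fitness(foods[i])
                trials[i] = 0
        i = max(range(colony), key=lambda j: scores[j])
        if scores[i] > best_score:
            best, best_score = foods[i][:], scores[i]
    return best, best_score
```

    With a toy fitness that rewards matching a known "good" subset, the colony converges to that mask within a few dozen iterations; a real objective would wrap a classifier evaluation.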

  2. Experimental Study on Artificial Cemented Sand Prepared with Ordinary Portland Cement with Different Contents

    Directory of Open Access Journals (Sweden)

    Dongliang Li

    2015-07-01

    Full Text Available Artificial cemented sand test samples were prepared by using ordinary Portland cement (OPC) as the cementing agent. Through uniaxial compression tests and consolidated drained triaxial compression tests, the stress-strain curves of the artificial cemented sand with different cementing agent contents (0.01, 0.03, 0.05 and 0.08) under various confining pressures (0.00 MPa, 0.25 MPa, 0.50 MPa and 1.00 MPa) were obtained. Based on the test results, the effect of the cementing agent content (Cv) on the physical and mechanical properties of the artificial cemented sand was analyzed and the Mohr-Coulomb strength theory was modified by using Cv. The research reveals that when Cv is high (e.g., Cv = 0.03, 0.05 or 0.08), the stress-strain curves of the samples indicate a strain softening behavior; under the same confining pressure, as Cv increases, both the peak strength and residual strength of the samples show a significant increase. When Cv is low (e.g., Cv = 0.01), the stress-strain curves of the samples indicate strain hardening behavior. From the test data, a function relating Cv (the cementing agent content) to c′ (the cohesion of the sample) and Δϕ′ (the increment of the angle of shearing resistance) is obtained. Furthermore, through modification of the Mohr-Coulomb strength theory, the effect of cementing agent content on the strength of the cemented sand is demonstrated.

  3. Experimental Study on Artificial Cemented Sand Prepared with Ordinary Portland Cement with Different Contents.

    Science.gov (United States)

    Li, Dongliang; Liu, Xinrong; Liu, Xianshan

    2015-07-02

    Artificial cemented sand test samples were prepared by using ordinary Portland cement (OPC) as the cementing agent. Through uniaxial compression tests and consolidated drained triaxial compression tests, the stress-strain curves of the artificial cemented sand with different cementing agent contents (0.01, 0.03, 0.05 and 0.08) under various confining pressures (0.00 MPa, 0.25 MPa, 0.50 MPa and 1.00 MPa) were obtained. Based on the test results, the effect of the cementing agent content (Cv) on the physical and mechanical properties of the artificial cemented sand was analyzed and the Mohr-Coulomb strength theory was modified by using Cv. The research reveals that when Cv is high (e.g., Cv = 0.03, 0.05 or 0.08), the stress-strain curves of the samples indicate a strain softening behavior; under the same confining pressure, as Cv increases, both the peak strength and residual strength of the samples show a significant increase. When Cv is low (e.g., Cv = 0.01), the stress-strain curves of the samples indicate strain hardening behavior. From the test data, a function relating Cv (the cementing agent content) to c′ (the cohesion of the sample) and Δϕ′ (the increment of the angle of shearing resistance) is obtained. Furthermore, through modification of the Mohr-Coulomb strength theory, the effect of cementing agent content on the strength of the cemented sand is demonstrated.
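
    The criterion being modified here is the classical Mohr-Coulomb relation, which can be stated in a few lines. The function below evaluates the standard form only; the record's contribution, making c′ and Δϕ′ functions of Cv, is not reproduced.

```python
# Classical Mohr-Coulomb shear strength: tau_f = c' + sigma_n * tan(phi'),
# with cohesion c' and friction angle phi' (in degrees here).
import math

def mohr_coulomb_shear_strength(sigma_n, cohesion, phi_deg):
    """Shear strength on a plane under normal stress sigma_n (same units as cohesion)."""
    return cohesion + sigma_n * math.tan(math.radians(phi_deg))
```

    For example, with c′ = 0.2 MPa, ϕ′ = 35° and σn = 0.5 MPa, the shear strength is about 0.55 MPa; in the modified theory both c′ and ϕ′ would first be evaluated from Cv.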

  4. Survey of artificial intelligence methods for detection and identification of component faults in nuclear power plants

    International Nuclear Information System (INIS)

    Reifman, J.

    1997-01-01

    A comprehensive survey of computer-based systems that apply artificial intelligence methods to detect and identify component faults in nuclear power plants is presented. Classification criteria are established that categorize artificial intelligence diagnostic systems according to the types of computing approaches used (e.g., computing tools, computer languages, and shell and simulation programs), the types of methodologies employed (e.g., types of knowledge, reasoning and inference mechanisms, and diagnostic approach), and the scope of the system. The major issues of process diagnostics and computer-based diagnostic systems are identified and cross-correlated with the various categories used for classification. Ninety-five publications are reviewed

  5. Efficient solution of the non-linear Reynolds equation for compressible fluid using the finite element method

    DEFF Research Database (Denmark)

    Larsen, Jon Steffen; Santos, Ilmar

    2015-01-01

    An efficient finite element scheme for solving the non-linear Reynolds equation for compressible fluid coupled to compliant structures is presented. The method is general and fast and can be used in the analysis of airfoil bearings with simplified or complex foil structure models. To illustrate...

  6. Parallelization of one image compression method. Wavelet, Transform, Vector Quantization and Huffman Coding

    International Nuclear Information System (INIS)

    Moravie, Philippe

    1997-01-01

    Today, in the digitized satellite image domain, the need for high dimensions is increasing considerably. To transmit or store such images (more than 6000 by 6000 pixels), their data volume must be reduced, which requires real-time image compression techniques. The large amount of computation required by image compression algorithms prohibits the use of common sequential processors in favor of parallel computers. The study presented here deals with the parallelization of a very efficient image compression scheme based on three techniques: Wavelet Transform (WT), Vector Quantization (VQ) and Entropic Coding (EC). First, we studied and implemented the parallelism of each algorithm in order to determine the architectural characteristics needed for real-time image compression. Then, we defined eight parallel architectures: 3 for the Mallat algorithm (WT), 3 for Tree-Structured Vector Quantization (VQ) and 2 for Huffman Coding (EC). As our system has to be multi-purpose, we chose 3 global architectures from among the 3x3x2 combinations available. Because, for technological reasons, real-time performance is not reached for all compression parameter combinations, we also defined and evaluated two algorithmic optimizations: fixed-point precision and merging entropic coding into vector quantization. As a result, we defined a new multi-purpose multi-SMIMD parallel machine able to compress digitized satellite images in real time. The question of the best suited architecture for real-time image compression was answered by presenting 3 parallel machines, among which one is multi-purpose, embedded, and might be used for other applications on board. (author) [fr
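
    Of the three stages, the entropy-coding stage is the easiest to illustrate. A compact sequential Huffman coder is sketched below; the parallel architectures discussed above map this kind of computation onto multiple processors.

```python
# Sketch of the entropy-coding stage (Huffman coding): build a prefix-free
# code from symbol frequencies, then emit one bit string per symbol.
import heapq
from collections import Counter

def huffman_codes(data: bytes) -> dict:
    """Map each symbol to a prefix-free bit string (as '0'/'1' text)."""
    freq = Counter(data)
    if len(freq) == 1:                      # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # heap entries: (frequency, tiebreak, tree); tree = symbol or (left, right)
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        count += 1
        heapq.heappush(heap, (f1 + f2, count, (t1, t2)))
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes

def huffman_encode(data: bytes) -> str:
    codes = huffman_codes(data)
    return "".join(codes[b] for b in data)
```

    For `b"abracadabra"` the coder uses 23 bits in total, the optimal prefix-code length for those symbol frequencies.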

  7. Extruded Bread Classification on the Basis of Acoustic Emission Signal With Application of Artificial Neural Networks

    Science.gov (United States)

    Świetlicka, Izabela; Muszyński, Siemowit; Marzec, Agata

    2015-04-01

    The presented work covers the problem of developing a method for extruded bread classification with the application of artificial neural networks. Extruded flat graham, corn, and rye breads differing in water activity were used. The breads were subjected to a compression test with simultaneous registration of the acoustic signal. The amplitude-time records were analyzed in both the time and frequency domains. Acoustic emission signal parameters (single energy, counts, amplitude, and duration) were determined for the breads at four water activities: initial (0.362 for rye, 0.377 for corn, and 0.371 for graham bread), 0.432, 0.529, and 0.648. For the classification and clustering processes, radial basis function networks and self-organizing maps (Kohonen networks) were used. The artificial neural networks were examined with respect to their ability to classify or cluster samples according to bread type, water activity value, or both. The best results were achieved by the radial basis function network in classification according to water activity (88%), while the self-organizing map network yielded 81% during bread type clustering.

  8. Method for Multiple Targets Tracking in Cognitive Radar Based on Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Yang Jun

    2016-02-01

    Full Text Available A multiple-target cognitive radar tracking method based on Compressed Sensing (CS) is proposed. In this method, the theory of CS is introduced into the cognitive radar tracking process in a multiple-target scenario. The echo signal is sparsely expressed; the sparse matrix and measurement matrix are designed accordingly, and the reconstruction of the measurement signal under the down-sampling condition is thereby realized. On the receiving end, considering that the traditional particle filter suffers from degeneracy and requires a large number of particles, a particle swarm optimization particle filter is used to track the targets. On the transmitting end, the Posterior Cramér-Rao Bound (PCRB) of the tracking accuracy is derived, and the radar waveform parameters are further cognitively designed using the PCRB. Simulation results show that the proposed method not only reduces the data quantity, but also provides better tracking performance compared with the traditional method.

  9. HVS-based medical image compression

    Energy Technology Data Exchange (ETDEWEB)

    Kai Xie [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)]. E-mail: xie_kai2001@sjtu.edu.cn; Jie Yang [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China); Min Zhuyue [CREATIS-CNRS Research Unit 5515 and INSERM Unit 630, 69621 Villeurbanne (France); Liang Lixiao [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)

    2005-07-01

    Introduction: With the promotion and application of digital imaging technology in the medical domain, the amount of medical image data has grown rapidly, yet the commonly used compression methods cannot achieve satisfactory results. Methods: In this paper, building on existing experiments and conclusions, the lifting scheme is used for wavelet decomposition. The physical and anatomical structure of human vision is considered and the contrast sensitivity function (CSF) is introduced as the main research issue in the human vision system (HVS), and the main design points of the HVS model are then presented. On the basis of the multi-resolution analysis of the wavelet transform, the paper applies the HVS, including the CSF characteristics, to the decorrelating transform and quantization of the image, and proposes a new HVS-based medical image compression model. Results: The experiments were done on medical images including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, the performance of SPIHT with respect to the PSNR metric is significantly higher than that of our algorithm, but the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates, and the coding/decoding time is less than that of SPIHT. Conclusions: The results show that under common objective conditions, our compression algorithm can achieve better subjective visual quality and performs better than SPIHT in terms of compression ratio and coding/decoding time.

  10. HVS-based medical image compression

    International Nuclear Information System (INIS)

    Kai Xie; Jie Yang; Min Zhuyue; Liang Lixiao

    2005-01-01

    Introduction: With the promotion and application of digital imaging technology in the medical domain, the amount of medical image data has grown rapidly, yet the commonly used compression methods cannot achieve satisfactory results. Methods: In this paper, building on existing experiments and conclusions, the lifting scheme is used for wavelet decomposition. The physical and anatomical structure of human vision is considered and the contrast sensitivity function (CSF) is introduced as the main research issue in the human vision system (HVS), and the main design points of the HVS model are then presented. On the basis of the multi-resolution analysis of the wavelet transform, the paper applies the HVS, including the CSF characteristics, to the decorrelating transform and quantization of the image, and proposes a new HVS-based medical image compression model. Results: The experiments were done on medical images including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, the performance of SPIHT with respect to the PSNR metric is significantly higher than that of our algorithm, but the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates, and the coding/decoding time is less than that of SPIHT. Conclusions: The results show that under common objective conditions, our compression algorithm can achieve better subjective visual quality and performs better than SPIHT in terms of compression ratio and coding/decoding time.

  11. Compression of Probabilistic XML Documents

    Science.gov (United States)

    Veldman, Irma; de Keijzer, Ander; van Keulen, Maurice

    Database techniques to store, query and manipulate data that contains uncertainty receive increasing research interest. Such UDBMSs can be classified according to their underlying data model: relational, XML, or RDF. We focus on uncertain XML DBMSs, with the Probabilistic XML model (PXML) of [10,9] as a representative example. The size of a PXML document is obviously a factor in performance. There are PXML-specific techniques to reduce the size, such as a push-down mechanism that produces equivalent but more compact PXML documents. It can only be applied, however, where possibilities are dependent. For normal XML documents there also exist several techniques for compressing a document. Since Probabilistic XML is (a special form of) normal XML, it might benefit from these methods even more. In this paper, we show that existing compression mechanisms can be combined with PXML-specific compression techniques. We also show that the best compression rates are obtained with a combination of a PXML-specific technique and a rather simple generic DAG-compression technique.
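
    Generic DAG-compression of the kind mentioned above can be illustrated by hash-consing: identical subtrees of an XML-like tree are stored once and shared. A minimal sketch, assuming trees are represented as (tag, children) tuples:

```python
# Hash-consing: canonicalize a tree bottom-up so that structurally equal
# subtrees become the same shared object, turning the tree into a DAG.

def to_dag(node, table=None):
    """node = (tag, children_tuple). Returns the canonical shared node."""
    if table is None:
        table = {}
    tag, children = node
    shared = tuple(to_dag(c, table) for c in children)
    key = (tag, shared)
    if key not in table:
        table[key] = key        # first occurrence becomes the shared copy
    return table[key]

def tree_size(node):
    """Node count of the uncompressed tree (repeats counted every time)."""
    return 1 + sum(tree_size(c) for c in node[1])

def dag_size(node, seen=None):
    """Number of distinct nodes actually stored after sharing."""
    if seen is None:
        seen = set()
    if id(node) in seen:
        return 0
    seen.add(id(node))
    return 1 + sum(dag_size(c, seen) for c in node[1])
```

    A document whose sections repeat the same item list shrinks from 7 stored nodes to 3; for PXML, repeated possibility subtrees benefit in the same way.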

  12. Prediction of enthalpy of fusion of pure compounds using an Artificial Neural Network-Group Contribution method

    International Nuclear Information System (INIS)

    Gharagheizi, Farhad; Salehi, Gholam Reza

    2011-01-01

    Highlights: → An Artificial Neural Network-Group Contribution method is presented for prediction of the enthalpy of fusion of pure compounds at their normal melting point. → Validity of the model is confirmed using a large evaluated data set containing 4157 pure compounds. → The average percent error of the model is 2.65% in comparison with the experimental data. - Abstract: In this work, the Artificial Neural Network-Group Contribution (ANN-GC) method is applied to estimate the enthalpy of fusion of pure chemical compounds at their normal melting point. 4157 pure compounds from various chemical families are investigated to propose a comprehensive and predictive model. The obtained results show a Squared Correlation Coefficient (R²) of 0.999, a Root Mean Square Error of 0.82 kJ/mol, and an average absolute deviation lower than 2.65% for the estimated properties relative to existing experimental values.

  13. Fixed-Rate Compressed Floating-Point Arrays.

    Science.gov (United States)

    Lindstrom, Peter

    2014-12-01

    Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.
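
    The fixed-rate principle, every block coded in the same number of bits so that block k starts at a computable offset, can be illustrated with a much simpler coder than the lifted transform described above: one shared exponent per block of 4 values plus a fixed-width integer per value. This is only a toy stand-in for the actual scheme.

```python
# Toy fixed-rate block coder: each block of 4 floats is stored as a shared
# exponent plus four fixed-width signed integers, so every block occupies
# the same space and supports random access by index.
import math

def encode_block(block, bits_per_value):
    """Quantize 4 floats against a shared exponent; always the same size."""
    emax = max((math.frexp(abs(v))[1] for v in block if v), default=0)
    scale = 2.0 ** (bits_per_value - 1 - emax)
    lo, hi = -(1 << (bits_per_value - 1)), (1 << (bits_per_value - 1)) - 1
    q = [max(lo, min(hi, round(v * scale))) for v in block]
    return emax, q

def decode_block(emax, q, bits_per_value):
    scale = 2.0 ** (bits_per_value - 1 - emax)
    return [x / scale for x in q]
```

    The quantization error is bounded by half an ulp at the block's largest exponent, which is the "near-lossless" trade-off that buys fixed storage per block.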

  14. Data Collection Method for Mobile Control Sink Node in Wireless Sensor Network Based on Compressive Sensing

    Directory of Open Access Journals (Sweden)

    Ling Yongfa

    2016-01-01

    Full Text Available The paper proposes a data collection method for a mobile control sink node in a wireless sensor network based on compressive sensing. This method, following a regular track, selects the optimal data collection points in the monitoring area via the disc method, calculates the shortest path by using a quantum genetic algorithm, and hence determines the data collection route. Simulation results show that this method achieves higher network throughput and better energy efficiency, and is capable of collecting a huge amount of data with balanced energy consumption in the network.

  15. Three-dimensional numerical simulation for plastic injection-compression molding

    Science.gov (United States)

    Zhang, Yun; Yu, Wenjie; Liang, Junjie; Lang, Jianlin; Li, Dequn

    2018-03-01

    Compared with conventional injection molding, injection-compression molding can mold optical parts with higher precision and lower residual flow stress. However, the melt flow process in a closed cavity becomes more complex because of the moving cavity boundary during compression and the nonlinear problems caused by the non-Newtonian polymer melt. In this study, a 3D simulation method was developed for injection-compression molding. In this method, an arbitrary Lagrangian-Eulerian formulation was introduced to model the moving-boundary flow problem in the compression stage. The non-Newtonian characteristics and compressibility of the polymer melt were considered. The melt flow and pressure distribution in the cavity were investigated by using the proposed simulation method and compared with those of injection molding. Results reveal that the fountain flow effect becomes significant when the cavity thickness increases during compression. The back flow also plays an important role in the flow pattern and redistribution of cavity pressure. The discrepancy in pressures at different points along the flow path is complicated, rather than monotonically decreasing as in injection molding.

  16. Optimization of the segmented method for optical compression and multiplexing system

    Science.gov (United States)

    Al Falou, Ayman

    2002-05-01

    Because of the constantly increasing demand for image exchange, and despite the ever increasing bandwidth of networks, compression and multiplexing of images are becoming inseparable from their generation and display. For high resolution real-time motion pictures, performing compression electronically requires complex and time-consuming processing units. By contrast, through its inherently two-dimensional character, coherent optics is well suited to such processes, which are basically two-dimensional data handling in the Fourier domain. Additionally, the main limiting factor, the maximum frame rate, is vanishing because of recent improvements in spatial light modulator technology. The purpose of this communication is to benefit from recent optical correlation algorithms. The segmented filtering used to store multiple references in a given space-bandwidth-product optical filter can be applied to networks to compress and multiplex images in a given bandwidth channel.

  17. Spectral Element Method for the Simulation of Unsteady Compressible Flows

    Science.gov (United States)

    Diosady, Laslo Tibor; Murman, Scott M.

    2013-01-01

    This work uses a discontinuous-Galerkin spectral-element method (DGSEM) to solve the compressible Navier-Stokes equations [1-3]. The inviscid flux is computed using the approximate Riemann solver of Roe [4]. The viscous fluxes are computed using the second form of Bassi and Rebay (BR2) [5] in a manner consistent with the spectral-element approximation. The method of lines with the classical 4th-order explicit Runge-Kutta scheme is used for time integration. Results for polynomial orders up to p = 15 (16th order) are presented. The code is parallelized using the Message Passing Interface (MPI). The computations presented in this work are performed using the Sandy Bridge nodes of the NASA Pleiades supercomputer at NASA Ames Research Center. Each Sandy Bridge node consists of 2 eight-core Intel Xeon E5-2670 processors with a clock speed of 2.6 GHz and 2 GB per-core memory. On a Sandy Bridge node the Tau Benchmark [6] runs in a time of 7.6 s.

  18. An efficient finite differences method for the computation of compressible, subsonic, unsteady flows past airfoils and panels

    Science.gov (United States)

    Colera, Manuel; Pérez-Saborid, Miguel

    2017-09-01

    A finite differences scheme is proposed in this work to compute, in the time domain, the compressible, subsonic, unsteady flow past an aerodynamic airfoil using linearized potential theory. It improves and extends the original method proposed in this journal by Hariharan, Ping and Scott [1] by considering: (i) a non-uniform mesh, (ii) an implicit time integration algorithm, (iii) a vectorized implementation and (iv) the coupled airfoil dynamics and fluid dynamic loads. First, we have formulated the method for cases in which the airfoil motion is given. The scheme has been tested on well-known problems in unsteady aerodynamics, such as the response to a sudden change of the angle of attack and to a harmonic motion of the airfoil, and has proved to be more accurate and efficient than other finite differences and vortex-lattice methods found in the literature. Secondly, we have coupled our method to the equations governing the airfoil dynamics in order to numerically solve problems where the airfoil motion is unknown a priori, as happens, for example, in the cases of the flutter and the divergence of a typical section of a wing or of a flexible panel. Apparently, this is the first self-consistent and easy-to-implement numerical analysis in the time domain of the compressible, linearized coupled dynamics of the (generally flexible) airfoil-fluid system carried out in the literature. The results for the particular case of a rigid airfoil show excellent agreement with those reported by other authors, whereas those obtained for the case of a cantilevered flexible airfoil in compressible flow seem to be original or, at least, not well known.

  19. Development and validation of a sensitive LC-MS-MS method for the simultaneous determination of multicomponent contents in artificial Calculus Bovis.

    Science.gov (United States)

    Peng, Can; Tian, Jixin; Lv, Mengying; Huang, Yin; Tian, Yuan; Zhang, Zunjian

    2014-02-01

    Artificial Calculus Bovis is a major substitute in clinical treatment for Niuhuang, a widely used, efficacious but rare traditional Chinese medicine. However, the chemical structures and physicochemical properties of its components are complicated, which makes it difficult to establish a set of effective and comprehensive methods for its identification and quality control. In this study, a simple, sensitive and reliable liquid chromatography-tandem mass spectrometry method was successfully developed and validated for the simultaneous determination of bilirubin, taurine and major bile acids (including six unconjugated bile acids, two glycine-conjugated bile acids and three taurine-conjugated bile acids) in artificial Calculus Bovis using a Zorbax SB-C18 column with a gradient elution of methanol and 10 mmol/L ammonium acetate in aqueous solution (adjusted to pH 3.0 with formic acid). The mass spectra were obtained in the negative ion mode using dehydrocholic acid as the internal standard. The content of each analyte in artificial Calculus Bovis was determined by monitoring specific ion pairs in the selected reaction monitoring mode. All analytes demonstrated excellent linearity (r² > 0.994) over a wide dynamic range, and 10 batches of samples from different sources were further analyzed. This study provides a comprehensive method for the quality control of artificial Calculus Bovis.

  20. Predicting the fidelity of JPEG2000 compressed CT images using DICOM header information

    International Nuclear Information System (INIS)

    Kim, Kil Joong; Kim, Bohyoung; Lee, Hyunna; Choi, Hosik; Jeon, Jong-June; Ahn, Jeong-Hwan; Lee, Kyoung Ho

    2011-01-01

    Purpose: To propose multiple logistic regression (MLR) and artificial neural network (ANN) models constructed using digital imaging and communications in medicine (DICOM) header information in predicting the fidelity of Joint Photographic Experts Group (JPEG) 2000 compressed abdomen computed tomography (CT) images. Methods: Our institutional review board approved this study and waived informed patient consent. Using a JPEG2000 algorithm, 360 abdomen CT images were compressed reversibly (n = 48, as negative control) or irreversibly (n = 312) to one of different compression ratios (CRs) ranging from 4:1 to 10:1. Five radiologists independently determined whether the original and compressed images were distinguishable or indistinguishable. The 312 irreversibly compressed images were divided randomly into training (n = 156) and testing (n = 156) sets. The MLR and ANN models were constructed regarding the DICOM header information as independent variables and the pooled radiologists' responses as dependent variable. As independent variables, we selected the CR (DICOM tag number: 0028, 2112), effective tube current-time product (0018, 9332), section thickness (0018, 0050), and field of view (0018, 0090) among the DICOM tags. Using the training set, an optimal subset of independent variables was determined by backward stepwise selection in a four-fold cross-validation scheme. The MLR and ANN models were constructed with the determined independent variables using the training set. The models were then evaluated on the testing set by using receiver-operating-characteristic (ROC) analysis regarding the radiologists' pooled responses as the reference standard and by measuring Spearman rank correlation between the model prediction and the number of radiologists who rated the two images as distinguishable. Results: The CR and section thickness were determined as the optimal independent variables. 
The areas under the ROC curve for the MLR and ANN predictions were 0.91 (95% CI; 0
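The MLR half of the approach described above can be sketched with a plain logistic regression fitted by gradient descent. Everything below is synthetic and illustrative (the coefficients, value ranges, and data are invented, not the study's); it only shows the shape of a model mapping CR and section thickness to the probability that radiologists can distinguish the compressed image from the original.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
cr = rng.uniform(4, 10, n)          # compression ratio, 4:1 .. 10:1 as in the study
thickness = rng.uniform(1, 5, n)    # section thickness in mm (invented range)

# Synthetic ground truth: heavier compression and thinner sections are assumed
# (for illustration only) to make the image pair easier to distinguish.
true_logit = 0.9 * cr - 0.8 * thickness - 4.0
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-true_logit))).astype(float)

# Fit logistic regression by plain gradient descent on the log-loss.
X = np.column_stack([np.ones(n), cr, thickness])
w = np.zeros(3)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.01 * X.T @ (p - y) / n

def predict(cr_val, th_val):
    """Predicted probability that radiologists can tell the images apart."""
    return 1 / (1 + np.exp(-(w[0] + w[1] * cr_val + w[2] * th_val)))

# Heavier compression should raise the predicted probability of detection.
print(predict(4, 3), predict(10, 3))
```

In the study the fitted model would then be scored on a held-out testing set with ROC analysis rather than inspected directly.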

  1. Artificial intelligence in medicine.

    OpenAIRE

    Ramesh, A. N.; Kambhampati, C.; Monson, J. R. T.; Drew, P. J.

    2004-01-01

INTRODUCTION: Artificial intelligence is a branch of computer science capable of analysing complex medical data. Its potential to exploit meaningful relationships within a data set can be used in diagnosis, treatment and outcome prediction in many clinical scenarios. METHODS: Medline and internet searches were carried out using the keywords 'artificial intelligence' and 'neural networks (computer)'. Further references were obtained by cross-referencing from key articles. An overview of ...

  2. Feasibility of gas-discharge and optical methods of creating artificial ozone layers of the earth

    International Nuclear Information System (INIS)

    Batanov, G.M.; Kossyi, I.A.; Matveev, A.A.; Silakov, V.P.

    1996-01-01

    Gas-discharge (microwave) and optical (laser) methods of generating large-scale artificial ozone layers in the stratosphere are analyzed. A kinetic model is developed to calculate the plasma-chemical consequences of discharges localized in the stratosphere. Computations and simple estimates indicate that, in order to implement gas-discharge and optical methods, the operating power of ozone-producing sources should be comparable to or even much higher than the present-day power production throughout the world. Consequently, from the engineering and economic standpoints, microwave and laser methods cannot be used to repair large-scale ozone 'holes'

  3. Standard test method for compressive (crushing) strength of fired whiteware materials

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2006-01-01

    1.1 This test method covers two test procedures (A and B) for the determination of the compressive strength of fired whiteware materials. 1.2 Procedure A is generally applicable to whiteware products of low- to moderately high-strength levels (up to 150 000 psi or 1030 MPa). 1.3 Procedure B is specifically devised for testing of high-strength ceramics (over 100 000 psi or 690 MPa). 1.4 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  4. Comparison of three artificial digestion methods for detection of non-encapsulated Trichinella pseudospiralis larvae in pork.

    Science.gov (United States)

    Nöckler, K; Reckinger, S; Szabó, I; Maddox-Hyttel, C; Pozio, E; van der Giessen, J; Vallée, I; Boireau, P

    2009-02-23

In a ring trial involving five laboratories (A, B, C, D, and E), three different methods of artificial digestion were compared for the detection of non-encapsulated Trichinella pseudospiralis larvae in minced meat. Each sample panel consisted of ten 1 g minced pork samples. All samples in each panel were derived from a bulk meat preparation with a nominal value of either 7 or 17 larvae per g (lpg). Samples were tested for the number of muscle larvae using the magnetic stirrer method (labs A, B, and E), the stomacher method (lab B), and Trichomatic 35 (labs C and D). T. pseudospiralis larvae were found in all 120 samples tested. For samples with 7 lpg, larval recoveries were significantly higher using the stomacher method versus the magnetic stirrer method, but there were no significant differences for samples with 17 lpg. In comparing laboratory results irrespective of the method used, lab B detected a significantly higher number of larvae than lab E for samples with 7 lpg, and lab E detected significantly fewer larvae than labs A, B, and D in samples with 17 lpg. The lowest overall variation for quantitative results (i.e. larval recoveries which were outside the tolerance range) was achieved by using the magnetic stirrer method (22%), followed by the stomacher method (25%), and Trichomatic 35 (30%). Results revealed that T. pseudospiralis larvae in samples with a nominal value of 7 and 17 lpg can be detected by all three methods of artificial digestion.

  5. DELIMINATE--a fast and efficient method for loss-less compression of genomic sequences: sequence analysis.

    Science.gov (United States)

    Mohammed, Monzoorul Haque; Dutta, Anirban; Bose, Tungadri; Chadaram, Sudha; Mande, Sharmila S

    2012-10-01

    An unprecedented quantity of genome sequence data is currently being generated using next-generation sequencing platforms. This has necessitated the development of novel bioinformatics approaches and algorithms that not only facilitate a meaningful analysis of these data but also aid in efficient compression, storage, retrieval and transmission of huge volumes of the generated data. We present a novel compression algorithm (DELIMINATE) that can rapidly compress genomic sequence data in a loss-less fashion. Validation results indicate relatively higher compression efficiency of DELIMINATE when compared with popular general purpose compression algorithms, namely, gzip, bzip2 and lzma. Linux, Windows and Mac implementations (both 32 and 64-bit) of DELIMINATE are freely available for download at: http://metagenomics.atc.tcs.com/compression/DELIMINATE. sharmila@atc.tcs.com Supplementary data are available at Bioinformatics online.
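The baseline comparison mentioned in the abstract above — general-purpose codecs applied to nucleotide data — can be reproduced in miniature with Python's standard library. The sequence below is synthetic, and DELIMINATE itself is not reimplemented here; the sketch only shows the lossless round trip and size comparison such a validation involves.

```python
import gzip, bz2, lzma

# Synthetic nucleotide sequence with some repetitive structure.
seq = ("ACGT" * 1000 + "AAAACCCCGGGGTTTT" * 500).encode("ascii")

for name, mod in (("gzip", gzip), ("bzip2", bz2), ("lzma", lzma)):
    packed = mod.compress(seq)
    # A compressor for sequence data must be lossless: verify the round trip.
    assert mod.decompress(packed) == seq
    print(f"{name}: {len(seq)} -> {len(packed)} bytes")
```

A specialized genomic compressor would be benchmarked by comparing its output size against these general-purpose baselines on real FASTA/FASTQ files.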

  6. Depicting mass flow rate of R134a /LPG refrigerant through straight and helical coiled adiabatic capillary tubes of vapor compression refrigeration system using artificial neural network approach

    Science.gov (United States)

    Gill, Jatinder; Singh, Jagdev

    2018-07-01

In this work, an experimental investigation is carried out with an R134a and LPG refrigerant mixture for depicting mass flow rate through straight and helical coil adiabatic capillary tubes in a vapor compression refrigeration system. Various experiments were conducted under steady-state conditions, by changing capillary tube length, inner diameter, coil diameter and degree of subcooling. The results showed that the mass flow rate through the helical coil capillary tube was about 5-16% lower than through the straight capillary tube. Dimensionless correlation and Artificial Neural Network (ANN) models were developed to predict mass flow rate. It was found that the dimensionless correlation and ANN model predictions agreed well with experimental results, yielding an absolute fraction of variance of 0.961 and 0.988, a root mean square error of 0.489 and 0.275, and a mean absolute percentage error of 4.75% and 2.31%, respectively. The results suggested that the ANN model gives better statistical predictions than the dimensionless correlation model.
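The three error statistics quoted in the abstract (absolute fraction of variance, i.e. R², root mean square error, and mean absolute percentage error) are standard goodness-of-fit measures. A minimal sketch with illustrative numbers, not the study's data:

```python
import math

def r_squared(y, yhat):
    """Absolute fraction of variance explained by the predictions."""
    mean_y = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - mean_y) ** 2 for a in y)
    return 1 - ss_res / ss_tot

def rmse(y, yhat):
    """Root mean square error."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def mape(y, yhat):
    """Mean absolute percentage error."""
    return 100 * sum(abs((a - b) / a) for a, b in zip(y, yhat)) / len(y)

y    = [10.0, 12.0, 15.0, 20.0]   # illustrative measured mass flow rates
yhat = [ 9.8, 12.5, 14.6, 20.4]   # illustrative model predictions
print(r_squared(y, yhat), rmse(y, yhat), mape(y, yhat))
```

A model with R² closer to 1 and smaller RMSE/MAPE, like the ANN in the abstract, predicts the measurements more faithfully.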

  7. Artificial Consciousness or Artificial Intelligence

    OpenAIRE

    Spanache Florin

    2017-01-01

Artificial intelligence is a tool designed by people for the gratification of their own creative ego, so we cannot confuse conscience with intelligence, nor even intelligence in its human representation with conscience. They are all different concepts and they have different uses. Philosophically, there are differences between autonomous people and automatic artificial intelligence. This is the difference between intelligence and artificial intelligence, autonomous versus a...

  8. Alternatives to the discrete cosine transform for irreversible tomographic image compression

    International Nuclear Information System (INIS)

    Villasenor, J.D.

    1993-01-01

Full-frame irreversible compression of medical images is currently being performed using the discrete cosine transform (DCT). Although the DCT is the optimum fast transform for video compression applications, the authors show here that it is outperformed by the discrete Fourier transform (DFT) and discrete Hartley transform (DHT) for images obtained using positron emission tomography (PET) and magnetic resonance imaging (MRI), and possibly for certain types of digitized radiographs. The difference occurs because PET and MRI images are characterized by a roughly circular region D of non-zero intensity bounded by a region R in which the image intensity is essentially zero. Clipping R to its minimum extent can reduce the number of low-intensity pixels, but the practical requirement that images be stored on a rectangular grid means that a significant region of zero intensity must remain an integral part of the image to be compressed. With this constraint imposed, the DCT loses its advantage over the DFT because neither transform introduces significant artificial discontinuities. The DFT and DHT have the further important advantage of requiring less computation time than the DCT
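The argument above turns on the DFT's implicit periodic extension: a signal that is non-zero at its borders acquires an artificial wrap-around discontinuity and hence a slowly decaying spectrum, while a zero-bordered signal (like a PET/MRI image padded with zeros) does not. A 1-D toy demonstration, with invented stand-ins for image rows:

```python
import numpy as np

n = 256
ramp = np.linspace(0.0, 1.0, n)   # non-zero at borders -> jump under periodic extension
bump = np.hanning(n)              # zero at borders, smooth -> no artificial discontinuity

def high_freq_fraction(x):
    """Fraction of spectral energy in the upper half of the DFT spectrum."""
    mag2 = np.abs(np.fft.rfft(x)) ** 2
    return mag2[len(mag2) // 2:].sum() / mag2.sum()

# The wrap-around jump of the ramp leaks far more energy into high frequencies,
# which is exactly what hurts transform-coding efficiency.
print(high_freq_fraction(ramp), high_freq_fraction(bump))
```

The DCT's usual edge over the DFT comes from removing this boundary discontinuity; when the image is already zero at its borders, that edge disappears, as the abstract argues.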

  9. Comparison of changes in tidal volume associated with expiratory rib cage compression and expiratory abdominal compression in patients on prolonged mechanical ventilation

    OpenAIRE

    Morino, Akira; Shida, Masahiro; Tanaka, Masashi; Sato, Kimihiro; Seko, Toshiaki; Ito, Shunsuke; Ogawa, Shunichi; Takahashi, Naoaki

    2015-01-01

    [Purpose] This study was designed to compare and clarify the relationship between expiratory rib cage compression and expiratory abdominal compression in patients on prolonged mechanical ventilation, with a focus on tidal volume. [Subjects and Methods] The subjects were 18 patients on prolonged mechanical ventilation, who had undergone tracheostomy. Each patient received expiratory rib cage compression and expiratory abdominal compression; the order of implementation was randomized. Subjects ...

  10. Soft computing in artificial intelligence

    CERN Document Server

    Matson, Eric

    2014-01-01

This book explores the concept of artificial intelligence based on knowledge-based algorithms. Given the current hardware and software technologies and artificial intelligence theories, we can think about how efficiently to provide a solution, how best to implement a model and how successfully to achieve it. This edition provides readers with the most recent progress and novel solutions in artificial intelligence. This book aims at presenting the research results and solutions of applications relevant to artificial intelligence technologies. We propose to researchers and practitioners some methods to advance intelligent systems and apply artificial intelligence to specific or general purposes. This book consists of 13 contributions that feature fuzzy (r, s)-minimal pre- and β-open sets, handling big co-occurrence matrices, Xie-Beni-type fuzzy cluster validation, fuzzy c-regression models, combination of genetic algorithm and ant colony optimization, building expert system, fuzzy logic and neural network, ind...

  11. A new optimization method using a compressed sensing inspired solver for real-time LDR-brachytherapy treatment planning

    International Nuclear Information System (INIS)

    Guthier, C; Aschenbrenner, K P; Buergy, D; Ehmann, M; Wenz, F; Hesser, J W

    2015-01-01

    This work discusses a novel strategy for inverse planning in low dose rate brachytherapy. It applies the idea of compressed sensing to the problem of inverse treatment planning and a new solver for this formulation is developed. An inverse planning algorithm was developed incorporating brachytherapy dose calculation methods as recommended by AAPM TG-43. For optimization of the functional a new variant of a matching pursuit type solver is presented. The results are compared with current state-of-the-art inverse treatment planning algorithms by means of real prostate cancer patient data. The novel strategy outperforms the best state-of-the-art methods in speed, while achieving comparable quality. It is able to find solutions with comparable values for the objective function and it achieves these results within a few microseconds, being up to 542 times faster than competing state-of-the-art strategies, allowing real-time treatment planning. The sparse solution of inverse brachytherapy planning achieved with methods from compressed sensing is a new paradigm for optimization in medical physics. Through the sparsity of required needles and seeds identified by this method, the cost of intervention may be reduced. (paper)
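The abstract describes its solver only as "a new variant of a matching pursuit type solver". As a generic illustration of that family (not the paper's algorithm), here is a small orthogonal matching pursuit on a random dictionary; the dictionary, signal, and sparsity level are invented. The sparse support plays the role of the few needles and seeds the planner must select.

```python
import numpy as np

rng = np.random.default_rng(1)
D = rng.normal(size=(100, 200))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms

x_true = np.zeros(200)
x_true[[3, 77, 150]] = [2.0, -1.5, 1.0]    # sparse ground truth
y = D @ x_true                             # observed signal

support, r = [], y.copy()
for _ in range(6):                         # a few greedy iterations suffice
    k = int(np.argmax(np.abs(D.T @ r)))    # atom best correlated with residual
    if k not in support:
        support.append(k)
    # Re-fit the coefficients on the whole selected support (the "orthogonal" step).
    coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
    r = y - D[:, support] @ coef

print(sorted(support), np.linalg.norm(r))
```

The greedy loop touches only a handful of atoms per iteration, which is why matching-pursuit-style solvers can reach the sub-millisecond speeds the abstract reports.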

  12. A new optimization method using a compressed sensing inspired solver for real-time LDR-brachytherapy treatment planning

    Science.gov (United States)

    Guthier, C.; Aschenbrenner, K. P.; Buergy, D.; Ehmann, M.; Wenz, F.; Hesser, J. W.

    2015-03-01

    This work discusses a novel strategy for inverse planning in low dose rate brachytherapy. It applies the idea of compressed sensing to the problem of inverse treatment planning and a new solver for this formulation is developed. An inverse planning algorithm was developed incorporating brachytherapy dose calculation methods as recommended by AAPM TG-43. For optimization of the functional a new variant of a matching pursuit type solver is presented. The results are compared with current state-of-the-art inverse treatment planning algorithms by means of real prostate cancer patient data. The novel strategy outperforms the best state-of-the-art methods in speed, while achieving comparable quality. It is able to find solutions with comparable values for the objective function and it achieves these results within a few microseconds, being up to 542 times faster than competing state-of-the-art strategies, allowing real-time treatment planning. The sparse solution of inverse brachytherapy planning achieved with methods from compressed sensing is a new paradigm for optimization in medical physics. Through the sparsity of required needles and seeds identified by this method, the cost of intervention may be reduced.

  13. System using data compression and hashing adapted for use for multimedia encryption

    Science.gov (United States)

    Coffland, Douglas R [Livermore, CA

    2011-07-12

    A system and method is disclosed for multimedia encryption. Within the system of the present invention, a data compression module receives and compresses a media signal into a compressed data stream. A data acquisition module receives and selects a set of data from the compressed data stream. And, a hashing module receives and hashes the set of data into a keyword. The method of the present invention includes the steps of compressing a media signal into a compressed data stream; selecting a set of data from the compressed data stream; and hashing the set of data into a keyword.
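The three modules of the claimed system (compress, select, hash) can be sketched with standard-library stand-ins. zlib and SHA-256 are assumptions of this sketch, since the abstract does not name specific compression or hash algorithms, and the offset/length of the selected subset are invented parameters.

```python
import zlib
import hashlib

def derive_keyword(media: bytes, offset: int = 2, length: int = 32) -> str:
    # 1. Data compression module: compress the media signal into a data stream.
    stream = zlib.compress(media)
    # 2. Data acquisition module: select a set of data from the stream
    #    (offset/length here are arbitrary illustrative choices).
    subset = stream[offset:offset + length]
    # 3. Hashing module: hash the selected data into a keyword.
    return hashlib.sha256(subset).hexdigest()

kw = derive_keyword(b"example media signal " * 100)
print(kw)
```

Because the keyword is derived from the compressed stream itself, any party holding the same media and parameters can reproduce it, which is the property an encryption key-derivation scheme of this shape relies on.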

  14. Performance of artificial neural networks and genetical evolved artificial neural networks unfolding techniques

    International Nuclear Information System (INIS)

    Ortiz R, J. M.; Martinez B, M. R.; Vega C, H. R.; Gallego D, E.; Lorente F, A.; Mendez V, R.; Los Arcos M, J. M.; Guerrero A, J. E.

    2011-01-01

With the Bonner sphere spectrometer, the neutron spectrum is obtained through an unfolding procedure. Monte Carlo methods, regularization, parametrization, least-squares, and maximum entropy are some of the techniques utilized for unfolding. In the last decade, methods based on artificial intelligence technology have been used. Approaches based on genetic algorithms and artificial neural networks (ANN) have been developed in order to overcome the drawbacks of previous techniques. Nevertheless, despite its advantages, the ANN approach still has some drawbacks, mainly in the design process of the network, e.g. the optimal selection of the architectural and learning parameters. In recent years, hybrid technologies combining ANNs and genetic algorithms have been utilized to address these drawbacks. In this work, several ANN topologies were trained and tested using ANNs and genetically evolved artificial neural networks, with the aim of unfolding neutron spectra from the count rates of a Bonner sphere spectrometer. A comparative study of both procedures has been carried out. (Author)

  15. Coabsorbent and thermal recovery compression heat pumping technologies

    CERN Document Server

    Staicovici, Mihail-Dan

    2014-01-01

    This book introduces two of the most exciting heat pumping technologies, the coabsorbent and the thermal recovery (mechanical vapor) compression, characterized by a high potential in primary energy savings and environmental protection. New cycles with potential applications of nontruncated, truncated, hybrid truncated, and multi-effect coabsorbent types are introduced in this work.   Thermal-to-work recovery compression (TWRC) is the first of two particular methods explored here, including how superheat is converted into work, which diminishes the compressor work input. In the second method, thermal-to-thermal recovery compression (TTRC), the superheat is converted into useful cooling and/or heating, and added to the cycle output effect via the coabsorbent technology. These and other methods of discharge gas superheat recovery are analyzed for single-, two-, three-, and multi-stage compression cooling and heating, ammonia and ammonia-water cycles, and the effectiveness results are given.  The author presen...

  16. A new method for simplification and compression of 3D meshes

    OpenAIRE

    Attene, Marco

    2001-01-01

    We focus on the lossy compression of manifold triangle meshes. Our SwingWrapper approach partitions the surface of an original mesh M into simply-connected regions, called triangloids. We compute a new mesh M'. Each triangle of M' is a close approximation of a pseudo-triangle of M. By construction, the connectivity of M' is fairly regular and can be compressed to less than a bit per triangle using EdgeBreaker or one of the other recently developed schemes. The locations of the vertices of M' ...

  17. Comparison of classical statistical methods and artificial neural network in traffic noise prediction

    International Nuclear Information System (INIS)

    Nedic, Vladimir; Despotovic, Danijela; Cvetanovic, Slobodan; Despotovic, Milan; Babic, Sasa

    2014-01-01

Traffic is the main source of noise in urban environments and significantly affects human mental and physical health and labor productivity. Therefore it is very important to model the noise produced by various vehicles. Techniques for traffic noise prediction are mainly based on regression analysis, which generally is not good enough to describe the trends of noise. In this paper the application of artificial neural networks (ANNs) for the prediction of traffic noise is presented. As input variables of the neural network, the proposed structure of the traffic flow and the average speed of the traffic flow are chosen. The output variable of the network is the equivalent noise level in the given time period, L_eq. Based on these parameters, the network is modeled, trained and tested through a comparative analysis of the calculated values and measured levels of traffic noise using the originally developed user friendly software package. It is shown that artificial neural networks can be a useful tool for the prediction of noise with sufficient accuracy. In addition, the measured values were also used to calculate the equivalent noise level by means of classical methods, and a comparative analysis is given. The results clearly show that the ANN approach is superior in traffic noise level prediction to any other statistical method. - Highlights: • We proposed an ANN model for prediction of traffic noise. • We developed an originally designed user friendly software package. • The results are compared with classical statistical methods. • The ANN model showed much better predictive capabilities.
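The network's output variable, the equivalent continuous noise level L_eq, is the energy average of the sampled sound levels over the period: L_eq = 10·log10((1/N)·Σ 10^(L_i/10)). A minimal sketch with illustrative sample values (not measurements from the study):

```python
import math

def leq(levels_db):
    """Equivalent continuous sound level of equally weighted samples, in dB."""
    energy_mean = sum(10 ** (L / 10) for L in levels_db) / len(levels_db)
    return 10 * math.log10(energy_mean)

samples = [62.0, 65.0, 70.0, 68.0, 64.0]   # illustrative 1-second levels in dB(A)
print(round(leq(samples), 1))
```

Because the average is taken on the energy scale, loud intervals dominate: L_eq always lies at or above the arithmetic mean of the levels.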

  18. Comparison of classical statistical methods and artificial neural network in traffic noise prediction

    Energy Technology Data Exchange (ETDEWEB)

    Nedic, Vladimir, E-mail: vnedic@kg.ac.rs [Faculty of Philology and Arts, University of Kragujevac, Jovana Cvijića bb, 34000 Kragujevac (Serbia); Despotovic, Danijela, E-mail: ddespotovic@kg.ac.rs [Faculty of Economics, University of Kragujevac, Djure Pucara Starog 3, 34000 Kragujevac (Serbia); Cvetanovic, Slobodan, E-mail: slobodan.cvetanovic@eknfak.ni.ac.rs [Faculty of Economics, University of Niš, Trg kralja Aleksandra Ujedinitelja, 18000 Niš (Serbia); Despotovic, Milan, E-mail: mdespotovic@kg.ac.rs [Faculty of Engineering, University of Kragujevac, Sestre Janjic 6, 34000 Kragujevac (Serbia); Babic, Sasa, E-mail: babicsf@yahoo.com [College of Applied Mechanical Engineering, Trstenik (Serbia)

    2014-11-15

Traffic is the main source of noise in urban environments and significantly affects human mental and physical health and labor productivity. Therefore it is very important to model the noise produced by various vehicles. Techniques for traffic noise prediction are mainly based on regression analysis, which generally is not good enough to describe the trends of noise. In this paper the application of artificial neural networks (ANNs) for the prediction of traffic noise is presented. As input variables of the neural network, the proposed structure of the traffic flow and the average speed of the traffic flow are chosen. The output variable of the network is the equivalent noise level in the given time period, L_eq. Based on these parameters, the network is modeled, trained and tested through a comparative analysis of the calculated values and measured levels of traffic noise using the originally developed user friendly software package. It is shown that artificial neural networks can be a useful tool for the prediction of noise with sufficient accuracy. In addition, the measured values were also used to calculate the equivalent noise level by means of classical methods, and a comparative analysis is given. The results clearly show that the ANN approach is superior in traffic noise level prediction to any other statistical method. - Highlights: • We proposed an ANN model for prediction of traffic noise. • We developed an originally designed user friendly software package. • The results are compared with classical statistical methods. • The ANN model showed much better predictive capabilities.

  19. An Embedded Ghost-Fluid Method for Compressible Flow in Complex Geometry

    KAUST Repository

    Almarouf, Mohamad Abdulilah Alhusain Alali

    2016-06-03

We present an embedded ghost-fluid method for numerical solutions of the compressible Navier-Stokes (CNS) equations in arbitrary complex domains. The PDE multidimensional extrapolation approach of Aslam [1] is used to reconstruct the solution in the ghost-fluid regions and impose boundary conditions at the fluid-solid interface. The CNS equations are numerically solved by the second order multidimensional upwind method of Colella [2] and Saltzman [3]. Block-structured adaptive mesh refinement implemented under the Chombo framework is utilized to reduce the computational cost while keeping high-resolution mesh around the embedded boundary and regions of high gradient solutions. Numerical examples with different Reynolds numbers for low and high Mach number flow will be presented. We compare our simulation results with other reported experimental and computational results. The significance and advantages of our implementation, which revolve around balancing between the solution accuracy and implementation difficulties, are briefly discussed as well. © 2016 Trans Tech Publications.

  20. An Embedded Ghost-Fluid Method for Compressible Flow in Complex Geometry

    KAUST Repository

    Almarouf, Mohamad Abdulilah Alhusain Alali; Samtaney, Ravi

    2016-01-01

We present an embedded ghost-fluid method for numerical solutions of the compressible Navier-Stokes (CNS) equations in arbitrary complex domains. The PDE multidimensional extrapolation approach of Aslam [1] is used to reconstruct the solution in the ghost-fluid regions and impose boundary conditions at the fluid-solid interface. The CNS equations are numerically solved by the second order multidimensional upwind method of Colella [2] and Saltzman [3]. Block-structured adaptive mesh refinement implemented under the Chombo framework is utilized to reduce the computational cost while keeping high-resolution mesh around the embedded boundary and regions of high gradient solutions. Numerical examples with different Reynolds numbers for low and high Mach number flow will be presented. We compare our simulation results with other reported experimental and computational results. The significance and advantages of our implementation, which revolve around balancing between the solution accuracy and implementation difficulties, are briefly discussed as well. © 2016 Trans Tech Publications.

  1. Long-time stability effects of quadrature and artificial viscosity on nodal discontinuous Galerkin methods for gas dynamics

    Science.gov (United States)

    Durant, Bradford; Hackl, Jason; Balachandar, Sivaramakrishnan

    2017-11-01

    Nodal discontinuous Galerkin schemes present an attractive approach to robust high-order solution of the equations of fluid mechanics, but remain accompanied by subtle challenges in their consistent stabilization. The effect of quadrature choices (full mass matrix vs spectral elements), over-integration to manage aliasing errors, and explicit artificial viscosity on the numerical solution of a steady homentropic vortex are assessed over a wide range of resolutions and polynomial orders using quadrilateral elements. In both stagnant and advected vortices in periodic and non-periodic domains the need arises for explicit stabilization beyond the numerical surface fluxes of discontinuous Galerkin spectral elements. Artificial viscosity via the entropy viscosity method is assessed as a stabilizing mechanism. It is shown that the regularity of the artificial viscosity field is essential to its use for long-time stabilization of small-scale features in nodal discontinuous Galerkin solutions of the Euler equations of gas dynamics. Supported by the Department of Energy Predictive Science Academic Alliance Program Contract DE-NA0002378.

  2. Minimal invasive stabilization of osteoporotic vertebral compression fractures. Methods and preinterventional diagnostics

    International Nuclear Information System (INIS)

    Grohs, J.G.; Krepler, P.

    2004-01-01

Minimally invasive stabilization represents a new alternative for the treatment of osteoporotic compression fractures. Vertebroplasty and balloon kyphoplasty are two methods to enhance the strength of osteoporotic vertebral bodies by means of cement application. Vertebroplasty is the older and technically easier method. Balloon kyphoplasty is the newer and more expensive method, which not only improves pain but also restores the sagittal profile of the spine. By balloon kyphoplasty, the height of 101 fractured vertebral bodies could be increased by up to 90% and the wedge decreased from 12 to 7 degrees. Pain was reduced from 7.2 to 2.5 points. The Oswestry disability index decreased from 60 to 26 points. These effects persisted over a period of two years. Cement leakage occurred in only 2% of vertebral bodies. Fractures of adjacent vertebral bodies were found in 11%. Good preinterventional diagnostics and intraoperative imaging are necessary for balloon kyphoplasty to be applied successfully. (orig.) [de

  3. Bystander fatigue and CPR quality by older bystanders: a randomized crossover trial comparing continuous chest compressions and 30:2 compressions to ventilations.

    Science.gov (United States)

    Liu, Shawn; Vaillancourt, Christian; Kasaboski, Ann; Taljaard, Monica

    2016-11-01

This study sought to measure bystander fatigue and cardiopulmonary resuscitation (CPR) quality after five minutes of CPR using the continuous chest compression (CCC) versus the 30:2 chest compression to ventilation method in older lay persons, a population most likely to perform CPR on cardiac arrest victims. This randomized crossover trial took place at three tertiary care hospitals and a seniors' center. Participants were aged ≥55 years without significant physical limitations (frailty score ≤3/7). They completed two 5-minute CPR sessions (using 30:2 and CCC) on manikins; sessions were separated by a rest period. We used concealed block randomization to determine CPR method order. Metronome feedback maintained a compression rate of 100/minute. We measured heart rate (HR), mean arterial pressure (MAP), and the Borg Exertion Scale. CPR quality measures included the total number of compressions and the number of adequate compressions (depth ≥5 cm). Sixty-three participants were enrolled: mean age 70.8 years, female 66.7%, past CPR training 60.3%. Bystander fatigue was similar between CPR methods: mean difference in HR -0.59 (95% CI -3.51-2.33), MAP 1.64 (95% CI -0.23-3.50), and Borg 0.46 (95% CI 0.07-0.84). Compared to 30:2, participants using CCC performed more chest compressions (480.0 v. 376.3, mean difference 107.7). CPR quality decreased significantly faster when performing CCC compared to 30:2; however, performing CCC produced more adequate compressions overall with a similar level of fatigue compared to the 30:2 method.

  4. Analysis of the microstructure and mechanical performance of composite resins after accelerated artificial aging.

    Science.gov (United States)

    De Oliveira Daltoé, M; Lepri, C Penazzo; Wiezel, J Guilherme G; Tornavoi, D Cremonezzi; Agnelli, J A Marcondes; Reis, A Cândido Dos

    2013-03-01

Research that assesses the behavior of dental materials is important for scientific and industrial development, especially when the materials are tested under conditions that simulate the oral environment. This work therefore analyzed the compressive strength and microstructure of three composite resins subjected to accelerated artificial aging (AAA). Three 3M composite resins (P90, P60 and Z100) were analyzed, with 16 specimens obtained for each type (N=48). Half of each type were subjected to UV-C AAA, and the surfaces of three aged and three non-aged specimens of each type were then analyzed by scanning electron microscopy (SEM). Afterwards, eight specimens of each resin, aged and not aged, were subjected to a compression test. After statistical analysis of the compressive strength values, a difference between groups was found: the aged P60 presented statistically significantly lower compressive strength values when compared with the specimens not subjected to AAA. For the other composite resins, there was no difference, regardless of aging, a fact confirmed by SEM. The results showed that the AAA influenced the compressive strength of the aged P60 resin; this was confirmed by SEM surface analysis, which showed greater structural disarrangement on the material's surface.

  5. Space Environment Modelling with the Use of Artificial Intelligence Methods

    Science.gov (United States)

    Lundstedt, H.; Wintoft, P.; Wu, J.-G.; Gleisner, H.; Dovheden, V.

    1996-12-01

Space based technological systems are affected by the space weather in many ways. Several severe failures of satellites have been reported at times of space storms. Our society also increasingly depends on satellites for communication, navigation, exploration, and research. Predictions of the conditions in the satellite environment have therefore become very important. We will here present predictions made with the use of artificial intelligence (AI) techniques, such as artificial neural networks (ANN) and hybrids of AI methods. We are developing a space weather model based on intelligent hybrid systems (IHS). The model consists of different forecast modules; each module predicts the space weather on a specific time-scale. The time-scales range from minutes to months, with fundamental time-scales of 1-5 minutes, 1-3 hours, 1-3 days, and 27 days. Solar and solar wind data are used as input data. From solar magnetic field measurements, either made on the ground at Wilcox Solar Observatory (WSO) at Stanford, or made from space by the satellite SOHO, solar wind parameters can be predicted and modelled with ANN and MHD models. Magnetograms from WSO are available on a daily basis. However, from SOHO magnetograms will be available every 90 minutes. SOHO magnetograms as input to ANNs will therefore make it possible to predict even solar transient events. Geomagnetic storm activity can today be predicted with very high accuracy by means of ANN methods using solar wind input data. However, at present real-time solar wind data are only available during part of the day from the satellite WIND. With the launch of ACE in 1997, solar wind data will, on the other hand, be available 24 hours per day. The conditions of the satellite environment are not only disturbed at times of geomagnetic storms but also at times of intense solar radiation and highly energetic particles. These events are associated with increased solar activity. Predictions of these events are therefore

  6. ChIPWig: a random access-enabling lossless and lossy compression method for ChIP-seq data.

    Science.gov (United States)

    Ravanmehr, Vida; Kim, Minji; Wang, Zhiying; Milenkovic, Olgica

    2018-03-15

    Chromatin immunoprecipitation sequencing (ChIP-seq) experiments are inexpensive and time-efficient, and result in massive datasets that introduce significant storage and maintenance challenges. To address the resulting Big Data problems, we propose a lossless and lossy compression framework specifically designed for ChIP-seq Wig data, termed ChIPWig. ChIPWig enables random access and summary statistics lookups, and is based on the asymptotic theory of optimal point density design for nonuniform quantizers. We tested the ChIPWig compressor on 10 ChIP-seq datasets generated by the ENCODE consortium. On average, lossless ChIPWig reduced the file sizes to merely 6% of the original, and offered 6-fold compression rate improvement compared to bigWig. The lossy feature further reduced file sizes 2-fold compared to the lossless mode, with little or no effect on peak calling and motif discovery using specialized NarrowPeaks methods. The compression and decompression speeds are of the order of 0.2 sec/MB on general-purpose computers. The source code and binaries are freely available for download at https://github.com/vidarmehr/ChIPWig-v2, implemented in C++. Contact: milenkov@illinois.edu. Supplementary data are available at Bioinformatics online.
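
    The quantize-then-code idea behind the lossy mode can be illustrated with a toy sketch (my simplification: a uniform quantization step and a zlib back end stand in for ChIPWig's optimized nonuniform quantizer and its random-access block layout):

```python
import struct
import zlib

def compress_coverage(values, step=0.5):
    """Quantize coverage values to a fixed step, delta-encode, then
    entropy-code with zlib. A toy stand-in for ChIPWig's nonuniform
    quantizer, which optimizes the point density instead of using a grid."""
    q = [round(v / step) for v in values]
    deltas = [q[0]] + [b - a for a, b in zip(q, q[1:])]
    raw = b"".join(struct.pack("<i", d) for d in deltas)
    return zlib.compress(raw, 9)

def decompress_coverage(blob, n, step=0.5):
    raw = zlib.decompress(blob)
    deltas = [struct.unpack_from("<i", raw, 4 * i)[0] for i in range(n)]
    q, acc = [], 0
    for d in deltas:
        acc += d
        q.append(acc)
    return [x * step for x in q]
```

    Values that are multiples of the step round-trip exactly; coarser steps trade reconstruction error for size, mirroring the lossless/lossy split described in the abstract.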

  7. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped with...

  8. Development of modifications to the material point method for the simulation of thin membranes, compressible fluids, and their interactions

    Energy Technology Data Exchange (ETDEWEB)

    York, A.R. II [Sandia National Labs., Albuquerque, NM (United States). Engineering and Process Dept.

    1997-07-01

    The material point method (MPM) is an evolution of the particle-in-cell method in which Lagrangian particles or material points are used to discretize the volume of a material. The particles carry properties such as mass, velocity, stress, and strain and move through an Eulerian or spatial mesh. The momentum equation is solved on the Eulerian mesh. Modifications to the material point method are developed that allow the simulation of thin membranes, compressible fluids, and their dynamic interactions. A single layer of material points through the thickness is used to represent a membrane. The constitutive equation for the membrane is applied in the local coordinate system of each material point. Validation problems are presented and numerical convergence is demonstrated. Fluid simulation is achieved by implementing a constitutive equation for a compressible, viscous, Newtonian fluid and by solution of the energy equation. The fluid formulation is validated by simulating a traveling shock wave in a compressible fluid. Interactions of the fluid and membrane are handled naturally with the method. The fluid and membrane communicate through the Eulerian grid, on which forces are calculated from the fluid and membrane stress states. Validation problems include simulating a projectile impacting an inflated airbag. In some impact simulations with the MPM, bodies may tend to stick together when separating. Several algorithms are proposed and tested that allow bodies to separate from each other after impact. In addition, several methods are investigated to determine the local coordinate system of a membrane material point without relying upon connectivity data.
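
    The particle-to-grid-and-back cycle described above can be sketched in one dimension (a minimal PIC-style illustration with linear hat functions and gravity only; the stress update, energy equation, and membrane treatment of the paper are omitted):

```python
def mpm_step_1d(xp, vp, mp, nx=8, dx=1.0, dt=0.1, g=-9.8):
    """One PIC-style step of a minimal 1D material point method:
    scatter mass/momentum to grid nodes with linear hat functions,
    apply gravity on the grid, then gather velocities back to particles."""
    mass = [0.0] * (nx + 1)
    mom = [0.0] * (nx + 1)
    for x, v, m in zip(xp, vp, mp):
        i = int(x / dx)            # left node of the particle's cell
        w = x / dx - i             # weight toward the right node
        mass[i] += m * (1 - w)
        mom[i] += m * v * (1 - w)
        mass[i + 1] += m * w
        mom[i + 1] += m * v * w
    # Grid momentum update (gravity only in this sketch).
    vel = [(p / mA + dt * g) if mA > 0 else 0.0 for p, mA in zip(mom, mass)]
    new_xp, new_vp = [], []
    for x, v in zip(xp, vp):
        i = int(x / dx)
        w = x / dx - i
        vg = (1 - w) * vel[i] + w * vel[i + 1]   # gather grid velocity
        new_vp.append(vg)
        new_xp.append(x + dt * vg)
    return new_xp, new_vp
```

    The Eulerian mesh never deforms: all history lives on the particles, which is what lets MPM handle large deformations and contact between bodies.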

  9. Beyond AI: Artificial Dreams Conference

    CERN Document Server

    Zackova, Eva; Kelemen, Jozef; Beyond Artificial Intelligence : The Disappearing Human-Machine Divide

    2015-01-01

    This book is an edited collection of chapters based on the papers presented at the conference “Beyond AI: Artificial Dreams” held in Pilsen in November 2012. The aim of the conference was to question deep-rooted ideas of artificial intelligence and cast critical reflection on the methods standing at its foundations.  Artificial Dreams epitomize our controversial quest for non-biological intelligence, and the contributors of this book therefore tried to fully exploit this controversy in their respective chapters, resulting in an interdisciplinary dialogue between experts from engineering, the natural sciences and the humanities.   While pursuing the Artificial Dreams, it has become clear that it is increasingly difficult to draw a clear divide between human and machine. This book therefore tries to portray an image of what lies beyond artificial intelligence: the disappearing human-machine divide, a very important phenomenon of today's technological society, the phenomenon which i...

  10. Compressible Convection Experiment using Xenon Gas in a Centrifuge

    Science.gov (United States)

    Menaut, R.; Alboussiere, T.; Corre, Y.; Huguet, L.; Labrosse, S.; Deguen, R.; Moulin, M.

    2017-12-01

    We present here an experiment especially designed to study compressible convection in the lab. For significant compressible convection effects, the parameters of the experiment have to be optimized: we use xenon gas in a cubic cell. This cell is placed in a centrifuge to artificially increase the apparent gravity and is heated from below. With these choices, we are able to reach a dissipation number close to the Earth's outer core value. We will present our results for different heating fluxes and rotation rates. We succeeded in observing an adiabatic gradient of 3 K/cm in the cell. Studies of pressure and temperature fluctuations lead us to think that the convection takes place in the form of a single roll in the cell at high heating flux. Moreover, these fluctuations show that the flow is geostrophic due to the high rotation speed. This important role of rotation, via Coriolis force effects, in our experimental setup led us to develop a 2D quasi-geostrophic compressible model in the anelastic liquid approximation. We test this model numerically with the finite element solver FreeFem++ and compare its results with our experimental data. In conclusion, we will present our project for the next experiment, in which the cubic cell will be replaced by an annular cell. We will discuss the new effects expected from this geometry, such as Rossby waves and zonal flows.
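
    A back-of-envelope check of the reported 3 K/cm adiabatic gradient, assuming xenon behaves as an ideal monatomic gas so that c_p = 5R/2M (the property values and the dry-adiabatic relation dT/dz = g/c_p are my assumptions, not numbers from the abstract):

```python
R = 8.314        # J/(mol K), universal gas constant
M_xe = 0.1313    # kg/mol, molar mass of xenon
cp = 2.5 * R / M_xe            # ideal monatomic gas: cp = 5R/(2M), ~158 J/(kg K)

# Dry adiabatic temperature gradient in a gravity field: dT/dz = g / cp.
target = 300.0                 # K/m, i.e. the reported 3 K/cm
g_needed = target * cp         # apparent gravity required to sustain it
print(cp, g_needed / 9.81)     # a few thousand times Earth gravity
```

    The low specific heat of heavy xenon is precisely what makes the required centrifuge acceleration attainable; a light gas would demand far higher apparent gravity for the same gradient.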

  11. Exploring compression techniques for ROOT IO

    Science.gov (United States)

    Zhang, Z.; Bockelman, B.

    2017-10-01

    ROOT provides a flexible format used throughout the HEP community. The number of use cases - from an archival data format to end-stage analysis - has required a number of tradeoffs to be exposed to the user. For example, a high “compression level” in the traditional DEFLATE algorithm will result in a smaller file (saving disk space) at the cost of slower decompression (costing CPU time when read). At the scale of the LHC experiments, poor design choices can result in terabytes of wasted space or wasted CPU time. We explore and attempt to quantify some of these tradeoffs. Specifically, we explore: the use of alternative compression algorithms to optimize for read performance; an alternative method of compressing individual events to allow efficient random access; and a new approach to whole-file compression. Quantitative results are given, as well as guidance on how to make compression decisions for different use cases.
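
    The compression-level tradeoff mentioned above is easy to measure with the DEFLATE implementation in Python's zlib (a generic illustration on synthetic data, not ROOT's actual I/O layer):

```python
import time
import zlib

# ~0.5 MB of semi-regular data, loosely imitating repetitive event records.
payload = (b"event:" + bytes(range(256))) * 2000

for level in (1, 6, 9):
    t0 = time.perf_counter()
    comp = zlib.compress(payload, level)
    t_compress = time.perf_counter() - t0
    t0 = time.perf_counter()
    zlib.decompress(comp)
    t_decompress = time.perf_counter() - t0
    print(f"level {level}: {len(comp) / len(payload):6.3%} of original, "
          f"compress {t_compress * 1e3:.1f} ms, "
          f"decompress {t_decompress * 1e3:.1f} ms")
```

    On typical data, higher levels buy a smaller output at a noticeably higher compression cost, while decompression time is largely level-independent, which is why read-optimized formats can afford aggressive write-side compression.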

  12. Heterogeneous Compression of Large Collections of Evolutionary Trees.

    Science.gov (United States)

    Matthews, Suzanne J

    2015-01-01

    Compressing heterogeneous collections of trees is an open problem in computational phylogenetics. In a heterogeneous tree collection, each tree can contain a unique set of taxa. An ideal compression method would allow for the efficient archival of large tree collections and enable scientists to identify common evolutionary relationships over disparate analyses. In this paper, we extend TreeZip to compress heterogeneous collections of trees. TreeZip is the most efficient algorithm for compressing homogeneous tree collections. To the best of our knowledge, no other domain-based compression algorithm exists for large heterogeneous tree collections or enables their rapid analysis. Our experimental results indicate that TreeZip averages 89.03 percent (72.69 percent) space savings on unweighted (weighted) collections of trees when the level of heterogeneity in a collection is moderate. The organization of the TRZ file allows for efficient computations over heterogeneous data. For example, consensus trees can be computed in mere seconds. Lastly, combining the TreeZip compressed (TRZ) file with general-purpose compression yields average space savings of 97.34 percent (81.43 percent) on unweighted (weighted) collections of trees. Our results lead us to believe that TreeZip will prove invaluable in the efficient archival of tree collections, and will enable scientists to develop novel methods for relating heterogeneous collections of trees.
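
    The heart of this kind of domain-based compression, as I read it, is to store each distinct evolutionary relationship once and let every tree reference a shared dictionary; a minimal sketch (clades modeled as frozensets of taxa and trees as sets of clades; the real TRZ format is considerably more elaborate):

```python
def compress_collection(trees):
    """trees: list of sets of clades (each clade a frozenset of taxa).
    Returns a shared clade dictionary plus one bit-row per tree, so a
    clade appearing in many trees is stored only once."""
    dictionary = sorted({c for t in trees for c in t}, key=sorted)
    index = {c: i for i, c in enumerate(dictionary)}
    rows = []
    for t in trees:
        bits = 0
        for c in t:
            bits |= 1 << index[c]   # set the bit for this clade
        rows.append(bits)
    return dictionary, rows

def decompress_collection(dictionary, rows):
    return [{c for i, c in enumerate(dictionary) if row >> i & 1}
            for row in rows]
```

    Because each tree may carry a unique taxon set, the dictionary naturally accommodates heterogeneity, and set operations on the bit-rows support analyses such as consensus computation without full decompression.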

  13. Artificial Satellites Observations Using the Complex of Telescopes of RI "MAO"

    Science.gov (United States)

    Sybiryakova, Ye. S.; Shulga, O. V.; Vovk, V. S.; Kaliuzny, M. P.; Bushuev, F. I.; Kulichenko, M. O.; Haloley, M. I.; Chernozub, V. M.

    2017-02-01

    Special methods, means, and software for the observation of cosmic objects and the processing of the obtained results were developed. The combined method, which consists in the separate accumulation of images of reference stars and artificial objects, is the main method used in observations of artificial cosmic objects. It is used for observations of artificial objects in all types of orbits.

  14. Mammography parameters: compression, dose, and discomfort

    International Nuclear Information System (INIS)

    Blanco, S.; Di Risio, C.; Andisco, D.; Rojas, R.R.; Rojas, R.M.

    2017-01-01

    Objective: To confirm the importance of compression in mammography and relate it to the discomfort expressed by the patients. Materials and methods: Two samples of 402 and 268 mammograms were obtained from two diagnostic centres that use the same mammographic equipment but different compression techniques. The patient age range was from 21 to 50 years old. (authors) [es

  15. Linearly decoupled energy-stable numerical methods for multi-component two-phase compressible flow

    KAUST Repository

    Kou, Jisheng

    2017-12-06

    In this paper, for the first time we propose two linear, decoupled, energy-stable numerical schemes for multi-component two-phase compressible flow with a realistic equation of state (e.g. Peng-Robinson equation of state). The methods are constructed based on the scalar auxiliary variable (SAV) approaches for Helmholtz free energy and the intermediate velocities that are designed to decouple the tight relationship between velocity and molar densities. The intermediate velocities are also involved in the discrete momentum equation to ensure a consistency relationship with the mass balance equations. Moreover, we propose a component-wise SAV approach for a multi-component fluid, which requires solving a sequence of linear, separate mass balance equations. We prove that the methods have the unconditional energy-dissipation feature. Numerical results are presented to verify the effectiveness of the proposed methods.
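
    The scalar auxiliary variable device can be illustrated on a generic gradient flow $\partial_t \phi = -\delta E/\delta\phi$ with $E(\phi) = \frac12(\phi,\mathcal{L}\phi) + E_1(\phi)$ (a textbook sketch of the SAV mechanism, not the paper's full compressible-flow scheme with Peng-Robinson free energy):

```latex
% Scalar auxiliary variable: r(t) = \sqrt{E_1(\phi) + C}, with C > 0 chosen
% so that the square root is well defined. A first-order linear scheme reads
\frac{\phi^{n+1} - \phi^{n}}{\Delta t}
   = -\mathcal{L}\phi^{n+1}
     - \frac{r^{n+1}}{\sqrt{E_1(\phi^{n}) + C}}\, U(\phi^{n}),
\qquad U(\phi) := \frac{\delta E_1}{\delta \phi},
\\
\frac{r^{n+1} - r^{n}}{\Delta t}
   = \frac{1}{2\sqrt{E_1(\phi^{n}) + C}}
     \Bigl( U(\phi^{n}),\, \frac{\phi^{n+1} - \phi^{n}}{\Delta t} \Bigr).
% Taking the inner product of the first equation with (\phi^{n+1}-\phi^n)/\Delta t
% and multiplying the second by 2 r^{n+1} gives the discrete dissipation law
%   \tfrac12(\phi^{n+1},\mathcal{L}\phi^{n+1}) + |r^{n+1}|^2
%     \le \tfrac12(\phi^{n},\mathcal{L}\phi^{n}) + |r^{n}|^2,
% i.e. unconditional energy stability, while each step is linear in \phi^{n+1}.
```

    The paper's component-wise SAV approach applies this construction to the Helmholtz free energy of each component in turn, which is what reduces the update to a sequence of linear, separate mass balance equations.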

  16. Real-time and encryption efficiency improvements of simultaneous fusion, compression and encryption method based on chaotic generators

    Science.gov (United States)

    Jridi, Maher; Alfalou, Ayman

    2018-03-01

    In this paper, enhancement of an existing optical simultaneous fusion, compression and encryption (SFCE) scheme in terms of real-time requirements, bandwidth occupation and encryption robustness is proposed. We used an approximate form of the DCT to decrease the computational resources. Then, a novel chaos-based encryption algorithm is introduced in order to achieve the confusion and diffusion effects. In the confusion phase, the Henon map is used for row and column permutations, where the initial condition is related to the original image. Furthermore, the Skew Tent map is employed to generate another random matrix in order to carry out pixel scrambling. Finally, an adaptation of a classical diffusion process scheme is employed to strengthen the security of the cryptosystem against statistical, differential, and chosen-plaintext attacks. Analyses of key space, histogram, adjacent pixel correlation, sensitivity, and encryption speed of the encryption scheme are provided, and favorably compared to those of the existing crypto-compression system. The proposed method has been found to be digital/optical implementation-friendly, which facilitates the integration of the crypto-compression system in a very broad range of scenarios.
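
    The confusion stage can be sketched as follows (an illustrative Henon-based permutation; the seed handling, parameter values, and ranking construction are generic textbook choices, not necessarily the paper's exact design):

```python
def henon_sequence(n, x=0.1, y=0.3, a=1.4, b=0.3):
    """Iterate the classic Henon map; the (x, y) seeds act as the key."""
    out = []
    for _ in range(n):
        x, y = 1 - a * x * x + y, b * x
        out.append(x)
    return out

def chaotic_permutation(n, seed_x, seed_y):
    """Rank the chaotic sequence to obtain a permutation of range(n)."""
    s = henon_sequence(n, seed_x, seed_y)
    return sorted(range(n), key=lambda i: s[i])

def permute_rows(image, perm):
    """Apply a row permutation; columns are handled the same way."""
    return [image[i] for i in perm]
```

    Ranking a chaotic sequence yields a key-dependent permutation that is trivially invertible by anyone who knows the seed, which is exactly what row/column permutation ciphers require; tying the seed to the original image, as the abstract describes, is what provides plaintext sensitivity.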

  17. Fast Detection of Compressively Sensed IR Targets Using Stochastically Trained Least Squares and Compressed Quadratic Correlation Filters

    KAUST Repository

    Millikan, Brian; Dutta, Aritra; Sun, Qiyu; Foroosh, Hassan

    2017-01-01

    Target detection of potential threats at night can be deployed on a costly infrared focal plane array with high resolution. Due to the compressibility of infrared image patches, the high resolution requirement could be reduced with target detection capability preserved. For this reason, a compressive midwave infrared imager (MWIR) with a low-resolution focal plane array has been developed. As the most probable coefficient indices of the support set of the infrared image patches could be learned from the training data, we develop stochastically trained least squares (STLS) for MWIR image reconstruction. Quadratic correlation filters (QCF) have been shown to be effective for target detection and there are several methods for designing a filter. Using the same measurement matrix as in STLS, we construct a compressed quadratic correlation filter (CQCF) employing filter designs for compressed infrared target detection. We apply CQCF to the U.S. Army Night Vision and Electronic Sensors Directorate dataset. Numerical simulations show that the recognition performance of our algorithm matches that of the standard full reconstruction methods, but at a fraction of the execution time.

  19. Applicability of higher-order TVD method to low mach number compressible flows

    International Nuclear Information System (INIS)

    Akamatsu, Mikio

    1995-01-01

    Steep gradients of fluid density are an influential factor in the spurious oscillation of numerical solutions of low Mach number (M<<1) compressible flows. The total variation diminishing (TVD) scheme is a promising remedy to overcome this problem and obtain accurate solutions. TVD schemes for high-speed flows are, however, not compatible with the methods commonly used for low Mach number flows in the pressure-based formulation. In the present study a higher-order TVD scheme is constructed on a modified form of each individual scalar equation of primitive variables. It is thus clarified that the concept of TVD is applicable to low Mach number flows within the framework of the existing numerical method. Results for test problems of the moving interface of two-component gases with a density ratio ≥ 4 demonstrate the accurate and robust (wiggle-free) behavior of the scheme. (author)
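
    For reference, the TVD mechanism itself can be shown in the simplest possible setting: a MUSCL reconstruction with the minmod limiter for 1D linear advection (a textbook sketch, not the pressure-based primitive-variable scheme of the paper):

```python
def minmod(a, b):
    """Limited slope: zero at extrema, the smaller-magnitude slope otherwise."""
    if a * b <= 0:
        return 0.0
    return a if abs(a) < abs(b) else b

def tvd_advect(u, c):
    """One step of 1D linear advection at CFL number c (0 < c <= 1) with a
    second-order MUSCL/minmod TVD scheme; periodic boundaries."""
    n = len(u)
    s = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
    # Upwind face values from the limited linear reconstruction.
    face = [u[i] + 0.5 * (1 - c) * s[i] for i in range(n)]  # value at i+1/2
    return [u[i] - c * (face[i] - face[i - 1]) for i in range(n)]
```

    The limiter switches the reconstruction off at extrema, so the scheme never increases the total variation of the solution; this is the wiggle-free behavior claimed for steep density gradients.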

  20. Cosmological Particle Data Compression in Practice

    Science.gov (United States)

    Zeyen, M.; Ahrens, J.; Hagen, H.; Heitmann, K.; Habib, S.

    2017-12-01

    In cosmological simulations trillions of particles are handled and several terabytes of unstructured particle data are generated in each time step. Transferring this data directly from memory to disk in an uncompressed way results in a massive load on I/O and storage systems. Hence, one goal of domain scientists is to compress the data before storing it to disk while minimizing the loss of information. To prevent reading back uncompressed data from disk, this can be done in an in-situ process. Since the simulation continuously generates data, the available time for the compression of one time step is limited. Therefore, the evaluation of compression techniques has shifted from only focusing on compression rates to include run-times and scalability. In recent years several compression techniques for cosmological data have become available. These techniques can be either lossy or lossless, depending on the technique. For both cases, this study aims to evaluate and compare the state-of-the-art compression techniques for unstructured particle data. This study focuses on the techniques available in the Blosc framework with its multi-threading support, the XZ Utils toolkit with the LZMA algorithm that achieves high compression rates, and the widespread FPZIP and ZFP methods for lossy compression. For the investigated compression techniques, quantitative performance indicators such as compression rates, run-time/throughput, and reconstruction errors are measured. Based on these factors, this study offers a comprehensive analysis of the individual techniques and discusses their applicability for in-situ compression. In addition, domain-specific measures are evaluated on the reconstructed data sets, and the relative error rates and statistical properties are analyzed and compared. Based on this study, future challenges and directions in the compression of unstructured cosmological particle data are identified.

  1. Natural - synthetic - artificial!

    DEFF Research Database (Denmark)

    Nielsen, Peter E

    2010-01-01

    The terms "natural," "synthetic" and "artificial" are discussed in relation to synthetic and artificial chromosomes and genomes, synthetic and artificial cells, and artificial life.

  2. An efficient compression scheme for bitmap indices

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie

    2004-04-13

    When using an out-of-core indexing method to answer a query, it is generally assumed that the I/O cost dominates the overall query response time. Because of this, most research on indexing methods concentrates on reducing the sizes of indices. For bitmap indices, compression has been used for this purpose. However, in most cases, operations on these compressed bitmaps, mostly bitwise logical operations such as AND, OR, and NOT, spend more time in CPU than in I/O. To speed up these operations, a number of specialized bitmap compression schemes have been developed, the best known of which is the byte-aligned bitmap code (BBC). They are usually faster in performing logical operations than the general-purpose compression schemes, but the time spent in CPU still dominates the total query response time. To reduce the query response time, we designed a CPU-friendly scheme named the word-aligned hybrid (WAH) code. In this paper, we prove that the sizes of WAH compressed bitmap indices are about two words per row for a large range of attributes. This size is smaller than typical sizes of commonly used indices, such as a B-tree. Therefore, WAH compressed indices are appropriate not only for low cardinality attributes but also for high cardinality attributes. In the worst case, the time to operate on compressed bitmaps is proportional to the total size of the bitmaps involved. The total size of the bitmaps required to answer a query on one attribute is proportional to the number of hits. These results indicate that WAH compressed bitmap indices are optimal. To verify their effectiveness, we generated bitmap indices for four different datasets and measured the response time of many range queries. Tests confirm that the sizes of compressed bitmap indices are indeed smaller than B-tree indices, and query processing with WAH compressed indices is much faster than with BBC compressed indices, projection indices and B-tree indices. In addition, we also verified that the average query response time
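
    The word-aligned encoding can be sketched schematically (bit-strings and tuples stand in for packed 32-bit words here; real WAH folds the fill flag, fill bit, and run length into a single machine word, which is what makes the logical operations CPU-friendly):

```python
WORD = 31  # payload bits carried by each 32-bit WAH word

def wah_encode(bits):
    """bits: a '0'/'1' string. Each 31-bit group becomes a literal word,
    unless it is all-zeros or all-ones, in which case it is folded into a
    run-length fill word. Words are modeled as (is_fill, payload, run)
    tuples purely for readability."""
    words = []
    for i in range(0, len(bits), WORD):
        g = bits[i:i + WORD].ljust(WORD, '0')     # pad the final group
        if g.count(g[0]) == WORD:                 # homogeneous group -> fill
            if words and words[-1][0] and words[-1][1] == g[0]:
                words[-1] = (True, g[0], words[-1][2] + 1)   # extend run
            else:
                words.append((True, g[0], 1))
        else:
            words.append((False, g, 1))           # literal word
    return words

def wah_decode(words):
    return "".join((val * WORD if fill else val) * run
                   for fill, val, run in words)
```

    Sparse bitmaps collapse into a handful of fill words, which is where the roughly two-words-per-row bound comes from, and AND/OR can be evaluated run-by-run without unpacking.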

  3. Investigation of the influence of different surface regularization methods for cylindrical concrete specimens in axial compression tests

    Directory of Open Access Journals (Sweden)

    R. MEDEIROS

    This study was conducted with the aim of evaluating the influence of different methods for the end surface preparation of compressive strength test specimens. Four different methods were compared: a mechanical wear method through grinding using a diamond wheel, established by NBR 5738; a mechanical wear method using a diamond saw, established by NM 77; an unbonded system using neoprene pads in metal retainer rings, established by C1231; and a bonded capping method with sulfur mortar, established by NBR 5738 and by NM 77. To develop this research, 4 concrete mixes were determined with different strength levels, 2 of group 1 and 2 of group 2 strength levels established by NBR 8953. Group 1 consists of classes C20 to C50, in steps of 5 MPa, also known as normal strength concrete. Group 2 comprises classes C55 and C60 to C100, in steps of 10 MPa, also known as high strength concrete. Compression tests were carried out at 7 and 28 days for the 4 surface preparation methods. The results of this study indicate that the method established by NBR 5738 is the most effective for the 4 strength levels considered, since it presents the lowest dispersion of the values obtained from the tests, measured by the coefficient of variation, and, in almost all cases, the highest mean rupture strength. The method described by NBR 5738 achieved the expected strength level in all tests.

  4. Prevention of deep vein thrombosis in potential neurosurgical patients. A randomized trial comparing graduated compression stockings alone or graduated compression stockings plus intermittent pneumatic compression with control

    International Nuclear Information System (INIS)

    Turpie, A.G.; Hirsh, J.; Gent, M.; Julian, D.; Johnson, J.

    1989-01-01

    In a randomized trial of neurosurgical patients, groups wearing graduated compression stockings alone (group 1) or graduated compression stockings plus intermittent pneumatic compression (IPC) (group 2) were compared with an untreated control group in the prevention of deep vein thrombosis (DVT). In both active treatment groups, the graduated compression stockings were continued for 14 days or until hospital discharge, if earlier. In group 2, IPC was continued for seven days. All patients underwent DVT surveillance with iodine 125-labeled fibrinogen leg scanning and impedance plethysmography. Venography was carried out if either test became abnormal. Deep vein thrombosis occurred in seven (8.8%) of 80 patients in group 1, in seven (9.0%) of 78 patients in group 2, and in 16 (19.8%) of 81 patients in the control group. The observed differences among these rates are statistically significant. The results of this study indicate that graduated compression stockings alone or in combination with IPC are effective methods of preventing DVT in neurosurgical patients.

  5. Comparison of changes in tidal volume associated with expiratory rib cage compression and expiratory abdominal compression in patients on prolonged mechanical ventilation.

    Science.gov (United States)

    Morino, Akira; Shida, Masahiro; Tanaka, Masashi; Sato, Kimihiro; Seko, Toshiaki; Ito, Shunsuke; Ogawa, Shunichi; Takahashi, Naoaki

    2015-07-01

    [Purpose] This study was designed to compare and clarify the relationship between expiratory rib cage compression and expiratory abdominal compression in patients on prolonged mechanical ventilation, with a focus on tidal volume. [Subjects and Methods] The subjects were 18 patients on prolonged mechanical ventilation, who had undergone tracheostomy. Each patient received expiratory rib cage compression and expiratory abdominal compression; the order of implementation was randomized. Subjects were positioned in a 30° lateral recumbent position, and a 2-kgf compression was applied. For expiratory rib cage compression, the rib cage was compressed unilaterally; for expiratory abdominal compression, the area directly above the navel was compressed. Tidal volume values were the actual measured values divided by body weight. [Results] Tidal volume values were as follows: at rest, 7.2 ± 1.7 mL/kg; during expiratory rib cage compression, 8.3 ± 2.1 mL/kg; during expiratory abdominal compression, 9.1 ± 2.2 mL/kg. There was a significant difference between the tidal volume during expiratory abdominal compression and that at rest. The tidal volume in expiratory rib cage compression was strongly correlated with that in expiratory abdominal compression. [Conclusion] These results indicate that expiratory abdominal compression may be an effective alternative to the manual breathing assist procedure.

  6. Advances in compressible turbulent mixing

    International Nuclear Information System (INIS)

    Dannevik, W.P.; Buckingham, A.C.; Leith, C.E.

    1992-01-01

    This volume includes some recent additions to original material prepared for the Princeton International Workshop on the Physics of Compressible Turbulent Mixing, held in 1988. Workshop participants were asked to emphasize the physics of the compressible mixing process rather than measurement techniques or computational methods. Actual experimental results and their meaning were given precedence over discussions of new diagnostic developments. Theoretical interpretations and understanding were stressed rather than the exposition of new analytical model developments or advances in numerical procedures. By design, compressibility influences on turbulent mixing were discussed--almost exclusively--from the perspective of supersonic flow field studies. The papers are arranged in three topical categories: Foundations, Vortical Domination, and Strongly Coupled Compressibility. The Foundations category is a collection of seminal studies that connect current study in compressible turbulent mixing with compressible, high-speed turbulent flow research that almost vanished about two decades ago. A number of contributions are included on flow instability initiation, evolution, and transition between the states of unstable flow onset through those descriptive of fully developed turbulence. The Vortical Domination category includes theoretical and experimental studies of coherent structures, vortex pairing, vortex-dynamics-influenced pressure focusing. In the Strongly Coupled Compressibility category the organizers included the high-speed turbulent flow investigations in which the interaction of shock waves could be considered an important source for production of new turbulence or for the enhancement of pre-existing turbulence. Individual papers are processed separately

  8. Rotary compression process for producing toothed hollow shafts

    Directory of Open Access Journals (Sweden)

    J. Tomczak

    2014-10-01

    The paper presents the results of numerical analyses of the rotary compression process for hollow stepped shafts with herringbone teeth. The numerical simulations were performed by the finite element method (FEM), using the commercial software package DEFORM-3D. The results of numerical modelling aimed at determining the effect of billet wall thickness on product shape and on the rotary compression process are presented. The distributions of strains, temperatures and the damage criterion, as well as the force parameters of the process determined in the simulations, are given too. The numerical results obtained confirm the possibility of producing hollow toothed shafts from a tube billet by rotary compression methods.

  9. Fundamental study of compression for movie files of coronary angiography

    Science.gov (United States)

    Ando, Takekazu; Tsuchiya, Yuichiro; Kodera, Yoshie

    2005-04-01

    When network distribution of movie files is considered, lossy compression movie files with small file sizes can be useful. We chose three kinds of coronary stenosis movies with different motion speeds as examination objects: movies with slow, normal, and fast heart rates. Movies in MPEG-1, DivX 5.11, WMV9 (Windows Media Video 9), and WMV9-VCM (Windows Media Video 9-Video Compression Manager) formats were made from three kinds of AVI-format movies with different motion speeds. Five kinds of movies, the four kinds of compressed movies and the uncompressed AVI in place of the DICOM format, were evaluated by Thurstone's method. The evaluation factors of the movies were "sharpness, granularity, contrast, and comprehensive evaluation." In the virtual bradycardia movie, AVI received the best evaluation on all factors except granularity. In the virtual normal movie, a different compression technique excelled on each evaluation factor. In the virtual tachycardia movie, MPEG-1 received the best evaluation on all factors except contrast. The best compression format depends on the speed of the movie because of differences in the compression algorithms; we attribute this to differences in inter-frame compression. Movie compression algorithms use both inter-frame and intra-frame compression. As each compression method influences the image differently, it is necessary to examine the relation between the compression algorithm and our results.

  10. Artificial sweetener; Jinko kanmiryo

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-08-01

    The patents related to artificial sweeteners published in the 3 years from 1996 to 1998 number 115 cases. As a general tendency, sugar-type sweeteners based on oligosaccharides and sugar alcohols, at 73 cases, greatly outnumber the 28 cases of non-sugar types such as Aspartame. Manufacturing-method patents, centred on new materials such as other peptides, oligosaccharides and sugar alcohols, are, at 43 cases, not far behind the 56 composition patents, drawing attention to the prevalence of both manufacturing and composition filings. The most common stated purpose, with 31 cases, is improvement of sweetness quality, addressing the poor aftertaste characteristic of artificial sweeteners; many others concern improvement of food flavor by the artificial sweetener, long-term stability, solubility, fluidity, productivity, and economy such as cost. (NEDO)

  11. A hybrid data compression approach for online backup service

    Science.gov (United States)

    Wang, Hua; Zhou, Ke; Qin, MingKang

    2009-08-01

    With the popularity of SaaS (Software as a Service), backup service has become a hot topic in storage applications. With numerous backup users, reducing the massive data load is a key problem for the system designer, and data compression provides a good solution. Traditional data compression applications adopt a single method, which has limitations in some respects: for example, data stream compression can only realize intra-file compression, de-duplication is used to eliminate inter-file redundant data, and the compression efficiency cannot meet the needs of backup service software. This paper proposes a novel hybrid compression approach with two levels: global compression and block compression. The former eliminates redundant inter-file copies across different users; the latter adopts data stream compression technology to realize intra-file compression. Several compression algorithms were adopted to measure the compression ratio and CPU time, and the adaptability of different algorithms to certain situations is also analyzed. The performance analysis shows that great improvement is made through the hybrid compression policy.
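
    The two-level policy described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: the chunk size, the SHA-256 keying and the function names (`backup_compress`, `backup_restore`) are all assumptions.

```python
import hashlib
import zlib

def backup_compress(files, store):
    """Two-level sketch: global dedup across users via content hashing,
    then stream compression of each unique chunk."""
    manifests = {}
    for name, data in files.items():
        chunk_ids = []
        # Level 1: global compression -- identical chunks across any
        # user's files are stored only once, keyed by SHA-256 digest.
        for i in range(0, len(data), 4096):
            chunk = data[i:i + 4096]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in store:
                # Level 2: block compression -- intra-file redundancy
                # removed by a stream compressor (zlib here).
                store[digest] = zlib.compress(chunk)
            chunk_ids.append(digest)
        manifests[name] = chunk_ids
    return manifests

def backup_restore(manifest, store):
    """Reassemble one file from its chunk manifest."""
    return b"".join(zlib.decompress(store[d]) for d in manifest)
```

    With two users' files sharing a 4 KB chunk, the shared chunk occupies the store only once, while restore still round-trips both files exactly.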

  12. Optimisation algorithms for ECG data compression.

    Science.gov (United States)

    Haugland, D; Heber, J G; Husøy, J H

    1997-07-01

    The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a cubic dynamic programming algorithm. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one-half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.
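
    A minimal dynamic program in the spirit of the exact time-domain approach described above (selecting a bounded subset of samples that minimises linear-interpolation error) might look as follows. The paper's cubic network-model algorithm is more elaborate; this sketch, with hypothetical names `interp_error` and `best_subset`, only illustrates the idea.

```python
def interp_error(sig, i, j):
    """Sum of squared errors when samples between i and j are replaced
    by linear interpolation between sig[i] and sig[j]."""
    err = 0.0
    for t in range(i + 1, j):
        est = sig[i] + (sig[j] - sig[i]) * (t - i) / (j - i)
        err += (sig[t] - est) ** 2
    return err

def best_subset(sig, k):
    """Choose k samples (keeping both endpoints) that minimise the
    total reconstruction error -- a small exact dynamic program."""
    n = len(sig)
    INF = float("inf")
    # dp[m][j]: minimal error keeping m samples with the m-th at index j
    dp = [[INF] * n for _ in range(k + 1)]
    back = [[-1] * n for _ in range(k + 1)]
    dp[1][0] = 0.0
    for m in range(2, k + 1):
        for j in range(1, n):
            for i in range(j):
                if dp[m - 1][i] == INF:
                    continue
                c = dp[m - 1][i] + interp_error(sig, i, j)
                if c < dp[m][j]:
                    dp[m][j], back[m][j] = c, i
    # backtrack from the last sample to recover the kept indices
    idx, m, j = [], k, n - 1
    while j != -1:
        idx.append(j)
        j, m = back[m][j], m - 1
    return sorted(idx), dp[k][n - 1]
```

    On a piecewise-linear signal the program recovers the breakpoints exactly, with zero reconstruction error.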

  13. Integer Set Compression and Statistical Modeling

    DEFF Research Database (Denmark)

    Larsson, N. Jesper

    2014-01-01

    Compression of integer sets and sequences has been extensively studied for settings where elements follow a uniform probability distribution. In addition, methods exist that exploit clustering of elements in order to achieve higher compression performance. In this work, we address the case where enumeration of elements may be arbitrary or random, but where statistics is kept in order to estimate probabilities of elements. We present a recursive subset-size encoding method that is able to benefit from statistics, and explore the effects of permuting the enumeration order based on element probabilities...

  14. Advanced Applications of Neural Networks and Artificial Intelligence: A Review

    OpenAIRE

    Koushal Kumar; Gour Sundar Mitra Thakur

    2012-01-01

    Artificial Neural Networks are a branch of Artificial Intelligence and have been accepted as a new computing technology in computer science. This paper reviews the field of Artificial Intelligence, focusing on recent applications which use Artificial Neural Networks (ANNs) and Artificial Intelligence (AI). It also considers the integration of neural networks with other computing methods, such as fuzzy logic, to enhance the interpretation ability of data. Artificial Neural Networks is c...

  15. Lossless Geometry Compression Through Changing 3D Coordinates into 1D

    Directory of Open Access Journals (Sweden)

    Yongkui Liu

    2013-08-01

    Full Text Available A method of lossless geometry compression of the vertex coordinates of a grid model is presented. First, the 3D coordinates are pre-processed into a specific form. Then the 3D coordinates are changed into 1D data by representing the three coordinates of a vertex with a single position number, which is a large integer. To minimize the integers, they are sorted and the differences between adjacent vertexes are stored in a vertex table. In addition to the technique of geometry compression on coordinates, an improved method for storing the compressed topological data in a facet table is proposed to make the method more complete and efficient. The experimental results show that the proposed method has a better compression rate than the latest method of lossless geometry compression, the Isenburg-Lindstrom-Snoeyink method. The theoretical analysis and the experimental results also show that the decompression time of the new method, which matters most in practice, is short. Though the new method is explained in the case of a triangular grid, it can also be used in other forms of grid model.
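
    The core idea (fuse three coordinates into one "position number", sort, then store differences between adjacent entries) can be illustrated as below. The bit widths and function names are assumptions for illustration, not the paper's actual encoding.

```python
def pack_vertex(x, y, z, bits=21):
    """Fuse three non-negative integer coordinates into one
    'position number' by simple bit packing."""
    return (x << (2 * bits)) | (y << bits) | z

def unpack_vertex(p, bits=21):
    mask = (1 << bits) - 1
    return (p >> (2 * bits)) & mask, (p >> bits) & mask, p & mask

def delta_encode(vertices, bits=21):
    """Sort the position numbers and store differences between
    adjacent entries, which are far smaller than the raw values."""
    pos = sorted(pack_vertex(x, y, z, bits) for x, y, z in vertices)
    return [pos[0]] + [b - a for a, b in zip(pos, pos[1:])]

def delta_decode(deltas, bits=21):
    pos, acc = [], 0
    for d in deltas:
        acc += d
        pos.append(acc)
    return [unpack_vertex(p, bits) for p in pos]
```

    Because the packing is monotone in lexicographic coordinate order, decoding yields the vertex list in sorted order, and all stored deltas are non-negative.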

  16. SeqCompress: an algorithm for biological sequence compression.

    Science.gov (United States)

    Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan

    2014-10-01

    The growth of Next Generation Sequencing technologies presents significant research challenges, specifically to design bioinformatics tools that handle massive amounts of data efficiently. Biological sequence data storage cost has become a noticeable proportion of total cost in generation and analysis. In particular, the increase in DNA sequencing rate is significantly outstripping the rate of increase in disk storage capacity and may exceed the limits of storage capacity. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm has better compression gain than other existing algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.
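
    The statistical-model half of such a scheme can be illustrated with an order-0 model: under ideal arithmetic coding, the code length of a sequence is the sum of -log2 p over its symbols. This sketch is not SeqCompress's actual model, which is more sophisticated; the function names are illustrative.

```python
from collections import Counter
from math import log2

def model_bits(seq):
    """Ideal arithmetic-code length (in bits) for an order-0
    statistical model: -sum of log2 p(symbol) over the sequence."""
    counts = Counter(seq)
    n = len(seq)
    return sum(-c * log2(c / n) for c in counts.values())

def bits_per_base(seq):
    return model_bits(seq) / len(seq)
```

    A uniform A/C/G/T sequence needs the full 2 bits per base, while any skew in the symbol statistics drops the cost below 2, which is where the model earns its keep.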

  17. Design of alluvial Egyptian irrigation canals using artificial neural networks method

    Directory of Open Access Journals (Sweden)

    Hassan Ibrahim Mohamed

    2013-06-01

    Full Text Available In the present study, the artificial neural networks method (ANNs) is used to estimate the main parameters used in the design of stable alluvial channels. The capability of ANN models to predict stable alluvial channel dimensions is investigated, where the flow rate and sediment mean grain size are the input variables and the wetted perimeter, hydraulic radius, and water surface slope are the output variables. The ANN models are based on a back-propagation algorithm to train a multi-layer feed-forward network (Levenberg-Marquardt algorithm). The proposed models were verified using 311 data sets of field data collected from 61 manmade canals and drains. Several statistical measures and graphical representations are used to check the accuracy of the models in comparison with previous empirical equations. The results of the developed ANN model prove that this technique is reliable in this field compared with previously developed methods.

  18. Application of artificial intelligence to the management of urological cancer.

    Science.gov (United States)

    Abbod, Maysam F; Catto, James W F; Linkens, Derek A; Hamdy, Freddie C

    2007-10-01

    Artificial intelligence techniques, such as artificial neural networks, Bayesian belief networks and neuro-fuzzy modeling systems, are complex mathematical models based on the human neuronal structure and thinking. Such tools are capable of generating data-driven models of biological systems without making assumptions based on statistical distributions. A large body of work has been reported on the use of artificial intelligence in urology. We reviewed the basic concepts behind artificial intelligence techniques and explored the applications of this new dynamic technology in various aspects of urological cancer management. A detailed and systematic review of the literature was performed using the MEDLINE and Inspec databases to discover reports using artificial intelligence in urological cancer. The characteristics of machine learning and their implementation were described, and reports of artificial intelligence use in urological cancer were reviewed. While most researchers in this field were found to focus on artificial neural networks to improve the diagnosis, staging and prognostic prediction of urological cancers, some groups are exploring other techniques, such as expert systems and neuro-fuzzy modeling systems. Compared to traditional regression statistics, artificial intelligence methods appear to be accurate and more explorative for analyzing large data cohorts. Furthermore, they allow individualized prediction of disease behavior. Each artificial intelligence method has characteristics that make it suitable for different tasks. The lack of transparency of artificial neural networks hinders global scientific community acceptance of this method, but this can be overcome by neuro-fuzzy modeling systems.

  19. Microvascular Decompression for Classical Trigeminal Neuralgia Caused by Venous Compression: Novel Anatomic Classifications and Surgical Strategy.

    Science.gov (United States)

    Wu, Min; Fu, Xianming; Ji, Ying; Ding, Wanhai; Deng, Dali; Wang, Yehan; Jiang, Xiaofeng; Niu, Chaoshi

    2018-05-01

    Microvascular decompression of the trigeminal nerve is the most effective treatment for trigeminal neuralgia. However, when encountering classical trigeminal neuralgia caused by venous compression, the procedure becomes much more difficult, and failure or recurrence because of incomplete decompression may become frequent. This study aimed to investigate the anatomic variation of the culprit veins and discuss the surgical strategy for different types. We performed a retrospective analysis of 64 consecutive cases in whom veins were considered as responsible vessels alone or combined with other adjacent arteries. The study classified culprit veins according to operative anatomy and designed personalized approaches and decompression management according to different forms of compressive veins. Curative effects were assessed by the Barrow Neurological Institute (BNI) pain intensity score and BNI facial numbness score. The most commonly encountered veins were the superior petrosal venous complex (SPVC), which was artificially divided into 4 types according to both venous tributary distribution and empty point site. We synthetically considered these factors and selected an approach to expose the trigeminal root entry zone, including the suprafloccular transhorizontal fissure approach and infratentorial supracerebellar approach. The methods of decompression consist of interposing and transposing by using Teflon, and sometimes with the aid of medical adhesive. Nerve combing (NC) of the trigeminal root was conducted in situations of extremely difficult neurovascular compression, instead of sacrificing veins. Pain completely disappeared in 51 patients, and the excellent outcome rate was 79.7%. There were 13 patients with pain relief treated with reoperation. Postoperative complications included 10 cases of facial numbness, 1 case of intracranial infection, and 1 case of high-frequency hearing loss. 
The accurate recognition of anatomic variation of the SPVC is crucial for the

  20. Task-oriented lossy compression of magnetic resonance images

    Science.gov (United States)

    Anderson, Mark C.; Atkins, M. Stella; Vaisey, Jacques

    1996-04-01

    A new task-oriented image quality metric is used to quantify the effects of distortion introduced into magnetic resonance images by lossy compression. This metric measures the similarity between a radiologist's manual segmentation of pathological features in the original images and the automated segmentations performed on the original and compressed images. The images are compressed using a general wavelet-based lossy image compression technique, embedded zerotree coding, and segmented using a three-dimensional stochastic model-based tissue segmentation algorithm. The performance of the compression system is then enhanced by compressing different regions of the image volume at different bit rates, guided by prior knowledge about the location of important anatomical regions in the image. Application of the new system to magnetic resonance images is shown to produce compression results superior to the conventional methods, both subjectively and with respect to the segmentation similarity metric.

  1. Log-Linear Model Based Behavior Selection Method for Artificial Fish Swarm Algorithm

    Directory of Open Access Journals (Sweden)

    Zhehuang Huang

    2015-01-01

    Full Text Available Artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fishes. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fishes has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed, so how to construct and select behaviors is an important task. To address this, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. Firstly, we propose a new behavior selection algorithm based on a log-linear model which enhances the decision-making ability of behavior selection. Secondly, an adaptive movement behavior based on adaptive weight is presented, which can adjust dynamically according to the diversity of the fishes. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm.

  2. Log-linear model based behavior selection method for artificial fish swarm algorithm.

    Science.gov (United States)

    Huang, Zhehuang; Chen, Yidong

    2015-01-01

    Artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fishes. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fishes has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed, so how to construct and select behaviors is an important task. To address this, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. Firstly, we propose a new behavior selection algorithm based on a log-linear model which enhances the decision-making ability of behavior selection. Secondly, an adaptive movement behavior based on adaptive weight is presented, which can adjust dynamically according to the diversity of the fishes. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm.
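
    The log-linear selection step can be illustrated with a softmax over weighted behavior scores. This is a sketch of the general idea only; the weights, features and the function name `loglinear_select` are illustrative assumptions, not the paper's exact model.

```python
import math
import random

def loglinear_select(behaviors, weights, features, rng=random):
    """Log-linear behavior selection: each behavior's score is a
    weighted sum of features; selection probabilities come from the
    softmax of the scores."""
    scores = [sum(w * f for w, f in zip(weights[b], features))
              for b in behaviors]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    # roulette-wheel draw according to the softmax probabilities
    r, acc = rng.random(), 0.0
    for b, p in zip(behaviors, probs):
        acc += p
        if r < acc:
            return b, probs
    return behaviors[-1], probs
```

    Behaviors with larger weighted scores receive proportionally larger selection probabilities, while every behavior keeps a nonzero chance, which preserves exploration.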

  3. Usefulness of injecting local anesthetic before compression in stereotactic vacuum-assisted breast biopsy

    International Nuclear Information System (INIS)

    Matsuura, Akiko; Urashima, Masaki; Nishihara, Reisuke

    2009-01-01

    Stereotactic vacuum-assisted breast biopsy is a useful method of breast biopsy. However, some patients feel unbearable breast pain due to compression, and this pain can prevent the breast from being compressed sufficiently. Sufficient compression is important to fix the breast in this method, so breast pain during the procedure is problematic both as a source of stress and for fixing the breast. We performed biopsy in an original manner, injecting local anesthetic before compression in order to relieve breast pain due to compression. This differs from the standard method only slightly, in the order of steps, and requires no special technique or device. It allowed for even greater breast compression: all of the most recent 30 cases were compressed at levels greater than 120 N. This approach is useful not only for relieving pain, but also for fixing the breast. (author)

  4. Review of Data Preprocessing Methods for Sign Language Recognition Systems based on Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Zorins Aleksejs

    2016-12-01

    Full Text Available The article presents an introductory analysis of a research topic relevant to the Latvian deaf community: the development of a Latvian Sign Language Recognition System. More specifically, data preprocessing methods are discussed and several approaches are shown, with a focus on systems based on artificial neural networks, which are among the most successful solutions for the sign language recognition task.

  5. High-speed and high-ratio referential genome compression.

    Science.gov (United States)

    Liu, Yuansheng; Peng, Hui; Wong, Limsoon; Li, Jinyan

    2017-11-01

    The rapidly increasing number of genomes generated by high-throughput sequencing platforms and assembly algorithms is accompanied by problems in data storage, compression and communication. Traditional compression algorithms are unable to meet the demand for high compression ratios due to the intrinsically challenging features of DNA sequences, such as small alphabet size, frequent repeats and palindromes. Reference-based lossless compression, in which only the differences between two similar genomes are stored, is a promising approach with high compression ratio. We present a high-performance referential genome compression algorithm named HiRGC. It is based on a 2-bit encoding scheme and an advanced greedy-matching search on a hash table. We compare the performance of HiRGC with four state-of-the-art compression methods on a benchmark dataset of eight human genomes. HiRGC compresses about 21 gigabytes of each set of the seven target genomes into 96-260 megabytes, achieving compression ratios of 82 to 217 times. This performance is at least 1.9 times better than the best competing algorithm on its best case. Our compression speed is also at least 2.9 times faster. HiRGC is stable and robust in dealing with different reference genomes; in contrast, the competing methods' performance varies widely depending on the reference genome. More experiments on 100 human genomes from the 1000 Genomes Project and on genomes of several other species again demonstrate that HiRGC's performance is consistently excellent. The C++ and Java source code of our algorithm is freely available for academic and non-commercial use and can be downloaded from https://github.com/yuansliu/HiRGC. jinyan.li@uts.edu.au. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
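
    The 2-bit encoding scheme such referential compressors build on can be sketched directly: each of the four bases maps to two bits. The packing below is a straightforward illustration, not code from the HiRGC repository.

```python
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
BASE = "ACGT"

def pack2bit(seq):
    """Pack an A/C/G/T string at 2 bits per base, returning
    (byte_string, original_length)."""
    out, acc, nbits = bytearray(), 0, 0
    for ch in seq:
        acc = (acc << 2) | CODE[ch]
        nbits += 2
        if nbits == 8:
            out.append(acc)
            acc, nbits = 0, 0
    if nbits:  # pad the final partial byte on the right
        out.append(acc << (8 - nbits))
    return bytes(out), len(seq)

def unpack2bit(data, n):
    """Recover the original string from the packed bytes."""
    seq = []
    for i in range(n):
        byte = data[i // 4]
        shift = 6 - 2 * (i % 4)
        seq.append(BASE[(byte >> shift) & 3])
    return "".join(seq)
```

    Eight bases fit into two bytes, a fixed 4x reduction before any reference matching or entropy coding is applied on top.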

  6. Efficient two-dimensional compressive sensing in MIMO radar

    Science.gov (United States)

    Shahbazi, Nafiseh; Abbasfar, Aliazam; Jabbarian-Jahromi, Mohammad

    2017-12-01

    Compressive sensing (CS) has been a way to lower the sampling rate, leading to data reduction for processing in multiple-input multiple-output (MIMO) radar systems. In this paper, we further reduce the computational complexity of a pulse-Doppler collocated MIMO radar by introducing two-dimensional (2D) compressive sensing. To do so, we first introduce a new 2D formulation for the compressed received signals and then propose a new measurement matrix design for our 2D compressive sensing model that is based on minimizing the coherence of the sensing matrix using a gradient descent algorithm. The simulation results show that our proposed 2D measurement matrix design using gradient descent algorithm (2D-MMDGD) has much lower computational complexity than one-dimensional (1D) methods while performing better than conventional methods such as a Gaussian random measurement matrix.
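
    The quantity minimised in such measurement-matrix designs is the mutual coherence of the sensing matrix: the largest absolute normalized inner product between two distinct columns. A plain-Python illustration for small matrices given as lists of rows (the function name is an assumption for this sketch):

```python
from math import sqrt

def mutual_coherence(A):
    """Mutual coherence of a matrix A (list of rows): the maximum
    |<a_i, a_j>| / (||a_i|| * ||a_j||) over distinct columns i, j.
    Measurement-matrix designs try to drive this value down."""
    ncols = len(A[0])
    cols = [[row[j] for row in A] for j in range(ncols)]
    norms = [sqrt(sum(x * x for x in c)) for c in cols]
    mu = 0.0
    for i in range(ncols):
        for j in range(i + 1, ncols):
            dot = sum(a * b for a, b in zip(cols[i], cols[j]))
            mu = max(mu, abs(dot) / (norms[i] * norms[j]))
    return mu
```

    An orthogonal matrix has coherence 0 (the ideal), while correlated columns push the value toward 1; a gradient-based design iteratively nudges the matrix entries to reduce this maximum.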

  7. The deconvolution of complex spectra by artificial immune system

    Science.gov (United States)

    Galiakhmetova, D. I.; Sibgatullin, M. E.; Galimullin, D. Z.; Kamalova, D. I.

    2017-11-01

    An application of the artificial immune system method to the decomposition of complex spectra is presented. The results of decomposing a model contour consisting of three Gaussian components are demonstrated. The artificial immune system is an optimization method based on the behaviour of the biological immune system, and it belongs to the modern family of search-based optimization methods.

  8. High intensity pulse self-compression in short hollow core capillaries

    OpenAIRE

    Butcher, Thomas J.; Anderson, Patrick N.; Horak, Peter; Frey, Jeremy G.; Brocklesby, William S.

    2011-01-01

    The drive for shorter pulses for use in techniques such as high harmonic generation and laser wakefield acceleration requires continual improvement in post-laser pulse compression techniques. The two most commonly used methods of pulse compression for high intensity pulses are hollow capillary compression via self-phase modulation (SPM) [1] and the more recently developed filamentation [2]. Both of these methods can require propagation distances of 1-3 m to achieve spectral broadening and com...

  9. Hybrid Modeling and Optimization of Manufacturing Combining Artificial Intelligence and Finite Element Method

    CERN Document Server

    Quiza, Ramón; Davim, J Paulo

    2012-01-01

    Artificial intelligence (AI) techniques and the finite element method (FEM) are both powerful computing tools, which are extensively used for modeling and optimizing manufacturing processes. The combination of these tools has resulted in a new flexible and robust approach as several recent studies have shown. This book aims to review the work already done in this field as well as to expose the new possibilities and foreseen trends. The book is expected to be useful for postgraduate students and researchers, working in the area of modeling and optimization of manufacturing processes.

  10. Data compression considerations for detectors with local intelligence

    International Nuclear Information System (INIS)

    Garcia-Sciveres, M; Wang, X

    2014-01-01

    This note summarizes the outcome of discussions about how data compression considerations apply to tracking detectors with local intelligence. The method for analyzing data compression efficiency is taken from a previous publication and applied to module characteristics from the WIT2014 workshop. We explore local intelligence and coupled layer structures in the language of data compression. In this context the original intelligent tracker concept of correlating hits to find matches of interest and discard others is just a form of lossy data compression. We now explore how these features (intelligence and coupled layers) can be exploited for lossless compression, which could enable full readout at higher trigger rates than previously envisioned, or even triggerless

  11. Experimental Study on the Compressive Strength of Big Mobility Concrete with Nondestructive Testing Method

    Directory of Open Access Journals (Sweden)

    Huai-Shuai Shang

    2012-01-01

    Full Text Available An experimental study of C20, C25, C30, C40, and C50 big mobility concrete cubes from the laboratory and from construction sites was completed. Nondestructive testing (NDT) was carried out using impact rebound hammer (IRH) techniques to establish a correlation between compressive strength and rebound number. A local strength curve is established by the regression method and its superiority is demonstrated. The rebound method presented is simple, quick, and reliable and covers wide ranges of concrete strengths. The rebound method can be easily applied to concrete specimens as well as existing concrete structures. The final results were compared with previous ones from the literature and also with actual results obtained from samples extracted from existing structures.
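
    The calibration step, fitting a curve relating rebound number to compressive strength, can be illustrated with a least-squares line. Real local curves are often nonlinear and calibrated per region and per aggregate; this linear sketch with hypothetical names only shows the regression mechanics.

```python
def fit_rebound_curve(rebound, strength):
    """Least-squares line strength = a * rebound + b fitted to paired
    (rebound number, measured compressive strength) observations."""
    n = len(rebound)
    mx = sum(rebound) / n
    my = sum(strength) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(rebound, strength))
    sxx = sum((x - mx) ** 2 for x in rebound)
    a = sxy / sxx
    b = my - a * mx
    return a, b

def predict(a, b, r):
    """Estimate compressive strength from a rebound number."""
    return a * r + b
```

    Once fitted to destructive-test pairs, the curve lets a site engineer estimate in-place strength from hammer readings alone.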

  12. Artificial Intelligence Techniques and Methodology

    OpenAIRE

    Carbonell, Jaime G.; Sleeman, Derek

    1982-01-01

    Two closely related aspects of artificial intelligence that have received comparatively little attention in the recent literature are research methodology, and the analysis of computational techniques that span multiple application areas. We believe both issues to be increasingly significant as Artificial Intelligence matures into a science and spins off major application efforts. It is imperative to analyze the repertoire of AI methods with respect to past experience, utility in new domains,...

  13. Sizing of Compression Coil Springs Gas Regulators Using Modern Methods CAD and CAE

    Directory of Open Access Journals (Sweden)

    Adelin Ionel Tuţă

    2010-10-01

    Full Text Available This paper presents a method for sizing the compression coil springs in gas regulators, using CAD (Computer Aided Design) and CAE (Computer Aided Engineering) techniques. The aim of the sizing is to optimize the functioning of regulators under dynamic industrial and household conditions. A gas regulator is a device that automatically and continuously adjusts the output gas pressure to maintain it within pre-set limits at varying flow and input pressure. The performance of pressure regulators, like that of other automatic systems, depends on their behaviour under dynamic operation. Optimizing the time constant of the pneumatic actuators which drive gas regulators leads to better functioning under dynamic conditions.

  14. Demand Forecasting Methods in Accommodation Establishments: A Research with Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Ebru ULUCAN

    2018-05-01

    Full Text Available As in every sector, demand forecasting in tourism is conducted with various qualitative and quantitative methods. In recent years, artificial neural network models, developed as an alternative to these forecasting methods, have given the closest forecast values with the smallest error percentages. This study aims to show that accommodation establishments can use neural network models as an alternative when forecasting their demand. With this aim, neural network models were tested using the sold-room values for the period 2013-2016 of a five-star hotel in Istanbul, and it was found that the results obtained from the models are the closest values to the realized figures. In the light of these results, the hotel's tourism demand for 2017 and 2018 has been forecast.

  15. Coding Strategies and Implementations of Compressive Sensing

    Science.gov (United States)

    Tsai, Tsung-Han

    information from a noisy environment. Using engineering efforts to accomplish the same task usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials into compressive sensing theory to emulate the abilities of sound localization and selective attention. This research investigates and optimizes the sensing capacity and the spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor allows localizing multiple speakers in both stationary and dynamic auditory scenes, and distinguishing mixed conversations from independent sources with a high audio recognition rate.

  16. Analysis of some meteorological parameters using artificial neural ...

    African Journals Online (AJOL)

    Analysis of some meteorological parameters using artificial neural network method for ... The mean daily data for sunshine hours, maximum temperature, cloud cover and ... The study used artificial neural networks (ANN) for the estimation.

  17. Compression of Short Text on Embedded Systems

    DEFF Research Database (Denmark)

    Rein, S.; Gühmann, C.; Fitzek, Frank

    2006-01-01

    The paper details a scheme for lossless compression of a short data series larger than 50 bytes. The method uses arithmetic coding and context modelling with a low-complexity data model. A data model that takes 32 kBytes of RAM already cuts the data size in half. The compression scheme just takes...

  18. Iterative dictionary construction for compression of large DNA data sets.

    Science.gov (United States)

    Kuruppu, Shanika; Beresford-Smith, Bryan; Conway, Thomas; Zobel, Justin

    2012-01-01

    Genomic repositories increasingly include individual as well as reference sequences, which tend to share long identical and near-identical strings of nucleotides. However, the sequential processing used by most compression algorithms, and the volumes of data involved, mean that these long-range repetitions are not detected. An order-insensitive, disk-based dictionary construction method can detect this repeated content and use it to compress collections of sequences. We explore a dictionary construction method that improves repeat identification in large DNA data sets. Our adaptation, COMRAD, of an existing disk-based method identifies exact repeated content in collections of sequences with similarities within and across the set of input sequences. COMRAD compresses the data over multiple passes, which is an expensive process, but allows COMRAD to compress large data sets within reasonable time and space. COMRAD allows for random access to individual sequences and subsequences without decompressing the whole data set. COMRAD has no competitor in terms of the size of data sets that it can compress (extending to many hundreds of gigabytes) and, even for smaller data sets, the results are competitive compared to alternatives; as an example, 39 S. cerevisiae genomes compressed to 0.25 bits per base.
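
    The first step of such dictionary construction, identifying content repeated across a collection, can be illustrated with a toy in-memory k-mer counter. COMRAD itself is a multi-pass, disk-based method handling hundreds of gigabytes; this single-pass sketch with an assumed function name only conveys the idea.

```python
from collections import Counter

def build_repeat_dict(sequences, k=8, min_count=2):
    """Count k-mers across the whole collection and assign dictionary
    codes to those repeated often enough to be worth replacing."""
    counts = Counter()
    for seq in sequences:
        for i in range(len(seq) - k + 1):
            counts[seq[i:i + k]] += 1
    repeats = [kmer for kmer, c in counts.items() if c >= min_count]
    # deterministic code assignment: sort repeats, number them
    return {kmer: idx for idx, kmer in enumerate(sorted(repeats))}
```

    Substrings shared within or across input sequences end up in the dictionary, after which occurrences can be replaced by short codes; a disk-based variant streams the counting to scale far beyond RAM.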

  19. Ethico-epistemological implications of artificial intelligence for ...

    African Journals Online (AJOL)

    We argued for a re-direction of AI research and suggested a humanization of Artificial Intelligence that cloaks technoscientific innovations with humanistic life jackets for man's preservation. The textual analysis method is adopted for this research. Key words: Ethics, Epistemology, Artificial Intelligence, Humanity.

  20. Artificial intelligence methods applied in the controlled synthesis of polydimethilsiloxane - poly (methacrylic acid) copolymer networks with imposed properties

    Science.gov (United States)

    Rusu, Teodora; Gogan, Oana Marilena

    2016-05-01

    This paper describes the use of artificial intelligence methods in copolymer network design. In the present study, we pursue a hybrid algorithm combining two research themes in the genetic design framework: a Kohonen neural network (KNN) path (forward problem) with a genetic algorithm path (backward problem). The Tabu Search method is used to improve the performance of the genetic algorithm path.

  1. Application of complex discrete wavelet transform in classification of Doppler signals using complex-valued artificial neural network.

    Science.gov (United States)

    Ceylan, Murat; Ceylan, Rahime; Ozbay, Yüksel; Kara, Sadik

    2008-09-01

    In biomedical signal classification, compressing the waveform data is vital because of the huge amount of data involved. This paper presents two different structures that use feature extraction algorithms to decrease the size of the feature set in the training and test data. The proposed structures, named wavelet transform-complex-valued artificial neural network (WT-CVANN) and complex wavelet transform-complex-valued artificial neural network (CWT-CVANN), use the real and complex discrete wavelet transforms for feature extraction. The aim of using the wavelet transform is to compress the data and reduce the training time of the network without decreasing the accuracy rate. In this study, the presented structures were applied to the problem of classification of carotid arterial Doppler ultrasound signals. Carotid arterial Doppler ultrasound signals were acquired from the left carotid arteries of 38 patients and 40 healthy volunteers. The patient group included 22 males and 16 females with an established diagnosis of the early phase of atherosclerosis through coronary or aortofemoropopliteal (lower extremity) angiographies (mean age, 59 years; range, 48-72 years). The healthy volunteers were young non-smokers with no apparent risk of atherosclerosis, including 28 males and 12 females (mean age, 23 years; range, 19-27 years). Sensitivity, specificity and average detection rate were calculated for comparison after the training and test phases of all structures were finished. These parameters demonstrated that the training times of the CVANN and the real-valued artificial neural network (RVANN) were reduced by the feature extraction algorithms without decreasing the accuracy rate, in accordance with our aim.

  2. Density ratios in compressions driven by radiation pressure

    International Nuclear Information System (INIS)

    Lee, S.

    1988-01-01

    It has been suggested that in the cannonball scheme of laser compression the pellet may be considered to be compressed by the 'brute force' of the radiation pressure. For such a radiation-driven compression, an energy balance method is applied to give an equation fixing the radius compression ratio K, which is a key parameter for such intense compressions. A shock model is used to yield specific results. For a square-pulse driving power compressing a spherical pellet with a specific heat ratio of 5/3, a density compression ratio Γ of 27 is computed. Double (stepped) pulsing with linearly rising power enhances Γ to 1750. The value of Γ is not dependent on the absolute magnitude of the piston power, as long as this is large enough. Further enhancement of compression by multiple (stepped) pulsing becomes obvious. The enhanced compression increases the energy gain factor G for a 100 μm DT pellet driven by a radiation power of 10^16 W from 6 for a square pulse with 0.5 MJ absorbed energy to 90 for a double (stepped) linearly rising pulse with an absorbed energy of 0.4 MJ, assuming perfect coupling efficiency. (author)
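    The quoted Γ = 27 is consistent with mass conservation for a uniformly compressed spherical pellet if the radius compression ratio is K = 3 (an inference from the numbers in the abstract, not a statement taken from the paper):

```latex
% Mass conservation, \rho_0 R_0^3 = \rho_f R_f^3, links the two ratios:
\Gamma \;=\; \frac{\rho_f}{\rho_0}
       \;=\; \left(\frac{R_0}{R_f}\right)^{3}
       \;=\; K^{3},
\qquad K = 3 \;\Rightarrow\; \Gamma = 27 .
```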

  3. High Bit-Depth Medical Image Compression With HEVC.

    Science.gov (United States)

    Parikh, Saurin S; Ruiz, Damian; Kalva, Hari; Fernandez-Escribano, Gerardo; Adzic, Velibor

    2018-03-01

    Efficient storing and retrieval of medical images has a direct impact on reducing costs and improving access in cloud-based health care services. JPEG 2000 is currently the commonly used compression format for medical images shared using the DICOM standard. However, newer formats such as high efficiency video coding (HEVC) can provide better compression efficiency than JPEG 2000. Furthermore, JPEG 2000 is not suitable for efficiently storing image series and 3-D imagery. Using HEVC, a single format can support all forms of medical images. This paper presents the use of HEVC for diagnostically acceptable medical image compression, focusing on compression efficiency compared to JPEG 2000. Diagnostically acceptable lossy compression and the complexity of high bit-depth medical image compression are studied. Based on an established medically acceptable compression range for JPEG 2000, this paper establishes an acceptable HEVC compression range for medical imaging applications. Experimental results show that using HEVC can increase the compression performance, compared to JPEG 2000, by over 54%. Along with this, a new method for reducing the computational complexity of HEVC encoding for medical images is proposed. Results show that HEVC intra encoding complexity can be reduced by over 55% with a negligible increase in file size.

  4. Signal Compression in Automatic Ultrasonic testing of Rails

    Directory of Open Access Journals (Sweden)

    Tomasz Ciszewski

    2007-01-01

    Full recording of the most important information carried by the ultrasonic signals allows statistical analysis of the measurement data. Statistical analysis of the results gathered during automatic ultrasonic tests, combined with the features of the measuring method, differential lossy coding and traditional lossless data compression (Huffman coding, dictionary coding), leads to a comprehensive, efficient data compression algorithm. The subject of the article is to present the algorithm and the benefits gained by using it in comparison to alternative compression methods. Storage of large amounts of data makes it possible to create an electronic catalogue of ultrasonic defects, which in turn will make it possible to train future qualification systems on new designs of the automatic rail-testing equipment.
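    The lossless stage mentioned in the abstract (Huffman coding) can be sketched as follows; this is a generic textbook coder, not the article's implementation:

```python
import heapq
from collections import Counter

def huffman_codes(data):
    # Build a Huffman tree from symbol frequencies; return bit-string codes.
    # The counter index breaks frequency ties so unlike types never compare.
    heap = [(freq, i, sym) for i, (sym, freq) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate single-symbol input
        return {heap[0][2]: "0"}
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, i2, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, i2, (left, right)))
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):
            walk(node[0], prefix + "0"); walk(node[1], prefix + "1")
        else:
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes

data = "ultrasonic rail testing data"
codes = huffman_codes(data)
encoded = "".join(codes[c] for c in data)
# The code is prefix-free, so decoding is a simple bit walk.
inverse = {v: k for k, v in codes.items()}
decoded, buf = [], ""
for bit in encoded:
    buf += bit
    if buf in inverse:
        decoded.append(inverse[buf]); buf = ""
assert "".join(decoded) == data
assert len(encoded) <= 8 * len(data)
```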

  5. Artificial intelligence

    CERN Document Server

    Hunt, Earl B

    1975-01-01

    Artificial Intelligence provides information pertinent to the fundamental aspects of artificial intelligence. This book presents the basic mathematical and computational approaches to problems in the artificial intelligence field.Organized into four parts encompassing 16 chapters, this book begins with an overview of the various fields of artificial intelligence. This text then attempts to connect artificial intelligence problems to some of the notions of computability and abstract computing devices. Other chapters consider the general notion of computability, with focus on the interaction bet

  6. Spatial capture-recapture: a promising method for analyzing data collected using artificial cover objects

    Science.gov (United States)

    Sutherland, Chris; Munoz, David; Miller, David A.W.; Grant, Evan H. Campbell

    2016-01-01

    Spatial capture–recapture (SCR) is a relatively recent development in ecological statistics that provides a spatial context for estimating abundance and space use patterns, and improves inference about absolute population density. SCR has been applied to individual encounter data collected noninvasively using methods such as camera traps, hair snares, and scat surveys. Despite the widespread use of capture-based surveys to monitor amphibians and reptiles, there are few applications of SCR in the herpetological literature. We demonstrate the utility of the application of SCR for studies of reptiles and amphibians by analyzing capture–recapture data from Red-Backed Salamanders, Plethodon cinereus, collected using artificial cover boards. Using SCR to analyze spatial encounter histories of marked individuals, we found evidence that density differed little among four sites within the same forest (on average, 1.59 salamanders/m²) and that salamander detection probability peaked in early October (Julian day 278), reflecting expected surface activity patterns of the species. The spatial scale of detectability, a measure of space use, indicates that the home range size for this population of Red-Backed Salamanders in autumn was 16.89 m². Surveying reptiles and amphibians using artificial cover boards regularly generates spatial encounter history data of known individuals, which can readily be analyzed using SCR methods, providing estimates of absolute density and inference about the spatial scale of habitat use.

  7. A spectral element-FCT method for the compressible Euler equations

    International Nuclear Information System (INIS)

    Giannakouros, J.; Karniadakis, G.E.

    1994-01-01

    A new algorithm based on spectral element discretizations and flux-corrected transport (FCT) concepts is developed for the solution of the Euler equations of inviscid compressible fluid flow. A conservative formulation is proposed based on one- and two-dimensional cell-averaging and reconstruction procedures, which employ a staggered mesh of Gauss-Chebyshev and Gauss-Lobatto-Chebyshev collocation points. Particular emphasis is placed on the construction of robust boundary and interfacial conditions in one and two dimensions. It is demonstrated through shock-tube problems and two-dimensional simulations that the proposed algorithm leads to stable, non-oscillatory solutions of high accuracy. Of particular importance is the fact that dispersion errors are minimal, as shown through numerical experiments. From the operational point of view, casting the method in a spectral element formulation provides flexibility in the discretization, since a variable number of macro-elements or collocation points per element can be employed to accommodate both accuracy and geometric requirements.
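    The flux-corrected transport idea — a monotone low-order flux plus a limited antidiffusive correction — can be shown in a minimal finite-difference form for 1-D linear advection, rather than the paper's spectral-element Euler setting (the limiter below is a standard Zalesak-style construction; names and parameters are illustrative):

```python
def fct_advection_step(u, a, dt, dx):
    # One FCT step for u_t + a u_x = 0 (a > 0, periodic grid).
    n = len(u)
    ip = lambda i: (i + 1) % n
    c = a * dt / dx
    # Low-order (upwind) and high-order (Lax-Wendroff) fluxes at i+1/2.
    FL = [a * u[i] for i in range(n)]
    FH = [a * (u[i] + u[ip(i)]) / 2 - a * c * (u[ip(i)] - u[i]) / 2
          for i in range(n)]
    A = [FH[i] - FL[i] for i in range(n)]          # antidiffusive fluxes
    # Transported-diffused (monotone) solution.
    utd = [u[i] - c * (u[i] - u[i - 1]) for i in range(n)]
    # Limiter: cap antidiffusive fluxes so no new extrema appear.
    R_plus, R_minus = [], []
    for i in range(n):
        umax = max(u[i], utd[i], u[i-1], utd[i-1], u[ip(i)], utd[ip(i)])
        umin = min(u[i], utd[i], u[i-1], utd[i-1], u[ip(i)], utd[ip(i)])
        Pp = max(A[i-1], 0) - min(A[i], 0)          # antidiffusion into cell i
        Pm = max(A[i], 0) - min(A[i-1], 0)          # antidiffusion out of cell i
        Qp = (umax - utd[i]) * dx / dt
        Qm = (utd[i] - umin) * dx / dt
        R_plus.append(min(1, Qp / Pp) if Pp > 0 else 0)
        R_minus.append(min(1, Qm / Pm) if Pm > 0 else 0)
    C = [min(R_plus[ip(i)], R_minus[i]) if A[i] >= 0
         else min(R_plus[i], R_minus[ip(i)]) for i in range(n)]
    Ac = [C[i] * A[i] for i in range(n)]
    return [utd[i] - (dt / dx) * (Ac[i] - Ac[i - 1]) for i in range(n)]

u0 = [0.0] * 10 + [1.0] * 10
u1 = fct_advection_step(u0, 1.0, 0.5, 1.0)
```

One step applied to a step profile stays conservative and produces no over- or undershoots, which is the non-oscillatory property the abstract refers to.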

  8. Simulation of 2-D Compressible Flows on a Moving Curvilinear Mesh with an Implicit-Explicit Runge-Kutta Method

    KAUST Repository

    AbuAlSaud, Moataz

    2012-07-01

    The purpose of this thesis is to solve the unsteady two-dimensional compressible Navier-Stokes equations on a moving mesh using an implicit-explicit (IMEX) Runge-Kutta scheme. The moving mesh is incorporated into the equations using the Arbitrary Lagrangian-Eulerian (ALE) formulation. The inviscid part of the equations is solved explicitly using a second-order Godunov method, whereas the viscous part is treated implicitly. We simulate subsonic compressible flow over a static NACA-0012 airfoil at different angles of attack. Finally, the moving mesh is exercised by oscillating the airfoil harmonically between angles of attack of 0 and 20 degrees. The numerical solution matches the experimental and numerical results in the literature to within 20%.
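    The IMEX principle (stiff part implicit, non-stiff part explicit) can be sketched with a first-order IMEX Euler step on a stiff scalar model problem; the thesis itself uses a higher-order IMEX Runge-Kutta scheme on the Navier-Stokes equations:

```python
import math

def imex_euler(y0, t0, t1, dt, lam, g):
    # y' = lam*y + g(t): the stiff linear term is taken implicitly,
    # the remaining forcing g explicitly:
    #   y_{n+1} = (y_n + dt*g(t_n)) / (1 - dt*lam)
    y, t = y0, t0
    while t < t1 - 1e-12:
        y = (y + dt * g(t)) / (1.0 - dt * lam)
        t += dt
    return y

# Stiff test problem: y' = -1000*(y - cos t) - sin t, exact solution y = cos t.
lam = -1000.0
g = lambda t: 1000.0 * math.cos(t) - math.sin(t)
y = imex_euler(1.0, 0.0, 1.0, 0.01, lam, g)
# Stable at dt = 0.01 even though fully explicit Euler would need dt < 2e-3.
```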

  9. smallWig: parallel compression of RNA-seq WIG files.

    Science.gov (United States)

    Wang, Zhiying; Weissman, Tsachy; Milenkovic, Olgica

    2016-01-15

    We developed a new lossless compression method for WIG data, named smallWig, offering the best known compression rates for RNA-seq data and featuring random access functionalities that enable visualization, summary statistics analysis and fast queries from the compressed files. Our approach results in order of magnitude improvements compared with bigWig and ensures compression rates only a fraction of those produced by cWig. The key features of the smallWig algorithm are statistical data analysis and a combination of source coding methods that ensure high flexibility and make the algorithm suitable for different applications. Furthermore, for general-purpose file compression, the compression rate of smallWig approaches the empirical entropy of the tested WIG data. For compression with random query features, smallWig uses a simple block-based compression scheme that introduces only a minor overhead in the compression rate. For archival or storage space-sensitive applications, the method relies on context mixing techniques that lead to further improvements of the compression rate. Implementations of smallWig can be executed in parallel on different sets of chromosomes using multiple processors, thereby enabling desirable scaling for future transcriptome Big Data platforms. The development of next-generation sequencing technologies has led to a dramatic decrease in the cost of DNA/RNA sequencing and expression profiling. RNA-seq has emerged as an important and inexpensive technology that provides information about whole transcriptomes of various species and organisms, as well as different organs and cellular communities. The vast volume of data generated by RNA-seq experiments has significantly increased data storage costs and communication bandwidth requirements. Current compression tools for RNA-seq data such as bigWig and cWig either use general-purpose compressors (gzip) or suboptimal compression schemes that leave significant room for improvement. 
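    The block-based random-access scheme described for smallWig can be illustrated with a generic sketch (zlib stands in for smallWig's source coders, and the container layout here is invented for illustration):

```python
import zlib

def compress_blocks(data: bytes, block_size=1024):
    # Compress fixed-size blocks independently and keep an offset index,
    # so any block can be decompressed without touching the others.
    blocks, index, offset = [], [], 0
    for i in range(0, len(data), block_size):
        comp = zlib.compress(data[i:i + block_size])
        index.append((offset, len(comp)))
        blocks.append(comp)
        offset += len(comp)
    return b"".join(blocks), index, block_size

def read_range(payload, index, block_size, start, length):
    # Random access: decompress only the blocks overlapping [start, start+length).
    first, last = start // block_size, (start + length - 1) // block_size
    chunk = b"".join(
        zlib.decompress(payload[off:off + n]) for off, n in index[first:last + 1])
    lo = start - first * block_size
    return chunk[lo:lo + length]

data = bytes(range(256)) * 64          # 16 KiB of stand-in track data
payload, index, bs = compress_blocks(data)
assert read_range(payload, index, bs, 5000, 300) == data[5000:5300]
```

Per-block compression costs a small rate overhead relative to compressing the whole file at once, which is the trade-off the abstract describes between its random-query and archival modes.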

  10. An artificial neural network ensemble method for fault diagnosis of proton exchange membrane fuel cell system

    International Nuclear Information System (INIS)

    Shao, Meng; Zhu, Xin-Jian; Cao, Hong-Fei; Shen, Hai-Feng

    2014-01-01

    The commercial viability of PEMFC (proton exchange membrane fuel cell) systems depends on using effective fault diagnosis technologies. However, many researchers have studied PEMFC systems experimentally without considering certain fault conditions. In this paper, an ANN (artificial neural network) ensemble method is presented that improves the stability and reliability of PEMFC systems. First, a transient model is built, giving the method flexibility in application to some exceptional conditions; this PEMFC dynamic model is simulated using MATLAB. Second, using this model and experiments, the mechanisms of four different faults in PEMFC systems are analyzed in detail. Third, the ANN ensemble for fault diagnosis is built and modeled. The model is trained and tested with the data. The test results show that, compared with previous methods for fault diagnosis of PEMFC systems, the proposed method has a higher diagnostic rate and better generalization ability. Moreover, the partial structure of this method can be altered easily along with changes to the PEMFC system. In general, this method of diagnosis for PEMFC has value for certain applications. - Highlights: • We analyze the principles and mechanisms of the four faults in the PEMFC (proton exchange membrane fuel cell) system. • We design and model an ANN (artificial neural network) ensemble method for the fault diagnosis of the PEMFC system. • This method has a high diagnostic rate and strong generalization ability

  11. Force balancing in mammographic compression

    International Nuclear Information System (INIS)

    Branderhorst, W.; Groot, J. E. de; Lier, M. G. J. T. B. van; Grimbergen, C. A.; Neeter, L. M. F. H.; Heeten, G. J. den; Neeleman, C.

    2016-01-01

    Purpose: In mammography, the height of the image receptor is adjusted to the patient before compressing the breast. An inadequate height setting can result in an imbalance between the forces applied by the image receptor and the paddle, causing the clamped breast to be pushed up or down relative to the body during compression. This leads to unnecessary stretching of the skin and other tissues around the breast, which can make the imaging procedure more painful for the patient. The goal of this study was to implement a method to measure and minimize the force imbalance, and to assess its feasibility as an objective and reproducible method of setting the image receptor height. Methods: A trial was conducted consisting of 13 craniocaudal mammographic compressions on a silicone breast phantom, each with the image receptor positioned at a different height. The image receptor height was varied over a range of 12 cm. In each compression, the force exerted by the compression paddle was increased up to 140 N in steps of 10 N. In addition to the paddle force, the authors measured the force exerted by the image receptor and the reaction force exerted on the patient body by the ground. The trial was repeated 8 times, with the phantom remounted at a slightly different orientation and position between the trials. Results: For a given paddle force, the obtained results showed that there is always exactly one image receptor height that leads to a balance of the forces on the breast. For the breast phantom, deviating from this specific height increased the force imbalance by 9.4 ± 1.9 N/cm (6.7%) for 140 N paddle force, and by 7.1 ± 1.6 N/cm (17.8%) for 40 N paddle force. The results also show that in situations where the force exerted by the image receptor is not measured, the craniocaudal force imbalance can still be determined by positioning the patient on a weighing scale and observing the changes in displayed weight during the procedure. Conclusions: In mammographic breast

  12. Development of classification and prediction methods of critical heat flux using fuzzy theory and artificial neural networks

    International Nuclear Information System (INIS)

    Moon, Sang Ki

    1995-02-01

    This thesis applies new information techniques, artificial neural networks (ANNs) and fuzzy theory, to the investigation of the critical heat flux (CHF) phenomenon for water flow in vertical round tubes. The work performed comprises (a) classification and prediction of CHF based on fuzzy clustering and ANNs, (b) prediction and parametric trend analysis of CHF using ANNs with the introduction of dimensionless parameters, and (c) detection of CHF occurrence using fuzzy rules and a spatiotemporal neural network (STN). Fuzzy clustering and ANNs are used for classification and prediction of the CHF using primary system parameters. The fuzzy clustering classifies the experimental CHF data into a few data clusters (data groups) according to the data characteristics. After classification of the experimental data, the characteristics of the resulting clusters are discussed, with emphasis on the distribution of the experimental conditions and physical mechanisms. The CHF data in each group are used to train an artificial neural network to predict the CHF. The artificial neural network adjusts its weights so as to minimize the prediction error within the corresponding cluster. Application of the proposed method to the KAIST CHF data bank shows good prediction capability of the CHF, better than other existing methods. Parametric trends of the CHF are analyzed by applying artificial neural networks to a CHF data base for water flow in uniformly heated vertical round tubes. The analyses are performed from three viewpoints, i.e., for fixed inlet conditions, for fixed exit conditions, and based on the local conditions hypothesis. In order to remove the necessity of data classification, Katto and Groeneveld et al.'s dimensionless parameters are introduced in training the ANNs with the experimental CHF data. The trained ANNs predict the CHF better than any other conventional correlations, showing RMS errors of 8.9%, 13.1%, and 19.3% for fixed inlet conditions, for fixed exit conditions, and for local

  13. Enhancement of Satellite Image Compression Using a Hybrid (DWT-DCT) Algorithm

    Science.gov (United States)

    Shihab, Halah Saadoon; Shafie, Suhaidi; Ramli, Abdul Rahman; Ahmad, Fauzan

    2017-12-01

    Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) image compression techniques have been utilized in most of the earth observation satellites launched during the last few decades. However, these techniques have some issues that should be addressed. The DWT method has proven to be more efficient than DCT for several reasons. Nevertheless, the DCT can be exploited to improve high-resolution satellite image compression when combined with the DWT technique. Hence, a proposed hybrid (DWT-DCT) method was developed and implemented in the current work, simulating an image compression system on board a small remote sensing satellite, with the aim of achieving a higher compression ratio to decrease the onboard data storage and the downlink bandwidth, while avoiding further complex levels of DWT. This method also succeeded in maintaining the reconstructed satellite image quality through replacing the standard forward DWT thresholding and quantization processes with an alternative process that employed the zero-padding technique, which also helped to reduce the processing time of DWT compression. The DCT, DWT and the proposed hybrid methods were implemented individually, for comparison, on three LANDSAT 8 images, using the MATLAB software package. A comparison was also made between the proposed method and three other previously published hybrid methods. The evaluation of all the objective and subjective results indicated the feasibility of using the proposed hybrid (DWT-DCT) method to enhance the image compression process on board satellites.
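    A minimal 1-D sketch of a hybrid DWT-DCT pipeline (one-level Haar DWT, a zeroed detail band as a crude stand-in for the paper's zero-padding step, then a DCT on the approximation band); the actual method is 2-D and tuned for satellite imagery:

```python
import math

def haar_1level(x):
    # One-level orthonormal Haar DWT: approximation and detail bands.
    a = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def ihaar_1level(a, d):
    x = []
    for ai, di in zip(a, d):
        x += [(ai + di) / math.sqrt(2), (ai - di) / math.sqrt(2)]
    return x

def dct2(x):   # orthonormal DCT-II (naive O(n^2) form)
    n = len(x)
    f = lambda k: math.sqrt((1 if k == 0 else 2) / n)
    return [f(k) * sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n)
                       for i in range(n)) for k in range(n)]

def idct2(X):  # matching inverse (DCT-III)
    n = len(X)
    f = lambda k: math.sqrt((1 if k == 0 else 2) / n)
    return [sum(f(k) * X[k] * math.cos(math.pi * (i + 0.5) * k / n)
                for k in range(n)) for i in range(n)]

# Forward: Haar DWT, drop the detail band, DCT the approximation band.
signal = [math.sin(0.1 * i) * 100 for i in range(64)]
a, d = haar_1level(signal)
coeffs = dct2(a)           # these coefficients would be quantized and coded
# Inverse: undo the DCT, reinsert a zeroed detail band.
rec = ihaar_1level(idct2(coeffs), [0.0] * len(d))
```

For a smooth signal the detail band carries little energy, so reconstruction error stays small; real imagery concentrates the remaining energy in a few DCT coefficients, which is what the hybrid exploits.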

  14. Bit-wise arithmetic coding for data compression

    Science.gov (United States)

    Kiely, A. B.

    1994-01-01

    This article examines the problem of compressing a uniformly quantized independent and identically distributed (IID) source. We present a new compression technique, bit-wise arithmetic coding, that assigns fixed-length codewords to the quantizer output and uses arithmetic coding to compress the codewords, treating the codeword bits as independent. We examine the performance of this method and evaluate the overhead required when used block-adaptively. Simulation results are presented for Gaussian and Laplacian sources. This new technique could be used as the entropy coder in a transform or subband coding system.
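    The bit-wise modelling idea — treat each bit position of the fixed-length codewords as an independent binary source — can be quantified with a simple entropy accounting (no actual arithmetic coder; the source below is a synthetic stand-in for a quantized IID signal):

```python
import math
import random

def bitwise_model_cost(samples, bits=8):
    # Ideal arithmetic-coded size if each bit position is coded with its own
    # independent binary model: sum of per-position binary entropies.
    n = len(samples)
    total = 0.0
    for j in range(bits):
        ones = sum((s >> j) & 1 for s in samples)
        p = ones / n
        h = 0.0 if p in (0.0, 1.0) else -(p * math.log2(p)
                                          + (1 - p) * math.log2(1 - p))
        total += n * h
    return total  # ideal compressed size in bits

# Quantized Laplacian-like source: small magnitudes dominate, so the
# high-order bit positions are strongly biased and cost almost nothing.
random.seed(1)
samples = [min(255, abs(int(random.gauss(0, 8)))) for _ in range(4000)]
cost = bitwise_model_cost(samples)
raw = 8 * len(samples)  # size of the uncompressed fixed-length codewords
```

Because the bits are not truly independent, a real coder of this kind pays a small redundancy relative to the source entropy, which is the overhead the article evaluates.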

  15. Spatially Resolved Artificial Chemistry

    DEFF Research Database (Denmark)

    Fellermann, Harold

    2009-01-01

    Although spatial structures can play a crucial role in chemical systems and can drastically alter the outcome of reactions, the traditional framework of artificial chemistry is a well-stirred tank reactor with no spatial representation in mind. Advanced method development in physical chemistry has made a class of models accessible to the realms of artificial chemistry that represent reacting molecules in a coarse-grained fashion in continuous space. This chapter introduces the mathematical models of Brownian dynamics (BD) and dissipative particle dynamics (DPD) for molecular motion and reaction...
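    A Brownian dynamics propagation step, of the kind such coarse-grained models use for molecular motion, can be sketched as follows (toy parameters, free diffusion only, no reactions or forces):

```python
import random

def brownian_step(pos, D, dt, rng=random):
    # Overdamped Brownian dynamics for a free particle:
    # x(t+dt) = x(t) + sqrt(2*D*dt) * N(0, 1), independently per coordinate.
    s = (2 * D * dt) ** 0.5
    return tuple(x + rng.gauss(0.0, s) for x in pos)

random.seed(0)
D, dt, steps = 1.0, 1e-3, 1000
particles = [(0.0, 0.0, 0.0) for _ in range(200)]
for _ in range(steps):
    particles = [brownian_step(p, D, dt) for p in particles]
# Mean squared displacement should approach 6*D*t in three dimensions.
msd = sum(x*x + y*y + z*z for x, y, z in particles) / len(particles)
```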

  17. Correlation and image compression for limited-bandwidth CCD.

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, Douglas G.

    2005-07-01

    As radars move to Unmanned Aerial Vehicles with limited-bandwidth data downlinks, the amount of data stored and transmitted with each image becomes more significant. This document gives the results of a study to determine the effect of lossy compression in the image magnitude and phase on Coherent Change Detection (CCD). We examine 44 lossy compression types, plus lossless zlib compression, and test each compression method with over 600 CCD image pairs. We also derive theoretical predictions for the correlation for most of these compression schemes, which compare favorably with the experimental results. We recommend image transmission formats for limited-bandwidth programs having various requirements for CCD, including programs which cannot allow performance degradation and those which have stricter bandwidth requirements at the expense of CCD performance.
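    The CCD figure of merit is the sample coherence of two co-registered complex images; a sketch of how lossy quantization degrades it (synthetic Gaussian clutter, not the report's compression types):

```python
import random

def coherence(f, g):
    # Sample coherence of two complex image vectors:
    # |sum f * conj(g)| / sqrt(sum |f|^2 * sum |g|^2), always in [0, 1].
    num = abs(sum(a * b.conjugate() for a, b in zip(f, g)))
    den = (sum(abs(a)**2 for a in f) * sum(abs(b)**2 for b in g)) ** 0.5
    return num / den

def quantize(z, step):
    # Crude lossy-compression stand-in: uniform quantization of I and Q.
    return complex(round(z.real / step) * step, round(z.imag / step) * step)

random.seed(2)
scene = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(5000)]
# Second pass: same scene plus a little decorrelation noise.
pass2 = [z + complex(random.gauss(0, 0.05), random.gauss(0, 0.05))
         for z in scene]
gamma_raw = coherence(scene, pass2)
gamma_lossy = coherence(scene, [quantize(z, 1.0) for z in pass2])
```

Quantization noise enters the coherence estimate exactly like scene decorrelation, which is why the report can predict the correlation loss of each compression scheme analytically.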

  18. Artificial Consciousness or Artificial Intelligence

    Directory of Open Access Journals (Sweden)

    Spanache Florin

    2017-05-01

    Artificial intelligence is a tool designed by people for the gratification of their own creative ego, so we cannot confuse conscience with intelligence, nor even intelligence in its human representation with conscience. They are all different concepts and they have different uses. Philosophically, there are differences between autonomous people and automatic artificial intelligence. This is the difference between intelligence and artificial intelligence: autonomous versus automatic. But conscience is above these differences, because it is neither conditioned by the self-preservation of autonomy, since a conscience is something that you use to help your neighbor, nor automatic, because one's conscience is tested by situations which are not similar or subject to routine. So, only in science-fiction literature is artificial intelligence similar to an autonomous conscience-endowed being. In real life, religion with its notions of redemption, sin, expiation, confession and communion will not have any meaning for a machine which cannot make a mistake on its own.

  19. An Examination of a Music Appreciation Method Incorporating Tactile Sensations from Artificial Vibrations

    Science.gov (United States)

    Ideguchi, Tsuyoshi; Yoshida, Ryujyu; Ooshima, Keita

    We examined how test subjects' impressions of music changed when artificial vibrations were incorporated as constituent elements of a musical composition. In this study, test subjects listened to several music samples in which different types of artificial vibration had been incorporated and then subjectively evaluated any resulting changes to their impressions of the music. The following results were obtained: i) Even if rhythm vibration is added to a silent component of a musical composition, it can effectively enhance musical fitness. This was readily accomplished when actual sounds that had been synchronized with the vibration components were provided beforehand. ii) The music could be listened to more comfortably by adding not only natural vibration extracted from percussion instruments but also artificial vibration as tactile stimulation with intentional timing. Furthermore, it was found that the test subjects' impressions of the music were affected by the characteristics of the artificial vibration. iii) Adding vibration in high-frequency areas can offer an effective and practical way of enhancing the appeal of a musical composition. iv) Movement sensations of sound and vibration could be experienced when the strengths of the sound and vibration were modified in turn. These results suggest that the intentional application of artificial vibration could act as a sensitivity amplification factor for the listener.

  20. Simulation of a pulsatile total artificial heart: Development of a partitioned Fluid Structure Interaction model

    Science.gov (United States)

    Sonntag, Simon J.; Kaufmann, Tim A. S.; Büsen, Martin R.; Laumen, Marco; Linde, Torsten; Schmitz-Rode, Thomas; Steinseifer, Ulrich

    2013-04-01

    Heart disease is one of the leading causes of death in the world. Due to a shortage of donor organs, artificial hearts can be a bridge to transplantation or even serve as a destination therapy for patients with terminal heart insufficiency. A pusher-plate-driven pulsatile membrane pump, the Total Artificial Heart (TAH) ReinHeart, is currently under development at the Institute of Applied Medical Engineering of RWTH Aachen University. This paper presents the methodology of a fully coupled three-dimensional time-dependent Fluid Structure Interaction (FSI) simulation of the TAH using a commercial partitioned block-Gauss-Seidel coupling package. Partitioned coupling of the incompressible fluid with the slender flexible membrane, as well as a fluid/structure density ratio of about unity, led inherently to a deterioration of stability (the 'artificial added mass instability'). The objective was to conduct a stable simulation of the pumping process with high accuracy. In order to achieve stability, a combined resistance and pressure outlet boundary condition as well as the interface artificial compressibility method was applied. An analysis of the contact algorithm and turbulence condition is presented. Independence tests were performed for the structural and fluid meshes, the time step size and the number of pulse cycles. Because of the large deformation of the fluid domain, a variable mesh stiffness depending on certain mesh properties was specified for the fluid elements. Adaptive remeshing was avoided. Different approaches for the mesh stiffness function are compared with respect to convergence, preservation of mesh topology and mesh quality. The resulting mesh aspect ratios, mesh expansion factors and mesh orthogonalities are evaluated in detail. The membrane motion and flow distribution of the coupled simulations are compared with a top-view recording and stereo Particle Image Velocimetry (PIV) measurements, respectively, of the actual pump.
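    The interface artificial compressibility idea — a fully enclosed incompressible fluid gives the partitioned iteration no pressure feedback, so a small artificial compressibility is added to relax the volume constraint — can be reduced to a toy volume-balance iteration (the cavity model and all parameters here are invented for illustration):

```python
def coupled_pressure(V0, Q, compliance, ac_eps, tol=1e-10, max_iter=10000):
    # Toy fully enclosed FSI coupling: an incompressible fluid chamber must
    # absorb an injected volume Q; the structural cavity expands with pressure
    # as V(p) = V0 * (1 + compliance * p). A bare Dirichlet-Neumann sweep has
    # no mechanism to adjust p in a closed domain, so each coupling pass
    # updates p through an artificial-compressibility coefficient ac_eps.
    p = 0.0
    for _ in range(max_iter):
        V_structure = V0 * (1.0 + compliance * p)
        residual = (V0 + Q) - V_structure      # volume the fluid cannot absorb
        dp = residual / (ac_eps * V0)          # artificial-compressibility update
        p += dp
        if abs(dp) < tol:
            return p
    raise RuntimeError("coupling iteration did not converge")

# Exact balance: V0 * (1 + c*p) = V0 + Q  =>  p = Q / (c * V0) = 0.04 here.
p = coupled_pressure(V0=1.0, Q=0.02, compliance=0.5, ac_eps=2.0)
```

The iteration converges when ac_eps exceeds the structural compliance ratio (here the update contracts by a factor 1 - compliance/ac_eps per pass); choosing the artificial compressibility is exactly this stability/speed trade-off.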

  1. Using artificial intelligence methods to design new conducting polymers

    Directory of Open Access Journals (Sweden)

    Ronaldo Giro

    2003-12-01

    In recent years the possibility of creating new conducting polymers by exploring the concept of copolymerization (different structural monomeric units) has attracted much attention from experimental and theoretical points of view. Due to the rich reactivity of carbon an almost infinite number of new structures is possible, and the procedure of trial and error has been the rule. In this work we have used a methodology capable of generating new structures with pre-specified properties. It combines the negative factor counting (NFC) technique with artificial intelligence methods (genetic algorithms - GAs). We present the results of a case study for poly(phenylenesulfide phenyleneamine) (PPSA), a copolymer formed by combining the homopolymers polyaniline (PANI) and polyphenylenesulfide (PPS). The methodology was successfully applied to the problem of obtaining binary up to quinternary disordered polymeric alloys with a pre-specified gap value or exhibiting metallic properties. It is completely general and can in principle be adapted to the design of new classes of materials with pre-specified properties.
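    The genetic-algorithm path (backward problem: find a composition matching a target gap) can be sketched generically; the fitness model below is a toy stand-in for the NFC evaluation, and all names and parameters are illustrative:

```python
import random

def run_ga(fitness, genome_len=16, pop_size=40, generations=60, seed=0):
    # Minimal genetic algorithm: tournament selection, one-point crossover,
    # occasional bit-flip mutation; maximizes the given fitness function.
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    def tournament():
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, genome_len)
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.1:
                i = rng.randrange(genome_len)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Hypothetical objective: a genome encodes which monomer sits at each site,
# and a toy linear gap model scores closeness to a 2.0 eV target (a real run
# would call the NFC evaluation here instead).
def fitness(genome):
    gap = 1.0 + 2.0 * sum(genome) / len(genome)   # toy gap model, in eV
    return -abs(gap - 2.0)

best = run_ga(fitness)
```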

  2. Data compression techniques and the ACR-NEMA digital interface communications standard

    International Nuclear Information System (INIS)

    Zielonka, J.S.; Blume, H.; Hill, D.; Horil, S.C.; Lodwick, G.S.; Moore, J.; Murphy, L.L.; Wake, R.; Wallace, G.

    1987-01-01

    Data compression offers the possibility of achieving high effective information transfer rates between devices and of efficient utilization of digital storage devices in meeting department-wide archiving needs. Accordingly, the ACR-NEMA Digital Imaging and Communications Standards Committee established a Working Group to develop a means to incorporate the optimal use of a wide variety of current compression techniques while remaining compatible with the standard. The proposed method allows the use of public domain techniques, predetermined methods between devices already aware of the selected algorithm, and the ability for the originating device to specify algorithms and parameters prior to transmitting compressed data. Because of the latter capability, the technique has the potential for supporting many compression algorithms not yet developed or in common use. Both lossless and lossy methods can be implemented. In addition to a description of the overall structure of this proposal, several examples using current compression algorithms are given.

  3. Hardware compression using common portions of data

    Science.gov (United States)

    Chang, Jichuan; Viswanathan, Krishnamurthy

    2015-03-24

    Methods and devices are provided for data compression. Data compression can include receiving a plurality of data chunks, sampling at least some of the plurality of data chunks, extracting a common portion from a number of the plurality of data chunks based on the sampling, and storing a remainder of the plurality of data chunks in memory.
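    The claimed steps can be illustrated with a minimal sketch (hypothetical helper names; the patent does not specify this implementation), treating the "common portion" as a shared byte prefix that is stored once while only each chunk's remainder is kept:

    ```python
    def extract_common_prefix(chunks):
        """Return the longest byte prefix shared by all sampled chunks."""
        prefix = chunks[0]
        for chunk in chunks[1:]:
            # Shrink the candidate prefix until it matches this chunk too.
            while not chunk.startswith(prefix):
                prefix = prefix[:-1]
        return prefix

    def compress(chunks):
        common = extract_common_prefix(chunks)
        remainders = [c[len(common):] for c in chunks]
        return common, remainders

    def decompress(common, remainders):
        return [common + r for r in remainders]

    chunks = [b"HDR1-payloadA", b"HDR1-payloadB", b"HDR1-payloadC"]
    common, rest = compress(chunks)
    assert decompress(common, rest) == chunks
    ```

    In a hardware setting the sampling step would inspect only a subset of chunks; here all chunks are scanned for simplicity.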

  4. Cognitive Artificial Intelligence Method for Interpreting Transformer Condition Based on Maintenance Data

    Directory of Open Access Journals (Sweden)

    Karel Octavianus Bachri

    2017-07-01

    Full Text Available A3S (Arwin-Adang-Aciek-Sembiring) is a method of information fusion at a single observation, and OMA3S (Observation Multi-time A3S) is a method of information fusion for time-series data. This paper proposes an OMA3S-based Cognitive Artificial-Intelligence method for interpreting transformer condition, calculated from maintenance data of the Indonesia National Electric Company (PLN). First, the proposed method is tested using previously published data, followed by implementation on maintenance data. Maintenance data are fused to obtain part conditions, and part conditions are fused to obtain the transformer condition. Results show the proposed method is valid for DGA fault identification, with an average accuracy of 91.1%. The proposed method not only interprets the major fault, it can also identify minor faults occurring along with the major fault, allowing an early-warning feature. Results also show that part conditions can be interpreted using information fusion on maintenance data, and that the transformer condition can be interpreted using information fusion on part conditions. Future work on this research is to gather more data, to elaborate more factors to be fused, and to design a cognitive processor that can be used to implement this concept of intelligent instrumentation.

  5. Traceable calibration of impedance heads and artificial mastoids

    International Nuclear Information System (INIS)

    Scott, D A; Dickinson, L P; Bell, T J

    2015-01-01

    Artificial mastoids are devices which simulate the mechanical characteristics of the human head, and in particular of the bony structure behind the ear. They are an essential tool in the calibration of bone-conduction hearing aids and audiometers. With the emergence of different types of artificial mastoids in the market, and the realisation that the visco-elastic part of these instruments changes over time, the development of a method of traceable calibration of these devices without relying on commercial software has become important for national metrology institutes. This paper describes commercially available calibration methods, and the development of a traceable calibration method including the traceable calibration of the impedance head used to measure the mechanical impedance of the artificial mastoid. (paper)

  6. Compressive buckling of black phosphorene nanotubes: an atomistic study

    Science.gov (United States)

    Nguyen, Van-Trang; Le, Minh-Quy

    2018-04-01

    We investigate, through the molecular dynamics finite element method with a Stillinger-Weber potential, the uniaxial compression of armchair and zigzag black phosphorene nanotubes. We focus especially on the effects of the tube's diameter at a fixed length-diameter ratio, the effects of the tube's length for a pair of armchair and zigzag tubes of equal diameter, and the effects of the tube's diameter at fixed length. Young's modulus, critical compressive stress and critical compressive strain are studied and discussed for these three case studies. Compressive buckling was clearly observed in the armchair nanotubes. Local bond breaking near the boundary occurred in the zigzag ones under compression.

  7. An efficient algorithm for MR image reconstruction and compression

    International Nuclear Information System (INIS)

    Wang, Hang; Rosenfeld, D.; Braun, M.; Yan, Hong

    1992-01-01

    In magnetic resonance imaging (MRI), the original data are sampled in the spatial frequency domain. The sampled data thus constitute a set of discrete Fourier transform (DFT) coefficients. The image is usually reconstructed by taking the inverse DFT. The image data may then be efficiently compressed using the discrete cosine transform (DCT). A method of using the DCT to treat the sampled data is presented which combines two procedures, image reconstruction and data compression. This method may be particularly useful in medical picture archiving and communication systems, where both image reconstruction and compression are important issues. 11 refs., 3 figs.

  8. Spectrum recovery method based on sparse representation for segmented multi-Gaussian model

    Science.gov (United States)

    Teng, Yidan; Zhang, Ye; Ti, Chunli; Su, Nan

    2016-09-01

    Hyperspectral images (HSIs) offer excellent feature discriminability, supplying diagnostic characteristics with high spectral resolution. However, various degradations can negatively affect the spectral information, including water absorption and band-continuous noise. On the other hand, the huge data volume and strong redundancy among spectra create intense demand for compressing HSIs in the spectral dimension, which also leads to loss of spectral information. Reconstruction of the spectral diagnostic characteristics is of irreplaceable significance for subsequent applications of HSIs. This paper introduces a spectrum restoration method for HSIs making use of a segmented multi-Gaussian model (SMGM) and sparse representation. An SMGM is established to describe the asymmetric spectral absorption and reflection characteristics, and its rationality and sparsity are discussed. Applying compressed sensing (CS) theory, we form a sparse representation of the SMGM. The degraded and compressed HSIs can then be reconstructed using the uninjured or key bands. Finally, we apply a low-rank matrix recovery (LRMR) algorithm as post-processing to restore the spatial details. The proposed method was tested on spectral data captured on the ground under artificial water-absorption conditions and on an AVIRIS HSI data set. The experimental results, in terms of qualitative and quantitative assessments, demonstrate its effectiveness in recovering the spectral information from both degradation and lossy compression. The spectral diagnostic characteristics and the spatial geometric features are well preserved.

  9. Three-dimensional imaging of artificial fingerprint by optical coherence tomography

    Science.gov (United States)

    Larin, Kirill V.; Cheng, Yezeng

    2008-03-01

    Fingerprint recognition is one of the most popular biometric methods. However, due to their surface-topography limitation, fingerprint recognition scanners are easily spoofed, e.g. using artificial fingerprint dummies. Thus, biometric fingerprint identification devices need to be more accurate and secure to deal with different fraudulent methods, including dummy fingerprints. Previously, we demonstrated that Optical Coherence Tomography (OCT) images revealed the presence of artificial fingerprints (made from different household materials, such as cement and liquid silicone rubber) at all times, while the artificial fingerprints easily spoofed a commercial fingerprint reader. We also demonstrated that an analysis of the autocorrelation of the OCT images could be used in automatic recognition systems. Here, we exploited three-dimensional (3D) OCT imaging of artificial fingerprints to generate vivid 3D images of both the artificial fingerprint layer and the real fingerprint layer beneath. The reconstructed 3D image can not only indicate whether an artificial material intended to spoof the scanner lies above the real finger, but can also provide the hacker's fingerprint. The results of these studies suggest that Optical Coherence Tomography could be a powerful real-time noninvasive method for accurate identification of artificial as well as real fingerprints.

  10. A novel high-frequency encoding algorithm for image compression

    Science.gov (United States)

    Siddeq, Mohammed M.; Rodrigues, Marcos A.

    2017-12-01

    In this paper, a new method for image compression is proposed whose quality is demonstrated through accurate 3D reconstruction from 2D images. The method is based on the discrete cosine transform (DCT) together with a high-frequency minimization encoding algorithm at compression stage and a new concurrent binary search algorithm at decompression stage. The proposed compression method consists of five main steps: (1) divide the image into blocks and apply DCT to each block; (2) apply a high-frequency minimization method to the AC-coefficients reducing each block by 2/3 resulting in a minimized array; (3) build a look up table of probability data to enable the recovery of the original high frequencies at decompression stage; (4) apply a delta or differential operator to the list of DC-components; and (5) apply arithmetic encoding to the outputs of steps (2) and (4). At decompression stage, the look up table and the concurrent binary search algorithm are used to reconstruct all high-frequency AC-coefficients while the DC-components are decoded by reversing the arithmetic coding. Finally, the inverse DCT recovers the original image. We tested the technique by compressing and decompressing 2D images including images with structured light patterns for 3D reconstruction. The technique is compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results demonstrate that the proposed compression method is perceptually superior to JPEG with equivalent quality to JPEG2000. Concerning 3D surface reconstruction from images, it is demonstrated that the proposed method is superior to both JPEG and JPEG2000.
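    The DCT stage of steps (1)-(2) can be sketched in a few lines. The following pure-Python fragment is illustrative only: the block size, the orthonormal normalization, and the choice to keep the first third of coefficients are assumptions, and the paper's look-up-table recovery, concurrent binary search and arithmetic coding stages are omitted.

    ```python
    import math

    def dct(block):
        """Orthonormal DCT-II of a 1-D block."""
        n = len(block)
        out = []
        for k in range(n):
            s = sum(x * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                    for i, x in enumerate(block))
            scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
            out.append(scale * s)
        return out

    def idct(coeffs):
        """Inverse (DCT-III) of the orthonormal DCT-II above."""
        n = len(coeffs)
        out = []
        for i in range(n):
            s = 0.0
            for k, c in enumerate(coeffs):
                scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
                s += scale * c * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
            out.append(s)
        return out

    block = [52, 55, 61, 66, 70, 61, 64, 73]
    coeffs = dct(block)
    # "High-frequency minimization": keep the DC term and the first third of
    # the AC terms, zero the rest (a 2/3 reduction of the coefficient array).
    kept = coeffs[:3] + [0.0] * (len(coeffs) - 3)
    approx = idct(kept)
    ```

    Because the DCT compacts most of the block's energy into the low-frequency coefficients, the reconstruction from the truncated array remains a close approximation of the original block.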

  11. MacCormack's technique-based pressure reconstruction approach for PIV data in compressible flows with shocks

    Science.gov (United States)

    Liu, Shun; Xu, Jinglei; Yu, Kaikai

    2017-06-01

    This paper proposes an improved approach for the extraction of pressure fields from velocity data, such as that obtained by particle image velocimetry (PIV), especially for steady compressible flows with strong shocks. The approach is derived from the Navier-Stokes equations, assuming an adiabatic condition and neglecting viscosity at the flow-field boundaries measured by PIV. The computing method is based on MacCormack's technique from computational fluid dynamics; thus, this approach is called the MacCormack method. Moreover, the MacCormack method is compared with several approaches proposed in previous literature, including the isentropic method, the spatial integration method and the Poisson method. The effects of velocity error level and PIV spatial resolution on these approaches are also quantified using artificial velocity data containing shock waves. The results demonstrate that the MacCormack method has higher reconstruction accuracy than the other approaches, and its advantages become more remarkable as the shock strengthens. Furthermore, the performance of the MacCormack method is also validated using synthetic PIV images with an oblique shock wave, confirming the feasibility and advantage of this approach in real PIV experiments. This work is highly significant for studies in aerospace engineering, especially the outer flow fields of supersonic aircraft and the internal flow fields of ramjets.

  12. Compressibility, turbulence and high speed flow

    CERN Document Server

    Gatski, Thomas B

    2009-01-01

    This book introduces the reader to the field of compressible turbulence and compressible turbulent flows across a broad speed range through a unique complementary treatment of both the theoretical foundations and the measurement and analysis tools currently used. For the computation of turbulent compressible flows, current methods of averaging and filtering are presented so that the reader is exposed to a consistent development of applicable equation sets for both the mean or resolved fields and the transport equations for the turbulent stress field. For the measurement of turbulent compressible flows, current techniques ranging from hot-wire anemometry to PIV are evaluated and their limitations assessed. Dynamic features of free shear flows, including jets, mixing layers and wakes, and of wall-bounded flows, including shock-turbulence and shock/boundary-layer interactions, obtained from computations, experiments and simulations, are characterized and discussed. Key features: * Describes prediction methodologies in...

  13. WSNs Microseismic Signal Subsection Compression Algorithm Based on Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Zhouzhou Liu

    2015-01-01

    Full Text Available For wireless-network microseismic monitoring, with its problems of low compression ratio and high communication energy consumption, this paper proposes a segmented compression algorithm based on the characteristics of microseismic signals and on compressed sensing (CS) theory as used in the transmission process. The algorithm segments the collected data according to the number of nonzero elements; by reducing the number of combinations of nonzero elements within each segment it improves the accuracy of signal reconstruction, while exploiting compressive sensing theory to achieve a high compression ratio. Experimental results show that, with the quantum chaos immune clone reconstruction (Q-CSDR) algorithm as the reconstruction algorithm, for signals with a sparsity level higher than 40 and a compression ratio of more than 0.4, the mean square error is less than 0.01, prolonging network life by a factor of 2.

  14. Fluvial facies reservoir productivity prediction method based on principal component analysis and artificial neural network

    Directory of Open Access Journals (Sweden)

    Pengyu Gao

    2016-03-01

    Full Text Available It is difficult to forecast well productivity because of the complexity of vertical and horizontal development in fluvial facies reservoirs. This paper proposes a method based on principal component analysis and an artificial neural network to predict the well productivity of fluvial facies reservoirs. The method summarizes the statistical reservoir factors and engineering factors that affect well productivity, extracts information by applying principal component analysis, and exploits the neural network's ability to approximate arbitrary functions to realize an accurate and efficient prediction of fluvial facies reservoir well productivity. This method provides an effective way of forecasting the productivity of fluvial facies reservoirs, which is affected by multiple factors and complex mechanisms. The study results show that this method is a practical, effective and accurate indirect productivity forecasting method suitable for field application.
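    The PCA step of such a pipeline can be sketched in isolation (the neural-network regression stage is omitted, and the data and variable names below are illustrative, not from the paper). For two correlated "reservoir factors" the leading principal axis has a closed form via the 2x2 covariance matrix:

    ```python
    import math

    def first_principal_component(points):
        """Unit vector along the leading eigenvector of the 2x2 covariance."""
        n = len(points)
        mx = sum(p[0] for p in points) / n
        my = sum(p[1] for p in points) / n
        a = sum((p[0] - mx) ** 2 for p in points) / n           # var(x)
        c = sum((p[1] - my) ** 2 for p in points) / n           # var(y)
        b = sum((p[0] - mx) * (p[1] - my) for p in points) / n  # cov(x, y)
        theta = 0.5 * math.atan2(2 * b, a - c)                  # 2-D closed form
        return (math.cos(theta), math.sin(theta)), (mx, my)

    def project(points, axis, mean):
        """Scores of each point along the principal axis (2-D -> 1-D)."""
        (ux, uy), (mx, my) = axis, mean
        return [(p[0] - mx) * ux + (p[1] - my) * uy for p in points]

    # Strongly correlated synthetic "reservoir factors".
    pts = [(t, 2 * t + 0.1 * ((-1) ** i)) for i, t in enumerate(range(10))]
    axis, mean = first_principal_component(pts)
    scores = project(pts, axis, mean)
    ```

    The one-dimensional scores would then be fed to the regression model, removing redundancy between the correlated input factors.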

  15. Method and apparatus for optimizing operation of a power generating plant using artificial intelligence techniques

    Science.gov (United States)

    Wroblewski, David [Mentor, OH; Katrompas, Alexander M [Concord, OH; Parikh, Neel J [Richmond Heights, OH

    2009-09-01

    A method and apparatus for optimizing the operation of a power generating plant using artificial intelligence techniques. One or more decisions D are determined for at least one consecutive time increment, where at least one of the decisions D is associated with a discrete variable for the operation of a power plant device in the power generating plant. In an illustrated embodiment, the power plant device is a soot cleaning device associated with a boiler.

  16. Compression and fast retrieval of SNP data.

    Science.gov (United States)

    Sambo, Francesco; Di Camillo, Barbara; Toffolo, Gianna; Cobelli, Claudio

    2014-11-01

    The increasing interest in rare genetic variants and epistatic genetic effects on complex phenotypic traits is currently pushing genome-wide association study design towards datasets of increasing size, both in the number of studied subjects and in the number of genotyped single nucleotide polymorphisms (SNPs). This, in turn, is leading to a compelling need for new methods for compression and fast retrieval of SNP data. We present a novel algorithm and file format for compressing and retrieving SNP data, specifically designed for large-scale association studies. Our algorithm is based on two main ideas: (i) compress linkage disequilibrium blocks in terms of differences with a reference SNP and (ii) compress reference SNPs exploiting information on their call rate and minor allele frequency. Tested on two SNP datasets and compared with several state-of-the-art software tools, our compression algorithm is shown to be competitive in terms of compression rate and to outperform all tools in terms of time to load compressed data. Our compression and decompression algorithms are implemented in a C++ library, are released under the GNU General Public License and are freely downloadable from http://www.dei.unipd.it/~sambofra/snpack.html. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
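    Idea (i), compressing a linkage disequilibrium block as differences against a reference SNP, can be sketched as a toy illustration (the 0/1/2 genotype encoding and function names are assumptions, not the SNPack file format):

    ```python
    def compress_block(block):
        """block: list of SNP columns (equal-length lists of genotypes 0/1/2).
        The first column serves as the reference; the others are stored as
        sparse (position, value) differences from it."""
        reference = block[0]
        diffs = []
        for column in block[1:]:
            diffs.append([(i, v)
                          for i, (r, v) in enumerate(zip(reference, column))
                          if v != r])
        return reference, diffs

    def decompress_block(reference, diffs):
        columns = [list(reference)]
        for d in diffs:
            col = list(reference)
            for i, v in d:
                col[i] = v
            columns.append(col)
        return columns

    # Three SNPs in high linkage disequilibrium: few positions differ.
    block = [[0, 1, 2, 1, 0], [0, 1, 2, 2, 0], [0, 1, 2, 1, 0]]
    ref, diffs = compress_block(block)
    assert decompress_block(ref, diffs) == block
    ```

    Because SNPs within a linkage block are highly correlated, most difference lists are short or empty, which is where the compression comes from.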

  17. An analysis of the efficacy of bag-valve-mask ventilation and chest compression during different compression-ventilation ratios in manikin-simulated paediatric resuscitation.

    Science.gov (United States)

    Kinney, S B; Tibballs, J

    2000-01-01

    The ideal chest compression-ventilation ratio for children during performance of cardiopulmonary resuscitation (CPR) has not been determined. The efficacy of chest compression and ventilation at compression-ventilation ratios of 5:1, 10:2 and 15:2 was examined. Eighteen nurses, working in pairs, were instructed to provide chest compression and bag-valve-mask ventilation for 1 min with each ratio, in random order, on a child-sized manikin. The subjects had been taught paediatric CPR within the previous 3 to 5 months. The efficacy of ventilation was assessed by measuring the expired tidal volume and the number of breaths provided. The rate of chest compression was guided by a metronome set at 100/min. The efficacy of chest compressions was assessed by measuring the rate and depth of compression. There was no significant difference in the mean tidal volume or the percentage of effective chest compressions delivered for each compression-ventilation ratio. The number of breaths delivered was greatest with the 5:1 ratio. The percentage of effective chest compressions was equal with all three methods, but the number of effective chest compressions was greatest with a ratio of 5:1. This study supports the use of a compression-ventilation ratio of 5:1 during two-rescuer paediatric cardiopulmonary resuscitation.

  18. Compressive Transient Imaging

    KAUST Repository

    Sun, Qilin

    2017-04-01

    High-resolution transient/3D imaging technology is of high interest in both scientific research and commercial applications. Currently, transient imaging methods suffer from either low resolution or time-consuming mechanical scanning. We propose a new method based on TCSPC and compressive sensing to achieve high-resolution transient imaging with a capture process of a few seconds. A picosecond laser sends a series of equal-interval pulses while the synchronized SPAD camera's detection gate window has a precise phase delay in each cycle. After capturing enough points, we are able to assemble a whole signal. By inserting a DMD device into the system, we can modulate all the frames of data using binary random patterns to later reconstruct a super-resolution transient/3D image. Because the low fill factor of the SPAD sensor makes the compressive sensing scenario ill-conditioned, we designed and fabricated a diffractive microlens array. We propose a new CS reconstruction algorithm that simultaneously denoises measurements corrupted by Poisson noise. Instead of a single SPAD sensor, we chose a SPAD array because it drastically reduces the required number of measurements and the reconstruction time. Moreover, it is not easy to reconstruct a high-resolution image with a single sensor, whereas an array only needs to reconstruct small patches from a few measurements. In this thesis, we evaluated the reconstruction methods using both clean measurements and versions corrupted by Poisson noise. The results show how the integration over the layers influences image quality, and that our algorithm works well when the measurements suffer from non-trivial Poisson noise. This is a breakthrough in the areas of both transient imaging and compressive sensing.

  19. Compressive Sampling of EEG Signals with Finite Rate of Innovation

    Directory of Open Access Journals (Sweden)

    Poh Kok-Kiong

    2010-01-01

    Full Text Available Analyses of electroencephalographic signals and subsequent diagnoses can only be done effectively on long-term recordings that preserve the signals' morphologies. Currently, electroencephalographic signals are obtained at the Nyquist rate or higher, thus introducing redundancies. Existing compression methods remove these redundancies, thereby achieving compression. We propose an alternative compression scheme based on a sampling theory developed for signals with a finite rate of innovation (FRI) which compresses electroencephalographic signals during acquisition. We model the signals as FRI signals and then sample them at their rate of innovation. The signals are thus effectively represented by a small set of Fourier coefficients corresponding to the signals' rate of innovation. Using the FRI theory, the original signals can be reconstructed from this set of coefficients. Seventy-two hours of electroencephalographic recording were tested, and results based on metrics used in the compression literature and on morphological similarities of electroencephalographic signals are presented. The proposed method achieves results comparable to those of wavelet compression methods, achieving low reconstruction errors while preserving the morphologies of the signals. More importantly, it introduces a new framework to acquire electroencephalographic signals at their rate of innovation, thus entailing a less costly low-rate sampling device that does not waste precious computational resources.

  20. Application of Minimally Invasive Treatment of Locking Compression Plate in Schatzker Ⅰ-Ⅲ Tibial Plateau Fracture

    Directory of Open Access Journals (Sweden)

    Guohui Zhao

    2014-06-01

    Full Text Available Objective: To investigate the clinical effect of minimally invasive treatment with a locking compression plate (LCP) in Schatzker Ⅰ-Ⅲ tibial plateau fractures. Methods: Thirty-eight patients with Schatzker Ⅰ-Ⅲ tibial plateau fractures in our hospital were given minimally invasive LCP treatment, and artificial bone was grafted into the depressed bone. Adverse responses, wound healing time and clinical efficacy were observed. Results: All patients were followed up for 14-20 months, with a mean duration of 16 months. Within 1 week after operation, 1 patient suffered a short-term rejection reaction to the artificial bone, but he healed after corresponding measures were taken. There were no complications such as skin necrosis or exposed plates among the patients. In addition, all fractures healed, with a recovery time of 2.6-4.1 months and a mean duration of 3.4 months. The recovery of knee function was favorable: 20 cases were excellent, 14 were good, and 4 were fair. The excellent-and-good rate was 89.5%. Conclusion: Minimally invasive LCP treatment of Schatzker Ⅰ-Ⅲ tibial plateau fractures can reduce postoperative loss of reduction, with small trauma and stable fixation.

  1. Introducing micrometer-sized artificial objects into live cells: a method for cell-giant unilamellar vesicle electrofusion.

    Directory of Open Access Journals (Sweden)

    Akira C Saito

    Full Text Available Here, we report a method for introducing large objects of up to a micrometer in diameter into cultured mammalian cells by electrofusion of giant unilamellar vesicles (GUVs). We prepared GUVs containing various artificial objects using a water-in-oil (w/o) emulsion centrifugation method. GUVs and dispersed HeLa cells were exposed to an alternating current (AC) field to induce a linear cell-GUV alignment, and then a direct current (DC) pulse was applied to facilitate transient electrofusion. With uniformly sized fluorescent beads as size indexes, we successfully and efficiently introduced beads of 1 µm in diameter into living cells along with a plasmid mammalian expression vector. Our electrofusion did not affect cell viability. After the electrofusion, cells proliferated normally until confluence was reached, and the introduced fluorescent beads were inherited during cell division. Analysis by both confocal microscopy and flow cytometry supported these findings. As an alternative approach, we also introduced a designed nanostructure (DNA origami) into live cells. The results we report here represent a milestone for designing artificial symbiosis of functionally active objects (such as micro-machines) in living cells. Moreover, our technique can be used for drug delivery, tissue engineering, and cell manipulation.

  2. Spatial compression algorithm for the analysis of very large multivariate images

    Science.gov (United States)

    Keenan, Michael R [Albuquerque, NM

    2008-07-15

    A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.
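    The wavelet idea can be illustrated with a toy one-level 2-D Haar transform: transform the image, hard-threshold the small detail coefficients, and invert. This is a minimal sketch under stated assumptions (the patent's actual wavelet choice, thresholding rule and block algorithm are not reproduced here):

    ```python
    import math

    R2 = math.sqrt(2.0)

    def haar_1d(v):
        avg = [(v[i] + v[i + 1]) / R2 for i in range(0, len(v), 2)]
        det = [(v[i] - v[i + 1]) / R2 for i in range(0, len(v), 2)]
        return avg + det

    def ihaar_1d(v):
        h = len(v) // 2
        out = []
        for a, d in zip(v[:h], v[h:]):
            out += [(a + d) / R2, (a - d) / R2]
        return out

    def haar_2d(img):
        rows = [haar_1d(r) for r in img]           # transform rows...
        cols = [haar_1d(list(c)) for c in zip(*rows)]  # ...then columns
        return [list(r) for r in zip(*cols)]

    def ihaar_2d(img):
        cols = [ihaar_1d(list(c)) for c in zip(*img)]  # invert columns...
        rows = [list(r) for r in zip(*cols)]
        return [ihaar_1d(r) for r in rows]             # ...then rows

    image = [[10, 10, 11, 11],
             [10, 10, 11, 11],
             [30, 30, 29, 29],
             [30, 30, 29, 29]]
    coeffs = haar_2d(image)
    # Compression step: zero out small detail coefficients.
    kept = [[c if abs(c) > 1.0 else 0.0 for c in row] for row in coeffs]
    restored = ihaar_2d(kept)
    ```

    For this blocky test image all detail coefficients are already zero, so the thresholded reconstruction is exact; on real multivariate images the retained significant coefficients form the reduced data matrix the analysis then operates on.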

  3. Analysis Resilient Algorithm on Artificial Neural Network Backpropagation

    Science.gov (United States)

    Saputra, Widodo; Tulus; Zarlis, Muhammad; Widia Sembiring, Rahmat; Hartama, Dedy

    2017-12-01

    Prediction is required by decision makers to anticipate future planning. Artificial Neural Network (ANN) backpropagation is one such method; however, it still has a weakness, namely long training time. This is a reason to improve the method so as to accelerate training. One improved variant of ANN backpropagation is the resilient method, which changes the network's weights and biases through a direct adaptation process based on local gradient information from every learning iteration. Predictions on Istanbul Stock Exchange training data become better: the Mean Square Error (MSE) value gets smaller and accuracy increases.
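    The resilient (Rprop-style) update rule can be sketched on a toy one-parameter problem instead of a full backpropagation network; the hyperparameters below are the common defaults from the Rprop literature, assumed rather than taken from this paper:

    ```python
    def rprop_minimize(grad, w0, steps=60, eta_plus=1.2, eta_minus=0.5,
                       d0=0.1, d_max=50.0, d_min=1e-6):
        """Sign-based step-size adaptation: the step grows while the gradient
        keeps its sign and shrinks when the sign flips (an overshoot)."""
        w, d, prev_g = w0, d0, 0.0
        for _ in range(steps):
            g = grad(w)
            if g * prev_g > 0:        # same sign: accelerate
                d = min(d * eta_plus, d_max)
            elif g * prev_g < 0:      # sign flip: overshot, slow down
                d = max(d * eta_minus, d_min)
            if g > 0:                 # move against the gradient sign only;
                w -= d                # the gradient magnitude is ignored
            elif g < 0:
                w += d
            prev_g = g
        return w

    # Minimize (w - 3)^2; its gradient is 2(w - 3).
    w_star = rprop_minimize(lambda w: 2 * (w - 3), w0=-5.0)
    ```

    Using only the gradient's sign makes the step size independent of the gradient magnitude, which is what speeds up training compared with plain backpropagation.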

  4. An example of the use of the DELPHI method: future prospects of artificial heart techniques in France

    International Nuclear Information System (INIS)

    Derian, Jean-Claude; Morize, Francoise; Vernejoul, Pierre de; Vial, Renee

    1971-01-01

    The artificial heart is still only a research project surrounded by numerous uncertainties, which make it very difficult to estimate, at present, the possibilities for future development of this technique in France. A systematic analysis of the hazards which characterize this project has been undertaken in this report: bounding these uncertainties has required taking into account the opinions of specialists concerned with this type of research and its outcome. We achieved this by adapting an investigation technique still unusual in France, the DELPHI method. This adaptation allowed the confrontation and statistical aggregation of the opinions given by a body of a hundred experts, consulted through a program of sequential interrogations which examined, in particular, the probable date of the research outcome, the clinical cases which require the use of an artificial heart, and the probable future needs. After taking into account the economic constraints, we deduce from these results the probable amount of plutonium-238 needed, under the hypothesis that an isotopic generator would be retained for the energy supply of the artificial heart [fr

  5. Effects of flashlight guidance on chest compression performance in cardiopulmonary resuscitation in a noisy environment.

    Science.gov (United States)

    You, Je Sung; Chung, Sung Phil; Chang, Chul Ho; Park, Incheol; Lee, Hye Sun; Kim, SeungHo; Lee, Hahn Shick

    2013-08-01

    In real cardiopulmonary resuscitation (CPR), noise can arise from instructional voices and environmental sounds in places such as a battlefield and industrial and high-traffic areas. A feedback device using a flashing light was designed to overcome noise-induced stimulus saturation during CPR. This study was conducted to determine whether 'flashlight' guidance influences CPR performance in a simulated noisy setting. We recruited 30 senior medical students with no previous experience of using flashlight-guided CPR to participate in this prospective, simulation-based, crossover study. The experiment was conducted in a simulated noisy situation using a cardiac arrest model without ventilation. Noise such as patrol car and fire engine sirens was artificially generated. The flashlight guidance device emitted light pulses at the rate of 100 flashes/min. Participants also received instructions to achieve the desired rate of 100 compressions/min. CPR performances were recorded with a Resusci Anne mannequin with a computer skill-reporting system. There were significant differences between the control and flashlight groups in mean compression rate (MCR), MCR/min and visual analogue scale. However, there were no significant differences in correct compression depth, mean compression depth, correct hand position, and correctly released compression. The flashlight group constantly maintained the pace at the desired 100 compressions/min. Furthermore, the flashlight group had a tendency to keep the MCR constant, whereas the control group had a tendency to decrease it after 60 s. Flashlight-guided CPR is particularly advantageous for maintaining a desired MCR during hands-only CPR in noisy environments, where metronome pacing might not be clearly heard.

  6. Compressed Air Production Using Vehicle Suspension

    Directory of Open Access Journals (Sweden)

    Ninad Arun Malpure

    2015-08-01

    Full Text Available Abstract Generally, compressed air is produced using different types of air compressors, which consume a lot of electric energy and are noisy. In this paper an innovative idea is put forth for the production of compressed air using the movement of a vehicle's suspension, which is normally wasted. The conversion of the suspension's force energy into compressed air is carried out by a mechanism consisting of the vehicle suspension system, a hydraulic cylinder, a non-return valve, an air compressor and an air receiver. We collect air in the cylinder and store this energy in the tank simply by driving the vehicle. This method is non-conventional, as no fuel input is required, and is minimally polluting.

  7. Packet Header Compression for the Internet of Things

    Directory of Open Access Journals (Sweden)

    Pekka KOSKELA

    2016-01-01

    Full Text Available Due to the extensive growth of the Internet of Things (IoT), the number of wireless devices connected to the Internet is forecast to grow to 26 billion installed units in 2020. This will challenge both the energy efficiency of wireless battery-powered devices and the bandwidth of wireless networks. One solution to both challenges could be to utilize packet header compression. This paper reviews different packet compression methods, and especially packet header compression methods, and studies the performance of Robust Header Compression (ROHC) in low-speed radio networks such as XBee, and in high-speed radio networks such as LTE and WLAN. In all networks, the compression and decompression processing causes extra delay and power consumption, but in low-speed networks energy can still be saved due to the shorter transmission time.
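    The gain from header compression comes from the fact that most header fields of a flow are static or change predictably between packets. A minimal sketch of that idea (not the actual ROHC profiles, which use context IDs, W-LSB encoding and CRC protection; the field names below are illustrative):

    ```python
    # Toy delta-style header compression: send the full header once per flow,
    # then transmit only the fields that changed since the previous packet.

    def compress(context, header):
        """Return the delta against the flow context and update the context."""
        delta = {k: v for k, v in header.items() if context.get(k) != v}
        context.update(header)
        return delta

    def decompress(context, delta):
        """Rebuild the full header from the receiver-side context."""
        context.update(delta)
        return dict(context)

    tx_ctx = {}
    headers = [
        {"src": "10.0.0.1", "dst": "10.0.0.2", "proto": 17, "seq": 1},
        {"src": "10.0.0.1", "dst": "10.0.0.2", "proto": 17, "seq": 2},
    ]
    sent = [compress(tx_ctx, h) for h in headers]
    # The first packet carries the full header; the second only the changed field.
    ```

    Both sides must keep their contexts synchronized, which is why the real protocol adds checksums and feedback modes for lossy links.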

  8. Superconductivity under uniaxial compression in β-(BDA-TTP) salts

    International Nuclear Information System (INIS)

    Suzuki, T.; Onari, S.; Ito, H.; Tanaka, Y.

    2009-01-01

    In order to clarify the superconducting mechanism of the organic superconductor β-(BDA-TTP) salts, we study superconductivity under uniaxial compression with a non-dimerized two-band Hubbard model. We have calculated the uniaxial compression dependence of Tc by solving the Eliashberg equation using the fluctuation exchange (FLEX) approximation. The transfer integrals under uniaxial compression were estimated by the extended Hückel method. We have found that the non-monotonic behavior of Tc observed experimentally under uniaxial compression can be understood by taking spin frustration and spin fluctuation into account.

  9. Superconductivity under uniaxial compression in β-(BDA-TTP) salts

    Science.gov (United States)

    Suzuki, T.; Onari, S.; Ito, H.; Tanaka, Y.

    2009-10-01

    In order to clarify the superconducting mechanism of the organic superconductor β-(BDA-TTP) salts, we study superconductivity under uniaxial compression with a non-dimerized two-band Hubbard model. We have calculated the uniaxial compression dependence of Tc by solving the Eliashberg equation using the fluctuation exchange (FLEX) approximation. The transfer integrals under uniaxial compression were estimated by the extended Hückel method. We have found that the non-monotonic behavior of Tc observed experimentally under uniaxial compression can be understood by taking spin frustration and spin fluctuation into account.

  10. Superconductivity under uniaxial compression in beta-(BDA-TTP) salts

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, T., E-mail: suzuki@rover.nuap.nagoya-u.ac.j [Department of Applied Physics and JST, TRIP, Nagoya University, Chikusa, Nagoya 464-8603 (Japan); Onari, S.; Ito, H.; Tanaka, Y. [Department of Applied Physics and JST, TRIP, Nagoya University, Chikusa, Nagoya 464-8603 (Japan)

    2009-10-15

    In order to clarify the superconducting mechanism of the organic superconductor beta-(BDA-TTP) salts, we study superconductivity under uniaxial compression with a non-dimerized two-band Hubbard model. We have calculated the uniaxial compression dependence of Tc by solving the Eliashberg equation using the fluctuation exchange (FLEX) approximation. The transfer integrals under uniaxial compression were estimated by the extended Hückel method. We have found that the non-monotonic behavior of Tc observed experimentally under uniaxial compression can be understood by taking spin frustration and spin fluctuation into account.

  11. The influence of kind of coating additive on the compressive strength of RCA-based concrete prepared by triple-mixing method

    Science.gov (United States)

    Urban, K.; Sicakova, A.

    2017-10-01

    The paper deals with the use of alternative powder additives (fly ash and a fine fraction of recycled concrete) to improve recycled concrete aggregate directly in the concrete mixing process. A specific mixing process (the triple-mixing method) is applied, as it is favourable for this goal. Results of compressive strength after 2 and 28 days of hardening are given. Generally, using powder additives to coat the coarse recycled concrete aggregate in the first stage of triple mixing resulted in a decrease in compressive strength compared with cement. There is no substantial difference between samples based on recycled concrete aggregate and those based on natural aggregate as long as cement is used for coating. When using either fly ash or recycled concrete powder, the kind of aggregate causes more significant differences in compressive strength, with the values for samples based on recycled concrete aggregate being worse.

  12. Excavation and drying of compressed peat; Tiivistetyn turpeen nosto ja kuivaus

    Energy Technology Data Exchange (ETDEWEB)

    Erkkilae, A.; Frilander, P.; Hillebrand, K.; Nurmi, H.

    1996-12-31

    The target of this three-year (1993 - 1995) project was to improve peat production efficiency by developing an energy-economical excavation method for compressed peat, by which it is possible to obtain the best possible degree of compression and load from the DS-production point of view. It is possible to improve the degree of utilization of solar radiation in drying from 30 % to 40 %. The main research areas were drying of the compressed peat and peat compression. The third sub-task, for 1995, was demonstration of the main parts of the method in laboratory scale. Experimental compressed peat (Compeat) drying models were made for Carex peat H7, Carex peat H5 and Carex-Sphagnum peat H7. In the best circumstances, Compeat dried without turning in a 34 % shorter time than a milled layer made of the same peat turned twice, the initial moisture content being 4 kg H2O/kg DS. In the tests carried out in 1995 with Carex peat, compression had no corresponding effect on intensifying the drying of the peat. Compression of Carex-Sphagnum peat H7 increased the drying speed by about 10 % compared with the drying time of an uncompressed milled layer. In the sprinkling tests, about 30-50 % of the sprinkled water was absorbed into the compressed peat layer, while about 70 % of the rain is absorbed into the corresponding uncompressed milled layer. Use of vibration decreased the energy consumption of the steel-surfaced nozzles by about 20 % at maximum, but the effect depends on the rotation speed of the macerator and the vibration power. In the new Compeat method (a production method for compressed peat) developed in the research, the peat is loosened from the field surface by milling a 3-5 cm thick layer of peat of moisture content 75-80 %

  13. Free-beam soliton self-compression in air

    Science.gov (United States)

    Voronin, A. A.; Mitrofanov, A. V.; Sidorov-Biryukov, D. A.; Fedotov, A. B.; Pugžlys, A.; Panchenko, V. Ya; Shumakova, V.; Ališauskas, S.; Baltuška, A.; Zheltikov, A. M.

    2018-02-01

    We identify a physical scenario whereby soliton transients generated in freely propagating laser beams within the regions of anomalous dispersion in air can be compressed as a part of their free-beam spatiotemporal evolution to yield few-cycle mid- and long-wavelength-infrared field waveforms, whose peak power is substantially higher than the peak power of the input pulses. We show that this free-beam soliton self-compression scenario does not require ionization or laser-induced filamentation, enabling high-throughput self-compression of mid- and long-wavelength-infrared laser pulses within a broad range of peak powers from tens of gigawatts up to the terawatt level. We also demonstrate that this method of pulse compression can be extended to long-range propagation, providing self-compression of high-peak-power laser pulses in atmospheric air within propagation ranges as long as hundreds of meters, suggesting new ways towards longer-range standoff detection and remote sensing.

  14. Compression ratio of municipal solid waste simulation using artificial neural network and adaptive neurofuzzy system

    Directory of Open Access Journals (Sweden)

    Maryam Mokhtari

    2014-07-01

    Full Text Available The compression ratio of Municipal Solid Waste (MSW) is an essential parameter for the evaluation of waste settlement. Since it is relatively time-consuming to determine the compression ratio from oedometer tests, and there are difficulties associated with working on waste materials, it is useful to develop models based on waste physical properties. Therefore, the present research attempts to develop suitable prediction models using ANFIS and ANN. The compression ratio was modeled as a function of the physical properties of the waste, including dry unit weight, water content, and biodegradable organic content. A reliable experimental database of oedometer tests, taken from the literature, was employed to train and test the ANN and ANFIS models. The performance of the developed models was investigated according to different statistical criteria recommended by researchers (i.e. correlation coefficient, root mean squared error, and mean absolute error). The final models demonstrated correlation coefficients higher than 90% and low error values, so they are capable of acceptable prediction of the municipal solid waste compression ratio. Furthermore, the values of the performance measures obtained for the ANN and ANFIS models indicate that the ANFIS model performs better than the ANN model.
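    The three statistical criteria named above are straightforward to compute; a small stdlib-only sketch (the measured and predicted compression-ratio values below are invented for illustration):

    ```python
    import math

    def correlation(y_true, y_pred):
        """Pearson correlation coefficient between measured and predicted values."""
        n = len(y_true)
        mt, mp = sum(y_true) / n, sum(y_pred) / n
        cov = sum((a - mt) * (b - mp) for a, b in zip(y_true, y_pred))
        st = math.sqrt(sum((a - mt) ** 2 for a in y_true))
        sp = math.sqrt(sum((b - mp) ** 2 for b in y_pred))
        return cov / (st * sp)

    def rmse(y_true, y_pred):
        """Root mean squared error."""
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true))

    def mae(y_true, y_pred):
        """Mean absolute error."""
        return sum(abs(a - b) for a, b in zip(y_true, y_pred)) / len(y_true)

    # Hypothetical measured vs. model-predicted compression ratios
    measured  = [0.18, 0.22, 0.25, 0.30, 0.35]
    predicted = [0.17, 0.23, 0.24, 0.31, 0.36]
    ```

    A model meeting the criteria reported in the paper would score a correlation above 0.9 together with small RMSE and MAE on the test set.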

  15. Expansion and compression shock wave calculation in pipes with the C.V.M. numerical method

    International Nuclear Information System (INIS)

    Raymond, P.; Caumette, P.; Le Coq, G.; Libmann, M.

    1983-03-01

    The Control Variables Method (C.V.M.) for fluid transient computations has been used to compute expansion and compression shock wave propagation. In this paper, analytical solutions for shock wave and rarefaction wave propagation are first detailed. Then, after a brief description of the C.V.M. technique and its stability and monotonicity properties, we present results for the standard shock tube problem and the reflection of a shock wave; finally, a comparison between experimental results obtained on the ELF facility and the calculations is given

  16. Automated information-analytical system for thunderstorm monitoring and early warning alarms using modern physical sensors and information technologies with elements of artificial intelligence

    Science.gov (United States)

    Boldyreff, Anton S.; Bespalov, Dmitry A.; Adzhiev, Anatoly Kh.

    2017-05-01

    Methods of artificial intelligence are a good solution for forecasting weather phenomena, since they allow a large amount of diverse data to be processed. Recirculation neural networks are implemented in the paper for a system of thunderstorm event prediction. Large amounts of experimental data from lightning sensor and electric field mill networks were received and analyzed, and the average recognition accuracy for the sensor signals was calculated. It is shown that recirculation neural networks are a promising solution for forecasting thunderstorms and related weather phenomena: they recognize elements of the sensor signals with high efficiency, and they allow images to be compressed and their characteristic features to be highlighted for subsequent recognition.

  17. Combined Sparsifying Transforms for Compressive Image Fusion

    Directory of Open Access Journals (Sweden)

    ZHAO, L.

    2013-11-01

    Full Text Available In this paper, we present a new compressive image fusion method based on combined sparsifying transforms. First, the framework of compressive image fusion is introduced briefly. Then, combined sparsifying transforms are presented to enhance the sparsity of images. Finally, a reconstruction algorithm based on the nonlinear conjugate gradient is presented to obtain the fused image. The simulations demonstrate that, by using combined sparsifying transforms, better results can be achieved in terms of both subjective visual effect and objective evaluation indexes than by using only a single sparsifying transform for compressive image fusion.

  18. JPEG2000 vs. full frame wavelet packet compression for smart card medical records.

    Science.gov (United States)

    Leehan, Joaquín Azpirox; Lerallut, Jean-Francois

    2006-01-01

    This paper describes a comparison among different compression methods to be used in the context of electronic health records in the newer version of "smart cards". The JPEG2000 standard is compared to a full-frame wavelet packet compression method at high (33:1 and 50:1) compression rates. Results show that the full-frame method outperforms the JPEG2K standard qualitatively and quantitatively.

  19. Linearly and nonlinearly optimized weighted essentially non-oscillatory methods for compressible turbulence

    Science.gov (United States)

    Taylor, Ellen Meredith

    Weighted essentially non-oscillatory (WENO) methods have been developed to simultaneously provide robust shock-capturing in compressible fluid flow and avoid excessive damping of fine-scale flow features such as turbulence. This is accomplished by constructing multiple candidate numerical stencils that adaptively combine so as to provide high order of accuracy and high bandwidth-resolving efficiency in continuous flow regions while averting instability-provoking interpolation across discontinuities. Under certain conditions in compressible turbulence, however, numerical dissipation remains unacceptably high even after optimization of the linear optimal stencil combination that dominates in smooth regions. The remaining nonlinear error arises from two primary sources: (i) the smoothness measurement that governs the application of adaptation away from the optimal stencil and (ii) the numerical properties of individual candidate stencils that govern numerical accuracy when adaptation engages. In this work, both of these sources are investigated, and corrective modifications to the WENO methodology are proposed and evaluated. Excessive nonlinear error due to the first source is alleviated through two separately considered procedures appended to the standard smoothness measurement technique that are designated the "relative smoothness limiter" and the "relative total variation limiter." In theory, appropriate values of their associated parameters should be insensitive to flow configuration, thereby sidestepping the prospect of costly parameter tuning; and this expectation of broad effectiveness is assessed in direct numerical simulations (DNS) of one-dimensional inviscid test problems, three-dimensional compressible isotropic turbulence of varying Reynolds and turbulent Mach numbers, and shock/isotropic-turbulence interaction (SITI). In the process, tools for efficiently comparing WENO adaptation behavior in smooth versus shock-containing regions are developed. The
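    For reference, the classic fifth-order WENO-JS reconstruction that such modifications build on can be sketched for a scalar stencil as follows; the smoothness indicators b0..b2 are the quantities that the proposed limiters act on (a minimal sketch, not the author's optimized variant):

    ```python
    def weno5(vm2, vm1, v0, vp1, vp2, eps=1e-6):
        """Fifth-order WENO-JS reconstruction of the value at the i+1/2 interface
        from the five cell averages v[i-2..i+2]."""
        # Three third-order candidate stencils
        p0 = (2 * vm2 - 7 * vm1 + 11 * v0) / 6.0
        p1 = (-vm1 + 5 * v0 + 2 * vp1) / 6.0
        p2 = (2 * v0 + 5 * vp1 - vp2) / 6.0
        # Jiang-Shu smoothness indicators: large where the stencil crosses a discontinuity
        b0 = 13 / 12 * (vm2 - 2 * vm1 + v0) ** 2 + 0.25 * (vm2 - 4 * vm1 + 3 * v0) ** 2
        b1 = 13 / 12 * (vm1 - 2 * v0 + vp1) ** 2 + 0.25 * (vm1 - vp1) ** 2
        b2 = 13 / 12 * (v0 - 2 * vp1 + vp2) ** 2 + 0.25 * (3 * v0 - 4 * vp1 + vp2) ** 2
        # Nonlinear weights built from the linear optimal weights (1/10, 6/10, 3/10)
        a0 = 0.1 / (eps + b0) ** 2
        a1 = 0.6 / (eps + b1) ** 2
        a2 = 0.3 / (eps + b2) ** 2
        s = a0 + a1 + a2
        return (a0 * p0 + a1 * p1 + a2 * p2) / s
    ```

    In smooth regions all three indicators are comparable, so the weights revert to the linear optimal combination; near a shock the weight of any stencil that crosses the discontinuity collapses, which is the adaptation mechanism the dissertation analyzes.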

  20. Low-Complexity Spatial-Temporal Filtering Method via Compressive Sensing for Interference Mitigation in a GNSS Receiver

    Directory of Open Access Journals (Sweden)

    Chung-Liang Chang

    2014-01-01

    Full Text Available A compressive sensing based array processing method is proposed to lower the complexity and computation load of the array system and to maintain robust anti-jam performance in a global navigation satellite system (GNSS) receiver. First, the spatial and temporal compression matrices are multiplied with the array signal, which results in a small-size array system. Second, a two-dimensional (2D) minimum variance distortionless response (MVDR) beamformer is employed in the proposed system to mitigate narrowband and wideband interference simultaneously. An iterative process is performed to find the optimal spatial and temporal gain vectors by the MVDR approach, which enhances the steering gain in the direction of arrival (DOA) of interest, while a null is placed at the DOA of the interference. Finally, a simulated navigation signal is generated offline by a graphical user interface tool and employed in the proposed algorithm. The theoretical analysis results of the proposed algorithm are verified against the simulated results.
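    The distortionless constraint at the heart of MVDR, w = R^-1 a / (a^H R^-1 a), giving unit gain toward the DOA of interest, can be illustrated for a two-element array with plain complex arithmetic. This is a toy sketch with made-up covariance values, not the paper's 2D spatial-temporal implementation:

    ```python
    def mvdr_weights_2el(R, a):
        """MVDR weights w = R^-1 a / (a^H R^-1 a) for a 2-element array.
        R: 2x2 Hermitian covariance (nested lists), a: length-2 steering vector."""
        # Invert the 2x2 covariance directly
        det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
        Rinv = [[R[1][1] / det, -R[0][1] / det],
                [-R[1][0] / det, R[0][0] / det]]
        Ra = [Rinv[0][0] * a[0] + Rinv[0][1] * a[1],
              Rinv[1][0] * a[0] + Rinv[1][1] * a[1]]
        denom = a[0].conjugate() * Ra[0] + a[1].conjugate() * Ra[1]
        return [Ra[0] / denom, Ra[1] / denom]

    # Hypothetical Hermitian covariance with correlated interference,
    # and a steering vector toward the look direction
    R = [[2.0 + 0j, 0.5j], [-0.5j, 2.0 + 0j]]
    a = [1.0 + 0j, -1j]
    w = mvdr_weights_2el(R, a)
    ```

    Because R is Hermitian, a^H R^-1 a is real, so the beamformer response w^H a in the look direction is exactly 1 while power from other directions is minimized.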

  1. Dual compression is not an uncommon type of iliac vein compression syndrome.

    Science.gov (United States)

    Shi, Wan-Yin; Gu, Jian-Ping; Liu, Chang-Jian; Lou, Wen-Sheng; He, Xu

    2017-09-01

    Typical iliac vein compression syndrome (IVCS) is characterized by compression of the left common iliac vein (LCIV) by the overlying right common iliac artery (RCIA). We describe an underestimated type of IVCS with dual compression by the right and left common iliac arteries (LCIA) simultaneously. Thirty-one patients with IVCS were retrospectively included. All patients received trans-catheter venography and computed tomography (CT) examinations for diagnosing and evaluating IVCS. Late venography and reconstructed CT were used for evaluating the anatomical relationship among the LCIV, RCIA and LCIA. Imaging manifestations as well as demographic data were collected and evaluated by two experienced radiologists. Sole and dual compression were found in 32.3% (n = 10) and 67.7% (n = 21) of the 31 patients, respectively. No statistical differences existed between them in terms of age, gender, LCIV diameter at the maximum compression point, pressure gradient across the stenosis, or percentage of compression level. On CT and venography, sole compression commonly presented as a longitudinal compression at the orifice of the LCIV, while dual compression usually presented as one of two types: one with a lengthy stenosis along the upper side of the LCIV, the other with a longitudinal compression near the orifice of the external iliac vein. The presence of dual compression appeared significantly correlated with a tortuous LCIA (p = 0.006). The left common iliac vein can thus present with dual compression. This type of compression has typical manifestations on late venography and CT.

  2. Estimates of post-acceleration longitudinal bunch compression

    International Nuclear Information System (INIS)

    Judd, D.L.

    1977-01-01

    A simple analytic method is developed, based on physical approximations, for treating transient implosive longitudinal compression of bunches of heavy ions in an accelerator system for ignition of inertial-confinement fusion pellet targets. Parametric dependences of attainable compressions and of beam path lengths and times during compression are indicated for ramped pulsed-gap lines, rf systems in storage and accumulator rings, and composite systems, including sections of free drift. It appears that for high-confidence pellets in a plant producing 1000 MW of electric power the needed pulse lengths cannot be obtained with rings alone unless an unreasonably large number of them are used, independent of choice of rf harmonic number. In contrast, pulsed-gap lines alone can meet this need. The effects of an initial inward compressive drift and of longitudinal emittance are included

  3. Real-time lossless compression of depth streams

    KAUST Repository

    Schneider, Jens

    2017-08-17

    Various examples are provided for lossless compression of data streams. In one example, a Z-lossless (ZLS) compression method includes generating compacted depth information by condensing information of a depth image and a compressed binary representation of the depth image using histogram compaction and decorrelating the compacted depth information to produce bitplane slicing of residuals by spatial prediction. In another example, an apparatus includes imaging circuitry that can capture one or more depth images and processing circuitry that can generate compacted depth information by condensing information of a captured depth image and a compressed binary representation of the captured depth image using histogram compaction; decorrelate the compacted depth information to produce bitplane slicing of residuals by spatial prediction; and generate an output stream based upon the bitplane slicing.
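    The histogram-compaction step can be pictured as remapping the sparse set of depth codes that actually occur in an image onto a dense range of small integers, losslessly, via a lookup table. This is a simplified illustration of the idea only, not the patented ZLS pipeline:

    ```python
    # Toy histogram compaction: depth images typically use only a sparse subset
    # of the 16-bit code range, so the occurring values can be remapped to a
    # dense index range that needs fewer bits per pixel.

    def compact(depths):
        values = sorted(set(depths))                # histogram support
        index = {v: i for i, v in enumerate(values)}
        return [index[d] for d in depths], values   # compacted image + lookup table

    def expand(compacted, values):
        return [values[i] for i in compacted]

    depth_row = [1200, 1200, 1204, 4095, 1204, 1200]
    compacted, table = compact(depth_row)
    # Three distinct depths collapse to indices 0..2, so the row fits in 2 bits
    # per pixel plus the table, and expand() restores it exactly.
    ```

    The decorrelation and bitplane-slicing stages described in the abstract then operate on these small indices rather than on raw 16-bit depths.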

  4. Real-time lossless compression of depth streams

    KAUST Repository

    Schneider, Jens

    2017-01-01

    Various examples are provided for lossless compression of data streams. In one example, a Z-lossless (ZLS) compression method includes generating compacted depth information by condensing information of a depth image and a compressed binary representation of the depth image using histogram compaction and decorrelating the compacted depth information to produce bitplane slicing of residuals by spatial prediction. In another example, an apparatus includes imaging circuitry that can capture one or more depth images and processing circuitry that can generate compacted depth information by condensing information of a captured depth image and a compressed binary representation of the captured depth image using histogram compaction; decorrelate the compacted depth information to produce bitplane slicing of residuals by spatial prediction; and generate an output stream based upon the bitplane slicing.

  5. Artificial neural network and classical least-squares methods for neurotransmitter mixture analysis.

    Science.gov (United States)

    Schulze, H G; Greek, L S; Gorzalka, B B; Bree, A V; Blades, M W; Turner, R F

    1995-02-01

    Identification of individual components in biological mixtures can be a difficult problem regardless of the analytical method employed. In this work, Raman spectroscopy was chosen as a prototype analytical method due to its inherent versatility and applicability to aqueous media, making it useful for the study of biological samples. Artificial neural networks (ANNs) and the classical least-squares (CLS) method were used to identify and quantify the Raman spectra of the small-molecule neurotransmitters and mixtures of such molecules. The transfer functions used by a network, as well as the architecture of a network, played an important role in the ability of the network to identify the Raman spectra of individual neurotransmitters and the Raman spectra of neurotransmitter mixtures. Specifically, networks using sigmoid and hyperbolic tangent transfer functions generalized better from the mixtures in the training data set to those in the testing data sets than networks using sine functions. Networks with connections that permit the local processing of inputs generally performed better than other networks on all the testing data sets, and better than the CLS method of curve fitting on novel spectra of some neurotransmitters. The CLS method was found to perform well on noisy, shifted, and difference spectra.
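    Classical least squares fits a mixture spectrum as a linear combination of known component spectra by solving the normal equations; for two components this reduces to a 2x2 solve. A sketch of that computation (the reference "spectra" below are invented for illustration, not real Raman data):

    ```python
    def cls_two_component(s1, s2, mixture):
        """Solve min ||c1*s1 + c2*s2 - mixture||^2 via the 2x2 normal equations."""
        a11 = sum(x * x for x in s1)
        a12 = sum(x * y for x, y in zip(s1, s2))
        a22 = sum(y * y for y in s2)
        b1 = sum(x * m for x, m in zip(s1, mixture))
        b2 = sum(y * m for y, m in zip(s2, mixture))
        det = a11 * a22 - a12 * a12
        # Cramer's rule for the two concentration coefficients
        return (a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det

    # Invented reference spectra of two components and a noiseless 30/70 mixture
    s1 = [0.0, 1.0, 4.0, 1.0, 0.0, 0.0]
    s2 = [0.0, 0.0, 1.0, 3.0, 1.0, 0.0]
    mix = [0.3 * a + 0.7 * b for a, b in zip(s1, s2)]
    c1, c2 = cls_two_component(s1, s2, mix)
    ```

    On a noiseless synthetic mixture the coefficients are recovered exactly; the abstract's point is that CLS degrades more gracefully than some ANNs on noisy or shifted spectra, while local-processing ANNs generalize better to novel spectra.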

  6. Measurement of compressed breast thickness by optical stereoscopic photogrammetry.

    Science.gov (United States)

    Tyson, Albert H; Mawdsley, Gordon E; Yaffe, Martin J

    2009-02-01

    The determination of volumetric breast density (VBD) from mammograms requires accurate knowledge of the thickness of the compressed breast. In attempting to accurately determine VBD from images obtained on conventional mammography systems, the authors found that the thickness reported by a number of mammography systems in the field varied by as much as 15 mm when compressing the same breast or phantom. In order to evaluate the behavior of mammographic compression systems and to be able to predict the thickness at different locations in the breast on patients, they have developed a method for measuring the local thickness of the breast at all points of contact with the compression paddle using optical stereoscopic photogrammetry. On both flat (solid) and compressible phantoms, the measurements were accurate to better than 1 mm with a precision of 0.2 mm. In a pilot study, this method was used to measure thickness on 108 volunteers who were undergoing mammography examination. This measurement tool will allow us to characterize paddle surface deformations, deflections and calibration offsets for mammographic units.

  7. [A method of recognizing biology surface spectrum using cascade-connection artificial neural nets].

    Science.gov (United States)

    Shi, Wei-Jie; Yao, Yong; Zhang, Tie-Qiang; Meng, Xian-Jiang

    2008-05-01

    A method of recognizing the visible spectrum of micro-areas on a biological surface with cascade-connection artificial neural nets is presented in the present paper. The visible spectra of spots on apple pericarp, ranging from 500 to 730 nm, were obtained with a fiber-probe spectrometer, and a new spectrum recognition system consisting of three-level cascade-connection neural nets was set up. The experiments show that the spectra of rotten, scarred and bruised spots on an apple's pericarp can be recognized by the spectrum recognition system, and the recognition accuracy is higher than 85% even at a noise level of 15%. The new recognition system overcomes the disadvantages of poor accuracy and poor noise immunity of the traditional system based on single-cascade neural nets. Finally, a new method of expressing the recognition results was proposed. The method is based on the concept of degree of membership in fuzzy mathematics, through which the recognition results can be expressed exactly and objectively.

  8. Large Eddy Simulation for Compressible Flows

    CERN Document Server

    Garnier, E; Sagaut, P

    2009-01-01

    Large Eddy Simulation (LES) of compressible flows is still a widely unexplored area of research. The authors, whose books are considered the most relevant monographs in this field, provide the reader with a comprehensive state-of-the-art presentation of the available LES theory and application. This book is a sequel to "Large Eddy Simulation for Incompressible Flows", as most of the research on LES for compressible flows is based on variable density extensions of models, methods and paradigms that were developed within the incompressible flow framework. The book addresses both the fundamentals and the practical industrial applications of LES in order to point out gaps in the theoretical framework as well as to bridge the gap between LES research and the growing need to use it in engineering modeling. After introducing the fundamentals on compressible turbulence and the LES governing equations, the mathematical framework for the filtering paradigm of LES for compressible flow equations is established. Instead ...

  9. SIMULATION OF CONSOLIDATION OF A TWO-PHASE SOIL LAYER UNDER COMPRESSION LOADING

    Directory of Open Access Journals (Sweden)

    G. E. Agakhanov

    2016-01-01

    Full Text Available Aim. The article is devoted to solving the problem of the consolidation of a two-phase soil layer under a uniformly distributed compression load. Methods. Using a computational model of a continuous isotropic body with linear hereditary creep, under the assumptions of time-invariant material properties and a constant Poisson's ratio, and also taking into account the different resilience of the soil skeleton under loading and unloading, the solution of the problem of consolidation of a two-phase soil layer under a uniformly distributed compression load is obtained. Special cases of the stress-strain state are considered. Results. The analysis of the obtained solution shows that, if the Poisson's ratio of the medium is constant in time, creep does not influence the stresses and only affects the deformations or displacements (settlement), which corresponds to previously established results. With a constant Poisson's ratio, the stress-strain state of the medium can also be determined by the method of elastic analogy, by solving the corresponding instantaneous-elastic problem. The equation for pore pressure is solved by the Fourier method. Based on the obtained analytical solution, a flowchart and a program were written in the Matlab package using its built-in programming language. Conclusion. For two variants of drainage conditions, the pore pressure function, the lateral thrust function and the degree of consolidation of the layer were calculated with and without creep, and their distribution surfaces and change curves were constructed.
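    Solving the pore-pressure diffusion equation by the Fourier method leads, in the classical one-dimensional Terzaghi case without creep, to the familiar series for the average degree of consolidation U(Tv). A sketch under that simplifying assumption (the standard textbook solution, not the paper's hereditary-creep result):

    ```python
    import math

    def degree_of_consolidation(Tv, terms=50):
        """Terzaghi 1D consolidation: U(Tv) = 1 - sum(2/M^2 * exp(-M^2 Tv)),
        with M = (2m+1)*pi/2 and Tv the dimensionless time factor."""
        s = 0.0
        for m in range(terms):
            M = (2 * m + 1) * math.pi / 2.0
            s += (2.0 / M**2) * math.exp(-M * M * Tv)
        return 1.0 - s
    ```

    The series converges very fast for moderate Tv (a single term suffices for Tv near 1), while at Tv = 0 many terms are needed because the partial sums approach 1 only slowly.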

  10. Medical image compression and its application to TDIS-FILE equipment

    International Nuclear Information System (INIS)

    Tsubura, Shin-ichi; Nishihara, Eitaro; Iwai, Shunsuke

    1990-01-01

    In order to compress medical images for filing and communication, we have developed a compression algorithm which compresses images with remarkable quality using a high-pass filtering method. Hardware for this compression algorithm was also developed and applied to TDIS (total digital imaging system)-FILE equipment. In the future, hardware based on this algorithm will be developed for various types of diagnostic equipment and PACS. This technique has the following characteristics: (1) significant reduction of artifacts; (2) acceptable quality for clinical evaluation at 15:1 to 20:1 compression ratio; and (3) high-speed processing and compact hardware. (author)

  11. Dynamic mode decomposition for compressive system identification

    Science.gov (United States)

    Bai, Zhe; Kaiser, Eurika; Proctor, Joshua L.; Kutz, J. Nathan; Brunton, Steven L.

    2017-11-01

    Dynamic mode decomposition has emerged as a leading technique to identify spatiotemporal coherent structures from high-dimensional data. In this work, we integrate and unify two recent innovations that extend DMD to systems with actuation and systems with heavily subsampled measurements. When combined, these methods yield a novel framework for compressive system identification, where it is possible to identify a low-order model from limited input-output data and reconstruct the associated full-state dynamic modes with compressed sensing, providing interpretability of the state of the reduced-order model. When full-state data is available, it is possible to dramatically accelerate downstream computations by first compressing the data. We demonstrate this unified framework on simulated data of fluid flow past a pitching airfoil, investigating the effects of sensor noise, different types of measurements (e.g., point sensors, Gaussian random projections, etc.), compression ratios, and different choices of actuation (e.g., localized, broadband, etc.). This example provides a challenging and realistic test-case for the proposed method, and results indicate that the dominant coherent structures and dynamics are well characterized even with heavily subsampled data.
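    In the simplest full-rank setting, the DMD operator is just the best-fit linear map between two time-shifted snapshot matrices, A = X' X^-1. A toy 2x2 sketch of that idea (practical DMD replaces the inverse with an SVD-based pseudoinverse, and the compressive variant recovers full-state modes via sparse reconstruction):

    ```python
    def dmd_operator_2x2(X1, X2):
        """Best-fit linear map A with X2 = A X1 for a square, invertible
        snapshot matrix X1 (full-rank special case of DMD)."""
        det = X1[0][0] * X1[1][1] - X1[0][1] * X1[1][0]
        X1inv = [[X1[1][1] / det, -X1[0][1] / det],
                 [-X1[1][0] / det, X1[0][0] / det]]
        return [[sum(X2[i][k] * X1inv[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]

    # Snapshots of a known linear system x_{k+1} = A x_k
    A = [[0.9, 0.1], [0.0, 0.8]]
    x0 = [1.0, 2.0]
    x1 = [A[0][0] * x0[0] + A[0][1] * x0[1], A[1][0] * x0[0] + A[1][1] * x0[1]]
    x2 = [A[0][0] * x1[0] + A[0][1] * x1[1], A[1][0] * x1[0] + A[1][1] * x1[1]]
    X1 = [[x0[0], x1[0]], [x0[1], x1[1]]]   # columns are snapshots x0, x1
    X2 = [[x1[0], x2[0]], [x1[1], x2[1]]]   # columns are snapshots x1, x2
    A_dmd = dmd_operator_2x2(X1, X2)
    ```

    Here the data were generated by a linear system, so the operator (and hence its eigenvalues, the DMD spectrum) is recovered exactly; with subsampled or noisy measurements the paper's compressed-sensing machinery takes over.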

  12. Artificial intelligence applications in information and communication technologies

    CERN Document Server

    Bouguila, Nizar

    2015-01-01

    This book presents various recent applications of Artificial Intelligence in Information and Communication Technologies, such as search and optimization methods, machine learning, data representation and ontologies, and multi-agent systems. The main aim of the book is to help Information and Communication Technologies (ICT) practitioners efficiently manage their platforms using AI tools and methods, and to provide them with sufficient Artificial Intelligence background to deal with real-life problems.

  13. Nonthrombotic artificial mass in right ventricle and pulmonary circulation as a sequence of vertebroplasty

    International Nuclear Information System (INIS)

    Monovska, T.; Kirova, G.; Bojinov, D.; Kichukov, K.

    2013-01-01

    Full text: Introduction: Percutaneous vertebroplasty for the treatment of vertebral body fractures is considered a relatively safe therapeutic procedure. Nevertheless, there is a potential risk of spread of emboli of artificial material through the external vertebral venous plexus. What you will learn: This is a 60-year-old patient with a primary diagnosis of multiple myeloma who underwent vertebroplasty for vertebral body fractures. Accompanying symptoms were coughing up blood and pain in the right side of the chest, treated medically as a microembolic form of pulmonary thromboembolism (PTE). Echocardiography on hospitalization showed a formation in the right ventricle. An additional CT study recorded a 'foreign body' - artificial material in the right ventricle and subsegmental branches of the pulmonary arteries - as a complication of the previous vertebroplasty. Paravertebral venous vessels in the thoracic section were noted to be filled with cement. Discussion: Non-thrombotic embolism of artificial material after vertebroplasty can be asymptomatic, or the condition may be associated with life-threatening symptoms - compression of the spinal cord resulting in paraplegia, or emboli in the cerebral vessels, right ventricle, or kidney arteries. The frequency of local leakage of the injected material is relatively high (80-90 %), with leakage to the paravertebral veins in over 24 % of cases and subsequent pulmonary emboli in 4.6 to 6.8 %. The embolization material disseminates along the paravertebral veins, v. azygos, and v. cava inf., ending in the pulmonary circulation. Conclusion: Follow-up of patients after therapeutic vertebroplasty and an integrated diagnostic approach with appropriate imaging methods allow timely diagnosis and treatment of this unusual form of non-thrombotic embolism

  14. Improved Artificial Fish Algorithm for Parameters Optimization of PID Neural Network

    OpenAIRE

    Jing Wang; Yourui Huang

    2013-01-01

    To address problems in the optimization of PID neural network parameters by the traditional BP algorithm, such as the difficulty of determining initial weights and the tendency of training to become trapped in local minima, this paper proposes a new method for parameter optimization of PID neural networks based on an improved artificial fish algorithm. The improved algorithm uses a composite adaptive artificial fish algorithm based on the optimal artificial fish and the nearest artificial fi...
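    The abstract is truncated and does not give the details of the improved algorithm. As background, the hedged sketch below shows only the basic "prey" behaviour of a generic artificial fish swarm optimizer on a toy objective; every name, parameter, and default here is invented for illustration, and a full AFSA would add the "swarm" and "follow" behaviours and an adaptive visual/step schedule:

```python
import numpy as np

def afsa_minimize(f, bounds, n_fish=20, visual=0.5, tries=5, iters=200, seed=0):
    """Minimal artificial-fish-swarm sketch using only the 'prey'
    behaviour: each fish samples points within its visual range and
    moves when it finds a better one."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    fish = rng.uniform(lo, hi, size=(n_fish, len(lo)))
    best = min(fish, key=f).copy()
    for _ in range(iters):
        for i in range(n_fish):
            for _ in range(tries):
                # propose a point inside the fish's visual range
                cand = np.clip(fish[i] + rng.uniform(-visual, visual, len(lo)), lo, hi)
                if f(cand) < f(fish[i]):
                    fish[i] = cand
                    break
        cur = min(fish, key=f)
        if f(cur) < f(best):
            best = cur.copy()
    return best
```

    In the parameter-optimization setting of the paper, `f` would score a candidate weight vector for the PID neural network rather than a toy function.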

  15. Development of a hybrid system of artificial neural networks and ...

    African Journals Online (AJOL)

    Development of a hybrid system of artificial neural networks and artificial bee colony algorithm for prediction and modeling of customer choice in the market. ... attempted to present a new method for the modeling and prediction of customer choice in the market using the combination of artificial intelligence and data mining.

  16. Cloud Optimized Image Format and Compression

    Science.gov (United States)

    Becker, P.; Plesea, L.; Maurer, T.

    2015-04-01

    Cloud-based image storage and processing require a re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assume fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These assumptions no longer hold in cloud-based elastic storage and computation environments. This paper provides details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volume stored and reduces the data transferred, but the reduced data size must be balanced against the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include a simple-to-implement algorithm that enables it to be accessed efficiently using JavaScript. Combining this new cloud-based image storage format and compression will help resolve some of the challenges of big image data on the internet.
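    The abstract describes the trade-off behind controlled lossy compression without specifying the LERC algorithm itself. The sketch below illustrates the general idea only (quantize so the round-trip error stays within a user-specified bound, then entropy-code the integer codes), using zlib as a stand-in for LERC's block-wise bit packing; all names are illustrative and none of this is the actual LERC API:

```python
import zlib
import numpy as np

def compress_max_error(values, max_error):
    """Quantise floats so that the round-trip error is bounded by
    max_error, then entropy-code the int32 codes with zlib."""
    step = 2.0 * max_error                       # quantisation step
    codes = np.round(values / step).astype(np.int32)
    return zlib.compress(codes.tobytes()), step

def decompress(blob, step, dtype=np.int32):
    """Invert compress_max_error: inflate, then rescale the codes."""
    codes = np.frombuffer(zlib.decompress(blob), dtype=dtype)
    return codes.astype(np.float64) * step
```

    Because each value is rounded to the nearest multiple of `step = 2 * max_error`, the reconstruction error never exceeds `max_error`; setting `max_error` below the sensor noise floor gives a controlled lossy result that is visually and analytically lossless.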

  17. Shock compression of synthetic opal

    International Nuclear Information System (INIS)

    Inoue, A; Okuno, M; Okudera, H; Mashimo, T; Omurzak, E; Katayama, S; Koyano, M

    2010-01-01

    Structural changes of synthetic opal under shock-wave compression up to 38.1 GPa have been investigated using SEM, the X-ray diffraction method (XRD), and infrared (IR) and Raman spectroscopies. The results indicate that dehydration and polymerization of surface silanol due to the high shock and residual temperatures are very important factors in the structural evolution of synthetic opal under shock compression. Synthetic opal loses its opalescence at shock pressures of 10.9 and 18.4 GPa. At 18.4 GPa, dehydration and polymerization of surface silanol and transformation of the network structure may occur simultaneously. The 4-membered rings of TO4 tetrahedra in the as-prepared synthetic opal may relax to larger rings, such as 6-membered rings, at high residual temperature. The residual temperature may therefore be significantly high even at 18.4 GPa of shock compression. At 23.9 GPa, the opal sample recovered its opalescence; the origin of this opalescence may be a layer structure formed by shock compression. Finally, the sample fuses at the very high residual temperature reached at 38.1 GPa, and its structure approaches that of fused SiO2 glass. However, internal silanol groups still remain even at 38.1 GPa.

  18. Shock compression of synthetic opal

    Science.gov (United States)

    Inoue, A.; Okuno, M.; Okudera, H.; Mashimo, T.; Omurzak, E.; Katayama, S.; Koyano, M.

    2010-03-01

    Structural changes of synthetic opal under shock-wave compression up to 38.1 GPa have been investigated using SEM, the X-ray diffraction method (XRD), and infrared (IR) and Raman spectroscopies. The results indicate that dehydration and polymerization of surface silanol due to the high shock and residual temperatures are very important factors in the structural evolution of synthetic opal under shock compression. Synthetic opal loses its opalescence at shock pressures of 10.9 and 18.4 GPa. At 18.4 GPa, dehydration and polymerization of surface silanol and transformation of the network structure may occur simultaneously. The 4-membered rings of TO4 tetrahedra in the as-prepared synthetic opal may relax to larger rings, such as 6-membered rings, at high residual temperature. The residual temperature may therefore be significantly high even at 18.4 GPa of shock compression. At 23.9 GPa, the opal sample recovered its opalescence; the origin of this opalescence may be a layer structure formed by shock compression. Finally, the sample fuses at the very high residual temperature reached at 38.1 GPa, and its structure approaches that of fused SiO2 glass. However, internal silanol groups still remain even at 38.1 GPa.

  19. Shock compression of synthetic opal

    Energy Technology Data Exchange (ETDEWEB)

    Inoue, A; Okuno, M; Okudera, H [Department of Earth Sciences, Kanazawa University Kanazawa, Ishikawa, 920-1192 (Japan); Mashimo, T; Omurzak, E [Shock Wave and Condensed Matter Research Center, Kumamoto University, Kumamoto, 860-8555 (Japan); Katayama, S; Koyano, M, E-mail: okuno@kenroku.kanazawa-u.ac.j [JAIST, Nomi, Ishikawa, 923-1297 (Japan)

    2010-03-01

    Structural changes of synthetic opal under shock-wave compression up to 38.1 GPa have been investigated using SEM, the X-ray diffraction method (XRD), and infrared (IR) and Raman spectroscopies. The results indicate that dehydration and polymerization of surface silanol due to the high shock and residual temperatures are very important factors in the structural evolution of synthetic opal under shock compression. Synthetic opal loses its opalescence at shock pressures of 10.9 and 18.4 GPa. At 18.4 GPa, dehydration and polymerization of surface silanol and transformation of the network structure may occur simultaneously. The 4-membered rings of TO4 tetrahedra in the as-prepared synthetic opal may relax to larger rings, such as 6-membered rings, at high residual temperature. The residual temperature may therefore be significantly high even at 18.4 GPa of shock compression. At 23.9 GPa, the opal sample recovered its opalescence; the origin of this opalescence may be a layer structure formed by shock compression. Finally, the sample fuses at the very high residual temperature reached at 38.1 GPa, and its structure approaches that of fused SiO2 glass. However, internal silanol groups still remain even at 38.1 GPa.

  20. Low complexity lossless compression of underwater sound recordings.

    Science.gov (United States)

    Johnson, Mark; Partan, Jim; Hurst, Tom

    2013-03-01

    Autonomous listening devices are increasingly used to study vocal aquatic animals, and there is a constant need to record longer or with greater bandwidth, requiring efficient use of memory and battery power. Real-time compression of sound has the potential to extend recording durations and bandwidths at the expense of increased processing operations and therefore power consumption. Whereas lossy methods such as MP3 introduce undesirable artifacts, lossless compression algorithms (e.g., flac) guarantee exact data recovery. But these algorithms are relatively complex because of the wide variety of signals they are designed to compress. A simpler lossless algorithm is shown here to provide compression factors of three or more for underwater sound recordings over a range of noise environments. The compressor was evaluated using samples from drifting and animal-borne sound recorders with sampling rates of 16-240 kHz. It achieves >87% of the compression of more-complex methods but requires about 1/10 of the processing operations, resulting in less than 1 mW of power consumption at a sampling rate of 192 kHz on a low-power microprocessor. The potential to triple recording duration with a minor increase in power consumption and no loss in sound quality may be especially valuable for battery-limited tags and robotic vehicles.
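    The abstract does not spell out the compressor's internals. A common recipe for low-complexity lossless audio coding, plausibly close in spirit to what such a compressor uses (first-difference prediction followed by Rice coding of the residuals, as in Shorten and FLAC), can be sketched as follows; the function names and the string bitstream are purely illustrative:

```python
import numpy as np

def zigzag(x):
    """Map signed int32 residuals to unsigned: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return (x << 1) ^ (x >> 31)

def unzigzag(u):
    return (u >> 1) ^ -(u & 1)

def rice_encode(samples, k):
    """First-difference the signal, then Rice-code the residuals with
    parameter k; returns the bitstream as a '0'/'1' string (k >= 1)."""
    res = np.diff(samples, prepend=0).astype(np.int32)
    bits = []
    for u in zigzag(res):
        q, r = int(u) >> k, int(u) & ((1 << k) - 1)
        bits.append('1' * q + '0' + format(r, f'0{k}b'))  # unary q, k-bit r
    return ''.join(bits)

def rice_decode(bits, n, k):
    """Exact inverse of rice_encode for n samples."""
    res, i = [], 0
    for _ in range(n):
        q = 0
        while bits[i] == '1':
            q += 1
            i += 1
        i += 1                                  # skip the '0' stop bit
        r = int(bits[i:i + k], 2)
        i += k
        res.append(unzigzag((q << k) | r))
    return np.cumsum(res)                       # undo the differencing
```

    Differencing concentrates the energy of slowly varying signals into small residuals, and Rice coding spends few bits on small values, which is why such schemes need only a fraction of the operations of a general-purpose codec.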