WorldWideScience

Sample records for artificial compressibility method

  1. The artificial compression method for computation of shocks and contact discontinuities. I - Single conservation laws

    Science.gov (United States)

    Harten, A.

    1977-01-01

    The paper discusses the use of the artificial compression method for the computation of discontinuous solutions of a single conservation law by finite difference methods. The single conservation law has either a shock or a contact discontinuity. Any monotone finite difference scheme applied to the original equation smears the discontinuity, while the same scheme applied to the equation modified by an artificial compression flux produces steady progressing profiles. If L is any finite difference scheme in conservation form and C is an artificial compressor, the split flux artificial compression method CL is a corrective scheme: L smears the discontinuity while propagating it; C compresses the smeared transition toward a sharp discontinuity. Numerical implementation of artificial compression is described.
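
Harten's corrective CL idea can be sketched on linear advection of a profile with two contact-like fronts: a first-order upwind scheme (L) smears each front, and a conservative compression step (C), built here from a minmod-limited artificial flux with local upwinding, re-sharpens it. This is a minimal illustration of the split-flux idea, not Harten's exact formulation:

```python
import numpy as np

def minmod(a, b):
    """Smaller-magnitude argument when signs agree, zero otherwise."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def upwind_step(u, c):
    """First-order upwind for u_t + a u_x = 0 with a > 0; c = a*dt/dx (periodic)."""
    return u - c * (u - np.roll(u, 1))

def compression_step(u, lam):
    """Corrective step C: local Lax-Friedrichs update for u_t + g(u)_x = 0,
    where g is a minmod-based artificial compression flux (zero at extrema
    and in flat regions, so only smeared transitions are affected)."""
    dp = np.roll(u, -1) - u                  # u_{j+1} - u_j
    dm = u - np.roll(u, 1)                   # u_j - u_{j-1}
    g = minmod(dp, dm)
    gp = np.roll(g, -1)                      # g_{j+1}
    speed = np.zeros_like(u)                 # local wave speed |dg/du| at j+1/2
    nz = dp != 0.0
    speed[nz] = np.abs(gp[nz] - g[nz]) / np.abs(dp[nz])
    f_half = 0.5 * (g + gp) - 0.5 * speed * dp
    return u - lam * (f_half - np.roll(f_half, 1))

def front_width(u, lo=0.05, hi=0.95):
    """Number of cells lying strictly inside the smeared transitions."""
    return int(np.sum((u > lo) & (u < hi)))

n, c, steps = 200, 0.5, 160
x = np.arange(n) / n
u0 = np.where((x >= 0.25) & (x < 0.75), 1.0, 0.0)   # square pulse, two fronts

u_plain, u_acm = u0.copy(), u0.copy()
for _ in range(steps):
    u_plain = upwind_step(u_plain, c)                    # L alone: smears
    u_acm = compression_step(upwind_step(u_acm, c), c)   # CL: smear, then compress
```

After 160 steps the plain upwind fronts occupy dozens of cells, while the CL profile keeps them a few cells wide; both variants are in conservation form, so the pulse's integral is unchanged.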

  2. An artificial compressibility CBS method for modelling heat transfer and fluid flow in heterogeneous porous materials

    CSIR Research Space (South Africa)

    Malan, AG

    2011-08-01

    Full Text Available This work is concerned with the development of an artificial compressibility version of the characteristic-based split (CBS) method proposed by Zienkiewicz and Codina (Int. J. Numer. Meth. Fluids 1995; 20:869–885). The technique is applied...

  3. Preconditioned characteristic boundary conditions based on artificial compressibility method for solution of incompressible flows

    Science.gov (United States)

    Hejranfar, Kazem; Parseh, Kaveh

    2017-09-01

    The preconditioned characteristic boundary conditions based on the artificial compressibility (AC) method are implemented at artificial boundaries for the solution of two- and three-dimensional incompressible viscous flows in the generalized curvilinear coordinates. The compatibility equations and the corresponding characteristic variables (or the Riemann invariants) are mathematically derived and then applied as suitable boundary conditions in a high-order accurate incompressible flow solver. The spatial discretization of the resulting system of equations is carried out by the fourth-order compact finite-difference (FD) scheme. In the preconditioning applied here, the value of AC parameter in the flow field and also at the far-field boundary is automatically calculated based on the local flow conditions to enhance the robustness and performance of the solution algorithm. The code is fully parallelized using the Concurrency Runtime standard and Parallel Patterns Library (PPL) and its performance on a multi-core CPU is analyzed. The incompressible viscous flows around a 2-D circular cylinder, a 2-D NACA0012 airfoil and also a 3-D wavy cylinder are simulated and the accuracy and performance of the preconditioned characteristic boundary conditions applied at the far-field boundaries are evaluated in comparison to the simplified boundary conditions and the non-preconditioned characteristic boundary conditions. It is indicated that the preconditioned characteristic boundary conditions considerably improve the convergence rate of the solution of incompressible flows compared to the other boundary conditions and the computational costs are significantly decreased.
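
The characteristic machinery these boundary conditions exploit is easiest to see on the one-dimensional artificial compressibility system p_t + beta u_x = 0, u_t + p_x = 0, whose Riemann invariants w± = p ± sqrt(beta)·u travel at the pseudo-acoustic speeds ±sqrt(beta). The sketch below (first-order upwinding of each invariant, not the paper's fourth-order compact scheme or preconditioning) fixes the incoming invariant at each boundary from far-field values and extrapolates the outgoing one, so divergence errors leave the domain and the field relaxes to a steady state with u_x = 0:

```python
import numpy as np

beta = 1.0                      # artificial compressibility parameter
c = np.sqrt(beta)               # pseudo-acoustic wave speed
n, cfl, steps = 200, 0.5, 3000
u_inf, p_inf = 1.0, 0.0         # far-field state

x = np.linspace(0.0, 1.0, n)
u = u_inf + 0.2 * np.exp(-200.0 * (x - 0.5) ** 2)   # u_x != 0 perturbation
p = np.full(n, p_inf)

for _ in range(steps):
    wp = p + c * u              # right-going invariant (speed +c)
    wm = p - c * u              # left-going invariant (speed -c)
    wp_new = wp.copy()
    wm_new = wm.copy()
    wp_new[1:] = wp[1:] - cfl * (wp[1:] - wp[:-1])    # upwind toward +x
    wm_new[:-1] = wm[:-1] - cfl * (wm[:-1] - wm[1:])  # upwind toward -x
    wp_new[0] = p_inf + c * u_inf    # incoming invariant fixed at left boundary
    wm_new[-1] = p_inf - c * u_inf   # incoming invariant fixed at right boundary
    p = 0.5 * (wp_new + wm_new)
    u = 0.5 * (wp_new - wm_new) / c
```

In pseudo-time the perturbation splits into two pseudo-acoustic waves that exit through the characteristic boundaries without reflection, leaving the constant-velocity (in 1-D, divergence-free) steady state.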

  4. A relaxation-projection method for compressible flows. Part II: Artificial heat exchanges for multiphase shocks

    International Nuclear Information System (INIS)

    Petitpas, Fabien; Franquet, Erwin; Saurel, Richard; Le Metayer, Olivier

    2007-01-01

    The relaxation-projection method developed in Saurel et al. [R. Saurel, E. Franquet, E. Daniel, O. Le Metayer, A relaxation-projection method for compressible flows. Part I: The numerical equation of state for the Euler equations, J. Comput. Phys. (2007) 822-845] is extended to the non-conservative hyperbolic multiphase flow model of Kapila et al. [A.K. Kapila, Menikoff, J.B. Bdzil, S.F. Son, D.S. Stewart, Two-phase modeling of deflagration to detonation transition in granular materials: reduced equations, Physics of Fluids 13(10) (2001) 3002-3024]. This model can treat multi-temperature mixtures evolving with a single pressure and velocity and is particularly interesting for the computation of interface problems with compressible materials as well as wave propagation in heterogeneous mixtures. The non-conservative character of this model, however, poses computational challenges in the presence of shocks. The first issue is related to the resolution of the Riemann problem, which requires shock jump conditions. Thanks to the Rankine-Hugoniot relations proposed and validated in Saurel et al. [R. Saurel, O. Le Metayer, J. Massoni, S. Gavrilyuk, Shock jump conditions for multiphase mixtures with stiff mechanical relaxation, Shock Waves 16 (3) (2007) 209-232], exact and approximate two-shock Riemann solvers are derived. However, the Riemann solver is only one part of a numerical scheme, and non-conservative variables pose extra difficulties for the projection, or cell averaging, of the solution. It is shown that conventional Godunov schemes are unable to converge to the exact solution for strong multiphase shocks. This is due to the incorrect partition of the energies, or entropies, in the cell-averaged mixture. To circumvent this difficulty a specific Lagrangian scheme is developed. The correct partition of the energies is achieved by using an artificial heat exchange in the shock layer. 
With the help of an asymptotic analysis this heat exchange takes a similar form as
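
The multiphase jump relations themselves are beyond a short sketch, but the role they play can be illustrated with their single-phase analogue: for an ideal gas, the normal-shock relations are exactly the states that equalize the mass, momentum and energy fluxes across the shock in the shock-attached frame. A small check using the standard ideal-gas relations, not the paper's multiphase conditions:

```python
import numpy as np

gamma = 1.4
rho1, p1, M1 = 1.0, 1.0e5, 2.0                 # pre-shock state, shock Mach number
a1 = np.sqrt(gamma * p1 / rho1)                # pre-shock sound speed
u1 = M1 * a1                                   # shock-frame inflow velocity

# ideal-gas normal-shock (Rankine-Hugoniot) relations
rho2 = rho1 * (gamma + 1.0) * M1**2 / ((gamma - 1.0) * M1**2 + 2.0)
p2 = p1 * (1.0 + 2.0 * gamma / (gamma + 1.0) * (M1**2 - 1.0))
u2 = rho1 * u1 / rho2                          # from mass conservation

def fluxes(rho, u, p):
    """Mass, momentum and energy fluxes in the shock frame."""
    E = p / (gamma - 1.0) + 0.5 * rho * u**2   # total energy per unit volume
    return np.array([rho * u, rho * u**2 + p, u * (E + p)])
```

The fluxes on both sides agree to machine precision, which is precisely the jump condition a conservative Riemann solver enforces; the difficulty the paper addresses is that the non-conservative multiphase model offers no such ready-made fluxes.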

  5. A relaxation-projection method for compressible flows. Part II: Artificial heat exchanges for multiphase shocks

    Science.gov (United States)

    Petitpas, Fabien; Franquet, Erwin; Saurel, Richard; Le Metayer, Olivier

    2007-08-01

    The relaxation-projection method developed in Saurel et al. [R. Saurel, E. Franquet, E. Daniel, O. Le Metayer, A relaxation-projection method for compressible flows. Part I: The numerical equation of state for the Euler equations, J. Comput. Phys. (2007) 822-845] is extended to the non-conservative hyperbolic multiphase flow model of Kapila et al. [A.K. Kapila, Menikoff, J.B. Bdzil, S.F. Son, D.S. Stewart, Two-phase modeling of deflagration to detonation transition in granular materials: reduced equations, Physics of Fluids 13(10) (2001) 3002-3024]. This model can treat multi-temperature mixtures evolving with a single pressure and velocity and is particularly interesting for the computation of interface problems with compressible materials as well as wave propagation in heterogeneous mixtures. The non-conservative character of this model, however, poses computational challenges in the presence of shocks. The first issue is related to the resolution of the Riemann problem, which requires shock jump conditions. Thanks to the Rankine-Hugoniot relations proposed and validated in Saurel et al. [R. Saurel, O. Le Metayer, J. Massoni, S. Gavrilyuk, Shock jump conditions for multiphase mixtures with stiff mechanical relaxation, Shock Waves 16 (3) (2007) 209-232], exact and approximate two-shock Riemann solvers are derived. However, the Riemann solver is only one part of a numerical scheme, and non-conservative variables pose extra difficulties for the projection, or cell averaging, of the solution. It is shown that conventional Godunov schemes are unable to converge to the exact solution for strong multiphase shocks. This is due to the incorrect partition of the energies, or entropies, in the cell-averaged mixture. To circumvent this difficulty a specific Lagrangian scheme is developed. The correct partition of the energies is achieved by using an artificial heat exchange in the shock layer. 
With the help of an asymptotic analysis this heat exchange takes a similar form as

  6. [Electroneurographic monitoring during the test of artificial compression as a method of early diagnosis of carpal tunnel syndrome].

    Science.gov (United States)

    Bahtereva, E V; Shirokov, V A; Leiderman, E L; Varaksin, A N; Panov, V G

    To develop the algorithm of early diagnosis of carpal tunnel syndrome (CTS) at the stage of functional neurological disturbances by expanding diagnostic possibilities of electroneuromyography using artificial compression test. Parameters of conductivity of the median nerve in 54 patients with finger numbness were analyzed during 3 months before and after compression of the forearm (blood pressure was measured for 1 min). An increase in the latency in motor fibers and a decrease in the amplitude of sensory response were identified in patients with CTS signs and normal electroneuromyographical parameters at baseline. The use of additional electroneuromyographical monitoring during the provocative artificial compression test expands the possibilities of this method and improves early diagnosis of CTS.

  7. Testing framework for compression methods

    OpenAIRE

    Štoček, Ondřej

    2008-01-01

    There are many algorithms for data compression. These compression methods often achieve different compression ratios and also use computer resources differently. In practice, a combination of compression methods is usually used instead of a standalone method. A software tool can therefore be developed in which existing compression methods are easily combined into new ones and the results tested. The main goal of this work is to propose such a tool and implement it. A further goal is to implement a basic library ...
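
The combination idea can be sketched with Python's standard-library compressors: wrap existing codecs so they compose into a new one, then test the result on sample data. A toy stand-in for the thesis's tool, not its actual design:

```python
import bz2
import zlib

class Chain:
    """Compose codecs: apply each in order, undo them in reverse order."""
    def __init__(self, *codecs):
        self.codecs = codecs

    def compress(self, data: bytes) -> bytes:
        for codec in self.codecs:
            data = codec.compress(data)
        return data

    def decompress(self, data: bytes) -> bytes:
        for codec in reversed(self.codecs):
            data = codec.decompress(data)
        return data

def ratio(codec, data: bytes) -> float:
    """Compression ratio achieved by a codec on the given data."""
    return len(data) / len(codec.compress(data))

data = b"the quick brown fox jumps over the lazy dog\n" * 500
chain = Chain(zlib, bz2)    # a "new" method built from two existing ones
```

On this highly redundant input both stand-alone codecs compress well; chaining them rarely helps, since the first codec's output is close to incompressible. Quantifying exactly that kind of trade-off is what such a testing framework makes easy.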

  8. Artificial Neural Network Model for Predicting Compressive

    OpenAIRE

    Salim T. Yousif; Salwa M. Abdullah

    2013-01-01

      Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at early time is highly desirable. This study presents the effort in applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum...

  9. Artificial Neural Network Model for Predicting Compressive

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

    Full Text Available Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. The test of the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20% and that 88% of the output results have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results showed that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
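
The modeling idea can be sketched with a minimal back-propagation network in NumPy, trained on synthetic data in which strength falls with the water/cement ratio; the data, architecture and learning rate here are illustrative stand-ins, not the paper's data sets or network:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic stand-in for mix-proportion data: strength decreases with w/c ratio
wc = rng.uniform(0.3, 0.7, (200, 1))                           # water/cement ratio
strength = 60.0 - 70.0 * (wc - 0.3) + rng.normal(0.0, 1.0, (200, 1))

X = (wc - 0.3) / 0.4    # scale input to [0, 1]
y = strength / 60.0     # scale target to roughly [0.5, 1]

# one hidden tanh layer, trained by full-batch gradient descent on MSE
W1 = rng.normal(0.0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                              # forward pass
    pred = h @ W2 + b2
    err = pred - y
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)      # backward pass
    dh = (err @ W2.T) * (1.0 - h**2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1                        # gradient descent update
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
```

After training, the scaled mean squared error approaches the noise level of the synthetic data, the same kind of fit-quality check the paper performs with its held-out mixes.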

  10. Survey of numerical methods for compressible fluids

    Energy Technology Data Exchange (ETDEWEB)

    Sod, G A

    1977-06-01

    The finite difference methods of Godunov, Hyman, Lax-Wendroff (two-step), MacCormack, Rusanov, the upwind scheme, the hybrid scheme of Harten and Zwas, the antidiffusion method of Boris and Book, and the artificial compression method of Harten are compared with the random choice method, known as Glimm's method. The methods are used to integrate the one-dimensional equations of gas dynamics for an inviscid fluid. The results are compared and demonstrate that Glimm's method has several advantages. 16 figs., 4 tables.
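
The test problem used throughout this survey is now known as the Sod shock tube. A sketch with one of the simplest schemes of the family surveyed, first-order Lax-Friedrichs (heavily diffusive, but it illustrates the kind of comparison the survey makes):

```python
import numpy as np

gamma = 1.4

def flux(U):
    """Euler fluxes for U = (rho, rho*u, E) stored as rows."""
    rho, mom, E = U
    u = mom / rho
    p = (gamma - 1.0) * (E - 0.5 * rho * u**2)
    return np.array([mom, mom * u + p, u * (E + p)])

n = 400
dx = 1.0 / n
U = np.zeros((3, n))
left = np.arange(n) < n // 2
U[0] = np.where(left, 1.0, 0.125)                     # density
U[2] = np.where(left, 1.0, 0.1) / (gamma - 1.0)       # energy (u = 0 initially)
mass0 = U[0].sum()

t, t_end = 0.0, 0.15      # waves stay inside the domain until then
while t < t_end - 1e-12:
    rho, mom, E = U
    u = mom / rho
    p = (gamma - 1.0) * (E - 0.5 * rho * u**2)
    smax = np.max(np.abs(u) + np.sqrt(gamma * p / rho))
    dt = min(0.9 * dx / smax, t_end - t)              # CFL-limited step
    Up = np.roll(U, -1, axis=1); Up[:, -1] = U[:, -1] # transmissive boundaries
    Um = np.roll(U, 1, axis=1);  Um[:, 0] = U[:, 0]
    U = 0.5 * (Up + Um) - dt / (2.0 * dx) * (flux(Up) - flux(Um))
    t += dt

rho_f, mom_f, E_f = U
p_f = (gamma - 1.0) * (E_f - 0.5 * mom_f**2 / rho_f)
```

The result shows a smeared but correctly located rarefaction, contact and shock; because the scheme is conservative and the waves have not reached the boundaries, total mass is preserved to round-off.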

  11. Artificial neural network does better spatiotemporal compressive sampling

    Science.gov (United States)

    Lee, Soo-Young; Hsu, Charles; Szu, Harold

    2012-06-01

    Spatiotemporal sparseness is generated naturally by the human visual system, modeled here with an artificial neural network implementing associative memory. Sparseness means nothing more and nothing less than that compressive sensing achieves information concentration. To concentrate the information, one uses spatial correlation, a spatial FFT or DWT, or, best of all, an adaptive wavelet transform (cf. NUS, Shen Shawei). For higher-dimensional spatiotemporal information concentration, however, mathematics cannot operate as flexibly as a living human sensory system, presumably for reasons of survival. The rest of the story is given in the paper.

  12. Estimation of concrete compressive strength using artificial neural network

    Directory of Open Access Journals (Sweden)

    Kostić Srđan

    2015-01-01

    Full Text Available In the present paper, concrete compressive strength is evaluated using a back-propagation feed-forward artificial neural network. Training of the neural network is performed using the Levenberg-Marquardt learning algorithm for four architectures of artificial neural networks, with one, three, eight and twelve nodes in a hidden layer, in order to avoid the occurrence of overfitting. Training, validation and testing of the neural network are conducted for 75 concrete samples with distinct w/c ratios and amounts of superplasticizer of melamine type. These specimens were exposed to different numbers of freeze/thaw cycles and their compressive strength was determined after 7, 20 and 32 days. The obtained results indicate that the neural network with one hidden layer and twelve hidden nodes gives reasonable prediction accuracy in comparison to the experimental results (R=0.965, MSE=0.005). These results are further confirmed by calculating the standard statistical errors: the chosen architecture of the neural network shows the smallest values of mean absolute percentage error (MAPE), variance absolute relative error (VARE) and median absolute error (MEDAE), and the highest value of variance accounted for (VAF).

  13. Double-compression method for biomedical images

    Science.gov (United States)

    Antonenko, Yevhenii A.; Mustetsov, Timofey N.; Hamdi, Rami R.; Małecka-Massalska, Teresa; Orshubekov, Nurbek; DzierŻak, RóŻa; Uvaysova, Svetlana

    2017-08-01

    This paper describes a double compression method (DCM) for biomedical images. A comparison of the image compression factors of JPEG, PNG and the developed DCM was carried out. The main purpose of the DCM is the compression of medical images while preserving the key points that carry diagnostic information. To estimate the minimum compression factor, an analysis of the coding of a random-noise image is presented.

  14. Krylov methods for compressible flows

    Science.gov (United States)

    Tidriri, M. D.

    1995-01-01

    We investigate the application of Krylov methods to compressible flows, and the effect of implicit boundary conditions on the implicit solution of nonlinear problems. Two defect-correction procedures, namely, approximate factorization (AF) for structured grids and ILU/GMRES for general grids, are considered. Also considered here are Newton-Krylov matrix-free methods that we combined with the use of mixed discretization schemes in the implicitly defined Jacobian and its preconditioner. Numerical experiments that show the performance of our approaches are then presented.
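
The matrix-free idea can be sketched on a small nonlinear system: GMRES never needs the Jacobian J itself, only products J·v, which a finite difference of the residual supplies. A minimal Newton-GMRES in NumPy (a toy algebraic system, not a flow solver; no preconditioning or restarts):

```python
import numpy as np

def gmres(matvec, b, m):
    """Plain full GMRES: Arnoldi process plus a small least-squares solve."""
    n = b.size
    Q = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for j in range(m):
        w = matvec(Q[:, j])
        for i in range(j + 1):               # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:              # happy breakdown: solution found early
            m = j + 1
            break
        Q[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m + 1); e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
    return Q[:, :m] @ y

# toy nonlinear system F(u) = A u + u^3 - b, with A symmetric positive definite
rng = np.random.default_rng(1)
n = 20
M = rng.normal(0.0, 0.1, (n, n))
A = 4.0 * np.eye(n) + 0.5 * (M + M.T)
b = rng.normal(0.0, 1.0, n)
F = lambda u: A @ u + u**3 - b

u = np.zeros(n)
for _ in range(20):                          # Newton iterations
    r = F(u)
    if np.linalg.norm(r) < 1e-10:
        break
    eps = 1e-7
    Jv = lambda v, u=u, r=r: (F(u + eps * v) - r) / eps   # matrix-free J.v product
    u = u + gmres(Jv, -r, n)
```

The Jacobian is never formed or stored; in a flow solver the same structure appears with F the discretized residual, plus the preconditioning and mixed discretizations the paper discusses.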

  15. High speed inviscid compressible flow by the finite element method

    Science.gov (United States)

    Zienkiewicz, O. C.; Loehner, R.; Morgan, K.

    1984-01-01

    The finite element method and an explicit time stepping algorithm which is based on Taylor-Galerkin schemes with an appropriate artificial viscosity is combined with an automatic mesh refinement process which is designed to produce accurate steady state solutions to problems of inviscid compressible flow in two dimensions. The results of two test problems are included which demonstrate the excellent performance characteristics of the proposed procedures.

  16. Methods for Distributed Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Dennis Sundman

    2013-12-01

    Full Text Available Compressed sensing is a thriving research field covering a class of problems where a large sparse signal is reconstructed from a few random measurements. In the presence of several sensor nodes measuring correlated sparse signals, improvements in terms of recovery quality, or a requirement for fewer local measurements, can be expected if the nodes cooperate. In this paper, we provide an overview of the current literature regarding distributed compressed sensing; in particular, we discuss aspects of network topologies, signal models and recovery algorithms.
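
The local recovery step that distributed schemes build on can be sketched with orthogonal matching pursuit: a k-sparse signal is reconstructed from m far fewer than n random measurements. The sizes and matrix below are illustrative; distributed variants add inter-node cooperation on top of this:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k columns of A to explain y."""
    support = []
    residual = y.copy()
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))      # column most correlated
        support.append(j)
        sub = A[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)  # refit on current support
        residual = y - sub @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
m, n, k = 80, 128, 4
A = rng.normal(0.0, 1.0, (m, n)) / np.sqrt(m)      # random measurement matrix
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = [1.0, -1.0, 2.0, -2.0]
y = A @ x_true                                      # m << n noiseless measurements
x_hat = omp(A, y, k)
```

With noiseless measurements and a well-conditioned random matrix, the greedy method recovers the support exactly and the least-squares refit then recovers the coefficients to machine precision.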

  17. Prediction of compressibility parameters of the soils using artificial neural network.

    Science.gov (United States)

    Kurnaz, T Fikret; Dagdeviren, Ugur; Yildiz, Murat; Ozkan, Ozhan

    2016-01-01

    The compression index and recompression index are among the important compressibility parameters for settlement calculations in fine-grained soil layers. These parameters can be determined by carrying out laboratory oedometer tests on undisturbed samples; however, the test is quite time-consuming and expensive. Therefore, many empirical formulas based on regression analysis have been presented to estimate the compressibility parameters using soil index properties. In this paper, an artificial neural network (ANN) model is suggested for the prediction of compressibility parameters from basic soil properties. For this purpose, the input parameters are selected as the natural water content, initial void ratio, liquid limit and plasticity index. In this model, two output parameters, the compression index and the recompression index, are predicted in a combined network structure. As the result of the study, the proposed ANN model is successful for the prediction of the compression index; however, the predicted recompression index values are not satisfactory compared to those for the compression index.

  18. Comparative Study on Theoretical and Machine Learning Methods for Acquiring Compressed Liquid Densities of 1,1,1,2,3,3,3-Heptafluoropropane (R227ea) via Song and Mason Equation, Support Vector Machine, and Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Hao Li

    2016-01-01

    Full Text Available 1,1,1,2,3,3,3-Heptafluoropropane (R227ea) is a good refrigerant that reduces greenhouse effects and ozone depletion. In practical applications, we usually have to know the compressed liquid densities at different temperatures and pressures. However, the measurement requires a series of complex apparatus and operations, wasting too much manpower and resources. To solve these problems, the Song and Mason equation, support vector machine (SVM), and artificial neural networks (ANNs) were used here to develop theoretical and machine learning models, respectively, to predict the compressed liquid densities of R227ea with only temperatures and pressures as inputs. Results show that, compared with the Song and Mason equation, appropriate machine learning models trained with precise experimental samples give better predictions, with lower root mean square errors (RMSEs) (e.g., the RMSE of the SVM trained with data provided by Fedele et al. [1] is 0.11, while the RMSE of the Song and Mason equation is 196.26). Compared to advanced conventional measurements, knowledge-based machine learning models are proved to be more time-saving and user-friendly.

  19. Using an artificial neural network to predict carbon dioxide compressibility factor at high pressure and temperature

    Energy Technology Data Exchange (ETDEWEB)

    Mohagheghian, Erfan [Memorial University of Newfoundland, St. John' s (Canada); Zafarian-Rigaki, Habiballah; Motamedi-Ghahfarrokhi, Yaser; Hemmati-Sarapardeh, Abdolhossein [Amirkabir University of Technology, Tehran (Iran, Islamic Republic of)

    2015-10-15

    Carbon dioxide injection, which is widely used as an enhanced oil recovery (EOR) method, has the potential of being coupled with CO{sub 2} sequestration and reducing the emission of greenhouse gas. Hence, knowing the compressibility factor of carbon dioxide is of vital significance. The compressibility factor (Z-factor) is traditionally measured through time-consuming, expensive and cumbersome experiments. Hence, developing a fast, robust and accurate model for its estimation is necessary. In this study, a new reliable model on the basis of feed-forward artificial neural networks is presented to predict the CO{sub 2} compressibility factor. Reduced temperature and pressure were selected as the input parameters of the proposed model. To evaluate and compare the results of the developed model with pre-existing models, both statistical and graphical error analyses were employed. The results indicated that the proposed model is more reliable and accurate compared to pre-existing models in a wide range of temperatures (up to 1,273.15 K) and pressures (up to 140 MPa). Furthermore, by employing the relevancy factor, the effect of pressure and temperature on the Z-factor of CO{sub 2} was compared below and above the critical pressure of CO{sub 2}, and the physically expected trends were observed. Finally, to identify the probable outliers and the applicability domain of the proposed ANN model, both numerical and graphical techniques based on the Leverage approach were performed. The results illustrated that only 1.75% of the experimental data points were located outside the applicability domain of the proposed model. As a result, the developed model is reliable for the prediction of the CO{sub 2} compressibility factor.
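
The Leverage approach mentioned at the end can be sketched directly: with design matrix X (inputs plus an intercept column), the hat matrix H = X(XᵀX)⁻¹Xᵀ has diagonal entries h_i (the leverages), and samples with h_i above the customary warning threshold h* = 3(p+1)/n fall outside the applicability domain. Synthetic data for illustration, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, p = 200, 2                        # e.g. reduced temperature and pressure
X_feat = rng.normal(0.0, 1.0, (n_samples, p))
X_feat[0] = [6.0, 6.0]                       # one deliberately extreme sample
X = np.hstack([np.ones((n_samples, 1)), X_feat])   # prepend intercept column

H = X @ np.linalg.inv(X.T @ X) @ X.T         # hat matrix
h = np.diag(H)                               # leverage of each sample
h_star = 3.0 * (p + 1) / n_samples           # warning threshold
outside = np.flatnonzero(h > h_star)         # outside the applicability domain
```

A handy sanity check: the leverages always sum to the number of fitted parameters (the trace of H), and the planted extreme sample is flagged.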

  20. Artificial intelligence methods for diagnostic

    International Nuclear Information System (INIS)

    Dourgnon-Hanoune, A.; Porcheron, M.; Ricard, B.

    1996-01-01

    To assist in the diagnosis of its nuclear power plants, the Research and Development Division of Electricite de France has been developing skills in Artificial Intelligence for about a decade. Different diagnostic expert systems have been designed, among them SILEX for control rod cabinet troubleshooting, DIVA for turbine generator diagnosis, and DIAPO for reactor coolant pump diagnosis. This know-how in expert knowledge modeling and acquisition is a direct result of the experience gained during these developments and of a more general reflection on knowledge-based system development. We have been able to reuse these results for other developments, such as a guide for the diagnosis of auxiliary rotating machines. (authors)

  1. Characteristic method for isentropic compression simulation

    Directory of Open Access Journals (Sweden)

    Quanxi Xue

    2014-05-01

    Full Text Available A characteristic method has been developed using a Murnaghan-form isentropic equation of state and characteristics, and has been verified by example applications. General information on two ramp compression experiments was calculated and matched the experimental results well, except for some small discrepancies. Finally, the factors influencing the precision of this model are discussed and other practical applications are presented.
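
The ingredients of such a characteristic calculation can be sketched: a Murnaghan-form isentrope P(ρ) = (K0/n)[(ρ/ρ0)ⁿ − 1], the isentropic sound speed c = √(dP/dρ), and the simple-wave relation u(ρ) = ∫ c/ρ dρ, which gives the C+ characteristic speed u + c along a ramp. The material constants below are illustrative (aluminum-like), not taken from the paper:

```python
import numpy as np

K0, n_exp, rho0 = 76.0e9, 4.0, 2700.0    # bulk modulus (Pa), exponent, density (kg/m^3)

def pressure(rho):
    """Murnaghan-form isentrope."""
    return (K0 / n_exp) * ((rho / rho0) ** n_exp - 1.0)

def sound_speed(rho):
    """c^2 = dP/drho = (K0/rho0) * (rho/rho0)**(n_exp - 1)."""
    return np.sqrt((K0 / rho0) * (rho / rho0) ** (n_exp - 1.0))

rho = np.linspace(rho0, 1.3 * rho0, 500)   # compression ramp to 1.3x density
c = sound_speed(rho)
# simple-wave particle velocity u = integral of c/rho d(rho), trapezoidal rule
integrand = c / rho
u = np.concatenate([[0.0],
                    np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(rho))])
char_speed = u + c                         # C+ characteristic speed
```

Because both u and c grow with density, successive characteristics steepen; keeping them from crossing (shock formation) is exactly what distinguishes isentropic ramp compression from shock compression.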

  2. Applications of Taylor-Galerkin finite element method to compressible internal flow problems

    Science.gov (United States)

    Sohn, Jeong L.; Kim, Yongmo; Chung, T. J.

    1989-01-01

    A two-step Taylor-Galerkin finite element method with Lapidus' artificial viscosity scheme is applied to several test cases for internal compressible inviscid flow problems. Investigations for the effect of supersonic/subsonic inlet and outlet boundary conditions on computational results are particularly emphasized.

  3. Methods for Sampling and Measurement of Compressed Air Contaminants

    Energy Technology Data Exchange (ETDEWEB)

    Stroem, L.

    1976-10-15

    In order to improve the technique for measuring oil and water entrained in a compressed air stream, a laboratory study has been made of some methods for sampling and measurement. For this purpose, water or oil as artificial contaminants were injected in thin streams into a test loop carrying dry compressed air. Sampling was performed in a vertical run, downstream of the injection point. Wall-attached liquid, coarse droplet flow, and fine droplet flow were sampled separately. The results were compared with two-phase flow theory and direct observation of liquid behaviour. In a study of sample transport through narrow tubes, it was observed that, below a certain liquid loading, the sample did not move, the liquid remaining stationary on the tubing wall. The basic analysis of the collected samples was made by gravimetric methods. Adsorption tubes were used with success to measure water vapour. A humidity meter with a sensor of the aluminium oxide type was found to be unreliable. Oil could be measured selectively by a flame ionization detector, the sample being pretreated in an evaporation-condensation unit.

  4. Prediction of compression strength of high performance concrete using artificial neural networks

    Science.gov (United States)

    Torre, A.; Garcia, F.; Moromi, I.; Espinoza, P.; Acuña, L.

    2015-01-01

    High-strength concrete is undoubtedly one of the most innovative materials in construction. Its manufacture is simple and is carried out starting from essential components (water, cement, fine and coarse aggregates) and a number of additives. Their proportions have a high influence on the final strength of the product. These relations do not seem to follow a mathematical formula, and yet knowledge of them is crucial for optimizing the quantities of raw materials used in the manufacture of concrete. Of all mechanical properties, concrete compressive strength at 28 days is most often used for quality control. Therefore, it would be important to have a tool to numerically model such relationships, even before processing. In this respect, artificial neural networks have proven to be a powerful modeling tool, especially when a reliable result matters more than explicit knowledge of the relationships between the variables involved in the process. This research has designed an artificial neural network to model the compressive strength of concrete based on its manufacturing parameters, obtaining correlations of the order of 0.94.

  5. Prediction of compression strength of high performance concrete using artificial neural networks

    International Nuclear Information System (INIS)

    Torre, A; Moromi, I; Garcia, F; Espinoza, P; Acuña, L

    2015-01-01

    High-strength concrete is undoubtedly one of the most innovative materials in construction. Its manufacture is simple and is carried out starting from essential components (water, cement, fine and coarse aggregates) and a number of additives. Their proportions have a high influence on the final strength of the product. These relations do not seem to follow a mathematical formula, and yet knowledge of them is crucial for optimizing the quantities of raw materials used in the manufacture of concrete. Of all mechanical properties, concrete compressive strength at 28 days is most often used for quality control. Therefore, it would be important to have a tool to numerically model such relationships, even before processing. In this respect, artificial neural networks have proven to be a powerful modeling tool, especially when a reliable result matters more than explicit knowledge of the relationships between the variables involved in the process. This research has designed an artificial neural network to model the compressive strength of concrete based on its manufacturing parameters, obtaining correlations of the order of 0.94.

  6. Lagrangian transported MDF methods for compressible high speed flows

    Science.gov (United States)

    Gerlinger, Peter

    2017-06-01

    This paper deals with the application of thermochemical Lagrangian MDF (mass density function) methods for compressible sub- and supersonic RANS (Reynolds Averaged Navier-Stokes) simulations. A new approach to treat molecular transport is presented. This technique, on the one hand, ensures numerical stability of the particle solver in laminar regions of the flow field (e.g. in the viscous sublayer) and, on the other hand, takes differential diffusion into account. It is shown in a detailed analysis that the new method correctly predicts first- and second-order moments on the basis of conventional modeling approaches. Moreover, a number of challenges for MDF particle methods in high speed flows are discussed, e.g. high cell-aspect-ratio grids close to solid walls, wall heat transfer, shock resolution, and problems from statistical noise which may cause artificial shock systems in supersonic flows. A Mach 2 supersonic mixing channel with multiple shock reflections and a model rocket combustor simulation demonstrate the suitability of this technique for practical applications. Both test cases are simulated successfully for the first time with a hybrid finite-volume (FV)/Lagrangian particle solver (PS).

  7. A review of medical image compression methods - general characterization

    Energy Technology Data Exchange (ETDEWEB)

    Przelaskowski, A.; Kazubek, M.; Jamrogiewicz, T. [Politechnika Warszawska, Warsaw (Poland). Inst. Radioelektroniki

    1995-12-31

    A general view of popular and often-applied lossless and lossy compression techniques is presented. Lossless methods for either single images (intraframe methods) or sequences of correlated images (interframe methods) are briefly characterized. Often-used lossy methods are also introduced. The class of medical images has no specific features that could be used to improve compression efficiency; effective lossless compression techniques for natural images are also efficient in applications to medical image systems. The limit of achievable compression ratios is about 4. Techniques based on linear prediction methods are by far the most effective in reducing spatial redundancy. An optimisation of the prediction model allows bit rates to be decreased by about 10% (over the standard DPCM method). The optimum choice of compression technique depends strongly on the specific application and on the realisation possibilities of the technique. (author). 35 refs, 2 fig.
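
The prediction gain the review attributes to DPCM is easy to demonstrate: predict each pixel from its left neighbour and entropy-code the residuals. On a smooth synthetic "image" (a stand-in for real medical data) the first-order entropy of the residuals is far below that of the raw pixels, and the transform is exactly invertible, hence lossless:

```python
import numpy as np

def entropy_bits(a):
    """First-order (histogram) entropy in bits per symbol."""
    _, counts = np.unique(a, return_counts=True)
    p = counts / a.size
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
xx, yy = np.meshgrid(np.arange(256), np.arange(256))
# smooth gradient plus mild noise, clipped to the 8-bit range
img = np.clip((xx + yy) / 4 + rng.integers(-2, 3, (256, 256)), 0, 255).astype(np.int32)

# DPCM along rows: residual = pixel - left neighbour (first column kept verbatim)
resid = img.copy()
resid[:, 1:] = img[:, 1:] - img[:, :-1]

H_raw = entropy_bits(img)      # bits/pixel before prediction
H_dpcm = entropy_bits(resid)   # bits/pixel after prediction
```

A cumulative sum along each row reconstructs the image exactly, and the entropy drop bounds the gain an entropy coder can realize on the residuals.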

  8. Compressive properties of aluminum foams by gas injection method

    OpenAIRE

    Zhang Huiming; Chen Xiang; Fan Xueliu

    2012-01-01

    The compressive properties of aluminum foams produced by the gas injection method are investigated under both quasi-static and dynamic compressive loads in this paper. The experimental results indicate that the deformation of the aluminum foams goes through three stages during both quasi-static and dynamic compression: elastic deformation, plastic deformation and densification. Aluminum foams with a small average cell size or low porosity have high yield strength. An increase in strain rate can ...

  9. Compression experiments on artificial, alpine and marine ice: implications for ice-shelf/continental interactions

    Science.gov (United States)

    Dierckx, Marie; Goossens, Thomas; Samyn, Denis; Tison, Jean-Louis

    2010-05-01

    Antarctic ice shelves are important components of continental ice dynamics, in that they control grounded ice flow towards the ocean. As such, Antarctic ice shelves are a key parameter for the stability of the Antarctic ice sheet in the context of global change. Marine ice, formed by sea water accretion beneath some ice shelves, displays distinct physical (grain textures, bubble content, ...) and chemical (salinity, isotopic composition, ...) characteristics as compared to glacier ice and sea ice. The aim is to refine Glen's flow relation (generally used to describe ice behaviour in deformation) under various parameters (temperature, salinity, debris, grain size, ...) to improve the deformation laws used in dynamic ice shelf models, which would then give more accurate and/or realistic predictions of ice shelf stability. To better understand the mechanical properties of natural ice, deformation experiments were performed on ice samples in the laboratory, using a pneumatic compression device. To do so, we developed a custom-built compression rig operated by pneumatic drives. It has been designed for performing uniaxial compression tests at constant load and under unconfined conditions. The operating pressure ranges from about 0.5 to 10 bars, which allows the experimental conditions to be modified to match the conditions found at the grounding zone (in the 1 bar range). To maintain the ice at low temperature, the samples are immersed in a silicone oil bath connected to an external refrigeration system. During the experiments, the vertical displacement of the piston and the applied force are measured by sensors connected to a digital acquisition system. We started our experiments with artificial ice and went on with continental ice samples from glaciers in the Alps. The first results allowed us to acquire realistic mechanical data for natural ice. Ice viscosity was calculated for different types of artificial ice, using Glen's flow law, and showed the importance of impurities.
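
Glen's flow relation mentioned here is ε̇ = A τⁿ with n ≈ 3, so the effective viscosity η = τ/(2ε̇) = 1/(2Aτⁿ⁻¹) drops rapidly as stress rises. A worked example with an illustrative rate factor (order of magnitude for ice near the melting point; experiments like those above aim precisely at refining A and n for marine and impurity-laden ice):

```python
import numpy as np

n_glen = 3.0        # Glen exponent
A = 2.4e-24         # rate factor in Pa^-3 s^-1 (illustrative, temperate ice)

def strain_rate(tau):
    """Glen's flow law: effective strain rate (s^-1) at shear stress tau (Pa)."""
    return A * tau ** n_glen

def effective_viscosity(tau):
    """eta = tau / (2 * strain_rate) = 1 / (2 A tau^(n-1)), in Pa s."""
    return 1.0 / (2.0 * A * tau ** (n_glen - 1.0))

tau = np.array([5.0e4, 1.0e5, 2.0e5])    # 50-200 kPa, typical glacier stresses
rates = strain_rate(tau)
etas = effective_viscosity(tau)
```

With n = 3, doubling the stress multiplies the strain rate by 8 and cuts the effective viscosity by 4, which is why small changes in the flow-law parameters matter so much in ice-shelf models.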

  10. Artificial neural network intelligent method for prediction

    Science.gov (United States)

    Trifonov, Roumen; Yoshinov, Radoslav; Pavlova, Galya; Tsochev, Georgi

    2017-09-01

    Accounting and financial classification and prediction problems are highly challenging, and researchers use different methods to solve them. Methods and instruments for short-term prediction of financial operations using artificial neural networks are considered. The paper describes the methods used for prediction of financial data as well as the developed forecasting system based on a neural network. The architecture of a neural network that uses four different technical indicators, which are based on the raw data, together with the current day of the week, is presented. The network developed is used for forecasting the movement of stock prices one day ahead and consists of an input layer, one hidden layer and an output layer. The training method is the error back-propagation algorithm. The main advantage of the developed system is self-determination of the optimal topology of the neural network, due to which it becomes flexible and more precise. The proposed system with a neural network is universal and can be applied to various financial instruments using only basic technical indicators as input data.
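The abstract describes a one-hidden-layer network trained by error back-propagation. A minimal sketch of that training rule follows; the layer sizes and toy data are illustrative only (five inputs standing in for the four technical indicators plus the day of the week — the actual system determines its topology automatically):

```python
import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyNet:
    """One-hidden-layer feedforward network trained with error
    back-propagation. Sizes and data here are illustrative only."""
    def __init__(self, n_in, n_hid):
        self.w1 = [[random.uniform(-0.5, 0.5) for _ in range(n_in)]
                   for _ in range(n_hid)]
        self.w2 = [random.uniform(-0.5, 0.5) for _ in range(n_hid)]

    def forward(self, x):
        self.h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in self.w1]
        return sigmoid(sum(w * hi for w, hi in zip(self.w2, self.h)))

    def train_step(self, x, target, lr=0.5):
        y = self.forward(x)
        delta_out = (y - target) * y * (1 - y)      # output-layer error term
        for j, hj in enumerate(self.h):             # propagate error backwards
            delta_h = delta_out * self.w2[j] * hj * (1 - hj)
            self.w2[j] -= lr * delta_out * hj
            for i, xi in enumerate(x):
                self.w1[j][i] -= lr * delta_h * xi
        return 0.5 * (y - target) ** 2              # squared-error loss
```

Repeated calls to `train_step` over a training set drive the loss down, which is the whole of the back-propagation training loop the paper relies on.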

  11. A Survey on Data Compression Methods for Biological Sequences

    Directory of Open Access Journals (Sweden)

    Morteza Hosseini

    2016-10-01

    Full Text Available The ever-increasing growth in the production of high-throughput sequencing data poses a serious challenge to the storage, processing and transmission of these data. As frequently stated, it is a data deluge. Compression is essential to address this challenge—it reduces storage space and processing costs, along with speeding up data transmission. In this paper, we provide a comprehensive survey of existing compression approaches that are specialized for biological data, including protein and DNA sequences. We also devote an important part of the paper to the approaches proposed for the compression of different file formats, such as FASTA, as well as FASTQ and SAM/BAM, which contain quality scores and metadata in addition to the biological sequences. Then, we present a comparison of the performance of several methods, in terms of compression ratio, memory usage and compression/decompression time. Finally, we present some suggestions for future research on biological data compression.
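As context for the specialized sequence compressors surveyed, the four-letter DNA alphabet already admits a trivial 2-bits-per-base packing; the methods in the survey all aim to beat this naive baseline. A minimal sketch (plain packing only, no handling of ambiguity codes such as N):

```python
def pack_dna(seq):
    """Pack an A/C/G/T string into bytes at 2 bits per base - the
    naive baseline that specialized sequence compressors improve on."""
    code = {'A': 0, 'C': 1, 'G': 2, 'T': 3}
    buf = bytearray()
    acc, bits = 0, 0
    for base in seq:
        acc = (acc << 2) | code[base]
        bits += 2
        if bits == 8:
            buf.append(acc)
            acc, bits = 0, 0
    if bits:
        buf.append(acc << (8 - bits))   # left-align the final partial byte
    return bytes(buf), len(seq)

def unpack_dna(packed, n):
    letters = 'ACGT'
    out = []
    for byte in packed:
        for shift in (6, 4, 2, 0):
            out.append(letters[(byte >> shift) & 3])
    return ''.join(out[:n])
```

A 100-base sequence packs into 25 bytes, i.e. exactly one quarter of its one-byte-per-character size; statistical and reference-based compressors push well below that.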

  12. Determination of deformation and strength characteristics of artificial geomaterial having step-shaped discontinuities under uniaxial compression

    Science.gov (United States)

    Tsoy, PA

    2018-03-01

    In order to determine the empirical relationship between the linear dimensions of step-shaped macrocracks in geomaterials and the deformation and strength characteristics of geomaterials (ultimate strength, modulus of deformation) under uniaxial compression, artificial flat alabaster specimens with through discontinuities were manufactured and subjected to a series of related physical tests.

  13. The Diagonal Compression Field Method using Circular Fans

    DEFF Research Database (Denmark)

    Hansen, Thomas

    2006-01-01

    In a concrete beam with transverse stirrups the shear forces are carried by inclined compression in the concrete. Along the tensile zone and the compression zone of the beam the transverse components of the inclined compressions are transferred to the stirrups, which are thus subjected to tension. Since the eighties the diagonal compression field method has been used to design transverse shear reinforcement in concrete beams. The method is based on the lower-bound theorem of the theory of plasticity, and it has been adopted in Eurocode 2. The paper presents a new design method using circular fan solutions. To illustrate the new design method, a specific example of a prestressed concrete beam is calculated. In the example it is shown that the traditional method with constant θ-values requires 23% more shear reinforcement than calculated by the new method using circular fan solutions.

  14. A new near-lossless EEG compression method using ANN-based reconstruction technique.

    Science.gov (United States)

    Hejrati, Behzad; Fathi, Abdolhossein; Abdali-Mohammadi, Fardin

    2017-08-01

    Compression algorithms are an essential part of telemedicine systems for storing and transmitting large amounts of medical signals. Most existing compression methods utilize fixed transforms such as the discrete cosine transform (DCT) and the wavelet transform, and usually cannot efficiently extract signal redundancy, especially for non-stationary signals such as the electroencephalogram (EEG). In this paper, we first propose a learning-based adaptive transform using a combination of DCT and an artificial neural network (ANN) reconstruction technique. This adaptive ANN-based transform is applied to the DCT coefficients of EEG data to reduce their dimensionality and also to estimate the original DCT coefficients of the EEG in the reconstruction phase. To develop a new near-lossless compression method, the difference between the original DCT coefficients and the estimated ones is also quantized. The quantized error is coded using arithmetic coding and sent along with the estimated DCT coefficients as compressed data. The proposed method was applied to various datasets and the results show a higher compression rate compared to state-of-the-art methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
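The pipeline described — transform, predict, quantize the residual, entropy-code it — can be sketched with a plain DCT, a truncated-coefficient predictor standing in for the ANN, and zlib standing in for arithmetic coding. The stand-ins are mine, not the paper's, but the near-lossless guarantee (reconstruction error bounded by half the quantization step) carries over:

```python
import math, zlib

def dct2(x):
    """DCT-II with 2/N scaling (O(N^2), fine for a sketch)."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n in range(N)) * (2.0 / N) for k in range(N)]

def idct2(c):
    """Exact inverse of dct2 above (DCT-III form)."""
    N = len(c)
    return [0.5 * c[0] + sum(c[k] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                             for k in range(1, N)) for n in range(N)]

def compress(signal, keep=8, q=0.05):
    c = dct2(signal)
    kept = c[:keep]                                   # low-frequency coefficients only
    approx = idct2(kept + [0.0] * (len(c) - keep))    # crude predictor (ANN stand-in)
    resid = [round((s - a) / q) for s, a in zip(signal, approx)]
    payload = zlib.compress(",".join(map(str, resid)).encode())  # arithmetic-coding stand-in
    return kept, payload, q

def decompress(kept, payload, q, n):
    resid = [int(v) for v in zlib.decompress(payload).decode().split(",")]
    approx = idct2(kept + [0.0] * (n - len(kept)))
    return [a + r * q for a, r in zip(approx, resid)]
```

Because the quantized residual is transmitted, every reconstructed sample sits within q/2 of the original — the "near lossless" property the abstract claims.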

  15. Comparison of chest compression quality between the modified chest compression method with the use of smartphone application and the standardized traditional chest compression method during CPR.

    Science.gov (United States)

    Park, Sang-Sub

    2014-01-01

    The purpose of this study is to grasp the difference in chest compression quality between the modified chest compression method using a smartphone application and the standardized traditional chest compression method. Of the 70 people who agreed to participate and completed the CPR curriculum, 64 took part in the study (6 were absent). Participants using the modified chest compression method were assigned to the smartphone group (33 people), and those using the standardized chest compression method to the traditional group (31 people). Both groups used the same practice and evaluation manikins. The smartphone group used applications on the Android and iOS operating systems (OS) of two smartphone products (G, i). Measurements were conducted from September 25th to 26th, 2012. Data were analyzed with the SPSS WIN 12.0 program. The results showed more adequate compression depth (p < 0.01) in the traditional group (53.77 mm) than in the smartphone group (48.35 mm). The proportion of proper chest compressions was also higher (p < 0.05) in the traditional group (73.96%) than in the smartphone group (60.51%). As for awareness of chest compression accuracy, the traditional group (3.83 points) scored higher (p < 0.001) than the smartphone group (2.32 points). In an additional questionnaire administered only to the smartphone group, the main reasons rescuers viewed the modified method negatively were the occurrence of hand-back pain (48.5%) and unstable posture (21.2%).

  16. Robust steganographic method utilizing properties of MJPEG compression standard

    Directory of Open Access Journals (Sweden)

    Jakub Oravec

    2015-06-01

    Full Text Available This article presents the design of a steganographic method which uses a video container as cover data. The video track was recorded by a webcam and further encoded with the compression standard MJPEG. The proposed method also takes into account the effects of lossy compression. The embedding process is realized by switching the places of transform coefficients computed by the Discrete Cosine Transform. The article describes the possibilities, the techniques used, and the advantages and drawbacks of the chosen solution. The results are presented at the end of the article.

  17. METHOD AND APPARATUS FOR INSPECTION OF COMPRESSED DATA PACKAGES

    DEFF Research Database (Denmark)

    2008-01-01

    A method for inspection of compressed data packages, which are transported over a data network, is provided. The data packages comprise a data package header containing control data for securing the correct delivery and interpretation of the package, and a payload part containing the data to be transferred. The method includes ..., d) applying the determined compression scheme to at least one search pattern, which has previously been stored in a search key register, and e) comparing the compressed search pattern to the stream of data. The method can be carried out by dedicated hardware.

  18. Compressive sampling in computed tomography: Method and application

    International Nuclear Information System (INIS)

    Hu, Zhanli; Liang, Dong; Xia, Dan; Zheng, Hairong

    2014-01-01

    Since Donoho and Candes et al. published their groundbreaking work on compressive sampling or compressive sensing (CS), CS theory has attracted a great deal of attention and become a hot topic, especially in biomedical imaging. Specifically, some CS-based methods have been developed to enable accurate reconstruction from sparse data in computed tomography (CT) imaging. In this paper, we review the progress in CS-based CT from the aspects of the three fundamental requirements of CS: sparse representation, incoherent sampling and the reconstruction algorithm. In addition, some potential applications of compressive sampling in CT are introduced.

  19. The Diagonal Compression Field Method using Circular Fans

    DEFF Research Database (Denmark)

    Hansen, Thomas

    2005-01-01

    This paper presents a new design method, which is a modification of the diagonal compression field method, the modification consisting of the introduction of circular fan stress fields. The traditional method does not allow changes of the concrete compression direction throughout a given beam if equilibrium is strictly required. This is conservative, since it is not possible to fully utilize the concrete strength in regions with low shear stresses. The larger the inclination (the smaller the cot θ value) of the uniaxial concrete stress, the more transverse shear reinforcement is needed; hence it would be optimal to vary the compression direction along the beam. Circular fan stress fields may be used whenever changes in the concrete compression direction are desired. To illustrate the new design method, a specific example of a prestressed concrete beam is calculated.

  20. Investigating low-frequency compression using the Grid method

    DEFF Research Database (Denmark)

    Fereczkowski, Michal; Dau, Torsten; MacDonald, Ewen

    2016-01-01

    There is an ongoing discussion about whether the amount of cochlear compression in humans at low frequencies (below 1 kHz) is as high as that at higher frequencies. It is controversial whether the compression affects the slope of the off-frequency forward masking curves at those frequencies. Here, the Grid method with a 2-interval 1-up 3-down tracking rule was applied to estimate forward masking curves at two characteristic frequencies: 500 Hz and 4000 Hz. The resulting curves and the corresponding basilar membrane input-output (BM I/O) functions were found to be comparable to those reported in the literature. Moreover, slopes of the low-level portions of the BM I/O functions estimated at 500 Hz were examined, to determine whether the 500-Hz off-frequency forward masking curves were affected by compression. Overall, the collected data showed a trend confirming the compressive behaviour. However, ...

  1. Compressive properties of aluminum foams by gas injection method

    Directory of Open Access Journals (Sweden)

    Zhang Huiming

    2012-08-01

    Full Text Available The compressive properties of aluminum foams produced by the gas injection method are investigated under both quasi-static and dynamic compressive loads in this paper. The experimental results indicate that the deformation of the aluminum foams goes through three stages — elastic deformation, plastic deformation and densification — during both quasi-static and dynamic compression. Aluminum foams with small average cell size or low porosity have high yield strength. An increase in strain rate leads to an increase in yield strength. The yield strength of the aluminum foams under dynamic loading is much greater than that under quasi-static loading. Dynamic compressive tests show that a higher strain rate gives rise to a higher energy absorption capacity, which demonstrates that the aluminum foams have remarkable sensitivity to the loading rate.

  2. A measurement method for piezoelectric material properties under longitudinal compressive stress: a compression test method for thin piezoelectric materials

    International Nuclear Information System (INIS)

    Kang, Lae-Hyong; Lee, Dae-Oen; Han, Jae-Hung

    2011-01-01

    We introduce a new compression test method for piezoelectric materials to investigate changes in piezoelectric properties under compressive stress. Until now, compression tests of piezoelectric materials have generally been conducted using bulky piezoelectric ceramics and a pressure block. The conventional method using the pressure block for thin piezoelectric patches, which are used in unimorph or bimorph actuators, is prone to unwanted bending and buckling. In addition, due to the constrained boundaries at both ends, the observed piezoelectric behavior contains boundary effects. In order to avoid these problems, the proposed method employs two guide plates with initial longitudinal tensile stress. By removing the tensile stress after bonding a piezoelectric material between the guide layers, longitudinal compressive stress is induced in the piezoelectric layer. Using the compression test specimens, two important properties which govern the actuation performance of the piezoelectric material, the piezoelectric strain coefficient and the elastic modulus, are measured to evaluate the effects of applied electric fields and re-poling. The results show that the piezoelectric strain coefficient d31 increases and the elastic modulus decreases when high voltage is applied to PZT5A, and that compression in the longitudinal direction decreases the piezoelectric strain coefficient d31 but does not affect the elastic modulus. We also found that re-poling of the piezoelectric material increases the elastic modulus, but the piezoelectric strain coefficient d31 is not changed much (slightly increased) by re-poling.

  3. Word aligned bitmap compression method, data structure, and apparatus

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kesheng; Shoshani, Arie; Otoo, Ekow

    2004-12-14

    The Word-Aligned Hybrid (WAH) bitmap compression method and data structure is a relatively efficient method for searching and performing logical, counting, and pattern location operations upon large datasets. The technique is comprised of a data structure and methods that are optimized for computational efficiency by using the WAH compression method, which typically takes advantage of the target computing system's native word length. WAH is particularly apropos to infrequently varying databases, including those found in the on-line analytical processing (OLAP) industry, due to the increased computational efficiency of the WAH compressed bitmap index. Some commercial database products already include some version of a bitmap index, which could possibly be replaced by the WAH bitmap compression techniques for potentially increased operation speed, as well as increased efficiencies in constructing compressed bitmaps. Combined together, this technique may be particularly useful for real-time business intelligence. Additional WAH applications may include scientific modeling, such as climate and combustion simulations, to minimize search time for analysis and subsequent data visualization.
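The core WAH idea — literal words carrying word-length-minus-one raw bits, and fill words run-length-encoding runs of all-zero or all-one groups — can be sketched as follows. This is a simplified illustration of the encoding scheme, not the patented implementation:

```python
def wah_encode(bits, w=32):
    """Encode a list of 0/1 bits into WAH-style words: a literal word
    holds w-1 raw bits; a fill word (MSB set) records the fill value in
    the next bit and a repeat count of identical groups in the rest."""
    g = w - 1
    bits = bits + [0] * (-len(bits) % g)          # pad to whole groups
    groups = [bits[i:i + g] for i in range(0, len(bits), g)]
    words = []
    for grp in groups:
        val = int("".join(map(str, grp)), 2)
        if val in (0, 2 ** g - 1):                # all-zero or all-one group
            fillbit = 1 if val else 0
            if words and words[-1][0] == 'fill' and words[-1][1] == fillbit:
                words[-1] = ('fill', fillbit, words[-1][2] + 1)  # extend the run
                continue
            words.append(('fill', fillbit, 1))
        else:
            words.append(('lit', val))
    out = []
    for wrd in words:                             # pack into machine words
        if wrd[0] == 'lit':
            out.append(wrd[1])                    # MSB clear marks a literal
        else:
            out.append((1 << (w - 1)) | (wrd[1] << (w - 2)) | wrd[2])
    return out
```

A sparse 620-bit bitmap with a single set bit collapses to three 32-bit words (zero-fill, one literal, zero-fill), which is exactly the behaviour that makes WAH indexes fast to scan.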

  4. Word aligned bitmap compression method, data structure, and apparatus

    Science.gov (United States)

    Wu, Kesheng; Shoshani, Arie; Otoo, Ekow

    2004-12-14

    The Word-Aligned Hybrid (WAH) bitmap compression method and data structure is a relatively efficient method for searching and performing logical, counting, and pattern location operations upon large datasets. The technique is comprised of a data structure and methods that are optimized for computational efficiency by using the WAH compression method, which typically takes advantage of the target computing system's native word length. WAH is particularly apropos to infrequently varying databases, including those found in the on-line analytical processing (OLAP) industry, due to the increased computational efficiency of the WAH compressed bitmap index. Some commercial database products already include some version of a bitmap index, which could possibly be replaced by the WAH bitmap compression techniques for potentially increased operation speed, as well as increased efficiencies in constructing compressed bitmaps. Combined together, this technique may be particularly useful for real-time business intelligence. Additional WAH applications may include scientific modeling, such as climate and combustion simulations, to minimize search time for analysis and subsequent data visualization.

  5. Technical note: New table look-up lossless compression method ...

    African Journals Online (AJOL)

    Technical note: New table look-up lossless compression method based on binary index archiving. ... International Journal of Engineering, Science and Technology ... This paper intends to present a common use archiver, made up following the dictionary technique and using the index archiving method as a simple and ...

  6. Convergence of a residual based artificial viscosity finite element method

    KAUST Repository

    Nazarov, Murtazo

    2013-02-01

    We present a residual based artificial viscosity finite element method to solve conservation laws. The Galerkin approximation is stabilized only by residual based artificial viscosity, without any least-squares, SUPG, or streamline diffusion terms. We prove convergence of the method, applied to a scalar conservation law in two space dimensions, toward a unique entropy solution for implicit time stepping schemes. © 2012 Elsevier B.V. All rights reserved.

  7. Novel 3D Compression Methods for Geometry, Connectivity and Texture

    Science.gov (United States)

    Siddeq, M. M.; Rodrigues, M. A.

    2016-06-01

    A large number of applications in medical visualization, games, engineering design, entertainment, heritage, e-commerce and so on require the transmission of 3D models over the Internet or over local networks. 3D data compression is an important requirement for fast data storage, access and transmission within bandwidth limitations. The Wavefront OBJ (object) file format is commonly used to share models due to its clear, simple design. Normally each OBJ file contains a large amount of data (e.g. vertices and triangulated faces, normals, texture coordinates and other parameters) describing the mesh surface. In this paper we introduce a new method to compress geometry, connectivity and texture coordinates by a novel Geometry Minimization Algorithm (GM-Algorithm) in connection with arithmetic coding. First, each vertex's (x, y, z) coordinates are encoded to a single value by the GM-Algorithm. Second, triangle faces are encoded by computing the differences between two adjacent vertex locations, which are compressed by arithmetic coding together with texture coordinates. We demonstrate the method on large data sets, achieving compression ratios between 87% and 99% without reduction in the number of reconstructed vertices and triangle faces. The decompression step is based on a Parallel Fast Matching Search Algorithm (Parallel-FMS) to recover the structure of the 3D mesh. A comparative analysis of compression ratios is provided with a number of commonly used 3D file formats such as VRML, OpenCTM and STL, highlighting the performance and effectiveness of the proposed method.
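The connectivity step — encoding faces as differences between adjacent vertex indices, followed by an entropy coder — can be sketched as below. The stand-ins are mine: plain consecutive differencing in place of the GM-Algorithm, and zlib in place of arithmetic coding; the point is only that deltas of mesh indices are small and highly compressible:

```python
import struct, zlib

def delta_encode_faces(faces):
    """Flatten triangle faces, store the difference between consecutive
    vertex indices, then entropy-code (zlib as arithmetic-coding stand-in)."""
    flat = [i for tri in faces for i in tri]
    deltas = [flat[0]] + [b - a for a, b in zip(flat, flat[1:])]
    raw = struct.pack(f"{len(deltas)}i", *deltas)   # 'i' is signed: deltas may be negative
    return zlib.compress(raw)

def delta_decode_faces(blob, n_faces):
    deltas = struct.unpack(f"{n_faces * 3}i", zlib.decompress(blob))
    flat, acc = [], 0
    for d in deltas:
        acc += d
        flat.append(acc)
    return [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]
```

In a well-ordered mesh most deltas are small integers near zero, so the entropy coder sees a highly skewed symbol distribution — the property the paper's connectivity compression exploits.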

  8. Method for artificially raising mule deer fawns

    Energy Technology Data Exchange (ETDEWEB)

    1978-10-01

    Eighteen captive Rocky Mountain mule deer fawns (Odocoileus hemionus hemionus), nine hand-raised and nine dam-raised, were used to evaluate an artificial rearing procedure. Hand-raised fawns were fed whole cow's milk supplemented with a daily addition of pediatric vitamins. Feeding intervals and quantities fed increased with increasing age of fawns. Blood values, body weight and mortality were used to determine nutritional and physiological status of fawns. Dam-raised fawns had significantly higher (P < 0.05) hemoglobin, hematocrit, total protein and cholesterol levels than hand-raised fawns. Mean body weight and growth rate were also significantly higher (P < 0.001) in dam-raised fawns. High mortality, 67%, occurred in dam-raised fawns as compared to 33% in hand-raised fawns. Resultant tameness in hand-raised fawns facilitated treatment of disease and handling of animals in experimental situations.

  9. Method of strengthening and stabilizing compressible soils

    Energy Technology Data Exchange (ETDEWEB)

    Casagrande, L.; Loughney, R.W.

    1968-06-04

    A method and means are described for stabilizing soil, consisting essentially of spacing holes about an area of the soil which is to be strengthened and stabilized. Each hole has placed therein a pipe which may be approximately 2 to 4 in. in diameter. Each pipe is provided with an expandable member capable of being expanded to a diameter of several feet. After the pipe with the expandable member fixed to it is placed in the hole, sand is placed around it, filling the space between the exterior walls of the expandable member and the walls of the hole, thus forming a sand drain. Thereafter the expandable member is put under pressure and expanded against the walls of the hole, placing pressure upon the soil and causing the water to be drained therefrom into the sand drains, through which it rises to the surface and may be disposed of. (14 claims)

  10. Space-time discontinuous Galerkin method for compressible flow

    NARCIS (Netherlands)

    Klaij, C.M.

    2006-01-01

    The space-time discontinuous Galerkin method allows the simulation of compressible flow in complex aerodynamical applications requiring moving, deforming and locally refined meshes. This thesis contains the space-time discretization of the physical model, a fully explicit solver for the resulting

  11. Combustion engine variable compression ratio apparatus and method

    Science.gov (United States)

    Lawrence,; Keith, E [Peoria, IL; Strawbridge, Bryan E [Dunlap, IL; Dutart, Charles H [Washington, IL

    2006-06-06

    An apparatus and method for varying a compression ratio of an engine having a block and a head mounted thereto. The apparatus and method includes a cylinder having a block portion and a head portion, a piston linearly movable in the block portion of the cylinder, a cylinder plug linearly movable in the head portion of the cylinder, and a valve located in the cylinder plug and operable to provide controlled fluid communication with the block portion of the cylinder.

  12. Review of Artificial Abrasion Test Methods for PV Module Technology

    Energy Technology Data Exchange (ETDEWEB)

    Miller, David C. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Muller, Matt T. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Simpson, Lin J. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-08-01

    This review is intended to identify the method or methods--and the basic details of those methods--that might be used to develop an artificial abrasion test. Methods used in the PV literature were compared with their closest implementation in existing standards. Also, meetings of the International PV Quality Assurance Task Force Task Group 12-3 (TG12-3, which is concerned with coated glass) were used to identify established test methods. Feedback from the group, which included many of the authors from the PV literature, included insights not explored within the literature itself. The combined experience and examples from the literature are intended to provide an assessment of the present industry practices and an informed path forward. Recommendations toward artificial abrasion test methods are then identified based on the experiences in the literature and feedback from the PV community. The review here is strictly focused on abrasion. Assessment methods, including optical performance (e.g., transmittance or reflectance), surface energy, and verification of chemical composition were not examined. Methods of artificially soiling PV modules or other specimens were not examined. The weathering of artificial or naturally soiled specimens (which may ultimately include combined temperature and humidity, thermal cycling and ultraviolet light) were also not examined. A sense of the purpose or application of an abrasion test method within the PV industry should, however, be evident from the literature.

  13. Simple numerical method for predicting steady compressible flows

    Science.gov (United States)

    Vonlavante, Ernst; Nelson, N. Duane

    1986-01-01

    A numerical method for solving the isenthalpic form of the governing equations for compressible viscous and inviscid flows was developed. The method was based on the concept of flux vector splitting in its implicit form. The method was tested on several demanding inviscid and viscous configurations. Two different forms of the implicit operator were investigated. The time marching to steady state was accelerated by the implementation of the multigrid procedure. Its various forms very effectively increased the rate of convergence of the present scheme. Applications to inviscid and viscous transonic flows are discussed and compared with other methods and experimental measurements. It is shown that accurate and efficient transonic airfoil calculations can be made on the Cray-1 computer in less than 2 min.

  14. A New Static and Fatigue Compression Test Method for Composites

    DEFF Research Database (Denmark)

    Bech, Jakob Ilsted; Goutianos, Stergios; Løgstrup Andersen, Tom

    2011-01-01

    A new test method to determine the compressive properties of composite materials under both static and fatigue loading was developed. The novel fixture is based on the concept of transmitting the load by a fixed ratio of end-to-shear loading. The end-to-shear load ratio is kept fixed during the test through a mechanical mechanism, which automatically maintains the gripping pressure. The combined loading method has proven very efficient in static loading and is used in the new fixture, which is specially designed for fatigue testing. Optimum gripping (shear loading) and alignment of the test coupon are achieved throughout the fatigue life. The fatigue strength obtained is more reliable because bending of the specimen due to poor gripping and alignment is minimised. The application of the new fixture to static and fatigue compression is demonstrated using unidirectional carbon ...

  15. A new method of artificial latent fingerprint creation using artificial sweat and inkjet printer.

    Science.gov (United States)

    Hong, Sungwook; Hong, Ingi; Han, Aleum; Seo, Jin Yi; Namgung, Juyoung

    2015-12-01

    In order to study fingerprinting in the field of forensic science, it is very important to have two or more latent fingerprints with identical chemical composition and intensity. In reality, however, it is impossible to obtain identical fingerprints, because fingerprints come out slightly differently every time. A previous research study proposed an artificial fingerprint creation method in which inkjet ink was replaced with a solution of amino acids and sodium chloride: the components of human sweat. However, this method had some drawbacks: divalent cations were not added when formulating the artificial sweat solution, and diluted solutions were used for creating weakly deposited latent fingerprints. In this study, a method was developed to overcome the drawbacks of the previous study. Several divalent cations were added because the amino acid-ninhydrin (or some of its analogues) complex is known to react with divalent cations to produce a photoluminescent product; similarly, the amino acid-1,2-indanedione complex is known to be catalyzed by a small amount of zinc ions to produce a highly photoluminescent product. Also, a new technique was developed which enables the printing intensity of the latent fingerprint patterns to be adjusted. In this method, image processing software is used to control the intensity of the master fingerprint patterns, which adjusts the printing intensity of the latent fingerprints. This new method opens the way to producing more realistic artificial fingerprints at various strengths with one artificial sweat working solution. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  16. Implicit upwind methods for the compressible Navier-Stokes equations

    Science.gov (United States)

    Coakley, T. J.

    1983-01-01

    A class of implicit upwind-differencing methods for the compressible Navier-Stokes equations is described and applied. The methods are based on the use of local eigenvalues or wave speeds to control spatial differencing of inviscid terms and are aimed at increasing the level of accuracy and stability achievable in computation. Techniques for accelerating the rate of convergence to a steady-state solution are also used. Applications to inviscid and viscous transonic flows are discussed and compared with other methods and experimental measurements. It is shown that accurate and efficient transonic airfoil calculations can be made on the Cray-1 coomputer in less than 2 min.

  17. A GPU-accelerated implicit meshless method for compressible flows

    Science.gov (United States)

    Zhang, Jia-Le; Ma, Zhi-Hua; Chen, Hong-Quan; Cao, Cheng

    2018-05-01

    This paper develops a recently proposed GPU-based two-dimensional explicit meshless method (Ma et al., 2014) by devising and implementing an efficient parallel LU-SGS implicit algorithm to further improve the computational efficiency. The capability of the original 2D meshless code is extended to deal with 3D complex compressible flow problems. To resolve the inherent data dependency of the standard LU-SGS method, which causes thread-racing conditions destabilizing numerical computation, a generic rainbow coloring method is presented and applied to organize the computational points into different groups by painting neighboring points with different colors. The original LU-SGS method is modified and parallelized accordingly to perform calculations in a color-by-color manner. The CUDA Fortran programming model is employed to develop the key kernel functions to apply boundary conditions, calculate time steps, evaluate residuals as well as advance and update the solution in the temporal space. A series of two- and three-dimensional test cases including compressible flows over single- and multi-element airfoils and an M6 wing are carried out to verify the developed code. The obtained solutions agree well with experimental data and other computational results reported in the literature. Detailed analysis of the performance of the developed code reveals that the developed CPU-based implicit meshless method is at least four to eight times faster than its explicit counterpart. The computational efficiency of the implicit method could be further improved by ten to fifteen times on the GPU.
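The rainbow coloring used to break the LU-SGS data dependency amounts to graph coloring of the point-neighbour graph: points of the same color share no neighbours, so each color class can be swept in parallel without thread races. A greedy sketch (the paper's exact coloring scheme may differ):

```python
def greedy_coloring(adjacency):
    """Greedy 'rainbow' coloring: assign each point the smallest color
    not used by any already-colored neighbour. Points sharing a color
    have no edges between them and can be updated concurrently."""
    colors = {}
    # color high-degree points first; this tends to keep the color count low
    for node in sorted(adjacency, key=lambda n: -len(adjacency[n])):
        used = {colors[nb] for nb in adjacency[node] if nb in colors}
        c = 0
        while c in used:
            c += 1
        colors[node] = c
    return colors
```

After coloring, the implicit sweep visits the color classes one by one (color-by-color, as in the paper), launching all points of a class as one parallel batch.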

  18. A Vortex Particle-Mesh method for subsonic compressible flows

    Science.gov (United States)

    Parmentier, Philippe; Winckelmans, Grégoire; Chatelain, Philippe

    2018-02-01

    This paper presents the implementation and validation of a remeshed Vortex Particle-Mesh (VPM) method capable of simulating complex compressible and viscous flows. It is supplemented with a radiation boundary condition in order for the method to accommodate the radiating quantities of the flow. The efficiency of the methodology relies on the use of an underlying grid: it allows the use of an FFT-based Poisson solver to calculate the velocity field, and the use of high-order isotropic finite differences to evaluate the non-advective terms in the Lagrangian form of the conservation equations. The Möhring analogy is then also used to further obtain the far-field sound produced by two co-rotating Gaussian vortices. It is demonstrated that the method is in excellent quantitative agreement with reference results obtained using a high-order Eulerian method and a high-order remeshed Vortex Particle (VP) method.
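    The FFT-based Poisson solve at the core of such grid-based vortex methods can be sketched as follows for a 2-D periodic domain; the grid size and the analytic test vorticity are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Solve lap(psi) = -omega on a periodic [0, 2*pi)^2 grid with FFTs,
# then recover the velocity u = d(psi)/dy, v = -d(psi)/dx.
def velocity_from_vorticity(omega):
    n = omega.shape[0]
    k = np.fft.fftfreq(n, d=1.0 / n)           # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                              # avoid division by zero (mean mode)
    psi_hat = np.fft.fft2(omega) / k2           # psi_hat = omega_hat / |k|^2
    psi_hat[0, 0] = 0.0                         # zero-mean streamfunction
    u = np.real(np.fft.ifft2(1j * ky * psi_hat))
    v = np.real(np.fft.ifft2(-1j * kx * psi_hat))
    return u, v

# Analytic check: psi = sin(x)sin(y) gives omega = 2 sin(x)sin(y),
# u = sin(x)cos(y), v = -cos(x)sin(y).
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u, v = velocity_from_vorticity(2 * np.sin(X) * np.sin(Y))
assert np.allclose(u, np.sin(X) * np.cos(Y), atol=1e-8)
assert np.allclose(v, -np.cos(X) * np.sin(Y), atol=1e-8)
```

    For periodic grids this solve is spectrally accurate and costs O(n² log n), which is what makes the underlying mesh attractive in a VPM method.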

  19. On the estimation method of compressed air consumption during pneumatic caisson sinking

    OpenAIRE

    平川, 修治; ヒラカワ, シュウジ; Shuji, HIRAKAWA

    1990-01-01

    There are several methods for estimating compressed air consumption during pneumatic caisson sinking, and comparing them requires that the estimates be made under the same conditions. This paper proposes methods that are able to estimate accurately the compressed air consumption during pneumatic caisson sinking at present.

  20. Biometric and Emotion Identification: An ECG Compression Based Method

    Directory of Open Access Journals (Sweden)

    Susana Brás

    2018-04-01

    Full Text Available We present an innovative and robust solution to both biometric and emotion identification using the electrocardiogram (ECG). The ECG represents the electrical signal that comes from the contraction of the heart muscles and indirectly reflects the flow of blood inside the heart; it is known to convey a key that allows biometric identification. Moreover, due to its relationship with the nervous system, it also varies as a function of the emotional state. The use of information-theoretic data models, associated with data compression algorithms, allowed us to effectively compare ECG records and infer the person's identity, as well as the emotional state at the time of data collection. The proposed method does not require ECG wave delineation or alignment, which reduces preprocessing error. The method is divided into three steps: (1) conversion of the real-valued ECG record into a symbolic time-series, using a quantization process; (2) conditional compression of the symbolic representation of the ECG, using the symbolic ECG records stored in the database as reference; (3) identification of the ECG record class, using a 1-NN (nearest neighbor) classifier. We obtained over 98% accuracy in biometric identification, whereas in emotion recognition we attained over 90%. Therefore, the method adequately identifies the person and his/her emotion. The proposed method is also flexible and may be adapted to different problems by altering the templates used for training the model.

  1. Biometric and Emotion Identification: An ECG Compression Based Method.

    Science.gov (United States)

    Brás, Susana; Ferreira, Jacqueline H T; Soares, Sandra C; Pinho, Armando J

    2018-01-01

    We present an innovative and robust solution to both biometric and emotion identification using the electrocardiogram (ECG). The ECG represents the electrical signal that comes from the contraction of the heart muscles and indirectly reflects the flow of blood inside the heart; it is known to convey a key that allows biometric identification. Moreover, due to its relationship with the nervous system, it also varies as a function of the emotional state. The use of information-theoretic data models, associated with data compression algorithms, allowed us to effectively compare ECG records and infer the person's identity, as well as the emotional state at the time of data collection. The proposed method does not require ECG wave delineation or alignment, which reduces preprocessing error. The method is divided into three steps: (1) conversion of the real-valued ECG record into a symbolic time-series, using a quantization process; (2) conditional compression of the symbolic representation of the ECG, using the symbolic ECG records stored in the database as reference; (3) identification of the ECG record class, using a 1-NN (nearest neighbor) classifier. We obtained over 98% accuracy in biometric identification, whereas in emotion recognition we attained over 90%. Therefore, the method adequately identifies the person and his/her emotion. The proposed method is also flexible and may be adapted to different problems by altering the templates used for training the model.
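    The conditional-compression step (2) and the 1-NN step (3) can be sketched with a standard general-purpose compressor; `zlib` here stands in for the information-theoretic models of the paper, and the symbolic records are toy data, not real ECG.

```python
import zlib

# Approximate the conditional compressibility C(x|y) by C(y + x) - C(y):
# a query compresses cheapest against the reference it most resembles.
def clen(s):
    return len(zlib.compress(s, 9))

def cond_cost(x, ref):
    return clen(ref + x) - clen(ref)

# Toy "symbolic ECG" database: one reference record per class.
db = {
    "person_A": b"abcabcabcabcabcabc" * 20,
    "person_B": b"xyzxyzxyzxyzxyzxyz" * 20,
}
query = b"abcabcabc" * 10

# 1-NN classification: pick the class whose reference compresses the
# query most cheaply.
best = min(db, key=lambda k: cond_cost(query, db[k]))
assert best == "person_A"
```

    No alignment or wave delineation is needed: similarity is measured purely through how well one record's regularities predict the other's.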

  2. Artificial-intelligence methods in decision and control systems

    Energy Technology Data Exchange (ETDEWEB)

    Lirov, Y.V.

    1987-01-01

    Artificial-intelligence methods were applied to the design and implementation of some decision and control systems. A so-called semantic approach to control and decisions was developed and artificial-intelligence methods were used to provide a realizable implementation. These concepts were tested using applications from robust identification and control of time-varying systems, intelligent navigation, and intelligent simulation of differential games. An aspect of a generalized traveling-salesman problem was solved, and intelligent simulation of differential games was implemented in Prolog using an example system for automated learning by tactical decision systems in air combat. These implementations were successful and provide several advantages over traditional approaches. The limitations of these concepts were identified and suggestions for future work are made.

  3. Iterative methods for compressible Navier-Stokes and Euler equations

    Energy Technology Data Exchange (ETDEWEB)

    Tang, W.P.; Forsyth, P.A.

    1996-12-31

    This workshop will focus on methods for solution of compressible Navier-Stokes and Euler equations. In particular, attention will be focused on the interaction between the methods used to solve the non-linear algebraic equations (e.g. full Newton or first order Jacobian) and the resulting large sparse systems. Various types of block and incomplete LU factorization will be discussed, as well as stability issues, and the use of Newton-Krylov methods. These techniques will be demonstrated on a variety of model transonic and supersonic airfoil problems. Applications to industrial CFD problems will also be presented. Experience with the use of C++ for solution of large scale problems will also be discussed. The format for this workshop will be four fifteen minute talks, followed by a roundtable discussion.

  4. Turbulence modeling methods for the compressible Navier-Stokes equations

    Science.gov (United States)

    Coakley, T. J.

    1983-01-01

    Turbulence modeling methods for the compressible Navier-Stokes equations, including several zero- and two-equation eddy-viscosity models, are described and applied. Advantages and disadvantages of the models are discussed with respect to mathematical simplicity, conformity with physical theory, and numerical compatibility with methods. A new two-equation model is introduced which shows advantages over other two-equation models with regard to numerical compatibility and the ability to predict low-Reynolds-number transitional phenomena. Calculations of various transonic airfoil flows are compared with experimental results. A new implicit upwind-differencing method is used which enhances numerical stability and accuracy, and leads to rapidly convergent steady-state solutions.

  5. Prediction of modulus of elasticity and compressive strength of concrete specimens by means of artificial neural networks

    Directory of Open Access Journals (Sweden)

    José Fernando Moretti

    2016-01-01

    Full Text Available Currently, artificial neural networks are being widely used in various fields of science and engineering. Neural networks have the ability to learn through experience and existing examples, and then generate solutions and answers to new problems, even when the effects of their variables are non-linear. The aim of this study is to use a feed-forward neural network with the back-propagation technique to predict the values of compressive strength and modulus of elasticity, at 28 days, of different concrete mixtures prepared and tested in the laboratory. It demonstrates the ability of neural networks to quantify the strength and elastic modulus of concrete specimens prepared using different mix proportions.
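    A minimal feed-forward network trained with back-propagation, in the spirit of the abstract, might look as follows; the mix-proportion data, the nonlinear target rule, and the network size are synthetic assumptions, not the study's laboratory data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in data: three mix-proportion variables mapped to a
# single "strength" target by a nonlinear rule (standardized below).
X = rng.uniform(0, 1, size=(200, 3))
y = 30 + 20 * X[:, 0] - 10 * X[:, 1] * X[:, 2]
y = ((y - y.mean()) / y.std()).reshape(-1, 1)

# One hidden tanh layer trained by plain full-batch back-propagation.
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y                        # gradient of 0.5*MSE w.r.t. pred
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)        # back-propagate through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

rmse = float(np.sqrt(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2)))
```

    After training, `rmse` on the standardized target drops well below the unit variance of the data, showing the network has captured the nonlinear relation.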

  6. Artificial urinary conduit construction using tissue engineering methods.

    Science.gov (United States)

    Kloskowski, Tomasz; Pokrywczyńska, Marta; Drewa, Tomasz

    2015-01-01

    Incontinent urinary diversion using an ileal conduit is the most popular method used by urologists after bladder cystectomy resulting from muscle invasive bladder cancer. The use of gastrointestinal tissue is related to a series of complications with the necessity of surgical procedure extension which increases the time of surgery. Regenerative medicine together with tissue engineering techniques gives hope for artificial urinary conduit construction de novo without affecting the ileum. In this review we analyzed history of urinary diversion together with current attempts in urinary conduit construction using tissue engineering methods. Based on literature and our own experience we presented future perspectives related to the artificial urinary conduit construction. A small number of papers in the field of tissue engineered urinary conduit construction indicates that this topic requires more attention. Three main factors can be distinguished to resolve this topic: proper scaffold construction along with proper regeneration of both the urothelium and smooth muscle layers. Artificial urinary conduit has a great chance to become the first commercially available product in urology constructed by regenerative medicine methods.

  7. Development of a diagnostic expert system for eddy current data analysis using applied artificial intelligence methods

    Energy Technology Data Exchange (ETDEWEB)

    Upadhyaya, B.R.; Yan, W. [Tennessee Univ., Knoxville, TN (United States). Dept. of Nuclear Engineering; Behravesh, M.M. [Electric Power Research Institute, Palo Alto, CA (United States); Henry, G. [EPRI NDE Center, Charlotte, NC (United States)

    1999-09-01

    A diagnostic expert system that integrates database management methods, artificial neural networks, and decision-making using fuzzy logic has been developed for the automation of steam generator eddy current test (ECT) data analysis. The new system, known as EDDYAI, considers the following key issues: (1) digital eddy current test data calibration, compression, and representation; (2) development of robust neural networks with low probability of misclassification for flaw depth estimation; (3) flaw detection using fuzzy logic; (4) development of an expert system for database management, compilation of a trained neural network library, and a decision module; and (5) evaluation of the integrated approach using eddy current data. The implementation to field test data includes the selection of proper feature vectors for ECT data analysis, development of a methodology for large eddy current database management, artificial neural networks for flaw depth estimation, and a fuzzy logic decision algorithm for flaw detection. A large eddy current inspection database from the Electric Power Research Institute NDE Center is being utilized in this research towards the development of an expert system for steam generator tube diagnosis. The integration of ECT data pre-processing as part of the data management, fuzzy logic flaw detection technique, and tube defect parameter estimation using artificial neural networks are the fundamental contributions of this research. (orig.)

  8. Hydrogeological Methods for Assessing Feasibility of Artificial Recharge

    Science.gov (United States)

    Kim, Y.; Koo, M.; Lee, K.; Moon, D.; Barry, J. M.

    2009-12-01

    This study presents hydrogeological methods to assess the feasibility of artificial recharge in Jeju Island, Korea, both for securing sustainable groundwater resources and for mitigating severe floods. The Jeju-friendly Aquifer Recharge Technology (J-ART) in this study is being developed by capturing ephemeral stream water without interfering with the environment (natural recharge or the eco-system), storing the flood water in reservoirs, recharging it through designed boreholes after appropriate water treatment, and then making it available at down-gradient production wells. Many hydrogeological methods, including physico-chemical surface water and groundwater monitoring, geophysical surveys, stable isotope analysis, and groundwater modeling, have been employed to predict and assess the flow and circulation of the artificially recharged surface water between the recharge and discharge areas. In the physico-chemical water monitoring survey, analyses of surface water level and velocity, of water quality including turbidity, and of suspended-soil settling velocity were performed. To characterize the subsurface hydrogeology, an injection test was executed; the results are a transmissivity of 118-336 m2/day and a maximum intake water capacity of 4,367-11,032 m3/day. Characterizing groundwater flow from the recharge area to the discharge area is needed to assess the efficiency of J-ART. Resistivity logging was carried out to predict water flow in the unsaturated zone during artificial recharge, based on inverse modeling and resistivity change patterns. Stable isotopes of deuterium and oxygen-18 in surface waters and groundwaters have been determined to interpret mixing and flow in groundwaters impacted by artificial recharge.
A numerical model simulating groundwater flow and heat transport to assess feasibility of artificial recharge has been developed using the hydraulic properties of aquifers, groundwater levels, borehole temperatures, and meteorological

  9. A guided wave dispersion compensation method based on compressed sensing

    Science.gov (United States)

    Xu, Cai-bin; Yang, Zhi-bo; Chen, Xue-feng; Tian, Shao-hua; Xie, Yong

    2018-03-01

    The ultrasonic guided wave has emerged as a promising tool for structural health monitoring (SHM) and nondestructive testing (NDT) due to its capability to propagate over long distances with minimal loss and its sensitivity to both surface and subsurface defects. The dispersion effect, however, degrades the temporal and spatial resolution of guided waves. A novel ultrasonic guided-wave processing method for dispersion compensation of both single-mode and multi-mode guided waves is proposed in this work based on compressed sensing, in which a dispersive signal dictionary is built by utilizing the dispersion curves of the guided wave modes in order to sparsely decompose the recorded dispersive guided waves. Dispersion-compensated guided waves are obtained by utilizing a non-dispersive signal dictionary together with the results of the sparse decomposition. Numerical simulations and experiments are implemented to verify the effectiveness of the developed method for both single-mode and multi-mode guided waves.
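    The sparse-decomposition step can be illustrated with a greedy matching pursuit over a dictionary of "dispersed" atoms, followed by re-synthesis with a "non-dispersed" dictionary; the Gaussian-pulse atoms below are a toy stand-in for atoms built from the actual mode dispersion curves.

```python
import numpy as np

# Atoms: unit-norm Gaussian pulses; broad pulses play the role of
# dispersed wave packets, narrow pulses the compensated ones.
t = np.linspace(0, 1, 400)

def atom(center, width):
    a = np.exp(-((t - center) ** 2) / (2 * width**2))
    return a / np.linalg.norm(a)

centers = np.linspace(0.1, 0.9, 40)
D_disp = np.column_stack([atom(c, 0.05) for c in centers])   # "dispersed"
D_comp = np.column_stack([atom(c, 0.02) for c in centers])   # "compensated"

# Signal: two dispersed echoes with different amplitudes.
signal = 1.0 * D_disp[:, 8] + 0.6 * D_disp[:, 25]

# Greedy matching pursuit: peel off the best-matching atom twice.
coeffs = np.zeros(len(centers))
r = signal.copy()
for _ in range(2):
    proj = D_disp.T @ r
    k = np.argmax(np.abs(proj))
    coeffs[k] += proj[k]
    r = r - proj[k] * D_disp[:, k]

# Compensation: re-synthesize the sparse code with the sharp dictionary.
compensated = D_comp @ coeffs
assert set(np.nonzero(coeffs)[0]) == {8, 25}
```

    The recovered arrival positions and amplitudes are preserved, but each echo is rendered with a sharp atom, which is the dispersion-compensation idea in miniature.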

  10. Method for Calculation of Steam-Compression Heat Transformers

    Directory of Open Access Journals (Sweden)

    S. V. Zditovetckaya

    2012-01-01

    Full Text Available The paper considers a method for joint numerical analysis of the cycle parameters and heat-exchange equipment of a steam-compression heat transformer contour that takes into account non-stationary operational modes and irreversible losses in the devices and pipeline contour. The method has been realized in the form of a software package and can be used for the design or selection of a heat transformer with due account of the coolant and the actual equipment included in its structure. The paper presents investigation results revealing the influence on the power efficiency of the heat transformer of pressure losses in the evaporator and the condenser on the coolant side caused by friction and local resistances, for operation in the modes of a refrigerating and heating installation and a heat pump. The actually obtained operational parameters of the heat pump in nominal and off-design operational modes depend on the structure of the specific contour equipment.

  11. A Stabilized Finite Element Method for Compressible Phase Change Problems

    Science.gov (United States)

    Zhang, Yu; Yang, Fan; Chandra, Anirban; Shams, Ehsan; Shephard, Mark; Sahni, Onkar; Oberai, Assad

    2017-11-01

    The numerical modeling of multi-phase interfacial phase change phenomena, such as evaporation of a liquid or combustion of a solid, is essential for several important applications. A mathematically consistent and robust computational approach to address challenges such as large density ratios across phases, discontinuous fields at the interface, rapidly evolving geometries, and compressible phases is presented in this work. We use stabilized finite element methods on unstructured grids for solving the compressible Navier-Stokes equations. The rate of phase change is predicted from thermodynamic variables on both sides of the interface. We enforce the continuity of temperature and of velocity in the tangential direction by using a penalty approach, while appropriate jump conditions derived from conservation laws across the interface are handled by using discontinuous interpolations. The interface is explicitly tracked using the arbitrary Lagrangian-Eulerian (ALE) technique, wherein the grid at the interface is constrained to move with the interface. This work is supported by the U.S. Army Grants W911NF1410301 and W911NF16C0117.

  12. The production of fully deacetylated chitosan by compression method

    Directory of Open Access Journals (Sweden)

    Xiaofei He

    2016-03-01

    Full Text Available Chitosan's activities are significantly affected by the degree of deacetylation (DDA), while fully deacetylated chitosan is difficult to produce on a large scale. Therefore, this paper introduces a compression method for preparing 100% deacetylated chitosan with less environmental pollution. The product is characterized by XRD, FT-IR, UV and HPLC. The 100% fully deacetylated chitosan is produced under low-alkali-concentration and high-pressure conditions, requiring only a 15% alkali solution and a 1:10 chitosan powder to NaOH solution ratio at 0.11–0.12 MPa for 120 min. When the alkali concentration is varied from 5% to 15%, chitosan with an ultra-high DDA value (up to 95%) is produced.

  13. The Basic Principles and Methods of the System Approach to Compression of Telemetry Data

    Science.gov (United States)

    Levenets, A. V.

    2018-01-01

    The task of compressing measurement data remains urgent for information-measurement systems. This paper offers the basic principles necessary for designing highly effective systems for the compression of telemetric information. The basis of the offered principles is the representation of a telemetric frame as a whole information space in which existing correlations can be found. Methods of data transformation and compression algorithms realizing the offered principles are described. The compression ratio of the offered compression algorithm is about 1.8 times higher than that of a classic algorithm. The results of the research into these methods and algorithms thus show their good prospects.
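    The principle of exploiting correlations within a telemetry frame before compressing can be illustrated with a simple per-channel delta transform; the frame layout and the slowly varying channel models below are illustrative assumptions.

```python
import zlib

# Telemetry frames of three slowly varying channels: raw samples compress
# moderately, but after a decorrelating per-channel delta transform the
# stream becomes nearly constant and compresses far better.
frames = [[1000 + 3 * i, 500 + 2 * i, 250 + i] for i in range(200)]

def pack(values):
    return b"".join(v.to_bytes(4, "big", signed=True) for v in values)

raw = pack([v for f in frames for v in f])

deltas, prev = [], [0, 0, 0]
for f in frames:
    deltas.extend(a - b for a, b in zip(f, prev))
    prev = f
transformed = pack(deltas)

ratio_raw = len(raw) / len(zlib.compress(raw, 9))
ratio_delta = len(raw) / len(zlib.compress(transformed, 9))
assert ratio_delta > ratio_raw
```

    The transform is lossless and trivially invertible (a running sum per channel), so the gain in compression ratio comes purely from exposing the inter-sample correlation to the entropy coder.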

  14. Prediction of shear wave velocity using empirical correlations and artificial intelligence methods

    Directory of Open Access Journals (Sweden)

    Shahoo Maleki

    2014-06-01

    Full Text Available Good understanding of the mechanical properties of rock formations is essential during the development and production phases of a hydrocarbon reservoir. Conventionally, these properties are estimated from petrophysical logs, with compressional and shear sonic data being the main input to the correlations. In many cases, however, shear sonic data are not acquired during well logging, often for cost-saving purposes. In this case, shear wave velocity is estimated using available empirical correlations or the artificial intelligence methods proposed during the last few decades. In this paper, petrophysical logs corresponding to a well drilled in the southern part of Iran were used to estimate the shear wave velocity using empirical correlations as well as two robust artificial intelligence methods known as Support Vector Regression (SVR) and Back-Propagation Neural Network (BPNN). Although the results obtained by SVR seem to be reliable, the estimated values are not very precise; considering the importance of shear sonic data as input into different models, this study suggests acquiring shear sonic data during well logging. It is important to note that the benefits of having reliable shear sonic data for the estimation of rock formation mechanical properties will compensate for the possible additional costs of acquiring a shear log.

  15. Prediction of shear wave velocity using empirical correlations and artificial intelligence methods

    Science.gov (United States)

    Maleki, Shahoo; Moradzadeh, Ali; Riabi, Reza Ghavami; Gholami, Raoof; Sadeghzadeh, Farhad

    2014-06-01

    Good understanding of the mechanical properties of rock formations is essential during the development and production phases of a hydrocarbon reservoir. Conventionally, these properties are estimated from petrophysical logs, with compressional and shear sonic data being the main input to the correlations. In many cases, however, shear sonic data are not acquired during well logging, often for cost-saving purposes. In this case, shear wave velocity is estimated using available empirical correlations or the artificial intelligence methods proposed during the last few decades. In this paper, petrophysical logs corresponding to a well drilled in the southern part of Iran were used to estimate the shear wave velocity using empirical correlations as well as two robust artificial intelligence methods known as Support Vector Regression (SVR) and Back-Propagation Neural Network (BPNN). Although the results obtained by SVR seem to be reliable, the estimated values are not very precise; considering the importance of shear sonic data as input into different models, this study suggests acquiring shear sonic data during well logging. It is important to note that the benefits of having reliable shear sonic data for the estimation of rock formation mechanical properties will compensate for the possible additional costs of acquiring a shear log.
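    As one concrete example of the empirical correlations mentioned above (not necessarily one used in this study), Castagna's "mudrock line" (Castagna et al., 1985) relates shear to compressional velocity in water-saturated clastic rocks:

```python
# Castagna's mudrock line: Vs = 0.8621 * Vp - 1.1724, velocities in km/s.
# Valid only for water-saturated clastics; other lithologies need other fits.
def castagna_vs(vp_kms):
    return 0.8621 * vp_kms - 1.1724

vp = 3.0              # km/s, a typical sonic-log value for illustration
vs = castagna_vs(vp)  # ~1.414 km/s
assert 1.4 < vs < 1.5
```

    Such one-line correlations are cheap but lithology-specific, which is precisely why the paper compares them against data-driven SVR and BPNN estimators.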

  16. Interpolation decoding method with variable parameters for fractal image compression

    International Nuclear Information System (INIS)

    He Chuanjiang; Li Gaoping; Shen Xiaona

    2007-01-01

    The interpolation fractal decoding method, introduced in [He C, Yang SX, Huang X. Progressive decoding method for fractal image compression. IEE Proc Vis Image Signal Process 2004;3:207-13], involves progressively generating the decoded image by means of an interpolation iterative procedure with a constant parameter. It is well known that the majority of image details are added in the first steps of iteration in conventional fractal decoding; hence the constant parameter of the interpolation decoding method must be set to a small value in order to achieve good progressive decoding. However, this requires an extremely large number of iterations to converge. It is thus reasonable for some applications to slow down the iterative process in the first stages of decoding and then to accelerate it afterwards (e.g., at whichever iteration is needed). To achieve this goal, this paper proposes an interpolation decoding scheme with variable (iteration-dependent) parameters and proves the convergence of the decoding process mathematically. Experimental results demonstrate that the proposed scheme achieves the above-mentioned goal.

  17. Artificial intelligence methods in deregulated power systems operations

    Science.gov (United States)

    Ilic, Jovan

    With the introduction of the power systems deregulation, many classical power transmission and distribution optimization tools became inadequate. Optimal Power Flow and Unit Commitment are common computer programs used in the regulated power industry. This work is addressing the Optimal Power Flow and Unit Commitment in the new deregulated environment. Optimal Power Flow is a high dimensional, non-linear, and non-convex optimization problem. As such, it is even now, after forty years since its introduction, a research topic without a widely accepted solution able to encompass all areas of interest. Unit Commitment is a high dimensional, combinatorial problem which should ideally include the Optimal Power Flow in its solution. The dimensionality of a typical Unit Commitment problem is so great that even the enumeration of all the combinations would take too much time for any practical purposes. This dissertation attacks the Optimal Power Flow problem using non-traditional tools from the Artificial Intelligence arena. Artificial Intelligence optimization methods are based on stochastic principles. Usually, stochastic optimization methods are successful where all other classical approaches fail. We will use Genetic Programming optimization for both Optimal Power Flow and Unit Commitment. Long processing times will also be addressed through supervised machine learning.

  18. Spectral Element Method for the Simulation of Unsteady Compressible Flows

    Science.gov (United States)

    Diosady, Laslo Tibor; Murman, Scott M.

    2013-01-01

    This work uses a discontinuous-Galerkin spectral-element method (DGSEM) to solve the compressible Navier-Stokes equations [1-3]. The inviscid flux is computed using the approximate Riemann solver of Roe [4]. The viscous fluxes are computed using the second form of Bassi and Rebay (BR2) [5] in a manner consistent with the spectral-element approximation. The method of lines with the classical 4th-order explicit Runge-Kutta scheme is used for time integration. Results for polynomial orders up to p = 15 (16th order) are presented. The code is parallelized using the Message Passing Interface (MPI). The computations presented in this work are performed using the Sandy Bridge nodes of the NASA Pleiades supercomputer at NASA Ames Research Center. Each Sandy Bridge node consists of two eight-core Intel Xeon E5-2670 processors with a clock speed of 2.6 GHz and 2 GB of memory per core. On a Sandy Bridge node the Tau Benchmark [6] runs in a time of 7.6 s.
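    The classical 4th-order Runge-Kutta time integrator named in the abstract, applied in a method-of-lines setting (semi-discrete system du/dt = R(u)), is simply:

```python
import numpy as np

# One step of the classical explicit RK4 scheme for du/dt = R(u).
def rk4_step(R, u, dt):
    k1 = R(u)
    k2 = R(u + 0.5 * dt * k1)
    k3 = R(u + 0.5 * dt * k2)
    k4 = R(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Verify 4th-order accuracy on the linear test problem u' = -u, u(0) = 1,
# integrated to t = 1 (exact answer exp(-1)).
for dt, tol in [(0.1, 1e-6), (0.05, 1e-7)]:
    u = np.array([1.0])
    for _ in range(round(1.0 / dt)):
        u = rk4_step(lambda x: -x, u, dt)
    assert abs(u[0] - np.exp(-1.0)) < tol
```

    In the DGSEM code, `R(u)` would be the spatial residual assembled from the Roe inviscid fluxes and the BR2 viscous fluxes; here a scalar decay equation stands in for it.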

  19. A Novel Method of Compression Based on Adaptive Encoding Technology for SoC Design

    OpenAIRE

    B.Sakthi Bharathi; S.Saravanan; R.Vijay Sai

    2013-01-01

    Test data compression is one of the main objectives in making System-on-Chip testing reliable. Data compression mainly bears on the hardware, time, and power consumed. This paper presents an efficient test data compression method to achieve a high compression ratio. The test patterns contain 1, 0, and undefined (x) data. The test patterns are grouped so as to improve the compression ratio. During grouping, the undefined bits are also assigned as 1, 0, or a conflict bit (c). The expe...

  20. Acti-Glide: a simple method of applying compression hosiery.

    Science.gov (United States)

    Hampton, Sylvie

    2005-05-01

    Compression hosiery is often worn to help prevent aching legs and swollen ankles, to prevent ulceration, to treat venous ulceration or to treat varicose veins. However, patients and nurses may experience problems applying hosiery and this can lead to non-concordance in patients and possibly reluctance from nurses to use compression hosiery. A simple solution to applying firm hosiery is Acti-Glide from Activa Healthcare.

  1. Analysis of a discrete element method and coupling with a compressible fluid flow method

    International Nuclear Information System (INIS)

    Monasse, L.

    2011-01-01

    This work aims at the numerical simulation of compressible fluid/deformable structure interactions. In particular, we have developed a partitioned coupling algorithm between a Finite Volume method for the compressible fluid and a Discrete Element method capable of taking into account fractures in the solid. A survey of existing fictitious domain methods and partitioned algorithms has led us to choose an Embedded Boundary method and an explicit coupling scheme. We first showed that the Discrete Element method used for the solid yielded the correct macroscopic behaviour and that the symplectic time-integration scheme ensured the preservation of energy. We then developed an explicit coupling algorithm between a compressible inviscid fluid and an undeformable solid. Mass, momentum and energy conservation and consistency properties were proved for the coupling scheme. The algorithm was then extended to the coupling with a deformable solid, in the form of a semi-implicit scheme. Finally, we applied this method to unsteady inviscid flows around moving structures: comparisons with existing numerical and experimental results demonstrate the excellent accuracy of our method. (author) [fr
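    The energy-preservation property of a symplectic time integrator, which the abstract relies on, can be demonstrated with the velocity-Verlet scheme on a harmonic oscillator; this toy system stands in for the Discrete Element dynamics, and is not the thesis' actual solid model.

```python
# Velocity-Verlet (a symplectic scheme): the energy error of a harmonic
# oscillator stays bounded over very long runs instead of drifting,
# unlike non-symplectic explicit schemes.
def verlet(x, v, dt, steps, k=1.0, m=1.0):
    a = -k * x / m
    for _ in range(steps):
        v += 0.5 * dt * a      # half kick
        x += dt * v            # drift
        a = -k * x / m
        v += 0.5 * dt * a      # half kick
    return x, v

x0, v0, dt = 1.0, 0.0, 0.01
x, v = verlet(x0, v0, dt, steps=100000)
e0 = 0.5 * v0**2 + 0.5 * x0**2
e1 = 0.5 * v**2 + 0.5 * x**2
assert abs(e1 - e0) / e0 < 1e-3   # energy error bounded over 10^5 steps
```

    The bounded O(dt²) energy oscillation, rather than secular growth, is what makes symplectic integration attractive for long Discrete Element simulations.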

  2. Acceleration methods for multi-physics compressible flow

    Science.gov (United States)

    Peles, Oren; Turkel, Eli

    2018-04-01

    In this work we investigate the Runge-Kutta (RK)/Implicit smoother scheme as a convergence accelerator for complex multi-physics flow problems including turbulent, reactive and also two-phase flows. The flows considered are subsonic, transonic and supersonic flows in complex geometries, and can be either steady or unsteady. All of these problems are considered to be very stiff. We then introduce an acceleration method for the compressible Navier-Stokes equations. We start with the multigrid method for pure subsonic flow, including reactive flows. We then add the Rossow-Swanson-Turkel RK/Implicit smoother that enables performing all these complex flow simulations with a reasonable CFL number. We next discuss the RK/Implicit smoother for time-dependent problems and also for low Mach numbers. The preconditioner includes an intrinsic low-Mach-number treatment inside the smoother operator. We also develop a modified Roe scheme with a corresponding flux Jacobian matrix. We then give the extension of the method to real gas and reactive flow. Reactive flows are governed by a system of inhomogeneous Navier-Stokes equations with very stiff source terms. The extension of the RK/Implicit smoother requires an approximation of the source term Jacobian, whose properties are very important for the stability of the method. We discuss what the theory of chemical kinetics tells us about the mathematical properties of the Jacobian matrix, focusing on the implication of Le Chatelier's principle for the sign of the diagonal entries of the Jacobian. We present the implementation of the method for turbulent flow. We use two RANS turbulence models: a one-equation model (Spalart-Allmaras) and a two-equation model (k-ω SST). The last extension is to two-phase flows with a gas as the main phase and an Eulerian representation of a dispersed particle phase (EDP). We present some examples of such flow computations inside a ballistic evaluation

  3. Methods of compression of digital holograms, based on 1-level wavelet transform

    International Nuclear Information System (INIS)

    Kurbatova, E A; Cheremkhin, P A; Evtikhiev, N N

    2016-01-01

    To reduce the memory required for storing information about 3D scenes and the amount of data transmitted per hologram, digital hologram compression can be used. Compression of digital holograms by wavelet transforms is among the most powerful methods. In this paper the most popular wavelet transforms are considered and applied to digital hologram compression, and the resulting reconstruction quality and hologram diffraction efficiencies are compared. (paper)
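
As a minimal illustration of the wavelet approach (the specific transforms compared in the paper are not reproduced here), the sketch below applies a 1-level 2-D Haar transform, zeroes small detail coefficients, and reconstructs; the test image is random data standing in for a hologram.

```python
import numpy as np

# Minimal sketch: 1-level 2-D Haar wavelet transform, hard-thresholding of
# detail coefficients, and reconstruction. Thresholding the detail subbands
# is the basic mechanism behind wavelet-based hologram compression.

def haar2d(x):
    a = (x[0::2, :] + x[1::2, :]) / 2.0      # row pairs: average
    d = (x[0::2, :] - x[1::2, :]) / 2.0      # row pairs: difference
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    h, w = ll.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((2 * h, 2 * w))
    x[0::2, :], x[1::2, :] = a + d, a - d
    return x

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))      # stand-in for a digital hologram
ll, lh, hl, hh = haar2d(img)
# "Compress" by zeroing small detail coefficients (the LL band is kept intact).
thr = 0.5
lh, hl, hh = (np.where(np.abs(c) > thr, c, 0.0) for c in (lh, hl, hh))
rec = ihaar2d(ll, lh, hl, hh)
print(np.abs(rec - img).max())   # small but nonzero reconstruction error
```

Without thresholding the transform is exactly invertible; the compression/quality trade-off comes entirely from how many detail coefficients are discarded.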

  4. A Comparative Evaluation of Sorption, Solubility, and Compressive Strength of Three Different Glass Ionomer Cements in Artificial Saliva: An in vitro Study.

    Science.gov (United States)

    Bhatia, Hind P; Singh, Shivani; Sood, Shveta; Sharma, Naresh

    2017-01-01

    To evaluate and compare the sorption, solubility, and compressive strength of three different glass ionomer cements in artificial saliva - type IX glass ionomer cement, silver-reinforced glass ionomer cement, and zirconia-reinforced glass ionomer cement - so as to determine the material of choice for stress-bearing areas. A total of 90 cylindrical specimens (4 mm diameter and 6 mm height) were prepared for each material following the manufacturer's instructions. After thermocycling, 45 specimens were immersed in artificial saliva for 24 hours for compressive strength testing under a universal testing machine; the other 45 were evaluated for sorption and solubility by first weighing them on a precision weighing scale (W1), then immersing them in artificial saliva for 28 days and weighing them (W2), and finally dehydrating them in an oven for 24 hours and weighing them (W3). Group III (zirconomer) showed the highest compressive strength, followed by group II (Miracle Mix), with the lowest compressive strength in group I (glass ionomer cement type IX-Extra); the differences between the groups were statistically significant. The sorption and solubility values in artificial saliva were highest for glass ionomer cement type IX-Extra (GC, group I), followed by zirconomer (Shofu, group III), with the lowest values for Miracle Mix (GC, group II). Zirconia-reinforced glass ionomer cement is a promising dental material and can be used for restorations in stress-bearing areas due to its high strength and low solubility and sorption, and it may substitute for silver-reinforced glass ionomer cement with the added advantage of esthetics. This study provides vital information to pediatric dental surgeons on relatively new restorative materials, as the physical and mechanical properties of the new material are compared with conventional materials to determine the best suited material in terms of durability, strength and dimensional stability.

  5. Evaluation of the Giggenbach bottle method using artificial fumarolic gases

    Science.gov (United States)

    Lee, S.; Jeong, H. Y.

    2013-12-01

    Volcanic eruptions are among the most dangerous natural disasters. Mt. Baekdu, located on the border between North Korea and China, has recently shown multiple signs of a possible eruption. The magmatic activity of a volcano strongly affects the composition of volcanic gases, which can provide a useful tool for predicting the eruption. Among various volcanic gas monitoring methods, the Giggenbach bottle method involves on-site sampling of volcanic gases and subsequent laboratory analysis, making it possible to detect a range of volcanic gases at low levels. In this study, we aim to evaluate the effectiveness of the Giggenbach bottle method and develop the associated analytical tools using artificial fumarolic gases with known compositions. The artificial fumarolic gases are generated by mixing CO2, CO, H2S, SO2, Ar, and H2 gas streams with a N2 stream sparged through an acidic medium containing HCl and HF. The target compositions of the fumarolic gases are selected to cover those reported for various volcanoes under different tectonic environments as follows: CO2 (2-12 mol %), CO (0.3-1 mol %), H2S (0.7-2 mol %), SO2 (0.6-4 mol %), Ar (0.3-0.7 mol %), H2 (0.3-0.7 mol %), HCl (0.2-1 mol %), and HF. During sampling, the acidic gases CO2, SO2, HCl, and HF dissolve into the alkaline solution, while H2S reacts with dissolved Cd2+ to precipitate as CdS(s). The gas accumulated in the headspace can be analyzed for CO, Ar, H2, and N2 by gas chromatography. The alkaline solution is first separated from the yellowish CdS precipitates by filtration and then treated with hydrogen peroxide to oxidize dissolved SO2 (H2SO3) to SO4(2-). The resultant solution can be analyzed for SO2 as SO4(2-), HCl as Cl-, and HF as F- by ion chromatography, and for CO2 on an inorganic carbon analyzer. Also, the amount of H2S can be determined by measuring the remaining dissolved Cd2+ by inductively coupled plasma mass spectrometry.

  6. Conductivity enhancement of multiwalled carbon nanotube thin film via thermal compression method

    Science.gov (United States)

    Tsai, Wan-Lin; Wang, Kuang-Yu; Chang, Yao-Jen; Li, Yu-Ren; Yang, Po-Yu; Chen, Kuan-Neng; Cheng, Huang-Chung

    2014-08-01

    For the first time, the thermal compression method is applied to effectively enhance the electrical conductivity of carbon nanotube thin films (CNTFs). With the assistance of heat and pressure, neighboring multiwalled carbon nanotubes (CNTs) begin to link with each other, and the separated CNTs become entwined into a continuous film when the compression force, duration, and temperature are sufficient for the reaction. Under a compression temperature of 400°C and a compression force of 100 N for 50 min, the sheet resistance can be reduced from 17 to 0.9 kΩ/sq for CNTFs with a thickness of 230 nm. Moreover, the effects of compression temperature and duration on the conductivity of the CNTFs are also discussed in this work.

  7. Space Environment Modelling with the Use of Artificial Intelligence Methods

    Science.gov (United States)

    Lundstedt, H.; Wintoft, P.; Wu, J.-G.; Gleisner, H.; Dovheden, V.

    1996-12-01

    Space-based technological systems are affected by space weather in many ways. Several severe satellite failures have been reported at times of space storms, and our society increasingly depends on satellites for communication, navigation, exploration, and research. Predictions of the conditions in the satellite environment have therefore become very important. We present predictions made with artificial intelligence (AI) techniques, such as artificial neural networks (ANN) and hybrids of AI methods. We are developing a space weather model based on intelligence hybrid systems (IHS). The model consists of different forecast modules, each predicting the space weather on a specific time-scale; the time-scales range from minutes to months, with fundamental time-scales of 1-5 minutes, 1-3 hours, 1-3 days, and 27 days. Solar and solar wind data are used as input. From solar magnetic field measurements, made either on the ground at the Wilcox Solar Observatory (WSO) at Stanford or from space by the SOHO satellite, solar wind parameters can be predicted and modelled with ANN and MHD models. Magnetograms from WSO are available on a daily basis, whereas SOHO magnetograms will be available every 90 minutes; SOHO magnetograms as ANN input will therefore make it possible to predict even solar transient events. Geomagnetic storm activity can today be predicted with very high accuracy by ANN methods using solar wind input data. At present, however, real-time solar wind data are available only during part of the day, from the WIND satellite; with the launch of ACE in 1997, solar wind data will be available 24 hours per day. The satellite environment is disturbed not only at times of geomagnetic storms but also at times of intense solar radiation and highly energetic particles. These events are associated with increased solar activity. Predictions of these events are therefore
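
A single-hidden-layer network of the kind used in such forecast modules can be sketched as follows. The "solar wind" features and activity index here are synthetic stand-ins (not the WSO, SOHO, or WIND data used by the authors); the sketch only shows the input-output mapping being learned by gradient descent.

```python
import numpy as np

# Minimal feed-forward ANN sketch (not the authors' IHS model): one hidden
# tanh layer trained by full-batch gradient descent to map a toy
# "solar wind" feature vector to a toy activity index.

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))                      # toy input features
y = np.tanh(X @ np.array([0.5, -1.0, 0.3]))[:, None]   # toy target index

W1 = 0.1 * rng.standard_normal((3, 8)); b1 = np.zeros(8)
W2 = 0.1 * rng.standard_normal((8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, out0 = forward(X)
loss0 = np.mean((out0 - y) ** 2)        # error before training
lr = 0.1
for _ in range(500):
    h, out = forward(X)
    err = 2 * (out - y) / len(X)        # dL/dout for mean-squared error
    gW2, gb2 = h.T @ err, err.sum(0)
    dh = (err @ W2.T) * (1 - h ** 2)    # backprop through tanh
    gW1, gb1 = X.T @ dh, dh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, out = forward(X)
loss = np.mean((out - y) ** 2)
print(loss0, "->", loss)                # training error drops substantially
```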

  8. A simple method for fabricating artificial kidney stones of different physical properties.

    Science.gov (United States)

    Esch, Eric; Simmons, Walter Neal; Sankin, Georgy; Cocks, Hadley F; Preminger, Glenn M; Zhong, Pei

    2010-08-01

    A simple method for preparing artificial kidney stones with varying physical properties is described. BegoStone was prepared with a powder-to-water ratio ranging from 15:3 to 15:6. The acoustic properties of the phantoms were characterized using an ultrasound transmission technique, from which the corresponding mechanical properties were calculated based on elastic wave theory. The measured parameters for BegoStone phantoms of different water contents are: longitudinal wave speed (3,148-4,159 m/s), transverse wave speed (1,813-2,319 m/s), density (1,563-1,995 kg/m³), longitudinal acoustic impedance (4.92-8.30 × 10⁶ kg/(m²·s)), transverse acoustic impedance (2.83-4.63 × 10⁶ kg/(m²·s)), Young's modulus (12.9-27.4 GPa), bulk modulus (8.6-20.2 GPa), and shear modulus (5.1-10.7 GPa), which cover the range of corresponding properties reported in natural kidney stones. In addition, diametral compression tests were carried out to determine the tensile failure strength of the stone phantoms. BegoStone phantoms with varying water content at preparation have tensile failure strengths from 6.9 to 16.3 MPa when tested dry and 3.2 to 7.1 MPa when tested water-soaked. Overall, it is demonstrated that this new BegoStone preparation method can be used to fabricate artificial stones with physical properties matched to those of natural kidney stones of various chemical compositions.
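
The mechanical properties quoted above follow from the measured wave speeds and density via standard isotropic elastic wave theory. The short sketch below reproduces the lower end of the quoted ranges (the wettest mix) from the abstract's own numbers; only those measured values are assumed.

```python
# Standard elastic-wave relations: shear, bulk, and Young's moduli from the
# longitudinal speed c_l, transverse speed c_t, and density rho. Input values
# are the lowest quoted measurements (rho in kg/m^3, speeds in m/s).

rho, c_l, c_t = 1563.0, 3148.0, 1813.0

G = rho * c_t**2                          # shear modulus
K = rho * (c_l**2 - 4.0 / 3.0 * c_t**2)   # bulk modulus
E = 9.0 * K * G / (3.0 * K + G)           # Young's modulus
Z_l = rho * c_l                           # longitudinal acoustic impedance

print(G / 1e9)    # ~5.1 GPa  (abstract: 5.1-10.7)
print(K / 1e9)    # ~8.6 GPa  (abstract: 8.6-20.2)
print(E / 1e9)    # ~12.9 GPa (abstract: 12.9-27.4)
print(Z_l / 1e6)  # ~4.92 x 10^6 kg/(m^2 s)
```

The agreement with the abstract's lower bounds also confirms that the quoted impedances carry an implied factor of 10⁶ kg/(m²·s).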

  9. Using artificial intelligence methods to design new conducting polymers

    Directory of Open Access Journals (Sweden)

    Ronaldo Giro

    2003-12-01

    Full Text Available In recent years the possibility of creating new conducting polymers by exploring the concept of copolymerization (different structural monomeric units) has attracted much attention from experimental and theoretical points of view. Due to the rich reactivity of carbon, an almost infinite number of new structures is possible, and trial and error has been the rule. In this work we have used a methodology capable of generating new structures with pre-specified properties. It combines the negative factor counting (NFC) technique with artificial intelligence methods (genetic algorithms - GAs). We present the results of a case study for poly(phenylenesulfide phenyleneamine) (PPSA), a copolymer formed by combination of the homopolymers polyaniline (PANI) and polyphenylenesulfide (PPS). The methodology was successfully applied to the problem of obtaining binary up to quinary disordered polymeric alloys with a pre-specified gap value or exhibiting metallic properties. It is completely general and can in principle be adapted to the design of new classes of materials with pre-specified properties.
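
The GA component of such a methodology can be illustrated with a toy example. The sketch below evolves a binary monomer sequence toward a target composition; the fitness function here is a simple stand-in (distance of the fraction of one monomer type from a target), not the NFC-based electronic-structure evaluation used in the paper.

```python
import random

# Toy genetic-algorithm sketch of the search loop: selection of the fittest
# half, single-point crossover, and bit-flip mutation. Sequence length,
# population size, and the stand-in fitness are illustrative assumptions.

random.seed(1)
TARGET, N, POP = 0.3, 40, 30

def fitness(seq):                       # lower is better
    return abs(seq.count(1) / len(seq) - TARGET)

def crossover(a, b):
    cut = random.randrange(1, N)
    return a[:cut] + b[cut:]

def mutate(seq, rate=0.02):
    return [1 - g if random.random() < rate else g for g in seq]

pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(POP)]
for _ in range(60):
    pop.sort(key=fitness)
    elite = pop[: POP // 2]             # keep the better half
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(POP - len(elite))]
best = min(pop, key=fitness)
print(fitness(best))   # close to 0: composition near the target
```

In the real method this fitness would be replaced by the NFC evaluation of the candidate alloy's electronic density of states against the desired gap.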

  10. Alteration of blue pigment in artificial iris in ocular prosthesis: effect of paint, drying method and artificial aging.

    Science.gov (United States)

    Goiato, Marcelo Coelho; Fernandes, Aline Úrsula Rocha; dos Santos, Daniela Micheline; Hadadd, Marcela Filié; Moreno, Amália; Pesqueira, Aldiéris Alves

    2011-02-01

    The artificial iris is the structure responsible for the dissimulation and aesthetics of an ocular prosthesis. The objective of the present study was to evaluate the color stability of the artificial iris of microwave-polymerized ocular prostheses as a function of paint type, drying method and accelerated aging. A total of 40 discs of microwave-polymerized acrylic resin were fabricated and divided according to the blue paint type (n = 5): hydrosoluble acrylic, nitrocellulose automotive, hydrosoluble gouache and oil paints. Paints were dried either naturally or under an infrared light bulb. Each specimen consisted of one disc of colorless acrylic resin and another colored with a basic sclera pigment. Painting was performed on one surface of one of the discs. The specimens were submitted to an artificial aging chamber under ultraviolet light for 1008 h. A reflective spectrophotometer was used to evaluate color changes. Data were evaluated by 3-way repeated-measures ANOVA and the Tukey HSD test (α = 0.05). All paints suffered color alteration. The oil paint presented the highest color resistance to artificial aging regardless of drying method. Copyright © 2010 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.

  11. A novel signal compression method based on optimal ensemble empirical mode decomposition for bearing vibration signals

    Science.gov (United States)

    Guo, Wei; Tse, Peter W.

    2013-01-01

    Today, remote machine condition monitoring is popular due to the continuous advancement in wireless communication. Bearings are the most frequently and easily failed components in many rotating machines. To accurately identify the type of bearing fault, large amounts of vibration data need to be collected. However, the volume of transmitted data cannot be too high because the bandwidth of wireless communication is limited, so the data are usually compressed before transmission to a remote maintenance center. This paper proposes a novel signal compression method that can substantially reduce the amount of data to be transmitted without sacrificing the accuracy of fault identification. The proposed method is based on ensemble empirical mode decomposition (EEMD), an effective method for adaptively decomposing a vibration signal into different bands of signal components, termed intrinsic mode functions (IMFs). An optimization method was designed to automatically select appropriate EEMD parameters for the analyzed signal, and in particular the appropriate level of the added white noise in the EEMD method. An index termed the relative root-mean-square error was used to evaluate the decomposition performance under different noise levels to find the optimal level. After applying the optimal EEMD method to a vibration signal, the IMF relating to the bearing fault can be extracted from the original vibration signal. Compressing this signal component leaves a much smaller proportion of data samples to be retained for transmission and later reconstruction. The proposed compression method was also compared with the popular wavelet compression method. Experimental results demonstrate that the optimization of EEMD parameters can automatically find appropriate EEMD parameters for the analyzed signals, and the IMF-based compression method provides a higher compression ratio, while retaining the bearing defect

  12. A novel ECG data compression method based on adaptive Fourier decomposition

    Science.gov (United States)

    Tan, Chunyu; Zhang, Liming

    2017-12-01

    This paper presents a novel electrocardiogram (ECG) compression method based on adaptive Fourier decomposition (AFD). AFD is a newly developed signal decomposition approach, which can decompose a signal with fast convergence, and hence reconstruct ECG signals with high fidelity. Unlike most of the high performance algorithms, our method does not make use of any preprocessing operation before compression. Huffman coding is employed for further compression. Validated with 48 ECG recordings of MIT-BIH arrhythmia database, the proposed method achieves the compression ratio (CR) of 35.53 and the percentage root mean square difference (PRD) of 1.47% on average with N = 8 decomposition times and a robust PRD-CR relationship. The results demonstrate that the proposed method has a good performance compared with the state-of-the-art ECG compressors.
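
The two figures of merit quoted above can be computed as follows. The toy signal, noise level, and bit counts are illustrative assumptions, not the paper's MIT-BIH data; the sketch only shows how CR and PRD are defined.

```python
import numpy as np

# CR is the ratio of original to compressed data sizes; PRD measures the
# relative reconstruction distortion. The "reconstruction" here is the
# original plus small noise, standing in for a decompressed ECG.

def prd(x, x_rec):
    """Percentage root mean square difference."""
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

def compression_ratio(n_original_bits, n_compressed_bits):
    return n_original_bits / n_compressed_bits

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 8 * np.pi, 1000))        # toy "ECG" segment
x_rec = x + 0.01 * rng.standard_normal(x.size)     # toy reconstruction

print(round(prd(x, x_rec), 2))            # ~1.4%: small distortion
print(compression_ratio(11 * 1000, 310))  # e.g. 1000 11-bit samples -> ~35.5x
```

A good compressor pushes CR up while holding PRD down; the paper's reported operating point (CR 35.53 at PRD 1.47%) sits in exactly these units.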

  13. Diffuse interface method for a compressible binary fluid.

    Science.gov (United States)

    Liu, Jiewei; Amberg, Gustav; Do-Quang, Minh

    2016-01-01

    Multicomponent, multiphase, compressible flows are very important in real life as well as in scientific research, yet their modeling is at an early stage. In this paper, we propose a diffuse interface model for compressible binary mixtures based on the balance of mass, momentum, and energy and the second law of thermodynamics. We show both analytically and numerically that this model describes the phase equilibrium of a real binary mixture (CO2 + ethanol is considered in this paper) very well by adjusting the parameter which measures the attraction force between molecules of the two components in the model. We also show that the calculated surface tension of the CO2 + ethanol mixture at different concentrations matches measurements in the literature when the mixing capillary coefficient is taken to be the geometric mean of the capillary coefficients of the two components. Three cases of two droplets in a shear flow, with the same or different concentration, are simulated, showing that the higher the concentration of CO2, the smaller the surface tension and the more easily the drop deforms.

  14. The compression and storage method of the same kind of medical images: DPCM

    Science.gov (United States)

    Zhao, Xiuying; Wei, Jingyuan; Zhai, Linpei; Liu, Hong

    2006-09-01

    Medical imaging has started to take advantage of digital technology, opening the way for advanced medical imaging and teleradiology. Medical images, however, require large amounts of memory. At over 1 million bytes per image, a typical hospital needs a staggering amount of storage (over one trillion bytes per year), and transmitting an image over a network (even the promised superhighway) could take minutes - too slow for interactive teleradiology. This calls for image compression to significantly reduce the amount of data needed to represent an image. Several compression techniques with different compression ratios have been developed. However, the lossless techniques, which allow perfect reconstruction of the original images, yield modest compression ratios, while the techniques that yield higher compression ratios are lossy; that is, the original image is reconstructed only approximately. Medical imaging poses the great challenge of requiring compression algorithms that are lossless (for diagnostic and legal reasons) and yet achieve high compression ratios for reduced storage and transmission time. To meet this challenge, we are developing and studying compression schemes which are either strictly lossless or diagnostically lossless, taking advantage of the peculiarities of medical images and of medical practice. To increase the signal-to-noise ratio (SNR) by exploiting correlations within the source signal, a method employing differential pulse code modulation (DPCM) is presented.
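
A minimal DPCM sketch (first-order prediction from the previous sample, the simplest form of the approach) shows why the scheme is lossless and why the residuals compress well; the sample image row is an illustrative assumption.

```python
import numpy as np

# DPCM: predict each sample from the previous one and keep only the residual.
# For smooth medical-image rows the residuals cluster near zero, so an entropy
# coder compresses them far better than the raw samples, and decoding
# reproduces the input exactly (lossless).

def dpcm_encode(x):
    x = np.asarray(x, dtype=np.int32)
    # prepend 0 so the first residual is the raw first sample
    return np.diff(x, prepend=0)

def dpcm_decode(residual):
    return np.cumsum(residual)          # undoes the differencing exactly

row = np.array([100, 101, 103, 103, 102, 104, 107, 107], dtype=np.int32)
res = dpcm_encode(row)
print(res)                                     # [100 1 2 0 -1 2 3 0]
print(np.array_equal(dpcm_decode(res), row))   # True: perfect reconstruction
```

After the first sample, all residuals fit in a few bits, which is where the lossless compression gain comes from.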

  15. Path Planning for Robot based on Chaotic Artificial Potential Field Method

    Science.gov (United States)

    Zhang, Cheng

    2018-03-01

    Robot path planning in unknown environments is one of the hot research topics in the field of robot control. Aiming at the shortcomings of traditional artificial potential field methods, we propose a new path planning method for robots based on a chaotic artificial potential field. The method adopts the potential function as the objective function and introduces the robot's direction of movement as the control variable, combining the improved artificial potential field method with a chaotic optimization algorithm. Simulations have been carried out, and the results demonstrate the superior practicality and high efficiency of the proposed method.
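
The classical artificial potential field step that such methods build on can be sketched as follows. The gains, obstacle geometry, and step size are illustrative assumptions, and the chaotic optimization layer (the paper's contribution, used to escape local minima) is not reproduced here.

```python
import numpy as np

# Classical APF step: the robot descends the gradient of an attractive goal
# potential plus a repulsive obstacle potential (standard Khatib-style form).

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5, d0=1.0, step=0.05):
    force = k_att * (goal - pos)                 # attractive term
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if d < d0:                               # repulsion only within range d0
            force += k_rep * (1.0 / d - 1.0 / d0) / d**3 * diff
    return pos + step * force / np.linalg.norm(force)

pos, goal = np.array([0.0, 0.0]), np.array([5.0, 5.0])
obstacles = [np.array([2.0, 2.6])]               # offset from the direct path
for _ in range(300):
    pos = apf_step(pos, goal, obstacles)
    if np.linalg.norm(goal - pos) < 0.1:
        break
print(np.linalg.norm(goal - pos) < 0.1)          # goal reached, obstacle avoided
```

With an obstacle placed symmetrically on the direct path, this plain gradient descent can stall in a local minimum; that failure mode is what the chaotic search component is designed to overcome.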

  16. Soft computing methods for estimating the uniaxial compressive strength of intact rock from index tests

    Czech Academy of Sciences Publication Activity Database

    Mishra, A. Deepak; Srigyan, M.; Basu, A.; Rokade, P. J.

    2015-01-01

    Roč. 80, December 2015 (2015), s. 418-424 ISSN 1365-1609 Institutional support: RVO:68145535 Keywords : uniaxial compressive strength * rock indices * fuzzy inference system * artificial neural network * adaptive neuro-fuzzy inference system Subject RIV: DH - Mining, incl. Coal Mining Impact factor: 2.010, year: 2015 http://ac.els-cdn.com/S1365160915300708/1-s2.0-S1365160915300708-main.pdf?_tid=318a7cec-8929-11e5-a3b8-00000aacb35f&acdnat=1447324752_2a9d947b573773f88da353a16f850eac

  17. Control Systems for Hyper-Redundant Robots Based on Artificial Potential Method

    Directory of Open Access Journals (Sweden)

    Mihaela Florescu

    2015-06-01

    Full Text Available This paper presents a control method for hyper-redundant robots based on the artificial potential approach. The principles of the method are shown and a suggestive example is offered. The artificial potential method is then applied to a tentacle robot, starting from the dynamic model of the robot. In addition, a series of results obtained through simulation is presented.

  18. A novel full-field experimental method to measure the local compressibility of gas diffusion media

    Energy Technology Data Exchange (ETDEWEB)

    Lai, Yeh-Hung; Li, Yongqiang [Electrochemical Energy Research Lab, GM R and D, Honeoye Falls, NY 14472 (United States); Rock, Jeffrey A. [GM Powertrain, Honeoye Falls, NY 14472 (United States)

    2010-05-15

    The gas diffusion medium (GDM) in a proton exchange membrane (PEM) fuel cell needs to simultaneously satisfy the requirements of transporting reactant gases, removing product water, conducting electrons and heat, and providing mechanical support to the membrane electrode assembly (MEA). Concerning the localized over-compression which may force carbon fibers and other conductive debris into the membrane to cause fuel cell failure by electronically shorting through the membrane, we have developed a novel full-field experimental method to measure the local thickness and compressibility of GDM. Applying a uniform air pressure upon a thin polyimide film bonded on the top surface of the GDM with support from the bottom by a flat metal substrate and measuring the thickness change using the 3-D digital image correlation technique with an out-of-plane displacement resolution less than 0.5 μm, we have determined the local thickness and compressive stress/strain behavior in the GDM. Using the local thickness and compressibility data over an area of 11.2 mm × 11.2 mm, we numerically construct the nominal compressive response of a commercial Toray™ TGP-H-060 based GDM subjected to compression by flat platens. Good agreement in the nominal stress/strain curves from the numerical construction and direct experimental flat-platen measurement confirms the validity of the methodology proposed in this article. The result shows that a nominal pressure of 1.4 MPa compressed between two flat platens can introduce localized compressive stress concentration of more than 3 MPa in up to 1% of the total area at various locations from several hundred micrometers to 1 mm in diameter. We believe that this full-field experimental method can be useful in GDM material and process development to reduce the local hard spots and help to mitigate the membrane shorting failure in PEM fuel cells. (author)

  19. A novel full-field experimental method to measure the local compressibility of gas diffusion media

    Science.gov (United States)

    Lai, Yeh-Hung; Li, Yongqiang; Rock, Jeffrey A.

    The gas diffusion medium (GDM) in a proton exchange membrane (PEM) fuel cell needs to simultaneously satisfy the requirements of transporting reactant gases, removing product water, conducting electrons and heat, and providing mechanical support to the membrane electrode assembly (MEA). Concerning the localized over-compression which may force carbon fibers and other conductive debris into the membrane to cause fuel cell failure by electronically shorting through the membrane, we have developed a novel full-field experimental method to measure the local thickness and compressibility of GDM. Applying a uniform air pressure upon a thin polyimide film bonded on the top surface of the GDM with support from the bottom by a flat metal substrate and measuring the thickness change using the 3-D digital image correlation technique with an out-of-plane displacement resolution less than 0.5 μm, we have determined the local thickness and compressive stress/strain behavior in the GDM. Using the local thickness and compressibility data over an area of 11.2 mm × 11.2 mm, we numerically construct the nominal compressive response of a commercial Toray™ TGP-H-060 based GDM subjected to compression by flat platens. Good agreement in the nominal stress/strain curves from the numerical construction and direct experimental flat-platen measurement confirms the validity of the methodology proposed in this article. The result shows that a nominal pressure of 1.4 MPa compressed between two flat platens can introduce localized compressive stress concentration of more than 3 MPa in up to 1% of the total area at various locations from several hundred micrometers to 1 mm in diameter. We believe that this full-field experimental method can be useful in GDM material and process development to reduce the local hard spots and help to mitigate the membrane shorting failure in PEM fuel cells.

  20. Methods of calculating the bearing capacity of eccentrically compressed concrete elements and suggestions for its improvement

    Directory of Open Access Journals (Sweden)

    Starishko Ivan Nikolaevich

    2014-03-01

    Full Text Available The proposed calculation method directly determines the bearing capacity of eccentrically compressed concrete elements, in contrast to the trial-and-error calculation of the existing regulations, whose result does not reveal the limit load an eccentrically compressed element can withstand. In the proposed calculation method, whose publication is expected in the next issue of "Vestnik MGSU", the above-mentioned shortcomings of the existing calculation methods, as well as the shortcomings listed in this article, are eliminated. This results in higher convergence between theoretical and experimental strength results for eccentrically compressed concrete elements and hence high reliability in their operation.

  1. Artificial intelligence methods for predicting T-cell epitopes.

    Science.gov (United States)

    Zhao, Yingdong; Sung, Myong-Hee; Simon, Richard

    2007-01-01

    Identifying epitopes that elicit a major histocompatibility complex (MHC)-restricted T-cell response is critical for designing vaccines for infectious diseases and cancers. We have applied two artificial intelligence approaches to build models for predicting T-cell epitopes. We developed a support vector machine to predict T-cell epitopes for an MHC class I-restricted T-cell clone (TCC) using synthesized peptide data. For predicting T-cell epitopes for an MHC class II-restricted TCC, we built a shift model that integrated MHC-binding data and data from T-cell proliferation assay against a combinatorial library of peptide mixtures.

  2. A three-step reconstruction method for fluorescence molecular tomography based on compressive sensing

    DEFF Research Database (Denmark)

    Zhu, Yansong; Jha, Abhinav K.; Dreyer, Jakob K.

    2017-01-01

    effects could be exploited, traditional compressive-sensing methods cannot be directly applied as the system matrix in FMT is highly coherent. To overcome these issues, we propose and assess a three-step reconstruction method. First, truncated singular value decomposition is applied on the data to reduce...... considerable promise and will be tested using more realistic simulations and experimental setups....

  3. A Novel ECG Data Compression Method Using Adaptive Fourier Decomposition With Security Guarantee in e-Health Applications.

    Science.gov (United States)

    Ma, JiaLi; Zhang, TanTan; Dong, MingChui

    2015-05-01

    This paper presents a novel electrocardiogram (ECG) compression method for e-health applications by adapting an adaptive Fourier decomposition (AFD) algorithm hybridized with a symbol substitution (SS) technique. The compression consists of two stages: first stage AFD executes efficient lossy compression with high fidelity; second stage SS performs lossless compression enhancement and built-in data encryption, which is pivotal for e-health. Validated with 48 ECG records from MIT-BIH arrhythmia benchmark database, the proposed method achieves averaged compression ratio (CR) of 17.6-44.5 and percentage root mean square difference (PRD) of 0.8-2.0% with a highly linear and robust PRD-CR relationship, pushing forward the compression performance to an unexploited region. As such, this paper provides an attractive candidate of ECG compression method for pervasive e-health applications.

  4. High capacity image steganography method based on framelet and compressive sensing

    Science.gov (United States)

    Xiao, Moyan; He, Zhibiao

    2015-12-01

    To improve the capacity and imperceptibility of image steganography, a novel high-capacity, imperceptible image steganography method based on a combination of framelets and compressive sensing (CS) is put forward. First, SVD (singular value decomposition) is applied to the measurement values obtained by applying the compressive sensing technique to the secret data. The singular values are then embedded into the low-frequency coarse subbands of the framelet transforms of the non-overlapping blocks into which the cover image is divided. Finally, inverse framelet transforms are applied and the blocks combined to obtain the stego image. The experimental results show that the proposed steganography method performs well in hiding capacity, security and imperceptibility.

  5. Artificial Neural Network Method at PT Buana Intan Gemilang

    Directory of Open Access Journals (Sweden)

    Shadika

    2017-01-01

    Full Text Available The textile industry is one of the industries that provide high export value, occupying the third position in Indonesia. Inspection in traditional textile enterprises relies on human vision, with an average scanning time of 19.87 seconds, and each roll of cloth must be inspected twice to avoid missed defects. This inspection process causes a buildup at the inspection station. This study proposes the automation of the inspection system using an artificial neural network (ANN). The input for the ANN comes from GLCM extraction. The automated defect inspection achieves a detection time of 0.56 seconds and an accuracy of 88.7% in classifying the three types of defects. Implementing an automated inspection system thus results in much faster processing.
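
The GLCM features that feed such an ANN can be sketched as follows. The displacement, the 4-level quantization, and this particular Haralick-style feature set are illustrative choices; the abstract does not specify the paper's exact features.

```python
import numpy as np

# Gray-level co-occurrence matrix (GLCM) and three classic texture features.
# g[i, j] is the normalized frequency of gray level i occurring next to
# gray level j at the given displacement (dx, dy).

def glcm(img, dx=1, dy=0, levels=4):
    g = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[img[y, x], img[y + dy, x + dx]] += 1
    return g / g.sum()

def glcm_features(g):
    i, j = np.indices(g.shape)
    return {"contrast": float(np.sum((i - j) ** 2 * g)),
            "energy": float(np.sum(g ** 2)),
            "homogeneity": float(np.sum(g / (1.0 + np.abs(i - j))))}

# A checkerboard of levels 0 and 3 is an extreme texture: every horizontal
# neighbor pair differs by 3 gray levels, so contrast = 3^2 = 9.
board = np.fromfunction(lambda y, x: 3 * ((x + y) % 2), (8, 8)).astype(int)
print(glcm_features(glcm(board)))   # contrast 9.0, energy 0.5
```

Defect regions change these feature values relative to the regular weave, which is what gives the ANN a discriminative input vector.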

  6. effect of curing methods on the compressive strength of concrete

    African Journals Online (AJOL)

    Department of Civil Engineering, Federal University of Technology Yola, Nigeria. aEmail: gadzymo@yahoo.com (corresponding author). Abstract. Different curing methods are .... ing materials are commonly used for concrete curing. [6] stated that as hydration progresses, the amount of water in mortar pores reduces and.

  7. Development and Characterization of Multifunctional Directly Compressible Co-processed Excipient by Spray Drying Method.

    Science.gov (United States)

    Chauhan, Sohil I; Nathwani, Sandeep V; Soniwala, Moinuddin M; Chavda, Jayant R

    2017-05-01

    The present investigation was carried out to develop and characterize a multifunctional co-processed excipient for improving the compressibility of poorly compressible drugs. Etodolac was used as a model drug. Microcrystalline cellulose (MCC), lactose monohydrate (lactose), and StarCap 1500 (StarCap) were selected as components of the co-processed excipient, and the spray drying method was used for co-processing. A D-optimal mixture design was applied to optimize the proportions of the component excipients. Statistical analysis of the design revealed that all response variables were significantly affected by the independent variables (p < 0.05). The optimized composition, obtained from the desirability function, was 30% MCC, 25% lactose, and 45% StarCap. This optimized batch was evaluated for flow properties, compressibility parameters (Kawakita's and Kuno's equations and Heckel's equation), and dilution potential. The flow-property measures (angle of repose, Carr's index, and Hausner's ratio) indicated excellent flow character, and the Kawakita, Kuno, and Heckel parameters indicated improved compressibility of the model drug. The dilution potential was found to be 40%, and on that basis tablets of the model drug were formulated and evaluated against the general evaluation parameters for tablets. All parameters were within the acceptance criteria, indicating that a multifunctional directly compressible co-processed excipient improving the compressibility of the poorly compressible model drug etodolac was successfully prepared, with spray drying proving an efficient preparation method.
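
The Heckel analysis mentioned above fits ln(1/(1-D)) = K·P + A, where D is the relative density of the compact at compaction pressure P; the slope K gives the mean yield pressure Py = 1/K. The sketch below performs the fit on synthetic data (the K and A values are assumed purely for illustration, not taken from the paper).

```python
import numpy as np

# Heckel-equation fit on synthetic compaction data generated with
# K = 0.01 MPa^-1 (i.e. mean yield pressure Py = 1/K = 100 MPa) and A = 0.8.

P = np.linspace(20, 200, 10)                 # compaction pressure, MPa
K_true, A_true = 0.01, 0.8
D = 1.0 - np.exp(-(K_true * P + A_true))     # relative density per Heckel eq.

# Linearize and fit: ln(1/(1-D)) vs. P gives slope K and intercept A.
K_fit, A_fit = np.polyfit(P, np.log(1.0 / (1.0 - D)), 1)
print(round(1.0 / K_fit))                    # ~100 MPa mean yield pressure
```

A lower fitted Py indicates a material that deforms plastically at lower pressures, i.e. better compressibility, which is the sense in which the co-processed excipient "improved" the Heckel parameters.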

  8. Image quality enhancement in low-light-level ghost imaging using modified compressive sensing method

    Science.gov (United States)

    Shi, Xiaohui; Huang, Xianwei; Nan, Suqin; Li, Hengxing; Bai, Yanfeng; Fu, Xiquan

    2018-04-01

Detector noise has a significantly negative impact on ghost imaging at low light levels, especially for existing recovery algorithms. Based on the characteristics of additive detector noise, a method named modified compressive sensing ghost imaging is proposed to reduce the background imposed by the randomly distributed detector noise in the signal path. Experimental results show that, with an appropriate choice of threshold value, the modified compressive sensing ghost imaging algorithm can dramatically enhance the contrast-to-noise ratio of the object reconstruction compared with traditional ghost imaging and compressive sensing ghost imaging methods. The relationship between the contrast-to-noise ratio of the reconstructed image and the intensity ratio (namely, the ratio of average signal intensity to average noise intensity) is also discussed for the three reconstruction algorithms. This noise-suppression imaging technique has promising applications in remote sensing and security.
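The paper's algorithm is a modified compressive-sensing reconstruction; as a simplified stand-in, the sketch below uses plain correlation ghost imaging plus a bucket-value threshold to illustrate the underlying idea of discarding noise-dominated measurements. The object, pattern count, and noise level are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 16x16 binary object and random speckle patterns
n = 16
obj = np.zeros((n, n))
obj[5:11, 6:10] = 1.0
m = 4000
patterns = rng.random((m, n, n))

# Bucket signals with additive detector noise in the signal path
noise = rng.normal(0.0, 2.0, m)
buckets = patterns.reshape(m, -1) @ obj.ravel() + noise

def correlate(P, B):
    """Plain correlation ghost imaging: <I*B> - <I><B>."""
    return np.tensordot(B - B.mean(), P - P.mean(axis=0), axes=1) / len(B)

g_all = correlate(patterns, buckets)

# Thresholded variant: keep only measurements whose bucket value exceeds
# the mean, suppressing frames dominated by detector noise
keep = buckets > buckets.mean()
g_thr = correlate(patterns[keep], buckets[keep])
```

Both reconstructions recover the object shape; the threshold rule here is a crude proxy for the paper's tuned threshold on the measurement values.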

  9. An evaluation of the sandwich beam compression test method for composites

    Science.gov (United States)

    Shuart, M. J.

    1981-01-01

The sandwich beam in four-point bending is evaluated as a compressive test method for advanced composites. Young's modulus and Poisson's ratio were obtained for graphite/polyimide beam specimens tested at 117 K, room temperature, and 589 K. Tensile elastic properties obtained from the specimens were assumed to equal the compressive elastic properties and were used in the analysis. Strain gages were used to record strain data. A three-dimensional finite-element model was used to examine the effects of the honeycomb core on measured composite mechanical properties. The analysis led to the following conclusions: (1) a nearly uniaxial compressive stress state existed in the top cover, and essentially all of the compressive load was carried by the top cover; (2) laminate orientation, test temperature, and type of honeycomb core material affected the type of beam failure; and (3) the test method can be used to obtain compressive elastic constants over the temperature range 117 to 589 K.

  10. An improved measurement method for large aviation part based on spatial constraint calibration and compression extraction

    Science.gov (United States)

    Zhang, Yang; Yang, Fan; Liu, Wei; Zhang, Zhiyuan; Zhao, Haiyang; Lan, Zhiguang; Gao, Peng; Jia, Zhenyuan

    2017-07-01

Accurate measurement of large aviation parts plays a key role in aircraft assembly. However, limited working space makes both large-field-of-view calibration and accurate surface measurement of large parts hard to achieve. In this paper, an improved measurement method combining a spatial constraint calibration method and a feature compression extraction method is proposed. First, based on the proposed spatial constraint calibration method, the vision system is conveniently and precisely calibrated using the designed SBA and SLT. Images of the scanning laser stripes are captured simultaneously by the calibrated cameras. The proposed feature compression extraction method is then adopted to accurately extract the centers of the laser stripes. Finally, the surface of the part is reconstructed based on the binocular vision principle. The accuracy of the proposed calibration method is verified in the laboratory, and measurements of a standard part demonstrate the validity and precision of the proposed method.

  11. A Method For Producing Hollow Shafts By Rotary Compression Using A Specially Designed Forging Machine

    Directory of Open Access Journals (Sweden)

    Tomczak J.

    2015-09-01

Full Text Available The paper presents a new method for manufacturing hollow shafts, in which tubes are used as the billet. First, the design of a specially designed forging machine for rotary compression is described. The machine is then numerically tested with regard to its strength, and the effect of elastic strains of the roll system on the quality of produced parts is determined. The machine's strength is calculated by the finite element method using the NX Nastran program, and the technological capabilities of the machine are determined. Next, the results of finite element modeling of the rotary compression process for hollow stepped shafts are given; the process was modeled using the Simufact.Forming simulation program. The FEM results are then verified experimentally in the designed forging machine. The experimental results confirm that axisymmetric hollow shafts can be produced by the rotary compression method, and that numerical methods are suitable for investigating both machine design and metal forming processes.

  12. A Reconstructed Discontinuous Galerkin Method for the Compressible Navier-Stokes Equations on Arbitrary Grids

    Energy Technology Data Exchange (ETDEWEB)

    Hong Luo; Luqing Luo; Robert Nourgaliev; Vincent A. Mousseau

    2010-01-01

A reconstruction-based discontinuous Galerkin (RDG) method is presented for the solution of the compressible Navier-Stokes equations on arbitrary grids. The RDG method, originally developed for the compressible Euler equations, is extended to discretize viscous and heat fluxes in the Navier-Stokes equations using a so-called inter-cell reconstruction, where a smooth solution is locally reconstructed using a least-squares method from the underlying discontinuous DG solution. Similar to the recovery-based DG (rDG) methods, this reconstructed DG method eliminates the introduction of ad hoc penalty or coupling terms commonly found in traditional DG methods. Unlike rDG methods, this RDG method does not need to judiciously choose a proper form of a recovered polynomial, and is thus simple, flexible, and robust, and can be used on arbitrary grids. The developed RDG method is used to compute a variety of flow problems on arbitrary meshes to demonstrate its accuracy, efficiency, robustness, and versatility. The numerical results indicate that this RDG method is able to deliver the same accuracy as the well-known Bassi-Rebay II scheme at half of its computing cost for the discretization of the viscous fluxes in the Navier-Stokes equations, clearly demonstrating its superior performance over existing DG methods for solving the compressible Navier-Stokes equations.
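The least-squares reconstruction idea can be sketched in its simplest one-dimensional form: recover a linear (P1) gradient for each cell from the cell means of its neighbors. This is a hypothetical toy version, not the authors' full RDG implementation:

```python
import numpy as np

# Cell centers of a uniform 1D mesh, h = 0.1
x = np.linspace(0.05, 0.95, 10)
u = 3.0 * x + 2.0            # cell means of a smooth (linear) field

def ls_gradient(i):
    """Least-squares slope for cell i fitted through the mean-value
    differences to cells i-1 and i+1."""
    dx = np.array([x[i - 1] - x[i], x[i + 1] - x[i]])
    du = np.array([u[i - 1] - u[i], u[i + 1] - u[i]])
    return float(np.linalg.lstsq(dx[:, None], du, rcond=None)[0][0])

slopes = [ls_gradient(i) for i in range(1, len(x) - 1)]
```

For the linear field the reconstructed slope is exact in every interior cell, which is the consistency property the inter-cell reconstruction relies on.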

  13. A novel method for estimating soil precompression stress from uniaxial confined compression tests

    DEFF Research Database (Denmark)

    Lamandé, Mathieu; Schjønning, Per; Labouriau, Rodrigo

    2017-01-01

… Stress-strain curves were obtained by performing uniaxial, confined compression tests on undisturbed soil cores for three soil types at three soil water potentials. The objectives included: (i) assessing the utility of the numerical method by comparison with the Gompertz method; (ii) comparing the estimated precompression stress to the maximum preload of the test samples; (iii) determining the influence that soil type, bulk density and soil water potential have on the estimated precompression stress. The new method performed better than the Gompertz fitting method in estimating precompression stress. The values of precompression stress obtained from the new method were linearly related to the maximum stress experienced by the soil samples prior to the uniaxial, confined compression test at each soil condition, with a slope close to 1. Precompression stress determined with the new method was not related to soil type or dry bulk density …

  14. Semi-implicit method for three-dimensional compressible MHD simulation

    International Nuclear Information System (INIS)

    Harned, D.S.; Kerner, W.

    1984-03-01

    A semi-implicit method for solving the full compressible MHD equations in three dimensions is presented. The method is unconditionally stable with respect to the fast compressional modes. The time step is instead limited by the slower shear Alfven motion. The computing time required for one time step is essentially the same as for explicit methods. Linear stability limits are derived and verified by three-dimensional tests on linear waves in slab geometry. (orig.)
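The unconditional stability with respect to fast modes can be illustrated on a single oscillatory mode du/dt = iωu: a trapezoidal (implicit) update of the fast term has an amplification factor of magnitude exactly one for any time step, whereas a fully explicit update grows. This is a toy illustration of the principle, not the paper's MHD discretization:

```python
# Amplification factors for one fast wave mode  du/dt = i*omega*u.
# Trapezoidal rule (the implicit ingredient of semi-implicit schemes):
#   u^{n+1} = u^n + (dt/2)*i*omega*(u^n + u^{n+1})
#   => g = (1 + i*omega*dt/2) / (1 - i*omega*dt/2)
def growth_trapezoidal(omega_dt):
    z = 0.5j * omega_dt
    return abs((1 + z) / (1 - z))

def growth_explicit(omega_dt):
    return abs(1 + 1j * omega_dt)   # forward Euler on the same mode

fast_modes = [0.1, 1.0, 10.0, 100.0]        # omega*dt values
g_semi = [growth_trapezoidal(w) for w in fast_modes]
g_expl = [growth_explicit(w) for w in fast_modes]
```

Even at ω·dt = 100, far beyond the explicit stability limit, the trapezoidal factor stays at magnitude one; the fast compressional modes are merely phase-shifted, leaving the time step limited by the slower shear Alfven motion, as the abstract states.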

  15. A lossless compression method for medical image sequences using JPEG-LS and interframe coding.

    Science.gov (United States)

    Miaou, Shaou-Gang; Ke, Fu-Sheng; Chen, Shu-Ching

    2009-09-01

Hospitals and medical centers produce an enormous number of digital medical images every day, especially in the form of image sequences, which require considerable storage space. One solution is the application of lossless compression. Among the available methods, JPEG-LS has excellent coding performance; however, it compresses only a single picture with intra-coding and does not exploit the interframe correlation among pictures. Therefore, this paper proposes a method that combines JPEG-LS with interframe coding based on motion vectors to enhance the compression performance over using JPEG-LS alone. Since the interframe correlation between two adjacent images in a medical image sequence is usually not as high as that in a general video sequence, the interframe coding is activated only when the interframe correlation is high enough. With six capsule endoscope image sequences under test, the proposed method achieves average compression gains of 13.3% and 26.3% over using JPEG-LS and JPEG2000 alone, respectively. Similarly, for an MRI image sequence, coding gains of 77.5% and 86.5% are obtained.
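A toy version of the conditional interframe idea can be sketched as follows, with zlib standing in for the JPEG-LS entropy coder and a hard correlation threshold deciding when difference coding is activated (all names and the test sequence are hypothetical):

```python
import zlib
import numpy as np

def encode_sequence(frames, corr_threshold=0.9):
    """Code the difference to the previous frame only when the two frames
    are highly correlated; otherwise fall back to intra coding."""
    out, prev = [], None
    for f in frames:
        if prev is not None and np.corrcoef(f.ravel(), prev.ravel())[0, 1] > corr_threshold:
            payload = (f.astype(np.int16) - prev.astype(np.int16)).tobytes()
            mode = b"D"                       # difference (inter) frame
        else:
            payload = f.tobytes()
            mode = b"I"                       # intra frame
        out.append(mode + zlib.compress(payload, 9))
        prev = f
    return out

def decode_sequence(coded, shape):
    frames, prev = [], None
    for blob in coded:
        data = zlib.decompress(blob[1:])
        if blob[:1] == b"D":
            diff = np.frombuffer(data, np.int16).reshape(shape)
            f = (prev.astype(np.int16) + diff).astype(np.uint8)
        else:
            f = np.frombuffer(data, np.uint8).reshape(shape)
        frames.append(f)
        prev = f
    return frames

# Hypothetical slowly varying 8-bit image sequence
rng = np.random.default_rng(1)
base = rng.integers(0, 200, (64, 64), dtype=np.uint8)
frames = [(base + k).astype(np.uint8) for k in range(4)]
coded = encode_sequence(frames)
decoded = decode_sequence(coded, (64, 64))
```

Because adjacent frames differ by a near-constant offset, the difference payloads compress to a fraction of the intra-coded size while the round trip remains exactly lossless.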

  16. Diagnostic methods and interpretation of the experiments on microtarget compression in the Iskra-4 device

    International Nuclear Information System (INIS)

    Kochemasov, G.G.

    1992-01-01

Studies on the problem of laser fusion, based mainly on experiments conducted in the Iskra-4 device, are reviewed. Different approaches to the problem of DT-fuel ignition are considered, along with methods for diagnosing the characteristics of the laser radiation and of the plasma produced during microtarget heating and compression.

  17. A MODIFIED EMBEDDED ZERO-TREE WAVELET METHOD FOR MEDICAL IMAGE COMPRESSION

    Directory of Open Access Journals (Sweden)

    T. Celine Therese Jenny

    2010-11-01

Full Text Available The Embedded Zero-tree Wavelet (EZW) is a lossy compression method that allows for progressive transmission of a compressed image. By exploiting the natural zero-trees found in a wavelet-decomposed image, the EZW algorithm is able to encode large portions of the insignificant regions of a still image with a minimal number of bits. The upshot of this encoding is an algorithm that is able to achieve relatively high peak signal-to-noise ratios (PSNR) at high compression levels. Vector Quantization (VQ) can be performed as a post-processing step to reduce the coded file size: VQ reduces the redundancy of the image data so that it can be stored or transmitted in an efficient form. Experimental results demonstrate that the proposed method outperforms several well-known lossless image compression techniques for still images that contain 256 colors or less.

  18. The Effects of Different Curing Methods on the Compressive Strength of Terracrete

    Directory of Open Access Journals (Sweden)

    O. Alake

    2009-01-01

Full Text Available This research evaluated the effects of different curing methods on the compressive strength of terracrete. Sieve analyses were carried out on the constituents of terracrete (granite and laterite) to determine their particle size distribution, and performance tests were carried out to determine the compressive strength of terracrete cubes after 7 to 35 days of curing. Sand, foam-soaked, tank and open methods of curing were used, and the study was carried out under controlled temperature. Sixty 100 × 100 × 100 mm cubes were cast using a mix ratio of 1 part cement, 1½ parts laterite, and 3 parts coarse aggregate (granite), proportioned by weight, with a water-cement ratio of 0.62. The compressive strength results showed that, of the four curing methods, the open method was the best: those cubes gained the highest average compressive strength, 10.3 N/mm², by the 35th day.

  19. High Order Filter Methods for the Non-ideal Compressible MHD Equations

    Science.gov (United States)

    Yee, H. C.; Sjoegreen, Bjoern

    2003-01-01

    The generalization of a class of low-dissipative high order filter finite difference methods for long time wave propagation of shock/turbulence/combustion compressible viscous gas dynamic flows to compressible MHD equations for structured curvilinear grids has been achieved. The new scheme is shown to provide a natural and efficient way for the minimization of the divergence of the magnetic field numerical error. Standard divergence cleaning is not required by the present filter approach. For certain non-ideal MHD test cases, divergence free preservation of the magnetic fields has been achieved.

  20. Divergence Free High Order Filter Methods for the Compressible MHD Equations

    Science.gov (United States)

Yee, H. C.; Sjoegreen, Bjoern

    2003-01-01

The generalization of a class of low-dissipative high order filter finite difference methods for long time wave propagation of shock/turbulence/combustion compressible viscous gas dynamic flows to compressible MHD equations for structured curvilinear grids has been achieved. The new scheme is shown to provide a natural and efficient way for the minimization of the divergence of the magnetic field numerical error. Standard divergence cleaning is not required by the present filter approach. For certain MHD test cases, divergence free preservation of the magnetic fields has been achieved.

  1. Compressive strength evaluation of structural lightweight concrete by non-destructive ultrasonic pulse velocity method.

    Science.gov (United States)

    Bogas, J Alexandre; Gomes, M Glória; Gomes, Augusto

    2013-07-01

In this paper the compressive strength of a wide range of structural lightweight aggregate concrete mixes is evaluated by the non-destructive ultrasonic pulse velocity method. This study involves about 84 different compositions tested between 3 and 180 days for compressive strengths ranging from about 30 to 80 MPa. The influence of several factors on the relation between the ultrasonic pulse velocity and compressive strength is examined. These factors include the cement type and content, amount of water, type of admixture, initial wetting conditions, type and volume of aggregate and the partial replacement of normal weight coarse and fine aggregates by lightweight aggregates. It is found that lightweight and normal weight concretes are affected differently by mix design parameters. In addition, the prediction of the concrete's compressive strength by means of the non-destructive ultrasonic pulse velocity test is studied. Based on the dependence of the ultrasonic pulse velocity on the density and elasticity of concrete, a simplified expression is proposed to estimate the compressive strength, regardless of the type of concrete and its composition. More than 200 results for different types of aggregates and concrete compositions were analyzed and high correlation coefficients were obtained. Copyright © 2012 Elsevier B.V. All rights reserved.
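A simplified strength-velocity correlation of this kind is often expressed as the exponential model fc = a·exp(b·V) and fitted log-linearly. The calibration pairs below are illustrative placeholders, not the paper's data or its proposed expression:

```python
import numpy as np

# Hypothetical calibration pairs: ultrasonic pulse velocity (km/s) vs.
# measured cube compressive strength (MPa); values are illustrative only.
V = np.array([3.8, 4.0, 4.2, 4.4, 4.6, 4.8])
fc = np.array([31.0, 38.5, 47.0, 57.0, 69.5, 83.0])

# Fit fc = a * exp(b * V) by linear regression in log space
b, ln_a = np.polyfit(V, np.log(fc), 1)
a = np.exp(ln_a)

def estimate_strength(v):
    """Estimate compressive strength (MPa) from pulse velocity (km/s)."""
    return a * np.exp(b * v)
```

Once calibrated against destructive cube tests, such a curve lets the non-destructive pulse-velocity reading stand in for a strength estimate.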

  2. About a method for compressing x-ray computed microtomography data

    Science.gov (United States)

    Mancini, Lucia; Kourousias, George; Billè, Fulvio; De Carlo, Francesco; Fidler, Aleš

    2018-04-01

The management of scientific data is of high importance, especially for experimental techniques that produce big data volumes. One such technique is x-ray computed tomography (CT), and its community has introduced advanced data formats which allow for better management of experimental data. Rather than the organization of the data and the associated metadata, the main topic of this work is data compression and its applicability to experimental data collected from a synchrotron-based CT beamline at the Elettra-Sincrotrone Trieste facility (Italy), using images acquired from various types of samples. This study covers parallel-beam geometry, but it could easily be extended to a cone-beam one. The reconstruction workflow used is the one currently in operation at the beamline. Contrary to standard image compression studies, this manuscript proposes a systematic framework and workflow for the critical examination of different compression techniques and does so by applying it to experimental data. Beyond the methodological framework, this study presents and examines the use of JPEG-XR in combination with HDF5 and TIFF formats, providing insights and strategies on data compression and image quality that can be used and implemented at other synchrotron facilities and laboratory systems. In conclusion, projection data compression using JPEG-XR appears to be a promising, efficient method to reduce data file size and thus to facilitate data handling and image reconstruction.

  3. Interaction of high-speed compressible viscous flow and structure by adaptive finite element method

    International Nuclear Information System (INIS)

    Limtrakarn, Wiroj; Dechaumphai, Pramote

    2004-01-01

The interaction between high-speed compressible viscous flow and the thermal-structural response of a structure is presented. The compressible viscous laminar flow behavior, based on the Navier-Stokes equations, is predicted using an adaptive cell-centered finite-element method. The energy equation and the quasi-static structural equations for aerodynamically heated structures are solved by applying the Galerkin finite-element method. The finite-element formulation and computational procedure are described. The performance of the combined method is evaluated by solving Mach 4 flow past a flat plate and comparing with the solution from the finite difference method. To demonstrate the interaction, the high-speed flow, structural heat transfer, and deformation phenomena are studied by applying the present method to Mach 10 flow past a flat plate.

  4. Finite Element Analysis of Increasing Column Section and CFRP Reinforcement Method under Different Axial Compression Ratio

    Science.gov (United States)

    Jinghai, Zhou; Tianbei, Kang; Fengchi, Wang; Xindong, Wang

    2017-11-01

Eight frame joints with fewer stirrups in the core area are simulated with the ABAQUS finite element software. The joints are strengthened by a composite reinforcement method combining carbon fiber and an increased column section, with axial compression ratios of 0.3, 0.45 and 0.6 for the reinforced specimens. Analysis of the load-displacement curves, ductility and stiffness shows that the axial compression ratio has a great influence on the bearing capacity of the increased-column-section strengthening method, and little influence on the carbon fiber reinforcement method. All the strengthening schemes improve the ultimate bearing capacity and ductility of the frame joints to some extent: the composite reinforcement method gives the most significant improvement, followed by increasing the column section, while carbon fiber reinforcement gives the smallest increase.

  5. Comparison between two methods for diagnosis of trichinellosis: trichinoscopy and artificial digestion.

    Science.gov (United States)

    Vignau, M L; del Valle Guardis, M; Risso, M A; Eiras, D F

    1997-01-01

Two direct methods for the diagnosis of trichinellosis were compared: trichinoscopy and artificial digestion. Muscles from 17 Wistar rats, orally infected with 500 Trichinella spiralis encysted larvae, were examined. From 1 g samples of each of the following muscles: diaphragm, tongue, masseters, intercostals, triceps brachialis and quadriceps femoralis, 648,440 larvae were recovered in total. The linear correlation between trichinoscopy and artificial digestion was very high and significant (r = 0.94, p < 0.0001), showing that the two methods for the detection of muscular larvae did not differ significantly. With both methods, significant differences were found in the distribution of larvae per gramme of muscle.

  6. Comparison between Two Methods for Diagnosis of Trichinellosis: Trichinoscopy and Artificial Digestion

    Directory of Open Access Journals (Sweden)

    María Laura Vignau

    1997-09-01

Full Text Available Two direct methods for the diagnosis of trichinellosis were compared: trichinoscopy and artificial digestion. Muscles from 17 Wistar rats, orally infected with 500 Trichinella spiralis encysted larvae, were examined. From 1 g samples of each of the following muscles: diaphragm, tongue, masseters, intercostals, triceps brachialis and quadriceps femoralis, 648,440 larvae were recovered in total. The linear correlation between trichinoscopy and artificial digestion was very high and significant (r = 0.94, p < 0.0001), showing that the two methods for the detection of muscular larvae did not differ significantly. With both methods, significant differences were found in the distribution of larvae per gramme of muscle.
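The reported agreement (r = 0.94) is a Pearson correlation over paired larvae counts from the two methods. A minimal sketch on made-up paired counts (not the study's raw data):

```python
import math

# Hypothetical paired larvae-per-gram counts for the same muscle samples,
# one value per diagnostic method (illustrative numbers only)
trichinoscopy = [120, 310, 95, 480, 260, 150, 400]
digestion     = [135, 300, 110, 500, 240, 170, 430]

def pearson_r(x, y):
    """Pearson linear correlation coefficient of two paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

r = pearson_r(trichinoscopy, digestion)
```

A value of r near 1 on such paired counts is what supports the conclusion that the two detection methods do not differ significantly.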

  7. A blended pressure/density based method for the computation of incompressible and compressible flows

    Science.gov (United States)

    Rossow, C.-C.

    2003-03-01

    An alternative method to low speed preconditioning for the computation of nearly incompressible flows with compressible methods is developed. For this approach the leading terms of the flux difference splitting (FDS) approximate Riemann solver are analyzed in the incompressible limit. In combination with the requirement of the velocity field to be divergence-free, an elliptic equation to solve for a pressure correction to enforce the divergence-free velocity field on the discrete level is derived. The pressure correction equation established is shown to be equivalent to classical methods for incompressible flows. In order to allow the computation of flows at all speeds, a blending technique for the transition from the incompressible, pressure based formulation to the compressible, density based formulation is established. It is found necessary to use preconditioning with this blending technique to account for a remaining "compressible" contribution in the incompressible limit, and a suitable matrix directly applicable to conservative residuals is derived. Thus, a coherent framework is established to cover the discretization of both incompressible and compressible flows. Compared with standard preconditioning techniques, the blended pressure/density based approach showed improved robustness for high lift flows close to separation.
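The elliptic pressure-correction step that enforces a divergence-free velocity can be sketched on a periodic grid with a spectral Poisson solve: compute the divergence, solve lap(p') = div(u), and subtract grad(p'). This is a simplified stand-in for the paper's discrete formulation:

```python
import numpy as np

n, L = 32, 2 * np.pi
xs = np.linspace(0.0, L, n, endpoint=False)
X, Y = np.meshgrid(xs, xs, indexing="ij")

# Angular wavenumbers for spectral differentiation on the periodic box
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                     # avoid 0/0 for the mean mode

def divergence(u, v):
    return np.real(np.fft.ifft2(1j * KX * np.fft.fft2(u) + 1j * KY * np.fft.fft2(v)))

# A velocity field with a deliberately divergent part
u = np.cos(X) * np.sin(Y) + 0.3 * np.sin(X)
v = -np.sin(X) * np.cos(Y) + 0.3 * np.sin(Y)

# Elliptic pressure-correction solve  lap(p') = div(u),  then  u <- u - grad(p')
p_hat = -np.fft.fft2(divergence(u, v)) / K2
p_hat[0, 0] = 0.0
u_corr = np.real(np.fft.ifft2(np.fft.fft2(u) - 1j * KX * p_hat))
v_corr = np.real(np.fft.ifft2(np.fft.fft2(v) - 1j * KY * p_hat))
```

After the projection the divergence vanishes to machine precision, which is exactly the "divergence-free velocity field on the discrete level" that the pressure-correction equation enforces.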

  8. High order accurate and low dissipation method for unsteady compressible viscous flow computation on helicopter rotor in forward flight

    Science.gov (United States)

    Xu, Li; Weng, Peifen

    2014-02-01

An improved fifth-order weighted essentially non-oscillatory (WENO-Z) scheme combined with the moving overset grid technique has been developed to compute unsteady compressible viscous flows on a helicopter rotor in forward flight. In order to enforce the periodic rotation and pitching of the rotor and the relative motion between rotor blades, the moving overset grid technique is extended, with a special judgement criterion introduced near the odd surface of the blade grid during the search for donor cells using the Inverse Map method. The WENO-Z scheme is adopted for reconstructing left and right state values, with the Roe Riemann solver updating the inviscid fluxes, and is compared with the monotone upwind scheme for scalar conservation laws (MUSCL) and the classical WENO scheme. Since the WENO schemes require a six-point stencil to build the fifth-order flux, a method of three layers of fringes for hole boundaries and artificial external boundaries is proposed to carry out flow information exchange between chimera grids. The time advance of the unsteady solution is performed by a fully implicit dual time stepping method with Newton-type LU-SGS subiteration, where the solutions of a pseudo-steady computation serve as the initial fields for the unsteady flow computation. Numerical results on a non-variable-pitch rotor and a periodic variable-pitch rotor in forward flight reveal that the approach can effectively capture the vortex wake with low dissipation and reach periodic solutions quickly.
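At the heart of the scheme, WENO-Z replaces the classical nonlinear weights with the global smoothness indicator τ5 of Borges et al. A sketch of the weight computation for a single five-point stencil (the reconstruction and flux assembly around it are omitted):

```python
import numpy as np

D = np.array([0.1, 0.6, 0.3])   # ideal fifth-order linear weights
EPS = 1e-40

def wenoz_weights(f, q=2):
    """WENO-Z nonlinear weights for a 5-point stencil f[0..4]."""
    # Jiang-Shu smoothness indicators of the three candidate substencils
    b0 = 13/12*(f[0]-2*f[1]+f[2])**2 + 0.25*(f[0]-4*f[1]+3*f[2])**2
    b1 = 13/12*(f[1]-2*f[2]+f[3])**2 + 0.25*(f[1]-f[3])**2
    b2 = 13/12*(f[2]-2*f[3]+f[4])**2 + 0.25*(3*f[2]-4*f[3]+f[4])**2
    beta = np.array([b0, b1, b2])
    tau5 = abs(b0 - b2)          # the global indicator of WENO-Z
    alpha = D * (1.0 + (tau5 / (beta + EPS))**q)
    return alpha / alpha.sum()

w_smooth = wenoz_weights(np.array([1.0, 2.0, 3.0, 4.0, 5.0]))   # linear data
w_shock  = wenoz_weights(np.array([1.0, 1.0, 1.0, 10.0, 10.0])) # jump in stencil
```

On smooth data the weights recover the ideal values (giving the full fifth order with low dissipation), while near a jump nearly all weight shifts to the substencil that avoids the discontinuity.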

  9. Case report of deep vein thrombosis caused by artificial urinary sphincter reservoir compressing right external iliac vein

    Directory of Open Access Journals (Sweden)

    Marcus J Yip

    2015-01-01

Full Text Available Artificial urinary sphincters (AUSs) are commonly used after radical prostatectomy in patients who are incontinent of urine. However, they are associated with complications, the most common being reservoir uprising or migration. We present a unique case of occlusive external iliac and femoral vein obstruction by the AUS reservoir causing thrombosis. Deflation of the reservoir and anticoagulation have, thus far, not been successful at decreasing the thrombus burden. We present this case as a rare but significant surgical complication, explore the risk factors that may have contributed, and discuss potential endovascular therapies to address this previously unreported AUS complication.

  10. Compression embedding

    Science.gov (United States)

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-07-07

A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique, and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table and integer index values existing in a representation of host data created by a lossy compression technique, are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and an uncertainty of value of one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use lossless compression to reduce the intermediate representation as indices to its final size. The efficiency of this lossless compression, also known as entropy coding, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%. 21 figs.
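The core idea, that indices uncertain by one unit can be nudged to their adjacent values to carry auxiliary bits, can be sketched as follows. The index stream and payload are made up; the real patent operates on the intermediate representation of an actual lossy codec:

```python
import numpy as np

def embed_bits(indices, bits):
    """Steer the least-significant bit of each index to hold one auxiliary
    bit, moving the index only to an adjacent value (a change of one unit,
    within the codec's stated uncertainty)."""
    out = np.array(indices, dtype=np.int64)
    for i, bit in enumerate(bits):
        if (out[i] & 1) != bit:
            out[i] += 1 if out[i] % 2 == 0 else -1
    return out

def extract_bits(indices, n):
    """Recover the first n embedded bits from the index stream."""
    return [int(v & 1) for v in indices[:n]]

indices = np.array([12, 7, 3, 30, 21, 8, 15, 4])
payload = [1, 0, 1, 1, 0, 0]
stego = embed_bits(indices, payload)
```

Each carried bit perturbs its index by at most one unit, which is why the reconstruction of the host data is essentially unchanged while the auxiliary data survives the subsequent entropy coding.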

  11. A novel method for designing and fabricating custom-made artificial bones.

    Science.gov (United States)

    Saijo, H; Kanno, Y; Mori, Y; Suzuki, S; Ohkubo, K; Chikazu, D; Yonehara, Y; Chung, U-i; Takato, T

    2011-09-01

    Artificial bones are useful for tissue augmentation in patients with facial deformities or defects. Custom-made artificial bones, produced by mirroring the bone structure on the healthy side using computer-aided design, have been used. This method is simple, but has limited ability to recreate detailed structures. The authors have invented a new method for designing artificial bones, better customized for the needs of individual patients. Based on CT data, three-dimensional (3D) simulation models were prepared using an inkjet printer using plaster. The operators applied a special radiopaque paraffin wax to the models to create target structures. The wax contained a contrast medium to render it radiopaque. The concentration was adjusted to achieve easy manipulation and consistently good-quality images. After the radiopaque wax was applied, the 3D simulation models were reexamined by CT, and data on the target structures were obtained. Artificial bones were fabricated by the inkjet printer based on these data. Although this new technique for designing artificial bones is slightly more complex than the conventional methods, and the status of soft tissue should also be considered for an optimal aesthetic outcome, the results suggest that this method better meets the requirements of individual patients. Copyright © 2011. Published by Elsevier Ltd.

  12. Prediction of Human Vertebral Compressive Strength Using Quantitative Computed Tomography Based Nonlinear Finite Element Method

    Directory of Open Access Journals (Sweden)

    Ahad Zeinali

    2007-12-01

Full Text Available Introduction: Because of the important role of vertebral compressive fracture (VCF) in increasing patients' death rate and reducing their quality of life, many studies have sought a noninvasive prediction of vertebral compressive strength based on bone mineral density (BMD) determination and, more recently, finite element analysis. In this study, a QCT-voxel based nonlinear finite element method is used for predicting vertebral compressive strength. Material and Methods: Four thoracolumbar vertebrae were excised from 3 cadavers with an average age of 42 years. They were then put in a water phantom and scanned using QCT. Using a computer program prepared in MATLAB, detailed voxel-based geometry and mechanical characteristics of each vertebra were extracted from the CT images. The three-dimensional finite element models of the samples were created using the ANSYS program. The compressive strength of each vertebral body was calculated based on a linearly elastic-linearly plastic model and large-deformation analysis in ANSYS and was compared to the value measured experimentally for that sample. Results: Based on the obtained results, the QCT-voxel based nonlinear finite element method (FEM) can predict vertebral compressive strength more effectively and accurately than the common QCT-voxel based linear FEM. The difference between the predicted strength values using this method and the measured ones was less than 1 kN for all the samples. Discussion and Conclusion: It seems that the QCT-voxel based nonlinear FEM used in this study can predict vertebral strengths more effectively and accurately by considering the detailed geometric and densitometric characteristics of each vertebra.

  13. An infrared small target detection method based on nonnegative matrix factorization and compressed sensing

    Science.gov (United States)

    Chen, Qiwei; Wang, Yiming

    2017-07-01

Exploiting the low-rank property of the background and the sparse features of the target in infrared images, a novel infrared small target detection method based on nonnegative matrix factorization (NMF) and compressed sensing technology is presented in this paper. The method trains a background model through NMF and then samples the infrared image sequence directly using block compressed sensing. Through the alternating direction method of multipliers (ADMM), the infrared small target is extracted and the background is recovered from the image. At the same time, the background model is updated online to adapt to changes in the background. Simulation results show that the proposed method can detect the infrared target precisely and efficiently.
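A minimal sketch of the background-modeling half of this pipeline, assuming a rank-1 static background and using basic Lee-Seung multiplicative updates rather than the paper's NMF/ADMM formulation; the sparse target then shows up in the residual:

```python
import numpy as np

rng = np.random.default_rng(3)

def nmf(V, r, iters=300):
    """Minimal multiplicative-update NMF (Lee-Seung rules), V ≈ W @ H."""
    W = rng.random((V.shape[0], r)) + 0.1
    H = rng.random((r, V.shape[1])) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# Hypothetical sequence (pixels x frames): rank-1 static background,
# sensor noise, and one small bright target in a single frame
n_pix, n_frames = 400, 20
V = np.outer(50 * rng.random(n_pix) + 5, np.ones(n_frames))
V += rng.normal(0.0, 1.0, V.shape)
V[137, 10] += 40.0                 # target in frame 10 at pixel 137
V = np.clip(V, 0.0, None)

W, H = nmf(V, r=1)                 # low-rank background model
residual = V - W @ H               # sparse part: target + noise
target_mask = residual > 5.0 * residual.std()
```

The low-rank factorization absorbs the background, so thresholding the residual isolates the sparse target, which is the separation principle the NMF/compressed-sensing method builds on.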

  14. Numerical simulation of compressible two-phase flow using a diffuse interface method

    International Nuclear Information System (INIS)

    Ansari, M.R.; Daramizadeh, A.

    2013-01-01

    Highlights: ► Compressible two-phase gas–gas and gas–liquid flow simulations are conducted. ► Interface conditions include shock waves and cavitation. ► A high-resolution diffuse interface method is investigated. ► The numerical results exhibit very good agreement with experimental results. -- Abstract: In this article, a high-resolution diffuse interface method is investigated for the simulation of compressible two-phase gas–gas and gas–liquid flows, both in the presence of shock waves and in flows with strong rarefaction waves similar to cavitation. A Godunov method with an HLLC Riemann solver is used to discretize the Kapila five-equation model, and a modified Schmidt equation of state (EOS) is used to simulate the cavitation regions. This method is applied successfully to several one- and two-dimensional compressible two-phase flows with interface conditions that involve shock waves and cavitation. The numerical results exhibit very good agreement with experimental results, as well as with previous numerical results obtained by other researchers using other numerical methods. In particular, the algorithm captures complex transient shock flow features, such as material discontinuities and interfacial instabilities, without oscillation or additional diffusion. Numerical examples show that the results of the method presented here compare well with those of more sophisticated modeling approaches such as adaptive mesh refinement (AMR) and local mesh refinement (LMR) for one- and two-dimensional problems.

  15. Artificial intelligence methods applied for quantitative analysis of natural radioactive sources

    International Nuclear Information System (INIS)

    Medhat, M.E.

    2012-01-01

    Highlights: ► Basic description of artificial neural networks. ► Natural gamma-ray sources and the problem of their detection. ► Application of a neural network to peak detection and activity determination. - Abstract: The artificial neural network (ANN) is one of the artificial intelligence methods used for modeling and handling uncertainty in different applications. The objective of the proposed work was to apply ANNs to identify isotopes and to predict the uncertainties of their activities for some natural radioactive sources. The method was tested by analyzing gamma-ray spectra emitted from natural radionuclides in soil samples, detected by high-resolution gamma-ray spectrometry based on HPGe (high-purity germanium). The principle of the suggested method is described, including the definition of the relevant input parameters, input data scaling, and network training. The results show satisfactory agreement between measured and predicted values using the neural network.

  16. Lossless compression of AVIRIS data: Comparison of methods and instrument constraints

    Science.gov (United States)

    Roger, R. E.; Arnold, J. F.; Cavenor, M. C.; Richards, J. A.

    1992-01-01

    A family of lossless compression methods, allowing exact image reconstruction, is evaluated for compressing Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) image data. The methods are based on Differential Pulse Code Modulation (DPCM). The compressed data have an entropy of order 6 bits/pixel. A theoretical model indicates that significantly better lossless compression is unlikely to be achieved because of limits imposed by the noise in the AVIRIS channels. AVIRIS data differ from data produced by other visible/near-infrared sensors, such as LANDSAT-TM or SPOT, in several ways. Firstly, the data are recorded at a greater resolution (12 bits, though packed into 16-bit words). Secondly, the spectral channels are relatively narrow and provide continuous coverage of the spectrum, so that data in adjacent channels are generally highly correlated. Thirdly, the noise characteristics of AVIRIS are defined by the channels' Noise Equivalent Radiances (NERs), and these NERs show that, at some wavelengths, the least significant 5 or 6 bits of data are essentially noise.
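The DPCM scheme the record refers to can be sketched in a few lines: each sample is predicted from its predecessor and only the residual is stored, so highly correlated data (like adjacent AVIRIS channels) yield small residuals that an entropy coder can pack into fewer bits. A minimal illustrative sketch, not the AVIRIS implementation; the sample values are made up:

```python
def dpcm_encode(samples):
    """Encode samples as (first value, residuals from previous-sample prediction)."""
    if not samples:
        return None, []
    residuals = [samples[i] - samples[i - 1] for i in range(1, len(samples))]
    return samples[0], residuals

def dpcm_decode(first, residuals):
    """Exactly invert dpcm_encode: a cumulative sum restores the original samples."""
    if first is None:
        return []
    samples = [first]
    for r in residuals:
        samples.append(samples[-1] + r)
    return samples

# Correlated data gives small residuals, which an entropy coder can store
# in fewer bits than the raw 12-bit values.
raw = [2048, 2050, 2049, 2053, 2060, 2058, 2061]
first, res = dpcm_encode(raw)
assert dpcm_decode(first, res) == raw          # lossless round trip
print(res)  # -> [2, -1, 4, 7, -2, 3]: residuals cluster near zero
```

The round-trip assertion is the "lossless" property: integer residuals reconstruct the image exactly, and the entropy limit cited in the abstract applies to the residual stream.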

  17. An Improved Ghost-cell Immersed Boundary Method for Compressible Inviscid Flow Simulations

    KAUST Repository

    Chi, Cheng

    2015-05-01

    This study presents an improved ghost-cell immersed boundary approach to represent a solid body in compressible flow simulations. In contrast to the commonly used approaches, in the present work ghost cells are mirrored through the boundary, described using a level-set method, to farther image points, incorporating a higher-order extra/interpolation scheme for the ghost cell values. In addition, a shock sensor is introduced to deal with image points near discontinuities in the flow field. Adaptive mesh refinement (AMR) is used to improve the representation of the geometry efficiently. The improved ghost-cell method is validated against five test cases: (a) double Mach reflections on a ramp, (b) supersonic flows in a wind tunnel with a forward-facing step, (c) supersonic flows over a circular cylinder, (d) smooth Prandtl-Meyer expansion flows, and (e) steady shock-induced combustion over a wedge. It is demonstrated that the improved ghost-cell method can reach second-order accuracy in the L1 norm and higher than first-order accuracy in the L∞ norm. Direct comparisons against the cut-cell method demonstrate that the improved ghost-cell method is almost equally accurate, with better efficiency for boundary representation in high-fidelity compressible flow simulations. Implementation of the improved ghost-cell method in reacting Euler flows further validates its general applicability for compressible flow simulations.
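The mirroring step described above, reflecting a ghost cell across a level-set boundary to an image point, reduces to simple geometry. A minimal sketch under assumed conventions (signed distance negative inside the solid, circular boundary), not the paper's higher-order scheme:

```python
import math

def level_set_circle(x, y, R=1.0):
    """Signed distance to a circle of radius R: negative inside the solid."""
    return math.hypot(x, y) - R

def mirror_ghost(x, y, phi, h=1e-6):
    """Mirror a ghost point across the phi = 0 boundary to its image point."""
    # Outward normal from the level-set gradient (central differences).
    nx = (phi(x + h, y) - phi(x - h, y)) / (2 * h)
    ny = (phi(x, y + h) - phi(x, y - h)) / (2 * h)
    norm = math.hypot(nx, ny)
    nx, ny = nx / norm, ny / norm
    d = phi(x, y)                      # negative inside the solid
    # Step twice the signed distance along the normal: x_image = x - 2*d*n.
    return x - 2 * d * nx, y - 2 * d * ny

# A ghost cell at radius 0.8 inside a unit circle mirrors to radius 1.2.
ix, iy = mirror_ghost(0.8, 0.0, level_set_circle)
assert abs(math.hypot(ix, iy) - 1.2) < 1e-4
```

Flow values are then extra/interpolated at the image point and reflected back onto the ghost cell; the paper's contribution is doing that mirroring to farther image points with a higher-order scheme and a shock sensor.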

  18. Application of artificial intelligence methods for prediction of steel mechanical properties

    Directory of Open Access Journals (Sweden)

    Z. Jančíková

    2008-10-01

    Full Text Available The target of the contribution is to outline the possibilities of applying artificial neural networks to the prediction of the mechanical properties of steel after heat treatment and to judge their prospective use in this field. The achieved models enable the prediction of final mechanical material properties on the basis of the decisive parameters influencing these properties. By applying artificial intelligence methods in combination with mathematical-physical analysis methods, it will be possible to create facilities for designing a system of continuous rationalization of existing and newly developed industrial technologies.

  19. Computational three-dimensional imaging method of compressive LIDAR system with gain modulation

    Science.gov (United States)

    Zhang, Yan-mei; An, Yu-long

    2017-11-01

    The distance resolution of 3D LIDAR imaging is largely decided by the pulse duration of the laser source and the rise time of the detector. Considering that breaking these limits enables low-cost systems, we present a computational method of 3D imaging with a compressive LIDAR system. Based on the theory of compressive sensing, reflected pulses are acquired by a single-pixel detector and intensity maps are reconstructed by the TVAL3 algorithm. Moreover, the distance information of each pixel can be calculated from the reconstructed intensity maps using gain modulation technology. Simulations were carried out to validate the effectiveness of our method. Convincing computational results show that our method is capable of achieving 3D imaging at low cost.
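The gain-modulation ranging idea can be illustrated with a toy model: if the detector gain ramps linearly in time, the ratio of the gain-modulated return to an unmodulated return of the same pixel encodes the time of flight, and hence the distance. A hypothetical sketch (the linear ramp and all numbers are assumptions, not the paper's system):

```python
C = 3.0e8  # speed of light, m/s

def range_from_gain_ratio(i_mod, i_unmod, ramp_slope):
    """Recover per-pixel distance from gain-modulated / unmodulated intensities.

    With a linear gain ramp G(t) = ramp_slope * t, the modulated return is
    i_mod = R * G(t_flight) and the unmodulated return is i_unmod = R, so
    t_flight = i_mod / (i_unmod * ramp_slope) and d = C * t_flight / 2.
    """
    t_flight = i_mod / (i_unmod * ramp_slope)
    return C * t_flight / 2

# Simulate a pixel 150 m away: round trip takes 1 microsecond.
slope = 1.0e6                       # gain units per second (hypothetical)
reflectivity = 0.37                 # cancels out in the ratio
t_flight = 2 * 150.0 / C            # 1e-6 s
i_unmod = reflectivity
i_mod = reflectivity * slope * t_flight
d = range_from_gain_ratio(i_mod, i_unmod, slope)
assert abs(d - 150.0) < 1e-6
```

Because the reflectivity cancels in the ratio, the same arithmetic applies pixel by pixel to the two reconstructed intensity maps.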

  20. A time-domain method to generate artificial time history from a given reference response spectrum

    International Nuclear Information System (INIS)

    Shin, Gang Sik; Song, Oh Seop

    2016-01-01

    Seismic qualification by test is widely used as a way to show the integrity and functionality of equipment that is related to the overall safety of nuclear power plants. Another means of seismic qualification is direct integration analysis. Both approaches require a series of time histories as input. However, in most cases, the possibility of using real earthquake data is limited, so artificial time histories are widely used instead. In many cases, however, only response spectra are given, and most artificial time histories are therefore generated from given response spectra. Obtaining the response spectrum from a given time history is straightforward. However, the procedure for generating artificial time histories from a given response spectrum is difficult and complex to understand. Thus, this paper presents a simple time-domain method for generating a time history from a given response spectrum; the method was shown to satisfy conditions derived from nuclear regulatory guidance.
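One common time-domain approach, not necessarily the paper's exact procedure, synthesizes a sum of sinusoids and iteratively rescales each amplitude until the computed response spectrum matches the target. A minimal sketch with assumed frequencies, damping, and target spectral values:

```python
import math

def sdof_peak_pseudo_accel(accel, dt, freq, zeta=0.05):
    """Peak pseudo-acceleration of a damped SDOF oscillator under base
    acceleration `accel`, via central-difference time stepping."""
    w = 2 * math.pi * freq
    x_prev, x, peak = 0.0, 0.0, 0.0
    for a in accel:
        x_next = (-a + (2 / dt**2 - w * w) * x
                  + (zeta * w / dt - 1 / dt**2) * x_prev) / (1 / dt**2 + zeta * w / dt)
        x_prev, x = x, x_next
        peak = max(peak, abs(x))
    return w * w * peak

def match_spectrum(freqs, targets, duration=10.0, dt=0.005, iters=20):
    """Sum of sinusoids whose amplitudes are iteratively scaled so the
    computed response spectrum approaches the target spectrum."""
    n = int(duration / dt)
    phases = [0.7 * k for k in range(len(freqs))]   # fixed, arbitrary phases
    amps = list(targets)                            # crude starting guess
    for _ in range(iters):
        accel = [sum(A * math.sin(2 * math.pi * f * i * dt + p)
                     for A, f, p in zip(amps, freqs, phases))
                 for i in range(n)]
        spec = [sdof_peak_pseudo_accel(accel, dt, f) for f in freqs]
        amps = [A * t / s for A, t, s in zip(amps, targets, spec)]
    return accel, spec

freqs, targets = [1.0, 2.0, 4.0], [1.0, 1.2, 0.8]
accel, spec = match_spectrum(freqs, targets)
for t, s in zip(targets, spec):
    assert abs(s - t) / t < 0.1   # spectrum matched to within 10%
```

Because each oscillator responds mainly to the sinusoid near its own frequency, the multiplicative rescaling converges quickly for well-separated spectral points; real spectrum-matching codes add random phases, envelopes, and many more frequency points.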

  1. Methods for determining the carrying capacity of eccentrically compressed concrete elements

    Directory of Open Access Journals (Sweden)

    Starishko Ivan Nikolaevich

    2014-04-01

    Full Text Available The author presents the results of calculations of eccentrically compressed elements in the ultimate limit state of bearing capacity, taking into account all possible stresses in the longitudinal reinforcement from the R to the R, caused by different values of the eccentricity of the longitudinal force. The method of calculation is based on the simultaneous solution of the equilibrium equations of the longitudinal and internal forces together with the equilibrium equations of bending moments in the ultimate limit state of the normal sections. The simultaneous solution of these equations, together with additional equations reflecting the stress-strain limit state of the elements, leads to a cubic equation with respect to the height of the uncracked concrete, or with respect to the carrying capacity. According to the author, this is a significant advantage over the existing methods, in which the equilibrium equations of longitudinal forces yield one value of the height and the equilibrium equations of bending moments yield another. The author's theoretical studies and the calculated examples in this article show that, as the eccentricity of the longitudinal force decreases in the limiting state of eccentrically compressed concrete elements, the height of the uncracked concrete increases, the stress in the longitudinal reinforcement of the tension area passes gradually (not abruptly) from tension to compression, and the load-bearing capacity of the elements increases, which is also confirmed by the experimental results. The calculations developed by the author cover 4 cases of eccentric compression, instead of the 2 set out in the regulations, and thus span the entire spectrum of possible stress-strain limit states of elements, complying with the European standards for reinforced concrete, in particular Eurocode 2 (2003).
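The cubic equation mentioned above, whatever its coefficients turn out to be for a given section, can be solved robustly by bracketing and bisection. A sketch with purely hypothetical coefficients, chosen only so that a root lies inside the section depth:

```python
def solve_cubic(a, b, c, d, lo, hi, tol=1e-10):
    """Bisection root of a*x**3 + b*x**2 + c*x + d on [lo, hi] (sign change assumed)."""
    f = lambda x: ((a * x + b) * x + c) * x + d
    assert f(lo) * f(hi) < 0, "bracket must straddle the root"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Hypothetical coefficients standing in for the equilibrium-derived cubic;
# the root is the uncracked-concrete height x (here in mm, bracketed by a
# section depth of 500 mm). This cubic was built to have its root at x = 200.
x = solve_cubic(1.0, -190.0, 98000.0, -2.0e7, 1.0, 499.0)
assert abs(x - 200.0) < 1e-6
```

Bisection is slow but unconditionally convergent once the physical bounds (0 < x < section depth) bracket a sign change, which is why it is a safe default for equilibrium equations of this kind.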

  2. A model and numerical method for compressible flows with capillary effects

    Energy Technology Data Exchange (ETDEWEB)

    Schmidmayer, Kevin, E-mail: kevin.schmidmayer@univ-amu.fr; Petitpas, Fabien, E-mail: fabien.petitpas@univ-amu.fr; Daniel, Eric, E-mail: eric.daniel@univ-amu.fr; Favrie, Nicolas, E-mail: nicolas.favrie@univ-amu.fr; Gavrilyuk, Sergey, E-mail: sergey.gavrilyuk@univ-amu.fr

    2017-04-01

    A new model for interface problems with capillary effects in compressible fluids is presented together with a specific numerical method to treat capillary flows and pressure waves propagation. This new multiphase model is in agreement with physical principles of conservation and respects the second law of thermodynamics. A new numerical method is also proposed where the global system of equations is split into several submodels. Each submodel is hyperbolic or weakly hyperbolic and can be solved with an adequate numerical method. This method is tested and validated thanks to comparisons with analytical solutions (Laplace law) and with experimental results on droplet breakup induced by a shock wave.

  3. Data Collection Method for Mobile Control Sink Node in Wireless Sensor Network Based on Compressive Sensing

    Directory of Open Access Journals (Sweden)

    Ling Yongfa

    2016-01-01

    Full Text Available This paper proposes a data collection method for a mobile control sink node in a wireless sensor network based on compressive sensing. The method, which follows a regular track, selects the optimal data collection points in the monitoring area via the disc method, calculates the shortest path using a quantum genetic algorithm, and hence determines the data collection route. Simulation results show that this method achieves higher network throughput and better energy efficiency, and is capable of collecting a large amount of data with balanced energy consumption in the network.

  4. A novel artificial intelligence method for weekly dietary menu planning.

    Science.gov (United States)

    Gaál, B; Vassányi, I; Kozmann, G

    2005-01-01

    Menu planning is an important part of personalized lifestyle counseling. The paper describes the results of an automated menu generator (MenuGene) of the web-based lifestyle counseling system Cordelia, which provides personalized advice to prevent cardiovascular diseases. The menu generator uses genetic algorithms to prepare weekly menus for web users. The objectives are derived from personal medical data collected via forms in Cordelia, combined with general nutritional guidelines. The weekly menu is modeled as a multilevel structure. Results show that the genetic algorithm-based method succeeds in planning dietary menus that satisfy strict numerical constraints on every nutritional level (meal, daily, weekly). The rule-based assessment proved capable of manipulating the mean occurrence of the nutritional components, thus providing a method for adjusting the variety and harmony of the menu plans. By splitting the problem into well-determined sub-problems, weekly menu plans that satisfy nutritional constraints and have well-assorted components can be generated with the same method that is used for daily and meal plan generation.

  5. Three dimensional simulation of compressible and incompressible flows through the finite element method

    International Nuclear Information System (INIS)

    Costa, Gustavo Koury

    2004-11-01

    Although incompressible fluid flows can be regarded as a particular case of a general problem, the numerical methods and mathematical formulations aimed at solving compressible and incompressible flows have their own peculiarities, in such a way that it is generally not possible to handle both regimes with a single approach. In this work, we start from a typically compressible formulation, slightly modified to make use of pressure variables, and, by augmenting the stabilising parameters, we end up with a simplified model which is able to deal with a wide range of flow regimes, from supersonic to low-speed gas flows. The resulting methodology is flexible enough to allow for the simulation of liquid flows as well. Examples using conservative and pressure variables are shown and the results are compared to those published in the literature, in order to validate the method. (author)

  6. Artificial Intelligence Methods Applied to Parameter Detection of Atrial Fibrillation

    Science.gov (United States)

    Arotaritei, D.; Rotariu, C.

    2015-09-01

    In this paper we present a novel method to detect atrial fibrillation (AF) based on statistical descriptors and a hybrid neuro-fuzzy and crisp system. The inference of the system produces rules of type if-then-else that are extracted to construct a binary decision system: normal or atrial fibrillation. We use TPR (Turning Point Ratio), SE (Shannon Entropy) and RMSSD (Root Mean Square of Successive Differences), along with a new descriptor, the Teager-Kaiser energy, in order to improve the accuracy of detection. The descriptors are calculated over a sliding window that produces a very large number of vectors (a massive dataset) used by the classifier. The length of the window is a crisp descriptor, while the rest of the descriptors are interval-valued. The parameters of the hybrid system are adapted using a Genetic Algorithm (GA) with a single-objective fitness target: the highest values of sensitivity and specificity. The rules are extracted and form part of the decision system. The proposed method was tested using the PhysioNet MIT-BIH Atrial Fibrillation Database and the experimental results revealed a good accuracy of AF detection in terms of sensitivity and specificity (above 90%).
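The three statistical descriptors named above have compact definitions. A sketch with made-up RR-interval data; the window length and histogram bin count are assumptions, not the paper's settings:

```python
import math

def rmssd(rr):
    """Root mean square of successive RR-interval differences."""
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def turning_point_ratio(rr):
    """Fraction of interior points that are local maxima or minima."""
    turns = sum(1 for a, b, c in zip(rr, rr[1:], rr[2:])
                if (b > a and b > c) or (b < a and b < c))
    return turns / (len(rr) - 2)

def shannon_entropy(rr, bins=8):
    """Entropy (bits) of the histogram of RR intervals."""
    lo, hi = min(rr), max(rr)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for v in rr:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    probs = [c / len(rr) for c in counts if c]
    return -sum(p * math.log2(p) for p in probs)

# Irregular (AF-like) intervals score higher on all three descriptors
# than a regular rhythm (RR intervals in seconds, fabricated for the demo).
regular = [0.80, 0.81, 0.80, 0.79, 0.80, 0.81, 0.80, 0.79]
irregular = [0.62, 0.95, 0.70, 1.10, 0.55, 0.88, 0.64, 1.02]
assert rmssd(irregular) > rmssd(regular)
assert turning_point_ratio(irregular) >= turning_point_ratio(regular)
assert shannon_entropy(irregular) > shannon_entropy(regular)
```

A classifier like the paper's hybrid system consumes vectors of such descriptors computed over each sliding window.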

  7. The moderate resolution imaging spectrometer: An EOS facility instrument candidate for application of data compression methods

    Science.gov (United States)

    Salomonson, Vincent V.

    1991-01-01

    The Moderate Resolution Imaging Spectrometer (MODIS) observing facility will operate on the Earth Observing System (EOS) in the late 1990's. It is estimated that this observing facility will produce over 200 gigabytes of data per day requiring a storage capability of just over 300 gigabytes per day. Archiving, browsing, and distributing the data associated with MODIS represents a rich opportunity for testing and applying both lossless and lossy data compression methods.

  8. Using simple artificial intelligence methods for predicting amyloidogenesis in antibodies

    Science.gov (United States)

    2010-01-01

    Background All polypeptide backbones have the potential to form amyloid fibrils, which are associated with a number of degenerative disorders. However, the likelihood that amyloidosis would actually occur under physiological conditions depends largely on the amino acid composition of a protein. We explore using a naive Bayesian classifier and a weighted decision tree for predicting the amyloidogenicity of immunoglobulin sequences. Results The average accuracy based on leave-one-out (LOO) cross validation of a Bayesian classifier generated from 143 amyloidogenic sequences is 60.84%. This is consistent with the average accuracy of 61.15% for a holdout test set comprised of 103 amyloidogenic (AM) and 28 non-amyloidogenic sequences. The LOO cross validation accuracy increases to 81.08% when the training set is augmented by the holdout test set. In comparison, the average classification accuracy for the holdout test set obtained using a decision tree is 78.64%. Non-amyloidogenic sequences are predicted with average LOO cross validation accuracies between 74.05% and 77.24% using the Bayesian classifier, depending on the training set size. The accuracy for the holdout test set was 89%. For the decision tree, the non-amyloidogenic prediction accuracy is 75.00%. Conclusions This exploratory study indicates that both classification methods may be promising in providing straightforward predictions on the amyloidogenicity of a sequence. Nevertheless, the number of available sequences that satisfy the premises of this study is limited, and is consequently smaller than the ideal training set size. Increasing the size of the training set clearly increases the accuracy, and the expansion of the training set to include not only more derivatives, but more alignments, would make the method more sound. The accuracy of the classifiers may also be improved when additional factors, such as structural and physico-chemical data, are considered.
The development of this type of classifier has significant

  9. Survey of artificial intelligence methods for detection and identification of component faults in nuclear power plants

    International Nuclear Information System (INIS)

    Reifman, J.

    1997-01-01

    A comprehensive survey of computer-based systems that apply artificial intelligence methods to detect and identify component faults in nuclear power plants is presented. Classification criteria are established that categorize artificial intelligence diagnostic systems according to the types of computing approaches used (e.g., computing tools, computer languages, and shell and simulation programs), the types of methodologies employed (e.g., types of knowledge, reasoning and inference mechanisms, and diagnostic approach), and the scope of the system. The major issues of process diagnostics and computer-based diagnostic systems are identified and cross-correlated with the various categories used for classification. Ninety-five publications are reviewed

  10. [Evaluation of artificial digestion method on inspection of meat for Trichinella spiralis contamination and influence of the method on muscle larvae recovery].

    Science.gov (United States)

    Wang, Guo-Ying; Du, Jing-Fang; Dun, Guo-Qing; Sun, Wei-Li; Wang, Jin-Xi

    2011-04-01

    To evaluate the effect of the artificial digestion method on the inspection of meat for Trichinella spiralis contamination and its influence on the activity and infectivity of muscle larvae, mice were inoculated orally with 100 muscle larvae of T. spiralis and sacrificed on the 30th day following infection. The muscle larvae of T. spiralis were recovered by three different test protocols employing variations of the artificial digestion method: the first protocol with digestion for 2 hours (magnetic stirrer method), the second with digestion for 12 hours, and the third with digestion for 20 hours. Each test group included ten samples, each of which included 300 encapsulated larvae. Meanwhile, the activity of the recovered muscle larvae was also assessed. Forty mice were randomly divided into a control group and three digestion groups, i.e., 4 groups in total (with 10 mice per group). In the control group, each mouse was orally inoculated with 100 encapsulated larvae of T. spiralis. In each digestion group, each mouse was orally inoculated with 100 muscle larvae recovered by the corresponding variation of the artificial digestion protocol. All the infected mice were sacrificed on the 30th day following infection, and the muscle larvae of T. spiralis were examined by the diaphragm compression method and the magnetic stirrer method, respectively. The muscle larvae detection rates were 78.47%, 76.73%, and 68.63%, the death rates were 0.59%, 4.60%, and 7.43%, and the reduction rates were 60.56%, 61.94%, and 73.07% in Test Group One (2-hour digestion), Test Group Two (12-hour digestion), and Test Group Three (20-hour digestion), respectively. The magnetic stirrer method (2-hour digestion) is superior to both the 12-hour and 20-hour digestion methods as assessed by the detection rate, activity, and infectivity of muscle larvae.

  11. Parallel spectral methods and applications to compressible mixing layer simulations

    OpenAIRE

    Male, Jean-Michel; Fezoui, Loula

    1993-01-01

    Solving the Navier-Stokes equations with spectral methods for compressible flows can be quite demanding in computing time. We therefore study the parallelization of such an algorithm and its implementation on a massively parallel machine, the Connection Machine CM-2. The spectral method adapts well to the requirements of massive parallelism, but one of the basic tools of this method, the fast Fourier transform (when it has to be applied along the two dime...
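The abstract singles out the fast Fourier transform as a basic tool of the spectral method. A minimal serial radix-2 Cooley-Tukey FFT, nothing like the CM-2 parallel implementation, just the underlying algorithm:

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    # Twiddle factors combine the two half-size transforms.
    twiddle = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + twiddle[k] for k in range(n // 2)] +
            [even[k] - twiddle[k] for k in range(n // 2)])

# A pure cosine at 2 cycles per frame puts all its energy in bins 2 and n-2.
n = 8
signal = [cmath.cos(2 * cmath.pi * 2 * t / n).real for t in range(n)]
spectrum = fft(signal)
assert abs(spectrum[2] - 4.0) < 1e-9 and abs(spectrum[6] - 4.0) < 1e-9
assert all(abs(spectrum[k]) < 1e-9 for k in (0, 1, 3, 4, 5))
```

The recursion splits even and odd samples, which is precisely the data-movement pattern that makes multi-dimensional FFTs the communication bottleneck on massively parallel machines such as the CM-2.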

  12. A Parallel Reconstructed Discontinuous Galerkin Method for the Compressible Flows on Arbitrary Grids

    Energy Technology Data Exchange (ETDEWEB)

    Hong Luo; Amjad Ali; Robert Nourgaliev; Vincent A. Mousseau

    2010-01-01

    A reconstruction-based discontinuous Galerkin (RDG) method is presented for the solution of the compressible Navier-Stokes equations on arbitrary grids. In this method, an in-cell reconstruction is used to obtain a higher-order polynomial representation of the underlying discontinuous Galerkin polynomial solution, and an inter-cell reconstruction is used to obtain a continuous polynomial solution on the union of two neighboring, interface-sharing cells. The in-cell reconstruction is designed to enhance the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. The inter-cell reconstruction is devised to remove the interface discontinuity of the solution and its derivatives and thus to provide a simple, accurate, consistent, and robust approximation of the viscous and heat fluxes in the Navier-Stokes equations. A parallel strategy is also devised for the resulting RDG method, based on domain partitioning and the Single Program Multiple Data (SPMD) parallel programming model. The RDG method is used to compute a variety of compressible flow problems on arbitrary meshes to demonstrate its accuracy, efficiency, robustness, and versatility. The numerical results demonstrate that this RDG method is third-order accurate at a cost only slightly higher than that of its underlying second-order DG method, while providing better performance than the third-order DG method in terms of both computing costs and storage requirements.

  13. A Novel Coherence Reduction Method in Compressed Sensing for DOA Estimation

    Directory of Open Access Journals (Sweden)

    Jing Liu

    2013-01-01

    Full Text Available A novel method, named the coherent column replacement method, is proposed to reduce the coherence of a partially deterministic sensing matrix, which is comprised of highly coherent columns and random Gaussian columns. The proposed method replaces the highly coherent columns with random Gaussian columns to obtain a new sensing matrix, and the measurement vector is changed accordingly. It is proved that the original sparse signal can be reconstructed well from the newly changed measurement vector based on the new sensing matrix with large probability. This method is then extended to a more practical condition in which both highly coherent columns and incoherent columns are present, for example, the direction of arrival (DOA) estimation problem in a phased array radar system using compressed sensing. Numerical simulations show that the proposed method succeeds in identifying multiple targets in a sparse radar scene where the compressed sensing method based on the original sensing matrix fails. The proposed method also obtains more precise DOA estimates using one snapshot than traditional estimation methods such as Capon, APES, and GLRT based on hundreds of snapshots.
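The core idea, replacing highly coherent columns of the sensing matrix with random Gaussian columns to lower its mutual coherence, can be demonstrated directly. A sketch with made-up dimensions (64 measurements, 6 columns):

```python
import math, random

def mutual_coherence(cols):
    """Largest normalized inner product between distinct columns."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    def norm(u):
        return math.sqrt(dot(u, u))
    mu = 0.0
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            mu = max(mu, abs(dot(cols[i], cols[j])) / (norm(cols[i]) * norm(cols[j])))
    return mu

random.seed(0)
m = 64
base = [random.gauss(0, 1) for _ in range(m)]
# Two nearly parallel (highly coherent) columns plus random Gaussian ones.
cols = [base, [b + 0.01 * random.gauss(0, 1) for b in base]]
cols += [[random.gauss(0, 1) for _ in range(m)] for _ in range(4)]
before = mutual_coherence(cols)

# Coherent column replacement: swap the second coherent column for a fresh
# random Gaussian column (the measurement vector would change accordingly).
cols[1] = [random.gauss(0, 1) for _ in range(m)]
after = mutual_coherence(cols)
assert before > 0.9 and after < before
```

Lower mutual coherence is the standard sufficient condition for sparse recovery guarantees, which is why the replacement restores the reconstruction that fails with the original matrix.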

  14. A compressed sensing based method with support refinement for impulse noise cancelation in DSL

    KAUST Repository

    Quadeer, Ahmed Abdul

    2013-06-01

    This paper presents a compressed sensing based method to suppress impulse noise in digital subscriber line (DSL) systems. The proposed algorithm exploits the sparse nature of impulse noise and utilizes the carriers, already available in all practical DSL systems, for its estimation and cancelation. Specifically, compressed sensing is used for a coarse estimate of the impulse position, an a priori information based maximum a posteriori probability (MAP) metric for its refinement, followed by least squares (LS) or minimum mean square error (MMSE) estimation of the impulse amplitudes. Simulation results show that the proposed scheme achieves a higher rate compared to other known sparse estimation algorithms in the literature. The paper also demonstrates the superior performance of the proposed scheme compared to the ITU-T G992.3 standard, which utilizes RS-coding for impulse noise refinement in DSL signals. © 2013 IEEE.

  15. Uniaxial Compressive Strength and Fracture Mode of Lake Ice at Moderate Strain Rates Based on a Digital Speckle Correlation Method for Deformation Measurement

    Directory of Open Access Journals (Sweden)

    Jijian Lian

    2017-05-01

    Full Text Available Better understanding of the complex mechanical properties of ice is the foundation for predicting the ice failure process and avoiding potential ice threats. In the present study, the uniaxial compressive strength and fracture mode of natural lake ice are investigated over the moderate strain-rate range of 0.4–10 s−1 at −5 °C and −10 °C. The digital speckle correlation method (DSCM) is used for deformation measurement, with an artificial speckle pattern constructed on the ice sample surface in advance, and two dynamic load cells are employed to measure the dynamic load and monitor the equilibrium of the forces at the two ends under high-speed loading. The relationships between uniaxial compressive strength and strain rate, temperature, loading direction, and air porosity are investigated, and the fracture mode of ice at moderate rates is also discussed. The experimental results show that there is a significant difference between the true strain rate and the nominal strain rate derived from actuator displacement under dynamic loading conditions. Over the employed strain-rate range, the dynamic uniaxial compressive strength of lake ice shows positive strain-rate sensitivity and decreases with increasing temperature. Ice attains greater strength when it has lower air porosity and is loaded vertically. The fracture mode of ice appears to be a combination of splitting failure and crushing failure.

  16. Depicting mass flow rate of R134a /LPG refrigerant through straight and helical coiled adiabatic capillary tubes of vapor compression refrigeration system using artificial neural network approach

    Science.gov (United States)

    Gill, Jatinder; Singh, Jagdev

    2018-02-01

    In this work, an experimental investigation is carried out with an R134a and LPG refrigerant mixture to depict the mass flow rate through straight and helically coiled adiabatic capillary tubes in a vapor compression refrigeration system. Experiments were conducted under steady-state conditions, varying the capillary tube length, inner diameter, coil diameter, and degree of subcooling. The results showed that the mass flow rate through the helically coiled capillary tube was about 5-16% lower than through the straight capillary tube. A dimensionless correlation and an Artificial Neural Network (ANN) model were developed to predict the mass flow rate. Both the dimensionless correlation and the ANN model predictions agreed well with the experimental results, yielding an absolute fraction of variance of 0.961 and 0.988, a root mean square error of 0.489 and 0.275, and a mean absolute percentage error of 4.75% and 2.31%, respectively. The results suggest that the ANN model gives better statistical predictions than the dimensionless correlation model.
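The three goodness-of-fit statistics quoted above (absolute fraction of variance, root mean square error, and mean absolute percentage error) are easy to compute. A sketch with hypothetical measured/predicted mass flow rates, not the paper's data:

```python
import math

def absolute_fraction_of_variance(y_true, y_pred):
    """R^2: 1 - SSE / total sum of squares about the mean."""
    mean = sum(y_true) / len(y_true)
    sse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    sst = sum((t - mean) ** 2 for t in y_true)
    return 1 - sse / sst

def rmse(y_true, y_pred):
    """Root mean square error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mape(y_true, y_pred):
    """Mean absolute percentage error."""
    return 100 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy mass-flow-rate data (arbitrary units, fabricated for the demo).
measured  = [10.0, 12.0, 15.0, 18.0, 20.0]
predicted = [10.4, 11.7, 15.3, 17.5, 20.6]
print(round(absolute_fraction_of_variance(measured, predicted), 3))  # -> 0.986
print(round(rmse(measured, predicted), 3))                           # -> 0.436
print(round(mape(measured, predicted), 2))                           # -> 2.86
```

Higher R² and lower RMSE/MAPE, the pattern reported for the ANN versus the dimensionless correlation, indicate a better statistical fit.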

  17. Demonstrating sensemaking emergence in artificial agents: A method and an example

    OpenAIRE

    GEORGEON, Olivier L.; Marshall, James

    2013-01-01

    We propose an experimental method to study the possible emergence of sensemaking in artificial agents. This method involves analyzing the agent's behavior in a test-bed environment that presents regularities in the possibilities of interaction afforded to the agent, while the agent has no presuppositions about the underlying functioning of the environment that explains such regularities. We propose a particular environment that permits such an experiment, called the Sm...

  18. A theoretical global optimization method for vapor-compression refrigeration systems based on entransy theory

    International Nuclear Information System (INIS)

    Xu, Yun-Chao; Chen, Qun

    2013-01-01

    Vapor-compression refrigeration systems are among the essential energy conversion systems for humankind and consume huge amounts of energy nowadays. Many effective optimization methods exist for promoting the energy efficiency of these systems, but they rely mainly on engineering experience and computer simulations rather than theoretical analysis, owing to the complex and vague physical essence of the systems. We attempt to propose a theoretical global optimization method based on in-depth physical analysis of the involved physical processes, i.e. heat transfer analysis of the condenser and evaporator through introducing the entransy theory, and thermodynamic analysis of the compressor and expansion valve. The integration of heat transfer and thermodynamic analyses forms the overall physical optimization model of the system, describing the relation between all the unknown parameters and the known conditions, which makes theoretical global optimization possible. With the aid of mathematical conditional-extremum solutions, an optimization equation group and the optimal configuration of all the unknown parameters are analytically obtained. Finally, via the optimization of a typical vapor-compression refrigeration system under various working conditions to minimize the total heat transfer area of the heat exchangers, the validity and superiority of the newly proposed optimization method are proved. - Highlights: • A global optimization method for vapor-compression systems is proposed. • Integrating heat transfer and thermodynamic analyses forms the optimization model. • A mathematical relation between design parameters and requirements is derived. • Entransy dissipation is introduced into heat transfer analysis. • The validity of the method is proved via optimization of practical cases.

  19. Determination of stresses in RC eccentrically compressed members using optimization methods

    Science.gov (United States)

    Lechman, Marek; Stachurski, Andrzej

    2018-01-01

    The paper presents an optimization method for determining the strains and stresses in reinforced concrete (RC) members subjected to eccentric compression. The governing equations for strains in rectangular cross-sections are derived by integrating the equilibrium equations of the cross-sections, taking account of the effect of concrete softening in the plastic range and the mean compressive strength of concrete. The stress-strain relationship for concrete in compression under short-term uniaxial loading is assumed according to Eurocode 2 for nonlinear analysis. For the reinforcing steel, a linear-elastic model with hardening in the plastic range is applied. The task consists in solving the set of derived equations subject to box constraints. The resulting problem was solved by means of the fmincon function from Matlab's Optimization Toolbox. Numerical experiments have shown the existence of many points satisfying the equations with very good accuracy. Therefore, some operations from global optimization were included: starting fmincon from many points, and clustering. The model is verified on a set of data encountered in engineering practice.

  20. An Encoding Method for Compressing Geographical Coordinates in 3d Space

    Science.gov (United States)

    Qian, C.; Jiang, R.; Li, M.

    2017-09-01

    This paper proposed an encoding method for compressing geographical coordinates in 3D space. By reducing the length of geographical coordinates, it helps to lessen the storage size of geometry information. In addition, the encoding algorithm subdivides the whole space according to octree rules, which enables progressive transmission and loading. Three main steps are included in this method: (1) subdividing the whole 3D geographic space based on an octree structure, (2) resampling all the vertices in 3D models, (3) encoding the coordinates of vertices with a combination of Cube Index Code (CIC) and Geometry Code. A series of geographical 3D models were applied to evaluate the encoding method. The results showed that this method reduced the storage size of most test data by 90 % or even more while maintaining fast encoding and decoding. In conclusion, this method achieved a remarkable compression rate in vertex bit size with a steerable precision loss. It should be of positive value for web 3D map storage and transmission.
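    The abstract does not give the exact layouts of the Cube Index Code and Geometry Code, but the octree subdivision step it rests on can be sketched: at each level the bounding cube is split into eight octants and the 3-bit index of the octant containing the vertex is recorded, so a vertex shrinks to a few bits per level of precision. The function names and the fixed-level scheme below are illustrative assumptions, not the authors' implementation.

```python
def octree_code(point, lo, hi, levels):
    """Encode a 3D point as a list of octant indices (0-7), one per
    subdivision level; each index packs the x/y/z half choices into 3 bits."""
    lo, hi = list(lo), list(hi)
    code = []
    for _ in range(levels):
        idx = 0
        for axis in range(3):
            mid = 0.5 * (lo[axis] + hi[axis])
            if point[axis] >= mid:
                idx |= 1 << axis   # upper half along this axis
                lo[axis] = mid
            else:
                hi[axis] = mid
        code.append(idx)
    return code

def octree_decode(code, lo, hi):
    """Return the center of the leaf cube addressed by the octant code."""
    lo, hi = list(lo), list(hi)
    for idx in code:
        for axis in range(3):
            mid = 0.5 * (lo[axis] + hi[axis])
            if (idx >> axis) & 1:
                lo[axis] = mid
            else:
                hi[axis] = mid
    return [0.5 * (l + h) for l, h in zip(lo, hi)]
```

    With 16 levels over a unit cube, each vertex costs 48 bits instead of three 64-bit floats, and truncating the code list gives the steerable precision loss mentioned above.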

  1. An improved ghost-cell immersed boundary method for compressible flow simulations

    KAUST Repository

    Chi, Cheng

    2016-05-20

    This study presents an improved ghost-cell immersed boundary approach to represent a solid body in compressible flow simulations. In contrast to the commonly used approaches, in the present work ghost cells are mirrored through the boundary, described using a level-set method, to farther image points, incorporating a higher-order extrapolation/interpolation scheme for the ghost-cell values. A sensor is introduced to deal with image points near discontinuities in the flow field. Adaptive mesh refinement (AMR) is used to improve the representation of the geometry efficiently in the Cartesian grid system. The improved ghost-cell method is validated against four test cases: (a) double Mach reflections on a ramp, (b) smooth Prandtl-Meyer expansion flows, (c) supersonic flows in a wind tunnel with a forward-facing step, and (d) supersonic flows over a circular cylinder. It is demonstrated that the improved ghost-cell method can reach second-order accuracy in the L1 norm and higher than first-order accuracy in the L∞ norm. Direct comparisons against the cut-cell method demonstrate that the improved ghost-cell method is almost equally accurate, with better efficiency for boundary representation in high-fidelity compressible flow simulations. Copyright © 2016 John Wiley & Sons, Ltd.

  2. The direct Discontinuous Galerkin method for the compressible Navier-Stokes equations on arbitrary grids

    Science.gov (United States)

    Yang, Xiaoquan; Cheng, Jian; Liu, Tiegang; Luo, Hong

    2015-11-01

    The direct discontinuous Galerkin (DDG) method, based on a traditional discontinuous Galerkin (DG) formulation, is extended and implemented for solving the compressible Navier-Stokes equations on arbitrary grids. Compared to the widely used second Bassi-Rebay (BR2) scheme for the discretization of diffusive fluxes, the DDG method has two attractive features: first, it is simple to implement, as it is directly based on the weak form and therefore needs no local or global lifting operator; second, it can deliver results comparable to, if not better than, the BR2 scheme in a more efficient way, with much less CPU time. Two approaches to forming the DDG flux for the Navier-Stokes equations are presented in this work: one based on conservative variables, the other based on primitive variables. In the implementation of the DDG method for arbitrary grids, the definition of the mesh size plays a critical role, as the formation of the viscous flux explicitly depends on the geometry. A variety of test cases are presented to demonstrate the accuracy and efficiency of the DDG method for discretizing the viscous fluxes in the compressible Navier-Stokes equations on arbitrary grids.

  3. A reconstruction method based on AL0FGD for compressed sensing in border monitoring WSN system.

    Directory of Open Access Journals (Sweden)

    Yan Wang

    Full Text Available In this paper, to monitor the border in real time with high efficiency and accuracy, we applied compressed sensing (CS) technology to the border monitoring wireless sensor network (WSN) system and proposed a reconstruction method for CS based on an approximate l0 norm and fast gradient descent (AL0FGD). In the frontend of the system, the measurement matrix was used to sense the border information in a compressed manner, and the proposed reconstruction method was then applied to recover the border information at the monitoring terminal. To evaluate the performance of the proposed method, a helicopter sound signal was used as an example in the experimental simulation, and three other typical reconstruction algorithms, (1) the split Bregman algorithm, (2) the iterative shrinkage algorithm, and (3) the smoothed approximate l0 norm (SL0), were employed for comparison. The experimental results showed that the proposed method has a better performance in recovering the helicopter sound signal in most cases, which could be used as a basis for further study of the border monitoring WSN system.

  4. Compression-based distance (CBD): a simple, rapid, and accurate method for microbiota composition comparison.

    Science.gov (United States)

    Yang, Fang; Chia, Nicholas; White, Bryan A; Schook, Lawrence B

    2013-04-23

    Perturbations in intestinal microbiota composition have been associated with a variety of gastrointestinal tract-related diseases. The alleviation of symptoms has been achieved using treatments that alter the gastrointestinal tract microbiota toward that of healthy individuals. Identifying differences in microbiota composition through the use of 16S rRNA gene hypervariable tag sequencing has profound health implications. Current computational methods for comparing microbial communities are usually based on multiple alignments and phylogenetic inference, making them time consuming and requiring exceptional expertise and computational resources. As sequencing data rapidly grows in size, simpler analysis methods are needed to meet the growing computational burdens of microbiota comparisons. Thus, we have developed a simple, rapid, and accurate method, independent of multiple alignments and phylogenetic inference, to support microbiota comparisons. We create a metric, called compression-based distance (CBD) for quantifying the degree of similarity between microbial communities. CBD uses the repetitive nature of hypervariable tag datasets and well-established compression algorithms to approximate the total information shared between two datasets. Three published microbiota datasets were used as test cases for CBD as an applicable tool. Our study revealed that CBD recaptured 100% of the statistically significant conclusions reported in the previous studies, while achieving a decrease in computational time required when compared to similar tools without expert user intervention. CBD provides a simple, rapid, and accurate method for assessing distances between gastrointestinal tract microbiota 16S hypervariable tag datasets.
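    The abstract does not spell out CBD's exact formula, but the compression-distance idea it builds on can be sketched with the closely related normalized compression distance (NCD): two datasets that share repetitive content compress better together than apart. The use of zlib and the NCD formula below are assumptions for illustration, not necessarily the paper's exact definition.

```python
import zlib

def csize(data: bytes) -> int:
    """Compressed size in bytes (zlib at maximum effort)."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: near 0 for datasets sharing
    much repeated content, near 1 for unrelated datasets."""
    cx, cy, cxy = csize(x), csize(y), csize(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)
```

    Applied to 16S hypervariable tag files read as raw bytes, such a metric needs no alignment or phylogenetic inference, which is the source of the speedup reported above.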

  5. A comparative analysis of the cryo-compression and cryo-adsorption hydrogen storage methods

    Energy Technology Data Exchange (ETDEWEB)

    Petitpas, G [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Benard, P [Universite du Quebec a Trois-Rivieres (Canada); Klebanoff, L E [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Xiao, J [Universite du Quebec a Trois-Rivieres (Canada); Aceves, S M [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-07-01

    While conventional low-pressure LH₂ dewars have existed for decades, advanced methods of cryogenic hydrogen storage have recently been developed. These advanced methods are cryo-compression and cryo-adsorption hydrogen storage, which operate best in the temperature range 30–100 K. We present a comparative analysis of both approaches for cryogenic hydrogen storage, examining how pressure and/or sorbent materials are used to effectively increase onboard H₂ density and dormancy. We start by reviewing some basic aspects of LH₂ properties and conventional means of storing it. From there we describe the cryo-compression and cryo-adsorption hydrogen storage methods, and then explore the relationship between them, clarifying the materials science and physics of the two approaches in trying to solve the same hydrogen storage task (~5–8 kg H₂, typical of light-duty vehicles). Assuming that the balance of plant and the available volume for the storage system in the vehicle are identical for both approaches, the comparison focuses on how the respective storage capacities, vessel weight and dormancy vary as a function of temperature, pressure and type of cryo-adsorption material (especially powder MOF-5 and MIL-101). By performing a comparative analysis, we clarify the science of each approach individually, identify the regimes where the attributes of each can be maximized, elucidate the properties of these systems during refueling, and probe the possible benefits of a combined “hybrid” system with both cryo-adsorption and cryo-compression phenomena operating at the same time. In addition, the relationships found between onboard H₂ capacity, pressure vessel and/or sorbent mass, and dormancy as a function of rated pressure, type of sorbent material and fueling conditions are useful as general design guidelines in future engineering efforts using these two hydrogen storage approaches.

  6. Nuclear power plant monitoring and fault diagnosis methods based on the artificial intelligence technique

    International Nuclear Information System (INIS)

    Yoshikawa, S.; Saiki, A.; Ugolini, D.; Ozawa, K.

    1996-01-01

    The main objective of this paper is to develop an advanced diagnosis system based on artificial intelligence techniques to monitor the operation and to improve the operational safety of nuclear power plants. Three different methods have been elaborated in this study: an artificial neural network local diagnosis (NNds) scheme that, acting at the component level, discriminates between normal and abnormal transients; a model-based diagnostic reasoning mechanism built on a physical causal network model; and a knowledge compiler (KC) that generates applicable diagnostic rules from widely accepted physical knowledge. Although the three methods have been developed and verified independently, they are highly correlated and, when connected together, form an effective and robust diagnosis and monitoring tool. (authors)

  7. Study on synthesizing method of artificial ground motion that envelopes target power spectrum

    International Nuclear Information System (INIS)

    Zhang Yushan; Zhao Fengxin

    2010-01-01

    In this paper, a synthesizing method is proposed that can generate artificial ground motion which not only matches the target response spectrum but also envelopes the corresponding target power spectrum. With respect to every controlling frequency of the response spectrum: firstly, by superimposing an incremental narrow-band time history, the response spectrum of the artificial ground motion is made equal to the target value; then, with its response spectrum unchanged, the time history is further modulated in the time domain so that its average power spectrum envelopes the target one. A numerical example illustrates that the ground motion time history generated by this method not only matches the target response spectrum with high precision, but also has an average power spectrum that envelopes the target power spectrum. (authors)

  8. A Simplified Method for predicting Ultimate Compressive Strength of Ship Panels

    DEFF Research Database (Denmark)

    Paik, Jeom Kee; Pedersen, Preben Terndrup

    1996-01-01

    A simplified method for predicting the ultimate compressive strength of ship panels with a complex shape of the initial deflection is described. The procedure consists of the elastic large deflection theory and a rigid-plastic analysis based on the collapse mechanism, taking into account large... deformation effects. By taking only one component for the selected deflection function, the computer time for the elastic large deflection analysis is drastically reduced. The validity of the procedure is checked by comparing the present solutions with finite-element results for actual ship panels...

  9. Performance of Ruecking's Word-compression Method When Applied to Machine Retrieval from a Library Catalog

    Directory of Open Access Journals (Sweden)

    Ben-Ami Lipetz

    1969-12-01

    Full Text Available F. H. Ruecking's word-compression algorithm for retrieval of bibliographic data from computer stores was tested for performance in matching user-supplied, unedited bibliographic data to the bibliographic data contained in a library catalog. The algorithm was tested by manual simulation, using data derived from 126 case studies of successful manual searches of the card catalog at Sterling Memorial Library, Yale University. The algorithm achieved 70% recall in comparison to conventional searching. Its acceptability as a substitute for conventional catalog searching methods is questioned unless recall performance can be improved, either by use of the algorithm alone or in combination with other algorithms.

  10. Sizing of Compression Coil Springs Gas Regulators Using Modern Methods CAD and CAE

    Directory of Open Access Journals (Sweden)

    Adelin Ionel Tuţă

    2010-10-01

    Full Text Available This paper presents a method for sizing the compression coil springs in gas regulators, using CAD (Computer Aided Design) and CAE (Computer Aided Engineering) techniques. The aim of the sizing is to optimize the functioning of the regulators under dynamic industrial and household conditions. A gas regulator is a device that automatically and continuously adjusts itself to maintain pre-set limits on output gas pressure at varying flow and input pressure. The performance of pressure regulators as automatic systems depends on their behaviour under dynamic operation. Optimizing the time constant of the pneumatic actuators that drive gas regulators leads to better functioning under dynamic conditions.

  11. Method and apparatus for optimizing operation of a power generating plant using artificial intelligence techniques

    Science.gov (United States)

    Wroblewski, David [Mentor, OH; Katrompas, Alexander M [Concord, OH; Parikh, Neel J [Richmond Heights, OH

    2009-09-01

    A method and apparatus for optimizing the operation of a power generating plant using artificial intelligence techniques. One or more decisions D are determined for at least one consecutive time increment, where at least one of the decisions D is associated with a discrete variable for the operation of a power plant device in the power generating plant. In an illustrated embodiment, the power plant device is a soot cleaning device associated with a boiler.

  12. Review of Data Preprocessing Methods for Sign Language Recognition Systems based on Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Zorins Aleksejs

    2016-12-01

    Full Text Available The article presents an introductory analysis of a research topic relevant to the Latvian deaf society: the development of a Latvian Sign Language Recognition System. More specifically, data preprocessing methods are discussed in the paper and several approaches are shown, with a focus on systems based on artificial neural networks, which are among the most successful solutions for the sign language recognition task.

  13. A Reconstructed Discontinuous Galerkin Method for the Compressible Euler Equations on Arbitrary Grids

    Energy Technology Data Exchange (ETDEWEB)

    Hong Luo; Luquing Luo; Robert Nourgaliev; Vincent Mousseau

    2009-06-01

    A reconstruction-based discontinuous Galerkin (DG) method is presented for the solution of the compressible Euler equations on arbitrary grids. By taking advantage of handily available and yet invaluable information, namely the derivatives, in the context of the discontinuous Galerkin methods, a solution polynomial of one degree higher is reconstructed using a least-squares method. The stencils used in the reconstruction involve only the von Neumann neighborhood (face-neighboring cells) and are compact and consistent with the underlying DG method. The resulting DG method can be regarded as an improvement of a recovery-based DG method in the sense that it shares the same nice features as the recovery-based DG method, such as high accuracy and efficiency, and yet overcomes some of its shortcomings, such as a lack of flexibility, compactness, and robustness. The developed DG method is used to compute a variety of flow problems on arbitrary meshes to demonstrate the accuracy and efficiency of the method. The numerical results indicate that this reconstructed DG method is able to obtain a third-order accurate solution at a slightly higher cost than its second-order counterpart and provides an increase in performance over the third-order DG method in terms of computing time and storage requirements.

  14. Method for data compression by associating complex numbers with files of data values

    Science.gov (United States)

    Feo, John Thomas; Hanks, David Carlton; Kraay, Thomas Arthur

    1998-02-10

    A method for compressing data for storage or transmission. Given a complex polynomial and a value assigned to each root, a root generated data file (RGDF) is created, one entry at a time. Each entry is mapped to a point in a complex plane. An iterative root finding technique is used to map the coordinates of the point to the coordinates of one of the roots of the polynomial. The value associated with that root is assigned to the entry. An equational data compression (EDC) method reverses this procedure. Given a target data file, the EDC method uses a search algorithm to calculate a set of m complex numbers and a value map that will generate the target data file. The error between a simple target data file and generated data file is typically less than 10%. Data files can be transmitted or stored without loss by transmitting the m complex numbers, their associated values, and an error file whose size is at most one-tenth of the size of the input data file.
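    The file-generation (decompression) side described above can be sketched in a few lines, assuming Newton's method as the iterative root finder; the polynomial, the sample points, and the helper names are illustrative, not the patented implementation.

```python
def newton_root(z, coeffs, iters=100):
    """Run Newton's method on p(z) with complex coefficients given
    highest-degree first; returns the root the start point converges to."""
    def p(z):                      # Horner evaluation of p
        acc = 0j
        for c in coeffs:
            acc = acc * z + c
        return acc
    def dp(z):                     # Horner evaluation of p'
        acc, n = 0j, len(coeffs) - 1
        for i, c in enumerate(coeffs[:-1]):
            acc = acc * z + (n - i) * c
        return acc
    for _ in range(iters):
        d = dp(z)
        if d == 0:
            break
        z = z - p(z) / d
    return z

def generate_file(points, coeffs, roots, values):
    """Root-generated data file: each plane point is mapped to the root
    it converges to, and that root's assigned value is emitted."""
    out = []
    for z in points:
        r = newton_root(z, coeffs)
        k = min(range(len(roots)), key=lambda i: abs(r - roots[i]))
        out.append(values[k])
    return out
```

    Compression then runs this in reverse: search for coefficients and a value map whose generated file approximates the target, and transmit only those plus a small error file.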

  15. Compressed sensing MRI via fast linearized preconditioned alternating direction method of multipliers.

    Science.gov (United States)

    Chen, Shanshan; Du, Hongwei; Wu, Linna; Jin, Jiaquan; Qiu, Bensheng

    2017-04-27

    The challenge of reconstructing a sparse medical magnetic resonance image from undersampled k-space data via compressed sensing has been investigated in recent years. As total variation (TV) performs well in preserving edges, one type of approach considers TV-regularization as a sparse structure and solves a convex optimization problem. Nevertheless, this convex optimization problem is both nonlinear and nonsmooth, and thus difficult to handle, especially at large scale. Therefore, it is essential to develop efficient algorithms for a very broad class of TV-regularized problems. In this paper, we propose an efficient algorithm, referred to as the fast linearized preconditioned alternating direction method of multipliers (FLPADMM), to solve an augmented TV-regularized model that adds a quadratic term to enforce image smoothness. Because of the separable structure of this model, FLPADMM decomposes the convex problem into two subproblems, each of which can be minimized alternately via an augmented Lagrangian function. Furthermore, a linearized strategy and a multistep weighted scheme can easily be combined for more effective image recovery. Experiments conducted on in vivo data showed that our algorithm achieved a higher signal-to-noise ratio (SNR), lower relative error (Rel.Err), and better structural similarity (SSIM) index in comparison to other state-of-the-art algorithms. Extensive experiments demonstrate that the proposed algorithm exhibits superior performance in accuracy and efficiency compared with conventional compressed sensing MRI algorithms.

  16. Feasibility of gas-discharge and optical methods of creating artificial ozone layers of the earth

    International Nuclear Information System (INIS)

    Batanov, G.M.; Kossyi, I.A.; Matveev, A.A.; Silakov, V.P.

    1996-01-01

    Gas-discharge (microwave) and optical (laser) methods of generating large-scale artificial ozone layers in the stratosphere are analyzed. A kinetic model is developed to calculate the plasma-chemical consequences of discharges localized in the stratosphere. Computations and simple estimates indicate that, in order to implement gas-discharge and optical methods, the operating power of ozone-producing sources should be comparable to or even much higher than the present-day power production throughout the world. Consequently, from the engineering and economic standpoints, microwave and laser methods cannot be used to repair large-scale ozone 'holes'

  17. Linearly decoupled energy-stable numerical methods for multi-component two-phase compressible flow

    KAUST Repository

    Kou, Jisheng

    2017-12-06

    In this paper, for the first time we propose two linear, decoupled, energy-stable numerical schemes for multi-component two-phase compressible flow with a realistic equation of state (e.g. Peng-Robinson equation of state). The methods are constructed based on the scalar auxiliary variable (SAV) approaches for Helmholtz free energy and the intermediate velocities that are designed to decouple the tight relationship between velocity and molar densities. The intermediate velocities are also involved in the discrete momentum equation to ensure a consistency relationship with the mass balance equations. Moreover, we propose a component-wise SAV approach for a multi-component fluid, which requires solving a sequence of linear, separate mass balance equations. We prove that the methods have the unconditional energy-dissipation feature. Numerical results are presented to verify the effectiveness of the proposed methods.

  18. Minimal invasive stabilization of osteoporotic vertebral compression fractures. Methods and preinterventional diagnostics

    International Nuclear Information System (INIS)

    Grohs, J.G.; Krepler, P.

    2004-01-01

    Minimally invasive stabilization represents a new alternative for the treatment of osteoporotic compression fractures. Vertebroplasty and balloon kyphoplasty are two methods to enhance the strength of osteoporotic vertebral bodies by means of cement application. Vertebroplasty is the older and technically easier method. Balloon kyphoplasty is the newer and more expensive method, which not only improves pain but also restores the sagittal profile of the spine. By balloon kyphoplasty the height of 101 fractured vertebral bodies could be increased up to 90% and the wedge decreased from 12 to 7 degrees. Pain was reduced from 7.2 to 2.5 points. The Oswestry disability index decreased from 60 to 26 points. These effects persisted over a period of two years. Cement leakage occurred in only 2% of vertebral bodies. Fractures of adjacent vertebral bodies were found in 11%. Good preinterventional diagnostics and intraoperative imaging are necessary to make balloon kyphoplasty a successful application. (orig.) [de

  19. Maximum-Entropy Meshfree Method for Compressible and Near-Incompressible Elasticity

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz, A; Puso, M A; Sukumar, N

    2009-09-04

    Numerical integration errors and volumetric locking in the near-incompressible limit are two outstanding issues in Galerkin-based meshfree computations. In this paper, we present a modified Gaussian integration scheme on background cells for meshfree methods that alleviates errors in numerical integration and ensures patch test satisfaction to machine precision. Secondly, a locking-free small-strain elasticity formulation for meshfree methods is proposed, which draws on developments in assumed strain methods and nodal integration techniques. In this study, maximum-entropy basis functions are used; however, the generality of our approach permits the use of any meshfree approximation. Various benchmark problems in two-dimensional compressible and near-incompressible small strain elasticity are presented to demonstrate the accuracy and optimal convergence in the energy norm of the maximum-entropy meshfree formulation.

  20. Experimental Study on the Compressive Strength of Big Mobility Concrete with Nondestructive Testing Method

    Directory of Open Access Journals (Sweden)

    Huai-Shuai Shang

    2012-01-01

    Full Text Available An experimental study of C20, C25, C30, C40, and C50 big mobility concrete cubes from the laboratory and from construction sites was completed. Nondestructive testing (NDT) was carried out using impact rebound hammer (IRH) techniques to establish a correlation between compressive strength and rebound number. A local strength curve is established by the regression method and its superiority is demonstrated. The rebound method presented is simple, quick, and reliable and covers wide ranges of concrete strengths. The rebound method can be easily applied to concrete specimens as well as existing concrete structures. The final results were compared with previous ones from the literature and also with actual results obtained from samples extracted from existing structures.
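    The regression step that links rebound number to compressive strength can be illustrated with a plain least-squares line. The abstract does not give the functional form of the local curve, so the linear model below is only an assumption; in practice a power-law or polynomial fit may be used instead.

```python
def fit_rebound_curve(rebound, strength):
    """Ordinary least-squares line f_c = a + b * R relating cube
    compressive strength (MPa) to impact-rebound-hammer number R."""
    n = len(rebound)
    mr = sum(rebound) / n
    ms = sum(strength) / n
    b = sum((r - mr) * (s - ms) for r, s in zip(rebound, strength)) \
        / sum((r - mr) ** 2 for r in rebound)
    a = ms - b * mr
    return a, b
```

    Once calibrated on cubes of known strength, the curve estimates in-place strength from rebound readings on an existing structure without damaging it.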

  1. A simple method for estimating the fractal dimension from digital images: The compression dimension

    International Nuclear Information System (INIS)

    Chamorro-Posada, Pedro

    2016-01-01

    The fractal structure of real world objects is often analyzed using digital images. In this context, the compression fractal dimension is put forward. It provides a simple method for the direct estimation of the dimension of fractals stored as digital image files. The computational scheme can be implemented using readily available free software. Its simplicity also makes it very interesting for introductory elaborations of basic concepts of fractal geometry, complexity, and information theory. A test of the computational scheme using limited-quality images of well-defined fractal sets obtained from the Internet and free software has been performed. Also, a systematic evaluation of the proposed method using computer generated images of the Weierstrass cosine function shows an accuracy comparable to those of the methods most commonly used to estimate the dimension of fractal data sequences applied to the same test problem.
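    One way to read the scheme above is that the compressed file size plays the role of the box count at each sampling resolution, so a dimension-like exponent falls out of a log-log slope. The sketch below applies this reading to a 1-D sequence with zlib; the exact estimator in the paper may differ, and the stride-subsampling scheme here is an assumption.

```python
import math
import zlib

def compression_dimension(signal, scales):
    """Slope of log(compressed size) versus log(sample count) when the
    signal is resampled at several strides; the compressed byte count
    acts as a proxy for the information content at each scale."""
    xs, ys = [], []
    for s in scales:
        coarse = bytes(signal[::s])          # subsample by stride s
        xs.append(math.log(len(coarse)))
        ys.append(math.log(len(zlib.compress(coarse, 9))))
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
```

    Incompressible (noise-like) data should give an exponent near 1, while highly redundant data gives a much smaller one, mirroring how the fractal dimension separates space-filling from sparse structure.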

  2. Selectively Lossy, Lossless, and/or Error Robust Data Compression Method

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Lossless compression techniques provide efficient compression of hyperspectral satellite data. The present invention combines the advantages of a clustering with...

  3. A Space-Frequency Data Compression Method for Spatially Dense Laser Doppler Vibrometer Measurements

    Directory of Open Access Journals (Sweden)

    José Roberto de França Arruda

    1996-01-01

    Full Text Available When spatially dense mobility shapes are measured with scanning laser Doppler vibrometers, it is often impractical to use phase-separation modal parameter estimation methods due to the excessive number of highly coupled modes and to the prohibitive computational cost of processing huge amounts of data. To deal with this problem, a data compression method using Chebychev polynomial approximation in the frequency domain and two-dimensional discrete Fourier series approximation in the spatial domain is proposed in this article. The proposed space-frequency regressive approach was implemented and verified using a numerical simulation of a free-free-free-free suspended rectangular aluminum plate. To make the simulation more realistic, the mobility shapes were synthesized by modal superposition using mode shapes obtained experimentally with a scanning laser Doppler vibrometer. A reduced and smoothed model, which takes advantage of the sinusoidal spatial pattern of structural mobility shapes and the polynomial frequency-domain pattern of the mobility shapes, is obtained. From the reduced model, smoothed curves with any desired frequency and spatial resolution can be produced whenever necessary. The procedure can be used either to generate nonmodal models or to compress the measured data prior to modal parameter extraction.

  4. A new method for the artificial raising of infant rats: the palate cannula.

    Science.gov (United States)

    Blake, H H; Lau, C; Henning, S J

    1988-01-01

    Chronic removal of infant rats from their mother prior to the onset of weaning is complicated by the fact that young rats do not easily suckle from an artificial nipple. Thus, a method of artificial raising is advantageous for developmental investigations of nutrition or ingestive behaviors during the suckling period. The intragastric cannula has become a popular method for this purpose. However, for many studies, it would be advantageous if the diet could be administered to the mouth and actually swallowed by the young rat. We developed a new cannulation procedure which accomplishes these goals. Infant rats were removed from their mother on postnatal day 13 and fitted with a cannula that opened into the oral cavity through the hard palate. Liquid diet was administered by an infusion pump through the cannula for the subsequent 5 days. Growth was assessed by daily measures of body and organ weight. The results indicate that from postnatal day 13 on, the palate cannula can allow the continuation of normal growth patterns and eliminates certain complicating factors associated with other forms of artificial raising.

  5. Continuous surveillance of transformers using artificial intelligence methods; Surveillance continue des transformateurs: application des methodes d'intelligence artificielle

    Energy Technology Data Exchange (ETDEWEB)

    Schenk, A.; Germond, A. [Ecole Polytechnique Federale de Lausanne, Lausanne (Switzerland); Boss, P.; Lorin, P. [ABB Secheron SA, Geneve (Switzerland)

    2000-07-01

    The article describes a new method for the continuous surveillance of power transformers based on the application of artificial intelligence (AI) techniques. An experimental pilot project on a specially equipped, strategically important power transformer is described. Traditional surveillance methods and the use of mathematical models for the prediction of faults are described. The article describes the monitoring equipment used in the pilot project and the AI principles such as self-organising maps that are applied. The results obtained from the pilot project and methods for their graphical representation are discussed.

  6. Proposed Sandia frequency shift for anti-islanding detection method based on artificial immune system

    Directory of Open Access Journals (Sweden)

    A.Y. Hatata

    2018-03-01

Full Text Available Sandia frequency shift (SFS) is one of the active anti-islanding detection methods that depend on frequency drift to detect an islanding condition for inverter-based distributed generation. The non-detection zone (NDZ) of the SFS method depends to a great extent on its parameters. Improper adjustment of these parameters may result in failure of the method. This paper presents a proposed artificial immune system (AIS)-based technique to obtain optimal parameters of the SFS anti-islanding detection method. The immune system is highly distributed, highly adaptive, and self-organizing in nature, maintains a memory of past encounters, and has the ability to continually learn about new encounters. The proposed method generates less total harmonic distortion (THD) than the conventional SFS, which results in faster island detection and a smaller non-detection zone. The performance of the proposed method is derived analytically and simulated using Matlab/Simulink. Two case studies are used to verify the proposed method. The first case includes a photovoltaic (PV) system connected to the grid and the second includes a wind turbine connected to the grid. The deduced optimized parameter setting helps to achieve the “non-islanding inverter” as well as the least potential adverse impact on power quality. Keywords: Anti-islanding detection, Sandia frequency shift (SFS), Non-detection zone (NDZ), Total harmonic distortion (THD), Artificial immune system (AIS), Clonal selection algorithm
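The parameters the AIS optimizes enter through the standard SFS chopping-fraction law, cf = cf0 + k·(f − f_nom); the numerical values below are arbitrary placeholders, not the paper's optimized setting.

```python
def sfs_chopping_fraction(f_meas, f_nom=50.0, cf0=0.02, k=0.05):
    """Chopping fraction of the Sandia frequency shift method.

    cf grows with the measured frequency's deviation from nominal, creating
    the positive feedback that pushes an islanded inverter's frequency
    outside the trip window; cf0 and the accelerator gain k are the
    parameters whose tuning sets the non-detection zone.
    """
    return cf0 + k * (f_meas - f_nom)

# On the grid the deviation is ~0, so injected distortion stays at cf0;
# in an island, a 0.5 Hz drift already raises the chopping fraction.
cf_grid = sfs_chopping_fraction(50.0)
cf_island = sfs_chopping_fraction(50.5)
```

Larger cf0 and k shrink the NDZ but raise the injected THD, which is exactly the trade-off the clonal selection algorithm is used to balance.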

  7. SOLVING TRANSPORT LOGISTICS PROBLEMS IN A VIRTUAL ENTERPRISE THROUGH ARTIFICIAL INTELLIGENCE METHODS

    Directory of Open Access Journals (Sweden)

    Vitaliy PAVLENKO

    2017-06-01

    Full Text Available The paper offers a solution to the problem of material flow allocation within a virtual enterprise by using artificial intelligence methods. The research is based on the use of fuzzy relations when planning for optimal transportation modes to deliver components for manufactured products. The Fuzzy Logic Toolbox is used to determine the optimal route for transportation of components for manufactured products. The methods offered have been exemplified in the present research. The authors have built a simulation model for component transportation and delivery for manufactured products using the Simulink graphical environment for building models.

  8. Fluvial facies reservoir productivity prediction method based on principal component analysis and artificial neural network

    Directory of Open Access Journals (Sweden)

    Pengyu Gao

    2016-03-01

Full Text Available It is difficult to forecast well productivity because of the complexity of vertical and horizontal development in fluvial facies reservoirs. This paper proposes a method based on principal component analysis and an artificial neural network to predict the well productivity of fluvial facies reservoirs. The method summarizes the statistical reservoir factors and engineering factors that affect well productivity, extracts information by applying principal component analysis, and exploits the neural network's ability to approximate arbitrary functions to realize an accurate and efficient prediction of fluvial facies reservoir well productivity. This method provides an effective way to forecast the productivity of fluvial facies reservoirs, which is affected by multiple factors and complex mechanisms. The study results show that this is a practical, effective, accurate and indirect productivity forecast method suitable for field application.
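The PCA-then-ANN pipeline can be sketched with NumPy alone. The data are synthetic stand-ins for the reservoir and engineering factors, and a single-hidden-layer network fitted by least squares (extreme-learning-machine style) stands in for the paper's backprop-trained network; dimensions and noise levels are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 8 correlated measured factors driven by
# 3 latent geological variables, plus a "productivity" target.
Z = rng.normal(size=(300, 3))
X = Z @ rng.normal(size=(3, 8)) + 0.05 * rng.normal(size=(300, 8))
y = 2.0 * Z[:, 0] - Z[:, 1] + 0.05 * rng.normal(size=300)

# --- Step 1: principal component analysis via SVD ---
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:3].T          # keep the 3 leading components
scores /= scores.std(axis=0)    # standardize the extracted features

# --- Step 2: single-hidden-layer network, output weights by least squares ---
W = 0.5 * rng.normal(size=(3, 32))   # fixed random tanh hidden layer
H = np.tanh(scores @ W)
beta, *_ = np.linalg.lstsq(np.c_[H, np.ones(len(H))], y, rcond=None)
pred = np.c_[H, np.ones(len(H))] @ beta
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

PCA removes the collinearity among the measured factors before the network sees them, which is the main point of combining the two steps.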

  9. Method for compression testing of composite materials at high strain rates

    Science.gov (United States)

    Daniel, I. M.; Labedz, R. H.

    1983-01-01

A method is presented for testing composite materials in compression at strain rates up to approximately 500 per s. The method uses a thin ring specimen (4 in. in diameter, 1 in. wide, six to eight plies thick) loaded dynamically by an external pressure pulse applied explosively through a liquid. Strains in the specimen and in a steel calibration ring are recorded with a digital processing oscilloscope. Results are plotted by an x-y plotter in the form of a dynamic stress-strain curve. Data analysis is based on a numerical solution of the equation of motion. A computer program is used which involves smoothing and approximation of the strain magnitude, strain rate, and strain acceleration. Dynamic stress-strain curves obtained for 0-deg and 90-deg specimens of two graphite/epoxy composites are presented.

  10. A multiscale method for compressible liquid-vapor flow with surface tension*

    Directory of Open Access Journals (Sweden)

    Jaegle Felix

    2013-01-01

    Full Text Available Discontinuous Galerkin methods have become a powerful tool for approximating the solution of compressible flow problems. Their direct use for two-phase flow problems with phase transformation is not straightforward because this type of flows requires a detailed tracking of the phase front. We consider the fronts in this contribution as sharp interfaces and propose a novel multiscale approach. It combines an efficient high-order Discontinuous Galerkin solver for the computation in the bulk phases on the macro-scale with the use of a generalized Riemann solver on the micro-scale. The Riemann solver takes into account the effects of moderate surface tension via the curvature of the sharp interface as well as phase transformation. First numerical experiments in three space dimensions underline the overall performance of the method.
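The surface-tension effect the generalized Riemann solver accounts for is, at its simplest, the Young-Laplace pressure jump across a curved interface, [p] = σκ. The helper below is a minimal illustration of that relation, not the paper's solver; all values are invented.

```python
def laplace_pressure_jump(sigma, r1, r2=None):
    """Young-Laplace jump [p] = sigma * kappa across a sharp interface,
    with total curvature kappa = 1/r1 + 1/r2 (r2 = r1 for a sphere)."""
    if r2 is None:
        r2 = r1
    return sigma * (1.0 / r1 + 1.0 / r2)

# Water droplet of radius 1 mm in air (sigma ~ 0.072 N/m):
dp = laplace_pressure_jump(0.072, 1.0e-3)   # pressure excess inside, in Pa
```

In the multiscale method this jump enters the micro-scale Riemann problem at the interface, on top of the phase-transformation kinetics.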

  11. Method of analysis for compressible flow through mixed-flow centrifugal impellers of arbitrary design

    Science.gov (United States)

    Hamrick, Joseph T; Ginsburg, Ambrose; Osborn, Walter M

    1952-01-01

    A method is presented for analysis of the compressible flow between the hub and the shroud of mixed-flow impellers of arbitrary design. Axial symmetry was assumed, but the forces in the meridional (hub to shroud) plane, which are derived from tangential pressure gradients, were taken into account. The method was applied to an experimental mixed-flow impeller. The analysis of the flow in the meridional plane of the impeller showed that the rotational forces, the blade curvature, and the hub-shroud profile can introduce severe velocity gradients along the hub and the shroud surfaces. Choked flow at the impeller inlet as determined by the analysis was verified by experimental results.

  12. An Embedded Ghost-Fluid Method for Compressible Flow in Complex Geometry

    KAUST Repository

    Almarouf, Mohamad Abdulilah Alhusain Alali

    2016-06-03

We present an embedded ghost-fluid method for numerical solutions of the compressible Navier-Stokes (CNS) equations in arbitrary complex domains. The PDE multidimensional extrapolation approach of Aslam [1] is used to reconstruct the solution in the ghost-fluid regions and impose boundary conditions at the fluid-solid interface. The CNS equations are numerically solved by the second order multidimensional upwind method of Colella [2] and Saltzman [3]. Block-structured adaptive mesh refinement implemented under the Chombo framework is utilized to reduce the computational cost while keeping a high-resolution mesh around the embedded boundary and in regions of high solution gradients. Numerical examples with different Reynolds numbers for low and high Mach number flow will be presented. We compare our simulation results with other reported experimental and computational results. The significance and advantages of our implementation, which revolve around balancing between the solution accuracy and implementation difficulties, are briefly discussed as well. © 2016 Trans Tech Publications.

  13. Unidirectional Expiratory Valve Method to Assess Maximal Inspiratory Pressure in Individuals without Artificial Airway.

    Directory of Open Access Journals (Sweden)

    Samantha Torres Grams

Full Text Available Maximal Inspiratory Pressure (MIP) is considered an effective method to estimate the strength of inspiratory muscles, but still leads to false positive diagnosis. Although MIP assessment with the unidirectional expiratory valve method has been used in patients undergoing mechanical ventilation, no previous studies investigated the application of this method in subjects without an artificial airway. This study aimed to compare the MIP values assessed by the standard method (MIPsta) and by the unidirectional expiratory valve method (MIPuni) in subjects with spontaneous breathing without an artificial airway. MIPuni reproducibility was also evaluated. This was a crossover design study, and 31 subjects performed MIPsta and MIPuni in a random order. MIPsta measured MIP maintaining negative pressure for at least one second after forceful expiration. MIPuni evaluated MIP using a unidirectional expiratory valve attached to a face mask and was conducted by two evaluators (A and B) at two moments (Tests 1 and 2) to determine interobserver and intraobserver reproducibility of MIP values. The intraclass correlation coefficient (ICC[2,1]) was used to determine intraobserver and interobserver reproducibility. The mean values for MIPuni were 14.3% higher (-117.3 ± 24.8 cmH2O) than the mean values for MIPsta (-102.5 ± 23.9 cmH2O) (p<0.001). Interobserver reproducibility assessment showed very high correlation for Test 1 (ICC[2,1] = 0.91) and high correlation for Test 2 (ICC[2,1] = 0.88). The assessment of intraobserver reproducibility showed high correlation for evaluator A (ICC[2,1] = 0.86) and evaluator B (ICC[2,1] = 0.77). MIPuni presented higher values when compared with MIPsta and proved to be reproducible in subjects with spontaneous breathing without an artificial airway.
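The reproducibility statistic used here, ICC[2,1] (two-way random effects, absolute agreement, single measures), can be computed directly from its ANOVA mean squares; the function below implements the standard formula, with illustrative data rather than the study's measurements.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `ratings` is an (n subjects x k raters) array.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # subjects MS
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # raters MS
    sse = np.sum((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                        # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

perfect = icc_2_1([[1, 1], [2, 2], [3, 3]])   # identical raters -> 1.0
```

On the classic Shrout-Fleiss six-subject, four-judge example this formula returns approximately 0.29, the published ICC(2,1) for those data.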

  14. A practical discrete-adjoint method for high-fidelity compressible turbulence simulations

    International Nuclear Information System (INIS)

    Vishnampet, Ramanathan; Bodony, Daniel J.; Freund, Jonathan B.

    2015-01-01

    Methods and computing hardware advances have enabled accurate predictions of complex compressible turbulence phenomena, such as the generation of jet noise that motivates the present effort. However, limited understanding of underlying physical mechanisms restricts the utility of such predictions since they do not, by themselves, indicate a route to design improvements. Gradient-based optimization using adjoints can circumvent the flow complexity to guide designs, though this is predicated on the availability of a sufficiently accurate solution of the forward and adjoint systems. These are challenging to obtain, since both the chaotic character of the turbulence and the typical use of discretizations near their resolution limits in order to efficiently represent its smaller scales will amplify any approximation errors made in the adjoint formulation. Formulating a practical exact adjoint that avoids such errors is especially challenging if it is to be compatible with state-of-the-art simulation methods used for the turbulent flow itself. Automatic differentiation (AD) can provide code to calculate a nominally exact adjoint, but existing general-purpose AD codes are inefficient to the point of being prohibitive for large-scale turbulence simulations. Here, we analyze the compressible flow equations as discretized using the same high-order workhorse methods used for many high-fidelity compressible turbulence simulations, and formulate a practical space–time discrete-adjoint method without changing the basic discretization. A key step is the definition of a particular discrete analog of the continuous norm that defines our cost functional; our selection leads directly to an efficient Runge–Kutta-like scheme, though it would be just first-order accurate if used outside the adjoint formulation for time integration, with finite-difference spatial operators for the adjoint system. Its computational cost only modestly exceeds that of the flow equations. We confirm that

  15. Augmented Lagrangian Method and Compressible Visco-plastic Flows: Applications to Shallow Dense Avalanches

    Science.gov (United States)

    Bresch, D.; Fernández-Nieto, E. D.; Ionescu, I. R.; Vigneaux, P.

In this paper we propose a well-balanced finite volume/augmented Lagrangian method for compressible visco-plastic models focusing on a compressible Bingham type system with applications to dense avalanches. For the sake of completeness we also present a method showing that such a system may be derived for a shallow flow of a rigid-viscoplastic incompressible fluid, namely for an incompressible Bingham type fluid with free surface. When the fluid is relatively shallow and spreads slowly, lubrication-style asymptotic approximations can be used to build reduced models for the spreading dynamics, see for instance [N.J. Balmforth et al., J. Fluid Mech (2002)]. When the motion is a little bit quicker, shallow water theory for non-Newtonian flows may be applied, for instance assuming a Navier type boundary condition at the bottom. We start from the variational inequality for an incompressible Bingham fluid and derive a shallow water type system. In the case where the Bingham number and viscosity are set to zero we obtain the classical Shallow Water or Saint-Venant equations obtained for instance in [J.F. Gerbeau, B. Perthame, DCDS (2001)]. For numerical purposes, we focus on the one-dimensional (in space) model: We study associated static solutions with sufficient conditions that relate the slope of the bottom with the Bingham number and domain dimensions. We also propose a well-balanced finite volume/augmented Lagrangian method. It combines well-balanced finite volume schemes for spatial discretization with the augmented Lagrangian method to treat the associated optimization problem. Finally, we present various numerical tests.
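The static solutions mentioned above reflect the defining property of Bingham materials: a layer on an incline stays at rest while the gravity-driven basal shear stress is below the yield stress. A classical back-of-the-envelope version of that criterion (not the paper's precise sufficient conditions, and with invented numbers) is:

```python
import math

def critical_thickness(tau_y, rho, theta_deg, g=9.81):
    """Maximum static layer thickness for a Bingham material on an incline.

    The layer yields once the basal shear stress rho*g*h*sin(theta)
    exceeds the yield stress tau_y, so h_c = tau_y / (rho*g*sin(theta)).
    """
    return tau_y / (rho * g * math.sin(math.radians(theta_deg)))

# Illustrative avalanche-like material: tau_y = 200 Pa, rho = 1500 kg/m^3,
# on a 10-degree slope.
h_c = critical_thickness(tau_y=200.0, rho=1500.0, theta_deg=10.0)
```

A well-balanced scheme must preserve exactly such at-rest states, which is why the static solutions are studied before the finite volume/augmented Lagrangian discretization is proposed.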

  16. Compressive loading unloading behavior of nuclear graphite grades of different forming method and raw cokes

    International Nuclear Information System (INIS)

    Chi, Sehwan; Hong, Seongdeok; Kim, Yongwan

    2012-01-01

Nuclear graphite is used for core structural components and neutron moderators in high temperature gas-cooled reactors. As graphite is a brittle material that fails at relatively low strains (e.g., ∼0.5% in tension and ∼2% in compression), cracking of these components can occur throughout the life of the reactor under the influence of thermal and mechanical stresses. While many studies have been performed on the fracture of graphite, most have been concerned with crack initiation and propagation, with little attention to the damage processes that lead to the very first stage of crack initiation. In this study, the graphite damage processes before main crack formation were investigated based on the microstructure change during load relaxation. For this, 4-1/3 notched flexure strength test specimens made of nuclear graphite grades IG-110, NBG-18 and PCEA of different forming methods (isotropic molding, vibrational molding and extrusion, respectively) and ingredients (coke, binder) were subjected to 10 cycles of compressive loading and unloading, and the changes in the microstructure of the notch-tip areas were examined by X-ray tomography

  17. Comparison of Several Numerical Methods for Simulation of Compressible Shear Layers

    Science.gov (United States)

    Kennedy, Christopher A.; Carpenter, Mark H.

    1997-01-01

An investigation is conducted on several numerical schemes for use in the computation of two-dimensional, spatially evolving, laminar variable-density compressible shear layers. Schemes with various temporal accuracies and arbitrary spatial accuracy for both inviscid and viscous terms are presented and analyzed. All integration schemes use explicit or compact finite-difference derivative operators. Three classes of schemes are considered: an extension of MacCormack's original second-order temporally accurate method, a new third-order variant of the schemes proposed by Rusanov and by Kutler, Lomax, and Warming (RKLW), and third- and fourth-order Runge-Kutta schemes. In each scheme, stability and formal accuracy are considered for the interior operators on the convection-diffusion equation U_t + aU_x = alpha U_xx. Accuracy is also verified on the nonlinear problem, U_t + F_x = 0. Numerical treatments of various orders of accuracy are chosen and evaluated for asymptotic stability. Formally accurate boundary conditions are derived for several sixth- and eighth-order central-difference schemes. Damping of high wave-number data is accomplished with explicit filters of arbitrary order. Several schemes are used to compute variable-density compressible shear layers, where regions of large gradients exist.
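The model problem used for the stability analysis, U_t + aU_x = alpha U_xx, can be exercised with a minimal explicit scheme. The sketch below uses simple second-order central differences with forward Euler on a periodic domain (not one of the paper's higher-order schemes); the grid, a, and alpha are arbitrary choices kept well inside the diffusive stability limit alpha*dt/dx^2 <= 1/2.

```python
import numpy as np

a, alpha = 1.0, 0.05
nx = 200
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = x[1] - x[0]
dt = 0.25 * dx**2 / alpha          # well inside the diffusive limit
u = np.exp(-200.0 * (x - 0.5) ** 2)   # initial Gaussian pulse

for _ in range(500):
    # Second-order central differences on a periodic grid.
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    u = u + dt * (alpha * uxx - a * ux)
```

At these parameters the pulse convects and diffuses without blowing up; pushing dt past the diffusive limit (or removing the viscous term while keeping centered convection) makes the same loop unstable, which is the kind of behavior the paper's eigenvalue analysis characterizes for higher-order operators.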

  18. A method for predicting the impact velocity of a projectile fired from a compressed air gun facility

    International Nuclear Information System (INIS)

    Attwood, G.J.

    1988-03-01

    This report describes the development and use of a method for calculating the velocity at impact of a projectile fired from a compressed air gun. The method is based on a simple but effective approach which has been incorporated into a computer program. The method was developed principally for use with the Horizontal Impact Facility at AEE Winfrith but has been adapted so that it can be applied to any compressed air gun of a similar design. The method has been verified by comparison of predicted velocities with test data and the program is currently being used in a predictive manner to specify test conditions for the Horizontal Impact Facility at Winfrith. (author)
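The report's actual program is not reproduced here; as a hedged illustration of the kind of "simple but effective" calculation involved, a frictionless energy balance with adiabatic expansion of the reservoir gas gives a first estimate of muzzle velocity. The function name and every number below are invented for the example.

```python
import math

def muzzle_velocity(p0, v0, area, barrel_len, mass, gamma=1.4, p_atm=101325.0):
    """Estimate projectile muzzle velocity for a compressed air gun.

    Assumes adiabatic expansion of the reservoir gas (p*V^gamma = const),
    no friction or blow-by, and quasi-static pressure behind the projectile.
    """
    vol_end = v0 + area * barrel_len
    # Work done by the expanding gas over the barrel stroke.
    work_gas = p0 * v0 / (gamma - 1.0) * (1.0 - (v0 / vol_end) ** (gamma - 1.0))
    # Work done against the atmosphere ahead of the projectile.
    work_atm = p_atm * area * barrel_len
    return math.sqrt(max(0.0, 2.0 * (work_gas - work_atm) / mass))

# 10 bar reservoir of 5 L driving a 1 kg projectile down a 3 m, 50 mm bore:
v = muzzle_velocity(p0=1.0e6, v0=5.0e-3, area=math.pi * 0.025**2,
                    barrel_len=3.0, mass=1.0)
```

A practical predictive tool, like the one in the report, would be calibrated against test data to absorb friction, valve-opening, and blow-by losses that this idealized balance ignores.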

  19. Effect of double flasking and investing methods on artificial teeth movement in complete dentures processing.

    Science.gov (United States)

    Sotto-Maior, Bruno S; Jóia, Fábio A; Meloto, Carolina B; Cury, Altair A Del Bel; Rizzatti-Barbosa, Célia M

    2012-06-01

The aim of this study was to evaluate linear dimensional alterations of artificial teeth for complete dentures when using different investment and flasking techniques. Dimensional changes in the vertical dimension may occur owing to changes in artificial teeth positioning caused by different investing and flasking techniques. Thirty pairs of complete dentures were manufactured and randomly divided into three groups (n = 10): (1) invested with type III stone in a monomaxillary PVC flask; (2) invested with type III stone in a bimaxillary PVC flask; and (3) invested with laboratory silicone in a bimaxillary PVC flask. Dentures were polymerised by microwave, and 12 linear distances were measured before and after denture processing. Data were analysed by one-way ANOVA, considering manufacturing technique as the study factor. Tukey's HSD was used as the post hoc test (p = 0.05). Most of the linear distances were comparable for all groups. All transversal maxillary and mandibular distances were higher for group 1 compared with groups 2 and 3 (p < 0.05). Using laboratory silicone as the investment material is shown to be the most effective method to reduce changes in artificial teeth positioning. © 2011 The Gerodontology Society and John Wiley & Sons A/S.

  20. An Immersed Boundary Method for Solving the Compressible Navier-Stokes Equations with Fluid Structure Interaction

    Science.gov (United States)

    Brehm, Christoph; Barad, Michael F.; Kiris, Cetin C.

    2016-01-01

An immersed boundary method for the compressible Navier-Stokes equations and the additional infrastructure that is needed to solve moving boundary problems and fully coupled fluid-structure interaction is described. All the methods described in this paper were implemented in NASA's LAVA solver framework. The underlying immersed boundary method is based on the locally stabilized immersed boundary method that was previously introduced by the authors. In the present paper this method is extended to account for all aspects that are involved in fluid-structure interaction simulations, such as fast geometry queries and stencil computations, the treatment of freshly cleared cells, and the coupling of the computational fluid dynamics solver with a linear structural finite element method. The current approach is validated for moving boundary problems with prescribed body motion and fully coupled fluid-structure interaction problems in 2D and 3D. As part of the validation procedure, results from the second AIAA aeroelastic prediction workshop are also presented. The current paper is regarded as a proof of concept study, while more advanced methods for fluid-structure interaction are currently being investigated, such as geometric and material nonlinearities, and advanced coupling approaches.

  1. An exact and consistent adjoint method for high-fidelity discretization of the compressible flow equations

    Science.gov (United States)

    Subramanian, Ramanathan Vishnampet Ganapathi

    Methods and computing hardware advances have enabled accurate predictions of complex compressible turbulence phenomena, such as the generation of jet noise that motivates the present effort. However, limited understanding of underlying physical mechanisms restricts the utility of such predictions since they do not, by themselves, indicate a route to design improvement. Gradient-based optimization using adjoints can circumvent the flow complexity to guide designs. Such methods have enabled sensitivity analysis and active control of turbulence at engineering flow conditions by providing gradient information at computational cost comparable to that of simulating the flow. They accelerate convergence of numerical design optimization algorithms, though this is predicated on the availability of an accurate gradient of the discretized flow equations. This is challenging to obtain, since both the chaotic character of the turbulence and the typical use of discretizations near their resolution limits in order to efficiently represent its smaller scales will amplify any approximation errors made in the adjoint formulation. Formulating a practical exact adjoint that avoids such errors is especially challenging if it is to be compatible with state-of-the-art simulation methods used for the turbulent flow itself. Automatic differentiation (AD) can provide code to calculate a nominally exact adjoint, but existing general-purpose AD codes are inefficient to the point of being prohibitive for large-scale turbulence simulations. We analyze the compressible flow equations as discretized using the same high-order workhorse methods used for many high-fidelity compressible turbulence simulations, and formulate a practical space--time discrete-adjoint method without changing the basic discretization. A key step is the definition of a particular discrete analog of the continuous norm that defines our cost functional; our selection leads directly to an efficient Runge--Kutta-like scheme

  2. Macron Formed Liner Compression as a Practical Method for Enabling Magneto-Inertial Fusion

    Energy Technology Data Exchange (ETDEWEB)

    Slough, John

    2011-12-10

The entry of fusion as a viable, competitive source of power has been stymied by the challenge of finding an economical way to provide for the confinement and heating of the plasma fuel. The main impediment for current nuclear fusion concepts is the complexity and large mass associated with the confinement systems. To take advantage of the smaller scale, higher density regime of magnetic fusion, an efficient method for achieving the compressional heating required to reach fusion gain conditions must be found. The very compact, high energy density plasmoid commonly referred to as a Field Reversed Configuration (FRC) provides an ideal target for this purpose. To make fusion with the FRC practical, an efficient method for repetitively compressing the FRC to fusion gain conditions is required. A novel approach to be explored in this endeavor is to remotely launch a converging array of small macro-particles (macrons) that merge and form a more massive liner inside the reactor, which then radially compresses and heats the FRC plasmoid to fusion conditions. The closed magnetic field in the target FRC plasmoid suppresses the thermal transport to the confining liner, significantly lowering the imploding power needed to compress the target. With the momentum flux being delivered by an assemblage of low mass, but high velocity macrons, many of the difficulties encountered with liner implosion power technology are eliminated. The undertaking to be described in this proposal is to evaluate the feasibility of achieving fusion conditions from this simple and low cost approach to fusion. During phase I the design and testing of the key components for the creation of the macron formed liner have been successfully carried out. Detailed numerical calculations of the merging, formation and radial implosion of the Macron Formed Liner (MFL) were also performed. The phase II effort will focus on an experimental demonstration of the macron launcher at full power, and the demonstration

  3. Estimating Penetration Resistance in Agricultural Soils of Ardabil Plain Using Artificial Neural Network and Regression Methods

    Directory of Open Access Journals (Sweden)

    Gholam Reza Sheykhzadeh

    2017-02-01

Full Text Available Introduction: Penetration resistance is one of the criteria for evaluating soil compaction. It correlates with several soil properties such as vehicle trafficability, resistance to root penetration, seedling emergence, and soil compaction by farm machinery. Direct measurement of penetration resistance is time consuming and difficult because of high temporal and spatial variability. Therefore, many different regression and artificial neural network pedotransfer functions have been proposed to estimate penetration resistance from readily available soil variables such as particle size distribution, bulk density (Db) and gravimetric water content (θm). The lands of Ardabil Province are one of the main production regions of potato in Iran; thus, obtaining the soil penetration resistance in these regions helps with the management of potato production. The objective of this research was to derive pedotransfer functions by using regression and artificial neural networks to predict penetration resistance from some soil variables in the agricultural soils of Ardabil plain and to compare the performance of the artificial neural network with regression models. Materials and methods: Disturbed and undisturbed soil samples (n = 105) were systematically taken from 0-10 cm soil depth with nearly 3000 m distance in the agricultural lands of the Ardabil plain (lat 38°15' to 38°40' N, long 48°16' to 48°61' E). The contents of sand, silt and clay (hydrometer method), CaCO3 (titration method), bulk density (cylinder method), particle density (Dp) (pycnometer method), organic carbon (wet oxidation method), total porosity (calculated from Db and Dp), saturated (θs) and field (θf) soil water (gravimetric method) were measured in the laboratory. Mean geometric diameter (dg) and standard deviation (σg) of soil particles were computed using the percentages of sand, silt and clay. Penetration resistance was measured in situ using a cone penetrometer (analog model) at 10
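The dg and σg predictors mentioned in the abstract are conventionally computed from the texture fractions with the log-based formulas below. The class mean diameters (1.025 mm for sand, 0.026 mm for silt, 0.001 mm for clay) are the commonly used Shirazi & Boersma values, an assumption here since the abstract does not state which were used; the texture percentages in the example are invented.

```python
import math

def dg_sigma_g(sand, silt, clay, d_class=(1.025, 0.026, 0.001)):
    """Geometric mean particle diameter dg (mm) and geometric standard
    deviation sigma_g from sand/silt/clay percentages.

    dg = exp(sum f_i * ln d_i);  sigma_g = exp(sqrt(sum f_i*(ln d_i - ln dg)^2))
    with f_i the mass fractions and d_i the class mean diameters.
    """
    fractions = [p / 100.0 for p in (sand, silt, clay)]
    ln_dg = sum(f * math.log(d) for f, d in zip(fractions, d_class))
    var = sum(f * (math.log(d) - ln_dg) ** 2 for f, d in zip(fractions, d_class))
    return math.exp(ln_dg), math.exp(math.sqrt(var))

# A loam-like texture: 40% sand, 40% silt, 20% clay.
dg, sg = dg_sigma_g(40.0, 40.0, 20.0)
```

Both quantities condense the full particle-size distribution into two numbers, which is why they are popular inputs for penetration-resistance pedotransfer functions.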

  4. Comparison between two methods for measuring the sound velocity of shock compressed Al1100

    Science.gov (United States)

    Gudinetsky, Eli; Yosef-Hai, Arnon; Eidelstein, Eitan; Bialolenker, Gabi; Paris, Vitaly; Fedotov-Gefen, Alex; Werdiger, Meir; Horovitz, Yossef; Ravid, Avi

    2017-06-01

Sound velocity measurements are an important tool for investigating phase transitions and calibrating the EOS off the principal Hugoniot. Two common methods are the overtake method and the reverse-impact method. Although widely used, there is little discussion of the uncertainties of these methods. A comparison between the aforementioned methods for determining the sound velocity of shock-compressed Al1100 is presented. The experiment consisted of an Al1100 flyer plate which was accelerated to a velocity of 2.2 km/s towards two Al1100 targets of different thickness backed by a PMMA window and a third LiF target. This experiment was complemented by a second reverse-impact experiment in which an Al1100 flyer plate impacted a LiF target. The similarity of the shock impedances of LiF and Al1100 was used in order to achieve the same pressure in both of the experimental methods. The design of these experiments was led by detailed calculations in order to achieve minimal uncertainties in each experiment. These calculations took into account 2D effects such as edge rarefactions originating in the flyer plate, targets and windows. The uncertainty in the sound velocity is compared to our uncertainty estimate, which was based on calculations.

  5. [A rapid dialysis method for analysis of artificial sweeteners in food].

    Science.gov (United States)

    Tahara, Shoichi; Fujiwara, Takushi; Yasui, Akiko; Hayafuji, Chieko; Kobayashi, Chigusa; Uematsu, Yoko

    2014-01-01

A simple and rapid dialysis method was developed for the extraction and purification of four artificial sweeteners, namely, sodium saccharin (Sa), acesulfame potassium (AK), aspartame (APM), and dulcin (Du), which are present in various foods. Conventional dialysis uses a membrane dialysis tube approximately 15 cm in length and is carried out over many hours owing to the small membrane area and inefficient mixing. In particular, processed cereal products such as cookies required treatment for 48 hours to obtain satisfactory recovery of the compounds. By increasing the tube length to 55 cm and introducing efficient mixing by inversion at half-hour intervals, the dialysis times of the four artificial sweeteners, spiked at 0.1 g/kg in cookies, were shortened to 4 hours. Recovery yields of 88.9-103.2% were obtained by using the improved method, whereas recovery yields were low (65.5-82.0%) with the conventional method. Recovery yields (%) of Sa, AK, APM, and Du, spiked at 0.1 g/kg in various foods, were 91.6-100.1, 93.9-100.1, 86.7-100.0 and 88.7-104.7 using the improved method.

  6. Novel method to load multiple genes onto a mammalian artificial chromosome.

    Science.gov (United States)

    Tóth, Anna; Fodor, Katalin; Praznovszky, Tünde; Tubak, Vilmos; Udvardy, Andor; Hadlaczky, Gyula; Katona, Robert L

    2014-01-01

    Mammalian artificial chromosomes are natural chromosome-based vectors that may carry a vast amount of genetic material in terms of both size and number. They are reasonably stable and segregate well in both mitosis and meiosis. A platform artificial chromosome expression system (ACEs) was earlier described with multiple loading sites for a modified lambda-integrase enzyme. It has been shown that this ACEs is suitable for high-level industrial protein production and the treatment of a mouse model for a devastating human disorder, Krabbe's disease. ACEs-treated mutant mice carrying a therapeutic gene lived more than four times longer than untreated counterparts. This novel gene therapy method is called combined mammalian artificial chromosome-stem cell therapy. At present, this method suffers from the limitation that a new selection marker gene should be present for each therapeutic gene loaded onto the ACEs. Complex diseases require the cooperative action of several genes for treatment, but only a limited number of selection marker genes are available and there is also a risk of serious side-effects caused by the unwanted expression of these marker genes in mammalian cells, organs and organisms. We describe here a novel method to load multiple genes onto the ACEs by using only two selectable marker genes. These markers may be removed from the ACEs before therapeutic application. This novel technology could revolutionize gene therapeutic applications targeting the treatment of complex disorders and cancers. It could also speed up cell therapy by allowing researchers to engineer a chromosome with a predetermined set of genetic factors to differentiate adult stem cells, embryonic stem cells and induced pluripotent stem (iPS) cells into cell types of therapeutic value. It is also a suitable tool for the investigation of complex biochemical pathways in basic science by producing an ACEs with several genes from a signal transduction pathway of interest.

  7. Novel method to load multiple genes onto a mammalian artificial chromosome.

    Directory of Open Access Journals (Sweden)

    Anna Tóth

    Full Text Available Mammalian artificial chromosomes are natural chromosome-based vectors that may carry a vast amount of genetic material in terms of both size and number. They are reasonably stable and segregate well in both mitosis and meiosis. A platform artificial chromosome expression system (ACEs) was earlier described with multiple loading sites for a modified lambda-integrase enzyme. It has been shown that this ACEs is suitable for high-level industrial protein production and the treatment of a mouse model for a devastating human disorder, Krabbe's disease. ACEs-treated mutant mice carrying a therapeutic gene lived more than four times longer than untreated counterparts. This novel gene therapy method is called combined mammalian artificial chromosome-stem cell therapy. At present, this method suffers from the limitation that a new selection marker gene should be present for each therapeutic gene loaded onto the ACEs. Complex diseases require the cooperative action of several genes for treatment, but only a limited number of selection marker genes are available and there is also a risk of serious side-effects caused by the unwanted expression of these marker genes in mammalian cells, organs and organisms. We describe here a novel method to load multiple genes onto the ACEs by using only two selectable marker genes. These markers may be removed from the ACEs before therapeutic application. This novel technology could revolutionize gene therapeutic applications targeting the treatment of complex disorders and cancers. It could also speed up cell therapy by allowing researchers to engineer a chromosome with a predetermined set of genetic factors to differentiate adult stem cells, embryonic stem cells and induced pluripotent stem (iPS) cells into cell types of therapeutic value. It is also a suitable tool for the investigation of complex biochemical pathways in basic science by producing an ACEs with several genes from a signal transduction pathway of interest.

  8. Boundary Domain Integral Method for Double Diffusive Natural Convection in Porous Media Saturated with Compressible Fluid

    Science.gov (United States)

    Kramer, J.; Jecl, R.; Škerget, L.

    2008-09-01

    In the present work, a Boundary Domain Integral Method, which has already been established for the solution of viscous incompressible fluid flow through porous media, is extended to capture compressible fluid flow in porous media. The presented numerical scheme was used for solving the problem of double diffusive natural convection in a square porous cavity heated from the side, while the horizontal walls are maintained at different concentrations. The Brinkman extension of the Darcy equation is used to model the flow through the porous medium. The velocity-vorticity formulation is employed, enabling the computation scheme to be partitioned into kinematic and kinetic parts. The results of double diffusive natural convection in the porous cavity are presented in terms of velocity, temperature and concentration distributions.
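The Brinkman-extended Darcy model mentioned above is usually written as the following momentum balance (a sketch of the standard form; the symbols are ours, and the paper's exact formulation, including the buoyancy coupling, may differ):

```latex
% v: Darcy velocity, p: pressure, mu: dynamic viscosity,
% mu_e: effective (Brinkman) viscosity, K: permeability,
% rho g: body force (buoyancy under the Boussinesq approximation)
\nabla p \;=\; -\frac{\mu}{K}\,\mathbf{v} \;+\; \mu_e \nabla^{2}\mathbf{v} \;+\; \rho\,\mathbf{g}
```

The Darcy drag term dominates at low permeability, while the Brinkman Laplacian restores the no-slip condition at solid walls.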

  9. Dynamic magnetic resonance imaging method based on golden-ratio cartesian sampling and compressed sensing.

    Science.gov (United States)

    Li, Shuo; Zhu, Yanchun; Xie, Yaoqin; Gao, Song

    2018-01-01

    Dynamic magnetic resonance imaging (DMRI) is used to noninvasively trace the movements of organs and the process of drug delivery. The results can provide quantitative or semiquantitative pathology-related parameters, thus giving DMRI great potential for clinical applications. However, conventional DMRI techniques suffer from low temporal resolution and long scan time owing to the limitations of the k-space sampling scheme and image reconstruction algorithm. In this paper, we propose a novel DMRI sampling scheme based on a golden-ratio Cartesian trajectory in combination with a compressed sensing reconstruction algorithm. The results of two simulation experiments, designed according to the two major DMRI techniques, showed that the proposed method can improve the temporal resolution and shorten the scan time and provide high-quality reconstructed images.
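The golden-ratio Cartesian idea can be sketched as follows: successive phase-encode lines advance by the golden fraction of the phase-encode extent, so any contiguous window of acquired lines covers k-space near-uniformly. A minimal sketch (function and parameter names are ours, not the authors' code):

```python
import numpy as np

def golden_ratio_cartesian_order(n_phase, n_lines):
    """Order of Cartesian phase-encode lines following a golden-ratio
    increment, so any contiguous time window samples k-space near-uniformly.
    Illustrative sketch of the sampling scheme described above."""
    golden = (np.sqrt(5.0) - 1.0) / 2.0      # ~0.618...
    frac = (np.arange(n_lines) * golden) % 1.0
    return np.floor(frac * n_phase).astype(int)

order = golden_ratio_cartesian_order(256, 64)
```

Because the golden ratio is the "most irrational" number, the 64 lines land in 64 distinct phase-encode positions, and any retrospective temporal window can be reconstructed from a near-uniform subset.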

  10. Dynamic magnetic resonance imaging method based on golden-ratio cartesian sampling and compressed sensing.

    Directory of Open Access Journals (Sweden)

    Shuo Li

    Full Text Available Dynamic magnetic resonance imaging (DMRI) is used to noninvasively trace the movements of organs and the process of drug delivery. The results can provide quantitative or semiquantitative pathology-related parameters, thus giving DMRI great potential for clinical applications. However, conventional DMRI techniques suffer from low temporal resolution and long scan time owing to the limitations of the k-space sampling scheme and image reconstruction algorithm. In this paper, we propose a novel DMRI sampling scheme based on a golden-ratio Cartesian trajectory in combination with a compressed sensing reconstruction algorithm. The results of two simulation experiments, designed according to the two major DMRI techniques, showed that the proposed method can improve the temporal resolution and shorten the scan time and provide high-quality reconstructed images.

  11. Hybrid Modeling and Optimization of Manufacturing Combining Artificial Intelligence and Finite Element Method

    CERN Document Server

    Quiza, Ramón; Davim, J Paulo

    2012-01-01

    Artificial intelligence (AI) techniques and the finite element method (FEM) are both powerful computing tools, which are extensively used for modeling and optimizing manufacturing processes. The combination of these tools has resulted in a new flexible and robust approach as several recent studies have shown. This book aims to review the work already done in this field as well as to expose the new possibilities and foreseen trends. The book is expected to be useful for postgraduate students and researchers, working in the area of modeling and optimization of manufacturing processes.

  12. Intelligent condition monitoring method for bearing faults from highly compressed measurements using sparse over-complete features

    Science.gov (United States)

    Ahmed, H. O. A.; Wong, M. L. D.; Nandi, A. K.

    2018-01-01

    Condition classification of rolling element bearings in rotating machines is important to prevent the breakdown of industrial machinery. A considerable amount of literature has been published on bearing fault classification. These studies aim to determine automatically the current status of a rolling element bearing. Of these studies, methods based on compressed sensing (CS) have received some attention recently due to their ability to sample below the Nyquist rate. This technology has many possible uses in machine condition monitoring and has been investigated as a possible approach for fault detection and classification in the compressed domain, i.e., without reconstructing the original signal. However, previous CS-based methods have been found to be too weak for highly compressed data. The present paper explores computationally, for the first time, the effects of sparse-autoencoder-based over-complete sparse representations on the classification performance of highly compressed measurements of bearing vibration signals. For this study, the CS method was used to produce highly compressed measurements of the original bearing dataset. Then, an effective deep neural network (DNN) with an unsupervised feature-learning algorithm based on a sparse autoencoder is used to learn over-complete sparse representations of these compressed datasets. Finally, fault classification is achieved in two stages: pre-training classification, based on a stacked autoencoder and a softmax regression layer, forms the deep-net stage (the first stage), and re-training classification, based on the backpropagation (BP) algorithm, forms the fine-tuning stage (the second stage). The experimental results show that the proposed method achieves high levels of accuracy even with extremely compressed measurements, compared with the existing techniques.
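The compression stage can be illustrated with the standard CS measurement model; for contrast, a generic orthogonal matching pursuit (OMP) recovery is included, although the paper's point is precisely that classification happens in the compressed domain via a sparse autoencoder, without such reconstruction. All names and sizes below are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

# CS measurement model: a random Gaussian sensing matrix Phi maps a
# length-n vibration frame to m << n compressed measurements y = Phi @ x.
n, m, k = 256, 32, 4                      # frame length, measurements, sparsity
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = Phi @ x                               # 8:1 compression of the frame

def omp(Phi, y, k):
    """Generic orthogonal matching pursuit: greedily pick the atom most
    correlated with the residual, then refit by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, k)
```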

  13. Chapter 22: Compressed Air Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    Energy Technology Data Exchange (ETDEWEB)

    Kurnik, Charles W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Benton, Nathanael [Nexant, Inc., San Francisco, CA (United States); Burns, Patrick [Nexant, Inc., San Francisco, CA (United States)

    2017-10-18

    Compressed-air systems are used widely throughout industry for many operations, including pneumatic tools, packaging and automation equipment, conveyors, and other industrial process operations. Compressed-air systems are defined as a group of subsystems composed of air compressors, air treatment equipment, controls, piping, pneumatic tools, pneumatically powered machinery, and process applications using compressed air. A compressed-air system has three primary functional subsystems: supply, distribution, and demand. Air compressors are the primary energy consumers in a compressed-air system and are the primary focus of this protocol. The two compressed-air energy efficiency measures specifically addressed in this protocol are: High-efficiency/variable speed drive (VSD) compressor replacing modulating, load/unload, or constant-speed compressor; and Compressed-air leak survey and repairs. This protocol provides direction on how to reliably verify savings from these two measures using a consistent approach for each.
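At its simplest, the savings verification that the protocol formalizes reduces to comparing metered compressor power before and after the measure over the annual run hours; the constants below are made up for illustration and are not from the protocol:

```python
# Illustrative first-order estimate for the VSD-replacement measure
# (baseline and efficient kW are hypothetical metered averages).
def annual_kwh_savings(kw_baseline, kw_efficient, run_hours):
    """Simple kWh savings from a constant-load power comparison."""
    return (kw_baseline - kw_efficient) * run_hours

savings = annual_kwh_savings(kw_baseline=75.0, kw_efficient=55.0, run_hours=4000)
# savings == 80000.0 kWh/yr
```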

  14. Comparison of artificial intelligence methods and empirical equations to estimate daily solar radiation

    Science.gov (United States)

    Mehdizadeh, Saeid; Behmanesh, Javad; Khalili, Keivan

    2016-08-01

    In the present research, three artificial intelligence methods including Gene Expression Programming (GEP), Artificial Neural Networks (ANN) and Adaptive Neuro-Fuzzy Inference System (ANFIS), as well as 48 empirical equations (10, 12 and 26 equations were temperature-based, sunshine-based and meteorological-parameters-based, respectively) were used to estimate daily solar radiation in Kerman, Iran in the period of 1992-2009. To develop the GEP, ANN and ANFIS models, depending on the empirical equations used, various combinations of minimum air temperature, maximum air temperature, mean air temperature, extraterrestrial radiation, actual sunshine duration, maximum possible sunshine duration, sunshine duration ratio, relative humidity and precipitation were considered as inputs in the mentioned intelligent methods. To compare the accuracy of the empirical equations and the intelligent models, root mean square error (RMSE), mean absolute error (MAE), mean absolute relative error (MARE) and determination coefficient (R2) indices were used. The results showed that, in general, the sunshine-based and meteorological-parameters-based scenarios in the ANN and ANFIS models presented higher accuracy than the mentioned empirical equations. Moreover, the most accurate method in the studied region was the ANN11 scenario with five inputs. The values of the RMSE, MAE, MARE and R2 indices for the mentioned model were 1.850 MJ m-2 day-1, 1.184 MJ m-2 day-1, 9.58% and 0.935, respectively.
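The four comparison indices above can be computed as follows (standard definitions; variable names are ours):

```python
import numpy as np

def error_metrics(obs, pred):
    """RMSE, MAE, MARE (%) and R^2 between observed and predicted series."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    err = pred - obs
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    mare = 100.0 * np.mean(np.abs(err) / np.abs(obs))   # mean absolute relative error
    r2 = 1.0 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)
    return rmse, mae, mare, r2
```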

  15. Color matching of fabric blends: hybrid Kubelka-Munk + artificial neural network based method

    Science.gov (United States)

    Furferi, Rocco; Governi, Lapo; Volpe, Yary

    2016-11-01

    Color matching of fabric blends is a key issue for the textile industry, mainly due to the rising need to create high-quality products for the fashion market. The process of mixing together differently colored fibers to match a desired color is usually performed by using some historical recipes, skillfully managed by company colorists. More often than desired, the first attempt in creating a blend is not satisfactory, thus requiring the experts to spend efforts in changing the recipe with a trial-and-error process. To confront this issue, a number of computer-based methods have been proposed in the last decades, roughly classified into theoretical and artificial neural network (ANN)-based approaches. Inspired by the above literature, the present paper provides a method for accurate estimation of spectrophotometric response of a textile blend composed of differently colored fibers made of different materials. In particular, the performance of the Kubelka-Munk (K-M) theory is enhanced by introducing an artificial intelligence approach to determine a more consistent value of the nonlinear function relationship between the blend and its components. Therefore, a hybrid K-M+ANN-based method capable of modeling the color mixing mechanism is devised to predict the reflectance values of a blend.
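The baseline Kubelka-Munk mixing that the ANN corrects can be sketched as follows: each component's reflectance is mapped to a K/S value, the blend's K/S is the fraction-weighted sum, and the result is mapped back to reflectance. A minimal single-wavelength sketch (the paper works with full spectrophotometric curves; function names are ours):

```python
import numpy as np

def ks_from_reflectance(R):
    """K/S value of an opaque sample from its reflectance (Kubelka-Munk)."""
    return (1.0 - R) ** 2 / (2.0 * R)

def reflectance_from_ks(ks):
    """Invert K/S back to reflectance."""
    return 1.0 + ks - np.sqrt(ks ** 2 + 2.0 * ks)

def blend_reflectance(fractions, reflectances):
    """Classical linear K/S mixing for a fiber blend: the blend's K/S is
    the fraction-weighted sum of the component K/S values."""
    ks = sum(f * ks_from_reflectance(np.asarray(R, float))
             for f, R in zip(fractions, reflectances))
    return reflectance_from_ks(ks)
```

This linear mixing is exactly the relationship whose nonlinear deviations the hybrid K-M+ANN method learns.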

  16. Comparison of classical statistical methods and artificial neural network in traffic noise prediction

    International Nuclear Information System (INIS)

    Nedic, Vladimir; Despotovic, Danijela; Cvetanovic, Slobodan; Despotovic, Milan; Babic, Sasa

    2014-01-01

    Traffic is the main source of noise in urban environments and significantly affects human mental and physical health and labor productivity. It is therefore very important to model the noise produced by various vehicles. Techniques for traffic noise prediction are mainly based on regression analysis, which generally is not good enough to describe the trends of noise. In this paper, the application of artificial neural networks (ANNs) for the prediction of traffic noise is presented. As input variables of the neural network, the structure of the traffic flow and the average speed of the traffic flow are chosen. The output variable of the network is the equivalent noise level in the given time period, Leq. Based on these parameters, the network is modeled, trained and tested through a comparative analysis of the calculated values and measured levels of traffic noise, using an originally developed user-friendly software package. It is shown that artificial neural networks can be a useful tool for the prediction of noise with sufficient accuracy. In addition, the measured values were also used to calculate the equivalent noise level by means of classical methods, and a comparative analysis is given. The results clearly show that the ANN approach is superior to the other statistical methods in traffic noise level prediction. - Highlights: • We propose an ANN model for prediction of traffic noise. • We developed an originally designed user-friendly software package. • The results are compared with classical statistical methods. • The results show much better predictive capability of the ANN model.
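The output variable Leq is the equivalent continuous noise level, obtained by energy-averaging the sound levels over the period (standard definition, not the authors' code):

```python
import numpy as np

def leq(levels_db):
    """Equivalent continuous sound level over equal time intervals:
    Leq = 10 * log10( mean( 10^(Li/10) ) )."""
    levels_db = np.asarray(levels_db, float)
    return 10.0 * np.log10(np.mean(10.0 ** (levels_db / 10.0)))
```

Because the average is taken on the energy scale, loud intervals dominate: half an hour at 70 dB and half at 60 dB gives about 67.4 dB, not 65.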

  17. Development of ultrasound/endoscopy PACS (picture archiving and communication system) and investigation of compression method for cine images

    Science.gov (United States)

    Osada, Masakazu; Tsukui, Hideki

    2002-09-01

    Picture Archiving and Communication System (PACS) is a system which connects imaging modalities, image archives and image workstations to reduce film-handling cost and improve hospital workflow. Handling diagnostic ultrasound and endoscopy images is challenging because they produce a large amount of data, such as motion (cine) images of 30 frames per second, 640 x 480 in resolution, with 24-bit color, and they require sufficient image quality for clinical review. We have developed a PACS which is able to manage ultrasound and endoscopy cine images at the above resolution and frame rate, and we investigate a suitable compression method and compression rate for clinical image review. Results show that clinicians require frame-by-frame forward and backward review of cine images, because they carefully look through motion images to find certain color patterns which may appear in a single frame. To satisfy this requirement, we chose motion JPEG, installed it, and confirmed that we could capture this specific pattern. As for the acceptable image compression rate, we performed a subjective evaluation. No subjects could tell the difference between original non-compressed images and 1:10 lossy compressed JPEG images. One subject could tell the difference between original and 1:20 lossy compressed JPEG images, although the latter were still acceptable. Thus, ratios of 1:10 to 1:20 are acceptable to reduce data amount and cost while maintaining quality for clinical review.
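The data volumes involved follow directly from the stated cine format; a quick arithmetic sketch of the raw rate and the two acceptable ratios:

```python
# Raw data rate of the cine images described above:
# 640 x 480 pixels, 24-bit (3-byte) colour, 30 frames/s.
def cine_rate_bytes_per_s(width=640, height=480, bytes_per_px=3, fps=30):
    return width * height * bytes_per_px * fps

raw = cine_rate_bytes_per_s()     # 27,648,000 B/s (~26.4 MiB/s)
at_10 = raw // 10                 # 1:10 motion JPEG -> 2,764,800 B/s
at_20 = raw // 20                 # 1:20 motion JPEG -> 1,382,400 B/s
```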

  18. Numerical simulation of 2D and 3D compressible flows

    Science.gov (United States)

    Huml, Jaroslav; Kozel, Karel; Příhoda, Jaromír

    2013-02-01

    The work deals with numerical solutions of 2D inviscid and laminar compressible flows in the GAMM channel and DCA 8% cascade, and of 3D inviscid compressible flows in a 3D modification of the GAMM channel (Swept Wing). The FVM multistage Runge-Kutta method and the Lax-Wendroff scheme (Richtmyer's form) with Jameson's artificial dissipation were applied to obtain the numerical solutions. The results are discussed and compared to other similar results and experiments.
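The ingredients named above can be illustrated in one dimension: a Lax-Wendroff step for linear advection with a Jameson-style fourth-difference artificial dissipation term (a sketch under our own simplifications; the paper treats the full 2D/3D compressible equations):

```python
import numpy as np

def lax_wendroff_step(u, c, eps=0.01):
    """One periodic Lax-Wendroff step for u_t + a*u_x = 0, where
    c = a*dt/dx is the Courant number, plus fourth-difference
    artificial dissipation in the style of Jameson."""
    up1, um1 = np.roll(u, -1), np.roll(u, 1)
    up2, um2 = np.roll(u, -2), np.roll(u, 2)
    u_new = u - 0.5 * c * (up1 - um1) + 0.5 * c * c * (up1 - 2 * u + um1)
    # 4th-difference dissipation: damps odd-even oscillations,
    # vanishes on smooth data, and conserves the total sum
    u_new -= eps * (up2 - 4 * up1 + 6 * u - 4 * um1 + um2)
    return u_new

x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
u = np.sin(x)
for _ in range(50):
    u = lax_wendroff_step(u, c=0.5)
```

With periodic boundaries both the central differences and the dissipation telescope to zero, so the scheme conserves the discrete integral of u while damping high-frequency noise.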

  19. A Compressive Multi-Frequency Linear Sampling Method for Underwater Acoustic Imaging.

    Science.gov (United States)

    Alqadah, Hatim F

    2016-06-01

    This paper investigates the use of a qualitative inverse scattering method known as the linear sampling method (LSM) for imaging underwater scenes using limited aperture receiver configurations. The LSM is based on solving a set of unstable integral equations known as the far-field equations and whose stability breaks down even further for under-sampled observation aperture data. Based on the results of a recent study concerning multi-frequency LSM imaging, we propose an iterative inversion method that is founded upon a compressive sensing framework. In particular, we leverage multi-frequency diversity in the data by imposing a partial frequency variation prior on the solution which we show is justified when the frequency bandwidth is sampled finely enough. We formulate an alternating direction method of multipliers approach to minimize the proposed cost function. Proof of concept is established through numerically generated data as well as experimental acoustic measurements taken in a shallow pool facility at the U.S. Naval Research Laboratory.

  20. Method for Multiple Targets Tracking in Cognitive Radar Based on Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Yang Jun

    2016-02-01

    Full Text Available A multiple-target cognitive radar tracking method based on Compressed Sensing (CS) is proposed. In this method, the theory of CS is introduced to the cognitive radar tracking process in a multiple-target scenario. The echo signal is sparsely expressed. The designs of the sparse matrix and measurement matrix are accomplished by expressing the echo signal sparsely, and subsequently the reconstruction of the measurement signal under the down-sampling condition is realized. On the receiving end, considering that the traditional particle filter suffers from degeneracy and requires a large number of particles, the particle swarm optimization particle filter is used to track the targets. On the transmitting end, the Posterior Cramér-Rao Bound (PCRB) of the tracking accuracy is deduced, and the radar waveform parameters are further cognitively designed using the PCRB. Simulation results show that the proposed method can not only reduce the data quantity, but also provide better tracking performance compared with the traditional method.

  1. Linearly and nonlinearly optimized weighted essentially non-oscillatory methods for compressible turbulence

    Science.gov (United States)

    Taylor, Ellen Meredith

    Weighted essentially non-oscillatory (WENO) methods have been developed to simultaneously provide robust shock-capturing in compressible fluid flow and avoid excessive damping of fine-scale flow features such as turbulence. This is accomplished by constructing multiple candidate numerical stencils that adaptively combine so as to provide high order of accuracy and high bandwidth-resolving efficiency in continuous flow regions while averting instability-provoking interpolation across discontinuities. Under certain conditions in compressible turbulence, however, numerical dissipation remains unacceptably high even after optimization of the linear optimal stencil combination that dominates in smooth regions. The remaining nonlinear error arises from two primary sources: (i) the smoothness measurement that governs the application of adaptation away from the optimal stencil and (ii) the numerical properties of individual candidate stencils that govern numerical accuracy when adaptation engages. In this work, both of these sources are investigated, and corrective modifications to the WENO methodology are proposed and evaluated. Excessive nonlinear error due to the first source is alleviated through two separately considered procedures appended to the standard smoothness measurement technique that are designated the "relative smoothness limiter" and the "relative total variation limiter." In theory, appropriate values of their associated parameters should be insensitive to flow configuration, thereby sidestepping the prospect of costly parameter tuning; and this expectation of broad effectiveness is assessed in direct numerical simulations (DNS) of one-dimensional inviscid test problems, three-dimensional compressible isotropic turbulence of varying Reynolds and turbulent Mach numbers, and shock/isotropic-turbulence interaction (SITI). In the process, tools for efficiently comparing WENO adaptation behavior in smooth versus shock-containing regions are developed.
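The smoothness measurement identified above as an error source is the Jiang-Shu indicator set; the standard (unmodified) WENO5 weight computation it feeds can be sketched as follows. This is the baseline that the proposed limiters modify, not the proposed method itself:

```python
import numpy as np

def weno5_weights(u5, eps=1e-6):
    """Nonlinear WENO5 stencil weights from the Jiang-Shu smoothness
    indicators, given the five cell averages u_{i-2}..u_{i+2}."""
    a, b, c, d, e = u5
    beta0 = 13.0 / 12.0 * (a - 2 * b + c) ** 2 + 0.25 * (a - 4 * b + 3 * c) ** 2
    beta1 = 13.0 / 12.0 * (b - 2 * c + d) ** 2 + 0.25 * (b - d) ** 2
    beta2 = 13.0 / 12.0 * (c - 2 * d + e) ** 2 + 0.25 * (3 * c - 4 * d + e) ** 2
    ideal = np.array([0.1, 0.6, 0.3])                 # linear optimal weights
    alpha = ideal / (eps + np.array([beta0, beta1, beta2])) ** 2
    return alpha / alpha.sum()
```

On smooth data all three indicators agree and the weights revert to the linear optimal combination; across a discontinuity the weight collapses onto the smooth-side stencil.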

  2. Present and future methods of mine detection using scattering parameters and an artificial neural network

    Science.gov (United States)

    Plett, Gregory; Doi, Takeshi; Torrieri, Don

    1996-05-01

    The detection and disposal of anti-personnel landmines is one of the most difficult and intractable problems faced in ground conflict. This paper first presents current detection methods which use a separated aperture microwave sensor and an artificial neural-network pattern classifier. Several data-specific pre-processing methods are developed to enhance neural-network learning. In addition, a generalized Karhunen-Loeve transform and the eigenspace separation transform are used to perform data reduction and reduce network complexity. Highly favorable results have been obtained using the above methods in conjunction with a feedforward neural network. Secondly, a very promising idea relating to future research is proposed that uses acoustic modulation of the microwave signal to provide an additional independent feature to the input of the neural network. The expectation is that near-perfect mine detection will be possible with this proposed system.
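The Karhunen-Loeve data-reduction step mentioned above is, in its generic form, a projection onto the leading eigenvectors of the sample covariance; a minimal sketch (not the paper's exact eigenspace separation transform):

```python
import numpy as np

def kl_transform(X, n_components):
    """Karhunen-Loeve (PCA-style) reduction: project the rows of X onto
    the eigenvectors of the sample covariance with largest eigenvalues."""
    Xc = X - X.mean(axis=0)                     # center the features
    cov = np.cov(Xc, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)          # ascending eigenvalues
    order = np.argsort(evals)[::-1][:n_components]
    return Xc @ evecs[:, order]
```

Reducing the feature dimension this way shrinks the neural network's input layer, which is the "reduce network complexity" step the abstract refers to.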

  3. Risk assessment for pipelines with active defects based on artificial intelligence methods

    International Nuclear Information System (INIS)

    Anghel, Calin I.

    2009-01-01

    The paper provides another insight into pipeline risk assessment for in-service pressure piping containing defects. Besides the traditional analytical approximation methods and sampling-based methods, the safety index and failure probability of pressure piping containing defects are obtained with a novel type of support vector machine developed in a minimax manner. The safety index or failure probability is computed through a binary classification approach. The procedure, named the classification reliability procedure and involving a link between artificial intelligence and reliability methods, was developed as a user-friendly computer program in the MATLAB language. To demonstrate the capacity of the proposed procedure, two comparative numerical examples replicating a previous related work and predicting the failure probabilities of a pressurized pipeline with defects are presented.
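For orientation, the two quantities the procedure estimates are related in the classical way: with a Gaussian limit state g = R - S, the safety index is the mean-to-standard-deviation ratio of g and the failure probability is Phi(-beta). The sketch below checks a Monte Carlo estimate against that closed form; the distributions and numbers are illustrative, not the paper's SVM-based computation:

```python
import math
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
R = rng.normal(10.0, 1.0, N)            # resistance of the defective pipe wall
S = rng.normal(7.0, 1.0, N)             # load effect (operating pressure)
pf_mc = float(np.mean(R - S < 0.0))     # Monte Carlo failure probability

# For independent normals, g = R - S ~ N(3, sqrt(2)):
beta = (10.0 - 7.0) / math.sqrt(1.0 ** 2 + 1.0 ** 2)    # safety index
pf_exact = 0.5 * math.erfc(beta / math.sqrt(2.0))       # Phi(-beta)
```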

  4. Risk assessment for pipelines with active defects based on artificial intelligence methods

    Energy Technology Data Exchange (ETDEWEB)

    Anghel, Calin I. [Department of Chemical Engineering, Faculty of Chemistry and Chemical Engineering, University ' Babes-Bolyai' , Cluj-Napoca (Romania)], E-mail: canghel@chem.ubbcluj.ro

    2009-07-15

    The paper provides another insight into pipeline risk assessment for in-service pressure piping containing defects. Besides the traditional analytical approximation methods and sampling-based methods, the safety index and failure probability of pressure piping containing defects are obtained with a novel type of support vector machine developed in a minimax manner. The safety index or failure probability is computed through a binary classification approach. The procedure, named the classification reliability procedure and involving a link between artificial intelligence and reliability methods, was developed as a user-friendly computer program in the MATLAB language. To demonstrate the capacity of the proposed procedure, two comparative numerical examples replicating a previous related work and predicting the failure probabilities of a pressurized pipeline with defects are presented.

  5. Determination of Electron Optical Properties for Aperture Zoom Lenses Using an Artificial Neural Network Method.

    Science.gov (United States)

    Isik, Nimet

    2016-04-01

    Multi-element electrostatic aperture lens systems are widely used to control electron or charged particle beams in many scientific instruments. By means of the applied voltages, these lens systems can be operated for different purposes. In this context, numerous methods have been used to calculate the focal properties of these lenses. In this study, an artificial neural network (ANN) classification method is utilized to determine whether the charged particle beam is focused or unfocused at the image point, as a function of the lens voltages, for multi-element electrostatic aperture lenses. The data set for training and testing the ANN is taken from the SIMION 8.1 simulation program, a well-known program of proven accuracy in charged particle optics. The mean squared error results of this study indicate that the ANN classification method provides notable performance characteristics for electrostatic aperture zoom lenses.

  6. Study on Fault Diagnostics of a Turboprop Engine Using Inverse Performance Model and Artificial Intelligent Methods

    Science.gov (United States)

    Kong, Changduk; Lim, Semyeong

    2011-12-01

    Recently, health monitoring systems for the major gas-path components of gas turbines have mostly used model-based methods such as Gas Path Analysis (GPA). This method finds quantitative changes in component performance characteristic parameters, such as isentropic efficiency and mass flow parameter, by comparing measured engine performance parameters (temperatures, pressures, rotational speeds, fuel consumption, etc.) with clean engine performance parameters, free of any engine faults, calculated by the base engine performance model. Currently, expert engine diagnostic systems using artificial intelligence methods such as Neural Networks (NNs), Fuzzy Logic and Genetic Algorithms (GAs) are being studied to improve the model-based method. Among them, NNs are most often used for engine fault diagnosis due to their good learning performance, but they suffer from low accuracy and long learning times when the learning database must be built from large amounts of learning data. In addition, they require a very complex structure to effectively identify single-type or multiple-type faults of gas-path components. This work inversely builds a base performance model of a turboprop engine, to be used for a high-altitude UAV, from measured performance data, and proposes a fault diagnostic system using the base engine performance model and artificial intelligence methods such as Fuzzy Logic and Neural Networks. The proposed diagnostic system first isolates the faulted components using Fuzzy Logic, then quantifies the faults of the identified components using an NN trained on a fault learning database obtained from the developed base performance model. In training the NN, the Feed-Forward Back-Propagation (FFBP) method is used. Finally, it is verified through several test examples that component faults implanted arbitrarily in the engine are well isolated and quantified by the proposed diagnostic system.

  7. Monitoring of operation with artificial intelligence methods; Betriebsueberwachung mit Verfahren der Kuenstlichen Intelligenz

    Energy Technology Data Exchange (ETDEWEB)

    Bruenninghaus, H. [DMT-Gesellschaft fuer Forschung und Pruefung mbH, Essen (Germany). Geschaeftsbereich Systemtechnik

    1999-03-11

    Taking the applications 'early detection of fires' and 'reduction of message bursts' as examples, the usability of artificial intelligence (AI) methods in the monitoring of operation was examined in an R and D project. The contribution describes the concept, development and evaluation of solutions to the specified problems. As a basis for the project, a platform had to be created that made it possible to investigate different AI methods (in particular, artificial neural networks). At the same time, ventilation data had to be acquired and processed so that the networks could classify the relationships between the ventilation measuring points along the airway. (orig.)

  8. A Microwave Free-Space Method Using Artificial Lens with Anti-reflection Layer

    Science.gov (United States)

    Zhang, Yangjun; Aratani, Yuki; Nakazima, Hironari

    2017-12-01

    This paper describes a microwave free-space method using flat artificial lens antennas with an anti-reflection layer. The lens antenna is made of an artificial material of metal particles. Compared with our previous study, anti-reflection (AR) layers are added to the lens in this study to obtain wave matching at the air-lens interface. The improved lens is a disk of 50 mm diameter and 5.9 mm thickness. The lens is applied in a free-space setup, in which it is set in front of a patch antenna resonating at 15 GHz to obtain a high gain. The free-space setup is used to measure microwave attenuation and phase shift through a sawdust sample. The experimental results show that multiple reflections in the free-space method become small, because the reflection at the air-lens interface has been reduced. The proposed AR lens antenna is flat and very small, making it possible to construct a very compact and low-cost free-space setup.
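The wave matching an AR layer provides is the classical quarter-wave condition: the layer's refractive index equals the geometric mean of the neighboring indices, and its thickness is a quarter of the in-layer wavelength. A sketch at the paper's 15 GHz design frequency (the lens index below is our assumption; the paper does not state it here):

```python
import math

n_air, n_lens = 1.0, 2.0               # lens index is an assumed value
f = 15e9                               # design frequency from the paper, Hz
lam0 = 3e8 / f                         # free-space wavelength: 20 mm

n_ar = math.sqrt(n_air * n_lens)       # quarter-wave matching index
t = lam0 / (4.0 * n_ar)                # AR layer thickness

# Quarter-wave transformer check: the input impedance seen through the
# layer equals that of air, so the reflection coefficient vanishes.
eta = lambda n: 377.0 / n              # wave impedance of a dielectric
eta_in = eta(n_ar) ** 2 / eta(n_lens)
gamma = (eta_in - eta(n_air)) / (eta_in + eta(n_air))   # ~0 when matched
```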

  9. Spatial capture-recapture: a promising method for analyzing data collected using artificial cover objects

    Science.gov (United States)

    Sutherland, Chris; Munoz, David; Miller, David A.W.; Grant, Evan H. Campbell

    2016-01-01

    Spatial capture–recapture (SCR) is a relatively recent development in ecological statistics that provides a spatial context for estimating abundance and space use patterns, and improves inference about absolute population density. SCR has been applied to individual encounter data collected noninvasively using methods such as camera traps, hair snares, and scat surveys. Despite the widespread use of capture-based surveys to monitor amphibians and reptiles, there are few applications of SCR in the herpetological literature. We demonstrate the utility of the application of SCR for studies of reptiles and amphibians by analyzing capture–recapture data from Red-Backed Salamanders, Plethodon cinereus, collected using artificial cover boards. Using SCR to analyze spatial encounter histories of marked individuals, we found evidence that density differed little among four sites within the same forest (on average, 1.59 salamanders/m2) and that salamander detection probability peaked in early October (Julian day 278) reflecting expected surface activity patterns of the species. The spatial scale of detectability, a measure of space use, indicates that the home range size for this population of Red-Backed Salamanders in autumn was 16.89 m2. Surveying reptiles and amphibians using artificial cover boards regularly generates spatial encounter history data of known individuals, which can readily be analyzed using SCR methods, providing estimates of absolute density and inference about the spatial scale of habitat use.
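The "spatial scale of detectability" above refers to the sigma parameter of the usual SCR encounter model, most often a half-normal decline of detection probability with distance from an individual's activity centre; a sketch with illustrative parameters (not the paper's estimates):

```python
import math

def halfnormal_detection(d, p0, sigma):
    """Half-normal SCR detection function: baseline probability p0 at the
    activity centre, declining with distance d at spatial scale sigma."""
    return p0 * math.exp(-(d * d) / (2.0 * sigma * sigma))
```

Fitting p0 and sigma to the spatial encounter histories of marked salamanders is what yields both the density estimate and the home-range-scale inference reported above.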

  10. A versatile embedded boundary adaptive mesh method for compressible flow in complex geometry

    KAUST Repository

    Almarouf, Mohamad Abdulilah Alhusain Alali

    2017-02-25

    We present an embedded ghost-fluid method for numerical solution of the compressible Navier-Stokes (CNS) equations in arbitrarily complex domains. A PDE-based multidimensional extrapolation approach is used to reconstruct the solution in the ghost-fluid regions and to impose boundary conditions on the fluid-solid interface, coupled with a multidimensional algebraic interpolation for freshly cleared cells. The CNS equations are solved numerically by a second-order multidimensional upwind method. Block-structured adaptive mesh refinement, implemented with the Chombo framework, is utilized to reduce the computational cost while keeping a high-resolution mesh around the embedded boundary and in regions of high solution gradients. The versatility of the method is demonstrated via several numerical examples, in both static and moving geometry, ranging from low-Mach-number nearly incompressible flows to supersonic flows. Our simulation results are extensively verified against other numerical results and validated against available experimental results where applicable. The significance and advantages of our implementation, which revolve around balancing solution accuracy against implementation difficulty, are briefly discussed as well.

  11. Integer programming-based method for grammar-based tree compression and its application to pattern extraction of glycan tree structures.

    Science.gov (United States)

    Zhao, Yang; Hayashida, Morihiro; Akutsu, Tatsuya

    2010-12-14

    A bisection-type algorithm for the grammar-based compression of tree-structured data has been proposed recently. In this framework, an elementary ordered-tree grammar (EOTG) and an elementary unordered-tree grammar (EUTG) were defined, and an approximation algorithm was proposed. In this paper, we propose an integer programming-based method that finds the minimum context-free grammar (CFG) for a given string under the condition that at most two symbols appear on the right-hand side of each production rule. Next, we extend this method to find the minimum EOTG and EUTG grammars for given ordered and unordered trees, respectively. We then conduct computational experiments on artificial ordered and unordered trees. Finally, we apply our methods to pattern extraction of glycan tree structures. In summary, we propose integer programming-based methods that find the minimum CFG, EOTG, and EUTG for given strings, ordered trees, and unordered trees; our methods for trees are useful for extracting patterns of glycan tree structures.

  12. Differences in Streptococcus mutans adhesion between artificial mouth systems: dynamic and static methods

    Directory of Open Access Journals (Sweden)

    Aryan Morita

    2016-06-01

    Full Text Available Background: Various materials have been used for treating dental caries, a disease that attacks the hard tissues of the teeth. The initial phase of caries is the formation of a bacterial biofilm known as dental plaque. Dental restorative materials are expected to prevent secondary caries formation initiated by dental plaque. Initial bacterial adhesion is assumed to be an important stage of dental plaque formation. Bacteria that recognize the receptors for binding to the pellicle on the tooth surface are known as initial colonizers. One of the bacteria that plays a role in the early stage of dental plaque formation is Streptococcus mutans (S. mutans). An artificial mouth system (AMS) used in research on bacterial biofilms of the oral cavity reproduces the real conditions of the oral cavity and provides a continuous or intermittent supply of nutrients for bacteria. Purpose: This study aimed to compare the adhesion profile of S. mutans, the primary etiologic agent of dental caries, between a static method and an artificial mouth system, a dynamic method (AMS). Method: The study was conducted at the Faculty of Dentistry and the Integrated Research and Testing Laboratory (LPPT) of Universitas Gadjah Mada from April to August 2015. Composite resin was used as the subject of this research. Twelve composite resin specimens, 5 mm in diameter and 2 mm thick, were divided into two groups, one using the static method and one using the dynamic method. The static method was performed by submerging the samples in 100 µl of a 1.5 × 10⁸ CFU/ml suspension of S. mutans and 200 µl BHI broth. The AMS method was carried out by placing the samples in the AMS tube supplied with bacterial suspension and sterile aquadest at 20 drops/minute. After 72 hours, five samples from each group were measured for biofilm mass using 1% crystal violet read by a spectrophotometer at a wavelength of 570 nm. Meanwhile, one sample from each group was taken for its

  13. Triaxial- and uniaxial-compression testing methods developed for extraction of pore water from unsaturated tuff, Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Mower, T.E.; Higgins, J.D.; Yang, I.C.

    1989-01-01

    To support the study of the hydrologic system in the unsaturated zone at Yucca Mountain, Nevada, two extraction methods were examined for obtaining representative, uncontaminated pore-water samples from unsaturated tuff. Results indicate that triaxial compression, which uses a standard cell, can remove pore water from nonwelded tuff that has an initial moisture content greater than 11% by weight; uniaxial compression, which uses a specially fabricated cell, can extract pore water from nonwelded tuff that has an initial moisture content greater than 8% and from welded tuff that has an initial moisture content greater than 6.5%. For the ambient moisture conditions of Yucca Mountain tuffs, uniaxial compression is the more efficient method of pore-water extraction. 12 refs., 7 figs., 2 tabs

  14. Design of alluvial Egyptian irrigation canals using artificial neural networks method

    Directory of Open Access Journals (Sweden)

    Hassan Ibrahim Mohamed

    2013-06-01

    Full Text Available In the present study, the artificial neural networks (ANNs) method is used to estimate the main parameters used in the design of stable alluvial channels. The capability of ANN models to predict stable alluvial channel dimensions is investigated, where the flow rate and sediment mean grain size are considered as input variables, and wetted perimeter, hydraulic radius, and water surface slope are considered as output variables. The ANN models are based on a back-propagation algorithm training a multi-layer feed-forward network (Levenberg-Marquardt algorithm). The proposed models were verified using 311 data sets of field data collected from 61 man-made canals and drains. Several statistical measures and graphical representations are used to check the accuracy of the models in comparison with previous empirical equations. The results of the developed ANN model proved that this technique is reliable in this field compared with previously developed methods.

  15. The artificial periodic lattice phase analysis method applied to deformation evaluation of TiNi shape memory alloy in micro scale

    International Nuclear Information System (INIS)

    Liu, Z W; Huang, X F; Lou, X H; Xie, H M; Du, H

    2011-01-01

    The basic principle of the artificial periodic lattice phase analysis method, which is based on an artificial periodic lattice, is thoroughly introduced in this investigation. The improved technique is intended to expand the test field of experimental mechanics from the nanoscale to the micro- and macroscopic realms in combination with a submicron grid produced by a focused ion beam (FIB). Phase information can be obtained from the filtered images after fast Fourier transform (FFT) and inverse FFT. Thus, the in-plane displacement fields, as well as the local strain distributions related to the phase information, can be evaluated. The application scope of the technique was established by a simulation experiment. The displacement fields and strain distributions of porous TiNi shape memory alloy under compressive loading were calculated by the technique at the micro scale. The specimen grid was fabricated directly on the tested flat surface using a FIB. The evolution rule of shear zones in the micro regions near pores has been discovered. The obtained results indicate that the technique not only is well suited to measuring full-field deformation but also, more significantly, can characterize mechanical properties at the micro scale.

  16. A comparison of methods for estimating fish assemblages associated with estuarine artificial reefs

    Directory of Open Access Journals (Sweden)

    Michael Lowry

    2011-01-01

    Full Text Available Monitoring strategies that adequately represent the entire community associated with artificial structures will enable more informed decisions regarding the broader effects of artificial structures and their role in the management of fisheries resources. Despite the widespread application of a range of in situ visual monitoring methodologies in the assessment of artificial structures, the relative biases associated with each method have not been critically examined and remain poorly understood. Estimates of fish abundance on six estuarine artificial reefs carried out by divers using underwater visual census techniques (UVC) were compared with estimates of relative abundance determined by baited remote underwater video (BRUV). It was found that, when combined, both methods provided a more comprehensive description of the species associated with estuarine artificial reefs. However, the number of species detected and the frequency of detection varied between methods. Results indicated that the differences in rates of detection between the UVC and BRUV methodologies were primarily related to the ecological niche and behaviour of the species in question. UVC provided better estimates of rare or cryptic reef-associated species. BRUV sampled a smaller proportion of species overall but did identify key recreational species such as Acanthopagrus australis, Pagrus auratus and Rhabdosargus sarba with increased frequency. Correlation of abundance indices for species classified as "permanent" identified interspecific interactions that may act as a source of bias associated with BRUV observations.

  17. Identifying the dynamic compressive stiffness of a prospective biomimetic elastomer by an inverse method.

    Science.gov (United States)

    Mates, Steven P; Forster, Aaron M; Hunston, Donald; Rhorer, Richard; Everett, Richard K; Simmonds, Kirth E; Bagchi, Amit

    2012-10-01

    Soft elastomeric materials that mimic real soft human tissues are sought to provide realistic experimental devices to simulate the human body's response to blast loading, to aid the development of more effective protective equipment. The dynamic mechanical behavior of these materials is often measured using a Kolsky bar because it can achieve both the high strain rates (>100 s⁻¹) and the large strains (>20%) that prevail in blast scenarios. Obtaining valid results is challenging, however, due to poor dynamic equilibrium, friction, and inertial effects. To avoid these difficulties, an inverse method was employed to determine the dynamic response of a soft, prospective biomimetic elastomer using Kolsky bar tests coupled with high-speed 3D digital image correlation. Individual tests were modeled using finite elements, and the dynamic stiffness of the elastomer was identified by matching the simulation results with test data using numerical optimization. Using this method, the average dynamic response was found to be nearly equivalent to the quasi-static response measured with stress-strain curves at compressive strains up to 60%, with an uncertainty of ±18%. Moreover, the behavior was consistent with the results of stress-relaxation experiments and oscillatory tests, although the latter were performed at lower strain levels. Published by Elsevier Ltd.

  18. A comparison of performance of several artificial intelligence methods for forecasting monthly discharge time series

    Science.gov (United States)

    Wang, Wen-Chuan; Chau, Kwok-Wing; Cheng, Chun-Tian; Qiu, Lin

    2009-08-01

    Developing a hydrological forecasting model based on past records is crucial to effective hydropower reservoir management and scheduling. Traditionally, time series analysis and modeling are used to build mathematical models for generating hydrologic records in hydrology and water resources. Artificial intelligence (AI), as a branch of computer science, is capable of analyzing long-series and large-scale hydrological data, and applying AI technology to hydrological forecasting has become a prominent research topic in recent years. In this paper, autoregressive moving-average (ARMA) models, artificial neural network (ANN) approaches, adaptive neural-based fuzzy inference system (ANFIS) techniques, genetic programming (GP) models and the support vector machine (SVM) method are examined using long-term observations of monthly river flow discharges. Four quantitative standard statistical performance evaluation measures, the coefficient of correlation (R), the Nash-Sutcliffe efficiency coefficient (E), the root mean squared error (RMSE), and the mean absolute percentage error (MAPE), are employed to evaluate the performances of the various models developed. Two case study river sites are also provided to illustrate their respective performances. The results indicate that the best performance is obtained by ANFIS, GP and SVM, in terms of the different evaluation criteria, during both the training and validation phases.
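    The four evaluation measures named above can be computed directly from paired observed and simulated series; the sketch below (function name and sample data are illustrative, not from the study) implements R, E, RMSE and MAPE.

```python
import math

def metrics(obs, sim):
    """Correlation coefficient R, Nash-Sutcliffe efficiency E,
    root mean squared error RMSE, mean absolute percentage error MAPE."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    vo = sum((o - mo) ** 2 for o in obs)
    vs = sum((s - ms) ** 2 for s in sim)
    sq_err = sum((o - s) ** 2 for o, s in zip(obs, sim))
    r = cov / math.sqrt(vo * vs)
    e = 1.0 - sq_err / vo
    rmse = math.sqrt(sq_err / n)
    mape = 100.0 / n * sum(abs((o - s) / o) for o, s in zip(obs, sim))
    return r, e, rmse, mape

# hypothetical monthly discharges: observed vs simulated
obs = [10.0, 12.0, 9.0, 14.0, 11.0]
sim = [9.5, 12.5, 9.2, 13.0, 11.4]
r, e, rmse, mape = metrics(obs, sim)
```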

  19. UAV path planning using artificial potential field method updated by optimal control theory

    Science.gov (United States)

    Chen, Yong-bo; Luo, Guan-chen; Mei, Yue-song; Yu, Jian-qiao; Su, Xiao-long

    2016-04-01

    The unmanned aerial vehicle (UAV) path planning problem is an important assignment in UAV mission planning. Starting from the artificial potential field (APF) UAV path planning method, the problem is reconstructed as a constrained optimisation problem by introducing an additional control force. The constrained optimisation problem is translated into an unconstrained optimisation problem with the help of slack variables in this paper. The functional optimisation method is applied to reformulate this problem as an optimal control problem. The whole transformation process is deduced in detail, based on a discrete UAV dynamic model. Then, the path planning problem is solved with the help of the optimal control method. A path following process based on a six-degrees-of-freedom simulation model of a quadrotor helicopter is introduced to verify the practicability of this method. Finally, the simulation results show that the improved method is more effective in path planning. In the planning space, the calculated path is shorter and smoother than that of the traditional APF method. In addition, the improved method solves the dead point problem effectively.
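    A minimal sketch of the classical APF idea that the improved method builds on, assuming the usual quadratic attractive and inverse-distance repulsive potentials (the gains, influence distance d0 and step size below are illustrative choices, not the paper's values):

```python
import math

def apf_step(pos, goal, obstacles, ka=1.0, kr=100.0, d0=2.0, step=0.05):
    """One gradient-descent step on the artificial potential field:
    quadratic attraction toward the goal plus inverse-distance
    repulsion from obstacles closer than the influence distance d0."""
    fx = ka * (goal[0] - pos[0])
    fy = ka * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 1e-9 < d < d0:
            mag = kr * (1.0 / d - 1.0 / d0) / d ** 3
            fx += mag * dx
            fy += mag * dy
    return (pos[0] + step * fx, pos[1] + step * fy)

pos, goal = (0.0, 0.0), (10.0, 10.0)
obstacles = [(4.0, 7.0)]
for _ in range(2000):
    pos = apf_step(pos, goal, obstacles)
print(round(pos[0], 2), round(pos[1], 2))  # → 10.0 10.0
```

    An obstacle placed exactly on the goal line can balance the attractive force and trap this plain gradient descent, which is the "dead point" problem the optimal-control reformulation above is designed to avoid.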

  20. A multitarget training method for artificial neural network with application to computer-aided diagnosis.

    Science.gov (United States)

    Liu, Bei; Jiang, Yulei

    2013-01-01

    The authors propose a new training method for artificial neural networks (ANNs) in two-class classification tasks such as classifying breast lesions on a mammogram as malignant or benign. Whereas the conventional binary training method uses binary training target values based on the diagnostic truth of a lesion being malignant or benign, the authors use multiple training target values based on more detailed histological diagnosis that presumably are related to the posterior probability of a lesion being malignant. The authors performed Monte Carlo simulation studies in which training target values were assigned based on posterior probability, and they also performed a mammography study in which training target values were assigned according to histological subtypes. These studies showed that the multitarget training method produced less variability in the ANN outputs than the binary training method. The simulation studies also showed that except for when the number of training cases was extremely large, the multitarget training method produced improved overall classification performance over the binary training method. Therefore, the multitarget ANN training method is potentially useful for ANN applications in computer-aided diagnosis of breast cancer.
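    A minimal sketch of the target-assignment idea described above; the subtype names and graded values below are entirely hypothetical stand-ins for histology-based posterior probabilities, not the study's actual labels.

```python
# Hypothetical table: each histological subtype carries a binary truth
# (malignant or benign) and a graded target value standing in for the
# posterior probability of malignancy.
SUBTYPES = {
    # name        (malignant?, graded target)
    "subtype_a": (True,  1.00),
    "subtype_b": (True,  0.85),
    "subtype_c": (False, 0.30),
    "subtype_d": (False, 0.05),
}

def binary_target(subtype):
    """Conventional training target: 1 for malignant, 0 for benign."""
    malignant, _ = SUBTYPES[subtype]
    return 1.0 if malignant else 0.0

def multi_target(subtype):
    """Multitarget training value: graded by histological subtype."""
    return SUBTYPES[subtype][1]

cases = ["subtype_a", "subtype_b", "subtype_c", "subtype_d"]
binary_labels = [binary_target(c) for c in cases]   # [1.0, 1.0, 0.0, 0.0]
graded_labels = [multi_target(c) for c in cases]    # [1.0, 0.85, 0.3, 0.05]
```

    Training an ANN to regress the graded labels instead of the binary ones is the essence of the multitarget method; the ranking of outputs still supports two-class decisions.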

  1. Prediction of the Thermal Conductivity of Refrigerants by Computational Methods and Artificial Neural Network.

    Science.gov (United States)

    Ghaderi, Forouzan; Ghaderi, Amir H; Ghaderi, Noushin; Najafi, Bijan

    2017-01-01

    Background: The thermal conductivity of fluids can be calculated by several computational methods. However, these methods are reliable only within limited ranges of density, and there is no specific computational method for calculating thermal conductivity over wide ranges of density. Methods: In this paper, two methods, an Artificial Neural Network (ANN) approach and a computational method established upon the Rainwater-Friend theory, were used to predict the value of thermal conductivity in all ranges of density. The thermal conductivity of six refrigerants, R12, R14, R32, R115, R143, and R152, was predicted by these methods, and the effectiveness of the models was assessed and compared. Results: The results show that the computational method is usable for predicting thermal conductivity at low densities. However, the efficiency of this model is considerably reduced in the mid-range of density, which means that it cannot be used at density levels higher than 6. On the other hand, the ANN approach is a reliable method for thermal conductivity prediction in all ranges of density. The best accuracy of the ANN is achieved when the number of units in the hidden layer is increased. Conclusion: The results of the computational method indicate that the regular dependence between thermal conductivity and density is lost at higher densities, which makes the problem nonlinear. Therefore, analytical approaches are not able to predict thermal conductivity over wide ranges of density. Instead, a nonlinear approach such as an ANN is a valuable method for this purpose.

  2. Using the Maturity Method in Predicting the Compressive Strength of Vinyl Ester Polymer Concrete at an Early Age

    Directory of Open Access Journals (Sweden)

    Nan Ji Jin

    2017-01-01

    Full Text Available The compressive strength of vinyl ester polymer concrete is predicted using the maturity method. The compressive strength increased rapidly until the curing age of 24 hrs and thereafter increased slowly until the curing age of 72 hrs. As the MMA content increased, the compressive strength decreased. Furthermore, as the curing temperature decreased, the compressive strength decreased. For vinyl ester polymer concrete, the datum temperature, ranging from −22.5 to −24.6°C, decreased as the MMA content increased. The maturity index equation for cement concrete cannot be applied to polymer concrete, and the maturity of vinyl ester polymer concrete can only be estimated through control of the time interval Δt. Thus, this study introduced a suitable scaled-down factor (n) for determining polymer concrete's maturity, and a factor of 0.3 was the most suitable. Also, the DR-HILL compressive strength prediction model was determined to be applicable to vinyl ester polymer concrete among the dose-response models. For the parameters of the prediction model, applying the parameters obtained by combining all data from the three different amounts of MMA content was deemed acceptable. The study results could be useful for the quality control of vinyl ester polymer concrete and nondestructive prediction of early age strength.
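    A sketch of a Nurse-Saul-style maturity computation with the scaled time increment described above; the curing history is hypothetical, while the datum temperature and the scale factor n = 0.3 follow the values reported in the abstract.

```python
def maturity(temps_and_hours, datum=-23.0, n=0.3):
    """Nurse-Saul-style maturity index M = sum (T - T0) * (n * dt),
    with each time increment dt scaled by the factor n (0.3 here),
    as used for polymer concrete in the study."""
    return sum((T - datum) * (n * dt) for T, dt in temps_and_hours)

# hypothetical curing history: 24 h at 20 C, then 48 h at 10 C
history = [(20.0, 24.0), (10.0, 48.0)]
m = maturity(history)  # degree-hours
```

    The maturity value then feeds a dose-response strength model (DR-HILL in the study) to predict early-age compressive strength nondestructively.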

  3. A comparison of sputum induction methods: ultrasonic vs compressed-air nebulizer and hypertonic vs isotonic saline inhalation.

    Science.gov (United States)

    Loh, L C; Eg, K P; Puspanathan, P; Tang, S P; Yip, K S; Vijayasingham, P; Thayaparan, T; Kumar, S

    2004-03-01

    Airway inflammation can be demonstrated by the modern method of sputum induction using an ultrasonic nebulizer and hypertonic saline. We studied whether a compressed-air nebulizer and isotonic saline, which are commonly available and cost less, are as effective in inducing sputum in normal adult subjects. Sixteen subjects underwent weekly sputum induction in the following manner: ultrasonic nebulizer (Medix Sonix 2000, Clement Clarke, UK) using hypertonic saline, ultrasonic nebulizer using isotonic saline, compressed-air nebulizer (BestNeb, Taiwan) using hypertonic saline, and compressed-air nebulizer using isotonic saline. Overall, the use of an ultrasonic nebulizer and hypertonic saline yielded significantly higher total sputum cell counts and a higher percentage of cell viability than compressed-air nebulizers and isotonic saline. With the latter, there was a trend towards squamous cell contamination. The proportion of various sputum cell types was not significantly different between the groups, and the reproducibility of sputum macrophage and neutrophil counts was high (intraclass correlation coefficient, r [95% CI]: 0.65 [0.30-0.91] and 0.58 [0.22-0.89], respectively). We conclude that in normal subjects, although both nebulizer and saline types can induce sputum with a reproducible cellular profile, ultrasonic nebulizers and hypertonic saline are more effective but less well tolerated.

  4. Cognitive Artificial Intelligence Method for Interpreting Transformer Condition Based on Maintenance Data

    Directory of Open Access Journals (Sweden)

    Karel Octavianus Bachri

    2017-07-01

    Full Text Available A3S (Arwin-Adang-Aciek-Sembiring) is a method of information fusion for a single observation, and OMA3S (Observation Multi-time A3S) is a method of information fusion for time-series data. This paper proposes an OMA3S-based cognitive artificial intelligence method for interpreting transformer condition, calculated from maintenance data of the Indonesia National Electric Company (PLN). First, the proposed method is tested using previously published data, followed by implementation on maintenance data. Maintenance data are fused to obtain part conditions, and part conditions are fused to obtain the transformer condition. Results show the proposed method is valid for DGA fault identification, with an average accuracy of 91.1%. The proposed method not only can interpret the major fault but can also identify minor faults occurring along with the major fault, enabling an early warning feature. Results also show that part conditions can be interpreted using information fusion on maintenance data, and that the transformer condition can be interpreted using information fusion on part conditions. Future work on this research is to gather more data, to elaborate more factors to be fused, and to design a cognitive processor that can be used to implement this concept of intelligent instrumentation.

  5. Comparison of three methods for concentration of rotavirus from artificially spiked shellfish samples

    Directory of Open Access Journals (Sweden)

    Vysakh Mohan

    2014-07-01

    Full Text Available Background: Shellfish are a nutritious food source whose consumption and commercial value have risen dramatically worldwide. Shellfish, being filter feeders, concentrate particulate matter including microorganisms such as pathogenic bacteria and viruses, and thus constitute a major public health concern. Because of the presence of PCR inhibitors in shellfish, effective preliminary sample treatment steps, such as concentration of virus from shellfish, are essential before RNA/DNA isolation for final PCR accuracy and reproducibility. Aim: The current study was done to compare three methods for concentration of rotavirus from shellfish samples. Materials and Methods: Shellfish samples artificially spiked with tenfold serial dilutions of a known concentration of rotavirus were subjected to three different concentration methods, namely proteinase K treatment, precipitation with polyethylene glycol (PEG) 8000, and use of lysis buffer. RNA was isolated from the concentrated samples using the phenol-chloroform method. Rotaviral RNA was detected using RT-PCR. Results: Concentration of virus using proteinase K and lysis buffer yielded better results than concentration by PEG 8000 in samples with the lowest concentration of virus. Of these two methods, proteinase K treatment was superior, as it showed better amplification of the highest dilution (10⁷) used. Conclusion: Treatment with proteinase K was better than the other two methods, as it could detect viral RNA in all three tenfold serial dilutions.

  6. Effectiveness of Context-Aware Character Input Method for Mobile Phone Based on Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Masafumi Matsuhara

    2012-01-01

    Full Text Available Opportunities and needs to input Japanese sentences on mobile phones are increasing as the performance of mobile phones improves. Applications like E-mail and Web search are now widely used on mobile phones, where Japanese sentences must be input using only 12 keys. We have proposed a method to input Japanese sentences on mobile phones quickly and easily, which we call the number-Kanji translation method. The number string input by a user is translated into a Kanji-Kana mixed sentence in our proposed method. The mapping from a number string to Kana strings is one-to-many; therefore, it is difficult to translate a number string into the correct sentence intended by the user. The proposed context-aware mapping method is able to disambiguate a number string by an artificial neural network (ANN). The system is able to translate number segments into the intended words because it becomes aware of the correspondence of number segments with Japanese words through learning by the ANN. The system does not need a dictionary. We also show the effectiveness of our proposed method for practical use by the results of an evaluation experiment on Twitter data.
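    A toy sketch of the one-to-many digit-to-Kana problem described above; the key map follows the standard Japanese 12-key row assignment (romanized here), but the lexicon lookup merely stands in for the ANN's context-aware disambiguation, since the proposed system itself needs no dictionary.

```python
from itertools import product

# Standard 12-key rows (romanized): key 2 is the ka-row, key 3 the sa-row.
KEY_TO_KANA = {
    "2": ["ka", "ki", "ku", "ke", "ko"],
    "3": ["sa", "shi", "su", "se", "so"],
}

# Toy lexicon standing in for the ANN's learned disambiguation.
LEXICON = {"kasa": "umbrella", "kiso": "foundation"}

def candidates(digits):
    """Enumerate every kana sequence the digit string could encode,
    keeping those that form known words; a context model would then
    rank the surviving candidates."""
    pools = [KEY_TO_KANA[d] for d in digits]
    for combo in product(*pools):
        reading = "".join(combo)
        if reading in LEXICON:
            yield reading, LEXICON[reading]

print(list(candidates("23")))  # → [('kasa', 'umbrella'), ('kiso', 'foundation')]
```

    The digit string "23" maps to 25 kana sequences, of which two are words; this residual ambiguity is exactly what the context-aware ANN resolves.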

  7. INSTRUMENTS AND METHODS OF INVESTIGATION: Dynamic compression of hydrogen isotopes at megabar pressures

    Science.gov (United States)

    Trunin, Ryurik F.; Urlin, Vitalii D.; Medvedev, Aleksandr B.

    2010-09-01

    We review the results of shock compression of solid protium to the pressure 66 GPa, of liquid deuterium to 110 GPa, and of solid deuterium to 123 GPa in explosive devices of spherical geometry. The results are compared with data obtained by US scientists using traditional energy sources (explosives and light-gas guns), striker acceleration in a strong magnetic field (Z facility at Sandia), and powerful lasers (Nova at Lawrence Livermore National Laboratory (LLNL) and Omega at the Laboratory for Laser Energetics, University of Rochester). Results of density measurements of hydrogen isotopes under quasi-isentropic compression are analyzed. The absence of an anomalous increase in density under shock and quasi-isentropic compression of hydrogen isotopes is demonstrated. On the other hand, both processes exhibit a sharp change in the compression curve slopes, at the respective pressures 45 and 300 GPa.

  8. A Parallel Implicit Reconstructed Discontinuous Galerkin Method for Compressible Flows on Hybrid Grids

    Science.gov (United States)

    Xia, Yidong

    The objective of this work is to develop a parallel, implicit reconstructed discontinuous Galerkin (RDG) method using a Taylor basis for the solution of the compressible Navier-Stokes equations on 3D hybrid grids. This third-order accurate RDG method is based on a hierarchical weighted essentially non-oscillatory reconstruction scheme, termed HWENO(P1P2) to indicate that a quadratic polynomial solution is obtained from the underlying linear polynomial DG solution via a hierarchical WENO reconstruction. The HWENO(P1P2) scheme is designed not only to enhance the accuracy of the underlying DG(P1) method but also to ensure the non-linear stability of the RDG method. In this reconstruction scheme, a quadratic polynomial (P2) solution is first reconstructed using a least-squares approach from the underlying linear (P1) discontinuous Galerkin solution. The final quadratic solution is then obtained using a Hermite WENO reconstruction, which is necessary to ensure the linear stability of the RDG method on 3D unstructured grids. The first derivatives of the quadratic polynomial solution are then reconstructed using a WENO reconstruction in order to eliminate spurious oscillations in the vicinity of strong discontinuities, thus ensuring the non-linear stability of the RDG method. The parallelization in the RDG method is based on a message passing interface (MPI) programming paradigm, where the METIS library is used for the partitioning of a mesh into subdomain meshes of approximately the same size. Both multi-stage explicit Runge-Kutta and simple implicit backward Euler methods are implemented for time advancement in the RDG method. In the implicit method, three approaches, analytical differentiation, divided differencing (DD), and automatic differentiation (AD), are developed and implemented to obtain the resulting flux Jacobian matrices. Automatic differentiation is a set of techniques based on the mechanical application of the chain rule to obtain derivatives of a function given as
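    The chain-rule mechanics behind automatic differentiation can be sketched with forward-mode dual numbers (an illustrative minimal example on a scalar Burgers-type flux, not the implementation used in this work):

```python
class Dual:
    """Forward-mode AD: propagate (value, derivative) pairs through
    arithmetic so the chain rule is applied mechanically."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)  # product rule
    __rmul__ = __mul__

def flux(u):
    # toy scalar flux f(u) = 0.5 * u * u, so df/du = u
    return 0.5 * u * u

x = Dual(3.0, 1.0)       # seed the derivative d/du = 1
f = flux(x)
print(f.val, f.dot)      # → 4.5 3.0
```

    A Jacobian column is obtained by seeding one input's derivative to 1 and reading off the `dot` fields of the outputs, which is how AD-based flux Jacobian assembly proceeds component by component.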

  9. Study on compressive strength of self compacting mortar cubes under normal & electric oven curing methods

    Science.gov (United States)

    Prasanna Venkatesh, G. J.; Vivek, S. S.; Dhinakaran, G.

    2017-07-01

    In the majority of civil engineering applications, the basic building blocks are masonry units. These masonry units are developed into a monolithic structure by a plastering process with the help of binding agents, namely mud, lime, cement and their combinations. In recent advancements, the study of mortar plays an important role in crack repair, structural rehabilitation, retrofitting, and pointing and plastering operations. The rheology of mortar includes flowable, passing and filling properties, which are analogous to the behaviour of self compacting concrete. In self compacting (SC) mortar cubes, the cement was replaced by mineral admixtures, namely silica fume (SF) from 5% to 20% (with an increment of 5%), metakaolin (MK) from 10% to 30% (with an increment of 10%) and ground granulated blast furnace slag (GGBS) from 25% to 75% (with an increment of 25%). The ratio between cement and fine aggregate was kept constant at 1:2 for all normal and self compacting mortar mixes. Accelerated curing, namely electric oven curing at a temperature of 128°C for a period of 4 hours, was adopted. It was found that the compressive strength obtained from both the normal and the electric oven methods of curing was higher for self compacting mortar cubes than for normal mortar cubes. Cement replacement by 15% SF, 20% MK and 25% GGBS obtained the highest strength under both curing conditions.

  10. Rescuers' physical fatigue with different chest compression to ventilation methods during simulated infant cardiopulmonary resuscitation.

    Science.gov (United States)

    Boldingh, Anne Marthe; Jensen, Thomas Hagen; Bjørbekk, Ane Torvik; Solevåg, Anne Lee; Nakstad, Britt

    2016-10-01

    To assess the development of objective, subjective and indirect measures of fatigue during simulated infant cardiopulmonary resuscitation (CPR) with two different methods. Using a neonatal manikin, 17 subject-pairs were randomized in a crossover design to provide 5 min of CPR with a 3:1 chest compression (CC) to ventilation (C:V) ratio and with continuous CCs at a rate of 120 min⁻¹ with asynchronous ventilations (CCaV-120). We measured participants' changes in heart rate (HR) and mean arterial pressure (MAP), perceived level of fatigue on a validated Likert scale, and manikin CC measures. CCaV-120 compared with the 3:1 C:V ratio resulted in a change during 5 min of CPR in HR of 49 versus 40 bpm (p = 0.01) and in MAP of 1.7 versus -2.8 mmHg (p = 0.03); fatigue rated on the Likert scale was 12.9 versus 11.4 (p = 0.2); and there was a significant decay in CC depth after 90 s (p = 0.03). The results indicate a trend toward more fatigue during simulated CPR with CCaV-120 than with the recommended 3:1 C:V CPR. These results support current guidelines.

  11. Two-dimensional Kolmogorov complexity and an empirical validation of the Coding theorem method by compressibility

    Directory of Open Access Journals (Sweden)

    Hector Zenil

    2015-09-01

    We propose a measure based upon the fundamental theoretical concept in algorithmic information theory that provides a natural approach to the problem of evaluating n-dimensional complexity by using an n-dimensional deterministic Turing machine. The technique is interesting because it provides a natural algorithmic process for symmetry breaking, generating complex n-dimensional structures from perfectly symmetric and fully deterministic computational rules and producing a distribution of patterns as described by algorithmic probability. Algorithmic probability also elegantly connects the frequency of occurrence of a pattern with its algorithmic complexity, hence effectively providing estimates of the complexity of the generated patterns. Experiments to validate estimates of algorithmic complexity based on these concepts are presented, showing that the measure is stable in the face of some changes in computational formalism and that the results agree with those obtained using lossless compression algorithms where the two methods overlap in their range of applicability. We then use the output frequency of the set of 2-dimensional Turing machines to classify the algorithmic complexity of the space-time evolutions of Elementary Cellular Automata.
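
The compression-based side of the validation above can be sketched in a few lines: the length of a losslessly compressed string is a standard upper-bound stand-in for its algorithmic complexity. A minimal illustration using Python's zlib (not the authors' Coding theorem implementation; the test strings are invented):

```python
import random
import zlib

def compression_complexity(data: bytes) -> int:
    """Approximate algorithmic complexity by the length of the
    zlib-compressed representation (an upper bound, up to a constant)."""
    return len(zlib.compress(data, level=9))

# A highly regular string compresses far better than a pseudo-random one,
# which is the ranking any reasonable complexity measure should reproduce.
regular = b"ab" * 512
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(1024))

assert compression_complexity(regular) < compression_complexity(noisy)
```

As the abstract notes, such compression-based estimates are only informative where the compressor finds structure; the Coding theorem method is meant to extend estimation to short strings where compression fails.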

  12. A Method of Effective Quarry Water Purifying Using Artificial Filtering Arrays

    Science.gov (United States)

    Tyulenev, M.; Garina, E.; Khoreshok, A.; Litvin, O.; Litvin, Y.; Maliukhina, E.

    2017-01-01

    The development of open pit mining in the large coal basins of Russia and other countries increases its negative impact on the environment. Along with land damage and air pollution by dust and blasting combustion gases, coal pits have a significant negative impact on water resources. Polluted quarry water worsens the ecological situation over a much larger area than that affected by air pollution and land damage. This significantly worsens the living conditions of people in cities and towns located near the coal pits, and complicates the subsequent restoration of the environment, irreversibly destroying nature. Therefore, research on quarry wastewater purification is becoming an important matter for scholars of technical colleges and universities in regions with developing open-pit mining. This paper describes a method of determining the basic parameters of the artificial filtering arrays formed in the coal pits of Kuzbass (Western Siberia, Russia), and gives recommendations on its application.

  13. Upwind methods for the Baer–Nunziato equations and higher-order reconstruction using artificial viscosity

    Energy Technology Data Exchange (ETDEWEB)

    Fraysse, F., E-mail: francois.fraysse@rs2n.eu [RS2N, St. Zacharie (France); E. T. S. de Ingeniería Aeronáutica y del Espacio, Universidad Politécnica de Madrid, Madrid (Spain); Redondo, C.; Rubio, G.; Valero, E. [E. T. S. de Ingeniería Aeronáutica y del Espacio, Universidad Politécnica de Madrid, Madrid (Spain)

    2016-12-01

    This article is devoted to the numerical discretisation of the hyperbolic two-phase flow model of Baer and Nunziato. Special attention is paid to the discretisation of intercell flux functions in the framework of Finite Volume and Discontinuous Galerkin approaches, where care has to be taken to efficiently approximate the non-conservative products inherent to the model equations. Various upwind approximate Riemann solvers have been tested on a benchmark of discontinuous test cases. New discretisation schemes are proposed in a Discontinuous Galerkin framework following the criterion of Abgrall and the path-conservative formalism. A stabilisation technique based on artificial viscosity is applied to the high-order Discontinuous Galerkin method and compared against classical TVD-MUSCL Finite Volume flux reconstruction.

  14. Application of artificial neural networks for response surface modelling in HPLC method development

    Directory of Open Access Journals (Sweden)

    Mohamed A. Korany

    2012-01-01

    This paper discusses the usefulness of artificial neural networks (ANNs) for response surface modelling in HPLC method development. In this study, the combined effect of pH and mobile phase composition on the reversed-phase liquid chromatographic behaviour of a mixture of salbutamol (SAL) and guaiphenesin (GUA) (combination I), and of a mixture of ascorbic acid (ASC), paracetamol (PAR) and guaiphenesin (GUA) (combination II), was investigated. The results were compared with those produced using multiple regression (REG) analysis. To examine the respective predictive power of the regression model and the neural network model, experimental and predicted response factor values, mean squared error (MSE), average error percentage (Er%), and coefficients of correlation (r) were compared. It was clear that the best networks were able to predict the experimental responses more accurately than the multiple regression analysis.
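
As a rough illustration of the comparison the abstract describes, the sketch below fits a small one-hidden-layer network, written from scratch in NumPy, to a synthetic two-factor response surface and compares its MSE against a first-order multiple regression fit. The surface, network size and training settings are illustrative assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "response surface": a response factor as a nonlinear function of
# two scaled factors (think pH and % organic modifier).  Purely illustrative.
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = np.sin(2 * np.pi * X[:, 0]) * (0.5 + 0.5 * X[:, 1])

# One-hidden-layer tanh network trained by full-batch gradient descent.
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(8000):
    H = np.tanh(X @ W1 + b1)                 # hidden activations
    pred = (H @ W2 + b2).ravel()
    err = pred - y                           # gradient of MSE w.r.t. pred (up to 2/N)
    gW2 = H.T @ err[:, None] / len(X); gb2 = err.mean(keepdims=True)
    dH = (err[:, None] @ W2.T) * (1 - H ** 2)
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

# First-order multiple regression fit of the same data.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
mse_lin = np.mean((A @ coef - y) ** 2)
mse_ann = np.mean(((np.tanh(X @ W1 + b1) @ W2 + b2).ravel() - y) ** 2)
assert mse_ann < mse_lin   # the network captures curvature the plane cannot
```

The point mirrors the paper's finding: a regression plane cannot represent the curvature of a retention surface, while even a small network can.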

  15. Upwind methods for the Baer–Nunziato equations and higher-order reconstruction using artificial viscosity

    International Nuclear Information System (INIS)

    Fraysse, F.; Redondo, C.; Rubio, G.; Valero, E.

    2016-01-01

    This article is devoted to the numerical discretisation of the hyperbolic two-phase flow model of Baer and Nunziato. Special attention is paid to the discretisation of intercell flux functions in the framework of Finite Volume and Discontinuous Galerkin approaches, where care has to be taken to efficiently approximate the non-conservative products inherent to the model equations. Various upwind approximate Riemann solvers have been tested on a benchmark of discontinuous test cases. New discretisation schemes are proposed in a Discontinuous Galerkin framework following the criterion of Abgrall and the path-conservative formalism. A stabilisation technique based on artificial viscosity is applied to the high-order Discontinuous Galerkin method and compared against classical TVD-MUSCL Finite Volume flux reconstruction.

  16. An artificial neural network method for lumen and media-adventitia border detection in IVUS.

    Science.gov (United States)

    Su, Shengran; Hu, Zhenghui; Lin, Qiang; Hau, William Kongto; Gao, Zhifan; Zhang, Heye

    2017-04-01

    Intravascular ultrasound (IVUS) is well recognized as a powerful imaging technique for evaluating stenosis inside the coronary arteries. The detection of the lumen border and the media-adventitia (MA) border in IVUS images is the key procedure for determining the plaque burden inside the coronary arteries, but this detection can be burdensome to the doctor because of the large volume of IVUS images. In this paper, we use an artificial neural network (ANN) method as the feature learning algorithm for the detection of the lumen and MA borders in IVUS images. Two types of imaging information, spatial and neighboring features, were used as input data to the ANN method, and the different vascular layers were then distinguished through two sparse auto-encoders and one softmax classifier. Another ANN was used to optimize the result of the first network. In the end, an active contour model was applied to smooth the lumen and MA borders detected by the ANN method. The performance of our approach was compared with the manual drawing method performed by two IVUS experts on 461 IVUS images from four subjects. Results showed that our approach had a high correlation and good agreement with the manual drawing results. The detection error of the ANN method was close to the error between the two groups of manual drawing results. All these results indicate that our proposed approach can efficiently and accurately handle the detection of lumen and MA borders in IVUS images. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. HPLC-QTOF-MS method for quantitative determination of active compounds in an anti-cellulite herbal compress

    Directory of Open Access Journals (Sweden)

    Ngamrayu Ngamdokmai

    2017-08-01

    A herbal compress used in Thai massage has been modified for use in cellulite treatment. Its main active ingredients are ginger, black pepper, java long pepper, tea and coffee. The objective of this study was to develop and validate an HPLC-QTOF-MS method for determining its active compounds, i.e., caffeine, 6-gingerol, and piperine, in the raw materials as well as in the formulation, together with the flavouring agent camphor. The four compounds were chromatographically separated. The analytical method was validated for selectivity, intra- and inter-day precision, accuracy and matrix effect. The results showed that the herbal compress contained caffeine (2.16 mg/g), camphor (106.15 mg/g), 6-gingerol (0.76 mg/g), and piperine (4.19 mg/g). The chemical stability study revealed that the herbal compresses retained >80% of their active compounds after 1 month of storage at ambient conditions. Our method can be used for quality control of the herbal compress and its raw materials.

  18. Artificial intelligence methods applied in the controlled synthesis of polydimethylsiloxane - poly(methacrylic acid) copolymer networks with imposed properties

    Science.gov (United States)

    Rusu, Teodora; Gogan, Oana Marilena

    2016-05-01

    This paper describes the use of artificial intelligence methods in copolymer network design. In the present study, we pursue a hybrid algorithm composed of two paths in the genetic design framework: a Kohonen neural network (KNN) path (forward problem) combined with a genetic algorithm path (backward problem). The Tabu Search method is used to improve the performance of the genetic algorithm path.

  19. Development and validation of dissolution method for carvedilol compression-coated tablets

    Directory of Open Access Journals (Sweden)

    Ritesh Shah

    2011-12-01

    The present study describes the development and validation of a dissolution method for carvedilol compression-coated tablets. The dissolution test was performed using a TDT-06T dissolution apparatus. Based on the physiological conditions of the body, 0.1 N hydrochloric acid was used as the dissolution medium and release was monitored for 2 hours to verify the immediate-release pattern of the drug at acidic pH, followed by pH 6.8 citric-phosphate buffer for 22 hours to simulate the sustained-release pattern in the intestine. The influences of rotation speed and surfactant concentration in the medium were evaluated. Samples were analysed by a validated UV-visible spectrophotometric method at 286 nm. 1% sodium lauryl sulphate (SLS) was found to be optimal for improving carvedilol solubility in pH 6.8 citric-phosphate buffer. Analysis of variance showed no significant difference between the results obtained at 50 and 100 rpm. A discriminating dissolution method was successfully developed for carvedilol compression-coated tablets. The conditions that allowed dissolution determination were a USP type I apparatus at 100 rpm, containing 1000 mL of 0.1 N HCl for 2 hours, followed by pH 6.8 citric-phosphate buffer with 1% SLS for 22 hours, at 37.0 ± 0.5 °C. Samples were analysed by the UV spectrophotometric method and validated as per ICH guidelines.

  20. A Rapid Dialysis Method for Analysis of Artificial Sweeteners in Foods (2nd Report).

    Science.gov (United States)

    Tahara, Shoichi; Yamamoto, Sumiyo; Yamajima, Yukiko; Miyakawa, Hiroyuki; Uematsu, Yoko; Monma, Kimio

    2017-01-01

    Following the previous report, a rapid dialysis method was developed for the extraction and purification of four artificial sweeteners, namely sodium saccharin (Sa), acesulfame potassium (AK), aspartame (APM), and dulcin (Du), which are present in various foods. The method was evaluated by the addition of 0.02 g/kg of these sweeteners to a cookie sample, in the same manner as in the previous report. Revisions from the previous method were: reduction of the total dialysis volume from 200 to 100 mL, change of the tube length from 55 to 50 cm, change of the dialysate from a 0.01 mol/L hydrochloric acid solution containing 10% sodium chloride to a 30% methanol solution, and change of the dialysis conditions from ambient temperature with occasional shaking to 50°C with shaking at 160 rpm. As a result of these revisions, the recovery reached 99.3-103.8% with one hour of dialysis. The recovery yields obtained were comparable to those of the previous method, which required four hours of dialysis.

  1. The use of artificial intelligence methods for visual analysis of properties of surface layers

    Directory of Open Access Journals (Sweden)

    Tomasz Wójcicki

    2014-12-01

    The article presents a selected area of research on the possibility of automatic prediction of material properties based on the analysis of digital images. An original, holistic model for forecasting the properties of surface layers is presented, based on a multi-step process that includes selected methods of image processing and analysis, inference with the use of a priori knowledge bases and multi-valued fuzzy logic, and simulation with the use of finite element methods. Surface layer characteristics and the core technologies of their production processes, such as mechanical, thermal, thermo-mechanical, thermo-chemical, electrochemical and physical, are discussed. The methods developed in the model for the classification of images of the surface layers are shown. The objectives of the selected methods of digital image processing and analysis, including techniques for improving image quality, segmentation, morphological transformation, pattern recognition and simulation of physical phenomena in material structures, are described. Keywords: image analysis, surface layer, artificial intelligence, fuzzy logic

  2. Compression simulations of plant tissue in 3D using a mass-spring system approach and discrete element method.

    Science.gov (United States)

    Pieczywek, Piotr M; Zdunek, Artur

    2017-10-18

    A hybrid model based on a mass-spring system methodology coupled with the discrete element method (DEM) was implemented to simulate the deformation of cellular structures in 3D. Models of individual cells were constructed using particles that cover the surfaces of the cell walls and are interconnected in a triangular mesh network by viscoelastic springs. The spatial arrangement of the cells required to construct a virtual tissue was obtained using Poisson-disc sampling and Voronoi tessellation in 3D space. Three structural features were included in the model: the viscoelastic material of the cell walls, the linearly elastic interior of the cells (simulating compressible liquid) and a gas phase in the intercellular spaces. The response of the models to an external load was demonstrated in quasi-static compression simulations. The sensitivity of the model was investigated at fixed compression parameters with variable tissue porosity, cell size and cell wall properties, such as thickness and Young's modulus, and with a stiffness of the cell interior that simulated turgor pressure. The extent of the agreement between the simulation results and other published models is discussed. The model demonstrated the significant influence of tissue structure on micromechanical properties and allowed for the interpretation of compression test results with respect to changes occurring in the structure of the virtual tissue. During compression, virtual structures composed of smaller cells produced higher reaction forces and were therefore stiffer than structures with large cells. An increase in the number of intercellular spaces (porosity) resulted in a decrease in reaction forces. The numerical model was capable of simulating the quasi-static compression experiment and reproducing the strain stiffening observed in experiments. Stress accumulation at the edges of the cell walls where three cells meet suggests that cell-to-cell debonding and crack propagation through the contact edge of
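
A drastically reduced, one-dimensional sketch of the mass-spring idea (not the authors' 3D DEM-coupled model): particles joined by Kelvin-Voigt spring-dashpot links are quasi-statically compressed by a moving top boundary, and the reaction force at the fixed base is read off. All parameter values are invented for illustration:

```python
import numpy as np

# 1D chain of N particles joined by Kelvin-Voigt (spring + dashpot) links,
# quasi-statically compressed from the top; the bottom node is fixed.
N, k, c, m, dt = 11, 100.0, 2.0, 0.01, 1e-3
x = np.linspace(0.0, 1.0, N)           # rest positions, rest length 0.1
v = np.zeros(N)
rest = x[1] - x[0]

def step(x, v, top):
    f = np.zeros(N)
    for i in range(N - 1):
        stretch = (x[i + 1] - x[i]) - rest
        rel_v = v[i + 1] - v[i]
        fs = k * stretch + c * rel_v   # viscoelastic link force
        f[i] += fs                     # compressed link pushes node i down...
        f[i + 1] -= fs                 # ...and node i+1 up
    v2 = v + dt * f / m                # semi-implicit (symplectic) Euler
    x2 = x + dt * v2
    x2[0], v2[0] = 0.0, 0.0            # fixed base
    x2[-1], v2[-1] = top, 0.0          # displacement-controlled top
    return x2, v2

# Compress the chain by 10% over a slow ramp, then let it settle.
for s in range(2000):
    top = 1.0 - 0.1 * min(1.0, s / 1500)
    x, v = step(x, v, top)

reaction = k * ((x[1] - x[0]) - rest)  # force transmitted to the fixed base
assert reaction < 0                    # compression -> springs push back
```

At equilibrium each link carries the same force, so the base reaction approaches k times the per-link shortening (here about -1.0), which is the 1D analogue of the reaction-force curves discussed in the abstract.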

  3. Parallelization of one image compression method. Wavelet, Transform, Vector Quantization and Huffman Coding

    International Nuclear Information System (INIS)

    Moravie, Philippe

    1997-01-01

    Today, in the digitized satellite image domain, the need for large images is increasing considerably. To transmit or store such images (more than 6000 by 6000 pixels), we need to reduce their data volume and so must use real-time image compression techniques. The large amount of computation required by image compression algorithms prohibits the use of common sequential processors, in favour of parallel computers. The study presented here deals with the parallelization of a very efficient image compression scheme based on three techniques: Wavelet Transform (WT), Vector Quantization (VQ) and Entropic Coding (EC). First, we studied and implemented the parallelism of each algorithm, in order to determine the architectural characteristics needed for real-time image compression. Then, we defined eight parallel architectures: 3 for the Mallat algorithm (WT), 3 for Tree-Structured Vector Quantization (VQ) and 2 for Huffman Coding (EC). As our system has to be multi-purpose, we chose 3 global architectures from among the 3x3x2 available combinations. Because, for technological reasons, real time is not reached in all cases (for all compression parameter combinations), we also defined and evaluated two algorithmic optimizations: fixed-point precision and merging entropic coding into vector quantization. As a result, we defined a new multi-purpose multi-SMIMD parallel machine able to compress digitized satellite images in real time. The question of the best-suited architecture for real-time image compression was answered by presenting 3 parallel machines, among them one multi-purpose, embedded machine which might be used for other applications on board. (author) [fr]
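
Of the three stages, the entropic coding stage is the easiest to sketch serially. A minimal Huffman table construction in Python (illustrative only; the paper's concern is the parallel architecture, not the serial algorithm, and the test string is invented):

```python
import heapq
from collections import Counter

def huffman_code(data: bytes) -> dict:
    """Build a Huffman code table {symbol: bitstring} from symbol frequencies."""
    freq = Counter(data)
    if len(freq) == 1:                       # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # Heap entries carry a unique id so ties never compare the dicts.
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    uid = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)      # two least frequent subtrees
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (n1 + n2, uid, merged))
        uid += 1
    return heap[0][2]

data = b"wavelet transform vector quantization entropy coding"
table = huffman_code(data)
bits = "".join(table[b] for b in data)
# Frequent symbols get the shortest codewords, so the bitstream is shorter
# than fixed 8-bit coding.
assert len(bits) < 8 * len(data)
```

The "merging entropic coding into vector quantization" optimization mentioned in the abstract amounts to emitting such codewords directly for the VQ codebook indices instead of in a separate pass.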

  4. A Comparative Study of Compression Methods and the Development of CODEC Program of Biological Signal for Emergency Telemedicine Service

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, T.S.; Kim, J.S. [Changwon National University, Changwon (Korea); Lim, Y.H. [Visionite Co., Ltd., Seoul (Korea); Yoo, S.K. [Yonsei University, Seoul (Korea)

    2003-05-01

    In an emergency telemedicine system such as the High-quality Multimedia based Real-time Emergency Telemedicine (HMRET) service, it is very important to examine the status of the patient continuously using multimedia data including the biological signals (ECG, BP, respiration, SpO2) of the patient. In order to transmit these data in real time through communication means that have limited transmission capacity, it is also necessary to compress the biological data besides the other multimedia data. For this purpose, we investigate and compare ECG compression techniques in the time domain and in the wavelet transform domain, and present an effective lossless compression method for biological signals using the JPEG Huffman table for an emergency telemedicine system. For the HMRET service, we developed a lossless compression and reconstruction program for the biological signals in MSVC++ 6.0 using the DPCM method and the JPEG Huffman table, and tested it in an internet environment. (author). 15 refs., 17 figs., 7 tabs.
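
The DPCM stage of such a lossless scheme is simple to sketch: each sample is replaced by its difference from the previous one, which concentrates values near zero for the subsequent entropy-coding (Huffman) stage. A minimal lossless round trip in Python (illustrative; the signal and parameters are invented, not from the paper):

```python
import numpy as np

def dpcm_encode(signal):
    """First-order DPCM: store the first sample and successive differences.
    Differences of a slowly varying biosignal cluster near zero, which is
    what makes the subsequent Huffman stage effective."""
    signal = np.asarray(signal, dtype=np.int64)
    return np.concatenate(([signal[0]], np.diff(signal)))

def dpcm_decode(residuals):
    """Invert DPCM exactly by cumulative summation."""
    return np.cumsum(np.asarray(residuals, dtype=np.int64))

# An ECG-like integer test signal.
ecg_like = (1000 * np.sin(np.linspace(0, 4 * np.pi, 500))).astype(np.int64)
residuals = dpcm_encode(ecg_like)

assert np.array_equal(dpcm_decode(residuals), ecg_like)   # lossless round trip
# Residual magnitudes are much smaller than the raw samples.
assert np.abs(residuals[1:]).max() < np.abs(ecg_like).max()
```

Because the round trip is exact, all compression gain comes from entropy-coding the small residuals, matching the lossless requirement for diagnostic signals.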

  5. Spatiotemporal groundwater level modeling using hybrid artificial intelligence-meshless method

    Science.gov (United States)

    Nourani, Vahid; Mousavi, Shahram

    2016-05-01

    Uncertainties in the field parameters, noise in the observed data and unknown boundary conditions are the main factors in groundwater level (GL) time series that limit the modeling and simulation of GL. This paper presents a hybrid artificial intelligence-meshless model for spatiotemporal GL modeling. First, the GL time series observed at different piezometers were de-noised using a threshold-based wavelet method, and the impact of de-noised versus noisy data on temporal GL modeling by artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS) was compared. In the second step, both the ANN and ANFIS models were calibrated and verified using the GL data of each piezometer, rainfall and runoff, considering various input scenarios, to predict the GL one month ahead. In the final step, the GLs simulated in the second step were used as interior conditions for a multiquadric radial basis function (RBF) based solution of the governing partial differential equation of groundwater flow, to estimate the GL at any desired point within the plain where there is no observation. In order to evaluate and compare the GL pattern at different time scales, cross-wavelet coherence was also applied to the GL time series of the piezometers. The results showed that the threshold-based wavelet de-noising approach can enhance the performance of the modeling by up to 13.4%. It was also found that the ANFIS-RBF model is more accurate and reliable than the ANN-RBF model in both the calibration and validation steps.
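
The threshold-based wavelet de-noising step can be sketched with a single-level Haar transform: soft-threshold the detail coefficients and invert. The signal, noise level and threshold below are illustrative assumptions, not the study's data (the paper does not specify the wavelet family used):

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar transform, soft-threshold the detail coefficients,
    then invert -- the basic threshold de-noising step."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)        # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)        # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    y = np.empty_like(x)                         # inverse Haar transform
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 512)
clean = np.sin(2 * np.pi * 3 * t)               # slow "groundwater level" trend
noisy = clean + rng.normal(0, 0.3, t.size)
den = haar_denoise(noisy, thresh=0.4)

# De-noising moves the series closer to the underlying trend.
assert np.mean((den - clean) ** 2) < np.mean((noisy - clean) ** 2)
```

In practice multiple decomposition levels and a data-driven threshold (e.g. a universal threshold from the noise estimate) are used; one level is enough to show the mechanism.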

  6. Fertility response of artificial insemination methods in sheep with fresh and frozen-thawed semen.

    Science.gov (United States)

    Masoudi, Reza; Zare Shahneh, Ahmad; Towhidi, Armin; Kohram, Hamid; Akbarisharif, Abbas; Sharafi, Mohsen

    2017-02-01

    The aim of this study was to evaluate the fertility response of artificial insemination (AI) methods with fresh and frozen sperm in sheep. In experiment 1, one hundred and fifty fat-tailed Zandi ewes were assigned to 3 equal groups and inseminated by three AI methods: vaginal, laparoscopic and trans-cervical AI with fresh semen. In experiment 2, a factorial study (3 AI methods × 2 extenders) was used to analyze the effects of the three AI methods and two freezing extenders containing soybean lecithin (SL) or egg yolk (EY) on the reproductive performance of 300 fat-tailed Zandi ewes. Total motility, progressive motility, viability and lipid peroxidation of semen were also evaluated after freeze-thawing in the two extenders. As a result, there was no significant difference among the three AI methods when fresh semen was used. In experiment 2, the highest pregnancy, parturition and lambing rates were obtained in the laparoscopic AI group (P < 0.05). Although the pregnancy, parturition and lambing rates in the trans-cervical group were higher (P < 0.05) than in the vaginal group, the results were not as high as in the laparoscopic group. No difference was observed between the SL and EY extenders, and their performance was close to each other. It can be concluded that although no difference was observed in reproductive performance with fresh semen, trans-cervical AI was more efficient than the vaginal method when frozen-thawed semen was used, but its efficiency was not as high as that of the laparoscopic method. Also, the SL extender can be an efficient alternative for preserving ram sperm during cryopreservation without the adverse effects of EY. Copyright © 2016. Published by Elsevier Inc.

  7. Pore-water extraction from unsaturated tuff by triaxial and one-dimensional compression methods, Nevada Test Site, Nevada

    Science.gov (United States)

    Mower, Timothy E.; Higgins, Jerry D.; Yang, In C.; Peters, Charles A.

    1994-01-01

    Study of the hydrologic system at Yucca Mountain, Nevada, requires the extraction of pore-water samples from welded and nonwelded, unsaturated tuffs. Two compression methods (triaxial compression and one-dimensional compression) were examined to develop a repeatable extraction technique and to investigate the effects of the extraction method on the original pore-fluid composition. A commercially available triaxial cell was modified to collect pore water expelled from tuff cores. The triaxial cell applied a maximum axial stress of 193 MPa and a maximum confining stress of 68 MPa. Results obtained from triaxial compression testing indicated that pore-water samples could be obtained from nonwelded tuff cores that had initial moisture contents as small as 13 percent (by weight of dry soil). Injection of nitrogen gas while the test core was held at the maximum axial stress caused expulsion of additional pore water and reduced the required initial moisture content from 13 to 11 percent. Experimental calculations, together with experience gained from testing moderately welded tuff cores, indicated that the triaxial cell used in this study could not apply adequate axial or confining stress to expel pore water from cores of densely welded tuffs. This concern led to the design, fabrication, and testing of a one-dimensional compression cell. The one-dimensional compression cell used in this study was constructed from hardened 4340-alloy and nickel-alloy steels and could apply a maximum axial stress of 552 MPa. The major components of the device include a corpus ring and sample sleeve to confine the sample, a piston and base platen to apply axial load, and drainage plates to transmit expelled water from the test core out of the cell. One-dimensional compression extracted pore water from nonwelded tuff cores that had initial moisture contents as small as 7.6 percent; pore water was expelled from densely welded tuff cores that had initial moisture contents as small as 7

  8. Real power transfer allocation method with the application of artificial neural network

    Energy Technology Data Exchange (ETDEWEB)

    Mustafa, M.W.; Khalid, S.N.; Shareef, H.; Khairuddin, A. [Technological Univ. of Malaysia, Skudai, Johor Bahru (Malaysia). Dept. of Electrical Power Engineering

    2008-07-01

    This paper presented a newly modified nodal equations method for identifying the real power transfer between generators and loads. The objective was to represent each load current as a function of the generators' currents and load voltages. The modified admittance matrix of a circuit was used to decompose the load-voltage-dependent term into components of generator-dependent terms. By using these two decompositions of current and voltage terms, the real power transfer between loads and generators was obtained. The robustness of the proposed method was demonstrated on the modified IEEE 30-bus system. An appropriate artificial neural network (ANN) was also created to solve the same problem in a simpler and faster manner with very good accuracy. For this purpose, a supervised learning paradigm and feedforward architecture were chosen for the proposed ANN power transfer allocation technique. The method could be adapted to other, larger systems by modifying the neural network structure. This technique can be used to solve some of the difficult real power pricing and costing issues and to ensure fairness and transparency in the deregulated environment of power system operation. 22 refs., 5 tabs., 8 figs.
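
The core idea of decomposing a load quantity into generator-dependent components rests on the linearity of the nodal equations I = YV. A toy three-bus illustration (values invented; this shows superposition on a linear network with current injections, not the authors' exact modified-admittance formulation):

```python
import numpy as np

# Three-bus linear (DC) network: generators inject current at buses 0 and 1,
# a load admittance to ground sits at bus 2, lines are admittances (siemens).
y01, y02, y12, y_load = 2.0, 1.0, 1.5, 0.8     # illustrative values
Y = np.array([
    [y01 + y02, -y01,       -y02               ],
    [-y01,       y01 + y12, -y12               ],
    [-y02,      -y12,        y02 + y12 + y_load],
])
inj = np.array([1.2, 0.8, 0.0])                 # generator current injections (A)

# Solve the nodal equations I = Y V for the bus voltages.
V = np.linalg.solve(Y, inj)
total_load_current = y_load * V[2]

# Superposition: the response to each generator acting alone gives that
# generator's share of the load current; the shares must add up exactly.
shares = []
for k in range(2):
    e = np.zeros(3)
    e[k] = inj[k]
    Vk = np.linalg.solve(Y, e)
    shares.append(y_load * Vk[2])

assert np.isclose(sum(shares), total_load_current)
```

With only one path to ground (the load), the shares also sum to the total injected current, which is the kind of consistency check an allocation method must satisfy to be seen as fair in a deregulated market.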

  9. [A method of recognizing biology surface spectrum using cascade-connection artificial neural nets].

    Science.gov (United States)

    Shi, Wei-Jie; Yao, Yong; Zhang, Tie-Qiang; Meng, Xian-Jiang

    2008-05-01

    A method of recognizing the visible spectrum of micro-areas on a biological surface with cascade-connection artificial neural nets is presented in this paper. The visible spectra of spots on apple pericarp, ranging from 500 to 730 nm, were obtained with a fiber-probe spectrometer, and a new spectrum recognition system consisting of three-level cascade-connection neural nets was set up. The experiments show that the spectra of rotten, scarred and bruised spots on apple pericarp can be recognized by the system, with a recognition accuracy higher than 85% even at a noise level of 15%. The new system overcomes the poor accuracy and poor noise tolerance of traditional systems based on single-cascade neural nets. Finally, a new method of expressing the recognition results was proposed. The method is based on the concept of degree of membership in fuzzy mathematics, through which the recognition results can be expressed exactly and objectively.

  10. Review on applications of artificial intelligence methods for dam and reservoir-hydro-environment models.

    Science.gov (United States)

    Allawi, Mohammed Falah; Jaafar, Othman; Mohamad Hamzah, Firdaus; Abdullah, Sharifah Mastura Syed; El-Shafie, Ahmed

    2018-04-03

    Efficacious operation of dam and reservoir systems could guarantee not only a defence policy against natural hazards but also identify rules to meet water demand. Successful operation of dam and reservoir systems to ensure optimal use of water resources is unattainable without accurate and reliable simulation models. Given the highly stochastic nature of hydrologic parameters, developing accurate predictive models that efficiently mimic such complex patterns is a growing domain of research. During the last two decades, artificial intelligence (AI) techniques have been significantly utilized for attaining robust modeling of different stochastic hydrological parameters. AI techniques have also shown considerable progress in finding optimal rules for reservoir operation. This review explores the history of developing AI for reservoir inflow forecasting and for prediction of evaporation from a reservoir, as the major components of reservoir simulation. In addition, a critical assessment of the advantages and disadvantages of AI simulation methods integrated with optimization methods is reported. Future research on the potential of new innovative AI-based methods for reservoir simulation and optimization models is also discussed. Finally, a new mathematical procedure to accomplish a realistic evaluation of overall optimization model performance (reliability, resilience, and vulnerability indices) is proposed.
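
The reliability, resilience and vulnerability indices mentioned for evaluating optimization model performance are commonly computed in the Hashimoto sense; a sketch (the supply and demand series below are invented, and this is a generic formulation, not the authors' proposed procedure):

```python
import numpy as np

def rrv(supply, demand):
    """Hashimoto-style performance indices for a reservoir operation series:
    reliability   -- fraction of periods in which demand is met,
    resilience    -- probability that a failure period is followed by recovery,
    vulnerability -- mean deficit over the failure periods."""
    supply = np.asarray(supply, dtype=float)
    demand = np.asarray(demand, dtype=float)
    fail = supply < demand
    reliability = 1.0 - fail.mean()
    recoveries = np.sum(fail[:-1] & ~fail[1:])          # failure -> success
    resilience = recoveries / max(fail[:-1].sum(), 1)
    vulnerability = (demand - supply)[fail].mean() if fail.any() else 0.0
    return reliability, resilience, vulnerability

# Eight periods of release against a constant demand of 8 units.
supply = [10, 9, 6, 10, 10, 7, 10, 10]
demand = [8] * 8
rel, res, vul = rrv(supply, demand)
assert np.isclose(rel, 6 / 8)    # two failure periods out of eight
assert np.isclose(res, 1.0)      # every failure is followed by a recovery
assert np.isclose(vul, 1.5)      # deficits of 2 and 1 average to 1.5
```

Computing all three on the same series makes the usual trade-off visible: an operating rule can raise reliability while worsening the size of the remaining deficits.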

  11. Characterisation of PV CIS module by artificial neural networks. A comparative study with other methods

    International Nuclear Information System (INIS)

    Almonacid, F.; Rus, C.; Hontoria, L.; Munoz, F.J.

    2010-01-01

    The presence of PV modules made with new technologies and materials is increasing in the PV market, especially Thin Film Solar Modules (TFSM), which are ready to make a substantial contribution to the world's electricity generation. Although Si wafer-based cells account for most of the increase, thin-film technologies have shown the largest growth over the last three years; during 2007 they grew by 133%. On the other hand, manufacturers provide ratings for PV modules under conditions referred to as Standard Test Conditions (STC). However, these conditions rarely occur outdoors, so the usefulness and applicability of the indoor characterisation of PV modules at standard test conditions is a controversial issue. Therefore, for correct photovoltaic engineering, a suitable characterisation of the electrical behaviour of PV modules is necessary. The IDEA Research Group of Jaen University has developed a method based on artificial neural networks (ANNs) for the electrical characterisation of PV modules. An ANN was able to generate V-I curves of Si-crystalline PV modules for any irradiance and module cell temperature. The results show that the proposed ANN predicts the performance of Si-crystalline PV modules accurately when compared with measured values. This method is now being applied to the electrical characterisation of PV CIS modules. Finally, a comparative study with other methods of electrical characterisation is made. (author)

  12. Development of modifications to the material point method for the simulation of thin membranes, compressible fluids, and their interactions

    Energy Technology Data Exchange (ETDEWEB)

    York, A.R. II [Sandia National Labs., Albuquerque, NM (United States). Engineering and Process Dept.

    1997-07-01

    The material point method (MPM) is an evolution of the particle-in-cell method in which Lagrangian particles, or material points, are used to discretize the volume of a material. The particles carry properties such as mass, velocity, stress, and strain and move through an Eulerian (spatial) mesh, on which the momentum equation is solved. Modifications to the material point method are developed that allow the simulation of thin membranes, compressible fluids, and their dynamic interactions. A single layer of material points through the thickness is used to represent a membrane. The constitutive equation for the membrane is applied in the local coordinate system of each material point. Validation problems are presented and numerical convergence is demonstrated. Fluid simulation is achieved by implementing a constitutive equation for a compressible, viscous, Newtonian fluid and by solution of the energy equation. The fluid formulation is validated by simulating a traveling shock wave in a compressible fluid. Interactions of the fluid and membrane are handled naturally with the method: the fluid and membrane communicate through the Eulerian grid, on which forces are calculated from the fluid and membrane stress states. Validation problems include simulating a projectile impacting an inflated airbag. In some impact simulations with the MPM, bodies may tend to stick together when separating; several algorithms are proposed and tested that allow bodies to separate from each other after impact. In addition, several methods are investigated to determine the local coordinate system of a membrane material point without relying upon connectivity data.
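The particle-to-grid transfer at the heart of the MPM described in this record can be sketched in one dimension: material points scatter mass and momentum to grid nodes, and nodal velocities are recovered for the momentum solve. A minimal 1-D sketch with linear (tent) shape functions; the dimensionality and basis are simplifying assumptions, not the thesis's formulation.

```python
import numpy as np

def particles_to_grid(xp, mp, vp, dx, nnodes):
    """Scatter particle mass and momentum to an Eulerian grid with
    linear shape functions, then recover nodal velocities for the
    momentum solve. 1-D with linear basis for illustration; the
    thesis works in higher dimensions."""
    mass = np.zeros(nnodes)
    mom = np.zeros(nnodes)
    for x, m, v in zip(xp, mp, vp):
        i = int(x // dx)             # left node of the particle's cell
        w = (x - i * dx) / dx        # fractional position within the cell
        mass[i] += m * (1.0 - w)
        mom[i] += m * v * (1.0 - w)
        mass[i + 1] += m * w
        mom[i + 1] += m * v * w
    # nodal velocity = momentum / mass, guarding empty nodes
    vel = np.divide(mom, mass, out=np.zeros(nnodes), where=mass > 0)
    return mass, vel

mass, vel = particles_to_grid([0.25, 0.6], [2.0, 1.0], [3.0, 1.0], 1.0, 2)
```

Because the shape functions partition unity, total mass and momentum are conserved by the transfer, which is what makes the fluid-membrane coupling through the common grid "natural" in the sense of the abstract.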

  13. Analysis of High Order Difference Methods for Multiscale Complex Compressible Flows

    Science.gov (United States)

    Sjoegreen, Bjoern; Yee, H. C.; Tang, Harry (Technical Monitor)

    2002-01-01

    Accurate numerical simulations of complex multiscale compressible viscous flows, especially high speed turbulence combustion and acoustics, demand high order schemes with adaptive numerical dissipation controls. Standard high resolution shock-capturing methods are too dissipative to capture the small scales and/or long-time wave propagations without extreme grid refinements and small time steps. An integrated approach for the control of numerical dissipation in high order schemes with incremental studies was initiated. Here we further refine the analysis of, and improve the understanding of, the adaptive numerical dissipation control strategy. Basically, the development of these schemes focuses on high order nondissipative schemes and takes advantage of the progress that has been made over the last 30 years in numerical methods for conservation laws, such as techniques for imposing boundary conditions, techniques for stability at shock waves, and techniques for stable and accurate long-time integration. We concentrate on high order centered spatial discretizations and a fourth-order Runge-Kutta temporal discretization as the base scheme. Near the boundaries, the base scheme has stable boundary difference operators. To further enhance stability, the split form of the inviscid flux derivatives is frequently used for smooth flow problems. To enhance nonlinear stability, linear high order numerical dissipations are employed away from discontinuities, and nonlinear filters are employed after each time step in order to suppress spurious oscillations near discontinuities and to minimize the smearing of turbulent fluctuations. Although these schemes are built from many components, each of which is well-known, it is not entirely obvious how the different components are best connected. For example, the nonlinear filter could instead have been built into the spatial discretization, so that it would be activated at each stage in the Runge-Kutta time stepping.
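The building blocks named in this record, a high-order centered base scheme plus a filter applied after each time step, can be illustrated in a few lines. A sketch assuming a periodic grid; the filter's oscillation sensor and coefficient are made-up stand-ins, not the schemes analyzed in the paper.

```python
import numpy as np

def d4_central(u, dx):
    """Fourth-order centered first derivative on a periodic grid: the
    kind of nondissipative base discretization discussed above."""
    return (-np.roll(u, -2) + 8 * np.roll(u, -1)
            - 8 * np.roll(u, 1) + np.roll(u, 2)) / (12 * dx)

def post_step_filter(u, eps=0.01):
    """Stand-in for the post-step nonlinear filter: second-difference
    smoothing weighted by a local oscillation sensor, so smooth
    regions are barely touched. Sensor and eps are illustrative
    assumptions."""
    d2 = np.roll(u, -1) - 2 * u + np.roll(u, 1)
    sensor = np.abs(d2) / (np.abs(u) + 1.0)  # large only where u jumps
    return u + eps * sensor * d2
```

On smooth data the sensor is of the order of the squared grid spacing, so the filter adds almost no dissipation there, which is exactly the design goal the abstract describes.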

  14. Method and device for the powerful compression of laser-produced plasmas for nuclear fusion

    International Nuclear Information System (INIS)

    Hora, H.

    1975-01-01

    According to the invention, more than 10% of the laser energy is converted into mechanical energy of compression, in that the compression is produced by nonlinear excessive radiation pressure. The temporal and spatial spectral and intensity distributions of the laser pulse must be controlled. The focussed laser beams must rise to over 10^15 W/cm^2 in less than 10^-9 seconds, and the time variation of the intensities must be such that the dynamic absorption of the outer plasma corona by rippling consumes less than 90% of the laser energy. (GG) [de

  15. Comparison of three artificial digestion methods for detection of non-encapsulated Trichinella pseudospiralis larvae in pork

    DEFF Research Database (Denmark)

    Nockler, K.; Reckinger, S.; Szabo, I.

    2009-01-01

    In a ring trial involving five laboratories (A, B, C, D, and E), three different methods of artificial digestion were compared for the detection of non-encapsulated Trichinella pseudospiralis larvae in minced meat. Each sample panel consisted of ten 1 g minced pork samples. All samples in each panel...... were derived from a bulk meat preparation with a nominal value of either 7 or 17 larvae per g (lpg). Samples were tested for the number of muscle larvae using the magnetic stirrer method (labs A, B, and E), stomacher method (lab B), and Trichomatic 35 (R) (labs C and D). T. pseudospiralis larvae were...... by using the magnetic stirrer method (22%), followed by the stomacher method (25%), and Trichomatic 35 (R) (30%). Results revealed that T. pseudospiralis larvae in samples with a nominal value of 7 and 17 lpg can be detected by all three methods of artificial digestion....

  16. Residential building energy estimation method based on the application of artificial intelligence

    Energy Technology Data Exchange (ETDEWEB)

    Marshall, S.; Kajl, S.

    1999-07-01

    The energy requirements of a residential building five to twenty-five storeys high can be estimated using a newly proposed analytical method based on artificial intelligence. The method is fast and provides a wide range of results such as total energy consumption values, power surges, and heating or cooling consumption values. A series of databases was created to take into account the particularities which influence the energy consumption of a building. In this study, DOE-2 software was used with 8 apartment models. A total of 27 neural networks were used: 3 for the estimation of energy consumption in the corridors, and 24 for inside the apartments. Three user interfaces were created to facilitate the estimation of energy consumption. These were named the Energy Estimation Assistance System (EEAS) interfaces and are only accessible using MATLAB software. The input parameters for EEAS are: climatic region, exterior wall resistance, roofing resistance, type of windows, infiltration, number of storeys, and corridor ventilation system operating schedule. By changing the parameters, the EEAS can determine annual heating, cooling and basic energy consumption levels for apartments and corridors. 2 tabs., 2 figs.

  17. QSAR Study of Insecticides of Phthalamide Derivatives Using Multiple Linear Regression and Artificial Neural Network Methods

    Directory of Open Access Journals (Sweden)

    Adi Syahputra

    2014-03-01

    Full Text Available Quantitative structure-activity relationships (QSAR) for 21 insecticidal phthalamides containing hydrazone (PCH) were studied using multiple linear regression (MLR), principal component regression (PCR) and artificial neural networks (ANN). Five descriptors were included in the model for the MLR and ANN analyses, and five latent variables obtained from principal component analysis (PCA) were used in the PCR analysis. Calculation of descriptors was performed using the semi-empirical PM6 method. ANN analysis was found to be the superior statistical technique compared to the other methods and gave a good correlation between descriptors and activity (r2 = 0.84). Based on the obtained model, we have successfully designed some new insecticides with higher predicted activity than those of previously synthesized compounds, e.g. 2-(decalinecarbamoyl-5-chloro-N'-((5-methylthiophen-2-ylmethylene benzohydrazide, 2-(decalinecarbamoyl-5-chloro-N'-((thiophen-2-yl-methylene benzohydrazide and 2-(decaline carbamoyl-N'-(4-fluorobenzylidene-5-chlorobenzohydrazide, with predicted log LC50 of 1.640, 1.672, and 1.769, respectively.
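The MLR step of such a QSAR study amounts to an ordinary least-squares fit of activity on descriptors. A minimal numpy sketch; the descriptor matrix and activity values below are synthetic illustration data, not the paper's 21 compounds or PM6 descriptors.

```python
import numpy as np

def fit_mlr(X, y):
    """Ordinary least-squares fit of activity (e.g. log LC50) on
    molecular descriptors: the core of the MLR step."""
    A = np.column_stack([np.ones(len(X)), X])  # prepend intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, X):
    """Predicted activity for a descriptor matrix X."""
    return np.column_stack([np.ones(len(X)), X]) @ coef

# synthetic two-descriptor data, exactly linear for illustration only
X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [2.0, 1.0]])
y = 1.0 + 2.0 * X[:, 0] + 3.0 * X[:, 1]
coef = fit_mlr(X, y)
```

PCR differs only in first projecting X onto its leading principal components before the same least-squares fit, which is how the abstract's five latent variables arise.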

  18. Compressed ECG biometric: a fast, secured and efficient method for identification of CVD patient.

    Science.gov (United States)

    Sufi, Fahim; Khalil, Ibrahim; Mahmood, Abdun

    2011-12-01

    Adoption of compression technology is often required for wireless cardiovascular monitoring, due to the enormous size of electrocardiography (ECG) signals and the limited bandwidth of the Internet. However, with present research on ECG-based biometric techniques, a compressed ECG must be decompressed before performing human identification. This additional decompression step creates a significant processing delay for the identification task, which becomes an obvious burden on a system if it must be done for trillions of compressed ECGs per hour by a hospital. Even though the hospital might be able to build an expensive infrastructure to tame the exuberant processing load, for small intermediate nodes in a multihop network identification preceded by decompression is daunting. In this paper, we report a technique by which a person can be identified directly from his or her compressed ECG. This technique completely obviates the decompression step and therefore makes biometric identification less demanding for the smaller nodes in a multihop network. The biometric template created by this new technique is smaller than existing ECG-based biometric templates as well as other forms of biometrics such as face, finger and retina (up to 8302 times smaller than a face template and 9 times smaller than an existing ECG-based biometric template). The smaller template substantially reduces the one-to-many matching time for biometric recognition, resulting in a faster biometric authentication mechanism.

  19. A stable penalty method for the compressible Navier-Stokes equations: I. Open boundary conditions

    DEFF Research Database (Denmark)

    Hesthaven, Jan; Gottlieb, D.

    1996-01-01

    The purpose of this paper is to present asymptotically stable open boundary conditions for the numerical approximation of the compressible Navier-Stokes equations in three spatial dimensions. The treatment uses the conservation form of the Navier-Stokes equations and utilizes linearization...

  20. Hall et al., 2016 Artificial Turf Surrogate Surface Methods Paper Data File

    Data.gov (United States)

    U.S. Environmental Protection Agency — Mercury dry deposition data quantified via static water surrogate surface (SWSS) and artificial turf surrogate surface (ATSS) collectors. This dataset is associated...

  1. Application of artificial intelligence (AI) methods for designing and analysis of reconfigurable cellular manufacturing system (RCMS)

    CSIR Research Space (South Africa)

    Xing, B

    2009-12-01

    Full Text Available This work focuses on the design and control of a novel hybrid manufacturing system: Reconfigurable Cellular Manufacturing System (RCMS) by using Artificial Intelligence (AI) approach. It is hybrid as it combines the advantages of Cellular...

  2. AmiRNA Designer - new method of artificial miRNA design.

    Science.gov (United States)

    Mickiewicz, Agnieszka; Rybarczyk, Agnieszka; Sarzynska, Joanna; Figlerowicz, Marek; Blazewicz, Jacek

    2016-01-01

    MicroRNAs (miRNAs) are small non-coding RNAs that have been found in most eukaryotic organisms. They are involved in the regulation of gene expression at the post-transcriptional level in a sequence-specific manner. MiRNAs are produced from their precursors by the Dicer-dependent small RNA biogenesis pathway. The involvement of miRNAs in a wide range of biological processes makes them excellent candidates for studying gene function or for therapeutic applications. For this purpose, different RNA-based gene silencing techniques have been developed. Artificial miRNAs (amiRNAs) targeting one or several genes of interest represent one such technique and are a potential tool in functional genomics. Here, we present a new approach to amiRNA design, implemented as the AmiRNA Designer software. Our method is based on thermodynamic analysis of the native miRNA/miRNA* and miRNA/target duplexes. In contrast to the available automated tools, our program allows the user to analyse natural miRNAs of the organism of interest and to create customized constraints for the design stage. It also filters the amiRNA candidates for potential off-targets. AmiRNA Designer is freely available at http://www.cs.put.poznan.pl/arybarczyk/AmiRNA/.

  3. Comparative study of artificial neural network and multivariate methods to classify Spanish DO rose wines.

    Science.gov (United States)

    Pérez-Magariño, S; Ortega-Heras, M; González-San José, M L; Boger, Z

    2004-04-19

    Classical multivariate analysis techniques, such as factor analysis and stepwise linear discriminant analysis, and the artificial neural network (ANN) method have been applied to the classification of Spanish denomination of origin (DO) rose wines according to their geographical origin. Seventy commercial rose wines from four different Spanish DOs (Ribera del Duero, Rioja, Valdepeñas and La Mancha) and two successive vintages were studied, and nineteen different variables were measured in these wines. The stepwise linear discriminant analysis (SLDA) model selected 10 variables, obtaining a global percentage of correct classification of 98.8% and of global prediction of 97.3%. The ANN model selected seven variables, five of which were also selected by the SLDA model, and it gave 100% correct classification for training and prediction. Both models can therefore be considered satisfactory and acceptable, the selected variables being useful to classify and differentiate these wines by their origin. Furthermore, the causal index analysis gave information that can be easily explained from an enological point of view.

  4. Fault detection and analysis in nuclear research facility using artificial intelligence methods

    Energy Technology Data Exchange (ETDEWEB)

    Ghazali, Abu Bakar, E-mail: Abakar@uniten.edu.my [Department of Electronics & Communication, College of Engineering, Universiti Tenaga Nasional, 43009 Kajang, Selangor (Malaysia); Ibrahim, Maslina Mohd [Instrumentation Program, Malaysian Nuclear Agency, Bangi (Malaysia)

    2016-01-22

    In this article, online detection of transducer and actuator condition is discussed. The case study is the reading of the area radiation monitor (ARM) installed at the chimney of the PUSPATI TRIGA nuclear reactor building, located at Bangi, Malaysia. There are at least five categories of abnormal ARM reading that could occur during transducer failure: the reading becomes very high; very low or zero; or shows high fluctuation and noise; moreover, the reading may be significantly higher or significantly lower than the normal reading. An artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS) are good methods for modeling the plant dynamics. Equipment failure is assessed from the ARM reading, which is compared with the estimated ARM data from the ANN/ANFIS function; the failure categories, in either a 'yes' or 'no' state, are obtained from the comparison between the actual online data and the estimated output. It is found that this system design can correctly report the condition of the ARM equipment in a simulated environment, and it can later be implemented for online monitoring. This approach can also be extended to other transducers, such as the temperature profile of the reactor core, and to other critical actuator conditions such as the valves and pumps in the reactor facility, provided that the failure symptoms are clearly defined.
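The comparison logic this record describes, actual ARM readings versus the ANN/ANFIS estimate mapped to failure categories, can be sketched as a residual check. All threshold values and the exact category wording are illustrative assumptions, not values from the article.

```python
import statistics

def arm_fault_category(readings, estimates,
                       hard_hi=100.0, hard_lo=0.05,
                       dev_frac=0.3, noise_sd=5.0):
    """Residual-style check in the spirit of the article: compare the
    online ARM readings with the model (ANN/ANFIS) estimates and
    report one of the abnormal categories, or 'normal'. All threshold
    values here are made up for illustration."""
    mean_r = statistics.fmean(readings)
    mean_e = statistics.fmean(estimates)
    if mean_r >= hard_hi:
        return "very high"
    if mean_r <= hard_lo:
        return "very low / zero"
    if len(readings) > 1 and statistics.stdev(readings) >= noise_sd:
        return "high fluctuation / noise"
    if mean_r > mean_e * (1.0 + dev_frac):
        return "significantly higher"
    if mean_r < mean_e * (1.0 - dev_frac):
        return "significantly lower"
    return "normal"
```

In the article the "estimate" side of this comparison comes from the trained ANN/ANFIS plant model rather than a fixed baseline; the categorical output shape is the same.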

  5. Fault detection and analysis in nuclear research facility using artificial intelligence methods

    Science.gov (United States)

    Ghazali, Abu Bakar; Ibrahim, Maslina Mohd

    2016-01-01

    In this article, online detection of transducer and actuator condition is discussed. The case study is the reading of the area radiation monitor (ARM) installed at the chimney of the PUSPATI TRIGA nuclear reactor building, located at Bangi, Malaysia. There are at least five categories of abnormal ARM reading that could occur during transducer failure: the reading becomes very high; very low or zero; or shows high fluctuation and noise; moreover, the reading may be significantly higher or significantly lower than the normal reading. An artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS) are good methods for modeling the plant dynamics. Equipment failure is assessed from the ARM reading, which is compared with the estimated ARM data from the ANN/ANFIS function; the failure categories, in either a 'yes' or 'no' state, are obtained from the comparison between the actual online data and the estimated output. It is found that this system design can correctly report the condition of the ARM equipment in a simulated environment, and it can later be implemented for online monitoring. This approach can also be extended to other transducers, such as the temperature profile of the reactor core, and to other critical actuator conditions such as the valves and pumps in the reactor facility, provided that the failure symptoms are clearly defined.

  6. Artificial Intelligence Mechanisms on Interactive Modified Simplex Method with Desirability Function for Optimising Surface Lapping Process

    Directory of Open Access Journals (Sweden)

    Pongchanun Luangpaiboon

    2014-01-01

    Full Text Available A study has been made to optimise the influential parameters of a surface lapping process. Lapping time, lapping speed, downward pressure, and charging pressure were chosen from preliminary studies as the parameters that determine process performance in terms of material removal, lap width, and clamp force. Desirability functions of the nominal-the-best type were used to compromise the multiple responses into an overall desirability level, or D response. The conventional modified simplex (Nelder-Mead simplex) method and the interactive desirability function are performed to optimise the parameter levels online in order to maximise the D response. To determine the lapping process parameters effectively, this research then applies two powerful artificial intelligence optimisation mechanisms: the harmony search and firefly algorithms. The recommended condition of (lapping time, lapping speed, downward pressure, charging pressure) at (33, 35, 6.0, 5.0) has been verified by performing confirmation experiments. It showed that the D response level increased to 0.96. Compared with the current operating condition, there is a decrease of the material removal and lap width, with improved process performance indices of 2.01 and 1.14, respectively. Similarly, there is an increase of the clamp force, with an improved process performance index of 1.58.
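The nominal-the-best desirability and the overall D response used in this record follow the Derringer-Suich form: each response is mapped to [0, 1] and the D response is the geometric mean of the individual desirabilities. A sketch with made-up limits; the paper's actual response limits and weights are not reproduced.

```python
import numpy as np

def d_nominal_the_best(y, low, target, high, s=1.0, t=1.0):
    """Derringer-Suich nominal-the-best desirability: 1 at the target,
    falling to 0 at the lower and upper limits; exponents s and t
    shape the two branches. The limits in the test below are
    illustrative assumptions."""
    if y <= low or y >= high:
        return 0.0
    if y <= target:
        return ((y - low) / (target - low)) ** s
    return ((high - y) / (high - target)) ** t

def overall_D(ds):
    """Overall desirability: geometric mean of the individual d's, so
    any single response at d = 0 drives the whole D response to 0."""
    ds = np.asarray(ds, dtype=float)
    return float(ds.prod() ** (1.0 / len(ds)))
```

The optimiser (simplex, harmony search, or firefly) then searches the four process parameters for the settings that maximise this single D value.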

  7. On the identification of quark and gluon jets using artificial neural network method

    CERN Document Server

    Zhang, Kun Shi

    2004-01-01

    The identification of quark and gluon jets produced in e^{+}e^{-} collisions using the artificial neural network method is addressed. The structure and the learning algorithm of the BP (back propagation) neural network model are studied. Three characteristic parameters (the average multiplicity, the average transverse momentum of the jets, and the average value of the angles opposite the quark or gluon jets) are taken as training parameters and are input to the BP network for repeated training. The learning process is ended when the output error of the neural network is less than a preset precision (sigma = 0.005). The same training routine is repeated in each of the 8 energy bins ranging from 2.5-22.5 GeV. The finally updated weights and thresholds of the BP neural network are tested using the quark and gluon jet samples obtained from the nonsymmetric three-jet events produced by the Monte Carlo generator JETSET 7.4. Then the pattern recognition of the mixed sample obtained from the combination of ...

  8. Artificial Neural Network Methods Applied to Drug Discovery for Neglected Diseases.

    Science.gov (United States)

    Scotti, Luciana; Ishiki, Hamilton; Mendonça Júnior, Francisco J B; da Silva, Marcelo S; Scotti, Marcus T

    2015-01-01

    Among the chemometric tools used in rational drug design, artificial neural network methods (ANNs), statistical learning algorithms loosely modeled on the human brain, are quite powerful. Some ANN applications insert biological and molecular data into the training series to ensure machine learning and to generate robust and predictive models. In drug discovery, researchers use this methodology to find new chemotherapeutic agents for various diseases. The neglected diseases are a group of tropical parasitic diseases that primarily affect poor countries in Africa, Asia, and South America. Current drugs against these diseases cause side effects, are ineffective during the chronic stages of the disease, are often not available to the needy population, have relatively high toxicity, and face developing resistance. Faced with so many problems, new chemotherapeutic agents to treat these infections are much needed. The present review reports on neural network research into new ligands against Chagas disease, sleeping sickness, malaria, tuberculosis, and leishmaniasis, a few of the neglected diseases.

  9. Evaluating clustering methods within the Artificial Ecosystem Algorithm and their application to bike redistribution in London.

    Science.gov (United States)

    Adham, Manal T; Bentley, Peter J

    2016-08-01

    This paper proposes and evaluates a solution to the truck redistribution problem prominent in London's Santander Cycle scheme. Due to the complexity of this NP-hard combinatorial optimisation problem, no efficient optimisation techniques are known to solve the problem exactly. This motivates our use of the heuristic Artificial Ecosystem Algorithm (AEA) to find good solutions in a reasonable amount of time. The AEA is designed to take advantage of highly distributed computer architectures and adapt to changing problems. In the AEA a problem is first decomposed into its relative sub-components; they then evolve solution building blocks that fit together to form a single optimal solution. Three variants of the AEA centred on evaluating clustering methods are presented: the baseline AEA, the community-based AEA which groups stations according to journey flows, and the Adaptive AEA which actively modifies clusters to cater for changes in demand. We applied these AEA variants to the redistribution problem prominent in bike share schemes (BSS). The AEA variants are empirically evaluated using historical data from Santander Cycles to validate the proposed approach and prove its potential effectiveness. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  10. Artificial Bee Colony Algorithm Combined with Grenade Explosion Method and Cauchy Operator for Global Optimization

    Directory of Open Access Journals (Sweden)

    Jian-Guo Zheng

    2015-01-01

    Full Text Available Artificial bee colony (ABC algorithm is a popular swarm intelligence technique inspired by the intelligent foraging behavior of honey bees. However, ABC is good at exploration but poor at exploitation and its convergence speed is also an issue in some cases. To improve the performance of ABC, a novel ABC combined with grenade explosion method (GEM and Cauchy operator, namely, ABCGC, is proposed. GEM is embedded in the onlooker bees’ phase to enhance the exploitation ability and accelerate convergence of ABCGC; meanwhile, Cauchy operator is introduced into the scout bees’ phase to help ABCGC escape from local optimum and further enhance its exploration ability. Two sets of well-known benchmark functions are used to validate the better performance of ABCGC. The experiments confirm that ABCGC is significantly superior to ABC and other competitors; particularly it converges to the global optimum faster in most cases. These results suggest that ABCGC usually achieves a good balance between exploitation and exploration and can effectively serve as an alternative for global optimization.
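For reference, the baseline ABC that ABCGC improves on can be sketched compactly: employed and onlooker bees perturb food sources, onlookers favouring better ones, and scouts reinitialise sources that stop improving. The GEM and Cauchy refinements of the record are deliberately omitted; population size, trial limit, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_minimise(f, dim, bounds, n_food=10, limit=20, iters=200):
    """Bare-bones artificial bee colony for minimisation. Employed and
    onlooker bees perturb one coordinate of a food source relative to
    a random partner; a scout replaces any source whose trial counter
    exceeds `limit`. Baseline scheme only, without ABCGC's GEM or
    Cauchy operator."""
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_food, dim))
    fit = np.array([f(x) for x in X])
    trials = np.zeros(n_food, dtype=int)
    best_f = float(fit.min())
    best = X[fit.argmin()].copy()
    for _ in range(iters):
        for phase in range(2):                 # employed bees, then onlookers
            if phase == 0:
                order = np.arange(n_food)
            else:
                p = fit.max() - fit + 1e-12    # better source -> larger weight
                order = rng.choice(n_food, size=n_food, p=p / p.sum())
            for i in order:
                k = int(rng.integers(n_food))  # random partner (may equal i)
                j = int(rng.integers(dim))     # coordinate to perturb
                cand = X[i].copy()
                cand[j] += rng.uniform(-1, 1) * (X[i][j] - X[k][j])
                cand = np.clip(cand, lo, hi)
                fc = f(cand)
                if fc < fit[i]:                # greedy selection
                    X[i], fit[i], trials[i] = cand, fc, 0
                else:
                    trials[i] += 1
        for i in range(n_food):                # scout phase
            if trials[i] > limit:
                X[i] = rng.uniform(lo, hi, dim)
                fit[i] = f(X[i])
                trials[i] = 0
        if fit.min() < best_f:
            best_f = float(fit.min())
            best = X[fit.argmin()].copy()
    return best, best_f

best, best_f = abc_minimise(lambda x: float((x ** 2).sum()), 2, (-5.0, 5.0))
```

ABCGC would replace the plain onlooker perturbation with a GEM-style shrapnel search around good sources, and draw the scouts' restarts from a Cauchy distribution instead of a uniform one.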

  11. Novel blood sampling method of an artificial endocrine pancreas via the cardiopulmonary bypass circuit.

    Science.gov (United States)

    Kawahito, Shinji; Higuchi, Seiichi; Mita, Naoji; Kitagawa, Tetsuya; Kitahata, Hiroshi

    2013-12-01

    We tried to perform continuous blood glucose monitoring during cardiovascular surgery involving cardiopulmonary bypass using an artificial endocrine pancreas (STG-22 or -55; Nikkiso, Tokyo, Japan); however, we often encountered problems during these procedures because insufficient blood was obtained for monitoring. Thus, we started performing the blood sampling via the venous side of the cardiopulmonary bypass circuit. As a result, continuous blood glucose monitoring using an artificial endocrine pancreas was proven to be stable and reliable during cardiovascular surgery involving cardiopulmonary bypass.

  12. Simulation of 2-D Compressible Flows on a Moving Curvilinear Mesh with an Implicit-Explicit Runge-Kutta Method

    KAUST Repository

    AbuAlSaud, Moataz

    2012-07-01

    The purpose of this thesis is to solve the unsteady two-dimensional compressible Navier-Stokes equations on a moving mesh using an implicit-explicit (IMEX) Runge-Kutta scheme. The moving mesh is incorporated into the equations using the Arbitrary Lagrangian-Eulerian (ALE) formulation. The inviscid part of the equations is solved explicitly using a second-order Godunov method, whereas the viscous part is treated implicitly. We simulate subsonic compressible flow over a static NACA-0012 airfoil at different angles of attack. Finally, the moving mesh is examined by oscillating the airfoil harmonically between angles of attack of 0 and 20 degrees. It is observed that the numerical solution matches the experimental and numerical results in the literature to within 20%.
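The IMEX idea in this record, advancing the non-stiff part explicitly and the stiff part implicitly within one step, can be shown on a scalar model problem. A first-order sketch, not the thesis's second-order Godunov/implicit pairing; the model equation and coefficients are assumptions for illustration.

```python
def imex_euler(y0, lam, forcing, dt, nsteps):
    """First-order IMEX step for y' = forcing(t) - lam*y: the
    (assumed non-stiff) forcing term is advanced explicitly while the
    stiff linear term is treated implicitly, so the step remains
    stable at sizes a fully explicit method could not use."""
    y, t = y0, 0.0
    for _ in range(nsteps):
        # explicit in forcing(t), implicit in -lam*y:
        # y_new = y + dt*forcing(t) - dt*lam*y_new
        y = (y + dt * forcing(t)) / (1.0 + dt * lam)
        t += dt
    return y

# stiff decay relaxes monotonically toward 0 even with dt far above 1/lam
y_end = imex_euler(1.0, lam=1000.0, forcing=lambda t: 0.0, dt=0.1, nsteps=50)
```

In the thesis the same split is applied stage-by-stage inside a Runge-Kutta pair, with the convective flux in the explicit role and the viscous terms in the implicit one.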

  13. Comparison of Intrabursal Transfer of Spermatozoa, a New Method for Artificial Insemination in Mice, with Intraoviductal Transfer of Spermatozoa

    OpenAIRE

    Sato, Masahiro; Nagashima, Ayako; Watanabe, Toshiteru; Kimura, Minoru

    2002-01-01

    Purpose: The objective of this paper was to compare the in vivo fertilizing abilities of fresh epididymal spermatozoa with a new method of artificial insemination in mice, so-called “intrabursal transfer of spermatozoa (ITS),” which requires transfer of spermatozoa into a space near the infundibulum between the ovary and ovarian bursa of superovulated females, and the previous method, so-called “intraoviductal transfer of spermatozoa (IOTS),” especially as regards sperm number and capacitatio...

  14. Larvae output and influence of the human factor on the reliability of meat inspection by the method of artificial digestion

    OpenAIRE

    Đorđević Vesna; Savić Marko; Vasilev Saša; Đorđević Milovan

    2013-01-01

    On the basis of the performed analyses of the factors that allowed infected meat to reach the food chain, we have found that the infection occurred after consumption of meat inspected by the method of artificial digestion of collective samples using a magnetic stirrer (MM). This work presents assay results which show how modifications of the method, at the level of final sedimentation, influence the reliability of Trichinella larvae detect...

  15. Determination of penetration depth at high velocity impact using finite element method and artificial neural network tools

    OpenAIRE

    Namık KılıÇ; Bülent Ekici; Selim Hartomacıoğlu

    2015-01-01

    Determination of the ballistic performance of an armor solution is a complicated task that has evolved significantly with the application of finite element methods (FEM) in this research field. Traditional armor design studies performed with FEM require sophisticated procedures and intensive computational effort; therefore simpler yet accurate numerical approaches are always worthwhile to decrease armor development time. This study aims to apply a hybrid method using FEM simulation and artificial...

  16. Effect of Molarity of Sodium Hydroxide and Curing Method on the Compressive Strength of Ternary Blend Geopolymer Concrete

    Science.gov (United States)

    Sathish Kumar, V.; Ganesan, N.; Indira, P. V.

    2017-07-01

    Concrete plays a vital role in the development of infrastructure and buildings all over the world. Geopolymer-based cement-less concrete is one of the recent findings in the construction industry which leads to a greener environment. This paper reports the results of using Fly ash (FA), Ground Granulated Blast Furnace Slag (GGBS) and Metakaolin (MK) as a ternary blend source material in Geopolymer concrete (GPC). The aspects that govern the compressive strength of GPC, namely the proportion of source material, the molarity of Sodium Hydroxide (NaOH) and the curing method, were investigated. The purpose of this research is to optimise the use of local waste materials and employ them effectively as a ternary blend in GPC. Seven combinations of binder were made in this study, with replacement of FA by GGBS and MK of 35%, 30%, 25%, 20%, 15%, 10%, 5% and 5%, 10%, 15%, 20%, 25%, 30%, 35%, respectively. The molarity of the NaOH solution was varied over 12M, 14M and 16M, and two curing methods were adopted, viz. hot-air oven curing and closed steam curing for 24 hours at 60°C (140°F). The samples were kept at ambient temperature until testing. The compressive strength of the GPC cubes was obtained after 7 days and 28 days. The test data reveal that the ternary blend GPC with molarity 14M cured in a hot-air oven produces the maximum compressive strength. It was also observed that the compressive strength of the oven-cured GPC is approximately 10% higher than that of the steam-cured GPC using the ternary blend.

  17. Path Planning of Multi-robot Cooperation for Avoiding Obstacle Based on Improved Artificial Potential Field Method

    Directory of Open Access Journals (Sweden)

    Yang Zhaofeng

    2014-02-01

    Full Text Available In the process of multi-robot collaboration, path planning must not only account for static obstacles but also avoid collisions between the collaborating robots. This paper selects the artificial potential field method as the basic path planning method and proposes improvement strategies for its unreachable-goal problem, its tendency to fall into local minima, and other issues. In theory, the improvements proposed in this paper make the repulsive force on a robot approaching obstacles near the target tend to 0, and produce a path planned along the edge of the obstacles. Simulation results show that the proposed method can effectively solve the inherent problems of the artificial potential field method and form a satisfactory planned path.
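    The improved potential-field idea can be sketched as follows. This is an illustrative Python fragment, not the authors' implementation; the gains `k_att`, `k_rep` and the influence radius `d0` are assumed values. The classic repulsion is scaled by the distance to the goal so that it vanishes when the goal lies close to an obstacle:

```python
import math

def apf_force(robot, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0):
    """Artificial potential field: attractive pull toward the goal plus a
    repulsive push away from each obstacle within influence range d0."""
    fx = k_att * (goal[0] - robot[0])
    fy = k_att * (goal[1] - robot[1])
    d_goal = math.hypot(goal[0] - robot[0], goal[1] - robot[1])
    for ox, oy in obstacles:
        d = math.hypot(robot[0] - ox, robot[1] - oy)
        if 1e-9 < d < d0:
            # Improved repulsion: scaled by the distance to the goal, so it
            # tends to 0 as the robot reaches a goal located near an obstacle
            # (the "goal unreachable" fix described in the abstract).
            mag = k_rep * (1.0 / d - 1.0 / d0) / d**2 * d_goal
            fx += mag * (robot[0] - ox) / d
            fy += mag * (robot[1] - oy) / d
    return fx, fy
```

    Without the `d_goal` factor, the repulsion near an obstacle always outgrows the attraction, so a goal placed just beside an obstacle could never be reached.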

  18. Exploiting of the Compression Methods for Reconstruction of the Antenna Far-Field Using Only Amplitude Near-Field Measurements

    Directory of Open Access Journals (Sweden)

    J. Puskely

    2010-06-01

    Full Text Available The novel approach exploits the principle of conventional two-plane amplitude measurements for the reconstruction of the unknown electric field distribution on the antenna aperture. The method combines global optimization with a compression method. The global optimization (GO) method is used to minimize the functional, and the compression method is used to reduce the number of unknown variables. The algorithm employs the Real Coded Genetic Algorithm (RCGA) as the global optimization approach. The Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) are applied to reduce the number of unknown variables. Pros and cons of both transforms for the solution of the problem are investigated and reported. In order to make the algorithm faster, exploitation of amplitudes from a single scanning plane is also discussed. First, the algorithm is used to obtain an initial estimate; subsequently, the common Fourier iterative algorithm is used to reach the global minimum with sufficient accuracy. The method is verified by measurements of a dish antenna.
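    The DCT-based variable reduction can be illustrated with a small NumPy sketch; the field size and the number of retained coefficients are arbitrary, and the optimizer of the paper would search only over the low-order coefficient block returned here:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    m[0] *= 1 / np.sqrt(2)
    return m * np.sqrt(2.0 / n)

def compress_field(field, keep):
    """Represent a 2-D aperture field by its keep x keep low-order DCT
    coefficients -- the variable-reduction idea of the abstract."""
    n = field.shape[0]
    D = dct_matrix(n)
    coeffs = D @ field @ D.T           # full 2-D DCT
    reduced = coeffs[:keep, :keep]     # only these would enter the GO solver
    full = np.zeros_like(coeffs)
    full[:keep, :keep] = reduced
    return reduced, D.T @ full @ D     # reduced set and its reconstruction
```

    A smooth aperture distribution is well represented by a handful of low-order coefficients, which is exactly why the genetic algorithm can work in the reduced space.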

  19. An Examination of a Music Appreciation Method Incorporating Tactile Sensations from Artificial Vibrations

    Science.gov (United States)

    Ideguchi, Tsuyoshi; Yoshida, Ryujyu; Ooshima, Keita

    We examined how test subjects' impressions of music changed when artificial vibrations were incorporated as constituent elements of a musical composition. In this study, test subjects listened to several music samples in which different types of artificial vibration had been incorporated and then subjectively evaluated any resulting changes to their impressions of the music. The following results were obtained: i) Even if rhythm vibration is added to a silent component of a musical composition, it can effectively enhance musical fitness. This could be readily accomplished when actual sounds that had been synchronized with the vibration components were provided beforehand. ii) The music could be listened to more comfortably by adding, as tactile stimulation at intentional timing, not only natural vibration extracted from percussion instruments but also artificial vibration. Furthermore, it was found that the test subjects' impressions of the music were affected by the characteristics of the artificial vibration. iii) Adding vibration in high-frequency areas can offer an effective and practical way of enhancing the appeal of a musical composition. iv) Movement sensations of sound and vibration could be experienced when the strengths of the sound and vibration were modified in turn. These results suggest that the intentional application of artificial vibration could amplify a listener's sensitivity.

  20. Real-time and encryption efficiency improvements of simultaneous fusion, compression and encryption method based on chaotic generators

    Science.gov (United States)

    Jridi, Maher; Alfalou, Ayman

    2018-03-01

    In this paper, enhancement of an existing optical simultaneous fusion, compression and encryption (SFCE) scheme in terms of real-time requirements, bandwidth occupation and encryption robustness is proposed. We use an approximate form of the DCT to decrease the required computational resources. Then, a novel chaos-based encryption algorithm is introduced in order to achieve the confusion and diffusion effects. In the confusion phase, the Henon map is used for row and column permutations, where the initial condition is related to the original image. Furthermore, the Skew Tent map is employed to generate another random matrix in order to carry out pixel scrambling. Finally, an adaptation of a classical diffusion process scheme is employed to strengthen the security of the cryptosystem against statistical, differential, and chosen-plaintext attacks. Analyses of key space, histogram, adjacent pixel correlation, sensitivity, and encryption speed of the encryption scheme are provided, and compare favorably to those of the existing crypto-compression system. The proposed method has been found to be friendly to digital/optical implementation, which facilitates the integration of the crypto-compression system in a very broad range of scenarios.
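    The Henon-map confusion stage can be sketched as follows. This is an illustrative fragment, not the paper's exact scheme: the map parameters are the classic chaotic values, and how the seed is derived from the plain image is left as an assumption.

```python
import numpy as np

def henon_sequence(n, x0=0.1, y0=0.3, a=1.4, b=0.3):
    """Iterate the Henon map n times and collect the x-values."""
    xs = np.empty(n)
    x, y = x0, y0
    for i in range(n):
        x, y = 1 - a * x * x + y, b * x
        xs[i] = x
    return xs

def permute_rows_cols(img, x0, y0):
    """Confusion stage: sort chaotic outputs to obtain row/column
    permutations; the seed (x0, y0) plays the role of the key that the
    paper derives from the original image."""
    h, w = img.shape
    row_perm = np.argsort(henon_sequence(h, x0, y0))
    col_perm = np.argsort(henon_sequence(w, y0, x0))
    return img[row_perm][:, col_perm], (row_perm, col_perm)

def unpermute(img, perms):
    """Inverse of permute_rows_cols, used at decryption."""
    row_perm, col_perm = perms
    out = np.empty_like(img)
    out[np.ix_(row_perm, col_perm)] = img
    return out
```

    Sorting a chaotic sequence is a standard way to turn real-valued map outputs into an integer permutation; any receiver holding the same seed regenerates the identical permutation.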

  1. Participant satisfaction with a school telehealth education program using interactive compressed video delivery methods in rural Arkansas.

    Science.gov (United States)

    Bynum, Ann B; Cranford, Charles O; Irwin, Cathy A; Denny, George S

    2002-08-01

    Socioeconomic and demographic factors can affect the impact of telehealth education programs that use interactive compressed video technology. This study assessed program satisfaction among participants in the University of Arkansas for Medical Sciences' School Telehealth Education Program delivered by interactive compressed video. Variables in the one-group posttest study were age, gender, ethnicity, education, community size, and program topics for the years 1997-1999. The convenience sample included 3,319 participants in junior high and high schools. The School Telehealth Education Program provided information about health risks, disease prevention, health promotion, personal growth, and health sciences. Adolescents reported medium to high levels of satisfaction regarding program interest and quality. Significantly higher satisfaction was expressed for programs on muscular dystrophy, anatomy of the heart, and tobacco addiction. The program, delivered by interactive compressed video, promoted satisfaction among rural and minority populations and among junior high and high school students. Effective program methods included an emphasis on participants' learning needs, increasing access in rural areas among ethnic groups, speaker communication, and clarity of the program presentation.

  2. ChIPWig: A Random Access-Enabling Lossless and Lossy Compression Method for ChIP-seq Data.

    Science.gov (United States)

    Ravanmehr, Vida; Kim, Minji; Wang, Zhiying; Milenkovic, Olgica

    2017-10-26

    Chromatin immunoprecipitation sequencing (ChIP-seq) experiments are inexpensive and time-efficient, and result in massive datasets that introduce significant storage and maintenance challenges. To address the resulting Big Data problems, we propose a lossless and lossy compression framework specifically designed for ChIP-seq Wig data, termed ChIPWig. ChIPWig enables random access and summary statistics lookups, and is based on the asymptotic theory of optimal point density design for nonuniform quantizers. We tested the ChIPWig compressor on 10 ChIP-seq datasets generated by the ENCODE consortium. On average, lossless ChIPWig reduced the file sizes to merely 6% of the original, and offered 6-fold compression rate improvement compared to bigWig. The lossy feature further reduced file sizes 2-fold compared to the lossless mode, with little or no effect on peak calling and motif discovery using specialized NarrowPeaks methods. The compression and decompression speeds are of the order of 0.2 MB/sec on general purpose computers. The source code and binaries, implemented in C++, are freely available for download at https://github.com/vidarmehr/ChIPWig-v2. Contact: milenkov@illinois.edu. Supplementary material is available on the Bioinformatics submission site.

  3. Computer aided diagnosis based on medical image processing and artificial intelligence methods

    International Nuclear Information System (INIS)

    Stoitsis, John; Valavanis, Ioannis; Mougiakakou, Stavroula G.; Golemati, Spyretta; Nikita, Alexandra; Nikita, Konstantina S.

    2006-01-01

    Advances in imaging technology and computer science have greatly enhanced interpretation of medical images, and contributed to early diagnosis. The typical architecture of a Computer Aided Diagnosis (CAD) system includes image pre-processing, definition of region(s) of interest, features extraction and selection, and classification. In this paper, the principles of CAD systems design and development are demonstrated by means of two examples. The first one focuses on the differentiation between symptomatic and asymptomatic carotid atheromatous plaques. For each plaque, a vector of texture and motion features was estimated, which was then reduced to the most robust ones by means of ANalysis of VAriance (ANOVA). Using fuzzy c-means, the features were then clustered into two classes. Clustering performances of 74%, 79%, and 84% were achieved for texture only, motion only, and combinations of texture and motion features, respectively. The second CAD system presented in this paper supports the diagnosis of focal liver lesions and is able to characterize liver tissue from Computed Tomography (CT) images as normal, hepatic cyst, hemangioma, and hepatocellular carcinoma. Five texture feature sets were extracted for each lesion, while a genetic algorithm based feature selection method was applied to identify the most robust features. The selected feature set was fed into an ensemble of neural network classifiers. The achieved classification performance was 100%, 93.75% and 90.63% in the training, validation and testing set, respectively. It is concluded that computerized analysis of medical images in combination with artificial intelligence can be used in clinical practice and may contribute to more efficient diagnosis
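    The fuzzy c-means step used to cluster the plaque feature vectors can be sketched in a few lines of NumPy. This is a generic textbook implementation run on synthetic data, not the authors' code; cluster count, fuzzifier and iteration budget are assumed:

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means: returns soft memberships U (n x c) and
    cluster centroids V (c x dim). m > 1 is the fuzzifier."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # rows are soft assignments
    for _ in range(iters):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]                 # centroids
        d = np.linalg.norm(X[:, None, :] - V[None], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)                 # memberships
    return U, V
```

    Unlike hard k-means, each feature vector keeps a graded membership in both classes, which is what allows the symptomatic/asymptomatic separation to be scored as a percentage.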

  4. Computer aided diagnosis based on medical image processing and artificial intelligence methods

    Energy Technology Data Exchange (ETDEWEB)

    Stoitsis, John [National Technical University of Athens, School of Electrical and Computer Engineering, Athens 157 71 (Greece)]. E-mail: stoitsis@biosim.ntua.gr; Valavanis, Ioannis [National Technical University of Athens, School of Electrical and Computer Engineering, Athens 157 71 (Greece); Mougiakakou, Stavroula G. [National Technical University of Athens, School of Electrical and Computer Engineering, Athens 157 71 (Greece); Golemati, Spyretta [National Technical University of Athens, School of Electrical and Computer Engineering, Athens 157 71 (Greece); Nikita, Alexandra [University of Athens, Medical School 152 28 Athens (Greece); Nikita, Konstantina S. [National Technical University of Athens, School of Electrical and Computer Engineering, Athens 157 71 (Greece)

    2006-12-20

    Advances in imaging technology and computer science have greatly enhanced interpretation of medical images, and contributed to early diagnosis. The typical architecture of a Computer Aided Diagnosis (CAD) system includes image pre-processing, definition of region(s) of interest, features extraction and selection, and classification. In this paper, the principles of CAD systems design and development are demonstrated by means of two examples. The first one focuses on the differentiation between symptomatic and asymptomatic carotid atheromatous plaques. For each plaque, a vector of texture and motion features was estimated, which was then reduced to the most robust ones by means of ANalysis of VAriance (ANOVA). Using fuzzy c-means, the features were then clustered into two classes. Clustering performances of 74%, 79%, and 84% were achieved for texture only, motion only, and combinations of texture and motion features, respectively. The second CAD system presented in this paper supports the diagnosis of focal liver lesions and is able to characterize liver tissue from Computed Tomography (CT) images as normal, hepatic cyst, hemangioma, and hepatocellular carcinoma. Five texture feature sets were extracted for each lesion, while a genetic algorithm based feature selection method was applied to identify the most robust features. The selected feature set was fed into an ensemble of neural network classifiers. The achieved classification performance was 100%, 93.75% and 90.63% in the training, validation and testing set, respectively. It is concluded that computerized analysis of medical images in combination with artificial intelligence can be used in clinical practice and may contribute to more efficient diagnosis.

  5. Semantic Source Coding for Flexible Lossy Image Compression

    National Research Council Canada - National Science Library

    Phoha, Shashi; Schmiedekamp, Mendel

    2007-01-01

    Semantic Source Coding for Lossy Video Compression investigates methods for mission-oriented lossy image compression, developing methods to use different compression levels for different portions...

  6. Evaluation of dna extraction methods of the Salmonella sp. bacterium in artificially infected chickens eggs

    Directory of Open Access Journals (Sweden)

    Ana Cristina dos Reis Ferreira

    2015-06-01

    Full Text Available ABSTRACT. Ferreira A.C.dosR. & dos Santos B.M. [Evaluation of DNA extraction methods for the Salmonella sp. bacterium in artificially infected chicken eggs.] Avaliação de três métodos de extração de DNA de Salmonella sp. em ovos de galinhas contaminados artificialmente. Revista Brasileira de Medicina Veterinária, 37(2):115-119, 2015. Departamento de Veterinária, Universidade Federal de Viçosa, Campus Universitário, Av. Peter Henry Rolfs, s/n, Viçosa, MG 36571-000, Brasil. E-mail: bmsantos@ufv.br The present study evaluated the efficiency of different protocols for the genomic DNA extraction of Salmonella bacteria in chicken eggs free of specific pathogens (SPF). Seventy-five eggs were used, divided into five groups of fifteen eggs each. Three of the five groups were inoculated with enteric Salmonella cultures, one group was inoculated with an Escherichia coli culture, and the remaining group was the negative control, which received sterile 0.85% saline solution. The eggs were incubated at a temperature of 20 to 25°C for 24, 48 and 72 hours. Five yolks from each group were collected every 24 hours. These yolks were homogenized and centrifuged for 10 minutes, and the supernatant was discarded. PBS pH 7.2 was then added and the material centrifuged again. The sediment obtained from each group was used for the extraction of bacterial genomic DNA. Silica particles and a commercial kit were utilized as the extraction methods. The extracted DNA was kept at a temperature of 20°C until evaluation through PCR. The primers utilized were related to the invA gene and were the following: 5' GTA AAA TTA TCG CCA CGT TCG GGC AA 3' and 5' TCA TCG CAC CGT CAA AGG AAC C 3'. The amplification products were visualized in a transilluminator with ultraviolet light. The results obtained through the bacterial DNA extractions demonstrated that the extraction method utilizing silica particles was

  7. Lattice Boltzmann method for simulation of compressible flows on standard lattices.

    Science.gov (United States)

    Prasianakis, Nikolaos I; Karlin, Iliya V

    2008-07-01

    The recently introduced lattice Boltzmann model for thermal flow simulation on a standard lattice [Prasianakis and Karlin, Phys. Rev. E 76, 016702 (2007)] is studied numerically in the case where compressibility effects are essential. It is demonstrated that the speed of sound and shock propagation are described correctly in a wide temperature range, and that it is possible to take into account additional physics such as heat sources and sinks. A remarkable simplicity of the model makes it viable for engineering applications in subsonic flows with large temperature and density variations.
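    For orientation, a minimal isothermal BGK collide-and-stream update on the standard D2Q9 lattice looks as follows; the thermal, compressible model of the abstract extends this scheme with higher-order corrections, so the sketch only illustrates the basic machinery on a periodic grid:

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights (the "standard lattice")
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    """Second-order equilibrium at lattice sound speed 1/sqrt(3)."""
    cu = 3 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    return w[:, None, None] * rho * (1 + cu + 0.5 * cu**2 - usq)

def lbm_step(f, tau=0.8):
    """One BGK collision followed by streaming on a periodic grid."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f = f + (equilibrium(rho, ux, uy) - f) / tau      # collision
    for i, (cx, cy) in enumerate(c):                  # streaming
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
    return f
```

    Because the BGK collision conserves mass and momentum cell by cell and streaming merely moves populations, total mass is preserved exactly, which is a convenient sanity check for any implementation.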

  8. An efficient finite differences method for the computation of compressible, subsonic, unsteady flows past airfoils and panels

    Science.gov (United States)

    Colera, Manuel; Pérez-Saborid, Miguel

    2017-09-01

    A finite differences scheme is proposed in this work to compute in the time domain the compressible, subsonic, unsteady flow past an aerodynamic airfoil using linearized potential theory. It improves and extends the original method proposed in this journal by Hariharan, Ping and Scott [1] by considering: (i) a non-uniform mesh, (ii) an implicit time integration algorithm, (iii) a vectorized implementation and (iv) the coupled airfoil dynamics and fluid dynamic loads. First, we have formulated the method for cases in which the airfoil motion is given. The scheme has been tested on well-known problems in unsteady aerodynamics, such as the response to a sudden change of the angle of attack and to a harmonic motion of the airfoil, and has been proved to be more accurate and efficient than other finite differences and vortex-lattice methods found in the literature. Secondly, we have coupled our method to the equations governing the airfoil dynamics in order to numerically solve problems where the airfoil motion is unknown a priori, as happens, for example, in the cases of the flutter and the divergence of a typical section of a wing or of a flexible panel. Apparently, this is the first self-consistent and easy-to-implement numerical analysis in the time domain of the compressible, linearized coupled dynamics of the (generally flexible) airfoil-fluid system carried out in the literature. The results for the particular case of a rigid airfoil show excellent agreement with those reported by other authors, whereas those obtained for the case of a cantilevered flexible airfoil in compressible flow seem to be original or, at least, not well-known.

  9. Prediction of enthalpy of fusion of pure compounds using an Artificial Neural Network-Group Contribution method

    Energy Technology Data Exchange (ETDEWEB)

    Gharagheizi, Farhad, E-mail: fghara@ut.ac.ir [Saman Energy Giti Co., Postal Code 3331619636,Tehran (Iran, Islamic Republic of); Salehi, Gholam Reza [Islamic Azad University Nowshahr Branch, Nowshahr (Iran, Islamic Republic of)

    2011-07-10

    Highlights: • An Artificial Neural Network-Group Contribution method is presented for prediction of the enthalpy of fusion of pure compounds at their normal melting point. • Validity of the model is confirmed using a large evaluated data set containing 4157 pure compounds. • The average percent error of the model is equal to 2.65% in comparison with the experimental data. - Abstract: In this work, the Artificial Neural Network-Group Contribution (ANN-GC) method is applied to estimate the enthalpy of fusion of pure chemical compounds at their normal melting point. 4157 pure compounds from various chemical families are investigated to propose a comprehensive and predictive model. The obtained results show a Squared Correlation Coefficient (R²) of 0.999, a Root Mean Square Error of 0.82 kJ/mol, and an average absolute deviation lower than 2.65% for the estimated properties relative to existing experimental values.
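    The ANN-GC idea, mapping group-occurrence counts to a property with a small neural network, can be illustrated on synthetic data. Everything below (network size, learning rate, training data) is an assumption for the sketch, not the paper's configuration:

```python
import numpy as np

def train_ann_gc(G, y, hidden=8, lr=0.05, epochs=2000, seed=0):
    """Tiny one-hidden-layer network mapping group-occurrence counts G
    (n_compounds x n_groups) to a property y, trained by plain gradient
    descent. Returns a prediction function."""
    rng = np.random.default_rng(seed)
    n = len(y)
    W1 = rng.normal(0, 0.5, (G.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, hidden)
    b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(G @ W1 + b1)            # hidden activations
        err = h @ W2 + b2 - y               # prediction error
        gW2 = h.T @ err / n                 # backprop through output layer
        gb2 = err.mean()
        gh = np.outer(err, W2) * (1 - h**2) # backprop through tanh
        gW1 = G.T @ gh / n
        gb1 = gh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Gq: np.tanh(Gq @ W1 + b1) @ W2 + b2
```

    In the real method each row of G would count functional groups in a molecule and y would hold the measured enthalpies of fusion.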

  10. A new optimization method using a compressed sensing inspired solver for real-time LDR-brachytherapy treatment planning.

    Science.gov (United States)

    Guthier, C; Aschenbrenner, K P; Buergy, D; Ehmann, M; Wenz, F; Hesser, J W

    2015-03-21

    This work discusses a novel strategy for inverse planning in low dose rate brachytherapy. It applies the idea of compressed sensing to the problem of inverse treatment planning and a new solver for this formulation is developed. An inverse planning algorithm was developed incorporating brachytherapy dose calculation methods as recommended by AAPM TG-43. For optimization of the functional a new variant of a matching pursuit type solver is presented. The results are compared with current state-of-the-art inverse treatment planning algorithms by means of real prostate cancer patient data. The novel strategy outperforms the best state-of-the-art methods in speed, while achieving comparable quality. It is able to find solutions with comparable values for the objective function and it achieves these results within a few microseconds, being up to 542 times faster than competing state-of-the-art strategies, allowing real-time treatment planning. The sparse solution of inverse brachytherapy planning achieved with methods from compressed sensing is a new paradigm for optimization in medical physics. Through the sparsity of required needles and seeds identified by this method, the cost of intervention may be reduced.
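    A matching pursuit type solver of the kind mentioned above can be sketched as follows; this is generic greedy matching pursuit in NumPy, and the actual planning functional and TG-43 dose matrix are not reproduced here:

```python
import numpy as np

def matching_pursuit(A, y, n_nonzero=5, tol=1e-6):
    """Greedy matching pursuit: repeatedly pick the column of A most
    correlated with the residual and add its least-squares coefficient,
    producing a sparse x with A @ x close to y. In the planning analogy,
    columns are candidate seed positions and y is the target dose."""
    x = np.zeros(A.shape[1])
    r = y.astype(float)
    norms = np.linalg.norm(A, axis=0)
    for _ in range(n_nonzero):
        corr = A.T @ r / norms              # normalized correlations
        j = np.argmax(np.abs(corr))
        step = (A[:, j] @ r) / norms[j] ** 2
        x[j] += step                        # update sparse solution
        r -= step * A[:, j]                 # shrink the residual
        if np.linalg.norm(r) < tol:
            break
    return x
```

    The sparsity of the returned solution is exactly what the abstract exploits: only a handful of columns (needles/seeds) receive nonzero coefficients.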

  11. Investigation of a novel non-surgical method of artificial insemination for sheep

    Science.gov (United States)

    Transcervical artificial insemination (AI) with sheep is not frequently used in the US due to low fertility rates. Consequently, laparoscopic AI has been employed to circumvent this situation. The problem with this technique is that while it provides satisfactory levels of fertility the degree of ...

  12. Log-linear model based behavior selection method for artificial fish swarm algorithm.

    Science.gov (United States)

    Huang, Zhehuang; Chen, Yidong

    2015-01-01

    Artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fishes. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fishes has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed, so how to construct and select the behaviors of the fishes is an important task. To solve these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. Firstly, we propose a new behavior selection algorithm based on a log-linear model which can enhance the decision-making ability of behavior selection. Secondly, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of the fishes. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high-dimensional function optimization show that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm.
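    A log-linear behavior selection rule of this kind can be sketched as a softmax over behavior scores; the behavior names, features and weights below are illustrative assumptions, not the paper's model:

```python
import math
import random

def select_behavior(features, weights, rng=None):
    """Log-linear (softmax) behavior selection: score each candidate
    behavior (prey, swarm, follow, ...) as a weighted sum of state
    features, convert scores to probabilities, and sample -- so a fish
    mostly takes the best-scoring behavior but keeps some exploratory
    randomness."""
    rng = rng or random.Random(0)
    scores = {b: sum(w * f for w, f in zip(ws, features))
              for b, ws in weights.items()}
    mx = max(scores.values())
    exps = {b: math.exp(s - mx) for b, s in scores.items()}  # stable softmax
    z = sum(exps.values())
    probs = {b: e / z for b, e in exps.items()}
    r, acc = rng.random(), 0.0
    for b, p in probs.items():                               # sample
        acc += p
        if r <= acc:
            return b, probs
    return b, probs   # guard against floating-point shortfall
```

    Compared with the standard AFSA rule (always execute the single best behavior), sampling from the log-linear distribution preserves population diversity, which is the claimed source of the improved global exploration.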

  13. Comparison of three artificial digestion methods for detection of non-encapsulated Trichinella pseudospiralis larvae in pork.

    Science.gov (United States)

    Nöckler, K; Reckinger, S; Szabó, I; Maddox-Hyttel, C; Pozio, E; van der Giessen, J; Vallée, I; Boireau, P

    2009-02-23

    In a ring trial involving five laboratories (A, B, C, D, and E), three different methods of artificial digestion were compared for the detection of non-encapsulated Trichinella pseudospiralis larvae in minced meat. Each sample panel consisted of ten 1 g minced pork samples. All samples in each panel were derived from a bulk meat preparation with a nominal value of either 7 or 17 larvae per g (lpg). Samples were tested for the number of muscle larvae using the magnetic stirrer method (labs A, B, and E), the stomacher method (lab B), and Trichomatic 35 (labs C and D). T. pseudospiralis larvae were found in all 120 samples tested. For samples with 7 lpg, larval recoveries were significantly higher using the stomacher method versus the magnetic stirrer method, but there were no significant differences for samples with 17 lpg. Comparing laboratory results irrespective of the method used, lab B detected a significantly higher number of larvae than lab E for samples with 7 lpg, and lab E detected significantly fewer larvae than labs A, B, and D in samples with 17 lpg. The lowest overall variation in quantitative results (i.e. larval recoveries outside the tolerance range) was achieved using the magnetic stirrer method (22%), followed by the stomacher method (25%) and Trichomatic 35 (30%). Results revealed that T. pseudospiralis larvae in samples with a nominal value of 7 and 17 lpg can be detected by all three methods of artificial digestion.

  14. Efficient solution of the non-linear Reynolds equation for compressible fluid using the finite element method

    DEFF Research Database (Denmark)

    Larsen, Jon Steffen; Santos, Ilmar

    2015-01-01

    An efficient finite element scheme for solving the non-linear Reynolds equation for compressible fluid coupled to compliant structures is presented. The method is general and fast and can be used in the analysis of airfoil bearings with simplified or complex foil structure models. To illustrate the computational performance, it is applied to the analysis of a compliant foil bearing modelled using the simple elastic foundation model. The model is derived and perturbed using complex notation. The top foil sagging effect is added to the bump foil compliance in terms of a closed-form periodic function. For a foil...

  15. Numerical simulation of the interaction between a nonlinear elastic structure and compressible flow by the discontinuous Galerkin method

    Czech Academy of Sciences Publication Activity Database

    Kosík, Adam; Feistauer, M.; Hadrava, Martin; Horáček, Jaromír

    2015-01-01

    Roč. 267, September (2015), s. 382-396 ISSN 0096-3003 R&D Projects: GA ČR(CZ) GAP101/11/0207 Institutional support: RVO:61388998 Keywords: discontinuous Galerkin method * nonlinear elasticity * compressible viscous flow * fluid–structure interaction Subject RIV: BI - Acoustics Impact factor: 1.345, year: 2015 http://www.sciencedirect.com/science/article/pii/S0096300315002453/pdfft?md5=02d46bc730e3a7fb8a5008aaab1da786&pid=1-s2.0-S0096300315002453-main.pdf

  16. Disposal of Kr-85 in compressed gas cylinders and in zeolites; application of the zeolite method to other radioactive gases

    International Nuclear Information System (INIS)

    Penzhorn, R.D.

    1983-01-01

    Ultimate storage of Kr-85 in compressed gas cylinders of structural steel or austenitic special steels is possible for the required storage time of 100 years at temperatures of up to 200°C, since Rb corrosion under ultimate storage conditions may be neglected. When Kr is stored in CaNaA zeolite at temperatures from 340-650°C, the pressure is of secondary importance. CO2 and CH4 can also be durably solidified in zeolites 4A and 5A. It is presently being assessed whether this method is applicable to tritium and I-129. (DG) [de

  17. ARTIFICIAL NEURAL NETWORK BASED METHOD OF ASSESSMENT OF STUDENTS' FOREIGN LANGUAGE COMPETENCE BY THE GROUP OF EXPERTS

    Directory of Open Access Journals (Sweden)

    Olha V. Zastelo

    2015-09-01

    Full Text Available This article considers a method for the integral assessment of the level of students' foreign language communicative competence by a group of experts through a complex test in a foreign language. The use of mathematical methods and modern specialized software during complex testing of students significantly improves expert methods, particularly by increasing the reliability of the assessment. The analytical software environment realizes the simulation of non-linear generalizations based on artificial neural networks, which increases the accuracy of the estimate and allows efficient further use, within the model, of the experience gained by the competent experts.

  18. Investigation of Surface Pre-Treatment Methods for Wafer-Level Cu-Cu Thermo-Compression Bonding

    Directory of Open Access Journals (Sweden)

    Koki Tanaka

    2016-12-01

    Full Text Available To increase the yield of the wafer-level Cu-Cu thermo-compression bonding method, certain surface pre-treatment methods for Cu are studied which allow exposure to the atmosphere before bonding. To inhibit re-oxidation under atmospheric conditions, the reduced pure Cu surface is treated by H2/Ar plasma, NH3 plasma and thiol solution, respectively, and is accordingly covered by Cu hydride, Cu nitride or a self-assembled monolayer (SAM). A pair of the treated wafers is then bonded by the thermo-compression bonding method and evaluated by the tensile test. Results show that the bond strengths of the wafers treated by NH3 plasma and SAM are not sufficient, due to the surface protection layers (Cu nitride and SAMs) remaining from the pre-treatment. In contrast, the H2/Ar plasma-treated wafer showed the same strength as one treated with formic acid vapor, even when exposed to the atmosphere for 30 min. In the thermal desorption spectroscopy (TDS) measurement of the H2/Ar plasma-treated Cu sample, the total amount of detected H2 was 3.1 times that of the citric acid-treated sample. Results of the TDS measurement indicate that the modified Cu surface is terminated by chemisorbed hydrogen atoms, which leads to high bonding strength.

  19. A three-dimensional, compressible, laminar boundary-layer method for general fuselages. Volume 1: Numerical method

    Science.gov (United States)

    Wie, Yong-Sun

    1990-01-01

    A procedure for calculating 3-D, compressible laminar boundary layer flow on general fuselage shapes is described. The boundary layer solutions can be obtained in either nonorthogonal 'body oriented' coordinates or orthogonal streamline coordinates. The numerical procedure is 'second order' accurate, efficient and independent of the cross flow velocity direction. Numerical results are presented for several test cases, including a sharp cone, an ellipsoid of revolution, and a general aircraft fuselage at angle of attack. Comparisons are made between numerical results obtained using nonorthogonal curvilinear 'body oriented' coordinates and streamline coordinates.

  20. Core damage severity evaluation for pressurized water reactors by artificial intelligence methods

    Science.gov (United States)

    Mironidis, Anastasios Pantelis

    1998-12-01

    During the course of nuclear power evolution, accidents have occurred. However, in the western world, none of them had a severe impact on the public, because of the design features of nuclear plants. In nuclear reactors, barriers constitute physical obstacles to uncontrolled fission product releases. These barriers are an important factor in safety analysis. During an accident, reactor safety systems are actuated to prevent the barriers from being breached. In addition, operators are required to take specified actions, meticulously depicted in emergency response procedures. In an accident, on-the-spot knowledge regarding the condition of the core is necessary: in order to make the right decisions toward mitigating the accident severity and its consequences, we need to know the status of the core [1, 3]. However, power plant instrumentation that can provide a direct indication of the status of the core, during the time when core damage is a potential outcome, does not exist. Moreover, the information from instruments may have large uncertainty of various types. Thus, a very strong potential for misinterpreting incoming information exists. This research endeavor addresses the problem of evaluating the core damage severity of a Pressurized Water Reactor during a transient or an accident. An expert system has been constructed that incorporates the knowledge and reasoning of human experts. The expert system's inference engine receives incoming plant data that originate in the plethora of core-related instruments. Its knowledge base relies on several massive, multivariate fuzzy logic rule-sets, coupled with several artificial neural networks. These mathematical models encode information that defines possible core states, based on correlations of parameter values. The inference process classifies the core as intact, or as experiencing clad damage and/or core melting. If the system detects a form of core damage, a quantification procedure will provide a numerical
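    The fuzzy rule-set classification can be illustrated with a toy two-input example; the inputs, membership breakpoints and rules below are invented for the sketch and bear no relation to the dissertation's actual multivariate rule-sets:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify_core(temp_c, h2_pct):
    """Toy fuzzy classifier in the spirit of the abstract: combine core-exit
    temperature and containment hydrogen concentration with min (fuzzy AND),
    then report the core state whose rule fires most strongly."""
    mu = {
        "intact":      min(tri(temp_c, -1, 300, 700),
                           tri(h2_pct, -1, 0, 1)),
        "clad damage": min(tri(temp_c, 600, 1200, 1900),
                           tri(h2_pct, 0.5, 3, 6)),
        "core melt":   min(tri(temp_c, 1700, 2500, 4000),
                           tri(h2_pct, 4, 10, 100)),
    }
    return max(mu, key=mu.get), mu
```

    The overlapping membership functions are what lets the system grade a deteriorating core continuously instead of flipping abruptly between discrete states, mirroring the large instrument uncertainty discussed above.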

  1. "Compressed" Compressed Sensing

    OpenAIRE

    Reeves, Galen; Gastpar, Michael

    2010-01-01

    The field of compressed sensing has shown that a sparse but otherwise arbitrary vector can be recovered exactly from a small number of randomly constructed linear projections (or samples). The question addressed in this paper is whether an even smaller number of samples is sufficient when there exists prior knowledge about the distribution of the unknown vector, or when only partial recovery is needed. An information-theoretic lower bound with connections to free probability theory and an upp...

  2. Artificial Inductance Concept to Compensate Nonlinear Inductance Effects in the Back EMF-Based Sensorless Control Method for PMSM

    DEFF Research Database (Denmark)

    Lu, Kaiyuan; Lei, Xiao; Blaabjerg, Frede

    2013-01-01

    The back EMF-based sensorless control method is very popular for permanent magnet synchronous machines (PMSMs) in the medium- to high-speed operation range due to its simple structure. In this speed range, the accuracy of the estimated position is mainly affected by the inductance, which varies at different loading conditions due to saturation effects. In this paper, a new concept of using a constant artificial inductance to replace the actual varying machine inductance for position estimation is introduced. This greatly facilitates the analysis of the influence of inductance variation on the estimated position error, and gives a deep insight into this problem. It also provides a simple approach to achieve a globally minimized position error. A proper choice of the artificial machine inductance may reduce the maximum position error by 50% without considering the actual inductance variation...

  3. Artificial life and Piaget.

    Science.gov (United States)

    Mueller, Ulrich; Grobman, K H.

    2003-04-01

    Artificial life provides important theoretical and methodological tools for the investigation of Piaget's developmental theory. This new method uses artificial neural networks to simulate living phenomena in a computer. A recent study by Parisi and Schlesinger suggests that artificial life might reinvigorate the Piagetian framework. We contrast artificial life with traditional cognitivist approaches, discuss the role of innateness in development, and examine the relation between physiological and psychological explanations of intelligent behaviour.

  4. Notion Of Artificial Labs Slow Global Warming And Advancing Engine Studies Perspectives On A Computational Experiment On Dual-Fuel Compression-Ignition Engine Research

    Directory of Open Access Journals (Sweden)

    Tonye K. Jack

    2017-06-01

    Full Text Available To appreciate clean energy applications of the dual-fuel internal combustion engine (D-FICE) with pilot Diesel fuel, and to aid public policy formulation in terms of present and future benefits to modern transportation, stationary power and the promotion of oil and gas green-drilling, the brief to an engine research team was to investigate the feasible advantages of dual-fuel compression-ignition engines, guided by the following concerns: (i) sustainable fuel and engine power delivery; (ii) the requirements for fuel flexibility; (iii) low exhaust emissions and environmental pollution; (iv) achieving low specific fuel consumption and economy for maximum power; (v) the comparative advantages over conventional Diesel engines; (vi) thermo-economic modeling and analysis for the optimal blend as a basis for a benefit-cost evaluation. The work was planned in two stages for reduced cost and fast turnaround of results: an initial preliminary stage with basic simple models, and an advanced stage with more detailed complex modeling. The paper describes a simplified MATLAB-based computational experiment predictive model for the thermodynamic combustion and engine performance analysis of dual-fuel compression-ignition engine studies operating on the theoretical limited-pressure cycle with several alternative fuel blends. Environmental implications for extreme temperature moderation are considered by finite-time thermodynamic modeling for maximum power, with predictions for pollutant formation and control by reaction-rate kinetics analysis of systematically reduced plausible coupled chemistry models through the NCN reaction pathway for the gas-phase reaction classes of interest. Controllable variables for engine-out pollutant emission reduction, and in particular NOx elimination, are identified.
    Verifications and Validations (V&V) through performance comparisons were made using a clinical approach in the selection of stroke/bore ratios greater than and equal to one (≥ 1), low-to-high engine speeds and medium

  5. Numerical and theoretical aspects of the modelling of compressible two-phase flow by interface capture methods

    International Nuclear Information System (INIS)

    Kokh, S.

    2001-01-01

    This research thesis reports the development of a numerical direct simulation of compressible two-phase flows by using interface capturing methods. These techniques are based on the use of an Eulerian fixed grid to describe flow variables as well as the interface between fluids. The author first recalls conventional interface capturing methods and makes the distinction between those based on discontinuous colour functions and those based on level set functions. The approach is then extended to a five equation model to allow the largest as possible choice of state equations for the fluids. Three variants are developed. A solver inspired by the Roe scheme is developed for one of them. These interface capturing methods are then refined, more particularly for problems of numerical diffusion at the interface. A last part addresses the study of dynamic phase change. Non-conventional thermodynamics tools are used to study the structures of an interface which performs phase transition [fr

  6. A novel method for semen collection and artificial insemination in large parrots (Psittaciformes)

    Science.gov (United States)

    Lierz, Michael; Reinschmidt, Matthias; Müller, Heiner; Wink, Michael; Neumann, Daniel

    2013-01-01

    The paper described a novel technique for semen collection in large psittacines (patent pending), a procedure which was not routinely possible before. For the first time, a large set of semen samples is now available for analysis as well as for artificial insemination. Semen samples of more than 100 psittacine taxa were collected and analysed; data demonstrate large differences in the spermatological parameters between families, indicating an ecological relationship with breeding behaviour (polygamous versus monogamous birds). Using semen samples for artificial insemination resulted in the production of offspring in various families, such as Macaws and Cockatoos, for the first time ever. The present technique represents a breakthrough in species conservation programs and will enable future research into the ecology and environmental factors influencing endangered species. PMID:23797622

  7. Event classification and optimization methods using artificial intelligence and other relevant techniques: Sharing the experiences

    Science.gov (United States)

    Mohamed, Abdul Aziz; Hasan, Abu Bakar; Ghazali, Abu Bakar Mhd.

    2017-01-01

    Classification of large data into respected classes or groups could be carried out with the help of artificial intelligence (AI) tools readily available in the market. To get the optimum or best results, optimization tool could be applied on those data. Classification and optimization have been used by researchers throughout their works, and the outcomes were very encouraging indeed. Here, the authors are trying to share what they have experienced in three different areas of applied research.

  8. Semen parameters can be predicted from environmental factors and lifestyle using artificial intelligence methods.

    Science.gov (United States)

    Girela, Jose L; Gil, David; Johnsson, Magnus; Gomez-Torres, María José; De Juan, Joaquín

    2013-04-01

    Fertility rates have dramatically decreased in the last two decades, especially in men. It has been described that environmental factors as well as life habits may affect semen quality. In this paper we use artificial intelligence techniques in order to predict semen characteristics resulting from environmental factors, life habits, and health status, with these techniques constituting a possible decision support system that can help in the study of male fertility potential. A total of 123 young, healthy volunteers provided a semen sample that was analyzed according to the World Health Organization 2010 criteria. They also were asked to complete a validated questionnaire about life habits and health status. Sperm concentration and percentage of motile sperm were related to sociodemographic data, environmental factors, health status, and life habits in order to determine the predictive accuracy of a multilayer perceptron network, a type of artificial neural network. In conclusion, we have developed an artificial neural network that can predict the results of the semen analysis based on the data collected by the questionnaire. The semen parameter that is best predicted using this methodology is the sperm concentration. Although the accuracy for motility is slightly lower than that for concentration, it is possible to predict it with a significant degree of accuracy. This methodology can be a useful tool in early diagnosis of patients with seminal disorders or in the selection of candidates to become semen donors.

  9. A new method to estimate parameters of linear compartmental models using artificial neural networks

    International Nuclear Information System (INIS)

    Gambhir, Sanjiv S.; Keppenne, Christian L.; Phelps, Michael E.; Banerjee, Pranab K.

    1998-01-01

    At present, the preferred tool for parameter estimation in compartmental analysis is an iterative procedure: weighted nonlinear regression. For a large number of applications, observed data can be fitted to sums of exponentials whose parameters are directly related to the rate constants/coefficients of the compartmental models. Since weighted nonlinear regression often has to be repeated for many different data sets, the process of fitting data from compartmental systems can be very time consuming. Furthermore, the minimization routine often converges to a local (as opposed to global) minimum. In this paper, we examine the possibility of using artificial neural networks instead of weighted nonlinear regression in order to estimate model parameters. We train simple feed-forward neural networks to produce as outputs the parameter values of a given model when kinetic data are fed to the networks' input layer. The artificial neural networks produce unbiased estimates and are orders of magnitude faster than regression algorithms. At noise levels typical of many real applications, the neural networks are found to produce lower variance estimates than weighted nonlinear regression in the estimation of parameters from mono- and biexponential models. These results are primarily due to the inability of weighted nonlinear regression to converge. These results establish that artificial neural networks are powerful tools for estimating parameters for simple compartmental models. (author)

  10. A study on the advanced methods for on-line signal processing by using artificial intelligence in nuclear power plants

    International Nuclear Information System (INIS)

    Kim, Wan Joo

    1993-02-01

    signals in a certain time interval for reducing the loads of the fusion part. The simulation results of LOCA in the simulator are demonstrated for the classification of the signal trend. The demonstration is performed for the transient states of a steam generator. Using the fuzzy memberships, the pre-processors classify the trend types in each time interval into three classes: increase, decrease, and steady, which are fuzzy to classify. The result, compared with an artificial neural network that has no pre-processor, shows that the training time is reduced and the outputs are seldom influenced by noise. Because most knowledge of human operators includes fuzzy concepts and words, a method like this is very helpful for computerizing the human expert's knowledge

  11. zlib compression library

    OpenAIRE

    Gailly, Jean-loup; Adler, Mark

    2004-01-01

    (taken from http://www.gzip.org/ on 2004-12-01) zlib is designed to be a free, general-purpose, legally unencumbered -- that is, not covered by any patents -- lossless data-compression library for use on virtually any computer hardware and operating system. The zlib data format is itself portable across platforms. Unlike the LZW compression method used in Unix compress(1) and in the GIF image format, the compression method currently used in zlib essentially never expands the data. (LZW ca...
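    The lossless round-trip property described above is easy to demonstrate with Python's standard-library binding to the same zlib library (a minimal sketch, not part of the record):

```python
import zlib

# Highly repetitive payload: an easy case for DEFLATE, the method zlib uses.
data = b"the compression method currently used in zlib never expands the data " * 100

compressed = zlib.compress(data, level=9)  # level 9 trades speed for ratio
restored = zlib.decompress(compressed)

assert restored == data             # lossless: the round trip is exact
assert len(compressed) < len(data)  # repetitive input compresses well
```

    Incompressible input (e.g. already-compressed or random bytes) will grow only by a small fixed header overhead, consistent with the "essentially never expands" claim above.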

  12. The Effects of Design Strength, Fly Ash Content and Curing Method on Compressive Strength of High Volume Fly Ash Concrete: A Design of Experimental

    OpenAIRE

    Solikin Mochamad; Setiawan Budi

    2017-01-01

    High volume fly ash concrete becomes one of the alternatives to produce green concrete as it uses waste material and significantly reduces the utilization of Portland cement in concrete production. Although using less cement, its compressive strength is comparable to that of ordinary Portland cement (hereafter OPC) concrete and its durability increases significantly. This paper reports an investigation on the effect of design strength, fly ash content and curing method on the compressive strength of High Volume Fly ...

  13. Simulation of moving boundaries interacting with compressible reacting flows using a second-order adaptive Cartesian cut-cell method

    Science.gov (United States)

    Muralidharan, Balaji; Menon, Suresh

    2018-03-01

    A high-order adaptive Cartesian cut-cell method, developed in the past by the authors [1] for simulation of compressible viscous flow over static embedded boundaries, is now extended for reacting flow simulations over moving interfaces. The main difficulty related to simulation of moving boundary problems using immersed boundary techniques is the loss of conservation of mass, momentum and energy during the transition of numerical grid cells from solid to fluid and vice versa. Gas phase reactions near solid boundaries can produce huge source terms to the governing equations, which if not properly treated for moving boundaries, can result in inaccuracies in numerical predictions. The small cell clustering algorithm proposed in our previous work is now extended to handle moving boundaries enforcing strict conservation. In addition, the cell clustering algorithm also preserves the smoothness of solution near moving surfaces. A second order Runge-Kutta scheme where the boundaries are allowed to change during the sub-time steps is employed. This scheme improves the time accuracy of the calculations when the body motion is driven by hydrodynamic forces. Simple one dimensional reacting and non-reacting studies of moving piston are first performed in order to demonstrate the accuracy of the proposed method. Results are then reported for flow past moving cylinders at subsonic and supersonic velocities in a viscous compressible flow and are compared with theoretical and previously available experimental data. The ability of the scheme to handle deforming boundaries and interaction of hydrodynamic forces with rigid body motion is demonstrated using different test cases. Finally, the method is applied to investigate the detonation initiation and stabilization mechanisms on a cylinder and a sphere, when they are launched into a detonable mixture. The effect of the filling pressure on the detonation stabilization mechanisms over a hyper-velocity sphere launched into a hydrogen

  14. Stabilization of Gob-Side Entry with an Artificial Side for Sustaining Mining Work

    OpenAIRE

    Hong-sheng Wang; Dong-sheng Zhang; Lang Liu; Wei-bin Guo; Gang-wei Fan; KI-IL Song; Xu-feng Wang

    2016-01-01

    A concrete artificial side (AS) is introduced to stabilize a gob-side entry (GSE). To evaluate the stability of the AS, a uniaxial compression failure experiment was conducted with large and small-scale specimens. The distribution characteristics of the shear stress were obtained from a numerical simulation. Based on the failure characteristics and the variation of the shear stress, a failure criterion was determined and implemented in the strengthening method for the artificial side. In an e...

  15. Entropy stable high order discontinuous Galerkin methods for ideal compressible MHD on structured meshes

    Science.gov (United States)

    Liu, Yong; Shu, Chi-Wang; Zhang, Mengping

    2018-02-01

    We present a discontinuous Galerkin (DG) scheme with suitable quadrature rules [15] for ideal compressible magnetohydrodynamic (MHD) equations on structured meshes. The semi-discrete scheme is analyzed to be entropy stable by using the symmetrizable version of the equations as introduced by Godunov [32], the entropy stable DG framework with suitable quadrature rules [15], the entropy conservative flux in [14] inside each cell and the entropy dissipative approximate Godunov type numerical flux at cell interfaces to make the scheme entropy stable. The main difficulty in the generalization of the results in [15] is the appearance of the non-conservative "source terms" added in the modified MHD model introduced by Godunov [32], which do not exist in the general hyperbolic system studied in [15]. Special care must be taken to discretize these "source terms" adequately so that the resulting DG scheme satisfies entropy stability. Total variation diminishing / bounded (TVD/TVB) limiters and bound-preserving limiters are applied to control spurious oscillations. We demonstrate the accuracy and robustness of this new scheme on standard MHD examples.

  16. Comparison of Methods to Predict Lower Bound Buckling Loads of Cylinders Under Axial Compression

    Science.gov (United States)

    Haynie, Waddy T.; Hilburger, Mark W.

    2010-01-01

    Results from a numerical study of the buckling response of two different orthogrid stiffened circular cylindrical shells with initial imperfections and subjected to axial compression are used to compare three different lower bound buckling load prediction techniques. These lower bound prediction techniques assume different imperfection types and include an imperfection based on a mode shape from an eigenvalue analysis, an imperfection caused by a lateral perturbation load, and an imperfection in the shape of a single stress-free dimple. The STAGS finite element code is used for the analyses. Responses of the cylinders for ranges of imperfection amplitudes are considered, and the effect of each imperfection is compared to the response of a geometrically perfect cylinder. Similar behavior was observed for shells that include a lateral perturbation load and a single dimple imperfection, and the results indicate that the predicted lower bounds are much less conservative than the corresponding results for the cylinders with the mode shape imperfection considered herein. In addition, the lateral perturbation technique and the single dimple imperfection produce response characteristics that are physically meaningful and can be validated via testing.

  17. An example of the use of the DELPHI method: future prospects of artificial heart techniques in France

    International Nuclear Information System (INIS)

    Derian, Jean-Claude; Morize, Francoise; Vernejoul, Pierre de; Vial, Renee

    1971-01-01

    The artificial heart is still only a research project surrounded by numerous uncertainties, which make it very difficult to estimate, at the moment, the possibilities for future development of this technique in France. A systematic analysis of the hazards which characterize this project has been undertaken in the following report: narrowing these uncertainties required taking into account the opinions of specialists concerned with this type of research and its outcome. We have achieved this by adapting an investigation technique which is still unusual in France, the DELPHI method. This adaptation has allowed the confrontation and statistical aggregation of the opinions given by a body of a hundred experts, who were consulted through a program of sequential interrogations which examined, in particular, the probable date of the research outcome, the clinical cases which require the use of an artificial heart, as well as the probable future needs. After having taken into account the economic constraints, we can deduce from these results the probable amount of plutonium-238 needed in the hypothesis where an isotopic generator would be retained for the energy supply of the artificial heart [fr

  18. Long-time stability effects of quadrature and artificial viscosity on nodal discontinuous Galerkin methods for gas dynamics

    Science.gov (United States)

    Durant, Bradford; Hackl, Jason; Balachandar, Sivaramakrishnan

    2017-11-01

    Nodal discontinuous Galerkin schemes present an attractive approach to robust high-order solution of the equations of fluid mechanics, but remain accompanied by subtle challenges in their consistent stabilization. The effect of quadrature choices (full mass matrix vs spectral elements), over-integration to manage aliasing errors, and explicit artificial viscosity on the numerical solution of a steady homentropic vortex are assessed over a wide range of resolutions and polynomial orders using quadrilateral elements. In both stagnant and advected vortices in periodic and non-periodic domains the need arises for explicit stabilization beyond the numerical surface fluxes of discontinuous Galerkin spectral elements. Artificial viscosity via the entropy viscosity method is assessed as a stabilizing mechanism. It is shown that the regularity of the artificial viscosity field is essential to its use for long-time stabilization of small-scale features in nodal discontinuous Galerkin solutions of the Euler equations of gas dynamics. Supported by the Department of Energy Predictive Science Academic Alliance Program Contract DE-NA0002378.

  19. Comparison of Kinetic-based and Artificial Neural Network Modeling Methods for a Pilot Scale Vacuum Gas Oil Hydrocracking Reactor

    Directory of Open Access Journals (Sweden)

    Sepehr Sadighi

    2013-12-01

    Full Text Available An artificial neural network (ANN) and kinetic-based models for a pilot scale vacuum gas oil (VGO) hydrocracking plant are presented in this paper. Reported experimental data in the literature were used to develop, train, and check these models. The proposed models are capable of predicting the yield of all main hydrocracking products including dry gas, light naphtha, heavy naphtha, kerosene, diesel, and unconverted VGO (residue). Results showed that the kinetic-based and artificial neural models have specific capabilities to predict the yield of hydrocracking products. The former is able to accurately predict the yield of lighter products, i.e. light naphtha, heavy naphtha and kerosene. However, the ANN model is capable of predicting the yields of diesel and residue with higher precision. The comparison shows that the ANN model is superior to the kinetic-based models. © 2013 BCREC UNDIP. All rights reserved. Received: 9th April 2013; Revised: 13th August 2013; Accepted: 18th August 2013. [How to Cite: Sadighi, S., Zahedi, G.R. (2013). Comparison of Kinetic-based and Artificial Neural Network Modeling Methods for a Pilot Scale Vacuum Gas Oil Hydrocracking Reactor. Bulletin of Chemical Reaction Engineering & Catalysis, 8 (2): 125-136. doi:10.9767/bcrec.8.2.4722.125-136] [Permalink/DOI: http://dx.doi.org/10.9767/bcrec.8.2.4722.125-136]

  20. Application of the collapsing method to acoustic emissions in a rock salt sample during a triaxial compression experiment

    International Nuclear Information System (INIS)

    Manthei, G.; Eisenblaetter, J.; Moriya, H.; Niitsuma, H.; Jones, R.H.

    2003-01-01

    Collapsing is a relatively new method. It is used for detecting patterns and structures in blurred and cloudy pictures of multiple soundings. In the case described here, the measurements were made in a very small region with a length of only a few decimeters. The events were registered during a triaxial compression experiment on a compact block of rock salt. The collapsing method showed a cellular structure of the salt block across the whole length of the test piece. The cells had a length of several cm, enclosing several grains of salt with an average grain size of less than one cm. In view of the fact that not all cell walls corresponded to acoustic emission events, it was assumed that only those grain boundaries are activated that are oriented at a favourable angle to the field of tension of the test piece [de

  1. A Novel Compressed Sensing Method for Magnetic Resonance Imaging: Exponential Wavelet Iterative Shrinkage-Thresholding Algorithm with Random Shift

    Directory of Open Access Journals (Sweden)

    Yudong Zhang

    2016-01-01

    Full Text Available Aim. It can help improve the hospital throughput to accelerate magnetic resonance imaging (MRI) scanning. Patients will benefit from less waiting time. Task. In the last decade, various rapid MRI techniques on the basis of compressed sensing (CS) were proposed. However, both computation time and reconstruction quality of traditional CS-MRI did not meet the requirement of clinical use. Method. In this study, a novel method was proposed with the name of exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS). It is composed of three successful components: (i) exponential wavelet transform, (ii) iterative shrinkage-thresholding algorithm, and (iii) random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the least mean absolute error, the least mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches.

  2. A Novel Compressed Sensing Method for Magnetic Resonance Imaging: Exponential Wavelet Iterative Shrinkage-Thresholding Algorithm with Random Shift

    Science.gov (United States)

    Zhang, Yudong; Yang, Jiquan; Yang, Jianfei; Liu, Aijun; Sun, Ping

    2016-01-01

    Aim. It can help improve the hospital throughput to accelerate magnetic resonance imaging (MRI) scanning. Patients will benefit from less waiting time. Task. In the last decade, various rapid MRI techniques on the basis of compressed sensing (CS) were proposed. However, both computation time and reconstruction quality of traditional CS-MRI did not meet the requirement of clinical use. Method. In this study, a novel method was proposed with the name of exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS). It is composed of three successful components: (i) exponential wavelet transform, (ii) iterative shrinkage-thresholding algorithm, and (iii) random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the least mean absolute error, the least mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches. PMID:27066068
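    The iterative shrinkage-thresholding component named in the EWISTARS record is a standard sparse-recovery building block. The sketch below is generic ISTA for l1-regularized least squares in NumPy, with all names our own; it deliberately omits the exponential wavelet transform and random shift that the paper adds on top:

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam * ||x||_1 (the 'shrinkage' step)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista(A, y, lam=0.1, iters=200):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by gradient + shrinkage steps."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

    For A equal to the identity, the minimizer is soft_threshold(y, lam), which the iteration reaches in a single step; this makes a convenient sanity check.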

  3. Accretion rate in mangrove sediments at Sungai Miang, Pahang, Malaysia: 230Th excess versus artificial horizon marker method

    International Nuclear Information System (INIS)

    Kamaruzzaman Yunus; Jamil Tajam; Hasrizal Shaari; Noor Azhar Mohd Shazili; Misbahul Mohd Amin

    2008-01-01

    Mangroves have enormous ecological value, and one of their important roles is to act as efficient traps for the sediment that is predominantly supplied to the oceans by rivers and the atmosphere. Applying the 230 Th excess method, an average accretion rate of 0.54 cm yr -1 was obtained. This is comparable to the result of an artificial horizon marker method, which also gave an average of 0.54 cm yr -1 . The 230 Th excess method provides a rapid and simple means of evaluating 230 Th excess accumulation histories in sediment cores. Sample preparation is also significantly simplified, thus providing a relatively quick and easy method for the determination of the accretion rate in mangrove areas. (author)

  4. Estimation of the groundwater recharge in laterite using the artificial tritium method

    International Nuclear Information System (INIS)

    Castro Rubio Poli, D. de; Kimmelman e Silva, A.A.; Pfisterer, U.

    1990-01-01

    An estimation of the groundwater recharge was made, for the first time, in laterite, which is an alteration product of dunite. This work was carried out at the city of Cajati-Jacupiranga, situated in the Ribeira Valley, state of Sao Paulo. The moisture migration in unsaturated zones was analyzed using water tagged with artificial tritium. In the place studied, an annual recharge of 1070 mm was estimated. This value corresponds to 65% of the local precipitation (1650 mm/year). The difference can be considered as a loss through evaporation, evapotranspiration and run off. (author) [pt

  5. Discontinuous Galerkin finite element method with anisotropic local grid refinement for inviscid compressible flows

    NARCIS (Netherlands)

    van der Vegt, Jacobus J.W.; van der Ven, H.

    1998-01-01

    A new discretization method for the three-dimensional Euler equations of gas dynamics is presented, which is based on the discontinuous Galerkin finite element method. Special attention is paid to an efficient implementation of the discontinuous Galerkin method that minimizes the number of flux

  6. Fluid-driven origami-inspired artificial muscles

    Science.gov (United States)

    Li, Shuguang; Vogt, Daniel M.; Rus, Daniela; Wood, Robert J.

    2017-12-01

    Artificial muscles hold promise for safe and powerful actuation for myriad common machines and robots. However, the design, fabrication, and implementation of artificial muscles are often limited by their material costs, operating principle, scalability, and single-degree-of-freedom contractile actuation motions. Here we propose an architecture for fluid-driven origami-inspired artificial muscles. This concept requires only a compressible skeleton, a flexible skin, and a fluid medium. A mechanical model is developed to explain the interaction of the three components. A fabrication method is introduced to rapidly manufacture low-cost artificial muscles using various materials and at multiple scales. The artificial muscles can be programed to achieve multiaxial motions including contraction, bending, and torsion. These motions can be aggregated into systems with multiple degrees of freedom, which are able to produce controllable motions at different rates. Our artificial muscles can be driven by fluids at negative pressures (relative to ambient). This feature makes actuation safer than most other fluidic artificial muscles that operate with positive pressures. Experiments reveal that these muscles can contract over 90% of their initial lengths, generate stresses of ˜600 kPa, and produce peak power densities over 2 kW/kg—all equal to, or in excess of, natural muscle. This architecture for artificial muscles opens the door to rapid design and low-cost fabrication of actuation systems for numerous applications at multiple scales, ranging from miniature medical devices to wearable robotic exoskeletons to large deployable structures for space exploration.

  7. The research of optimal selection method for wavelet packet basis in compressing the vibration signal of a rolling bearing in fans and pumps

    International Nuclear Information System (INIS)

    Hao, W; Jinji, G

    2012-01-01

    Compressing the vibration signal of a rolling bearing has important significance for the wireless monitoring and remote diagnosis of fans and pumps, which are widely used in the petrochemical industry. In this paper, according to the characteristics of the vibration signal of a rolling bearing, a compression method based on the optimal selection of the wavelet packet basis is proposed. We analyze several main attributes of wavelet packet bases and their effect on the compression of the vibration signal of a rolling bearing using the wavelet packet transform at various compression ratios, and propose a method to precisely select a wavelet packet basis. Through an actual signal, we come to the conclusion that an orthogonal wavelet packet basis with a low vanishing moment should be used to compress the vibration signal of a rolling bearing to get an accurate energy proportion between the feature bands in the spectrum of the reconstructed signal. Among these low-vanishing-moment orthogonal wavelet packet bases, the 'coif' wavelet packet basis obtains the best signal-to-noise ratio at the same compression ratio, owing to its best symmetry.
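    The trade-off this record studies, compression ratio versus fidelity of the reconstructed signal, can be illustrated with any orthogonal transform. The sketch below keeps only the largest-magnitude coefficients of a real FFT (standing in for a wavelet packet transform, which plain NumPy does not provide) and measures the resulting signal-to-noise ratio; all function names are our own:

```python
import numpy as np

def compress_fft(signal, keep_ratio):
    """Transform-domain compression: zero all but the largest-magnitude coefficients."""
    coeffs = np.fft.rfft(signal)
    k = max(1, int(len(coeffs) * keep_ratio))
    idx = np.argsort(np.abs(coeffs))[::-1][:k]  # indices of the k largest coefficients
    kept = np.zeros_like(coeffs)
    kept[idx] = coeffs[idx]
    return np.fft.irfft(kept, n=len(signal))

def snr_db(original, recon):
    """Signal-to-noise ratio of a reconstruction, in decibels."""
    noise = original - recon
    return 10 * np.log10(np.sum(original**2) / np.sum(noise**2))
```

    A pure tone reconstructed from 5% of its coefficients retains a very high SNR, since its energy is concentrated in a single frequency bin; a broadband bearing signal degrades faster, which is why the choice of basis matters.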

  8. Supervised artificial neural network-based method for conversion of solar radiation data (case study: Algeria)

    Science.gov (United States)

    Laidi, Maamar; Hanini, Salah; Rezrazi, Ahmed; Yaiche, Mohamed Redha; El Hadj, Abdallah Abdallah; Chellali, Farouk

    2017-04-01

    In this study, a backpropagation artificial neural network (BP-ANN) model is used as an alternative approach to predict solar radiation on tilted surfaces (SRT) using a number of variables involved in the physical process. These variables are the latitude of the site, mean temperature and relative humidity, Linke turbidity factor and Angstrom coefficient, extraterrestrial solar radiation, solar radiation data measured on horizontal surfaces (SRH), and solar zenith angle. Experimental solar radiation data from 13 stations spread all over Algeria over the year 2004 were used for training/validation and testing of the artificial neural networks (ANNs), and one further station was used to assess the interpolation ability of the designed ANN. The ANN model was trained, validated, and tested using 60, 20, and 20 % of all data, respectively. The configuration 8-35-1 (8 inputs, 35 hidden, and 1 output neurons) presented an excellent agreement between the prediction and the experimental data during the test stage, with a determination coefficient of 0.99 and a root mean squared error of 5.75 Wh/m2, considering a three-layer feedforward backpropagation neural network with the Levenberg-Marquardt training algorithm and hyperbolic tangent sigmoid and linear transfer functions at the hidden and output layers, respectively. This novel model could be used by researchers or scientists to design high-efficiency solar devices that are usually tilted at an optimum angle to increase the solar radiation incident on the surface.
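
    The network type described (three-layer feed-forward, tanh hidden units, linear output, 8-35-1 topology) can be sketched in a few lines of numpy. This is an illustrative sketch only: it trains by plain gradient descent on synthetic data, whereas the paper used Levenberg-Marquardt training on measured irradiance inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 8))          # 8 stand-in input variables
y = np.sin(X.sum(axis=1, keepdims=True))           # synthetic target in place of SRT

n_hidden = 35                                      # mirrors the 8-35-1 topology
W1 = rng.normal(0.0, 0.3, size=(8, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.3, size=(n_hidden, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(X):
    H = np.tanh(X @ W1 + b1)                       # hyperbolic tangent sigmoid hidden layer
    return H, H @ W2 + b2                          # linear output layer

mse0 = np.mean((forward(X)[1] - y) ** 2)           # error at random initialisation
for _ in range(500):
    H, out = forward(X)
    err = out - y                                  # dE/d(out) for E = mean squared error
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H ** 2)             # backpropagate through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
mse_final = np.mean((forward(X)[1] - y) ** 2)      # training error after descent
```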

  9. DXAGE: A New Method for Age at Death Estimation Based on Femoral Bone Mineral Density and Artificial Neural Networks.

    Science.gov (United States)

    Navega, David; Coelho, João d'Oliveira; Cunha, Eugénia; Curate, Francisco

    2018-03-01

    Age at death estimation in adult skeletons is hampered by, among other factors, the unremarkable correlation of bone estimators with chronological age, implementation of inappropriate statistical techniques, observer error, and skeletal incompleteness or destruction. Therefore, it is beneficial to consider alternative methods to assess age at death in adult skeletons. The decrease in bone mineral density with age was explored to generate a method to assess age at death in human remains. A connectionist computational approach, artificial neural networks, was employed to model femur densitometry data gathered in 100 female individuals from the Coimbra Identified Skeletal Collection. Bone mineral density declines consistently with age and the method performs appropriately, with mean absolute differences between known and predicted age ranging from 9.19 to 13.49 years. The proposed method, DXAGE, was implemented online to streamline age estimation. This preliminary study highlights the value of densitometry to assess age at death in human remains. © 2017 American Academy of Forensic Sciences.

  10. Compressed sensing & sparse filtering

    CERN Document Server

    Carmi, Avishy Y; Godsill, Simon J

    2013-01-01

    This book is aimed at presenting concepts, methods and algorithms able to cope with undersampled and limited data. One such trend that recently gained popularity and to some extent revolutionised signal processing is compressed sensing. Compressed sensing builds upon the observation that many signals in nature are nearly sparse (or compressible, as they are normally referred to) in some domain, and consequently they can be reconstructed with high accuracy from far fewer observations than traditionally held to be necessary. Apart from compressed sensing this book contains other related app

  11. Compressive force-path method unified ultimate limit-state design of concrete structures

    CERN Document Server

    Kotsovos, Michael D

    2014-01-01

    This book presents a method which simplifies and unifies the design of reinforced concrete (RC) structures and is applicable to any structural element under both normal and seismic loading conditions. The proposed method has a sound theoretical basis and is expressed in a unified form applicable to all structural members, as well as their connections. It is applied in practice through the use of simple failure criteria derived from first principles, without the need for calibration against experimental data. The method is capable of predicting not only load-carrying capacity but also the locations and modes of failure, as well as safeguarding compliance with structural performance code requirements. In this book, the concepts underlying the method are first presented for the case of simply supported RC beams. The application of the method is progressively extended so as to cover all common structural elements. For each structural element considered, evidence of the validity of the proposed method is presented t...

  12. The development of the distraction-compression osteogenesis method in orthopedic surgery in Poland.

    Science.gov (United States)

    Wall, Andrzej; Orzechowski, Wiktor

    2002-06-30

    The dynamic development of the Ilizarov method around the world and in Poland during the last decade has been made possible by the scientific basis of its application, founded on universal laws of physiology and biomechanics. The Ilizarov method is beyond a doubt the treatment method of choice in many serious disorders of the locomotor apparatus, and is of extremely high value in the treatment of complicated open fractures with concomitant diffuse injuries of soft tissue, and not just in Poland's flagship orthopedic centers. Even though the complication rate is fairly high, the method is well tolerated by the patients, and the majority of failures can be effectively treated by the same method. This article outlines the history of limb lengthening in the world and the history of the development of the Ilizarov method in Poland.

  13. Website-based PNG image steganography using the modified Vigenere Cipher, least significant bit, and dictionary based compression methods

    Science.gov (United States)

    Rojali, Salman, Afan Galih; George

    2017-08-01

    Along with the development of information technology to meet people's needs, various harmful and hard-to-avoid actions have emerged. One such action is data theft. Therefore, this study discusses cryptography and steganography, which aim to overcome this problem. This study uses the Modified Vigenere Cipher, Least Significant Bit, and Dictionary Based Compression methods. To determine the performance of the study, the Peak Signal to Noise Ratio (PSNR) method is used as an objective measure and the Mean Opinion Score (MOS) method as a subjective measure; the performance is also compared to other methods such as Spread Spectrum and Pixel Value Differencing. After comparison, it can be concluded that this study provides better performance than the other methods (Spread Spectrum and Pixel Value Differencing), with a range of MSE values (0.0191622-0.05275) and PSNR (60.909 to 65.306) for a hidden file size of 18 kb, and a MOS value range (4.214 to 4.722), i.e., image quality approaching very good.
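
    The Least Significant Bit stage of such a scheme can be sketched in a few lines of numpy, together with the MSE/PSNR metric quoted above. The cover image and message here are illustrative; the Vigenere encryption and dictionary-based compression stages of the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)   # stand-in cover image

def embed_lsb(cover, bits):
    """Write one message bit into the least significant bit of each pixel."""
    stego = cover.copy()
    flat = stego.ravel()
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return stego

def extract_lsb(stego, n_bits):
    """Read the message bits back out of the pixel LSBs."""
    return stego.ravel()[:n_bits] & 1

bits = np.unpackbits(np.frombuffer(b"hi", dtype=np.uint8))
stego = embed_lsb(cover, bits)
recovered = np.packbits(extract_lsb(stego, bits.size)).tobytes()

# Distortion metrics used in the record: MSE and PSNR (dB) for 8-bit images
mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")
```

    Each pixel changes by at most one grey level, which is why LSB embedding yields the high PSNR values reported.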

  14. The Discrete Equation Method (DEM) for Fully Compressible Two-Phase Flows in Ducts of Spatially Varying Cross-Section

    Energy Technology Data Exchange (ETDEWEB)

    Ray A. Berry; Richard Saurel; Tamara Grimmett

    2009-07-01

    Typically, multiphase modeling begins with an averaged (or homogenized) system of partial differential equations (traditionally ill-posed) then discretizes this system to form a numerical scheme. Assuming that the ill-posedness problem is avoided by using a well-posed formulation such as the seven-equation model, this presents problems for the numerical approximation of non-conservative terms at discontinuities (interfaces, shocks) as well as unwieldy treatment of fluxes with seven waves. To solve interface problems without conservation errors and to avoid this questionable determination of average variables and the numerical approximation of the non-conservative terms associated with two-velocity mixture flows, we employ a new homogenization method known as the Discrete Equations Method (DEM). Contrary to conventional methods, the averaged equations for the mixture are not used, and this method directly obtains a (well-posed) discrete equation system from the single-phase system to produce a numerical scheme which accurately computes fluxes for arbitrary numbers of phases and solves non-conservative products. The method effectively uses a sequence of single-phase Riemann equation solves. Phase interactions are accounted for by Riemann solvers at each interface. Flow topology can change with changing expressions for the fluxes. Non-conservative terms are correctly approximated. Some of the closure relations missing from the traditional approach are automatically obtained. Lastly, we can oftentimes identify the continuous equation system, resulting from taking the continuous limit with weak wave assumptions, of the discrete equations. This can be very useful from a theoretical standpoint. As a first step toward implicit integration of the DEM method in multidimensions, in this paper we construct a DEM model for the flow of two compressible phases in 1-D ducts of spatially varying cross-section to test this approach. To relieve time step size restrictions due to

  15. River flow estimation from upstream flow records by artificial intelligence methods

    Science.gov (United States)

    Turan, M. Erkan; Yurdusev, M. Ali

    2009-05-01

    Water resources management has become increasingly crucial as available water resources are depleted while water consumption rises. Effective management relies on accurate and complete information about the river on which a project will be constructed. Artificial intelligence techniques are often and successfully used to complete unmeasured data. In this study, feed-forward back-propagation neural networks, a generalized regression neural network, and fuzzy logic are used to estimate unmeasured data using records from the four runoff gauge stations on the Birs River in Switzerland. The performances of these models are measured by the mean square error, determination coefficients, and efficiency coefficients to choose the best-fitting model.

  16. Compression-Based Compressed Sensing

    OpenAIRE

    Rezagah, Farideh Ebrahim; Jalali, Shirin; Erkip, Elza; Poor, H. Vincent

    2016-01-01

    Modern compression algorithms exploit complex structures that are present in signals to describe them very efficiently. On the other hand, the field of compressed sensing is built upon the observation that "structured" signals can be recovered from their under-determined set of linear projections. Currently, there is a large gap between the complexity of the structures studied in the area of compressed sensing and those employed by the state-of-the-art compression codes. Recent results in the...
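
    The claim that "structured" signals can be recovered from an under-determined set of linear projections can be illustrated with orthogonal matching pursuit, one standard compressed-sensing decoder. The matrix sizes and sparsity level below are arbitrary choices for the sketch, not taken from the book.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 128, 5                       # ambient dimension, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(0.0, 1.0, size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)    # random Gaussian sensing matrix
y = A @ x                                   # m < n linear projections of the signal

# Orthogonal matching pursuit: greedily pick the column most correlated
# with the residual, then re-fit by least squares on the chosen support.
support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef
x_hat = np.zeros(n)
x_hat[support] = coef
```

    With noiseless measurements and this much oversampling relative to the sparsity, the k-sparse signal is typically recovered exactly, which is the observation compressed sensing is built on.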

  17. Comparing and validating methods of reading instruction using behavioural and neural findings in an artificial orthography.

    Science.gov (United States)

    Taylor, J S H; Davis, Matthew H; Rastle, Kathleen

    2017-06-01

    There is strong scientific consensus that emphasizing print-to-sound relationships is critical when learning to read alphabetic languages. Nevertheless, reading instruction varies across English-speaking countries, from intensive phonic training to multicuing environments that teach sound- and meaning-based strategies. We sought to understand the behavioral and neural consequences of these differences in relative emphasis. We taught 24 English-speaking adults to read 2 sets of 24 novel words (e.g., /buv/, /sig/), written in 2 different unfamiliar orthographies. Following pretraining on oral vocabulary, participants learned to read the novel words over 8 days. Training in 1 language was biased toward print-to-sound mappings while training in the other language was biased toward print-to-meaning mappings. Results showed striking benefits of print-sound training on reading aloud, generalization, and comprehension of single words. Univariate analyses of fMRI data collected at the end of training showed that print-meaning relative to print-sound training increased neural effort in dorsal pathway regions involved in reading aloud. Conversely, activity in ventral pathway brain regions involved in reading comprehension was no different following print-meaning versus print-sound training. Multivariate analyses validated our artificial language approach, showing high similarity between the spatial distribution of fMRI activity during artificial and English word reading. Our results suggest that early literacy education should focus on the systematicities present in print-to-sound relationships in alphabetic languages, rather than teaching meaning-based strategies, in order to enhance both reading aloud and comprehension of written words. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  18. A Compressed Sensing Based Method for Reducing the Sampling Time of A High Resolution Pressure Sensor Array System.

    Science.gov (United States)

    Sun, Chenglu; Li, Wei; Chen, Wei

    2017-08-10

    For extracting the pressure distribution image and respiratory waveform unobtrusively and comfortably, we proposed a smart mat which utilizes a flexible pressure sensor array, printed electrodes, and a novel soft seven-layer structure to monitor this physiological information. However, obtaining a high-resolution pressure distribution and a more accurate respiratory waveform requires more time to acquire the signals of all the pressure sensors embedded in the smart mat. In order to reduce the sampling time while keeping the same resolution and accuracy, a novel method based on compressed sensing (CS) theory was proposed. By utilizing the CS based method, 40% of the sampling time can be saved by acquiring only about one-third of the original sampling points. Several experiments were then carried out to validate the performance of the CS based method. While less than one-third of the original sampling points were measured, the correlation coefficient between the reconstructed respiratory waveform and the original waveform reached 0.9078, and the accuracy of the respiratory rate (RR) extracted from the reconstructed waveform reached 95.54%. The experimental results demonstrated that the novel method fits the high-resolution smart mat system and is a viable option for reducing the sampling time of the pressure sensor array.

  19. A Compressed Sensing Based Method for Reducing the Sampling Time of A High Resolution Pressure Sensor Array System

    Directory of Open Access Journals (Sweden)

    Chenglu Sun

    2017-08-01

    Full Text Available For extracting the pressure distribution image and respiratory waveform unobtrusively and comfortably, we proposed a smart mat which utilizes a flexible pressure sensor array, printed electrodes, and a novel soft seven-layer structure to monitor this physiological information. However, obtaining a high-resolution pressure distribution and a more accurate respiratory waveform requires more time to acquire the signals of all the pressure sensors embedded in the smart mat. In order to reduce the sampling time while keeping the same resolution and accuracy, a novel method based on compressed sensing (CS) theory was proposed. By utilizing the CS based method, 40% of the sampling time can be saved by acquiring only about one-third of the original sampling points. Several experiments were then carried out to validate the performance of the CS based method. While less than one-third of the original sampling points were measured, the correlation coefficient between the reconstructed respiratory waveform and the original waveform reached 0.9078, and the accuracy of the respiratory rate (RR) extracted from the reconstructed waveform reached 95.54%. The experimental results demonstrated that the novel method fits the high-resolution smart mat system and is a viable option for reducing the sampling time of the pressure sensor array.

  20. Application of discontinuous Galerkin method for solving a compressible five-equation two-phase flow model

    Directory of Open Access Journals (Sweden)

    M. Rehan Saleem

    2018-03-01

    Full Text Available In this article, a reduced five-equation two-phase flow model is numerically investigated. The formulation of the model is based on the conservation and energy exchange laws. The model is non-conservative and the governing equations contain two equations for the mass conservation, one for the overall momentum and one for the total energy. The fifth equation is the energy equation for one of the two phases; it includes a source term on the right-hand side for incorporating energy exchange between the two fluids in the form of mechanical and thermodynamical work. A Runge-Kutta discontinuous Galerkin finite element method is applied to solve the model equations. The main attractive features of the proposed method include its formal higher order accuracy, its nonlinear stability, its ability to handle complicated geometries, and its ability to capture sharp discontinuities or strong gradients in the solutions without producing spurious oscillations. The proposed method is robust and well suited for large-scale time-dependent computational problems. Several case studies of two-phase flows are presented. For validation and comparison of the results, the same model equations are also solved by using a staggered central scheme. It was found that the discontinuous Galerkin scheme produces better results than the staggered central scheme. Keywords: Two-phase compressible flows, Non-conservative system, Shock discontinuities, Discontinuous Galerkin method, Central scheme
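
    The central-scheme baseline mentioned above can be illustrated, in its simplest first-order (Lax-Friedrichs) form, on the scalar Burgers equation rather than the full five-equation two-phase model. Grid size, CFL number, and initial data below are arbitrary choices for the sketch.

```python
import numpy as np

# First-order central (Lax-Friedrichs) update for Burgers' equation u_t + (u^2/2)_x = 0
nx, cfl = 200, 0.4
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = 1.0 / nx
u = np.where(x < 0.5, 1.0, 0.0)                   # Riemann data: a right-moving shock
t = 0.0
while t < 0.2:
    dt = cfl * dx / max(np.abs(u).max(), 1e-12)   # CFL-limited time step
    f = 0.5 * u ** 2                              # Burgers flux
    # central average plus centered flux difference (periodic boundaries for brevity)
    u = 0.5 * (np.roll(u, 1) + np.roll(u, -1)) - dt / (2.0 * dx) * (np.roll(f, -1) - np.roll(f, 1))
    t += dt
# the shock starts at x = 0.5 and travels at speed 1/2, sitting near x = 0.6 at t = 0.2
```

    The scheme captures the shock without oscillations but smears it over several cells; higher-order central and discontinuous Galerkin schemes sharpen exactly this kind of transition, which is the comparison the record reports.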

  1. Spatial interpolation and radiological mapping of ambient gamma dose rate by using artificial neural networks and fuzzy logic methods.

    Science.gov (United States)

    Yeşilkanat, Cafer Mert; Kobya, Yaşar; Taşkın, Halim; Çevik, Uğur

    2017-09-01

    The aim of this study was to determine the spatial risk dispersion of the ambient gamma dose rate (AGDR) by using both artificial neural network (ANN) and fuzzy logic (FL) methods, compare the performances of the methods, make dose estimations for intermediate stations with no previous measurements, and create dose rate risk maps of the study area. In order to determine the dose distribution by using artificial neural networks, two main network families and five different network structures were used: feed-forward ANNs (multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and quantile regression neural network (QRNN)) and recurrent ANNs (Jordan networks (JN) and Elman networks (EN)). In the evaluation of estimation performance obtained for the test data, all models appear to give similar results. According to the cross-validation results obtained for explaining the AGDR distribution, Pearson's r coefficients were calculated as 0.94, 0.91, 0.89, 0.91, 0.91 and 0.92 and RMSE values were calculated as 34.78, 43.28, 63.92, 44.86, 46.77 and 37.92 for MLP, RBFNN, QRNN, JN, EN and FL, respectively. In addition, spatial risk maps showing distributions of AGDR of the study area were created by all models and results were compared with geological, topological and soil structure. Copyright © 2017 Elsevier Ltd. All rights reserved.
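
    The two cross-validation scores quoted above, Pearson's r and RMSE, are computed as follows; the dose-rate values in the example are hypothetical, not measurements from the study.

```python
import numpy as np

def pearson_r(obs, pred):
    """Pearson correlation coefficient between observed and predicted values."""
    o, p = obs - obs.mean(), pred - pred.mean()
    return float((o @ p) / np.sqrt((o @ o) * (p @ p)))

def rmse(obs, pred):
    """Root mean square error, in the units of the measurements."""
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

# Hypothetical dose-rate values (e.g. in nGy/h) at five held-out stations
obs = np.array([100.0, 120.0, 90.0, 150.0, 110.0])
pred = np.array([105.0, 118.0, 95.0, 140.0, 112.0])
r, e = pearson_r(obs, pred), rmse(obs, pred)
```

    Note that r measures only the linear association while RMSE carries the units of the dose rate, which is why the record reports both.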

  2. Determination of penetration depth at high velocity impact using finite element method and artificial neural network tools

    Directory of Open Access Journals (Sweden)

    Namık KılıÇ

    2015-06-01

    Full Text Available Determination of the ballistic performance of an armor solution is a complicated task that has evolved significantly with the application of finite element methods (FEM) in this research field. Traditional armor design studies performed with FEM require sophisticated procedures and intensive computational effort; therefore, simpler yet accurate numerical approaches are always worthwhile to decrease armor development time. This study aims to apply a hybrid method using FEM simulation and artificial neural network (ANN) analysis to approximate the ballistic limit thickness for armor steels. To achieve this objective, a predictive model based on artificial neural networks is developed to determine the ballistic resistance of high hardness armor steels against 7.62 mm armor piercing ammunition. In this methodology, the FEM simulations are used to create training cases for Multilayer Perceptron (MLP) three-layer networks. In order to validate the FE simulation methodology, ballistic shot tests on a 20 mm thick target were performed according to the Stanag 4569 standard. Afterwards, the successfully trained ANN(s) is used to predict the ballistic limit thickness of 500 HB high hardness steel armor. Results show that even with a limited number of data, the FEM-ANN approach can be used to predict ballistic penetration depth with adequate accuracy.

  3. Introducing micrometer-sized artificial objects into live cells: a method for cell-giant unilamellar vesicle electrofusion.

    Directory of Open Access Journals (Sweden)

    Akira C Saito

    Full Text Available Here, we report a method for introducing large objects of up to a micrometer in diameter into cultured mammalian cells by electrofusion of giant unilamellar vesicles. We prepared GUVs containing various artificial objects using a water-in-oil (w/o) emulsion centrifugation method. GUVs and dispersed HeLa cells were exposed to an alternating current (AC) field to induce a linear cell-GUV alignment, and then a direct current (DC) pulse was applied to facilitate transient electrofusion. With uniformly sized fluorescent beads as size indexes, we successfully and efficiently introduced beads of 1 µm in diameter into living cells along with a plasmid mammalian expression vector. Our electrofusion did not affect cell viability. After the electrofusion, cells proliferated normally until confluence was reached, and the introduced fluorescent beads were inherited during cell division. Analysis by both confocal microscopy and flow cytometry supported these findings. As an alternative approach, we also introduced a designed nanostructure (DNA origami) into live cells. The results we report here represent a milestone for designing artificial symbiosis of functionally active objects (such as micro-machines) in living cells. Moreover, our technique can be used for drug delivery, tissue engineering, and cell manipulation.

  4. The Effects of Design Strength, Fly Ash Content and Curing Method on Compressive Strength of High Volume Fly Ash Concrete: A Design of Experimental

    Directory of Open Access Journals (Sweden)

    Solikin Mochamad

    2017-01-01

    Full Text Available High volume fly ash concrete is one alternative for producing green concrete, as it uses waste material and significantly reduces the utilization of Portland cement in concrete production. Although it uses less cement, its compressive strength is comparable to that of ordinary Portland cement (hereafter OPC) concrete and its durability increases significantly. This paper reports an investigation of the effect of design strength, fly ash content, and curing method on the compressive strength of high volume fly ash concrete. The experiment and data analysis were prepared using Minitab, statistical software for design of experiments. The specimens were concrete cylinders with a diameter of 15 cm and a height of 30 cm, tested for compressive strength at 56 days. The results demonstrate that high volume fly ash concrete can achieve a compressive strength that meets the OPC design strength, especially for high-strength concrete. In addition, the best mix proportion to achieve the design strength is the combination of high-strength concrete and 50% fly ash content. Moreover, the use of the spraying method for curing concrete on site is still recommended, as it does not significantly reduce the compressive strength result.
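
    The full-factorial layout behind such a design of experiments is straightforward to enumerate. The factor levels below are illustrative assumptions, not the levels reported by the study.

```python
from itertools import product

# Hypothetical levels for the three factors varied in the experiment
design_strength = [25, 45, 65]                     # MPa target strengths
fly_ash_content = [50, 60, 70]                     # % cement replacement
curing_method = ["water bath", "spraying", "air"]

runs = list(product(design_strength, fly_ash_content, curing_method))
# a 3 x 3 x 3 full factorial gives 27 specimen groups to cast and test at 56 days
```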

  5. Finite element methods in incompressible, adiabatic, and compressible flows from fundamental concepts to applications

    CERN Document Server

    Kawahara, Mutsuto

    2016-01-01

    This book focuses on the finite element method in fluid flows. It is targeted at researchers, from those just starting out up to practitioners with some experience. Part I is devoted to beginners who are already familiar with elementary calculus. Precise concepts of the finite element method required for the analysis of fluid flow are stated, starting with spring structures, which are most suitable to show the concepts of superposition/assembling. The pipeline system and potential flow sections show the linear problem. The advection-diffusion section presents the time-dependent problem; mixed interpolation is explained using creeping flows, and elementary computer programs in FORTRAN are included. Part II provides information on recent computational methods and their applications to practical problems. Theories of Streamline-Upwind/Petrov-Galerkin (SUPG) formulation, characteristic formulation, and Arbitrary Lagrangian-Eulerian (ALE) formulation and others are presented with practical results so...

  6. Quickprop method to speed up learning process of Artificial Neural Network in money's nominal value recognition case

    Science.gov (United States)

    Swastika, Windra

    2017-03-01

    A money's nominal value recognition system has been developed using an Artificial Neural Network (ANN). ANN with Back Propagation has one disadvantage: the learning process is very slow (or never reaches the target) when the number of iterations, weights, and samples is large. One way to speed up the learning process is the Quickprop method. Quickprop is based on Newton's method and speeds up learning by assuming that the error E is a parabolic function of each weight; the goal is to drive the error gradient (E') to zero. In our system, we use 5 nominal values, i.e. 1,000 IDR, 2,000 IDR, 5,000 IDR, 10,000 IDR and 50,000 IDR. One surface of each denomination was scanned and digitally processed. There are 40 patterns to be used as the training set in the ANN system. The effectiveness of the Quickprop method in the ANN system was validated by 2 factors: (1) the number of iterations required to reach an error below 0.1; and (2) the accuracy of predicting nominal values from the input. Our results show that the Quickprop method successfully shortens the learning process compared to the Back Propagation method. For 40 input patterns, the Quickprop method reached an error below 0.1 in only 20 iterations, while the Back Propagation method required 2000 iterations. The prediction accuracy for both methods is higher than 90%.
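
    The Quickprop update itself is a one-line secant step on the gradient. The sketch below applies it to a single weight on a genuinely parabolic error surface, where the assumption behind the method holds exactly; the learning rate and growth-factor clamp are conventional Fahlman-style defaults, not values from the paper.

```python
import numpy as np

def quickprop_step(dw_prev, g, g_prev, lr=0.1, mu=1.75):
    """One Quickprop update: a secant step on the error gradient."""
    if g_prev is None or dw_prev == 0.0 or g_prev == g:
        return -lr * g                              # fall back to gradient descent
    dw = dw_prev * g / (g_prev - g)                 # parabola through the two gradients
    if abs(dw) > mu * abs(dw_prev):                 # "maximum growth factor" clamp
        dw = mu * abs(dw_prev) * np.sign(dw)
    return dw

# Minimise E(w) = (w - 3)^2, whose gradient is E'(w) = 2(w - 3)
w, dw_prev, g_prev = 0.0, 0.0, None
for _ in range(10):
    g = 2.0 * (w - 3.0)
    dw = quickprop_step(dw_prev, g, g_prev)
    w += dw
    dw_prev, g_prev = dw, g
# for a truly parabolic error surface the secant step lands on the minimum in a few steps
```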

  7. Efficacy of Blood Sources and Artificial Blood Feeding Methods in Rearing of Aedes aegypti (Diptera: Culicidae) for Sterile Insect Technique and Incompatible Insect Technique Approaches in Sri Lanka

    OpenAIRE

    Nayana Gunathilaka; Tharaka Ranathunge; Lahiru Udayanga; Wimaladharma Abeyewickreme

    2017-01-01

    Introduction Selection of the artificial membrane feeding technique and blood meal source has been recognized as key considerations in mass rearing of vectors. Methodology Artificial membrane feeding techniques, namely, glass plate, metal plate, and Hemotek membrane feeding method, and three blood sources (human, cattle, and chicken) were evaluated based on feeding rates, fecundity, and hatching rates of Aedes aegypti. Significance in the variations among blood feeding was investigated by one...

  8. Methods for evaluating tensile and compressive properties of plastic laminates reinforced with unwoven glass fibers

    Science.gov (United States)

    Karl Romstad

    1964-01-01

    Methods of obtaining strength and elastic properties of plastic laminates reinforced with unwoven glass fibers were evaluated using the criteria of the strength values obtained and the failure characteristics observed. Variables investigated were specimen configuration and the manner of supporting and loading the specimens. Results of this investigation indicate that...

  9. A Fourier-based compressed sensing technique for accelerated CT image reconstruction using first-order methods.

    Science.gov (United States)

    Choi, Kihwan; Li, Ruijiang; Nam, Haewon; Xing, Lei

    2014-06-21

    As a solution to iterative CT image reconstruction, first-order methods are prominent for the large-scale capability and the fast convergence rate O(1/k²). In practice, the CT system matrix with a large condition number may lead to slow convergence speed despite the theoretically promising upper bound. The aim of this study is to develop a Fourier-based scaling technique to enhance the convergence speed of first-order methods applied to CT image reconstruction. Instead of working in the projection domain, we transform the projection data and construct a data fidelity model in Fourier space. Inspired by the filtered backprojection formalism, the data are appropriately weighted in Fourier space. We formulate an optimization problem based on weighted least-squares in the Fourier space and total-variation (TV) regularization in image space for parallel-beam, fan-beam and cone-beam CT geometry. To achieve the maximum computational speed, the optimization problem is solved using a fast iterative shrinkage-thresholding algorithm with backtracking line search and GPU implementation of projection/backprojection. The performance of the proposed algorithm is demonstrated through a series of digital simulation and experimental phantom studies. The results are compared with the existing TV regularized techniques based on statistics-based weighted least-squares as well as basic algebraic reconstruction technique. The proposed Fourier-based compressed sensing (CS) method significantly improves both the image quality and the convergence rate compared to the existing CS techniques.
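
    The fast iterative shrinkage-thresholding algorithm (FISTA) at the heart of this approach can be sketched on a small l1-regularised least-squares problem. This is a simplified stand-in: the paper's Fourier weighting, TV regularisation, backtracking line search, and CT projection operators are replaced here by a random matrix, an l1 penalty, and a fixed step 1/L.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 60, 100                                     # under-determined system
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[[5, 37, 80]] = [1.5, -2.0, 1.0]
b = A @ x_true                                     # noiseless measurement data
lam = 0.01                                         # l1 regularisation weight
L = np.linalg.norm(A, 2) ** 2                      # Lipschitz constant of the gradient

def soft(v, t):
    """Soft-thresholding, the proximal map of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n); z = x.copy(); tk = 1.0
for _ in range(500):
    x_new = soft(z - A.T @ (A @ z - b) / L, lam / L)   # gradient step + shrinkage
    tk_new = (1.0 + np.sqrt(1.0 + 4.0 * tk * tk)) / 2.0
    z = x_new + (tk - 1.0) / tk_new * (x_new - x)      # Nesterov extrapolation
    x, tk = x_new, tk_new
```

    The extrapolation step is what lifts the O(1/k) rate of plain ISTA to the O(1/k²) rate the record cites.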

  10. A Fourier-based compressed sensing technique for accelerated CT image reconstruction using first-order methods

    International Nuclear Information System (INIS)

    Choi, Kihwan; Li, Ruijiang; Nam, Haewon; Xing, Lei

    2014-01-01

    As a solution to iterative CT image reconstruction, first-order methods are prominent for the large-scale capability and the fast convergence rate O(1/k²). In practice, the CT system matrix with a large condition number may lead to slow convergence speed despite the theoretically promising upper bound. The aim of this study is to develop a Fourier-based scaling technique to enhance the convergence speed of first-order methods applied to CT image reconstruction. Instead of working in the projection domain, we transform the projection data and construct a data fidelity model in Fourier space. Inspired by the filtered backprojection formalism, the data are appropriately weighted in Fourier space. We formulate an optimization problem based on weighted least-squares in the Fourier space and total-variation (TV) regularization in image space for parallel-beam, fan-beam and cone-beam CT geometry. To achieve the maximum computational speed, the optimization problem is solved using a fast iterative shrinkage-thresholding algorithm with backtracking line search and GPU implementation of projection/backprojection. The performance of the proposed algorithm is demonstrated through a series of digital simulation and experimental phantom studies. The results are compared with the existing TV regularized techniques based on statistics-based weighted least-squares as well as basic algebraic reconstruction technique. The proposed Fourier-based compressed sensing (CS) method significantly improves both the image quality and the convergence rate compared to the existing CS techniques. (paper)

  11. Fracture Toughness Prediction under Compressive Residual Stress by Using a Stress-Distribution T-Scaling Method

    Directory of Open Access Journals (Sweden)

    Toshiyuki Meshii

    2017-12-01

    Full Text Available The improvement in the fracture toughness Jc of a material in the ductile-to-brittle transition temperature region due to compressive residual stress (CRS) was considered in this study. A straightforward fracture prediction was performed for a specimen with mechanical CRS by using the T-scaling method, which was originally proposed to scale the fracture stress distributions between different temperatures. The method was validated for a 780-MPa-class high-strength steel and a 0.45% carbon steel. The results showed that the scaled stress distributions at the fracture loads without and with CRS are the same, and that the Jc improvement was caused by the loss of the one-to-one correspondence between J and the crack-tip stress distribution. The proposed method is advantageous in that it can predict fracture loads for specimens with CRS by using only the stress–strain relationship and elastic-plastic finite element analysis, i.e., without performing fracture toughness testing on specimens without CRS.

  12. Geochemical and isotopic methods for management of artificial recharge in mazraha station (Damascus)

    International Nuclear Information System (INIS)

    Abou Zakhem, B.; Hafez, R.; Kadkoy, N.

    2009-11-01

    Artificial recharge of shallow groundwater at specially designed facilities is an attractive option for increasing the storage capacity of potable water in arid and semi-arid regions such as the Damascus Oasis, Syria. This operation needs integrated management and detailed knowledge of groundwater dynamics and of the quantitative and qualitative evolution of the water. The objective of this study is to determine the temporal and spatial variations of the chemical and environmental isotopic characteristics of groundwater during the injection and recovery process. Geochemical and environmental isotope techniques are ideally suited for these investigations. 400 to 500 ×10³ m³ of spring water were injected annually into the ambient groundwater in the Mazraha station, Damascus Oasis, and the water is later used for drinking purposes. Native groundwater and injected water are of calcium bicarbonate type, with EC of about 850±100 μS/cm and 300±50 μS/cm respectively. The injected water is undersaturated with respect to calcite, while the ambient groundwater is oversaturated, and the mixed water is in equilibrium after injection. It was observed that the injection process created a dilution cloud, progressively decreasing chemical concentrations and thus improving the groundwater quality. After injection was completed, the dilution center moved about 200 m in 85 days to the south-southeast, following the ambient groundwater flow path. Based on this observation, the hydraulic conductivity of the aquifer is estimated at about 7.5±1.3×10⁻⁴ m/s. The spatial distribution maps of CFC-11 and CFC-12 after injection showed the same shape and flow direction as the spatial distributions of the chemical elements. The effective diameter of artificial recharge is limited to about 250 m from the injection wells, as EC, Cl⁻ and NO₃⁻ concentrations are affected significantly. A mixing ratio of 30% is required in order to lower the nitrate concentration in native groundwater to less than 50 mg/l for potable water. Depending on pumping rate, the
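The 30% mixing ratio quoted above follows from a simple two-component dilution balance. The concentrations below are purely illustrative assumptions (the record does not give the measured nitrate values); only the form of the balance is taken from the text.

```python
def mixing_ratio(c_native, c_injected, c_target):
    """Fraction x of injected (cleaner) water needed so that the mixture
    (1 - x)*c_native + x*c_injected just reaches c_target."""
    return (c_native - c_target) / (c_native - c_injected)

# Hypothetical nitrate concentrations in mg/l, chosen only to illustrate
# the balance; the record does not report the measured values.
x = mixing_ratio(c_native=65.0, c_injected=15.0, c_target=50.0)
```

With these assumed numbers the required fraction of injected water works out to 0.30, i.e. the 30% figure in the abstract.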

  13. Deterministic Compressed Sensing

    Science.gov (United States)

    2011-11-01

    (No abstract is recoverable for this record; the indexed text consists of fragments of the report's reference list and table of contents, mentioning interior-point methods, the Lasso modification to LARS, homotopy methods, the GAME algorithm, and expander-based compressed sensing.)

  14. Optimization of the overtake method for sound velocity measurements in shock compressed Sn

    Science.gov (United States)

    Gudinetsky, Eli; Yosef-Hai, Arnon; Eidelstein, Eitan; Paris, Vitaly; Bialolenker, Gabi; Fedotov-Gefen, Alex; Werdiger, Meir; Horovitz, Yossef; Ravid, Avi

    2017-06-01

    Sound velocity measurements are useful for mapping the phase diagram of materials and for calibrating their EOS off the principal Hugoniot. A common method is the overtake method, in which a flyer plate is accelerated towards two or more targets of different thicknesses. In the present work, detailed calculations were carried out in order to design optimal experiments in terms of expected uncertainties. These calculations took into account many factors: 2D effects such as edge rarefactions originating in the flyer plate, targets and windows; EOS accuracy; thickness and diameter tolerances; and error correlations. The experimental results were compared with these calculations to test the design of high-accuracy experiments. The sound velocity measurements in Sn were compared to the literature.

  15. Convergence of a numerical method for the compressible Navier-Stokes system on general domains

    Czech Academy of Sciences Publication Activity Database

    Feireisl, Eduard; Karper, T.; Michálek, Martin

    2016-01-01

    Roč. 134, č. 4 (2016), s. 667-704 ISSN 0029-599X R&D Projects: GA ČR GA13-00522S Institutional support: RVO:67985840 Keywords : numerical methods * Navier-Stokes system Subject RIV: BA - General Mathematics Impact factor: 2.152, year: 2016 http://link.springer.com/article/10.1007%2Fs00211-015-0786-6

  17. Comparative study of landslides susceptibility mapping methods: Multi-Criteria Decision Making (MCDM) and Artificial Neural Network (ANN)

    Science.gov (United States)

    Salleh, S. A.; Rahman, A. S. A. Abd; Othman, A. N.; Mohd, W. M. N. Wan

    2018-02-01

    As different approaches produce different results, it is crucial to determine which methods are accurate for analysing such events. This research aims to compare the Rank Reciprocal (MCDM) and Artificial Neural Network (ANN) analysis techniques in determining zones susceptible to landslide hazard. The study is based on data obtained from various sources such as the local authority, Dewan Bandaraya Kuala Lumpur (DBKL), Jabatan Kerja Raya (JKR) and other agencies. The data were analysed and processed using ArcGIS. The results were compared by quantifying the risk ranking and area differential, and were also compared with the zonation map classified by DBKL. The results suggest that the ANN method gives better accuracy than MCDM, with an accuracy assessment 18.18% higher than that of the MCDM approach. This indicates that ANN provides more reliable results, probably owing to its ability to learn from the environment and thus portray realistic and accurate results.

  18. Feature Selection Method Based on Artificial Bee Colony Algorithm and Support Vector Machines for Medical Datasets Classification

    Directory of Open Access Journals (Sweden)

    Mustafa Serter Uzer

    2013-01-01

    Full Text Available This paper offers a hybrid approach that uses the artificial bee colony (ABC) algorithm for feature selection and support vector machines (SVM) for classification. The purpose of this paper is to test the effect of eliminating the unimportant and obsolete features of the datasets on the success of the classification, using the SVM classifier. The approach is applied to the diagnosis of liver diseases and diabetes, which are commonly observed conditions that reduce quality of life. For the diagnosis of these diseases, the hepatitis, liver disorders and diabetes datasets from the UCI database were used, and the proposed system reached classification accuracies of 94.92%, 74.81%, and 79.29%, respectively. For these datasets, the classification accuracies were obtained with the help of the 10-fold cross-validation method. The results show that the performance of the method is highly successful compared to other reported results and seems very promising for pattern recognition applications.
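The record's wrapper scheme (a bee-colony search over feature subsets, scored by cross-validated classification accuracy) can be sketched in toy form. Everything below is an illustrative stand-in: a nearest-centroid classifier replaces the SVM, the ABC phases are heavily simplified, and the data are synthetic rather than the UCI datasets.

```python
import numpy as np

rng = np.random.default_rng(42)

def cv_accuracy(X, y, mask, k=10):
    """10-fold CV accuracy of a nearest-centroid classifier on the
    selected feature subset (a stand-in for the SVM in the paper)."""
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    idx = np.arange(len(y))
    correct = 0
    for fold in np.array_split(idx, k):
        tr = np.setdiff1d(idx, fold)
        c0 = Xs[tr][y[tr] == 0].mean(axis=0)   # class-0 centroid
        c1 = Xs[tr][y[tr] == 1].mean(axis=0)   # class-1 centroid
        pred = (np.linalg.norm(Xs[fold] - c1, axis=1)
                < np.linalg.norm(Xs[fold] - c0, axis=1)).astype(int)
        correct += (pred == y[fold]).sum()
    return correct / len(y)

def abc_select(X, y, n_bees=10, n_iter=30, limit=5):
    """Very simplified ABC-flavoured search over binary feature masks:
    each 'bee' holds a mask, neighbours flip one bit, stagnant bees scout."""
    d = X.shape[1]
    bees = rng.random((n_bees, d)) < 0.5
    fit = np.array([cv_accuracy(X, y, b) for b in bees])
    trials = np.zeros(n_bees, dtype=int)
    for _ in range(n_iter):
        for i in range(n_bees):
            cand = bees[i].copy()
            cand[rng.integers(d)] ^= True       # flip one feature bit
            f = cv_accuracy(X, y, cand)
            if f > fit[i]:
                bees[i], fit[i], trials[i] = cand, f, 0
            else:
                trials[i] += 1
            if trials[i] > limit:               # scout phase: restart stagnant bee
                bees[i] = rng.random(d) < 0.5
                fit[i] = cv_accuracy(X, y, bees[i])
                trials[i] = 0
    best = fit.argmax()
    return bees[best], fit[best]
```

On synthetic data where only the first feature is informative, the search reliably finds a high-accuracy subset.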

  19. Artificial intelligence

    CERN Document Server

    Hunt, Earl B

    1975-01-01

    Artificial Intelligence provides information pertinent to the fundamental aspects of artificial intelligence. This book presents the basic mathematical and computational approaches to problems in the artificial intelligence field.Organized into four parts encompassing 16 chapters, this book begins with an overview of the various fields of artificial intelligence. This text then attempts to connect artificial intelligence problems to some of the notions of computability and abstract computing devices. Other chapters consider the general notion of computability, with focus on the interaction bet

  20. A relaxation-projection method for compressible flows. Part I: The numerical equation of state for the Euler equations

    International Nuclear Information System (INIS)

    Saurel, Richard; Franquet, Erwin; Daniel, Eric; Le Metayer, Olivier

    2007-01-01

    A new projection method is developed for the Euler equations to determine the thermodynamic state in computational cells. It consists of solving a mechanical relaxation problem between the various sub-volumes present in a computational cell. These sub-volumes correspond to the ones traveled by the various waves that produce states with different pressures, velocities, densities and temperatures. In contrast to Godunov-type schemes, the relaxed state corresponds to mechanical equilibrium only and remains out of thermal equilibrium. The pressure computation with this relaxation process replaces the use of the conventional equation of state (EOS). A simplified relaxation method is also derived and provides a specific EOS (named the Numerical EOS). The use of the Numerical EOS cures the spurious pressure oscillations that appear at contact discontinuities for fluids governed by real-gas EOS. It is then extended to the computation of interface problems separating fluids with different EOS (a liquid-gas interface, for example) with the Euler equations. The resulting method is very robust, accurate, oscillation free and conservative. For the sake of simplicity and efficiency the method is developed in a Lagrange-projection context and is validated against exact solutions. In a companion paper [F. Petitpas, E. Franquet, R. Saurel, A relaxation-projection method for compressible flows. Part II: computation of interfaces and multiphase mixtures with stiff mechanical relaxation. J. Comput. Phys. (submitted for publication)], the method is extended to the numerical approximation of a non-conservative hyperbolic multiphase flow model for interface computation and shock propagation into mixtures.

  1. Investigation of thermal stratification in cisterns using analytical and Artificial Neural Networks methods

    International Nuclear Information System (INIS)

    Ameri Siahoui, H.R.; Dehghani, A.R.; Razavi, M.; Khani, M.R.

    2011-01-01

    The thermal characteristics of an underground cold-water reservoir are investigated analytically and using Artificial Neural Networks (ANN). An analytical solution is developed for the temperature distribution in the reservoir by assuming a linearized boundary condition at the water surface. For the general non-linear boundary condition, the temperature distribution is modeled using ANN. Very good agreement between the analytical and ANN results at various times during the withdrawal cycle is observed, ensuring the accuracy of the analytical and ANN procedures. The results show that a stable thermal stratification is preserved in the reservoir throughout the entire course of the withdrawal cycle. As one important outcome of this research, two different regions are observed inside the thermally stratified tank during the discharge cycle: a bottom region with a linear temperature distribution, and an upper region in which a nearly exponential thermal stratification develops. During the withdrawal cycle, the outside temperature reaches as high as 42 °C, while cool water with a temperature varying from 12 to 13 °C is readily available from the underground water reservoir under investigation.

  2. Determination of Odour Interactions in Gaseous Mixtures Using Electronic Nose Methods with Artificial Neural Networks

    Science.gov (United States)

    Szulczyński, Bartosz; Gębicki, Jacek

    2018-01-01

    This paper presents the application of an electronic nose prototype comprising eight sensors (five TGS-type sensors, two electrochemical sensors and one PID-type sensor) to identify odour interaction phenomena in two-, three-, four- and five-component odorous mixtures. Typical chemical compounds present near municipal landfills and sewage treatment plants, such as toluene, acetone, triethylamine, α-pinene and n-butanol, were subjected to investigation. Evaluation of predicted odour intensity and hedonic tone was performed with selected artificial neural network structures using the activation functions tanh and Leaky rectified linear units (Leaky ReLUs) with the parameter a = 0.03. Correctness of identification of odour interactions in the odorous mixtures was determined based on the results obtained with the electronic nose instrument and non-linear data analysis: on average 88% for odour intensity and 74% for hedonic tone. In both cases, correctness of identification depended on the number of components present in the odorous mixture. PMID:29419798
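The Leaky ReLU activation mentioned above, with the record's slope parameter a = 0.03, can be written directly (the surrounding network architecture is not reproduced here):

```python
import numpy as np

def leaky_relu(x, a=0.03):
    """Leaky ReLU: passes positive inputs unchanged and scales
    negative inputs by the small slope a (0.03 in the record)."""
    return np.where(x > 0, x, a * x)
```

Unlike a plain ReLU, the small negative slope keeps a nonzero gradient for negative inputs, which helps avoid "dead" units during training.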

  3. Computer vision-based method for classification of wheat grains using artificial neural network.

    Science.gov (United States)

    Sabanci, Kadir; Kayabasi, Ahmet; Toktas, Abdurrahim

    2017-06-01

    A simplified computer vision-based application using an artificial neural network (ANN) based on a multilayer perceptron (MLP) for accurately classifying wheat grains into bread or durum is presented. The images of 100 bread and 100 durum wheat grains are taken via a high-resolution camera and subjected to pre-processing. The main visual features of four dimensions, three colors and five textures are acquired using image-processing techniques (IPTs). A total of 21 visual features are derived from the 12 main features to diversify the input population for training and testing the ANN model. The data sets of visual features are used as input parameters of the ANN model. The ANN with four different input data subsets is modelled to classify the wheat grains into bread or durum. The ANN model is trained with 180 grains and its accuracy tested with 20 grains from a total of 200 wheat grains. Seven input parameters that are most effective on the classification results are determined using the correlation-based CfsSubsetEval algorithm to simplify the ANN model. The results of the ANN models are compared in terms of accuracy rate. The best result is achieved with a mean absolute error (MAE) of 9.8 × 10⁻⁶ by the simplified ANN model. This shows that the proposed classifier based on computer vision can be successfully exploited to automatically classify a variety of grains. © 2016 Society of Chemical Industry.
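As a rough illustration of the kind of MLP classifier the record describes, the following NumPy sketch trains a one-hidden-layer network on synthetic two-class data. The dimensions, data and hyperparameters are invented stand-ins for the paper's 21 visual grain features, not a reproduction of the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the grain features: two well-separated classes in 4-D.
n = 180
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, 4)) + 2.5 * y[:, None]

# One-hidden-layer MLP (tanh hidden units, sigmoid output),
# trained with plain full-batch gradient descent on cross-entropy loss.
W1 = rng.normal(scale=0.5, size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.5
for _ in range(400):
    H = np.tanh(X @ W1 + b1)                      # hidden activations
    z2 = np.clip(H @ W2 + b2, -30.0, 30.0)        # clip to avoid exp overflow
    p = 1.0 / (1.0 + np.exp(-z2))                 # class-1 probability
    d2 = (p - y[:, None]) / n                     # dL/dz2 for cross-entropy
    dW2 = H.T @ d2; db2 = d2.sum(axis=0)
    d1 = (d2 @ W2.T) * (1.0 - H ** 2)             # backprop through tanh
    dW1 = X.T @ d1; db1 = d1.sum(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

acc = ((p[:, 0] > 0.5).astype(int) == y).mean()   # training accuracy
```

On this cleanly separable toy data the network reaches near-perfect training accuracy; the real task, with overlapping grain classes, is of course harder.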

  4. Development of methods for remediation of artificial polluted soils and improvement of soils for ecologically clean agricultural production systems

    International Nuclear Information System (INIS)

    Bogachev, V.; Adrianova, G.; Zaitzev, V.; Kalinin, V.; Kovalenko, E.; Makeev, A.; Malikova, L.; Popov, Yu.; Savenkov, A.; Shnyakina, V.

    1996-01-01

    The purpose of the research: development of methods for the remediation of artificially polluted soils and the improvement of polluted lands for ecologically clean agricultural production. The following tasks will be implemented in this project to achieve viable practical solutions: to determine the priority pollutants, their ecological pathways, and their sources of origin; to form a supervised environmental-monitoring data bank covering the various geosystem conditions; to evaluate the degree of biogeosystem pollution and its influence on the health of the local human populations; to establish agricultural plant tolerance levels to the priority pollutants; to calculate the standard concentrations of the priority pollutants for the main agricultural plant groups; to develop a soil remediation methodology incorporating the structural and functional geosystem features; to establish a territory zone-division methodology in consideration of the degree of component pollution, plant tolerance to pollutants, plant production conditions, and human health; and to provide scientific grounding for the soil remediation proposals and for the introduction of agricultural plant materials matched to soil pollution levels and relative plant tolerances to pollutants. Technological means, methods, and approaches: final proposed solutions will be based upon geosystem and ecosystem approaches and methodologies. Complex ecological valuation methods for the polluted territories will be used in this investigation. Laboratory culture in vitro, application work, and multi-factor field experiments will also be conducted; the results will be statistically analyzed using appropriate methods. Expected results: complex biogeochemical assessment of the artificial province according to primary pollutant concentrations; development of agricultural plant tolerance levels relative to the priority pollutants; assessment of newly introduced plant materials that may possess variable levels of pollution tolerance. Remediation

  5. Modeling of feed-forward control using the partial least squares regression method in the tablet compression process.

    Science.gov (United States)

    Hattori, Yusuke; Otsuka, Makoto

    2017-05-30

    In the pharmaceutical industry, the implementation of continuous manufacturing has been widely promoted in lieu of the traditional batch manufacturing approach. More specifically, in recent years, the innovative concept of feed-forward control has been introduced in relation to process analytical technology. In the present study, we successfully developed a feed-forward control model for the tablet compression process by integrating data obtained from near-infrared (NIR) spectra and the physical properties of granules. In the pharmaceutical industry, batch manufacturing routinely allows for the preparation of granules with the desired properties through the manual control of process parameters; continuous manufacturing, on the other hand, demands the automatic determination of these process parameters. Here, we propose the development of a control model using the partial least squares regression (PLSR) method. The most significant feature of this method is the use of a dataset integrating both the NIR spectra and the physical properties of the granules. Using our model, we determined that the properties of the products, such as tablet weight and thickness, need to be included as independent variables in the PLSR analysis in order to predict unknown process parameters. Copyright © 2017 Elsevier B.V. All rights reserved.
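A minimal sketch of PLS regression for a single response variable, via the classic NIPALS deflation scheme. The data, dimensions and variable names below are invented for illustration; this is not the authors' NIR/granule dataset or their multivariate model.

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """PLS1 regression via NIPALS deflation (univariate response y).
    Returns the regression vector b and the intercept b0."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)          # weight: X-y covariance direction
        t = Xc @ w                       # score vector
        tt = t @ t
        p = Xc.T @ t / tt                # X loading
        qk = yc @ t / tt                 # y loading
        Xc = Xc - np.outer(t, p)         # deflate X
        yc = yc - qk * t                 # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    b = W @ np.linalg.solve(P.T @ W, q)  # b = W (P^T W)^{-1} q
    return b, y_mean - x_mean @ b

def pls1_predict(X, b, b0):
    return X @ b + b0
```

With as many components as predictors, PLS1 coincides with ordinary least squares, so it fits an exactly linear response perfectly; the practical value of PLS lies in using far fewer components than (collinear) predictors, as with NIR spectra.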

  6. In vitro biomechanical properties of 2 compression fixation methods for midbody proximal sesamoid bone fractures in horses.

    Science.gov (United States)

    Woodie, J B; Ruggles, A J; Litsky, A S

    2000-01-01

    To evaluate 2 methods of midbody proximal sesamoid bone repair--fixation by a screw placed in lag fashion and circumferential wire fixation--by comparing yield load and the adjacent soft-tissue strain during monotonic loading. Experimental study. 10 paired equine cadaver forelimbs from race-trained horses. A transverse midbody osteotomy of the medial proximal sesamoid bone (PSB) was created. The osteotomy was repaired with a 4.5-mm cortex bone screw placed in lag fashion or a 1.25-mm circumferential wire. The limbs were instrumented with differential variable reluctance transducers placed in the suspensory apparatus and distal sesamoidean ligaments. The limbs were tested in axial compression in a single cycle until failure. The cortex bone screw repairs had a mean yield load of 2,908.2 N; 1 limb did not fail when tested to 5,000 N. All circumferential wire repairs failed with a mean yield load of 3,406.3 N. There was no statistical difference in mean yield load between the 2 repair methods. The maximum strain generated in the soft tissues attached to the proximal sesamoid bones was not significantly different between repair groups. All repaired limbs were able to withstand loads equal to those reportedly applied to the suspensory apparatus in vivo during walking. Each repair technique should have adequate yield strength for repair of midbody fractures of the PSB immediately after surgery.

  7. Leak Detection Modeling and Simulation for Oil Pipeline with Artificial Intelligence Method

    Directory of Open Access Journals (Sweden)

    Pudjo Sukarno

    2007-05-01

    Full Text Available Leak detection is a perennially interesting research topic: leak location and leak rate are the two pipeline leakage parameters that should be determined accurately to overcome pipe leakage problems. In this research those two parameters are investigated by developing a transmission pipeline model and a leak detection model built with an Artificial Neural Network. The mathematical approach needs actual leak data to train the leak detection model; however, such data could not be obtained from oil fields. Therefore, for training purposes, hypothetical data are developed using the transmission pipeline model, by applying various physical configurations of the pipeline and oil property correlations to estimate the values of oil density and viscosity. Various leak locations and leak rates are also represented in this model. The prediction of the two leak parameters is iterated until the total error is less than a certain tolerance, or until the iteration limit is reached. To recognize the pattern, a forward procedure is conducted. The application of this approach leads to the conclusion that, for a certain pipeline network configuration, a higher number of iterations produces more accurate results. The number of iterations depends on the leakage rate: the smaller the leakage rate, the more iterations are required. The accuracy of this approach is clearly determined by the quality of the training data. Therefore, in the preparation of training data, the results of pressure drop calculations should be validated against real measurements of pressure drop along the pipeline. For accuracy purposes, it is possible to change the pressure drop and fluid property correlations to get better results. The results of this research are expected to make a real contribution to early detection of oil spills in oil fields.

  8. Evaluation of a customized artificial osteoporotic bone model of the distal femur.

    Science.gov (United States)

    Wähnert, Dirk; Hoffmeier, Konrad L; Stolarczyk, Yves; Fröber, Rosemarie; Hofmann, Gunther O; Mückley, Thomas

    2011-11-01

    In the development of new implants, biomechanical testing is essential. Since human bones vary markedly in density and geometry, their suitability for biomechanical testing is limited. In contrast, artificial bones are highly uniform and therefore appropriate for biomechanical testing; however, the artificial bones used have to be proved comparable to human bone. An anatomically shaped artificial bone representing the distal human femur was created by foaming polyurethane. To obtain a bone model with the properties of osteoporotic bone, a foam density of 150 kg/m³ was used. The biomechanical properties of our artificial bones were evaluated against eight mildly osteoporotic fresh-frozen human femora by mechanical testing. All tested parameters showed very small variation for the artificial bones, whereas significant correlation between bone mass density and the tested parameters was found for the human bones. The artificial bones reached 39% of the compression strength and 41% of the screw pull-out force of the human bone. In indentation testing the artificial bones reached 27% (cancellous) and 59% (cortical) of the strength of the human bones. Regarding Shore hardness, artificial bone and human bone showed comparable results for the cortical layer, while at the cancellous layer the artificial bone reached 57% of the human bone's hardness. The described method for customizing artificial bones with respect to shape and bone stock quality provides suitable results. Relative to the human bones, which were classified as mildly osteoporotic, we assume that the biomechanical properties match those of severely osteoporotic bone.

  9. An automated microplate-based method for monitoring DNA strand breaks in plasmids and bacterial artificial chromosomes.

    Science.gov (United States)

    Rock, Cassandra; Shamlou, Parviz Ayazi; Levy, M Susana

    2003-06-01

    A method is described for high-throughput monitoring of DNA backbone integrity in plasmids and artificial chromosomes in solution. The method is based on the denaturation properties of double-stranded DNA under alkaline conditions and uses the PicoGreen fluorochrome to monitor denaturation. In the present method, the fluorescence enhancement of PicoGreen at pH 12.4 is normalised by its value at pH 8 to give a ratio that is proportional to the average backbone integrity of the DNA molecules in the sample. A good regression fit (r² > 0.98) was obtained when results derived from the present method were compared with those derived from agarose gel electrophoresis. Spiking experiments indicated that the method is sensitive enough to detect a proportion of 6% (v/v) of molecules with an average of less than two breaks per molecule. Under manual operation, validation parameters such as inter-assay and intra-assay variation gave acceptable values, and electrophoresis of sheared samples gave results in agreement with those obtained using the microplate-based method.

  10. An oscillation free shock-capturing method for compressible van der Waals supercritical fluid flows

    International Nuclear Information System (INIS)

    Pantano, C.; Saurel, R.; Schmitt, T.

    2017-01-01

    Numerical solutions of the Euler equations using real-gas equations of state (EOS) often exhibit serious inaccuracies. The focus here is the van der Waals EOS and its variants (often used in supercritical fluid computations). The problems are not related to a lack of convexity of the EOS, since the EOS is considered within its domain of convexity at every mesh point and at every time. The difficulties appear as soon as a density discontinuity is present, with the rest of the fluid in mechanical equilibrium, and typically result in spurious pressure and velocity oscillations. This is reminiscent of the well-known pressure oscillations occurring in ideal gas mixtures when a mass fraction discontinuity is present, which can be interpreted as a discontinuity in the EOS parameters. Here, however, the pressure oscillations appear for a single fluid whenever a density discontinuity is present. Because density enters the EOS nonlinearly, its diffusion by the numerical method violates the mechanical-equilibrium conditions, and the resulting oscillations are not easy to eliminate, even under grid refinement.
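The mechanism described above can be demonstrated in a few lines: take two states of a van der Waals-type fluid with different densities but equal pressure, average their conservative variables as a numerically diffused interface cell would, and re-evaluate the EOS. Because the EOS is nonlinear in density, the cell-averaged state no longer has the common pressure. The EOS form and parameter values below are illustrative, not taken from the paper.

```python
# Van der Waals-type EOS in terms of density rho and specific internal
# energy e (illustrative parameters, in nondimensional units):
GAMMA, A, B = 1.4, 0.5, 1.0e-3

def p_vdw(rho, e):
    """p = (gamma-1)*(rho*e + a*rho^2)/(1 - b*rho) - a*rho^2."""
    return (GAMMA - 1.0) * (rho * e + A * rho**2) / (1.0 - B * rho) - A * rho**2

def e_from_p(rho, p):
    """Invert the EOS (linear in e) for e at a prescribed pressure."""
    return ((p + A * rho**2) * (1.0 - B * rho) / (GAMMA - 1.0) - A * rho**2) / rho

# Two states across a contact: different densities, identical pressure.
rho1, rho2, p0 = 1.0, 2.0, 0.5
e1, e2 = e_from_p(rho1, p0), e_from_p(rho2, p0)

# A diffused interface cell holds averaged conservative variables:
rho_m = 0.5 * (rho1 + rho2)
e_m = 0.5 * (rho1 * e1 + rho2 * e2) / rho_m   # average of rho*e, per unit mass
p_m = p_vdw(rho_m, e_m)                        # != p0: spurious pressure jump
```

With an ideal gas (a = b = 0) the averaged cell would return exactly p0; the nonzero a and b terms break that, which is the single-fluid oscillation mechanism the record describes.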

  11. The Linearized Bregman Method for Frugal Full-waveform Inversion with Compressive Sensing and Sparsity-promoting

    Science.gov (United States)

    Chai, Xintao; Tang, Genyang; Peng, Ronghua; Liu, Shaoyong

    2017-12-01

    Full-waveform inversion (FWI) reconstructs the subsurface properties from acquired seismic data via minimization of the misfit between observed and simulated data. However, FWI suffers from considerable computational costs resulting from the numerical solution of the wave equation for each source at each iteration. To reduce the computational burden, constructing supershots by combining several sources (aka source encoding) mitigates the number of simulations at each iteration, but it gives rise to crosstalk artifacts because of interference between the individual sources of the supershot. A modified Gauss-Newton FWI (MGNFWI) approach showed that, as long as the difference between the initial and true models permits a sparse representation, ℓ1-norm constrained model updates suppress subsampling-related artifacts. However, the spectral projected gradient ℓ1 (SPGℓ1) algorithm employed by MGNFWI is rather complicated, which makes its implementation difficult. To facilitate realistic applications, we adapt a linearized Bregman (LB) method to sparsity-promoting FWI (SPFWI) because of the efficiency and simplicity of LB in the framework of ℓ1-norm constrained optimization and compressive sensing. Numerical experiments performed with the BP Salt model, the Marmousi model and the BG Compass model verify the following points. The FWI result obtained with LB solving the ℓ1-norm sparsity-promoting problem for the model update outperforms that generated by solving the ℓ2-norm problem in terms of crosstalk elimination and high-fidelity results. The simpler LB method performs comparably to, and even better than, the complicated SPGℓ1 method in terms of computational efficiency and model quality, making the LB method a viable alternative for realistic implementations of SPFWI.
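A minimal sketch of the linearized Bregman iteration itself, applied to a generic linear system. In SPFWI this update is wrapped around wave-equation Jacobians and encoded supershots, none of which is reproduced here; the sketch shows only the two-line core (a gradient step on an auxiliary variable, followed by soft-thresholding).

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def linearized_bregman(A, b, mu=1.0, n_iter=5000):
    """Linearized Bregman iteration for the sparsity-promoting problem
    min mu*||x||_1 + (1/(2*delta))*||x||_2^2  subject to  A x = b."""
    delta = 1.0 / np.linalg.norm(A, 2) ** 2   # step from the spectral norm
    v = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v += A.T @ (b - A @ x)                # Bregman/dual gradient update
        x = delta * soft(v, mu)               # shrinkage promotes sparsity
    return x
```

The appeal the record highlights is visible here: unlike SPGℓ1, each iteration is just one matrix-vector residual and one shrinkage, with a single threshold parameter.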

  12. The Linearized Bregman Method for Frugal Full-waveform Inversion with Compressive Sensing and Sparsity-promoting

    Science.gov (United States)

    Chai, Xintao; Tang, Genyang; Peng, Ronghua; Liu, Shaoyong

    2018-03-01

    Full-waveform inversion (FWI) reconstructs the subsurface properties from acquired seismic data via minimization of the misfit between observed and simulated data. However, FWI suffers from considerable computational costs resulting from the numerical solution of the wave equation for each source at each iteration. To reduce the computational burden, constructing supershots by combining several sources (aka source encoding) mitigates the number of simulations at each iteration, but it gives rise to crosstalk artifacts because of interference between the individual sources of the supershot. A modified Gauss-Newton FWI (MGNFWI) approach showed that, as long as the difference between the initial and true models permits a sparse representation, ℓ1-norm constrained model updates suppress subsampling-related artifacts. However, the spectral projected gradient ℓ1 (SPGℓ1) algorithm employed by MGNFWI is rather complicated, which makes its implementation difficult. To facilitate realistic applications, we adapt a linearized Bregman (LB) method to sparsity-promoting FWI (SPFWI) because of the efficiency and simplicity of LB in the framework of ℓ1-norm constrained optimization and compressive sensing. Numerical experiments performed with the BP Salt model, the Marmousi model and the BG Compass model verify the following points. The FWI result obtained with LB solving the ℓ1-norm sparsity-promoting problem for the model update outperforms that generated by solving the ℓ2-norm problem in terms of crosstalk elimination and high-fidelity results. The simpler LB method performs comparably to, and even better than, the complicated SPGℓ1 method in terms of computational efficiency and model quality, making the LB method a viable alternative for realistic implementations of SPFWI.

  13. A simple method for preparing artificial larval diet of the West Indian sweetpotato weevil, Euscepes postfasciatus (Fairmaire) (Coleoptera: Curculionidae)

    International Nuclear Information System (INIS)

    Uesato, T.; Kohama, T.

    2008-01-01

The method for preparing the ordinary larval artificial diet for Euscepes postfasciatus (old diet) was complicated and time consuming: some ingredients (casein, saccharose, salt mixture, etc.) were added to a boiled agar solution, while others (vitamin mixture, sweetpotato powder, etc.) were added after the solution had cooled to 55°C. To simplify diet preparation, we combined all ingredients before mixing with water and then boiled the solution (new diet). There were no significant differences in survival rate (from egg hatching to adult eclosion) or right elytron length between weevils reared on the old and new diets, but the development period (from egg to adult) of weevils fed the new diet was significantly (1.3 days) longer than that of those fed the old diet. Preparation time of the new diet was half that of the old diet. These results suggest that the simplified diet preparation can be introduced into the mass-rearing of E. postfasciatus.

  14. The influence of kind of coating additive on the compressive strength of RCA-based concrete prepared by triple-mixing method

    Science.gov (United States)

    Urban, K.; Sicakova, A.

    2017-10-01

The paper deals with the use of alternative powder additives (fly ash and a fine fraction of recycled concrete) to improve recycled concrete aggregate directly in the concrete mixing process. A specific mixing process (the triple mixing method) is applied, as it is favourable for this goal. Results of compressive strength after 2 and 28 days of hardening are given. In general, using powder additives to coat the coarse recycled concrete aggregate in the first stage of triple mixing resulted in a decrease of compressive strength compared with cement. When cement is used for coating, there is no important difference between samples based on recycled concrete aggregate and those based on natural aggregate. When either fly ash or recycled concrete powder is used for coating, the kind of aggregate causes more significant differences in compressive strength, with the values for samples based on recycled concrete aggregate being worse.

  15. Artificial Intelligence Methods in Analysis of Morphology of Selected Structures in Medical Images

    OpenAIRE

    Ryszard Tadeusiewicz; Marek R. Ogiela

    2001-01-01

The goal of this paper is the presentation of the possibilities of application of syntactic methods of computer image analysis for recognition of local stenoses of the coronary artery lumen and detection of pathological signs in the upper parts of the ureter ducts and renal calyxes. Analysis of the correct morphology of these structures is possible thanks to the application of sequence and tree methods from the group of syntactic methods of pattern recognition. In the case of analysis of coronary arteries i...

  16. A stable penalty method for the compressible Navier-Stokes equations: II: One-dimensional domain decomposition schemes

    DEFF Research Database (Denmark)

    Hesthaven, Jan

    1997-01-01

This paper presents asymptotically stable schemes for patching of nonoverlapping subdomains when approximating the compressible Navier-Stokes equations given on conservation form. The scheme is a natural extension of a previously proposed scheme for enforcing open boundary conditions, and as a result the patching of subdomains is local in space. The scheme is studied in detail for Burgers's equation and developed for the compressible Navier-Stokes equations in general curvilinear coordinates. The versatility of the proposed scheme for the compressible Navier-Stokes equations is illustrated...

  17. A Joint Feature Extraction and Data Compression Method for Low Bit Rate Transmission in Distributed Acoustic Sensor Environments

    National Research Council Canada - National Science Library

    Azimi-Sadjadi, M. R; Pezeshki, A

    2004-01-01

... in a surveillance area of interest. These distributed microphones are considerably less expensive and smaller, and they contain generic DSP boards capable of performing detection, feature extraction and data compression tasks...

  18. An evaluation of the sandwich beam in four-point bending as a compressive test method for composites

    Science.gov (United States)

    Shuart, M. J.; Herakovich, C. T.

    1978-01-01

The experimental phase of the study included compressive tests on HTS/PMR-15 graphite/polyimide, 2024-T3 aluminum alloy, and 5052 aluminum honeycomb at room temperature, and tensile tests on graphite/polyimide at room temperature, -157°C, and 316°C. Elastic properties and strength data are presented for three laminates. The room temperature elastic properties were generally found to differ in tension and compression, with Young's modulus values differing by as much as twenty-six percent. The effect of temperature on modulus and strength was shown to be laminate dependent. A three-dimensional finite element analysis predicted an essentially uniform, uniaxial compressive stress state in the top flange test section of the sandwich beam. In conclusion, the sandwich beam can be used to obtain accurate, reliable Young's modulus and Poisson's ratio data for advanced composites; however, the ultimate compressive stress for some laminates may be influenced by the specimen geometry.

  19. Defining spinal instability and methods of classification to optimise care for patients with malignant spinal cord compression: A systematic review

    International Nuclear Information System (INIS)

    Sheehan, C.

    2016-01-01

The incidence of Malignant Spinal Cord Compression (MSCC) is thought to be increasing in the UK due to an aging population and improving cancer survivorship. Such a diagnosis requires emergency treatment. In 2008 the National Institute for Health and Clinical Excellence produced guidelines on the management of MSCC, which include a recommendation to assess spinal instability. However, a lack of guidelines for assessing spinal instability in oncology patients is widely acknowledged. This can result in variations in the management of care for such patients. A spinal instability assessment can influence optimum patient care (bed rest or encouraged mobilisation) and inform the best definitive treatment modality (surgery or radiotherapy) for an individual patient. The aim of this systematic review is to attempt to identify a consensus definition of spinal instability and methods by which it can be classified. - Highlights: • A lack of guidance on metastatic spinal instability results in variations of care. • Definitions and assessments for spinal instability are explored in this review. • A Spinal Instability Neoplastic Scoring (SINS) system has been identified. • SINS could potentially be adopted to optimise and standardise patient care.

  20. Application of multicriteria decision making methods to compression ignition engine efficiency and gaseous, particulate, and greenhouse gas emissions.

    Science.gov (United States)

    Surawski, Nicholas C; Miljevic, Branka; Bodisco, Timothy A; Brown, Richard J; Ristovski, Zoran D; Ayoko, Godwin A

    2013-02-19

    Compression ignition (CI) engine design is subject to many constraints, which present a multicriteria optimization problem that the engine researcher must solve. In particular, the modern CI engine must not only be efficient but must also deliver low gaseous, particulate, and life cycle greenhouse gas emissions so that its impact on urban air quality, human health, and global warming is minimized. Consequently, this study undertakes a multicriteria analysis, which seeks to identify alternative fuels, injection technologies, and combustion strategies that could potentially satisfy these CI engine design constraints. Three data sets are analyzed with the Preference Ranking Organization Method for Enrichment Evaluations and Geometrical Analysis for Interactive Aid (PROMETHEE-GAIA) algorithm to explore the impact of (1) an ethanol fumigation system, (2) alternative fuels (20% biodiesel and synthetic diesel) and alternative injection technologies (mechanical direct injection and common rail injection), and (3) various biodiesel fuels made from 3 feedstocks (i.e., soy, tallow, and canola) tested at several blend percentages (20-100%) on the resulting emissions and efficiency profile of the various test engines. The results show that moderate ethanol substitutions (~20% by energy) at moderate load, high percentage soy blends (60-100%), and alternative fuels (biodiesel and synthetic diesel) provide an efficiency and emissions profile that yields the most "preferred" solutions to this multicriteria engine design problem. Further research is, however, required to reduce reactive oxygen species (ROS) emissions with alternative fuels and to deliver technologies that do not significantly reduce the median diameter of particle emissions.
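The ranking algorithm used above can be sketched in a few lines. The following is a minimal PROMETHEE II net-outranking-flow computation with the simplest ("usual") preference function; the three engine configurations and weights are invented for illustration and are not the study's measurements.

```python
import numpy as np

def promethee_ii_net_flows(X, weights, maximize):
    """PROMETHEE II net outranking flows with the 'usual' preference function
    (preference 1 if strictly better on a criterion, else 0).
    X is (alternatives x criteria); maximize[j] flags criteria to maximise."""
    X = np.asarray(X, dtype=float)
    weights = np.asarray(weights, dtype=float)
    X = np.where(maximize, X, -X)          # turn every criterion into maximisation
    n = len(X)
    pi = np.zeros((n, n))                  # aggregated preference indices
    for a in range(n):
        for b in range(n):
            if a != b:
                pi[a, b] = np.sum(weights * (X[a] > X[b]))
    # net flow: how strongly a outranks the others minus how strongly it is outranked
    return (pi.sum(axis=1) - pi.sum(axis=0)) / (n - 1)

# three hypothetical engine setups scored on (efficiency: up, PM emissions: down)
X = [[0.40, 5.0], [0.35, 3.0], [0.30, 8.0]]
flows = promethee_ii_net_flows(X, weights=[0.5, 0.5], maximize=[True, False])
```

A more positive net flow means a more "preferred" alternative; the GAIA plane used in the paper is a principal-component view of the same preference data.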

  1. Improved mesh sequencing method for the accelerated solution of the compressible Euler and Navier-Stokes equations

    Science.gov (United States)

    Tsangaris, S.; Drikakis, D.

    The solution of the compressible Euler and Navier-Stokes equations via an upwind finite volume scheme is obtained. For the inviscid fluxes the monotone, upstream centered scheme for conservation laws (MUSCL) has been incorporated into a Riemann solver. The flux vector splitting method of Steger and Warming is used with some modifications. The MUSCL scheme is used for the unfactored implicit equations which are solved by a Newton form and relaxation is performed with a Gauss-Seidel technique. The solution on the fine grid is obtained by iterating first on a sequence of coarse grids and then interpolating the solution up to the next refined grid. Because the distribution of the numerical error is not uniform, the local solution of the equations in regions where the numerical error is large can be obtained. The choice of the partial meshes, in which the iterations will be continued, is determined by the use of an adaptive procedure taking into account some convergence criteria. Reduction of the iterations for the two-dimensional problem is obtained via the local adaptive mesh solution which is expected to be more effective in three-dimensional complex flow computations.
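The coarse-to-fine mesh sequencing idea can be illustrated on a toy 1-D Poisson problem: converge on a coarse grid first, then interpolate the result to a finer grid as the initial guess. This is a hedged sketch using Gauss-Seidel relaxation, not the paper's implicit Newton/relaxation Euler solver.

```python
import numpy as np

def gauss_seidel(u, f, h, sweeps):
    """Plain Gauss-Seidel relaxation for -u'' = f with Dirichlet end values."""
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i-1] + u[i+1] + h * h * f[i])
    return u

def sequenced_initial_guess(f_of_x, n_coarse=9, n_fine=17):
    """Mesh sequencing in miniature: converge on the coarse grid, then
    interpolate the coarse solution up to the fine grid as a starting guess."""
    xc = np.linspace(0.0, 1.0, n_coarse)
    uc = gauss_seidel(np.zeros(n_coarse), f_of_x(xc), xc[1] - xc[0], sweeps=500)
    xf = np.linspace(0.0, 1.0, n_fine)
    return xf, np.interp(xf, xc, uc)

# -u'' = pi^2 sin(pi x) on [0, 1], exact solution u = sin(pi x)
f = lambda x: np.pi**2 * np.sin(np.pi * x)
xf, u0 = sequenced_initial_guess(f)
err_sequenced = np.max(np.abs(u0 - np.sin(np.pi * xf)))
err_zero_guess = np.max(np.abs(np.sin(np.pi * xf)))   # error of the u = 0 start
```

Starting the fine-grid iteration from the interpolated coarse solution removes nearly all of the initial error, which is the mechanism behind the iteration savings reported above.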

  2. A quantitative comparison of numerical methods for the compressible Euler equations: fifth-order WENO and piecewise-linear Godunov

    Science.gov (United States)

    Greenough, J. A.; Rider, W. J.

    2004-05-01

A numerical study is undertaken comparing a fifth-order version of the weighted essentially non-oscillatory numerical (WENO5) method to a modern piecewise-linear, second-order, version of Godunov's (PLMDE) method for the compressible Euler equations. A series of one-dimensional test problems are examined beginning with classical linear problems and ending with complex shock interactions. The problems considered are: (1) linear advection of a Gaussian pulse in density, (2) Sod's shock tube problem, (3) the "peak" shock tube problem, (4) a version of the Shu and Osher shock entropy wave interaction and (5) the Woodward and Colella interacting shock wave problem. For each problem and method, run times, density error norms and convergence rates are reported for each method as produced from a common code test-bed. The linear problem exhibits the advertised convergence rate for both methods as well as the expected large disparity in overall error levels; WENO5 has the smaller errors and an enormous advantage in overall efficiency (in accuracy per unit CPU time). For the nonlinear problems with discontinuities, however, we generally see first-order self-convergence of error, measured against either an exact solution or, when an analytic solution is not available, a converged solution generated on an extremely fine grid. The overall comparison of error levels shows some variation from problem to problem. For Sod's shock tube, PLMDE has nearly half the error, while on the peak problem the errors are nearly the same. For the interacting blast wave problem the two methods again produce a similar level of error with a slight edge for the PLMDE. On the other hand, for the Shu-Osher problem, the errors are similar on the coarser grids, but favor WENO5 by a factor of nearly 1.5 on the finer grids used. In all cases holding mesh resolution constant though, PLMDE is less costly in terms of CPU time by approximately a factor of 6. 
If the CPU cost is taken as fixed, that is run times are
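The WENO5 reconstruction being benchmarked is compact enough to sketch. Below is an illustrative implementation of the classical Jiang-Shu fifth-order reconstruction of a cell-interface value from five cell averages; the ε value and linear weights are the standard textbook choices, not necessarily those of the paper's code test-bed.

```python
import numpy as np

def weno5_reconstruct(v):
    """Classical Jiang-Shu WENO5 reconstruction of the interface value
    v_{i+1/2} from the five cell averages v = (v_{i-2}, ..., v_{i+2})."""
    eps = 1e-6
    vm2, vm1, v0, vp1, vp2 = v
    # three third-order candidate reconstructions
    p0 = (2*vm2 - 7*vm1 + 11*v0) / 6.0
    p1 = (-vm1 + 5*v0 + 2*vp1) / 6.0
    p2 = (2*v0 + 5*vp1 - vp2) / 6.0
    # Jiang-Shu smoothness indicators of the three stencils
    b0 = 13/12*(vm2 - 2*vm1 + v0)**2 + 0.25*(vm2 - 4*vm1 + 3*v0)**2
    b1 = 13/12*(vm1 - 2*v0 + vp1)**2 + 0.25*(vm1 - vp1)**2
    b2 = 13/12*(v0 - 2*vp1 + vp2)**2 + 0.25*(3*v0 - 4*vp1 + vp2)**2
    # nonlinear weights: stencils crossing a discontinuity are switched off
    a = np.array([0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2])
    w = a / a.sum()
    return w[0]*p0 + w[1]*p1 + w[2]*p2
```

On smooth data the nonlinear weights revert to the linear ones and fifth-order accuracy is recovered; near a step the weights collapse onto the smooth upwind stencil, which is the "essentially non-oscillatory" property the comparison trades against PLMDE's lower cost.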

  3. A quantitative comparison of numerical methods for the compressible Euler equations: fifth-order WENO and piecewise-linear Godunov

    International Nuclear Information System (INIS)

    Greenough, J.A.; Rider, W.J.

    2004-01-01

A numerical study is undertaken comparing a fifth-order version of the weighted essentially non-oscillatory numerical (WENO5) method to a modern piecewise-linear, second-order, version of Godunov's (PLMDE) method for the compressible Euler equations. A series of one-dimensional test problems are examined beginning with classical linear problems and ending with complex shock interactions. The problems considered are: (1) linear advection of a Gaussian pulse in density, (2) Sod's shock tube problem, (3) the 'peak' shock tube problem, (4) a version of the Shu and Osher shock entropy wave interaction and (5) the Woodward and Colella interacting shock wave problem. For each problem and method, run times, density error norms and convergence rates are reported for each method as produced from a common code test-bed. The linear problem exhibits the advertised convergence rate for both methods as well as the expected large disparity in overall error levels; WENO5 has the smaller errors and an enormous advantage in overall efficiency (in accuracy per unit CPU time). For the nonlinear problems with discontinuities, however, we generally see first-order self-convergence of error, measured against either an exact solution or, when an analytic solution is not available, a converged solution generated on an extremely fine grid. The overall comparison of error levels shows some variation from problem to problem. For Sod's shock tube, PLMDE has nearly half the error, while on the peak problem the errors are nearly the same. For the interacting blast wave problem the two methods again produce a similar level of error with a slight edge for the PLMDE. On the other hand, for the Shu-Osher problem, the errors are similar on the coarser grids, but favor WENO5 by a factor of nearly 1.5 on the finer grids used. In all cases holding mesh resolution constant though, PLMDE is less costly in terms of CPU time by approximately a factor of 6. 
If the CPU cost is taken as fixed, that is run times are

  4. Application of artificial neural network methods for the lightning performance evaluation of Hellenic high voltage transmission lines

    Energy Technology Data Exchange (ETDEWEB)

    Ekonomou, L.; Gonos, I.F.; Iracleous, D.P.; Stathopulos, I.A. [National Technical University of Athens, School of Electrical and Computer Engineering, High Voltage Laboratory, 9 Iroon Politechniou St., Zografou, GR 157 80 Athens (Greece)

    2007-01-15

Feed-forward (FF) artificial neural networks (ANN) and radial basis function (RBF) ANN methods were addressed for evaluating the lightning performance of high voltage transmission lines. Several structures, learning algorithms and transfer functions were tested in order to produce a model with the best generalizing ability. Actual input and output data, collected from operating Hellenic high voltage transmission lines, as well as simulated output data were used in the training, validation and testing process. The aims of the paper are to describe in detail and compare the proposed FF and RBF ANN models, to state their advantages and disadvantages and to present results obtained by their application on operating Hellenic transmission lines of 150 kV and 400 kV. The ANN results are also compared with results obtained using conventional methods and with real records of outage rates, showing quite satisfactory agreement. The proposed ANN methods can be used by electric power utilities as useful tools for the design of electric power systems, alternative to the conventional analytical methods. (author)

  5. Larvas output and influence of human factor in reliability of meat inspection by the method of artificial digestion

    Directory of Open Access Journals (Sweden)

    Đorđević Vesna

    2013-01-01

Full Text Available Based on analyses of the factors that allowed infected meat to reach the food chain, we found that the infection occurred after consumption of meat inspected by the method of artificial digestion of collective samples using a magnetic stirrer (MM). This work presents assay results showing how modifications of the method at the level of final sedimentation influence the reliability of detection of Trichinella larvae in infected meat samples. It is shown that the use of inadequate laboratory containers for collecting larvae in final sedimentation, and changes in the volume of digestive liquid drawn off during the staining of preparations, can significantly influence inspection results. Larva detection errors ranged from 4 to 80% in the experimental groups, in contrast to the control group of samples inspected using the MM method carried out fully according to European Commission Regulation No 2075/2005, where no error in larva number per sample was found. We consider that the results of this work will contribute to improved control of the method's performance, especially at the critical points during inspection of meat samples for Trichinella larvae in Serbia.

  6. A symmetry-preserving discretisation and regularisation model for compressible flow with application to turbulent channel flow

    NARCIS (Netherlands)

    Rozema, W.; Kok, J. C.; Verstappen, R. W. C. P.; Veldman, A. E. P.

    2014-01-01

    Most simulation methods for compressible flow attain numerical stability at the cost of swamping the fine turbulent flow structures by artificial dissipation. This article demonstrates that numerical stability can also be attained by preserving conservation laws at the discrete level. A new

  7. Capillary electrophoresis method for the discrimination between natural and artificial vanilla flavour for controlling food frauds.

    Science.gov (United States)

    Lahouidak, Samah; Salghi, Rachid; Zougagh, Mohammed; Ríos, Angel

    2018-03-06

A capillary electrophoresis method was developed for the determination of coumarin (COUM), ethyl vanillin (EVA), p-hydroxybenzaldehyde (PHB), p-hydroxybenzoic acid (PHBA), vanillin (VAN), vanillic acid (VANA) and vanillic alcohol (VOH) in vanilla products. The measured concentrations are compared to values obtained by a liquid chromatography (LC) method. Analytical results, method precision, and accuracy data are presented, and limits of detection for the method ranged from 2 to 5 μg mL⁻¹. The results obtained are used in monitoring the composition of vanilla flavourings, as well as for confirmation of the natural or non-natural origin of vanilla, using four selected food samples containing this flavour. This article is protected by copyright. All rights reserved.

  8. A Comparison of Artificial Intelligence Methods on Determining Coronary Artery Disease

    Science.gov (United States)

    Babaoğlu, Ismail; Baykan, Ömer Kaan; Aygül, Nazif; Özdemir, Kurtuluş; Bayrak, Mehmet

The aim of this study is to compare a multi-layered perceptron neural network (MLPNN) and a support vector machine (SVM) for determining the existence of coronary artery disease from exercise stress testing (EST) data. EST and coronary angiography were performed on 480 patients, with 23 verifying features acquired from each. The robustness of the proposed methods is examined using classification accuracy, the k-fold cross-validation method and Cohen's kappa coefficient. The obtained classification accuracies are approximately 78% and 79% for MLPNN and SVM, respectively. Judged by Cohen's kappa coefficients, both the MLPNN and SVM methods are more satisfactory than the human-based method. Moreover, SVM is slightly better than MLPNN in terms of diagnostic accuracy, the average of sensitivity and specificity, and Cohen's kappa coefficient.
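Cohen's kappa, the chance-corrected agreement statistic used above to compare the classifiers, is simple to compute from two label sequences. A minimal sketch (generic, not the study's code):

```python
def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: observed agreement p_o corrected for the agreement
    p_e expected by chance, kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(y_true)
    labels = set(y_true) | set(y_pred)
    # observed agreement: fraction of matching labels
    p_o = sum(t == p for t, p in zip(y_true, y_pred)) / n
    # chance agreement: product of the two marginal label frequencies
    p_e = sum((y_true.count(c) / n) * (y_pred.count(c) / n) for c in labels)
    return (p_o - p_e) / (1 - p_e)
```

Kappa of 1 means perfect agreement and 0 means agreement no better than chance, which is why it is a stricter yardstick than raw accuracy for comparing the classifiers with the human reading.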

  9. Simulation embedded artificial intelligence search method for supplier trading portfolio decision

    DEFF Research Database (Denmark)

    Feng, Donghan; Yan, Z.; Østergaard, Jacob

    2010-01-01

An electric power supplier in the deregulated environment needs to allocate its generation capacities to participate in contract and spot markets. Different trading portfolios will provide suppliers with different future revenue streams of various distributions. The classical mean-variance (MV) method is inappropriate for dealing with trading portfolios whose return distribution is non-normal. In order to consider the non-normal characteristics in electricity trading, this study proposes a new model based on expected utility theory (EUT) and employs a hybrid genetic algorithm (GA) - Monte... The simulation results also reveal an accumulation effect along the trading period, which improves the normality of the supplier trading portfolios. The authors believe the proposed method is a useful complement to the MV method and conditional value at risk (CVaR)-based methods in the supplier trading...

  10. A Rapid Identification Method for Calamine Using Near-Infrared Spectroscopy Based on Multi-Reference Correlation Coefficient Method and Back Propagation Artificial Neural Network.

    Science.gov (United States)

    Sun, Yangbo; Chen, Long; Huang, Bisheng; Chen, Keli

    2017-07-01

As a mineral, the traditional Chinese medicine calamine has a similar appearance to many other minerals. Investigations of commercially available calamine samples have shown that many fake and inferior calamine goods are sold on the market. The conventional identification method for calamine is complicated; therefore, given the large number of calamine samples, a rapid identification method is needed. To establish a qualitative model using near-infrared (NIR) spectroscopy for rapid identification of various calamine samples, large quantities of calamine samples, including crude products, counterfeits and processed products, were collected and correctly identified using physicochemical and powder X-ray diffraction methods. The NIR spectroscopy method was used to analyze these samples by combining the multi-reference correlation coefficient (MRCC) method and the error back-propagation artificial neural network algorithm (BP-ANN), so as to realize the qualitative identification of calamine samples. The accuracy rate of the model based on NIR and MRCC methods was 85%; in addition, the model, which takes multiple factors into consideration, can be used to identify crude calamine products, counterfeits and processed products. Furthermore, by inputting the correlation coefficients of multiple references as the spectral feature data of the samples into BP-ANN, a BP-ANN model of qualitative identification was established, whose accuracy rate increased to 95%. The MRCC method can thus serve as a NIR-based feature-extraction step in the process of BP-ANN modeling.
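The MRCC feature-extraction step described above reduces each measured spectrum to its correlation coefficients against a set of reference spectra, and that coefficient vector becomes the classifier input. A minimal sketch of that step (the spectra below are invented toy vectors, and the BP-ANN that would consume the features is omitted):

```python
import numpy as np

def mrcc_features(sample, references):
    """Pearson correlation of a sample spectrum against each reference
    spectrum; the resulting coefficient vector is the classifier input."""
    return np.array([np.corrcoef(sample, ref)[0, 1] for ref in references])

# toy spectra: the sample matches reference 0 exactly, is anti-correlated with 1
refs = [np.array([1.0, 2.0, 3.0, 4.0]), np.array([4.0, 3.0, 2.0, 1.0])]
feats = mrcc_features(np.array([1.0, 2.0, 3.0, 4.0]), refs)
```

Thresholding a single correlation coefficient corresponds to the 85%-accurate model; feeding the whole vector of coefficients to a network corresponds to the 95%-accurate BP-ANN model.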

  11. Artificial insemination in poultry

    Science.gov (United States)

    Artificial insemination is a relative simple yet powerful tool geneticists can employ for the propagation of economically important traits in livestock and poultry. In this chapter, we address the fundamental methods of the artificial insemination of poultry, including semen collection, semen evalu...

  12. Images compression in nuclear medicine

    International Nuclear Information System (INIS)

    Rebelo, M.S.; Furuie, S.S.; Moura, L.

    1992-01-01

The performance of two methods for image compression in nuclear medicine was evaluated: LZW, an exact (lossless) method, and the cosine transform, an approximate (lossy) method. The results show that the approximate method produced images of acceptable quality for visual analysis, with compression rates considerably higher than those of the exact method. (C.G.C.)
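The exact (LZW) branch of the comparison is easy to reproduce in miniature. The following is a textbook LZW encoder/decoder pair for illustration, not the evaluated nuclear-medicine pipeline:

```python
def lzw_compress(data: bytes) -> list:
    """Minimal LZW encoder: emits a list of dictionary codes (lossless)."""
    table = {bytes([i]): i for i in range(256)}   # start with single-byte codes
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                                # extend the current phrase
        else:
            out.append(table[w])                  # emit the longest known phrase
            table[wc] = len(table)                # learn the new phrase
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

def lzw_decompress(codes: list) -> bytes:
    """Inverse of lzw_compress, rebuilding the dictionary on the fly."""
    table = {i: bytes([i]) for i in range(256)}
    w = table[codes[0]]
    out = [w]
    for k in codes[1:]:
        entry = table[k] if k in table else w + w[:1]   # the cScSc special case
        out.append(entry)
        table[len(table)] = w + entry[:1]
        w = entry
    return b"".join(out)

data = b"TOBEORNOTTOBEORTOBEORNOT"
codes = lzw_compress(data)
```

Because decompression reproduces the input bit-for-bit, LZW is "precise" in the abstract's sense; its compression ratio depends entirely on repetition in the data, which is why the transform-based lossy method achieved considerably higher rates on images.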

  13. Determining the bistability parameter ranges of artificially induced lac operon using the root locus method.

    Science.gov (United States)

    Avcu, N; Alyürük, H; Demir, G K; Pekergin, F; Cavas, L; Güzeliş, C

    2015-06-01

This paper employs the root locus method to conduct a detailed investigation of the parameter regions that ensure bistability in a well-studied gene regulatory network, namely the lac operon of Escherichia coli (E. coli). In contrast to previous works, the parametric bistability conditions observed in this study constitute a complete set of necessary and sufficient conditions. These conditions were derived by applying the root locus method to the polynomial equilibrium equation of the lac operon model to determine the parameter values yielding the multiple real roots necessary for bistability. The lac operon model used was defined as an ordinary differential equation system in state equation form with a rational right-hand side, and it was compatible with the Hill and Michaelis-Menten approaches of enzyme kinetics used to describe the biochemical reactions that govern lactose metabolism. The developed root locus method can be used to study the steady-state behavior of any type of convergent biological system model based on mass action kinetics. This method provides a solution to the problem of analyzing gene regulatory networks under parameter uncertainties because the root locus method considers the model parameters as variable, rather than fixed. The obtained bistability ranges for the lac operon model parameters have the potential to elucidate the appearance of bistability for E. coli cells in in vivo experiments, and they could also be used to design robust hysteretic switches in synthetic biology.
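The core computation, finding the parameter values at which the equilibrium polynomial acquires multiple positive real roots, can be sketched on a toy positive-feedback model. The cubic below comes from a generic Hill-type stand-in, not the paper's lac operon equations; bistability corresponds to three positive real equilibrium roots.

```python
import numpy as np

def count_positive_equilibria(alpha, beta, alpha0=0.05):
    """Equilibria of a toy Hill-type positive-feedback model
        dx/dt = alpha*x^2/(1 + x^2) + alpha0 - beta*x
    (an illustrative stand-in for the lac operon model).  Setting dx/dt = 0
    and clearing the denominator gives the cubic
        beta*x^3 - (alpha + alpha0)*x^2 + beta*x - alpha0 = 0.
    Bistability corresponds to three positive real roots (two stable states
    separated by an unstable one)."""
    coeffs = [beta, -(alpha + alpha0), beta, -alpha0]
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-9].real   # keep the (near-)real roots
    return int(np.sum(real > 0))
```

Scanning `alpha` (or `beta`) and watching the root count jump between 1 and 3 traces out exactly the kind of bistability boundary in parameter space that the paper derives analytically with the root locus.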

  14. Radio frequency pulse compression

    International Nuclear Information System (INIS)

    Farkas, Z.D.

    1988-12-01

High gradients require high peak powers. One possible way to generate high peak power is to generate a relatively long pulse at relatively low power and compress it into a shorter pulse with higher peak power. It is possible to compress before dc-to-rf conversion, as is done for the relativistic klystron, or after dc-to-rf conversion, as is done with SLED. In this note only radio frequency pulse compression (RFPC) is considered. Three methods of RFPC will be discussed: SLED, BEC, and REC. 3 refs., 8 figs., 1 tab
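The energy bookkeeping behind pulse compression is one line of arithmetic: the compressed peak power is bounded by the input pulse energy divided by the output pulse length, times a compression efficiency. A sketch with invented numbers (the 65% efficiency and the drive-pulse figures are illustrative assumptions, not values from the note):

```python
def compressed_peak_power(p_in, t_in, t_out, efficiency=0.65):
    """Energy bookkeeping for RF pulse compression: input energy p_in * t_in
    squeezed into a pulse of length t_out, scaled by a compression efficiency.
    Units cancel, e.g. power in MW and time in microseconds."""
    return efficiency * p_in * t_in / t_out

# hypothetical example: a 50 MW, 3.5 us drive pulse compressed to 0.7 us
peak = compressed_peak_power(50.0, 3.5, 0.7)   # MW
```

Even with substantial losses, a 5:1 compression in pulse length more than triples the peak power in this example, which is the motivation for SLED-type schemes.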

  15. Artificial Intelligence Methods in Analysis of Morphology of Selected Structures in Medical Images

    Directory of Open Access Journals (Sweden)

    Ryszard Tadeusiewicz

    2001-01-01

Full Text Available The goal of this paper is the presentation of the possibilities of application of syntactic methods of computer image analysis for recognition of local stenoses of the coronary artery lumen and detection of pathological signs in the upper parts of the ureter ducts and renal calyxes. Analysis of the correct morphology of these structures is possible thanks to the application of sequence and tree methods from the group of syntactic methods of pattern recognition. In the case of analysis of coronary artery images, the main objective is computer-aided early diagnosis of different forms of ischemic cardiovascular disease. Such diseases may manifest in the form of stable or unstable disturbances of heart rhythm or infarction. In the analysis of kidney radiograms, the main goal is recognition of local irregularities in ureter lumens and examination of the morphology of the renal pelvises and calyxes.

  16. Method in analysis of CdZnTe γ spectrum with artificial neural network

    International Nuclear Information System (INIS)

    Ai Xianyun; Wei Yixiang; Xiao Wuyun

    2005-01-01

The analysis of gamma-ray spectra to identify lines and their intensities usually requires expert knowledge and time-consuming calculations with complex fitting functions. CdZnTe detectors often exhibit an asymmetric peak shape, particularly at high energies, making peak fitting methods and sophisticated isotope identification programs difficult to use. This paper investigates the use of a neural network to process gamma spectra measured with a CdZnTe detector to verify nuclear materials. Results show that the neural network method offers advantages, in particular when large low-energy peak tailings are observed. (authors)

  17. Are Imaging and Lesioning Convergent Methods for Assessing Functional Specialisation? Investigations Using an Artificial Neural Network

    Science.gov (United States)

    Thomas, Michael S. C.; Purser, Harry R. M.; Tomlinson, Simon; Mareschal, Denis

    2012-01-01

    This article presents an investigation of the relationship between lesioning and neuroimaging methods of assessing functional specialisation, using synthetic brain imaging (SBI) and lesioning of a connectionist network of past-tense formation. The model comprised two processing "routes": one was a direct route between layers of input and output…

  18. Comparison of site preparation methods and stock types for artificial regeneration of oaks in bottomlands

    Science.gov (United States)

    Gordon W. Shaw; Daniel C. Dey; John Kabrick; Jennifer Grabner; Rose-Marie Muzika

    2003-01-01

    Regenerating oak in floodplains is problematic and current silvicultural methods are not always reliable. We are evaluating the field performance of a new nursery product, the RPM™ seedling, and the benefit of soil mounding and a cover crop of redtop grass to the survival and growth of pin oak and swamp white oak regeneration on former bottomland cropfields....

  19. On the Application of Formal Methods to Clinical Guidelines, an Artificial Intelligence Perspective

    NARCIS (Netherlands)

    Hommersom, A.J.

    2008-01-01

In computer science, all kinds of methods and techniques have been developed to study systems, such as simulation of the behaviour of a system. Furthermore, it is possible to study these systems by proving formal properties or by searching through all the possible states that a system may be

  20. Methods and procedures for the verification and validation of artificial neural networks

    CERN Document Server

    Taylor, Brian J

    2006-01-01

    Neural networks are members of a class of software that have the potential to enable intelligent computational systems capable of simulating characteristics of biological thinking and learning. This volume introduces some of the methods and techniques used for the verification and validation of neural networks and adaptive systems.

  1. Measurement correction method for force sensor used in dynamic pressure calibration based on artificial neural network optimized by genetic algorithm

    Science.gov (United States)

    Gu, Tingwei; Kong, Deren; Shang, Fei; Chen, Jing

    2017-12-01

We present an optimization algorithm to obtain low-uncertainty dynamic pressure measurements from a force-transducer-based device. In this paper, the advantages and disadvantages of the methods that are commonly used to measure the propellant powder gas pressure, the applicable scope of dynamic pressure calibration devices, and the shortcomings of the traditional comparison calibration method based on the drop-weight device are first analysed in detail. Then, a dynamic calibration method for measuring pressure using a force sensor based on a drop-weight device is introduced. This method can effectively save time when many pressure sensors are calibrated simultaneously and extend the life of expensive reference sensors. However, the force sensor is installed between the drop-weight and the hammerhead by transition pieces through the connection mode of bolt fastening, which causes adverse effects such as additional pretightening and inertia forces. To address these effects, the influence mechanisms of the pretightening force, the inertia force and other factors on the force measurement are theoretically analysed. Then a measurement correction method for the force measurement is proposed based on an artificial neural network optimized by a genetic algorithm. The training and testing data sets are obtained from calibration tests, and the selection criteria for the key parameters of the correction model are discussed. The evaluation results for the test data show that the correction model can effectively improve the force measurement accuracy of the force sensor. Compared with the traditional high-accuracy comparison calibration method, the percentage difference of the impact-force-based measurement is less than 0.6% and the relative uncertainty of the corrected force value is 1.95%, which can meet the requirements of engineering applications.

  2. To the problem of control methods unification of natural and artificial radionuclide admission into environment

    International Nuclear Information System (INIS)

    Gedeonov, L.I.

    1981-01-01

Radioactive substances (RAS) released into the environment during NPP operation create fields of increased radiation level compared with the natural background. Protecting the environment from intolerable contamination requires determination of effluent norms, in terms of the concentration and quantity of RAS released to the environment, for every source. The necessity of unifying the methods for radionuclide monitoring of the environment, as well as the means and conditions of this monitoring, is discussed

  3. The method of solution of equations with coefficients that contain measurement errors, using artificial neural network.

    Science.gov (United States)

    Zajkowski, Konrad

This paper presents an algorithm for solving N equations in N unknowns. The algorithm can determine a solution even when the coefficients A_i in the equations are burdened with measurement errors. For some values of A_i (where i = 1, …, N), no inverse function of the input equations exists; in such cases, the solution cannot be determined by classical methods.
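The failure mode this record refers to can be made concrete with a toy example (hypothetical numbers, not from the paper): measurement error pushes the coefficient matrix into singularity, so classical inversion fails, while a minimum-norm least-squares fallback still returns a finite answer. The paper's neural-network approach targets exactly this regime.

```python
import numpy as np

# Exact system A x = b with a well-conditioned coefficient matrix
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
x_true = np.array([1.0, 2.0])
b = A @ x_true

# "Measurement error" makes the coefficients singular: the second row
# is now a multiple of the first, so A_err has no inverse.
A_err = np.array([[2.0, 1.0],
                  [4.0, 2.0]])
print("det(A_err) =", np.linalg.det(A_err))   # 0: classical inversion fails

# The pseudo-inverse route still yields the minimum-norm least-squares solution
x_ls, residual, rank, _ = np.linalg.lstsq(A_err, b, rcond=None)
print("rank:", rank, "x_ls:", x_ls)
```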

  4. An Applied Method for Predicting the Load-Carrying Capacity in Compression of Thin-Wall Composite Structures with Impact Damage

    Science.gov (United States)

    Mitrofanov, O.; Pavelko, I.; Varickis, S.; Vagele, A.

    2018-03-01

The necessity of considering both strength criteria and postbuckling effects in calculating the load-carrying capacity in compression of thin-wall composite structures with impact damage is substantiated. An original applied method ensuring the solution of these problems with an accuracy sufficient for practical design tasks is developed. The main advantage of the method is its modest demand for computing resources and initial data. The results of applying the method to the compression of fragments of a thin-wall honeycomb panel damaged by impacts of various energies are presented. After a comparison of the calculation results with experimental data, a working algorithm for calculating the reduction in the load-carrying capacity of a composite object with impact damage is adopted.

  5. IMPACT OF COMPRESSED AIR PRESSURE ON GEOMETRIC STRUCTURE OF AISI 1045 STEEL SURFACE AFTER TURNING WITH THE USE OF MQCL METHOD

    Directory of Open Access Journals (Sweden)

    Radoslaw Wojciech Maruda

    2016-06-01

MQL (Minimum Quantity Lubrication) and MQCL (Minimum Quantity Cooling Lubrication) methods have become alternatives to dry machining and deluge cooling. Given the growing interest in the MQCL method, this article examines the impact of compressed air pressure, one of the basic parameters governing the generation of the emulsion mist used in the MQCL method, on the geometric structure of the surface after turning AISI 1045 carbon steel. The paper presents measurements of the machined surface roughness parameters Ra, Rz and RSm, as well as roughness profiles and Abbott-Firestone curves. It was found that increasing the compressed air pressure from 1 to 7 MPa increases the roughness of the machined surface (the lowest values were obtained at a pressure of 1 MPa). An increase in the emulsion mass flow rate also increases the values of selected roughness parameters of the machined surface.

  6. A time-released osmotic pump fabricated by compression-coated method: Formulation screen, mechanism research and pharmacokinetic study

    Directory of Open Access Journals (Sweden)

    Tiegang Xin

    2014-08-01

In this investigation, time-released monolithic osmotic pump (TMOP) tablets containing diltiazem hydrochloride (DIL) were prepared on the basis of the osmotic pumping mechanism. The developed dosage forms were coated with Kollidon® SR-polyethylene glycol (PEG) mixtures via compression-coating technology, instead of a spray-coating method, to form the outer membrane. For more efficient formulation screening, a three-factor five-level central composite design (CCD) was introduced to explore the optimal TMOP formulation. The in vitro tests showed that the optimized formulation of DIL-loaded TMOP had a lag time of 4 h followed by 20 h of drug release at an approximately zero-order rate. Moreover, the release mechanism was shown to be based on osmotic pressure, and its profile could be well simulated by a dynamic equation. After oral administration to beagle dogs, comparison of the parameters of the TMOP tablets and the reference preparations showed no significant differences for Cmax (111.56 ± 20.42 vs. 128.38 ± 29.46 ng/ml) and AUC0-48 h (1654.97 ± 283.77 vs. 1625.10 ± 313.58 ng h/ml), but a significant difference for Tmax (13.00 ± 1.16 vs. 4.00 ± 0.82 h). These pharmacokinetic parameters were consistent with the dissolution tests, confirming that the TMOP tablets prolonged the lag time of DIL release.

  7. Diametral compression behavior of biomedical titanium scaffolds with open, interconnected pores prepared with the space holder method.

    Science.gov (United States)

    Arifvianto, B; Leeflang, M A; Zhou, J

    2017-04-01

Scaffolds with open, interconnected pores and appropriate mechanical properties are required to provide mechanical support and to guide the formation and development of new tissue in bone tissue engineering. Since the mechanical properties of a scaffold tend to decrease with increasing porosity, a balance must be sought between these two conflicting requirements. In this research, the open, interconnected pores and the mechanical properties of biomedical titanium scaffolds prepared using the space holder method were characterized. Micro-computed tomography (micro-CT) and permeability analysis were carried out to quantify the porous structures and to confirm the presence of open, interconnected pores in the fabricated scaffolds. Diametral compression (DC) tests were performed to generate stress-strain diagrams from which the elastic moduli and yield strengths of the scaffolds were determined. The deformation and failure mechanisms involved in the DC tests of the titanium scaffolds were examined. The micro-CT and permeability analyses confirmed the presence of open, interconnected pores in the titanium scaffolds over a porosity range of 31-61%. Among these scaffolds, the maximum specific surface area was achieved at a total porosity of 50-55%. The DC tests showed that titanium scaffolds with elastic moduli of 0.64-3.47 GPa and yield strengths of 28.67-80 MPa could be achieved. On comprehensive consideration of specific surface area, permeability and mechanical properties, titanium scaffolds with porosities in the range of 50-55% were recommended for use in cancellous bone tissue engineering. Copyright © 2017 Elsevier Ltd. All rights reserved.
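For reference, the nominal stress in a diametral compression (Brazilian-type) test of a disc is usually taken as σ = 2P/(πDt). The sketch below uses that standard relation with hypothetical specimen numbers; it is not necessarily the exact data reduction used by the authors.

```python
import math

def diametral_compression_stress(load_N, diameter_m, thickness_m):
    """Nominal tensile stress at the centre of a disc loaded across a
    diameter: sigma = 2 P / (pi * D * t)."""
    return 2.0 * load_N / (math.pi * diameter_m * thickness_m)

# Hypothetical scaffold disc: 500 N load, 10 mm diameter, 3 mm thickness
sigma_Pa = diametral_compression_stress(500.0, 0.010, 0.003)
print(round(sigma_Pa / 1e6, 1), "MPa")   # ≈ 10.6 MPa
```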

  8. Investigation of test methods for measuring compressive strength and modulus of two-dimensional carbon-carbon composites

    Science.gov (United States)

    Ohlhorst, Craig W.; Sawyer, James Wayne; Yamaki, Y. Robert

    1989-01-01

An experimental evaluation has been conducted to ascertain the usefulness of two techniques for measuring in-plane compressive failure strength and modulus in coated and uncoated carbon-carbon composites. The techniques involved testing specimens with potted ends and testing them in a novel clamping fixture; specimen shape, length, gage width, and thickness were the test parameters investigated for both coated and uncoated 0/90 deg and +/-45 deg laminates. It is found that specimen shape does not have a significant effect on the measured compressive properties. Potting the specimen ends results in slightly higher measured compressive strengths than those obtained with the new clamping fixture. Comparable modulus values are obtained by both techniques.

  9. An independent evaluation of a new method for automated interpretation of lung scintigrams using artificial neural networks

    International Nuclear Information System (INIS)

    Holst, H.; Jaerund, A.; Evander, E.; Taegil, K.; Edenbrandt, L.; Maare, K.; Aastroem, K.; Ohlsson, M.

    2001-01-01

The purpose of this study was to evaluate a new automated method for the interpretation of lung perfusion scintigrams using patients from a hospital other than the one where the method was developed, and then to compare the performance of the technique against that of experienced physicians. A total of 1,087 scintigrams from patients with suspected pulmonary embolism comprised the training group. The test group consisted of scintigrams from 140 patients collected in a different hospital from that of the training group. An artificial neural network was trained using 18 automatically obtained features from each set of perfusion scintigrams. The image processing techniques included alignment to templates, construction of quotient images based on the perfusion/template images, and finally calculation of features describing segmental perfusion defects in the quotient images. The templates represented lungs of normal size and shape without any pathological changes. The performance of the neural network was compared with that of three experienced physicians who read the same test scintigrams according to the modified PIOPED criteria using, in addition to perfusion images, ventilation images when available and chest radiographs for all patients. Performance was measured as the area under the receiver operating characteristic curve. The performance of the neural network evaluated in the test group was 0.88 (95% confidence limits 0.81-0.94). The performance of the three experienced experts was in the range 0.87-0.93 when using the perfusion images, chest radiographs and ventilation images when available. Perfusion scintigrams can thus be interpreted for the diagnosis of pulmonary embolism by an automated method even in a hospital other than the one where it was developed. The performance of this method is similar to that of experienced physicians, even though the physicians, in addition to perfusion images, also had access to ventilation images for

  10. Resistance Monitoring of Four Insecticides and a Description of an Artificial Diet Incorporation Method for Chilo suppressalis (Lepidoptera: Crambidae).

    Science.gov (United States)

    Shuijin, Huang; Qiong, Chen; Wenjing, Qin; Yang, Sun; Houguo, Qin

    2017-12-05

    Chilo suppressalis (Walker; Lepidoptera: Crambidae) is one of the most damaging rice pests in China. Insecticides play a major role in its management. We describe how we monitored the resistance of C. suppressalis to four insecticides in seven field populations from Jiangxi, Hubei, and Hunan Provinces, China, in 2014-2016. The topical application method for resistance monitoring was suitable for triazophos, monosultap, and abamectin. The conventional rice seedling dipping method proved ineffective for testing chlorantraniliprole so the new artificial diet incorporation method was substituted. This new method provided more consistent results than the other methods, once baseline toxicity data had been established. All populations had moderate to high resistance to triazophos from 2014 to 2016. Monosultap resistance in two populations increased from low in 2014 to moderate in 2016 and the other five populations showed moderate to high-level resistance throughout. Abamectin resistance in three populations increased from sensitive or low in 2014 to moderate in 2015-2016, and the other populations had moderate to high levels of resistance. Resistance to chlorantraniliprole increased from sensitive or low in 2014 to moderate to high in 2016. These results suggested that resistance management strategies should be developed according to the needs of a specific location. It was suggested that, in these localities, organophosphate insecticides should be prohibited, the application of nereistoxin, macrolide antibiotic, and diamide insecticides should be limited, and other insecticides, including spinetoram and methoxyfenozide, that exhibited no resistance should be used rationally and in rotation to delay resistance development. © The Author(s) 2017. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  11. Global search for low-lying crystal structures using the artificial force induced reaction method: A case study on carbon

    Science.gov (United States)

    Takagi, Makito; Taketsugu, Tetsuya; Kino, Hiori; Tateyama, Yoshitaka; Terakura, Kiyoyuki; Maeda, Satoshi

    2017-05-01

We propose an approach to perform the global search for low-lying crystal structures from first principles, by combining the artificial force induced reaction (AFIR) method and periodic boundary conditions (PBCs). The AFIR method has been applied extensively to molecular systems to elucidate the mechanism of chemical reactions such as homogeneous catalysis. The present PBC/AFIR approach found 274 local minima for carbon crystals in the C8 unit cell described by the generalized gradient approximation Perdew-Burke-Ernzerhof functional. Among many newly predicted structures, three low-lying structures, which exhibit somewhat higher energy compared with those previously predicted, such as Cco-C8 (Z-carbon) and M-carbon, are further discussed with calculations of phonon and band dispersion curves. Furthermore, approaches to systematically explore two- or one-dimensional periodic structures are proposed and applied to the C8 unit cell with the slab model. These results suggest that the present approach is highly promising for predicting crystal structures.

  12. Application of stochastic and artificial intelligence methods for nuclear material identification

    International Nuclear Information System (INIS)

    Pozzi, S.; Segovia, F.J.

    1999-01-01

Nuclear materials safeguard efforts necessitate the use of non-destructive methods to determine the attributes of fissile samples enclosed in special, non-accessible containers. To this end, a large variety of methods has been developed at the Oak Ridge National Laboratory (ORNL) and elsewhere. Usually, a given set of statistics of the stochastic neutron-photon coupled field, such as source-detector and detector-detector cross correlation functions and multiplicities, are measured over a range of known samples to develop calibration algorithms. In this manner, the attributes of unknown samples can be inferred by the use of the calibration results. The organization of this paper is as follows: Section 2 describes the Monte Carlo simulations of source-detector cross correlation functions for a set of uranium metallic samples interrogated by the neutrons and photons from a 252Cf source. From this database, a set of features is extracted in Section 3. The use of neural networks (NN) and genetic programming to provide sample mass and enrichment values from the input sets of features is illustrated in Sections 4 and 5, respectively. Section 6 is a comparison of the results, while Section 7 is a brief summary of the work

  13. Application of stochastic and artificial intelligence methods for nuclear material identification

    Energy Technology Data Exchange (ETDEWEB)

    Pozzi, S.; Segovia, F.J.

    1999-12-01

Nuclear materials safeguard efforts necessitate the use of non-destructive methods to determine the attributes of fissile samples enclosed in special, non-accessible containers. To this end, a large variety of methods has been developed at the Oak Ridge National Laboratory (ORNL) and elsewhere. Usually, a given set of statistics of the stochastic neutron-photon coupled field, such as source-detector and detector-detector cross correlation functions and multiplicities, are measured over a range of known samples to develop calibration algorithms. In this manner, the attributes of unknown samples can be inferred by the use of the calibration results. The organization of this paper is as follows: Section 2 describes the Monte Carlo simulations of source-detector cross correlation functions for a set of uranium metallic samples interrogated by the neutrons and photons from a 252Cf source. From this database, a set of features is extracted in Section 3. The use of neural networks (NN) and genetic programming to provide sample mass and enrichment values from the input sets of features is illustrated in Sections 4 and 5, respectively. Section 6 is a comparison of the results, while Section 7 is a brief summary of the work.

  14. Artificial viscosity method for the design of supercritical airfoils. [Analysis code H

    Energy Technology Data Exchange (ETDEWEB)

    McFadden, G.B.

    1979-07-01

    The need for increased efficiency in the use of our energy resources has stimulated applied research in many areas. Recently progress has been made in the field of aerodynamics, where the development of the supercritical wing promises significant savings in the fuel consumption of aircraft operating near the speed of sound. Computational transonic aerodynamics has proved to be a useful tool in the design and evaluation of these wings. A numerical technique for the design of two-dimensional supercritical wing sections with low wave drag is presented. The method is actually a design mode of the analysis code H developed by Bauer, Garabedian, and Korn. This analysis code gives excellent agreement with experimental results and is used widely by the aircraft industry. The addition of a conceptually simple design version should make this code even more useful to the engineering public.

  15. Investigating the effect of unloading on artificial sandstone behaviour using the Discrete Element Method

    Directory of Open Access Journals (Sweden)

    Huang Yueqin

    2017-01-01

The Discrete Element Method (DEM) was used to simulate the mechanical behaviour of a reservoir sandstone. Triaxial tests were carried out using 3D DEM to simulate the stress-strain behaviour of a sandstone, with comparisons made between the numerical tests and the laboratory tests. The influence of isotropic unloading was investigated; its impact on bond breakages was successfully captured in the 3D shearing processes. It was found that bond breakages correlated strongly with the stress-strain behaviour of the sandstone, affecting the peak strength. It was also found that unloading affected the bond breakages, which in turn changed the mechanical behaviour of the sandstone. The tangent stiffnesses of simulated virgin and cored samples under different confining stresses were compared. From the tangent stiffnesses, gross yield envelopes and yielding surfaces for unloaded samples and virgin samples were plotted and analysed in detail.

  16. [Prophylactics of pyo-inflammatory complications in the wire area during treatment by the method of transosseous compressive-distractive osteosynthesis with probiotic "Sporobacterin liquid"].

    Science.gov (United States)

    Alimov, D V; Solnyshkova, T G; Safronov, A A

    2008-01-01

Prophylaxis of surgical infections is one of the principal problems in the use of any surgical method, the method of transosseous osteosynthesis included. Preventive treatment is considered one of the possible ways to decrease the number of pyo-inflammatory complications. However, unjustified antibiotic therapy has a negative effect and is often followed by side reactions and complications. This experimental investigation provides grounds for a method of prophylaxis of pyo-inflammatory complications in the wire area during treatment by the method of extrafocal compressive-distractive osteosynthesis with the new-generation probiotic "Sporobacterin liquid".

  17. Artificial intelligence/fuzzy logic method for analysis of combined signals from heavy metal chemical sensors

    Energy Technology Data Exchange (ETDEWEB)

    Turek, M. [Institute of Nano- and Biotechnologies (INB), Aachen University of Applied Sciences, Campus Juelich, Juelich (Germany); Institute of Bio- and Nanosystems (IBN), Research Centre Juelich GmbH, Juelich (Germany); Heiden, W.; Riesen, A. [Bonn-Rhein-Sieg University of Applied Sciences, Sankt Augustin (Germany); Chhabda, T.A. [Institute of Nano- and Biotechnologies (INB), Aachen University of Applied Sciences, Campus Juelich, Juelich (Germany); Schubert, J.; Zander, W. [Institute of Bio- and Nanosystems (IBN), Research Centre Juelich GmbH, Juelich (Germany); Krueger, P. [Institute of Biochemistry and Molecular Biology, RWTH Aachen, Aachen (Germany); Keusgen, M. [Institute for Pharmaceutical Chemistry, Philipps-University Marburg, Marburg (Germany); Schoening, M.J. [Institute of Nano- and Biotechnologies (INB), Aachen University of Applied Sciences, Campus Juelich, Juelich (Germany); Institute of Bio- and Nanosystems (IBN), Research Centre Juelich GmbH, Juelich (Germany)], E-mail: m.j.schoening@fz-juelich.de

    2009-10-30

The cross-sensitivity of chemical sensors for several metal ions resembles, in a way, the overlapping sensitivity of some biological sensors, like the optical colour receptors of human retinal cone cells. While it is difficult to assign crisp classification values to measurands based on complex overlapping sensory signals, fuzzy logic offers a possibility to model such systems mathematically. Current work is directed towards mixed heavy metal solutions and the combination of fuzzy logic with heavy metal-sensitive, silicon-based chemical sensors for training scenarios of arbitrary sensor/probe combinations, in the sense of an electronic tongue. Heavy metals play an important role in environmental analysis; they occur in the environment as trace elements and as water impurities released from industrial processes. In this work, the development of a new fuzzy logic method based on potentiometric measurements performed with three different miniaturised chalcogenide glass sensors in different heavy metal solutions will be presented. The critical validation of the developed fuzzy logic program will be demonstrated by means of measurements in unknown single- and multi-component heavy metal solutions. Limitations of this program and a comparison between calculated and expected values in terms of analyte composition and heavy metal ion concentration will be shown and discussed.

  18. Three dimensional simulation of compressible and incompressible flows through the finite element method; Simulacao tridimensional de escoamentos compressiveis e incompressiveis atraves do metodo dos elementos finitos

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Gustavo Koury

    2004-11-15

Although incompressible fluid flows can be regarded as a particular case of a general problem, the numerical methods and mathematical formulations aimed at solving compressible and incompressible flows have their own peculiarities, in such a way that it is generally not possible to handle both regimes with a single approach. In this work, we start from a typically compressible formulation, slightly modified to make use of pressure variables, and, through augmenting the stabilising parameters, we end up with a simplified model which is able to deal with a wide range of flow regimes, from supersonic to low-speed gas flows. The resulting methodology is flexible enough to allow for the simulation of liquid flows as well. Examples using conservative and pressure variables are shown, and the results are compared to those published in the literature in order to validate the method. (author)

  19. Compressed Counting Meets Compressed Sensing

    OpenAIRE

    Li, Ping; Zhang, Cun-Hui; Zhang, Tong

    2013-01-01

    Compressed sensing (sparse signal recovery) has been a popular and important research topic in recent years. By observing that natural signals are often nonnegative, we propose a new framework for nonnegative signal recovery using Compressed Counting (CC). CC is a technique built on maximally-skewed p-stable random projections originally developed for data stream computations. Our recovery procedure is computationally very efficient in that it requires only one linear scan of the coordinates....
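The CC estimator itself relies on maximally-skewed p-stable random projections; as a generic sketch of the underlying task this record addresses, recovering a nonnegative sparse signal from a few linear measurements, here is a toy with a Gaussian sensing matrix and SciPy's nonnegative least-squares solver (not the CC algorithm, and all sizes are invented):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

n, k, m = 100, 3, 40                 # signal length, sparsity, measurements
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 5.0, k)  # nonnegative sparse signal

Phi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))   # sensing matrix
y = Phi @ x                                            # m linear measurements

# Nonnegativity acts as the recovery prior, as in the CC framework
x_hat, rnorm = nnls(Phi, y)
print("residual:", rnorm, "max recovery error:", np.max(np.abs(x_hat - x)))
```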

  20. The impact of chest compression rates on quality of chest compressions : a manikin study

    OpenAIRE

    Field, Richard A.; Soar, Jasmeet; Davies, Robin P.; Akhtar, Naheed; Perkins, Gavin D.

    2012-01-01

    Purpose\\ud Chest compressions are often performed at a variable rate during cardiopulmonary resuscitation (CPR). The effect of compression rate on other chest compression quality variables (compression depth, duty-cycle, leaning, performance decay over time) is unknown. This randomised controlled cross-over manikin study examined the effect of different compression rates on the other chest compression quality variables.\\ud Methods\\ud Twenty healthcare professionals performed two minutes of co...

  1. Comparison of in-situ gamma ray spectrometry measurements with conventional methods in determination natural and artificial nuclides in soil

    International Nuclear Information System (INIS)

    Al-Masri, M. S.; Doubal, A. W.

    2010-12-01

Two nuclear analytical techniques for the determination of natural and artificial radionuclides in soil, laboratory gamma-ray spectrometry and in-situ gamma-ray spectrometry, have been validated. The first technique depends on determination of the radioactivity content of representative samples of the studied soil after laboratory preparation, while the second is based on direct determination of the radioactivity content of the soil using an in-situ gamma-ray spectrometer. Analytical validation parameters such as detection limits, repeatability and reproducibility, in addition to measurement uncertainties, were estimated and compared for both techniques. The comparison showed that the determination of radioactivity in soil should apply the two techniques together, since each technique is characterized by a detection limit and uncertainty suited to a defined measurement application. Radioactive isotopes at various locations were determined using the two methods by measuring 40K, 238U and 137Cs. The results showed that there are differences in attenuation factors due to differences in soil moisture content; wet weight corrections should be applied when the two techniques are compared. (author)

  2. Facile control of silica nanoparticles using a novel solvent varying method for the fabrication of artificial opal photonic crystals

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Weihong [The University of Manchester, School of Materials (United Kingdom); Rigout, Muriel [University of Leeds, School of Design (United Kingdom); Owens, Huw, E-mail: Huw.Owens@manchester.ac.uk [The University of Manchester, School of Materials (United Kingdom)

    2016-12-15

    In this work, the Stöber process was applied to produce uniform silica nanoparticles (SNPs) in the meso-scale size range. The novel aspect of this work was to control the produced silica particle size by only varying the volume of the solvent ethanol used, whilst fixing the other reaction conditions. Using this one-step Stöber-based solvent varying (SV) method, seven batches of SNPs with target diameters ranging from 70 to 400 nm were repeatedly reproduced, and the size distribution in terms of the polydispersity index (PDI) was well maintained (within 0.1). An exponential equation was used to fit the relationship between the particle diameter and ethanol volume. This equation allows the prediction of the amount of ethanol required in order to produce particles of any target diameter within this size range. In addition, it was found that the reaction was completed in approximately 2 h for all batches regardless of the volume of ethanol. Structurally coloured artificial opal photonic crystals (PCs) were fabricated from the prepared SNPs by self-assembly under gravity sedimentation.
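The diameter-versus-ethanol-volume fit described in this record can be reproduced in miniature with a standard curve fit. The data points, the decreasing-exponential form and all constants below are invented for illustration, not taken from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

# Invented calibration points: ethanol volume (ml) vs. SNP diameter (nm),
# generated from an assumed exponential law plus a little noise.
V = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0])
d = 420.0 * np.exp(-0.025 * V) + np.random.default_rng(2).normal(0.0, 3.0, V.size)

def model(v, a, b):
    """Assumed exponential law: d = a * exp(-b * v)."""
    return a * np.exp(-b * v)

(a, b), _ = curve_fit(model, V, d, p0=(400.0, 0.02))

# Invert the fitted law: ethanol volume needed for a 150 nm target diameter
V_needed = -np.log(150.0 / a) / b
print(f"a = {a:.0f} nm, b = {b:.4f} 1/ml, V(150 nm) = {V_needed:.1f} ml")
```

This is the kind of prediction step the abstract mentions: once the equation is fitted, the required ethanol volume for any target diameter within the calibrated range follows by inverting it.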

  3. Facile control of silica nanoparticles using a novel solvent varying method for the fabrication of artificial opal photonic crystals

    International Nuclear Information System (INIS)

    Gao, Weihong; Rigout, Muriel; Owens, Huw

    2016-01-01

    In this work, the Stöber process was applied to produce uniform silica nanoparticles (SNPs) in the meso-scale size range. The novel aspect of this work was to control the produced silica particle size by only varying the volume of the solvent ethanol used, whilst fixing the other reaction conditions. Using this one-step Stöber-based solvent varying (SV) method, seven batches of SNPs with target diameters ranging from 70 to 400 nm were repeatedly reproduced, and the size distribution in terms of the polydispersity index (PDI) was well maintained (within 0.1). An exponential equation was used to fit the relationship between the particle diameter and ethanol volume. This equation allows the prediction of the amount of ethanol required in order to produce particles of any target diameter within this size range. In addition, it was found that the reaction was completed in approximately 2 h for all batches regardless of the volume of ethanol. Structurally coloured artificial opal photonic crystals (PCs) were fabricated from the prepared SNPs by self-assembly under gravity sedimentation.

  4. A Robust Intelligent Framework for Multiple Response Statistical Optimization Problems Based on Artificial Neural Network and Taguchi Method

    Directory of Open Access Journals (Sweden)

    Ali Salmasnia

    2012-01-01

An important problem encountered in product or process design is the setting of process variables to meet a required specification of quality characteristics (response variables), called a multiple response optimization (MRO) problem. Common optimization approaches often begin with estimating the relationship between the response variables and the process variables. Among these methods, response surface methodology (RSM), owing to its simplicity, has attracted the most attention in recent years. However, in many manufacturing cases, on the one hand, the relationship between the response variables and the process variables is far too complex to be efficiently estimated; on the other hand, solving such an optimization problem with exact techniques is problematic. The alternative approach presented in this paper is to use an artificial neural network to estimate the response functions and to employ heuristic algorithms for process optimization. In addition, the proposed approach uses the Taguchi robust parameter design to overcome a common limitation of existing multiple response approaches, which typically ignore the dispersion effect of the responses. The paper presents a case study to illustrate the effectiveness of the proposed intelligent framework for tackling multiple response optimization problems.

  5. A new method for 3D thinning of hybrid shaped porous media using artificial intelligence. Application to trabecular bone.

    Science.gov (United States)

    Jennane, Rachid; Aufort, Gabriel; Benhamou, Claude Laurent; Ceylan, Murat; Ozbay, Yüksel; Ucan, Osman Nuri

    2012-04-01

Curve and surface thinning are widely used skeletonization techniques for modeling objects in three dimensions. In the case of disordered porous media analysis, however, neither is really efficient, since the internal geometry of the object is usually composed of both rod and plate shapes. This paper presents an alternative to compute a hybrid shape-dependent skeleton and its application to porous media. The resulting skeleton combines 2D surfaces and 1D curves to represent, respectively, the plate-shaped and rod-shaped parts of the object. For this purpose, a new technique based on neural networks is proposed: cascade combinations of the complex wavelet transform (CWT) and a complex-valued artificial neural network (CVANN). The ability of the skeleton to characterize hybrid-shaped porous media is demonstrated on a trabecular bone sample. Results show that the proposed method achieves high accuracy rates of about 99.78%-99.97%. In particular, the CWT (2nd level)-CVANN structure converges to optimum results with a high accuracy rate and minimum time consumption.

  6. Development of co-processed excipients in the design and evaluation of atorvastatin calcium tablets by direct compression method.

    Science.gov (United States)

    Pusapati, Ravi Teja; Kumar, Mvr Kalyan; Rapeti, Siva Satyanandam; Murthy, Tegk

    2014-04-01

    Co-processed excipients were prepared to improve the processability and efficacy of commonly used excipients and to impart multi-functional qualities to the excipients, so that tablets with the desired attributes can be produced. In this study, acacia and calcium carbonate (CaCO3) were used to prepare a co-processed excipient suitable for the preparation of atorvastatin calcium tablets. Acacia is used as binder and CaCO3 as filler. CaCO3 also acts as an alkalizer and is thus suitable for improving the dissolution rate of drugs with pH-dependent solubility, such as atorvastatin. The tablets were prepared by the direct compression method, and physical properties of the tablets such as hardness, friability and dissolution profiles were evaluated. Acacia was used in the form of mucilage. Various ratios of the co-processed excipients were formulated by a granulation technique, and the blend properties were evaluated by their Hausner's ratio and Carr's index values. Based on the Kawakita plots, it was found that the formulation with 3% acacia mucilage (0.9 mg acacia and 26.6 mg of CaCO3) showed good fluidity, while the formulations with 4% (1.27 mg of acacia and 26.23 mg of CaCO3) and 5% acacia mucilage (1.62 mg of acacia and 25.88 mg of CaCO3) showed more cohesiveness. The formulations include 1-5% of the acacia mucilage as the binding agent. The granules of formulations with a low percentage of acacia mucilage (1% and 2%) failed the test for friability. The granules of the formulations with pure acacia (F1) and pure CaCO3 (F2) showed passable flow properties. The formulation with 3% acacia mucilage (F3, 0.9 mg acacia and 26.6 mg of CaCO3) showed the shortest dissolution time (<1 min) and was found to be the best formulation among those containing 4% (F4, 1.27 mg of acacia and 26.23 mg of CaCO3) and 5% (F5, 1.62 mg of acacia and 25.88 mg of CaCO3) acacia mucilage.
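
    The flowability measures mentioned above, Hausner's ratio and Carr's (compressibility) index, are simple functions of bulk and tapped density. A quick sketch with illustrative, made-up densities (not values from the study):

```python
def hausner_ratio(bulk_density, tapped_density):
    """Hausner's ratio = tapped / bulk density; values below ~1.25
    are conventionally taken to indicate good flowability."""
    return tapped_density / bulk_density

def carr_index(bulk_density, tapped_density):
    """Carr's (compressibility) index in percent; values up to ~15%
    are conventionally taken to indicate good flowability."""
    return 100.0 * (tapped_density - bulk_density) / tapped_density

# Illustrative densities in g/mL (hypothetical powder blend)
bulk, tapped = 0.48, 0.56
print(round(hausner_ratio(bulk, tapped), 3))  # 1.167
print(round(carr_index(bulk, tapped), 1))     # 14.3
```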

  7. An Object-Based Image Analysis Method for Monitoring Land Conversion by Artificial Sprawl Use of RapidEye and IRS Data

    Directory of Open Access Journals (Sweden)

    Maud Balestrat

    2012-02-01

    Full Text Available In France, in the peri-urban context, urban sprawl dynamics are particularly strong, with huge population growth as well as a land crisis. The increase and spreading of built-up areas from the city centre towards the periphery takes place to the detriment of natural and agricultural spaces. The conversion of land with agricultural potential is all the more worrying as it is usually irreversible. The French Ministry of Agriculture therefore needs reliable and repeatable spatial-temporal methods to locate and quantify loss of land at both local and national scales. The main objective of this study was to design a repeatable method to monitor land conversion characterized by artificial sprawl: (i) We used an object-based image analysis to extract artificial areas from satellite images; (ii) We built an artificial patch that consists of aggregating all the peripheral areas that characterize artificial areas. The “artificialized” patch concept is an innovative extension of the urban patch concept, but differs in the nature of its components and in the continuity distance applied; (iii) The diachronic analysis of artificial patch maps enables characterization of artificial sprawl. The method was applied at the scale of four departments (similar to provinces) along the coast of Languedoc-Roussillon, in the South of France, based on two satellite datasets, one acquired in 1996–1997 (Indian Remote Sensing) and the other in 2009 (RapidEye). In the four departments, we measured an increase in artificial areas from 113,000 ha in 1997 to 133,000 ha in 2009, i.e., an 18% increase in 12 years. The result is a cartography valid at the 1/15,000 scale, usable at the scale of a commune (the smallest territorial division used for administrative purposes in France) and adaptable to departmental and regional scales. The method is reproducible in homogenous spatial-temporal terms, so that it could be used periodically to assess changes in land conversion.

  8. Narrowing of the middle cerebral artery: artificial intelligence methods and comparison of transcranial color coded duplex sonography with conventional TCD.

    Science.gov (United States)

    Swiercz, Miroslaw; Swiat, Maciej; Pawlak, Mikolaj; Weigele, John; Tarasewicz, Roman; Sobolewski, Andrzej; Hurst, Robert W; Mariak, Zenon D; Melhem, Elias R; Krejza, Jaroslaw

    2010-01-01

    The goal of the study was to compare the performance of transcranial color-coded duplex sonography (TCCS) and transcranial Doppler sonography (TCD) in the diagnosis of middle cerebral artery (MCA) narrowing in the same population of patients, using statistical and nonstatistical intelligent models for data analysis. We prospectively collected data from 179 consecutive routine digital subtraction angiography (DSA) procedures performed in 111 patients (mean age 54.17 ± 14.4 years; 59 women, 52 men) who underwent TCD and TCCS examinations simultaneously. Each patient was examined independently using both ultrasound techniques; 267 M1 segments of the MCA were assessed and narrowings were classified as ≤50% or >50% lumen reduction. Diagnostic performance was estimated by two statistical and two artificial neural network (ANN) classification methods. Separate models were constructed for the TCD and TCCS sonographic data, as well as for detection of "any narrowing" and "severe narrowing" of the MCA. Input for each classifier consisted of the peak-systolic, mean and end-diastolic velocities measured with each sonographic method; the output was MCA narrowing. Arterial narrowings of ≤50% lumen reduction were found in 55 and >50% narrowings in 26 out of 267 arteries, as indicated by DSA. In the category of "any narrowing", the rate of correct assignment by all models was 82% to 83% for TCCS and 79% to 81% for TCD. In the diagnosis of >50% narrowing, the overall classification accuracy remained in the range of 89% to 90% for TCCS data and 90% to 91% for TCD data. For the diagnosis of any narrowing, the sensitivity of TCCS was significantly higher than that of TCD, while for the diagnosis of >50% MCA narrowing, the sensitivity of TCCS was similar to that of TCD. Our study showed that TCCS outperforms conventional TCD in the detection of ≤50% MCA narrowing. (E-mail: jaroslaw.krejza@uphs.upenn.edu).

  9. Artificial Consciousness or Artificial Intelligence

    OpenAIRE

    Spanache Florin

    2017-01-01

    Artificial intelligence is a tool designed by people for the gratification of their own creative ego, so we cannot confuse conscience with intelligence, nor even intelligence in its human representation with conscience. They are all different concepts with different uses. Philosophically, there are differences between autonomous people and automatic artificial intelligence. This is the difference between intelligence and artificial intelligence, autonomous versus a...

  10. Effect of High-Temperature Curing Methods on the Compressive Strength Development of Concrete Containing High Volumes of Ground Granulated Blast-Furnace Slag

    Directory of Open Access Journals (Sweden)

    Wonsuk Jung

    2017-01-01

    Full Text Available This paper investigates the effect of high-temperature curing methods on the compressive strength of concrete containing high volumes of ground granulated blast-furnace slag (GGBS). GGBS was used to replace Portland cement at a replacement ratio of 60% by binder mass. The high-temperature curing parameters used in this study were the delay period, temperature rise, peak temperature (PT), peak period, and temperature decrease. Test results demonstrate that the compressive strength of the samples with PTs of 65°C and 75°C was about 88% higher than that of the samples with a PT of 55°C after 1 day. According to this investigation, there may be optimum high-temperature curing conditions for preparing a concrete containing high volumes of GGBS, and incorporating GGBS into precast concrete mixes can be a very effective tool in increasing the applicability of this by-product.

  11. Entropy Stable Staggered Grid Discontinuous Spectral Collocation Methods of any Order for the Compressible Navier--Stokes Equations

    KAUST Repository

    Parsani, Matteo

    2016-10-04

    Staggered grid, entropy stable discontinuous spectral collocation operators of any order are developed for the compressible Euler and Navier--Stokes equations on unstructured hexahedral elements. This generalization of previous entropy stable spectral collocation work [M. H. Carpenter, T. C. Fisher, E. J. Nielsen, and S. H. Frankel, SIAM J. Sci. Comput., 36 (2014), pp. B835--B867, M. Parsani, M. H. Carpenter, and E. J. Nielsen, J. Comput. Phys., 292 (2015), pp. 88--113], extends the applicable set of points from tensor product, Legendre--Gauss--Lobatto (LGL), to a combination of tensor product Legendre--Gauss (LG) and LGL points. The new semidiscrete operators discretely conserve mass, momentum, energy, and satisfy a mathematical entropy inequality for the compressible Navier--Stokes equations in three spatial dimensions. They are valid for smooth as well as discontinuous flows. The staggered LG and conventional LGL point formulations are compared on several challenging test problems. The staggered LG operators are significantly more accurate, although more costly from a theoretical point of view. The LG and LGL operators exhibit similar robustness, as is demonstrated using test problems known to be problematic for operators that lack a nonlinear stability proof for the compressible Navier--Stokes equations (e.g., discontinuous Galerkin, spectral difference, or flux reconstruction operators).

  12. Compressive beamforming

    DEFF Research Database (Denmark)

    Xenaki, Angeliki; Mosegaard, Klaus

    2014-01-01

    Sound source localization with sensor arrays involves the estimation of the direction-of-arrival (DOA) from a limited number of observations. Compressive sensing (CS) solves such underdetermined problems achieving sparsity, thus improved resolution, and can be solved efficiently with convex...

  13. A Hybrid Method for Image Segmentation Based on Artificial Fish Swarm Algorithm and Fuzzy c-Means Clustering.

    Science.gov (United States)

    Ma, Li; Li, Yang; Fan, Suohai; Fan, Runzhu

    2015-01-01

    Image segmentation plays an important role in medical image processing. Fuzzy c-means (FCM) clustering is one of the most popular clustering algorithms for medical image segmentation. However, FCM depends on the initial clustering centers, falls into local optima easily, and is sensitive to noise. To solve these problems, this paper proposes a hybrid artificial fish swarm algorithm (HAFSA). The proposed algorithm combines the artificial fish swarm algorithm (AFSA) with FCM, exploiting AFSA's global search and parallel computing abilities to find a superior result. Meanwhile, the Metropolis criterion and a noise reduction mechanism are introduced into AFSA to enhance its convergence rate and noise robustness. An artificial grid graph and Magnetic Resonance Imaging (MRI) data are used in the experiments, and the experimental results show that the proposed algorithm has stronger noise robustness and higher precision. A number of evaluation indicators also demonstrate that HAFSA outperforms FCM and suppressed FCM (SFCM).
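
    For reference, the FCM component that HAFSA builds on is compact enough to write out. A minimal numpy sketch of plain fuzzy c-means only (the fish-swarm hybridization, Metropolis criterion, and noise mechanism from the paper are not included); the synthetic two-blob data is illustrative:

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means: returns cluster centers and the membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m                             # fuzzified memberships
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = dist ** (-2.0 / (m - 1.0))        # standard FCM membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# Two well-separated synthetic 2-D blobs around (0, 0) and (3, 3)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(3.0, 0.3, (50, 2))])
centers, U = fcm(X, c=2)
print(np.sort(centers[:, 0]).round(1))
```

    The sensitivity to initial centers mentioned in the abstract is visible here: only the seed of the random initial membership matrix determines which local optimum plain FCM reaches.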

  14. Evaluation of the distortions of the digital chest image caused by the data compression

    International Nuclear Information System (INIS)

    Ando, Yutaka; Kunieda, Etsuo; Ogawa, Koichi; Tukamoto, Nobuhiro; Hashimoto, Shozo; Aoki, Makoto; Kurotani, Kenichi.

    1988-01-01

    The image data compression methods using orthogonal transforms (discrete cosine transform, discrete Fourier transform, Hadamard transform, Haar transform, slant transform) were analyzed. In terms of error and speed of data conversion, the discrete cosine transform (DCT) method is superior to the other methods. Block quantization by the DCT was applied to the digital chest image. The quality of compressed and reconstructed images was examined by score analysis and ROC curve analysis. A chest image with esophageal cancer and metastatic lung tumors was evaluated at 17 checkpoints (the tumor, the vascular markings, the borders of the heart and ribs, the mediastinal structures, etc.). By our score analysis, satisfactory data compression ratios are 1/5 and 1/10. ROC analysis using normal chest images superimposed with artificial coin lesions was performed. The ROC curve at the 1/5 compression ratio is almost the same as that of the original. To summarize our study, the image data compression method using the DCT is considered useful for clinical use, and the 1/5 compression ratio is tolerable. (author)
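
    The DCT block-compression idea can be illustrated in a few lines: transform an 8×8 block, keep only the largest coefficients (about 1/5 of them, matching the ratio found tolerable above), and invert. A minimal numpy sketch with a synthetic smooth block, not the authors' actual codec or quantization tables:

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II transform matrix (C @ C.T = I)."""
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0] /= np.sqrt(2.0)
    return C

def compress_block(block, keep):
    """2-D DCT of a square block, zero all but the `keep` largest-magnitude
    coefficients, then inverse-transform back to pixel space."""
    C = dct_matrix(block.shape[0])
    coeff = C @ block @ C.T
    thresh = np.sort(np.abs(coeff).ravel())[-keep]
    coeff_q = np.where(np.abs(coeff) >= thresh, coeff, 0.0)
    return C.T @ coeff_q @ C

# Smooth-ish synthetic 8x8 "image" block (double cumulative sum of noise)
rng = np.random.default_rng(0)
block = rng.normal(size=(8, 8)).cumsum(axis=0).cumsum(axis=1)
recon = compress_block(block, keep=13)       # ~1/5 of the 64 coefficients
err = np.abs(recon - block).max() / np.abs(block).max()
print(f"kept 13/64 DCT coefficients, relative max error {err:.3f}")
```

    Because the DCT concentrates the energy of smooth image content in a few low-frequency coefficients, most coefficients can be discarded with little visible loss, which is the basis for the 1/5 ratio found acceptable in the study.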

  15. Neutron diffraction study of artificial graphites crystalline anisotropy

    International Nuclear Information System (INIS)

    Lecomte, Marcel

    1961-01-01

    The Saclay spectrometer at E.L.2 has been used to investigate the structural properties of artificial graphite. Information as to the local texture at different points of a small block of graphite has been obtained. The method is more rapid and yields results which are statistically more accurate than those found by X-Ray diffraction. In particular, the method allows the determination of the degree of anisotropy in directions normal to the axis along which the block was compressed during its manufacture. (author) [fr

  16. Mammographic compression in Asian women

    Science.gov (United States)

    Lau, Susie; Abdul Aziz, Yang Faridah; Ng, Kwan Hoong

    2017-01-01

    Objectives To investigate: (1) the variability of mammographic compression parameters amongst Asian women; and (2) the effects of reducing compression force on image quality and mean glandular dose (MGD) in Asian women, based on a phantom study. Methods We retrospectively collected 15818 raw digital mammograms from 3772 Asian women aged 35–80 years who underwent screening or diagnostic mammography between Jan 2012 and Dec 2014 at our center. The mammograms were processed using a volumetric breast density (VBD) measurement software (Volpara) to assess compression force, compression pressure, compressed breast thickness (CBT), breast volume, VBD and MGD against breast contact area. The effects of reducing compression force on image quality and MGD were also evaluated based on measurements obtained from 105 Asian women, as well as using the RMI156 Mammographic Accreditation Phantom and polymethyl methacrylate (PMMA) slabs. Results Compression force, compression pressure, CBT, breast volume, VBD and MGD correlated significantly with breast contact area (p < 0.05). Conclusions The force-standardized protocol led to widely variable compression parameters in Asian women. Based on the phantom study, it is feasible to reduce compression force by up to 32.5% with minimal effects on image quality and MGD. PMID:28419125
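
    Compression pressure, as distinct from compression force, is force divided by the breast contact area; this is why the two parameters can vary independently across patients. A tiny unit-conversion sketch with made-up values (not measurements from the study):

```python
def compression_pressure_kpa(force_newton, contact_area_dm2):
    """Compression pressure in kPa from force (N) and contact area (dm^2).
    1 dm^2 = 0.01 m^2 and 1 kPa = 1000 N/m^2."""
    area_m2 = contact_area_dm2 * 0.01
    return force_newton / area_m2 / 1000.0

# Illustrative (hypothetical) values: 120 N applied over 1.5 dm^2 of contact
print(round(compression_pressure_kpa(120.0, 1.5), 1))  # 8.0
```

    The same force spread over a larger contact area yields a lower pressure, which is the motivation for pressure-standardized (rather than force-standardized) protocols.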

  17. Control volume based modelling in one space dimension of oscillating, compressible flow in reciprocating machines

    DEFF Research Database (Denmark)

    Andersen, Stig Kildegård; Carlsen, Henrik; Thomsen, Per Grove

    2006-01-01

    We present an approach for modelling unsteady, primarily one-dimensional, compressible flow. The conservation laws for mass, energy, and momentum are applied to a staggered mesh of control volumes and loss mechanisms are included directly as extra terms. Heat transfer, flow friction, and multidimensional effects are calculated using empirical correlations. Transformations of the conservation equations into new variables, artificial dissipation for dissipating acoustic phenomena, and an asymmetric interpolation method for minimising numerical diffusion and non-physical temperature oscillations...

  18. Artificial intelligence in medicine.

    OpenAIRE

    Ramesh, A. N.; Kambhampati, C.; Monson, J. R. T.; Drew, P. J.

    2004-01-01

    INTRODUCTION: Artificial intelligence is a branch of computer science capable of analysing complex medical data. Its potential to exploit meaningful relationships within a data set can be used in diagnosis, treatment and predicting outcomes in many clinical scenarios. METHODS: Medline and internet searches were carried out using the keywords 'artificial intelligence' and 'neural networks (computer)'. Further references were obtained by cross-referencing from key articles. An overview of ...

  19. Thermoeconomic optimization of subcooled and superheated vapor compression refrigeration cycle

    International Nuclear Information System (INIS)

    Selbas, Resat; Kizilkan, Onder; Sencan, Arzu

    2006-01-01

    An exergy-based thermoeconomic optimization is applied to a subcooled and superheated vapor compression refrigeration system. The advantage of using the exergy method of thermoeconomic optimization is that the various elements of the system (i.e., condenser, evaporator, subcooling and superheating heat exchangers) can be optimized on their own. The application consists of determining the optimum heat exchanger areas with the corresponding optimum subcooling and superheating temperatures. A cost function is specified for the optimum conditions. All calculations are made for three refrigerants: R22, R134a, and R407c. Thermodynamic properties of the refrigerants are formulated using the Artificial Neural Network methodology

  20. Artificial intelligence in medicine.

    Science.gov (United States)

    Ramesh, A. N.; Kambhampati, C.; Monson, J. R. T.; Drew, P. J.

    2004-01-01

    INTRODUCTION: Artificial intelligence is a branch of computer science capable of analysing complex medical data. Its potential to exploit meaningful relationships within a data set can be used in diagnosis, treatment and predicting outcomes in many clinical scenarios. METHODS: Medline and internet searches were carried out using the keywords 'artificial intelligence' and 'neural networks (computer)'. Further references were obtained by cross-referencing from key articles. An overview of different artificial intelligence techniques is presented in this paper along with a review of important clinical applications. RESULTS: The proficiency of artificial intelligence techniques has been explored in almost every field of medicine. The artificial neural network was the most commonly used analytical tool, whilst other artificial intelligence techniques such as fuzzy expert systems, evolutionary computation and hybrid intelligent systems have all been used in different clinical settings. DISCUSSION: Artificial intelligence techniques have the potential to be applied in almost every field of medicine. There is a need for further clinical trials, appropriately designed, before these emergent techniques find application in the real clinical setting. PMID:15333167

  1. Efficacy of Blood Sources and Artificial Blood Feeding Methods in Rearing of Aedes aegypti (Diptera: Culicidae for Sterile Insect Technique and Incompatible Insect Technique Approaches in Sri Lanka

    Directory of Open Access Journals (Sweden)

    Nayana Gunathilaka

    2017-01-01

    Full Text Available Introduction. Selection of the artificial membrane feeding technique and blood meal source has been recognized as a key consideration in mass rearing of vectors. Methodology. Artificial membrane feeding techniques, namely, the glass plate, metal plate, and Hemotek membrane feeding methods, and three blood sources (human, cattle, and chicken) were evaluated based on feeding rates, fecundity, and hatching rates of Aedes aegypti. The significance of variations among blood feeding methods was investigated by one-way ANOVA, analysis of similarities (ANOSIM), and principal coordinates (PCO) analysis. Results. Feeding rates of Ae. aegypti differed significantly among the membrane feeding techniques, as suggested by one-way ANOVA (p < 0.05). Conclusions. The metal plate method could be recommended as the most effective membrane feeding technique for mass rearing of Ae. aegypti, due to its high feeding rate and cost effectiveness. Cattle blood could be recommended for mass rearing Ae. aegypti.

  2. Artificial intelligence

    CERN Document Server

    Ennals, J R

    1987-01-01

    Artificial Intelligence: State of the Art Report is a two-part report consisting of the invited papers and the analysis. The editor first gives an introduction to the invited papers before presenting each paper and the analysis, and then concludes with the list of references related to the study. The invited papers explore the various aspects of artificial intelligence. The analysis part assesses the major advances in artificial intelligence and provides a balanced analysis of the state of the art in this field. The Bibliography compiles the most important published material on the subject of

  3. Study of three-dimensional Rayleigh--Taylor instability in compressible fluids through level set method and parallel computation

    International Nuclear Information System (INIS)

    Li, X.L.

    1993-01-01

    Computation of three-dimensional (3-D) Rayleigh--Taylor instability in compressible fluids is performed on a MIMD computer. A second-order TVD scheme is applied with a fully parallelized algorithm to the 3-D Euler equations. The computational program is implemented for a 3-D study of bubble evolution in the Rayleigh--Taylor instability with varying bubble aspect ratio and for large-scale simulation of a 3-D random fluid interface. The numerical solution is compared with the experimental results by Taylor

  4. Measurement method of compressibility and thermal expansion coefficients for density standard liquid at 2329 kg/m3 based on hydrostatic suspension principle

    Science.gov (United States)

    Wang, Jintao; Liu, Ziyong; Xu, Changhong; Li, Zhanhong

    2014-07-01

    Accurate measurement of the compressibility and thermal expansion coefficients of the density standard liquid at 2329 kg/m3 (DSL-2329) plays an important role in quality control for silicon single crystal manufacturing. A new method based on the hydrostatic suspension principle is developed in order to determine the two coefficients with high accuracy. Two silicon single crystal samples with known density are immersed in a sealed vessel full of DSL-2329. The density of the liquid is adjusted by varying the liquid temperature and static pressure, so that hydrostatic suspension of the two silicon single crystal samples is achieved. The compressibility and thermal expansion coefficients are then calculated from the temperature and static pressure data at the suspension state. One silicon single crystal sample can be suspended in different states, as long as the liquid temperature and static pressure satisfy a certain linear relationship. A hydrostatic suspension experimental system was devised with a maximal temperature control error of ±50 μK; silicon single crystal samples can be suspended by adjusting the pressure with a PID method. With the method based on the hydrostatic suspension principle, the two key coefficients can be measured at the same time, and measurement precision is improved because the influence of liquid surface tension is avoided. The method was further validated experimentally, using a mixture of 1,2,3-tribromopropane and 1,2-dibromoethane as DSL-2329. The compressibility and thermal expansion coefficients were measured as 5.4×10⁻¹⁰ Pa⁻¹ and 8.5×10⁻⁴ K⁻¹, respectively.
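
    The suspension condition behind the method is that the liquid density, modelled to first order in pressure and temperature, equals the sample density. A sketch using the coefficient values reported in the abstract; the linearised density model and the reference state (T0, P0) are illustrative assumptions, not the authors' working equations:

```python
# First-order (linearised) density model for the liquid around a reference
# state (T0, P0):  rho(T, P) = rho0 * (1 + kappa*(P - P0) - beta*(T - T0))
KAPPA = 5.4e-10   # compressibility, 1/Pa (value reported in the abstract)
BETA = 8.5e-4     # thermal expansion coefficient, 1/K (value reported in the abstract)

def liquid_density(rho0, T, P, T0=293.15, P0=101325.0):
    return rho0 * (1.0 + KAPPA * (P - P0) - BETA * (T - T0))

def suspension_pressure(rho0, rho_sample, T, T0=293.15, P0=101325.0):
    """Static pressure at which a sample of density rho_sample neither
    sinks nor floats, i.e. the liquid density equals the sample density."""
    return P0 + ((rho_sample / rho0 - 1.0) + BETA * (T - T0)) / KAPPA

rho0 = 2329.0   # reference liquid density, kg/m^3
P = suspension_pressure(rho0, 2329.05, T=293.15)
print(f"suspension pressure: {P:.0f} Pa")
```

    Because KAPPA is so small, a tiny density mismatch requires a substantial pressure change, which is why the method also needs millikelvin-level temperature control to exploit the much larger BETA term.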

  5. Flux Limiter Lattice Boltzmann for Compressible Flows

    International Nuclear Information System (INIS)

    Chen Feng; Li Yingjun; Xu Aiguo; Zhang Guangcai

    2011-01-01

    In this paper, a new flux limiter scheme with a splitting technique is successfully incorporated into a multiple-relaxation-time lattice Boltzmann (LB) model for shocked compressible flows. The proposed flux limiter scheme is efficient in decreasing the artificial oscillations and numerical diffusion around the interface. Due to its kinetic nature, some interface problems that are difficult to handle at the macroscopic level can be modeled more naturally through the LB method. Numerical simulations of the Richtmyer-Meshkov instability show that with the new model the computed interfaces are smoother and more consistent with physical analysis. The growth rates of bubble and spike show satisfying agreement with theoretical predictions and other numerical simulations.
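
    A flux limiter's role, suppressing spurious oscillations near discontinuities while keeping fronts sharp, is easiest to see outside the LB framework, in a scalar advection setting. A minimal minmod-limited second-order upwind (MUSCL-type) sketch; this is a generic illustration of the limiter idea, not the paper's MRT lattice Boltzmann model:

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: the smaller-magnitude slope, zero at extrema."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def advect_step(u, c):
    """One periodic step of 1-D linear advection (speed > 0) with a
    minmod-limited MUSCL flux; c is the CFL number, 0 < c <= 1."""
    slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    u_face = u + 0.5 * (1.0 - c) * slope      # limited value at the i+1/2 face
    flux = c * u_face
    return u + np.roll(flux, 1) - flux        # conservative flux-difference update

u = np.zeros(100)
u[40:60] = 1.0                                # square wave initial condition
for _ in range(100):
    u = advect_step(u, 0.5)
print(round(float(u.min()), 3), round(float(u.max()), 3))
```

    An unlimited second-order scheme would overshoot above 1 and undershoot below 0 at the jumps; the minmod limiter makes the update total-variation diminishing, so the solution stays within its initial range while the front remains far sharper than with first-order upwinding.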

  6. Effectiveness of artificial intelligence methods in applications to burning optimization and coal mills diagnostics on the basis of IASE's experiences in Turow Power Plant

    Energy Technology Data Exchange (ETDEWEB)

    Pollak, J.; Wozniak, A.W.; Dynia, Z.; Lipanowicz, T.

    2004-07-01

    Modern methods referred to as 'artificial intelligence' have been applied to combustion optimization and implementation of selected diagnostic functions for the milling system of a pulverized lignite-fired boiler. The results of combustion optimization have shown significant improvement of efficiency and reduction of NOx emission. Fuzzy logic has been used to develop, among other things, a fan mill overload detection system.

  7. How artificial intelligence tools can be used to assess individual patient risk in cardiovascular disease: problems with the current methods.

    Science.gov (United States)

    Grossi, Enzo

    2006-05-03

    In recent years a number of algorithms for cardiovascular risk assessment have been proposed to the medical community. These algorithms consider a number of variables and express their results as the percentage risk of developing a major fatal or non-fatal cardiovascular event in the following 10 to 20 years. The author has identified three major pitfalls of these algorithms, linked to the limitations of the classical statistical approach in dealing with this kind of nonlinear and complex information. The pitfalls are the inability to capture disease complexity, the inability to capture process dynamics, and the wide confidence interval of individual risk assessment. Artificial intelligence tools can provide a potential advantage in trying to overcome these limitations. The theoretical background and some application examples related to artificial neural networks and fuzzy logic are reviewed and discussed. The use of predictive algorithms to assess individual absolute risk of future cardiovascular events is currently hampered by methodological and mathematical flaws. The use of newer approaches, such as fuzzy logic and artificial neural networks, linked to artificial intelligence, seems to better address the challenge of increasing complexity resulting from the correlation between predisposing factors, data on the occurrence of cardiovascular events, and the prediction of future events on an individual level.

  8. How artificial intelligence tools can be used to assess individual patient risk in cardiovascular disease: problems with the current methods

    Directory of Open Access Journals (Sweden)

    Grossi Enzo

    2006-05-01

    Full Text Available Abstract Background: In recent years a number of algorithms for cardiovascular risk assessment have been proposed to the medical community. These algorithms consider a number of variables and express their results as the percentage risk of developing a major fatal or non-fatal cardiovascular event in the following 10 to 20 years. Discussion: The author has identified three major pitfalls of these algorithms, linked to the limitations of the classical statistical approach in dealing with this kind of nonlinear and complex information. The pitfalls are the inability to capture disease complexity, the inability to capture process dynamics, and the wide confidence interval of individual risk assessment. Artificial intelligence tools can provide a potential advantage in trying to overcome these limitations. The theoretical background and some application examples related to artificial neural networks and fuzzy logic are reviewed and discussed. Summary: The use of predictive algorithms to assess individual absolute risk of future cardiovascular events is currently hampered by methodological and mathematical flaws. The use of newer approaches, such as fuzzy logic and artificial neural networks, linked to artificial intelligence, seems to better address the challenge of increasing complexity resulting from the correlation between predisposing factors, data on the occurrence of cardiovascular events, and the prediction of future events on an individual level.

  9. Speech Compression

    Directory of Open Access Journals (Sweden)

    Jerry D. Gibson

    2016-06-01

    Full Text Available Speech compression is a key technology underlying digital cellular communications, VoIP, voicemail, and voice response systems. We trace the evolution of speech coding based on the linear prediction model, highlight the key milestones in speech coding, and outline the structures of the most important speech coding standards. Current challenges, future research directions, fundamental limits on performance, and the critical open problem of speech coding for emergency first responders are all discussed.
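
    The linear prediction model at the heart of speech coding fits coefficients that predict each sample from its predecessors; the classic route is the autocorrelation method with the Levinson-Durbin recursion. A minimal sketch on a synthetic autoregressive signal (not any particular standard's codec):

```python
import numpy as np

def lpc(x, order):
    """Linear prediction coefficients a (with a[0] = 1) via the autocorrelation
    method and Levinson-Durbin recursion; also returns the residual energy."""
    n = len(x)
    r = np.array([x[:n - k] @ x[k:] for k in range(order + 1)])  # autocorrelation
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + a[1:i] @ r[1:i][::-1]
        k = -acc / err                  # reflection coefficient
        a[1:i + 1] += k * a[:i][::-1]   # order-update of the predictor
        err *= 1.0 - k * k
    return a, err

# Synthetic AR(1) "speech-like" signal: x[t] = 0.9*x[t-1] + white noise
rng = np.random.default_rng(0)
e = rng.normal(size=20000)
x = np.zeros_like(e)
for t in range(1, len(x)):
    x[t] = 0.9 * x[t - 1] + e[t]

a, err = lpc(x, order=1)
print(round(float(a[1]), 2))   # close to -0.9: the predictor recovers the AR coefficient
```

    In a real coder the short-term predictor (typically order 10-16 on 20-30 ms frames) whitens the speech, and only the predictor parameters and a compact description of the residual are transmitted, which is where the compression comes from.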

  10. Free-surface modelling technology for compressible and violent flows

    CSIR Research Space (South Africa)

    Heyns, Johan A

    2011-06-01

    Full Text Available formulation reduces the degree of numerical smearing while maintaining the interface shape. It involves combining the approaches of blended higher-resolution discretisation and adding an artificial compressive term in a manner which retains the strength...

  11. Artificial Reefs

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — An artificial reef is a human-made underwater structure, typically built to promote marine life in areas with a generally featureless bottom, control erosion, block...

  12. Artificial Metalloenzymes

    NARCIS (Netherlands)

    Rosati, Fiora; Roelfes, Gerard

    Artificial metalloenzymes have emerged as a promising approach to merge the attractive properties of homogeneous catalysis and biocatalysis. The activity and selectivity, including enantioselectivity, of natural metalloenzymes are due to the second coordination sphere interactions provided by the

  13. Artificial sweeteners

    DEFF Research Database (Denmark)

    Raben, Anne Birgitte; Richelsen, Bjørn

    2012-01-01

    Artificial sweeteners can be a helpful tool to reduce energy intake and body weight and thereby risk for diabetes and cardiovascular diseases (CVD). Considering the prevailing diabesity (obesity and diabetes) epidemic, this can, therefore, be an important alternative to natural, calorie-containing sweeteners. The purpose of this review is to summarize the current evidence on the effect of artificial sweeteners on body weight, appetite, and risk markers for diabetes and CVD in humans.

  14. Temporal compressive sensing systems

    Science.gov (United States)

    Reed, Bryan W.

    2017-12-12

    Methods and systems for temporal compressive sensing are disclosed, where within each of one or more sensor array data acquisition periods, one or more sensor array measurement datasets comprising distinct linear combinations of time slice data are acquired, and where mathematical reconstruction allows for calculation of accurate representations of the individual time slice datasets.
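
    The reconstruction step such a system relies on can be illustrated with a toy compressed-sensing recovery: each measurement is a random linear combination of time slices, and a sparse time profile is recovered from far fewer measurements than slices. A minimal orthogonal-matching-pursuit sketch; the dimensions, the random mixing matrix, and the choice of OMP as the solver are all illustrative assumptions, not the patent's method:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x from y = A @ x."""
    residual = y.copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # most correlated column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
T, M, k = 64, 30, 3                        # 64 time slices, 30 coded measurements
A = rng.normal(size=(M, T)) / np.sqrt(M)   # random temporal mixing weights
x_true = np.zeros(T)
x_true[[5, 17, 40]] = [1.0, -2.0, 1.5]     # sparse temporal signal
y = A @ x_true                             # each measurement mixes all time slices
x_hat = omp(A, y, k)
print("max reconstruction error:", float(np.abs(x_hat - x_true).max()))
```

    The point of the temporal-CS idea is visible in the shapes: 30 measurements suffice to recover all 64 time slices because the signal is sparse in time, trading acquisition bandwidth for a reconstruction computation.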

  15. Advancement of compressible multiphase flows and sodium-water reaction analysis program SERAPHIM. Validation of a numerical method for the simulation of highly underexpanded jets

    International Nuclear Information System (INIS)

    Uchibori, Akihiro; Ohshima, Hiroyuki; Watanabe, Akira

    2010-01-01

SERAPHIM is a computer program for the simulation of compressible multiphase flow involving the sodium-water chemical reaction during a tube failure accident in a steam generator of sodium-cooled fast reactors. In this study, numerical analysis of highly underexpanded air jets into air or into water was performed as part of the validation of the SERAPHIM program. The multi-fluid model, a second-order TVD scheme and the HSMAC method accounting for compressibility were used in this analysis. Combining these numerical methods makes it possible to calculate multiphase flow including supersonic gaseous jets. In the case of the air jet into air, the calculated pressure, the shape of the jet and the location of the Mach disk agreed with existing experimental results. The effect of the difference scheme and the mesh resolution on the prediction accuracy was clarified through these analyses. The behavior of the air jet into water was also reproduced successfully by the proposed numerical method. (author)

  16. Transport properties of LiF under strong compression: modeling using advanced electronic structure methods and classical molecular dynamics

    Science.gov (United States)

    Mattsson, Thomas R.; Jones, Reese; Ward, Donald; Spataru, Catalin; Shulenburger, Luke; Benedict, Lorin X.

    2015-06-01

    Window materials are ubiquitous in shock physics and with high energy density drivers capable of reaching multi-Mbar pressures the use of LiF is increasing. Velocimetry and temperature measurements of a sample through a window are both influenced by the assumed index of refraction and thermal conductivity, respectively. We report on calculations of index of refraction using the many-body theory GW and thermal ionic conductivity using linear response theory and model potentials. The results are expected to increase the accuracy of a broad range of high-pressure shock- and ramp compression experiments. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Company, for the U.S. Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.

  17. A compressive sensing-based computational method for the inversion of wide-band ground penetrating radar data

    Science.gov (United States)

    Gelmini, A.; Gottardi, G.; Moriyama, T.

    2017-10-01

    This work presents an innovative computational approach for the inversion of wideband ground penetrating radar (GPR) data. The retrieval of the dielectric characteristics of sparse scatterers buried in a lossy soil is performed by combining a multi-task Bayesian compressive sensing (MT-BCS) solver and a frequency hopping (FH) strategy. The developed methodology is able to benefit from the regularization capabilities of the MT-BCS as well as to exploit the multi-chromatic informative content of GPR measurements. A set of numerical results is reported in order to assess the effectiveness of the proposed GPR inverse scattering technique, as well as to compare it to a simpler single-task implementation.

  18. Compressing Data Cube in Parallel OLAP Systems

    Directory of Open Access Journals (Sweden)

    Frank Dehne

    2007-03-01

Full Text Available This paper proposes an efficient algorithm for compressing the cubes during parallel data cube generation. This low-overhead compression mechanism provides block-by-block and record-by-record compression using tuple difference coding techniques, thereby maximizing the compression ratio and minimizing the decompression penalty at run-time. The experimental results demonstrate that the typical compression ratio is about 30:1 without sacrificing running time. This paper also demonstrates that the compression method is suitable for the Hilbert space-filling curve, a mechanism widely used in multi-dimensional indexing.
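As a rough illustration of record-by-record tuple difference coding (a generic sketch, not the paper's exact on-disk format), sorted cube records can be stored as per-field differences from their predecessors, producing many small values that a back-end coder compresses well:

```python
def delta_encode(tuples):
    """Encode sorted integer tuples as first tuple + per-field deltas."""
    out = [tuples[0]]
    for prev, cur in zip(tuples, tuples[1:]):
        out.append(tuple(c - p for p, c in zip(prev, cur)))
    return out

def delta_decode(encoded):
    """Invert delta_encode."""
    out = [encoded[0]]
    for delta in encoded[1:]:
        out.append(tuple(p + d for p, d in zip(out[-1], delta)))
    return out

cells = sorted([(1, 5, 9), (1, 5, 12), (1, 6, 2), (2, 0, 0)])
coded = delta_encode(cells)
print(coded)                         # small deltas dominate after the first tuple
print(delta_decode(coded) == cells)  # True: lossless round-trip
```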

  19. Comparison of artificial digestion and Baermann's methods for detection of Trichinella spiralis pre-encapsulated larvae in muscles with low-level infections.

    Science.gov (United States)

    Jiang, Peng; Wang, Zhong-Quan; Cui, Jing; Zhang, Xi

    2012-01-01

The artificial digestion method is widely used for the detection of Trichinella larvae (mainly mature larvae, i.e., the encapsulated larvae of encapsulated Trichinella species) in meat. Previous studies demonstrated that Trichinella spiralis pre-encapsulated larvae (PEL) at 14-18 days postinfection (dpi) are infective to new hosts. However, to our knowledge, there is no report on methods for detecting PEL in meat. The purpose of this study was to compare the efficiency of the artificial digestion and Baermann's methods for detection of T. spiralis PEL in meat, and to test the factors affecting the sensitivity of the two methods. Forty-five male Kunming mice were randomly divided into 3 groups (15 mice per group), and each group was orally inoculated with 20, 10, or 5 muscle larvae of T. spiralis, respectively. All infected mice were slaughtered at 18 dpi, and the muscles were minced. The digestion method recommended by the International Commission on Trichinellosis and Baermann's method were used to detect PEL in the infected mice. In mice infected with 20 muscle larvae, the detection rate of PEL was 100% (15/15) by both methods; in mice infected with 10 larvae, the detection rates were 93.33% (14/15) and 100% (15/15), respectively; in mice infected with 5 larvae, the detection rates differed between the digestion method (63.33%) and Baermann's method (100%). Additionally, the number of PEL collected from mice infected with 20, 10, or 5 larvae by Baermann's method was greater than that by the digestion method. The mortality of PEL increased with the duration of digestion, because PEL are not resistant to enzymatic digestion. The results revealed that Baermann's method is superior to the digestion method for detection of T. spiralis PEL in muscle samples with low-level infections.

  20. A Hybrid Method for Image Segmentation Based on Artificial Fish Swarm Algorithm and Fuzzy c-Means Clustering

    Directory of Open Access Journals (Sweden)

    Li Ma

    2015-01-01

Full Text Available Image segmentation plays an important role in medical image processing. Fuzzy c-means (FCM) clustering is one of the popular clustering algorithms for medical image segmentation. However, FCM depends on the initial clustering centers, falls into local optima easily, and is sensitive to noise. To solve these problems, this paper proposes a hybrid artificial fish swarm algorithm (HAFSA), which combines the artificial fish swarm algorithm (AFSA) with FCM, exploiting AFSA's global optimization search and parallel computing ability to find a superior result. Meanwhile, the Metropolis criterion and a noise reduction mechanism are introduced into AFSA to enhance the convergence rate and noise robustness. An artificial grid graph and Magnetic Resonance Imaging (MRI) are used in the experiments, and the experimental results show that the proposed algorithm has stronger noise robustness and higher precision. A number of evaluation indicators also demonstrate that HAFSA outperforms FCM and suppressed FCM (SFCM).

  1. Image quality (IQ) guided multispectral image compression

    Science.gov (United States)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of a decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the structural similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of 3 steps. The first step is to compress a set of images of interest by varying the parameters and to compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus the compression parameter over a number of compressed images. The third step is to compress the given image with the specified IQ using the compression method (JPEG, JPEG 2000, BPG, or TIFF) selected according to the regression models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR); or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) grayscale images showed very promising results.
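Of the IQ metrics listed, PSNR is the simplest to compute directly from the original and decompressed images; the sketch below uses an illustrative 4x4 image pair, not the paper's data:

```python
import numpy as np

def psnr(original, decompressed, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((original.astype(float) - decompressed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

a = np.full((4, 4), 100, dtype=np.uint8)
b = a.copy()
b[0, 0] = 110                       # one pixel off by 10
# MSE = 100/16 = 6.25, so PSNR = 10*log10(255^2 / 6.25) ~ 40.17 dB
print(round(psnr(a, b), 2))
```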

  2. Combined gradient projection/single component artificial force induced reaction (GP/SC-AFIR) method for an efficient search of minimum energy conical intersection (MECI) geometries

    Science.gov (United States)

    Harabuchi, Yu; Taketsugu, Tetsuya; Maeda, Satoshi

    2017-04-01

We report a new approach to search automatically for minimum energy conical intersection (MECI) structures. The gradient projection (GP) method and the single component artificial force induced reaction (SC-AFIR) method were combined in the present approach. As case studies, MECIs of benzene and naphthalene between their ground and first excited singlet electronic states (S0/S1-MECIs) were explored. All S0/S1-MECIs reported previously were obtained automatically. Furthermore, the number of force calculations was reduced compared with that required in the previous search. The improved convergence of the step in which various geometrical displacements are induced by SC-AFIR contributes to the cost reduction.

  3. Scale adaptive compressive tracking.

    Science.gov (United States)

    Zhao, Pengpeng; Cui, Shaohui; Gao, Min; Fang, Dan

    2016-01-01

Recently, the compressive tracking (CT) method (Zhang et al. in Proceedings of European conference on computer vision, pp 864-877, 2012) has attracted much attention due to its high efficiency, but it cannot deal well with scale-changing objects because of its constant tracking box. To address this issue, in this paper we propose a scale adaptive CT approach, which adaptively adjusts the scale of the tracking box to the size variation of the objects. Our method improves CT in three aspects. Firstly, the scale of the tracking box is adaptively adjusted according to the size of the objects. Secondly, in the CT method, all compressive features are assumed independent and to contribute equally to the classifier. In practice, different compressive features have different confidence coefficients. In our proposed method, the confidence coefficients of the features are computed and used to weight their contributions to the classifier. Finally, in the CT method, the learning parameter λ is constant, which results in large tracking drift under object occlusion or large-scale appearance variation. In our proposed method, a variable learning parameter λ is adopted, adjusted according to the rate of object appearance variation. Extensive experiments on the CVPR2013 tracking benchmark demonstrate the superior performance of the proposed method compared to state-of-the-art tracking algorithms.

  4. New model for prediction binary mixture of antihistamine decongestant using artificial neural networks and least squares support vector machine by spectrophotometry method

    Science.gov (United States)

    Mofavvaz, Shirin; Sohrabi, Mahmoud Reza; Nezamzadeh-Ejhieh, Alireza

    2017-07-01

In the present study, artificial neural networks (ANNs) and least squares support vector machines (LS-SVM), intelligent methods based on absorption spectra in the range of 230-300 nm, have been used for determination of antihistamine decongestant contents. In the first step, one type of network (feed-forward back-propagation) from the artificial neural network family was employed with two different training algorithms, Levenberg-Marquardt (LM) and gradient descent with momentum and adaptive learning rate back-propagation (GDX), and their performance was evaluated. The performance of the LM algorithm was better than that of the GDX algorithm. In the second step, a radial basis network was utilized and the results were compared with those of the previous network. In the last step, another intelligent method, the least squares support vector machine, was proposed to construct the antihistamine decongestant prediction model, and the results were compared with those of the two aforementioned networks. The statistical parameters mean square error (MSE), regression coefficient (R2), correlation coefficient (r), mean recovery (%) and relative standard deviation (RSD) were used to select the best model among these methods. Moreover, the proposed methods were compared to high-performance liquid chromatography (HPLC) as a reference method. A one-way analysis of variance (ANOVA) test at the 95% confidence level, applied to the comparison of the suggested and reference methods, showed that there were no significant differences between them.
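The statistical parameters named above have standard definitions; a sketch using one common set of definitions and hypothetical predicted-vs-true concentrations (not the paper's data) might look like:

```python
import numpy as np

def metrics(y_true, y_pred):
    """MSE, R^2, mean recovery (%) and RSD (%) of predicted vs. true values
    (one common set of definitions)."""
    resid = y_true - y_pred
    mse = float(np.mean(resid ** 2))
    r2 = 1.0 - float(np.sum(resid ** 2) / np.sum((y_true - y_true.mean()) ** 2))
    ratio = y_pred / y_true
    recovery = float(ratio.mean() * 100.0)
    rsd = float(ratio.std(ddof=1) / ratio.mean() * 100.0)
    return mse, r2, recovery, rsd

# Hypothetical true vs. predicted concentrations
y_true = np.array([10.0, 20.0, 30.0, 40.0])
y_pred = np.array([10.2, 19.6, 30.3, 39.8])
mse, r2, recovery, rsd = metrics(y_true, y_pred)
print(r2 > 0.99, 98.0 < recovery < 102.0)   # True True
```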

  5. Exploring the perceptions of physicians, caregivers and families towards artificial nutrition and hydration for people in permanent vegetative state: How can a photo-elicitation method help?

    Directory of Open Access Journals (Sweden)

    Elodie Cretin

Full Text Available The question of withdrawing artificial nutrition and hydration from people in a permanent vegetative state sparks considerable ethical and legal debate. Therefore, understanding the elements that influence such a decision is crucial. However, exploring perceptions of artificial nutrition and hydration is methodologically challenging for several reasons: first, because of the emotional state of the professionals and family members, who are facing an extremely distressing situation; second, because this question mirrors representations linked to a deep-rooted fear of dying of hunger and thirst; and third, because of taboos surrounding death. We sought to determine the best method to explore such complex situations in depth. This article aims to assess the relevance of the photo-elicitation interview method for analyzing the perceptions and attitudes of health professionals and families of people in a permanent vegetative state regarding artificial nutrition and hydration. The photo-elicitation interview method consists of inserting one or more photographs into a research interview. An original set of 60 photos was built using Google Images and participants were asked to choose photos (10 at most) and talk about them. The situations of 32 patients were explored in 23 dedicated centers for people in permanent vegetative state across France. In total, 138 interviews were conducted with health professionals and family members. We found that the photo-elicitation interview method (1) was well accepted by the participants and allowed them to express their emotions constructively, (2) fostered narration, reflexivity and introspection, (3) offered a sufficiently "unusual angle" to allow participants to go beyond stereotypes and habits of thinking, and (4) can be replicated in other research areas.
The use of visual methods currently constitutes an expanding area of research, and this study stressed that it is of special interest for enhancing research among populations

  6. Snail Farming in Mature Rubber Plantation : 4. Studies on some Artificial Methods for Hatching of Snail Eggs and Protection of Young Snails during the Dry Season

    Directory of Open Access Journals (Sweden)

    Awah, AA.

    2001-01-01

Full Text Available Three species of edible land snails of the moist forest belt of Nigeria, Archachatina marginata (Swainson), Archachatina papyracae (Pfeiffer) and two phenotypes of Limicolaria species, sometimes named Limicolaria flammae (Muller) and Limicolaria aurora (Jay), were used in the study of three methods of artificial hatching of snail eggs, and their young were used in the study of two methods of reducing mortality during the dry season. The results of egg laying performance by the three species of snails showed a significantly (p < 0.01) higher population explosion in a given breeding season for L. flammae/aurora than for either A. papyracae or A. marginata. The results of the artificial methods for hatching of snail eggs indicated that the use of plastic containers with either loose topsoil or cotton wool as the incubation medium, or the use of cellophane containers (bags) with loose topsoil as the incubation medium, were in each case suitable for adoption in successfully hatching snail eggs artificially. Leaking coagulation pans or wooden boxes, half filled with heat-sterilized loose topsoil and placed on the ground under the shade of the rubber tree canopy as dry season protection methods for the snails, were again in each case effective in reducing field mortality of the young snails. The survival rates were 100%, 90.6% and 71.2% for the young of A. marginata, A. papyracae and L. flammae/aurora respectively. The results further indicated that the dry season protection method deemed optimum for the young of A. marginata may not necessarily be optimum for the young of either A. papyracae or L. flammae/aurora.

  7. Development of a general thermodynamically consistent projection method for the Navier-Stokes equations and its application to compressible natural convection of real fluids

    Science.gov (United States)

    Cook, Charles R.

The subject of this work is the development of a general method for the direct solution of the Navier-Stokes equations, where no assumptions or modeling are required, with any equation of state, while maintaining thermodynamic equilibrium. This is accomplished through generalization of the Characteristic Based Split (CBS) method by removing isentropic assumptions and fully coupling the equation of state with the pressure and energy fields. The Modified CBS (MCBS) method is developed rigorously from first principles with the Navier-Stokes equations, where the equation of state is not required to be known or to be an analytical expression. Thermodynamic equilibrium, or thermodynamic consistency, where the pressure field from the equation of state, p(rho,T), is the same as the dynamic pressure field, is recovered through the implicit treatment of the temperature field during the solution of conservation of energy. Implicit treatment of both the pressure and temperature fields further enhances the MCBS method by permitting integration over acoustic time scales if desired, achieving acoustic filtering without modification of the underlying governing equations. The MCBS method, as implemented in a new Finite Element Method (FEM) code, is applied to the study of compressible natural convection, where the entirety of the Navier-Stokes equations is expressed, with several equations of state. Validation of the MCBS method for incompressible Boussinesq, incompressible thermodynamic Boussinesq, and compressible low-Mach natural convection in a cavity, and for near-wall compressible thermal expansion waves, is achieved with exceptional accuracy with the single MCBS implementation. Further, the solution of natural convection in a cavity using RefProp for the equation of state as well as for all thermodynamic and transport properties was successfully achieved with the same implementation, providing real-fluid results. The case of natural convection in a cavity is further pushed into higher Rayleigh numbers where the

  8. Glytube: a conical tube and parafilm M-based method as a simplified device to artificially blood-feed the dengue vector mosquito, Aedes aegypti.

    Directory of Open Access Journals (Sweden)

    André Luis Costa-da-Silva

Full Text Available Aedes aegypti, the main vector of dengue virus, requires a blood meal to produce eggs. Although live animals are still the main blood source for laboratory colonies, many artificial feeders are available. These feeders are also the best method for experimental oral infection of Ae. aegypti with dengue viruses. However, most of them are expensive or laborious to construct. Based on the principle of the Rutledge-type feeder, a conventional conical tube, glycerol and Parafilm M were used to develop a simple in-house feeder device. The blood feeding efficiency of this apparatus was compared to that of a live blood source, mice, and no significant difference (p = 0.1189) was observed between the artificially fed (51.3% engorgement) and mice-fed (40.6%) groups. Thus, an easy-to-assemble and cost-effective artificial feeder, designated the "Glytube", was developed in this report. This simple and efficient feeding device can be built with common laboratory materials for research on Ae. aegypti.

  9. Artificial intelligence

    International Nuclear Information System (INIS)

    Perret-Galix, D.

    1992-01-01

    A vivid example of the growing need for frontier physics experiments to make use of frontier technology is in the field of artificial intelligence and related themes. This was reflected in the second international workshop on 'Software Engineering, Artificial Intelligence and Expert Systems in High Energy and Nuclear Physics' which took place from 13-18 January at France Telecom's Agelonde site at La Londe des Maures, Provence. It was the second in a series, the first having been held at Lyon in 1990

  10. Artificial Intelligence

    CERN Document Server

    Warwick, Kevin

    2011-01-01

    if AI is outside your field, or you know something of the subject and would like to know more then Artificial Intelligence: The Basics is a brilliant primer.' - Nick Smith, Engineering and Technology Magazine November 2011 Artificial Intelligence: The Basics is a concise and cutting-edge introduction to the fast moving world of AI. The author Kevin Warwick, a pioneer in the field, examines issues of what it means to be man or machine and looks at advances in robotics which have blurred the boundaries. Topics covered include: how intelligence can be defined whether machines can 'think' sensory

  11. Subjective evaluation of compressed image quality

    Science.gov (United States)

    Lee, Heesub; Rowberg, Alan H.; Frank, Mark S.; Choi, Hyung-Sik; Kim, Yongmin

    1992-05-01

Lossy data compression generates distortion or error in the reconstructed image, and the distortion becomes visible as the compression ratio increases. Even at the same compression ratio, the distortion appears different depending on the compression method used. Because of the nonlinearity of the human visual system and of lossy data compression methods, we have subjectively evaluated the quality of medical images compressed with two different methods, an intraframe and an interframe coding algorithm. The evaluated raw data were analyzed statistically to measure interrater reliability and the reliability of an individual reader. Also, analysis of variance was used to identify which compression method is statistically better, and from what compression ratio the quality of a compressed image is evaluated as poorer than that of the original. Nine x-ray CT head images from three patients were used as test cases. Six radiologists participated in reading the 99 images (some were duplicates) compressed at four different compression ratios: original, 5:1, 10:1, and 15:1. The six readers agreed more than by chance alone and their agreement was statistically significant, but there were large variations among readers as well as within a reader. The displacement-estimated interframe coding algorithm is significantly better in quality than the 2-D block DCT at significance level 0.05. Also, images compressed 10:1 with the interframe coding algorithm do not show any significant differences from the original at level 0.05.

  12. Comparison of Boolean analysis and standard phylogenetic methods using artificially evolved and natural mt-tRNA sequences from great apes.

    Science.gov (United States)

    Ari, Eszter; Ittzés, Péter; Podani, János; Thi, Quynh Chi Le; Jakó, Eena

    2012-04-01

Boolean analysis (or BOOL-AN; Jakó et al., 2009. BOOL-AN: A method for comparative sequence analysis and phylogenetic reconstruction. Mol. Phylogenet. Evol. 52, 887-97.), a recently developed method for sequence comparison, uses the Iterative Canonical Form of Boolean functions. It considers sequence information in a way entirely different from standard phylogenetic methods (i.e. Maximum Parsimony, Maximum Likelihood, Neighbor-Joining, and Bayesian analysis). The performance and reliability of Boolean analysis were tested and compared with the standard phylogenetic methods using artificially evolved (simulated) nucleotide sequences and the 22 mitochondrial tRNA genes of the great apes. At the outset, we assumed that the phylogeny of Hominidae is generally well established, and that the guide tree of artificial sequence evolution can also be used as a benchmark. These offer a possibility to compare and test the performance of different phylogenetic methods. Trees were reconstructed by each method from 2500 simulated sequences and the 22 mitochondrial tRNA sequences. We also introduced a special re-sampling method for Boolean analysis on permuted sequence sites, the P-BOOL-AN procedure. Considering the reliability values (branch support values of consensus trees and Robinson-Foulds distances) for the simulated sequence trees produced by the different phylogenetic methods, BOOL-AN appeared the most reliable method. Although the mitochondrial tRNA sequences of great apes are relatively short (59-75 bases long) and the ratio of their constant characters is about 75%, BOOL-AN, P-BOOL-AN and the Bayesian approach produced the same tree topology as the established phylogeny, while the outcomes of the Maximum Parsimony, Maximum Likelihood and Neighbor-Joining methods were equivocal. We conclude that Boolean analysis is a promising alternative to existing methods of sequence comparison for phylogenetic reconstruction and congruence analysis.

  13. Steady and Unsteady Numerical Solution of Generalized Newtonian Fluids Flow by Runge-Kutta method

    Science.gov (United States)

    Keslerová, R.; Kozel, K.; Prokop, V.

    2010-09-01

In this paper the laminar viscous incompressible flow of generalized Newtonian (Newtonian and non-Newtonian) fluids is considered. The governing system of equations is the system of Navier-Stokes equations together with the continuity equation. The steady and unsteady numerical solutions of this system are computed by a finite volume method combined with an artificial compressibility method. For the time discretization an explicit multistage Runge-Kutta scheme is considered. The steady-state solution is obtained for t→∞ using steady boundary conditions, with convergence monitored through the residual behavior. The dual time-stepping method is used for the unsteady computation, with a high artificial compressibility coefficient applied in the dual time τ. Steady and unsteady numerical results for Newtonian and non-Newtonian (shear thickening and shear thinning) fluid flow in a branching channel are presented.
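A minimal sketch of the artificial compressibility idea (not the authors' finite volume solver: convection is omitted, the grid is periodic, and beta, nu, and the pseudo-time step are illustrative choices) replaces the continuity equation with dp/dτ + β ∇·u = 0 and marches the system in pseudo-time until the velocity field becomes divergence-free:

```python
import numpy as np

# Pseudo-time system:  dp/dtau = -beta * div(u)
#                      du/dtau = -grad(p) + nu * lap(u)
# A small viscosity supplies the damping a real scheme obtains from its
# spatial discretisation; iterating drives div(u) -> 0.

n = 16
h = 1.0 / n
beta, nu, dtau = 1.0, 0.05, 0.01

def ddx(f):   # centred difference in x, periodic
    return (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2 * h)

def ddy(f):   # centred difference in y, periodic
    return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2 * h)

def lap(f):   # 5-point Laplacian, periodic
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f) / h**2

rng = np.random.default_rng(0)
u, v = rng.standard_normal((2, n, n))   # initial field, not divergence-free
p = np.zeros((n, n))

for _ in range(10000):                      # pseudo-time iteration
    p -= dtau * beta * (ddx(u) + ddy(v))    # artificial compressibility
    u += dtau * (-ddx(p) + nu * lap(u))     # x-momentum (pressure + viscosity)
    v += dtau * (-ddy(p) + nu * lap(v))     # y-momentum

print(np.abs(ddx(u) + ddy(v)).max() < 1e-6)   # velocity is now divergence-free
```

The pseudo-time step is limited by both the artificial acoustic speed sqrt(β) and the explicit viscous term, which is why real solvers pair this formulation with multistage Runge-Kutta schemes and tuned β.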

  14. Multimodal approach to characterization of hydrophilic matrices manufactured by wet and dry granulation or direct compression methods.

    Science.gov (United States)

    Kulinowski, Piotr; Woyna-Orlewicz, Krzysztof; Obrał, Jadwiga; Rappen, Gerd-Martin; Haznar-Garbacz, Dorota; Węglarz, Władysław P; Jachowicz, Renata; Wyszogrodzka, Gabriela; Klaja, Jolanta; Dorożyński, Przemysław P

    2016-02-29

The purpose of this research was to investigate the effect of the manufacturing process of controlled release hydrophilic matrix tablets on their hydration behavior, internal structure and drug release. Direct compression (DC) quetiapine hemifumarate matrices and matrices made of powders obtained by dry granulation (DG) and high shear wet granulation (HS) were prepared. They had the same quantitative composition and were evaluated using X-ray microtomography, magnetic resonance imaging and biorelevant stress test dissolution. The principal results concerned matrices after 2 h of hydration: (i) a layered structure of the DC and DG hydrated tablets was observed, with magnetic resonance image intensity decreasing towards the center of the matrix, while in HS matrices a layer of lower intensity appeared in the middle of the hydrated part; (ii) the DC and DG tablets retained their core and consequently exhibited higher resistance to the physiological stresses during simulation of small intestinal passage than the HS formulation. Compared to DC, HS granulation changed the properties of the matrix in terms of hydration pattern and resistance to stress in the biorelevant dissolution apparatus. Dry granulation did not change these properties: similar hydration patterns and dissolution under biorelevant conditions were observed for the DC and DG matrices.

  15. Development of Ground Coils with Low Eddy Current Loss by Applying the Compression Molding Method after the Coil Winding

    Science.gov (United States)

    Suzuki, Masao; Aiba, Masayuki; Takahashi, Noriyuki; Ota, Satoru; Okada, Shigenori

In a magnetically levitated transportation (MAGLEV) system, a huge number of ground coils will be required because they must be laid along the whole line. Therefore, stable performance and reduced cost are essential requirements for ground coil development. On the other hand, because the magnetic field changes when the superconducting magnet passes by, an eddy current is generated in the conductor of the ground coil and results in energy loss. This loss not only increases the magnetic resistance to the running train but also raises the ground coil temperature. Therefore, the reduction of the eddy current loss is extremely important. This study examined ground coils in which both the eddy current loss and the temperature increase were small. Furthermore, a quantitative comparison of the eddy current loss of various magnet wire samples was performed by bench test. On the basis of this comparison, a round twisted wire with low eddy current loss was selected as an effective ground coil material. In addition, ground coils were manufactured on a trial basis. A favorable outlook for improving the dimensional accuracy of the wound coil and the thickness uniformity of the molded resin, without reducing the insulation strength between the coil layers, was obtained by applying compression molding after winding.

  16. Still image and video compression with MATLAB

    CERN Document Server

    Thyagarajan, K

    2010-01-01

    This book describes the principles of image and video compression techniques and introduces current and popular compression standards, such as the MPEG series. Derivations of relevant compression algorithms are developed in an easy-to-follow fashion. Numerous examples are provided in each chapter to illustrate the concepts. The book includes complementary software written in MATLAB SIMULINK to give readers hands-on experience in using and applying various video compression methods. Readers can enhance the software by including their own algorithms.

  17. Considerations and Algorithms for Compression of Sets

    DEFF Research Database (Denmark)

    Larsson, Jesper

    We consider compression of unordered sets of distinct elements. After a discussion of the general problem, we focus on compressing sets of fixed-length bitstrings in the presence of statistical information. We survey techniques from previous work, suggesting some adjustments, and propose a novel compression algorithm that allows transparent incorporation of various estimates for probability distribution. Our experimental results allow the conclusion that set compression can benefit from incorporating statistics, using our method or variants of previously known techniques.
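A classical baseline for this problem is enumerative (combinatorial) coding: a k-subset of a universe of size U can be identified by its rank among all C(U, k) subsets, so ⌈log₂ C(U, k)⌉ bits suffice, which is optimal when every set is equally likely. The sketch below implements the standard lexicographic rank/unrank for subsets of {0, …, U−1}; it illustrates the baseline only, not the paper's statistics-aware algorithm:

```python
from math import ceil, comb, log2

def set_rank(elements, universe_size):
    """Lexicographic rank of a k-subset of {0..U-1}: a unique integer
    in [0, C(U, k)), the information-theoretic optimum for uniform sets."""
    s = sorted(elements)
    k = len(s)
    rank = 0
    prev = -1
    for i, e in enumerate(s):
        # Count subsets that share the first i elements but pick a smaller i-th one.
        for smaller in range(prev + 1, e):
            rank += comb(universe_size - smaller - 1, k - i - 1)
        prev = e
    return rank

def set_unrank(rank, k, universe_size):
    """Inverse mapping: recover the sorted set from its rank."""
    out = []
    e = 0
    for i in range(k):
        while True:
            block = comb(universe_size - e - 1, k - i - 1)
            if rank < block:
                out.append(e)
                e += 1
                break
            rank -= block
            e += 1
    return out

s = {3, 7, 42, 100}
u = 256  # universe: all 8-bit strings
r = set_rank(s, u)
assert set_unrank(r, len(s), u) == sorted(s)
bits = ceil(log2(comb(u, len(s))))
print(f"rank {r} fits in {bits} bits vs {len(s) * 8} bits for a naive listing")
```

When element probabilities are not uniform, as in the paper's setting, the ranking can be replaced by arithmetic coding driven by the estimated distribution, which is the kind of incorporation of statistics the abstract refers to.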

  18. Artificial Intelligence.

    Science.gov (United States)

    Wash, Darrel Patrick

    1989-01-01

    Making a machine seem intelligent is not easy. As a consequence, demand for computer professionals skilled in artificial intelligence has been rising and is likely to keep rising. These workers develop expert systems and solve the mysteries of machine vision, natural language processing, and neural networks. (Editor)

  19. Generalized massive optimal data compression

    Science.gov (United States)

    Alsing, Justin; Wandelt, Benjamin

    2018-02-01

    Data compression has become one of the cornerstones of modern astronomical data analysis, with the vast majority of analyses compressing large raw datasets down to a manageable number of informative summaries. In this paper we provide a general procedure for optimally compressing N data down to n summary statistics, where n is equal to the number of parameters of interest. We show that compression to the score function - the gradient of the log-likelihood with respect to the parameters - yields n compressed statistics that are optimal in the sense that they preserve the Fisher information content of the data. Our method generalizes earlier work on linear Karhunen-Loève compression for Gaussian data whilst recovering both lossless linear compression and quadratic estimation as special cases when they are optimal. We give a unified treatment that also includes the general non-Gaussian case as long as mild regularity conditions are satisfied, producing optimal non-linear summary statistics when appropriate. As a worked example, we derive explicitly the n optimal compressed statistics for Gaussian data in the general case where both the mean and covariance depend on the parameters.
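A minimal sketch of score compression for the simplest Gaussian case described above: with independent noise of known variance and a mean μ(θ) evaluated at a fiducial parameter point, the score is t_a = Σ_i (d_i − μ_i) (∂μ_i/∂θ_a) / σ_i², which maps N data points to n = dim(θ) summaries. All names and the toy straight-line model are illustrative, not from the paper:

```python
import random

def score_compress(data, mu, dmu_dtheta, var):
    """Score-function compression for Gaussian data with diagonal covariance:
    t_a = sum_i (d_i - mu_i) * (dmu_i/dtheta_a) / var_i.
    Yields n = dim(theta) summaries that preserve the Fisher information."""
    n_params = len(dmu_dtheta[0])
    return [sum((d - m) * dm[a] / v
                for d, m, dm, v in zip(data, mu, dmu_dtheta, var))
            for a in range(n_params)]

# Toy model: d_i = theta0 + theta1 * x_i + noise, fiducial theta = (0, 1).
random.seed(0)
x = [i / 10 for i in range(100)]
theta_fid = (0.0, 1.0)
mu = [theta_fid[0] + theta_fid[1] * xi for xi in x]
dmu = [(1.0, xi) for xi in x]   # d mu_i / d theta_a at the fiducial point
var = [0.01] * len(x)
data = [m + random.gauss(0.0, 0.1) for m in mu]

t = score_compress(data, mu, dmu, var)   # 100 data points -> 2 summaries
print(t)
```

At noise-free data equal to the fiducial mean the score vanishes, which is a quick sanity check; the general case in the paper adds a term for the parameter dependence of the covariance.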

  20. Optimization of DRASTIC method by supervised committee machine artificial intelligence to assess groundwater vulnerability for Maragheh-Bonab plain aquifer, Iran

    Science.gov (United States)

    Fijani, Elham; Nadiri, Ata Allah; Asghari Moghaddam, Asghar; Tsai, Frank T.-C.; Dixon, Barnali

    2013-10-01

    Contamination of wells with nitrate-N (NO3-N) poses various threats to human health. Groundwater contamination is a complex process, full of uncertainty at the regional scale, so an integrative vulnerability assessment methodology can help protect this valuable freshwater source and manage it effectively, including prioritizing limited resources toward monitoring high-risk areas. This study introduces a supervised committee machine with artificial intelligence (SCMAI) model to improve the DRASTIC method for groundwater vulnerability assessment of the Maragheh-Bonab plain aquifer in Iran. Four different AI models, each taking the DRASTIC parameters as input, are considered in the SCMAI model. The SCMAI model improves on the committee machine with artificial intelligence (CMAI) model by replacing the CMAI's linear combination with a nonlinear supervised ANN framework. To calibrate the AI models, NO3-N concentration data are divided into two datasets for training and validation. The target values in the training step are the corrected vulnerability indices derived from the first NO3-N concentration dataset. After training, the AI models are verified against the second NO3-N concentration dataset. The results show that all four AI models improve the DRASTIC method. Since no single AI model dominates in performance, the SCMAI model is used to combine the advantages of the individual AI models: it re-predicts groundwater vulnerability from the individual models' prediction values. The results show that the SCMAI outperforms both the individual AI models and the CMAI model, and it ensures that no water well with high NO3-N levels is classified as low risk and vice versa. The study concludes that the SCMAI model is effective in improving the DRASTIC model and provides a confident estimate of groundwater vulnerability.
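The combination step described above can be sketched as model stacking: a CMAI-style combiner fits linear weights to the base-model outputs, while an SCMAI-style combiner replaces the linear rule with a small supervised neural network. The toy sketch below (pure Python; the synthetic "vulnerability" target and base models are invented for illustration, and this is not the authors' code) shows both combiners:

```python
import math
import random

def linear_committee(preds, target):
    """CMAI-style combiner: least-squares weights for two base models,
    solving the 2x2 normal equations directly."""
    a11 = sum(p[0] * p[0] for p in preds)
    a12 = sum(p[0] * p[1] for p in preds)
    a22 = sum(p[1] * p[1] for p in preds)
    b1 = sum(p[0] * t for p, t in zip(preds, target))
    b2 = sum(p[1] * t for p, t in zip(preds, target))
    det = a11 * a22 - a12 * a12
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a12 * b1) / det

def ann_committee(preds, target, hidden=4, epochs=1500, lr=0.02):
    """SCMAI-style combiner: a tiny one-hidden-layer tanh network trained by
    stochastic gradient descent on the base-model outputs (a stacking sketch)."""
    random.seed(1)
    w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [random.uniform(-1, 1) for _ in range(hidden)]
    b2 = 0.0
    for _ in range(epochs):
        for p, t in zip(preds, target):
            h = [math.tanh(w1[j][0] * p[0] + w1[j][1] * p[1] + b1[j])
                 for j in range(hidden)]
            y = sum(w2[j] * h[j] for j in range(hidden)) + b2
            err = y - t
            for j in range(hidden):
                dh = err * w2[j] * (1.0 - h[j] ** 2)  # backprop through tanh
                w2[j] -= lr * err * h[j]
                w1[j][0] -= lr * dh * p[0]
                w1[j][1] -= lr * dh * p[1]
                b1[j] -= lr * dh
            b2 -= lr * err
    def predict(p):
        h = [math.tanh(w1[j][0] * p[0] + w1[j][1] * p[1] + b1[j])
             for j in range(hidden)]
        return sum(w2[j] * h[j] for j in range(hidden)) + b2
    return predict

# Toy target and two imperfect, nonlinearly biased base predictions of it.
truth = [math.sin(i / 5) for i in range(60)]
preds = [(t + 0.2 * t ** 3, t + 0.2 * abs(t)) for t in truth]

w = linear_committee(preds, truth)
net = ann_committee(preds, truth)
lin_err = sum((w[0] * p[0] + w[1] * p[1] - t) ** 2 for p, t in zip(preds, truth))
ann_err = sum((net(p) - t) ** 2 for p, t in zip(preds, truth))
print(f"linear SSE = {lin_err:.4f}, ANN SSE = {ann_err:.4f}")
```

Whether the nonlinear combiner helps depends on how the base models err; in the study it is trained on corrected vulnerability indices and NO3-N data rather than on a toy target like this one.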