WorldWideScience

Sample records for artificial compressibility method

  1. Development and evaluation of a novel lossless image compression method (AIC: artificial intelligence compression method) using neural networks as artificial intelligence

    International Nuclear Information System (INIS)

    Fukatsu, Hiroshi; Naganawa, Shinji; Yumura, Shinnichiro

    2008-01-01

    This study aimed to validate the performance of a novel image compression method that uses a neural network to achieve lossless compression. The encoding consists of the following blocks: a prediction block; a residual data calculation block; a transformation and quantization block; an organization and modification block; and an entropy encoding block. The predicted image is divided into four macro-blocks, using the original image for teaching, and then redivided into sixteen sub-blocks. The predicted image is compared to the original image to create the residual image. The spatial and frequency data of the residual image are compared and transformed. Chest radiography, computed tomography (CT), magnetic resonance imaging, positron emission tomography, radioisotope mammography, ultrasonography, and digital subtraction angiography images were compressed using the AIC lossless compression method, and the compression rates were calculated. The compression rates were around 15:1 for chest radiography and mammography, 12:1 for CT, and around 6:1 for the other images. This method thus enables greater lossless compression than conventional methods and should improve the handling of the increasing volume of medical imaging data. (author)
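
    To make the pipeline concrete, below is a minimal sketch of the predict-residual-entropy-code pattern on which AIC is built. It is illustrative only: a left-neighbor predictor stands in for the paper's neural-network predictor, and zlib stands in for its entropy coder; the block segmentation, transformation, and quantization stages are omitted.

      import zlib
      import numpy as np

      def encode_lossless(img):
          # Predict each pixel from its left neighbor (a stand-in for the
          # neural-network prediction block) and keep only the residuals.
          img = img.astype(np.int16)
          residual = img.copy()
          residual[:, 1:] = img[:, 1:] - img[:, :-1]
          # Entropy-code the (mostly small) residuals; zlib stands in for
          # the paper's entropy encoding block.
          return zlib.compress(residual.tobytes())

      def decode_lossless(blob, shape):
          residual = np.frombuffer(zlib.decompress(blob), dtype=np.int16).reshape(shape)
          # Inverting the prediction is an exact cumulative sum, so the
          # round trip is bit-exact, i.e. lossless.
          return np.cumsum(residual, axis=1, dtype=np.int16)

      # Round trip on a smooth synthetic "image": compresses well, restores exactly.
      img = np.linspace(0, 255, 256 * 256).reshape(256, 256).astype(np.int16)
      blob = encode_lossless(img)
      assert np.array_equal(decode_lossless(blob, img.shape), img)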

  2. Preconditioned characteristic boundary conditions based on artificial compressibility method for solution of incompressible flows

    Science.gov (United States)

    Hejranfar, Kazem; Parseh, Kaveh

    2017-09-01

    The preconditioned characteristic boundary conditions based on the artificial compressibility (AC) method are implemented at artificial boundaries for the solution of two- and three-dimensional incompressible viscous flows in the generalized curvilinear coordinates. The compatibility equations and the corresponding characteristic variables (or the Riemann invariants) are mathematically derived and then applied as suitable boundary conditions in a high-order accurate incompressible flow solver. The spatial discretization of the resulting system of equations is carried out by the fourth-order compact finite-difference (FD) scheme. In the preconditioning applied here, the value of AC parameter in the flow field and also at the far-field boundary is automatically calculated based on the local flow conditions to enhance the robustness and performance of the solution algorithm. The code is fully parallelized using the Concurrency Runtime standard and Parallel Patterns Library (PPL) and its performance on a multi-core CPU is analyzed. The incompressible viscous flows around a 2-D circular cylinder, a 2-D NACA0012 airfoil and also a 3-D wavy cylinder are simulated and the accuracy and performance of the preconditioned characteristic boundary conditions applied at the far-field boundaries are evaluated in comparison to the simplified boundary conditions and the non-preconditioned characteristic boundary conditions. It is indicated that the preconditioned characteristic boundary conditions considerably improve the convergence rate of the solution of incompressible flows compared to the other boundary conditions and the computational costs are significantly decreased.
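
    For reference, the boundary conditions above are derived from Chorin's artificial compressibility system, stated here in its standard pseudo-time form (τ is pseudo-time, β the AC parameter tuned by the preconditioning):

      \frac{1}{\beta}\,\frac{\partial p}{\partial \tau} + \frac{\partial u_j}{\partial x_j} = 0, \qquad
      \frac{\partial u_i}{\partial \tau} + \frac{\partial (u_i u_j)}{\partial x_j} = -\frac{\partial p}{\partial x_i} + \nu\,\frac{\partial^2 u_i}{\partial x_j \partial x_j}.

    At pseudo-steady state the pressure derivative vanishes and the incompressible equations are recovered; in one dimension the system is hyperbolic with characteristic speeds u ± sqrt(u^2 + β), which is what the derived compatibility relations propagate at the far-field boundaries.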

  3. A relaxation-projection method for compressible flows. Part II: Artificial heat exchanges for multiphase shocks

    International Nuclear Information System (INIS)

    Petitpas, Fabien; Franquet, Erwin; Saurel, Richard; Le Metayer, Olivier

    2007-01-01

    The relaxation-projection method developed in Saurel et al. [R. Saurel, E. Franquet, E. Daniel, O. Le Metayer, A relaxation-projection method for compressible flows. Part I: The numerical equation of state for the Euler equations, J. Comput. Phys. 223 (2007) 822-845] is extended to the non-conservative hyperbolic multiphase flow model of Kapila et al. [A.K. Kapila, R. Menikoff, J.B. Bdzil, S.F. Son, D.S. Stewart, Two-phase modeling of deflagration to detonation transition in granular materials: reduced equations, Physics of Fluids 13(10) (2001) 3002-3024]. This model is able to treat multi-temperature mixtures evolving with a single pressure and velocity, and is particularly interesting for the computation of interface problems with compressible materials as well as wave propagation in heterogeneous mixtures. The non-conservative character of this model, however, poses computational challenges in the presence of shocks. The first issue is related to the Riemann problem resolution, which necessitates shock jump conditions. Thanks to the Rankine-Hugoniot relations proposed and validated in Saurel et al. [R. Saurel, O. Le Metayer, J. Massoni, S. Gavrilyuk, Shock jump conditions for multiphase mixtures with stiff mechanical relaxation, Shock Waves 16 (3) (2007) 209-232], exact and approximate two-shock Riemann solvers are derived. However, the Riemann solver is only one part of a numerical scheme, and non-conservative variables pose extra difficulties for the projection or cell averaging of the solution. It is shown that conventional Godunov schemes are unable to converge to the exact solution for strong multiphase shocks. This is due to the incorrect partition of the energies or entropies in the cell-averaged mixture. To circumvent this difficulty a specific Lagrangian scheme is developed. The correct partition of the energies is achieved by using an artificial heat exchange in the shock layer. With the help of an asymptotic analysis this heat exchange takes a similar form as

  4. A relaxation-projection method for compressible flows. Part II: Artificial heat exchanges for multiphase shocks

    Science.gov (United States)

    Petitpas, Fabien; Franquet, Erwin; Saurel, Richard; Le Metayer, Olivier

    2007-08-01

    The relaxation-projection method developed in Saurel et al. [R. Saurel, E. Franquet, E. Daniel, O. Le Metayer, A relaxation-projection method for compressible flows. Part I: The numerical equation of state for the Euler equations, J. Comput. Phys. 223 (2007) 822-845] is extended to the non-conservative hyperbolic multiphase flow model of Kapila et al. [A.K. Kapila, R. Menikoff, J.B. Bdzil, S.F. Son, D.S. Stewart, Two-phase modeling of deflagration to detonation transition in granular materials: reduced equations, Physics of Fluids 13(10) (2001) 3002-3024]. This model is able to treat multi-temperature mixtures evolving with a single pressure and velocity, and is particularly interesting for the computation of interface problems with compressible materials as well as wave propagation in heterogeneous mixtures. The non-conservative character of this model, however, poses computational challenges in the presence of shocks. The first issue is related to the Riemann problem resolution, which necessitates shock jump conditions. Thanks to the Rankine-Hugoniot relations proposed and validated in Saurel et al. [R. Saurel, O. Le Metayer, J. Massoni, S. Gavrilyuk, Shock jump conditions for multiphase mixtures with stiff mechanical relaxation, Shock Waves 16 (3) (2007) 209-232], exact and approximate two-shock Riemann solvers are derived. However, the Riemann solver is only one part of a numerical scheme, and non-conservative variables pose extra difficulties for the projection or cell averaging of the solution. It is shown that conventional Godunov schemes are unable to converge to the exact solution for strong multiphase shocks. This is due to the incorrect partition of the energies or entropies in the cell-averaged mixture. To circumvent this difficulty a specific Lagrangian scheme is developed. The correct partition of the energies is achieved by using an artificial heat exchange in the shock layer. With the help of an asymptotic analysis this heat exchange takes a similar form as
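
    As background, the single-phase Rankine-Hugoniot conditions that the multiphase jump relations cited above generalize read, for a shock of speed σ (with [·] denoting the jump across it):

      [\rho\,(u-\sigma)] = 0, \qquad
      [\rho\,u\,(u-\sigma) + p] = 0, \qquad
      [\rho\,E\,(u-\sigma) + p\,u] = 0, \qquad E = e + \tfrac{1}{2}u^2.

    The difficulty addressed in the paper is that no such unambiguous algebraic relations follow directly from a non-conservative multiphase model, which is why dedicated jump conditions had to be proposed and validated separately.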

  5. Robustly Fitting and Forecasting Dynamical Data With Electromagnetically Coupled Artificial Neural Network: A Data Compression Method.

    Science.gov (United States)

    Wang, Ziyin; Liu, Mandan; Cheng, Yicheng; Wang, Rubin

    2017-06-01

    In this paper, a dynamical recurrent artificial neural network (ANN) is proposed and studied. Inspired by recent research in neuroscience, we introduce nonsynaptic coupling to form a dynamical component of the network. We mathematically prove that, with adequate neurons provided, this dynamical ANN model is capable of approximating any continuous dynamic system with an arbitrarily small error in a limited time interval. Its extremely concise Jacobian matrix makes the local stability easy to control. We designed this ANN for fitting and forecasting dynamic data and obtained satisfactory results in simulation. The fitting performance is also compared with those of both the classic dynamic ANN and state-of-the-art models. Sufficient trials and the statistical results indicated that our model is superior to those compared. Moreover, we propose a robust approximation problem, which asks the ANN to approximate a cluster of input-output data pairs over large ranges and to forecast the output of the system under previously unseen input. Our model and learning scheme successfully solve this problem, and through this, the approximation becomes much more robust and adaptive to noise, perturbation, and low-order harmonic waves. This approach is effectively a method for compressing massive external data of a dynamic system into the weights of the ANN.
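
    The idea of compressing a dynamical data set into network weights can be illustrated with a much simpler recurrent scheme than the paper's electromagnetically coupled model: an echo-state network, in which only a linear readout is trained. This sketch is an assumption-laden stand-in, not the authors' architecture.

      import numpy as np

      rng = np.random.default_rng(0)

      # Data: a noisy sine series to fit one step ahead.
      t = np.linspace(0, 40 * np.pi, 4000)
      y = np.sin(t) + 0.01 * rng.standard_normal(t.size)

      # Fixed random reservoir; only the readout below is trained.
      n = 200
      W_in = rng.uniform(-0.5, 0.5, n)
      W = rng.standard_normal((n, n))
      W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 for stability

      # Drive the reservoir with y[k] and collect its states.
      X = np.zeros((y.size - 1, n))
      x = np.zeros(n)
      for k in range(y.size - 1):
          x = np.tanh(W @ x + W_in * y[k])
          X[k] = x

      # Ridge-regression readout: the whole series is "compressed" into W_out
      # (plus the fixed random weights).
      lam = 1e-6
      W_out = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y[1:])
      print("one-step fit RMSE:", np.sqrt(np.mean((X @ W_out - y[1:]) ** 2)))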

  6. Treatment of fully enclosed FSI using artificial compressibility

    CSIR Research Space (South Africa)

    Bogaers, Alfred EJ

    2013-07-01

    Full Text Available artificial compressibility (AC), whereby the fluid equations are modified to allow for compressibility which internally incorporates an approximation of the system volume change as a function of pressure....

  7. Extending the robustness and efficiency of artificial compressibility for partitioned fluid-structure interactions

    CSIR Research Space (South Africa)

    Bogaers, Alfred EJ

    2015-01-01

    Full Text Available In this paper we introduce the idea of combining artificial compressibility (AC) with quasi-Newton (QN) methods to solve strongly coupled, fully/quasi-enclosed fluid-structure interaction (FSI) problems. Partitioned, incompressible, FSI based...

  8. Artificial Neural Network Model for Predicting Compressive Strength of Concrete

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

    Full Text Available Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing of the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20%, and 88% of the output results have absolute errors of less than 10%. A parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results show that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
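
    A compact sketch of this kind of workflow with a generic off-the-shelf network (not the authors' back-propagation model; the mix features, ranges, and synthetic strength law here are hypothetical placeholders):

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      # Hypothetical features: cement, water, fine aggregate, coarse aggregate
      # (kg/m^3), max aggregate size (mm), slump (mm); target: 28-day strength (MPa).
      rng = np.random.default_rng(1)
      X = rng.uniform([250, 140, 600, 900, 10, 50],
                      [450, 220, 900, 1200, 40, 200], (200, 6))
      w_c = X[:, 1] / X[:, 0]                               # water/cement ratio
      y = 110 * np.exp(-1.8 * w_c) + rng.normal(0, 2, 200)  # synthetic strength law

      model = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(8,),
                                         max_iter=5000, random_state=0))
      model.fit(X[:150], y[:150])
      print("held-out R^2:", model.score(X[150:], y[150:]))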

  9. Bacterial DNA Sequence Compression Models Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Armando J. Pinho

    2013-08-01

    Full Text Available It is widely accepted that advances in DNA sequencing techniques have contributed to an unprecedented growth of genomic data. This fact has increased the interest in DNA compression, not only from the information theory and biology points of view, but also from a practical perspective, since such sequences require storage resources. Several compression methods exist, and in particular those using finite-context models (FCMs) have received increasing attention, as they have been proven to effectively compress DNA sequences with low bits-per-base as well as low encoding/decoding time-per-base. However, the amount of run-time memory required to store high-order finite-context models may become impractical, since a context order as low as 16 requires a maximum of 17.2 x 10^9 memory entries. This paper presents a method to reduce this memory requirement by using a novel application of artificial neural networks (ANNs) to build such probabilistic models in a compact way, and shows how to use them to estimate the probabilities. Such a system was implemented, and its performance compared against state-of-the-art compressors, such as XM-DNA (expert model) and FCM-Mx (mixture of finite-context models), as well as with general-purpose compressors. Using a combination of an order-10 FCM and an ANN, encoding results similar to those of FCMs up to order 16 are obtained using only 17 megabytes of memory, whereas the latter, even employing hash tables, use several hundreds of megabytes.
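
    The 17.2 x 10^9 figure follows directly from direct addressing over the 4-symbol DNA alphabet, with one counter per (context, next-symbol) pair:

      # Memory bound for a direct-addressed order-16 finite-context model.
      order = 16
      contexts = 4 ** order        # 4,294,967,296 possible contexts over {A,C,G,T}
      entries = contexts * 4       # one counter per next symbol: 17,179,869,184
      print(f"{entries:.3g} entries")   # ~1.72e+10, the 17.2 x 10^9 in the abstract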

  10. Stability of Bifurcating Stationary Solutions of the Artificial Compressible System

    Science.gov (United States)

    Teramoto, Yuka

    2018-02-01

    The artificial compressible system gives a compressible approximation of the incompressible Navier-Stokes system. The latter system is obtained from the former in the zero limit of the artificial Mach number ɛ, which is a singular limit. The sets of stationary solutions of both systems coincide with each other. It is known that if a stationary solution of the incompressible system is asymptotically stable and the velocity field of the stationary solution satisfies an energy-type stability criterion, then it is also stable as a solution of the artificial compressible one for sufficiently small ɛ. In general, the range of ɛ shrinks when the spectrum of the linearized operator for the incompressible system approaches the imaginary axis. This can happen when a stationary bifurcation occurs. It is proved that when a stationary bifurcation from a simple eigenvalue occurs, the range of ɛ can be taken uniformly near the bifurcation point to conclude the stability of the bifurcating solution as a solution of the artificial compressible system.
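
    One common form of the artificial compressible system in question puts the artificial Mach number ɛ in front of the pressure time derivative (the precise scaling is an assumption here, stated for orientation):

      \varepsilon^2\,\partial_t p + \nabla\cdot u = 0, \qquad
      \partial_t u + (u\cdot\nabla)u + \nabla p = \nu\,\Delta u.

    In the limit ɛ → 0 the first equation degenerates to ∇·u = 0, which is why the two systems share the same stationary solutions while the limit itself is singular.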

  11. Survey of numerical methods for compressible fluids

    Energy Technology Data Exchange (ETDEWEB)

    Sod, G A

    1977-06-01

    The finite difference methods of Godunov, Hyman, Lax-Wendroff (two-step), MacCormack, Rusanov, the upwind scheme, the hybrid scheme of Harten and Zwas, the antidiffusion method of Boris and Book, and the artificial compression method of Harten are compared with the random choice method of Glimm. The methods are used to integrate the one-dimensional equations of gas dynamics for an inviscid fluid. The results are compared and demonstrate that Glimm's method has several advantages. 16 figs., 4 tables.
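
    The standard benchmark used throughout such surveys is Sod's shock tube. The sketch below solves it with a simple Rusanov-type (local Lax-Friedrichs) flux; it is a minimal first-order scheme for orientation, not a reproduction of any of the surveyed methods:

      import numpy as np

      gamma = 1.4

      def flux(U):
          rho, mom, E = U
          u = mom / rho
          p = (gamma - 1) * (E - 0.5 * rho * u**2)
          return np.array([mom, mom * u + p, (E + p) * u])

      def rusanov_step(U, dx, dt):
          rho, mom, E = U
          u = mom / rho
          p = (gamma - 1) * (E - 0.5 * rho * u**2)
          c = np.sqrt(gamma * p / rho)
          a = np.maximum((np.abs(u) + c)[:-1], (np.abs(u) + c)[1:])  # local wave speed
          F = flux(U)
          Fh = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * a * (U[:, 1:] - U[:, :-1])
          U = U.copy()
          U[:, 1:-1] -= dt / dx * (Fh[:, 1:] - Fh[:, :-1])  # update interior cells
          return U

      # Sod's initial data: (rho, u, p) = (1, 0, 1) left, (0.125, 0, 0.1) right.
      N = 400
      x = np.linspace(0.0, 1.0, N)
      rho = np.where(x < 0.5, 1.0, 0.125)
      p = np.where(x < 0.5, 1.0, 0.1)
      U = np.array([rho, np.zeros(N), p / (gamma - 1)])  # u = 0, so E = p/(gamma-1)

      t, dx = 0.0, x[1] - x[0]
      while t < 0.2 - 1e-12:  # evolve to the customary output time t = 0.2
          c = np.sqrt(gamma * (gamma - 1) * (U[2] - 0.5 * U[1]**2 / U[0]) / U[0])
          dt = min(0.4 * dx / np.max(np.abs(U[1] / U[0]) + c), 0.2 - t)  # CFL step
          U = rusanov_step(U, dx, dt)
          t += dt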

  12. Artificial neural network does better spatiotemporal compressive sampling

    Science.gov (United States)

    Lee, Soo-Young; Hsu, Charles; Szu, Harold

    2012-06-01

    Spatiotemporal sparseness is generated naturally by the human visual system, modeled here by an artificial neural network with associative memory. Sparseness means nothing more and nothing less than that compressive sensing achieves information concentration. To concentrate the information, one can use spatial correlation, a spatial FFT or DWT, or, best of all, an adaptive wavelet transform (cf. NUS, Shen Shawei). For higher-dimensional spatiotemporal information concentration, however, mathematics cannot be as flexible as a living human sensory system, evidently for survival reasons. The rest of the story is given in the paper.

  13. Double-compression method for biomedical images

    Science.gov (United States)

    Antonenko, Yevhenii A.; Mustetsov, Timofey N.; Hamdi, Rami R.; Małecka-Massalska, Teresa; Orshubekov, Nurbek; DzierŻak, RóŻa; Uvaysova, Svetlana

    2017-08-01

    This paper describes a double compression method (DCM) for biomedical images. A comparison of image compression factors for JPEG, PNG, and the developed DCM was carried out. The main purpose of the DCM is the compression of medical images while maintaining the key points that carry diagnostic information. To estimate the minimum compression factor, an analysis of the coding of a random-noise image is presented.

  14. Subband Coding Methods for Seismic Data Compression

    Science.gov (United States)

    Kiely, A.; Pollara, F.

    1995-01-01

    This paper presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The compression technique described could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
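
    The basic building block of such subband coders is a two-band split; a one-level Haar version is sketched below (the paper's filters and bit allocation are of course more elaborate). Progressive transmission then amounts to sending the coarse band first and the detail band as a refinement.

      import numpy as np

      def haar_split(x):
          # One-level Haar analysis: coarse (low) and detail (high) subbands.
          x = np.asarray(x, dtype=float)
          return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

      def haar_merge(low, high):
          # Exact inverse of haar_split.
          x = np.empty(2 * low.size)
          x[0::2] = (low + high) / np.sqrt(2)
          x[1::2] = (low - high) / np.sqrt(2)
          return x

      sig = np.sin(np.linspace(0, 6 * np.pi, 1024))
      low, high = haar_split(sig)
      coarse = haar_merge(low, np.zeros_like(high))  # first, a coarse waveform view
      exact = haar_merge(low, high)                  # then, the full refinement
      assert np.allclose(exact, sig)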

  15. [Artificial muscle and its prospect in application for direct cardiac compression assist].

    Science.gov (United States)

    Dong, Jing; Yang, Ming; Zheng, Zhejun; Yan, Guozheng

    2008-12-01

    The artificial heart is an effective device for overcoming the insufficient supply of native hearts for transplantation, and the research and application of novel actuators play an important role in the development of artificial hearts. In this paper, artificial muscle is introduced as the actuator for direct cardiac compression assist, and some of its parameters are compared with those of native heart muscle. Open problems are also discussed.

  16. Prediction of compressibility parameters of the soils using artificial neural network.

    Science.gov (United States)

    Kurnaz, T Fikret; Dagdeviren, Ugur; Yildiz, Murat; Ozkan, Ozhan

    2016-01-01

    The compression index and recompression index are among the important compressibility parameters for settlement calculations of fine-grained soil layers. These parameters can be determined by carrying out laboratory oedometer tests on undisturbed samples; however, the test is quite time-consuming and expensive. Therefore, many empirical formulas based on regression analysis have been presented to estimate the compressibility parameters from soil index properties. In this paper, an artificial neural network (ANN) model is suggested for prediction of compressibility parameters from basic soil properties. For this purpose, the input parameters are selected as the natural water content, initial void ratio, liquid limit, and plasticity index. In this model, two output parameters, the compression index and the recompression index, are predicted in a combined network structure. As a result of the study, the proposed ANN model is successful for prediction of the compression index; however, the predicted recompression index values are not satisfactory compared to the compression index.

  17. Estimation of mechanical properties of nanomaterials using artificial intelligence methods

    Science.gov (United States)

    Vijayaraghavan, V.; Garg, A.; Wong, C. H.; Tai, K.

    2014-09-01

    Computational modeling tools such as molecular dynamics (MD), ab initio, finite element modeling or continuum mechanics models have been extensively applied to study the properties of carbon nanotubes (CNTs) based on given input variables such as temperature, geometry and defects. Artificial intelligence techniques can be used to further complement the application of numerical methods in characterizing the properties of CNTs. In this paper, we introduce the application of multi-gene genetic programming (MGGP) and support vector regression to formulate the mathematical relationship between the compressive strength of CNTs and input variables such as temperature and diameter. The predictions of compressive strength of CNTs made by these models are compared to those generated using MD simulations. The results indicate that the MGGP method can be deployed as a powerful tool for predicting the compressive strength of carbon nanotubes.

  18. Comparative Study on Theoretical and Machine Learning Methods for Acquiring Compressed Liquid Densities of 1,1,1,2,3,3,3-Heptafluoropropane (R227ea) via Song and Mason Equation, Support Vector Machine, and Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Hao Li

    2016-01-01

    Full Text Available 1,1,1,2,3,3,3-Heptafluoropropane (R227ea) is a good refrigerant that reduces greenhouse effects and ozone depletion. In practical applications, we usually have to know the compressed liquid densities at different temperatures and pressures. However, the measurement requires a series of complex apparatus and operations, wasting too much manpower and resources. To solve these problems, here, the Song and Mason equation, support vector machine (SVM), and artificial neural networks (ANNs) were used to develop theoretical and machine learning models, respectively, in order to predict the compressed liquid densities of R227ea with only the inputs of temperature and pressure. Results show that, compared with the Song and Mason equation, appropriate machine learning models trained with precise experimental samples have better prediction results, with lower root mean square errors (RMSEs) (e.g., the RMSE of the SVM trained with data provided by Fedele et al. [1] is 0.11, while the RMSE of the Song and Mason equation is 196.26). Compared to advanced conventional measurements, knowledge-based machine learning models proved to be more time-saving and user-friendly.

  19. Logarithmic compression methods for spectral data

    Science.gov (United States)

    Dunham, Mark E.

    2003-01-01

    A method is provided for logarithmic compression, transmission, and expansion of spectral data. A log Gabor transformation is made of incoming time series data to output spectral phase and logarithmic magnitude values. The output phase and logarithmic magnitude values are compressed by selecting only magnitude values above a selected threshold and corresponding phase values to transmit compressed phase and logarithmic magnitude values. A reverse log Gabor transformation is then performed on the transmitted phase and logarithmic magnitude values to output transmitted time series data to a user.
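
    A simplified stand-in for the scheme, using a plain FFT in place of the log Gabor transform (the transform choice, threshold, and test signal here are illustrative assumptions):

      import numpy as np

      def compress_spectrum(x, thresh_db=-40.0):
          X = np.fft.rfft(x)
          logmag = 20 * np.log10(np.abs(X) + 1e-12)   # logarithmic magnitude (dB)
          phase = np.angle(X)
          keep = logmag > logmag.max() + thresh_db    # keep only strong components
          idx = np.flatnonzero(keep)
          return idx, logmag[keep], phase[keep], len(x)

      def expand_spectrum(idx, logmag, phase, n):
          X = np.zeros(n // 2 + 1, dtype=complex)
          X[idx] = 10 ** (logmag / 20) * np.exp(1j * phase)  # undo the log compression
          return np.fft.irfft(X, n)

      t = np.linspace(0, 1, 2048, endpoint=False)
      x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
      rec = expand_spectrum(*compress_spectrum(x))
      print("max reconstruction error:", np.max(np.abs(rec - x)))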

  20. Using an artificial neural network to predict carbon dioxide compressibility factor at high pressure and temperature

    Energy Technology Data Exchange (ETDEWEB)

    Mohagheghian, Erfan [Memorial University of Newfoundland, St. John's (Canada); Zafarian-Rigaki, Habiballah; Motamedi-Ghahfarrokhi, Yaser; Hemmati-Sarapardeh, Abdolhossein [Amirkabir University of Technology, Tehran (Iran, Islamic Republic of)

    2015-10-15

    Carbon dioxide injection, which is widely used as an enhanced oil recovery (EOR) method, has the potential of being coupled with CO2 sequestration and reducing the emission of greenhouse gas. Hence, knowing the compressibility factor of carbon dioxide is of vital significance. The compressibility factor (Z-factor) is traditionally measured through time-consuming, expensive and cumbersome experiments. Hence, developing a fast, robust and accurate model for its estimation is necessary. In this study, a new reliable model based on feed-forward artificial neural networks is presented to predict the CO2 compressibility factor. Reduced temperature and pressure were selected as the input parameters of the proposed model. To evaluate and compare the results of the developed model with pre-existing models, both statistical and graphical error analyses were employed. The results indicated that the proposed model is more reliable and accurate compared to pre-existing models over a wide range of temperatures (up to 1,273.15 K) and pressures (up to 140 MPa). Furthermore, by employing the relevancy factor, the effect of pressure and temperature on the Z-factor of CO2 was compared below and above the critical pressure of CO2, and the physically expected trends were observed. Finally, to identify the probable outliers and the applicability domain of the proposed ANN model, both numerical and graphical techniques based on the Leverage approach were performed. The results illustrated that only 1.75% of the experimental data points were located outside the applicability domain of the proposed model. As a result, the developed model is reliable for the prediction of the CO2 compressibility factor.

  1. Compressive sampling by artificial neural networks for video

    Science.gov (United States)

    Szu, Harold; Hsu, Charles; Jenkins, Jeffrey; Reinhardt, Kitt

    2011-06-01

    We describe a smart surveillance strategy for handling novelty changes. Current sensors seem to keep everything, redundant or not. The Human Visual System's Hubel-Wiesel (wavelet) edge detection mechanism pays attention to changes in movement, which naturally produce organized sparseness, because a stagnant edge is not reported to the brain's visual cortex by retinal neurons. Sparseness is defined as an ordered set of ones (movement or not) relative to zeros that can be pseudo-orthogonal among themselves, and thus suited for fault-tolerant storage and retrieval by means of Associative Memory (AM). The firing is sparse at the change locations. Unlike the purely random sparse masks adopted in medical Compressive Sensing, these organized ones have the additional benefit of using the image changes to make retrievable graphical indexes. We coined this organized sparseness Compressive Sampling: sensing but skipping over redundancy without altering the original image. We illustrate with video the survival tactics that animals roaming the Earth use daily: they acquire nothing but the space-time changes that are important to satisfy specific prey-predator relationships. We have noticed a similarity between mathematical Compressive Sensing and this biological mechanism used for survival, and we have designed a hardware implementation of the Human Visual System's Compressive Sampling scheme. To speed up further, our mixed-signal circuit design of frame differencing is built into on-chip processing hardware. A CMOS trans-conductance amplifier is designed here to generate a linear current output using a pair of differential input voltages from two photon detectors for change detection, one for the previous value and the other for the subsequent value ("write" synaptic weights by Hebbian outer products; "read" by inner product and pointwise nonlinear threshold), to localize and track threat targets.
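
    The frame-differencing core of the scheme is easy to state in software (a minimal sketch of the sampling logic only; the paper's contribution is the mixed-signal CMOS implementation, which this does not model):

      import numpy as np

      def compressive_sample(prev, curr, thresh=12):
          # Report only pixels whose change exceeds a threshold, mimicking the
          # retina's suppression of stagnant edges; the sparse index set doubles
          # as a retrievable graphical index of where change occurred.
          diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
          idx = np.flatnonzero(diff > thresh)
          return idx, curr.reshape(-1)[idx]

      rng = np.random.default_rng(3)
      prev = rng.integers(0, 200, (120, 160), dtype=np.uint8)
      curr = prev.copy()
      curr[40:60, 70:90] += 50       # a moving "target" region brightens
      idx, vals = compressive_sample(prev, curr)
      print(f"kept {idx.size} of {curr.size} pixels")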

  2. Artificial intelligence methods for diagnostic

    International Nuclear Information System (INIS)

    Dourgnon-Hanoune, A.; Porcheron, M.; Ricard, B.

    1996-01-01

    To assist in the diagnosis of its nuclear power plants, the Research and Development Division of Electricite de France has been developing skills in Artificial Intelligence for about a decade. Different diagnostic expert systems have been designed, among them SILEX for control rod cabinet troubleshooting, DIVA for turbine generator diagnosis, and DIAPO for reactor coolant pump diagnosis. This know-how in expert knowledge modeling and acquisition is a direct result of the experience gained during these developments and of a more general reflection on knowledge-based system development. We have been able to reuse these results for other developments, such as a guide for the diagnosis of auxiliary rotating machines. (authors)

  3. Quality by design approach: application of artificial intelligence techniques to tablets manufactured by direct compression.

    Science.gov (United States)

    Aksu, Buket; Paradkar, Anant; de Matas, Marcel; Ozer, Ozgen; Güneri, Tamer; York, Peter

    2012-12-01

    The publication of the International Conference of Harmonization (ICH) Q8, Q9, and Q10 guidelines paved the way for the standardization of quality after the Food and Drug Administration issued current Good Manufacturing Practices guidelines in 2003. "Quality by Design", mentioned in the ICH Q8 guideline, offers a better scientific understanding of critical process and product qualities using knowledge obtained during the life cycle of a product. In this scope, the "knowledge space" is a summary of all process knowledge obtained during product development, and the "design space" is the area in which a product can be manufactured within acceptable limits. To create the spaces, artificial neural networks (ANNs) can be used to emphasize the multidimensional interactions of input variables and to closely bind these variables to a design space. This helps guide the experimental design process to include interactions among the input variables, along with modeling and optimization of pharmaceutical formulations. The objective of this study was to develop an integrated multivariate approach to obtain a quality product based on an understanding of the cause-effect relationships between formulation ingredients and product properties, using ANNs and genetic programming, for ramipril tablets prepared by the direct compression method. In this study, the data were generated through systematic application of design of experiments (DoE) principles, and optimization studies were performed using artificial neural networks and neurofuzzy logic programs.

  4. Methods for Sampling and Measurement of Compressed Air Contaminants

    International Nuclear Information System (INIS)

    Stroem, L.

    1976-10-01

    In order to improve the technique for measuring oil and water entrained in a compressed air stream, a laboratory study has been made of some methods for sampling and measurement. For this purpose, water or oil as artificial contaminants were injected in thin streams into a test loop carrying dry compressed air. Sampling was performed in a vertical run, downstream of the injection point. Wall-attached liquid, coarse droplet flow, and fine droplet flow were sampled separately. The results were compared with two-phase flow theory and direct observation of liquid behaviour. In a study of sample transport through narrow tubes, it was observed that, below a certain liquid loading, the sample did not move, the liquid remaining stationary on the tubing wall. The basic analysis of the collected samples was made by gravimetric methods. Adsorption tubes were used with success to measure water vapour. A humidity meter with a sensor of the aluminium oxide type was found to be unreliable. Oil could be measured selectively by a flame ionization detector, the sample being pretreated in an evaporation-condensation unit.

  5. Methods for Sampling and Measurement of Compressed Air Contaminants

    Energy Technology Data Exchange (ETDEWEB)

    Stroem, L

    1976-10-15

    In order to improve the technique for measuring oil and water entrained in a compressed air stream, a laboratory study has been made of some methods for sampling and measurement. For this purpose, water or oil as artificial contaminants were injected in thin streams into a test loop carrying dry compressed air. Sampling was performed in a vertical run, downstream of the injection point. Wall-attached liquid, coarse droplet flow, and fine droplet flow were sampled separately. The results were compared with two-phase flow theory and direct observation of liquid behaviour. In a study of sample transport through narrow tubes, it was observed that, below a certain liquid loading, the sample did not move, the liquid remaining stationary on the tubing wall. The basic analysis of the collected samples was made by gravimetric methods. Adsorption tubes were used with success to measure water vapour. A humidity meter with a sensor of the aluminium oxide type was found to be unreliable. Oil could be measured selectively by a flame ionization detector, the sample being pretreated in an evaporation-condensation unit.

  6. Lagrangian particle method for compressible fluid dynamics

    Science.gov (United States)

    Samulyak, Roman; Wang, Xingyu; Chen, Hsin-Chiang

    2018-06-01

    A new Lagrangian particle method for solving Euler equations for compressible inviscid fluid or gas flows is proposed. Similar to smoothed particle hydrodynamics (SPH), the method represents fluid cells with Lagrangian particles and is suitable for the simulation of complex free surface/multiphase flows. The main contributions of our method, which is different from SPH in all other aspects, are (a) significant improvement of approximation of differential operators based on a polynomial fit via weighted least squares approximation and the convergence of prescribed order, (b) a second-order particle-based algorithm that reduces to the first-order upwind method at local extremal points, providing accuracy and long term stability, and (c) more accurate resolution of entropy discontinuities and states at free interfaces. While the method is consistent and convergent to a prescribed order, the conservation of momentum and energy is not exact and depends on the convergence order. The method is generalizable to coupled hyperbolic-elliptic systems. Numerical verification tests demonstrating the convergence order are presented as well as examples of complex multiphase flows.
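
    Contribution (a) can be illustrated in a few lines: at a particle, fit a low-order polynomial to neighbor values by weighted least squares and read the derivative off the fitted coefficients. This sketch uses a linear fit in 2-D and an inverse-distance weight, both of which are assumptions for illustration:

      import numpy as np

      def wls_gradient(xc, neighbors, values, value_c):
          # Fit f(x) ~ f(xc) + g.(x - xc) over the neighbors by weighted least
          # squares; the coefficient vector g approximates the gradient at xc.
          dx = neighbors - xc
          w = 1.0 / (np.linalg.norm(dx, axis=1) + 1e-12)  # closer points weigh more
          g, *_ = np.linalg.lstsq(dx * w[:, None], (values - value_c) * w, rcond=None)
          return g

      rng = np.random.default_rng(4)
      xc = np.array([0.5, 0.5])
      pts = xc + 0.05 * rng.standard_normal((12, 2))  # scattered neighbor particles
      f = lambda p: 3 * p[..., 0] - 2 * p[..., 1]     # linear field, gradient (3, -2)
      print(wls_gradient(xc, pts, f(pts), f(xc)))     # ~ [ 3. -2.]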

  7. Meshless Method for Simulation of Compressible Flow

    Science.gov (United States)

    Nabizadeh Shahrebabak, Ebrahim

    In the present age, rapid development in computing technology and high-speed supercomputers have made numerical analysis and computational simulation more practical than ever before for large and complex cases. Numerical simulations have also become an essential means for analyzing engineering problems and cases where experimental analysis is not practical. Many sophisticated and accurate numerical schemes exist to perform these simulations. The finite difference method (FDM) has been used to solve differential equation systems for decades. Additional numerical methods based on finite volume and finite element techniques are widely used in solving problems with complex geometry. All of these methods are mesh-based techniques. Mesh generation is an essential preprocessing step to discretize the computational domain for these conventional methods. However, when dealing with complex geometries, these conventional mesh-based techniques can become troublesome, difficult to implement, and prone to inaccuracies. In this study, a more robust, yet simple numerical approach is used to simulate problems in an easier manner, even for complex cases. The meshless, or meshfree, method is one such development that has become the focus of much research in recent years. The biggest advantage of meshfree methods is that they circumvent mesh generation. Many algorithms have now been developed to help make this method more popular and understandable for everyone. These algorithms have been employed over a wide range of problems in computational analysis with various levels of success. Since there is no connectivity between the nodes in this method, the challenge is considerable. The most fundamental issue is lack of conservation, which can be a source of unpredictable errors in the solution process. This problem is particularly evident in the presence of steep gradient regions and discontinuities, such as shocks that frequently occur in high speed compressible flow

  8. Prediction of compression strength of high performance concrete using artificial neural networks

    International Nuclear Information System (INIS)

    Torre, A; Moromi, I; Garcia, F; Espinoza, P; Acuña, L

    2015-01-01

    High-strength concrete is undoubtedly one of the most innovative materials in construction. Its manufacture is simple and is carried out starting from essential components (water, cement, fine and coarse aggregates) and a number of additives. Their proportions have a strong influence on the final strength of the product. These relations do not seem to follow a simple mathematical formula, and yet their knowledge is crucial for optimizing the quantities of raw materials used in the manufacture of concrete. Of all mechanical properties, concrete compressive strength at 28 days is most often used for quality control. Therefore, it would be important to have a tool to numerically model such relationships, even before processing. In this respect, artificial neural networks have proven to be a powerful modeling tool, especially when the relationships between the variables involved in the process are not precisely known. This research designed an artificial neural network to model the compressive strength of concrete based on its manufacturing parameters, obtaining correlations of the order of 0.94

  9. Assessment of high-resolution methods for numerical simulations of compressible turbulence with shock waves

    International Nuclear Information System (INIS)

    Johnsen, Eric; Larsson, Johan; Bhagatwala, Ankit V.; Cabot, William H.; Moin, Parviz; Olson, Britton J.; Rawat, Pradeep S.; Shankar, Santhosh K.; Sjoegreen, Bjoern; Yee, H.C.; Zhong Xiaolin; Lele, Sanjiva K.

    2010-01-01

    Flows in which shock waves and turbulence are present and interact dynamically occur in a wide range of applications, including inertial confinement fusion, supernovae explosion, and scramjet propulsion. Accurate simulations of such problems are challenging because of the contradictory requirements of numerical methods used to simulate turbulence, which must minimize any numerical dissipation that would otherwise overwhelm the small scales, and shock-capturing schemes, which introduce numerical dissipation to stabilize the solution. The objective of the present work is to evaluate the performance of several numerical methods capable of simultaneously handling turbulence and shock waves. A comprehensive range of high-resolution methods (WENO, hybrid WENO/central difference, artificial diffusivity, adaptive characteristic-based filter, and shock fitting) and suite of test cases (Taylor-Green vortex, Shu-Osher problem, shock-vorticity/entropy wave interaction, Noh problem, compressible isotropic turbulence) relevant to problems with shocks and turbulence are considered. The results indicate that the WENO methods provide sharp shock profiles, but overwhelm the physical dissipation. The hybrid method is minimally dissipative and leads to sharp shocks and well-resolved broadband turbulence, but relies on an appropriate shock sensor. Artificial diffusivity methods in which the artificial bulk viscosity is based on the magnitude of the strain-rate tensor resolve vortical structures well but damp dilatational modes in compressible turbulence; dilatation-based artificial bulk viscosity methods significantly improve this behavior. For well-defined shocks, the shock fitting approach yields good results.
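
    A schematic of the dilatation-based artificial bulk viscosity discussed above (constants, filtering, and the exact sensor vary by author; this is the general shape, not any one paper's formula): an artificial bulk viscosity

      \beta^{*} = C_{\beta}\,\rho\,\Delta^{2}\,|\nabla\cdot u|\,H(-\nabla\cdot u)

    is added to the physical one, where Δ is the local grid spacing and the switch H activates the term only in regions of compression. Basing the magnitude on the dilatation ∇·u rather than the strain-rate tensor is what spares smooth vortical and dilatational turbulence from spurious damping.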

  10. Superplastic boronizing of duplex stainless steel under dual compression method

    International Nuclear Information System (INIS)

    Jauhari, I.; Yusof, H.A.M.; Saidan, R.

    2011-01-01

    Highlights: → Superplastic boronizing. → A dual compression method has been developed. → Hard boride layer. → Bulk deformation significantly thickened the boronized layer. → New data on boronizing could expand the application of DSS in industries. - Abstract: In this work, SPB of duplex stainless steel (DSS) under a compression method is studied, with the objective of producing an ultra-hard and thick boronized layer using a minimal amount of boron powder and in a much shorter boronizing time than the conventional process. SPB is conducted under dual compression methods. In the first method, DSS is boronized using a minimal amount of boron powder under a fixed pre-strained compression condition maintained throughout the process. The compression strain is controlled in such a way that plastic deformation is restricted to the surface asperities of the substrate in contact with the boron powder. In the second method, the boronized specimen taken from the first method is compressed superplastically up to a certain compressive strain under a certain strain-rate condition. The process in the second method is conducted without the presence of boron powder. Compared with the conventional boronizing process, this SPB under dual compression methods produces a much harder and thicker boronized layer using a minimal amount of boron powder.

  11. Superplastic boronizing of duplex stainless steel under dual compression method

    Energy Technology Data Exchange (ETDEWEB)

    Jauhari, I., E-mail: iswadi@um.edu.my [Department of Mechanical Engineering, Faculty of Engineering, University of Malaya, 50603 Kuala Lumpur (Malaysia); Yusof, H.A.M.; Saidan, R. [Department of Mechanical Engineering, Faculty of Engineering, University of Malaya, 50603 Kuala Lumpur (Malaysia)

    2011-10-25

    Highlights: → Superplastic boronizing. → A dual compression method has been developed. → Hard boride layer. → Bulk deformation significantly thickened the boronized layer. → New data on boronizing could expand the application of DSS in industries. - Abstract: In this work, SPB of duplex stainless steel (DSS) under a compression method is studied, with the objective of producing an ultra-hard and thick boronized layer using a minimal amount of boron powder and in a much shorter boronizing time than the conventional process. SPB is conducted under dual compression methods. In the first method, DSS is boronized using a minimal amount of boron powder under a fixed pre-strained compression condition maintained throughout the process. The compression strain is controlled in such a way that plastic deformation is restricted to the surface asperities of the substrate in contact with the boron powder. In the second method, the boronized specimen taken from the first method is compressed superplastically up to a certain compressive strain under a certain strain-rate condition. The process in the second method is conducted without the presence of boron powder. Compared with the conventional boronizing process, this SPB under dual compression methods produces a much harder and thicker boronized layer using a minimal amount of boron powder.

  12. Artificial neural network intelligent method for prediction

    Science.gov (United States)

    Trifonov, Roumen; Yoshinov, Radoslav; Pavlova, Galya; Tsochev, Georgi

    2017-09-01

    Accounting and financial classification and prediction problems are highly challenging, and researchers use different methods to solve them. Methods and instruments for short-term prediction of financial operations using artificial neural networks are considered. The methods used for prediction of financial data, as well as the developed forecasting system with a neural network, are described in the paper. The architecture of a neural network that uses four different technical indicators, based on the raw data, plus the current day of the week, is presented. The network developed is used for forecasting the movement of stock prices one day ahead and consists of an input layer, one hidden layer and an output layer. The training method is the error back-propagation algorithm. The main advantage of the developed system is self-determination of the optimal topology of the neural network, due to which it becomes flexible and more precise. The proposed system with a neural network is universal and can be applied to various financial instruments using only basic technical indicators as input data.

  13. Blind compressed sensing image reconstruction based on alternating direction method

    Science.gov (United States)

    Liu, Qinan; Guo, Shuxu

    2018-04-01

    In order to solve the problem of how to reconstruct the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on existing blind compressed sensing theory, the optimal solution is obtained by the alternating minimization method. The proposed method addresses the difficulty of representing the sparse basis in compressed sensing, suppresses noise, and improves the quality of the reconstructed image. This method ensures that the blind compressed sensing problem has a unique solution and can recover the original image signal from a complex environment with stronger self-adaptability. The experimental results show that the image reconstruction algorithm based on blind compressed sensing proposed in this paper can recover high-quality image signals under under-sampling conditions.
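
    A toy version of the alternating scheme (heavily simplified: an ISTA-style sparse-coding step alternating with a least-squares dictionary update; the problem sizes and update rules are illustrative assumptions, not the paper's algorithm):

      import numpy as np

      rng = np.random.default_rng(5)
      n, m, k, p = 32, 20, 48, 200   # signal dim, measurements, atoms, signals

      D_true = rng.standard_normal((n, k)); D_true /= np.linalg.norm(D_true, axis=0)
      S_true = np.where(rng.random((k, p)) < 0.05, rng.standard_normal((k, p)), 0.0)
      Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # known sensing matrix
      Y = Phi @ D_true @ S_true                        # compressed measurements only

      D = rng.standard_normal((n, k)); D /= np.linalg.norm(D, axis=0)
      S = np.zeros((k, p))
      lam = 0.02
      soft = lambda V, t: np.sign(V) * np.maximum(np.abs(V) - t, 0.0)

      for it in range(300):
          A = Phi @ D
          L = np.linalg.norm(A, 2) ** 2                    # ISTA step size 1/L
          S = soft(S - (A.T @ (A @ S - Y)) / L, lam / L)   # sparse-coding step
          D = np.linalg.pinv(Phi) @ (Y @ np.linalg.pinv(S))  # dictionary step
          D /= np.linalg.norm(D, axis=0) + 1e-12           # keep atoms normalized

      err = np.linalg.norm(Phi @ D @ S - Y) / np.linalg.norm(Y)
      print("relative measurement residual:", round(float(err), 4))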

  14. Structural Dynamic Response Compressing Technique in Bridges using a Cochlea-inspired Artificial Filter Bank (CAFB)

    International Nuclear Information System (INIS)

    Heo, G; Jeon, J; Son, B; Kim, C; Jeon, S; Lee, C

    2016-01-01

    In this study, a cochlea-inspired artificial filter bank (CAFB) was developed to efficiently obtain the dynamic response of a structure, and a dynamic response measurement of a cable-stayed bridge model was carried out to evaluate the performance of the developed CAFB. The CAFB uses a band-pass filter optimizing algorithm (BOA) and a peak-picking algorithm (PPA) to select and compress the dynamic response signal while retaining sufficient modal information. The CAFB was optimized for the El Centro earthquake wave, which is often used in construction research, and the software implementation of the CAFB was embedded in the unified structural management system (USMS). For the evaluation of the developed CAFB, a real-time dynamic response experiment was performed on a cable-stayed bridge model, and the response of the bridge model was measured using both a traditional wired system and the developed CAFB-based USMS. The experimental results showed that the compressed dynamic response acquired by the CAFB-based USMS closely matched that of the traditional wired system while still carrying sufficient modal information about the cable-stayed bridge. (paper)

  15. A Streamlined Artificial Variable Free Version of Simplex Method

    OpenAIRE

    Inayatullah, Syed; Touheed, Nasir; Imtiaz, Muhammad

    2015-01-01

    This paper proposes a streamlined form of the simplex method which provides some great benefits over the traditional simplex method. For instance, it does not need any kind of artificial variables or artificial constraints, and it can start with any feasible or infeasible basis of an LP. This method follows the same pivoting sequence as simplex phase 1, without any explicit description of artificial variables, which also makes it space-efficient. Later in this paper, a dual version of the new ...

  16. Determination of deformation and strength characteristics of artificial geomaterial having step-shaped discontinuities under uniaxial compression

    Science.gov (United States)

    Tsoy, PA

    2018-03-01

    In order to determine the empirical relationship between the linear dimensions of step-shaped macrocracks in geomaterials and their deformation and strength characteristics (ultimate strength, modulus of deformation) under uniaxial compression, artificial flat alabaster specimens with through discontinuities were manufactured and subjected to a series of related physical tests.

  17. Image splitting and remapping method for radiological image compression

    Science.gov (United States)

    Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.

    1990-07-01

    A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.

  18. Comparison of chest compression quality between the modified chest compression method with the use of smartphone application and the standardized traditional chest compression method during CPR.

    Science.gov (United States)

    Park, Sang-Sub

    2014-01-01

    The purpose of this study is to assess the difference in chest compression quality between a modified chest compression method using a smartphone application and the standardized traditional chest compression method. Of the 70 people who agreed to participate after completing the CPR curriculum, 64 took part, 6 being absent. Participants using the modified chest compression method were designated the smartphone group (33 people), and those using the standardized method the traditional group (31 people). Both groups used the same practice and evaluation manikins. The smartphone group used two smartphone products (G, i) running the Android and iOS operating systems (OS). Measurements were conducted from September 25th to 26th, 2012. Data were analyzed with the SPSS WIN 12.0 program. The traditional group achieved a more adequate compression depth (53.77 mm vs. 48.35 mm in the smartphone group; p< 0.01) and a higher proportion of proper chest compressions (73.96% vs. 60.51%; p< 0.05). Awareness of chest compression accuracy was also higher in the traditional group (3.83 points vs. 2.32 points; p< 0.001). In an additional questionnaire of one question administered only to the smartphone group, the main reasons given against the modified method were the occurrence of hand-back pain in the rescuer (48.5%) and unstable posture (21.2%).

  19. Robust steganographic method utilizing properties of MJPEG compression standard

    Directory of Open Access Journals (Sweden)

    Jakub Oravec

    2015-06-01

    Full Text Available This article presents the design of a steganographic method which uses a video container as cover data. The video track was recorded by a webcam and encoded with the MJPEG compression standard. The proposed method also takes into account the effects of lossy compression. The embedding process is realized by switching the places of transform coefficients computed by the Discrete Cosine Transform. The article discusses the possibilities, techniques used, and the advantages and drawbacks of the chosen solution. The results are presented at the end of the article.

  20. Application of PDF methods to compressible turbulent flows

    Science.gov (United States)

    Delarue, B. J.; Pope, S. B.

    1997-09-01

    A particle method applying the probability density function (PDF) approach to turbulent compressible flows is presented. The method is applied to several turbulent flows, including the compressible mixing layer, and good agreement is obtained with experimental data. The PDF equation is solved using a Lagrangian/Monte Carlo method. To accurately account for the effects of compressibility on the flow, the velocity PDF formulation is extended to include thermodynamic variables such as the pressure and the internal energy. The mean pressure, the determination of which has been the object of active research over the last few years, is obtained directly from the particle properties. It is therefore not necessary to link the PDF solver with a finite-volume type solver. The stochastic differential equations (SDE) which model the evolution of particle properties are based on existing second-order closures for compressible turbulence, limited in application to low turbulent Mach number flows. Tests are conducted in decaying isotropic turbulence to compare the performances of the PDF method with the Reynolds-stress closures from which it is derived, and in homogeneous shear flows, at which stage comparison with direct numerical simulation (DNS) data is conducted. The model is then applied to the plane compressible mixing layer, reproducing the well-known decrease in the spreading rate with increasing compressibility. It must be emphasized that the goal of this paper is not as much to assess the performance of models of compressibility effects, as it is to present an innovative and consistent PDF formulation designed for turbulent inhomogeneous compressible flows, with the aim of extending it further to deal with supersonic reacting flows.
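
    The particle SDEs referred to above are typified by the simplified Langevin model for the fluctuating velocity (shown schematically; the paper's closures extend this low-turbulent-Mach-number form with thermodynamic variables):

      du_i^* = -\frac{1}{\rho}\,\frac{\partial \langle p \rangle}{\partial x_i}\,dt
               - \Big(\frac{1}{2} + \frac{3}{4}\,C_0\Big)\,\frac{\varepsilon}{k}\,
                 \big(u_i^* - \langle u_i \rangle\big)\,dt
               + \sqrt{C_0\,\varepsilon}\;dW_i,

    where k and ε are the turbulent kinetic energy and its dissipation rate, C_0 is a model constant, and W is an isotropic Wiener process; the drift relaxes particle velocities toward the local mean while the noise reinstates the modeled turbulent energy.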

  1. Investigating low-frequency compression using the Grid method

    DEFF Research Database (Denmark)

    Fereczkowski, Michal; Dau, Torsten; MacDonald, Ewen

    2016-01-01

    There is an ongoing discussion about whether the amount of cochlear compression in humans at low frequencies (below 1 kHz) is as high as that at higher frequencies. It is controversial whether the compression affects the slope of the off-frequency forward masking curves at those frequencies. Here, the Grid method with a 2-interval 1-up 3-down tracking rule was applied to estimate forward masking curves at two characteristic frequencies: 500 Hz and 4000 Hz. The resulting curves and the corresponding basilar membrane input-output (BM I/O) functions were found to be comparable to those reported in the literature. Moreover, slopes of the low-level portions of the BM I/O functions estimated at 500 Hz were examined, to determine whether the 500-Hz off-frequency forward masking curves were affected by compression. Overall, the collected data showed a trend confirming the compressive behaviour. However...

  2. A measurement method for piezoelectric material properties under longitudinal compressive stress - a compression test method for thin piezoelectric materials

    International Nuclear Information System (INIS)

    Kang, Lae-Hyong; Lee, Dae-Oen; Han, Jae-Hung

    2011-01-01

    We introduce a new compression test method for piezoelectric materials to investigate changes in piezoelectric properties under compressive stress. Until now, compression tests of piezoelectric materials have generally been conducted using bulky piezoelectric ceramics and a pressure block. The conventional method using the pressure block for thin piezoelectric patches, which are used in unimorph or bimorph actuators, is prone to unwanted bending and buckling. In addition, due to the constrained boundaries at both ends, the observed piezoelectric behavior contains boundary effects. In order to avoid these problems, the proposed method employs two guide plates with initial longitudinal tensile stress. By removing the tensile stress after bonding a piezoelectric material between the guide layers, longitudinal compressive stress is induced in the piezoelectric layer. Using the compression test specimens, two important properties that govern the actuation performance of the piezoelectric material, the piezoelectric strain coefficient and the elastic modulus, are measured to evaluate the effects of applied electric fields and re-poling. The results show that the piezoelectric strain coefficient d31 increases and the elastic modulus decreases when high voltage is applied to PZT5A, and that compression in the longitudinal direction decreases the piezoelectric strain coefficient d31 but does not affect the elastic modulus. We also found that re-poling of the piezoelectric material increases the elastic modulus, but the piezoelectric strain coefficient d31 is not changed much (slightly increased) by re-poling.

  3. Word aligned bitmap compression method, data structure, and apparatus

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kesheng; Shoshani, Arie; Otoo, Ekow

    2004-12-14

    The Word-Aligned Hybrid (WAH) bitmap compression method and data structure is a relatively efficient method for searching and performing logical, counting, and pattern location operations upon large datasets. The technique is comprised of a data structure and methods that are optimized for computational efficiency by using the WAH compression method, which typically takes advantage of the target computing system's native word length. WAH is particularly apropos to infrequently varying databases, including those found in the on-line analytical processing (OLAP) industry, due to the increased computational efficiency of the WAH compressed bitmap index. Some commercial database products already include some version of a bitmap index, which could possibly be replaced by the WAH bitmap compression techniques for potentially increased operation speed, as well as increased efficiency in constructing compressed bitmaps. Taken together, these properties may make the technique particularly useful for real-time business intelligence. Additional WAH applications may include scientific modeling, such as climate and combustion simulations, to minimize search time for analysis and subsequent data visualization.
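
    The sketch below is a simplified word-aligned run-length encoder in the spirit of WAH, assuming a 32-bit word: bits are packed into 31-bit groups; a group that is all 0s or all 1s becomes (or extends) a "fill" word, any other group is stored verbatim as a "literal" word. Counter width, decoding, and other details of the real WAH format are simplified here.

        def wah_encode(bits):
            """bits: list of 0/1 values. Returns a list of 32-bit words."""
            words = []
            for i in range(0, len(bits), 31):
                group = bits[i:i + 31]
                value = 0
                for b in group:
                    value = (value << 1) | b
                if value == 0 or value == (1 << len(group)) - 1:
                    fill_bit = 1 if value else 0
                    if (words and words[-1] >> 31 == 1
                            and (words[-1] >> 30) & 1 == fill_bit):
                        words[-1] += 1                       # extend previous fill
                    else:                                    # fill word: 1|fill|count
                        words.append((1 << 31) | (fill_bit << 30) | 1)
                else:
                    words.append(value)                      # literal word (MSB 0)
            return words

    The computational advantage mentioned in the record comes from performing logical AND/OR word-by-word directly on this compressed form, without decompressing.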

  4. Word aligned bitmap compression method, data structure, and apparatus

    Science.gov (United States)

    Wu, Kesheng; Shoshani, Arie; Otoo, Ekow

    2004-12-14

    The Word-Aligned Hybrid (WAH) bitmap compression method and data structure is a relatively efficient method for searching and performing logical, counting, and pattern location operations upon large datasets. The technique is comprised of a data structure and methods that are optimized for computational efficiency by using the WAH compression method, which typically takes advantage of the target computing system's native word length. WAH is particularly apropos to infrequently varying databases, including those found in the on-line analytical processing (OLAP) industry, due to the increased computational efficiency of the WAH compressed bitmap index. Some commercial database products already include some version of a bitmap index, which could possibly be replaced by the WAH bitmap compression techniques for potentially increased operation speed, as well as increased efficiency in constructing compressed bitmaps. Taken together, these properties may make the technique particularly useful for real-time business intelligence. Additional WAH applications may include scientific modeling, such as climate and combustion simulations, to minimize search time for analysis and subsequent data visualization.

  5. Convergence of a residual based artificial viscosity finite element method

    KAUST Repository

    Nazarov, Murtazo

    2013-02-01

    We present a residual-based artificial viscosity finite element method to solve conservation laws. The Galerkin approximation is stabilized only by residual-based artificial viscosity, without any least-squares, SUPG, or streamline-diffusion terms. We prove convergence of the method, applied to a scalar conservation law in two space dimensions, toward a unique entropy solution for implicit time-stepping schemes. © 2012 Elsevier B.V. All rights reserved.
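
    As an illustration of the stabilization idea, the sketch below computes a residual-based viscosity coefficient for 1D Burgers' equation: the viscosity is proportional to the local PDE residual (large near shocks, small where the solution is smooth) and capped by a first-order upwind value. The constants and the normalization are illustrative assumptions, not the paper's.

        import numpy as np

        def artificial_viscosity(u, u_old, dt, h, c_e=1.0, c_max=0.5):
            """Residual-based viscosity for u_t + (u^2/2)_x = 0 on a uniform mesh."""
            flux = 0.5 * u**2
            dfdx = np.gradient(flux, h)
            residual = (u - u_old) / dt + dfdx           # discrete PDE residual
            norm = np.max(np.abs(u - np.mean(u))) + 1e-14
            nu_res = c_e * h**2 * np.abs(residual) / norm  # residual-based part
            nu_max = c_max * h * np.abs(u)                 # first-order upwind cap
            return np.minimum(nu_res, nu_max)

    The returned coefficient would multiply a diffusion term added to the Galerkin formulation; no SUPG or streamline-diffusion terms are involved, as the abstract emphasizes.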

  6. Technical note: New table look-up lossless compression method ...

    African Journals Online (AJOL)

    Technical note: New table look-up lossless compression method based on binary index archiving. ... International Journal of Engineering, Science and Technology ... This paper intends to present a common use archiver, made up following the dictionary technique and using the index archiving method as a simple and ...

  7. Novel 3D Compression Methods for Geometry, Connectivity and Texture

    Science.gov (United States)

    Siddeq, M. M.; Rodrigues, M. A.

    2016-06-01

    A large number of applications in medical visualization, games, engineering design, entertainment, heritage, e-commerce and so on require the transmission of 3D models over the Internet or over local networks. 3D data compression is an important requirement for fast data storage, access and transmission within bandwidth limitations. The Wavefront OBJ (object) file format is commonly used to share models due to its clear, simple design. Normally each OBJ file contains a large amount of data (e.g. vertices and triangulated faces, normals, texture coordinates and other parameters) describing the mesh surface. In this paper we introduce a new method to compress geometry, connectivity and texture coordinates by a novel Geometry Minimization Algorithm (GM-Algorithm) in connection with arithmetic coding. First, each vertex's (x, y, z) coordinates are encoded to a single value by the GM-Algorithm. Second, triangle faces are encoded by computing the differences between two adjacent vertex locations, which are compressed by arithmetic coding together with the texture coordinates. We demonstrate the method on large data sets, achieving compression ratios between 87% and 99% without reduction in the number of reconstructed vertices and triangle faces. The decompression step is based on a Parallel Fast Matching Search Algorithm (Parallel-FMS) to recover the structure of the 3D mesh. A comparative analysis of compression ratios is provided with a number of commonly used 3D file formats such as VRML, OpenCTM and STL, highlighting the performance and effectiveness of the proposed method.
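
    The connectivity step (differences between adjacent vertex indices, then entropy coding) can be sketched as follows. This is a hedged illustration: zlib stands in for the arithmetic coder, and the helper names are mine, not the paper's.

        import zlib
        import numpy as np

        def encode_faces(faces):
            """faces: (n, 3) integer array of triangle vertex indices."""
            flat = np.asarray(faces, dtype=np.int64).ravel()
            deltas = np.diff(flat, prepend=0)            # differences cluster near 0
            return zlib.compress(deltas.astype(np.int32).tobytes())

        def decode_faces(blob, n_faces):
            deltas = np.frombuffer(zlib.decompress(blob), dtype=np.int32)
            flat = np.cumsum(deltas.astype(np.int64))    # invert the differencing
            return flat.reshape(n_faces, 3)

    Because adjacent faces in a well-ordered mesh tend to reference nearby vertices, the difference stream has low entropy, which is what the entropy coder exploits.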

  8. Compressible cavitation with stochastic field method

    Science.gov (United States)

    Class, Andreas; Dumond, Julien

    2012-11-01

    Non-linear phenomena can often be well described using probability density functions (pdf) and pdf transport models. Traditionally, the simulation of pdf transport requires Monte-Carlo codes based on Lagrange particles or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic field method has been proposed, which solves pdf transport on the basis of Euler fields and eliminates the necessity of mixing Euler and Lagrange techniques or prescribing pdf assumptions. In the present work, which is part of a PhD project on the design and analysis of a Passive Outflow Reducer relying on cavitation, a first application of the stochastic field method to multi-phase flow, and in particular to cavitating flow, is presented. The application considered is a nozzle subjected to high-velocity flow, so that sheet cavitation is observed near the nozzle surface in the divergent section. It is demonstrated that the stochastic field formulation captures the wide range of pdf shapes present at different locations. The method is compatible with finite-volume codes, where all existing physical models available for Lagrange techniques, presumed pdf or binning methods can easily be extended to the stochastic field formulation.

  9. Hyperspectral image compressing using wavelet-based method

    Science.gov (United States)

    Yu, Hui; Zhang, Zhi-jie; Lei, Bo; Wang, Chen-sheng

    2017-10-01

    Hyperspectral imaging sensors can acquire images in hundreds of continuous narrow spectral bands, so each object present in the image can be identified from its spectral response. However, such imaging produces a huge amount of data, which demands transmission, processing, and storage resources for both airborne and spaceborne imaging. Due to the high volume of hyperspectral image data, the exploration of compression strategies has received a lot of attention in recent years. Compression of hyperspectral data cubes is an effective solution to these problems. Lossless compression of hyperspectral data usually results in a low compression ratio, which may not meet the available resources; on the other hand, lossy compression may give the desired ratio, but with a significant degradation effect on the object identification performance of the hyperspectral data. Moreover, most hyperspectral data compression techniques exploit the similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we explored the spectral cross-correlation between different bands and propose an adaptive band selection method to obtain the spectral bands which contain most of the information of the acquired hyperspectral data cube. The proposed method consists of three main steps. First, the algorithm decomposes the original hyperspectral imagery into a series of subspaces based on the correlation matrix of the hyperspectral images between different bands. Then a wavelet-based algorithm is applied to each subspace. Finally, the PCA method is applied to the wavelet coefficients to produce the chosen number of components. The performance of the proposed method was tested using the ISODATA classification method.
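
    The decomposition and PCA steps lend themselves to a compact sketch. The grouping rule (a threshold on adjacent-band correlation), the threshold value, and the function names below are assumptions made for illustration; the wavelet coding of each subspace is omitted.

        import numpy as np

        def group_bands(cube, threshold=0.95):
            """cube: (rows, cols, bands) hyperspectral image.
            Split the band axis wherever adjacent-band correlation drops."""
            pixels = cube.reshape(-1, cube.shape[2])       # pixels x bands
            corr = np.corrcoef(pixels, rowvar=False)       # band correlation matrix
            groups, start = [], 0
            for b in range(1, cube.shape[2]):
                if corr[b - 1, b] < threshold:             # correlation break
                    groups.append(range(start, b))
                    start = b
            groups.append(range(start, cube.shape[2]))
            return groups

        def pca_reduce(group_pixels, k):
            """group_pixels: pixels x n matrix of one subspace; keep k components."""
            centered = group_pixels - group_pixels.mean(axis=0)
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            return centered @ vt[:k].T                     # principal-component images

    Each group of highly correlated bands is thus represented by a few component images, which a 2D wavelet coder can then compress independently.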

  10. The Diagonal Compression Field Method using Circular Fans

    DEFF Research Database (Denmark)

    Hansen, Thomas

    2006-01-01

    The method is a modification of the traditional diagonal compression field method, the modification consisting of the introduction of circular fan stress fields. To ensure proper behaviour for the service load, the cot θ value (where θ is the angle of the uniaxial concrete compression relative to the beam axis) chosen should not be too large...

  11. METHOD AND APPARATUS FOR INSPECTION OF COMPRESSED DATA PACKAGES

    DEFF Research Database (Denmark)

    2008-01-01

    A method is disclosed for inspection of compressed data packages to be transferred over the data network. The method comprises the steps of: a) extracting payload data from the payload part of the package, b) appending the extracted payload data to a stream of data, c) probing the data package header so as to determine the compression scheme that is applied to the payload data...

  12. A streamlined artificial variable free version of simplex method.

    Directory of Open Access Journals (Sweden)

    Syed Inayatullah

    This paper proposes a streamlined form of the simplex method which provides several benefits over the traditional simplex method. For instance, it does not need any artificial variables or artificial constraints, and it can start with any feasible or infeasible basis of an LP. The method follows the same pivoting sequence as simplex phase 1, without any explicit description of artificial variables, which also makes it space efficient. Later in the paper, a dual version of the new method is presented, which provides a way to easily implement phase 1 of the traditional dual simplex method. For a problem whose initial basis is both primal and dual infeasible, our methods give the user full freedom to start with either the primal or the dual artificial-free version, without any reformulation of the LP structure. Last but not least, the method provides a teaching aid for teachers who want to teach feasibility achievement as a separate topic before teaching optimality achievement.

  13. A streamlined artificial variable free version of simplex method.

    Science.gov (United States)

    Inayatullah, Syed; Touheed, Nasir; Imtiaz, Muhammad

    2015-01-01

    This paper proposes a streamlined form of the simplex method which provides several benefits over the traditional simplex method. For instance, it does not need any artificial variables or artificial constraints, and it can start with any feasible or infeasible basis of an LP. The method follows the same pivoting sequence as simplex phase 1, without any explicit description of artificial variables, which also makes it space efficient. Later in the paper, a dual version of the new method is presented, which provides a way to easily implement phase 1 of the traditional dual simplex method. For a problem whose initial basis is both primal and dual infeasible, our methods give the user full freedom to start with either the primal or the dual artificial-free version, without any reformulation of the LP structure. Last but not least, the method provides a teaching aid for teachers who want to teach feasibility achievement as a separate topic before teaching optimality achievement.

  14. Review of Artificial Abrasion Test Methods for PV Module Technology

    Energy Technology Data Exchange (ETDEWEB)

    Miller, David C. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Muller, Matt T. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Simpson, Lin J. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-08-01

    This review is intended to identify the method or methods--and the basic details of those methods--that might be used to develop an artificial abrasion test. Methods used in the PV literature were compared with their closest implementation in existing standards. Also, meetings of the International PV Quality Assurance Task Force Task Group 12-3 (TG12-3, which is concerned with coated glass) were used to identify established test methods. Feedback from the group, which included many of the authors from the PV literature, included insights not explored within the literature itself. The combined experience and examples from the literature are intended to provide an assessment of present industry practices and an informed path forward. Recommendations toward artificial abrasion test methods are then identified based on the experiences in the literature and feedback from the PV community. The review here is strictly focused on abrasion. Assessment methods, including optical performance (e.g., transmittance or reflectance), surface energy, and verification of chemical composition, were not examined. Methods of artificially soiling PV modules or other specimens were not examined. The weathering of artificially or naturally soiled specimens (which may ultimately include combined temperature and humidity, thermal cycling and ultraviolet light) was also not examined. A sense of the purpose or application of an abrasion test method within the PV industry should, however, be evident from the literature.

  15. Combustion engine variable compression ratio apparatus and method

    Science.gov (United States)

    Lawrence, Keith E. [Peoria, IL]; Strawbridge, Bryan E. [Dunlap, IL]; Dutart, Charles H. [Washington, IL]

    2006-06-06

    An apparatus and method for varying a compression ratio of an engine having a block and a head mounted thereto. The apparatus and method includes a cylinder having a block portion and a head portion, a piston linearly movable in the block portion of the cylinder, a cylinder plug linearly movable in the head portion of the cylinder, and a valve located in the cylinder plug and operable to provide controlled fluid communication with the block portion of the cylinder.

  16. The Diagonal Compression Field Method using Circular Fans

    DEFF Research Database (Denmark)

    Hansen, Thomas

    2005-01-01

    This paper presents a new design method, which is a modification of the diagonal compression field method, the modification consisting of the introduction of circular fan stress fields. The traditional method does not allow changes of the concrete compression direction throughout a given beam if equilibrium is strictly required. This is conservative, since it is not possible to fully utilize the concrete strength in regions with low shear stresses. The larger the inclination (the smaller the cot θ value) of the uniaxial concrete stress, the more transverse shear reinforcement is needed; hence it would be optimal if the cot θ value for a given beam could be set to a low value in regions with high shear stresses and thereafter increased in regions with low shear stresses. Thus the shear reinforcement would be reduced and the concrete strength would be utilized in a better way. In the paper it is shown how circular fan stress...

  17. A new method of artificial latent fingerprint creation using artificial sweat and inkjet printer.

    Science.gov (United States)

    Hong, Sungwook; Hong, Ingi; Han, Aleum; Seo, Jin Yi; Namgung, Juyoung

    2015-12-01

    In order to study fingerprinting in the field of forensic science, it is very important to have two or more latent fingerprints with identical chemical composition and intensity. In reality, however, it is impossible to obtain identical fingerprints, because a fingerprint comes out slightly differently every time it is deposited. A previous study proposed an artificial fingerprint creation method in which inkjet ink was replaced with a solution of amino acids and sodium chloride, the components of human sweat. However, this method had some drawbacks: divalent cations were not added when formulating the artificial sweat solution, and diluted solutions were used for creating weakly deposited latent fingerprints. In this study, a method was developed to overcome these drawbacks. Several divalent cations were added because the amino acid-ninhydrin (or some of its analogues) complex is known to react with divalent cations to produce a photoluminescent product; similarly, the amino acid-1,2-indanedione complex is known to be catalyzed by a small amount of zinc ions to produce a highly photoluminescent product. In addition, a new technique was developed which enables the intensity of the printed latent fingerprint patterns to be adjusted: image processing software is used to control the intensity of the master fingerprint patterns, which in turn adjusts the printing intensity of the latent fingerprints. This new method opens the way to producing more realistic artificial fingerprints at various strengths from one artificial sweat working solution. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  18. Image Signal Transfer Method in Artificial Retina using Laser

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, I.Y.; Lee, B.H.; Kim, S.J. [Seoul National University, Seoul (Korea)

    2002-05-01

    Recently, research on artificial retinas for the blind has been active. In this paper, a new optical link method for retinal prostheses is proposed. A laser diode system was chosen to transfer the image into the eye, and a new optical system was designed and evaluated. The use of a laser diode array in the artificial retina system keeps the system simple, because no signal-processing part is required inside the eyeball. The designed optical system is sufficient to focus the laser diode array onto the photodiode array in a 20×20 application. (author). 11 refs., 7 figs., 2 tabs.

  19. A GPU-accelerated implicit meshless method for compressible flows

    Science.gov (United States)

    Zhang, Jia-Le; Ma, Zhi-Hua; Chen, Hong-Quan; Cao, Cheng

    2018-05-01

    This paper develops a recently proposed GPU-based two-dimensional explicit meshless method (Ma et al., 2014) by devising and implementing an efficient parallel LU-SGS implicit algorithm to further improve the computational efficiency. The capability of the original 2D meshless code is extended to deal with 3D complex compressible flow problems. To resolve the inherent data dependency of the standard LU-SGS method, which causes thread-racing conditions that destabilize the numerical computation, a generic rainbow coloring method is presented and applied to organize the computational points into different groups by painting neighboring points with different colors. The original LU-SGS method is modified and parallelized accordingly to perform calculations in a color-by-color manner. The CUDA Fortran programming model is employed to develop the key kernel functions that apply boundary conditions, calculate time steps, evaluate residuals, and advance and update the solution in the temporal space. A series of two- and three-dimensional test cases, including compressible flows over single- and multi-element airfoils and an M6 wing, are carried out to verify the developed code. The obtained solutions agree well with experimental data and other computational results reported in the literature. Detailed analysis of the performance of the developed code reveals that the CPU-based implicit meshless method is at least four to eight times faster than its explicit counterpart. The computational efficiency of the implicit method can be further improved by ten to fifteen times on the GPU.
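
    The coloring step can be illustrated with a greedy first-fit coloring (an assumption; the paper's "generic rainbow coloring" may differ in detail): any two points that appear in each other's stencils receive different colors, so all points of one color can be updated concurrently within an LU-SGS sweep.

        def color_points(neighbors):
            """neighbors: dict mapping point id -> iterable of adjacent point ids."""
            color = {}
            for p in sorted(neighbors):                  # deterministic visiting order
                used = {color[q] for q in neighbors[p] if q in color}
                c = 0
                while c in used:                         # first color not used nearby
                    c += 1
                color[p] = c
            return color

        # Points of equal color form one race-free parallel batch:
        # for c in range(max(color.values()) + 1):
        #     update_in_parallel([p for p in color if color[p] == c])

    Processing the batches color-by-color removes the thread-racing conditions mentioned in the abstract at the cost of a small amount of extra synchronization.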

  20. Quinary excitation method for pulse compression ultrasound measurements.

    Science.gov (United States)

    Cowell, D M J; Freear, S

    2008-04-01

    A novel switched excitation method for linear frequency modulated excitation of ultrasonic transducers in pulse compression systems is presented that is simple to realise, yet provides reduced signal sidelobes at the output of the matched filter compared to bipolar pseudo-chirp excitation. Pulse compression signal sidelobes are reduced through the use of simple amplitude tapering at the beginning and end of the excitation duration. Amplitude tapering using switched excitation is realised through the use of intermediate voltage switching levels, half that of the main excitation voltages. In total, five excitation voltages are used, creating a quinary excitation system. The absence of analogue signal generation and power amplifiers renders the excitation method attractive for applications with requirements such as a high channel count or low cost per channel. A systematic study of switched linear frequency modulated excitation methods, with simulated and laboratory-based experimental verification, is presented for 2.25 MHz non-destructive testing immersion transducers. The signal-to-sidelobe noise level of compressed waveforms generated using quinary and bipolar pseudo-chirp excitation is investigated for transmission through a 0.5 m water and kaolin slurry channel. Quinary linear frequency modulated excitation consistently reduces signal sidelobe power compared to bipolar excitation methods. Experimental results for transmission between two 2.25 MHz transducers separated by a 0.5 m channel of water and 5% kaolin suspension show improvements in signal-to-sidelobe noise power on the order of 7-8 dB. The reported quinary switched method for linear frequency modulated excitation provides improved performance compared to pseudo-chirp excitation without the need for high-performance excitation amplifiers.
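
    A minimal sketch of how such a five-level excitation could be generated is given below: a linear chirp is switched to the levels {-V, -V/2, 0, +V/2, +V}, with the half-voltage levels used only in short tapers at both ends. All parameter values (frequencies, taper fraction, dead band) and the function name are illustrative assumptions, not the paper's settings.

        import numpy as np

        def quinary_chirp(f0=1.5e6, f1=3.0e6, duration=10e-6, fs=100e6,
                          v=1.0, taper_fraction=0.15):
            t = np.arange(0, duration, 1 / fs)
            phase = 2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / duration * t**2)
            chirp = np.sin(phase)                       # ideal analogue LFM
            levels = np.where(chirp > 0, v, -v)         # bipolar pseudo-chirp
            levels[np.abs(chirp) < 0.05] = 0.0          # dead band at zero crossings
            n_taper = int(taper_fraction * len(t))
            levels[:n_taper] *= 0.5                     # half-voltage taper in
            levels[-n_taper:] *= 0.5                    # half-voltage taper out
            return t, levels

    The tapered ends soften the spectral discontinuities of the switched waveform, which is what reduces the sidelobes after matched filtering.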

  1. Numerical study of turbulent heat transfer from confined impinging jets using a pseudo-compressibility method

    Energy Technology Data Exchange (ETDEWEB)

    Rahman, M.; Rautaheimo, P.; Siikonen, T.

    1997-12-31

    A numerical investigation is carried out to predict the turbulent fluid flow and heat transfer characteristics of two-dimensional single and three impinging slot jets. Two low-Reynolds-number κ-ε models, namely the classical model of Chien and the explicit algebraic stress model of Gatski and Speziale, are considered in the simulation. A cell-centered finite-volume scheme combined with an artificial compressibility approach is employed to solve the flow equations, using a diagonally dominant alternating direction implicit (DDADI) time integration method. Fully upwinded second-order spatial differencing is adopted to approximate the convective terms. Roe's damping term is used to calculate the flux on the cell face. A multigrid method is utilized for the acceleration of convergence. On average, the heat transfer coefficients predicted by both models show good agreement with the experimental results. (orig.) 17 refs.
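
    For readers unfamiliar with the pseudo-compressibility (artificial compressibility) idea used above, the toy update below shows the core coupling dp/dτ = -β ∇·u on a periodic 2D grid. It is a deliberately minimal sketch: explicit pseudo-time stepping, no convection terms, no turbulence model, and no DDADI scheme; β, ν and the difference stencils are illustrative assumptions.

        import numpy as np

        def pseudo_time_step(u, v, p, dx, dtau, beta=1.0, nu=1e-3):
            """One explicit pseudo-time step on periodic (n, n) arrays."""
            div = ((np.roll(u, -1, 0) - u) + (np.roll(v, -1, 1) - v)) / dx
            p_new = p - dtau * beta * div                   # artificial continuity
            lap = lambda q: (np.roll(q, 1, 0) + np.roll(q, -1, 0) +
                             np.roll(q, 1, 1) + np.roll(q, -1, 1) - 4 * q) / dx**2
            dpdx = (p_new - np.roll(p_new, 1, 0)) / dx
            dpdy = (p_new - np.roll(p_new, 1, 1)) / dx
            u_new = u + dtau * (nu * lap(u) - dpdx)         # momentum (diffusion only)
            v_new = v + dtau * (nu * lap(v) - dpdy)
            return u_new, v_new, p_new

    Iterating such steps to pseudo-steady state drives the divergence of the velocity field toward zero, recovering an incompressible solution.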

  2. Biometric and Emotion Identification: An ECG Compression Based Method

    Directory of Open Access Journals (Sweden)

    Susana Brás

    2018-04-01

    We present an innovative and robust solution to both biometric and emotion identification using the electrocardiogram (ECG). The ECG represents the electrical signal that comes from the contraction of the heart muscles and indirectly reflects the flow of blood inside the heart; it is known to convey a key that allows biometric identification. Moreover, due to its relationship with the nervous system, it also varies as a function of the emotional state. The use of information-theoretic data models, associated with data compression algorithms, allows ECG records to be compared effectively and the person's identity, as well as the emotional state at the time of data collection, to be inferred. The proposed method does not require ECG wave delineation or alignment, which reduces preprocessing error. The method is divided into three steps: (1) conversion of the real-valued ECG record into a symbolic time series, using a quantization process; (2) conditional compression of the symbolic representation of the ECG, using the symbolic ECG records stored in the database as reference; (3) identification of the ECG record class, using a 1-NN (nearest neighbor) classifier. We obtained over 98% accuracy in biometric identification, whereas in emotion recognition we attained over 90%. The method therefore adequately identifies the person and his/her emotion. Moreover, the proposed method is flexible and may be adapted to different problems by altering the templates used for training the model.
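
    Steps (2) and (3) can be illustrated compactly: conditional compression approximates how many extra bytes a sample costs given a reference template, and the 1-NN rule picks the template with the smallest cost. In the sketch below, bz2 stands in for the information-theoretic coder, and the symbolic records from step (1) are assumed to be byte strings.

        import bz2

        def conditional_size(reference, target):
            """Approximate C(target | reference) via off-the-shelf compression."""
            c_ref = len(bz2.compress(reference))
            c_both = len(bz2.compress(reference + target))
            return c_both - c_ref          # extra bytes needed given the reference

        def classify(sample, database):
            """database: dict label -> symbolic ECG template (bytes); 1-NN rule."""
            return min(database,
                       key=lambda lbl: conditional_size(database[lbl], sample))

    No wave delineation or alignment appears anywhere in this pipeline, which is the preprocessing advantage the abstract highlights.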

  3. Biometric and Emotion Identification: An ECG Compression Based Method.

    Science.gov (United States)

    Brás, Susana; Ferreira, Jacqueline H T; Soares, Sandra C; Pinho, Armando J

    2018-01-01

    We present an innovative and robust solution to both biometric and emotion identification using the electrocardiogram (ECG). The ECG represents the electrical signal that comes from the contraction of the heart muscles and indirectly reflects the flow of blood inside the heart; it is known to convey a key that allows biometric identification. Moreover, due to its relationship with the nervous system, it also varies as a function of the emotional state. The use of information-theoretic data models, associated with data compression algorithms, allows ECG records to be compared effectively and the person's identity, as well as the emotional state at the time of data collection, to be inferred. The proposed method does not require ECG wave delineation or alignment, which reduces preprocessing error. The method is divided into three steps: (1) conversion of the real-valued ECG record into a symbolic time series, using a quantization process; (2) conditional compression of the symbolic representation of the ECG, using the symbolic ECG records stored in the database as reference; (3) identification of the ECG record class, using a 1-NN (nearest neighbor) classifier. We obtained over 98% accuracy in biometric identification, whereas in emotion recognition we attained over 90%. The method therefore adequately identifies the person and his/her emotion. Moreover, the proposed method is flexible and may be adapted to different problems by altering the templates used for training the model.

  4. Biometric and Emotion Identification: An ECG Compression Based Method

    Science.gov (United States)

    Brás, Susana; Ferreira, Jacqueline H. T.; Soares, Sandra C.; Pinho, Armando J.

    2018-01-01

    We present an innovative and robust solution to both biometric and emotion identification using the electrocardiogram (ECG). The ECG represents the electrical signal that comes from the contraction of the heart muscles and indirectly reflects the flow of blood inside the heart; it is known to convey a key that allows biometric identification. Moreover, due to its relationship with the nervous system, it also varies as a function of the emotional state. The use of information-theoretic data models, associated with data compression algorithms, allows ECG records to be compared effectively and the person's identity, as well as the emotional state at the time of data collection, to be inferred. The proposed method does not require ECG wave delineation or alignment, which reduces preprocessing error. The method is divided into three steps: (1) conversion of the real-valued ECG record into a symbolic time series, using a quantization process; (2) conditional compression of the symbolic representation of the ECG, using the symbolic ECG records stored in the database as reference; (3) identification of the ECG record class, using a 1-NN (nearest neighbor) classifier. We obtained over 98% accuracy in biometric identification, whereas in emotion recognition we attained over 90%. The method therefore adequately identifies the person and his/her emotion. Moreover, the proposed method is flexible and may be adapted to different problems by altering the templates used for training the model. PMID:29670564

  5. On the estimation method of compressed air consumption during pneumatic caisson sinking

    OpenAIRE

    平川, 修治; ヒラカワ, シュウジ; Shuji, HIRAKAWA

    1990-01-01

    There are several methods for estimating compressed air consumption during pneumatic caisson sinking, and they are required to estimate the consumption under the same conditions. In this paper, methods are proposed which can accurately estimate the compressed air consumption during pneumatic caisson sinking at any given moment.

  6. A Finite Element Method for Simulation of Compressible Cavitating Flows

    Science.gov (United States)

    Shams, Ehsan; Yang, Fan; Zhang, Yu; Sahni, Onkar; Shephard, Mark; Oberai, Assad

    2016-11-01

    This work focuses on a novel approach for finite element simulations of multi-phase flows which involve an evolving interface with phase change. Modeling problems such as cavitation requires addressing multiple challenges, including the compressibility of the vapor phase and the interface physics governed by mass, momentum and energy fluxes. We have developed a mathematically consistent and robust computational approach to address these problems. We use stabilized finite element methods on unstructured meshes to solve the compressible Navier-Stokes equations. An arbitrary Lagrangian-Eulerian formulation is used to handle the interface motion. Our method uses a mesh adaptation strategy to preserve the quality of the volumetric mesh while the interface mesh moves along with the interface. The interface jump conditions are accurately represented using a discontinuous Galerkin method on the conservation laws. Condensation and evaporation rates at the interface are modeled thermodynamically to determine the interface velocity. We will present initial results on bubble cavitation and the behavior of an attached cavitation zone in a separated boundary layer. We acknowledge the support from the Army Research Office (ARO) under ARO Grant W911NF-14-1-0301.

  7. Artificial urinary conduit construction using tissue engineering methods.

    Science.gov (United States)

    Kloskowski, Tomasz; Pokrywczyńska, Marta; Drewa, Tomasz

    2015-01-01

    Incontinent urinary diversion using an ileal conduit is the most popular method used by urologists after bladder cystectomy for muscle-invasive bladder cancer. The use of gastrointestinal tissue is associated with a series of complications and requires an extension of the surgical procedure, which increases the operating time. Regenerative medicine together with tissue engineering techniques gives hope for constructing an artificial urinary conduit de novo without affecting the ileum. In this review we analyze the history of urinary diversion together with current attempts at urinary conduit construction using tissue engineering methods. Based on the literature and our own experience, we present future perspectives on artificial urinary conduit construction. The small number of papers in the field of tissue-engineered urinary conduit construction indicates that this topic requires more attention. Three main factors can be distinguished to resolve this topic: proper scaffold construction along with proper regeneration of both the urothelium and smooth muscle layers. The artificial urinary conduit has a great chance to become the first commercially available product in urology constructed by regenerative medicine methods.

  8. Iterative methods for compressible Navier-Stokes and Euler equations

    Energy Technology Data Exchange (ETDEWEB)

    Tang, W.P.; Forsyth, P.A.

    1996-12-31

    This workshop will focus on methods for solution of compressible Navier-Stokes and Euler equations. In particular, attention will be focused on the interaction between the methods used to solve the non-linear algebraic equations (e.g. full Newton or first order Jacobian) and the resulting large sparse systems. Various types of block and incomplete LU factorization will be discussed, as well as stability issues, and the use of Newton-Krylov methods. These techniques will be demonstrated on a variety of model transonic and supersonic airfoil problems. Applications to industrial CFD problems will also be presented. Experience with the use of C++ for solution of large scale problems will also be discussed. The format for this workshop will be four fifteen minute talks, followed by a roundtable discussion.
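
    The workshop themes above (incomplete LU factorization combined with Krylov iterations on large sparse systems) can be illustrated with an off-the-shelf preconditioned solve. The small 1D convection-diffusion matrix below is a stand-in assumption, not one of the workshop's transonic problems; in a Newton-Krylov flow solver, A would be the (approximate) Jacobian at the current Newton step.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n = 200
        A = sp.diags([-1.0, 2.2, -1.2], [-1, 0, 1], shape=(n, n), format="csc")
        b = np.ones(n)

        ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)   # incomplete LU factors
        M = spla.LinearOperator(A.shape, ilu.solve)          # use ILU as preconditioner
        x, info = spla.gmres(A, b, M=M)                      # preconditioned Krylov solve
        assert info == 0                                     # 0 means converged

    The quality of the ILU factors (drop tolerance, fill) trades setup cost against Krylov iteration count, which is exactly the interaction between the nonlinear and linear solvers that the workshop addresses.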

  9. Development of a diagnostic expert system for eddy current data analysis using applied artificial intelligence methods

    International Nuclear Information System (INIS)

    Upadhyaya, B.R.; Yan, W.; Henry, G.

    1999-01-01

    A diagnostic expert system that integrates database management methods, artificial neural networks, and decision-making using fuzzy logic has been developed for the automation of steam generator eddy current test (ECT) data analysis. The new system, known as EDDYAI, considers the following key issues: (1) digital eddy current test data calibration, compression, and representation; (2) development of robust neural networks with low probability of misclassification for flaw depth estimation; (3) flaw detection using fuzzy logic; (4) development of an expert system for database management, compilation of a trained neural network library, and a decision module; and (5) evaluation of the integrated approach using eddy current data. The implementation to field test data includes the selection of proper feature vectors for ECT data analysis, development of a methodology for large eddy current database management, artificial neural networks for flaw depth estimation, and a fuzzy logic decision algorithm for flaw detection. A large eddy current inspection database from the Electric Power Research Institute NDE Center is being utilized in this research towards the development of an expert system for steam generator tube diagnosis. The integration of ECT data pre-processing as part of the data management, fuzzy logic flaw detection technique, and tube defect parameter estimation using artificial neural networks are the fundamental contributions of this research. (orig.)

  10. Development of a diagnostic expert system for eddy current data analysis using applied artificial intelligence methods

    Energy Technology Data Exchange (ETDEWEB)

    Upadhyaya, B.R.; Yan, W. [Tennessee Univ., Knoxville, TN (United States). Dept. of Nuclear Engineering; Behravesh, M.M. [Electric Power Research Institute, Palo Alto, CA (United States); Henry, G. [EPRI NDE Center, Charlotte, NC (United States)

    1999-09-01

    A diagnostic expert system that integrates database management methods, artificial neural networks, and decision-making using fuzzy logic has been developed for the automation of steam generator eddy current test (ECT) data analysis. The new system, known as EDDYAI, considers the following key issues: (1) digital eddy current test data calibration, compression, and representation; (2) development of robust neural networks with low probability of misclassification for flaw depth estimation; (3) flaw detection using fuzzy logic; (4) development of an expert system for database management, compilation of a trained neural network library, and a decision module; and (5) evaluation of the integrated approach using eddy current data. The implementation to field test data includes the selection of proper feature vectors for ECT data analysis, development of a methodology for large eddy current database management, artificial neural networks for flaw depth estimation, and a fuzzy logic decision algorithm for flaw detection. A large eddy current inspection database from the Electric Power Research Institute NDE Center is being utilized in this research towards the development of an expert system for steam generator tube diagnosis. The integration of ECT data pre-processing as part of the data management, fuzzy logic flaw detection technique, and tube defect parameter estimation using artificial neural networks are the fundamental contributions of this research. (orig.)

  11. The study of diagnostic accuracy of chest nodules by using different compression methods

    International Nuclear Information System (INIS)

    Liang Zhigang; Kuncheng, L.I.; Zhang Jinghong; Liu Shuliang

    2005-01-01

    Background: The purpose of this study was to compare the diagnostic accuracy for small nodules in the chest when using different compression methods. Method: Two radiologists, each with 5 years of experience, interpreted 39 chest images twice, using the lossless and lossy compression methods; the interval between readings was 3 weeks. Each time, the radiologists interpreted one kind of compressed image. The image browser used the Unisight software provided by the Atlastiger Company in Shanghai. The results were analyzed with the ROCKIT software, and the ROC curves were plotted in Excel 2002. Results: In receiver operating characteristic studies for scoring the presence or absence of nodules, the images compressed with the lossy method showed no statistical difference compared with the images compressed with the lossless method. Conclusion: The diagnostic accuracy for chest nodules using the lossless and lossy compression methods showed no significant difference; the lossy compression method can therefore be used to transmit and archive chest images with nodules.

  12. Method for Calculation of Steam-Compression Heat Transformers

    Directory of Open Access Journals (Sweden)

    S. V. Zditovetckaya

    2012-01-01

    The paper considers a method for the joint numerical analysis of the cycle parameters and heat-exchange equipment of a steam-compression heat transformer contour that takes into account non-stationary operating modes and irreversible losses in the devices and the pipeline contour. The method has been realized as a software package and can be used for the design or selection of a heat transformer, taking into account the coolant and the actual equipment included in its structure. The paper presents investigation results revealing the influence of pressure losses in the evaporator and the condenser on the coolant side, caused by friction and local resistance, on the power efficiency of the heat transformer operating as a refrigerating and heating installation and as a heat pump. The actual operational parameters of the heat pump in nominal and off-design operating modes depend on the structure of the specific contour equipment.

  13. Artificially lengthened and constricted vocal tract in vocal training methods.

    Science.gov (United States)

    Bele, Irene Velsvik

    2005-01-01

    It is common practice in vocal training to make use of vocal exercise techniques that involve partial occlusion of the vocal tract. Various techniques are used; some of them form an occlusion within the front part of the oral cavity or at the lips. Another vocal exercise technique involves lengthening the vocal tract; for example, the method of phonation into small tubes. This essay presents some studies made on the effects of various vocal training methods that involve an artificially lengthened and constricted vocal tract. The influence of sufficient acoustic impedance on vocal fold vibration and economical voice production is presented.

  14. Compression ratio of municipal solid waste simulation using artificial neural network and adaptive neurofuzzy system

    Directory of Open Access Journals (Sweden)

    Maryam Mokhtari

    2014-07-01

    The compression ratio of municipal solid waste (MSW) is an essential parameter for the evaluation of waste settlement. Since it is relatively time-consuming to determine the compression ratio from oedometer tests, and there are difficulties associated with working on waste materials, it is useful to develop models based on the physical properties of the waste. The present research therefore attempts to develop suitable prediction models using ANFIS and ANN. The compression ratio was modeled as a function of the physical properties of the waste, including dry unit weight, water content, and biodegradable organic content. A reliable experimental database of oedometer tests, taken from the literature, was employed to train and test the ANN and ANFIS models. The performance of the developed models was investigated according to different statistical criteria (i.e., correlation coefficient, root mean squared error, and mean absolute error) recommended by researchers. The final models demonstrated correlation coefficients higher than 90% and low error values, so they are capable of acceptable prediction of the municipal solid waste compression ratio. Furthermore, the values of the performance measures obtained for the ANN and ANFIS models indicate that the ANFIS model performs better than the ANN model.
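
    At an illustrative level, the ANN branch of such a study reduces to a small regression problem: three physical inputs, one output. The sketch below uses scikit-learn with made-up records; the architecture, the units, and all numbers are assumptions, not the paper's database.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        # Hypothetical records: [dry unit weight, water content %, organic content %]
        X = np.array([[5.5, 90.0, 55.0],
                      [7.0, 60.0, 40.0],
                      [8.5, 45.0, 30.0],
                      [10.0, 30.0, 20.0]])
        y = np.array([0.35, 0.28, 0.22, 0.16])   # hypothetical compression ratios

        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                           random_state=0))
        model.fit(X, y)                           # train on the oedometer database
        print(model.predict([[6.0, 75.0, 50.0]]))

    A real study would of course use the full oedometer database, a train/test split, and the statistical criteria listed in the abstract to score the fitted model.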

  15. The production of fully deacetylated chitosan by compression method

    Directory of Open Access Journals (Sweden)

    Xiaofei He

    2016-03-01

    Chitosan's activities are significantly affected by its degree of deacetylation (DDA), while fully deacetylated chitosan is difficult to produce on a large scale. Therefore, this paper introduces a compression method for preparing 100% deacetylated chitosan with less environmental pollution. The product is characterized by XRD, FT-IR, UV and HPLC. The 100% deacetylated chitosan is produced under low-concentration alkali and high-pressure conditions, requiring only 15% alkali solution and a 1:10 chitosan powder to NaOH solution ratio under 0.11–0.12 MPa for 120 min. When the alkali concentration varied from 5% to 15%, chitosan with an ultra-high DDA value (up to 95%) was produced.

  16. A method of loss free compression for the data of nuclear spectrum

    International Nuclear Information System (INIS)

    Sun Mingshan; Wu Shiying; Chen Yantao; Xu Zurun

    2000-01-01

    A new lossless compression method based on the features of nuclear spectrum data is presented, from which a practicable algorithm is successfully derived. A compression rate varying from 0.50 to 0.25 is obtained, and the distribution of the processed data becomes even more suitable for reprocessing by another compression method, such as Huffman coding, to improve the compression rate.

  17. Statistical Analysis of Compression Methods for Storing Binary Image for Low-Memory Systems

    Directory of Open Access Journals (Sweden)

    Roman Slaby

    2013-01-01

    The paper focuses on a statistical comparison of selected compression methods used for binary images. The aim is to assess which of the presented compression methods for low-memory systems requires the smallest number of bytes of memory. Correlation functions are used to assess the success rate of converting the input image to a binary image; the correlation function is one of the OCR methods used for the digitization of printed symbols. The use of compression methods is necessary for systems based on low-power microcontrollers. For such systems with limited memory, saving the data stream is very important, as is the time required to decode the compressed data. The success rate of the selected compression algorithms is evaluated using the basic characteristics of exploratory analysis. The examined samples represent the number of bytes needed to compress the test images, which represent alphanumeric characters.

  18. Reasoning methods in medical consultation systems: artificial intelligence approaches.

    Science.gov (United States)

    Shortliffe, E H

    1984-01-01

    It has been argued that the problem of medical diagnosis is fundamentally ill-structured, particularly during the early stages when the number of possible explanations for presenting complaints can be immense. This paper discusses the process of clinical hypothesis evocation, contrasts it with the structured decision making approaches used in traditional computer-based diagnostic systems, and briefly surveys the more open-ended reasoning methods that have been used in medical artificial intelligence (AI) programs. The additional complexity introduced when an advice system is designed to suggest management instead of (or in addition to) diagnosis is also emphasized. Example systems are discussed to illustrate the key concepts.

  19. The Basic Principles and Methods of the System Approach to Compression of Telemetry Data

    Science.gov (United States)

    Levenets, A. V.

    2018-01-01

    The task of compressing measurement data remains urgent for information-measurement systems. This paper offers the basic principles necessary for designing highly effective systems for compressing telemetric information. The basis of the proposed principles is the representation of a telemetric frame as a whole information space in which existing correlations can be found. Methods of data transformation and compression algorithms realizing these principles are described. The compression ratio of the proposed algorithm is about 1.8 times higher than that of a classic algorithm. The results of the study thus show the good prospects of these methods and algorithms.
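
    The "frame as information space" principle lends itself to a small illustration: if successive telemetry frames are stacked as rows, correlated channels line up in columns, and inter-frame differencing concentrates the values near zero, where an entropy coder works well. The sketch below is an assumption-level illustration (zlib stands in for the entropy coder); the round trip is lossless.

        import zlib
        import numpy as np

        def compress_frames(frames):
            """frames: (n_frames, n_channels) integer array of telemetry words."""
            f = np.asarray(frames, dtype=np.int32)
            zero = np.zeros((1, f.shape[1]), dtype=np.int32)
            residual = np.diff(f, axis=0, prepend=zero)  # exploit inter-frame correlation
            return zlib.compress(residual.tobytes())     # residuals cluster near zero

        def decompress_frames(blob, n_channels):
            residual = np.frombuffer(zlib.decompress(blob), dtype=np.int32)
            residual = residual.reshape(-1, n_channels)
            return np.cumsum(residual, axis=0)           # invert the differencing

    Slowly varying channels produce long runs of zero residuals, which is where the gain over compressing the raw frame stream comes from.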

  20. Prediction of shear wave velocity using empirical correlations and artificial intelligence methods

    Science.gov (United States)

    Maleki, Shahoo; Moradzadeh, Ali; Riabi, Reza Ghavami; Gholami, Raoof; Sadeghzadeh, Farhad

    2014-06-01

    Good understanding of the mechanical properties of rock formations is essential during the development and production phases of a hydrocarbon reservoir. Conventionally, these properties are estimated from petrophysical logs, with compression and shear sonic data being the main inputs to the correlations. However, in many cases the shear sonic data are not acquired during well logging, often for cost-saving purposes. In such cases, shear wave velocity is estimated using available empirical correlations or the artificial intelligence methods proposed during the last few decades. In this paper, petrophysical logs corresponding to a well drilled in the southern part of Iran were used to estimate the shear wave velocity using empirical correlations as well as two robust artificial intelligence methods known as Support Vector Regression (SVR) and Back-Propagation Neural Network (BPNN). Although the results obtained by SVR seem to be reliable, the estimated values are not very precise; considering the importance of shear sonic data as an input to different models, this study suggests acquiring shear sonic data during well logging. It is important to note that the benefits of having reliable shear sonic data for the estimation of rock formation mechanical properties will compensate for the possible additional costs of acquiring a shear log.

  1. Prediction of shear wave velocity using empirical correlations and artificial intelligence methods

    Directory of Open Access Journals (Sweden)

    Shahoo Maleki

    2014-06-01

    Good understanding of the mechanical properties of rock formations is essential during the development and production phases of a hydrocarbon reservoir. Conventionally, these properties are estimated from petrophysical logs, with compression and shear sonic data being the main inputs to the correlations. However, in many cases the shear sonic data are not acquired during well logging, often for cost-saving purposes. In such cases, shear wave velocity is estimated using available empirical correlations or the artificial intelligence methods proposed during the last few decades. In this paper, petrophysical logs corresponding to a well drilled in the southern part of Iran were used to estimate the shear wave velocity using empirical correlations as well as two robust artificial intelligence methods known as Support Vector Regression (SVR) and Back-Propagation Neural Network (BPNN). Although the results obtained by SVR seem to be reliable, the estimated values are not very precise; considering the importance of shear sonic data as an input to different models, this study suggests acquiring shear sonic data during well logging. It is important to note that the benefits of having reliable shear sonic data for the estimation of rock formation mechanical properties will compensate for the possible additional costs of acquiring a shear log.

  2. Interpolation decoding method with variable parameters for fractal image compression

    International Nuclear Information System (INIS)

    He Chuanjiang; Li Gaoping; Shen Xiaona

    2007-01-01

    The interpolation fractal decoding method, introduced by [He C, Yang SX, Huang X. Progressive decoding method for fractal image compression. IEE Proc Vis Image Signal Process 2004;3:207-13], generates the decoded image progressively by means of an interpolation iterative procedure with a constant parameter. It is well known that the majority of image details are added in the first steps of iteration in conventional fractal decoding; hence the constant parameter of the interpolation decoding method must be set to a smaller value in order to achieve better progressive decoding. However, it then requires an extremely large number of iterations to converge. It is thus reasonable for some applications to slow down the iterative process in the first stages of decoding and then to accelerate it afterwards (e.g., from some chosen iteration onwards). To achieve this goal, this paper proposes an interpolation decoding scheme with variable (iteration-dependent) parameters and proves the convergence of the decoding process mathematically. Experimental results demonstrate that the proposed scheme achieves the above-mentioned goal.
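
    The variable-parameter iteration described above can be written generically as x_{n+1} = (1 - lam_n) * x_n + lam_n * T(x_n), where T is the contractive fractal transform. The sketch below is schematic; the schedule (small lam early, lam = 1 later) is an illustrative assumption, not the paper's choice.

        def interpolation_decode(T, x0, schedule, n_iter=30):
            """T: fractal transform; x0: initial image; schedule(n) -> lam in (0, 1]."""
            x = x0
            for n in range(n_iter):
                lam = schedule(n)
                x = (1.0 - lam) * x + lam * T(x)   # interpolated fixed-point step
            return x

        # Slow start, later acceleration: add details gently at first, then switch
        # to plain fractal iteration (lam = 1) once faster convergence is wanted.
        schedule = lambda n: 0.2 if n < 10 else 1.0

    With lam = 1 for all n this reduces to conventional fractal decoding, which is the limiting case the abstract refers to.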

  3. Methods for compressible multiphase flows and their applications

    Science.gov (United States)

    Kim, H.; Choe, Y.; Kim, H.; Min, D.; Kim, C.

    2018-06-01

    This paper presents an efficient and robust numerical framework to deal with multiphase real-fluid flows and their broad spectrum of engineering applications. A homogeneous mixture model incorporated with a real-fluid equation of state and a phase change model is considered to calculate complex multiphase problems. As robust and accurate numerical methods to handle multiphase shocks and phase interfaces over a wide range of flow speeds, the AUSMPW+_N and RoeM_N schemes with a system preconditioning method are presented. These methods are assessed by extensive validation problems with various types of equation of state and phase change models. Representative realistic multiphase phenomena, including the flow inside a thermal vapor compressor, pressurization in a cryogenic tank, and unsteady cavitating flow around a wedge, are then investigated as application problems. With appropriate physical modeling followed by robust and accurate numerical treatments, compressible multiphase flow physics such as phase changes, shock discontinuities, and their interactions are well captured, confirming the suitability of the proposed numerical framework to wide engineering applications.

  4. Handwritten Javanese Character Recognition Using Several Artificial Neural Network Methods

    Directory of Open Access Journals (Sweden)

    Gregorius Satia Budhi

    2015-07-01

    Javanese characters are traditional characters used to write the Javanese language, a language spoken by many people on the island of Java, Indonesia. The use of Javanese characters is diminishing because of the difficulty of studying the characters themselves. The Javanese character set consists of basic characters, numbers, complementary characters, and so on. In this research we developed a system to recognize Javanese characters. Input for the system is a digital image containing several handwritten Javanese characters. Preprocessing and segmentation are performed on the input image to extract each character. For each character, feature extraction is done using the ICZ-ZCZ method. The output of the feature extraction then becomes the input for an artificial neural network. We used several artificial neural networks, namely a bidirectional associative memory network, a counterpropagation network, an evolutionary network, a backpropagation network, and a backpropagation network combined with chi2. The experimental results show that the combination of chi2 and backpropagation achieved better recognition accuracy than the other methods.
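
    ICZ-ZCZ feature extraction is, as commonly described, a zoning method: the binary character image is split into zones, and each zone contributes the mean pixel distance to the image centroid (ICZ) and to the zone's own centroid (ZCZ). The zone grid and details below are assumptions rather than the paper's exact configuration.

        import numpy as np

        def icz_zcz_features(img, zones=(4, 4)):
            """img: 2D binary character image. Returns 2 features per zone."""
            img = np.asarray(img, dtype=float)
            ys, xs = np.nonzero(img)
            cy, cx = ys.mean(), xs.mean()                  # image centroid
            zh, zw = img.shape[0] // zones[0], img.shape[1] // zones[1]
            feats = []
            for i in range(zones[0]):
                for j in range(zones[1]):
                    zy, zx = np.nonzero(img[i*zh:(i+1)*zh, j*zw:(j+1)*zw])
                    if zy.size == 0:                       # empty zone
                        feats += [0.0, 0.0]
                        continue
                    zy, zx = zy + i * zh, zx + j * zw      # global coordinates
                    icz = np.hypot(zy - cy, zx - cx).mean()
                    zcz = np.hypot(zy - zy.mean(), zx - zx.mean()).mean()
                    feats += [icz, zcz]
            return np.array(feats)                         # input vector for the ANN

    The resulting fixed-length vector is what the various neural networks listed in the abstract are trained on.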

  5. Spectral Element Method for the Simulation of Unsteady Compressible Flows

    Science.gov (United States)

    Diosady, Laslo Tibor; Murman, Scott M.

    2013-01-01

    This work uses a discontinuous-Galerkin spectral-element method (DGSEM) to solve the compressible Navier-Stokes equations [1-3]. The inviscid flux is computed using the approximate Riemann solver of Roe [4]. The viscous fluxes are computed using the second form of Bassi and Rebay (BR2) [5] in a manner consistent with the spectral-element approximation. The method of lines with the classical 4th-order explicit Runge-Kutta scheme is used for time integration. Results for polynomial orders up to p = 15 (16th order) are presented. The code is parallelized using the Message Passing Interface (MPI). The computations presented in this work are performed using the Sandy Bridge nodes of the NASA Pleiades supercomputer at NASA Ames Research Center. Each Sandy Bridge node consists of two eight-core Intel Xeon E5-2670 processors with a clock speed of 2.6 GHz and 2 GB of per-core memory. On a Sandy Bridge node the Tau Benchmark [6] runs in a time of 7.6 s.

  6. Comparative Survey of Ultrasound Images Compression Methods Dedicated to a Tele-Echography Robotic System

    National Research Council Canada - National Science Library

    Delgorge, C

    2001-01-01

    ... For the purpose of this work, we selected seven compression methods: Fourier Transform, Discrete Cosine Transform, Wavelets, Quadtree Transform, Fractals, Histogram Thresholding, and Run Length Coding...

  7. Improved artificial bee colony algorithm based gravity matching navigation method.

    Science.gov (United States)

    Gao, Wei; Zhao, Bo; Zhou, Guang Tao; Wang, Qiu Ying; Yu, Chun Yang

    2014-07-18

    The gravity matching navigation algorithm is one of the key technologies for gravity-aided inertial navigation systems. With the development of intelligent algorithms, the powerful search ability of the Artificial Bee Colony (ABC) algorithm makes it applicable to the gravity matching navigation field. However, the existing search mechanisms of basic ABC algorithms cannot meet the need for high accuracy in gravity-aided navigation. First, suitable modifications are proposed to improve the performance of the basic ABC algorithm. Second, a new search mechanism is presented which is based on the improved ABC algorithm and uses external speed information. Finally, the modified Hausdorff distance is introduced to screen the possible matching results. Both simulations and ocean experiments verify the feasibility of the method, and the results show that the matching rate of the method is high enough to obtain a precise matching position.
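
    For reference, the basic ABC employed-bee search step that the paper modifies looks like the sketch below: a candidate v_ij = x_ij + phi * (x_ij - x_kj) perturbs one dimension toward or away from a random partner, and is kept only if it improves the objective. The paper's improvements (e.g., the use of external speed information) are not reproduced here.

        import numpy as np

        def employed_bee_phase(food, fitness, objective, rng):
            """food: (n, d) candidate positions; fitness: their objective values."""
            n, d = food.shape
            for i in range(n):
                k = rng.choice([m for m in range(n) if m != i])  # random partner
                j = rng.integers(d)                              # one dimension
                phi = rng.uniform(-1.0, 1.0)
                cand = food[i].copy()
                cand[j] = food[i, j] + phi * (food[i, j] - food[k, j])
                f = objective(cand)
                if f < fitness[i]:                               # greedy selection
                    food[i], fitness[i] = cand, f
            return food, fitness

        # Usage: rng = np.random.default_rng(0); objective maps a candidate matching
        # position to a gravity-map mismatch to be minimized.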

  8. A modified compressible smoothed particle hydrodynamics method and its application on the numerical simulation of low and high velocity impacts

    International Nuclear Information System (INIS)

    Amanifard, N.; Haghighat Namini, V.

    2012-01-01

    In this study a Modified Compressible Smoothed Particle Hydrodynamics (MCSPH) method is introduced which is applicable to problems involving shock wave structures and elastic-plastic deformations of solids. The algorithm discretizes the momentum equation into three parts, solves each part separately, and computes its effect on the velocity field and the displacement of the particles. The most distinctive feature of the method is that it removes artificial viscosity from the formulation entirely, yet shows good agreement with other established numerical methods and exhibits no severe numerical fracture or tensile instability, without requiring any extra modifications. Two types of problems involving elastic-plastic deformations and shock waves are presented to demonstrate the capability of the method in simulating such problems and its ability to capture shocks. The problems considered are low and high velocity impacts between aluminum projectiles and semi-infinite aluminum beams. An elastic-perfectly plastic constitutive model is chosen for the aluminum, and the simulation results are compared with other sound studies of these cases.

  9. Analysis of a discrete element method and coupling with a compressible fluid flow method

    International Nuclear Information System (INIS)

    Monasse, L.

    2011-01-01

    This work aims at the numerical simulation of compressible fluid/deformable structure interactions. In particular, we have developed a partitioned coupling algorithm between a Finite Volume method for the compressible fluid and a Discrete Element method capable of taking fractures in the solid into account. A survey of existing fictitious domain methods and partitioned algorithms led to the choice of an Embedded Boundary method and an explicit coupling scheme. We first showed that the Discrete Element method used for the solid yields the correct macroscopic behaviour and that the symplectic time-integration scheme ensures the preservation of energy. We then developed an explicit coupling algorithm between a compressible inviscid fluid and a rigid (undeformable) solid. Mass, momentum and energy conservation and consistency properties were proved for the coupling scheme. The algorithm was then extended to coupling with a deformable solid, in the form of a semi-implicit scheme. Finally, we applied this method to unsteady inviscid flows around moving structures: comparisons with existing numerical and experimental results demonstrate the excellent accuracy of our method. (author)
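
    The symplectic time integration credited with preserving energy in the Discrete Element solver is typically a velocity-Verlet-type scheme; a minimal sketch, with force() standing in for the (unspecified) contact-force model:

        import numpy as np

        def velocity_verlet(pos, vel, mass, force, dt, n_steps):
            # Symplectic velocity-Verlet integration: kick-drift-kick.
            # mass may be a scalar or a per-particle array.
            f = force(pos)
            for _ in range(n_steps):
                vel = vel + 0.5 * dt * f / mass     # half kick
                pos = pos + dt * vel                # drift
                f = force(pos)
                vel = vel + 0.5 * dt * f / mass     # half kick
            return pos, vel

    Because the scheme is symplectic, the discrete energy oscillates around the true value instead of drifting, which is the long-time behaviour the abstract refers to.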

  10. Acceleration methods for multi-physics compressible flow

    Science.gov (United States)

    Peles, Oren; Turkel, Eli

    2018-04-01

    In this work we investigate the Runge-Kutta (RK)/Implicit smoother scheme as a convergence accelerator for complex multi-physics flow problems, including turbulent, reactive and two-phase flows. The flows considered are subsonic, transonic and supersonic flows in complex geometries, and can be either steady or unsteady. All of these problems are considered to be very stiff. We introduce an acceleration method for the compressible Navier-Stokes equations, starting with the multigrid method for pure subsonic flow, including reactive flows. We then add the Rossow-Swanson-Turkel RK/Implicit smoother, which enables performing all these complex flow simulations with a reasonable CFL number. We next discuss the RK/Implicit smoother for time-dependent problems and for low Mach numbers. The preconditioner includes an intrinsic low-Mach-number treatment inside the smoother operator. We also develop a modified Roe scheme with a corresponding flux Jacobian matrix, and then extend the method to real gas and reactive flow. Reactive flows are governed by a system of inhomogeneous Navier-Stokes equations with very stiff source terms, so the extension of the RK/Implicit smoother requires an approximation of the source-term Jacobian, whose properties are very important for the stability of the method. We discuss what the theory of chemical kinetics tells us about the mathematical properties of the Jacobian matrix, focusing on the implication of Le Chatelier's principle for the sign of its diagonal entries. We present the implementation of the method for turbulent flow using two RANS turbulence models: the one-equation Spalart-Allmaras model and the two-equation k-ω SST model. The last extension is to two-phase flows with a gas as the main phase and an Eulerian representation of a dispersed particle phase (EDP). We present some examples of such flow computations inside a ballistic evaluation
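
    The Rossow-Swanson-Turkel smoother itself is not reproduced in the abstract; as a hedged illustration, classical implicit residual smoothing, the family it belongs to, replaces the explicit RK residual R by the solution of (1 + 2ε)R̄_i - ε(R̄_{i-1} + R̄_{i+1}) = R_i, which relaxes the CFL restriction of the explicit stages:

        import numpy as np
        from scipy.linalg import solve_banded

        def smooth_residual(r, eps=0.6):
            # Classical 1-D implicit residual smoothing via a tridiagonal solve:
            # (1 + 2*eps) * rs[i] - eps * (rs[i-1] + rs[i+1]) = r[i].
            n = len(r)
            ab = np.zeros((3, n))
            ab[0, 1:] = -eps            # super-diagonal
            ab[1, :] = 1.0 + 2.0 * eps  # main diagonal
            ab[2, :-1] = -eps           # sub-diagonal
            return solve_banded((1, 1), ab, r)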

  11. New patient-controlled abdominal compression method in radiography: radiation dose and image quality.

    Science.gov (United States)

    Piippo-Huotari, Oili; Norrman, Eva; Anderzén-Carlsson, Agneta; Geijer, Håkan

    2018-05-01

    The radiation dose to patients can be reduced in many ways, one of which is abdominal compression. In this study, the radiation dose and image quality for a new patient-controlled compression device were compared with conventional compression and compression in the prone position. The aim was to compare the radiation dose and image quality of patient-controlled compression with conventional and prone compression in general radiography, using an experimental design with a quantitative approach. After obtaining the approval of the ethics committee, a consecutive sample of 48 patients was examined with the standard clinical urography protocol. The radiation doses were measured as dose-area product and analyzed with a paired t-test. The image quality was evaluated by visual grading analysis: four radiologists evaluated each image individually by scoring nine criteria modified from the European quality criteria for diagnostic radiographic images. There was no significant difference in radiation dose or image quality between conventional and patient-controlled compression, whereas the prone position resulted in both a higher dose and inferior image quality. Patient-controlled compression gave dose levels similar to conventional compression and lower than prone compression. Image quality was similar with patient-controlled and conventional compression, and was judged to be better than in the prone position.

  12. Methods of compression of digital holograms, based on 1-level wavelet transform

    International Nuclear Information System (INIS)

    Kurbatova, E A; Cheremkhin, P A; Evtikhiev, N N

    2016-01-01

    To reduce the memory required for storing information about 3D scenes and to decrease the rate of hologram transmission, digital hologram compression can be used. Compression of digital holograms by wavelet transforms is among the most powerful methods. In the paper the most popular wavelet transforms are considered and applied to digital hologram compression. The obtained values of reconstruction quality and the holograms' diffraction efficiencies are compared. (paper)
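
    A minimal sketch of one-level 2-D wavelet compression of a hologram using PyWavelets; the wavelet name and the fraction of detail coefficients kept are illustrative choices, not the paper's:

        import numpy as np
        import pywt

        def compress_hologram(holo, wavelet="db4", keep=0.1):
            # 1-level 2-D wavelet compression: keep the largest `keep` fraction
            # of detail coefficients, zero the rest, then reconstruct.
            cA, (cH, cV, cD) = pywt.dwt2(holo, wavelet)
            details = np.concatenate([c.ravel() for c in (cH, cV, cD)])
            thresh = np.quantile(np.abs(details), 1.0 - keep)
            cH, cV, cD = [np.where(np.abs(c) >= thresh, c, 0.0)
                          for c in (cH, cV, cD)]
            return pywt.idwt2((cA, (cH, cV, cD)), wavelet)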

  13. An Enhanced Run-Length Encoding Compression Method for Telemetry Data

    Directory of Open Access Journals (Sweden)

    Shan Yanhu

    2017-09-01

    Full Text Available Telemetry data are essential in evaluating the performance of aircraft and diagnosing its failures. This work combines oversampling technology with a run-length encoding compression algorithm incorporating an error factor to further enhance the compression performance of telemetry data in a multichannel acquisition system. Compression of the telemetry data is carried out with FPGAs. Pulse signals and vibration signals are used in the experiments, and the proposed method is compared with two existing methods. The experimental results indicate that the compression ratio, precision, and distortion degree of the telemetry data are improved significantly compared with those obtained by the existing methods. The implementation and measurement of the proposed telemetry data compression method show its effectiveness when used in a high-precision, high-capacity multichannel acquisition system.
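
    The exact combination of oversampling and the error factor is not detailed in the abstract; a plausible reading of run-length encoding with an error factor is to merge consecutive samples that stay within a tolerance of the run's reference value, e.g.:

        def rle_with_tolerance(samples, err=0.0):
            # Run-length encode a sequence, treating values within `err` of the
            # run's reference value as equal (lossy for err > 0).
            runs = []
            ref, count = samples[0], 1
            for s in samples[1:]:
                if abs(s - ref) <= err:
                    count += 1
                else:
                    runs.append((ref, count))
                    ref, count = s, 1
            runs.append((ref, count))
            return runs                 # list of (value, run_length) pairs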

  14. Method of controlling coherent synchroton radiation-driven degradation of beam quality during bunch length compression

    Science.gov (United States)

    Douglas, David R [Newport News, VA; Tennant, Christopher D [Williamsburg, VA

    2012-07-10

    A method of avoiding CSR-induced beam quality defects in free-electron laser operation by (a) controlling the rate of compression and (b) using a novel means of integrating the compression with the remainder of the transport system; both are accomplished by means of dispersion modulation. A large dispersion is created in the penultimate dipole magnet of the compression region, leading to rapid compression; this large dispersion is then demagnified and dispersion suppression is performed in a final small dipole. As a result, the bunch is short for only a small angular extent of the transport, and the resulting CSR excitation is small.

  15. Compressed Sensing Methods in Radio Receivers Exposed to Noise and Interference

    DEFF Research Database (Denmark)

    Pierzchlewski, Jacek

    ... there is a problem of interference, which makes digitization of radio receivers even more difficult. High-order low-pass filters are needed to remove interfering signals and secure a high-quality reception. In the mid-2000s a new method of signal acquisition, called compressed sensing, emerged. Compressed sensing ... the downconverted baseband signal and interference, may be replaced by low-order filters. Additional digital signal processing is the price to pay for this feature; hence, the signal processing is moved from the analog to the digital domain. Filtering compressed sensing, which is a new application of compressed sensing ...
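
    Filtering compressed sensing as defined in the thesis cannot be reconstructed from this excerpt; for orientation, a generic compressed-sensing reconstruction (ISTA for the l1-regularized least-squares problem) looks like this, with A the measurement matrix and y the compressed samples:

        import numpy as np

        def ista(A, y, lam=0.1, n_iter=500):
            # Iterative shrinkage-thresholding (ISTA) for sparse recovery:
            # minimize 0.5 * ||A x - y||^2 + lam * ||x||_1.
            L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                g = A.T @ (A @ x - y)          # gradient step
                z = x - g / L
                x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
            return x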

  16. Space Environment Modelling with the Use of Artificial Intelligence Methods

    Science.gov (United States)

    Lundstedt, H.; Wintoft, P.; Wu, J.-G.; Gleisner, H.; Dovheden, V.

    1996-12-01

    Space-based technological systems are affected by space weather in many ways, and several severe satellite failures have been reported at times of space storms. Our society also increasingly depends on satellites for communication, navigation, exploration, and research. Predictions of the conditions in the satellite environment have therefore become very important. We present predictions made with artificial intelligence (AI) techniques, such as artificial neural networks (ANN) and hybrids of AI methods. We are developing a space weather model based on intelligent hybrid systems (IHS). The model consists of different forecast modules, each of which predicts the space weather on a specific time-scale; the time-scales range from minutes to months, with fundamental time-scales of 1-5 minutes, 1-3 hours, 1-3 days, and 27 days. Solar and solar wind data are used as input. From solar magnetic field measurements, made either on the ground at the Wilcox Solar Observatory (WSO) at Stanford or from space by the satellite SOHO, solar wind parameters can be predicted and modelled with ANN and MHD models. Magnetograms from WSO are available on a daily basis, whereas magnetograms from SOHO will be available every 90 minutes; SOHO magnetograms as input to ANNs will therefore make it possible even to predict solar transient events. Geomagnetic storm activity can today be predicted with very high accuracy by ANN methods using solar wind input data. However, at present real-time solar wind data are only available during part of the day, from the satellite WIND; with the launch of ACE in 1997, solar wind data will be available 24 hours per day. The conditions of the satellite environment are disturbed not only at times of geomagnetic storms but also at times of intense solar radiation and highly energetic particles. These events are associated with increased solar activity. Predictions of these events are therefore

  17. A study on measurement on artificial radiation dose rate using the response matrix method

    International Nuclear Information System (INIS)

    Kidachi, Hiroshi; Ishikawa, Yoichi; Konno, Tatsuya

    2004-01-01

    We examined the accuracy and stability of the estimated artificial dose contribution, which is distinguished from the natural background gamma-ray dose rate using the Response Matrix method. Irradiation experiments using artificial gamma-ray sources indicated a linear relationship between the observed dose rate and the estimated artificial dose contribution when the irradiated artificial gamma-ray dose rate was higher than about 2 nGy/h. Statistical and time-series analyses of long-term data made it clear that the estimated artificial contribution remained almost constant in the absence of artificial influence from the nuclear power plants. However, variations in the estimated artificial dose contribution were infrequently observed due to rainfall, detector maintenance operations and occasional calibration errors. Some considerations on the factors behind these variations are given. (author)
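
    The report gives no equations, but response-matrix unfolding is commonly posed as a nonnegative linear inverse problem: the measured spectrum m is modeled as R s, with R the detector response matrix and s the source intensities, the artificial component being one (or more) of the columns. A hedged sketch using SciPy:

        import numpy as np
        from scipy.optimize import nnls

        def unfold_spectrum(R, measured):
            # Solve measured ~= R @ sources for nonnegative source intensities.
            # R: (n_channels, n_sources) response matrix whose columns hold the
            # detector response to unit activity of each source (natural chains,
            # 40K, an artificial nuclide, ...).
            sources, residual_norm = nnls(R, measured)
            return sources, residual_norm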

  18. Unsteady aerodynamic coefficients obtained by a compressible vortex lattice method.

    OpenAIRE

    Fabiano Hernandes

    2009-01-01

    Unsteady solutions for the aerodynamic coefficients of a thin airfoil in compressible subsonic or supersonic flows are studied. The lift, the pitch moment, and pressure coefficients are obtained numerically for the following motions: the indicial response (unit step function) of the airfoil, i.e., a sudden change in the angle of attack; a thin airfoil penetrating into a sharp edge gust (for several gust speed ratios); a thin airfoil penetrating into a one-minus-cosine gust and sinusoidal gust...

  19. Artificial Intelligence: Bayesian versus Heuristic Method for Diagnostic Decision Support.

    Science.gov (United States)

    Elkin, Peter L; Schlegel, Daniel R; Anderson, Michael; Komm, Jordan; Ficheur, Gregoire; Bisson, Leslie

    2018-04-01

    Evoking strength is one of the important contributions of the field of Biomedical Informatics to the discipline of Artificial Intelligence. The University at Buffalo's Orthopedics Department wanted to create an expert system to assist patients with self-diagnosis of knee problems and thereby facilitate referral to the right orthopedic subspecialist. Two independent sports medicine physicians reviewed 469 cases; a board-certified orthopedic sports medicine practitioner, L.B., reviewed any disagreements until a gold-standard diagnosis was reached. For each case, the patients entered 126 potential answers to 26 questions into a Web interface. These were modeled by an expert sports medicine physician and the answers were reviewed by L.B. For each finding, the clinician specified the sensitivity (term frequency) and both the specificity (Sp) and the heuristic evoking strength (ES); heuristics are methods of reasoning with only partial evidence. An expert system was constructed that produced a ranked disease list based on the posttest odds of disease for each case. We compare the accuracy of using Sp to that of using ES (original model, p < 0.0008; term importance * disease importance [DItimesTI] model, p < 0.0001; Wilcoxon rank sum test). For patient referral assignment, Sp in the DItimesTI model was superior to the use of ES. By the fifth diagnosis the advantage was lost, so there is no difference between the techniques when serving as a reminder system. Schattauer GmbH Stuttgart.
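
    The standard Bayesian update behind such posttest-odds rankings multiplies the pretest odds by a likelihood ratio per observed finding, LR+ = sensitivity / (1 - specificity); in the heuristic variant the evoking strength takes the place of specificity. A minimal sketch (the paper's DItimesTI weighting is not reproduced):

        def posttest_odds(pretest_odds, findings):
            # Bayesian update: multiply odds by the likelihood ratio of each
            # observed finding, LR+ = sensitivity / (1 - specificity).
            odds = pretest_odds
            for sens, spec in findings:      # (sensitivity, specificity) pairs
                odds *= sens / (1.0 - spec)
            return odds

        # Example: two observed findings for one candidate diagnosis.
        odds = posttest_odds(0.1, [(0.8, 0.9), (0.6, 0.7)])
        prob = odds / (1.0 + odds)           # convert odds back to probability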

  20. Using artificial intelligence methods to design new conducting polymers

    Directory of Open Access Journals (Sweden)

    Ronaldo Giro

    2003-12-01

    Full Text Available In recent years the possibility of creating new conducting polymers by exploring the concept of copolymerization (different structural monomeric units) has attracted much attention from experimental and theoretical points of view. Due to the rich reactivity of carbon an almost infinite number of new structures is possible, and the procedure of trial and error has been the rule. In this work we have used a methodology capable of generating new structures with pre-specified properties. It combines the negative factor counting (NFC) technique with artificial intelligence methods (genetic algorithms - GAs). We present the results of a case study for poly(phenylenesulfide phenyleneamine) (PPSA), a copolymer formed by combination of the homopolymers polyaniline (PANI) and polyphenylenesulfide (PPS). The methodology was successfully applied to the problem of obtaining binary up to quinternary disordered polymeric alloys with a pre-specified gap value or exhibiting metallic properties. It is completely general and can in principle be adapted to the design of new classes of materials with pre-specified properties.
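
    The NFC step that scores a candidate copolymer's electronic structure is beyond this excerpt; a minimal sketch of the genetic-algorithm side, with fitness() as a placeholder for an NFC-based score such as -|gap(sequence) - target_gap|:

        import numpy as np

        def ga_design(fitness, n_genes, pop=40, gens=100, p_mut=0.05, seed=0):
            # Minimal genetic algorithm: binary genome = choice of monomer at
            # each position; tournament selection, one-point crossover,
            # bit-flip mutation.
            rng = np.random.default_rng(seed)
            genomes = rng.integers(0, 2, (pop, n_genes))
            for _ in range(gens):
                fit = np.array([fitness(g) for g in genomes])
                children = []
                for _ in range(pop):
                    i, j = rng.integers(pop, size=2)     # tournament of two
                    a = genomes[i] if fit[i] >= fit[j] else genomes[j]
                    i, j = rng.integers(pop, size=2)
                    b = genomes[i] if fit[i] >= fit[j] else genomes[j]
                    cut = rng.integers(1, n_genes)       # one-point crossover
                    child = np.concatenate([a[:cut], b[cut:]])
                    flip = rng.random(n_genes) < p_mut   # bit-flip mutation
                    child[flip] ^= 1
                    children.append(child)
                genomes = np.array(children)
            fit = np.array([fitness(g) for g in genomes])
            return genomes[fit.argmax()]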

  1. Alteration of blue pigment in artificial iris in ocular prosthesis: effect of paint, drying method and artificial aging.

    Science.gov (United States)

    Goiato, Marcelo Coelho; Fernandes, Aline Úrsula Rocha; dos Santos, Daniela Micheline; Hadadd, Marcela Filié; Moreno, Amália; Pesqueira, Aldiéris Alves

    2011-02-01

    The artificial iris is the structure responsible for the dissimulation and aesthetics of an ocular prosthesis. The objective of the present study was to evaluate the color stability of the artificial iris of microwave-polymerized ocular prostheses as a function of paint type, drying method and accelerated aging. A total of 40 discs of microwave-polymerized acrylic resin were fabricated and divided according to the blue paint type (n = 5): hydrosoluble acrylic, nitrocellulose automotive, hydrosoluble gouache and oil paints. Paints were dried either naturally or under an infrared light bulb. Each specimen consisted of one disc of colorless acrylic resin and another colored with a basic sclera pigment; painting was performed on one surface of one of the discs. The specimens were submitted to an artificial aging chamber under ultraviolet light for 1008 h. A reflective spectrophotometer was used to evaluate color changes. Data were evaluated by 3-way repeated-measures ANOVA and the Tukey HSD test (α = 0.05). All paints suffered color alteration. The oil paint presented the highest color resistance to artificial aging regardless of drying method. Copyright © 2010 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.

  2. Method for compression molding of thermosetting plastics utilizing a temperature gradient across the plastic to cure the article

    Science.gov (United States)

    Heier, W. C. (Inventor)

    1974-01-01

    A method is described for the compression molding of thermosetting plastic compositions. Heat is applied to the compressed load in a mold cavity and adjusted to hold the molding temperature at the interface of the cavity surface and the compressed compound, producing a thermal front. This thermal front advances into the evacuated compound at mean right angles to the compression load and toward a thermal fence formed at the opposite surface of the compressed compound.

  3. A novel signal compression method based on optimal ensemble empirical mode decomposition for bearing vibration signals

    Science.gov (United States)

    Guo, Wei; Tse, Peter W.

    2013-01-01

    Today, remote machine condition monitoring is popular due to the continuous advancement of wireless communication. Bearings are the most frequently and easily failed components in many rotating machines. To accurately identify the type of bearing fault, large amounts of vibration data need to be collected, yet the volume of transmitted data cannot be too high because the bandwidth of wireless communication is limited. To solve this problem, the data are usually compressed before being transmitted to a remote maintenance center. This paper proposes a novel signal compression method that can substantially reduce the amount of data to be transmitted without sacrificing the accuracy of fault identification. The proposed method is based on ensemble empirical mode decomposition (EEMD), an effective method for adaptively decomposing a vibration signal into different bands of signal components, termed intrinsic mode functions (IMFs). An optimization method was designed to automatically select appropriate EEMD parameters for the analyzed signal, in particular the appropriate level of the white noise added in the EEMD method. An index termed the relative root-mean-square error was used to evaluate the decomposition performance under different noise levels in order to find the optimal level. After applying the optimal EEMD method to a vibration signal, the IMF related to the bearing fault can be extracted from the original vibration signal. Compressing this signal component leaves a much smaller proportion of data samples to be retained for transmission and later reconstruction. The proposed compression method was also compared with the popular wavelet compression method. Experimental results demonstrate that the optimization of EEMD parameters can automatically find appropriate EEMD parameters for the analyzed signals, and the IMF-based compression method provides a higher compression ratio, while retaining the bearing defect

  4. A novel ECG data compression method based on adaptive Fourier decomposition

    Science.gov (United States)

    Tan, Chunyu; Zhang, Liming

    2017-12-01

    This paper presents a novel electrocardiogram (ECG) compression method based on adaptive Fourier decomposition (AFD). AFD is a newly developed signal decomposition approach which can decompose a signal with fast convergence and hence reconstruct ECG signals with high fidelity. Unlike most high-performance algorithms, our method does not use any preprocessing operation before compression; Huffman coding is employed for further compression. Validated with 48 ECG recordings of the MIT-BIH arrhythmia database, the proposed method achieves a compression ratio (CR) of 35.53 and a percentage root mean square difference (PRD) of 1.47% on average with N = 8 decomposition steps, together with a robust PRD-CR relationship. The results demonstrate that the proposed method performs well compared with state-of-the-art ECG compressors.
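
    The two figures of merit quoted, CR and PRD, have standard definitions that are easy to state in code (note that some authors compute PRD with the mean removed in the denominator):

        import numpy as np

        def compression_ratio(original_bits, compressed_bits):
            # CR: size of the raw record over size of the compressed record.
            return original_bits / compressed_bits

        def prd(x, x_rec):
            # Percentage root-mean-square difference between the original
            # and the reconstructed ECG.
            x, x_rec = np.asarray(x, float), np.asarray(x_rec, float)
            return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))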

  5. The boundary data immersion method for compressible flows with application to aeroacoustics

    Energy Technology Data Exchange (ETDEWEB)

    Schlanderer, Stefan C., E-mail: stefan.schlanderer@unimelb.edu.au [Faculty for Engineering and the Environment, University of Southampton, SO17 1BJ Southampton (United Kingdom); Weymouth, Gabriel D., E-mail: G.D.Weymouth@soton.ac.uk [Faculty for Engineering and the Environment, University of Southampton, SO17 1BJ Southampton (United Kingdom); Sandberg, Richard D., E-mail: richard.sandberg@unimelb.edu.au [Department of Mechanical Engineering, University of Melbourne, Melbourne VIC 3010 (Australia)

    2017-03-15

    This paper introduces a virtual boundary method for compressible viscous fluid flow that is capable of accurately representing moving bodies in flow and aeroacoustic simulations. The method is the compressible extension of the boundary data immersion method (BDIM; Maertens & Weymouth, 2015). The BDIM equations for the compressible Navier-Stokes equations are derived and the accuracy of the method for the hydrodynamic representation of solid bodies is demonstrated with challenging test cases, including a fully turbulent boundary layer flow and a supersonic instability wave. In addition we show that the compressible BDIM is able to accurately represent noise radiation from moving bodies and flow-induced noise generation without any penalty in allowable time step.

  6. Image-Based Compression Method of Three-Dimensional Range Data with Texture

    OpenAIRE

    Chen, Xia; Bell, Tyler; Zhang, Song

    2017-01-01

    Recently, high-speed and high-accuracy three-dimensional (3D) scanning techniques and commercially available 3D scanning devices have made real-time 3D shape measurement and reconstruction possible. The conventional mesh representation of 3D geometry, however, results in large file sizes, causing difficulties for storage and transmission. Methods for compressing scanned 3D data are therefore desired. This paper proposes a novel compression method which stores 3D range data within the c...

  7. A new method of on-line multiparameter amplitude analysis with compression

    International Nuclear Information System (INIS)

    Morhac, M.; Matousek, V.

    1996-01-01

    An algorithm for on-line multidimensional amplitude analysis with compression using a fast adaptive orthogonal transform is presented. The method is based on a direct modification of the multiplication coefficients of the signal flow graph of the fast Cooley-Tukey algorithm; the coefficients are modified according to a reference vector representing the processed data. The method has been tested by compressing three-parameter experimental nuclear data. The efficiency of the derived adaptive transform is compared with classical orthogonal transforms. (orig.)

  8. Soft computing methods for estimating the uniaxial compressive strength of intact rock from index tests

    Czech Academy of Sciences Publication Activity Database

    Mishra, A. Deepak; Srigyan, M.; Basu, A.; Rokade, P. J.

    2015-01-01

    Vol. 80, December 2015 (2015), pp. 418-424 ISSN 1365-1609 Institutional support: RVO:68145535 Keywords: uniaxial compressive strength * rock indices * fuzzy inference system * artificial neural network * adaptive neuro-fuzzy inference system Subject RIV: DH - Mining, incl. Coal Mining Impact factor: 2.010, year: 2015 http://ac.els-cdn.com/S1365160915300708/1-s2.0-S1365160915300708-main.pdf?_tid=318a7cec-8929-11e5-a3b8-00000aacb35f&acdnat=1447324752_2a9d947b573773f88da353a16f850eac

  9. Control Systems for Hyper-Redundant Robots Based on Artificial Potential Method

    Directory of Open Access Journals (Sweden)

    Mihaela Florescu

    2015-06-01

    Full Text Available This paper presents a control method for hyper-redundant robots based on the artificial potential approach. The principles of this method are shown and an illustrative example is offered. The artificial potential method is then applied to the case of a tentacle robot, starting from the dynamic model of the robot. In addition, a series of results obtained through simulation is presented.
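
    A minimal sketch of the artificial potential idea for a point on the robot: a quadratic attractive well at the goal plus short-range repulsive potentials around obstacles, with the commanded motion following the negative gradient. Gains and ranges are illustrative, not the paper's:

        import numpy as np

        def potential_step(q, goal, obstacles, k_att=1.0, k_rep=0.5,
                           rho0=1.0, gain=0.1):
            # One artificial-potential control step: descend the gradient of
            # U = U_att + U_rep, where U_att = 0.5*k_att*||q - goal||^2 and
            # U_rep = 0.5*k_rep*(1/d - 1/rho0)^2 inside the influence range rho0.
            grad = k_att * (q - goal)            # attractive quadratic well
            for obs in obstacles:
                d = np.linalg.norm(q - obs)
                if 0.0 < d < rho0:               # repulsion only near obstacles
                    grad += k_rep * (1.0/rho0 - 1.0/d) * (q - obs) / d**3
            return q - gain * grad               # move along -grad(U)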

  10. A method of automatic control of the process of compressing pyrogas in olefin production

    Energy Technology Data Exchange (ETDEWEB)

    Podval' niy, M.L.; Bobrovnikov, N.R.; Kotler, L.D.; Shib, L.M.; Tuchinskiy, M.R.

    1982-01-01

    In the known method of automatically controlling the compression of pyrogas in olefin production, the supply of cooling agents to the interstage coolers of the compression unit is regulated depending on the flow of hydrocarbons to the compression unit. To raise performance by reducing the deposition of polymers on the flow-through surfaces of the equipment, the coolant supply is also regulated as a function of the hydrocarbon flows from the upper and lower parts of the demethanizer and of the stripping tower bottoms. The coolant supply is regulated proportional to the difference between the flow of stripping tower bottoms and the ratio of the hydrocarbon flow from the upper and lower parts of the demethanizer to the hydrocarbon flow in the compression unit. With an increase in the proportion of light hydrocarbons (the sum of the upper and lower demethanizer products) in the total flow of pyrogas going to compression, the flow of coolant to the compression unit is reduced; condensation of the given fractions in the separators, and their amount in the condensate going through the piping to the stripping tower, is thereby reduced. With a reduction in the proportion of light hydrocarbons in the pyrogas, the flow of coolant is increased, improving the condensation of heavy hydrocarbons in the separators and their removal from the compression unit in the bottoms of the stripping tower.

  11. Generalised synchronisation of spatiotemporal chaos using feedback control method and phase compression

    International Nuclear Information System (INIS)

    Xing-Yuan, Wang; Na, Zhang

    2010-01-01

    Coupled map lattices are taken as examples to study the synchronisation of spatiotemporal chaotic systems. First, generalised synchronisation of two coupled map lattices is realised by selecting an appropriate feedback function and an appropriate range of the feedback parameter. Based on this, the phase compression method is used to extend the parameter range, so the feedback control method is integrated with the phase compression method to implement generalised synchronisation and to obtain an exact range of the feedback parameter. This technique is simple to implement in practice. Numerical simulations show the effectiveness and feasibility of the proposed scheme. (general)
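
    A minimal sketch of the setting, assuming a diffusively coupled logistic-map lattice and a simple linear feedback pulling a response lattice toward the drive (the paper's feedback function and phase compression step are not reproduced):

        import numpy as np

        def cml_step(x, eps=0.3, a=3.9):
            # One step of a diffusively coupled logistic map lattice (periodic).
            f = a * x * (1.0 - x)
            return (1.0 - eps) * f + 0.5 * eps * (np.roll(f, 1) + np.roll(f, -1))

        def drive_response(drive, response, k=0.8, n_steps=2000):
            # Feedback control: pull the response lattice toward the drive
            # with strength k (k = 1 is complete replacement).
            for _ in range(n_steps):
                f_d, f_r = cml_step(drive), cml_step(response)
                drive, response = f_d, f_r + k * (f_d - f_r)
            return drive, response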

  12. Analysis of time integration methods for the compressible two-fluid model for pipe flow simulations

    NARCIS (Netherlands)

    B. Sanderse (Benjamin); I. Eskerud Smith (Ivar); M.H.W. Hendrix (Maurice)

    2017-01-01

    In this paper we analyse different time integration methods for the two-fluid model and propose the BDF2 method as the preferred choice to simulate transient compressible multiphase flow in pipelines. Compared to the prevailing Backward Euler method, the BDF2 scheme has a significantly
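
    For reference, the BDF2 scheme advances du/dt = f(u) through (3u^{n+1} - 4u^n + u^{n-1}) / (2Δt) = f(u^{n+1}); a minimal sketch of one step, using fixed-point iteration where a production code would use a Newton solve:

        def bdf2_step(f, u_prev, u_curr, dt, n_fixed_point=50):
            # One BDF2 step: (3*u_next - 4*u_curr + u_prev) / (2*dt) = f(u_next),
            # rearranged to u_next = (4*u_curr - u_prev + 2*dt*f(u_next)) / 3.
            u_next = u_curr                      # initial guess
            for _ in range(n_fixed_point):
                u_next = (4.0 * u_curr - u_prev + 2.0 * dt * f(u_next)) / 3.0
            return u_next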

  13. A novel full-field experimental method to measure the local compressibility of gas diffusion media

    Energy Technology Data Exchange (ETDEWEB)

    Lai, Yeh-Hung; Li, Yongqiang [Electrochemical Energy Research Lab, GM R and D, Honeoye Falls, NY 14472 (United States); Rock, Jeffrey A. [GM Powertrain, Honeoye Falls, NY 14472 (United States)

    2010-05-15

    The gas diffusion medium (GDM) in a proton exchange membrane (PEM) fuel cell needs to simultaneously satisfy the requirements of transporting reactant gases, removing product water, conducting electrons and heat, and providing mechanical support to the membrane electrode assembly (MEA). Concerning the localized over-compression which may force carbon fibers and other conductive debris into the membrane to cause fuel cell failure by electronically shorting through the membrane, we have developed a novel full-field experimental method to measure the local thickness and compressibility of GDM. Applying a uniform air pressure upon a thin polyimide film bonded on the top surface of the GDM with support from the bottom by a flat metal substrate and measuring the thickness change using the 3-D digital image correlation technique with an out-of-plane displacement resolution less than 0.5 μm, we have determined the local thickness and compressive stress/strain behavior in the GDM. Using the local thickness and compressibility data over an area of 11.2 mm × 11.2 mm, we numerically construct the nominal compressive response of a commercial Toray™ TGP-H-060 based GDM subjected to compression by flat platens. Good agreement in the nominal stress/strain curves from the numerical construction and direct experimental flat-platen measurement confirms the validity of the methodology proposed in this article. The result shows that a nominal pressure of 1.4 MPa compressed between two flat platens can introduce localized compressive stress concentration of more than 3 MPa in up to 1% of the total area at various locations from several hundred micrometers to 1 mm in diameter. We believe that this full-field experimental method can be useful in GDM material and process development to reduce the local hard spots and help to mitigate the membrane shorting failure in PEM fuel cells. (author)

  14. A novel full-field experimental method to measure the local compressibility of gas diffusion media

    Science.gov (United States)

    Lai, Yeh-Hung; Li, Yongqiang; Rock, Jeffrey A.

    The gas diffusion medium (GDM) in a proton exchange membrane (PEM) fuel cell needs to simultaneously satisfy the requirements of transporting reactant gases, removing product water, conducting electrons and heat, and providing mechanical support to the membrane electrode assembly (MEA). Concerning the localized over-compression which may force carbon fibers and other conductive debris into the membrane to cause fuel cell failure by electronically shorting through the membrane, we have developed a novel full-field experimental method to measure the local thickness and compressibility of GDM. Applying a uniform air pressure upon a thin polyimide film bonded on the top surface of the GDM with support from the bottom by a flat metal substrate and measuring the thickness change using the 3-D digital image correlation technique with an out-of-plane displacement resolution less than 0.5 μm, we have determined the local thickness and compressive stress/strain behavior in the GDM. Using the local thickness and compressibility data over an area of 11.2 mm × 11.2 mm, we numerically construct the nominal compressive response of a commercial Toray™ TGP-H-060 based GDM subjected to compression by flat platens. Good agreement in the nominal stress/strain curves from the numerical construction and direct experimental flat-platen measurement confirms the validity of the methodology proposed in this article. The result shows that a nominal pressure of 1.4 MPa compressed between two flat platens can introduce localized compressive stress concentration of more than 3 MPa in up to 1% of the total area at various locations from several hundred micrometers to 1 mm in diameter. We believe that this full-field experimental method can be useful in GDM material and process development to reduce the local hard spots and help to mitigate the membrane shorting failure in PEM fuel cells.

  15. A method of vehicle license plate recognition based on PCANet and compressive sensing

    Science.gov (United States)

    Ye, Xianyi; Min, Feng

    2018-03-01

    Manual feature extraction in traditional vehicle license plate recognition methods is not robust to diverse image variations, and the high dimension of the features extracted with a Principal Component Analysis Network (PCANet) leads to low classification efficiency. To solve these problems, a method of vehicle license plate recognition based on PCANet and compressive sensing is proposed. First, PCANet is used to extract features from the character images. Then, a sparse measurement matrix, a very sparse matrix consistent with the Restricted Isometry Property (RIP) condition of compressed sensing, is used to reduce the dimension of the extracted features. Finally, a Support Vector Machine (SVM) is used to train on and recognize the reduced features. Experimental results demonstrate that the proposed method performs better than a Convolutional Neural Network (CNN) in both recognition accuracy and time. Compared with omitting compressive sensing, the proposed method has a lower feature dimension, which increases efficiency.
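
    A hedged sketch of the measurement-matrix stage using scikit-learn, whose SparseRandomProjection implements a very sparse random matrix with Johnson-Lindenstrauss-type guarantees; the PCANet feature extraction is assumed to have produced X already, and X_train/y_train are placeholder names:

        from sklearn.random_projection import SparseRandomProjection
        from sklearn.svm import SVC
        from sklearn.pipeline import make_pipeline

        # X_train: PCANet feature vectors, shape (n_samples, n_features);
        # y_train: character labels. The sparse random matrix reduces the
        # feature dimension before the SVM, as in the paper's pipeline.
        clf = make_pipeline(
            SparseRandomProjection(n_components=256, random_state=0),
            SVC(kernel="linear"),
        )
        # clf.fit(X_train, y_train); y_pred = clf.predict(X_test)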

  16. Artificial Neural Network Method at PT Buana Intan Gemilang

    Directory of Open Access Journals (Sweden)

    Shadika

    2017-01-01

    Full Text Available The textile industry provides high export value, occupying the third position among industries in Indonesia. Inspection in traditional textile enterprises relies on human vision and takes an average scanning time of 19.87 seconds; each roll of cloth must be inspected twice to avoid missed defects, and this inspection process causes a buildup at the inspection station. This study proposes the automation of the inspection system using an Artificial Neural Network (ANN). The input for the ANN comes from GLCM feature extraction. The automated defect inspection system achieves a detection time of 0.56 seconds and an accuracy of 88.7% in classifying the three types of defects. Implementing an automated inspection system thus results in much faster processing.

  17. Compression-RSA: New approach of encryption and decryption method

    Science.gov (United States)

    Hung, Chang Ee; Mandangan, Arif

    2013-04-01

    The Rivest-Shamir-Adleman (RSA) cryptosystem is a well-known asymmetric cryptosystem and has been applied in a very wide area. Many studies with different approaches have been carried out to improve the security and performance of the RSA cryptosystem; enhancing its performance is our main interest. In this paper, we propose a new method to increase the efficiency of RSA by shortening the plaintext before it undergoes the encryption process, without affecting the original content of the plaintext. The concept of simple continued fractions and a new special relationship between them and the Euclidean Algorithm are applied in this method. By reducing the number of plaintext-ciphertext blocks, the encryption and decryption of a secret message can be accelerated.
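
    The "special relationship" used in the paper is not given in the abstract, but the basic link is classical: the partial quotients of the simple continued fraction of a/b are exactly the successive quotients produced by the Euclidean Algorithm applied to (a, b):

        def continued_fraction(a, b):
            # Partial quotients of a/b via the Euclidean Algorithm:
            # a/b = q0 + 1/(q1 + 1/(q2 + ...)).
            quotients = []
            while b:
                q, r = divmod(a, b)
                quotients.append(q)
                a, b = b, r
            return quotients

        assert continued_fraction(649, 200) == [3, 4, 12, 4]   # 649/200 = [3; 4, 12, 4]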

  18. A Novel ECG Data Compression Method Using Adaptive Fourier Decomposition With Security Guarantee in e-Health Applications.

    Science.gov (United States)

    Ma, JiaLi; Zhang, TanTan; Dong, MingChui

    2015-05-01

    This paper presents a novel electrocardiogram (ECG) compression method for e-health applications, adapting an adaptive Fourier decomposition (AFD) algorithm hybridized with a symbol substitution (SS) technique. The compression consists of two stages: in the first stage, AFD executes efficient lossy compression with high fidelity; in the second stage, SS performs lossless compression enhancement and built-in data encryption, which is pivotal for e-health. Validated with 48 ECG records from the MIT-BIH arrhythmia benchmark database, the proposed method achieves an average compression ratio (CR) of 17.6-44.5 and a percentage root mean square difference (PRD) of 0.8-2.0% with a highly linear and robust PRD-CR relationship, pushing the compression performance into a previously unexploited region. As such, this paper provides an attractive candidate ECG compression method for pervasive e-health applications.

  19. Comparison of two solution ways of district heating control: Using analysis methods, using artificial intelligence methods

    Energy Technology Data Exchange (ETDEWEB)

    Balate, J.; Sysala, T. [Technical Univ., Zlin (Czech Republic). Dept. of Automation and Control Technology

    1997-12-31

    District Heating Systems - DHS (Centralized Heat Supply Systems - CHSS) are being developed in large cities in accordance with their growth. The systems are formed by enlarging the networks of heat distribution to consumers while gradually interconnecting the heat sources as they are built, that is, combined power and heating plants and heating plants. The complicated technology of heat production and supply requires a systems approach when designing the concept of automated control. The paper compares a solution based on analysis methods with one based on artificial intelligence methods. (orig.)

  20. Image quality enhancement in low-light-level ghost imaging using modified compressive sensing method

    Science.gov (United States)

    Shi, Xiaohui; Huang, Xianwei; Nan, Suqin; Li, Hengxing; Bai, Yanfeng; Fu, Xiquan

    2018-04-01

    Detector noise has a significant negative impact on ghost imaging at low light levels, especially for existing recovery algorithms. Based on the characteristics of additive detector noise, a method named modified compressive sensing ghost imaging is proposed to reduce the background imposed by the randomly distributed detector noise in the signal path. Experimental results show that, with an appropriate choice of threshold value, the modified compressive sensing ghost imaging algorithm can dramatically enhance the contrast-to-noise ratio of the object reconstruction compared with traditional ghost imaging and compressive sensing ghost imaging methods. The relationship between the contrast-to-noise ratio of the reconstructed image and the intensity ratio (namely, the ratio of average signal intensity to average noise intensity) for the three reconstruction algorithms is also discussed. This noise-suppression imaging technique will have great application in remote sensing and security areas.
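
    For orientation, conventional ghost imaging reconstructs the object as the covariance of the bucket signal with the illumination patterns; the noise-suppression idea can be mimicked by discarding low bucket values before correlating, as sketched here (the paper's actual algorithm instead modifies a compressive sensing solver):

        import numpy as np

        def ghost_image(patterns, bucket, threshold=None):
            # Correlation ghost imaging: G = <B*I> - <B><I> over speckle patterns.
            # patterns: (n_meas, H, W) illumination; bucket: (n_meas,) detector values.
            # If threshold is given, low bucket values (dominated by detector
            # noise) are discarded before correlating.
            if threshold is not None:
                keep = bucket > threshold
                patterns, bucket = patterns[keep], bucket[keep]
            return ((bucket[:, None, None] * patterns).mean(0)
                    - bucket.mean() * patterns.mean(0))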

  1. DNABIT Compress - Genome compression algorithm.

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data; efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences of larger genomes. Significantly better compression results show that "DNABIT Compress" outperforms the remaining compression algorithms, and it also significantly improves the running time of all previous DNA compression programs. Assigning unique binary bit codes to fragments of the DNA sequence (exact repeats, reverse repeats) is a concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base.
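
    The paper's unique bit codes for repeat fragments are not given in the abstract; the baseline it improves upon, fixed 2-bits-per-base packing, can be sketched as follows:

        CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

        def pack_dna(seq):
            # Pack a DNA string at 2 bits per base (the baseline for
            # DNABIT-style coding; repeat fragments would receive shorter
            # dedicated codes).
            bits = 0
            for base in seq:
                bits = (bits << 2) | CODE[base]
            n = len(seq)
            return bits.to_bytes((2 * n + 7) // 8, "big"), n

        def unpack_dna(data, n):
            bits = int.from_bytes(data, "big")
            bases = "ACGT"
            return "".join(bases[(bits >> (2 * (n - 1 - i))) & 0b11]
                           for i in range(n))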

  2. Case report of deep vein thrombosis caused by artificial urinary sphincter reservoir compressing right external iliac vein

    Directory of Open Access Journals (Sweden)

    Marcus J Yip

    2015-01-01

    Full Text Available Artificial urinary sphincters (AUSs) are commonly used after radical prostatectomy in patients who are incontinent of urine. However, they are associated with complications, the most common being reservoir uprising or migration. We present a unique case of occlusive external iliac and femoral vein obstruction by the AUS reservoir causing thrombosis. Deflation of the reservoir and anticoagulation have, thus far, not been successful at decreasing the thrombus burden. We present this case as a rare but significant surgical complication, explore the risk factors that may have contributed, and discuss potential endovascular therapies to address this previously unreported AUS complication.

  3. Retrofit device and method to improve humidity control of vapor compression cooling systems

    Science.gov (United States)

    Roth, Robert Paul; Hahn, David C.; Scaringe, Robert P.

    2016-08-16

    A method and device for improving the moisture removal capacity of a vapor compression system are disclosed. The vapor compression system is started up with the evaporator blower initially set to a high speed. The relative humidity in the return air stream is measured with the evaporator blower operating at the high speed. If the measured humidity is above a predetermined high relative humidity value, the evaporator blower speed is reduced from the initially set high speed to the lowest possible speed. The device is a control board connected with the blower that uses a predetermined change in measured relative humidity to control the blower motor speed.

  4. A MODIFIED EMBEDDED ZERO-TREE WAVELET METHOD FOR MEDICAL IMAGE COMPRESSION

    Directory of Open Access Journals (Sweden)

    T. Celine Therese Jenny

    2010-11-01

    Full Text Available The Embedded Zero-tree Wavelet (EZW) is a lossy compression method that allows for progressive transmission of a compressed image. By exploiting the natural zero-trees found in a wavelet-decomposed image, the EZW algorithm is able to encode large insignificant regions of a still image with a minimal number of bits, yielding relatively high peak signal-to-noise ratios (PSNR) at high compression levels. Vector Quantization (VQ) can be performed as a post-processing step to reduce the coded file size; it reduces the redundancy of the image data so that the data can be stored or transmitted in an efficient form. Experimental results demonstrate that the proposed method outperforms several well-known lossless image compression techniques for still images that contain 256 colors or less.

  5. Diagnostic methods and interpretation of the experiments on microtarget compression in the Iskra-4 device

    International Nuclear Information System (INIS)

    Kochemasov, G.G.

    1992-01-01

    Studies on the problem of laser fusion, based mainly on experiments conducted in the Iskra-4 device, are reviewed. Different approaches to solving the problem of DT-fuel ignition and methods for diagnosing the characteristics of the laser radiation and of the plasma produced during microtarget heating and compression are considered.

  6. The Effects of Different Curing Methods on the Compressive Strength of Terracrete

    Directory of Open Access Journals (Sweden)

    O. Alake

    2009-01-01

    Full Text Available This research evaluated the effects of different curing methods on the compressive strength of terracrete. Several tests, including sieve analysis, were carried out on the constituents of terracrete (granite and laterite) to determine their particle size distribution, and performance criteria tests were conducted to determine the compressive strength of terracrete cubes over 7 to 35 days of curing. Sand, foam-soaked, tank and open methods of curing were used, and the study was carried out under controlled temperature. Sixty 100 mm × 100 mm × 100 mm cubes were cast using a mix ratio of 1 part cement, 1½ parts laterite and 3 parts coarse aggregate (granite), proportioned by weight, with a water-cement ratio of 0.62. The compressive strengths of the cubes showed that, of the four curing methods, the open method of curing was the best, because the cubes gained the highest average compressive strength of 10.3 N/mm² by the 35th day.

  7. Coupled double-distribution-function lattice Boltzmann method for the compressible Navier-Stokes equations.

    Science.gov (United States)

    Li, Q; He, Y L; Wang, Y; Tao, W Q

    2007-11-01

    A coupled double-distribution-function lattice Boltzmann method is developed for the compressible Navier-Stokes equations. Different from existing thermal lattice Boltzmann methods, this method can recover the compressible Navier-Stokes equations with a flexible specific-heat ratio and Prandtl number. In the method, a density distribution function based on a multispeed lattice is used to recover the compressible continuity and momentum equations, while the compressible energy equation is recovered by an energy distribution function. The energy distribution function is then coupled to the density distribution function via the thermal equation of state. In order to obtain an adjustable specific-heat ratio, a constant related to the specific-heat ratio is introduced into the equilibrium energy distribution function. Two different coupled double-distribution-function lattice Boltzmann models are also proposed in the paper. Numerical simulations are performed for the Riemann problem, the double-Mach-reflection problem, and the Couette flow with a range of specific-heat ratios and Prandtl numbers. The numerical results are found to be in excellent agreement with analytical and/or other solutions.

  8. Reaction kinetics, reaction products and compressive strength of ternary activators activated slag designed by Taguchi method

    NARCIS (Netherlands)

    Yuan, B.; Yu, Q.L.; Brouwers, H.J.H.

    2015-01-01

    This study investigates the reaction kinetics, the reaction products and the compressive strength of slag activated by ternary activators, namely waterglass, sodium hydroxide and sodium carbonate. Nine mixtures are designed by the Taguchi method considering the factors of sodium carbonate content

  9. Semi-implicit method for three-dimensional compressible MHD simulation

    International Nuclear Information System (INIS)

    Harned, D.S.; Kerner, W.

    1984-03-01

    A semi-implicit method for solving the full compressible MHD equations in three dimensions is presented. The method is unconditionally stable with respect to the fast compressional modes. The time step is instead limited by the slower shear Alfven motion. The computing time required for one time step is essentially the same as for explicit methods. Linear stability limits are derived and verified by three-dimensional tests on linear waves in slab geometry. (orig.)

  10. About a method for compressing x-ray computed microtomography data

    Science.gov (United States)

    Mancini, Lucia; Kourousias, George; Billè, Fulvio; De Carlo, Francesco; Fidler, Aleš

    2018-04-01

    The management of scientific data is of high importance, especially for experimental techniques that produce big data volumes. One such technique is x-ray computed tomography (CT), whose community has introduced advanced data formats that allow better management of experimental data. Rather than the organization of the data and the associated metadata, the main topic of this work is data compression and its applicability to experimental data collected from a synchrotron-based CT beamline at the Elettra-Sincrotrone Trieste facility (Italy), studying images acquired from various types of samples. The study covers parallel beam geometry, but could easily be extended to a cone-beam one; the reconstruction workflow used is the one currently in operation at the beamline. Contrary to standard image compression studies, this manuscript proposes a systematic framework and workflow for the critical examination of different compression techniques and does so by applying it to experimental data. Beyond the methodological framework, this study presents and examines the use of JPEG-XR in combination with HDF5 and TIFF formats, providing insights and strategies on data compression and image quality issues that can be used and implemented at other synchrotron facilities and laboratory systems. In conclusion, projection data compression using JPEG-XR appears to be a promising, efficient method to reduce data file size and thus to facilitate data handling and image reconstruction.

  11. Technical Note: Validation of two methods to determine contact area between breast and compression paddle in mammography

    NARCIS (Netherlands)

    Branderhorst, Woutjan; de Groot, Jerry E.; van Lier, Monique G. J. T. B.; Highnam, Ralph P.; den Heeten, Gerard J.; Grimbergen, Cornelis A.

    2017-01-01

    Purpose: To assess the accuracy of two methods of determining the contact area between the compression paddle and the breast in mammography. An accurate method to determine the contact area is essential to accurately calculate the average compression pressure applied by the paddle. Methods: For a

  12. Comparison between Two Methods for Diagnosis of Trichinellosis: Trichinoscopy and Artificial Digestion

    Directory of Open Access Journals (Sweden)

    María Laura Vignau

    1997-09-01

    Full Text Available Two direct methods for the diagnosis of trichinellosis were compared: trichinoscopy and artificial digestion. Muscles from 17 Wistar rats, orally infected with 500 encysted Trichinella spiralis larvae, were examined. From 1 g samples of each of the following muscles: diaphragm, tongue, masseters, intercostals, triceps brachialis and quadriceps femoralis, a total of 648,440 larvae were recovered. The linear correlation between trichinoscopy and artificial digestion was very high and significant (r = 0.94, p < 0.0001), showing that the two methods for the detection of muscle larvae did not differ significantly. With both methods, significant differences were found in the distribution of larvae per gram of muscle

  13. An ROI multi-resolution compression method for 3D-HEVC

    Science.gov (United States)

    Ti, Chunli; Guan, Yudong; Xu, Guodong; Teng, Yidan; Miao, Xinyuan

    2017-09-01

    3D High Efficiency Video Coding (3D-HEVC) offers significant potential for increasing the compression ratio of multi-view RGB-D videos. However, the bit rate still rises dramatically with the improvement of video resolution, which brings challenges to the transmission network, especially the mobile network. This paper proposes an ROI multi-resolution compression method for 3D-HEVC to better preserve the information in the ROI under limited bandwidth. This is realized primarily through ROI extraction and compression of multi-resolution preprocessed videos as alternative data according to the network conditions. First, semantic contours are detected by modified structured forests to restrain the color textures inside objects. The ROI is then determined using the contour neighborhood along with the face region and foreground area of the scene. Second, the RGB-D videos are divided into slices and compressed via 3D-HEVC at different resolutions for selection by the audiences and applications. Afterwards, the reconstructed low-resolution videos from the 3D-HEVC encoder are directly up-sampled via Laplace transformation and used to replace the non-ROI areas of the high-resolution videos. Finally, the ROI multi-resolution compressed slices are obtained by compressing the ROI-preprocessed videos with 3D-HEVC. The temporal and spatial details of the non-ROI areas are reduced in the low-resolution videos, so the ROI is better preserved by the encoder automatically. Experiments indicate that the proposed method can keep the key high-frequency information with subjective significance while the bit rate is reduced.

  14. A novel method for estimating soil precompression stress from uniaxial confined compression tests

    DEFF Research Database (Denmark)

    Lamandé, Mathieu; Schjønning, Per; Labouriau, Rodrigo

    2017-01-01

    ... Stress-strain curves were obtained by performing uniaxial, confined compression tests on undisturbed soil cores for three soil types at three soil water potentials. The new method performed better than the Gompertz fitting method in estimating precompression stress. The values of precompression stress ... obtained from the new method were linearly related to the maximum stress experienced by the soil samples prior to the uniaxial, confined compression test at each soil condition, with a slope close to 1. Precompression stress determined with the new method was not related to soil type or dry bulk density ... The concept of precompression stress is used for estimating soil strength of relevance to field traffic. It represents the maximum stress experienced by the soil. The most recently developed fitting method to estimate precompression stress (Gompertz) is based on the assumption of an S-shape stress...

  15. Finite Element Analysis of Increasing Column Section and CFRP Reinforcement Method under Different Axial Compression Ratio

    Science.gov (United States)

    Jinghai, Zhou; Tianbei, Kang; Fengchi, Wang; Xindong, Wang

    2017-11-01

    Eight frame joints with fewer stirrups in the core area are simulated with the ABAQUS finite element software. The specimens are strengthened with a composite method of carbon fiber sheets and an enlarged column section, at axial compression ratios of 0.3, 0.45 and 0.6. The load-displacement curves, ductility and stiffness are analyzed, and it is found that the axial compression ratio has a great influence on the bearing capacity achieved with the enlarged-column-section strengthening method, and little influence on the carbon fiber reinforcement method. The different strengthening schemes improve the ultimate bearing capacity and ductility of the frame joints to a certain extent: the composite strengthening method gives the most significant improvement, followed by enlarging the column section, with carbon fiber reinforcement of the joints giving the smallest increase.

  16. A blended pressure/density based method for the computation of incompressible and compressible flows

    International Nuclear Information System (INIS)

    Rossow, C.-C.

    2003-01-01

    An alternative method to low speed preconditioning for the computation of nearly incompressible flows with compressible methods is developed. For this approach the leading terms of the flux difference splitting (FDS) approximate Riemann solver are analyzed in the incompressible limit. In combination with the requirement of the velocity field to be divergence-free, an elliptic equation to solve for a pressure correction to enforce the divergence-free velocity field on the discrete level is derived. The pressure correction equation established is shown to be equivalent to classical methods for incompressible flows. In order to allow the computation of flows at all speeds, a blending technique for the transition from the incompressible, pressure based formulation to the compressible, density based formulation is established. It is found necessary to use preconditioning with this blending technique to account for a remaining 'compressible' contribution in the incompressible limit, and a suitable matrix directly applicable to conservative residuals is derived. Thus, a coherent framework is established to cover the discretization of both incompressible and compressible flows. Compared with standard preconditioning techniques, the blended pressure/density based approach showed improved robustness for high lift flows close to separation
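
    The classical pressure-correction machinery that the elliptic equation is shown to be equivalent to can be sketched on a periodic grid. The fragment below is a minimal spectral projection in Python (the textbook divergence-free projection, not the author's FDS-based blended formulation):

        import numpy as np

        def project(u, v, dx):
            # Remove the gradient part of a periodic 2D velocity field:
            # solve lap(p) = div(u) in Fourier space, then subtract grad(p).
            n = u.shape[0]
            k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
            kx, ky = np.meshgrid(k, k, indexing="ij")
            k2 = kx**2 + ky**2
            k2[0, 0] = 1.0                # pressure is defined up to a constant
            uh, vh = np.fft.fft2(u), np.fft.fft2(v)
            ph = (1j * kx * uh + 1j * ky * vh) / -k2
            uh -= 1j * kx * ph
            vh -= 1j * ky * ph
            return np.fft.ifft2(uh).real, np.fft.ifft2(vh).real

        rng = np.random.default_rng(1)
        u = rng.standard_normal((64, 64))
        v = rng.standard_normal((64, 64))
        u_df, v_df = project(u, v, dx=1.0 / 64)   # divergence-free to round-off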

  17. A practical method for estimating maximum shear modulus of cemented sands using unconfined compressive strength

    Science.gov (United States)

    Choo, Hyunwook; Nam, Hongyeop; Lee, Woojin

    2017-12-01

    The composition of naturally cemented deposits is very complicated; thus, estimating the maximum shear modulus (Gmax, or shear modulus at very small strains) of cemented sands using the previous empirical formulas is very difficult. The purpose of this experimental investigation is to evaluate the effects of particle size and cement type on the Gmax and unconfined compressive strength (qucs) of cemented sands, with the ultimate goal of estimating Gmax of cemented sands using qucs. Two sands were artificially cemented using Portland cement or gypsum under varying cement contents (2%-9%) and relative densities (30%-80%). Unconfined compression tests and bender element tests were performed, and the results from previous studies of two cemented sands were incorporated in this study. The results of this study demonstrate that the effect of particle size on the qucs and Gmax of four cemented sands is insignificant, and the variation of qucs and Gmax can be captured by the ratio between volume of void and volume of cement. qucs and Gmax of sand cemented with Portland cement are greater than those of sand cemented with gypsum. However, the relationship between qucs and Gmax of the cemented sand is not affected by the void ratio, cement type and cement content, revealing that Gmax of the complex naturally cemented soils with unknown in-situ void ratio, cement type and cement content can be estimated using qucs.
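
    Estimating Gmax from qucs then amounts to a one-variable regression. A minimal sketch (the paired values and the power-law form are placeholders, not the paper's data or fitted coefficients):

        import numpy as np

        qucs = np.array([120.0, 260.0, 480.0, 900.0, 1500.0, 2600.0])  # kPa
        gmax = np.array([95.0, 160.0, 250.0, 380.0, 520.0, 760.0])     # MPa

        # Fit Gmax = A * qucs**b by least squares in log-log space.
        b, log_a = np.polyfit(np.log(qucs), np.log(gmax), 1)
        a = np.exp(log_a)
        print("Gmax ~ %.2f * qucs^%.2f" % (a, b))
        print("Gmax at qucs = 700 kPa: %.0f MPa" % (a * 700.0**b))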

  18. Artificial intelligence methods applied for quantitative analysis of natural radioactive sources

    International Nuclear Information System (INIS)

    Medhat, M.E.

    2012-01-01

    Highlights: ► Basic description of artificial neural networks. ► Natural gamma ray sources and the problem of their detection. ► Application of a neural network for peak detection and activity determination. - Abstract: The artificial neural network (ANN) is one of the artificial intelligence methods used for modeling and uncertainty estimation in different applications. The objective of the proposed work was to apply an ANN to identify isotopes and to predict the uncertainties of their activities for some natural radioactive sources. The method was tested for analyzing gamma-ray spectra emitted from natural radionuclides in soil samples detected by high-resolution gamma-ray spectrometry based on HPGe (high-purity germanium). The principle of the suggested method is described, including the definition of the relevant input parameters, input data scaling and network training. There is satisfactory agreement between the obtained and predicted results using the neural network.
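
    A minimal sketch of this workflow in Python with scikit-learn (the synthetic spectra, network size and max-normalization stand in for the paper's HPGe data, input-parameter definition and scaling):

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        channels = np.arange(256)

        def spectrum(peak):
            # Gaussian photopeak on an exponential background, Poisson noise.
            s = 50 * np.exp(-0.01 * channels)
            s += 40 * np.exp(-0.5 * ((channels - peak) / 3.0) ** 2)
            return rng.poisson(s).astype(float)

        peaks = {0: 60, 1: 120, 2: 200}          # three mock radionuclides
        X = np.array([spectrum(peaks[i % 3]) for i in range(300)])
        y = np.array([i % 3 for i in range(300)])
        X /= X.max(axis=1, keepdims=True)        # input data scaling

        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
        clf.fit(X[:240], y[:240])
        print("held-out accuracy:", clf.score(X[240:], y[240:]))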

  19. Artificial ligamentous joints: Methods, materials and characteristics

    OpenAIRE

    Hockings, Nick; Iravani, Pejman; Bowen, Chris

    2014-01-01

    This paper presents a novel method for making ligamentous articulations for robots. Ligamentous joints are widely found in animals, but they have been of limited application in robotics due to the lack of analogous synthetic materials. The method presented combines 3D printing, tow laying and thermoplastic welding, which enables the manufacture of this type of structure.

  20. Numerical simulation of compressible two-phase flow using a diffuse interface method

    International Nuclear Information System (INIS)

    Ansari, M.R.; Daramizadeh, A.

    2013-01-01

    Highlights: ► Simulations of compressible two-phase gas–gas and gas–liquid flows are conducted. ► Interface conditions contain shock waves and cavitation. ► A high-resolution diffuse interface method is investigated. ► The numerical results exhibit very good agreement with experimental results. -- Abstract: In this article, a high-resolution diffuse interface method is investigated for the simulation of compressible two-phase gas–gas and gas–liquid flows, both in the presence of shock waves and in flows with strong rarefaction waves similar to cavitation. A Godunov method and HLLC Riemann solver are used for discretization of the Kapila five-equation model, and a modified Schmidt equation of state (EOS) is used to simulate the cavitation regions. This method is applied successfully to several one- and two-dimensional compressible two-phase flows with interface conditions that contain shock waves and cavitation. The numerical results obtained in this attempt exhibit very good agreement with experimental results, as well as with previous numerical results presented by other researchers based on other numerical methods. In particular, the algorithm can capture the complex flow features of transient shocks, such as material discontinuities and interfacial instabilities, without any oscillation and additional diffusion. Numerical examples show that the results of the method presented here compare well with other sophisticated modeling methods like adaptive mesh refinement (AMR) and local mesh refinement (LMR) for one- and two-dimensional problems

  1. Applicability of finite element method to collapse analysis of steel connection under compression

    International Nuclear Information System (INIS)

    Zhou, Zhiguang; Nishida, Akemi; Kuwamura, Hitoshi

    2010-01-01

    It is often necessary to study the collapse behavior of steel connections. In this study, the limit load of a steel pyramid-to-tube socket connection subjected to uniform compression was investigated by means of FEM and experiment. The steel connection was modeled using 4-node shell elements. Three kinds of analysis were conducted: linear buckling, nonlinear buckling and modified Riks method analysis. For the linear buckling analysis, a linear eigenvalue analysis was done. For the nonlinear buckling analysis, an eigenvalue analysis was performed for the buckling load in a nonlinear manner based on the incremental stiffness matrices, with nonlinear material properties and large displacements considered. For the modified Riks method analysis, the compressive load was applied using the modified Riks method, again with nonlinear material properties and large displacements considered. The results of the FEM analyses were compared with the experimental results. The comparison shows that the nonlinear buckling and modified Riks method analyses are more accurate than linear buckling analysis because they employ nonlinear, large-deflection analysis to estimate buckling loads. Moreover, the limit loads calculated from the nonlinear buckling and modified Riks method analyses are close. It can be concluded that the modified Riks method analysis is more effective for collapse analysis of steel connections under compression. Finally, the modified Riks method analysis is used to perform parametric studies of the thickness of the pyramid. (author)

  2. An artificial compressibility CBS method for modelling heat transfer and fluid flow in heterogeneous porous materials

    CSIR Research Space (South Africa)

    Malan, AG

    2011-08-01

    This work applies an artificial compressibility CBS method to modelling both forced convection as well as heat transfer and fluid flow through heterogeneous saturated porous materials via an edge-based finite volume discretization scheme. A volume-averaged set of local thermal disequilibrium governing equations...

  3. A discrete fibre dispersion method for excluding fibres under compression in the modelling of fibrous tissues.

    Science.gov (United States)

    Li, Kewei; Ogden, Ray W; Holzapfel, Gerhard A

    2018-01-01

    Recently, micro-sphere-based methods derived from the angular integration approach have been used for excluding fibres under compression in the modelling of soft biological tissues. However, recent studies have revealed that many of the widely used numerical integration schemes over the unit sphere are inaccurate for large deformation problems even without excluding fibres under compression. Thus, in this study, we propose a discrete fibre dispersion model based on a systematic method for discretizing a unit hemisphere into a finite number of elementary areas, such as spherical triangles. Over each elementary area, we define a representative fibre direction and a discrete fibre density. Then, the strain energy of all the fibres distributed over each elementary area is approximated based on the deformation of the representative fibre direction weighted by the corresponding discrete fibre density. A summation of fibre contributions over all elementary areas then yields the resultant fibre strain energy. This treatment allows us to exclude fibres under compression in a discrete manner by evaluating the tension-compression status of the representative fibre directions only. We have implemented this model in a finite-element programme and illustrate it with three representative examples, including simple tension and simple shear of a unit cube, and non-homogeneous uniaxial extension of a rectangular strip. The results of all three examples are consistent and accurate compared with the previously developed continuous fibre dispersion model, and that is achieved with a substantial reduction of computational cost. © 2018 The Author(s).
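
    The core of the scheme (representative directions with area weights over a discretized hemisphere, plus a per-direction tension-compression switch) can be sketched as follows. This fragment uses latitude-longitude patches rather than the paper's spherical triangles, assumes a uniform fibre density, and adopts a Holzapfel-type exponential fibre energy purely for illustration:

        import numpy as np

        def hemisphere_patches(n_theta=8, n_phi=16):
            # Representative directions and exact patch areas for a unit
            # hemisphere split into n_theta x n_phi elementary areas.
            dirs, weights = [], []
            for i in range(n_theta):
                t0 = i * np.pi / 2 / n_theta
                t1 = (i + 1) * np.pi / 2 / n_theta
                tc = 0.5 * (t0 + t1)
                area = (np.cos(t0) - np.cos(t1)) * 2 * np.pi / n_phi
                for j in range(n_phi):
                    pc = (j + 0.5) * 2 * np.pi / n_phi
                    dirs.append([np.sin(tc) * np.cos(pc),
                                 np.sin(tc) * np.sin(pc),
                                 np.cos(tc)])
                    weights.append(area)
            return np.array(dirs), np.array(weights)

        def fibre_energy(F, k1=1.0, k2=1.0):
            # Sum the fibre strain energy over all patches, skipping directions
            # whose squared stretch I4 <= 1 (fibres under compression).
            C = F.T @ F
            dirs, w = hemisphere_patches()
            i4 = np.einsum("pi,ij,pj->p", dirs, C, dirs)
            tension = i4 > 1.0
            e = i4[tension] - 1.0
            psi = k1 / (2 * k2) * (np.exp(k2 * e**2) - 1.0)
            return np.sum(w[tension] / (2 * np.pi) * psi)

        F = np.diag([1.2, 1.2**-0.5, 1.2**-0.5])   # isochoric uniaxial stretch
        print("fibre energy:", fibre_energy(F))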

  4. Convergence of a residual based artificial viscosity finite element method

    KAUST Repository

    Nazarov, Murtazo

    2013-01-01

    We prove convergence of the residual-based artificial viscosity finite element method, applied to a scalar conservation law in two space dimensions, toward a unique entropy solution for implicit time stepping schemes. © 2012 Elsevier B.V. All rights reserved.

  5. Application of artificial intelligence methods for prediction of steel mechanical properties

    Directory of Open Access Journals (Sweden)

    Z. Jančíková

    2008-10-01

    The target of the contribution is to outline possibilities of applying artificial neural networks to the prediction of mechanical steel properties after heat treatment and to judge their prospective use in this field. The achieved models enable the prediction of final mechanical material properties on the basis of the decisive parameters influencing these properties. By applying artificial intelligence methods in combination with mathematical-physical analysis methods, it will be possible to create facilities for designing a system for the continuous rationalization of existing and newly developing industrial technologies.

  6. An Improved Ghost-cell Immersed Boundary Method for Compressible Inviscid Flow Simulations

    KAUST Repository

    Chi, Cheng

    2015-05-01

    This study presents an improved ghost-cell immersed boundary approach to represent a solid body in compressible flow simulations. In contrast to the commonly used approaches, in the present work ghost cells are mirrored through the boundary described using a level-set method to farther image points, incorporating a higher-order extra/interpolation scheme for the ghost cell values. In addition, a shock sensor is introduced to deal with image points near the discontinuities in the flow field. Adaptive mesh refinement (AMR) is used to improve the representation of the geometry efficiently. The improved ghost-cell method is validated against five test cases: (a) double Mach reflections on a ramp, (b) supersonic flows in a wind tunnel with a forward-facing step, (c) supersonic flows over a circular cylinder, (d) smooth Prandtl-Meyer expansion flows, and (e) steady shock-induced combustion over a wedge. It is demonstrated that the improved ghost-cell method can reach second-order accuracy in the L1 norm and higher than first-order accuracy in the L∞ norm. Direct comparisons against the cut-cell method demonstrate that the improved ghost-cell method is almost equally accurate with better efficiency for boundary representation in high-fidelity compressible flow simulations. Implementation of the improved ghost-cell method in reacting Euler flows further validates its general applicability for compressible flow simulations.
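
    A two-dimensional toy version of the ghost-cell mirroring conveys the idea (the circular level set, scalar field, bilinear interpolation and zero-gradient closure are illustrative simplifications of the paper's higher-order, shock-sensing scheme):

        import numpy as np

        n, dx = 64, 1.0 / 64
        xc = (np.arange(n) + 0.5) * dx
        X, Y = np.meshgrid(xc, xc, indexing="ij")
        phi = np.hypot(X - 0.5, Y - 0.5) - 0.2     # level set: phi < 0 in solid
        field = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)  # fluid scalar

        def bilinear(f, x, y):
            # Bilinear interpolation of cell-centered data at point (x, y).
            i = int(np.clip(x / dx - 0.5, 0, n - 2))
            j = int(np.clip(y / dx - 0.5, 0, n - 2))
            fx, fy = x / dx - 0.5 - i, y / dx - 0.5 - j
            return ((1 - fx) * (1 - fy) * f[i, j] + fx * (1 - fy) * f[i + 1, j]
                    + (1 - fx) * fy * f[i, j + 1] + fx * fy * f[i + 1, j + 1])

        gx, gy = np.gradient(phi, dx)
        solid = phi < 0
        ghost = solid & (~np.roll(solid, 1, 0) | ~np.roll(solid, -1, 0)
                         | ~np.roll(solid, 1, 1) | ~np.roll(solid, -1, 1))

        for i, j in zip(*np.where(ghost)):
            nn = np.hypot(gx[i, j], gy[i, j])
            d = 3.0 * abs(phi[i, j])               # image point pushed farther
            xi = X[i, j] + d * gx[i, j] / nn       # out than the standard
            yi = Y[i, j] + d * gy[i, j] / nn       # mirror, echoing the paper
            field[i, j] = bilinear(field, xi, yi)  # zero-gradient closure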

  7. A time-domain method to generate artificial time history from a given reference response spectrum

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Gang Sik [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of); Song, Oh Seop [Dept. of Mechanical Engineering, Chungnam National University, Daejeon (Korea, Republic of)

    2016-06-15

    Seismic qualification by test is widely used as a way to show the integrity and functionality of equipment that is related to the overall safety of nuclear power plants. Another means of seismic qualification is by direct integration analysis. Both approaches require a series of time histories as an input. However, in most cases, the possibility of using real earthquake data is limited. Thus, artificial time histories are widely used instead. In many cases, however, response spectra are given. Thus, most of the artificial time histories are generated from the given response spectra. Obtaining the response spectrum from a given time history is straightforward. However, the procedure for generating artificial time histories from a given response spectrum is difficult and complex to understand. Thus, this paper presents a simple time-domain method for generating a time history from a given response spectrum; the method was shown to satisfy conditions derived from nuclear regulatory guidance.
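
    For orientation, the widely used frequency-domain iteration (scale Fourier amplitudes until the computed response spectrum approaches the target) is sketched below; the paper proposes a time-domain alternative to exactly this kind of procedure, so the sketch illustrates the task rather than the authors' method. The flat target spectrum, intensity envelope and 5% damping are hypothetical:

        import numpy as np

        def pseudo_sa(ag, dt, f, zeta=0.05):
            # Pseudo-spectral acceleration Sa = w^2 * max|u| of a damped SDOF
            # oscillator under ground acceleration ag (Newmark average accel.).
            w = 2.0 * np.pi * f
            m, c, k = 1.0, 2.0 * zeta * w, w * w
            kh = k + 4.0 * m / dt**2 + 2.0 * c / dt
            u = v = 0.0
            a = -ag[0]
            umax = 0.0
            for agn in ag[1:]:
                p = -m * agn
                un = (p + m * (4 * u / dt**2 + 4 * v / dt + a)
                      + c * (2 * u / dt + v)) / kh
                v, a = (2 * (un - u) / dt - v,
                        4 * (un - u) / dt**2 - 4 * v / dt - a)
                u = un
                umax = max(umax, abs(u))
            return w * w * umax

        dt, n = 0.01, 2048
        t = np.arange(n) * dt
        freqs = np.linspace(0.5, 20.0, 25)
        target = np.full(freqs.shape, 3.0)         # target spectrum, m/s^2

        rng = np.random.default_rng(0)
        acc = sum(np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
                  for f in freqs)
        acc *= np.exp(-((t - 10.0) / 6.0) ** 2)    # simple intensity envelope

        for _ in range(5):                         # spectral-matching sweeps
            sa = np.array([pseudo_sa(acc, dt, f) for f in freqs])
            spec = np.fft.rfft(acc)
            fbin = np.fft.rfftfreq(n, dt)
            spec *= np.interp(fbin, freqs, target / sa)
            acc = np.fft.irfft(spec, n)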

  8. A time-domain method to generate artificial time history from a given reference response spectrum

    International Nuclear Information System (INIS)

    Shin, Gang Sik; Song, Oh Seop

    2016-01-01

    Seismic qualification by test is widely used as a way to show the integrity and functionality of equipment that is related to the overall safety of nuclear power plants. Another means of seismic qualification is by direct integration analysis. Both approaches require a series of time histories as an input. However, in most cases, the possibility of using real earthquake data is limited. Thus, artificial time histories are widely used instead. In many cases, however, response spectra are given. Thus, most of the artificial time histories are generated from the given response spectra. Obtaining the response spectrum from a given time history is straightforward. However, the procedure for generating artificial time histories from a given response spectrum is difficult and complex to understand. Thus, this paper presents a simple time-domain method for generating a time history from a given response spectrum; the method was shown to satisfy conditions derived from nuclear regulatory guidance

  9. New Method for Leakage Detection by Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Mohammad Attari

    2018-03-01

    Nowadays water loss has become a global concern while the demand for water is increasing. This problem has made demand management and consumption pattern reform necessary. One of the most important methods for managing water consumption is to reduce water loss. In this study, by using neural networks, a new method is presented to specify the location and quantity of leakages in water distribution networks. In this method, training data are produced and applied to the neural network, so that the network can determine the approximate location and quantity of nodal leakage from the nodal pressures. The training data are produced by applying assumed leakages to specific nodes in the network and calculating the new nodal pressures. The results show that with minimal use of hydraulic pressure data, this method can determine not only the location of nodal leakages but also the amount of leakage at each node with reasonable accuracy.
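
    A minimal sketch of the train-on-simulated-leaks idea (scikit-learn; the linear pressure-sensitivity matrix stands in for a hydraulic network solver, and all dimensions and magnitudes are hypothetical):

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        n_nodes, n_samples = 12, 2000

        # Each leak lowers nodal pressures through a sensitivity matrix S,
        # a stand-in for rerunning the hydraulic model per assumed leakage.
        S = rng.uniform(0.05, 0.4, (n_nodes, n_nodes))
        p0 = rng.uniform(40.0, 60.0, n_nodes)      # no-leak pressures (m head)

        Q = np.zeros((n_samples, n_nodes))
        idx = rng.integers(0, n_nodes, n_samples)
        Q[np.arange(n_samples), idx] = rng.uniform(1.0, 5.0, n_samples)
        P = p0 - Q @ S.T + rng.normal(0.0, 0.05, (n_samples, n_nodes))

        net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000,
                           random_state=0)
        net.fit(P[:1600], Q[:1600])                # pressures -> nodal leakage

        q_hat = net.predict(P[1600:])
        hit = np.mean(q_hat.argmax(axis=1) == Q[1600:].argmax(axis=1))
        print("leak node located correctly: %.1f%%" % (100 * hit))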

  10. Development of a geopolymer solidification method for radioactive wastes by compression molding and heat curing

    International Nuclear Information System (INIS)

    Shimoda, Chiaki; Matsuyama, Kanae; Okabe, Hirofumi; Kaneko, Masaaki; Miyamoto, Shinya

    2017-01-01

    Geopolymer solidification is a good method for managing waste because it is inexpensive compared with vitrification and has a reduced risk of hydrogen generation. In general, when geopolymers are made, water is added to the geopolymer raw materials, and the slurry is then mixed, poured into a mold, and cured. However, it is difficult to control the reaction because, depending on the types of materials, the viscosity can increase immediately after mixing. Slurries of geopolymers easily adhere to the agitating wing of the mixer and easily clog the plumbing during transportation. Moreover, during long-term storage of solidified wastes containing concentrated radionuclides in a sealed container without vents, the hydrogen concentration in the container increases over time. Therefore, a simple method using as little water as possible is needed. In this work, geopolymer solidification by compression molding was studied. Compared with the usual methods, it provides a simple and stable way of preparing waste for long-term storage. Investigations performed before and after solidification by compression molding showed that the crystal structure changed. From this result, it was concluded that the geopolymer reaction proceeded during compression molding. This method (1) reduces the energy needed for drying, (2) has good workability, (3) reduces the overall volume, and (4) reduces hydrogen generation. (author)

  11. Methods for determining the carrying capacity of eccentrically compressed concrete elements

    Directory of Open Access Journals (Sweden)

    Starishko Ivan Nikolaevich

    2014-04-01

    The author presents the results of calculations of eccentrically compressed elements in the ultimate limit state of bearing capacity, taking into account all possible stresses in the longitudinal reinforcement caused by different values of the eccentricity of the longitudinal force. The method of calculation is based on the simultaneous solution of the equilibrium equations of the longitudinal and internal forces with the equilibrium equations of bending moments in the ultimate limit state of the normal sections. Simultaneous solution of these equations, together with additional equations reflecting the stress-strain limit state of the elements, leads to a cubic equation with respect to the height of the uncracked concrete, or with respect to the carrying capacity. According to the author, this is a significant advantage over the existing methods, in which the equilibrium equations of longitudinal forces yield one value of the height and the equilibrium equations of bending moments yield another. The author's theoretical studies and calculated examples show that as the eccentricity of the longitudinal force decreases in the limiting state of eccentrically compressed concrete elements, the height of the uncracked concrete increases, the stress in the longitudinal reinforcement of the tension zone gradually (not abruptly) passes from tension to compression, and the load-bearing capacity of the elements increases, which is also confirmed by the experimental results. The developed calculations cover 4 cases of eccentric compression, instead of the 2 set out in the regulations, and thus span the entire spectrum of possible stress-strain limit states of elements, complying with the European standards for reinforced concrete, in particular Eurocode 2 (2003).
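
    The derivation ends in a cubic equation for the height of the uncracked concrete zone, from which the admissible root must be selected. A sketch of that final step (the coefficients are placeholders, not the paper's equilibrium-derived expressions):

        import numpy as np

        # Hypothetical cubic a3*x^3 + a2*x^2 + a1*x + a0 = 0 in the height x.
        a3, a2, a1, a0 = 1.0, -45.0, 554.0, -1560.0
        h = 40.0                                   # section height, cm

        roots = np.roots([a3, a2, a1, a0])
        real = roots[np.abs(roots.imag) < 1e-9].real
        admissible = np.sort(real[(real > 0) & (real < h)])
        # The physically valid root is then picked from the stress-strain
        # conditions of the governing case of eccentric compression.
        print("candidate heights of the uncracked zone:", admissible)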

  12. Developing energy forecasting model using hybrid artificial intelligence method

    Institute of Scientific and Technical Information of China (English)

    Shahram Mollaiy-Berneti

    2015-01-01

    An important problem in demand planning for energy consumption is developing an accurate energy forecasting model. In fact, it is not possible to allocate the energy resources in an optimal manner without having accurate demand value. A new energy forecasting model was proposed based on the back-propagation (BP) type neural network and imperialist competitive algorithm. The proposed method offers the advantage of local search ability of BP technique and global search ability of imperialist competitive algorithm. Two types of empirical data regarding the energy demand (gross domestic product (GDP), population, import, export and energy demand) in Turkey from 1979 to 2005 and electricity demand (population, GDP, total revenue from exporting industrial products and electricity consumption) in Thailand from 1986 to 2010 were investigated to demonstrate the applicability and merits of the present method. The performance of the proposed model is found to be better than that of conventional back-propagation neural network with low mean absolute error.

  13. Data Collection Method for Mobile Control Sink Node in Wireless Sensor Network Based on Compressive Sensing

    Directory of Open Access Journals (Sweden)

    Ling Yongfa

    2016-01-01

    The paper proposes a data collection method for a mobile control sink node in a wireless sensor network based on compressive sensing. This method, following a regular track, selects the optimal data collection points in the monitoring area via the disc method, calculates the shortest path by using a quantum genetic algorithm, and hence determines the data collection route. Simulation results show that this method achieves higher network throughput and better energy efficiency, and is capable of collecting a huge amount of data with balanced energy consumption in the network.

  14. Artificial Intelligence Methods Applied to Parameter Detection of Atrial Fibrillation

    Science.gov (United States)

    Arotaritei, D.; Rotariu, C.

    2015-09-01

    In this paper we present a novel method for detecting atrial fibrillation (AF) based on statistical descriptors and a hybrid neuro-fuzzy and crisp system. The inference of the system produces rules of type if-then-else, which are extracted to construct a binary decision system: normal or atrial fibrillation. We use TPR (Turning Point Ratio), SE (Shannon Entropy) and RMSSD (Root Mean Square of Successive Differences), along with a new descriptor, the Teager-Kaiser energy, in order to improve the accuracy of detection. The descriptors are calculated over a sliding window, which produces a very large number of vectors (a massive dataset) used by the classifier. The length of the window is a crisp descriptor, while the rest of the descriptors are interval-valued. The parameters of the hybrid system are adapted using a Genetic Algorithm (GA) with a single-objective fitness target: the highest values of sensitivity and specificity. The rules are extracted and form part of the decision system. The proposed method was tested using the PhysioNet MIT-BIH Atrial Fibrillation Database, and the experimental results revealed a good accuracy of AF detection in terms of sensitivity and specificity (above 90%).
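
    Three of the named descriptors are straightforward to compute over a sliding window of RR intervals. A minimal sketch (the RR sequences, window length and bin count are illustrative; the hybrid neuro-fuzzy classifier itself is not reproduced):

        import numpy as np

        def rmssd(rr):
            # Root mean square of successive RR-interval differences (s).
            return np.sqrt(np.mean(np.diff(rr) ** 2))

        def turning_point_ratio(rr):
            # Fraction of interior samples that are local extrema; for a
            # random (AF-like) sequence it approaches 2/3.
            x = rr[1:-1]
            tp = ((x > rr[:-2]) & (x > rr[2:])) | ((x < rr[:-2]) & (x < rr[2:]))
            return tp.mean()

        def shannon_entropy(rr, bins=16):
            p, _ = np.histogram(rr, bins=bins)
            p = p[p > 0] / p.sum()
            return -np.sum(p * np.log2(p))

        rng = np.random.default_rng(0)
        rr_nsr = 0.8 + 0.02 * np.sin(np.linspace(0, 8 * np.pi, 128))  # regular
        rr_af = rng.uniform(0.4, 1.2, 128)                            # erratic

        window = 64                        # sliding window, as in the abstract
        for name, rr in [("NSR", rr_nsr), ("AF", rr_af)]:
            w = rr[:window]
            print(name, "TPR=%.2f SE=%.2f RMSSD=%.3f"
                  % (turning_point_ratio(w), shannon_entropy(w), rmssd(w)))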

  15. Development of Compressive Failure Strength for Composite Laminate Using Regression Analysis Method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Myoung Keon [Agency for Defense Development, Daejeon (Korea, Republic of); Lee, Jeong Won; Yoon, Dong Hyun; Kim, Jae Hoon [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)

    2016-10-15

    This paper provides compressive failure strength values for composite laminates developed by using the regression analysis method. The composite material in this document is a Carbon/Epoxy unidirectional (UD) tape prepreg (Cycom G40-800/5276-1) cured at 350°F (177°C). The operating temperature range is −60°F to +200°F (−55°C to +95°C). A total of 56 compression tests were conducted on specimens from eight (8) distinct laminates that were laid up with standard angle layers (0°, +45°, −45° and 90°). The ASTM D6484 standard was used as the test method. The regression analysis was performed with the response variable being the laminate ultimate fracture strength and the regressor variables being the two ply orientations (0° and ±45°)
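
    The regression setup described (response: laminate ultimate strength; regressors: the two ply orientations) reduces to an ordinary least-squares fit. A sketch with hypothetical numbers, not the paper's measurements:

        import numpy as np

        # Fraction of 0-degree plies, fraction of +/-45-degree plies, and
        # measured compressive failure strength (MPa) per laminate.
        p0 = np.array([0.50, 0.40, 0.30, 0.25, 0.20, 0.10, 0.60, 0.35])
        p45 = np.array([0.25, 0.40, 0.50, 0.25, 0.60, 0.40, 0.20, 0.45])
        s = np.array([610.0, 560.0, 515.0, 480.0, 470.0, 380.0, 650.0, 530.0])

        # Least-squares fit: s ~ b0 + b1 * p0 + b2 * p45.
        X = np.column_stack([np.ones_like(p0), p0, p45])
        beta, *_ = np.linalg.lstsq(X, s, rcond=None)
        print("strength ~ %.0f + %.0f*p0 + %.0f*p45 (MPa)" % tuple(beta))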

  16. Development of Compressive Failure Strength for Composite Laminate Using Regression Analysis Method

    International Nuclear Information System (INIS)

    Lee, Myoung Keon; Lee, Jeong Won; Yoon, Dong Hyun; Kim, Jae Hoon

    2016-01-01

    This paper provides compressive failure strength values for composite laminates developed by using the regression analysis method. The composite material in this document is a Carbon/Epoxy unidirectional (UD) tape prepreg (Cycom G40-800/5276-1) cured at 350°F (177°C). The operating temperature range is −60°F to +200°F (−55°C to +95°C). A total of 56 compression tests were conducted on specimens from eight (8) distinct laminates that were laid up with standard angle layers (0°, +45°, −45° and 90°). The ASTM D6484 standard was used as the test method. The regression analysis was performed with the response variable being the laminate ultimate fracture strength and the regressor variables being the two ply orientations (0° and ±45°)

  17. Key technical issues associated with a method of pulse compression. Final technical report

    International Nuclear Information System (INIS)

    Hunter, R.O. Jr.

    1980-06-01

    Key technical issues for angular multiplexing as a method of pulse compression in a 100 kJ KrF laser have been studied. Environmental issues studied include seismic vibrations, man-made vibrations, air propagation, turbulence, and thermal gradient-induced density fluctuations. These studies have been incorporated in the design of mirror mounts and an alignment system, both of which are reported. A design study and performance analysis of the final amplifier have been undertaken. The pulse compression optical train has been designed and its performance assessed. Individual components are described, and analytical relationships between the optical component size, surface quality, damage threshold and final focus properties are derived. The optical train primary aberrations are obtained and a method for aberration minimization is presented. Cost algorithms for the mirrors, mounts, and electrical hardware are integrated into a cost model to determine system costs as a function of pulse length, aperture size, and spot size.

  18. Three dimensional simulation of compressible and incompressible flows through the finite element method

    International Nuclear Information System (INIS)

    Costa, Gustavo Koury

    2004-11-01

    Although incompressible fluid flows can be regarded as a particular case of a general problem, the numerical methods and mathematical formulations aimed at solving compressible and incompressible flows have their own peculiarities, in such a way that it is generally not possible to handle both regimes with a single approach. In this work, we start from a typically compressible formulation, slightly modified to make use of pressure variables, and, by augmenting the stabilising parameters, we end up with a simplified model which is able to deal with a wide range of flow regimes, from supersonic to low-speed gas flows. The resulting methodology is flexible enough to allow for the simulation of liquid flows as well. Examples using conservative and pressure variables are shown, and the results are compared to those published in the literature in order to validate the method. (author)

  19. Survey of artificial intelligence methods for detection and identification of component faults in nuclear power plants

    International Nuclear Information System (INIS)

    Reifman, J.

    1997-01-01

    A comprehensive survey of computer-based systems that apply artificial intelligence methods to detect and identify component faults in nuclear power plants is presented. Classification criteria are established that categorize artificial intelligence diagnostic systems according to the types of computing approaches used (e.g., computing tools, computer languages, and shell and simulation programs), the types of methodologies employed (e.g., types of knowledge, reasoning and inference mechanisms, and diagnostic approach), and the scope of the system. The major issues of process diagnostics and computer-based diagnostic systems are identified and cross-correlated with the various categories used for classification. Ninety-five publications are reviewed

  20. Expansion and compression shock wave calculation in pipes with the C.V.M. numerical method

    International Nuclear Information System (INIS)

    Raymond, P.; Caumette, P.; Le Coq, G.; Libmann, M.

    1983-03-01

    The Control Variables Method (C.V.M.) for fluid transient computations has been used to compute expansion and compression shock wave propagation. In this paper, analytical solutions for shock wave and rarefaction wave propagation are first detailed. Then, after a brief description of the C.V.M. technique and its stability and monotonicity properties, we present some results for the standard shock tube problem and the reflection of a shock wave; finally, a comparison between experimental results obtained on the ELF facility and calculations is given.

  1. Novel approach to the fabrication of an artificial small bone using a combination of sponge replica and electrospinning methods

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yang-Hee; Lee, Byong-Taek, E-mail: lbt@sch.ac.kr [Department of Biomedical Engineering and Materials, School of Medicine, Soonchunhyang University 366-1, Ssangyong-dong, Cheonan, Chungnam 330-090 (Korea, Republic of)

    2011-06-15

    In this study, a novel artificial small bone consisting of ZrO{sub 2}-biphasic calcium phosphate/polymethylmethacrylate-polycaprolactone-hydroxyapatite (ZrO{sub 2}-BCP/PMMA-PCL-HAp) was fabricated using a combination of sponge replica and electrospinning methods. To mimic the cancellous bone, the ZrO{sub 2}/BCP scaffold was composed of three layers, ZrO{sub 2}, ZrO{sub 2}/BCP and BCP, fabricated by the sponge replica method. The PMMA-PCL fibers loaded with HAp powder were wrapped around the ZrO{sub 2}/BCP scaffold using the electrospinning process. To imitate the Haversian canal region of the bone, HAp-loaded PMMA-PCL fibers were wrapped around a steel wire of 0.3 mm diameter. As a result, the bundles of fiber wrapped around the wires imitated the osteon structure of the cortical bone. Finally, the ZrO{sub 2}/BCP scaffold was surrounded by HAp-loaded PMMA-PCL composite bundles. After removal of the steel wires, the ZrO{sub 2}/BCP scaffold and bundles of HAp-loaded PMMA-PCL formed an interconnected structure resembling the human bone. Its diameter, compressive strength and porosity were approximately 12 mm, 5 MPa and 70%, respectively, and the viability of MG-63 osteoblast-like cells was determined to be over 90% by the MTT (3-(4, 5-dimethylthiazol-2-yl)-2, 5-diphenyltetrazolium bromide) assay. This artificial bone shows excellent cytocompatibility and is a promising bone regeneration material.

  2. Depicting mass flow rate of R134a /LPG refrigerant through straight and helical coiled adiabatic capillary tubes of vapor compression refrigeration system using artificial neural network approach

    Science.gov (United States)

    Gill, Jatinder; Singh, Jagdev

    2018-07-01

    In this work, an experimental investigation is carried out with an R134a and LPG refrigerant mixture for depicting the mass flow rate through straight and helical coil adiabatic capillary tubes in a vapor compression refrigeration system. Various experiments were conducted under steady-state conditions by changing the capillary tube length, inner diameter, coil diameter and degree of subcooling. The results showed that the mass flow rate through the helical coil capillary tube was about 5-16% lower than through the straight capillary tube. Dimensionless correlation and Artificial Neural Network (ANN) models were developed to predict the mass flow rate. It was found that the dimensionless correlation and ANN model predictions agreed well with the experimental results, yielding absolute fractions of variance of 0.961 and 0.988, root mean square errors of 0.489 and 0.275, and mean absolute percentage errors of 4.75% and 2.31%, respectively. The results suggest that the ANN model gives better statistical predictions than the dimensionless correlation model.

  3. [Evaluation of artificial digestion method on inspection of meat for Trichinella spiralis contamination and influence of the method on muscle larvae recovery].

    Science.gov (United States)

    Wang, Guo-Ying; Du, Jing-Fang; Dun, Guo-Qing; Sun, Wei-Li; Wang, Jin-Xi

    2011-04-01

    To evaluate the effect of the artificial digestion method on the inspection of meat for Trichinella spiralis contamination and its influence on the activity and infectivity of muscle larvae, mice were inoculated orally with 100 muscle larvae of T. spiralis and sacrificed on the 30th day following infection. The muscle larvae of T. spiralis were recovered by three test protocols employing variations of the artificial digestion method: digestion for 2 hours (the magnetic stirrer method), digestion for 12 hours, and digestion for 20 hours. Each test group included ten samples, each of which contained 300 encapsulated larvae. The activity of the recovered muscle larvae was also assessed. Forty mice were randomly divided into a control group and three digestion groups (10 mice per group). In the control group, each mouse was orally inoculated with 100 encapsulated larvae of T. spiralis; in the digestion groups, each mouse was orally inoculated with 100 muscle larvae recovered by the corresponding digestion protocol. All infected mice were sacrificed on the 30th day following infection, and the muscle larvae of T. spiralis were examined by the diaphragm compression method and the magnetic stirrer method. The muscle larvae detection rates were 78.47%, 76.73%, and 68.63%, the death rates were 0.59%, 4.60%, and 7.43%, and the reduction rates were 60.56%, 61.94%, and 73.07% in Test Group One (2-hour digestion), Test Group Two (12-hour digestion), and Test Group Three (20-hour digestion), respectively. The magnetic stirrer method (2-hour digestion) is superior to both the 12-hour and 20-hour digestion methods as assessed by the detection rate, activity, and infectivity of muscle larvae.

  4. Parallel spectral methods and applications to simulations of compressible mixing layers

    OpenAIRE

    Male , Jean-Michel; Fezoui , Loula ,

    1993-01-01

    Solving the Navier-Stokes equations for compressible flows with spectral methods can be quite demanding in computation time. We therefore study here the parallelization of such an algorithm and its implementation on a massively parallel machine, the Connection Machine CM-2. The spectral method adapts well to the requirements of massive parallelism, but one of the basic tools of this method, the fast Fourier transform (when it must be applied along the two dim...

  5. An evaluation of an organically bound tritium measurement method in artificial and natural urine

    International Nuclear Information System (INIS)

    Trivedi, A.; Duong, T.

    1993-03-01

    The accurate measurement of tritium in urine in the form of tritiated water (HTO) as well as in organic forms (organically bound tritium (OBT)) is an essential step in assessing tritium exposures correctly. Exchange between HTO and OBT, arising intrinsically in the separation of HTO from urine samples, is a source of error in determining the concentration of OBT using the low-temperature distillation (LTD) bioassay method. The accuracy and precision of OBT measurements using the LTD method was investigated using spiked natural and artificial urine samples. The relative bias for most of the measurements was less than 25%. The choice of testing matrix, artificial urine versus human urine, made little difference: the precisions for each urine type were similar. The appropriateness of the use of artificial urine for testing purposes was judged using a ratio of performance indices. Based on this evaluation, the artificial urine is a suitable test matrix for intercomparisons of OBT in urine measurements. It is further concluded that the LTD method is reliable for measuring OBT in urine samples. (author). 7 refs., 6 tabs

  6. An evaluation of an organically bound tritium measurement method in artificial and natural urine

    Energy Technology Data Exchange (ETDEWEB)

    Trivedi, A; Duong, T

    1993-03-01

    The accurate measurement of tritium in urine in the form of tritiated water (HTO) as well as in organic forms (organically bound tritium (OBT)) is an essential step in assessing tritium exposures correctly. Exchange between HTO and OBT, arising intrinsically in the separation of HTO from urine samples, is a source of error in determining the concentration of OBT using the low-temperature distillation (LTD) bioassay method. The accuracy and precision of OBT measurements using the LTD method was investigated using spiked natural and artificial urine samples. The relative bias for most of the measurements was less than 25%. The choice of testing matrix, artificial urine versus human urine, made little difference: the precisions for each urine type were similar. The appropriateness of the use of artificial urine for testing purposes was judged using a ratio of performance indices. Based on this evaluation, the artificial urine is a suitable test matrix for intercomparisons of OBT in urine measurements. It is further concluded that the LTD method is reliable for measuring OBT in urine samples. (author). 7 refs., 6 tabs.

  7. Compressed sensing of ECG signal for wireless system with new fast iterative method.

    Science.gov (United States)

    Tawfic, Israa; Kayhan, Sema

    2015-12-01

    Recent experiments in wireless body area networks (WBAN) show that compressive sensing (CS) is a promising tool to compress the electrocardiogram (ECG) signal. The performance of CS depends on the algorithms used to reconstruct the original signal exactly or approximately. In this paper, we present two methods that work in the absence and presence of noise: Least Support Orthogonal Matching Pursuit (LS-OMP) and Least Support Denoising-Orthogonal Matching Pursuit (LSD-OMP). The algorithms achieve correct support recovery without requiring sparsity knowledge. We derive improved restricted isometry property (RIP) based conditions over the best known results. The basic procedures are carried out by observation and analysis of different ECG signals downloaded from PhysioBank ATM. Experimental results show that significant performance in terms of reconstruction quality and compression rate can be obtained by these two new proposed algorithms, helping the specialist gather the necessary information from the patient in less time when using a Magnetic Resonance Imaging (MRI) application, or reconstruct the patient data after sending it through the network. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
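
    Both proposed algorithms are variants of Orthogonal Matching Pursuit. A plain OMP baseline is sketched below for reference (this is the standard greedy recovery, not the paper's LS-OMP/LSD-OMP variants):

        import numpy as np

        def omp(A, y, k):
            # Greedily recover a k-sparse x from y = A @ x: pick the atom most
            # correlated with the residual, then refit on the whole support.
            r, support = y.copy(), []
            for _ in range(k):
                j = int(np.argmax(np.abs(A.T @ r)))
                if j not in support:
                    support.append(j)
                xs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                r = y - A[:, support] @ xs
            x = np.zeros(A.shape[1])
            x[support] = xs
            return x

        rng = np.random.default_rng(0)
        m, n, k = 64, 256, 5
        A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
        x0 = np.zeros(n)
        x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        x_hat = omp(A, A @ x0, k)
        print("support recovered:",
              set(np.flatnonzero(x_hat)) == set(np.flatnonzero(x0)))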

  8. A compressed sensing based method with support refinement for impulse noise cancelation in DSL

    KAUST Repository

    Quadeer, Ahmed Abdul

    2013-06-01

    This paper presents a compressed sensing based method to suppress impulse noise in digital subscriber line (DSL) systems. The proposed algorithm exploits the sparse nature of the impulse noise and utilizes the carriers, already available in all practical DSL systems, for its estimation and cancelation. Specifically, compressed sensing is used for a coarse estimate of the impulse position, an a priori information based maximum a posteriori probability (MAP) metric for its refinement, followed by least squares (LS) or minimum mean square error (MMSE) estimation of the impulse amplitudes. Simulation results show that the proposed scheme achieves a higher rate compared to other known sparse estimation algorithms in the literature. The paper also demonstrates the superior performance of the proposed scheme compared to the ITU-T G992.3 standard, which utilizes RS-coding for impulse noise refinement in DSL signals. © 2013 IEEE.

  9. Uniaxial Compressive Strength and Fracture Mode of Lake Ice at Moderate Strain Rates Based on a Digital Speckle Correlation Method for Deformation Measurement

    Directory of Open Access Journals (Sweden)

    Jijian Lian

    2017-05-01

    Better understanding of the complex mechanical properties of ice is the foundation for predicting the ice failure process and avoiding potential ice threats. In the present study, the uniaxial compressive strength and fracture mode of natural lake ice are investigated over the moderate strain-rate range of 0.4–10 s−1 at −5 °C and −10 °C. The digital speckle correlation method (DSCM) is used for deformation measurement, with an artificial speckle pattern constructed on the ice sample surface in advance, and two dynamic load cells are employed to measure the dynamic load and monitor the equilibrium of the forces at the two ends under high-speed loading. The relationships between uniaxial compressive strength and strain rate, temperature, loading direction, and air porosity are investigated, and the fracture mode of ice at moderate rates is also discussed. The experimental results show that there is a significant difference between the true strain rate and the nominal strain rate derived from the actuator displacement under dynamic loading conditions. Over the employed strain-rate range, the dynamic uniaxial compressive strength of lake ice shows positive strain-rate sensitivity and decreases with increasing temperature. Ice exhibits greater strength when it has lower air porosity and is loaded vertically. The fracture mode of ice appears to be a combination of splitting failure and crushing failure.

  10. SOLVING TRANSPORT LOGISTICS PROBLEMS IN A VIRTUAL ENTERPRISE THROUGH ARTIFICIAL INTELLIGENCE METHODS

    OpenAIRE

    PAVLENKO, Vitaliy; PAVLENKO, Tetiana; MOROZOVA, Olga; KUZNETSOVA, Anna; VOROPAI, Olena

    2017-01-01

    The paper offers a solution to the problem of material flow allocation within a virtual enterprise by using artificial intelligence methods. The research is based on the use of fuzzy relations when planning optimal transportation modes to deliver components for manufactured products. The Fuzzy Logic Toolbox is used to determine the optimal route for the transportation of components for manufactured products. The methods offered are exemplified in the present research.

  11. A REVIEW OF VIBRATION MACHINE DIAGNOSTICS BY USING ARTIFICIAL INTELLIGENCE METHODS

    Directory of Open Access Journals (Sweden)

    Grover Zurita

    2016-09-01

    In industry, gear and rolling bearing failures are among the foremost causes of breakdown in rotating machines, reducing the availability of production and resulting in costly system downtime. There are therefore growing demands for vibration-based condition monitoring of gears and bearings, and any method that can improve the effectiveness, reliability, and accuracy of bearing fault diagnosis ought to be evaluated. In order to perform machine diagnosis efficiently, researchers have extensively investigated advanced digital signal processing techniques and artificial intelligence methods to accurately extract fault characteristics from vibration signals. The main goal of this article is to present the state-of-the-art development in vibration analysis for machine diagnosis based on artificial intelligence methods.

  12. A theoretical global optimization method for vapor-compression refrigeration systems based on entransy theory

    International Nuclear Information System (INIS)

    Xu, Yun-Chao; Chen, Qun

    2013-01-01

    Vapor-compression refrigeration systems are among the essential energy conversion systems for humankind and nowadays consume huge amounts of energy. Many effective methods exist for optimizing the energy efficiency of these systems, but they rely mainly on engineering experience and computer simulations rather than theoretical analysis, owing to the complex and vague physical essence of the processes. We propose a theoretical global optimization method based on in-depth physical analysis of the processes involved: heat transfer analysis for the condenser and evaporator, introduced through the entransy theory, and thermodynamic analysis for the compressor and expansion valve. The integration of the heat transfer and thermodynamic analyses forms the overall physical optimization model for the systems, describing the relation between all the unknown parameters and the known conditions, which makes theoretical global optimization possible. With the aid of mathematical conditional-extremum solutions, an optimization equation group and the optimal configuration of all the unknown parameters are analytically obtained. Eventually, via the optimization of a typical vapor-compression refrigeration system under various working conditions to minimize the total heat transfer area of the heat exchangers, the validity and superiority of the newly proposed optimization method are proved. - Highlights: • A global optimization method for vapor-compression systems is proposed. • Integrating heat transfer and thermodynamic analyses forms the optimization model. • A mathematical relation between design parameters and requirements is derived. • Entransy dissipation is introduced into heat transfer analysis. • The validity of the method is proved via optimization of practical cases

  13. AN ENCODING METHOD FOR COMPRESSING GEOGRAPHICAL COORDINATES IN 3D SPACE

    Directory of Open Access Journals (Sweden)

    C. Qian

    2017-09-01

    This paper proposes an encoding method for compressing geographical coordinates in 3D space. By reducing the length of geographical coordinates, it lessens the storage size of geometry information. In addition, the encoding algorithm subdivides the whole space according to octree rules, which enables progressive transmission and loading. Three main steps are included in this method: (1) subdividing the whole 3D geographic space based on an octree structure, (2) resampling all the vertices in the 3D models, and (3) encoding the coordinates of the vertices with a combination of Cube Index Code (CIC) and Geometry Code. A series of geographical 3D models were applied to evaluate the encoding method. The results showed that this method reduced the storage size of most test data by 90% or more at acceptable encoding and decoding speeds. In conclusion, this method achieves a remarkable compression rate in vertex bit size with a controllable precision loss. It should be of positive significance for web 3D map storage and transmission.
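
    The Cube Index Code idea (one octant digit per octree level, so deeper codes address finer cells) can be sketched as follows (the bounding box, depth and test point are arbitrary, and the companion Geometry Code is omitted):

        import numpy as np

        def cube_index_code(pt, lo, hi, depth):
            # Descend the octree: one digit in 0..7 per level, each digit
            # packing the x/y/z half-space choices as bits.
            lo = np.asarray(lo, dtype=float).copy()
            hi = np.asarray(hi, dtype=float).copy()
            digits = []
            for _ in range(depth):
                mid = 0.5 * (lo + hi)
                octant = 0
                for axis in range(3):
                    if pt[axis] >= mid[axis]:
                        octant |= 1 << axis
                        lo[axis] = mid[axis]
                    else:
                        hi[axis] = mid[axis]
                digits.append(octant)
            return digits, lo, hi       # code plus the cell bounding the point

        # Longitude, latitude, height inside an assumed global bounding box.
        code, lo, hi = cube_index_code([116.38, 39.90, 43.5],
                                       [-180.0, -90.0, -500.0],
                                       [180.0, 90.0, 9000.0], depth=10)
        print("CIC digits:", code)
        print("cell extent:", hi - lo)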

  14. An improved ghost-cell immersed boundary method for compressible flow simulations

    KAUST Repository

    Chi, Cheng

    2016-05-20

    This study presents an improved ghost-cell immersed boundary approach to represent a solid body in compressible flow simulations. In contrast to the commonly used approaches, in the present work ghost cells are mirrored through the boundary described using a level-set method to farther image points, incorporating a higher-order extra/interpolation scheme for the ghost cell values. A sensor is introduced to deal with image points near the discontinuities in the flow field. Adaptive mesh refinement (AMR) is used to improve the representation of the geometry efficiently in the Cartesian grid system. The improved ghost-cell method is validated against four test cases: (a) double Mach reflections on a ramp, (b) smooth Prandtl-Meyer expansion flows, (c) supersonic flows in a wind tunnel with a forward-facing step, and (d) supersonic flows over a circular cylinder. It is demonstrated that the improved ghost-cell method can reach the accuracy of second order in L1 norm and higher than first order in L∞ norm. Direct comparisons against the cut-cell method demonstrate that the improved ghost-cell method is almost equally accurate with better efficiency for boundary representation in high-fidelity compressible flow simulations. Copyright © 2016 John Wiley & Sons, Ltd.

  15. Compressible flow modelling in unstructured mesh topologies using numerical methods developed for incompressible flows

    International Nuclear Information System (INIS)

    Caruso, A.; Mechitoua, N.; Duplex, J.

    1995-01-01

    The R and D thermal hydraulic codes, notably the finite difference codes Melodie (2D) and ESTET (3D) and the 2D and 3D versions of the finite element code N3S, were initially developed for incompressible, possibly dilatable, turbulent flows, i.e. those where density is not pressure-dependent. Subsequent minor modifications to these finite difference code algorithms enabled extension of their scope to subsonic compressible flows. The first applications in both single-phase and two-phase flow contexts have now been completed. This paper presents the techniques used to adapt these algorithms for the processing of compressible flows in an N3S-type finite element code, whereby complex geometries normally difficult to model in finite difference meshes could be successfully dealt with. The development of version 3.0 of the N3S code led to dilatable flow calculations at lower cost. On this basis, a 2D prototype version of N3S was programmed, tested and validated, drawing maximum benefit from Cray vectorization possibilities and from physical, numerical and data processing experience with other fluid dynamics codes, such as Melodie, ESTET or TELEMAC. The algorithms are the same as those used in finite difference codes, but their formulation is variational. The first part of the paper deals with the fundamental equations involved, expressed in basic form, together with the associated numerical method. The modifications to the k-epsilon turbulence model extended to compressible flows are also described. The second part presents the algorithm used, indicating the additional terms required by the extension. The third part presents the equations in integral form and the associated matrix systems. The solutions adopted for calculation of the compressibility-related terms are indicated. Finally, a few representative applications and test cases are discussed, including subsonic as well as transonic and supersonic cases, showing the shock response of the numerical method. The application of...

  16. The direct Discontinuous Galerkin method for the compressible Navier-Stokes equations on arbitrary grids

    Science.gov (United States)

    Yang, Xiaoquan; Cheng, Jian; Liu, Tiegang; Luo, Hong

    2015-11-01

    The direct discontinuous Galerkin (DDG) method based on a traditional discontinuous Galerkin (DG) formulation is extended and implemented for solving the compressible Navier-Stokes equations on arbitrary grids. Compared to the widely used second Bassi-Rebay (BR2) scheme for the discretization of diffusive fluxes, the DDG method has two attractive features: first, it is simple to implement, as it is directly based on the weak form, and therefore there is no need for any local or global lifting operator; second, it can deliver results comparable to, if not better than, the BR2 scheme in a more efficient way with much less CPU time. Two approaches to computing the DDG flux for the Navier-Stokes equations are presented in this work: one based on conservative variables, the other on primitive variables. In the implementation of the DDG method for arbitrary grids, the definition of the mesh size plays a critical role, as the formulation of the viscous flux explicitly depends on the geometry. A variety of test cases are presented to demonstrate the accuracy and efficiency of the DDG method for discretizing the viscous fluxes in the compressible Navier-Stokes equations on arbitrary grids.

  17. Compressed sensing method for human activity recognition using tri-axis accelerometer on mobile phone

    Institute of Scientific and Technical Information of China (English)

    Song Hui; Wang Zhongmin

    2017-01-01

    The diversity of phone placements in different mobile users' daily lives increases the difficulty of recognizing human activities from mobile phone accelerometer data. To solve this problem, a compressed sensing method for human activity recognition is proposed that is based on compressive sensing theory and utilizes both raw mobile phone accelerometer data and phone placement information. First, an over-complete dictionary matrix is constructed using sufficient raw tri-axis acceleration data labeled with phone placement information. Then, the sparse coefficients are evaluated for the samples to be tested by solving an L1 minimization. Finally, residual values are calculated and the minimum value is selected as the indicator to obtain the recognition result. Experimental results show that this method achieves a recognition accuracy of 89.86%, which is higher than that of a recognition method that does not use phone placement information in the recognition process.
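
    The residual-based decision rule is compact. The fragment below substitutes a per-class least-squares fit for the L1 minimization and uses synthetic class prototypes instead of real accelerometer features, so it illustrates the classification principle only:

        import numpy as np

        def classify_by_residual(D, labels, sample):
            # Pick the class whose sub-dictionary reconstructs the sample
            # with the smallest residual.
            best, best_r = None, np.inf
            for c in np.unique(labels):
                Dc = D[:, labels == c]
                coef, *_ = np.linalg.lstsq(Dc, sample, rcond=None)
                r = np.linalg.norm(sample - Dc @ coef)
                if r < best_r:
                    best, best_r = c, r
            return best

        rng = np.random.default_rng(0)
        dim, per_class = 30, 10
        protos = rng.standard_normal((3, dim))     # one pattern per activity
        D = np.hstack([protos[c][:, None]
                       + 0.3 * rng.standard_normal((dim, per_class))
                       for c in range(3)])         # labeled training dictionary
        labels = np.repeat([0, 1, 2], per_class)

        test = protos[1] + 0.3 * rng.standard_normal(dim)
        print("predicted activity:", classify_by_residual(D, labels, test))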

  18. Nuclear power plant monitoring and fault diagnosis methods based on the artificial intelligence technique

    International Nuclear Information System (INIS)

    Yoshikawa, S.; Saiki, A.; Ugolini, D.; Ozawa, K.

    1996-01-01

    The main objective of this paper is to develop an advanced diagnosis system based on artificial intelligence techniques to monitor the operation and improve the operational safety of nuclear power plants. Three different methods have been elaborated in this study: an artificial neural network local diagnosis (NNds) scheme that, acting at the component level, discriminates between normal and abnormal transients; a model-based diagnostic reasoning mechanism built on a physical causal network; and a knowledge compiler (KC) that generates applicable diagnostic rules from widely accepted physical knowledge. Although the three methods have been developed and verified independently, they are highly correlated and, when connected together, form an effective and robust diagnosis and monitoring tool. (authors)

  19. A comparative analysis of the cryo-compression and cryo-adsorption hydrogen storage methods

    Energy Technology Data Exchange (ETDEWEB)

    Petitpas, G [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Benard, P [Universite du Quebec a Trois-Rivieres (Canada); Klebanoff, L E [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Xiao, J [Universite du Quebec a Trois-Rivieres (Canada); Aceves, S M [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-07-01

    While conventional low-pressure LH₂ dewars have existed for decades, advanced methods of cryogenic hydrogen storage have recently been developed. These advanced methods are cryo-compression and cryo-adsorption hydrogen storage, which operate best in the temperature range 30–100 K. We present a comparative analysis of both approaches for cryogenic hydrogen storage, examining how pressure and/or sorbent materials are used to effectively increase onboard H₂ density and dormancy. We start by reviewing some basic aspects of LH₂ properties and conventional means of storing it. From there we describe the cryo-compression and cryo-adsorption hydrogen storage methods, and then explore the relationship between them, clarifying the materials science and physics of the two approaches in trying to solve the same hydrogen storage task (~5–8 kg H₂, typical of light duty vehicles). Assuming that the balance of plant and the available volume for the storage system in the vehicle are identical for both approaches, the comparison focuses on how the respective storage capacities, vessel weight and dormancy vary as a function of temperature, pressure and type of cryo-adsorption material (especially, powder MOF-5 and MIL-101). By performing a comparative analysis, we clarify the science of each approach individually, identify the regimes where the attributes of each can be maximized, elucidate the properties of these systems during refueling, and probe the possible benefits of a combined “hybrid” system with both cryo-adsorption and cryo-compression phenomena operating at the same time. In addition the relationships found between onboard H₂ capacity, pressure vessel and/or sorbent mass and dormancy as a function of rated pressure, type of sorbent material and fueling conditions are useful as general designing guidelines in future engineering efforts using these two hydrogen storage approaches.

  20. Review of Data Preprocessing Methods for Sign Language Recognition Systems based on Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Zorins Aleksejs

    2016-12-01

    The article presents an introductory analysis of a research topic relevant to the Latvian deaf community: the development of a Latvian Sign Language recognition system. More specifically, the paper discusses data preprocessing methods and presents several approaches, with a focus on systems based on artificial neural networks, which are among the most successful solutions for the sign language recognition task.

  1. Method and apparatus for optimizing operation of a power generating plant using artificial intelligence techniques

    Science.gov (United States)

    Wroblewski, David [Mentor, OH]; Katrompas, Alexander M [Concord, OH]; Parikh, Neel J [Richmond Heights, OH]

    2009-09-01

    A method and apparatus for optimizing the operation of a power generating plant using artificial intelligence techniques. One or more decisions D are determined for at least one consecutive time increment, where at least one of the decisions D is associated with a discrete variable for the operation of a power plant device in the power generating plant. In an illustrated embodiment, the power plant device is a soot cleaning device associated with a boiler.

  2. Stabilization study on a wet-granule tableting method for a compression-sensitive benzodiazepine receptor agonist.

    Science.gov (United States)

    Fujita, Megumi; Himi, Satoshi; Iwata, Motokazu

    2010-03-01

    SX-3228, 6-benzyl-3-(5-methoxy-1,3,4-oxadiazol-2-yl)-5,6,7,8-tetrahydro-1,6-naphthyridin-2(1H)-one, is a newly synthesized benzodiazepine receptor agonist intended to be developed as a tablet preparation. This compound, however, becomes chemically unstable due to decreased crystallinity when it undergoes mechanical treatments such as grinding and compression. A wet-granule tableting method, in which wet granules are compressed before being dried, was therefore investigated, as it has the advantage of producing tablets of sufficient hardness at quite low compression pressures. The results of the stability testing showed that the drug substance was considerably more chemically stable in wet-granule compression tablets than in conventional tablets. Furthermore, the drug substance remained relatively stable in wet-granule compression tablets even when high compression pressure was used, and the effect of this pressure was small. On investigating the reason for this excellent stability, it became evident that near-isotropic pressure was exerted on the crystals of the drug substance because almost all the empty spaces in the tablets were occupied by water during the wet-granule compression process. Decreases in the crystallinity of the drug substance were thus small, making the drug substance chemically stable in the wet-granule compression tablets. We believe that this novel approach could be useful for many other compounds that are destabilized by mechanical treatments.

  3. An asymptotic preserving multidimensional ALE method for a system of two compressible flows coupled with friction

    Science.gov (United States)

    Del Pino, S.; Labourasse, E.; Morel, G.

    2018-06-01

    We present a multidimensional asymptotic preserving scheme for the approximation of a mixture of compressible flows. Fluids are modelled by two Euler systems of equations coupled with a friction term. The asymptotic preserving property is mandatory for this kind of model, to derive a scheme that behaves well in all regimes (i.e. whatever the friction parameter value is). The method we propose is defined in ALE coordinates, using a Lagrange plus remap approach. This imposes a multidimensional definition and analysis of the scheme.

  4. Sizing of Compression Coil Springs Gas Regulators Using Modern Methods CAD and CAE

    Directory of Open Access Journals (Sweden)

    Adelin Ionel Tuţă

    2010-10-01

    This paper presents a method for sizing the compression coil springs in gas regulators, using CAD (Computer-Aided Design) and CAE (Computer-Aided Engineering) techniques. The sizing aims to optimize the functioning of the regulators under dynamic industrial and household conditions. A gas regulator is a device that adjusts automatically and continuously to maintain the output gas pressure within pre-set limits at varying flow and input pressure. The performance of pressure regulators, as automatic systems, depends on their behaviour under dynamic operation. Optimizing the time constant of the pneumatic actuators that drive gas regulators leads to better functioning under dynamic conditions.

  5. Performance of Ruecking's Word-compression Method When Applied to Machine Retrieval from a Library Catalog

    Directory of Open Access Journals (Sweden)

    Ben-Ami Lipetz

    1969-12-01

    F. H. Ruecking's word-compression algorithm for retrieval of bibliographic data from computer stores was tested for performance in matching user-supplied, unedited bibliographic data to the bibliographic data contained in a library catalog. The algorithm was tested by manual simulation, using data derived from 126 case studies of successful manual searches of the card catalog at Sterling Memorial Library, Yale University. The algorithm achieved 70% recall in comparison to conventional searching. Its acceptability as a substitute for conventional catalog searching methods is questioned unless recall performance can be improved, either by use of the algorithm alone or in combination with other algorithms.

  6. High-speed photographic methods for compression dynamics investigation of laser irradiated shell target

    International Nuclear Information System (INIS)

    Basov, N.G.; Kologrivov, A.A.; Krokhin, O.N.; Rupasov, A.A.; Shikanov, A.S.

    1979-01-01

    Three methods are described for high-speed diagnostics of the compression dynamics of shell targets spherically laser-heated on the installation ''Kal'mar''. The first method is based on direct investigation of the space-time evolution of the critical-density region for Nd-laser emission (N_e ≈ 10^21 cm^-3) by means of streak photography of the plasma image in second-harmonic light. The second method involves investigation of the time evolution of the second-harmonic spectral distribution by means of a spectrograph coupled with a streak camera. The use of a special laser pulse with two time-distributed intensity maxima for the irradiation of shell targets, and the analysis of the obtained X-ray pin-hole pictures, constitute the basis of the third method. (author)

  7. Backtracking-Based Iterative Regularization Method for Image Compressive Sensing Recovery

    Directory of Open Access Journals (Sweden)

    Lingjun Liu

    2017-01-01

    This paper presents a variant of the iterative shrinkage-thresholding (IST) algorithm, called backtracking-based adaptive IST (BAIST), for image compressive sensing (CS) reconstruction. With increasing iterations, IST usually yields a smoothing of the solution and runs into prematurity. To add back more details, the BAIST method backtracks to the previous noisy image using L2-norm minimization, i.e., minimizing the Euclidean distance between the current solution and the previous one. Through this modification, the BAIST method achieves superior performance while maintaining the low complexity of IST-type methods. Also, BAIST adopts nonlocal regularization with an adaptive regularizer to automatically detect the sparsity level of an image. Experimental results show that our algorithm outperforms the original IST method and several excellent CS techniques.
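
    As a point of reference, the core IST iteration that BAIST modifies is simple: a gradient step on the data-fidelity term followed by elementwise soft-thresholding. Below is a minimal Python sketch of plain IST (not the BAIST variant, whose backtracking and adaptive nonlocal regularizer are not detailed in this record); the problem sizes and the lam weight are illustrative assumptions.

    ```python
    import numpy as np

    def soft_threshold(x, t):
        """Soft-thresholding (shrinkage) operator, applied elementwise."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def ist(A, y, lam=0.01, n_iter=200):
        """Basic IST for min 0.5*||Ax - y||^2 + lam*||x||_1."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)             # gradient of the data-fidelity term
            x = soft_threshold(x - step * grad, step * lam)
        return x

    # Toy compressive-sensing setup: recover a sparse signal from random projections.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((80, 200)) / np.sqrt(80)
    x_true = np.zeros(200)
    x_true[rng.choice(200, 8, replace=False)] = rng.standard_normal(8)
    x_hat = ist(A, A @ x_true)
    ```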

  8. Method for data compression by associating complex numbers with files of data values

    Science.gov (United States)

    Feo, John Thomas; Hanks, David Carlton; Kraay, Thomas Arthur

    1998-02-10

    A method for compressing data for storage or transmission. Given a complex polynomial and a value assigned to each root, a root generated data file (RGDF) is created, one entry at a time. Each entry is mapped to a point in a complex plane. An iterative root finding technique is used to map the coordinates of the point to the coordinates of one of the roots of the polynomial. The value associated with that root is assigned to the entry. An equational data compression (EDC) method reverses this procedure. Given a target data file, the EDC method uses a search algorithm to calculate a set of m complex numbers and a value map that will generate the target data file. The error between a simple target data file and generated data file is typically less than 10%. Data files can be transmitted or stored without loss by transmitting the m complex numbers, their associated values, and an error file whose size is at most one-tenth of the size of the input data file.
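
    The decoding half of this scheme amounts to Newton-basin lookup: each stored point in the complex plane is iterated to a root of the polynomial, and the value assigned to that root is emitted. A minimal Python sketch under that reading follows; the polynomial, the value map and the test point are all hypothetical, not taken from the patent.

    ```python
    import numpy as np

    def newton_root(z, p, dp, n_iter=50):
        """Iterate a complex starting point z toward a root of polynomial p."""
        for _ in range(n_iter):
            z = z - p(z) / dp(z)
        return z

    # Hypothetical polynomial z^3 - 1; each of its three roots carries a data value.
    p = np.polynomial.Polynomial([-1, 0, 0, 1])   # coefficients, low degree first
    dp = p.deriv()
    roots = p.roots()
    value_map = {0: 17, 1: 42, 2: 99}             # hypothetical values per root index

    def decode_entry(point):
        """Decode one data value from a point in the complex plane."""
        z = newton_root(point, p, dp)
        k = int(np.argmin(np.abs(roots - z)))     # index of the root the point reached
        return value_map[k]

    print(decode_entry(0.9 + 0.1j))
    ```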

  9. Feasibility of gas-discharge and optical methods of creating artificial ozone layers of the earth

    International Nuclear Information System (INIS)

    Batanov, G.M.; Kossyi, I.A.; Matveev, A.A.; Silakov, V.P.

    1996-01-01

    Gas-discharge (microwave) and optical (laser) methods of generating large-scale artificial ozone layers in the stratosphere are analyzed. A kinetic model is developed to calculate the plasma-chemical consequences of discharges localized in the stratosphere. Computations and simple estimates indicate that, in order to implement gas-discharge and optical methods, the operating power of ozone-producing sources should be comparable to or even much higher than the present-day power production throughout the world. Consequently, from the engineering and economic standpoints, microwave and laser methods cannot be used to repair large-scale ozone 'holes'.

  10. Compression method of anastomosis of large intestines by implants with memory of shape: alternative to traditional sutures

    Directory of Open Access Journals (Sweden)

    F. Sh. Aliev

    2015-01-01

    Research objective: to prove experimentally the possibility of forming compression colonic anastomoses using nickel-titanium devices, in comparison with traditional methods of anastomosis. Materials and methods: in experimental studies, the quality of compression anastomoses of the colon was compared with that of sutured and stapled anastomoses. Three experimental groups were formed in mongrel dogs: in the 1st series (n = 30), compression anastomoses with nickel-titanium implants were formed; in the 2nd (n = 25), circular stapled anastomoses; in the 3rd (n = 25), ligature anastomoses by the Mateshuk–Lambert method. In the experiment the physical durability, elasticity, biological tightness and morphogenesis of the colonic anastomoses were studied. Results: the optimal sizes of the compression devices are 32 × 18 and 28 × 15 mm with a wire diameter of 2.2 mm; the winding compression force was 740 ± 180 g/mm2. The compression suture has higher physical durability than the stapled (W = –33.0; p < 0.05) and sutured (W = –28.0; p < 0.05) ones, higher elasticity (p < 0.05) at all test terms, and biological tightness from day 3 (p < 0.001) after surgery. The regularities of morphogenesis of the colonic anastomoses allowed four periods of intestinal suture regeneration to be distinguished. Conclusion: the experimental data obtained on the use of compression anastomoses of the colon with nickel-titanium devices are convincing arguments for their clinical application.

  11. A Schur complement method for compressible two-phase flow models

    International Nuclear Information System (INIS)

    Dao, Thu-Huyen; Ndjinga, Michael; Magoules, Frederic

    2014-01-01

    In this paper, we will report our recent efforts to apply a Schur complement method for nonlinear hyperbolic problems. We use the finite volume method and an implicit version of the Roe approximate Riemann solver. With the interface variable introduced in [4] in the context of single phase flows, we are able to simulate two-fluid models ([12]) with various schemes such as upwind, centered or Rusanov. Moreover, we introduce a scaling strategy to improve the condition number of both the interface system and the local systems. Numerical results for the isentropic two-fluid model and the compressible Navier-Stokes equations in various 2D and 3D configurations and various schemes show that our method is robust and efficient. The scaling strategy considerably reduces the number of GMRES iterations in both interface system and local system resolutions. Comparisons of performances with classical distributed computing with up to 218 processors are also reported. (authors)
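
    For orientation, the block algebra behind any Schur complement method can be shown on a dense two-block system: eliminate the interior unknowns, solve the smaller interface system first, then back-substitute. The sketch below uses direct dense solves for clarity; the record's solver instead works with an implicit Roe scheme, distributed subdomains and GMRES.

    ```python
    import numpy as np

    def schur_solve(A11, A12, A21, A22, f1, f2):
        """Solve [[A11, A12], [A21, A22]] [x1; x2] = [f1; f2] by eliminating x1.

        The interface unknowns x2 solve the smaller system S x2 = g, with
        S = A22 - A21 A11^{-1} A12 and g = f2 - A21 A11^{-1} f1.
        """
        A11_inv_A12 = np.linalg.solve(A11, A12)
        A11_inv_f1 = np.linalg.solve(A11, f1)
        S = A22 - A21 @ A11_inv_A12            # Schur complement (interface operator)
        g = f2 - A21 @ A11_inv_f1
        x2 = np.linalg.solve(S, g)             # interface system first
        x1 = A11_inv_f1 - A11_inv_A12 @ x2     # back-substitute for the interior
        return x1, x2
    ```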

  12. Minimal invasive stabilization of osteoporotic vertebral compression fractures. Methods and preinterventional diagnostics

    International Nuclear Information System (INIS)

    Grohs, J.G.; Krepler, P.

    2004-01-01

    Minimally invasive stabilization represents a new alternative for the treatment of osteoporotic compression fractures. Vertebroplasty and balloon kyphoplasty are two methods to enhance the strength of osteoporotic vertebral bodies by means of cement application. Vertebroplasty is the older and technically easier method. Balloon kyphoplasty is the newer and more expensive method, which not only improves pain but also restores the sagittal profile of the spine. By balloon kyphoplasty the height of 101 fractured vertebral bodies could be increased by up to 90%, and the wedge angle decreased from 12 to 7 degrees. Pain was reduced from 7.2 to 2.5 points. The Oswestry disability index decreased from 60 to 26 points. These effects persisted over a period of two years. Cement leakage occurred in only 2% of vertebral bodies. Fractures of adjacent vertebral bodies were found in 11%. Good preinterventional diagnostics and intraoperative imaging are necessary to make balloon kyphoplasty a successful application. (orig.) [de]

  13. Linearly decoupled energy-stable numerical methods for multi-component two-phase compressible flow

    KAUST Repository

    Kou, Jisheng

    2017-12-06

    In this paper, for the first time we propose two linear, decoupled, energy-stable numerical schemes for multi-component two-phase compressible flow with a realistic equation of state (e.g. Peng-Robinson equation of state). The methods are constructed based on the scalar auxiliary variable (SAV) approaches for Helmholtz free energy and the intermediate velocities that are designed to decouple the tight relationship between velocity and molar densities. The intermediate velocities are also involved in the discrete momentum equation to ensure a consistency relationship with the mass balance equations. Moreover, we propose a component-wise SAV approach for a multi-component fluid, which requires solving a sequence of linear, separate mass balance equations. We prove that the methods have the unconditional energy-dissipation feature. Numerical results are presented to verify the effectiveness of the proposed methods.

  14. Experimental Study on the Compressive Strength of Big Mobility Concrete with Nondestructive Testing Method

    Directory of Open Access Journals (Sweden)

    Huai-Shuai Shang

    2012-01-01

    An experimental study of C20, C25, C30, C40, and C50 big mobility concrete cubes from the laboratory and from construction sites was completed. Nondestructive testing (NDT) was carried out using impact rebound hammer (IRH) techniques to establish a correlation between compressive strength and rebound number. A local strength curve was established by the regression method, and its superiority is demonstrated. The rebound method presented is simple, quick, and reliable and covers a wide range of concrete strengths. The rebound method can easily be applied to concrete specimens as well as to existing concrete structures. The final results were compared with previous ones from the literature and also with actual results obtained from samples extracted from existing structures.
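
    A local curve of this kind is typically obtained by least-squares regression of measured strength on rebound number, often in power-law form. The Python sketch below illustrates the idea only; the calibration pairs are invented, not data from the study.

    ```python
    import numpy as np

    # Hypothetical (rebound number, measured cube strength in MPa) calibration pairs.
    R = np.array([24, 28, 31, 35, 38, 42, 46])
    f_c = np.array([18.5, 23.1, 27.4, 33.0, 38.2, 45.0, 52.3])

    # Fit a power-law curve f_c = a * R^b, linearized by taking logarithms.
    b, log_a = np.polyfit(np.log(R), np.log(f_c), 1)
    a = np.exp(log_a)

    def strength_from_rebound(rebound):
        """Estimate compressive strength (MPa) from an impact-rebound reading."""
        return a * rebound ** b

    print(strength_from_rebound(33.0))
    ```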

  15. Selectively Lossy, Lossless, and/or Error Robust Data Compression Method

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Lossless compression techniques provide efficient compression of hyperspectral satellite data. The present invention combines the advantages of a clustering with...

  16. A Space-Frequency Data Compression Method for Spatially Dense Laser Doppler Vibrometer Measurements

    Directory of Open Access Journals (Sweden)

    José Roberto de França Arruda

    1996-01-01

    When spatially dense mobility shapes are measured with scanning laser Doppler vibrometers, it is often impractical to use phase-separation modal parameter estimation methods due to the excessive number of highly coupled modes and to the prohibitive computational cost of processing huge amounts of data. To deal with this problem, a data compression method using Chebychev polynomial approximation in the frequency domain and two-dimensional discrete Fourier series approximation in the spatial domain is proposed in this article. The proposed space-frequency regressive approach was implemented and verified using a numerical simulation of a free-free-free-free suspended rectangular aluminum plate. To make the simulation more realistic, the mobility shapes were synthesized by modal superposition using mode shapes obtained experimentally with a scanning laser Doppler vibrometer. A reduced and smoothed model, which takes advantage of the sinusoidal spatial pattern and the polynomial frequency-domain pattern of the structural mobility shapes, is obtained. From the reduced model, smoothed curves with any desired frequency and spatial resolution can be produced whenever necessary. The procedure can be used either to generate nonmodal models or to compress the measured data prior to modal parameter extraction.

  17. Proposed Sandia frequency shift for anti-islanding detection method based on artificial immune system

    Directory of Open Access Journals (Sweden)

    A.Y. Hatata

    2018-03-01

    Sandia frequency shift (SFS) is one of the active anti-islanding detection methods that depend on frequency drift to detect an islanding condition for inverter-based distributed generation. The non-detection zone (NDZ) of the SFS method depends to a great extent on its parameters. Improper adjustment of these parameters may result in failure of the method. This paper presents a proposed artificial immune system (AIS)-based technique to obtain optimal parameters of the SFS anti-islanding detection method. The immune system is highly distributed, highly adaptive, and self-organizing in nature; it maintains a memory of past encounters and has the ability to continually learn about new encounters. The proposed method generates less total harmonic distortion (THD) than the conventional SFS, which results in faster island detection and a better non-detection zone. The performance of the proposed method is derived analytically and simulated using Matlab/Simulink. Two case studies are used to verify the proposed method: the first includes a photovoltaic (PV) system connected to the grid and the second a wind turbine connected to the grid. The deduced optimized parameter setting helps to achieve a “non-islanding inverter” with the least potential adverse impact on power quality. Keywords: Anti-islanding detection, Sandia frequency shift (SFS), Non-detection zone (NDZ), Total harmonic distortion (THD), Artificial immune system (AIS), Clonal selection algorithm
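
    The clonal selection algorithm named in the keywords can be sketched generically: rank the antibody population by fitness, clone the better antibodies, mutate the clones (more strongly for worse ranks), and reselect. The Python toy below tunes a two-parameter vector against a stand-in objective; the objective, bounds and hyperparameters are hypothetical, not the paper's NDZ/THD model.

    ```python
    import numpy as np

    def clonal_selection(fitness, bounds, pop=20, clones=5, gens=50, seed=0):
        """Minimal clonal selection: clone good antibodies, mutate the clones
        (more strongly for worse ranks), and reselect the best each generation."""
        rng = np.random.default_rng(seed)
        lo = np.array([b[0] for b in bounds])
        hi = np.array([b[1] for b in bounds])
        P = rng.uniform(lo, hi, size=(pop, len(bounds)))
        for _ in range(gens):
            P = P[np.argsort([fitness(p) for p in P])]   # best (lowest) first
            new = [P[0]]                                  # elitism: keep the best
            for rank, ab in enumerate(P[: pop // 2]):
                for _ in range(clones):
                    step = (hi - lo) * 0.1 * (rank + 1) / pop  # rank-scaled mutation
                    new.append(np.clip(ab + rng.normal(0.0, step), lo, hi))
            P = np.array(new[:pop])
        return min(P, key=fitness)

    # Hypothetical stand-in objective for tuning two SFS-style gains.
    best = clonal_selection(lambda p: (p[0] - 0.05) ** 2 + (p[1] - 0.1) ** 2,
                            bounds=[(0.0, 0.2), (0.0, 1.0)])
    ```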

  18. Continuous surveillance of transformers using artificial intelligence methods

    Energy Technology Data Exchange (ETDEWEB)

    Schenk, A.; Germond, A. [Ecole Polytechnique Federale de Lausanne, Lausanne (Switzerland)]; Boss, P.; Lorin, P. [ABB Secheron SA, Geneve (Switzerland)]

    2000-07-01

    The article describes a new method for the continuous surveillance of power transformers based on the application of artificial intelligence (AI) techniques. An experimental pilot project on a specially equipped, strategically important power transformer is described. Traditional surveillance methods and the use of mathematical models for the prediction of faults are described. The article describes the monitoring equipment used in the pilot project and the AI principles such as self-organising maps that are applied. The results obtained from the pilot project and methods for their graphical representation are discussed.

  19. The Artificial Neural Networks Based on Scalarization Method for a Class of Bilevel Biobjective Programming Problem

    Science.gov (United States)

    Chen, Zhong; Liu, June; Li, Xiong

    2017-01-01

    A two-stage artificial neural network (ANN) based on the scalarization method is proposed for the bilevel biobjective programming problem (BLBOP). The induced set of the BLBOP is first expressed as the set of minimal solutions of a biobjective optimization problem by using a scalar approach, and then the whole efficient set of the BLBOP is derived by the proposed two-stage ANN exploring the induced set. In order to illustrate the proposed method, seven numerical examples are tested and compared with results in the classical literature. Finally, a practical problem is solved by the proposed algorithm. PMID:29312446

  20. SOLVING TRANSPORT LOGISTICS PROBLEMS IN A VIRTUAL ENTERPRISE THROUGH ARTIFICIAL INTELLIGENCE METHODS

    Directory of Open Access Journals (Sweden)

    Vitaliy PAVLENKO

    2017-06-01

    The paper offers a solution to the problem of material flow allocation within a virtual enterprise by using artificial intelligence methods. The research is based on the use of fuzzy relations when planning optimal transportation modes to deliver components for manufactured products. The Fuzzy Logic Toolbox is used to determine the optimal route for the transportation of components for manufactured products. The methods offered are exemplified in the present research. The authors have built a simulation model for component transportation and delivery for manufactured products using the Simulink graphical environment.

  1. Estimating Penetration Resistance in Agricultural Soils of Ardabil Plain Using Artificial Neural Network and Regression Methods

    Directory of Open Access Journals (Sweden)

    Gholam Reza Sheykhzadeh

    2017-02-01

    Introduction: Penetration resistance is one of the criteria for evaluating soil compaction. It correlates with several soil properties such as vehicle trafficability, resistance to root penetration, seedling emergence, and soil compaction by farm machinery. Direct measurement of penetration resistance is time consuming and difficult because of high temporal and spatial variability. Therefore, many different regression and artificial neural network pedotransfer functions have been proposed to estimate penetration resistance from readily available soil variables such as particle size distribution, bulk density (Db) and gravimetric water content (θm). The lands of Ardabil Province are one of the main potato production regions of Iran; thus, obtaining the soil penetration resistance in these regions helps with the management of potato production. The objective of this research was to derive pedotransfer functions by using regression and artificial neural networks to predict penetration resistance from selected soil variables in the agricultural soils of the Ardabil plain, and to compare the performance of the artificial neural network with regression models. Materials and methods: Disturbed and undisturbed soil samples (n = 105) were systematically taken from 0-10 cm soil depth at nearly 3000 m spacing in the agricultural lands of the Ardabil plain (lat 38°15' to 38°40' N, long 48°16' to 48°61' E). The contents of sand, silt and clay (hydrometer method), CaCO3 (titration method), bulk density (cylinder method), particle density (Dp; pycnometer method), organic carbon (wet oxidation method), total porosity (calculated from Db and Dp), and saturated (θs) and field (θf) soil water content (gravimetric method) were measured in the laboratory. Mean geometric diameter (dg) and standard deviation (σg) of soil particles were computed using the percentages of sand, silt and clay. Penetration resistance was measured in situ using a cone penetrometer (analog model) at 10

  2. Wellhead compression

    Energy Technology Data Exchange (ETDEWEB)

    Harrington, Joe [Sertco Industries, Inc., Okemah, OK (United States)]; Vazquez, Daniel [Hoerbiger Service Latin America Inc., Deerfield Beach, FL (United States)]; Jacobs, Denis Richard [Hoerbiger do Brasil Industria de Equipamentos, Cajamar, SP (Brazil)]

    2012-07-01

    Over time, all wells experience a natural decline in oil and gas production. In gas wells, the major problems are liquid loading and low downhole differential pressures, which negatively impact total gas production. As a form of artificial lift, wellhead compressors help reduce the tubing pressure, resulting in gas velocities above the critical velocity needed to surface water, oil and condensate, regaining lost production and increasing recoverable reserves. Best results come from reservoirs with high porosity, high permeability, high initial flow rates, low decline rates and high total cumulative production. In oil wells, excessive annulus gas pressure tends to inhibit both oil and gas production. Wellhead compression packages can provide a cost-effective solution to these problems by reducing the system pressure in the tubing or annulus, allowing for an immediate increase in production rates. Wells furthest from the gathering compressor typically benefit the most from wellhead compression due to system pressure drops. Downstream compressors also benefit from higher suction pressures, reducing overall compression horsepower requirements. Special care must be taken in selecting the best equipment for these applications. The successful implementation of wellhead compression from an economic standpoint hinges on the testing, installation and operation of the equipment. Key challenges, suggested equipment features designed to combat those challenges, and successful case histories throughout Latin America are discussed. (author)

  3. Unidirectional Expiratory Valve Method to Assess Maximal Inspiratory Pressure in Individuals without Artificial Airway.

    Directory of Open Access Journals (Sweden)

    Samantha Torres Grams

    Maximal inspiratory pressure (MIP) is considered an effective method to estimate the strength of the inspiratory muscles, but still leads to false positive diagnoses. Although MIP assessment with the unidirectional expiratory valve method has been used in patients undergoing mechanical ventilation, no previous studies investigated the application of this method in subjects without an artificial airway. This study aimed to compare the MIP values assessed by the standard method (MIPsta) and by the unidirectional expiratory valve method (MIPuni) in spontaneously breathing subjects without an artificial airway. MIPuni reproducibility was also evaluated. This was a crossover design study, and 31 subjects performed MIPsta and MIPuni in random order. MIPsta measured MIP maintaining negative pressure for at least one second after forceful expiration. MIPuni evaluated MIP using a unidirectional expiratory valve attached to a face mask and was conducted by two evaluators (A and B) at two moments (Tests 1 and 2) to determine the interobserver and intraobserver reproducibility of the MIP values. The intraclass correlation coefficient (ICC[2,1]) was used to determine intraobserver and interobserver reproducibility. The mean values for MIPuni were 14.3% higher (-117.3 ± 24.8 cmH2O) than the mean values for MIPsta (-102.5 ± 23.9 cmH2O) (p<0.001). Interobserver reproducibility assessment showed very high correlation for Test 1 (ICC[2,1] = 0.91) and high correlation for Test 2 (ICC[2,1] = 0.88). The assessment of intraobserver reproducibility showed high correlation for evaluator A (ICC[2,1] = 0.86) and evaluator B (ICC[2,1] = 0.77). MIPuni presented higher values than MIPsta and proved to be reproducible in spontaneously breathing subjects without an artificial airway.

  4. Analysis of multicriteria models application for selection of an optimal artificial lift method in oil production

    Directory of Open Access Journals (Sweden)

    Crnogorac Miroslav P.

    2016-01-01

    Today, different types of deep pumps (piston, centrifugal, screw, hydraulic and water jet pumps) and gas lift (continuous, intermittent and plunger) are applied worldwide for the exploitation of oil reservoirs by artificial lift methods. The maximum values of oil production achieved by these exploitation methods differ significantly. In order to select the optimal exploitation method for an oil well, multicriteria analysis models are used. This paper presents an analysis of the application of the multicriteria models known as VIKOR, TOPSIS, ELECTRE, AHP and PROMETHEE for the selection of the optimal exploitation method for a typical oil well in the Serbian exploration area. The ranking results for the applicability of deep piston pumps, hydraulic pumps, screw pumps, the gas lift method and electric submersible centrifugal pumps indicated that in all the multicriteria models except PROMETHEE, the optimal methods of exploitation are deep piston pumps and gas lift.

  5. Fluvial facies reservoir productivity prediction method based on principal component analysis and artificial neural network

    Directory of Open Access Journals (Sweden)

    Pengyu Gao

    2016-03-01

    It is difficult to forecast well productivity because of the complexity of vertical and horizontal developments in fluvial facies reservoirs. This paper proposes a method based on principal component analysis and an artificial neural network to predict the well productivity of fluvial facies reservoirs. The method summarizes the statistical reservoir factors and engineering factors that affect well productivity, extracts information by applying the principal component analysis method, and uses the arbitrary-function approximation ability of the neural network to realize an accurate and efficient prediction of fluvial facies reservoir well productivity. This method provides an effective way to forecast the productivity of fluvial facies reservoirs, which is affected by multiple factors and complex mechanisms. The study results show that this method is a practical, effective, accurate and indirect productivity forecasting method suitable for field application.

  6. Schlieren method diagnostics of plasma compression in front of coaxial gun

    International Nuclear Information System (INIS)

    Kravarik, J.; Kubes, P.; Hruska, J.; Bacilek, J.

    1983-01-01

    The schlieren method employing a movable knife edge placed in the focal plane of a laser beam was used for the diagnostics of plasma produced by a coaxial plasma gun. When compared with the interferometric method reported earlier, spatial resolution was improved by more than one order of magnitude. In the determination of electron density near the gun orifice, spherical symmetry of the current sheath inhomogeneities and cylindrical symmetry of the compression maximum were assumed. Radial variation of electron density could be reconstructed from the photometric measurements of the transversal variation of schlieren light intensity. Due to small plasma dimensions, electron density was determined directly from the knife edge shift necessary for shadowing the corresponding part of the picture. (J.U.)

  7. An Embedded Ghost-Fluid Method for Compressible Flow in Complex Geometry

    KAUST Repository

    Almarouf, Mohamad Abdulilah Alhusain Alali; Samtaney, Ravi

    2016-01-01

    We present an embedded ghost-fluid method for numerical solutions of the compressible Navier Stokes (CNS) equations in arbitrary complex domains. The PDE multidimensional extrapolation approach of Aslam [1] is used to reconstruct the solution in the ghost-fluid regions and impose boundary conditions at the fluid-solid interface. The CNS equations are numerically solved by the second order multidimensional upwind method of Colella [2] and Saltzman [3]. Block-structured adaptive mesh refinement implemented under the Chombo framework is utilized to reduce the computational cost while keeping high-resolution mesh around the embedded boundary and regions of high gradient solutions. Numerical examples with different Reynolds numbers for low and high Mach number flow will be presented. We compare our simulation results with other reported experimental and computational results. The significance and advantages of our implementation, which revolve around balancing between the solution accuracy and implementation difficulties, are briefly discussed as well. © 2016 Trans Tech Publications.

  8. An Embedded Ghost-Fluid Method for Compressible Flow in Complex Geometry

    KAUST Repository

    Almarouf, Mohamad Abdulilah Alhusain Alali

    2016-06-03

    We present an embedded ghost-fluid method for numerical solutions of the compressible Navier Stokes (CNS) equations in arbitrary complex domains. The PDE multidimensional extrapolation approach of Aslam [1] is used to reconstruct the solution in the ghost-fluid regions and impose boundary conditions at the fluid-solid interface. The CNS equations are numerically solved by the second order multidimensional upwind method of Colella [2] and Saltzman [3]. Block-structured adaptive mesh refinement implemented under the Chombo framework is utilized to reduce the computational cost while keeping high-resolution mesh around the embedded boundary and regions of high gradient solutions. Numerical examples with different Reynolds numbers for low and high Mach number flow will be presented. We compare our simulation results with other reported experimental and computational results. The significance and advantages of our implementation, which revolve around balancing between the solution accuracy and implementation difficulties, are briefly discussed as well. © 2016 Trans Tech Publications.

  9. Applicability of higher-order TVD method to low mach number compressible flows

    International Nuclear Information System (INIS)

    Akamatsu, Mikio

    1995-01-01

    Steep gradients of fluid density are an influential factor in spurious oscillation of numerical solutions of low Mach number (M<<1) compressible flows. The total variation diminishing (TVD) scheme is a promising remedy to overcome this problem and obtain accurate solutions. TVD schemes for high-speed flows are, however, not compatible with the methods commonly used for low Mach number flows with pressure-based formulations. In the present study a higher-order TVD scheme is constructed on a modified form of each individual scalar equation of the primitive variables. It is thus clarified that the concept of TVD is applicable to low Mach number flows within the framework of the existing numerical method. Results for test problems of the moving interface of two-component gases with density ratio ≥ 4 demonstrate the accurate and robust (wiggle-free) profile of the scheme. (author)
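
    A standard ingredient of such TVD constructions is a slope limiter applied per scalar variable; the minmod limiter is the classic example. The Python sketch below shows minmod-limited slopes for a 1-D field (the record's specific scheme and variables are not reproduced here): at a discontinuity the limited slope vanishes, which is what prevents new extrema, i.e., wiggles.

    ```python
    import numpy as np

    def minmod(a, b):
        """Minmod limiter: the smaller-magnitude slope when signs agree, else zero."""
        return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

    def limited_slopes(u):
        """TVD (minmod-limited) cell slopes for 1-D cell averages, zero-gradient ends."""
        fwd = np.zeros_like(u)
        bwd = np.zeros_like(u)
        fwd[:-1] = u[1:] - u[:-1]     # forward differences
        bwd[1:] = u[1:] - u[:-1]      # backward differences
        return minmod(fwd, bwd)

    # A step profile: the limited slopes vanish at the jump, avoiding new extrema.
    u = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
    print(limited_slopes(u))
    ```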

  10. A multiscale method for compressible liquid-vapor flow with surface tension*

    Directory of Open Access Journals (Sweden)

    Jaegle Felix

    2013-01-01

    Discontinuous Galerkin methods have become a powerful tool for approximating the solution of compressible flow problems. Their direct use for two-phase flow problems with phase transformation is not straightforward, because this type of flow requires detailed tracking of the phase front. In this contribution we consider the fronts as sharp interfaces and propose a novel multiscale approach. It combines an efficient high-order discontinuous Galerkin solver for the computation in the bulk phases on the macro-scale with the use of a generalized Riemann solver on the micro-scale. The Riemann solver takes into account the effects of moderate surface tension, via the curvature of the sharp interface, as well as phase transformation. First numerical experiments in three space dimensions underline the overall performance of the method.

  11. Selection of bi-level image compression method for reduction of communication energy in wireless visual sensor networks

    Science.gov (United States)

    Khursheed, Khursheed; Imran, Muhammad; Ahmad, Naeem; O'Nils, Mattias

    2012-06-01

    A Wireless Visual Sensor Network (WVSN) is an emerging system that combines an image sensor, an on-board computation unit, a communication component and an energy source. Compared to a traditional wireless sensor network, which operates on one-dimensional data such as temperature or pressure values, a WVSN operates on two-dimensional data (images), which requires higher processing power and communication bandwidth. Normally, WVSNs are deployed in areas where the installation of wired solutions is not feasible. The energy budget in these networks is limited to the batteries because of the wireless nature of the application. Due to the limited availability of energy, the processing at the Visual Sensor Nodes (VSNs) and the communication from the VSNs to the server should consume as little energy as possible. Wireless transmission of raw images consumes a lot of energy and requires high communication bandwidth. Data compression methods reduce data efficiently and hence are effective in reducing communication cost in a WVSN. In this paper, we compare the compression efficiency and complexity of six well-known bi-level image compression methods. The focus is to determine the compression algorithms that can efficiently compress bi-level images and whose computational complexity is suitable for the computational platforms used in WVSNs. These results can be used as a road map for the selection of compression methods for different sets of constraints in a WVSN.
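
    Of the bi-level compression family compared in such studies, run-length encoding is the simplest representative and illustrates why bi-level images compress well: long runs of identical pixels collapse to (value, length) pairs. The sketch below is a generic illustration, not one of the paper's six candidate methods specifically.

    ```python
    def rle_encode(bits):
        """Run-length encode a bi-level scanline (list of 0/1) as (value, run) pairs."""
        runs = []
        prev, count = bits[0], 1
        for b in bits[1:]:
            if b == prev:
                count += 1
            else:
                runs.append((prev, count))
                prev, count = b, 1
        runs.append((prev, count))
        return runs

    def rle_decode(runs):
        """Invert rle_encode back to the original scanline."""
        out = []
        for value, run in runs:
            out.extend([value] * run)
        return out

    row = [0, 0, 0, 1, 1, 0, 1, 1, 1, 1]
    assert rle_decode(rle_encode(row)) == row
    ```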

  12. A practical discrete-adjoint method for high-fidelity compressible turbulence simulations

    International Nuclear Information System (INIS)

    Vishnampet, Ramanathan; Bodony, Daniel J.; Freund, Jonathan B.

    2015-01-01

    Methods and computing hardware advances have enabled accurate predictions of complex compressible turbulence phenomena, such as the generation of jet noise that motivates the present effort. However, limited understanding of underlying physical mechanisms restricts the utility of such predictions since they do not, by themselves, indicate a route to design improvements. Gradient-based optimization using adjoints can circumvent the flow complexity to guide designs, though this is predicated on the availability of a sufficiently accurate solution of the forward and adjoint systems. These are challenging to obtain, since both the chaotic character of the turbulence and the typical use of discretizations near their resolution limits in order to efficiently represent its smaller scales will amplify any approximation errors made in the adjoint formulation. Formulating a practical exact adjoint that avoids such errors is especially challenging if it is to be compatible with state-of-the-art simulation methods used for the turbulent flow itself. Automatic differentiation (AD) can provide code to calculate a nominally exact adjoint, but existing general-purpose AD codes are inefficient to the point of being prohibitive for large-scale turbulence simulations. Here, we analyze the compressible flow equations as discretized using the same high-order workhorse methods used for many high-fidelity compressible turbulence simulations, and formulate a practical space–time discrete-adjoint method without changing the basic discretization. A key step is the definition of a particular discrete analog of the continuous norm that defines our cost functional; our selection leads directly to an efficient Runge–Kutta-like scheme, though it would be just first-order accurate if used outside the adjoint formulation for time integration, with finite-difference spatial operators for the adjoint system. Its computational cost only modestly exceeds that of the flow equations. We confirm that
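
    The defining property of an exact discrete adjoint, gradient agreement with the discretized forward model to machine precision, can be illustrated on a toy time-stepper: for forward Euler on a linear system, the adjoint is forward Euler with the transposed operator run backward. The Python sketch below is a schematic stand-in for the paper's Runge-Kutta-like construction, not its actual scheme.

    ```python
    import numpy as np

    def forward(u0, A, n_steps, dt):
        """Forward Euler on du/dt = A u; returns the full trajectory."""
        traj = [u0]
        for _ in range(n_steps):
            traj.append(traj[-1] + dt * (A @ traj[-1]))
        return traj

    def adjoint_gradient(u0, A, n_steps, dt):
        """Exact gradient of J = 0.5*||u_N||^2 w.r.t. u0 for the scheme above.

        The adjoint marches backward with the transposed operator, so the
        gradient matches the discretized dynamics to machine precision.
        """
        lam = forward(u0, A, n_steps, dt)[-1].copy()   # dJ/du_N
        for _ in range(n_steps):
            lam = lam + dt * (A.T @ lam)               # transpose of one Euler step
        return lam

    # Check against a central finite difference (J is quadratic, so FD is exact here).
    rng = np.random.default_rng(1)
    n = 8
    A = 0.1 * rng.standard_normal((n, n))
    u0, d = rng.standard_normal(n), rng.standard_normal(n)
    J = lambda u: 0.5 * np.sum(forward(u, A, 50, 0.01)[-1] ** 2)
    g = adjoint_gradient(u0, A, 50, 0.01)
    eps = 1e-6
    fd = (J(u0 + eps * d) - J(u0 - eps * d)) / (2 * eps)
    assert abs(g @ d - fd) < 1e-6
    ```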

  13. Augmented Lagrangian Method and Compressible Visco-plastic Flows: Applications to Shallow Dense Avalanches

    Science.gov (United States)

    Bresch, D.; Fernández-Nieto, E. D.; Ionescu, I. R.; Vigneaux, P.

    In this paper we propose a well-balanced finite volume/augmented Lagrangian method for compressible visco-plastic models, focusing on a compressible Bingham-type system with applications to dense avalanches. For the sake of completeness we also show that such a system may be derived for a shallow flow of a rigid-viscoplastic incompressible fluid, namely an incompressible Bingham-type fluid with a free surface. When the fluid is relatively shallow and spreads slowly, lubrication-style asymptotic approximations can be used to build reduced models for the spreading dynamics, see for instance [N.J. Balmforth et al., J. Fluid Mech. (2002)]. When the motion is a little quicker, shallow water theory for non-Newtonian flows may be applied, for instance assuming a Navier-type boundary condition at the bottom. We start from the variational inequality for an incompressible Bingham fluid and derive a shallow-water-type system. In the case where the Bingham number and viscosity are set to zero we obtain the classical shallow water or Saint-Venant equations, obtained for instance in [J.F. Gerbeau, B. Perthame, DCDS (2001)]. For numerical purposes, we focus on the one-dimensional-in-space model: we study the associated static solutions, with sufficient conditions that relate the slope of the bottom to the Bingham number and the domain dimensions. We also propose a well-balanced finite volume/augmented Lagrangian method that combines well-balanced finite volume schemes for the spatial discretization with the augmented Lagrangian method to treat the associated optimization problem. Finally, we present various numerical tests.

  14. Methods for compressible fluid simulation on GPUs using high-order finite differences

    Science.gov (United States)

    Pekkilä, Johannes; Väisälä, Miikka S.; Käpylä, Maarit J.; Käpylä, Petri J.; Anjum, Omer

    2017-08-01

    We focus on implementing and optimizing a sixth-order finite-difference solver for simulating compressible fluids on a GPU using third-order Runge-Kutta integration. Since graphics processing units perform well in data-parallel tasks, they are an attractive platform for fluid simulation. However, high-order stencil computation is memory-intensive with respect to both main memory and the caches of the GPU. We present two approaches for simulating compressible fluids using 55-point and 19-point stencils. We seek to reduce the requirements for memory bandwidth and cache size in our methods by using cache blocking and by decomposing a latency-bound kernel into several bandwidth-bound kernels. Our fastest implementation is bandwidth-bound and integrates 343 million grid points per second on a Tesla K40t GPU, achieving a 3.6× speedup over a comparable hydrodynamics solver benchmarked on two Intel Xeon E5-2690v3 processors. Our alternative GPU implementation is latency-bound and achieves a rate of 168 million updates per second.
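
    For reference, the sixth-order central first-derivative stencil such solvers build on has the standard coefficients (-1, 9, -45, 0, 45, -9, 1)/60h. A numpy sketch on a periodic grid follows; the paper's GPU kernels and cache-blocking strategy are not reproduced here.

    ```python
    import numpy as np

    def ddx6(f, h):
        """Sixth-order central first derivative on a periodic 1-D grid of spacing h."""
        return (-np.roll(f, 3) + 9 * np.roll(f, 2) - 45 * np.roll(f, 1)
                + 45 * np.roll(f, -1) - 9 * np.roll(f, -2) + np.roll(f, -3)) / (60.0 * h)

    # Verify on a sine wave, whose exact derivative is a cosine.
    n = 64
    h = 2.0 * np.pi / n
    x = np.arange(n) * h
    err = np.max(np.abs(ddx6(np.sin(x), h) - np.cos(x)))   # about 1e-8 at this resolution
    ```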

  15. A method for predicting the impact velocity of a projectile fired from a compressed air gun facility

    International Nuclear Information System (INIS)

    Attwood, G.J.

    1988-03-01

    This report describes the development and use of a method for calculating the velocity at impact of a projectile fired from a compressed air gun. The method is based on a simple but effective approach which has been incorporated into a computer program. The method was developed principally for use with the Horizontal Impact Facility at AEE Winfrith but has been adapted so that it can be applied to any compressed air gun of a similar design. The method has been verified by comparison of predicted velocities with test data and the program is currently being used in a predictive manner to specify test conditions for the Horizontal Impact Facility at Winfrith. (author)
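
    A simple calculation of this kind can be set up by integrating the net pressure force on the projectile along the barrel, assuming, for instance, adiabatic expansion of the reservoir gas. The Python sketch below is an assumed formulation for illustration, not the report's actual program; the gun dimensions and pressures are invented.

    ```python
    import numpy as np

    def muzzle_velocity(p0, V0, bore_area, barrel_len, mass,
                        gamma=1.4, p_atm=101325.0, n=10000):
        """Estimate the projectile velocity at the muzzle of a compressed air gun.

        Assumes adiabatic expansion (p * V^gamma = const) of the reservoir gas
        and neglects friction; the work-energy theorem gives the exit speed.
        """
        x = np.linspace(0.0, barrel_len, n)
        V = V0 + bore_area * x                   # gas volume as the projectile advances
        p = p0 * (V0 / V) ** gamma               # adiabatic pressure behind the projectile
        force = (p - p_atm) * bore_area          # net accelerating force
        work = np.trapz(force, x)                # work done over the barrel length
        return np.sqrt(max(2.0 * work / mass, 0.0))

    # Invented example: 10 bar, 50 L reservoir, 5 kg projectile, 10 m barrel, 0.01 m^2 bore.
    print(muzzle_velocity(p0=1.0e6, V0=0.05, bore_area=0.01, barrel_len=10.0, mass=5.0))
    ```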

  16. An Immersed Boundary Method for Solving the Compressible Navier-Stokes Equations with Fluid Structure Interaction

    Science.gov (United States)

    Brehm, Christoph; Barad, Michael F.; Kiris, Cetin C.

    2016-01-01

    An immersed boundary method for the compressible Navier-Stokes equations, and the additional infrastructure needed to solve moving boundary problems and fully coupled fluid-structure interaction, are described. All the methods described in this paper were implemented in NASA's LAVA solver framework. The underlying immersed boundary method is based on the locally stabilized immersed boundary method previously introduced by the authors. In the present paper this method is extended to account for all aspects involved in fluid-structure interaction simulations, such as fast geometry queries and stencil computations, the treatment of freshly cleared cells, and the coupling of the computational fluid dynamics solver with a linear structural finite element method. The current approach is validated for moving boundary problems with prescribed body motion and for fully coupled fluid-structure interaction problems in 2D and 3D. As part of the validation procedure, results from the second AIAA Aeroelastic Prediction Workshop are also presented. The current paper is regarded as a proof-of-concept study, while more advanced methods for fluid-structure interaction, such as geometric and material nonlinearities and advanced coupling approaches, are currently being investigated.

  17. An exact and consistent adjoint method for high-fidelity discretization of the compressible flow equations

    Science.gov (United States)

    Subramanian, Ramanathan Vishnampet Ganapathi

    Methods and computing hardware advances have enabled accurate predictions of complex compressible turbulence phenomena, such as the generation of jet noise that motivates the present effort. However, limited understanding of underlying physical mechanisms restricts the utility of such predictions since they do not, by themselves, indicate a route to design improvement. Gradient-based optimization using adjoints can circumvent the flow complexity to guide designs. Such methods have enabled sensitivity analysis and active control of turbulence at engineering flow conditions by providing gradient information at computational cost comparable to that of simulating the flow. They accelerate convergence of numerical design optimization algorithms, though this is predicated on the availability of an accurate gradient of the discretized flow equations. This is challenging to obtain, since both the chaotic character of the turbulence and the typical use of discretizations near their resolution limits in order to efficiently represent its smaller scales will amplify any approximation errors made in the adjoint formulation. Formulating a practical exact adjoint that avoids such errors is especially challenging if it is to be compatible with state-of-the-art simulation methods used for the turbulent flow itself. Automatic differentiation (AD) can provide code to calculate a nominally exact adjoint, but existing general-purpose AD codes are inefficient to the point of being prohibitive for large-scale turbulence simulations. We analyze the compressible flow equations as discretized using the same high-order workhorse methods used for many high-fidelity compressible turbulence simulations, and formulate a practical space--time discrete-adjoint method without changing the basic discretization. A key step is the definition of a particular discrete analog of the continuous norm that defines our cost functional; our selection leads directly to an efficient Runge--Kutta-like scheme

  18. Macron Formed Liner Compression as a Practical Method for Enabling Magneto-Inertial Fusion

    Energy Technology Data Exchange (ETDEWEB)

    Slough, John

    2011-12-10

    The entry of fusion as a viable, competitive source of power has been stymied by the challenge of finding an economical way to provide for the confinement and heating of the plasma fuel. The main impediment for current nuclear fusion concepts is the complexity and large mass associated with the confinement systems. To take advantage of the smaller-scale, higher-density regime of magnetic fusion, an efficient method for achieving the compressional heating required to reach fusion gain conditions must be found. The very compact, high-energy-density plasmoid commonly referred to as a Field Reversed Configuration (FRC) provides an ideal target for this purpose. To make fusion with the FRC practical, an efficient method for repetitively compressing the FRC to fusion gain conditions is required. A novel approach to be explored in this endeavor is to remotely launch a converging array of small macro-particles (macrons) that merge and form a more massive liner inside the reactor, which then radially compresses and heats the FRC plasmoid to fusion conditions. The closed magnetic field in the target FRC plasmoid suppresses thermal transport to the confining liner, significantly lowering the imploding power needed to compress the target. With the momentum flux being delivered by an assemblage of low-mass but high-velocity macrons, many of the difficulties encountered with liner implosion power technology are eliminated. The undertaking described in this proposal is to evaluate the feasibility of achieving fusion conditions with this simple and low-cost approach to fusion. During phase I the design and testing of the key components for the creation of the macron formed liner have been successfully carried out. Detailed numerical calculations of the merging, formation and radial implosion of the Macron Formed Liner (MFL) were also performed. The phase II effort will focus on an experimental demonstration of the macron launcher at full power, and the demonstration

  19. Based on Short Motion Paths and Artificial Intelligence Method for Chinese Chess Game

    Directory of Open Access Journals (Sweden)

    Chien-Ming Hung

    2017-08-01

    The article develops decision rules to win each set of the Chinese chess game using an evaluation algorithm and artificial intelligence methods, uses mobile robots in place of the chess pieces, and presents movement scenarios using the shortest motion paths for the mobile robots. A player can play the Chinese chess game according to the game rules against the supervised computer. The supervised computer decides the optimal motion path to win the set using the artificial intelligence method, and controls the mobile robots according to the programmed motion paths of the assigned chess pieces moving on the platform via a wireless RF interface. We use an enhanced A* search algorithm to solve the shortest-path problem for the assigned chess piece, and solve the collision problems of the motion paths for two mobile robots moving on the platform simultaneously. We implement a famous set called “wild horses run in farm” using the proposed method. First we use simulation to display the motion paths of the assigned chess pieces for the player and the supervised computer. Then the supervised computer implements the simulation results on the chessboard platform using mobile robots. The mobile robots move on the chessboard platform according to the programmed motion paths, are guided to move along the centre line of the corridor, avoid obstacles (chess pieces), and detect the cross points of the platform using three reflective IR modules.
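
    The A* search at the heart of the path planning can be sketched compactly for a 4-connected grid with obstacle cells; the paper's enhancements and two-robot collision handling are not reproduced here, and the toy board below is invented.

    ```python
    import heapq

    def a_star(grid, start, goal):
        """A* shortest path on a 4-connected grid; grid[r][c] == 1 marks an obstacle."""
        rows, cols = len(grid), len(grid[0])
        h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
        frontier = [(h(start), 0, start)]
        came_from = {start: None}
        g_best = {start: 0}
        while frontier:
            f, g, node = heapq.heappop(frontier)
            if node == goal:                        # reconstruct the path
                path = []
                while node is not None:
                    path.append(node)
                    node = came_from[node]
                return path[::-1]
            if g > g_best[node]:
                continue                            # stale queue entry
            r, c = node
            for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                nr, nc = nxt
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                    ng = g + 1
                    if ng < g_best.get(nxt, float("inf")):
                        g_best[nxt] = ng
                        came_from[nxt] = node
                        heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
        return None                                 # no path exists

    # Toy 4x4 board with two blocked cells.
    grid = [[0, 0, 0, 0],
            [0, 1, 1, 0],
            [0, 0, 0, 0],
            [0, 0, 0, 0]]
    print(a_star(grid, (0, 0), (2, 3)))
    ```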

  20. Novel method to load multiple genes onto a mammalian artificial chromosome.

    Science.gov (United States)

    Tóth, Anna; Fodor, Katalin; Praznovszky, Tünde; Tubak, Vilmos; Udvardy, Andor; Hadlaczky, Gyula; Katona, Robert L

    2014-01-01

    Mammalian artificial chromosomes are natural chromosome-based vectors that may carry a vast amount of genetic material in terms of both size and number. They are reasonably stable and segregate well in both mitosis and meiosis. A platform artificial chromosome expression system (ACEs) was earlier described with multiple loading sites for a modified lambda-integrase enzyme. It has been shown that this ACEs is suitable for high-level industrial protein production and the treatment of a mouse model for a devastating human disorder, Krabbe's disease. ACEs-treated mutant mice carrying a therapeutic gene lived more than four times longer than untreated counterparts. This novel gene therapy method is called combined mammalian artificial chromosome-stem cell therapy. At present, this method suffers from the limitation that a new selection marker gene should be present for each therapeutic gene loaded onto the ACEs. Complex diseases require the cooperative action of several genes for treatment, but only a limited number of selection marker genes are available and there is also a risk of serious side-effects caused by the unwanted expression of these marker genes in mammalian cells, organs and organisms. We describe here a novel method to load multiple genes onto the ACEs by using only two selectable marker genes. These markers may be removed from the ACEs before therapeutic application. This novel technology could revolutionize gene therapeutic applications targeting the treatment of complex disorders and cancers. It could also speed up cell therapy by allowing researchers to engineer a chromosome with a predetermined set of genetic factors to differentiate adult stem cells, embryonic stem cells and induced pluripotent stem (iPS) cells into cell types of therapeutic value. It is also a suitable tool for the investigation of complex biochemical pathways in basic science by producing an ACEs with several genes from a signal transduction pathway of interest.

  1. Novel method to load multiple genes onto a mammalian artificial chromosome.

    Directory of Open Access Journals (Sweden)

    Anna Tóth

    Mammalian artificial chromosomes are natural chromosome-based vectors that may carry a vast amount of genetic material in terms of both size and number. They are reasonably stable and segregate well in both mitosis and meiosis. A platform artificial chromosome expression system (ACEs) was earlier described with multiple loading sites for a modified lambda-integrase enzyme. It has been shown that this ACEs is suitable for high-level industrial protein production and the treatment of a mouse model for a devastating human disorder, Krabbe's disease. ACEs-treated mutant mice carrying a therapeutic gene lived more than four times longer than untreated counterparts. This novel gene therapy method is called combined mammalian artificial chromosome-stem cell therapy. At present, this method suffers from the limitation that a new selection marker gene should be present for each therapeutic gene loaded onto the ACEs. Complex diseases require the cooperative action of several genes for treatment, but only a limited number of selection marker genes are available and there is also a risk of serious side-effects caused by the unwanted expression of these marker genes in mammalian cells, organs and organisms. We describe here a novel method to load multiple genes onto the ACEs by using only two selectable marker genes. These markers may be removed from the ACEs before therapeutic application. This novel technology could revolutionize gene therapeutic applications targeting the treatment of complex disorders and cancers. It could also speed up cell therapy by allowing researchers to engineer a chromosome with a predetermined set of genetic factors to differentiate adult stem cells, embryonic stem cells and induced pluripotent stem (iPS) cells into cell types of therapeutic value. It is also a suitable tool for the investigation of complex biochemical pathways in basic science by producing an ACEs with several genes from a signal transduction pathway of interest.

  2. Assessing artificial neural networks and statistical methods for infilling missing soil moisture records

    Science.gov (United States)

    Dumedah, Gift; Walker, Jeffrey P.; Chik, Li

    2014-07-01

    Soil moisture information is critically important for water management operations, including flood forecasting, drought monitoring, and groundwater recharge estimation. While an accurate and continuous record of soil moisture is required for these applications, the available soil moisture data, in practice, are typically fraught with missing values. There is a wide range of methods available for infilling hydrologic variables, but a thorough inter-comparison between statistical methods and artificial neural networks has not been made. This study examines five statistical methods: monthly averages, the weighted Pearson correlation coefficient, a method based on the temporal stability of soil moisture, a weighted merging of these three methods, and a method based on the concept of rough sets. Additionally, nine artificial neural networks are examined, broadly categorized into feedforward, dynamic, and radial basis networks. These 14 infilling methods were used to estimate missing soil moisture records and subsequently validated against known values for 13 soil moisture monitoring stations at three different soil layer depths in the Yanco region in southeast Australia. The evaluation results show that the three highest-performing methods are the nonlinear autoregressive neural network, the rough sets method, and monthly replacement. The high estimation accuracy (root mean square error (RMSE) of about 0.03 m/m) of the nonlinear autoregressive network is due to its regression-based dynamic network, which allows feedback connections through discrete-time estimation. An equally high accuracy (0.05 m/m RMSE) in the rough sets procedure illustrates the important role of the temporal persistence of soil moisture, with the capability to account for different soil moisture conditions.
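
    The best-performing method above, a nonlinear autoregressive network, can be approximated with a small MLP over lagged values that walks forward through the gaps. The sketch below uses scikit-learn as an assumed stand-in; the study's actual network architecture and data are not reproduced, and real records would add exogenous inputs (rainfall, nearby stations, depth).

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def fill_gaps_autoregressive(series, lags=3):
        """Infill NaN gaps in a soil moisture series with a lagged (autoregressive) MLP."""
        s = np.asarray(series, dtype=float).copy()
        X, y = [], []
        for t in range(lags, len(s)):                 # training pairs from complete runs
            window = s[t - lags:t + 1]
            if not np.isnan(window).any():
                X.append(window[:-1])
                y.append(window[-1])
        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        model.fit(np.array(X), np.array(y))
        for t in range(lags, len(s)):                 # walk forward over the gaps
            if np.isnan(s[t]) and not np.isnan(s[t - lags:t]).any():
                s[t] = model.predict(s[t - lags:t].reshape(1, -1))[0]
        return s

    # Toy series with a gap; units are volumetric water content (m/m).
    t = np.linspace(0, 20, 200)
    sm = 0.25 + 0.05 * np.sin(t)
    sm[80:90] = np.nan
    filled = fill_gaps_autoregressive(sm)
    ```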

  3. A comparison of methods for demonstrating artificial bone lesions; conventional versus computer tomography

    International Nuclear Information System (INIS)

    Heller, M.; Wenk, M.; Jend, H.H.

    1984-01-01

    Conventional tomography (T) and computer tomography (CT) were used to examine 97 artificial bone lesions at various sites. The purpose of the study was to determine how far CT can replace T in the diagnosis of skeletal abnormalities. The results have shown that modern CT, particularly in its high-resolution form, equals T and provides additional information (the substrate of a lesion, its relationship to neighbouring tissues, simultaneous demonstration of soft tissue, etc.) that cannot be shown successfully by T. It follows that CT is indicated as the primary method of examination for lesions of the facial skeleton, skull base, spine, pelvis and, to some extent, the extremities. (orig.)

  4. Intelligent Evaluation Method of Tank Bottom Corrosion Status Based on Improved BP Artificial Neural Network

    Science.gov (United States)

    Qiu, Feng; Dai, Guang; Zhang, Ying

    Based on the acoustic emission information and the appearance inspection information from tank bottom online testing, the external factors associated with the tank bottom corrosion status are identified. Applying an artificial neural network intelligent evaluation method, three tank bottom corrosion status evaluation models, based respectively on appearance inspection information, acoustic emission information, and online testing information, are established. Compared with the results of acoustic emission online testing in the evaluation of test samples, the accuracy of the evaluation model based on online testing information is 94%. The evaluation model can evaluate tank bottom corrosion accurately and realizes intelligent evaluation of acoustic emission online testing of the tank bottom.

  5. Method of mobile robot indoor navigation by artificial landmarks with use of computer vision

    Science.gov (United States)

    Glibin, E. S.; Shevtsov, A. A.; Enik, O. A.

    2018-05-01

    The article describes an algorithm for mobile robot indoor navigation based on the use of visual odometry. The results of an experiment identifying calculation errors in the distance traveled under wheel slip are presented. It is shown that the use of computer vision allows one to correct erroneous coordinates of the robot with the help of artificial landmarks. A control system utilizing the proposed method has been realized on the basis of an Arduino Mega 2560 controller and a Raspberry Pi 3 single-board computer. The results of an experiment on mobile robot navigation with the use of this control system are presented.

  6. Hybrid Modeling and Optimization of Manufacturing Combining Artificial Intelligence and Finite Element Method

    CERN Document Server

    Quiza, Ramón; Davim, J Paulo

    2012-01-01

    Artificial intelligence (AI) techniques and the finite element method (FEM) are both powerful computing tools, which are extensively used for modeling and optimizing manufacturing processes. The combination of these tools has resulted in a new flexible and robust approach, as several recent studies have shown. This book aims to review the work already done in this field as well as to highlight new possibilities and foreseeable trends. The book is expected to be useful for postgraduate students and researchers working in the area of modeling and optimization of manufacturing processes.

  7. Standard test method for compressive (crushing) strength of fired whiteware materials

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2006-01-01

    1.1 This test method covers two test procedures (A and B) for the determination of the compressive strength of fired whiteware materials. 1.2 Procedure A is generally applicable to whiteware products of low- to moderately high-strength levels (up to 150 000 psi or 1030 MPa). 1.3 Procedure B is specifically devised for testing of high-strength ceramics (over 100 000 psi or 690 MPa). 1.4 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  8. Method and apparatus for control of coherent synchrotron radiation effects during recirculation with bunch compression

    Science.gov (United States)

    Douglas, David R; Tennant, Christopher

    2015-11-10

    A modulated-bending recirculating system that avoids CSR-driven breakdown in emittance compensation by redistributing the bending along the beamline. The modulated-bending recirculating system includes a) larger angles of bending in the initial FODO cells, thereby enhancing the impact of CSR early in the beam line while the bunch is long, and b) a decreased bending angle in the final FODO cells, reducing the effect of CSR while the bunch is short. The invention describes a method for controlling the effects of CSR during recirculation and bunch compression including a) correcting chromatic aberrations, b) correcting lattice- and CSR-induced curvature in the longitudinal phase space by compensating T566, and c) using lattice perturbations to compensate obvious linear correlations x-dp/p and x'-dp/p.

  9. Comparative Analysis of Reduced-Rule Compressed Fuzzy Logic Control and Incremental Conductance MPPT Methods

    Science.gov (United States)

    Kandemir, Ekrem; Borekci, Selim; Cetin, Numan S.

    2018-04-01

    Photovoltaic (PV) power generation has been widely used in recent years, with techniques for increasing the power efficiency representing one of the most important issues. The available maximum power of a PV panel depends on environmental conditions such as solar irradiance and temperature. To extract the maximum available power from a PV panel, various maximum-power-point tracking (MPPT) methods are used. In this work, two different MPPT methods were implemented for a 150-W PV panel. The first method, known as incremental conductance (Inc. Cond.) MPPT, determines the maximum power by measuring the derivative of the PV current with respect to voltage. The other method is based on reduced-rule compressed fuzzy logic control (RR-FLC), with which it is relatively easy to determine the maximum power because a single input variable is used, reducing the computing load. In this study, a 150-W PV panel system model was realized using these MPPT methods in MATLAB and the results were compared. According to the simulation results, the proposed RR-FLC-based MPPT could increase the response rate and tracking accuracy by 4.66% under standard test conditions.
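
    For reference, the incremental-conductance rule used by the first method follows from dP/dV = 0 at the maximum power point, which implies dI/dV = -I/V. Below is a minimal, hypothetical sketch of one tracking step; the step size and tolerance are assumptions for the example, not values from the paper.

```python
def inc_cond_step(v, i, v_prev, i_prev, v_step=0.5, tol=1e-3):
    """One incremental-conductance MPPT iteration.

    Returns the change to apply to the voltage reference, given the
    present and previous panel voltage/current samples.
    """
    dv, di = v - v_prev, i - i_prev
    if dv == 0.0:
        # Voltage unchanged: adjust only if the current moved.
        if di == 0.0:
            return 0.0
        return v_step if di > 0 else -v_step
    # At the MPP, dP/dV = 0  <=>  dI/dV = -I/V.
    err = di / dv + i / v
    if abs(err) < tol:
        return 0.0                      # close enough to the MPP: hold
    # err > 0 means operating left of the MPP, so raise the voltage.
    return v_step if err > 0 else -v_step
```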

  10. Comparison of classical statistical methods and artificial neural network in traffic noise prediction

    International Nuclear Information System (INIS)

    Nedic, Vladimir; Despotovic, Danijela; Cvetanovic, Slobodan; Despotovic, Milan; Babic, Sasa

    2014-01-01

    Traffic is the main source of noise in urban environments and significantly affects human mental and physical health as well as labor productivity. It is therefore very important to model the noise produced by various vehicles. Techniques for traffic noise prediction are mainly based on regression analysis, which generally is not good enough to describe the trends of noise. In this paper, the application of artificial neural networks (ANNs) to the prediction of traffic noise is presented. As input variables of the neural network, the structure of the traffic flow and the average speed of the traffic flow are chosen. The output variable of the network is the equivalent noise level Leq in the given time period. Based on these parameters, the network is modeled, trained and tested through a comparative analysis of the calculated values and measured levels of traffic noise, using an originally developed, user-friendly software package. It is shown that artificial neural networks can be a useful tool for the prediction of noise with sufficient accuracy. In addition, the measured values were also used to calculate the equivalent noise level by means of classical methods, and a comparative analysis is given. The results clearly show that the ANN approach is superior to any other statistical method in traffic noise level prediction. - Highlights: • We propose an ANN model for the prediction of traffic noise. • We developed an originally designed, user-friendly software package. • The results are compared with classical statistical methods. • The ANN model shows much better predictive capability.
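
    As a rough illustration of the modeling setup described above (traffic-flow structure and average speed in, equivalent level Leq out), the sketch below trains a small feed-forward network on synthetic data. The feature ranges and the formula generating the stand-in Leq values are invented for the example and do not come from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# Columns: cars/h, heavy vehicles/h, average speed (km/h) -- assumed features.
X = rng.uniform([100, 0, 20], [2000, 300, 90], size=(200, 3))
# Synthetic stand-in for measured Leq values (dB), loosely log-shaped.
y = 40 + 8 * np.log10(X[:, 0] + 5 * X[:, 1]) + 0.05 * X[:, 2]

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=1)
model.fit(X[:150], y[:150])
print("held-out R^2:", model.score(X[150:], y[150:]))
```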

  11. Comparison of artificial intelligence methods and empirical equations to estimate daily solar radiation

    Science.gov (United States)

    Mehdizadeh, Saeid; Behmanesh, Javad; Khalili, Keivan

    2016-08-01

    In the present research, three artificial intelligence methods, Gene Expression Programming (GEP), Artificial Neural Networks (ANN) and the Adaptive Neuro-Fuzzy Inference System (ANFIS), as well as 48 empirical equations (10 temperature-based, 12 sunshine-based and 26 based on meteorological parameters) were used to estimate daily solar radiation in Kerman, Iran, over the period 1992-2009. To develop the GEP, ANN and ANFIS models, depending on the empirical equations used, various combinations of minimum air temperature, maximum air temperature, mean air temperature, extraterrestrial radiation, actual sunshine duration, maximum possible sunshine duration, sunshine duration ratio, relative humidity and precipitation were considered as inputs to the mentioned intelligent methods. To compare the accuracy of the empirical equations and the intelligent models, the root mean square error (RMSE), mean absolute error (MAE), mean absolute relative error (MARE) and determination coefficient (R²) indices were used. The results showed that, in general, the sunshine-based and meteorological-parameters-based scenarios in the ANN and ANFIS models presented higher accuracy than the mentioned empirical equations. Moreover, the most accurate method in the studied region was the ANN11 scenario with five inputs. The values of the RMSE, MAE, MARE and R² indices for the mentioned model were 1.850 MJ m⁻² day⁻¹, 1.184 MJ m⁻² day⁻¹, 9.58% and 0.935, respectively.
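
    A representative member of the sunshine-based family compared above is the classical Angstrom-Prescott relation, in which daily global radiation is tied to extraterrestrial radiation through the relative sunshine duration (a and b are locally calibrated coefficients, often near 0.25 and 0.50):

```latex
% R_s: daily solar radiation, R_a: extraterrestrial radiation,
% n: actual and N: maximum possible sunshine duration.
R_s = \left( a + b\,\frac{n}{N} \right) R_a
```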

  12. Comparison of classical statistical methods and artificial neural network in traffic noise prediction

    Energy Technology Data Exchange (ETDEWEB)

    Nedic, Vladimir, E-mail: vnedic@kg.ac.rs [Faculty of Philology and Arts, University of Kragujevac, Jovana Cvijića bb, 34000 Kragujevac (Serbia); Despotovic, Danijela, E-mail: ddespotovic@kg.ac.rs [Faculty of Economics, University of Kragujevac, Djure Pucara Starog 3, 34000 Kragujevac (Serbia); Cvetanovic, Slobodan, E-mail: slobodan.cvetanovic@eknfak.ni.ac.rs [Faculty of Economics, University of Niš, Trg kralja Aleksandra Ujedinitelja, 18000 Niš (Serbia); Despotovic, Milan, E-mail: mdespotovic@kg.ac.rs [Faculty of Engineering, University of Kragujevac, Sestre Janjic 6, 34000 Kragujevac (Serbia); Babic, Sasa, E-mail: babicsf@yahoo.com [College of Applied Mechanical Engineering, Trstenik (Serbia)

    2014-11-15

    Traffic is the main source of noise in urban environments and significantly affects human mental and physical health as well as labor productivity. It is therefore very important to model the noise produced by various vehicles. Techniques for traffic noise prediction are mainly based on regression analysis, which generally is not good enough to describe the trends of noise. In this paper, the application of artificial neural networks (ANNs) to the prediction of traffic noise is presented. As input variables of the neural network, the structure of the traffic flow and the average speed of the traffic flow are chosen. The output variable of the network is the equivalent noise level Leq in the given time period. Based on these parameters, the network is modeled, trained and tested through a comparative analysis of the calculated values and measured levels of traffic noise, using an originally developed, user-friendly software package. It is shown that artificial neural networks can be a useful tool for the prediction of noise with sufficient accuracy. In addition, the measured values were also used to calculate the equivalent noise level by means of classical methods, and a comparative analysis is given. The results clearly show that the ANN approach is superior to any other statistical method in traffic noise level prediction. - Highlights: • We propose an ANN model for the prediction of traffic noise. • We developed an originally designed, user-friendly software package. • The results are compared with classical statistical methods. • The ANN model shows much better predictive capability.

  13. Color matching of fabric blends: hybrid Kubelka-Munk + artificial neural network based method

    Science.gov (United States)

    Furferi, Rocco; Governi, Lapo; Volpe, Yary

    2016-11-01

    Color matching of fabric blends is a key issue for the textile industry, mainly due to the rising need to create high-quality products for the fashion market. The process of mixing together differently colored fibers to match a desired color is usually performed by using historical recipes, skillfully managed by company colorists. More often than desired, the first attempt at creating a blend is not satisfactory, requiring the experts to spend effort changing the recipe through a trial-and-error process. To address this issue, a number of computer-based methods have been proposed in recent decades, roughly classified into theoretical and artificial neural network (ANN)-based approaches. Inspired by the above literature, the present paper provides a method for the accurate estimation of the spectrophotometric response of a textile blend composed of differently colored fibers made of different materials. In particular, the performance of the Kubelka-Munk (K-M) theory is enhanced by introducing an artificial intelligence approach to determine a more consistent value of the nonlinear function relating the blend to its components. A hybrid K-M+ANN-based method capable of modeling the color mixing mechanism is thereby devised to predict the reflectance values of a blend.
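
    To make the theoretical half of the hybrid approach concrete, the sketch below implements the classical single-constant Kubelka-Munk mixing baseline that the ANN correction is meant to improve on: component reflectances are mapped to K/S values, mixed in proportion to fiber concentrations, and mapped back. The numbers are illustrative; in the paper the nonlinear relationship is learned rather than fixed as here.

```python
import numpy as np

def k_over_s(R):
    """Kubelka-Munk function: K/S = (1 - R)^2 / (2R), for reflectance R in (0, 1]."""
    return (1.0 - R) ** 2 / (2.0 * R)

def reflectance(ks):
    """Inverse K-M function: R = 1 + K/S - sqrt((K/S)^2 + 2 K/S)."""
    return 1.0 + ks - np.sqrt(ks**2 + 2.0 * ks)

def blend_reflectance(concentrations, component_R):
    """Predict blend reflectance from a concentration-weighted K/S sum."""
    ks_blend = sum(c * k_over_s(R) for c, R in zip(concentrations, component_R))
    return reflectance(ks_blend)

# Two fibers with reflectances 0.8 and 0.2 at one wavelength, mixed 60/40:
print(blend_reflectance([0.6, 0.4], [0.8, 0.2]))
```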

  14. Intelligent condition monitoring method for bearing faults from highly compressed measurements using sparse over-complete features

    Science.gov (United States)

    Ahmed, H. O. A.; Wong, M. L. D.; Nandi, A. K.

    2018-01-01

    Condition classification of rolling element bearings in rotating machines is important to prevent the breakdown of industrial machinery. A considerable amount of literature has been published on bearing fault classification. These studies aim to determine automatically the current status of a rolling element bearing. Among them, methods based on compressed sensing (CS) have received some attention recently due to their ability to sample below the Nyquist rate. This technology has many possible uses in machine condition monitoring and has been investigated as an approach to fault detection and classification in the compressed domain, i.e., without reconstructing the original signal. However, previous CS-based methods have been found to be too weak for highly compressed data. The present paper explores computationally, for the first time, the effects of sparse-autoencoder-based over-complete sparse representations on the classification performance of highly compressed measurements of bearing vibration signals. For this study, the CS method was used to produce highly compressed measurements of the original bearing dataset. Then, an effective deep neural network (DNN) with an unsupervised feature learning algorithm based on a sparse autoencoder is used for learning over-complete sparse representations of these compressed datasets. Finally, the fault classification is achieved in two stages: pre-training classification based on a stacked autoencoder and a softmax regression layer forms the deep net stage (the first stage), and re-training classification based on the backpropagation (BP) algorithm forms the fine-tuning stage (the second stage). The experimental results show that the proposed method is able to achieve high levels of accuracy even with extremely compressed measurements, compared with the existing techniques.
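
    The compression stage referred to above reduces an n-sample vibration segment x to m << n random projections y = Phi x; the feature learning and classification then operate on y without ever reconstructing x. A toy version of that measurement step (synthetic signal, Gaussian sensing matrix, 8:1 ratio chosen arbitrarily) is sketched below.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 1024, 128                                 # 8:1 compression, assumed
x = np.sin(2 * np.pi * 50 * np.arange(n) / n)    # stand-in vibration segment
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
y = Phi @ x                                      # compressed measurement
print(x.shape, "->", y.shape)                    # (1024,) -> (128,)
```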

  15. Chapter 22: Compressed Air Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    Energy Technology Data Exchange (ETDEWEB)

    Kurnik, Charles W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Benton, Nathanael [Nexant, Inc., San Francisco, CA (United States); Burns, Patrick [Nexant, Inc., San Francisco, CA (United States)

    2017-10-18

    Compressed-air systems are used widely throughout industry for many operations, including pneumatic tools, packaging and automation equipment, conveyors, and other industrial process operations. Compressed-air systems are defined as a group of subsystems composed of air compressors, air treatment equipment, controls, piping, pneumatic tools, pneumatically powered machinery, and process applications using compressed air. A compressed-air system has three primary functional subsystems: supply, distribution, and demand. Air compressors are the primary energy consumers in a compressed-air system and are the primary focus of this protocol. The two compressed-air energy efficiency measures specifically addressed in this protocol are: High-efficiency/variable speed drive (VSD) compressor replacing modulating, load/unload, or constant-speed compressor; and Compressed-air leak survey and repairs. This protocol provides direction on how to reliably verify savings from these two measures using a consistent approach for each.

  16. Method for Multiple Targets Tracking in Cognitive Radar Based on Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Yang Jun

    2016-02-01

    Full Text Available A multiple-target cognitive radar tracking method based on Compressed Sensing (CS) is proposed. In this method, the theory of CS is introduced into the cognitive radar tracking process for a multiple-target scenario. The echo signal is sparsely expressed; the sparse matrix and the measurement matrix are designed accordingly, and the reconstruction of the measurement signal under the down-sampling condition is thereby realized. On the receiving end, considering that the traditional particle filter suffers from degeneracy and requires a large number of particles, the particle swarm optimization particle filter is used to track the targets. On the transmitting end, the Posterior Cramér-Rao Bound (PCRB) on the tracking accuracy is derived, and the radar waveform parameters are further cognitively designed using the PCRB. Simulation results show that the proposed method can not only reduce the data quantity, but also provide better tracking performance compared with the traditional method.

  17. Linearly and nonlinearly optimized weighted essentially non-oscillatory methods for compressible turbulence

    Science.gov (United States)

    Taylor, Ellen Meredith

    Weighted essentially non-oscillatory (WENO) methods have been developed to simultaneously provide robust shock-capturing in compressible fluid flow and avoid excessive damping of fine-scale flow features such as turbulence. This is accomplished by constructing multiple candidate numerical stencils that adaptively combine so as to provide high order of accuracy and high bandwidth-resolving efficiency in continuous flow regions, while averting instability-provoking interpolation across discontinuities. Under certain conditions in compressible turbulence, however, numerical dissipation remains unacceptably high even after optimization of the linear optimal stencil combination that dominates in smooth regions. The remaining nonlinear error arises from two primary sources: (i) the smoothness measurement that governs the application of adaptation away from the optimal stencil and (ii) the numerical properties of individual candidate stencils that govern numerical accuracy when adaptation engages. In this work, both of these sources are investigated, and corrective modifications to the WENO methodology are proposed and evaluated. Excessive nonlinear error due to the first source is alleviated through two separately considered procedures appended to the standard smoothness measurement technique, designated the "relative smoothness limiter" and the "relative total variation limiter." In theory, appropriate values of their associated parameters should be insensitive to flow configuration, thereby sidestepping the prospect of costly parameter tuning; and this expectation of broad effectiveness is assessed in direct numerical simulations (DNS) of one-dimensional inviscid test problems, three-dimensional compressible isotropic turbulence of varying Reynolds and turbulent Mach numbers, and shock/isotropic-turbulence interaction (SITI). In the process, tools for efficiently comparing WENO adaptation behavior in smooth versus shock-containing regions are developed.
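
    For context, in the standard fifth-order WENO construction the candidate-stencil reconstructions are blended with nonlinear weights driven by exactly the smoothness indicators discussed above; schematically,

```latex
% d_k: linear (optimal) weights, beta_k: smoothness indicators,
% epsilon: small constant avoiding division by zero.
\omega_k = \frac{\alpha_k}{\sum_{m=0}^{2} \alpha_m}, \qquad
\alpha_k = \frac{d_k}{(\epsilon + \beta_k)^2}, \qquad k = 0, 1, 2
```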

  18. Risk assessment for pipelines with active defects based on artificial intelligence methods

    Energy Technology Data Exchange (ETDEWEB)

    Anghel, Calin I. [Department of Chemical Engineering, Faculty of Chemistry and Chemical Engineering, University ' Babes-Bolyai' , Cluj-Napoca (Romania)], E-mail: canghel@chem.ubbcluj.ro

    2009-07-15

    The paper provides another insight into pipeline risk assessment for in-service pressure piping containing defects. In addition to the traditional analytical approximation methods and sampling-based methods, the safety index and failure probability of pressure piping containing defects are obtained based on a novel type of support vector machine developed in a minimax manner. The safety index or failure probability is computed based on a binary classification approach. The procedure, named the classification reliability procedure, involving a link between artificial intelligence and reliability methods, was developed as a user-friendly computer program in the MATLAB language. To reveal the capability of the proposed procedure, two comparative numerical examples, replicating a previous related work and predicting the failure probabilities of pressurized pipelines with defects, are presented.

  19. Risk assessment for pipelines with active defects based on artificial intelligence methods

    International Nuclear Information System (INIS)

    Anghel, Calin I.

    2009-01-01

    The paper provides another insight into pipeline risk assessment for in-service pressure piping containing defects. In addition to the traditional analytical approximation methods and sampling-based methods, the safety index and failure probability of pressure piping containing defects are obtained based on a novel type of support vector machine developed in a minimax manner. The safety index or failure probability is computed based on a binary classification approach. The procedure, named the classification reliability procedure, involving a link between artificial intelligence and reliability methods, was developed as a user-friendly computer program in the MATLAB language. To reveal the capability of the proposed procedure, two comparative numerical examples, replicating a previous related work and predicting the failure probabilities of pressurized pipelines with defects, are presented.

  20. Determination of Electron Optical Properties for Aperture Zoom Lenses Using an Artificial Neural Network Method.

    Science.gov (United States)

    Isik, Nimet

    2016-04-01

    Multi-element electrostatic aperture lens systems are widely used to control electron or charged particle beams in many scientific instruments. By means of the applied voltages, these lens systems can be operated for different purposes. In this context, numerous methods have been developed to calculate the focal properties of these lenses. In this study, an artificial neural network (ANN) classification method is utilized to determine whether the charged particle beam is focused or unfocused at the image point, as a function of the lens voltages, for multi-element electrostatic aperture lenses. The data set for training and testing the ANN is taken from the SIMION 8.1 simulation program, a well-known program of proven accuracy in charged particle optics. The mean squared error results of this study indicate that the ANN classification method provides notable performance characteristics for electrostatic aperture zoom lenses.

  1. Rupture process of the 2013 Okhotsk deep mega earthquake from iterative backprojection and compressed sensing methods

    Science.gov (United States)

    Qin, W.; Yin, J.; Yao, H.

    2013-12-01

    On May 24th, 2013, a Mw 8.3 normal-faulting earthquake occurred at a depth of approximately 600 km beneath the Sea of Okhotsk, Russia. It is a rare mega earthquake to have occurred at such a great depth. We use the time-domain iterative backprojection (IBP) method [1] and the frequency-domain compressive sensing (CS) technique [2] to investigate the rupture process and energy radiation of this mega earthquake. We currently use teleseismic P-wave data from about 350 stations of USArray. IBP is an improved version of the traditional backprojection method, which more accurately locates subevents (energy bursts) during earthquake rupture and determines the rupture speeds. The total rupture duration of this earthquake is about 35 s, with a nearly N-S rupture direction. We find that the rupture is bilateral in the first 15 seconds, with slow rupture speeds: about 2.5 km/s for the northward rupture and about 2 km/s for the southward rupture. After that, the northward rupture stopped while the rupture towards the south continued. The average southward rupture speed between 20-35 s is approximately 5 km/s, lower than the shear wave speed (about 5.5 km/s) at the hypocenter depth. The total rupture length is about 140 km, in a nearly N-S direction, with a southward rupture length of about 100 km and a northward rupture length of about 40 km. We also use the CS method, a sparse source inversion technique, to study the frequency-dependent seismic radiation of this mega earthquake. We observe a clear along-strike frequency dependence of the spatial and temporal distribution of seismic radiation and the rupture process. The results from the two methods are generally similar. In the next step, we will use data from dense arrays in southwest China and from global stations for further analysis, in order to study the rupture process of this deep mega earthquake more comprehensively. Reference [1] Yao H, Shearer P M, Gerstoft P. Subevent location and rupture imaging using iterative backprojection for

  2. An artificial nonlinear diffusivity method for supersonic reacting flows with shocks

    Science.gov (United States)

    Fiorina, B.; Lele, S. K.

    2007-03-01

    A computational approach for modeling interactions between shock waves, contact discontinuities and reaction zones with a high-order compact scheme is investigated. To prevent the formation of spurious oscillations around shocks, an artificial nonlinear viscosity [A.W. Cook, W.H. Cabot, A high-wavenumber viscosity for high-resolution numerical methods, J. Comput. Phys. 195 (2004) 594-601] based on a high-order derivative of the strain rate tensor is used. To capture temperature and species discontinuities, a nonlinear diffusivity based on the entropy gradient is added. It is shown that the damping of 'wiggles' is controlled by the model constants and is largely independent of the mesh size and the shock strength. The same holds for the numerical shock thickness, which allows a determination of the L2 error. In the shock tube problem, with fluids of different initial entropy separated by the diaphragm, an artificial diffusivity is required to accurately capture the contact surface. Finally, the method is applied to a shock wave propagating into a medium with non-uniform density/entropy and to a CJ detonation wave. A multi-dimensional formulation of the model is presented and is illustrated by a 2D oblique wave reflection from an inviscid wall, by a 2D supersonic blunt body flow and by a Mach reflection problem.
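
    Schematically, the cited high-wavenumber artificial viscosity is built from a high-order derivative of the strain-rate magnitude S, smoothed by a filter and scaled by the grid spacing; a common form following Cook and Cabot's approach (with C and r as model parameters; the exact normalization here is an assumption) is

```latex
% Overbar: smoothing (e.g. truncated Gaussian) filter; Delta: grid spacing.
\mu_{\mathrm{art}} = C \,\rho\, \overline{\left|\nabla^{r} S\right|}\, \Delta^{\,r+2},
\qquad r = 4 \ \text{(typical)}
```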

  3. Spatial capture-recapture: a promising method for analyzing data collected using artificial cover objects

    Science.gov (United States)

    Sutherland, Chris; Munoz, David; Miller, David A.W.; Grant, Evan H. Campbell

    2016-01-01

    Spatial capture–recapture (SCR) is a relatively recent development in ecological statistics that provides a spatial context for estimating abundance and space use patterns, and improves inference about absolute population density. SCR has been applied to individual encounter data collected noninvasively using methods such as camera traps, hair snares, and scat surveys. Despite the widespread use of capture-based surveys to monitor amphibians and reptiles, there are few applications of SCR in the herpetological literature. We demonstrate the utility of SCR for studies of reptiles and amphibians by analyzing capture–recapture data from Red-Backed Salamanders, Plethodon cinereus, collected using artificial cover boards. Using SCR to analyze spatial encounter histories of marked individuals, we found evidence that density differed little among four sites within the same forest (on average, 1.59 salamanders/m²) and that salamander detection probability peaked in early October (Julian day 278), reflecting the expected surface activity patterns of the species. The spatial scale of detectability, a measure of space use, indicates that the home range size for this population of Red-Backed Salamanders in autumn was 16.89 m². Surveying reptiles and amphibians using artificial cover boards regularly generates spatial encounter history data for known individuals, which can readily be analyzed using SCR methods, providing estimates of absolute density and inference about the spatial scale of habitat use.
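
    In SCR analyses of this kind, a commonly used detection model (not stated explicitly in the abstract) is a half-normal function of the distance between an individual's activity center s_i and a cover-board location x_j, which is where the reported spatial scale of detectability sigma enters:

```latex
% p_0: baseline detection probability at zero distance,
% sigma: spatial scale parameter governing space use.
p_{ij} = p_0 \exp\!\left(-\frac{\lVert \mathbf{x}_j - \mathbf{s}_i \rVert^2}{2\sigma^2}\right)
```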

  4. Monitoring of operation with artificial intelligence methods; Betriebsueberwachung mit Verfahren der Kuenstlichen Intelligenz

    Energy Technology Data Exchange (ETDEWEB)

    Bruenninghaus, H. [DMT-Gesellschaft fuer Forschung und Pruefung mbH, Essen (Germany). Geschaeftsbereich Systemtechnik

    1999-03-11

    Taking the applications 'early detection of fires' and 'reduction of message bursts' as examples, the usability of artificial intelligence (AI) methods in the monitoring of operations was examined in an R&D project. The contribution describes the conception, development and evaluation of solutions to the specified problems. As a basis for the project, a platform had to be created that made it possible to investigate different AI methods (in particular, artificial neural networks). Using these methods, ventilation data were acquired and processed, and the relationships between the ventilation measuring points along the air path were classified. (orig.)

  5. A stereo remote sensing feature selection method based on artificial bee colony algorithm

    Science.gov (United States)

    Yan, Yiming; Liu, Pigang; Zhang, Ye; Su, Nan; Tian, Shu; Gao, Fengjiao; Shen, Yi

    2014-05-01

    To improve the efficiency of stereo information for remote sensing classification, a stereo remote sensing feature selection method based on the artificial bee colony algorithm is proposed in this paper. Remote sensing stereo information can be described by a digital surface model (DSM) and an optical image, which contain information on the three-dimensional structure and the optical characteristics, respectively. Firstly, the three-dimensional structure characteristics can be analyzed by 3D-Zernike descriptors (3DZD). However, different parameters of 3DZD describe different complexities of the three-dimensional structure, and they need to be optimally selected for the various objects on the ground. Secondly, the features representing the optical characteristics also need to be optimized. If not properly handled, a stereo feature vector composed of 3DZD and image features would contain a lot of redundant information, and the redundant information may not improve the classification accuracy and may even cause adverse effects. To reduce information redundancy while maintaining or improving the classification accuracy, an optimization framework for this stereo feature selection problem is created, and the artificial bee colony algorithm is introduced to solve this optimization problem. Experimental results show that the proposed method can effectively improve the computational efficiency and the classification accuracy.

  6. Study on Fault Diagnostics of a Turboprop Engine Using Inverse Performance Model and Artificial Intelligent Methods

    Science.gov (United States)

    Kong, Changduk; Lim, Semyeong

    2011-12-01

    Recently, the health monitoring of major gas path components of gas turbines mostly uses model-based methods such as Gas Path Analysis (GPA). This method finds quantitative changes of component performance characteristic parameters, such as isentropic efficiency and mass flow parameter, by comparing measured engine performance parameters (temperatures, pressures, rotational speeds, fuel consumption, etc.) with clean engine performance parameters, free of any engine faults, calculated by the base engine performance model. Currently, expert engine diagnostic systems using artificial intelligence methods such as Neural Networks (NNs), Fuzzy Logic and Genetic Algorithms (GAs) have been studied to improve the model-based method. Among them, NNs are most often used in engine fault diagnostic systems due to their good learning performance, but they have the drawback of low accuracy and long learning times for building the learning database when there is a large amount of learning data. In addition, a very complex structure is required for effectively finding single-type or multiple-type faults of gas path components. This work inversely builds a base performance model of a turboprop engine, to be used for a high-altitude operation UAV, from measured performance data, and proposes a fault diagnostic system using the base engine performance model and artificial intelligence methods such as Fuzzy Logic and Neural Networks. The proposed diagnostic system first isolates the faulted components using Fuzzy Logic, then quantifies the faults of the identified components using the NN trained on a fault learning database, which is obtained from the developed base performance model. In training the NN, the Feed Forward Back Propagation (FFBP) method is used. Finally, it is verified through several test examples that component faults implanted arbitrarily in the engine are well isolated and quantified by the proposed diagnostic system.

  7. A versatile embedded boundary adaptive mesh method for compressible flow in complex geometry

    KAUST Repository

    Almarouf, Mohamad Abdulilah Alhusain Alali

    2017-02-25

    We present an embedded ghost-fluid method for the numerical solution of the compressible Navier-Stokes (CNS) equations in arbitrary complex domains. A PDE-based multidimensional extrapolation approach is used to reconstruct the solution in the ghost-fluid regions and to impose boundary conditions on the fluid-solid interface, coupled with a multi-dimensional algebraic interpolation for freshly cleared cells. The CNS equations are numerically solved by the second-order multidimensional upwind method. Block-structured adaptive mesh refinement, implemented with the Chombo framework, is utilized to reduce the computational cost while keeping a high-resolution mesh around the embedded boundary and in regions of high-gradient solutions. The versatility of the method is demonstrated via several numerical examples, in both static and moving geometry, ranging from low Mach number nearly incompressible flows to supersonic flows. Our simulation results are extensively verified against other numerical results and validated against available experimental results where applicable. The significance and advantages of our implementation, which revolve around balancing the solution accuracy against implementation difficulties, are briefly discussed as well.

  8. A versatile embedded boundary adaptive mesh method for compressible flow in complex geometry

    KAUST Repository

    Almarouf, Mohamad Abdulilah Alhusain Alali; Samtaney, Ravi

    2017-01-01

    We present an embedded ghost-fluid method for the numerical solution of the compressible Navier-Stokes (CNS) equations in arbitrary complex domains. A PDE-based multidimensional extrapolation approach is used to reconstruct the solution in the ghost-fluid regions and to impose boundary conditions on the fluid-solid interface, coupled with a multi-dimensional algebraic interpolation for freshly cleared cells. The CNS equations are numerically solved by the second-order multidimensional upwind method. Block-structured adaptive mesh refinement, implemented with the Chombo framework, is utilized to reduce the computational cost while keeping a high-resolution mesh around the embedded boundary and in regions of high-gradient solutions. The versatility of the method is demonstrated via several numerical examples, in both static and moving geometry, ranging from low Mach number nearly incompressible flows to supersonic flows. Our simulation results are extensively verified against other numerical results and validated against available experimental results where applicable. The significance and advantages of our implementation, which revolve around balancing the solution accuracy against implementation difficulties, are briefly discussed as well.

  9. Lattice Boltzmann methods for thermal flows: Continuum limit and applications to compressible Rayleigh Taylor systems

    NARCIS (Netherlands)

    Scagliarini, Andrea; Biferale, L.; Sbragaglia, M.; Sugiyama, K.; Toschi, F.

    2010-01-01

    We compute the continuum thermohydrodynamical limit of a new formulation of lattice kinetic equations for thermal compressible flows, recently proposed by Sbragaglia et al. [J. Fluid Mech. 628, 299 (2009)]. We show that the hydrodynamical manifold is given by the correct compressible

  10. Differences of Streptococcus mutans adhesion between artificial mouth systems: dynamic and static methods

    Directory of Open Access Journals (Sweden)

    Aryan Morita

    2016-06-01

    Full Text Available Background: Various materials have been used for treating dental caries, a disease that attacks the hard tissues of the teeth. The initial phase of caries is the formation of a bacterial biofilm, called dental plaque. Dental restorative materials are expected to prevent secondary caries formation initiated by dental plaque. Initial bacterial adhesion is assumed to be an important stage of dental plaque formation. Bacteria that recognize the receptors for binding to the pellicle on the tooth surface are known as initial bacterial colonizers. One of the bacteria that plays a role in the early stage of dental plaque formation is Streptococcus mutans (S. mutans). An artificial mouth system (AMS) used in bacterial biofilm research on the oral cavity reproduces the real conditions of the oral cavity and provides a continuous and intermittent supply of nutrients for bacteria. Purpose: This study aimed to compare the adhesion profile of S. mutans, the primary etiologic agent of dental caries, between a static method and a dynamic method using an artificial mouth system (AMS). Method: The study was conducted at the Faculty of Dentistry and the Integrated Research and Testing Laboratory (LPPT) of Universitas Gadjah Mada from April to August 2015. Composite resin was used as the subject of this research. Twelve composite resins with a diameter of 5 mm and a width of 2 mm were divided into two groups, namely a group using the static method and a group using the dynamic method. The static method was performed by submerging the samples in 100 µl of a 1.5 x 10^8 CFU/ml S. mutans suspension and 200 µl of BHI broth. The AMS method was carried out by placing the samples in the AMS tube drained with 20 drops/minute of bacterial suspension and sterile distilled water. After 72 hours, five samples from each group were measured for their biofilm mass using 1% crystal violet and read by a spectrophotometer at a wavelength of 570 nm. Meanwhile, one sample from each group was taken for its

  11. Triaxial- and uniaxial-compression testing methods developed for extraction of pore water from unsaturated tuff, Yucca Mountain, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    Mower, T.E.; Higgins, J.D. [Colorado School of Mines, Golden, CO (USA). Dept. of Geology and Geological Engineering; Yang, I.C. [Geological Survey, Denver, CO (USA). Water Resources Div.

    1989-12-31

    To support the study of the hydrologic system in the unsaturated zone at Yucca Mountain, Nevada, two extraction methods were examined for obtaining representative, uncontaminated pore-water samples from unsaturated tuff. Results indicate that triaxial compression, which uses a standard cell, can remove pore water from nonwelded tuff that has an initial moisture content greater than 11% by weight; uniaxial compression, which uses a specially fabricated cell, can extract pore water from nonwelded tuff that has an initial moisture content greater than 8% and from welded tuff that has an initial moisture content greater than 6.5%. For the ambient moisture conditions of Yucca Mountain tuffs, uniaxial compression is the more efficient method of pore-water extraction. 12 refs., 7 figs., 2 tabs.

  12. Design of alluvial Egyptian irrigation canals using artificial neural networks method

    Directory of Open Access Journals (Sweden)

    Hassan Ibrahim Mohamed

    2013-06-01

    Full Text Available In the present study, the artificial neural networks (ANNs) method is used to estimate the main parameters used in the design of stable alluvial channels. The capability of ANN models to predict stable alluvial channel dimensions is investigated, where the flow rate and the sediment mean grain size are considered as input variables, and the wetted perimeter, hydraulic radius, and water surface slope are considered as output variables. The ANN models used are based on a back-propagation algorithm to train a multi-layer feed-forward network (Levenberg-Marquardt algorithm). The proposed models were verified using 311 data sets of field data collected from 61 man-made canals and drains. Several statistical measures and graphical representations are used to check the accuracy of the models in comparison with previous empirical equations. The results of the developed ANN model proved that this technique is reliable in this field compared with previously developed methods.

  13. Demand Forecasting Methods in Accommodation Establishments: A Research with Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Ebru ULUCAN

    2018-05-01

    Full Text Available As in every sector, demand forecasting in tourism is conducted with various qualitative and quantitative methods. In recent years, artificial neural network models, which have been developed as an alternative to these forecasting methods, have given the closest values in forecasting, with the smallest error percentage. This study aims to show that accommodation establishments can use neural network models as an alternative when forecasting their demand. With this aim, neural network models were tested using the rooms-sold values for the period 2013-2016 of a five-star hotel in Istanbul, and it was found that the results acquired from the tested models are the closest to the realized figures. In the light of these results, the tourism demand of the hotel for 2017 and 2018 has been forecasted.

  14. A comparison of performance of several artificial intelligence methods for forecasting monthly discharge time series

    Science.gov (United States)

    Wang, Wen-Chuan; Chau, Kwok-Wing; Cheng, Chun-Tian; Qiu, Lin

    2009-08-01

    Summary: Developing a hydrological forecasting model based on past records is crucial to effective hydropower reservoir management and scheduling. Traditionally, time series analysis and modeling are used for building mathematical models to generate hydrologic records in hydrology and water resources. Artificial intelligence (AI), as a branch of computer science, is capable of analyzing long-series and large-scale hydrological data. In recent years, applying AI technology to hydrological forecasting modeling has become one of the leading research issues. In this paper, autoregressive moving-average (ARMA) models, artificial neural network (ANN) approaches, adaptive neuro-fuzzy inference system (ANFIS) techniques, genetic programming (GP) models and the support vector machine (SVM) method are examined using long-term observations of monthly river flow discharges. Four quantitative standard statistical performance evaluation measures, the coefficient of correlation (R), the Nash-Sutcliffe efficiency coefficient (E), the root mean squared error (RMSE) and the mean absolute percentage error (MAPE), are employed to evaluate the performances of the various models developed. Two case study river sites are also provided to illustrate their respective performances. The results indicate that the best performance can be obtained by ANFIS, GP and SVM, in terms of different evaluation criteria, during the training and validation phases.
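
    The four evaluation measures named above are standard and compact to compute; the sketch below shows them for an observed/simulated discharge pair (the arrays are illustrative, not the case-study data).

```python
import numpy as np

def evaluate(obs, sim):
    """Return R, Nash-Sutcliffe E, RMSE and MAPE for paired series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    e = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
    rmse = float(np.sqrt(np.mean((obs - sim) ** 2)))
    mape = float(100.0 * np.mean(np.abs((obs - sim) / obs)))
    return {"R": r, "E": e, "RMSE": rmse, "MAPE": mape}

print(evaluate([10.0, 12.0, 9.0, 14.0], [11.0, 12.0, 8.0, 13.0]))
```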

  15. A ghost fluid method for sharp interface simulations of compressible multiphase flows

    International Nuclear Information System (INIS)

    Majidi, Sahand; Afshari, Asghar

    2016-01-01

    A ghost-fluid-based computational tool is developed to study a wide range of compressible multiphase flows involving strong shocks and contact discontinuities, while accounting for surface tension, viscous stresses and gravitational forces. The solver utilizes the constrained reinitialization method to predict the interface configuration at each time step. The surface tension effect is handled via an exact interface Riemann problem solver. Interfacial viscous stresses are approximated by considering continuous velocity and viscous stress across the interface. To assess the performance of the solver, several benchmark problems are considered: the one-dimensional gas-water shock tube problem, shock-bubble interaction, air cavity collapse in water, underwater explosion, the Rayleigh-Taylor instability, and ellipsoidal drop oscillations. Results obtained from the numerical simulations indicate that the numerical methodology performs reasonably well in predicting flow features and exhibits very good agreement with prior experimental and numerical observations. To further examine the accuracy of the developed ghost fluid solver, the obtained results are compared to those of a conventional diffuse interface solver. The comparison shows the capability of our ghost fluid method to reproduce the experimentally observed flow characteristics while revealing more details regarding topological changes of the interface.

  16. A ghost fluid method for sharp interface simulations of compressible multiphase flows

    Energy Technology Data Exchange (ETDEWEB)

    Majidi, Sahand; Afshari, Asghar [University of Tehran, Teheran (Iran, Islamic Republic of)

    2016-04-15

    A ghost-fluid-based computational tool is developed to study a wide range of compressible multiphase flows involving strong shocks and contact discontinuities, while accounting for surface tension, viscous stresses and gravitational forces. The solver utilizes the constrained reinitialization method to predict the interface configuration at each time step. The surface tension effect is handled via an exact interface Riemann problem solver. Interfacial viscous stresses are approximated by considering continuous velocity and viscous stress across the interface. To assess the performance of the solver, several benchmark problems are considered: the one-dimensional gas-water shock tube problem, shock-bubble interaction, air cavity collapse in water, underwater explosion, the Rayleigh-Taylor instability, and ellipsoidal drop oscillations. Results obtained from the numerical simulations indicate that the numerical methodology performs reasonably well in predicting flow features and exhibits very good agreement with prior experimental and numerical observations. To further examine the accuracy of the developed ghost fluid solver, the obtained results are compared to those of a conventional diffuse interface solver. The comparison shows the capability of our ghost fluid method to reproduce the experimentally observed flow characteristics while revealing more details regarding topological changes of the interface.

  17. On the use of adaptive multiresolution method with time-varying tolerance for compressible fluid flows

    Science.gov (United States)

    Soni, V.; Hadjadj, A.; Roussel, O.

    2017-12-01

    In this paper, a fully adaptive multiresolution (MR) finite difference scheme with a time-varying tolerance is developed to study compressible fluid flows containing shock waves in interaction with solid obstacles. To ensure adequate resolution near rigid bodies, the MR algorithm is combined with an immersed boundary method based on a direct-forcing approach in which the solid object is represented by a continuous solid-volume fraction. The resulting algorithm forms an efficient tool capable of solving linear and nonlinear wave problems on arbitrary geometries. For a one-dimensional scalar wave equation, the accuracy of the MR computation is, as expected, seen to decrease in time when using a constant MR tolerance, owing to the accumulation of error. To overcome this problem, a variable tolerance formulation, assessed through a new quality criterion, is proposed to ensure a time-converged solution of suitable quality. The newly developed algorithm, coupled with high-resolution spatial and temporal approximations, is successfully applied to shock-bluff body and shock-diffraction problems, solving the Euler and Navier-Stokes equations. Results show excellent agreement with the available numerical and experimental data, demonstrating the efficiency and performance of the proposed method.

  18. Diffuse-Interface Capturing Methods for Compressible Two-Phase Flows

    Science.gov (United States)

    Saurel, Richard; Pantano, Carlos

    2018-01-01

    Simulation of compressible flows became a routine activity with the appearance of shock-/contact-capturing methods. These methods can determine all waves, particularly discontinuous ones. However, additional difficulties may appear in two-phase and multimaterial flows due to the abrupt variation of thermodynamic properties across the interfacial region, with discontinuous thermodynamical representations at the interfaces. To overcome this difficulty, researchers have developed augmented systems of governing equations to extend the capturing strategy. These extended systems, reviewed here, are termed diffuse-interface models, because they are designed to compute flow variables correctly in numerically diffused zones surrounding interfaces. In particular, they facilitate coupling the dynamics on both sides of the (diffuse) interfaces and tend to the proper pure fluid-governing equations far from the interfaces. This strategy has become efficient for contact interfaces separating fluids that are governed by different equations of state, in the presence or absence of capillary effects, and with phase change. More sophisticated materials than fluids (e.g., elastic-plastic materials) have been considered as well.

  19. Fertility response of artificial insemination methods in sheep with fresh and frozen-thawed semen.

    Science.gov (United States)

    Masoudi, Reza; Zare Shahneh, Ahmad; Towhidi, Armin; Kohram, Hamid; Akbarisharif, Abbas; Sharafi, Mohsen

    2017-02-01

    The aim of this study was to evaluate the fertility response to artificial insemination (AI) methods with fresh and frozen sperm in sheep. In experiment 1, one hundred and fifty fat-tailed Zandi ewes were assigned to 3 equal groups and inseminated by three AI methods, consisting of vaginal, laparoscopic and trans-cervical AI, with fresh semen. In experiment 2, a factorial study (3 AI methods x 2 extenders) was used to analyze the effects of the three AI methods and two freezing extenders, containing soybean lecithin (SL) or egg yolk (EY), on the reproductive performance of 300 fat-tailed Zandi ewes. The total motility, progressive motility, viability and lipid peroxidation of semen were also evaluated after freeze-thawing in the two extenders. As a result, there was no significant difference among the three AI methods when fresh semen was used. In experiment 2, the highest pregnancy rate, parturition rate and lambing rate were obtained in the laparoscopic AI group. With frozen-thawed semen, trans-cervical AI was more efficient than the vaginal method, but its efficiency was not as high as that of the laparoscopic method. Also, the SL extender can be an efficient alternative extender to preserve ram sperm during the cryopreservation procedure without the adverse effects of EY. Copyright © 2016. Published by Elsevier Inc.

  20. Prediction of the Thermal Conductivity of Refrigerants by Computational Methods and Artificial Neural Network.

    Science.gov (United States)

    Ghaderi, Forouzan; Ghaderi, Amir H; Ghaderi, Noushin; Najafi, Bijan

    2017-01-01

    Background: The thermal conductivity of fluids can be calculated by several computational methods. However, these methods are reliable only within confined ranges of density, and there is no single computational method for calculating thermal conductivity over wide ranges of density. Methods: In this paper, two methods, an Artificial Neural Network (ANN) approach and a computational method established upon the Rainwater-Friend theory, were used to predict the value of thermal conductivity over all ranges of density. The thermal conductivity of six refrigerants, R12, R14, R32, R115, R143, and R152, was predicted by these methods, and the effectiveness of the models was determined and compared. Results: The results show that the computational method is a usable method for predicting thermal conductivity at low densities. However, the efficiency of this model is considerably reduced in the mid-range of density, meaning that it cannot be used at density levels higher than 6. On the other hand, the ANN approach is a reliable method for thermal conductivity prediction over all ranges of density. The best accuracy of the ANN is achieved when the number of units in the hidden layer is increased. Conclusion: The results of the computational method indicate that the regular dependence between thermal conductivity and density is lost at higher densities, turning the task into a nonlinear problem. Therefore, analytical approaches are not able to predict thermal conductivity over wide ranges of density. Instead, a nonlinear approach such as an ANN is a valuable method for this purpose.

  1. Research of Block-Based Motion Estimation Methods for Video Compression

    Directory of Open Access Journals (Sweden)

    Tropchenko Andrey

    2016-08-01

    Full Text Available This work is a review of the block-based algorithms used for motion estimation in video compression. It examines different types of block-based algorithms, ranging from the simplest, named Full Search, to fast adaptive algorithms like Hierarchical Search. The algorithms evaluated in this paper are widely accepted by the video compression community and have been used in implementing various standards, such as MPEG-4 Visual and H.264. The work also presents a very brief introduction to the entire flow of video compression.
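
    As a baseline for the algorithms reviewed, exhaustive Full Search can be sketched in a few lines: for one block of the current frame, every candidate displacement within a search radius of the reference frame is scored with the sum of absolute differences (SAD). Array shapes and parameters here are illustrative.

```python
import numpy as np

def full_search(ref, cur, by, bx, block=8, radius=4):
    """Return the motion vector (dy, dx) minimizing SAD for one block."""
    target = cur[by:by + block, bx:bx + block].astype(int)
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue                      # candidate falls outside the frame
            sad = np.abs(ref[y:y + block, x:x + block].astype(int) - target).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad

# The current frame is the reference shifted by (2, -1), so the best match
# for a block lies at offset (-2, 1) back in the reference frame.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64))
cur = np.roll(ref, (2, -1), axis=(0, 1))
print(full_search(ref, cur, 16, 16))
```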

  2. A direct Eulerian method for the simulation of multi-material compressible flows with material sliding

    International Nuclear Information System (INIS)

    Motte, R.; Braeunig, J.P.; Peybernes, M.

    2012-01-01

    As the simulation of compressible flows with several materials is essential for applications studied within the CEA-DAM, the authors propose an approach based on finite volumes with centred variables for solving the compressible Euler equations. Moreover, materials are allowed to slide with respect to each other, as is the case for water and air, for example. A conservation law is written for each material in a hybrid grid, and a contact condition between materials is expressed in the form of fluxes. The approach is illustrated by the case of an intense shock propagating in water and interacting with an air bubble, which is strongly deformed and compressed.

  3. Effectiveness of Context-Aware Character Input Method for Mobile Phone Based on Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Masafumi Matsuhara

    2012-01-01

    Full Text Available Opportunities and needs for inputting Japanese sentences on mobile phones are increasing as the performance of mobile phones improves. Applications like e-mail and Web search are now widely used on mobile phones, where Japanese sentences must be entered using only 12 keys. We have proposed a method to input Japanese sentences on mobile phones quickly and easily, which we call the number-Kanji translation method. In the proposed method, the number string input by a user is translated into a Kanji-Kana mixed sentence. The mapping from a number string to Kana strings is one-to-many, so it is difficult to translate a number string into the correct sentence intended by the user. The proposed context-aware mapping method is able to disambiguate a number string with an artificial neural network (ANN). The system can translate number segments into the intended words because it becomes aware of the correspondence between number segments and Japanese words through ANN learning, and it does not need a dictionary. We also show the effectiveness of the proposed method for practical use through an evaluation experiment on Twitter data.

  4. Stability monitoring for BWR based on singular value decomposition method using artificial neural network

    International Nuclear Information System (INIS)

    Tsuji, Masashi; Shimazu, Yoichiro; Michishita, Hiroshi

    2005-01-01

    A new method for evaluating the decay ratios in a boiling water reactor (BWR) using the singular value decomposition (SVD) method had been proposed. In this method, a signal component closely related to BWR stability can be extracted from the independent components of the neutron noise signal decomposed by the SVD method. However, real-time stability monitoring by the SVD method requires an efficient procedure for screening such components. For efficient screening, an artificial neural network (ANN) with three layers was adopted. The trained ANN was applied to decomposed components of local power range monitor (LPRM) signals measured in stability experiments conducted in the Ringhals-1 BWR. In each LPRM signal, multiple candidates were screened from the decomposed components, but decay ratios could be estimated by introducing appropriate criteria for selecting the most suitable component among the candidates. The estimated decay ratios are almost identical to those evaluated by visual screening in a previous study. The selected components commonly have the largest singular value, the largest decay ratio and the least squared fitting error among the candidates. By virtue of the excellent screening performance of the trained ANN, real-time stability monitoring by the SVD method can be applied in practice. (author)
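
    A toy sketch of the SVD-decomposition step is given below: a synthetic second-order (AR(2)) resonance stands in for the LPRM noise, the dominant direction of a delay-embedded signal matrix is extracted, and a decay ratio is read off the component's autocorrelation one oscillation period apart. The signal model and all parameters are assumptions, and the ANN screening stage is omitted.

    ```python
    # Minimal sketch: SVD-style component extraction and decay-ratio estimation.
    import numpy as np
    from scipy.signal import lfilter

    fs, f0, dr_true = 25.0, 0.5, 0.6            # sampling rate, resonance, target DR
    period = fs / f0                            # samples per oscillation
    r = dr_true ** (1.0 / period)               # AR(2) pole radius giving DR ~ 0.6
    w = 2.0 * np.pi * f0 / fs
    rng = np.random.default_rng(2)
    x = lfilter([1.0], [1.0, -2.0 * r * np.cos(w), r * r],
                rng.standard_normal(60_000))    # synthetic "neutron noise"

    # Delay embedding; the top eigenvector of the second-moment matrix is the
    # leading SVD direction of the embedded signal matrix.
    H = np.lib.stride_tricks.sliding_window_view(x, 64)
    evals, evecs = np.linalg.eigh(H.T @ H)
    comp = H @ evecs[:, -1]                     # dominant oscillatory component

    def autocorr(y, k):
        return float(np.dot(y[: y.size - k], y[k:]) / np.dot(y, y))

    # The autocorrelation one period apart approximates the decay ratio (~0.6).
    print("estimated decay ratio:", autocorr(comp, int(round(period))))
    ```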

  5. Cognitive Artificial Intelligence Method for Interpreting Transformer Condition Based on Maintenance Data

    Directory of Open Access Journals (Sweden)

    Karel Octavianus Bachri

    2017-07-01

    Full Text Available A3S (Arwin-Adang-Aciek-Sembiring) is a method of information fusion at a single observation, and OMA3S (Observation Multi-time A3S) is a method of information fusion for time-series data. This paper proposes an OMA3S-based cognitive artificial-intelligence method for interpreting transformer condition, calculated from maintenance data of the Indonesia National Electric Company (PLN). First, the proposed method is tested on previously published data, and then applied to the maintenance data. Maintenance data are fused to obtain part conditions, and part conditions are fused to obtain the transformer condition. Results show the proposed method is valid for DGA fault identification, with an average accuracy of 91.1%. The proposed method not only interprets the major fault, it can also identify minor faults occurring along with the major fault, enabling an early-warning feature. Results also show that part conditions can be interpreted using information fusion on maintenance data, and that the transformer condition can be interpreted using information fusion on part conditions. Future work on this research is to gather more data, to elaborate more factors to be fused, and to design a cognitive processor that can implement this concept of intelligent instrumentation.

  6. Efficient solution of the non-linear Reynolds equation for compressible fluid using the finite element method

    DEFF Research Database (Denmark)

    Larsen, Jon Steffen; Santos, Ilmar

    2015-01-01

    An efficient finite element scheme for solving the non-linear Reynolds equation for compressible fluid coupled to compliant structures is presented. The method is general and fast and can be used in the analysis of airfoil bearings with simplified or complex foil structure models. To illustrate...

  7. A comparison of sputum induction methods: ultrasonic vs compressed-air nebulizer and hypertonic vs isotonic saline inhalation.

    Science.gov (United States)

    Loh, L C; Eg, K P; Puspanathan, P; Tang, S P; Yip, K S; Vijayasingham, P; Thayaparan, T; Kumar, S

    2004-03-01

    Airway inflammation can be demonstrated by the modern method of sputum induction using an ultrasonic nebulizer and hypertonic saline. We studied whether a compressed-air nebulizer and isotonic saline, which are commonly available and cost less, are as effective in inducing sputum in normal adult subjects as the above-mentioned tools. Sixteen subjects underwent weekly sputum induction in the following manner: ultrasonic nebulizer (Medix Sonix 2000, Clement Clarke, UK) with hypertonic saline, ultrasonic nebulizer with isotonic saline, compressed-air nebulizer (BestNeb, Taiwan) with hypertonic saline, and compressed-air nebulizer with isotonic saline. Overall, the use of an ultrasonic nebulizer and hypertonic saline yielded significantly higher total sputum cell counts and a higher percentage of cell viability than compressed-air nebulizers and isotonic saline. With the latter, there was a trend towards squamous cell contamination. The proportion of various sputum cell types was not significantly different between the groups, and the reproducibility of sputum macrophage and neutrophil counts was high (intraclass correlation coefficient, r [95% CI]: 0.65 [0.30-0.91] and 0.58 [0.22-0.89], p < 0.05), including with compressed-air nebulizers and isotonic saline. We conclude that in normal subjects, although both nebulizers and both saline types can induce sputum with a reproducible cellular profile, ultrasonic nebulizers and hypertonic saline are more effective but less well tolerated.

  8. Investigation of GDL compression effects on the performance of a PEM fuel cell cathode by lattice Boltzmann method

    Science.gov (United States)

    Molaeimanesh, G. R.; Nazemian, M.

    2017-08-01

    Proton exchange membrane (PEM) fuel cells, with great potential for application in vehicle propulsion systems, have a promising future. However, overcoming the existing challenges to their wider commercialization requires further fundamental research. The effect of gas diffusion layer (GDL) compression on the performance of a PEM fuel cell is not well understood, especially via pore-scale simulation techniques that capture the fibrous microstructure of the GDL. In the current investigation, a stochastic microstructure reconstruction method is proposed which can capture GDL microstructure changes under compression. The lattice Boltzmann pore-scale simulation technique is then adopted to simulate the reactive gas flow through 10 different cathode electrodes with dissimilar carbon paper GDLs, produced from five different compression levels and two different carbon fiber diameters. The distributions of oxygen mole fraction, water vapor mole fraction and current density for the simulated cases are presented and analyzed. The simulations demonstrate that when the fiber diameter is 9 μm, adding compression leads to lower average current density, while for a fiber diameter of 7 μm the compression effect is not monotonic.

  9. Using the Maturity Method in Predicting the Compressive Strength of Vinyl Ester Polymer Concrete at an Early Age

    Directory of Open Access Journals (Sweden)

    Nan Ji Jin

    2017-01-01

    Full Text Available The compressive strength of vinyl ester polymer concrete is predicted using the maturity method. The compressive strength increased rapidly up to a curing age of 24 hrs and thereafter increased slowly up to 72 hrs. As the MMA content increased, the compressive strength decreased; likewise, as the curing temperature decreased, the compressive strength decreased. For vinyl ester polymer concrete, the datum temperature, ranging from −22.5 to −24.6°C, decreased as the MMA content increased. The maturity index equation for cement concrete cannot be applied directly to polymer concrete, and the maturity of vinyl ester polymer concrete can only be estimated through control of the time interval Δt. Thus, this study introduced a scaled-down factor (n) for determining the maturity of polymer concrete, and a factor of 0.3 was found to be the most suitable. Also, among the dose-response models, the DR-HILL compressive strength prediction model was determined to be applicable to vinyl ester polymer concrete. For the parameters of the prediction model, applying parameters obtained by combining all data from the three different amounts of MMA content was deemed acceptable. The study results could be useful for the quality control of vinyl ester polymer concrete and for nondestructive prediction of early-age strength.
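
    For illustration, the sketch below computes a Nurse-Saul-style maturity index and fits a Hill-type dose-response curve of strength versus maturity. The datum temperature, the way the scale-down factor n enters the index, and all strength values are illustrative assumptions, not the paper's calibration.

    ```python
    # Minimal sketch: maturity index plus a DR-HILL-style strength fit.
    import numpy as np
    from scipy.optimize import curve_fit

    T0, n = -23.0, 0.3                          # datum temperature (C), scale factor

    def maturity(temp_c, hours, T0=T0, n=n):
        """Assumed form: M = (T - T0) * n * t for a constant-temperature cure."""
        return (temp_c - T0) * n * hours

    def hill(M, s_max, K, h):
        """Dose-response curve: strength rises steeply, then saturates."""
        return s_max * M ** h / (K ** h + M ** h)

    ages = np.array([6.0, 12.0, 24.0, 48.0, 72.0])        # curing ages, hours
    M = maturity(20.0, ages)                              # cure at 20 C
    strength = np.array([8.0, 20.0, 55.0, 62.0, 65.0])    # MPa, mock data

    p, _ = curve_fit(hill, M, strength, p0=[70.0, 300.0, 2.0])
    print("fitted (s_max, K, h):", np.round(p, 2))
    print("predicted strength at 36 h:", hill(maturity(20.0, 36.0), *p))
    ```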

  10. Upwind methods for the Baer–Nunziato equations and higher-order reconstruction using artificial viscosity

    International Nuclear Information System (INIS)

    Fraysse, F.; Redondo, C.; Rubio, G.; Valero, E.

    2016-01-01

    This article is devoted to the numerical discretisation of the hyperbolic two-phase flow model of Baer and Nunziato. Particular attention is paid to the discretisation of intercell flux functions in the framework of Finite Volume and Discontinuous Galerkin approaches, where care has to be taken to efficiently approximate the non-conservative products inherent to the model equations. Various upwind approximate Riemann solvers have been tested on a set of discontinuous test cases. New discretisation schemes are proposed in a Discontinuous Galerkin framework following the criterion of Abgrall and the path-conservative formalism. A stabilisation technique based on artificial viscosity is applied to the high-order Discontinuous Galerkin method and compared against classical TVD-MUSCL Finite Volume flux reconstruction.

  11. A Method of Effective Quarry Water Purifying Using Artificial Filtering Arrays

    Science.gov (United States)

    Tyulenev, M.; Garina, E.; Khoreshok, A.; Litvin, O.; Litvin, Y.; Maliukhina, E.

    2017-01-01

    The development of open pit mining in the large coal basins of Russia and other countries increases its negative impact on the environment. Along with land damage and air pollution by dust and blasting combustion gases, coal pits have a significant negative impact on water resources. Polluted quarry water worsens the ecological situation over a much larger area than that covered by air pollution and land damage. This significantly worsens living conditions in cities and towns located near the coal pits, and complicates the subsequent restoration of the environment, irreversibly destroying nature. Therefore, research on quarry wastewater purification is becoming an important matter for scholars of technical colleges and universities in regions with developing open-pit mining. This paper describes a method of determining the basic parameters of the artificial filtering arrays formed in coal pits of Kuzbass (Western Siberia, Russia), and gives recommendations for its application.

  12. Infrared thermography based on artificial intelligence as a screening method for carpal tunnel syndrome diagnosis.

    Science.gov (United States)

    Jesensek Papez, B; Palfy, M; Mertik, M; Turk, Z

    2009-01-01

    This study further evaluated a computer-based infrared thermography (IRT) system, which employs artificial neural networks for the diagnosis of carpal tunnel syndrome (CTS), using a large database of 502 thermal images of the dorsal and palmar sides of 132 healthy and 119 pathological hands. It confirmed the hypothesis that the dorsal side of the hand is of greater importance than the palmar side when diagnosing CTS thermographically. Using this method it was possible to correctly classify 72.2% of all hands (healthy and pathological) based on dorsal images, and more than 80% of hands when only severely affected and healthy hands were considered. Compared with the gold-standard electromyographic diagnosis of CTS, IRT cannot be recommended as an adequate diagnostic tool when an exact severity-level diagnosis is required; however, we conclude that IRT could be used as a screening tool for severe cases in populations with high ergonomic risk factors for CTS.

  13. Upwind methods for the Baer–Nunziato equations and higher-order reconstruction using artificial viscosity

    Energy Technology Data Exchange (ETDEWEB)

    Fraysse, F., E-mail: francois.fraysse@rs2n.eu [RS2N, St. Zacharie (France); E. T. S. de Ingeniería Aeronáutica y del Espacio, Universidad Politécnica de Madrid, Madrid (Spain); Redondo, C.; Rubio, G.; Valero, E. [E. T. S. de Ingeniería Aeronáutica y del Espacio, Universidad Politécnica de Madrid, Madrid (Spain)

    2016-12-01

    This article is devoted to the numerical discretisation of the hyperbolic two-phase flow model of Baer and Nunziato. Particular attention is paid to the discretisation of intercell flux functions in the framework of Finite Volume and Discontinuous Galerkin approaches, where care has to be taken to efficiently approximate the non-conservative products inherent to the model equations. Various upwind approximate Riemann solvers have been tested on a set of discontinuous test cases. New discretisation schemes are proposed in a Discontinuous Galerkin framework following the criterion of Abgrall and the path-conservative formalism. A stabilisation technique based on artificial viscosity is applied to the high-order Discontinuous Galerkin method and compared against classical TVD-MUSCL Finite Volume flux reconstruction.

  14. Application of artificial neural networks for response surface modelling in HPLC method development

    Directory of Open Access Journals (Sweden)

    Mohamed A. Korany

    2012-01-01

    Full Text Available This paper discusses the usefulness of artificial neural networks (ANNs) for response surface modelling in HPLC method development. In this study, the combined effect of pH and mobile phase composition on the reversed-phase liquid chromatographic behaviour of a mixture of salbutamol (SAL) and guaiphenesin (GUA) (combination I), and a mixture of ascorbic acid (ASC), paracetamol (PAR) and guaiphenesin (GUA) (combination II), was investigated. The results were compared with those produced using multiple regression (REG) analysis. To examine the respective predictive power of the regression model and the neural network model, experimental and predicted response factor values, mean squared error (MSE), average error percentage (Er%), and coefficients of correlation (r) were compared. It was clear that the best networks were able to predict the experimental responses more accurately than multiple regression analysis.
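
    A minimal sketch of the comparison is given below: a quadratic multiple-regression surface and a small ANN are fitted to a synthetic two-factor (pH, % organic) response, and the MSE, Er% and r statistics named above are computed for each. The response function and factor ranges are invented for the demo.

    ```python
    # Minimal sketch: REG vs. ANN response-surface modelling with MSE, Er%, r.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures, StandardScaler

    rng = np.random.default_rng(3)
    pH = rng.uniform(2.5, 7.0, 80)
    org = rng.uniform(20.0, 60.0, 80)                   # % organic modifier
    y = np.sin(pH) + 0.02 * org + 5e-4 * (pH * org) ** 2 + rng.normal(0, 0.02, 80)
    X = np.column_stack([pH, org])

    reg = make_pipeline(PolynomialFeatures(2), LinearRegression()).fit(X, y)
    ann = make_pipeline(StandardScaler(),
                        MLPRegressor((15,), max_iter=20000, random_state=0)).fit(X, y)

    for name, m in [("REG", reg), ("ANN", ann)]:
        pred = m.predict(X)
        mse = np.mean((pred - y) ** 2)
        er = 100.0 * np.mean(np.abs(pred - y) / np.abs(y))   # average error %
        r = np.corrcoef(pred, y)[0, 1]
        print("%s  MSE=%.4g  Er%%=%.2f  r=%.4f" % (name, mse, er, r))
    ```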

  15. An Improved Ghost-cell Immersed Boundary Method for Compressible Inviscid Flow Simulations

    KAUST Repository

    Chi, Cheng

    2015-01-01

    This study presents an improved ghost-cell immersed boundary approach to represent a solid body in compressible flow simulations. In contrast to the commonly used approaches, in the present work ghost cells are mirrored through the boundary

  16. An improved ghost-cell immersed boundary method for compressible flow simulations

    KAUST Repository

    Chi, Cheng; Lee, Bok Jik; Im, Hong G.

    2016-01-01

    This study presents an improved ghost-cell immersed boundary approach to represent a solid body in compressible flow simulations. In contrast to the commonly used approaches, in the present work ghost cells are mirrored through the boundary

  17. A sharp interface method for compressible liquid–vapor flow with phase transition and surface tension

    Energy Technology Data Exchange (ETDEWEB)

    Fechter, Stefan, E-mail: stefan.fechter@iag.uni-stuttgart.de [Institut für Aerodynamik und Gasdynamik, Universität Stuttgart, Pfaffenwaldring 21, 70569 Stuttgart (Germany); Munz, Claus-Dieter, E-mail: munz@iag.uni-stuttgart.de [Institut für Aerodynamik und Gasdynamik, Universität Stuttgart, Pfaffenwaldring 21, 70569 Stuttgart (Germany); Rohde, Christian, E-mail: Christian.Rohde@mathematik.uni-stuttgart.de [Institut für Angewandte Analysis und Numerische Simulation, Universität Stuttgart, Pfaffenwaldring 57, 70569 Stuttgart (Germany); Zeiler, Christoph, E-mail: Christoph.Zeiler@mathematik.uni-stuttgart.de [Institut für Angewandte Analysis und Numerische Simulation, Universität Stuttgart, Pfaffenwaldring 57, 70569 Stuttgart (Germany)

    2017-05-01

    The numerical approximation of non-isothermal liquid–vapor flow within the compressible regime is a difficult task because complex physical effects at the phase interfaces can govern the global flow behavior. We present a sharp interface approach which treats the interface as a shock-wave like discontinuity. Any mixing of fluid phases is avoided by using the flow solver in the bulk regions only, and a ghost-fluid approach close to the interface. The coupling states for the numerical solution in the bulk regions are determined by the solution of local two-phase Riemann problems across the interface. The Riemann solution accounts for the relevant physics by enforcing appropriate jump conditions at the phase boundary. A wide variety of interface effects can be handled in a thermodynamically consistent way. This includes surface tension or mass/energy transfer by phase transition. Moreover, the local normal speed of the interface, which is needed to calculate the time evolution of the interface, is given by the Riemann solution. The interface tracking itself is based on a level-set method. The focus in this paper is the description of the two-phase Riemann solver and its usage within the sharp interface approach. One-dimensional problems are selected to validate the approach. Finally, the three-dimensional simulation of a wobbling droplet and a shock droplet interaction in two dimensions are shown. In both problems phase transition and surface tension determine the global bulk behavior.

  18. Study on compressive strength of self compacting mortar cubes under normal & electric oven curing methods

    Science.gov (United States)

    Prasanna Venkatesh, G. J.; Vivek, S. S.; Dhinakaran, G.

    2017-07-01

    In the majority of civil engineering applications, the basic building blocks are masonry units, which are developed into a monolithic structure by the plastering process with the help of binding agents, namely mud, lime, cement and their combinations. In recent advancements, the study of mortar plays an important role in crack repair, structural rehabilitation, retrofitting, pointing and plastering operations. The rheology of mortar includes flowability, passing ability and filling ability, which are analogous to the behaviour of self-compacting concrete. In the self-compacting (SC) mortar cubes, cement was replaced by mineral admixtures, namely silica fume (SF) from 5% to 20% (in increments of 5%), metakaolin (MK) from 10% to 30% (in increments of 10%) and ground granulated blast furnace slag (GGBS) from 25% to 75% (in increments of 25%). The ratio between cement and fine aggregate was kept constant at 1:2 for all normal and self-compacting mortar mixes. Accelerated curing, namely electric oven curing at a differential temperature of 128°C for a period of 4 hours, was adopted. It was found that the compressive strength obtained under both normal and electric oven curing was higher for self-compacting mortar cubes than for normal mortar cubes. Cement replacement by 15% SF, 20% MK and 25% GGBS gave higher strength under both curing conditions.

  19. A spectral element-FCT method for the compressible Euler equations

    International Nuclear Information System (INIS)

    Giannakouros, J.; Karniadakis, G.E.

    1994-01-01

    A new algorithm based on spectral element discretizations and flux-corrected transport concepts is developed for the solution of the Euler equations of inviscid compressible fluid flow. A conservative formulation is proposed based on one- and two-dimensional cell-averaging and reconstruction procedures, which employ a staggered mesh of Gauss-Chebyshev and Gauss-Lobatto-Chebyshev collocation points. Particular emphasis is placed on the construction of robust boundary and interfacial conditions in one and two dimensions. It is demonstrated through shock-tube problems and two-dimensional simulations that the proposed algorithm leads to stable, non-oscillatory solutions of high accuracy. Of particular importance is the fact that dispersion errors are minimal, as shown through numerical experiments. From the operational point of view, casting the method in a spectral element formulation provides flexibility in the discretization, since a variable number of macro-elements or collocation points per element can be employed to accommodate both accuracy and geometric requirements.

  20. Rescuers' physical fatigue with different chest compression to ventilation methods during simulated infant cardiopulmonary resuscitation.

    Science.gov (United States)

    Boldingh, Anne Marthe; Jensen, Thomas Hagen; Bjørbekk, Ane Torvik; Solevåg, Anne Lee; Nakstad, Britt

    2016-10-01

    To assess the development of objective, subjective and indirect measures of fatigue during simulated infant cardiopulmonary resuscitation (CPR) with two different methods. Using a neonatal manikin, 17 subject-pairs were randomized in a crossover design to provide 5 min of CPR with a 3:1 chest compression (CC) to ventilation (C:V) ratio and with continuous CCs at a rate of 120 min⁻¹ with asynchronous ventilations (CCaV-120). We measured participants' changes in heart rate (HR) and mean arterial pressure (MAP); perceived level of fatigue on a validated Likert scale; and manikin CC measures. CCaV-120 compared with the 3:1 C:V ratio resulted in a change during 5 min of CPR in HR of 49 versus 40 bpm (p = 0.01) and in MAP of 1.7 versus -2.8 mmHg (p = 0.03); fatigue rated on the Likert scale was 12.9 versus 11.4 (p = 0.2); and there was a significant decay in CC depth after 90 s (p = 0.03). The results indicate a trend toward more fatigue during simulated CPR in CCaV-120 than with the recommended 3:1 C:V CPR. These results support current guidelines.

  1. Application of a finite element method to the calculation of compressible subsonic flows

    International Nuclear Information System (INIS)

    Montagne, J.L.

    1980-01-01

    The study of accidental transients in nuclear reactors requires two-phase flow calculations in complicated geometries. In the present case, the study has been limited to a homogeneous two-dimensional flow model, which yields equations analogous to those for a compressible gas. The two-phase nature leads to sudden variations of specific mass as a function of pressure and enthalpy. In practice, the flows are in a transport regime; this is why a stable discretization scheme for the hyperbolic system of Euler equations was sought. In order to take the thermal phenomena into account, the natural variables were kept (flow rate, pressure, enthalpy) and the equations were used in their conservative form. A Galerkin method was used to solve the momentum conservation equation. The space to which the flow rates belong is subject to a matching condition: the normal component of these vectors is continuous at the boundary between elements. The pressure, enthalpy and specific mass, in contrast, are discontinuous between two elements. Correspondences must be established between these two types of discretization. The program put into operation uses a discretization of lowest order, and was conceived for processing time steps conditioned only by the flow speed. It has been tested in two cases where the thermal phenomena are important: cold liquid introduced into vapor, and heating along a plate [fr]

  2. An artificial neural network method for lumen and media-adventitia border detection in IVUS.

    Science.gov (United States)

    Su, Shengran; Hu, Zhenghui; Lin, Qiang; Hau, William Kongto; Gao, Zhifan; Zhang, Heye

    2017-04-01

    Intravascular ultrasound (IVUS) is well recognized as a powerful imaging technique for evaluating stenosis inside the coronary arteries. The detection of the lumen border and the media-adventitia (MA) border in IVUS images is the key procedure for determining the plaque burden inside the coronary arteries, but this detection can be burdensome to the doctor because of the large volume of IVUS images. In this paper, we use the artificial neural network (ANN) method as the feature learning algorithm for the detection of the lumen and MA borders in IVUS images. Two types of imaging information, spatial and neighboring features, were used as the input data to the ANN method, and the different vascular layers were then distinguished through two sparse auto-encoders and one softmax classifier. Another ANN was used to optimize the result of the first network. Finally, the active contour model was applied to smooth the lumen and MA borders detected by the ANN method. The performance of our approach was compared with manual drawing performed by two IVUS experts on 461 IVUS images from four subjects. Results showed that our approach had a high correlation and good agreement with the manual drawing results, with the detection error of the ANN method close to the error between the two groups of manual drawings. All these results indicate that the proposed approach can efficiently and accurately handle the detection of lumen and MA borders in IVUS images. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Artificial neural network and classical least-squares methods for neurotransmitter mixture analysis.

    Science.gov (United States)

    Schulze, H G; Greek, L S; Gorzalka, B B; Bree, A V; Blades, M W; Turner, R F

    1995-02-01

    Identification of individual components in biological mixtures can be a difficult problem regardless of the analytical method employed. In this work, Raman spectroscopy was chosen as a prototype analytical method due to its inherent versatility and applicability to aqueous media, making it useful for the study of biological samples. Artificial neural networks (ANNs) and the classical least-squares (CLS) method were used to identify and quantify the Raman spectra of the small-molecule neurotransmitters and mixtures of such molecules. The transfer functions used by a network, as well as the architecture of a network, played an important role in the ability of the network to identify the Raman spectra of individual neurotransmitters and of neurotransmitter mixtures. Specifically, networks using sigmoid and hyperbolic tangent transfer functions generalized better from the mixtures in the training data set to those in the testing data sets than networks using sine functions. Networks with connections that permit local processing of inputs generally performed better than other networks on all the testing data sets, and better than the CLS method of curve fitting on novel spectra of some neurotransmitters. The CLS method was found to perform well on noisy, shifted, and difference spectra.
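
    The CLS half of the comparison is compact enough to sketch: a measured spectrum is modelled as a linear combination of pure-component reference spectra, and the concentrations follow from a least-squares solve. The Gaussian "bands" below are synthetic stand-ins for real Raman spectra.

    ```python
    # Minimal sketch: classical least-squares (CLS) unmixing of a 3-component blend.
    import numpy as np

    wn = np.linspace(400.0, 1800.0, 700)          # wavenumber axis, 1/cm

    def band(center, width=15.0):
        return np.exp(-0.5 * ((wn - center) / width) ** 2)

    # Columns = pure-component reference spectra (mock neurotransmitters).
    S = np.column_stack([band(750) + 0.6 * band(1200),
                         band(950) + 0.8 * band(1450),
                         band(1100) + 0.5 * band(1600)])

    true_c = np.array([0.5, 0.3, 0.2])
    y = S @ true_c + np.random.default_rng(4).normal(0.0, 0.01, wn.size)

    c_hat, *_ = np.linalg.lstsq(S, y, rcond=None)  # least-squares fit of S c = y
    print("estimated concentrations:", np.round(c_hat, 3))
    ```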

  4. Artificial intelligence methods applied in the controlled synthesis of polydimethylsiloxane - poly(methacrylic acid) copolymer networks with imposed properties

    Science.gov (United States)

    Rusu, Teodora; Gogan, Oana Marilena

    2016-05-01

    This paper describes the use of artificial intelligence methods in copolymer network design. In the present study, we pursue a hybrid algorithm composed of two research themes in the genetic design framework: a Kohonen neural network (KNN) path (the forward problem) combined with a genetic algorithm path (the backward problem). The Tabu Search method is used to improve the performance of the genetic algorithm path.

  5. HPLC-QTOF-MS method for quantitative determination of active compounds in an anti-cellulite herbal compress

    Directory of Open Access Journals (Sweden)

    Ngamrayu Ngamdokmai

    2017-08-01

    Full Text Available A herbal compress used in Thai massage has been modified for use in cellulite treatment. Its main active ingredients are ginger, black pepper, java long pepper, tea and coffee. The objective of this study was to develop and validate an HPLC-QTOF-MS method for determining its active compounds, i.e., caffeine, 6-gingerol, and piperine, in raw materials as well as in the formulation, together with the flavouring agent camphor. The four compounds were chromatographically separated. The analytical method was validated for selectivity, intra- and inter-day precision, accuracy and matrix effect. The results showed that the herbal compress contained caffeine (2.16 mg/g), camphor (106.15 mg/g), 6-gingerol (0.76 mg/g), and piperine (4.19 mg/g). The chemical stability study revealed that herbal compresses retained >80% of their active compounds after 1 month of storage at ambient conditions. Our method can be used for quality control of the herbal compress and its raw materials.

  6. A parallel finite-volume finite-element method for transient compressible turbulent flows with heat transfer

    International Nuclear Information System (INIS)

    Masoud Ziaei-Rad

    2010-01-01

    In this paper, a two-dimensional numerical scheme is presented for the simulation of turbulent, viscous, transient compressible flows in the simultaneously developing hydraulic and thermal boundary layer region. The numerical procedure is a finite-volume-based finite-element method applied to unstructured grids. This combination together with a new method applied for the boundary conditions allows for accurate computation of the variables in the entrance region and for a wide range of flow fields from subsonic to transonic. The Roe-Riemann solver is used for the convective terms, whereas the standard Galerkin technique is applied for the viscous terms. A modified κ-ε model with a two-layer equation for the near-wall region combined with a compressibility correction is used to predict the turbulent viscosity. Parallel processing is also employed to divide the computational domain among the different processors to reduce the computational time. The method is applied to some test cases in order to verify the numerical accuracy. The results show significant differences between incompressible and compressible flows in the friction coefficient, Nusselt number, shear stress and the ratio of the compressible turbulent viscosity to the molecular viscosity along the developing region. A transient flow generated after an accidental rupture in a pipeline was also studied as a test case. The results show that the present numerical scheme is stable, accurate and efficient enough to solve the problem of transient wall-bounded flow.

  7. Development and validation of dissolution method for carvedilol compression-coated tablets

    Directory of Open Access Journals (Sweden)

    Ritesh Shah

    2011-12-01

    Full Text Available The present study describes the development and validation of a dissolution method for carvedilol compression-coated tablets. The dissolution test was performed using a TDT-06T dissolution apparatus. Based on the physiological conditions of the body, 0.1 N hydrochloric acid was used as the dissolution medium and release was monitored for 2 hours to verify the immediate-release pattern of the drug at acidic pH, followed by pH 6.8 citric-phosphate buffer for 22 hours to simulate a sustained-release pattern in the intestine. The influences of rotation speed and surfactant concentration in the medium were evaluated. Samples were analysed by a validated UV-visible spectrophotometric method at 286 nm. Sodium lauryl sulphate (SLS) at 1% was found to be optimal for improving carvedilol solubility in pH 6.8 citric-phosphate buffer. Analysis of variance showed no significant difference between the results obtained at 50 and 100 rpm. A discriminating dissolution method was thus successfully developed for carvedilol compression-coated tablets. The conditions that allowed dissolution determination were: USP type I apparatus at 100 rpm, containing 1000 ml of 0.1 N HCl for 2 hours, followed by pH 6.8 citric-phosphate buffer with 1% SLS for 22 hours at 37.0 ± 0.5 ºC. Samples were analysed by the UV spectrophotometric method and validated as per ICH guidelines.

  8. The use of artificial intelligence methods for visual analysis of properties of surface layers

    Directory of Open Access Journals (Sweden)

    Tomasz Wójcicki

    2014-12-01

    Full Text Available The article presents a selected area of research on the possibility of automatic prediction of material properties based on the analysis of digital images. An original, holistic model for forecasting the properties of surface layers is presented, based on a multi-step process that includes selected methods of image processing and analysis, inference with the use of a priori knowledge bases and multi-valued fuzzy logic, and simulation with the use of finite element methods. The characteristics of surface layers and the core technologies of their production processes (mechanical, thermal, thermo-mechanical, thermo-chemical, electrochemical, physical) are discussed. The methods used in the model for the classification of images of surface layers are shown. The objectives of using selected methods of digital image processing and analysis are described, including techniques for improving image quality, segmentation, morphological transformation, pattern recognition and simulation of physical phenomena in material structures. Keywords: image analysis, surface layer, artificial intelligence, fuzzy logic

  9. Thin Foil Acceleration Method for Measuring the Unloading Isentropes of Shock-Compressed Matter

    International Nuclear Information System (INIS)

    Asay, J.R.; Chhabildas, L.C.; Fortov, V.E.; Kanel, G.I.; Khishchenko, K.V.; Lomonosov, I.V.; Mehlhorn, T.; Razorenov, S.V.; Utkin, A.V.

    1999-01-01

    This work has been performed as part of the search for possible ways to utilize the capabilities of laser and particle beam techniques in shock wave and equation of state physics. The peculiarity of these techniques is that we have to deal with micron-thick targets and poorly reproducible incident shock wave parameters, so all measurements should be of high resolution and be done in one shot. Besides the Hugoniots, the experimental basis for creating equations of state includes isentropes corresponding to the unloading of shock-compressed matter. Experimental isentrope data are most important in the region of vaporization. With guns or explosive facilities, the unloading isentrope is recovered from a series of experiments in which the shock wave parameters in plates of standard low-impedance materials placed behind the sample are measured [1,2]. The specific internal energy and specific volume are calculated from the measured p(u) release curve, which corresponds to the Riemann integral. This approach is not well suited to experiments with beam techniques, where the incident shock waves are not well reproducible. The thick foil method [3] provides a few experimental points on the isentrope in one shot. When a higher shock impedance foil is placed on the surface of the material studied, the release occurs in steps whose durations correspond to the time for the shock wave to go back and forth in the foil. The velocity during the different steps, combined with knowledge of the Hugoniot of the foil, allows us to determine a few points on the isentropic unloading curve. However, the method becomes insensitive when the low-pressure range of vaporization is reached in the course of the unloading. The isentrope in this region can be measured by recording the smooth acceleration of a thin witness-plate foil. With the mass of the foil known, measurements of the foil acceleration will give us the vapor pressure.
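
    The Riemann-integral step mentioned above can be sketched numerically: along an isentropic release wave, dv/dp = -(du/dp)^2 and de = -p dv, so a measured p(u) path yields specific volume and internal energy by quadrature. The p(u) data below are synthetic; with p in GPa and v in cm^3/g, e comes out in MJ/kg.

    ```python
    # Minimal sketch: specific volume and energy along a release isentrope from p(u).
    import numpy as np

    p = np.linspace(20.0, 0.5, 100)               # pressure, GPa, falling on release
    u = 1.0 + 0.35 * (20.0 - p) ** 0.8            # mock particle velocity, km/s

    dudp = np.gradient(u, p)                      # du/dp along the measured path
    v0, e0 = 0.25, 0.0                            # assumed shock-state volume, energy

    dv = -dudp ** 2                               # dv/dp = -(du/dp)^2 (simple wave)
    v = v0 + np.concatenate([[0.0],
        np.cumsum(0.5 * (dv[1:] + dv[:-1]) * np.diff(p))])      # trapezoid rule
    e = e0 - np.concatenate([[0.0],
        np.cumsum(0.5 * (p[1:] + p[:-1]) * np.diff(v))])        # de = -p dv

    print("release endpoint: p=%.2f GPa  v=%.4f cm^3/g  e=%.4f MJ/kg"
          % (p[-1], v[-1], e[-1]))
    ```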

  10. Parallelization of one image compression method. Wavelet, Transform, Vector Quantization and Huffman Coding

    International Nuclear Information System (INIS)

    Moravie, Philippe

    1997-01-01

    Today, in the domain of digitized satellite images, the need for high-resolution data is increasing considerably. To transmit or store such images (more than 6000 by 6000 pixels), we need to reduce their data volume, and so we have to use real-time image compression techniques. The large amount of computation required by image compression algorithms prohibits the use of common sequential processors, in favor of parallel computers. The study presented here deals with the parallelization of a very efficient image compression scheme based on three techniques: Wavelet Transform (WT), Vector Quantization (VQ) and Entropic Coding (EC). First, we studied and implemented the parallelism of each algorithm in order to determine the architectural characteristics needed for real-time image compression. Then, we defined eight parallel architectures: 3 for the Mallat algorithm (WT), 3 for Tree-Structured Vector Quantization (VQ) and 2 for Huffman Coding (EC). As our system has to be multi-purpose, we chose 3 global architectures from among the 3x3x2 systems available. Because, for technological reasons, real-time performance is not reached in every case (for all the compression parameter combinations), we also defined and evaluated two algorithmic optimizations: fixed-point precision and merging entropy coding into vector quantization. As a result, we defined a new multi-purpose multi-SMIMD parallel machine, able to compress digitized satellite images in real time. The question of the best-suited architecture for real-time image compression was answered by presenting 3 parallel machines, among which one is multi-purpose, embedded, and might be used for other applications on board. (author) [fr]
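
    The data flow of the three-stage scheme can be sketched in scalar form: a one-level Haar transform, uniform quantization, and Huffman coding of the quantized coefficients. (The thesis uses Mallat's multilevel transform and tree-structured vector quantization; this simplified pipeline only illustrates the stages and their interfaces.)

    ```python
    # Minimal sketch: wavelet transform -> quantization -> Huffman entropy coding.
    import heapq
    import itertools
    from collections import Counter
    import numpy as np

    def haar1d(x):
        a = (x[0::2] + x[1::2]) / 2.0        # approximation coefficients
        d = (x[0::2] - x[1::2]) / 2.0        # detail coefficients
        return np.concatenate([a, d])

    def huffman_code(symbols):
        """Build a prefix code; returns {symbol: bitstring}."""
        tie = itertools.count()              # tie-breaker so dicts are never compared
        heap = [(c, next(tie), {s: ""}) for s, c in Counter(symbols).items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            c1, _, t1 = heapq.heappop(heap)
            c2, _, t2 = heapq.heappop(heap)
            merged = {s: "0" + b for s, b in t1.items()}
            merged.update({s: "1" + b for s, b in t2.items()})
            heapq.heappush(heap, (c1 + c2, next(tie), merged))
        return heap[0][2]

    rng = np.random.default_rng(5)
    row = rng.normal(0.0, 30.0, 4096)              # stand-in for one image scan line
    q = np.round(haar1d(row) / 4.0).astype(int)    # uniform quantizer, step 4
    code = huffman_code(q.tolist())
    bits = sum(len(code[s]) for s in q.tolist())
    print("rate: %.2f bits/sample (raw float64 is 64)" % (bits / q.size))
    ```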

  11. Technical Note: Validation of two methods to determine contact area between breast and compression paddle in mammography.

    Science.gov (United States)

    Branderhorst, Woutjan; de Groot, Jerry E; van Lier, Monique G J T B; Highnam, Ralph P; den Heeten, Gerard J; Grimbergen, Cornelis A

    2017-08-01

    To assess the accuracy of two methods of determining the contact area between the compression paddle and the breast in mammography. An accurate method to determine the contact area is essential to accurately calculate the average compression pressure applied by the paddle. For a set of 300 breast compressions, we measured the contact areas between breast and paddle, both capacitively, using a transparent foil with indium-tin-oxide (ITO) coating attached to the paddle, and retrospectively from the obtained mammograms using image processing software (Volpara Enterprise, algorithm version 1.5.2). A gold standard was obtained from video images of the compressed breast. During each compression, the breast was illuminated from the sides in order to create a dark shadow on the video image where the breast was in contact with the compression paddle. We manually segmented the shadows captured at the time of x-ray exposure and measured their areas. We found a strong correlation between the manual segmentations and the capacitive measurements [r = 0.989, 95% CI (0.987, 0.992)] and between the manual segmentations and the image processing software [r = 0.978, 95% CI (0.972, 0.982)]. Bland-Altman analysis showed a bias of -0.0038 dm² for the capacitive measurement (SD 0.0658, 95% limits of agreement [-0.1329, 0.1252]) and -0.0035 dm² for the image processing software (SD 0.0962, 95% limits of agreement [-0.1921, 0.1850]). The size of the contact area between the paddle and the breast can be determined accurately and precisely, both in real time using the capacitive method and retrospectively using image processing software. This result is beneficial for scientific research, data analysis and quality control systems that depend on one of these two methods for determining the average pressure on the breast during mammographic compression. © 2017 Sigmascreening B.V. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
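
    The Bland-Altman statistics quoted above are easy to reproduce on any paired measurements; the sketch below uses simulated areas whose error roughly mimics the reported bias and SD.

    ```python
    # Minimal sketch: Bland-Altman bias, SD and 95% limits of agreement.
    import numpy as np

    def bland_altman(a, b):
        d = np.asarray(a) - np.asarray(b)
        bias, sd = d.mean(), d.std(ddof=1)
        return bias, sd, (bias - 1.96 * sd, bias + 1.96 * sd)

    rng = np.random.default_rng(6)
    gold = rng.uniform(0.5, 2.5, 300)                    # manual areas, dm^2
    capacitive = gold + rng.normal(-0.004, 0.066, 300)   # simulated second method
    bias, sd, loa = bland_altman(capacitive, gold)
    print("bias=%.4f dm^2  SD=%.4f  LoA=(%.4f, %.4f)" % (bias, sd, *loa))
    ```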

  12. A Novel CAE Method for Compression Molding Simulation of Carbon Fiber-Reinforced Thermoplastic Composite Sheet Materials

    Directory of Open Access Journals (Sweden)

    Yuyang Song

    2018-06-01

    Full Text Available Their high specific strength and stiffness at lower cost make discontinuous fiber-reinforced thermoplastic (FRT) materials an ideal choice for lightweight applications in the automotive industry. Compression molding is one of the preferred manufacturing processes for such materials, as it offers the opportunity to maintain a longer fiber length and higher volume production. In the past, we have demonstrated that compression molding of FRT in bulk form can be simulated by treating the melt flow as a continuum using the conservation of mass and momentum equations. However, compression molding of such materials in sheet form using a similar approach does not work well; the assumption of the melt flow as a continuum does not hold for such deformation processes. To address this challenge, we have developed a novel simulation approach. First, the draping of the sheet is simulated as a structural deformation using an explicit finite element approach. Next, the draped shape is compressed using fluid mechanics equations. The proposed method was verified by building a physical part and comparing the predicted fiber orientation and warpage with measurements performed on the physical parts. The developed method and tools are expected to help expedite the development of FRT parts, which will help achieve lightweight targets in the automotive industry.

  13. A Comparative Study of Compression Methods and the Development of CODEC Program of Biological Signal for Emergency Telemedicine Service

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, T.S.; Kim, J.S. [Changwon National University, Changwon (Korea); Lim, Y.H. [Visionite Co., Ltd., Seoul (Korea); Yoo, S.K. [Yonsei University, Seoul (Korea)

    2003-05-01

    In an emergency telemedicine system such as the High-quality Multimedia based Real-time Emergency Telemedicine (HMRET) service, it is very important to examine the status of the patient continuously using multimedia data, including the biological signals (ECG, BP, respiration, SpO2) of the patient. In order to transmit these data in real time through communication channels of limited transmission capacity, it is also necessary to compress the biological data besides the other multimedia data. For this purpose, we investigate and compare ECG compression techniques in the time domain and in the wavelet transform domain, and present an effective lossless compression method for biological signals using the JPEG Huffman table for an emergency telemedicine system. For the HMRET service, we developed the lossless compression and reconstruction program for biological signals in MSVC++ 6.0 using the DPCM method and the JPEG Huffman table, and tested it in an internet environment. (author). 15 refs., 17 figs., 7 tabs.
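
    The DPCM stage of such a lossless coder is small enough to sketch: the first sample is kept as a seed and only integer first differences are coded, so the decoder reconstructs the waveform exactly. The entropy-coding stage (the JPEG Huffman table in the paper) is omitted here.

    ```python
    # Minimal sketch: lossless DPCM round trip for an integer biosignal.
    import numpy as np

    def dpcm_encode(x):
        x = np.asarray(x, dtype=np.int32)
        return x[0], np.diff(x)                  # seed sample + residuals

    def dpcm_decode(seed, residuals):
        return np.concatenate([[seed], seed + np.cumsum(residuals)])

    t = np.arange(1000)
    ecg = (500 * np.sin(2 * np.pi * t / 250)).astype(np.int32)   # mock samples
    seed, res = dpcm_encode(ecg)
    assert np.array_equal(dpcm_decode(seed, res), ecg)           # exactly lossless
    print("residual range:", res.min(), res.max())   # much narrower than the signal
    ```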

  14. DNABIT Compress – Genome compression algorithm

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences, based on a novel scheme of assigning binary bits to smaller segments of DNA bases that compresses both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that "DNABIT Compress" is the best among the compared compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the best existing methods could not achieve a ratio below 1.72 bits/base. PMID:21383923
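
    For orientation, the baseline that any DNA bit-coding scheme starts from is plain 2-bit packing of A/C/G/T, sketched below; DNABIT Compress goes further by assigning special bit codes to exact and reverse repeats to get below 2 bits/base, which this sketch does not attempt.

    ```python
    # Minimal sketch: 2-bit packing of a DNA sequence (the 2.00 bits/base baseline).
    CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
    BASE = {v: k for k, v in CODE.items()}

    def pack(seq):
        out, buf, nbits = bytearray(), 0, 0
        for ch in seq:
            buf = (buf << 2) | CODE[ch]
            nbits += 2
            if nbits == 8:
                out.append(buf)
                buf, nbits = 0, 0
        if nbits:
            out.append(buf << (8 - nbits))       # left-pad the final partial byte
        return bytes(out), len(seq)

    def unpack(data, n):
        bases = [BASE[(byte >> shift) & 0b11]
                 for byte in data for shift in (6, 4, 2, 0)]
        return "".join(bases[:n])

    dna = "ACGTACGTGGCCTTAA" * 4
    packed, n = pack(dna)
    assert unpack(packed, n) == dna              # round trip is lossless
    print("%.2f bits/base" % (8.0 * len(packed) / n))
    ```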

  15. Real power transfer allocation method with the application of artificial neural network

    Energy Technology Data Exchange (ETDEWEB)

    Mustafa, M.W.; Khalid, S.N.; Shareef, H.; Khairuddin, A. [Technological Univ. of Malaysia, Skudai, Johor Bahru (Malaysia). Dept. of Electrical Power Engineering

    2008-07-01

    This paper presented a newly modified nodal equations method for identifying the real power transfer between generators and loads. The objective was to represent each load current as a function of the generator currents and load voltages. The modified admittance matrix of a circuit was used to decompose the load-voltage-dependent term into components of generator-dependent terms. By using these two decompositions of current and voltage terms, the real power transfer between loads and generators was obtained. The robustness of the proposed method was demonstrated on the modified IEEE 30-bus system. An appropriate artificial neural network (ANN) was also created to solve the same problem in a simpler and faster manner with very good accuracy. For this purpose, a supervised learning paradigm and feedforward architecture were chosen for the proposed ANN power transfer allocation technique. The method could be adapted to other, larger systems by modifying the neural network structure. This technique can be used to solve some of the difficult real power pricing and costing issues and to ensure fairness and transparency in the deregulated environment of power system operation. 22 refs., 5 tabs., 8 figs.

  16. Characterisation of PV CIS module by artificial neural networks. A comparative study with other methods

    International Nuclear Information System (INIS)

    Almonacid, F.; Rus, C.; Hontoria, L.; Munoz, F.J.

    2010-01-01

    The presence of PV modules made with new technologies and materials is increasing in the PV market, especially thin-film solar modules (TFSM), which are ready to make a substantial contribution to the world's electricity generation. Although Si wafer-based cells account for most of the increase, thin-film technologies have shown the largest growth in the last three years; during 2007 they grew by 133%. On the other hand, manufacturers provide ratings for PV modules for conditions referred to as Standard Test Conditions (STC). However, these conditions rarely occur outdoors, so the usefulness and applicability of the indoor characterisation of PV modules under standard test conditions is a controversial issue. Therefore, to carry out correct photovoltaic engineering, a suitable characterisation of PV module electrical behaviour is necessary. The IDEA Research Group of Jaen University has developed a method based on artificial neural networks (ANNs) for the electrical characterisation of PV modules. An ANN was able to generate V-I curves of Si-crystalline PV modules for any irradiance and module cell temperature. The results show that the proposed ANN provides accurate predictions of Si-crystalline PV module performance when compared with measured values. This method is now applied to the electrical characterisation of PV CIS modules. Finally, a comparative study with other electrical characterisation methods is carried out. (author)

  17. Review on applications of artificial intelligence methods for dam and reservoir-hydro-environment models.

    Science.gov (United States)

    Allawi, Mohammed Falah; Jaafar, Othman; Mohamad Hamzah, Firdaus; Abdullah, Sharifah Mastura Syed; El-Shafie, Ahmed

    2018-05-01

    Efficacious operation of dam and reservoir systems can provide not only a defence against natural hazards but also rules to meet water demand. Successful operation of dam and reservoir systems to ensure optimal use of water resources is unattainable without accurate and reliable simulation models. Given the highly stochastic nature of hydrologic parameters, developing accurate predictive models that efficiently mimic such complex patterns is a growing domain of research. During the last two decades, artificial intelligence (AI) techniques have been utilized significantly to attain robust modelling of such stochastic hydrological parameters. AI techniques have also shown considerable progress in finding optimal rules for reservoir operation. This review explores the history of AI development in reservoir inflow forecasting and in the prediction of evaporation from a reservoir, the two major components of reservoir simulation. In addition, a critical assessment of the advantages and disadvantages of integrating AI simulation methods with optimization methods is reported. Future research on the potential of utilizing new innovative AI-based methods for reservoir simulation and optimization models is also discussed. Finally, a new mathematical procedure to accomplish realistic evaluation of the whole optimization model performance (reliability, resilience, and vulnerability indices) is recommended.
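
    The reliability, resilience and vulnerability indices mentioned at the end can be sketched directly (Hashimoto-style definitions are assumed here): reliability is the fraction of satisfactory periods, resilience the chance of recovering right after a failure, and vulnerability the mean deficit during failures.

    ```python
    # Minimal sketch: reliability / resilience / vulnerability of a supply record.
    import numpy as np

    def rrv(supply, demand):
        supply, demand = np.asarray(supply, float), np.asarray(demand, float)
        fail = supply < demand
        reliability = 1.0 - fail.mean()
        recoveries = np.sum(fail[:-1] & ~fail[1:])        # failure -> success steps
        resilience = recoveries / max(fail[:-1].sum(), 1)
        deficits = (demand - supply)[fail]
        vulnerability = deficits.mean() if deficits.size else 0.0
        return reliability, resilience, vulnerability

    rng = np.random.default_rng(7)
    demand = np.full(360, 100.0)                 # monthly demand, arbitrary units
    supply = 100.0 + rng.normal(0.0, 12.0, 360)  # simulated reservoir releases
    print("R=%.3f  Res=%.3f  Vul=%.2f" % rrv(supply, demand))
    ```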

  18. Orbit Determination from Tracking Data of Artificial Satellite Using the Method of Differential Correction

    Directory of Open Access Journals (Sweden)

    Byoung-Sun Lee

    1988-06-01

    Full Text Available A differential correction process for determining osculating orbital elements, as accurately as possible at a given instant of time, from tracking data of an artificial satellite was implemented. Preliminary orbital elements were used as the initial value of the differential correction procedure, which was iterated until the residual between the real observations (O) and the computed observations (C) was minimized. The tracked satellite was NOAA-9 of the TIROS-N series. Two types of tracking data were used: prediction data precomputed from the mean orbital elements of TBUS, and real data obtained by tracking the 1.707 GHz HRPT signal of NOAA-9 using a 5-meter auto-track antenna at the Radio Research Laboratory. Depending on the tracking data, either the Gauss method or the Herrick-Gibbs method was applied for preliminary orbit determination. In the differential correction stage we used both Escobal's (1975) analytical method and numerical ones, and their results are nearly consistent. The differentially corrected orbit converged to the same value despite the differences between the preliminary orbits of each time span.
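
    The differential-correction loop itself is generic nonlinear least squares: iterate x <- x + (J^T J)^(-1) J^T (O - C) until the O - C residuals stop shrinking. The sketch below applies it to a toy 2-D range-measurement problem, not to the orbital dynamics of the paper.

    ```python
    # Minimal sketch: Gauss-Newton differential correction on a toy problem.
    import numpy as np

    stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [7.0, 7.0]])
    x_true = np.array([3.0, 4.0])
    O = np.linalg.norm(stations - x_true, axis=1)     # "observed" ranges

    x = np.array([1.0, 1.0])                          # preliminary estimate
    for _ in range(10):
        C = np.linalg.norm(stations - x, axis=1)      # computed observations
        J = (x - stations) / C[:, None]               # partials d(range)/d(position)
        dx = np.linalg.solve(J.T @ J, J.T @ (O - C))  # normal equations
        x += dx
        if np.linalg.norm(dx) < 1e-12:                # residuals minimized
            break
    print("corrected estimate:", x)                   # converges to (3, 4)
    ```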

  19. [A method of recognizing biology surface spectrum using cascade-connection artificial neural nets].

    Science.gov (United States)

    Shi, Wei-Jie; Yao, Yong; Zhang, Tie-Qiang; Meng, Xian-Jiang

    2008-05-01

    A method of recognizing the visible spectra of micro-areas on biological surfaces with cascade-connection artificial neural nets is presented in this paper. The visible spectra of spots on apples' pericarp, ranging from 500 to 730 nm, were obtained with a fiber-probe spectrometer, and a new spectrum recognition system consisting of three-level cascade-connection neural nets was set up. The experiments show that the spectra of rotten, scarred and bumped spots on an apple's pericarp can be recognized by the system, and the recognition accuracy is higher than 85% even at a noise level of 15%. The new recognition system overcomes the poor accuracy and poor noise tolerance of the traditional system based on single cascade neural nets. Finally, a new way of expressing recognition results is proposed, based on the concept of degree of membership in fuzzy mathematics, through which recognition results can be expressed exactly and objectively.

  20. A Method Based on Artificial Intelligence To Fully Automatize The Evaluation of Bovine Blastocyst Images.

    Science.gov (United States)

    Rocha, José Celso; Passalia, Felipe José; Matos, Felipe Delestro; Takahashi, Maria Beatriz; Ciniciato, Diego de Souza; Maserati, Marc Peter; Alves, Mayra Fernanda; de Almeida, Tamie Guibu; Cardoso, Bruna Lopes; Basso, Andrea Cristina; Nogueira, Marcelo Fábio Gouveia

    2017-08-09

    Morphological analysis is the standard method of assessing embryo quality; however, its inherent subjectivity tends to generate discrepancies among evaluators. Using genetic algorithms and artificial neural networks (ANNs), we developed a new method for embryo analysis that is more robust and reliable than standard methods. Bovine blastocysts produced in vitro were classified as grade 1 (excellent or good), 2 (fair), or 3 (poor) by three experienced embryologists according to the International Embryo Technology Society (IETS) standard. The images (n = 482) were subjected to automatic feature extraction, and the results were used as input for a supervised learning process. One part of the dataset (15%) was used for a blind test after the fitting, on which the system had an accuracy of 76.4%. Interestingly, when the same embryologists evaluated a sub-sample (10%) of the dataset, there was only 54.0% agreement with the standard (mode of the grades). However, when the ANN assessed this sub-sample, there was 87.5% agreement with the modal values obtained by the evaluators. The presented methodology is covered by National Institute of Industrial Property (INPI) and World Intellectual Property Organization (WIPO) patents and is currently undergoing commercial evaluation of its feasibility.

  1. Applying Novel Time-Frequency Moments Singular Value Decomposition Method and Artificial Neural Networks for Ballistocardiography

    Directory of Open Access Journals (Sweden)

    Koivistoinen Teemu

    2007-01-01

    Full Text Available As we know, singular value decomposition (SVD) is designed for computing singular values (SVs) of a matrix. Then, if it is used for finding SVs of an m-by-1 or 1-by-m array with elements representing samples of a signal, it will return only one singular value that is not enough to express the whole signal. To overcome this problem, we designed a new kind of the feature extraction method which we call ''time-frequency moments singular value decomposition (TFM-SVD).'' In this new method, we use statistical features of time series as well as frequency series (Fourier transform of the signal). This information is then extracted into a certain matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results in using it indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of using other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) for ballistocardiogram (BCG) data clustering to look for probable heart disease of six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph, developed in our project. This kind of device combined with automated recording and analysis would be suitable for use in many places, such as home, office, and so forth. The results show that the method has high performance and it is almost insensitive to BCG waveform latency or nonlinear disturbance.

  2. Applying Novel Time-Frequency Moments Singular Value Decomposition Method and Artificial Neural Networks for Ballistocardiography

    Directory of Open Access Journals (Sweden)

    Alpo Värri

    2007-01-01

    Full Text Available As we know, singular value decomposition (SVD) is designed for computing the singular values (SVs) of a matrix. Then, if it is used for finding the SVs of an m-by-1 or 1-by-m array with elements representing samples of a signal, it will return only one singular value, which is not enough to express the whole signal. To overcome this problem, we designed a new kind of feature extraction method which we call "time-frequency moments singular value decomposition (TFM-SVD)." In this new method, we use statistical features of the time series as well as of the frequency series (the Fourier transform of the signal). This information is then extracted into a certain matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results of using it indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of using other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) for ballistocardiogram (BCG) data clustering to look for probable heart disease in six test subjects. The BCG of the test subjects was recorded using a chair-like ballistocardiograph developed in our project. This kind of device, combined with automated recording and analysis, would be suitable for use in many places, such as the home or office. The results show that the method has high performance and is almost insensitive to BCG waveform latency or nonlinear disturbance.

  3. Applying Novel Time-Frequency Moments Singular Value Decomposition Method and Artificial Neural Networks for Ballistocardiography

    Science.gov (United States)

    Akhbardeh, Alireza; Junnila, Sakari; Koivuluoma, Mikko; Koivistoinen, Teemu; Värri, Alpo

    2006-12-01

    As we know, singular value decomposition (SVD) is designed for computing singular values (SVs) of a matrix. Then, if it is used for finding the SVs of an m-by-1 or 1-by-m array with elements representing samples of a signal, it will return only one singular value, which is not enough to express the whole signal. To overcome this problem, we designed a new kind of feature extraction method which we call "time-frequency moments singular value decomposition (TFM-SVD)." In this new method, we use statistical features of the time series as well as of the frequency series (the Fourier transform of the signal). This information is then extracted into a certain matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results of using it indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of using other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) for ballistocardiogram (BCG) data clustering to look for probable heart disease in six test subjects. The BCG of the test subjects was recorded using a chair-like ballistocardiograph developed in our project. This kind of device, combined with automated recording and analysis, would be suitable for use in many places, such as the home or office. The results show that the method has high performance and is almost insensitive to BCG waveform latency or nonlinear disturbance.
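
    The TFM-SVD construction described above can be sketched compactly: statistical moments of the signal and of its Fourier magnitude are arranged into a small fixed-structure matrix, and that matrix's singular values become the features. The particular moments and the 2-by-4 layout below are illustrative assumptions, not necessarily the authors' exact design:

    ```python
    import numpy as np
    from scipy import stats

    def tfm_svd_features(signal: np.ndarray) -> np.ndarray:
        """Singular values of a fixed-structure matrix of time- and
        frequency-domain statistical moments of the signal."""
        spectrum = np.abs(np.fft.rfft(signal))

        def moments(x):
            return [x.mean(), x.std(), stats.skew(x), stats.kurtosis(x)]

        # 2x4 matrix: one row of time-domain moments, one of frequency-domain moments.
        m = np.array([moments(signal), moments(spectrum)])
        return np.linalg.svd(m, compute_uv=False)  # two SVs instead of just one

    # Toy BCG-like signal: a periodic component plus measurement noise.
    rng = np.random.default_rng(1)
    bcg = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.1 * rng.standard_normal(1000)
    print(tfm_svd_features(bcg))
    ```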

  4. Optimization of the segmented method for optical compression and multiplexing system

    Science.gov (United States)

    Al Falou, Ayman

    2002-05-01

    Because of the constantly increasing demand for image exchange, and despite the ever-increasing bandwidth of networks, compression and multiplexing of images are becoming inseparable from their generation and display. For high-resolution real-time motion pictures, performing compression electronically requires complex and time-consuming processing units. By contrast, owing to its inherently bi-dimensional character, coherent optics is well suited to such processes, which are basically bi-dimensional data handling in the Fourier domain. Additionally, the main limiting factor, the maximum frame rate, is vanishing thanks to recent improvements in spatial light modulator technology. The purpose of this communication is to benefit from recent optical correlation algorithms: the segmented filtering used to store multiple references in an optical filter of given space-bandwidth product can be applied to networks to compress and multiplex images in a given bandwidth channel.

  5. Development of modifications to the material point method for the simulation of thin membranes, compressible fluids, and their interactions

    Energy Technology Data Exchange (ETDEWEB)

    York, A.R. II [Sandia National Labs., Albuquerque, NM (United States). Engineering and Process Dept.

    1997-07-01

    The material point method (MPM) is an evolution of the particle-in-cell method, in which Lagrangian particles or material points are used to discretize the volume of a material. The particles carry properties such as mass, velocity, stress, and strain and move through an Eulerian (spatial) mesh. The momentum equation is solved on the Eulerian mesh. Modifications to the material point method are developed that allow the simulation of thin membranes, compressible fluids, and their dynamic interactions. A single layer of material points through the thickness is used to represent a membrane. The constitutive equation for the membrane is applied in the local coordinate system of each material point. Validation problems are presented and numerical convergence is demonstrated. Fluid simulation is achieved by implementing a constitutive equation for a compressible, viscous, Newtonian fluid and by solution of the energy equation. The fluid formulation is validated by simulating a traveling shock wave in a compressible fluid. Interactions of the fluid and membrane are handled naturally with the method. The fluid and membrane communicate through the Eulerian grid, on which forces are calculated from the fluid and membrane stress states. Validation problems include simulating a projectile impacting an inflated airbag. In some impact simulations with the MPM, bodies may tend to stick together when separating. Several algorithms are proposed and tested that allow bodies to separate from each other after impact. In addition, several methods are investigated to determine the local coordinate system of a membrane material point without relying upon connectivity data.

  6. Method and device for the powerful compression of laser-produced plasmas for nuclear fusion

    International Nuclear Information System (INIS)

    Hora, H.

    1975-01-01

    According to the invention, more than 10% of the laser energy is converted into mechanical energy of compression, in that the compression is produced by nonlinear excess radiation pressure. The temporal and local spectral and intensity distribution of the laser pulse must be controlled. The focussed laser beams must rise to over 10^15 W/cm^2 in less than 10^-9 seconds, and the time variation of the intensities must be arranged so that the dynamic absorption of the outer plasma corona by rippling consumes less than 90% of the laser energy. (GG) [de

  7. A new method for simplification and compression of 3D meshes

    OpenAIRE

    Attene, Marco

    2001-01-01

    We focus on the lossy compression of manifold triangle meshes. Our SwingWrapper approach partitions the surface of an original mesh M into simply-connected regions, called triangloids. We compute a new mesh M'. Each triangle of M' is a close approximation of a pseudo-triangle of M. By construction, the connectivity of M' is fairly regular and can be compressed to less than a bit per triangle using EdgeBreaker or one of the other recently developed schemes. The locations of the vertices of M' ...

  8. Artificial neural networks versus conventional methods for boiling water reactor stability monitoring

    International Nuclear Information System (INIS)

    Hagen, T.H.J.J. van der

    1995-01-01

    The application of an artificial neural network (ANN) for boiling water reactor (BWR) stability monitoring was studied. A three-layer perceptron was trained on synthetic autocorrelation functions to estimate the decay ratio and the resonance frequency from measured neutron noise. Training of the ANN was improved by adding noise to the training patterns and by applying nonconventional error definitions in the generalized delta rule. The performance of the developed ANN was compared with those of conventional stability monitoring techniques. Explicit care was taken to generate unbiased test data. It is found that the trained ANN is capable of monitoring the stability of the Dodewaard BWR for four specific cases. By comparing properties such as the false alarm ratio, the alarm failure ratio, and the average time to alarm, it is shown that the ANN performs worse than model-based methods in stability monitoring of exact second-order systems, but that it is more robust (more resistant to corruption of the input data and to deviations of the system at issue from an exact second-order system) than the other methods. The latter explains its good performance on the Dodewaard BWR and is promising for the application of an ANN to stability monitoring of other reactors and other operating conditions.
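
    A minimal sketch of the training setup described above, assuming synthetic autocorrelation functions (ACFs) of a second-order system parameterized by decay ratio and resonance frequency; the network width, the lag grid, and the use of scikit-learn's MLPRegressor are illustrative choices, not the paper's exact configuration:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    lags = np.linspace(0, 5, 64)  # ACF sampled at 64 lags (seconds)

    def acf(decay_ratio, freq):
        """ACF of a second-order system: a damped cosine whose envelope
        shrinks by the decay ratio over each oscillation period."""
        return decay_ratio ** (freq * lags) * np.cos(2 * np.pi * freq * lags)

    # Synthetic training set; noise is injected to improve generalization,
    # mirroring the trick described in the abstract.
    dr = rng.uniform(0.2, 1.0, 2000)
    fr = rng.uniform(0.3, 0.7, 2000)
    X = np.array([acf(d, f) for d, f in zip(dr, fr)])
    X += 0.05 * rng.standard_normal(X.shape)
    y = np.column_stack([dr, fr])

    net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, y)
    print(net.predict(acf(0.8, 0.5)[None, :]))  # estimate (decay ratio, frequency)
    ```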

  9. QSAR Study of Insecticides of Phthalamide Derivatives Using Multiple Linear Regression and Artificial Neural Network Methods

    Directory of Open Access Journals (Sweden)

    Adi Syahputra

    2014-03-01

    Full Text Available A quantitative structure-activity relationship (QSAR) for 21 insecticides of phthalamides containing hydrazone (PCH) was studied using multiple linear regression (MLR), principal component regression (PCR) and artificial neural network (ANN) methods. Five descriptors were included in the model for MLR and ANN analysis, and five latent variables obtained from principal component analysis (PCA) were used in PCR analysis. Calculation of descriptors was performed using the semi-empirical PM6 method. ANN analysis was found to be a superior statistical technique compared to the other methods and gave a good correlation between descriptors and activity (r2 = 0.84). Based on the obtained model, we have successfully designed some new insecticides with higher predicted activity than those of previously synthesized compounds, e.g. 2-(decalinecarbamoyl)-5-chloro-N'-((5-methylthiophen-2-yl)methylene)benzohydrazide, 2-(decalinecarbamoyl)-5-chloro-N'-((thiophen-2-yl)methylene)benzohydrazide and 2-(decalinecarbamoyl)-N'-(4-fluorobenzylidene)-5-chlorobenzohydrazide, with predicted log LC50 values of 1.640, 1.672, and 1.769, respectively.

  10. Residential building energy estimation method based on the application of artificial intelligence

    Energy Technology Data Exchange (ETDEWEB)

    Marshall, S.; Kajl, S.

    1999-07-01

    The energy requirements of residential buildings five to twenty-five stories high can be estimated using a newly proposed analytical method based on artificial intelligence. The method is fast and provides a wide range of results, such as total energy consumption values, power surges, and heating or cooling consumption values. A series of databases was created to take into account the particularities which influence the energy consumption of a building. In this study, the DOE-2 software was used to create 8 apartment models. A total of 27 neural networks were used: 3 for the estimation of energy consumption in the corridors, and 24 for inside the apartments. Three user interfaces were created to facilitate the estimation of energy consumption. These were named the Energy Estimation Assistance System (EEAS) interfaces and are only accessible using MATLAB software. The input parameters for the EEAS are: climatic region, exterior wall resistance, roofing resistance, type of windows, infiltration, number of storeys, and corridor ventilation system operating schedule. By changing the parameters, the EEAS can determine annual heating, cooling and basic energy consumption levels for apartments and corridors. 2 tabs., 2 figs.

  11. Comparison of three artificial digestion methods for detection of non-encapsulated Trichinella pseudospiralis larvae in pork

    DEFF Research Database (Denmark)

    Nockler, K.; Reckinger, S.; Szabo, I.

    2009-01-01

    In a ring trial involving five laboratories (A, B, C, D, and E), three different methods of artificial digestion were compared for the detection of non-encapsulated Trichinella pseudospiralis larvae in minced meat. Each sample panel consisted of ten 1 g minced pork samples. All samples in each panel... were derived from a bulk meat preparation with a nominal value of either 7 or 17 larvae per g (lpg). Samples were tested for the number of muscle larvae using the magnetic stirrer method (labs A, B, and E), the stomacher method (lab B), and Trichomatic 35 (R) (labs C and D). T. pseudospiralis larvae were... by using the magnetic stirrer method (22%), followed by the stomacher method (25%), and Trichomatic 35 (R) (30%). Results revealed that T. pseudospiralis larvae in samples with a nominal value of 7 and 17 lpg can be detected by all three methods of artificial digestion....

  12. A stable penalty method for the compressible Navier-Stokes equations: I. Open boundary conditions

    DEFF Research Database (Denmark)

    Hesthaven, Jan; Gottlieb, D.

    1996-01-01

    The purpose of this paper is to present asymptotically stable open boundary conditions for the numerical approximation of the compressible Navier-Stokes equations in three spatial dimensions. The treatment uses the conservation form of the Navier-Stokes equations and utilizes linearization...

  13. Compressed ECG biometric: a fast, secured and efficient method for identification of CVD patient.

    Science.gov (United States)

    Sufi, Fahim; Khalil, Ibrahim; Mahmood, Abdun

    2011-12-01

    Adoption of compression technology is often required for wireless cardiovascular monitoring, due to the enormous size of electrocardiography (ECG) signals and the limited bandwidth of the Internet. However, with present ECG-based biometric techniques, a compressed ECG must be decompressed before performing human identification. This additional decompression step creates a significant processing delay for the identification task, an obvious burden on a system if a hospital must do this for trillions of compressed ECGs per hour. Even though a hospital might be able to build expensive infrastructure to tame the exuberant processing, for small intermediate nodes in a multihop network, identification preceded by decompression is daunting. In this paper, we report a technique by which a person can be identified directly from his or her compressed ECG. This technique completely obviates the decompression step and therefore makes biometric identification less demanding for the smaller nodes in a multihop network. The biometric template created by this new technique is smaller than existing ECG-based biometric templates as well as other forms of biometric template such as face, finger and retina (up to 8302 times smaller than a face template and 9 times smaller than an existing ECG-based biometric template). The smaller template substantially reduces the one-to-many matching time for biometric recognition, resulting in a faster biometric authentication mechanism.

  14. A novel method for fabrication of biodegradable scaffolds with high compression moduli

    NARCIS (Netherlands)

    DeGroot, JH; Kuijper, HW; Pennings, AJ

    1997-01-01

    It has been previously shown that, when used for meniscal reconstruction, porous copoly(L-lactide/epsilon-caprolactone) implants enhanced healing of meniscal lesions owing to their excellent adhesive properties. However, it appeared that the materials had an insufficient compression modulus to

  15. Comparison of transform coding methods with an optimal predictor for the data compression of digital elevation models

    Science.gov (United States)

    Lewis, Michael

    1994-01-01

    Statistical encoding techniques reduce the number of bits required to encode a set of symbols by exploiting the symbols' probabilities. Huffman encoding is an example of statistical encoding that has been used for error-free data compression. The degree of compression given by Huffman encoding in this application can be improved by the use of prediction methods. These replace the set of elevations by a set of corrections that have a more advantageous probability distribution. In particular, the method of Lagrange multipliers for minimization of the mean square error has been applied to local geometrical predictors. Using this technique, an 8-point predictor achieved about a 7 percent improvement over an existing simple triangular predictor.
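
    The predict-then-encode idea can be sketched as follows: a simple triangular predictor turns elevations into corrections with a peaked distribution, which Huffman coding then encodes in fewer bits per sample. The toy elevation grid and predictor below are illustrative assumptions:

    ```python
    import heapq
    from collections import Counter
    import numpy as np

    def huffman_lengths(symbols):
        """Return the Huffman code length per symbol, built with a heap."""
        freq = Counter(symbols)
        heap = [(n, i, {s: 0}) for i, (s, n) in enumerate(freq.items())]
        heapq.heapify(heap)
        while len(heap) > 1:
            na, _, a = heapq.heappop(heap)
            nb, i, b = heapq.heappop(heap)
            merged = {s: d + 1 for s, d in {**a, **b}.items()}  # one level deeper
            heapq.heappush(heap, (na + nb, i, merged))
        return heap[0][2]

    rng = np.random.default_rng(0)
    dem = np.cumsum(rng.integers(-2, 3, (64, 64)), axis=1) + 100  # smooth toy elevations

    # Triangular predictor: z[i,j] ~ z[i,j-1] + z[i-1,j] - z[i-1,j-1]; encode corrections.
    resid = dem[1:, 1:] - (dem[1:, :-1] + dem[:-1, 1:] - dem[:-1, :-1])

    for name, data in [("raw elevations", dem.ravel()), ("corrections", resid.ravel())]:
        lengths = huffman_lengths(data.tolist())
        bits = sum(lengths[s] for s in data.tolist())
        print(f"{name}: {bits / data.size:.2f} bits/sample")  # corrections need fewer bits
    ```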

  16. Hall et al., 2016 Artificial Turf Surrogate Surface Methods Paper Data File

    Data.gov (United States)

    U.S. Environmental Protection Agency — Mercury dry deposition data quantified via static water surrogate surface (SWSS) and artificial turf surrogate surface (ATSS) collectors. This dataset is associated...

  17. On the Need for Artificial Intelligence and Advanced Test and Evaluation Methods for Space Exploration

    Science.gov (United States)

    Scheidt, D. H.; Hibbitts, C. A.; Chen, M. H.; Paxton, L. J.; Bekker, D. L.

    2017-02-01

    Implementing mature artificial intelligence would create the ability to significantly increase the science return from a mission, while potentially saving costs in mission and instrument operations, and solving currently intractable problems.

  18. Application of artificial intelligence (AI) methods for designing and analysis of reconfigurable cellular manufacturing system (RCMS)

    CSIR Research Space (South Africa)

    Xing, B

    2009-12-01

    Full Text Available This work focuses on the design and control of a novel hybrid manufacturing system, the Reconfigurable Cellular Manufacturing System (RCMS), using an Artificial Intelligence (AI) approach. It is hybrid as it combines the advantages of Cellular...

  19. The method in γ spectrum analysis with artificial neural network based on MATLAB

    International Nuclear Information System (INIS)

    Bai Lixin; Zhang Yiyun; Xu Jiayun; Wu Liping

    2003-01-01

    Analyzing γ spectra with an artificial neural network has the advantages of using the information of the whole spectrum and of high analysis precision. A convenient realization based on MATLAB is presented in this paper.

  20. Structural refinement of artificial superlattices by the X-ray diffraction method

    CERN Document Server

    Ishibashi, Y; Tsurumi, T

    1999-01-01

    This paper reports a structural refinement of BaTiO3 (BTO)/SrTiO3 (STO) artificially superstructured thin films. The refinement was achieved by taking into account the effect of interdiffusion between BTO and STO. The samples were prepared by a molecular-beam epitaxy method on a SrTiO3 (001) substrate at 600 °C. The phonon model was employed to simulate the X-ray diffraction (XRD) profiles. A discrepancy was observed in the intensities of the satellite peaks when the effect of the interdiffusion between BTO and STO was not incorporated in the simulation. In successive simulations, the concentration profile due to the interdiffusion was first calculated according to Fick's second law, and then the coefficients of the Fourier series describing the lattice distortion and the modulation of the structure factor were determined. The XRD profiles thus simulated almost completely agreed with those observed. This indicates that XRD analysis with the calculation process proposed in this study will ena...

  1. Fault detection and analysis in nuclear research facility using artificial intelligence methods

    Energy Technology Data Exchange (ETDEWEB)

    Ghazali, Abu Bakar, E-mail: Abakar@uniten.edu.my [Department of Electronics & Communication, College of Engineering, Universiti Tenaga Nasional, 43009 Kajang, Selangor (Malaysia); Ibrahim, Maslina Mohd [Instrumentation Program, Malaysian Nuclear Agency, Bangi (Malaysia)

    2016-01-22

    In this article, online detection of transducer and actuator condition is discussed. The case study concerns the readings of the area radiation monitor (ARM) installed at the chimney of the PUSPATI TRIGA nuclear reactor building, located at Bangi, Malaysia. There are at least five categories of abnormal ARM reading that can occur during transducer failure: the reading becomes very high; very low or zero; highly fluctuating and noisy; significantly higher than the normal reading; or significantly lower than it. An artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS) are good methods for modeling the plant dynamics. Equipment failure is assessed from the ARM reading by comparing it with the ARM data estimated by the ANN/ANFIS model. The failure categories, in either a 'yes' or 'no' state, are obtained from a comparison between the actual online data and the estimated output of the ANN/ANFIS model. It is found that this system design can correctly report the condition of the ARM equipment in a simulated environment and can later be implemented for online monitoring. This approach can also be extended to other transducers, such as the temperature profile of the reactor core, and to other critical actuator conditions, such as the valves and pumps in the reactor facility, provided that the failure symptoms are clearly defined.

  2. Artificial Intelligence Mechanisms on Interactive Modified Simplex Method with Desirability Function for Optimising Surface Lapping Process

    Directory of Open Access Journals (Sweden)

    Pongchanun Luangpaiboon

    2014-01-01

    Full Text Available A study has been made to optimise the influential parameters of a surface lapping process. Lapping time, lapping speed, downward pressure, and charging pressure were chosen from preliminary studies as the parameters that determine process performance in terms of material removal, lap width, and clamp force. Desirability functions of the nominal-the-best type were used to compromise the multiple responses into an overall desirability function level, or D response. The conventional modified simplex (Nelder-Mead simplex) method and the interactive desirability function are performed to optimise the parameter levels online in order to maximise the D response. To determine the lapping process parameters effectively, this research then applies two powerful artificial intelligence optimisation mechanisms: harmony search and firefly algorithms. The recommended condition of (lapping time, lapping speed, downward pressure, charging pressure) at (33, 35, 6.0, 5.0) has been verified by performing confirmation experiments. It showed that the D response level increased to 0.96. When compared with the current operating condition, there is a decrease of the material removal and lap width, with improved process performance indices of 2.01 and 1.14, respectively. Similarly, there is an increase of the clamp force, with an improved process performance index of 1.58.

  3. Artificial Bee Colony Algorithm Combined with Grenade Explosion Method and Cauchy Operator for Global Optimization

    Directory of Open Access Journals (Sweden)

    Jian-Guo Zheng

    2015-01-01

    Full Text Available The artificial bee colony (ABC) algorithm is a popular swarm intelligence technique inspired by the intelligent foraging behavior of honey bees. However, ABC is good at exploration but poor at exploitation, and its convergence speed is also an issue in some cases. To improve the performance of ABC, a novel ABC combined with the grenade explosion method (GEM) and a Cauchy operator, namely ABCGC, is proposed. GEM is embedded in the onlooker bees' phase to enhance the exploitation ability and accelerate the convergence of ABCGC; meanwhile, the Cauchy operator is introduced into the scout bees' phase to help ABCGC escape from local optima and further enhance its exploration ability. Two sets of well-known benchmark functions are used to validate the better performance of ABCGC. The experiments confirm that ABCGC is significantly superior to ABC and other competitors; in particular, it converges to the global optimum faster in most cases. These results suggest that ABCGC usually achieves a good balance between exploitation and exploration and can effectively serve as an alternative for global optimization.
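
    The hybrid's two additions can be sketched on top of a standard ABC skeleton: the onlooker phase probes around fitness-selected sources with a shrinking radius (a much-simplified stand-in for the grenade explosion method), and the scout phase replaces exhausted sources with heavy-tailed Cauchy jumps that help escape local optima. All parameter values below are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    sphere = lambda x: float(np.sum(x**2))  # benchmark objective to minimize

    DIM, FOODS, LIMIT, ITERS = 5, 10, 20, 200
    lo, hi = -5.0, 5.0
    foods = rng.uniform(lo, hi, (FOODS, DIM))
    fit = np.array([sphere(f) for f in foods])
    trials = np.zeros(FOODS, dtype=int)

    def try_move(i, candidate):
        """Greedy replacement, as in standard ABC."""
        c = np.clip(candidate, lo, hi)
        if sphere(c) < fit[i]:
            foods[i], fit[i], trials[i] = c, sphere(c), 0
        else:
            trials[i] += 1

    for t in range(ITERS):
        # Employed bees: move toward a random partner along one dimension.
        for i in range(FOODS):
            k, j = rng.integers(FOODS), rng.integers(DIM)
            v = foods[i].copy()
            v[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
            try_move(i, v)
        # Onlooker bees: probe fit-selected sources within a shrinking radius
        # (simplified stand-in for the grenade explosion neighborhood search).
        probs = (1 / (1 + fit)) / np.sum(1 / (1 + fit))
        radius = 0.1 * (hi - lo) * (1 - t / ITERS)
        for _ in range(FOODS):
            i = rng.choice(FOODS, p=probs)
            try_move(i, foods[i] + rng.uniform(-radius, radius, DIM))
        # Scout bees: abandon exhausted sources with a heavy-tailed Cauchy jump.
        for i in np.where(trials > LIMIT)[0]:
            foods[i] = np.clip(foods[i] + rng.standard_cauchy(DIM), lo, hi)
            fit[i], trials[i] = sphere(foods[i]), 0

    print("best value:", fit.min())
    ```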

  4. Cargo flows distribution over the loading sites of enterprises by using methods of artificial intelligence

    Directory of Open Access Journals (Sweden)

    Олександр Павлович Кіркін

    2017-06-01

    Full Text Available The development of information technologies and market requirements for effective control over cargo flows force enterprises to look for new ways and methods of automated control over technological operations. For rail transportation, one of the most complicated automation tasks is the distribution of cargo flows over the sites of loading and unloading. In this article, a solution using one of the methods of artificial intelligence, fuzzy inference, is proposed. An analysis of recent publications showed that the fuzzy inference method is effective for the solution of similar tasks: it makes it possible to accumulate experience, and it is stable against temporary impacts of environmental conditions. The existing methods of distributing cargo flows over the sites of loading and unloading are too simplified and can lead to incorrect decisions. The purpose of the article is to create a distribution model of enterprise cargo flows over the sites of loading and unloading based on the fuzzy inference method, and to automate the control. To achieve this objective, a mathematical model of the cargo flows distribution over the sites of loading and unloading was built using fuzzy logic. The key input parameters of the model are «number of loading sites», «arrival of the next set of cars», and «availability of additional operations». The output parameter is «a variety of set of cars». Application of the fuzzy inference method made it possible to reduce loading time by 15% and to reduce the costs of preparatory operations before loading by 20%. Thus this method is an effective means and holds great promise for increasing railway competitiveness. Interaction between different types of transportation and their influence on the cargo flows distribution over the sites of loading and unloading has not been considered; these sites may be busy with transshipment at that very time, which is characteristic of large enterprises

  5. Investigation of the influence of different surface regularization methods for cylindrical concrete specimens in axial compression tests

    Directory of Open Access Journals (Sweden)

    R. MEDEIROS

    Full Text Available ABSTRACT This study was conducted with the aim of evaluating the influence of different methods of end surface preparation on compressive strength test specimens. Four different methods were compared: a mechanical wear method through grinding using a diamond wheel, established by NBR 5738; a mechanical wear method using a diamond saw, established by NM 77; an unbonded system using neoprene pads in metal retainer rings, established by C1231; and a bonded capping method with sulfur mortar, established by NBR 5738 and by NM 77. To develop this research, 4 concrete mixes were prepared with different strength levels, 2 in group 1 and 2 in group 2 of the strength levels established by NBR 8953. Group 1 consists of classes C20 to C50, in steps of 5 MPa, also known as normal-strength concrete. Group 2 comprises classes C55 and C60 to C100, in steps of 10 MPa, also known as high-strength concrete. Compression tests were carried out at 7 and 28 days for the 4 surface preparation methods. The results of this study indicate that the method established by NBR 5738 is the most effective among the 4 methods considered, since it presents the lowest dispersion of test values, measured by the coefficient of variation, and, in almost all cases, the highest mean rupture strength. The method described by NBR 5738 achieved the expected strength level in all tests.

  6. Larvas output and influence of human factor in reliability of meat inspection by the method of artificial digestion

    OpenAIRE

    Đorđević Vesna; Savić Marko; Vasilev Saša; Đorđević Milovan

    2013-01-01

    On the basis of the performed analyses of the factors that allowed infected meat to reach the food chain, we have found that the infection occurred after consumption of meat inspected by the method of artificial digestion of collective samples using a magnetic stirrer (MM). In this work, assay results are presented which show how modifications of the method, at the level of final sedimentation, influence the reliability of Trichinella larvae detect...

  7. Low-Complexity Spatial-Temporal Filtering Method via Compressive Sensing for Interference Mitigation in a GNSS Receiver

    Directory of Open Access Journals (Sweden)

    Chung-Liang Chang

    2014-01-01

    Full Text Available A compressive sensing based array processing method is proposed to lower the complexity and computational load of the array system and to maintain robust antijam performance in a global navigation satellite system (GNSS) receiver. Firstly, the spatial and temporal compression matrices are multiplied with the array signal, which results in a small-size array system. Secondly, a 2-dimensional (2D) minimum variance distortionless response (MVDR) beamformer is employed in the proposed system to mitigate narrowband and wideband interference simultaneously. An iterative process is performed to find the optimal spatial and temporal gain vectors by the MVDR approach, which enhances the steering gain at the direction of arrival (DOA) of interest. Meanwhile, a null is placed at the DOA of the interference. Finally, a simulated navigation signal is generated offline by a graphical user interface tool and employed in the proposed algorithm. The theoretical analysis results using the proposed algorithm are verified based on simulated results.
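
    The core MVDR step named above is compact: given a sample covariance matrix R and a steering vector a toward the direction of interest, the weights w = R^-1 a / (a^H R^-1 a) minimize output power subject to unit gain toward a, which automatically places nulls on strong interferers. A minimal narrowband sketch for a uniform linear array; the geometry and signal model are assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, snapshots = 8, 500  # 8-element uniform linear array, half-wavelength spacing

    def steering(theta_deg):
        return np.exp(1j * np.pi * np.arange(N) * np.sin(np.radians(theta_deg)))

    # Weak desired signal at 10 deg, strong jammer at -30 deg, plus receiver noise.
    s = steering(10)[:, None] * (0.1 * rng.standard_normal(snapshots))
    j = steering(-30)[:, None] * (10 * rng.standard_normal(snapshots))
    n = 0.5 * (rng.standard_normal((N, snapshots)) + 1j * rng.standard_normal((N, snapshots)))
    x = s + j + n

    R = x @ x.conj().T / snapshots     # sample covariance matrix
    a = steering(10)
    Rinv_a = np.linalg.solve(R, a)
    w = Rinv_a / (a.conj() @ Rinv_a)   # MVDR: unit gain at 10 deg, minimum output power

    print("gain toward signal :", abs(w.conj() @ steering(10)))   # = 1 by construction
    print("gain toward jammer :", abs(w.conj() @ steering(-30)))  # << 1 (null)
    ```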

  8. Simulation of 2-D Compressible Flows on a Moving Curvilinear Mesh with an Implicit-Explicit Runge-Kutta Method

    KAUST Repository

    AbuAlSaud, Moataz

    2012-07-01

    The purpose of this thesis is to solve the unsteady two-dimensional compressible Navier-Stokes equations on a moving mesh using an implicit-explicit (IMEX) Runge-Kutta scheme. The moving mesh is implemented in the equations using the Arbitrary Lagrangian Eulerian (ALE) formulation. The inviscid part of the equation is solved explicitly using a second-order Godunov method, whereas the viscous part is calculated implicitly. We simulate subsonic compressible flow over a static NACA-0012 airfoil at different angles of attack. Finally, the moving mesh is examined by oscillating the airfoil harmonically between angles of attack of 0 and 20 degrees. It is observed that the numerical solution matches the experimental and numerical results in the literature to within 20%.

  9. Methods for Creation and Detection of Ultra-Strong Artificial Ionization in the Upper Atmosphere (Invited)

    Science.gov (United States)

    Bernhardt, P. A.; Siefring, C. L.; Briczinski, S. J.; Kendall, E. A.; Watkins, B. J.; Bristow, W. A.; Michell, R.

    2013-12-01

    The High Frequency Active Auroral Research Program (HAARP) transmitter in Alaska has been used to produce localized regions of artificial ionization at altitudes between 150 and 250 km. High power radio waves tuned near harmonics of the electron gyro frequency were discovered by Todd Pederson of the Air Force Research Laboratory to produce ionosonde traces that looked like artificial ionization layers below the natural F-region. The initial regions of artificial ionization (AI) were not stable but moved down in altitude over a period of 15 minutes. Recently, artificial ionization has been produced by 2nd, 3rd, 4th and 6th harmonic transmissions by HAARP. In March 2013, artificial ionization clouds were sustained for more than 5 hours using HAARP tuned to 4 fce at the full power of 3.6 megawatts with a twisted-beam antenna pattern. Frequency selection with narrow-band sweeps and antenna pattern shaping has been employed for optimal generation of AI. Recent research at HAARP has produced the longest-lived and densest artificial ionization clouds using HF transmissions at harmonics of the electron cyclotron frequency and ring-shaped radio beams tailored to prevent the descent of the clouds. Detection of artificial ionization employs (1) ionosonde echoes, (2) coherent backscatter from the Kodiak SuperDARN radar, (3) enhanced ion and plasma line echoes from the HAARP MUIR radar at 400 MHz, (4) high-resolution optical images from ground sites, (5) unique stimulated electromagnetic emissions, and (6) strong UHF and L-band scintillation induced on trans-ionospheric signals from satellite radio beacons. Future HAARP experiments will determine the uses of long-sustained AI for enhanced HF communications.

  10. SU-E-J-18: Evaluation of the Effectiveness of Compression Methods in SBRT for Lung.

    Science.gov (United States)

    Liao, Y; Tolekids, G; Yao, R; Templeton, A; Sensakovic, W; Chu, J

    2012-06-01

    This study aims to evaluate the effectiveness of compression in immobilizing the tumor during stereotactic body radiotherapy (SBRT) for lung cancer. Published data have demonstrated greater respiratory motion in the lower lobe than in the upper lobe during normal breathing. We hypothesize that 4DCT-based patient selection and abdominal compression immobilize lung tumor volumes effectively, regardless of their location. We retrospectively reviewed 12 SBRT lung cases treated with Trilogy® (Varian Medical Systems, Palo Alto, CA). Either a compression plate or a Vac-Lok™ cushion was used for abdominal compression as part of the SBRT immobilization system (Body Pro-Lok™, CIVCO) to restrict the patients' breathing during CT simulation and treatment delivery. These cases were grouped into 2 categories, lower and upper lobe tumors, each with 6 cases. Records for 33 treatments were studied. On each treatment day, the patient was set up to the bony anatomy using a kV-kV match. A CBCT was performed to further set up the patient to the tumor based on soft tissue information. The shifts from the CBCT setup were analyzed as displacement vectors demonstrating the magnitude of the tumor motion relative to the bony anatomy. The mean magnitudes of the displacement vectors for the upper lobe and lower lobe were 3.7±2.7 mm and 4.2±6.3 mm [1 S.D.], respectively. The Wilcoxon rank sum test indicates that the difference in the displacement vector between the two groups is not statistically significant (p-value = 0.33). The magnitudes of the shifts from CBCT were small, with mean values <5 mm, in SBRT lung treatments. Given the limited sample size, this suggests that our current 4DCT screening/abdominal compression approach is effective in restricting respiration-induced tumor motion regardless of tumor location within the lung. We plan to confirm this result in additional patients. © 2012 American Association of Physicists in Medicine.

  11. An Examination of a Music Appreciation Method Incorporating Tactile Sensations from Artificial Vibrations

    Science.gov (United States)

    Ideguchi, Tsuyoshi; Yoshida, Ryujyu; Ooshima, Keita

    We examined how test subjects' impressions of music changed when artificial vibrations were incorporated as constituent elements of a musical composition. In this study, test subjects listened to several music samples in which different types of artificial vibration had been incorporated and then subjectively evaluated any resulting changes to their impressions of the music. The following results were obtained: i) Even if rhythm vibration is added to a silent component of a musical composition, it can effectively enhance musical fitness. This could be readily accomplished when actual sounds that had been synchronized with the vibration components were provided beforehand. ii) The music could be listened to more comfortably by adding not only natural vibration extracted from percussion instruments but also artificial vibration as tactile stimulation with intentional timing. Furthermore, it was found that the test subjects' impression of the music was affected by the characteristics of the artificial vibration. iii) Adding vibration to high-frequency areas can offer an effective and practical way of enhancing the appeal of a musical composition. iv) Movement sensations of sound and vibration could be experienced when the strengths of the sound and vibration were modified in turn. These results suggest that the intentional application of artificial vibration could amplify the sensitivity of a listener.

  12. Exploiting of the Compression Methods for Reconstruction of the Antenna Far-Field Using Only Amplitude Near-Field Measurements

    Directory of Open Access Journals (Sweden)

    J. Puskely

    2010-06-01

    Full Text Available The novel approach exploits the principle of conventional two-plane amplitude measurements for the reconstruction of the unknown electric field distribution on the antenna aperture. The method combines a global optimization with a compression method. The global optimization (GO) method is used to minimize the functional, and the compression method is used to reduce the number of unknown variables. The algorithm employs the Real Coded Genetic Algorithm (RCGA) as the global optimization approach. The Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) are applied to reduce the number of unknown variables. Pros and cons of the methods are investigated and reported for the solution of the problem. In order to make the algorithm faster, exploitation of amplitudes from a single scanning plane is also discussed. First, the algorithm is used to obtain an initial estimate. Subsequently, the common Fourier iterative algorithm is used to reach the global minimum with sufficient accuracy. The method is examined by measuring a dish antenna.
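
    The compression step above shrinks the optimizer's search space: instead of evolving every aperture sample, the genetic algorithm evolves a few low-order DCT coefficients, and the full distribution is recovered by inverse transform. A minimal sketch of that reduction; the aperture size and number of retained coefficients are arbitrary:

    ```python
    import numpy as np
    from scipy.fft import idctn

    N, K = 32, 6  # 32x32 aperture samples; only a KxK block of DCT coefficients is unknown

    def aperture_from_coeffs(c):
        """Expand K*K optimization variables into a full N x N aperture field."""
        full = np.zeros((N, N))
        full[:K, :K] = c.reshape(K, K)   # low-order coefficients carry the smooth field
        return idctn(full, norm="ortho")

    # A genetic algorithm would evolve just K*K = 36 genes instead of N*N = 1024 unknowns.
    genes = np.random.default_rng(0).standard_normal(K * K)
    print(aperture_from_coeffs(genes).shape)  # (32, 32)
    ```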

  13. Computer aided diagnosis based on medical image processing and artificial intelligence methods

    Science.gov (United States)

    Stoitsis, John; Valavanis, Ioannis; Mougiakakou, Stavroula G.; Golemati, Spyretta; Nikita, Alexandra; Nikita, Konstantina S.

    2006-12-01

    Advances in imaging technology and computer science have greatly enhanced interpretation of medical images, and contributed to early diagnosis. The typical architecture of a Computer Aided Diagnosis (CAD) system includes image pre-processing, definition of region(s) of interest, features extraction and selection, and classification. In this paper, the principles of CAD systems design and development are demonstrated by means of two examples. The first one focuses on the differentiation between symptomatic and asymptomatic carotid atheromatous plaques. For each plaque, a vector of texture and motion features was estimated, which was then reduced to the most robust ones by means of ANalysis of VAriance (ANOVA). Using fuzzy c-means, the features were then clustered into two classes. Clustering performances of 74%, 79%, and 84% were achieved for texture only, motion only, and combinations of texture and motion features, respectively. The second CAD system presented in this paper supports the diagnosis of focal liver lesions and is able to characterize liver tissue from Computed Tomography (CT) images as normal, hepatic cyst, hemangioma, and hepatocellular carcinoma. Five texture feature sets were extracted for each lesion, while a genetic algorithm based feature selection method was applied to identify the most robust features. The selected feature set was fed into an ensemble of neural network classifiers. The achieved classification performance was 100%, 93.75% and 90.63% in the training, validation and testing set, respectively. It is concluded that computerized analysis of medical images in combination with artificial intelligence can be used in clinical practice and may contribute to more efficient diagnosis.

  14. Computer aided diagnosis based on medical image processing and artificial intelligence methods

    International Nuclear Information System (INIS)

    Stoitsis, John; Valavanis, Ioannis; Mougiakakou, Stavroula G.; Golemati, Spyretta; Nikita, Alexandra; Nikita, Konstantina S.

    2006-01-01

    Advances in imaging technology and computer science have greatly enhanced interpretation of medical images, and contributed to early diagnosis. The typical architecture of a Computer Aided Diagnosis (CAD) system includes image pre-processing, definition of region(s) of interest, features extraction and selection, and classification. In this paper, the principles of CAD systems design and development are demonstrated by means of two examples. The first one focuses on the differentiation between symptomatic and asymptomatic carotid atheromatous plaques. For each plaque, a vector of texture and motion features was estimated, which was then reduced to the most robust ones by means of ANalysis of VAriance (ANOVA). Using fuzzy c-means, the features were then clustered into two classes. Clustering performances of 74%, 79%, and 84% were achieved for texture only, motion only, and combinations of texture and motion features, respectively. The second CAD system presented in this paper supports the diagnosis of focal liver lesions and is able to characterize liver tissue from Computed Tomography (CT) images as normal, hepatic cyst, hemangioma, and hepatocellular carcinoma. Five texture feature sets were extracted for each lesion, while a genetic algorithm based feature selection method was applied to identify the most robust features. The selected feature set was fed into an ensemble of neural network classifiers. The achieved classification performance was 100%, 93.75% and 90.63% in the training, validation and testing set, respectively. It is concluded that computerized analysis of medical images in combination with artificial intelligence can be used in clinical practice and may contribute to more efficient diagnosis

  15. Computer aided diagnosis based on medical image processing and artificial intelligence methods

    Energy Technology Data Exchange (ETDEWEB)

    Stoitsis, John [National Technical University of Athens, School of Electrical and Computer Engineering, Athens 157 71 (Greece)]. E-mail: stoitsis@biosim.ntua.gr; Valavanis, Ioannis [National Technical University of Athens, School of Electrical and Computer Engineering, Athens 157 71 (Greece); Mougiakakou, Stavroula G. [National Technical University of Athens, School of Electrical and Computer Engineering, Athens 157 71 (Greece); Golemati, Spyretta [National Technical University of Athens, School of Electrical and Computer Engineering, Athens 157 71 (Greece); Nikita, Alexandra [University of Athens, Medical School 152 28 Athens (Greece); Nikita, Konstantina S. [National Technical University of Athens, School of Electrical and Computer Engineering, Athens 157 71 (Greece)

    2006-12-20

    Advances in imaging technology and computer science have greatly enhanced interpretation of medical images, and contributed to early diagnosis. The typical architecture of a Computer Aided Diagnosis (CAD) system includes image pre-processing, definition of region(s) of interest, features extraction and selection, and classification. In this paper, the principles of CAD systems design and development are demonstrated by means of two examples. The first one focuses on the differentiation between symptomatic and asymptomatic carotid atheromatous plaques. For each plaque, a vector of texture and motion features was estimated, which was then reduced to the most robust ones by means of ANalysis of VAriance (ANOVA). Using fuzzy c-means, the features were then clustered into two classes. Clustering performances of 74%, 79%, and 84% were achieved for texture only, motion only, and combinations of texture and motion features, respectively. The second CAD system presented in this paper supports the diagnosis of focal liver lesions and is able to characterize liver tissue from Computed Tomography (CT) images as normal, hepatic cyst, hemangioma, and hepatocellular carcinoma. Five texture feature sets were extracted for each lesion, while a genetic algorithm based feature selection method was applied to identify the most robust features. The selected feature set was fed into an ensemble of neural network classifiers. The achieved classification performance was 100%, 93.75% and 90.63% in the training, validation and testing set, respectively. It is concluded that computerized analysis of medical images in combination with artificial intelligence can be used in clinical practice and may contribute to more efficient diagnosis.

  16. Distinguishing of artificial irradiation by α dose: a method of discriminating imitations of ancient pottery

    International Nuclear Information System (INIS)

    Wang Weida; Xia Junding; Zhou Zhixin; Leung, P.L.

    2003-01-01

    If a modern pottery is artificially irradiated with γ-rays from a 60Co source, it will appear ancient when dated by the thermoluminescence technique. A study was made of how to distinguish such artificial irradiation. The 'fine-grain' and 'pre-dose' techniques were used, respectively, to measure the paleodose in a fine-grain sample from the same pottery. If the paleodose measured by the fine-grain technique is greater than that measured by the pre-dose technique, we can affirm that the difference between the two paleodoses is due to α dose; a paleodose containing an α component results from natural radiation, and the pottery is therefore ancient. If the two paleodoses are approximately equal, i.e. no α dose is included in the paleodose, the paleodose comes from artificial γ irradiation and the pottery is an imitation.

  17. Real-time and encryption efficiency improvements of simultaneous fusion, compression and encryption method based on chaotic generators

    Science.gov (United States)

    Jridi, Maher; Alfalou, Ayman

    2018-03-01

    In this paper, enhancement of an existing optical simultaneous fusion, compression and encryption (SFCE) scheme in terms of real-time requirements, bandwidth occupation and encryption robustness is proposed. We have used an approximate form of the DCT to decrease the computational resources. Then, a novel chaos-based encryption algorithm is introduced in order to achieve the confusion and diffusion effects. In the confusion phase, the Henon map is used for row and column permutations, where the initial condition is related to the original image. Furthermore, the Skew Tent map is employed to generate another random matrix in order to carry out pixel scrambling. Finally, an adaptation of a classical diffusion process scheme is employed to strengthen the security of the cryptosystem against statistical, differential, and chosen-plaintext attacks. Analyses of key space, histogram, adjacent pixel correlation, sensitivity, and encryption speed of the encryption scheme are provided, and favorably compared to those of the existing crypto-compression system. The proposed method has been found to be digital/optical implementation-friendly, which facilitates the integration of the crypto-compression system in a very broad range of scenarios.
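
    A minimal sketch of the confusion stage as described: iterate the Henon map from an image-dependent initial condition, then use the ranking of the chaotic sequence to permute rows and columns. The Henon constants are the classic values; the image, key derivation and permutation details are illustrative assumptions:

    ```python
    import numpy as np

    def henon_sequence(n, x0, y0=0.2, a=1.4, b=0.3):
        """Iterate the Henon map and return n successive x-values."""
        xs = np.empty(n)
        x, y = x0, y0
        for i in range(n):
            x, y = 1.0 - a * x * x + y, b * x
            xs[i] = x
        return xs

    img = np.arange(64 * 64, dtype=np.uint8).reshape(64, 64)  # stand-in image

    # Tie the permutation key to the plaintext via an image-dependent seed.
    x0 = (int(img.sum()) % 997) / 997 * 0.1
    rows = np.argsort(henon_sequence(64, x0))         # chaotic row permutation
    cols = np.argsort(henon_sequence(64, x0 + 0.03))  # chaotic column permutation

    scrambled = img[rows][:, cols]                    # confusion step
    # Decryption inverts both permutations.
    restored = scrambled[np.argsort(rows)][:, np.argsort(cols)]
    assert np.array_equal(restored, img)
    ```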

  18. ChIPWig: a random access-enabling lossless and lossy compression method for ChIP-seq data.

    Science.gov (United States)

    Ravanmehr, Vida; Kim, Minji; Wang, Zhiying; Milenkovic, Olgica

    2018-03-15

    Chromatin immunoprecipitation sequencing (ChIP-seq) experiments are inexpensive and time-efficient, and result in massive datasets that introduce significant storage and maintenance challenges. To address the resulting Big Data problems, we propose a lossless and lossy compression framework specifically designed for ChIP-seq Wig data, termed ChIPWig. ChIPWig enables random access and summary-statistics lookups, and is based on the asymptotic theory of optimal point density design for nonuniform quantizers. We tested the ChIPWig compressor on 10 ChIP-seq datasets generated by the ENCODE consortium. On average, lossless ChIPWig reduced the file sizes to merely 6% of the original, and offered a 6-fold compression rate improvement compared to bigWig. The lossy feature further reduced file sizes 2-fold compared to the lossless mode, with little or no effect on peak calling and motif discovery using specialized NarrowPeaks methods. The compression and decompression speeds are of the order of 0.2 sec/MB using general-purpose computers. The source code and binaries are freely available for download at https://github.com/vidarmehr/ChIPWig-v2, implemented in C++. Contact: milenkov@illinois.edu. Supplementary data are available at Bioinformatics online.

  19. Evaluation of dna extraction methods of the Salmonella sp. bacterium in artificially infected chickens eggs

    Directory of Open Access Journals (Sweden)

    Ana Cristina dos Reis Ferreira

    2015-06-01

    Full Text Available ABSTRACT. Ferreira A.C.dosR. & dos Santos B.M. [Evaluation of DNA extraction methods for the Salmonella sp. bacterium in artificially infected chicken eggs.] Avaliação de três métodos de extração de DNA de Salmonella sp. em ovos de galinhas contaminados artificialmente. Revista Brasileira de Medicina Veterinária, 37(2):115-119, 2015. Departamento de Veterinária, Universidade Federal de Viçosa, Campus Universitário, Av. Peter Henry Rolfs, s/n, Viçosa, MG 36571-000, Brasil. E-mail: bmsantos@ufv.br The present study evaluated the efficiency of different protocols for the extraction of genomic DNA of Salmonella bacteria in specific-pathogen-free (SPF) chicken eggs. Seventy-five eggs were used, divided into five groups with fifteen eggs each. Three of the five groups of eggs were inoculated with enteric Salmonella cultures. One of the five groups was inoculated with an Escherichia coli culture. The remaining group of eggs was the negative control, which received sterile 0.85% saline solution. The eggs were incubated at a temperature that varied from 20 to 25°C for 24, 48 and 72 hours. Five yolks from each group were collected every 24 hours. These yolks were homogenized and centrifuged for 10 minutes, and the supernatant was discarded. After the discard, PBS pH 7.2 was added and the material was centrifuged again. The sediment obtained from each group was used for the extraction of bacterial genomic DNA. Silica particles and a commercial kit were used as the extraction methods. The extracted DNA was kept at a temperature of 20°C until evaluation by PCR. The primers used were related to the invA gene: 5' GTA AAA TTA TCG CCA CGT TCG GGC AA 3' and 5' TCA TCG CAC CGT CAA AGG AAC C 3'. The amplification products were visualized on a transilluminator under ultraviolet light. The results obtained from the bacterial DNA extractions demonstrated that the extraction method using silica particles was

  20. An efficient finite differences method for the computation of compressible, subsonic, unsteady flows past airfoils and panels

    Science.gov (United States)

    Colera, Manuel; Pérez-Saborid, Miguel

    2017-09-01

    A finite differences scheme is proposed in this work to compute, in the time domain, the compressible, subsonic, unsteady flow past an aerodynamic airfoil using linearized potential theory. It improves and extends the original method proposed in this journal by Hariharan, Ping and Scott [1] by considering: (i) a non-uniform mesh, (ii) an implicit time integration algorithm, (iii) a vectorized implementation and (iv) the coupled airfoil dynamics and fluid dynamic loads. First, we have formulated the method for cases in which the airfoil motion is given. The scheme has been tested on well-known problems in unsteady aerodynamics, such as the response to a sudden change of the angle of attack and to a harmonic motion of the airfoil, and has been proved to be more accurate and efficient than other finite differences and vortex-lattice methods found in the literature. Secondly, we have coupled our method to the equations governing the airfoil dynamics in order to numerically solve problems where the airfoil motion is unknown a priori, as happens, for example, in the cases of the flutter and the divergence of a typical section of a wing or of a flexible panel. Apparently, this is the first self-consistent and easy-to-implement numerical analysis in the time domain of the compressible, linearized coupled dynamics of the (generally flexible) airfoil-fluid system carried out in the literature. The results for the particular case of a rigid airfoil show excellent agreement with those reported by other authors, whereas those obtained for the case of a cantilevered flexible airfoil in compressible flow seem to be original or, at least, not well known.

  1. Prediction of enthalpy of fusion of pure compounds using an Artificial Neural Network-Group Contribution method

    International Nuclear Information System (INIS)

    Gharagheizi, Farhad; Salehi, Gholam Reza

    2011-01-01

    Highlights: → An Artificial Neural Network-Group Contribution method is presented for prediction of the enthalpy of fusion of pure compounds at their normal melting point. → Validity of the model is confirmed using a large evaluated data set containing 4157 pure compounds. → The average percent error of the model is 2.65% in comparison with the experimental data. - Abstract: In this work, the Artificial Neural Network-Group Contribution (ANN-GC) method is applied to estimate the enthalpy of fusion of pure chemical compounds at their normal melting point. 4157 pure compounds from various chemical families are investigated to propose a comprehensive and predictive model. The obtained results show a squared correlation coefficient (R^2) of 0.999, a root mean square error of 0.82 kJ/mol, and an average absolute deviation lower than 2.65% for the estimated properties relative to existing experimental values.

  2. An image compression method for space multispectral time delay and integration charge coupled device camera

    International Nuclear Information System (INIS)

    Li Jin; Jin Long-Xu; Zhang Ran-Feng

    2013-01-01

    Multispectral time delay and integration charge coupled device (TDICCD) image compression requires a low-complexity encoder because it is usually performed on board, where energy and memory are limited. The Consultative Committee for Space Data Systems (CCSDS) has proposed an image data compression (CCSDS-IDC) algorithm which is so far the most widely implemented in hardware. However, it cannot reduce the spectral redundancy in multispectral images. In this paper, we propose a low-complexity improved CCSDS-IDC (ICCSDS-IDC)-based distributed source coding (DSC) scheme for multispectral TDICCD images consisting of a few bands. Our scheme is based on an ICCSDS-IDC approach that uses a bit plane extractor to parse the differences in the original image and its wavelet-transformed coefficients. The output of the bit plane extractor is encoded by a first-order entropy coder, and a low-density parity-check-based Slepian-Wolf (SW) coder is adopted to implement the DSC strategy. Experimental results on space multispectral TDICCD images show that the proposed scheme significantly outperforms the CCSDS-IDC-based coder in each band
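
    The bit plane extractor named above can be sketched in a few lines: plane k collects bit k of every coefficient, so the significant planes of small-magnitude wavelet coefficients are mostly zero and cheap to entropy-code. The toy coefficient band and the first-order entropy estimate are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Stand-in wavelet band: small-magnitude, Laplacian-like coefficient values.
    coeffs = np.minimum(np.abs(rng.standard_normal((32, 32))) * 20, 255).astype(np.uint8)

    # Bit plane k holds bit k of every coefficient (plane 7 = most significant).
    planes = {k: (coeffs >> k) & 1 for k in range(7, -1, -1)}

    def entropy(bits):
        """First-order entropy in bits/symbol, the rate a first-order entropy coder targets."""
        p = np.bincount(bits.ravel(), minlength=2) / bits.size
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    for k, plane in planes.items():
        print(f"plane {k}: {entropy(plane):.3f} bits/symbol")  # high planes are mostly zero
    ```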

  3. Log-linear model based behavior selection method for artificial fish swarm algorithm.

    Science.gov (United States)

    Huang, Zhehuang; Chen, Yidong

    2015-01-01

    Artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fishes. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fishes has a crucial impact on the performance of AFSA, in terms of global exploration ability and convergence speed, so how to construct and select the behaviors of the fishes is an important task. To solve these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. Firstly, we propose a new behavior selection algorithm based on a log-linear model which enhances the decision-making ability of behavior selection. Secondly, an adaptive movement behavior based on adaptive weights is presented, which adjusts dynamically according to the diversity of the fishes. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve the global optimization capability. Experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm.
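
    The log-linear selection step can be sketched as a softmax over behavior features: each candidate behavior receives a score from weighted state features, and a fish samples a behavior with probability proportional to the exponentiated score. The feature set and weights below are illustrative assumptions, not the paper's model:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    BEHAVIORS = ["prey", "swarm", "follow", "random"]

    def select_behavior(features, weights):
        """Log-linear (softmax) behavior selection: P(b) proportional to exp(w_b . x)."""
        scores = weights @ features
        p = np.exp(scores - scores.max())  # subtract max for numerical stability
        p /= p.sum()
        return rng.choice(len(BEHAVIORS), p=p), p

    # Hypothetical features of a fish's current state: (food density, crowding, diversity).
    x = np.array([0.8, 0.2, 0.5])
    W = rng.standard_normal((len(BEHAVIORS), 3))  # one weight row per behavior

    idx, probs = select_behavior(x, W)
    print(BEHAVIORS[idx], np.round(probs, 3))
    ```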

  4. Log-Linear Model Based Behavior Selection Method for Artificial Fish Swarm Algorithm

    Directory of Open Access Journals (Sweden)

    Zhehuang Huang

    2015-01-01

    The artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fishes. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fishes has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed, so how to construct and select behaviors is an important task. To solve these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. First, we propose a new behavior selection algorithm based on a log-linear model, which enhances the decision-making ability of behavior selection. Second, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of the fishes. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm.

  5. A new method for making artificially weathered stone specimens for testing of conservation treatments

    NARCIS (Netherlands)

    Lubelli, B.A.; Hees, R.P.J. van; Nijland, T.G.; Bolhuis, J.

    2015-01-01

    The application of new consolidating products on the surface of weathered materials is a common intervention technique in conservation practice. Due to the difficulty of producing artificially weathered substrates in a reproducible way, the effect of consolidating products in the laboratory is

  6. A new optimization method using a compressed sensing inspired solver for real-time LDR-brachytherapy treatment planning

    Science.gov (United States)

    Guthier, C.; Aschenbrenner, K. P.; Buergy, D.; Ehmann, M.; Wenz, F.; Hesser, J. W.

    2015-03-01

    This work discusses a novel strategy for inverse planning in low dose rate brachytherapy. It applies the idea of compressed sensing to the problem of inverse treatment planning and a new solver for this formulation is developed. An inverse planning algorithm was developed incorporating brachytherapy dose calculation methods as recommended by AAPM TG-43. For optimization of the functional a new variant of a matching pursuit type solver is presented. The results are compared with current state-of-the-art inverse treatment planning algorithms by means of real prostate cancer patient data. The novel strategy outperforms the best state-of-the-art methods in speed, while achieving comparable quality. It is able to find solutions with comparable values for the objective function and it achieves these results within a few microseconds, being up to 542 times faster than competing state-of-the-art strategies, allowing real-time treatment planning. The sparse solution of inverse brachytherapy planning achieved with methods from compressed sensing is a new paradigm for optimization in medical physics. Through the sparsity of required needles and seeds identified by this method, the cost of intervention may be reduced.
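
    The greedy core of a matching pursuit solver of the kind described above can be sketched in a few lines: repeatedly pick the dose-matrix column (candidate seed position) most correlated with the residual dose. The random matrix below is only a stand-in for a TG-43 dose kernel, and the sizes are arbitrary:

```python
# Sketch of a matching pursuit solver: greedily pick the dose-matrix column
# (candidate dwell/seed position) most correlated with the residual. The
# random dose matrix is a stand-in for a TG-43 dose kernel.
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((60, 200))          # dose contribution of 200 candidate seeds
A /= np.linalg.norm(A, axis=0)     # unit-norm columns
d = rng.random(60)                 # prescribed dose at 60 evaluation points

x = np.zeros(200)                  # sparse seed weights
r = d.copy()                       # residual dose
for _ in range(10):                # a few seeds -> sparse, fast plan
    k = np.argmax(np.abs(A.T @ r))
    c = A[:, k] @ r
    x[k] += c
    r -= c * A[:, k]

print("selected seeds:", np.flatnonzero(x), "residual:", np.linalg.norm(r))
```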

  7. A new optimization method using a compressed sensing inspired solver for real-time LDR-brachytherapy treatment planning

    International Nuclear Information System (INIS)

    Guthier, C; Aschenbrenner, K P; Buergy, D; Ehmann, M; Wenz, F; Hesser, J W

    2015-01-01

    This work discusses a novel strategy for inverse planning in low dose rate brachytherapy. It applies the idea of compressed sensing to the problem of inverse treatment planning and a new solver for this formulation is developed. An inverse planning algorithm was developed incorporating brachytherapy dose calculation methods as recommended by AAPM TG-43. For optimization of the functional a new variant of a matching pursuit type solver is presented. The results are compared with current state-of-the-art inverse treatment planning algorithms by means of real prostate cancer patient data. The novel strategy outperforms the best state-of-the-art methods in speed, while achieving comparable quality. It is able to find solutions with comparable values for the objective function and it achieves these results within a few microseconds, being up to 542 times faster than competing state-of-the-art strategies, allowing real-time treatment planning. The sparse solution of inverse brachytherapy planning achieved with methods from compressed sensing is a new paradigm for optimization in medical physics. Through the sparsity of required needles and seeds identified by this method, the cost of intervention may be reduced. (paper)

  8. Studies on improvement of diagnostic ability of computed tomography (CT) in the parenchymatous organs in the upper abdomen, 1. Study on the upper abdominal compression method

    Energy Technology Data Exchange (ETDEWEB)

    Kawata, Ryo [Gifu Univ. (Japan). Faculty of Medicine

    1982-07-01

    1) The upper abdominal compression method was easily applicable for CT examination in practically all the patients. It caused no harm and considerably improved CT diagnosis. 2) The materials used for compression were foamed polystyrene, the Mix-Dp, and a water bag. When CT examination was performed to diagnose such lesions as a circumscribed tumor, compression with the Mix-Dp was most useful, and when it was performed for screening examination of upper abdominal diseases, compression with a water bag was most effective. 3) Improvement in the contour-depicting ability of CT by the compression method was most marked at the body of the pancreas, followed by the head of the pancreas and the posterior surface of the left lobe of the liver. Slight improvement was seen also at the tail of the pancreas and the left adrenal gland. 4) Improvement in the organ-depicting ability of CT by the compression method was estimated by a 4-category classification method. It was found that the improvement was most marked at the body and the head of the pancreas. Considerable improvement was observed also at the left lobe of the liver and both adrenal glands. Little improvement was obtained at the spleen. When contrast enhancement was combined with the compression method, the improvement at organs liable to be enhanced, such as the liver and the adrenal glands, was promoted, while the organ-depicting ability was decreased at the pancreas. 5) By comparing the CT image under compression with that without compression, continuous infiltration of gastric cancer into the body and the tail of the pancreas in 2 cases and retroperitoneal infiltration of a pancreatic tumor in 1 case were diagnosed preoperatively.

  9. Comparison of three artificial digestion methods for detection of non-encapsulated Trichinella pseudospiralis larvae in pork.

    Science.gov (United States)

    Nöckler, K; Reckinger, S; Szabó, I; Maddox-Hyttel, C; Pozio, E; van der Giessen, J; Vallée, I; Boireau, P

    2009-02-23

    In a ring trial involving five laboratories (A, B, C, D, and E), three different methods of artificial digestion were compared for the detection of non-encapsulated Trichinella pseudospiralis larvae in minced meat. Each sample panel consisted of ten 1 g minced pork samples. All samples in each panel were derived from a bulk meat preparation with a nominal value of either 7 or 17 larvae per g (lpg). Samples were tested for the number of muscle larvae using the magnetic stirrer method (labs A, B, and E), the stomacher method (lab B), and Trichomatic 35 (labs C and D). T. pseudospiralis larvae were found in all 120 samples tested. For samples with 7 lpg, larval recoveries were significantly higher using the stomacher method versus the magnetic stirrer method, but there were no significant differences for samples with 17 lpg. In comparing laboratory results irrespective of the method used, lab B detected a significantly higher number of larvae than lab E for samples with 7 lpg, and lab E detected significantly fewer larvae than labs A, B, and D in samples with 17 lpg. The lowest overall variation in quantitative results (i.e. larval recoveries which were outside the tolerance range) was achieved by using the magnetic stirrer method (22%), followed by the stomacher method (25%) and Trichomatic 35 (30%). The results revealed that T. pseudospiralis larvae in samples with a nominal value of 7 and 17 lpg can be detected by all three methods of artificial digestion.

  10. Notion Of Artificial Labs Slow Global Warming And Advancing Engine Studies Perspectives On A Computational Experiment On Dual-Fuel Compression-Ignition Engine Research

    Directory of Open Access Journals (Sweden)

    Tonye K. Jack

    2017-06-01

    To appreciate clean energy applications of the dual-fuel internal combustion engine (D-FICE) with pilot Diesel fuel, and to aid public policy formulation in terms of present and future benefits to modern transportation, stationary power, and the promotion of oil and gas green drilling, the brief to an engine research team was to investigate the feasible advantages of dual-fuel compression-ignition engines, guided by the following concerns: (i) sustainable fuel and engine power delivery; (ii) the requirements for fuel flexibility; (iii) low exhaust emissions and environmental pollution; (iv) achieving low specific fuel consumption and economy for maximum power; (v) the comparative advantages over conventional Diesel engines; (vi) thermo-economic modeling and analysis for the optimal blend as a basis for a benefit/cost evaluation. The work was planned in two stages for reduced cost and fast turnaround of results: an initial preliminary stage with basic simple models, and an advanced stage with more detailed complex modeling. The paper describes a simplified MATLAB-based computational-experiment predictive model for the thermodynamic combustion and engine performance analysis of dual-fuel compression-ignition engines operating on the theoretical limited-pressure cycle with several alternative fuel blends. Environmental implications for extreme temperature moderation are considered by finite-time thermodynamic modeling for maximum power, with predictions for pollutant formation and control by reaction-rate kinetics analysis of systematically reduced plausible coupled chemistry models through the NCN reaction pathway for the gas-phase reaction classes of interest. Controllable variables for engine-out pollutant emission reduction, and in particular NOx elimination, are identified. Verification and validation (V&V) through performance comparisons were made using a clinical approach in the selection of stroke/bore ratios ≥ 1, low-to-high engine speeds and medium
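
    The theoretical limited-pressure (dual) cycle mentioned in this record has a closed-form air-standard efficiency, which a short sketch can evaluate; the compression ratio, pressure ratio, and cutoff ratio below are illustrative numbers, not the paper's engine data:

```python
# Air-standard efficiency of the limited-pressure (dual) cycle: heat added
# partly at constant volume, partly at constant pressure. All inputs are
# illustrative placeholders.
gamma = 1.35   # ratio of specific heats of the working gas
rc = 16.0      # compression ratio V1/V2
rp = 1.6       # constant-volume pressure ratio p3/p2
beta = 1.8     # cutoff ratio V4/V3 (constant-pressure heat addition)

eta = 1.0 - (1.0 / rc**(gamma - 1.0)) * (
    (rp * beta**gamma - 1.0) / ((rp - 1.0) + gamma * rp * (beta - 1.0))
)
print(f"dual-cycle thermal efficiency: {eta:.3f}")
```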

  11. Numerical simulation of the interaction between a nonlinear elastic structure and compressible flow by the discontinuous Galerkin method

    Czech Academy of Sciences Publication Activity Database

    Kosík, Adam; Feistauer, M.; Hadrava, Martin; Horáček, Jaromír

    2015-01-01

    Vol. 267, September (2015), pp. 382-396 ISSN 0096-3003 R&D Projects: GA ČR(CZ) GAP101/11/0207 Institutional support: RVO:61388998 Keywords: discontinuous Galerkin method * nonlinear elasticity * compressible viscous flow * fluid-structure interaction Subject RIV: BI - Acoustics Impact factor: 1.345, year: 2015 http://www.sciencedirect.com/science/article/pii/S0096300315002453/pdfft?md5=02d46bc730e3a7fb8a5008aaab1da786&pid=1-s2.0-S0096300315002453-main.pdf

  12. Artificial life and Piaget.

    Science.gov (United States)

    Mueller, Ulrich; Grobman, K H.

    2003-04-01

    Artificial life provides important theoretical and methodological tools for the investigation of Piaget's developmental theory. This new method uses artificial neural networks to simulate living phenomena in a computer. A recent study by Parisi and Schlesinger suggests that artificial life might reinvigorate the Piagetian framework. We contrast artificial life with traditional cognitivist approaches, discuss the role of innateness in development, and examine the relation between physiological and psychological explanations of intelligent behaviour.

  13. Calculation of the energy provided by a PV generator. Comparative study: Conventional methods vs. artificial neural networks

    International Nuclear Information System (INIS)

    Almonacid, F.; Rus, C.; Perez-Higueras, P.; Hontoria, L.

    2011-01-01

    The use of photovoltaics for electricity generation purposes has recorded one of the largest increases in the field of renewable energies. The energy production of a grid-connected PV system depends on various factors. In a wide sense, it is considered that the annual energy provided by a generator is directly proportional to the annual radiation incident on the plane of the generator and to the installed nominal power. However, a range of factors reduces the expected energy generation. The aim of this study is to compare the results of four different methods for estimating the annual energy produced by a PV generator: three of them are classical methods and the fourth is based on an artificial neural network developed by the R and D Group for Solar and Automatic Energy at the University of Jaen. The results obtained show that the method based on an artificial neural network provides better results than the alternative classical methods in the study, mainly because it also takes into account some second-order effects, such as low irradiance, angular, and spectral effects. -- Research highlights: → It is considered that the annual energy provided by a PV generator is directly proportional to the annual radiation incident on the plane of the generator and to the installed nominal power. → A range of factors reduces the expected energy generation (mismatch losses, dirt and dust, Ohmic losses, etc.). → The aim of this study is to compare the results of four different methods for estimating the annual energy produced by a PV generator: three of them are classical methods and the fourth is based on an artificial neural network. → The results obtained show that the method based on an artificial neural network provides better results than the alternative classical methods in the study. While classical methods have only taken into account temperature losses, the method based in
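
    The simplest classical estimate referred to above, annual energy proportional to in-plane irradiation and nominal power with a lumped derating factor, can be written in a few lines. The numbers are illustrative assumptions:

```python
# Classical first-order yield estimate: E = P_nom * H_annual * PR.
# All three inputs below are illustrative, not measured values.
P_nom_kw = 5.0      # installed nominal power (kWp)
H_annual = 1800.0   # annual irradiation in the generator plane (kWh/m^2)
PR = 0.78           # performance ratio lumping mismatch, dirt, ohmic losses

E_annual_kwh = P_nom_kw * H_annual * PR
print(f"estimated annual yield: {E_annual_kwh:.0f} kWh")
```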

  14. Investigation of Surface Pre-Treatment Methods for Wafer-Level Cu-Cu Thermo-Compression Bonding

    Directory of Open Access Journals (Sweden)

    Koki Tanaka

    2016-12-01

    To increase the yield of the wafer-level Cu-Cu thermo-compression bonding method, surface pre-treatment methods for Cu are studied that allow exposure to the atmosphere before bonding. To inhibit re-oxidation under atmospheric conditions, the reduced pure Cu surface is treated by H2/Ar plasma, NH3 plasma, and thiol solution, respectively, and is accordingly covered by Cu hydride, Cu nitride, or a self-assembled monolayer (SAM). A pair of the treated wafers is then bonded by the thermo-compression bonding method and evaluated by the tensile test. Results show that the bond strengths of the wafers treated by NH3 plasma and SAM are not sufficient due to the remaining surface protection layers, such as Cu nitride and SAMs, resulting from the pre-treatment. In contrast, the H2/Ar plasma-treated wafer showed the same strength as the one with formic acid vapor treatment, even when exposed to the atmosphere for 30 min. In the thermal desorption spectroscopy (TDS) measurement of the H2/Ar plasma-treated Cu sample, the total amount of detected H2 was 3.1 times that of the citric acid-treated one. The TDS results indicate that the modified Cu surface is terminated by chemisorbed hydrogen atoms, which leads to high bonding strength.

  15. An artificial neural network ensemble method for fault diagnosis of proton exchange membrane fuel cell system

    International Nuclear Information System (INIS)

    Shao, Meng; Zhu, Xin-Jian; Cao, Hong-Fei; Shen, Hai-Feng

    2014-01-01

    The commercial viability of PEMFC (proton exchange membrane fuel cell) systems depends on using effective fault diagnosis technologies. However, many researchers have experimentally studied PEMFC systems without considering certain fault conditions. In this paper, an ANN (artificial neural network) ensemble method is presented that improves the stability and reliability of PEMFC systems. In the first part, a transient model is built, giving the method flexibility in application to some exceptional conditions. The PEMFC dynamic model is built and simulated using MATLAB. In the second part, using this model and experiments, the mechanisms of four different faults in PEMFC systems are analyzed in detail. In the third part, the ANN ensemble for fault diagnosis is built and modeled. This model is trained and tested with the data. The test results show that, compared with previous methods for fault diagnosis of PEMFC systems, the proposed method has a higher diagnostic rate and better generalization ability. Moreover, the partial structure of this method can be altered easily along with changes in the PEMFC system. In general, this method for the diagnosis of PEMFC has value for certain applications. - Highlights: • We analyze the principles and mechanisms of four faults in the PEMFC (proton exchange membrane fuel cell) system. • We design and model an ANN (artificial neural network) ensemble method for the fault diagnosis of the PEMFC system. • This method has a high diagnostic rate and strong generalization ability
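
    The ensemble idea can be sketched briefly: several independently trained networks vote on the fault class, which tends to be more stable than any single network. The four fault labels and the synthetic feature data below are placeholders for real PEMFC measurements:

```python
# Sketch of an ANN ensemble: independently seeded networks majority-vote on
# the fault class. Features and labels are synthetic stand-ins for PEMFC data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))                 # stack voltage, pressures, ...
y = rng.integers(0, 4, size=400)              # 4 fault classes (synthetic)

nets = [MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=s)
        .fit(X, y) for s in range(5)]

votes = np.array([net.predict(X[:3]) for net in nets])     # shape (5, 3)
# Majority vote across the ensemble, sample by sample
diagnosis = [np.bincount(votes[:, i]).argmax() for i in range(votes.shape[1])]
print(diagnosis)
```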

  16. Comparison of standard resampling methods for performance estimation of artificial neural network ensembles

    OpenAIRE

    Green, Michael; Ohlsson, Mattias

    2007-01-01

    Estimation of the generalization performance for classification within the medical applications domain is always an important task. In this study we focus on artificial neural network ensembles as the machine learning technique. We present a numerical comparison between five common resampling techniques: k-fold cross validation (CV), holdout using three cutoffs, and bootstrap, using five different data sets. The results show that CV together with holdout 0.25 and 0.50 are the best resampl...
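
    Since the estimators being compared are all standard, a compact sketch can set k-fold CV, holdout, and the bootstrap side by side on one synthetic data set; the classifier, data, and two holdout cutoffs here are illustrative, not the study's medical data sets:

```python
# Sketch of the compared estimators of generalization accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.utils import resample

X, y = make_classification(n_samples=300, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)

# k-fold cross validation
cv_acc = cross_val_score(clf, X, y, cv=5).mean()

# holdout with cutoffs 0.25 and 0.50
hold = {}
for frac in (0.25, 0.50):
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=frac, random_state=0)
    hold[frac] = clf.fit(Xtr, ytr).score(Xte, yte)

# bootstrap: train on a resample, test on the out-of-bag points
idx = resample(np.arange(len(y)), random_state=0)
oob = np.setdiff1d(np.arange(len(y)), idx)
boot_acc = clf.fit(X[idx], y[idx]).score(X[oob], y[oob])

print(cv_acc, hold, boot_acc)
```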

  17. Orbit Determination from Tracking Data of Artificial Satellite Using the Method of Differential Correction

    OpenAIRE

    Byoung-Sun Lee; Jung-Hyun Jo; Sang-Young Park; Kyu-Hong Choi; Chun-Hwey Kim

    1988-01-01

    The differential correction process of determining osculating orbital elements, as correct as possible at a given instant of time, from tracking data of an artificial satellite was accomplished. Preliminary orbital elements were used as the initial value of the differential correction procedure and iterated until the residual of the real observation (O) and the computed observation (C) was minimized. The tracked satellite was NOAA-9 of the TIROS-N series. Two types of tracking data were prediction data precomputed fro...
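
    Differential correction is essentially iterated linearized least squares: compute observations from the current elements, form the O-C residuals, solve for a correction from the partial-derivative matrix, and repeat until the correction is negligible. A minimal sketch with a two-parameter toy model standing in for the six osculating elements:

```python
# Sketch of differential correction (Gauss-Newton): linearize observations
# about the current estimate and solve least squares for the correction,
# iterating until the O-C residual stops shrinking.
import numpy as np

def computed_obs(p, t):
    # toy "orbit": observation = p[0]*sin(t) + p[1]*cos(t)
    return p[0] * np.sin(t) + p[1] * np.cos(t)

t = np.linspace(0.0, 6.0, 30)
truth = np.array([1.3, -0.7])
O = computed_obs(truth, t) + np.random.default_rng(0).normal(0, 0.01, t.size)

p = np.array([1.0, 0.0])                    # preliminary elements
for _ in range(10):
    C = computed_obs(p, t)
    J = np.column_stack([np.sin(t), np.cos(t)])   # partials d(obs)/d(elements)
    dp, *_ = np.linalg.lstsq(J, O - C, rcond=None)
    p += dp
    if np.linalg.norm(dp) < 1e-10:
        break

print("corrected elements:", p)
```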

  18. Event classification and optimization methods using artificial intelligence and other relevant techniques: Sharing the experiences

    Science.gov (United States)

    Mohamed, Abdul Aziz; Hasan, Abu Bakar; Ghazali, Abu Bakar Mhd.

    2017-01-01

    Classification of large data into respective classes or groups can be carried out with the help of artificial intelligence (AI) tools readily available in the market. To get the optimum or best results, an optimization tool can be applied to those data. Classification and optimization have been used by researchers throughout their works, and the outcomes were very encouraging indeed. Here, the authors share what they have experienced in three different areas of applied research.

  19. Leak Detection Modeling and Simulation for Oil Pipeline with Artificial Intelligence Method

    OpenAIRE

    Sukarno, Pudjo; Sidarto, Kuntjoro Adji; Trisnobudi, Amoranto; Setyoadi, Delint Ira; Rohani, Nancy; Darmadi, Darmadi

    2007-01-01

    Leak detection is always an interesting research topic, where leak location and leak rate are two pipeline leaking parameters that should be determined accurately to overcome pipe leaking problems. In this research those two parameters are investigated by developing a transmission pipeline model and a leak detection model using an Artificial Neural Network. The mathematical approach needs actual leak data to train the leak detection model; however, such data could not be obtained ...

  20. METHODS OF TEXT INFORMATION CLASSIFICATION ON THE BASIS OF ARTIFICIAL NEURAL AND SEMANTIC NETWORKS

    Directory of Open Access Journals (Sweden)

    L. V. Serebryanaya

    2016-01-01

    The article covers the use of the perceptron, the Hopfield artificial neural network, and a semantic network for the classification of text information. Network training algorithms are studied. An error back-propagation algorithm for the perceptron network and a convergence algorithm for the Hopfield network are implemented. On the basis of the offered models and algorithms, automatic text classification software is developed and its operation results are evaluated.

  1. A new method to estimate parameters of linear compartmental models using artificial neural networks

    International Nuclear Information System (INIS)

    Gambhir, Sanjiv S.; Keppenne, Christian L.; Phelps, Michael E.; Banerjee, Pranab K.

    1998-01-01

    At present, the preferred tool for parameter estimation in compartmental analysis is an iterative procedure: weighted nonlinear regression. For a large number of applications, observed data can be fitted to sums of exponentials whose parameters are directly related to the rate constants/coefficients of the compartmental models. Since weighted nonlinear regression often has to be repeated for many different data sets, the process of fitting data from compartmental systems can be very time consuming. Furthermore, the minimization routine often converges to a local (as opposed to global) minimum. In this paper, we examine the possibility of using artificial neural networks instead of weighted nonlinear regression in order to estimate model parameters. We train simple feed-forward neural networks to produce as outputs the parameter values of a given model when kinetic data are fed to the networks' input layer. The artificial neural networks produce unbiased estimates and are orders of magnitude faster than regression algorithms. At noise levels typical of many real applications, the neural networks are found to produce lower variance estimates than weighted nonlinear regression in the estimation of parameters from mono- and biexponential models. These results are primarily due to the inability of weighted nonlinear regression to converge. These results establish that artificial neural networks are powerful tools for estimating parameters for simple compartmental models. (author)
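
    The approach can be sketched end to end: simulate kinetic curves with known exponential parameters, train a feed-forward network to invert curve to parameters, then read parameters off new curves with a single forward pass instead of iterative regression. The curve model, sizes, and parameter ranges below are illustrative assumptions:

```python
# Sketch: train a feed-forward net to map sampled biexponential curves to
# their parameters (A1, k1, A2, k2), bypassing iterative regression.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.linspace(0.1, 10.0, 32)

n = 2000
params = rng.uniform([0.5, 0.1, 0.5, 1.0], [2.0, 0.5, 2.0, 3.0], size=(n, 4))
# y(t) = A1*exp(-k1*t) + A2*exp(-k2*t), with k2 > k1 by the ranges above
curves = (params[:, [0]] * np.exp(-params[:, [1]] * t)
          + params[:, [2]] * np.exp(-params[:, [3]] * t))
curves += rng.normal(0, 0.01, curves.shape)          # measurement noise

net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=3000, random_state=0)
net.fit(curves, params)

print(net.predict(curves[:1]), "vs truth", params[0])
```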

  2. Semen parameters can be predicted from environmental factors and lifestyle using artificial intelligence methods.

    Science.gov (United States)

    Girela, Jose L; Gil, David; Johnsson, Magnus; Gomez-Torres, María José; De Juan, Joaquín

    2013-04-01

    Fertility rates have dramatically decreased in the last two decades, especially in men. It has been described that environmental factors as well as life habits may affect semen quality. In this paper we use artificial intelligence techniques in order to predict semen characteristics resulting from environmental factors, life habits, and health status, with these techniques constituting a possible decision support system that can help in the study of male fertility potential. A total of 123 young, healthy volunteers provided a semen sample that was analyzed according to the World Health Organization 2010 criteria. They also were asked to complete a validated questionnaire about life habits and health status. Sperm concentration and percentage of motile sperm were related to sociodemographic data, environmental factors, health status, and life habits in order to determine the predictive accuracy of a multilayer perceptron network, a type of artificial neural network. In conclusion, we have developed an artificial neural network that can predict the results of the semen analysis based on the data collected by the questionnaire. The semen parameter that is best predicted using this methodology is the sperm concentration. Although the accuracy for motility is slightly lower than that for concentration, it is possible to predict it with a significant degree of accuracy. This methodology can be a useful tool in early diagnosis of patients with seminal disorders or in the selection of candidates to become semen donors.

  3. Numerical and theoretical aspects of the modelling of compressible two-phase flow by interface capture methods

    International Nuclear Information System (INIS)

    Kokh, S.

    2001-01-01

    This research thesis reports the development of a numerical direct simulation of compressible two-phase flows using interface capturing methods. These techniques are based on the use of a fixed Eulerian grid to describe the flow variables as well as the interface between fluids. The author first recalls conventional interface capturing methods and makes the distinction between those based on discontinuous colour functions and those based on level set functions. The approach is then extended to a five-equation model to allow the widest possible choice of equations of state for the fluids. Three variants are developed. A solver inspired by the Roe scheme is developed for one of them. These interface capturing methods are then refined, more particularly for problems of numerical diffusion at the interface. The last part addresses the study of dynamic phase change. Non-conventional thermodynamics tools are used to study the structures of an interface which undergoes phase transition [fr]

  4. A study on the advanced methods for on-line signal processing by using artificial intelligence in nuclear power plants

    International Nuclear Information System (INIS)

    Kim, Wan Joo

    1993-02-01

    signals in a certain time interval for reducing the loads of the fusion part. The simulation results of a LOCA in the simulator are demonstrated for the classification of the signal trend. The demonstration is performed for the transient states of a steam generator. Using the fuzzy memberships, the pre-processors classify the trend types in each time interval into three classes: increase, decrease, and steady, which are fuzzy to classify. The result, compared with an artificial neural network which has no pre-processor, shows that the training time is reduced and the outputs are seldom influenced by noise. Because most knowledge of human operators includes fuzzy concepts and words, a method like this is very helpful for computerizing the human expert's knowledge

  5. Application of a hybrid method combining grey model and back propagation artificial neural networks to forecast hepatitis B in china.

    Science.gov (United States)

    Gan, Ruijing; Chen, Xiaojun; Yan, Yu; Huang, Daizheng

    2015-01-01

    Accurate incidence forecasting of infectious disease provides potentially valuable insights in its own right. It is critical for early prevention and may contribute to health services management and syndrome surveillance. This study investigates the use of a hybrid algorithm combining the grey model (GM) and back propagation artificial neural networks (BP-ANN) to forecast hepatitis B in China based on the yearly numbers of hepatitis B cases and to evaluate the method's feasibility. The results showed that the proposed method has advantages over GM(1,1) and GM(2,1) in all the evaluation indexes.
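
    The grey-model half of the hybrid, GM(1,1), is compact enough to sketch in full: accumulate the series, fit the whitened first-order equation by least squares, forecast, and de-accumulate. The yearly counts below are made-up numbers, not the China data:

```python
# Sketch of GM(1,1): fit the whitened equation on the accumulated series,
# then forecast and de-accumulate. Yearly counts are illustrative.
import numpy as np

x0 = np.array([102.0, 98.0, 110.0, 121.0, 119.0, 130.0])  # yearly counts
x1 = np.cumsum(x0)                                         # 1-AGO series

# Background values and least-squares estimate of a (develop) and b (grey input)
z1 = 0.5 * (x1[1:] + x1[:-1])
B = np.column_stack([-z1, np.ones_like(z1)])
(a, b), *_ = np.linalg.lstsq(B, x0[1:], rcond=None)

def forecast(k):  # k = 0 is the first observation
    if k == 0:
        return x0[0]
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x1_prev = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
    return x1_hat - x1_prev                                 # de-accumulate

print([round(forecast(k), 1) for k in range(len(x0) + 2)])  # fit + 2-step ahead
```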

  6. Experiments with automata compression

    NARCIS (Netherlands)

    Daciuk, J.; Yu, S; Daley, M; Eramian, M G

    2001-01-01

    Several compression methods of finite-state automata are presented and evaluated. Most compression methods used here are already described in the literature. However, their impact on the size of automata has not been described yet. We fill that gap, presenting results of experiments carried out on

  7. Solid hydrogen and deuterium. II. Pressure and compressibility calculated by a lowest-order constrained-variation method

    International Nuclear Information System (INIS)

    Pettersen, G.; Ostgaard, E.

    1988-01-01

    The pressure and the compressibility of solid H₂ and D₂ are obtained from ground-state energies calculated by means of a modified variational lowest-order constrained-variation (LOCV) method. Both fcc and hcp structures are considered, but results are given for the fcc structure only. The pressure and the compressibility are calculated or estimated from the dependence of the ground-state energy on density or molar volume, generally in a density region of 0.65σ⁻³ to 1.3σ⁻³, corresponding to a molar volume of 12-24 cm³/mole, where σ = 2.958 Å, and the calculations are done for five different two-body potentials. Theoretical results for the pressure are 340-460 atm for solid H₂ at a particle density of 0.82σ⁻³ or a molar volume of 19 cm³/mole, and 370-490 atm for solid D₂ at a particle density of 0.92σ⁻³ or a molar volume of 17 cm³/mole. The corresponding experimental results are 650 and 700 atm, respectively. Theoretical results for the compressibility are 210 × 10⁻⁶ to 260 × 10⁻⁶ atm⁻¹ for solid H₂ at a particle density of 0.82σ⁻³ or a molar volume of 19 cm³/mole, and 150 × 10⁻⁶ to 180 × 10⁻⁶ atm⁻¹ for solid D₂ at a particle density of 0.92σ⁻³ or a molar volume of 17 cm³/mole. The corresponding experimental results are 180 × 10⁻⁶ and 140 × 10⁻⁶ atm⁻¹, respectively. The agreement with experimental results is better for higher densities

  8. An example of the use of the DELPHI method: future prospects of artificial heart techniques in France

    International Nuclear Information System (INIS)

    Derian, Jean-Claude; Morize, Francoise; Vernejoul, Pierre de; Vial, Renee

    1971-01-01

    The artificial heart is still only a research project surrounded by numerous uncertainties, which make it very difficult to estimate, at the moment, the possibilities for future development of this technique in France. A systematic analysis of the hazards which characterize this project has been undertaken in this report: narrowing these uncertainties required taking into account the opinions of specialists concerned with this type of research or its outcome. We have achieved this by adapting an investigation technique which is still unusual in France, the DELPHI method. This adaptation has allowed the confrontation and statistical aggregation of the opinions given by a body of a hundred experts, who were consulted through a program of sequential interrogations covering, in particular, the probable date of the research outcome, the clinical cases which would require the use of an artificial heart, as well as the probable future needs. After having taken into account the economic constraints, we can deduce from these results the probable amount of plutonium-238 needed, in the hypothesis where an isotopic generator would be retained for the energy supply of the artificial heart [fr]

  9. Long-time stability effects of quadrature and artificial viscosity on nodal discontinuous Galerkin methods for gas dynamics

    Science.gov (United States)

    Durant, Bradford; Hackl, Jason; Balachandar, Sivaramakrishnan

    2017-11-01

    Nodal discontinuous Galerkin schemes present an attractive approach to robust high-order solution of the equations of fluid mechanics, but remain accompanied by subtle challenges in their consistent stabilization. The effect of quadrature choices (full mass matrix vs spectral elements), over-integration to manage aliasing errors, and explicit artificial viscosity on the numerical solution of a steady homentropic vortex are assessed over a wide range of resolutions and polynomial orders using quadrilateral elements. In both stagnant and advected vortices in periodic and non-periodic domains the need arises for explicit stabilization beyond the numerical surface fluxes of discontinuous Galerkin spectral elements. Artificial viscosity via the entropy viscosity method is assessed as a stabilizing mechanism. It is shown that the regularity of the artificial viscosity field is essential to its use for long-time stabilization of small-scale features in nodal discontinuous Galerkin solutions of the Euler equations of gas dynamics. Supported by the Department of Energy Predictive Science Academic Alliance Program Contract DE-NA0002378.

  10. Radiological Image Compression

    Science.gov (United States)

    Lo, Shih-Chung Benedict

    The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of from 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested, and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, which include CT head and body, and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square-error (NMSE) on the difference image, defined as the difference between the original and the image reconstructed at a given compression ratio, is used as a global measurement of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
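
    The NMSE figure of merit used throughout this dissertation abstract is one line of arithmetic; a sketch, with random arrays standing in for the original and reconstructed images:

```python
# Normalized mean-squared error between an original image and its
# reconstruction after lossy compression. The random "images" are placeholders.
import numpy as np

rng = np.random.default_rng(0)
original = rng.random((512, 512))
reconstructed = original + rng.normal(0, 0.01, original.shape)

nmse = np.sum((original - reconstructed) ** 2) / np.sum(original ** 2)
print(f"NMSE = {nmse:.2e}")
```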

  11. Comparison of Methods to Predict Lower Bound Buckling Loads of Cylinders Under Axial Compression

    Science.gov (United States)

    Haynie, Waddy T.; Hilburger, Mark W.

    2010-01-01

    Results from a numerical study of the buckling response of two different orthogrid stiffened circular cylindrical shells with initial imperfections and subjected to axial compression are used to compare three different lower bound buckling load prediction techniques. These lower bound prediction techniques assume different imperfection types and include an imperfection based on a mode shape from an eigenvalue analysis, an imperfection caused by a lateral perturbation load, and an imperfection in the shape of a single stress-free dimple. The STAGS finite element code is used for the analyses. Responses of the cylinders for ranges of imperfection amplitudes are considered, and the effect of each imperfection is compared to the response of a geometrically perfect cylinder. Similar behavior was observed for shells that include a lateral perturbation load and a single dimple imperfection, and the results indicate that the predicted lower bounds are much less conservative than the corresponding results for the cylinders with the mode shape imperfection considered herein. In addition, the lateral perturbation technique and the single dimple imperfection produce response characteristics that are physically meaningful and can be validated via testing.

  12. Entropy stable high order discontinuous Galerkin methods for ideal compressible MHD on structured meshes

    Science.gov (United States)

    Liu, Yong; Shu, Chi-Wang; Zhang, Mengping

    2018-02-01

    We present a discontinuous Galerkin (DG) scheme with suitable quadrature rules [15] for ideal compressible magnetohydrodynamic (MHD) equations on structural meshes. The semi-discrete scheme is analyzed to be entropy stable by using the symmetrizable version of the equations as introduced by Godunov [32], the entropy stable DG framework with suitable quadrature rules [15], the entropy conservative flux in [14] inside each cell and the entropy dissipative approximate Godunov type numerical flux at cell interfaces to make the scheme entropy stable. The main difficulty in the generalization of the results in [15] is the appearance of the non-conservative "source terms" added in the modified MHD model introduced by Godunov [32], which do not exist in the general hyperbolic system studied in [15]. Special care must be taken to discretize these "source terms" adequately so that the resulting DG scheme satisfies entropy stability. Total variation diminishing / bounded (TVD/TVB) limiters and bound-preserving limiters are applied to control spurious oscillations. We demonstrate the accuracy and robustness of this new scheme on standard MHD examples.

  13. Simulation of moving boundaries interacting with compressible reacting flows using a second-order adaptive Cartesian cut-cell method

    Science.gov (United States)

    Muralidharan, Balaji; Menon, Suresh

    2018-03-01

    A high-order adaptive Cartesian cut-cell method, developed in the past by the authors [1] for simulation of compressible viscous flow over static embedded boundaries, is now extended for reacting flow simulations over moving interfaces. The main difficulty related to simulation of moving boundary problems using immersed boundary techniques is the loss of conservation of mass, momentum and energy during the transition of numerical grid cells from solid to fluid and vice versa. Gas phase reactions near solid boundaries can produce huge source terms to the governing equations, which if not properly treated for moving boundaries, can result in inaccuracies in numerical predictions. The small cell clustering algorithm proposed in our previous work is now extended to handle moving boundaries enforcing strict conservation. In addition, the cell clustering algorithm also preserves the smoothness of solution near moving surfaces. A second order Runge-Kutta scheme where the boundaries are allowed to change during the sub-time steps is employed. This scheme improves the time accuracy of the calculations when the body motion is driven by hydrodynamic forces. Simple one dimensional reacting and non-reacting studies of moving piston are first performed in order to demonstrate the accuracy of the proposed method. Results are then reported for flow past moving cylinders at subsonic and supersonic velocities in a viscous compressible flow and are compared with theoretical and previously available experimental data. The ability of the scheme to handle deforming boundaries and interaction of hydrodynamic forces with rigid body motion is demonstrated using different test cases. Finally, the method is applied to investigate the detonation initiation and stabilization mechanisms on a cylinder and a sphere, when they are launched into a detonable mixture. The effect of the filling pressure on the detonation stabilization mechanisms over a hyper-velocity sphere launched into a hydrogen

  14. The Effects of Design Strength, Fly Ash Content and Curing Method on Compressive Strength of High Volume Fly Ash Concrete: A Design of Experimental

    OpenAIRE

    Solikin Mochamad; Setiawan Budi

    2017-01-01

    High volume fly ash concrete has become one of the alternatives to produce green concrete, as it uses waste material and significantly reduces the utilization of Portland cement in concrete production. Although using less cement, its compressive strength is comparable to that of ordinary Portland cement (hereafter OPC) concrete and its durability increases significantly. This paper reports an investigation on the effect of design strength, fly ash content and curing method on the compressive strength of High Volume Fly ...

  15. A Novel Compressed Sensing Method for Magnetic Resonance Imaging: Exponential Wavelet Iterative Shrinkage-Thresholding Algorithm with Random Shift

    Directory of Open Access Journals (Sweden)

    Yudong Zhang

    2016-01-01

    Aim. It can help improve hospital throughput to accelerate magnetic resonance imaging (MRI) scanning. Patients will benefit from less waiting time. Task. In the last decade, various rapid MRI techniques on the basis of compressed sensing (CS) were proposed. However, both the computation time and the reconstruction quality of traditional CS-MRI did not meet the requirements of clinical use. Method. In this study, a novel method was proposed with the name of exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS). It is composed of three successful components: (i) exponential wavelet transform, (ii) iterative shrinkage-thresholding algorithm, and (iii) random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the least mean absolute error, the least mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches.
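
    The iterative shrinkage-thresholding core shared by EWISTARS and plain ISTA can be sketched directly: a gradient step on the data-fidelity term followed by soft thresholding to promote sparsity. A random Gaussian matrix below stands in for the undersampled Fourier/wavelet operator of CS-MRI, and the parameters are illustrative:

```python
# Sketch of ISTA, the shrinkage-thresholding core of EWISTARS-like methods:
# gradient step on ||Ax - y||^2, then soft-threshold for sparsity.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(80, 200)) / np.sqrt(80)   # measurement operator
x_true = np.zeros(200)
x_true[rng.choice(200, 10, replace=False)] = rng.normal(size=10)
y = A @ x_true                                  # undersampled measurements

lam = 0.05
L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of gradient
x = np.zeros(200)
for _ in range(300):
    g = x - (A.T @ (A @ x - y)) / L             # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold

print("support recovered:", np.flatnonzero(np.abs(x) > 1e-3))
```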

  16. A Novel Compressed Sensing Method for Magnetic Resonance Imaging: Exponential Wavelet Iterative Shrinkage-Thresholding Algorithm with Random Shift

    Science.gov (United States)

    Zhang, Yudong; Yang, Jiquan; Yang, Jianfei; Liu, Aijun; Sun, Ping

    2016-01-01

    Aim. It can help improve the hospital throughput to accelerate magnetic resonance imaging (MRI) scanning. Patients will benefit from less waiting time. Task. In the last decade, various rapid MRI techniques on the basis of compressed sensing (CS) were proposed. However, both computation time and reconstruction quality of traditional CS-MRI did not meet the requirement of clinical use. Method. In this study, a novel method was proposed with the name of exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS). It is composed of three successful components: (i) exponential wavelet transform, (ii) iterative shrinkage-thresholding algorithm, and (iii) random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the least mean absolute error, the least mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches. PMID:27066068

  17. Application of the collapsing method to acoustic emissions in a rock salt sample during a triaxial compression experiment

    International Nuclear Information System (INIS)

    Manthei, G.; Eisenblaetter, J.; Moriya, H.; Niitsuma, H.; Jones, R.H.

    2003-01-01

    Collapsing is a relatively new method used for detecting patterns and structures in blurred and cloudy pictures of multiple event locations. In the case described here, the measurements were made in a very small region with a length of only a few decimeters. The events were registered during a triaxial compression experiment on a compact block of rock salt. The collapsing method showed a cellular structure of the salt block across the whole length of the test piece. The cells had a length of several centimeters, enclosing several salt grains with an average grain size of less than one centimeter. In view of the fact that not all cell walls corresponded to acoustic emission events, it was assumed that only those grain boundaries are activated that are oriented at a favourable angle to the stress field of the test piece [de]

  18. Comparison of the accuracy of SST estimates by artificial neural networks (ANN) and other quantitative methods using radiolarian data from the Antarctic and Pacific Oceans

    Digital Repository Service at National Institute of Oceanography (India)

    Gupta, S.M.; Malmgren, B.A.

    ) regression, the maximum likelihood (ML) method, and artificial neural networks (ANNs), based on radiolarian faunal abundance data from surface sediments from the Antarctic and Pacific Oceans. Recent studies have suggested that ANNs may represent one...

  19. Compression stockings

    Science.gov (United States)

    Call your health insurance or prescription plan: Find out if they pay for compression stockings. Ask if your durable medical equipment benefit pays for compression stockings. Get a prescription from your doctor. Find a medical equipment store where they can ...

  20. Accretion rate in mangrove sediment at Sungai Miang, Pahang, Malaysia: 230Th excess versus artificial horizon marker method

    International Nuclear Information System (INIS)

    Kamaruzzaman Yunus; Jamil Tajam; Hasrizal Shaari; Noor Azhar Mohd Shazili; Misbahul Mohd Amin

    2008-01-01

    Mangroves have enormous ecological value, and one of their important roles is to act as efficient sediment traps for material supplied predominantly by rivers and the atmosphere to the oceans. Applying the 230Th excess method, an average accretion rate of 0.54 cm yr⁻¹ was obtained. This is comparable to that of the artificial horizon marker method, which gave an average of 0.54 cm yr⁻¹. The 230Th excess method provides a rapid and simple means of evaluating 230Th excess accumulation histories in sediment cores. Sample preparation is also significantly simplified, thus providing a relatively quick and easy method for the determination of the accretion rate in mangrove areas. (author)

  1. Estimation of the groundwater recharge in laterite using the artificial tritium method

    International Nuclear Information System (INIS)

    Castro Rubio Poli, D. de; Kimmelman e Silva, A.A.; Pfisterer, U.

    1990-01-01

    An estimation of the groundwater recharge was made, for the first time, in laterite, which is an alteration product of dunite. This work was carried out at the city of Cajati-Jacupiranga, situated in the Ribeira Valley, state of Sao Paulo. The moisture migration in unsaturated zones was analyzed using water tagged with artificial tritium. In the place studied, an annual recharge of 1070 mm was estimated. This value corresponds to 65% of the local precipitation (1650 mm/year). The difference can be considered as a loss through evaporation, evapotranspiration and runoff. (author) [pt]

  2. Fluid-driven origami-inspired artificial muscles

    Science.gov (United States)

    Li, Shuguang; Vogt, Daniel M.; Rus, Daniela; Wood, Robert J.

    2017-12-01

    Artificial muscles hold promise for safe and powerful actuation for myriad common machines and robots. However, the design, fabrication, and implementation of artificial muscles are often limited by their material costs, operating principle, scalability, and single-degree-of-freedom contractile actuation motions. Here we propose an architecture for fluid-driven origami-inspired artificial muscles. This concept requires only a compressible skeleton, a flexible skin, and a fluid medium. A mechanical model is developed to explain the interaction of the three components. A fabrication method is introduced to rapidly manufacture low-cost artificial muscles using various materials and at multiple scales. The artificial muscles can be programed to achieve multiaxial motions including contraction, bending, and torsion. These motions can be aggregated into systems with multiple degrees of freedom, which are able to produce controllable motions at different rates. Our artificial muscles can be driven by fluids at negative pressures (relative to ambient). This feature makes actuation safer than most other fluidic artificial muscles that operate with positive pressures. Experiments reveal that these muscles can contract over 90% of their initial lengths, generate stresses of ˜600 kPa, and produce peak power densities over 2 kW/kg—all equal to, or in excess of, natural muscle. This architecture for artificial muscles opens the door to rapid design and low-cost fabrication of actuation systems for numerous applications at multiple scales, ranging from miniature medical devices to wearable robotic exoskeletons to large deployable structures for space exploration.

  3. Method for Cleanly and Precisely Breaking Off a Rock Core Using a Radial Compressive Force

    Science.gov (United States)

    Richardson, Megan; Lin, Justin

    2011-01-01

    The Mars Sample Return mission has the goal to drill, break off, and retain rock core samples. After some results gained from rock core mechanics testing, the realization that scoring teeth would cleanly break off the core after only a few millimeters of penetration, and noting that rocks are weak in tension, the idea was developed to use symmetric wedging teeth in compression to weaken and then break the core at the contact plane. This concept was developed as a response to the break-off and retention requirements. The wedges wrap around the estimated average diameter of the core to get as many contact locations as possible, and are then pushed inward, radially, through the core towards one another. This starts a crack and begins to apply opposing forces inside the core to propagate the crack across the plane of contact. The advantage is in the simplicity. Only two teeth are needed to break five varieties of Mars-like rock cores with limited penetration and reasonable forces. Its major advantage is that it does not require any length of rock to be attached to the parent in order to break the core at the desired location. Test data shows that some rocks break off on their own into segments or break off into discs. This idea would grab and retain a disc, push some discs upward and others out, or grab a segment, break it at the contact plane, and retain the portion inside of the device. It also does this with few moving parts in a simple, space-efficient design. This discovery could be implemented into a coring drill bit to precisely break off and retain any size rock core.

  4. Thermal characteristics of highly compressed bentonite

    International Nuclear Information System (INIS)

    Sueoka, Tooru; Kobayashi, Atsushi; Imamura, S.; Ogawa, Terushige; Murata, Shigemi.

    1990-01-01

    In the disposal of high level radioactive wastes in strata, it is planned to protect the canisters enclosing the wastes with buffer materials such as overpacks and clay; therefore, the examination of artificial barrier materials is an important problem. The concept of disposal in strata and the soil mechanics characteristics of highly compressed bentonite as an artificial barrier material were already reported. In this study, a basic experiment on the thermal characteristics of highly compressed bentonite was carried out and is reported here. The thermal conductivity of buffer materials is important because it is likely to determine the temperature of the solidified bodies and canisters, and because the buffer materials may undergo thermal degradation at high temperature. Thermophysical properties are roughly divided into thermodynamic, transport and optical properties. The basic principles of measuring thermal conductivity and thermal diffusivity and the kinds of measuring methods are explained. As for the measurement of the thermal conductivity of highly compressed bentonite, the experimental setup, the procedure, the samples and the results are reported. (K.I.)

  5. Discontinuous Galerkin finite element method with anisotropic local grid refinement for inviscid compressible flows

    NARCIS (Netherlands)

    van der Vegt, Jacobus J.W.; van der Ven, H.

    1998-01-01

    A new discretization method for the three-dimensional Euler equations of gas dynamics is presented, which is based on the discontinuous Galerkin finite element method. Special attention is paid to an efficient implementation of the discontinuous Galerkin method that minimizes the number of flux

  6. Supervised artificial neural network-based method for conversion of solar radiation data (case study: Algeria)

    Science.gov (United States)

    Laidi, Maamar; Hanini, Salah; Rezrazi, Ahmed; Yaiche, Mohamed Redha; El Hadj, Abdallah Abdallah; Chellali, Farouk

    2017-04-01

    In this study, a backpropagation artificial neural network (BP-ANN) model is used as an alternative approach to predict solar radiation on tilted surfaces (SRT) using a number of variables involved in the physical process. These variables are namely the latitude of the site, mean temperature and relative humidity, Linke turbidity factor and Angstrom coefficient, extraterrestrial solar radiation, solar radiation data measured on horizontal surfaces (SRH), and solar zenith angle. Experimental solar radiation data from 13 stations spread all over Algeria over the year 2004 were used for training/validation and testing of the artificial neural networks (ANNs), and one station was used to check the interpolation of the designed ANN. The ANN model was trained, validated, and tested using 60, 20, and 20 % of all data, respectively. The configuration 8-35-1 (8 inputs, 35 hidden, and 1 output neurons) presented an excellent agreement between the prediction and the experimental data during the test stage, with a determination coefficient of 0.99 and a root mean squared error of 5.75 Wh/m², considering a three-layer feedforward backpropagation neural network with the Levenberg-Marquardt training algorithm, a hyperbolic tangent sigmoid transfer function at the hidden layer and a linear transfer function at the output layer. This novel model could be used by researchers or scientists to design high-efficiency solar devices that are usually tilted at an optimum angle to increase the solar incidence on the surface.

  7. Application of a Hybrid Method Combining Grey Model and Back Propagation Artificial Neural Networks to Forecast Hepatitis B in China

    Directory of Open Access Journals (Sweden)

    Ruijing Gan

    2015-01-01

    Accurate incidence forecasting of infectious disease provides potentially valuable insights in its own right. It is critical for early prevention and may contribute to health services management and syndrome surveillance. This study investigates the use of a hybrid algorithm combining the grey model (GM) and back propagation artificial neural networks (BP-ANN) to forecast hepatitis B in China based on the yearly numbers of hepatitis B cases and to evaluate the method's feasibility. The results showed that the proposed method has advantages over GM(1,1) and GM(2,1) in all the evaluation indexes.

  8. Development of classification and prediction methods of critical heat flux using fuzzy theory and artificial neural networks

    International Nuclear Information System (INIS)

    Moon, Sang Ki

    1995-02-01

    This thesis applies new information techniques, artificial neural networks (ANNs) and fuzzy theory, to the investigation of the critical heat flux (CHF) phenomenon for water flow in vertical round tubes. The work performed comprises (a) classification and prediction of CHF based on fuzzy clustering and ANNs, (b) prediction and parametric trend analysis of CHF using ANNs with the introduction of dimensionless parameters, and (c) detection of CHF occurrence using fuzzy rules and a spatiotemporal neural network (STN). Fuzzy clustering and ANNs are used for classification and prediction of the CHF using primary system parameters. The fuzzy clustering classifies the experimental CHF data into a few data clusters (data groups) according to the data characteristics. After classification of the experimental data, the characteristics of the resulting clusters are discussed, with emphasis on the distribution of the experimental conditions and physical mechanisms. The CHF data in each group are trained in an artificial neural network to predict the CHF. The artificial neural network adjusts the weights so as to minimize the prediction error within the corresponding cluster. Application of the proposed method to the KAIST CHF data bank shows good prediction capability of the CHF, better than other existing methods. Parametric trends of the CHF are analyzed by applying artificial neural networks to a CHF data base for water flow in uniformly heated vertical round tubes. The analyses are performed from three viewpoints, i.e., for fixed inlet conditions, for fixed exit conditions, and based on the local conditions hypothesis. In order to remove the necessity of data classification, Katto and Groeneveld et al.'s dimensionless parameters are introduced in training the ANNs with the experimental CHF data. The trained ANNs predict the CHF better than any other conventional correlations, showing RMS errors of 8.9%, 13.1%, and 19.3% for fixed inlet conditions, for fixed exit conditions, and for local

  9. Artificial Inductance Concept to Compensate Nonlinear Inductance Effects in the Back EMF-Based Sensorless Control Method for PMSM

    DEFF Research Database (Denmark)

    Lu, Kaiyuan; Lei, Xiao; Blaabjerg, Frede

    2013-01-01

    The back EMF-based sensorless control method is very popular for permanent magnet synchronous machines (PMSMs) in the medium- to high-speed operation range due to its simple structure. In this speed range, the accuracy of the estimated position is mainly affected by the inductance, which varies...... at different loading conditions due to saturation effects. In this paper, a new concept of using a constant artificial inductance to replace the actual varying machine inductance for position estimation is introduced. This greatly facilitates the analysis of the influence of inductance variation......

  10. Effect of filtration of signals of brain activity on quality of recognition of brain activity patterns using artificial intelligence methods

    Science.gov (United States)

    Hramov, Alexander E.; Frolov, Nikita S.; Musatov, Vyachaslav Yu.

    2018-02-01

    In the present work we studied features of the classification of human brain states corresponding to real movements of the hands and legs. For this purpose we used a supervised learning algorithm based on feed-forward artificial neural networks (ANNs) with error back-propagation, along with the support vector machine (SVM) method. We compared the quality of classification of operator movements by means of EEG signals obtained experimentally, both without preliminary processing and after filtration in different ranges up to 25 Hz. It was shown that low-frequency filtering of multichannel EEG data significantly improved the accuracy of operator movement classification.
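
    A minimal sketch of the kind of low-frequency filtering described above, assuming a zero-phase Butterworth low-pass at 25 Hz and a 250 Hz sampling rate (both assumptions; the abstract does not fix the filter design):

        # Zero-phase Butterworth low-pass at 25 Hz applied channel-wise to
        # placeholder multichannel EEG; sampling rate and filter order assumed.
        import numpy as np
        from scipy.signal import butter, filtfilt

        fs = 250.0                                  # assumed EEG sampling rate, Hz
        b, a = butter(N=4, Wn=25.0 / (fs / 2.0), btype='low')

        eeg = np.random.randn(32, 5000)             # 32 channels x 5000 samples (placeholder)
        eeg_filtered = filtfilt(b, a, eeg, axis=1)  # filter each channel along time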

  11. Compressed sensing & sparse filtering

    CERN Document Server

    Carmi, Avishy Y; Godsill, Simon J

    2013-01-01

    This book is aimed at presenting concepts, methods and algorithms able to cope with undersampled and limited data. One such trend that recently gained popularity and to some extent revolutionised signal processing is compressed sensing. Compressed sensing builds upon the observation that many signals in nature are nearly sparse (or compressible, as they are normally referred to) in some domain, and consequently they can be reconstructed to within high accuracy from far fewer observations than traditionally held to be necessary. Apart from compressed sensing this book contains other related app

  12. The research of optimal selection method for wavelet packet basis in compressing the vibration signal of a rolling bearing in fans and pumps

    International Nuclear Information System (INIS)

    Hao, W; Jinji, G

    2012-01-01

    Compressing the vibration signal of a rolling bearing has important significance for the wireless monitoring and remote diagnosis of fans and pumps, which are widely used in the petrochemical industry. In this paper, according to the characteristics of the vibration signal of a rolling bearing, a compression method based on the optimal selection of the wavelet packet basis is proposed. We analyze several main attributes of wavelet packet bases and their effect on the compression of the vibration signal of a rolling bearing using the wavelet packet transform at various compression ratios, and propose a method to precisely select a wavelet packet basis. Through an actual signal, we come to the conclusion that an orthogonal wavelet packet basis with low vanishing moment should be used to compress the vibration signal of a rolling bearing to obtain an accurate energy proportion between the feature bands in the spectrum of the reconstructed signal. Among these low-vanishing-moment orthogonal wavelet packet bases, the 'coif' wavelet packet basis obtains the best signal-to-noise ratio at a given compression ratio owing to its good symmetry.
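
    The sketch below illustrates the idea with PyWavelets: decompose a (random placeholder) vibration signal into a level-4 wavelet packet tree over a low-vanishing-moment orthogonal basis ('coif1'), zero all but the largest coefficients in each terminal node, and reconstruct. The keep-10% thresholding rule is an illustrative assumption, not the paper's selection criterion.

        # Level-4 wavelet packet compression of a placeholder vibration signal.
        import numpy as np
        import pywt

        signal = np.random.randn(4096)   # placeholder bearing vibration signal
        wp = pywt.WaveletPacket(signal, wavelet='coif1', mode='symmetric', maxlevel=4)

        # Crude compression: keep only the largest coefficients in each node.
        for node in wp.get_level(4, order='natural'):
            coeffs = node.data
            thresh = np.quantile(np.abs(coeffs), 0.9)   # keep ~10% of coefficients
            node.data = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)

        reconstructed = wp.reconstruct(update=False)[:len(signal)]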

  13. Compressive force-path method unified ultimate limit-state design of concrete structures

    CERN Document Server

    Kotsovos, Michael D

    2014-01-01

    This book presents a method which simplifies and unifies the design of reinforced concrete (RC) structures and is applicable to any structural element under both normal and seismic loading conditions. The proposed method has a sound theoretical basis and is expressed in a unified form applicable to all structural members, as well as their connections. It is applied in practice through the use of simple failure criteria derived from first principles, without the need for calibration against experimental data. The method is capable of predicting not only load-carrying capacity but also the locations and modes of failure, as well as safeguarding the structural performance requirements of codes. In this book, the concepts underlying the method are first presented for the case of simply supported RC beams. The application of the method is progressively extended so as to cover all common structural elements. For each structural element considered, evidence of the validity of the proposed method is presented t...

  14. Screening of a new cadmium hyperaccumulator, Galinsoga parviflora, from winter farmland weeds using the artificially high soil cadmium concentration method.

    Science.gov (United States)

    Lin, Lijin; Jin, Qian; Liu, Yingjie; Ning, Bo; Liao, Ming'an; Luo, Li

    2014-11-01

    A new method, the artificially high soil cadmium (Cd) concentration method, was used to screen for Cd hyperaccumulators among winter farmland weeds. Galinsoga parviflora was the most promising remedial plant among 5 Cd accumulators or hyperaccumulators. In Cd concentration gradient experiments, as soil Cd concentration increased, root and shoot biomass decreased, and their Cd contents increased. In additional concentration gradient experiments, superoxide dismutase and peroxidase activities increased with soil Cd concentrations up to 75 mg kg⁻¹, while expression of their isoenzymes strengthened. Catalase (CAT) activity declined and CAT isoenzyme expression weakened at soil Cd concentrations less than 50 mg kg⁻¹. The maxima of Cd contents in shoots and roots were 137.63 mg kg⁻¹ and 105.70 mg kg⁻¹, respectively, at 100 mg kg⁻¹ Cd in soil. The root and shoot bioconcentration factors exceeded 1.0, as did the translocation factor. In a field experiment, total extraction of Cd by shoots was 1.35 mg m⁻² to 1.43 mg m⁻² at soil Cd levels of 2.04 mg kg⁻¹ to 2.89 mg kg⁻¹. Therefore, the artificially high soil Cd concentration method was effective for screening Cd hyperaccumulators. Galinsoga parviflora is a Cd hyperaccumulator that could be used to efficiently remediate Cd-contaminated farmland soil. © 2014 SETAC.

  15. [Research progress on mechanical performance evaluation of artificial intervertebral disc].

    Science.gov (United States)

    Li, Rui; Wang, Song; Liao, Zhenhua; Liu, Weiqiang

    2018-03-01

    The mechanical properties of an artificial intervertebral disc (AID) are related to the long-term reliability of the prosthesis. Three testing approaches, based on different tools, are involved in the mechanical performance evaluation of AIDs: testing with a mechanical simulator, in vitro specimen testing, and finite element analysis. In this study, the testing standards, testing equipment and materials for AIDs are first introduced. Then the present status of AID static mechanical property tests (static axial compression, static axial compression-shear), dynamic mechanical property tests (dynamic axial compression, dynamic axial compression-shear), creep and stress relaxation tests, device pushout tests, core pushout tests, subsidence tests, etc. is reviewed. The experimental techniques of the in vitro specimen testing method and the testing results for available artificial discs are summarized, as are the experimental methods and research status of finite element analysis. Finally, research trends in AID mechanical performance evaluation are forecast: the simulator, load, dynamic cycle, motion mode, specimen and test standard will be important research fields in the future.

  16. Website-based PNG image steganography using the modified Vigenere Cipher, least significant bit, and dictionary based compression methods

    Science.gov (United States)

    Rojali, Salman, Afan Galih; George

    2017-08-01

    Along with the development of information technology, various adverse and difficult-to-avoid actions are emerging. One such action is data theft. Therefore, this study discusses cryptography and steganography, which aim to overcome these problems. This study uses the modified Vigenere Cipher, Least Significant Bit and Dictionary Based Compression methods. To determine the performance of the study, the Peak Signal to Noise Ratio (PSNR) method is used as an objective measure and the Mean Opinion Score (MOS) method as a subjective measure; the performance is also compared to other methods such as Spread Spectrum and Pixel Value Differencing. After comparison, it can be concluded that this study provides better performance than the other methods (Spread Spectrum and Pixel Value Differencing), with a range of MSE values of 0.0191622-0.05275 and PSNR of 60.909 to 65.306 for a hidden file size of 18 kb, and a MOS value range of 4.214 to 4.722, i.e., image quality approaching very good.
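
    A minimal sketch of the objective metric used above: PSNR computed from the MSE between a cover image and a stego image. The random images and single-plane LSB flipping are illustrative assumptions.

        # PSNR from the MSE between a cover image and its stego version.
        import numpy as np

        def psnr(original, stego, peak=255.0):
            mse = np.mean((original.astype(float) - stego.astype(float)) ** 2)
            return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

        cover = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
        stego = cover.copy()
        stego[..., 0] ^= np.random.randint(0, 2, cover.shape[:2], dtype=np.uint8)  # flip some LSBs
        print(f"PSNR: {psnr(cover, stego):.2f} dB")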

  17. DELIMINATE--a fast and efficient method for loss-less compression of genomic sequences: sequence analysis.

    Science.gov (United States)

    Mohammed, Monzoorul Haque; Dutta, Anirban; Bose, Tungadri; Chadaram, Sudha; Mande, Sharmila S

    2012-10-01

    An unprecedented quantity of genome sequence data is currently being generated using next-generation sequencing platforms. This has necessitated the development of novel bioinformatics approaches and algorithms that not only facilitate a meaningful analysis of these data but also aid in efficient compression, storage, retrieval and transmission of huge volumes of the generated data. We present a novel compression algorithm (DELIMINATE) that can rapidly compress genomic sequence data in a loss-less fashion. Validation results indicate relatively higher compression efficiency of DELIMINATE when compared with popular general purpose compression algorithms, namely, gzip, bzip2 and lzma. Linux, Windows and Mac implementations (both 32 and 64-bit) of DELIMINATE are freely available for download at: http://metagenomics.atc.tcs.com/compression/DELIMINATE. sharmila@atc.tcs.com Supplementary data are available at Bioinformatics online.

  18. Comparing and validating methods of reading instruction using behavioural and neural findings in an artificial orthography.

    Science.gov (United States)

    Taylor, J S H; Davis, Matthew H; Rastle, Kathleen

    2017-06-01

    There is strong scientific consensus that emphasizing print-to-sound relationships is critical when learning to read alphabetic languages. Nevertheless, reading instruction varies across English-speaking countries, from intensive phonic training to multicuing environments that teach sound- and meaning-based strategies. We sought to understand the behavioral and neural consequences of these differences in relative emphasis. We taught 24 English-speaking adults to read 2 sets of 24 novel words (e.g., /buv/, /sig/), written in 2 different unfamiliar orthographies. Following pretraining on oral vocabulary, participants learned to read the novel words over 8 days. Training in 1 language was biased toward print-to-sound mappings while training in the other language was biased toward print-to-meaning mappings. Results showed striking benefits of print-sound training on reading aloud, generalization, and comprehension of single words. Univariate analyses of fMRI data collected at the end of training showed that print-meaning relative to print-sound training increased neural effort in dorsal pathway regions involved in reading aloud. Conversely, activity in ventral pathway brain regions involved in reading comprehension was no different following print-meaning versus print-sound training. Multivariate analyses validated our artificial language approach, showing high similarity between the spatial distribution of fMRI activity during artificial and English word reading. Our results suggest that early literacy education should focus on the systematicities present in print-to-sound relationships in alphabetic languages, rather than teaching meaning-based strategies, in order to enhance both reading aloud and comprehension of written words. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  19. Daily Reservoir Runoff Forecasting Method Using Artificial Neural Network Based on Quantum-behaved Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Chun-tian Cheng

    2015-07-01

    Accurate daily runoff forecasting is of great significance for the operational control of hydropower stations and power grids. Conventional methods, including rainfall-runoff models and statistical techniques, usually rely on a number of assumptions, leading to some deviation from the exact results. The artificial neural network (ANN) has the advantages of high fault tolerance and strong nonlinear mapping and learning ability, which provides an effective method for daily runoff forecasting. However, its training has certain drawbacks, such as being time-consuming, slow learning speed, and easily falling into local optima, which cannot be ignored in real world applications. In order to overcome the disadvantages of the ANN model, an artificial neural network model based on quantum-behaved particle swarm optimization (QPSO), ANN-QPSO for short, is presented for daily runoff forecasting in this paper, where QPSO is employed to select the synaptic weights and thresholds of the ANN, while the ANN is used for the prediction. The proposed model combines the advantages of both QPSO and ANN to enhance the generalization performance of the forecasting model. The methodology is assessed using the daily runoff data of the Hongjiadu reservoir in southeast Guizhou province of China from 2006 to 2014. The results demonstrate that the proposed approach achieves much better forecast accuracy than the basic ANN model, and that the QPSO algorithm is an alternative training technique for ANN parameter selection.
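
    A compact sketch of quantum-behaved PSO of the kind used above to select ANN weights and thresholds; here it minimizes a toy sphere function standing in for the network's forecasting error, and the swarm size and contraction-expansion coefficient beta are illustrative assumptions.

        # Quantum-behaved PSO (QPSO) minimizing a toy stand-in objective.
        import numpy as np

        def qpso(objective, dim=10, n_particles=30, iters=200, beta=0.75, seed=0):
            rng = np.random.default_rng(seed)
            X = rng.uniform(-1.0, 1.0, (n_particles, dim))
            pbest = X.copy()
            pbest_val = np.array([objective(x) for x in X])
            for _ in range(iters):
                gbest = pbest[np.argmin(pbest_val)]
                mbest = pbest.mean(axis=0)                # mean of personal bests
                phi = rng.random((n_particles, dim))
                u = 1.0 - rng.random((n_particles, dim))  # in (0, 1], keeps log finite
                p = phi * pbest + (1.0 - phi) * gbest     # local attractors
                sign = np.where(rng.random((n_particles, dim)) < 0.5, -1.0, 1.0)
                X = p + sign * beta * np.abs(mbest - X) * np.log(1.0 / u)
                vals = np.array([objective(x) for x in X])
                improved = vals < pbest_val
                pbest[improved], pbest_val[improved] = X[improved], vals[improved]
            return pbest[np.argmin(pbest_val)], float(pbest_val.min())

        best_w, best_err = qpso(lambda w: float(np.sum(w ** 2)))
        print(best_err)   # should approach 0 for the sphere function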

  20. Introducing micrometer-sized artificial objects into live cells: a method for cell-giant unilamellar vesicle electrofusion.

    Directory of Open Access Journals (Sweden)

    Akira C Saito

    Here, we report a method for introducing large objects of up to a micrometer in diameter into cultured mammalian cells by electrofusion of giant unilamellar vesicles (GUVs). We prepared GUVs containing various artificial objects using a water-in-oil (w/o) emulsion centrifugation method. GUVs and dispersed HeLa cells were exposed to an alternating current (AC) field to induce a linear cell-GUV alignment, and then a direct current (DC) pulse was applied to facilitate transient electrofusion. With uniformly sized fluorescent beads as size indexes, we successfully and efficiently introduced beads of 1 µm in diameter into living cells along with a plasmid mammalian expression vector. Our electrofusion did not affect cell viability. After the electrofusion, cells proliferated normally until confluence was reached, and the introduced fluorescent beads were inherited during cell division. Analysis by both confocal microscopy and flow cytometry supported these findings. As an alternative approach, we also introduced a designed nanostructure (DNA origami) into live cells. The results we report here represent a milestone for designing artificial symbiosis of functionally active objects (such as micro-machines) in living cells. Moreover, our technique can be used for drug delivery, tissue engineering, and cell manipulation.

  1. Degradation of ticarcillin by subcritical water oxidation method: Application of response surface methodology and artificial neural network modeling.

    Science.gov (United States)

    Yabalak, Erdal

    2018-05-18

    This study was performed to investigate the mineralization of ticarcillin in an artificially prepared aqueous solution representing ticarcillin-contaminated waters, which constitute a serious problem for human health. Removals of 81.99% of total organic carbon, 79.65% of chemical oxygen demand, and 94.35% of ticarcillin were achieved by using the eco-friendly, time-saving, powerful and easy-to-apply subcritical water oxidation method in the presence of a safe-to-use oxidizing agent, hydrogen peroxide. Central composite design, which belongs to response surface methodology, was applied to design the degradation experiments, to optimize the method, and to evaluate the effects of the system variables, namely temperature, hydrogen peroxide concentration, and treatment time, on the responses. In addition, theoretical equations were proposed for each removal process. ANOVA tests were utilized to evaluate the reliability of the fitted models. F values of 245.79, 88.74, and 48.22 were found for total organic carbon removal, chemical oxygen demand removal, and ticarcillin removal, respectively. Moreover, artificial neural network modeling was applied to estimate the response in each case, and its prediction and optimization performance was statistically examined and compared to that of the central composite design.

  2. Spatial interpolation and radiological mapping of ambient gamma dose rate by using artificial neural networks and fuzzy logic methods.

    Science.gov (United States)

    Yeşilkanat, Cafer Mert; Kobya, Yaşar; Taşkın, Halim; Çevik, Uğur

    2017-09-01

    The aim of this study was to determine the spatial risk dispersion of the ambient gamma dose rate (AGDR) by using both artificial neural network (ANN) and fuzzy logic (FL) methods, compare the performance of the methods, make dose estimations for intermediate stations with no previous measurements, and create dose rate risk maps of the study area. In order to determine the dose distribution by using artificial neural networks, two main network types with five different network structures were used: feed-forward ANNs, namely the multi-layer perceptron (MLP), radial basis function neural network (RBFNN) and quantile regression neural network (QRNN), and recurrent ANNs, namely Jordan networks (JN) and Elman networks (EN). In the evaluation of the estimation performance obtained for the test data, all models appear to give similar results. According to the cross-validation results obtained for explaining the AGDR distribution, Pearson's r coefficients were calculated as 0.94, 0.91, 0.89, 0.91, 0.91 and 0.92 and RMSE values were calculated as 34.78, 43.28, 63.92, 44.86, 46.77 and 37.92 for MLP, RBFNN, QRNN, JN, EN and FL, respectively. In addition, spatial risk maps showing the distribution of the AGDR over the study area were created by all models and the results were compared with the geological, topological and soil structure. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Determination of penetration depth at high velocity impact using finite element method and artificial neural network tools

    Directory of Open Access Journals (Sweden)

    Namık Kılıç

    2015-06-01

    Determination of the ballistic performance of an armor solution is a complicated task, and it has evolved significantly with the application of finite element methods (FEM) in this research field. Traditional armor design studies performed with FEM require sophisticated procedures and intensive computational effort; therefore, simpler yet accurate numerical approaches are always worthwhile to decrease armor development time. This study aims to apply a hybrid method using FEM simulation and artificial neural network (ANN) analysis to approximate the ballistic limit thickness for armor steels. To achieve this objective, a predictive model based on artificial neural networks is developed to determine the ballistic resistance of high hardness armor steels against 7.62 mm armor piercing ammunition. In this methodology, the FEM simulations are used to create training cases for three-layer Multilayer Perceptron (MLP) networks. In order to validate the FE simulation methodology, ballistic shot tests on a 20 mm thick target were performed according to the STANAG 4569 standard. Afterwards, the successfully trained ANN(s) is used to predict the ballistic limit thickness of 500 HB high hardness steel armor. Results show that even with a limited amount of data, the FEM-ANN approach can be used to predict ballistic penetration depth with adequate accuracy.

  4. Artificial intelligence

    CERN Document Server

    Hunt, Earl B

    1975-01-01

    Artificial Intelligence provides information pertinent to the fundamental aspects of artificial intelligence. This book presents the basic mathematical and computational approaches to problems in the artificial intelligence field.Organized into four parts encompassing 16 chapters, this book begins with an overview of the various fields of artificial intelligence. This text then attempts to connect artificial intelligence problems to some of the notions of computability and abstract computing devices. Other chapters consider the general notion of computability, with focus on the interaction bet

  5. Efficacy of Blood Sources and Artificial Blood Feeding Methods in Rearing of Aedes aegypti (Diptera: Culicidae) for Sterile Insect Technique and Incompatible Insect Technique Approaches in Sri Lanka

    OpenAIRE

    Nayana Gunathilaka; Tharaka Ranathunge; Lahiru Udayanga; Wimaladharma Abeyewickreme

    2017-01-01

    Introduction: Selection of the artificial membrane feeding technique and blood meal source has been recognized as a key consideration in the mass rearing of vectors. Methodology: Artificial membrane feeding techniques, namely the glass plate, metal plate, and Hemotek membrane feeding methods, and three blood sources (human, cattle, and chicken) were evaluated based on feeding rates, fecundity, and hatching rates of Aedes aegypti. Significance in the variations among blood feeding was investigated by one...

  6. Scalable Methods for Eulerian-Lagrangian Simulation Applied to Compressible Multiphase Flows

    Science.gov (United States)

    Zwick, David; Hackl, Jason; Balachandar, S.

    2017-11-01

    Multiphase flows can be found in countless areas of physics and engineering. Many of these flows can be classified as dispersed two-phase flows, meaning that solid particles are dispersed in a continuous fluid phase. A common technique for simulating such flows is the Eulerian-Lagrangian method. While useful, this method can suffer from scaling issues on the larger problem sizes that are typical of many realistic geometries. Here we present scalable techniques for Eulerian-Lagrangian simulations and apply them to the simulation of a particle bed subjected to expansion waves in a shock tube. The results show that the methods presented here are viable for the simulation of larger problems on modern supercomputers. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1315138. This work was supported in part by the U.S. Department of Energy under Contract No. DE-NA0002378.

  7. Finite element methods in incompressible, adiabatic, and compressible flows from fundamental concepts to applications

    CERN Document Server

    Kawahara, Mutsuto

    2016-01-01

    This book focuses on the finite element method in fluid flows. It is targeted at researchers, from those just starting out up to practitioners with some experience. Part I is devoted to beginners who are already familiar with elementary calculus. Precise concepts of the finite element method as used in the analysis of fluid flow are stated, starting with spring structures, which are most suitable for showing the concepts of superposition/assembling. The pipeline system and potential flow sections present the linear problem. The advection-diffusion section presents the time-dependent problem; mixed interpolation is explained using creeping flows, and elementary computer programs in FORTRAN are included. Part II provides information on recent computational methods and their applications to practical problems. Theories of the Streamline-Upwind/Petrov-Galerkin (SUPG) formulation, characteristic formulation, and Arbitrary Lagrangian-Eulerian (ALE) formulation and others are presented with practical results so...

  8. The Effects of Design Strength, Fly Ash Content and Curing Method on Compressive Strength of High Volume Fly Ash Concrete: A Design of Experimental

    Directory of Open Access Journals (Sweden)

    Solikin Mochamad

    2017-01-01

    High volume fly ash concrete has become one of the alternatives for producing green concrete, as it uses waste material and significantly reduces the use of Portland cement in concrete production. Although it uses less cement, its compressive strength is comparable to that of ordinary Portland cement (hereafter OPC) concrete, and its durability increases significantly. This paper reports an investigation of the effect of design strength, fly ash content and curing method on the compressive strength of high volume fly ash concrete. The experiment and data analysis were prepared using Minitab, a statistical software package for design of experiments. The specimens were concrete cylinders with a diameter of 15 cm and a height of 30 cm, tested for compressive strength at 56 days. The results demonstrate that high volume fly ash concrete can reach the OPC design strength, especially for high strength concrete. In addition, the best mix proportion for achieving the design strength is the combination of high strength concrete and 50% fly ash content. Moreover, the use of the spraying method for curing concrete on site is still recommended, as it does not significantly reduce the resulting compressive strength.

  9. Methods for evaluating tensile and compressive properties of plastic laminates reinforced with unwoven glass fibers

    Science.gov (United States)

    Karl Romstad

    1964-01-01

    Methods of obtaining strength and elastic properties of plastic laminates reinforced with unwoven glass fibers were evaluated using the criteria of the strength values obtained and the failure characteristics observed. Variables investigated were specimen configuration and the manner of supporting and loading the specimens. Results of this investigation indicate that...

  10. Geochemical and isotopic methods for management of artificial recharge in mazraha station (Damascus)

    International Nuclear Information System (INIS)

    Abou Zakhem, B.; Hafez, R.; Kadkoy, N.

    2009-11-01

    Artificial recharge of shallow groundwater at specially designed facilities is an attractive option for increasing the storage capacity of potable water in arid and semi-arid regions such as the Damascus Oasis in Syria. This operation needs integrated management and detailed knowledge of groundwater dynamics and of the quantity and quality development of the water. The objective of this study is to determine the temporal and spatial variations of the chemical and environmental isotopic characteristics of groundwater during the injection and recovery process. Geochemical and environmental isotope techniques are ideally suited for these investigations. 400 to 500 × 10³ m³ of spring water were injected annually into the ambient groundwater at the Mazraha station, Damascus Oasis, to be used later for drinking purposes. Native groundwater and injected water are of calcium bicarbonate type with EC of about 850 ± 100 μS/cm and 300 ± 50 μS/cm, respectively. The injected water is undersaturated with respect to calcite, while the ambient groundwater is oversaturated and the mixed water is in equilibrium after injection. It was observed that the injection process created a dilution cloud, decreasing chemical concentrations progressively and improving the groundwater quality. After injection was completed, the dilution center moved about 200 m over 85 days to the south-southeast, following the ambient groundwater flow path. Based on this observation, the hydraulic conductivity of the aquifer is estimated at about 7.5 ± 1.3 × 10⁻⁴ m/s. The spatial distribution maps of CFC-11 and CFC-12 after injection showed the same shape and flow direction as the spatial distributions of the chemical elements. The effective diameter of artificial recharge is limited to about 250 m from the injection wells, as EC, Cl⁻ and NO₃⁻ concentrations are affected significantly. A mixing ratio of 30% is required in order to lower the nitrate concentration to less than 50 mg/l in native groundwater for potable water. Depending on pumping rate, the

  11. Separation prediction in two dimensional boundary layer flows using artificial neural networks

    International Nuclear Information System (INIS)

    Sabetghadam, F.; Ghomi, H.A.

    2003-01-01

    In this article, the ability of artificial neural networks to predict separation in steady two-dimensional boundary layer flows is studied. Data for network training are extracted from the numerical solution of an ODE obtained from the von Karman integral equation with the approximate one-parameter Pohlhausen velocity profile. As an appropriate neural network, a two-layer radial basis generalized regression artificial neural network is used. The results show good agreement between the overall behavior of the flow fields predicted by the artificial neural network and the actual flow fields for some cases. The method can easily be extended to unsteady separation and to turbulent as well as compressible boundary layer flows. (author)
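
    For reference, the momentum-integral relation behind the training data is usually written as follows (standard form for steady two-dimensional incompressible boundary layers; symbols as conventionally defined, with laminar separation associated with a Pohlhausen parameter of about -12):

        % von Karman momentum-integral equation with the Pohlhausen parameter:
        \[
          \frac{d\theta}{dx} + (2 + H)\,\frac{\theta}{U}\,\frac{dU}{dx}
          = \frac{C_f}{2}, \qquad
          \Lambda = \frac{\delta^{2}}{\nu}\,\frac{dU}{dx},
        \]
        % where theta is the momentum thickness, H the shape factor, U the edge
        % velocity, C_f the skin-friction coefficient, and delta the boundary
        % layer thickness.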

  12. A Fourier-based compressed sensing technique for accelerated CT image reconstruction using first-order methods

    International Nuclear Information System (INIS)

    Choi, Kihwan; Li, Ruijiang; Nam, Haewon; Xing, Lei

    2014-01-01

    As a solution to iterative CT image reconstruction, first-order methods are prominent for the large-scale capability and the fast convergence rate O(1/k²). In practice, the CT system matrix with a large condition number may lead to slow convergence speed despite the theoretically promising upper bound. The aim of this study is to develop a Fourier-based scaling technique to enhance the convergence speed of first-order methods applied to CT image reconstruction. Instead of working in the projection domain, we transform the projection data and construct a data fidelity model in Fourier space. Inspired by the filtered backprojection formalism, the data are appropriately weighted in Fourier space. We formulate an optimization problem based on weighted least-squares in the Fourier space and total-variation (TV) regularization in image space for parallel-beam, fan-beam and cone-beam CT geometry. To achieve the maximum computational speed, the optimization problem is solved using a fast iterative shrinkage-thresholding algorithm with backtracking line search and GPU implementation of projection/backprojection. The performance of the proposed algorithm is demonstrated through a series of digital simulation and experimental phantom studies. The results are compared with the existing TV regularized techniques based on statistics-based weighted least-squares as well as basic algebraic reconstruction technique. The proposed Fourier-based compressed sensing (CS) method significantly improves both the image quality and the convergence rate compared to the existing CS techniques. (paper)

  13. A Fourier-based compressed sensing technique for accelerated CT image reconstruction using first-order methods.

    Science.gov (United States)

    Choi, Kihwan; Li, Ruijiang; Nam, Haewon; Xing, Lei

    2014-06-21

    As a solution to iterative CT image reconstruction, first-order methods are prominent for the large-scale capability and the fast convergence rate O(1/k²). In practice, the CT system matrix with a large condition number may lead to slow convergence speed despite the theoretically promising upper bound. The aim of this study is to develop a Fourier-based scaling technique to enhance the convergence speed of first-order methods applied to CT image reconstruction. Instead of working in the projection domain, we transform the projection data and construct a data fidelity model in Fourier space. Inspired by the filtered backprojection formalism, the data are appropriately weighted in Fourier space. We formulate an optimization problem based on weighted least-squares in the Fourier space and total-variation (TV) regularization in image space for parallel-beam, fan-beam and cone-beam CT geometry. To achieve the maximum computational speed, the optimization problem is solved using a fast iterative shrinkage-thresholding algorithm with backtracking line search and GPU implementation of projection/backprojection. The performance of the proposed algorithm is demonstrated through a series of digital simulation and experimental phantom studies. The results are compared with the existing TV regularized techniques based on statistics-based weighted least-squares as well as basic algebraic reconstruction technique. The proposed Fourier-based compressed sensing (CS) method significantly improves both the image quality and the convergence rate compared to the existing CS techniques.
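
    A bare-bones sketch of the accelerated first-order scheme the abstract refers to: FISTA with soft-thresholding for an l1-regularized least-squares problem min_x 0.5||Ax - b||² + lam||x||₁. A fixed 1/L step replaces the paper's backtracking line search, a plain l1 penalty stands in for TV regularization, and A, b, lam are toy assumptions.

        # Sketch of FISTA with soft-thresholding (illustrative problem only).
        import numpy as np

        def fista(A, b, lam, iters=200):
            L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the LS gradient
            x = np.zeros(A.shape[1])
            y = x.copy()
            t = 1.0
            for _ in range(iters):
                grad = A.T @ (A @ y - b)       # gradient of the data-fidelity term
                z = y - grad / L
                x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # l1 prox
                t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
                y = x_new + ((t - 1.0) / t_new) * (x_new - x)              # momentum
                x, t = x_new, t_new
            return x

        A = np.random.randn(80, 200)
        x_true = np.zeros(200)
        x_true[:5] = 1.0
        b = A @ x_true
        print(np.round(fista(A, b, lam=0.1)[:8], 2))   # close to x_true[:8]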

  14. Convergence of a numerical method for the compressible Navier-Stokes system on general domains

    Czech Academy of Sciences Publication Activity Database

    Feireisl, Eduard; Karper, T.; Michálek, Martin

    2016-01-01

    Roč. 134, č. 4 (2016), s. 667-704 ISSN 0029-599X R&D Projects: GA ČR GA13-00522S Institutional support: RVO:67985840 Keywords: numerical methods * Navier-Stokes system Subject RIV: BA - General Mathematics Impact factor: 2.152, year: 2016 http://link.springer.com/article/10.1007%2Fs00211-015-0786-6

  15. An h-p Taylor-Galerkin finite element method for compressible Euler equations

    Science.gov (United States)

    Demkowicz, L.; Oden, J. T.; Rachowicz, W.; Hardy, O.

    1991-01-01

    An extension of the familiar Taylor-Galerkin method to arbitrary h-p spatial approximations is proposed. Boundary conditions are analyzed, and a linear stability result for arbitrary meshes is given, showing unconditional stability for an implicitness parameter α ≥ 0.5. The wedge and blunt body problems are solved with linear, quadratic, and cubic elements and h-adaptivity, showing the feasibility of higher orders of approximation for problems with shocks.

  16. Collagen immobilized PVA hydrogel-hydroxyapatite composites prepared by kneading methods as a material for peripheral cuff of artificial cornea

    International Nuclear Information System (INIS)

    Kobayashi, Hisatoshi; Kato, Masabumi; Taguchi, Tetsushi; Ikoma, Toshiyuki; Miyashita, Hideyuki; Shimmura, Shigeto; Tsubota, Kazuo; Tanaka, Junzo

    2004-01-01

    In order to achieve firm fixation of the artificial cornea to host tissues, composites of collagen-immobilized poly(vinyl alcohol) hydrogel with hydroxyapatite (PVA-COL-HAp) were synthesized by a hydroxyapatite particle kneading method. The preparation method, characterization, and the results of corneal cell adhesion and proliferation on the composite material were studied. PVA-COL-HAp composites were successfully synthesized. A micro-porous structure could be introduced into the PVA-COL-HAp by hydrochloric acid treatment, and the porosity could be controlled by the pH of the hydrochloric acid solution, the treatment time, and the crystallinity of the HAp particles. Chick embryonic keratocyte-like cells attached well and proliferated on the PVA-COL-HAp composites. This material showed potential for keratoprosthesis application. Further study, such as a long-term animal study, is now required

  17. Proposal of a New Method for Neutron Dosimetry Based on Spectral Information Obtained by Application of Artificial Neural Networks

    International Nuclear Information System (INIS)

    Fehrenbacher, G.; Schuetz, R.; Hahn, K.; Sprunck, M.; Cordes, E.; Biersack, J.P.; Wahl, W.

    1999-01-01

    A new method for the monitoring of neutron radiation is proposed. It is based on the determination of spectral information on the neutron field in order to derive dose quantities such as the ambient dose equivalent, the dose equivalent, or other dose quantities which depend on the neutron energy. The method uses a multi-element system consisting of converter-type silicon detectors. The unfolding procedure is based on an artificial neural network (ANN). The response function of each element is determined by a computational model considering the neutron interaction with the dosemeter layers and the subsequent transport of the produced ions. An example is given for a multi-element system. The ANN is trained on a given set of neutron spectra and then applied to count responses obtained in neutron fields. Four examples of spectra unfolded using the ANN are presented. (author)

  18. Comparative study of landslides susceptibility mapping methods: Multi-Criteria Decision Making (MCDM) and Artificial Neural Network (ANN)

    Science.gov (United States)

    Salleh, S. A.; Rahman, A. S. A. Abd; Othman, A. N.; Mohd, W. M. N. Wan

    2018-02-01

    As different approaches produce different results, it is crucial to determine which methods are accurate in order to analyze such events. The aim of this research is to compare the Rank Reciprocal (MCDM) and Artificial Neural Network (ANN) analysis techniques in determining zones susceptible to landslide hazard. The study is based on data obtained from various sources, such as the local authority, Dewan Bandaraya Kuala Lumpur (DBKL), Jabatan Kerja Raya (JKR) and other agencies. The data were analysed and processed using ArcGIS. The results were compared by quantifying the risk ranking and area differential, and were also compared with the zonation map classified by DBKL. The results suggest that the ANN method gives better accuracy than MCDM, with an 18.18% higher accuracy assessment than the MCDM approach. This indicates that ANN provides more reliable results, probably due to its ability to learn from the environment, thus producing realistic and accurate results.

  19. Feature Selection Method Based on Artificial Bee Colony Algorithm and Support Vector Machines for Medical Datasets Classification

    Directory of Open Access Journals (Sweden)

    Mustafa Serter Uzer

    2013-01-01

    This paper offers a hybrid approach that uses the artificial bee colony (ABC) algorithm for feature selection and support vector machines (SVM) for classification. The purpose of this paper is to test the effect of eliminating the unimportant and obsolete features of datasets on the success of classification using the SVM classifier. The approach is developed for the diagnosis of liver diseases and diabetes, which are commonly observed conditions that reduce the quality of life. For the diagnosis of these diseases, the hepatitis, liver disorders and diabetes datasets from the UCI database were used, and the proposed system reached classification accuracies of 94.92%, 74.81%, and 79.29%, respectively. For these datasets, the classification accuracies were obtained with the help of the 10-fold cross-validation method. The results show that the performance of the method is highly successful compared to other results attained and seems very promising for pattern recognition applications.

  1. Cox-nnet: An artificial neural network method for prognosis prediction of high-throughput omics data.

    Science.gov (United States)

    Ching, Travers; Zhu, Xun; Garmire, Lana X

    2018-04-01

    Artificial neural networks (ANNs) are computing architectures with many interconnections of simple neural-inspired computing elements, and have been applied to biomedical fields such as imaging analysis and diagnosis. We have developed a new ANN framework called Cox-nnet to predict patient prognosis from high-throughput transcriptomics data. In 10 TCGA RNA-Seq data sets, Cox-nnet achieves the same or better predictive accuracy compared to other methods, including Cox proportional hazards regression (with LASSO, ridge, and minimax concave penalties), Random Forests Survival and CoxBoost. Cox-nnet also reveals richer biological information, at both the pathway and gene levels. The outputs from the hidden layer nodes provide an alternative approach for survival-sensitive dimension reduction. In summary, we have developed a new method for accurate and efficient prognosis prediction on high-throughput data, with functional biological insights. The source code is freely available at https://github.com/lanagarmire/cox-nnet.
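
    At the heart of such a model is the negative Cox partial log-likelihood evaluated on the network's risk scores; a plain-NumPy sketch (with Breslow-style handling of ties) is given below. This is an illustration of the loss, not code from the Cox-nnet package linked above.

        # Negative Cox partial log-likelihood over risk scores theta.
        import numpy as np

        def neg_cox_partial_loglik(theta, time, event):
            """theta: risk scores; time: follow-up times; event: 1 = event observed."""
            order = np.argsort(-time)                    # sort by decreasing time
            theta, event = theta[order], event[order]
            log_risk = np.log(np.cumsum(np.exp(theta)))  # log-sum over risk set {j: t_j >= t_i}
            return -np.sum((theta - log_risk)[event == 1])

        theta = np.array([0.2, -1.0, 0.5, 0.0])          # toy risk scores
        time = np.array([5.0, 8.0, 3.0, 6.0])
        event = np.array([1, 0, 1, 1])
        print(neg_cox_partial_loglik(theta, time, event))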

  2. Modeling of the Cutting Forces in Turning Process Using Various Methods of Cooling and Lubricating: An Artificial Intelligence Approach

    Directory of Open Access Journals (Sweden)

    Djordje Cica

    2013-01-01

    Cutting forces are one of the inherent phenomena and a very significant indicator of the metal cutting process. The work presented in this paper investigates the prediction of these parameters in turning using soft computing techniques. During the experimental research, focus was placed on the application of various methods of cooling and lubricating the cutting zone: the conventional method of cooling and lubricating, high pressure jet assisted machining, and the minimal quantity lubrication technique. The data obtained by experiment are used to create two different models, namely an artificial neural network and an adaptive network-based fuzzy inference system, for the prediction of cutting forces. Furthermore, both models are compared with the experimental data and the results are presented.

  3. A comparative study of laser induced breakdown spectroscopy analysis for element concentrations in aluminum alloy using artificial neural networks and calibration methods

    International Nuclear Information System (INIS)

    Inakollu, Prasanthi; Philip, Thomas; Rai, Awadhesh K.; Yueh Fangyu; Singh, Jagdish P.

    2009-01-01

    A comparative study of analysis methods (the traditional calibration method and an artificial neural network (ANN) prediction method) for laser induced breakdown spectroscopy (LIBS) data of different Al alloy samples was performed. In the calibration method, the intensities of the analyte lines obtained from different samples are plotted against their concentrations to form calibration curves for the different elements, from which the concentrations of unknown elements are deduced by comparing their LIBS signals with the calibration curves. Using ANN, an artificial neural network model is trained with a set of input data from samples of known composition. The trained neural network is then used to predict the elemental concentrations from the test spectra. The present results reveal that artificial neural networks are capable of predicting values better than the traditional method in most cases

  4. Artificial intelligent methods for thermodynamic evaluation of ammonia-water refrigeration systems

    International Nuclear Information System (INIS)

    Sencan, Arzu

    2006-01-01

    In this paper, an evaluation using Linear Regression and M5'Rules models within a data mining process, and an Artificial Neural Network (ANN) model, was carried out for the thermodynamic analysis of ammonia-water absorption refrigeration systems (AWRS). A new formulation based on the ANN model is presented for the analysis of the AWRS because the optimal result was obtained using the ANN model. Thermodynamic analysis of the AWRS is very complex because of the analytic functions used for calculating the properties of fluid couples and the simulation programs involved; therefore, it is extremely difficult to perform an analysis of this system. The COP and the circulation ratio f are estimated depending on the temperatures of the system components and the concentration values. Using the weights obtained from the trained network, a new formulation is presented for the calculation of the COP and f, as the use of ANNs in simulation is proliferating rapidly. The R² values obtained when unknown data were presented to the networks were 0.9996 and 0.9873 for the circulation ratio and COP, respectively, which is very satisfactory. The use of this new formulation, which can be employed with any programming language or spreadsheet program for the estimation of the circulation ratio and COP of the AWRS, as described in this paper, may make the use of dedicated ANN software unnecessary

  5. Computer vision-based method for classification of wheat grains using artificial neural network.

    Science.gov (United States)

    Sabanci, Kadir; Kayabasi, Ahmet; Toktas, Abdurrahim

    2017-06-01

    A simplified computer vision-based application using an artificial neural network (ANN), specifically a multilayer perceptron (MLP), for accurately classifying wheat grains into bread or durum wheat is presented. Images of 100 bread and 100 durum wheat grains were taken with a high-resolution camera and subjected to pre-processing. The main visual features, comprising four dimensions, three colors and five textures, are acquired using image-processing techniques (IPTs). A total of 21 visual features are derived from the 12 main features to diversify the input population for training and testing the ANN model. The data sets of visual features are used as input parameters of the ANN model. The ANN is modelled with four different input data subsets to classify the wheat grains into bread or durum. The ANN model is trained with 180 grains and its accuracy tested with 20 grains from the total of 200 wheat grains. The seven input parameters that are most effective on the classification results are determined using the correlation-based CfsSubsetEval algorithm to simplify the ANN model. The results of the ANN models are compared in terms of accuracy rate. The best result is achieved with a mean absolute error (MAE) of 9.8 × 10⁻⁶ by the simplified ANN model. This shows that the proposed classifier based on computer vision can be successfully exploited to automatically classify a variety of grains. © 2016 Society of Chemical Industry.

  6. Development of methods for remediation of artificial polluted soils and improvement of soils for ecologically clean agricultural production systems

    International Nuclear Information System (INIS)

    Bogachev, V.; Adrianova, G.; Zaitzev, V.; Kalinin, V.; Kovalenko, E.; Makeev, A.; Malikova, L.; Popov, Yu.; Savenkov, A.; Shnyakina, V.

    1996-01-01

    The purpose of the research is the development of methods for the remediation of artificially polluted soils and the improvement of polluted lands for ecologically clean agricultural production. The following tasks will be implemented in this project to achieve viable practical solutions: to determine the priority pollutants, their ecological pathways, and sources of origin; to form a supervised environmental monitoring data bank throughout the various geosystem conditions; to evaluate the degree of biogeosystem pollution and its influence on the health of the local human populations; to establish agricultural plant tolerance levels to the priority pollutants; to calculate the standard concentrations of the priority pollutants for the main agricultural plant groups; to develop a soil remediation methodology incorporating the structural and functional geosystem features; to establish a territory zone division methodology in consideration of the degree of component pollution, plant tolerance to pollutants, plant production conditions, and human health; and to provide scientific grounding of the soil remediation proposals and agricultural plant material introductions with respect to soil pollution levels and relative plant tolerances to pollutants. Technological means, methods, and approaches: final proposed solutions will be based upon geosystem and ecosystem approaches and methodologies. The complex ecological valuation methods for the polluted territories will be used in this investigation. Also, laboratory culture in vitro, application work, and multi-factor field experiments will be conducted. The results will be statistically analyzed using appropriate methods. Expected results: complex biogeochemical artificial province assessment according to primary pollutant concentrations; development of agricultural plant tolerance levels relative to the priority pollutants; assessment of newly introduced plant materials that may possess variable levels of pollution tolerance. Remediation

  7. A relaxation-projection method for compressible flows. Part I: The numerical equation of state for the Euler equations

    International Nuclear Information System (INIS)

    Saurel, Richard; Franquet, Erwin; Daniel, Eric; Le Metayer, Olivier

    2007-01-01

    A new projection method is developed for the Euler equations to determine the thermodynamic state in computational cells. It consists in the resolution of a mechanical relaxation problem between the various sub-volumes present in a computational cell. These sub-volumes correspond to the ones traversed by the various waves that produce states with different pressures, velocities, densities and temperatures. Contrary to Godunov-type schemes, the relaxed state corresponds to mechanical equilibrium only and remains out of thermal equilibrium. The pressure computation with this relaxation process replaces the use of the conventional equation of state (EOS). A simplified relaxation method is also derived and provides a specific EOS (named the Numerical EOS). The use of the Numerical EOS cures the spurious pressure oscillations that appear at contact discontinuities for fluids governed by real gas EOS. It is then extended to the computation of interface problems separating fluids with different EOS (liquid-gas interfaces, for example) with the Euler equations. The resulting method is very robust, accurate, oscillation free and conservative. For the sake of simplicity and efficiency the method is developed in a Lagrange-projection context and is validated against exact solutions. In a companion paper [F. Petitpas, E. Franquet, R. Saurel, A relaxation-projection method for compressible flows. Part II: computation of interfaces and multiphase mixtures with stiff mechanical relaxation. J. Comput. Phys. (submitted for publication)], the method is extended to the numerical approximation of a non-conservative hyperbolic multiphase flow model for interface computation and shock propagation into mixtures

  8. Computerized detection of multiple sclerosis candidate regions based on a level set method using an artificial neural network

    International Nuclear Information System (INIS)

    Kuwazuru, Junpei; Magome, Taiki; Arimura, Hidetaka; Yamashita, Yasuo; Oki, Masafumi; Toyofuku, Fukai; Kakeda, Shingo; Yamamoto, Daisuke

    2010-01-01

    Yamamoto et al. developed a system for the computer-aided detection of multiple sclerosis (MS) candidate regions. In the level set method employed in their approach, they used a constant threshold value for the edge indicator function related to the speed function of the level set method. However, it would be more appropriate to adjust the threshold value to each MS candidate region, because the edge magnitudes differ between MS candidates. The purpose of this study was to develop a computerized detection of MS candidate regions in MR images based on a level set method using an artificial neural network (ANN). To adjust the threshold value of the edge indicator function in the level set method to each true positive (TP) and false positive (FP) region, we constructed an ANN. The ANN could provide a suitable threshold value for each candidate region in the proposed level set method, so that TP regions could be segmented and FP regions removed. Our proposed method detected MS regions at a sensitivity of 82.1% with 0.204 FPs per slice, and the similarity index of the MS candidate regions was 0.717 on average. (author)

  9. A three-step reconstruction method for fluorescence molecular tomography based on compressive sensing

    DEFF Research Database (Denmark)

    Zhu, Yansong; Jha, Abhinav K.; Dreyer, Jakob K.

    2017-01-01

    Fluorescence molecular tomography (FMT) is a promising tool for real time in vivo quantification of neurotransmission (NT) as we pursue in our BRAIN initiative effort. However, the acquired image data are noisy and the reconstruction problem is ill-posed. Further, while spatial sparsity of the NT...... matrix coherence. The resultant image data are input to a homotopy-based reconstruction strategy that exploits sparsity via ℓ1 regularization. The reconstructed image is then input to a maximum-likelihood expectation maximization (MLEM) algorithm that retains the sparseness of the input estimate...... and improves upon the quantitation by accurate Poisson noise modeling. The proposed reconstruction method was evaluated in a three-dimensional simulated setup with fluorescent sources in a cuboidal scattering medium with optical properties simulating human brain cortex (reduced scattering coefficient: 9.2 cm-1...

  10. Leak Detection Modeling and Simulation for Oil Pipeline with Artificial Intelligence Method

    Directory of Open Access Journals (Sweden)

    Pudjo Sukarno

    2007-05-01

    Full Text Available Leak detection is always interesting research topic, where leak location and leak rate are two pipeline leaking parameters that should be determined accurately to overcome pipe leaking problems. In this research those two parameters are investigated by developing transmission pipeline model and the leak detection model which is developed using Artificial Neural Network. The mathematical approach needs actual leak data to train the leak detection model, however such data could not be obtained from oil fields. Therefore, for training purposes hypothetical data are developed using the transmission pipeline model, by applying various physical configuration of pipeline and applying oil properties correlations to estimate the value of oil density and viscosity. The various leak locations and leak rates are also represented in this model. The prediction of those two leak parameters will be completed until the total error is less than certain value of tolerance, or until iterations level is reached. To recognize the pattern, forward procedure is conducted. The application of this approach produces conclusion that for certain pipeline network configuration, the higher number of iterations will produce accurate result. The number of iterations depend on the leakage rate, the smaller leakage rate, the higher number of iterations are required. The accuracy of this approach is clearly determined by the quality of training data. Therefore, in the preparation of training data the results of pressure drop calculations should be validated by the real measurement of pressure drop along the pipeline. For the accuracy purposes, there are possibility to change the pressure drop and fluid properties correlations, to get the better results. The results of this research are expected to give real contribution for giving an early detection of oil-spill in oil fields.

  11. Comparative evaluation of the powder and compression properties of various grades and brands of microcrystalline cellulose by multivariate methods.

    Science.gov (United States)

    Haware, Rahul V; Bauer-Brandl, Annette; Tho, Ingunn

    2010-01-01

    The present work challenges a newly developed approach to tablet formulation development by using chemically identical materials (grades and brands of microcrystalline cellulose, MCC). Tablet properties with respect to process and formulation parameters (e.g. compression speed, added lubricant and Emcompress fractions) were evaluated by 2³ factorial designs. Tablets of constant true volume were prepared on a compaction simulator at constant pressure (approx. 100 MPa). The highly repeatable and accurate force-displacement data obtained were evaluated by the simple 'in-die' Heckel method and by work descriptors. Relationships and interactions between formulation, process and tablet parameters were identified and quantified by multivariate analysis techniques: principal component analysis (PCA) and partial least squares (PLS) regression. The method proved able to distinguish between different grades of MCC and even between two different brands of the same grade (Avicel PH 101 and Vivapur 101). One example of an interaction was studied in more detail by a mixed-level design: the interaction effect of lubricant and Emcompress on the elastic recovery of Avicel PH 102 was demonstrated to be complex and non-linear using the development tool under investigation.
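
    For reference, the 'in-die' Heckel analysis mentioned above fits the densification data to the standard Heckel relation (symbols as conventionally defined):

        % Heckel equation: porosity decays exponentially with compaction pressure.
        \[
          \ln\!\left(\frac{1}{1 - D}\right) = K P + A, \qquad P_y = \frac{1}{K},
        \]
        % where D is the relative density of the compact at pressure P, A the
        % intercept, and the reciprocal of the slope K is the mean yield
        % pressure P_y.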

  12. In vitro biomechanical properties of 2 compression fixation methods for midbody proximal sesamoid bone fractures in horses.

    Science.gov (United States)

    Woodie, J B; Ruggles, A J; Litsky, A S

    2000-01-01

To evaluate 2 methods of midbody proximal sesamoid bone repair (fixation by a screw placed in lag fashion and circumferential wire fixation) by comparing yield load and the adjacent soft-tissue strain during monotonic loading. Experimental study. 10 paired equine cadaver forelimbs from race-trained horses. A transverse midbody osteotomy of the medial proximal sesamoid bone (PSB) was created. The osteotomy was repaired with a 4.5-mm cortex bone screw placed in lag fashion or a 1.25-mm circumferential wire. The limbs were instrumented with differential variable reluctance transducers placed in the suspensory apparatus and distal sesamoidean ligaments. The limbs were tested in axial compression in a single cycle until failure. The cortex bone screw repairs had a mean yield load of 2,908.2 N; 1 limb did not fail when tested to 5,000 N. All circumferential wire repairs failed, with a mean yield load of 3,406.3 N. There was no statistically significant difference in mean yield load between the 2 repair methods. The maximum strain generated in the soft tissues attached to the proximal sesamoid bones was not significantly different between repair groups. All repaired limbs were able to withstand loads equal to those reportedly applied to the suspensory apparatus in vivo during walking. Each repair technique should provide adequate yield strength for repair of midbody fractures of the PSB immediately after surgery.

  13. Comparative analysis of methods for measurements of food intake and utilization using the soybean looper, Pseudoplusia includens and artificial media

    International Nuclear Information System (INIS)

    Parra, J.R.P.; Kogan, M.; Illinois Agricultural Experiment Station, Urbana; Illinois Univ., Urbana

    1981-01-01

An analysis of intake and utilization of an artificial medium by larvae of the soybean looper, Pseudoplusia includens Walker (Lepidoptera: Noctuidae), was performed using 4 methods: standard gravimetric, chromic oxide, Calco Oil Red, and ¹⁴C-glucose. Each method was used in conjunction with standard gravimetry. The relative merits of the indirect methods were analyzed in terms of precision and accuracy for ECI and ECD estimation, cost, and overall versatility. Only the gravimetric method combined ca. 80% precision in ECI and ECD estimation with low cost and maximum versatility. Calco Oil Red at 0.1% w/v was detrimental to the larvae. Cr₂O₃ caused reduced intake, but conversion was increased, resulting in normal development and growth of larvae. The radioisotopic method had the advantage of providing a direct means of measuring expired CO₂. The need to operate under a totally enclosed system, however, poses some serious difficulties in the use of radioisotopes. There seems to be little advantage in any of the proposed indirect methods, except if there are unusual difficulties in separating the excreta from the medium. (orig.)

  14. An oscillation free shock-capturing method for compressible van der Waals supercritical fluid flows

    International Nuclear Information System (INIS)

    Pantano, C.; Saurel, R.; Schmitt, T.

    2017-01-01

    Numerical solutions of the Euler equations using real gas equations of state (EOS) often exhibit serious inaccuracies. The focus here is the van der Waals EOS and its variants (often used in supercritical fluid computations). The problems are not related to a lack of convexity of the EOS since the EOS are considered in their domain of convexity at any mesh point and at any time. The difficulties appear as soon as a density discontinuity is present with the rest of the fluid in mechanical equilibrium and typically result in spurious pressure and velocity oscillations. This is reminiscent of well-known pressure oscillations occurring with ideal gas mixtures when a mass fraction discontinuity is present, which can be interpreted as a discontinuity in the EOS parameters. We are concerned with pressure oscillations that appear just for a single fluid each time a density discontinuity is present. As a result, the combination of density in a nonlinear fashion in the EOS with diffusion by the numerical method results in violation of mechanical equilibrium conditions which are not easy to eliminate, even under grid refinement.
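For reference (a standard form, not text quoted from the record), the van der Waals EOS at the center of this study can be written, with specific volume $v = 1/\rho$, specific gas constant $R$, attraction parameter $a$, and covolume $b$, as

\[
p \;=\; \frac{RT}{v-b} \;-\; \frac{a}{v^{2}} .
\]

Pressure thus depends nonlinearly on density; as the record argues, numerical diffusion acting on a density discontinuity perturbs the mechanical equilibrium (uniform $p$ and $u$), which is the source of the spurious pressure and velocity oscillations.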

  15. The Linearized Bregman Method for Frugal Full-waveform Inversion with Compressive Sensing and Sparsity-promoting

    Science.gov (United States)

    Chai, Xintao; Tang, Genyang; Peng, Ronghua; Liu, Shaoyong

    2018-03-01

Full-waveform inversion (FWI) reconstructs the subsurface properties from acquired seismic data via minimization of the misfit between observed and simulated data. However, FWI suffers from considerable computational costs resulting from the numerical solution of the wave equation for each source at each iteration. To reduce the computational burden, constructing supershots by combining several sources (aka source encoding) mitigates the number of simulations at each iteration, but it gives rise to crosstalk artifacts because of interference between the individual sources of the supershot. A modified Gauss-Newton FWI (MGNFWI) approach showed that, as long as the difference between the initial and true models permits a sparse representation, ℓ₁-norm-constrained model updates suppress subsampling-related artifacts. However, the spectral projected gradient ℓ₁ (SPGℓ₁) algorithm employed by MGNFWI is rather complicated, which makes its implementation difficult. To facilitate realistic applications, we adapt a linearized Bregman (LB) method to sparsity-promoting FWI (SPFWI) because of the efficiency and simplicity of LB in the framework of ℓ₁-norm-constrained optimization problems and compressive sensing. Numerical experiments performed with the BP Salt model, the Marmousi model and the BG Compass model verify the following points. The FWI result obtained with LB solving the ℓ₁-norm sparsity-promoting problem for the model update outperforms that generated by solving the ℓ₂-norm problem in terms of crosstalk elimination and high-fidelity results. The simpler LB method performs comparably and even superiorly to the complicated SPGℓ₁ method in terms of computational efficiency and model quality, making the LB method a viable alternative for realistic implementations of SPFWI.
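The linearized Bregman iteration that the record adapts is short enough to sketch in full. The version below solves min ‖x‖₁ subject to Ax = b for a generic sparse-recovery problem, as a stand-in for the FWI model update; the step size and shrinkage threshold are illustrative choices, not the paper's tuned values.

```python
import numpy as np

def soft_shrink(v, mu):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def linearized_bregman(A, b, mu=0.1, tau=None, n_iter=3000):
    """Linearized Bregman iteration for min ||x||_1 s.t. Ax = b."""
    if tau is None:
        tau = 1.0 / np.linalg.norm(A, 2) ** 2   # stable step size
    v = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v += tau * A.T @ (b - A @ x)            # gradient step on the residual
        x = soft_shrink(v, mu)                   # sparsity-promoting shrink
    return x

# Toy compressive-sensing demo: recover a sparse vector from few measurements.
rng = np.random.default_rng(0)
n, m, k = 200, 80, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 3, k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
b = A @ x_true

x_rec = linearized_bregman(A, b)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```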

  16. A simple method for preparing artificial larval diet of the West Indian sweetpotato weevil, Euscepes postfasciatus (Fairmaire) (Coleoptera: Curculionidae)

    International Nuclear Information System (INIS)

    Uesato, T.; Kohama, T.

    2008-01-01

The method for preparing the ordinary artificial larval diet for Euscepes postfasciatus (the old diet) was complicated and time-consuming: some ingredients (casein, saccharose, salt mixture, etc.) were added to a boiled agar solution, while others (vitamin mixture, sweetpotato powder, etc.) were added after the solution had cooled to 55°C. To simplify diet preparation, we combined all ingredients before mixing with water, and then boiled the solution (the new diet). There were no significant differences in survival rate (from egg hatching to adult eclosion) or right elytron length between weevils reared on the old and new diets, but the development period (from egg to adult) of weevils fed the new diet was significantly (1.3 days) longer than that of weevils fed the old diet. Preparation time for the new diet was half that of the old diet. These results suggest that the simplified diet preparation can be introduced into the mass-rearing of E. postfasciatus.

  17. Artificial muscles for a novel simulator in minimally invasive spine surgery.

    Science.gov (United States)

    Hollensteiner, Marianne; Fuerst, David; Schrempf, Andreas

    2014-01-01

Vertebroplasty and kyphoplasty are commonly used minimally invasive methods to treat vertebral compression fractures. Novice surgeons acquire surgical skills in different ways, mainly by "learning by doing" or by training on models, specimens or simulators. Currently a new training modality, an augmented-reality simulator for minimally invasive spine surgeries, is being developed. An important step in realizing this simulator is the accurate design of artificial tissues, especially vertebrae and muscles that reproduce comparable haptic feedback during tool insertion. Two artificial tissues were developed to imitate natural muscle tissue. The axial insertion force was used as the validation parameter, as it captures the mechanical properties of both artificial and natural muscles. Validation was performed by comparing insertion measurement data from fifteen artificial muscle tissues with measurement data from human muscles. Based on the forces arising during needle insertion into human muscles, a suitable material composition for manufacturing artificial muscles was found.

  18. The influence of kind of coating additive on the compressive strength of RCA-based concrete prepared by triple-mixing method

    Science.gov (United States)

    Urban, K.; Sicakova, A.

    2017-10-01

The paper deals with the use of alternative powder additives (fly ash and the fine fraction of recycled concrete) to improve recycled concrete aggregate directly in the concrete mixing process. A specific mixing process (the triple mixing method) is applied, as it is favourable for this goal. Results of compressive strength after 2 and 28 days of hardening are given. Generally, using the powder additives to coat the coarse recycled concrete aggregate in the first stage of triple mixing resulted in a decrease in compressive strength compared with cement. As long as cement is used for coating, there is no substantial difference between samples based on recycled concrete aggregate and those based on natural aggregate. When either fly ash or recycled concrete powder is used, the kind of aggregate causes more significant differences in compressive strength, with the values for samples based on recycled concrete aggregate being worse.

  19. Artificial Intelligence.

    Science.gov (United States)

    Information Technology Quarterly, 1985

    1985-01-01

This issue of "Information Technology Quarterly" is devoted to the theme of "Artificial Intelligence." It contains two major articles: (1) "Artificial Intelligence and Law" (D. Peter O'Neill and George D. Wood); (2) "Artificial Intelligence: A Long and Winding Road" (John J. Simon, Jr.). In addition, it contains two sidebars: (1) "Calculating and…

  20. A stable penalty method for the compressible Navier-Stokes equations: II: One-dimensional domain decomposition schemes

    DEFF Research Database (Denmark)

    Hesthaven, Jan

    1997-01-01

This paper presents asymptotically stable schemes for patching of nonoverlapping subdomains when approximating the compressible Navier-Stokes equations given on conservation form. The scheme is a natural extension of a previously proposed scheme for enforcing open boundary conditions, and as a result the patching of subdomains is local in space. The scheme is studied in detail for Burgers's equation and developed for the compressible Navier-Stokes equations in general curvilinear coordinates. The versatility of the proposed scheme for the compressible Navier-Stokes equations is illustrated...

  1. Application of multicriteria decision making methods to compression ignition engine efficiency and gaseous, particulate, and greenhouse gas emissions.

    Science.gov (United States)

    Surawski, Nicholas C; Miljevic, Branka; Bodisco, Timothy A; Brown, Richard J; Ristovski, Zoran D; Ayoko, Godwin A

    2013-02-19

    Compression ignition (CI) engine design is subject to many constraints, which present a multicriteria optimization problem that the engine researcher must solve. In particular, the modern CI engine must not only be efficient but must also deliver low gaseous, particulate, and life cycle greenhouse gas emissions so that its impact on urban air quality, human health, and global warming is minimized. Consequently, this study undertakes a multicriteria analysis, which seeks to identify alternative fuels, injection technologies, and combustion strategies that could potentially satisfy these CI engine design constraints. Three data sets are analyzed with the Preference Ranking Organization Method for Enrichment Evaluations and Geometrical Analysis for Interactive Aid (PROMETHEE-GAIA) algorithm to explore the impact of (1) an ethanol fumigation system, (2) alternative fuels (20% biodiesel and synthetic diesel) and alternative injection technologies (mechanical direct injection and common rail injection), and (3) various biodiesel fuels made from 3 feedstocks (i.e., soy, tallow, and canola) tested at several blend percentages (20-100%) on the resulting emissions and efficiency profile of the various test engines. The results show that moderate ethanol substitutions (~20% by energy) at moderate load, high percentage soy blends (60-100%), and alternative fuels (biodiesel and synthetic diesel) provide an efficiency and emissions profile that yields the most "preferred" solutions to this multicriteria engine design problem. Further research is, however, required to reduce reactive oxygen species (ROS) emissions with alternative fuels and to deliver technologies that do not significantly reduce the median diameter of particle emissions.
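To make the ranking machinery concrete, here is a minimal PROMETHEE II sketch (net outranking flows with a linear preference function). The alternatives, criteria, weights, and data are invented placeholders; the study's actual criteria set and preference functions are not reproduced here.

```python
import numpy as np

# Hypothetical decision matrix: rows = engine configurations,
# columns = criteria (efficiency, NOx, PM, CO2-eq). Efficiency is
# to be maximized; the emissions criteria are to be minimized.
data = np.array([
    [0.38, 8.0, 0.05, 950.0],   # baseline diesel
    [0.37, 6.5, 0.03, 900.0],   # 20% biodiesel blend
    [0.39, 7.0, 0.04, 870.0],   # ethanol fumigation
])
maximize = np.array([True, False, False, False])
weights = np.array([0.4, 0.2, 0.2, 0.2])
p = 0.2 * np.ptp(data, axis=0)  # linear preference thresholds (illustrative)

n = len(data)
pi = np.zeros((n, n))           # aggregated preference of a over b
for a in range(n):
    for b in range(n):
        if a == b:
            continue
        d = data[a] - data[b]
        d[~maximize] *= -1.0                    # orient all criteria upward
        pref = np.clip(d / p, 0.0, 1.0)         # linear preference function
        pi[a, b] = weights @ pref

phi_plus = pi.sum(axis=1) / (n - 1)             # positive outranking flow
phi_minus = pi.sum(axis=0) / (n - 1)            # negative outranking flow
phi = phi_plus - phi_minus                      # PROMETHEE II net flow
print("ranking (best first):", np.argsort(-phi))
```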

  2. Defining spinal instability and methods of classification to optimise care for patients with malignant spinal cord compression: A systematic review

    International Nuclear Information System (INIS)

    Sheehan, C.

    2016-01-01

The incidence of Malignant Spinal Cord Compression (MSCC) is thought to be increasing in the UK due to an aging population and improving cancer survivorship. Such a diagnosis requires emergency treatment. In 2008 the National Institute for Health and Clinical Excellence produced guidelines on the management of MSCC, which include a recommendation to assess spinal instability. However, a lack of guidelines for assessing spinal instability in oncology patients is widely acknowledged, which can result in variations in the management of care for such patients. A spinal instability assessment can influence optimum patient care (bed rest or encouraged mobilisation) and inform the best definitive treatment modality (surgery or radiotherapy) for an individual patient. The aim of this systematic review is to attempt to identify a consensus definition of spinal instability and methods by which it can be classified. Highlights: • A lack of guidance on metastatic spinal instability results in variations of care. • Definitions and assessments for spinal instability are explored in this review. • A Spinal Instability Neoplastic Scoring (SINS) system has been identified. • SINS could potentially be adopted to optimise and standardise patient care.

  3. A quantitative comparison of numerical methods for the compressible Euler equations: fifth-order WENO and piecewise-linear Godunov

    International Nuclear Information System (INIS)

    Greenough, J.A.; Rider, W.J.

    2004-01-01

A numerical study is undertaken comparing a fifth-order version of the weighted essentially non-oscillatory (WENO5) method to a modern piecewise-linear, second-order version of Godunov's method (PLMDE) for the compressible Euler equations. A series of one-dimensional test problems are examined, beginning with classical linear problems and ending with complex shock interactions. The problems considered are: (1) linear advection of a Gaussian pulse in density, (2) Sod's shock tube problem, (3) the 'peak' shock tube problem, (4) a version of the Shu and Osher shock entropy wave interaction and (5) the Woodward and Colella interacting shock wave problem. For each problem and method, run times, density error norms and convergence rates are reported, as produced from a common code test-bed. The linear problem exhibits the advertised convergence rate for both methods as well as the expected large disparity in overall error levels; WENO5 has the smaller errors and an enormous advantage in overall efficiency (in accuracy per unit CPU time). For the nonlinear problems with discontinuities, however, we generally see first-order self-convergence of error, as compared either to an exact solution or, when an analytic solution is not available, to a converged solution generated on an extremely fine grid. The overall comparison of error levels shows some variation from problem to problem. For Sod's shock tube, PLMDE has nearly half the error, while on the peak problem the errors are nearly the same. For the interacting blast wave problem the two methods again produce a similar level of error, with a slight edge for PLMDE. On the other hand, for the Shu-Osher problem, the errors are similar on the coarser grids but favor WENO5 by a factor of nearly 1.5 on the finer grids used. In all cases, holding mesh resolution constant, PLMDE is less costly in terms of CPU time by approximately a factor of 6. If the CPU cost is taken as fixed, that is, run times are
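Since the record centers on WENO5 reconstruction, a compact sketch of the classic fifth-order WENO reconstruction (Jiang-Shu smoothness indicators, left-biased stencil) is given below; it is a generic textbook version, not the authors' test-bed code.

```python
import numpy as np

def weno5_reconstruct(v):
    """Left-biased WENO5 reconstruction of v at interfaces i+1/2.

    v: 1-D array of cell averages. Returns an array of length len(v)-4,
    the reconstructed values at the right edge of each interior cell.
    Classic Jiang-Shu formulation, with epsilon to avoid division by zero.
    """
    eps = 1e-6
    vm2, vm1, v0, vp1, vp2 = v[:-4], v[1:-3], v[2:-2], v[3:-1], v[4:]

    # Candidate stencil reconstructions (third-order each).
    q0 = (2*vm2 - 7*vm1 + 11*v0) / 6.0
    q1 = (-vm1 + 5*v0 + 2*vp1) / 6.0
    q2 = (2*v0 + 5*vp1 - vp2) / 6.0

    # Jiang-Shu smoothness indicators.
    b0 = 13/12*(vm2 - 2*vm1 + v0)**2 + 0.25*(vm2 - 4*vm1 + 3*v0)**2
    b1 = 13/12*(vm1 - 2*v0 + vp1)**2 + 0.25*(vm1 - vp1)**2
    b2 = 13/12*(v0 - 2*vp1 + vp2)**2 + 0.25*(3*v0 - 4*vp1 + vp2)**2

    # Nonlinear weights from the linear (optimal) weights 1/10, 6/10, 3/10.
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    s = a0 + a1 + a2
    return (a0*q0 + a1*q1 + a2*q2) / s

# Smooth test: reconstruction of sin(x) converges at fifth order.
x = np.linspace(0, 2*np.pi, 101)[:-1]
print(weno5_reconstruct(np.sin(x))[:3])
```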

  4. A quantitative comparison of numerical methods for the compressible Euler equations: fifth-order WENO and piecewise-linear Godunov

    Science.gov (United States)

    Greenough, J. A.; Rider, W. J.

    2004-05-01

A numerical study is undertaken comparing a fifth-order version of the weighted essentially non-oscillatory (WENO5) method to a modern piecewise-linear, second-order version of Godunov's method (PLMDE) for the compressible Euler equations. A series of one-dimensional test problems are examined, beginning with classical linear problems and ending with complex shock interactions. The problems considered are: (1) linear advection of a Gaussian pulse in density, (2) Sod's shock tube problem, (3) the "peak" shock tube problem, (4) a version of the Shu and Osher shock entropy wave interaction and (5) the Woodward and Colella interacting shock wave problem. For each problem and method, run times, density error norms and convergence rates are reported, as produced from a common code test-bed. The linear problem exhibits the advertised convergence rate for both methods as well as the expected large disparity in overall error levels; WENO5 has the smaller errors and an enormous advantage in overall efficiency (in accuracy per unit CPU time). For the nonlinear problems with discontinuities, however, we generally see first-order self-convergence of error, as compared either to an exact solution or, when an analytic solution is not available, to a converged solution generated on an extremely fine grid. The overall comparison of error levels shows some variation from problem to problem. For Sod's shock tube, PLMDE has nearly half the error, while on the peak problem the errors are nearly the same. For the interacting blast wave problem the two methods again produce a similar level of error, with a slight edge for PLMDE. On the other hand, for the Shu-Osher problem, the errors are similar on the coarser grids but favor WENO5 by a factor of nearly 1.5 on the finer grids used. In all cases, holding mesh resolution constant, PLMDE is less costly in terms of CPU time by approximately a factor of 6. If the CPU cost is taken as fixed, that is, run times are

  5. Larvae output and the influence of the human factor on the reliability of meat inspection by the artificial digestion method

    Directory of Open Access Journals (Sweden)

    Đorđević Vesna

    2013-01-01

Full Text Available On the basis of analyses of the factors that allowed infected meat to reach the food chain, we found that the infection occurred after consumption of meat that had been inspected by the artificial digestion method applied to collective samples using a magnetic stirrer (MM). This work presents assay results showing how modifications of the method at the level of final sedimentation influence the reliability of detection of Trichinella larvae in infected meat samples. It is shown that the use of inadequate laboratory containers for collecting larvae in the final sedimentation, and changes in the volume of digestive fluid drawn off during the staining of preparations, can significantly influence inspection results. Larva detection errors ranged from 4 to 80% in the experimental groups, compared with the control group of samples inspected by the MM method carried out fully in accordance with European Commission Regulation (EC) No 2075/2005, for which no error in the larva count per sample was found. We consider that the results of this work will contribute to improved control of the method's performance, and especially of the critical points, during the inspection of meat samples for Trichinella larvae in Serbia.

  6. Comparison of pixel-based and artificial neural network classification methods for detecting forest cover changes in Malaysia

    International Nuclear Information System (INIS)

    Deilmai, B R; Rasib, A W; Ariffin, A; Kanniah, K D

    2014-01-01

According to the FAO (Food and Agriculture Organization), Malaysia lost 8.6% of its forest cover between 1990 and 2005. Remote sensing plays an important role in forest cover change detection. Many change detection methods have been developed, most of them semi-automated; these methods are time consuming and difficult to apply. One new and robust method for change detection is the artificial neural network (ANN). In this study, an ANN classification scheme is used to detect forest cover changes in the Johor state of Malaysia. Landsat Thematic Mapper images covering a period of 9 years (2000 and 2009) are used. Results obtained with the ANN technique were compared with maximum likelihood classification (MLC) to investigate whether ANN can perform better in the tropical environment. The overall accuracies of the ANN and MLC techniques are 75% and 68% (2000) and 80% and 75% (2009), respectively. Using the ANN method, it was found that the forest area in Johor decreased by as much as 1,298 km² between 2000 and 2009. The results also showed the potential and advantages of neural networks in classification and change detection analysis.

  7. Application of Classical and Lie Transform Methods to Zonal Perturbation in the Artificial Satellite

    Science.gov (United States)

    San-Juan, J. F.; San-Martin, M.; Perez, I.; Lopez-Ochoa, L. M.

    2013-08-01

A scalable second-order analytical orbit propagator program is being developed. This analytical orbit propagator combines modern perturbation methods, based on the canonical frame of the Lie transform, with classical perturbation methods, as a function of orbit type or of the requirements of a space mission, such as catalog maintenance operations, long-period evolution, and so on. As a first step in the validation of part of our orbit propagator, in this work we consider only the perturbation produced by the zonal harmonic coefficients of the Earth's gravity potential, so that it is possible to analyze the behaviour of the perturbation methods involved in the corresponding analytical theories.
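As background (a standard astrodynamics formula, not text from the record), the zonal-harmonics-only gravity potential that such theories perturb about is

\[
U(r,\varphi) \;=\; \frac{\mu}{r}\left[\,1 - \sum_{n\ge 2} J_{n}\left(\frac{R_{\oplus}}{r}\right)^{\!n} P_{n}(\sin\varphi)\right],
\]

where $\mu$ is the gravitational parameter, $R_\oplus$ the Earth's equatorial radius, $\varphi$ the latitude, $P_n$ the Legendre polynomials, and $J_n$ the zonal harmonic coefficients; the $J_2$ term dominates and drives the secular motion of the node and perigee that the analytical theories must reproduce.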

  8. The optimal design support system for shell components of vehicles using the methods of artificial intelligence

    Science.gov (United States)

    Szczepanik, M.; Poteralski, A.

    2016-11-01

The paper is devoted to the application of evolutionary methods and the finite element method to the optimization of shell structures. Optimization of the thickness of a car wheel (a shell) by minimization of a stress functional is considered. The car wheel geometry is built from three surfaces of revolution: the central surface with the holes for the fastening bolts, the surface of the ring of the wheel, and the surface connecting the two mentioned earlier; the last is subjected to the optimization process. The structures are discretized by triangular finite elements and subject to volume constraints. Using the proposed method, the material properties or thicknesses of the finite elements are changed evolutionarily and some of them are eliminated. As a result, the optimal shape, topology and material or thickness of the structure are obtained. The numerical examples demonstrate that the method based on evolutionary computation is an effective technique for solving problems in computer-aided optimal design.

  9. Integration of artificial intelligence methods and life cycle assessment to predict energy output and environmental impacts of paddy production.

    Science.gov (United States)

    Nabavi-Pelesaraei, Ashkan; Rafiee, Shahin; Mohtasebi, Seyed Saeid; Hosseinzadeh-Bandbafha, Homa; Chau, Kwok-Wing

    2018-08-01

Prediction of agricultural energy output and environmental impacts plays an important role in energy management and conservation of the environment, as it can help us to evaluate agricultural energy efficiency, conduct crop production system commissioning, and detect and diagnose faults in a crop production system. Agricultural energy output and environmental impacts can be readily predicted by artificial intelligence (AI), owing to its ease of use and adaptability in seeking optimal solutions rapidly, and to the use of historical data to predict future agricultural energy use patterns under constraints. This paper conducts energy output and environmental impact prediction of paddy production in Guilan province, Iran, based on two AI methods: artificial neural networks (ANNs) and an adaptive neuro-fuzzy inference system (ANFIS). The amounts of energy input and output are 51,585.61 MJ kg⁻¹ and 66,112.94 MJ kg⁻¹, respectively, in paddy production. Life Cycle Assessment (LCA) is used to evaluate the environmental impacts of paddy production. Results show that, in paddy production, on-farm emission is a hotspot in the global warming, acidification and eutrophication impact categories. An ANN model with a 12-6-8-1 structure is selected as the best one for predicting energy output. The correlation coefficient (R) varies from 0.524 to 0.999 in training for energy input and environmental impacts in the ANN models. The ANFIS model is developed based on a hybrid learning algorithm, with R for predicting output energy being 0.860 and, for environmental impacts, varying from 0.944 to 0.997. Results indicate that the multi-level ANFIS is a useful tool for managers for large-scale planning in forecasting energy output and environmental indices of agricultural production systems, owing to its higher speed of computation compared with the ANN model, despite the ANN's higher accuracy. Copyright © 2018 Elsevier B.V. All rights reserved.
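As an illustration of the ANN side of this comparison, the hedged sketch below trains a small multilayer perceptron on hypothetical input-energy features and reports the correlation coefficient R used as the figure of merit in the record. The feature set and data are invented, and the 12-6-8-1 architecture mentioned in the abstract is echoed by the 12-input, two-hidden-layer network.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Hypothetical inputs: 12 per-farm energy inputs (fuel, fertilizer, seed,
# machinery, labour, ...); target: total energy output.
X = rng.uniform(0.0, 1.0, size=(300, 12))
y = 3.0 * X[:, 0] + 2.0 * X[:, 3] + 1.5 * X[:, 7] + rng.normal(0, 0.05, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_tr)

# Two hidden layers (6 and 8 neurons), echoing the 12-6-8-1 structure.
ann = MLPRegressor(hidden_layer_sizes=(6, 8), max_iter=5000, random_state=0)
ann.fit(scaler.transform(X_tr), y_tr)

# Correlation coefficient R between predicted and observed output energy.
y_pred = ann.predict(scaler.transform(X_te))
R = np.corrcoef(y_te, y_pred)[0, 1]
print(f"R = {R:.3f}")
```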

  10. Images compression in nuclear medicine

    International Nuclear Information System (INIS)

    Rebelo, M.S.; Furuie, S.S.; Moura, L.

    1992-01-01

The performance of two methods of image compression in nuclear medicine was evaluated: the LZW method, which is exact (lossless), and the cosine-transform method, which is approximate (lossy). The results showed that the lossy method produced images of acceptable quality for visual analysis, with compression rates considerably higher than those of the exact method. (C.G.C.)
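A minimal sketch of the lossy, transform-based half of that comparison: take the 2-D discrete cosine transform of an image, keep only the largest coefficients, and invert. This is a generic DCT illustration, not the codec evaluated in the record.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_compress(img, keep=0.05):
    """Keep only the largest `keep` fraction of 2-D DCT coefficients."""
    coeffs = dctn(img, norm="ortho")
    thresh = np.quantile(np.abs(coeffs), 1.0 - keep)
    sparse = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)
    return idctn(sparse, norm="ortho"), sparse

# Hypothetical 64x64 "scintigram": a smooth blob plus a little noise.
y, x = np.mgrid[0:64, 0:64]
img = np.exp(-((x - 32)**2 + (y - 28)**2) / 120.0)
img += 0.01 * np.random.default_rng(0).normal(size=img.shape)

recon, sparse = dct_compress(img, keep=0.05)
ratio = img.size / np.count_nonzero(sparse)   # nominal coefficient ratio
rmse = np.sqrt(np.mean((img - recon)**2))
print(f"nominal compression ratio ~{ratio:.0f}:1, RMSE {rmse:.4f}")
```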

  11. A Rapid Identification Method for Calamine Using Near-Infrared Spectroscopy Based on Multi-Reference Correlation Coefficient Method and Back Propagation Artificial Neural Network.

    Science.gov (United States)

    Sun, Yangbo; Chen, Long; Huang, Bisheng; Chen, Keli

    2017-07-01

As a mineral, the traditional Chinese medicine calamine has a shape similar to many other minerals. Investigations of commercially available calamine samples have shown that many fake and inferior calamine goods are sold on the market. The conventional identification method for calamine is complicated; therefore, given the large scale of calamine sampling, a rapid identification method is needed. To establish a qualitative model using near-infrared (NIR) spectroscopy for rapid identification of various calamine samples, large quantities of calamine samples, including crude products, counterfeits and processed products, were collected and correctly identified using physicochemical and powder X-ray diffraction methods. The NIR spectroscopy method was used to analyze these samples by combining the multi-reference correlation coefficient (MRCC) method and the error back-propagation artificial neural network algorithm (BP-ANN), so as to realize the qualitative identification of calamine samples. The accuracy rate of the model based on the NIR and MRCC methods was 85%; in addition, the model, which takes multiple factors into consideration, can be used to identify crude calamine products, counterfeits and processed products. Furthermore, by inputting the correlation coefficients against multiple references as the spectral feature data of the samples into the BP-ANN, a BP-ANN model for qualitative identification was established, whose accuracy rate increased to 95%. The MRCC method can be used as an NIR-based method in the process of BP-ANN modeling.
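The pipeline described (correlation coefficients against several reference spectra as features, then a back-propagation network as classifier) can be sketched as follows; the spectra are synthetic and the class labels hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)
wavelengths = np.linspace(1000, 2500, 300)        # nm, NIR range

# Hypothetical reference spectra for 3 classes (crude, processed, counterfeit).
refs = np.stack([np.exp(-((wavelengths - c) / w) ** 2)
                 for c, w in [(1400, 150), (1900, 200), (2200, 120)]])

def mrcc_features(spectrum, references):
    """Pearson correlation of a spectrum against each reference spectrum."""
    return np.array([np.corrcoef(spectrum, r)[0, 1] for r in references])

# Synthetic training set: noisy spectra dominated by one reference each.
X, y = [], []
for label in range(3):
    for _ in range(40):
        s = refs[label] + 0.1 * rng.normal(size=wavelengths.size)
        X.append(mrcc_features(s, refs))
        y.append(label)

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
clf.fit(np.array(X), np.array(y))
print("training accuracy:", clf.score(np.array(X), np.array(y)))
```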

  12. Artificial Intelligence Methods in Analysis of Morphology of Selected Structures in Medical Images

    Directory of Open Access Journals (Sweden)

    Ryszard Tadeusiewicz

    2001-01-01

Full Text Available The goal of this paper is the presentation of the possibilities of applying syntactic methods of computer image analysis to the recognition of local stenoses of the coronary artery lumen and the detection of pathological signs in the upper parts of the ureter ducts and renal calyxes. Analysis of the correct morphology of these structures is possible thanks to the application of sequence and tree methods from the group of syntactic methods of pattern recognition. In the case of the analysis of coronary artery images, the main objective is computer-aided early diagnosis of different forms of ischemic cardiovascular disease. Such diseases may be revealed in the form of stable or unstable disturbances of heart rhythm or infarction. In the analysis of kidney radiograms, the main goal is the recognition of local irregularities in the ureter lumens and the examination of the morphology of the renal pelvis and calyxes.

  13. Method in analysis of CdZnTe γ spectrum with artificial neural network

    International Nuclear Information System (INIS)

    Ai Xianyun; Wei Yixiang; Xiao Wuyun

    2005-01-01

The analysis of gamma-ray spectra to identify lines and their intensities usually requires expert knowledge and time-consuming calculations with complex fitting functions. A CdZnTe detector often exhibits an asymmetric peak shape, particularly at high energies, making peak-fitting methods and sophisticated isotope identification programs difficult to use. This paper investigates the use of a neural network to process gamma spectra measured with a CdZnTe detector in order to verify nuclear materials. Results show that the neural network method offers advantages, in particular when large low-energy peak tailing is observed. (authors)

  14. Research on method of nuclear power plant operation fault diagnosis based on a combined artificial neural network

    International Nuclear Information System (INIS)

    Liu Feng; Yu Ren; Li Fengyu; Zhang Meng

    2007-01-01

To solve the online real-time diagnosis problem of a nuclear power plant in operating condition, a method based on a combined artificial neural network is put forward in this paper. Its main principle is: use a BP neural network for fast group diagnosis, and then use an RBF neural network to distinguish and verify the diagnostic result. The accuracy of the method is verified using simulated values of the key parameters of a nuclear power plant in normal and malfunction states. The results show that the method, combining the advantages of the two neural networks, can not only diagnose learned faults at similar power levels of the nuclear power plant quickly and accurately, but can also identify faults at different power levels, as well as unlearned faults. The outputs of the diagnosis system take the form of fault reliabilities and change as the plant's operation time elapses. This makes the diagnosis results more acceptable to operators. (authors)

  15. On the Application of Formal Methods to Clinical Guidelines, an Artificial Intelligence Perspective

    NARCIS (Netherlands)

    Hommersom, A.J.

    2008-01-01

In computer science, all kinds of methods and techniques have been developed to study systems, such as simulation of the behaviour of a system. Furthermore, it is possible to study these systems by proving formal properties or by searching through all the possible states that a system may be in.

  16. Are Imaging and Lesioning Convergent Methods for Assessing Functional Specialisation? Investigations Using an Artificial Neural Network

    Science.gov (United States)

    Thomas, Michael S. C.; Purser, Harry R. M.; Tomlinson, Simon; Mareschal, Denis

    2012-01-01

    This article presents an investigation of the relationship between lesioning and neuroimaging methods of assessing functional specialisation, using synthetic brain imaging (SBI) and lesioning of a connectionist network of past-tense formation. The model comprised two processing "routes": one was a direct route between layers of input and output…

  17. Methods and procedures for the verification and validation of artificial neural networks

    CERN Document Server

    Taylor, Brian J

    2006-01-01

    Neural networks are members of a class of software that have the potential to enable intelligent computational systems capable of simulating characteristics of biological thinking and learning. This volume introduces some of the methods and techniques used for the verification and validation of neural networks and adaptive systems.

  18. Measurement correction method for force sensor used in dynamic pressure calibration based on artificial neural network optimized by genetic algorithm

    Science.gov (United States)

    Gu, Tingwei; Kong, Deren; Shang, Fei; Chen, Jing

    2017-12-01

We present an optimization algorithm to obtain low-uncertainty dynamic pressure measurements from a force-transducer-based device. In this paper, the advantages and disadvantages of the methods commonly used to measure propellant powder gas pressure, the applicable scope of dynamic pressure calibration devices, and the shortcomings of the traditional comparison calibration method based on the drop-weight device are first analysed in detail. Then, a dynamic calibration method for measuring pressure using a force sensor based on a drop-weight device is introduced. This method can effectively save time when many pressure sensors are calibrated simultaneously and can extend the life of expensive reference sensors. However, the force sensor is installed between the drop-weight and the hammerhead by transition pieces through bolt fastening, which causes adverse effects such as additional pretightening and inertia forces. To address these effects, the influence mechanisms of the pretightening force, the inertia force and other factors on the force measurement are theoretically analysed. Then a correction method for the force measurement is proposed, based on an artificial neural network optimized by a genetic algorithm. The training and testing data sets are obtained from calibration tests, and the selection criteria for the key parameters of the correction model are discussed. The evaluation results for the test data show that the correction model can effectively improve the force measurement accuracy of the force sensor. Compared with the traditional high-accuracy comparison calibration method, the percentage difference of the impact-force-based measurement is less than 0.6% and the relative uncertainty of the corrected force value is 1.95%, which meets the requirements of engineering applications.

  19. To the problem of control methods unification of natural and artificial radionuclide admission into environment

    International Nuclear Information System (INIS)

    Gedeonov, L.I.

    1981-01-01

Radioactive substances (RAS) released into the environment during NPP operation form fields of increased radiation level compared with the natural background. Protecting the environment from intolerable contamination requires that effluent norms, in terms of the concentration and quantity of RAS released to the environment, be determined for every source. The necessity of unifying the methods for monitoring radioactive nuclides in the environment, as well as the means and conditions of this monitoring, is discussed [ru]

  20. Event Detection Challenges, Methods, and Applications in Natural and Artificial Systems

    Science.gov (United States)

    2009-03-01

Sauvageon, Agogino, Mehr, and Tumer [2006], for instance, use a fourth-degree polynomial within an event detection algorithm to sense high... cancer, and coronary artery disease. His study examines the age at which to begin screening exams, the intervals between the exams, and (possibly... AM, Mehr AF, and Tumer IY. 2006. "Comparison of Event Detection Methods for Centralized Sensor Networks." IEEE Sensors Applications Symposium 2006

  1. An independent evaluation of a new method for automated interpretation of lung scintigrams using artificial neural networks

    International Nuclear Information System (INIS)

    Holst, H.; Jaerund, A.; Evander, E.; Taegil, K.; Edenbrandt, L.; Maare, K.; Aastroem, K.; Ohlsson, M.

    2001-01-01

The purpose of this study was to evaluate a new automated method for the interpretation of lung perfusion scintigrams using patients from a hospital other than that where the method was developed, and then to compare the performance of the technique against that of experienced physicians. A total of 1,087 scintigrams from patients with suspected pulmonary embolism comprised the training group. The test group consisted of scintigrams from 140 patients collected at a hospital different from that from which the training group had been drawn. An artificial neural network was trained using 18 automatically obtained features from each set of perfusion scintigrams. The image processing techniques included alignment to templates, construction of quotient images based on the perfusion/template images, and finally calculation of features describing segmental perfusion defects in the quotient images. The templates represented lungs of normal size and shape without any pathological changes. The performance of the neural network was compared with that of three experienced physicians who read the same test scintigrams according to the modified PIOPED criteria using, in addition to the perfusion images, ventilation images when available and chest radiographs for all patients. Performance was measured as the area under the receiver operating characteristic curve. The performance of the neural network evaluated in the test group was 0.88 (95% confidence limits 0.81-0.94). The performance of the three experienced experts was in the range 0.87-0.93 when using the perfusion images, chest radiographs and ventilation images when available. Perfusion scintigrams can thus be interpreted for the diagnosis of pulmonary embolism by an automated method also in a hospital other than that where it was developed. The performance of this method is similar to that of experienced physicians, even though the physicians, in addition to perfusion images, also had access to ventilation images for

  2. Investigation of test methods for measuring compressive strength and modulus of two-dimensional carbon-carbon composites

    Science.gov (United States)

    Ohlhorst, Craig W.; Sawyer, James Wayne; Yamaki, Y. Robert

    1989-01-01

An experimental evaluation has been conducted to ascertain the usefulness of two techniques for measuring in-plane compressive failure strength and modulus in coated and uncoated carbon-carbon composites. The techniques involved testing specimens with potted ends as well as testing them in a novel clamping fixture; specimen shape, length, gage width, and thickness were the test parameters investigated for both coated and uncoated 0/90 deg and +/-45 deg laminates. It is found that specimen shape does not have a significant effect on the measured compressive properties. Potting the specimen ends results in slightly higher measured compressive strengths than those obtained with the new clamping fixture. Comparable modulus values are obtained by both techniques.

  3. An Applied Method for Predicting the Load-Carrying Capacity in Compression of Thin-Wall Composite Structures with Impact Damage

    Science.gov (United States)

    Mitrofanov, O.; Pavelko, I.; Varickis, S.; Vagele, A.

    2018-03-01

    The necessity for considering both strength criteria and postbuckling effects in calculating the load-carrying capacity in compression of thin-wall composite structures with impact damage is substantiated. An original applied method ensuring solution of these problems with an accuracy sufficient for practical design tasks is developed. The main advantage of the method is its applicability in terms of computing resources and the set of initial data required. The results of application of the method to solution of the problem of compression of fragments of thin-wall honeycomb panel damaged by impacts of various energies are presented. After a comparison of calculation results with experimental data, a working algorithm for calculating the reduction in the load-carrying capacity of a composite object with impact damage is adopted.

  4. An Integrated Start-Up Method for Pumped Storage Units Based on a Novel Artificial Sheep Algorithm

    Directory of Open Access Journals (Sweden)

    Zanbin Wang

    2018-01-01

Full Text Available Pumped storage units (PSUs) are an important storage tool for power systems containing large-scale renewable energy, and their rapid start-up enables PSUs to modulate and stabilize the power system. In this paper, PSU start-up strategies are studied and a new integrated start-up method is proposed for the purpose of achieving swift and smooth start-up. A two-phase closed-loop start-up strategy, composed of switching between proportional-integral (PI) and proportional-integral-derivative (PID) controllers, is designed, and an integrated optimization scheme is proposed for synchronous optimization of the parameters of the strategy. To enhance the optimization performance, a novel meta-heuristic called the Artificial Sheep Algorithm (ASA) is proposed and applied to the optimization task after thorough verification against seven popular meta-heuristic algorithms on 13 typical benchmark functions. A simulation model has been built for a Chinese PSU, and comparative experiments are conducted to evaluate the proposed integrated method. Results show that start-up performance is significantly improved on both the overshoot and start-up-time indices, with up to 34% of the time consumption eliminated under different working conditions. This significant improvement in PSU start-up is promising for further application to real units.
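To make the two-phase idea concrete, here is a hedged sketch of a discrete-time controller that runs a PI law in the first (speed-raising) phase and switches to a full PID law near the setpoint. The plant model, gains, and switching threshold are illustrative stand-ins, not the paper's optimized values.

```python
import numpy as np

def simulate_two_phase_startup(n_steps=600, dt=0.01, setpoint=1.0,
                               switch_band=0.05):
    """First-order toy plant driven by PI, switching to PID near setpoint."""
    kp1, ki1 = 2.0, 1.0                 # phase-1 PI gains (illustrative)
    kp2, ki2, kd2 = 3.0, 1.5, 0.2       # phase-2 PID gains (illustrative)
    y, integ, prev_err = 0.0, 0.0, setpoint
    history = []
    for _ in range(n_steps):
        err = setpoint - y
        integ += err * dt
        if abs(err) > switch_band:      # phase 1: PI for a fast ramp-up
            u = kp1 * err + ki1 * integ
        else:                           # phase 2: PID for smooth settling
            deriv = (err - prev_err) / dt
            u = kp2 * err + ki2 * integ + kd2 * deriv
        prev_err = err
        y += dt * (-y + u)              # toy first-order unit dynamics
        history.append(y)
    return np.array(history)

speed = simulate_two_phase_startup()
print(f"overshoot: {max(speed) - 1.0:.4f}, final value: {speed[-1]:.4f}")
```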

  5. Modeling of glucose release from native and modified wheat starch gels during in vitro gastrointestinal digestion using artificial intelligence methods.

    Science.gov (United States)

    Yousefi, A R; Razavi, Seyed M A

    2017-04-01

Estimation of the amount of glucose release (AGR) during gastrointestinal digestion can be useful to identify foods of potential use in the diet of individuals with diabetes. In this work, adaptive neuro-fuzzy inference system (ANFIS), genetic algorithm-artificial neural network (GA-ANN) and group method of data handling (GMDH) models were applied to estimate the AGR from native (NWS), cross-linked (CLWS) and hydroxypropylated wheat starch (HPWS) gels during digestion under simulated gastrointestinal conditions. The GA-ANN and ANFIS were fed with 3 inputs, digestion time (1-120 min), gel volume (7.5 and 15 ml) and concentration (8 and 12%, w/w), for prediction of the AGR. The developed ANFIS predictions were close to the experimental data (r=0.977-0.996 and RMSE=0.225-0.619). The optimized GA-ANN, which included 6-7 hidden neurons, predicted the AGR with good precision (r=0.984-0.993 and RMSE=0.338-0.588). A three-layer GMDH model with 3 neurons also accurately predicted the AGR (r=0.979-0.986 and RMSE=0.339-0.443). Sensitivity analysis demonstrated that gel concentration was the most sensitive factor for prediction of the AGR. The results indicated that the AGR is accurately predictable by such soft-computing methods at low computational cost and time. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Simulation of CO2 Solubility in Polystyrene-b-Polybutadiene-b-Polystyrene (SEBS) by the artificial neural network (ANN) method

    Science.gov (United States)

    Sharudin, R. W.; AbdulBari Ali, S.; Zulkarnain, M.; Shukri, M. A.

    2018-05-01

This study reports on the integration of artificial neural networks (ANNs) with experimental data to predict the solubility of the carbon dioxide (CO2) blowing agent in SEBS, by generating the highest possible regression coefficient (R2). Foaming of a thermoplastic elastomer with CO2 is strongly affected by the CO2 solubility. The ability of the ANN to predict interpolated CO2 solubility data was investigated by comparing training results obtained via different network training methods. The final ANN predictions of CO2 solubility corroborated the experimental trend. Comparison of the training methods showed that Gradient Descent with Momentum and Adaptive LR (traingdx) required a longer training time and more accurate inputs to produce better outputs, with a final regression value of 0.88, whereas the Levenberg-Marquardt technique (trainlm) produced better output in a shorter training time, with a final regression value of 0.91.

  7. Application of stochastic and artificial intelligence methods for nuclear material identification

    International Nuclear Information System (INIS)

    Pozzi, S.; Segovia, F.J.

    1999-01-01

Nuclear materials safeguard efforts necessitate the use of non-destructive methods to determine the attributes of fissile samples enclosed in special, non-accessible containers. To this end, a large variety of methods has been developed at the Oak Ridge National Laboratory (ORNL) and elsewhere. Usually, a given set of statistics of the stochastic coupled neutron-photon field, such as source-detector and detector-detector cross-correlation functions and multiplicities, is measured over a range of known samples to develop calibration algorithms. In this manner, the attributes of unknown samples can be inferred from the calibration results. The organization of this paper is as follows: Section 2 describes the Monte Carlo simulations of source-detector cross-correlation functions for a set of metallic uranium samples interrogated by the neutrons and photons from a ²⁵²Cf source. From this database, a set of features is extracted in Section 3. The use of neural networks (NN) and genetic programming to provide sample mass and enrichment values from the input sets of features is illustrated in Sections 4 and 5, respectively. Section 6 is a comparison of the results, while Section 7 is a brief summary of the work.

  8. Neural interface methods and apparatus to provide artificial sensory capabilities to a subject

    Energy Technology Data Exchange (ETDEWEB)

    Buerger, Stephen P.; Olsson, III, Roy H.; Wojciechowski, Kenneth E.; Novick, David K.; Kholwadwala, Deepesh K.

    2017-01-24

    Embodiments of neural interfaces according to the present invention comprise sensor modules for sensing environmental attributes beyond the natural sensory capability of a subject, and communicating the attributes wirelessly to an external (ex-vivo) portable module attached to the subject. The ex-vivo module encodes and communicates the attributes via a transcutaneous inductively coupled link to an internal (in-vivo) module implanted within the subject. The in-vivo module converts the attribute information into electrical neural stimuli that are delivered to a peripheral nerve bundle within the subject, via an implanted electrode. Methods and apparatus according to the invention incorporate implantable batteries to power the in-vivo module allowing for transcutaneous bidirectional communication of low voltage (e.g. on the order of 5 volts) encoded signals as stimuli commands and neural responses, in a robust, low-error rate, communication channel with minimal effects to the subjects' skin.

  9. Artificial aortic valve dysfunction due to pannus and thrombus – different methods of cardiac surgical management

    Science.gov (United States)

    Marcinkiewicz, Anna; Kośmider, Anna; Walczak, Andrzej; Zwoliński, Radosław; Jaszewski, Ryszard

    2015-01-01

Conclusions: Precise and modern diagnostic methods facilitated selection of the treatment method. However, the intraoperative view also seems to be crucial in individualizing the surgical approach. PMID:26702274

  10. Artificial aortic valve dysfunction due to pannus and thrombus - different methods of cardiac surgical management.

    Science.gov (United States)

    Ostrowski, Stanisław; Marcinkiewicz, Anna; Kośmider, Anna; Walczak, Andrzej; Zwoliński, Radosław; Jaszewski, Ryszard

    2015-09-01

Approximately 60,000 prosthetic valves are implanted annually in the USA. The risk of prosthesis dysfunction ranges from 0.1% to 4% per year. Prosthetic valve dysfunction is usually caused by a thrombus obstructing the prosthetic discs. However, 10% of prosthetic valves become dysfunctional due to pannus formation, and 12% of prostheses are damaged by both fibrinous and thrombotic components. The authors present two patients with dysfunctional aortic prostheses who were referred for cardiac surgery. A different surgical solution was used in the treatment of each case. The first patient was a 71-year-old woman whose medical history included arterial hypertension, stable coronary artery disease, diabetes mellitus, chronic obstructive pulmonary disease (COPD), and hypercholesterolemia; she had previously undergone left-sided mastectomy and radiotherapy. The patient was admitted to the Cardiac Surgery Department due to aortic prosthesis dysfunction. Transthoracic echocardiography revealed complete obstruction of one disc and a severe reduction in the mobility of the second. The mean transvalvular gradient was very high. During the operation, pannus covering the discs' surface was found, and a biological aortic prosthesis was reimplanted without complications. The second patient was an 87-year-old woman with arterial hypertension, persistent atrial fibrillation, and COPD, whose past medical history included gastric ulcer disease and ischemic stroke. As in the first case, she was admitted due to valvular prosthesis dysfunction. Preoperative transthoracic echocardiography revealed an obstruction of the posterior prosthetic disc and significant aortic regurgitation. Transesophageal echocardiography and fluoroscopy confirmed the prosthetic dysfunction. During the operation, a thrombus growing around a minor pannus was found; the thrombus and pannus were removed, and normal functionality of the prosthetic valve was restored. Precise and modern diagnostic methods

  11. Artificial intelligence/fuzzy logic method for analysis of combined signals from heavy metal chemical sensors

    International Nuclear Information System (INIS)

    Turek, M.; Heiden, W.; Riesen, A.; Chhabda, T.A.; Schubert, J.; Zander, W.; Krueger, P.; Keusgen, M.; Schoening, M.J.

    2009-01-01

The cross-sensitivity of chemical sensors for several metal ions resembles, in a way, the overlapping sensitivity of some biological sensors, such as the optical colour receptors of human retinal cone cells. While it is difficult to assign crisp classification values to measurands based on complex overlapping sensory signals, fuzzy logic offers a possibility to model such systems mathematically. Current work moves toward mixed heavy-metal solutions and the combination of fuzzy logic with heavy-metal-sensitive, silicon-based chemical sensors for training scenarios with arbitrary sensor/probe combinations, in the sense of an electronic tongue. Heavy metals play an important role in environmental analysis; they occur in the environment as trace elements and as water impurities released from industrial processes. In this work, the development of a new fuzzy logic method based on potentiometric measurements performed with three different miniaturised chalcogenide glass sensors in different heavy-metal solutions is presented. The critical validation of the developed fuzzy logic program is demonstrated by means of measurements in unknown single- and multi-component heavy-metal solutions. Limitations of this program, and a comparison between calculated and expected values in terms of analyte composition and heavy-metal ion concentration, are shown and discussed.
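A hedged sketch of the kind of fuzzy classification the record describes: triangular membership functions map a sensor potential onto linguistic concentration classes, and a small Mamdani-style rule base combines two cross-sensitive sensors. The membership breakpoints, rules, and class names are invented for illustration.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def memberships(potential_mv):
    """Fuzzify a sensor potential (mV) into low/medium/high memberships."""
    return {
        "low":    tri(potential_mv, -50.0,   0.0,  50.0),
        "medium": tri(potential_mv,   0.0,  50.0, 100.0),
        "high":   tri(potential_mv,  50.0, 100.0, 150.0),
    }

def classify(sensor_a_mv, sensor_b_mv):
    """Tiny Mamdani-style rule base for two cross-sensitive sensors.

    Rule 1: A high AND B high -> Cd-dominated solution
    Rule 2: A high AND B low  -> Pb-dominated solution
    Rule 3: A low             -> background / no heavy metal
    """
    a, b = memberships(sensor_a_mv), memberships(sensor_b_mv)
    strengths = {
        "Cd-dominated": min(a["high"], b["high"]),
        "Pb-dominated": min(a["high"], b["low"]),
        "background":   a["low"],
    }
    return max(strengths, key=strengths.get), strengths

label, s = classify(90.0, 20.0)
print(label, {k: round(float(v), 2) for k, v in s.items()})
```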

  12. Artificial intelligence/fuzzy logic method for analysis of combined signals from heavy metal chemical sensors

    Energy Technology Data Exchange (ETDEWEB)

    Turek, M. [Institute of Nano- and Biotechnologies (INB), Aachen University of Applied Sciences, Campus Juelich, Juelich (Germany); Institute of Bio- and Nanosystems (IBN), Research Centre Juelich GmbH, Juelich (Germany); Heiden, W.; Riesen, A. [Bonn-Rhein-Sieg University of Applied Sciences, Sankt Augustin (Germany); Chhabda, T.A. [Institute of Nano- and Biotechnologies (INB), Aachen University of Applied Sciences, Campus Juelich, Juelich (Germany); Schubert, J.; Zander, W. [Institute of Bio- and Nanosystems (IBN), Research Centre Juelich GmbH, Juelich (Germany); Krueger, P. [Institute of Biochemistry and Molecular Biology, RWTH Aachen, Aachen (Germany); Keusgen, M. [Institute for Pharmaceutical Chemistry, Philipps-University Marburg, Marburg (Germany); Schoening, M.J. [Institute of Nano- and Biotechnologies (INB), Aachen University of Applied Sciences, Campus Juelich, Juelich (Germany); Institute of Bio- and Nanosystems (IBN), Research Centre Juelich GmbH, Juelich (Germany)], E-mail: m.j.schoening@fz-juelich.de

    2009-10-30

The cross-sensitivity of chemical sensors for several metal ions resembles, in a way, the overlapping sensitivity of some biological sensors, such as the optical colour receptors of human retinal cone cells. While it is difficult to assign crisp classification values to measurands based on complex overlapping sensory signals, fuzzy logic offers a possibility to model such systems mathematically. Current work moves toward mixed heavy-metal solutions and the combination of fuzzy logic with heavy-metal-sensitive, silicon-based chemical sensors for training scenarios with arbitrary sensor/probe combinations, in the sense of an electronic tongue. Heavy metals play an important role in environmental analysis; they occur in the environment as trace elements and as water impurities released from industrial processes. In this work, the development of a new fuzzy logic method based on potentiometric measurements performed with three different miniaturised chalcogenide glass sensors in different heavy-metal solutions is presented. The critical validation of the developed fuzzy logic program is demonstrated by means of measurements in unknown single- and multi-component heavy-metal solutions. Limitations of this program, and a comparison between calculated and expected values in terms of analyte composition and heavy-metal ion concentration, are shown and discussed.

  13. The influence of direct compression powder blend transfer method from the container to the tablet press on product critical quality attributes: a case study.

    Science.gov (United States)

    Teżyk, Michał; Jakubowska, Emilia; Milczewska, Kasylda; Milanowski, Bartłomiej; Voelkel, Adam; Lulek, Janina

    2017-06-01

    The aim of this article is to compare the gravitational powder blend loading method to the tablet press with manual loading, in terms of their influence on tablets' critical quality attributes (CQAs). The results of the study can be of practical relevance to the pharmaceutical industry in the area of direct compression of low-dose formulations, which can be prone to content uniformity (CU) issues. In the preliminary study, the particle size distribution (PSD) and surface energy of the raw materials were determined using the laser diffraction method and inverse gas chromatography, respectively. For the trials, a formulation containing two active pharmaceutical ingredients (APIs) was used. Tablet samples were collected as compression progressed to analyze their CQAs, namely assay and CU. Results obtained during the trials indicate that the tested direct compression powder blend is sensitive to the powder handling method applied. A mild increase in the content of both APIs was observed during manual scooping. The gravitational approach (based on discharge into the drum) resulted in a decline in CU, connected with a more pronounced assay increase at the end of tableting than in the case of manual loading. The correct design of blend transfer across single unit operations is an important issue and should be investigated during the development phase, since it may influence the final product CQAs. The manual scooping method, although simplistic, can be a temporary solution to improve API content and uniformity results when compared to industrial gravitational transfer.
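
    A content-uniformity check of the kind referred to above can be sketched in the spirit of the USP <905> acceptance-value calculation (n = 10, acceptability constant k = 2.4); the assay values below are invented for illustration.

        import statistics

        def acceptance_value(assays, k=2.4, low=98.5, high=101.5):
            """AV = |M - mean| + k * s, with reference value M clamped to the band."""
            mean = statistics.mean(assays)
            s = statistics.stdev(assays)
            m = min(max(mean, low), high)
            return abs(m - mean) + k * s

        samples = [99.1, 100.4, 101.2, 98.7, 99.9, 100.8, 102.1, 99.5, 100.2, 101.0]
        print(f"AV = {acceptance_value(samples):.2f} (stage-1 limit L1 = 15.0)")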

  14. Analysis and development of adjoint-based h-adaptive direct discontinuous Galerkin method for the compressible Navier-Stokes equations

    Science.gov (United States)

    Cheng, Jian; Yue, Huiqiang; Yu, Shengjiao; Liu, Tiegang

    2018-06-01

    In this paper, an adjoint-based high-order h-adaptive direct discontinuous Galerkin method is developed and analyzed for the two-dimensional steady-state compressible Navier-Stokes equations. Particular emphasis is devoted to the analysis of adjoint consistency for three different direct discontinuous Galerkin discretizations: the original direct discontinuous Galerkin method (DDG), the direct discontinuous Galerkin method with interface correction (DDG(IC)), and the symmetric direct discontinuous Galerkin method (SDDG). Theoretical analysis shows that the extra interface correction term adopted in the DDG(IC) and SDDG methods plays a key role in preserving adjoint consistency. Specifically, for the model problem considered in this work, we prove that the original DDG method is not adjoint consistent, while the DDG(IC) and SDDG methods can be adjoint consistent with appropriate treatment of boundary conditions and correct modifications of the underlying output functionals. The performance of the three DDG methods is carefully investigated and evaluated through typical test cases. Based on the theoretical analysis, an adjoint-based h-adaptive DDG(IC) method is further developed and evaluated; numerical experiments show its potential in applications of adjoint-based adaptation for simulating compressible flows.
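
    The adaptation driver described above reduces to a generic pattern: an adjoint-weighted residual gives a per-element error indicator, and the largest contributors are marked for h-refinement. The sketch below uses random stand-in data rather than an actual DDG discretization.

        import numpy as np

        rng = np.random.default_rng(0)
        n_elem = 200
        residual = rng.normal(scale=1e-3, size=n_elem)  # local primal residuals R_K
        adjoint = rng.normal(scale=2.0, size=n_elem)    # local adjoint values psi_K

        eta = np.abs(adjoint * residual)   # adjoint-weighted error indicator per element
        est_error = eta.sum()              # estimate of the output error |J(u) - J(u_h)|

        threshold = np.quantile(eta, 0.9)  # mark the worst 10% of elements
        marked = np.flatnonzero(eta >= threshold)
        print(f"estimated output error {est_error:.3e}; refining {marked.size} elements")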

  15. [Contact characteristics research of acetabular weight-bearing area with different internal fixation methods after compression fracture of acetabular dome].

    Science.gov (United States)

    Xu, Bowen; Zhang, Qingsong; An, Siqi; Pei, Baorui; Wu, Xiaobo

    2017-08-01

    To establish a model of compression fracture of the acetabular dome and to measure the contact characteristics of the acetabular weight-bearing area after 3 kinds of internal fixation. Sixteen fresh adult half-pelvis specimens were randomly divided into 4 groups, with 4 specimens in each group. Group D was the intact acetabulum (control group); in the remaining 3 groups an acetabular dome compression fracture model was prepared. The fractures were fixed with a reconstruction plate in group A, antegrade raft screws in group B, and retrograde raft screws in group C. Pressure-sensitive films were attached to the femoral head, and an axial compression test was carried out in the inverted single-leg standing position. The weight-bearing area, average stress, and peak stress were measured in each group. Under a load of 500 N, the acetabular weight-bearing area was significantly larger in group D than in the other 3 groups (P<0.05); the weight-bearing areas in groups B and C were significantly larger than in group A, and their average and peak stresses were significantly lower than in group A (P<0.05), with no significant difference between groups B and C (P>0.05). For compression fractures of the acetabular dome, the contact characteristics of the weight-bearing area cannot be restored to the normal level, even if anatomical reduction and rigid internal fixation are performed; compared with reconstruction plate fixation, antegrade and retrograde raft screw fixation can increase the weight-bearing area, reduce the average and peak stresses, and reduce the incidence of traumatic arthritis.

  16. Ammonia volatilization from artificial dung and urine patches measured by the equilibrium concentration technique (JTI method)

    Science.gov (United States)

    Saarijärvi, K.; Mattila, P. K.; Virkajärvi, P.

    The aim of this study was to investigate the dynamics of ammonia (NH₃) volatilization from intensively managed pastures on a soil type typical of the dairy production area in Finland and to clarify the effect of rainfall on NH₃ volatilization. The study included two experiments. In Experiment 1 the total amount of NH₃-N emitted was calculated based on the annual surface coverage of dung (4%) and urine (17%). The application rate of total N in the simulated dung and urine patches was approximately 47 g N m⁻² and 113 g N m⁻², respectively. In Experiment 1 the general level of NH₃ emissions from the urine patches was high and the peak volatilization rate was 0.54 g NH₃-N m⁻² h⁻¹. As expected, emissions from the dung pats were clearly lower, with a maximum rate of 0.10 g NH₃-N m⁻² h⁻¹. The total emission calculated for the whole pasture area (stocking rate four cows ha⁻¹ y⁻¹, urine coverage 17% and dung coverage 4%) was 16.1 kg NH₃-N ha⁻¹. Approximately 96% of the total emission originated from urine. In Experiment 2 we measured the emissions from urine only and the treatments on the urine patches were: (1) no irrigation, (2) 5+5 mm and (3) 20 mm irrigation. The peak emission rates were 0.13, 0.09 and 0.04 g NH₃-N m⁻² h⁻¹ and the total emissions were 6.9, 3.0 and 1.7 kg NH₃-N ha⁻¹ for treatments (1), (2) and (3), respectively. In both measurements over 80% of the total emission occurred during the first 48 h and there was a clear diurnal rhythm. Increasing rainfall markedly decreased NH₃ emission. Volatilization was highest with dry and warm soil. The JTI method appeared to be suitable for measuring NH₃ volatilization in this kind of experiment. According to our results, the importance of pastures as a source of NH₃ emission in Finland is minor.
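
    The pasture-scale bookkeeping in Experiment 1 is a simple coverage-weighted sum. The sketch below reproduces the quoted totals using back-calculated patch-level losses; these inputs are illustrative, not the measured values.

        # Patch-level cumulative losses (kg NH3-N per hectare of patch area); numbers
        # are chosen so the totals match the figures quoted above, not measured.
        urine_loss_per_patch_ha = 91.0
        dung_loss_per_patch_ha = 16.0

        total = 0.17 * urine_loss_per_patch_ha + 0.04 * dung_loss_per_patch_ha
        urine_share = 0.17 * urine_loss_per_patch_ha / total
        print(f"pasture emission ~ {total:.1f} kg NH3-N/ha, urine share {urine_share:.0%}")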

  17. Development and validation of a sensitive LC-MS-MS method for the simultaneous determination of multicomponent contents in artificial Calculus Bovis.

    Science.gov (United States)

    Peng, Can; Tian, Jixin; Lv, Mengying; Huang, Yin; Tian, Yuan; Zhang, Zunjian

    2014-02-01

    Artificial Calculus Bovis is a major clinical substitute for Niuhuang, a widely used, efficacious but rare traditional Chinese medicine. However, the chemical structures and physicochemical properties of its components are complicated, which makes it difficult to establish a set of effective and comprehensive methods for its identification and quality control. In this study, a simple, sensitive and reliable liquid chromatography-tandem mass spectrometry method was developed and validated for the simultaneous determination of bilirubin, taurine and major bile acids (including six unconjugated bile acids, two glycine-conjugated bile acids and three taurine-conjugated bile acids) in artificial Calculus Bovis, using a Zorbax SB-C18 column with gradient elution of methanol and 10 mmol/L aqueous ammonium acetate (adjusted to pH 3.0 with formic acid). Mass spectra were obtained in negative ion mode using dehydrocholic acid as the internal standard. The content of each analyte in artificial Calculus Bovis was determined by monitoring specific ion pairs in selected reaction monitoring mode. All analytes showed excellent linearity (r² > 0.994) over a wide dynamic range, and 10 batches of samples from different sources were further analyzed. This study provides a comprehensive method for the quality control of artificial Calculus Bovis.
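
    Quantitation against an internal standard, as used here with dehydrocholic acid, reduces to regressing the analyte/IS peak-area ratio on nominal concentration and inverting the fit. The numbers below are invented for illustration.

        import numpy as np

        conc = np.array([0.1, 0.5, 1.0, 5.0, 10.0])  # standard concentrations, ug/mL
        ratio = np.array([0.021, 0.103, 0.198, 1.020, 1.990])  # analyte area / IS area

        slope, intercept = np.polyfit(conc, ratio, 1)
        r2 = np.corrcoef(conc, ratio)[0, 1] ** 2

        unknown_ratio = 0.620  # peak-area ratio measured in a sample
        estimate = (unknown_ratio - intercept) / slope
        print(f"r^2 = {r2:.4f}; unknown ~ {estimate:.2f} ug/mL")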

  18. Three dimensional simulation of compressible and incompressible flows through the finite element method; Simulacao tridimensional de escoamentos compressiveis e incompressiveis atraves do metodo dos elementos finitos

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Gustavo Koury

    2004-11-15

    Although incompressible fluid flows can be regarded as a particular case of a general problem, numerical methods and the mathematical formulation aimed to solve compressible and incompressible flows have their own peculiarities, in such a way, that it is generally not possible to attain both regimes with a single approach. In this work, we start from a typically compressible formulation, slightly modified to make use of pressure variables and, through augmenting the stabilising parameters, we end up with a simplified model which is able to deal with a wide range of flow regimes, from supersonic to low speed gas flows. The resulting methodology is flexible enough to allow for the simulation of liquid flows as well. Examples using conservative and pressure variables are shown and the results are compared to those published in the literature, in order to validate the method. (author)

  19. The impact of chest compression rates on quality of chest compressions: a manikin study

    OpenAIRE

    Field, Richard A.; Soar, Jasmeet; Davies, Robin P.; Akhtar, Naheed; Perkins, Gavin D.

    2012-01-01

    Purpose: Chest compressions are often performed at a variable rate during cardiopulmonary resuscitation (CPR). The effect of compression rate on other chest compression quality variables (compression depth, duty-cycle, leaning, performance decay over time) is unknown. This randomised controlled cross-over manikin study examined the effect of different compression rates on the other chest compression quality variables. Methods: Twenty healthcare professionals performed two minutes of co...

  20. A PERFORMANCE COMPARISON BETWEEN ARTIFICIAL NEURAL NETWORKS AND MULTIVARIATE STATISTICAL METHODS IN FORECASTING FINANCIAL STRENGTH RATING IN TURKISH BANKING SECTOR

    Directory of Open Access Journals (Sweden)

    MELEK ACAR BOYACIOĞLU

    2013-06-01

    Financial strength rating indicates the fundamental financial strength of a bank. The aim of financial strength rating is to measure a bank's fundamental financial strength excluding external factors. External factors can stem from the working environment or can be linked with outside protective support mechanisms. The evaluation seeks a rating of the bank free from outside supportive factors; the financial fundamentals, franchise value, diversity of assets and working environment of a bank are also evaluated in this context. In this study, a model has been developed to predict the financial strength rating of Turkish banks. The methodology is as follows: selecting the variables to be used in the model, creating a data set, choosing the techniques to be used, and evaluating the classification success of the techniques. It is concluded that the artificial neural network shows better performance in classifying financial strength ratings than multivariate statistical methods on the training set. On the other hand, no meaningful difference could be found in the validation set, in which the prediction performances of the employed techniques are tested.
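
    The comparison made in the study can be mimicked generically: train a small neural network and a multivariate statistical classifier (here linear discriminant analysis) on the same data and compare holdout accuracy. The features and labels below are synthetic stand-ins for bank ratios and ratings; scikit-learn is assumed to be available.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(1)
        X = rng.normal(size=(300, 8))  # 8 hypothetical financial ratios per bank
        y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=300) > 0).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        models = [
            ("ANN", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                  random_state=0)),
            ("LDA", LinearDiscriminantAnalysis()),
        ]
        for name, model in models:
            model.fit(X_tr, y_tr)
            print(name, "test accuracy:", round(model.score(X_te, y_te), 3))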

  1. Facile control of silica nanoparticles using a novel solvent varying method for the fabrication of artificial opal photonic crystals

    International Nuclear Information System (INIS)

    Gao, Weihong; Rigout, Muriel; Owens, Huw

    2016-01-01

    In this work, the Stöber process was applied to produce uniform silica nanoparticles (SNPs) in the meso-scale size range. The novel aspect of this work was to control the produced silica particle size by only varying the volume of the solvent ethanol used, whilst fixing the other reaction conditions. Using this one-step Stöber-based solvent varying (SV) method, seven batches of SNPs with target diameters ranging from 70 to 400 nm were repeatedly reproduced, and the size distribution in terms of the polydispersity index (PDI) was well maintained (within 0.1). An exponential equation was used to fit the relationship between the particle diameter and ethanol volume. This equation allows the prediction of the amount of ethanol required in order to produce particles of any target diameter within this size range. In addition, it was found that the reaction was completed in approximately 2 h for all batches regardless of the volume of ethanol. Structurally coloured artificial opal photonic crystals (PCs) were fabricated from the prepared SNPs by self-assembly under gravity sedimentation.
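
    The reported diameter-ethanol relationship can be captured with a standard exponential fit. The sketch below assumes a form d = a·exp(b·V) and invented (V, d) pairs; inverting the fit predicts the ethanol volume for a target diameter, as described above.

        import numpy as np
        from scipy.optimize import curve_fit

        V = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0])       # mL ethanol
        d = np.array([400.0, 290.0, 215.0, 160.0, 120.0, 90.0, 70.0])  # diameter, nm

        model = lambda v, a, b: a * np.exp(b * v)
        (a, b), _ = curve_fit(model, V, d, p0=(500.0, -0.03))

        target = 250.0  # nm: invert the fit to predict the required ethanol volume
        v_needed = np.log(target / a) / b
        print(f"d(V) = {a:.1f} exp({b:.4f} V); ~{v_needed:.1f} mL for {target:.0f} nm")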

  2. Facile control of silica nanoparticles using a novel solvent varying method for the fabrication of artificial opal photonic crystals

    Science.gov (United States)

    Gao, Weihong; Rigout, Muriel; Owens, Huw

    2016-12-01

    In this work, the Stöber process was applied to produce uniform silica nanoparticles (SNPs) in the meso-scale size range. The novel aspect of this work was to control the produced silica particle size by only varying the volume of the solvent ethanol used, whilst fixing the other reaction conditions. Using this one-step Stöber-based solvent varying (SV) method, seven batches of SNPs with target diameters ranging from 70 to 400 nm were repeatedly reproduced, and the size distribution in terms of the polydispersity index (PDI) was well maintained (within 0.1). An exponential equation was used to fit the relationship between the particle diameter and ethanol volume. This equation allows the prediction of the amount of ethanol required in order to produce particles of any target diameter within this size range. In addition, it was found that the reaction was completed in approximately 2 h for all batches regardless of the volume of ethanol. Structurally coloured artificial opal photonic crystals (PCs) were fabricated from the prepared SNPs by self-assembly under gravity sedimentation.

  3. A hybrid artificial bee colony algorithm and pattern search method for inversion of particle size distribution from spectral extinction data

    Science.gov (United States)

    Wang, Li; Li, Feng; Xing, Jian

    2017-10-01

    In this paper, a hybrid artificial bee colony (ABC) algorithm and pattern search (PS) method is proposed and applied for the recovery of particle size distribution (PSD) from spectral extinction data. To be more useful and practical, the size distribution function is modelled as the general Johnson's S_B function, which overcomes the difficulty, encountered in many real circumstances, of not knowing the exact distribution type beforehand. The proposed hybrid algorithm is evaluated through simulated examples involving unimodal, bimodal and trimodal PSDs with different widths and mean particle diameters. For comparison, all examples are additionally validated with the single ABC algorithm. In addition, the performance of the proposed algorithm is further tested with actual extinction measurements on real standard polystyrene samples immersed in water. Simulation and experimental results illustrate that the hybrid algorithm can be used as an effective technique to retrieve PSDs with high reliability and accuracy. Compared with the single ABC algorithm, the proposed algorithm produces more accurate and robust inversion results while taking almost the same CPU time as the ABC algorithm alone. The superiority of the ABC and PS hybridization strategy in reaching a better balance between estimation accuracy and computational effort increases its potential as an inversion technique for reliable and efficient actual measurement of PSDs.
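
    The hybridization strategy is a generic two-stage pattern: a population-based global search followed by a local pattern (compass) search around the incumbent. The sketch below uses a heavily simplified ABC surrogate and a stand-in objective, not the paper's PSD inversion residual.

        import numpy as np

        def objective(x):  # stand-in for the extinction-spectrum misfit
            return np.sum((x - np.array([1.5, -0.7])) ** 2) + 0.1 * np.sin(5 * x[0]) ** 2

        rng = np.random.default_rng(2)

        # Stage 1: population-based exploration (greatly simplified ABC surrogate).
        pop = rng.uniform(-5, 5, size=(50, 2))
        for _ in range(100):
            trial = pop + rng.normal(scale=0.5, size=pop.shape)
            better = np.array([objective(t) < objective(p) for t, p in zip(trial, pop)])
            pop[better] = trial[better]
        best = min(pop, key=objective)

        # Stage 2: compass/pattern search refinement of the incumbent.
        step = 0.25
        while step > 1e-6:
            for d in np.vstack([np.eye(2), -np.eye(2)]):
                if objective(best + step * d) < objective(best):
                    best = best + step * d
                    break
            else:
                step *= 0.5  # no improving direction: shrink the mesh
        print("estimate:", best, "objective:", float(objective(best)))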

  4. Facile control of silica nanoparticles using a novel solvent varying method for the fabrication of artificial opal photonic crystals

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Weihong [The University of Manchester, School of Materials (United Kingdom); Rigout, Muriel [University of Leeds, School of Design (United Kingdom); Owens, Huw, E-mail: Huw.Owens@manchester.ac.uk [The University of Manchester, School of Materials (United Kingdom)

    2016-12-15

    In this work, the Stöber process was applied to produce uniform silica nanoparticles (SNPs) in the meso-scale size range. The novel aspect of this work was to control the produced silica particle size by only varying the volume of the solvent ethanol used, whilst fixing the other reaction conditions. Using this one-step Stöber-based solvent varying (SV) method, seven batches of SNPs with target diameters ranging from 70 to 400 nm were repeatedly reproduced, and the size distribution in terms of the polydispersity index (PDI) was well maintained (within 0.1). An exponential equation was used to fit the relationship between the particle diameter and ethanol volume. This equation allows the prediction of the amount of ethanol required in order to produce particles of any target diameter within this size range. In addition, it was found that the reaction was completed in approximately 2 h for all batches regardless of the volume of ethanol. Structurally coloured artificial opal photonic crystals (PCs) were fabricated from the prepared SNPs by self-assembly under gravity sedimentation.

  5. Comparison of in-situ gamma-ray spectrometry measurements with conventional methods in the determination of natural and artificial nuclides in soil

    International Nuclear Information System (INIS)

    Al-Masri, M. S.; Doubal, A. W.

    2010-12-01

    Two nuclear analytical techniques for the determination of natural and artificial radionuclides in soil (laboratory gamma-ray spectrometry and in-situ gamma-ray spectrometry) have been validated. The first technique depends on determining the radioactivity content of representative samples of the studied soil after laboratory preparation, while the second is based on direct determination of the radioactivity content of the soil using an in-situ gamma-ray spectrometer. Analytical validation parameters such as detection limits, repeatability and reproducibility, in addition to measurement uncertainties, were estimated and compared for both techniques. The comparison showed that determination of radioactivity in soil should apply the two techniques together, each being characterized by a detection limit and uncertainty suited to a defined measurement application. Radioactivity at various locations was determined using the two methods by measuring ⁴⁰K, ²³⁸U and ¹³⁷Cs. The results showed that there are differences in attenuation factors due to differences in soil moisture content; wet-weight corrections should be applied when the two techniques are compared. (author)
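
    One routine way to compare the detection limits of laboratory and in-situ gamma spectrometry is the Currie-style minimum detectable activity; the counting inputs below are hypothetical.

        import math

        def mda_bq(background_counts, efficiency, emission_prob, live_time_s):
            """Currie expression L_D = 2.71 + 4.65 sqrt(B), converted to activity."""
            l_d = 2.71 + 4.65 * math.sqrt(background_counts)
            return l_d / (efficiency * emission_prob * live_time_s)

        # Example: 137Cs line at 661.7 keV, 1 h live time (all inputs illustrative).
        print(f"MDA ~ {mda_bq(1200, 0.02, 0.85, 3600):.2f} Bq")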

  6. NON-COHESIVE SOILS’ COMPRESSIBILITY AND UNEVEN GRAIN-SIZE DISTRIBUTION RELATION

    Directory of Open Access Journals (Sweden)

    Anatoliy Mirnyy

    2016-03-01

    This paper presents the results of a laboratory investigation of soil compression phases with consideration of various granulometric compositions. Materials and Methods: An experimental soil box with microscale video recording for studying compression phases is described. Photo and video materials showing the differences in microscale particle movements were obtained for non-cohesive soils with different grain-size distributions. Results: The analysis of the compression test results and the separation of elastic and plastic deformations allow each compression phase to be identified. It is shown that soil density correlates with deformability parameters only for the same grain-size distribution. Based on the test results, the authors suggest that the compaction ratio is not sufficient for estimating deformability without the grain-size distribution being taken into account. Discussion and Conclusions: Considering grain-size distribution allows refining of the technological requirements for artificial soil structures, backfills, and sand beds. Further studies could be used for developing standard documents, SP 45.13330.2012 in particular.

  7. Artificial Consciousness or Artificial Intelligence

    OpenAIRE

    Spanache Florin

    2017-01-01

    Artificial intelligence is a tool designed by people for the gratification of their own creative ego, so we cannot confuse consciousness with intelligence, nor intelligence in its human form with consciousness. They are different concepts and they have different uses. Philosophically, there are differences between autonomous people and automatic artificial intelligence. This is the difference between intelligence and artificial intelligence, autonomous versus a...

  8. Discrete Wigner Function Reconstruction and Compressed Sensing

    OpenAIRE

    Zhang, Jia-Ning; Fang, Lei; Ge, Mo-Lin

    2011-01-01

    A new reconstruction method for the Wigner function is reported for quantum tomography based on compressed sensing. By analogy with computed tomography, Wigner functions for some quantum states can be reconstructed with fewer measurements using this compressed-sensing-based method.
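
    A minimal sketch of the compressed-sensing recovery step, assuming a sparse coefficient vector (standing in for a discrete Wigner function with few significant entries) and random linear measurements, solved by iterative soft thresholding (ISTA):

        import numpy as np

        rng = np.random.default_rng(3)
        n, m, k = 256, 80, 6
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

        A = rng.normal(size=(m, n)) / np.sqrt(m)  # random measurement matrix
        y = A @ x_true                            # m < n compressed measurements

        lam, step = 0.01, 1.0 / np.linalg.norm(A, 2) ** 2
        x = np.zeros(n)
        for _ in range(500):
            z = x - step * A.T @ (A @ x - y)      # gradient step on ||Ax - y||^2 / 2
            x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft threshold

        print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))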

  9. Comparison of the effectiveness of complex decongestive therapy and compression bandaging as a method of treatment of lymphedema in the elderly

    Directory of Open Access Journals (Sweden)

    Zasadzka E

    2018-05-01

    Background: Lymphedema is a chronic condition which significantly lowers patients' quality of life, particularly among the elderly, whose mobility and physical function are often reduced. Objectives: The aim of the study was to compare the effectiveness of multi-layer compression bandaging (MCB) and complex decongestive therapy (CDT), and to show that MCB is a cheaper, more accessible and less labor-intensive method of treating lymphedema in elderly patients. Patients and methods: The study included 103 patients (85 women and 18 men) aged ≥60 years with unilateral lower limb lymphedema. The subjects were divided into two groups: 50 treated with CDT and 53 with MCB. Pre- and post-treatment BMI and the average and maximum circumferences of the edematous extremities were analyzed. Results: Reduction in swelling was achieved in both groups after 15 interventions. Both therapies demonstrated similar efficacy in reducing limb volume and circumference, but MCB showed greater efficacy in reducing the maximum circumference. Conclusion: Compression bandaging is a vital component of CDT; maximum lymphedema reduction during therapy, and maintenance of its effect, cannot be achieved without it. It is also effective as an independent method, which can reduce therapy costs and improve accessibility. Keywords: lymphedema, elderly, therapy, compression bandaging

  10. [Artificial organs].

    Science.gov (United States)

    Raguin, Thibaut; Dupret-Bories, Agnès; Debry, Christian

    2017-01-01

    Research has been fighting organ failure and the shortage of donations by supplying artificial organs for many years. With the rise of new technologies, tissue engineering and regenerative medicine, many organs can benefit from an artificial equivalent: thanks to retinal implants some blind people can perceive visual stimuli, an artificial heart can be proposed in case of cardiac failure while awaiting a heart transplant, an artificial larynx enables laryngectomy patients to lead an almost normal life, and diabetics can obtain glycemic self-regulation controlled by smartphone with an artificial device. Dialysis devices are becoming portable, as are oxygenation systems for terminal respiratory failure. Bright prospects are being explored or might emerge in the near future. However, the retrospective assessment of putative side effects is not yet sufficient. Finally, the cost of these new devices is significant, even if the advent of three-dimensional printers may reduce it. © 2017 médecine/sciences – Inserm.

  11. An Object-Based Image Analysis Method for Monitoring Land Conversion by Artificial Sprawl: Use of RapidEye and IRS Data

    Directory of Open Access Journals (Sweden)

    Maud Balestrat

    2012-02-01

    In France, in the peri-urban context, urban sprawl dynamics are particularly strong, with huge population growth as well as a land crisis. The increase and spreading of built-up areas from the city centre towards the periphery take place to the detriment of natural and agricultural spaces. The conversion of land with agricultural potential is all the more worrying as it is usually irreversible. The French Ministry of Agriculture therefore needs reliable and repeatable spatio-temporal methods to locate and quantify loss of land at both local and national scales. The main objective of this study was to design a repeatable method to monitor land conversion characterized by artificial sprawl: (i) we used an object-based image analysis to extract artificial areas from satellite images; (ii) we built an artificial patch by aggregating all the peripheral areas that characterize artificial areas (the "artificialized" patch concept is an innovative extension of the urban patch concept, differing in the nature of its components and in the continuity distance applied); (iii) the diachronic analysis of artificial patch maps enables characterization of artificial sprawl. The method was applied at the scale of four departments (similar to provinces) along the coast of Languedoc-Roussillon, in the South of France, based on two satellite datasets, one acquired in 1996-1997 (Indian Remote Sensing) and the other in 2009 (RapidEye). In the four departments, we measured an increase in artificial areas from 113,000 ha in 1997 to 133,000 ha in 2009, i.e., an 18% increase in 12 years. The package comes in the form of a cartography valid at 1/15,000 scale, usable at the scale of a commune (the smallest territorial division used for administrative purposes in France) and adaptable to departmental and regional scales. The method is reproducible in homogeneous spatio-temporal terms, so that it could be used periodically to assess changes in land conversion.
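
    The patch-building step can be approximated with standard morphology: dilate the binary built-up mask with a disk whose radius plays the role of the continuity distance, then close and label the result. The raster and distance below are illustrative, not the study's parameters.

        import numpy as np
        from scipy import ndimage

        rng = np.random.default_rng(4)
        built_up = rng.random((100, 100)) > 0.97  # sparse "artificial" pixels

        r = 3  # continuity distance, in pixels
        yy, xx = np.ogrid[-r:r + 1, -r:r + 1]
        disk = xx ** 2 + yy ** 2 <= r ** 2        # disk-shaped structuring element

        patches = ndimage.binary_closing(ndimage.binary_dilation(built_up, disk), disk)
        labels, n = ndimage.label(patches)        # one label per artificial patch
        print(n, "patches covering", int(patches.sum()), "pixels")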

  12. Application of Artificial Intelligence Methods for Analysis of Material and Non-material Determinants of Functioning of Young Europeans in Times of Crisis in the Eurozone

    OpenAIRE

    Gawlik, Remigiusz

    2014-01-01

    The study presents an analysis of possible applications of artificial intelligence methods for understanding, structuring and supporting the decision-making processes of European youth in times of crisis in the Eurozone. Its main purpose is to select a research method suitable for grasping and explaining the relations between the social, economic and psychological premises behind the important life decisions taken by young Europeans at the beginning of their adult life. The interdisciplinary ap...

  13. Narrowing of the middle cerebral artery: artificial intelligence methods and comparison of transcranial color coded duplex sonography with conventional TCD.

    Science.gov (United States)

    Swiercz, Miroslaw; Swiat, Maciej; Pawlak, Mikolaj; Weigele, John; Tarasewicz, Roman; Sobolewski, Andrzej; Hurst, Robert W; Mariak, Zenon D; Melhem, Elias R; Krejza, Jaroslaw

    2010-01-01

    The goal of the study was to compare the performance of transcranial color-coded duplex sonography (TCCS) and transcranial Doppler sonography (TCD) in the diagnosis of middle cerebral artery (MCA) narrowing in the same population of patients, using statistical and nonstatistical intelligent models for data analysis. We prospectively collected data from 179 consecutive routine digital subtraction angiography (DSA) procedures performed in 111 patients (mean age 54.17 ± 14.4 years; 59 women, 52 men) who underwent TCD and TCCS examinations simultaneously. Each patient was examined independently using both ultrasound techniques; 267 M1 segments of the MCA were assessed and narrowings were classified as ≤50% or >50% lumen reduction. Diagnostic performance was estimated by two statistical and two artificial neural network (ANN) classification methods. Separate models were constructed for the TCD and TCCS sonographic data, as well as for detection of "any narrowing" and "severe narrowing" of the MCA. Input for each classifier consisted of the peak-systolic, mean and end-diastolic velocities measured with each sonographic method; the output was MCA narrowing. As indicated by DSA, narrowings of ≤50% lumen reduction were found in 55, and >50% narrowings in 26, of the 267 arteries. In the category of "any narrowing", the rate of correct assignment by all models was 82% to 83% for TCCS and 79% to 81% for TCD. In the diagnosis of >50% narrowing, the overall classification accuracy remained in the range of 89% to 90% for TCCS data and 90% to 91% for TCD data. For the diagnosis of any narrowing, the sensitivity of TCCS was significantly higher than that of TCD, while for the diagnosis of >50% MCA narrowing the sensitivities of TCCS and TCD were similar. Our study showed that TCCS outperforms conventional TCD in the detection of >50% MCA narrowing. (E-mail: jaroslaw.krejza@uphs.upenn.edu).
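
    The sensitivity and specificity figures compared above come from 2×2 confusion tables; a minimal computation with invented counts:

        def sens_spec(tp, fn, tn, fp):
            """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
            return tp / (tp + fn), tn / (tn + fp)

        se, sp = sens_spec(tp=21, fn=5, tn=230, fp=11)
        print(f"sensitivity {se:.2f}, specificity {sp:.2f}")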

  14. A Comparison Between Heliosat-2 and Artificial Neural Network Methods for Global Horizontal Irradiance Retrievals over Desert Environments

    Science.gov (United States)

    Ghedira, H.; Eissa, Y.

    2012-12-01

    Global horizontal irradiance (GHI) retrievals at the surface of a given location can be used for preliminary solar resource assessments. More precisely, the direct normal irradiance (DNI) and diffuse horizontal irradiance (DHI) are also required to estimate the global tilt irradiance, mainly used for fixed flat-plate collectors. Two different satellite-based models for solar irradiance retrieval have been applied over the desert environment of the United Arab Emirates (UAE). Both models employ channels of the SEVIRI instrument, onboard the geostationary satellite Meteosat Second Generation, as their main inputs. The satellite images used in this study have a temporal resolution of 15 min and a spatial resolution of 3 km. The objective of this study is to compare the GHI retrieved using the Heliosat-2 method with that retrieved by an artificial neural network (ANN) ensemble method over the UAE. The high-resolution visible channel of SEVIRI is used in the Heliosat-2 method to derive the cloud index. The cloud index is then used to compute the cloud transmission, while the cloud-free GHI is computed from the Linke turbidity factor. The product of the cloud transmission and the cloud-free GHI is the estimated GHI. A constant underestimation was observed in the GHI estimated over the dataset available in the UAE; the cloud-free DHI equation in the model was therefore recalibrated to remove the bias. After recalibration, results over the UAE show a root mean square error (RMSE) of 10.1% and a mean bias error (MBE) of -0.5%. As for the ANN approach, six thermal channels of SEVIRI were used to estimate the DHI and the total optical depth of the atmosphere (δ). An ensemble approach is employed to obtain better generalizability of the results, as opposed to using one single weak network. The DNI is then computed from the estimated δ using the Beer-Bouguer-Lambert law. The GHI is computed from the DNI and DHI estimates. The RMSE for the estimated GHI obtained over an
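
    The two relations at the heart of the comparison, sketched with illustrative numbers: a Heliosat-style GHI as a cloud-dependent fraction of the clear-sky value, and DNI from the total optical depth via the Beer-Bouguer-Lambert law (here with a simple secant air-mass model; the linear cloud-index mapping is one common choice, not necessarily the exact one used in the study).

        import math

        def heliosat_ghi(cloud_index, ghi_clear):
            """GHI as clear-sky GHI scaled by a cloud-index-dependent transmission."""
            k_c = max(0.05, min(1.2, 1.0 - cloud_index))  # common linear mapping
            return k_c * ghi_clear

        def dni_beer_lambert(e0, delta, zenith_deg):
            air_mass = 1.0 / math.cos(math.radians(zenith_deg))  # secant approximation
            return e0 * math.exp(-delta * air_mass)

        dni = dni_beer_lambert(e0=1361.0, delta=0.35, zenith_deg=30.0)
        dhi = 120.0  # hypothetical diffuse horizontal irradiance, W/m^2
        ghi = dhi + dni * math.cos(math.radians(30.0))
        print(f"DNI ~ {dni:.0f} W/m^2, GHI ~ {ghi:.0f} W/m^2,"
              f" Heliosat GHI ~ {heliosat_ghi(0.3, 850.0):.0f} W/m^2")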

  15. Evaluation of a new image compression technique

    International Nuclear Information System (INIS)

    Algra, P.R.; Kroon, H.M.; Noordveld, R.B.; DeValk, J.P.J.; Seeley, G.W.; Westerink, P.H.

    1988-01-01

    The authors present the evaluation of a new image compression technique, subband coding using vector quantization, on 44 CT examinations of the upper abdomen. Three independent radiologists reviewed the original images and the compressed versions. The compression ratios used were 16:1 and 20:1. Receiver operating characteristic analysis showed no difference in diagnostic content between the originals and their compressed versions. Subjective visibility of anatomic structures was equal. Except for a few 20:1 compressed images, the observers could not distinguish the compressed versions from the original images. They conclude that subband coding using vector quantization is a valuable method for data compression in CT scans of the abdomen

  16. Method and algorithm for efficient calibration of compressive hyperspectral imaging system based on a liquid crystal retarder

    Science.gov (United States)

    Shecter, Liat; Oiknine, Yaniv; August, Isaac; Stern, Adrian

    2017-09-01

    Recently we presented a Compressive Sensing Miniature Ultra-spectral Imaging System (CS-MUSI). This system consists of a single liquid crystal (LC) phase retarder as a spectral modulator and a grayscale sensor array that captures a multiplexed signal of the imaged scene. By designing the LC spectral modulator in compliance with compressive sensing (CS) guidelines and applying appropriate algorithms, we demonstrated reconstruction of spectral (hyper/ultra) datacubes from an order of magnitude fewer samples than taken by conventional sensors. The LC modulator is designed to have an effective width of a few tens of micrometers; it is therefore prone to imperfections and spatial nonuniformity. In this work, we present a study of this nonuniformity and a mathematical algorithm that allows the spectral transmission over the entire cell area to be inferred from only a few calibration measurements.

  17. Fluid-driven origami-inspired artificial muscles.

    Science.gov (United States)

    Li, Shuguang; Vogt, Daniel M; Rus, Daniela; Wood, Robert J

    2017-12-12

    Artificial muscles hold promise for safe and powerful actuation for myriad common machines and robots. However, the design, fabrication, and implementation of artificial muscles are often limited by their material costs, operating principle, scalability, and single-degree-of-freedom contractile actuation motions. Here we propose an architecture for fluid-driven origami-inspired artificial muscles. This concept requires only a compressible skeleton, a flexible skin, and a fluid medium. A mechanical model is developed to explain the interaction of the three components. A fabrication method is introduced to rapidly manufacture low-cost artificial muscles using various materials and at multiple scales. The artificial muscles can be programmed to achieve multiaxial motions including contraction, bending, and torsion. These motions can be aggregated into systems with multiple degrees of freedom, which are able to produce controllable motions at different rates. Our artificial muscles can be driven by fluids at negative pressures (relative to ambient). This feature makes actuation safer than most other fluidic artificial muscles that operate with positive pressures. Experiments reveal that these muscles can contract over 90% of their initial lengths, generate stresses of ∼600 kPa, and produce peak power densities over 2 kW/kg, all equal to, or in excess of, natural muscle. This architecture for artificial muscles opens the door to rapid design and low-cost fabrication of actuation systems for numerous applications at multiple scales, ranging from miniature medical devices to wearable robotic exoskeletons to large deployable structures for space exploration. Copyright © 2017 the Author(s). Published by PNAS.
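
    An order-of-magnitude check of the quoted actuation stress, assuming the blocked force of a fluid-driven muscle scales as the pressure differential times an effective area (both values below are assumptions for illustration, not the paper's measurements):

        dp = 90e3         # usable negative-pressure differential, Pa (assumed)
        area_ratio = 6.5  # effective actuation area / muscle cross-section (assumed)

        stress = dp * area_ratio  # blocked-force stress referred to the cross-section
        print(f"estimated peak stress ~ {stress / 1e3:.0f} kPa")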

  18. Entropy Stable Staggered Grid Discontinuous Spectral Collocation Methods of any Order for the Compressible Navier-Stokes Equations

    KAUST Repository

    Parsani, Matteo

    2016-10-04

    Staggered grid, entropy stable discontinuous spectral collocation operators of any order are developed for the compressible Euler and Navier-Stokes equations on unstructured hexahedral elements. This generalization of previous entropy stable spectral collocation work [M. H. Carpenter, T. C. Fisher, E. J. Nielsen, and S. H. Frankel, SIAM J. Sci. Comput., 36 (2014), pp. B835-B867; M. Parsani, M. H. Carpenter, and E. J. Nielsen, J. Comput. Phys., 292 (2015), pp. 88-113] extends the applicable set of points from tensor product, Legendre-Gauss-Lobatto (LGL), to a combination of tensor product Legendre-Gauss (LG) and LGL points. The new semidiscrete operators discretely conserve mass, momentum, and energy, and satisfy a mathematical entropy inequality for the compressible Navier-Stokes equations in three spatial dimensions. They are valid for smooth as well as discontinuous flows. The staggered LG and conventional LGL point formulations are compared on several challenging test problems. The staggered LG operators are significantly more accurate, although more costly from a theoretical point of view. The LG and LGL operators exhibit similar robustness, as is demonstrated using test problems known to be problematic for operators that lack a nonlinear stability proof for the compressible Navier-Stokes equations (e.g., discontinuous Galerkin, spectral difference, or flux reconstruction operators).
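
    The two point families contrasted above differ in whether the endpoints are included: LG nodes are the standard Gauss quadrature points, while LGL nodes are the endpoints plus the roots of P'_N. A common way to compute the latter is Newton iteration from a Chebyshev initial guess:

        import numpy as np

        def lgl_nodes(n, iters=100):
            """n+1 Legendre-Gauss-Lobatto nodes: roots of (1 - x^2) P'_n on [-1, 1]."""
            x = np.cos(np.pi * np.arange(n + 1) / n)   # Chebyshev initial guess
            for _ in range(iters):
                p_prev, p = np.ones_like(x), x.copy()  # P_0 and P_1
                for k in range(2, n + 1):              # three-term recurrence up to P_n
                    p_prev, p = p, ((2 * k - 1) * x * p - (k - 1) * p_prev) / k
                dx = (x * p - p_prev) / ((n + 1) * p)  # Newton step for the LGL roots
                x -= dx
                if np.max(np.abs(dx)) < 1e-14:
                    break
            return np.sort(x)

        lg, _ = np.polynomial.legendre.leggauss(4)  # interior LG nodes
        print("LG :", np.round(lg, 6))
        print("LGL:", np.round(lgl_nodes(4), 6))    # includes the endpoints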

  19. Effect of High-Temperature Curing Methods on the Compressive Strength Development of Concrete Containing High Volumes of Ground Granulated Blast-Furnace Slag

    Directory of Open Access Journals (Sweden)

    Wonsuk Jung

    2017-01-01

    This paper investigates the effect of high-temperature curing methods on the compressive strength of concrete containing high volumes of ground granulated blast-furnace slag (GGBS). GGBS was used to replace Portland cement at a replacement ratio of 60% by binder mass. The high-temperature curing parameters used in this study were the delay period, temperature rise, peak temperature (PT), peak period, and temperature decrease. Test results demonstrate that the compressive strength of the samples with PTs of 65°C and 75°C was about 88% higher than that of the samples with a PT of 55°C after 1 day. According to this investigation, there may be optimum high-temperature curing conditions for preparing a concrete containing high volumes of GGBS, and incorporating GGBS into precast concrete mixes can be a very effective way of increasing the applicability of this by-product.

  20. Graph Compression by BFS

    Directory of Open Access Journals (Sweden)

    Alberto Apostolico

    2009-08-01

    The Web graph is a large-scale graph that does not fit in main memory, so lossless compression methods have been proposed for it. This paper introduces a compression scheme that combines efficient storage with fast retrieval of the information in a node. The scheme exploits the properties of the Web graph without assuming an ordering of the URLs, so that it may be applied to more general graphs. Tests on datasets in common use achieve space savings of about 10% over existing methods.
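
    The core idea behind BFS-based graph compression can be shown in a few lines: relabel vertices in BFS order so that neighbour IDs cluster, then store each adjacency list as small differences (gaps), which compress well. A toy sketch:

        from collections import deque

        adj = {0: [2, 3], 1: [4], 2: [0, 3, 4], 3: [0, 2], 4: [1, 2]}

        # Assign new IDs in BFS order from vertex 0.
        order, seen, q = {}, {0}, deque([0])
        while q:
            u = q.popleft()
            order[u] = len(order)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    q.append(v)

        # Remap and gap-encode each adjacency list: the first entry is stored as a
        # delta from the source ID, later entries as successive differences.
        for u in sorted(adj, key=order.get):
            nbrs = sorted(order[v] for v in adj[u])
            gaps = [nbrs[0] - order[u]] + [b - a for a, b in zip(nbrs, nbrs[1:])]
            print(f"node {order[u]}: neighbours {nbrs} -> gaps {gaps}")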