WorldWideScience

Sample records for methods include automatic

  1. Automatic Annotation Method on Learners' Opinions in Case Method Discussion

    Science.gov (United States)

    Samejima, Masaki; Hisakane, Daichi; Komoda, Norihisa

    2015-01-01

Purpose: The purpose of this paper is to automatically annotate learners' opinions with an attribute (a problem, a solution, or no annotation) in order to support the learners' discussion without a facilitator. The case method aims at discussing problems and solutions in a target case; however, the learners miss discussing some of the problems and solutions.…

  2. Automatic temperature control method of shipping can

    International Nuclear Information System (INIS)

    Nishikawa, Kaoru.

    1992-01-01

The present invention provides a method of rapidly and accurately controlling the temperature of a shipping can used during shipping inspection of a nuclear fuel assembly. A measured temperature value of the shipping can is converted to a gas-pressure setting value for the jacket of the shipping can by a predetermined logic calculation using fuzzy logic. A gas-pressure control section compares the pressure setting value from the fuzzy estimation section with the measured gas pressure in the jacket, and supplies or exhausts jacket gas to bring the measured value to the setting value. Together, the fuzzy estimation section and the gas-pressure control section regulate the gas pressure in the jacket, and thereby the water level in the jacket; as a result, the temperature of the shipping can is controlled. Because the water level in the jacket can be controlled directly and finely, the temperature of the shipping can is controlled automatically, and more rapidly and accurately than in a conventional system. (I.S.)
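The control scheme described above, fuzzy estimation of a pressure setpoint followed by supply/exhaust regulation, can be sketched roughly as follows. This is an illustrative Python sketch, not the patented method: the membership ranges, pressure values, and function names are all invented for the example.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_pressure_setpoint(temp_c):
    """Map a measured can temperature to a jacket gas-pressure setpoint via a
    weighted average (centroid) of three fuzzy rules. Values are illustrative (kPa)."""
    rules = [
        (tri(temp_c, 10, 20, 30), 120.0),  # "cool"   -> high pressure (low water level)
        (tri(temp_c, 20, 30, 40), 100.0),  # "normal" -> medium pressure
        (tri(temp_c, 30, 40, 50),  80.0),  # "warm"   -> low pressure (high water level)
    ]
    num = sum(w * p for w, p in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 100.0

def control_step(setpoint, measured, deadband=1.0):
    """Supply or exhaust jacket gas to drive the measured pressure to the setpoint."""
    if measured < setpoint - deadband:
        return "supply"
    if measured > setpoint + deadband:
        return "exhaust"
    return "hold"
```

A temperature between two membership peaks blends the two rule outputs, which is what gives the fuzzy controller its smooth setpoint curve.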

  3. Automatic coding method of the ACR Code

    International Nuclear Information System (INIS)

    Park, Kwi Ae; Ihm, Jong Sool; Ahn, Woo Hyun; Baik, Seung Kook; Choi, Han Yong; Kim, Bong Gi

    1993-01-01

The authors developed a computer program for automatic coding of the ACR (American College of Radiology) code. Automatic coding of the ACR code is essential for computerization of data in a department of radiology. The program was written in FoxBASE and has been used for automatic coding of diagnoses in the Department of Radiology, Wallace Memorial Baptist, since May 1992. The ACR dictionary consists of 11 files: one for the organ codes and the others for the pathology codes. The organ code is obtained by typing the organ name or the code number itself; the upper- and lower-level codes of the selected entry are displayed simultaneously on the screen. According to the first digit of the selected organ code, the corresponding pathology code file is chosen automatically, and the proper pathology code is obtained in the same fashion as the organ code. An example of an obtained ACR code is '131.3661'. The procedure is reproducible regardless of the number of data fields. Because the program was written in 'User's Defined Function' form, decoding of a stored ACR code is achieved by the same program, and the program can be incorporated into other data-processing programs. The program has the merits of simple operation, accurate and detailed coding, and easy adaptation to other programs. Therefore, it can be used to automate routine work in the department of radiology.
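The two-stage lookup the abstract describes (organ code first, then a pathology table selected by the organ code's first digit) amounts to a pair of dictionary lookups. A minimal Python sketch with toy tables; the codes and labels below are illustrative, not real ACR entries:

```python
# Toy ACR-style dictionaries: organ codes, plus a pathology table chosen by
# the organ code's first digit (all entries invented for illustration).
ORGANS = {"1": {"131": "cervical spine"}, "4": {"44": "heart"}}
PATHOLOGY = {"1": {"3661": "fracture"}, "4": {"71": "cardiomegaly"}}

def acr_code(organ_code, pathology_code):
    """Combine organ and pathology codes after validating both against the
    tables selected by the organ code's first digit."""
    first = organ_code[0]
    if organ_code not in ORGANS.get(first, {}):
        raise KeyError(f"unknown organ code {organ_code}")
    if pathology_code not in PATHOLOGY.get(first, {}):
        raise KeyError(f"unknown pathology code {pathology_code}")
    return f"{organ_code}.{pathology_code}"

def decode(code):
    """Decoding reuses the same tables, mirroring the article's remark that
    one program can both code and decode."""
    organ_code, pathology_code = code.split(".")
    first = organ_code[0]
    return ORGANS[first][organ_code], PATHOLOGY[first][pathology_code]
```

With the toy tables, `acr_code("131", "3661")` reproduces the abstract's example code `'131.3661'`.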

  4. METHODS OF AUTOMATIC QUALITY CONTROL OF AGGLUTINANT SANDS IN FOUNDRY

    Directory of Open Access Journals (Sweden)

    D. M. Kukuj

    2004-01-01

The article presents a comparative analysis of the well-known methods for automatic quality control of agglutinant sands during their preparation, and considers the problems of automating control of the mix-preparation processes.

  5. A rule-based automatic sleep staging method.

    Science.gov (United States)

    Liang, Sheng-Fu; Kuo, Chin-En; Hu, Yu-Han; Cheng, Yu-Shian

    2012-03-30

In this paper, a rule-based automatic sleep staging method is proposed. Twelve features, including temporal and spectral analyses of the EEG, EOG, and EMG signals, were utilized. Normalization was applied to each feature to eliminate individual differences. A hierarchical decision tree with fourteen rules was constructed for sleep stage classification. Finally, a smoothing process considering temporal contextual information was applied for continuity. Applied to all-night polysomnography (PSG) recordings of seventeen healthy subjects and compared with manual scoring by the R&K rules, the proposed method reached an overall agreement of 86.68% and a kappa coefficient of 0.79. The method could be integrated with a portable PSG system for at-home sleep evaluation in the near future. Copyright © 2012 Elsevier B.V. All rights reserved.
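The pipeline, per-feature normalization, a rule-based tree, then temporal smoothing, can be sketched as below. The three rules and their thresholds are invented placeholders standing in for the paper's fourteen rules:

```python
from statistics import mean, stdev

def normalize(values):
    """Z-score a feature across epochs to reduce individual differences."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

def classify_epoch(delta_power, emg_level, eog_movement):
    """Tiny stand-in for the paper's 14-rule decision tree (3 toy rules)."""
    if emg_level > 0.8 and eog_movement > 0.8:
        return "Wake"
    if delta_power > 0.5:
        return "SWS"          # slow-wave sleep
    return "Light/REM"

def smooth(stages):
    """Temporal smoothing: an isolated stage flanked by two equal
    neighbours is replaced by that neighbour."""
    out = list(stages)
    for i in range(1, len(stages) - 1):
        if stages[i - 1] == stages[i + 1] != stages[i]:
            out[i] = stages[i - 1]
    return out
```

The smoothing pass is what the abstract calls "considering the temporal contextual information": sleep stages rarely flip for a single epoch.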

  6. Automatic numerical integration methods for Feynman integrals through 3-loop

    International Nuclear Information System (INIS)

    De Doncker, E; Olagbemi, O; Yuasa, F; Ishikawa, T; Kato, K

    2015-01-01

We give numerical integration results for Feynman loop diagrams through 3-loop such as those covered by Laporta [1]. The methods are based on automatic adaptive integration, using iterated integration and extrapolation with programs from the QUADPACK package, or multivariate techniques from the ParInt package. The DQAGS algorithm from QUADPACK accommodates boundary singularities of fairly general types. ParInt is a package for multivariate integration layered over MPI (Message Passing Interface); it runs on clusters and incorporates advanced parallel/distributed techniques such as load balancing among processes that may be distributed over a network of nodes. Results are included for 3-loop self-energy diagrams without IR (infrared) or UV (ultraviolet) singularities. A procedure based on iterated integration and extrapolation yields a novel method of numerical regularization for integrals with UV terms, and is applied to a set of 2-loop self-energy diagrams with UV singularities. (paper)
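Iterated integration, the core pattern behind the QUADPACK-based approach, reduces a multivariate integral to nested one-dimensional quadratures. A minimal sketch using composite Simpson quadrature in place of the adaptive DQAGS routine:

```python
def simpson(f, a, b, n=64):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def iterated_integral(f, ax, bx, ay, by):
    """Iterated (nested) 1-D quadrature for a 2-D integral: the outer rule
    integrates a function whose evaluation is itself an inner quadrature."""
    return simpson(lambda x: simpson(lambda y: f(x, y), ay, by), ax, bx)
```

The real codes replace Simpson with adaptive rules carrying error estimates, and add extrapolation in a regularization parameter for the UV-singular cases; none of that machinery is shown here.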

  7. Conversion of KEGG metabolic pathways to SBGN maps including automatic layout.

    Science.gov (United States)

    Czauderna, Tobias; Wybrow, Michael; Marriott, Kim; Schreiber, Falk

    2013-08-16

    Biologists make frequent use of databases containing large and complex biological networks. One popular database is the Kyoto Encyclopedia of Genes and Genomes (KEGG) which uses its own graphical representation and manual layout for pathways. While some general drawing conventions exist for biological networks, arbitrary graphical representations are very common. Recently, a new standard has been established for displaying biological processes, the Systems Biology Graphical Notation (SBGN), which aims to unify the look of such maps. Ideally, online repositories such as KEGG would automatically provide networks in a variety of notations including SBGN. Unfortunately, this is non-trivial, since converting between notations may add, remove or otherwise alter map elements so that the existing layout cannot be simply reused. Here we describe a methodology for automatic translation of KEGG metabolic pathways into the SBGN format. We infer important properties of the KEGG layout and treat these as layout constraints that are maintained during the conversion to SBGN maps. This allows for the drawing and layout conventions of SBGN to be followed while creating maps that are still recognizably the original KEGG pathways. This article details the steps in this process and provides examples of the final result.

  8. Automatic reconstruction of fault networks from seismicity catalogs including location uncertainty

    International Nuclear Information System (INIS)

    Wang, Y.

    2013-01-01

    Within the framework of plate tectonics, the deformation that arises from the relative movement of two plates occurs across discontinuities in the earth's crust, known as fault zones. Active fault zones are the causal locations of most earthquakes, which suddenly release tectonic stresses within a very short time. In return, fault zones slowly grow by accumulating slip due to such earthquakes by cumulated damage at their tips, and by branching or linking between pre-existing faults of various sizes. Over the last decades, a large amount of knowledge has been acquired concerning the overall phenomenology and mechanics of individual faults and earthquakes: A deep physical and mechanical understanding of the links and interactions between and among them is still missing, however. One of the main issues lies in our failure to always succeed in assigning an earthquake to its causative fault. Using approaches based in pattern-recognition theory, more insight into the relationship between earthquakes and fault structure can be gained by developing an automatic fault network reconstruction approach using high resolution earthquake data sets at largely different scales and by considering individual event uncertainties. This thesis introduces the Anisotropic Clustering of Location Uncertainty Distributions (ACLUD) method to reconstruct active fault networks on the basis of both earthquake locations and their estimated individual uncertainties. This method consists in fitting a given set of hypocenters with an increasing amount of finite planes until the residuals of the fit compare with location uncertainties. After a massive search through the large solution space of possible reconstructed fault networks, six different validation procedures are applied in order to select the corresponding best fault network. Two of the validation steps (cross-validation and Bayesian Information Criterion (BIC)) process the fit residuals, while the four others look for solutions that

  9. Measuring the accuracy of automatic shoeprint recognition methods.

    Science.gov (United States)

    Luostarinen, Tapio; Lehmussola, Antti

    2014-11-01

Shoeprints are an important source of information in criminal investigation, and an increasing number of automatic shoeprint recognition methods have been proposed for identifying the corresponding shoe models. However, comprehensive comparisons among the methods have not previously been made. In this study, an extensive set of methods proposed in the literature was implemented, and their performance was studied under varying conditions. Three datasets of shoeprints of differing quality were used, and the methods were also evaluated with partial and rotated prints. The results show clear differences between the algorithms: while the best-performing method, based on local image descriptors and RANSAC, gives rather good results in most experiments, some methods are almost completely non-robust to any non-idealities in the images. Finally, the results demonstrate that extensive research is still needed to improve the accuracy of automatic recognition of crime-scene prints. © 2014 American Academy of Forensic Sciences.
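The best-performing approach pairs local descriptors with RANSAC. The RANSAC step can be sketched for a pure-translation motion model; real shoeprint matching would use real descriptors and a richer transform, so everything below is illustrative:

```python
import random

def ransac_translation(matches, tol=2.0, iters=200, seed=0):
    """Estimate a 2-D translation from matched keypoint pairs with RANSAC:
    hypothesize from a single match, keep the hypothesis with most inliers.

    matches: list of ((x1, y1), (x2, y2)) correspondences, possibly with outliers.
    """
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.choice(matches)
        dx, dy = x2 - x1, y2 - y1
        inliers = [m for m in matches
                   if abs(m[1][0] - m[0][0] - dx) <= tol
                   and abs(m[1][1] - m[0][1] - dy) <= tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (dx, dy), inliers
    return best_model, best_inliers
```

Because each hypothesis comes from one match, a handful of gross outliers cannot drag the estimate; they simply never collect many inliers.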

  10. An Automatic High Efficient Method for Dish Concentrator Alignment

    Directory of Open Access Journals (Sweden)

    Yong Wang

    2014-01-01

… for the alignment of a faceted solar dish concentrator. The isosceles-triangle configuration of each facet's footholds determines a fixed relation between light-spot displacements and foothold movements, which allows automatic determination of the amount of adjustment. Tests on a 25 kW Stirling Energy System dish concentrator verify the feasibility, accuracy, and efficiency of our method.
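If the facet geometry fixes a linear map from foothold movements to light-spot displacements, the required adjustment is obtained by inverting that map. A sketch for a facet with two adjustable footholds and a hypothetical 2x2 sensitivity matrix:

```python
def solve2x2(A, b):
    """Solve A m = b for a 2x2 matrix A (given as two rows) by Cramer's rule."""
    (a, c), (d_, e) = A
    det = a * e - c * d_
    if abs(det) < 1e-12:
        raise ValueError("singular sensitivity matrix")
    return ((b[0] * e - c * b[1]) / det, (a * b[1] - b[0] * d_) / det)

def foothold_adjustment(sensitivity, spot_displacement):
    """Invert the fixed linear map (spot displacement = sensitivity @ movement)
    to recover the foothold movements that cancel the observed displacement."""
    return solve2x2(sensitivity, spot_displacement)
```

In practice the sensitivity matrix would be derived from the isosceles-triangle foothold geometry; here it is simply assumed known.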

  11. Validated automatic segmentation of AMD pathology including drusen and geographic atrophy in SD-OCT images.

    Science.gov (United States)

    Chiu, Stephanie J; Izatt, Joseph A; O'Connell, Rachelle V; Winter, Katrina P; Toth, Cynthia A; Farsiu, Sina

    2012-01-05

    To automatically segment retinal spectral domain optical coherence tomography (SD-OCT) images of eyes with age-related macular degeneration (AMD) and various levels of image quality to advance the study of retinal pigment epithelium (RPE)+drusen complex (RPEDC) volume changes indicative of AMD progression. A general segmentation framework based on graph theory and dynamic programming was used to segment three retinal boundaries in SD-OCT images of eyes with drusen and geographic atrophy (GA). A validation study for eyes with nonneovascular AMD was conducted, forming subgroups based on scan quality and presence of GA. To test for accuracy, the layer thickness results from two certified graders were compared against automatic segmentation results for 220 B-scans across 20 patients. For reproducibility, automatic layer volumes were compared that were generated from 0° versus 90° scans in five volumes with drusen. The mean differences in the measured thicknesses of the total retina and RPEDC layers were 4.2 ± 2.8 and 3.2 ± 2.6 μm for automatic versus manual segmentation. When the 0° and 90° datasets were compared, the mean differences in the calculated total retina and RPEDC volumes were 0.28% ± 0.28% and 1.60% ± 1.57%, respectively. The average segmentation time per image was 1.7 seconds automatically versus 3.5 minutes manually. The automatic algorithm accurately and reproducibly segmented three retinal boundaries in images containing drusen and GA. This automatic approach can reduce time and labor costs and yield objective measurements that potentially reveal quantitative RPE changes in longitudinal clinical AMD studies. (ClinicalTrials.gov number, NCT00734487.).
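The graph-theory-and-dynamic-programming idea behind such boundary segmentation can be illustrated as a minimum-cost path traced column by column through a cost image. This toy sketch ignores the paper's actual graph construction and cost terms:

```python
def trace_boundary(cost):
    """Dynamic-programming minimum-cost path across the columns of a cost
    image (rows x cols), allowing row moves of -1, 0, +1 per column step --
    the core pattern of graph-based retinal layer segmentation."""
    rows, cols = len(cost), len(cost[0])
    INF = float("inf")
    acc = [[INF] * cols for _ in range(rows)]   # accumulated cost
    back = [[0] * cols for _ in range(rows)]    # backpointers
    for r in range(rows):
        acc[r][0] = cost[r][0]
    for c in range(1, cols):
        for r in range(rows):
            for dr in (-1, 0, 1):
                pr = r + dr
                if 0 <= pr < rows and acc[pr][c - 1] + cost[r][c] < acc[r][c]:
                    acc[r][c] = acc[pr][c - 1] + cost[r][c]
                    back[r][c] = pr
    r = min(range(rows), key=lambda r: acc[r][cols - 1])
    path = [r]
    for c in range(cols - 1, 0, -1):
        r = back[r][c]
        path.append(r)
    return path[::-1]
```

Low-cost pixels would typically mark strong image gradients, so the recovered path hugs the layer boundary while the ±1 move constraint keeps it smooth.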

  12. Automatic reconstruction of fault networks from seismicity catalogs including location uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Y.

    2013-07-01

    Within the framework of plate tectonics, the deformation that arises from the relative movement of two plates occurs across discontinuities in the earth's crust, known as fault zones. Active fault zones are the causal locations of most earthquakes, which suddenly release tectonic stresses within a very short time. In return, fault zones slowly grow by accumulating slip due to such earthquakes by cumulated damage at their tips, and by branching or linking between pre-existing faults of various sizes. Over the last decades, a large amount of knowledge has been acquired concerning the overall phenomenology and mechanics of individual faults and earthquakes: A deep physical and mechanical understanding of the links and interactions between and among them is still missing, however. One of the main issues lies in our failure to always succeed in assigning an earthquake to its causative fault. Using approaches based in pattern-recognition theory, more insight into the relationship between earthquakes and fault structure can be gained by developing an automatic fault network reconstruction approach using high resolution earthquake data sets at largely different scales and by considering individual event uncertainties. This thesis introduces the Anisotropic Clustering of Location Uncertainty Distributions (ACLUD) method to reconstruct active fault networks on the basis of both earthquake locations and their estimated individual uncertainties. This method consists in fitting a given set of hypocenters with an increasing amount of finite planes until the residuals of the fit compare with location uncertainties. After a massive search through the large solution space of possible reconstructed fault networks, six different validation procedures are applied in order to select the corresponding best fault network. Two of the validation steps (cross-validation and Bayesian Information Criterion (BIC)) process the fit residuals, while the four others look for solutions that

  13. Automatic treatment of multiple wound coils in 3D finite element problems including multiply connected regions

    Energy Technology Data Exchange (ETDEWEB)

    Leonard, P.J.; Lai, H.C.; Eastham, J.F.; Al-Akayshee, Q.H. [Univ. of Bath (United Kingdom)

    1996-05-01

    This paper describes an efficient scheme for incorporating multiple wire wound coils into 3D finite element models. The scheme is based on the magnetic scalar representation with an additional basis for each coil. There are no restrictions on the topology of coils with respect to ferromagnetic and conductor regions. Reduced scalar regions and cuts are automatically generated.

  14. Automatic Hypocenter Determination Method in JMA Catalog and its Application

    Science.gov (United States)

    Tamaribuchi, K.

    2017-12-01

The number of detectable earthquakes around Japan has increased as the high-sensitivity seismic observation network has developed. After the 2011 Tohoku-oki earthquake, the number of detectable earthquakes increased dramatically due to aftershocks and induced earthquakes, making manual determination of all hypocenters impossible. The Japan Meteorological Agency (JMA), which produces the earthquake catalog in Japan, developed a new automatic hypocenter determination method and began operating it on April 1, 2016. This method (named the PF method, for Phase combination Forward search) can determine the hypocenters of earthquakes that occur simultaneously by searching for the optimal combination of P- and S-wave arrival times and maximum amplitudes using Bayesian estimation. In the 2016 Kumamoto earthquake sequence, about 70,000 aftershocks were detected automatically during the period from April 14 to the end of May, and the method contributed to real-time monitoring of the seismic activity. The method can also be applied to Earthquake Early Warning (EEW): this variant, called the IPF method, has been the hypocenter determination method of the JMA EEW system since December 2016. Further development can contribute not only to speeding up catalog production but also to improving the reliability of early warnings.
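At its core, automatic hypocenter determination fits a location and origin time to observed arrival times. A toy grid-search sketch over a 2-D epicentre with a uniform velocity; the actual PF method searches over phase combinations with Bayesian estimation, which is far beyond this illustration:

```python
import math

def locate(stations, arrivals, v=6.0):
    """Grid-search the epicentre (x, y) and origin time t0 that minimize the
    squared residuals between observed and predicted P arrivals.

    stations: list of (x, y) positions in km; arrivals: observed times in s;
    v: assumed P velocity in km/s. Grid and units are illustrative.
    """
    best = None
    for xi in range(0, 101):
        for yi in range(0, 101):
            x, y = float(xi), float(yi)
            tt = [math.hypot(x - sx, y - sy) / v for sx, sy in stations]
            # For a fixed (x, y) the least-squares origin time is the mean residual.
            t0 = sum(a - t for a, t in zip(arrivals, tt)) / len(arrivals)
            rss = sum((a - (t0 + t)) ** 2 for a, t in zip(arrivals, tt))
            if best is None or rss < best[0]:
                best = (rss, x, y, t0)
    return best[1], best[2], best[3]
```

Real catalog production adds depth, phase association (deciding which pick belongs to which event), and pick uncertainties; the grid search merely shows the misfit-minimization skeleton.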

  15. Improvement in PWR automatic optimization reloading methods using genetic algorithm

    International Nuclear Information System (INIS)

    Levine, S.H.; Ivanov, K.; Feltus, M.

    1996-01-01

    The objective of using automatic optimized reloading methods is to provide the Nuclear Engineer with an efficient method for reloading a nuclear reactor which results in superior core configurations that minimize fuel costs. Previous methods developed by Levine et al required a large effort to develop the initial core loading using a priority loading scheme. Subsequent modifications to this core configuration were made using expert rules to produce the final core design. Improvements in this technique have been made by using a genetic algorithm to produce improved core reload designs for PWRs more efficiently (authors)
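A genetic algorithm for reload design evolves candidate core loading patterns under a fitness function. A deliberately tiny, mutation-only sketch with an invented peaking-penalty objective; real reload optimization evaluates candidate cores with a neutronics code:

```python
import random

def fitness(layout):
    """Toy objective: penalize high-reactivity fuel (larger values) placed
    near the core centre (index 0) -- a crude stand-in for power peaking."""
    return -sum(v / (i + 1) for i, v in enumerate(layout))

def evolve(layout, generations=200, seed=1):
    """Minimal permutation GA (swap mutation, elitist acceptance) over fuel
    positions: keep a child only if it improves the fitness."""
    rng = random.Random(seed)
    best = list(layout)
    for _ in range(generations):
        child = list(best)
        i, j = rng.randrange(len(child)), rng.randrange(len(child))
        child[i], child[j] = child[j], child[i]
        if fitness(child) > fitness(best):
            best = child
    return best
```

Because mutation only permutes the layout, every candidate conserves the fuel inventory, mirroring the fixed set of assemblies available for a reload.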

  16. Improvement in PWR automatic optimization reloading methods using genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Levine, S H; Ivanov, K; Feltus, M [Pennsylvania State Univ., University Park, PA (United States)

    1996-12-01

    The objective of using automatic optimized reloading methods is to provide the Nuclear Engineer with an efficient method for reloading a nuclear reactor which results in superior core configurations that minimize fuel costs. Previous methods developed by Levine et al required a large effort to develop the initial core loading using a priority loading scheme. Subsequent modifications to this core configuration were made using expert rules to produce the final core design. Improvements in this technique have been made by using a genetic algorithm to produce improved core reload designs for PWRs more efficiently (authors).

  17. A Simple and Automatic Method for Locating Surgical Guide Hole

    Science.gov (United States)

    Li, Xun; Chen, Ming; Tang, Kai

    2017-12-01

Restoration-driven surgical guides are widely used in implant surgery. This study provides a simple and valid method for automatically locating the surgical guide hole, which reduces the dependence on operator experience and improves the design efficiency and quality of the surgical guide. Little literature exists on this topic, and this paper proposes a novel, simple method to solve the problem. A local coordinate system for each objective tooth is geometrically constructed in a CAD system. This coordinate system represents the dental anatomical features well, and the central axis of the objective tooth (which coincides with the corresponding guide-hole axis) can be quickly evaluated in it, completing the location of the guide hole. The proposed method has been verified against two benchmarks: manual operation by a skilled doctor with over 15 years of experience (used in most hospitals) and the popular commercial package Simplant (used in a few hospitals). Both the benchmarks and the proposed method were analyzed for stress distribution during chewing and biting. The stress distribution is visualized and plotted as a graph. The results show that the proposed method yields a much better stress distribution than manual operation and a slightly better one than Simplant, which significantly reduces the risk of cervical-margin collapse and extends the wear life of the restoration.

  18. Automatic generation control with thyristor controlled series compensator including superconducting magnetic energy storage units

    Directory of Open Access Journals (Sweden)

    Saroj Padhan

    2014-09-01

In the present work, an attempt has been made to understand the dynamic performance of Automatic Generation Control (AGC) of a multi-area, multi-unit thermal–thermal power system with consideration of a reheat turbine, Generation Rate Constraint (GRC), and time delay. Initially, the gains of the fuzzy PID controller are optimized using the Differential Evolution (DE) algorithm; the superiority of DE is demonstrated by comparing the results with a Genetic Algorithm (GA). The performance of a Thyristor Controlled Series Compensator (TCSC) is then investigated: a TCSC is placed in the tie-line, and Superconducting Magnetic Energy Storage (SMES) units are considered in both areas. Finally, sensitivity analysis is performed by varying the system parameters and operating load conditions from their nominal values. It is observed that the optimum gains of the proposed controller need not be reset even when the system is subjected to wide variations in loading condition and system parameters.

  19. Semi-Automatic Rating Method for Neutrophil Alkaline Phosphatase Activity.

    Science.gov (United States)

    Sugano, Kanae; Hashi, Kotomi; Goto, Misaki; Nishi, Kiyotaka; Maeda, Rie; Kono, Keigo; Yamamoto, Mai; Okada, Kazunori; Kaga, Sanae; Miwa, Keiko; Mikami, Taisei; Masauzi, Nobuo

    2017-01-01

The neutrophil alkaline phosphatase (NAP) score is a valuable test for the diagnosis of myeloproliferative neoplasms, but it is still rated manually. We therefore developed a semi-automatic rating method using Photoshop® and Image-J, called NAP-PS-IJ. NAP staining was performed by Tomonaga's method on peripheral blood films from three healthy volunteers. At least 30 neutrophils with NAP scores from 0 to 5+ were observed and imaged; the area outside each neutrophil was removed with Image-J. The images were binarized with two different procedures (P1 and P2) in Photoshop®. The NAP-positive area (NAP-PA) and the NAP-positive granule count (NAP-PGC) were measured with Image-J. The NAP-PA in images binarized with P1 differed significantly (P < 0.05) between images with NAP scores from 0 to 3+ (group 1) and those from 4+ to 5+ (group 2). The original images in group 1 were binarized with P2; their NAP-PGC differed significantly (P < 0.05) among all four NAP score groups. The mean NAP-PGC obtained with NAP-PS-IJ correlated well (r = 0.92, P < 0.001) with the results of human examiners. The sensitivity and specificity of NAP-PS-IJ were 60% and 92%, so it may be regarded as a prototype for a fully automatic NAP scoring method. © 2016 Wiley Periodicals, Inc.
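Binarization followed by granule counting is essentially thresholding plus connected-component labelling. A stdlib-only sketch of that step; the threshold and pixel values are invented, and the real workflow ran in Photoshop® and Image-J:

```python
def binarize(img, threshold):
    """Threshold a grayscale image (list of rows) to a 0/1 mask."""
    return [[1 if px >= threshold else 0 for px in row] for row in img]

def count_granules(mask):
    """Count 4-connected components of positive pixels via flood fill,
    analogous to counting NAP-positive granules per neutrophil."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1
                stack = [(r, c)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and mask[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return count
```

The two binarization procedures (P1 and P2) in the paper would correspond to different thresholding pipelines ahead of the same counting step.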

  20. Systems and methods for automatically identifying and linking names in digital resources

    Science.gov (United States)

    Parker, Charles T.; Lyons, Catherine M.; Roston, Gerald P.; Garrity, George M.

    2017-06-06

The present invention provides systems and methods for automatically identifying name-like strings in digital resources and matching these name-like strings against a set of names held in an expertly curated database. For those name-like strings found in the database, the content is enhanced by associating additional matter with the name, where said matter includes information about the name held within the database and pointers to other digital resources that include the same name and its synonyms.
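The patent's pipeline, detect name-like strings, match them against a curated table, attach pointers, can be sketched with a regular expression and a dictionary. The table contents, pointer format, and the genus-species name pattern here are all hypothetical:

```python
import re

# Hypothetical curated-name table mapping a name to a database pointer.
CURATED = {"Escherichia coli": "names/517", "Bacillus subtilis": "names/1027"}

# Candidate pattern: a capitalized word followed by a lowercase word
# (a crude genus-species shape; real matchers are far more careful).
NAME_LIKE = re.compile(r"\b[A-Z][a-z]+ [a-z]+\b")

def link_names(text):
    """Find name-like strings and attach pointers for those present in the
    curated table, leaving unknown candidates untouched."""
    hits = []
    for m in NAME_LIKE.finditer(text):
        name = m.group(0)
        if name in CURATED:
            hits.append((name, CURATED[name]))
    return hits
```

Candidates that merely look name-like (e.g. a capitalized sentence opener) are filtered out by the curated-table membership test, which is the point of matching against an expert database rather than trusting the pattern alone.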

  1. Automatic methods for processing track-detector data at the PAVICOM facility

    International Nuclear Information System (INIS)

    Aleksandrov, A.B.; Goncharova, L.A.; Polukhina, N.G.; Fejnberg, E.L.; Davydov, D.A.; Publichenko, P.A.; Roganova, T.M.

    2007-01-01

New automatic methods substantially simplify and speed up the processing of track-detector data. They make it possible to handle large data files and appreciably improve the statistics, which opens the way to new experiments using large-volume targets and large-area emulsion and solid-state track detectors. Consequently, the problem of training competent physicists able to work with modern automatic equipment is very relevant. About ten Moscow students working at LPI on the PAVICOM facility master the new methods every year, whereas most students in high-energy physics are taught only archaic manual methods of processing track-detector data. In 2005, on the basis of the PAVICOM facility and the physics training course of MSU, a new laboratory exercise was prepared for determining the energy of neutrons passing through nuclear emulsion; it gives students basic skills in processing track-detector data on an automatic facility and can be included in the curriculum of any physics department. Specialists who master automatic processing methods through the simple and clear example of track detectors will be able to apply their knowledge in various areas of science and technology. The organization of upper-division courses is a new, additional use of the PAVICOM facility described in an earlier paper [4].

  2. Rotor assembly and method for automatically processing liquids

    Science.gov (United States)

    Burtis, C.A.; Johnson, W.F.; Walker, W.A.

    1992-12-22

    A rotor assembly is described for performing a relatively large number of processing steps upon a sample, such as a whole blood sample, and a diluent, such as water. It includes a rotor body for rotation about an axis and includes a network of chambers within which various processing steps are performed upon the sample and diluent and passageways through which the sample and diluent are transferred. A transfer mechanism is movable through the rotor body by the influence of a magnetic field generated adjacent the transfer mechanism and movable along the rotor body, and the assembly utilizes centrifugal force, a transfer of momentum and capillary action to perform any of a number of processing steps such as separation, aliquoting, transference, washing, reagent addition and mixing of the sample and diluent within the rotor body. The rotor body is particularly suitable for automatic immunoassay analyses. 34 figs.

  3. Vortex flows in the solar chromosphere. I. Automatic detection method

    Science.gov (United States)

    Kato, Y.; Wedemeyer, S.

    2017-05-01

    Solar "magnetic tornadoes" are produced by rotating magnetic field structures that extend from the upper convection zone and the photosphere to the corona of the Sun. Recent studies show that these kinds of rotating features are an integral part of atmospheric dynamics and occur on a large range of spatial scales. A systematic statistical study of magnetic tornadoes is a necessary next step towards understanding their formation and their role in mass and energy transport in the solar atmosphere. For this purpose, we develop a new automatic detection method for chromospheric swirls, meaning the observable signature of solar tornadoes or, more generally, chromospheric vortex flows and rotating motions. Unlike existing studies that rely on visual inspections, our new method combines a line integral convolution (LIC) imaging technique and a scalar quantity that represents a vortex flow on a two-dimensional plane. We have tested two detection algorithms, based on the enhanced vorticity and vorticity strength quantities, by applying them to three-dimensional numerical simulations of the solar atmosphere with CO5BOLD. We conclude that the vorticity strength method is superior compared to the enhanced vorticity method in all aspects. Applying the method to a numerical simulation of the solar atmosphere reveals very abundant small-scale, short-lived chromospheric vortex flows that have not been found previously by visual inspection.

  4. Improving the local wavenumber method by automatic DEXP transformation

    Science.gov (United States)

    Abbas, Mahmoud Ahmed; Fedi, Maurizio; Florio, Giovanni

    2014-12-01

In this paper we present a new method for source-parameter estimation based on the local wavenumber function. We make use of the stable properties of the Depth from EXtreme Points (DEXP) method, in which the depth to the source is determined at the extreme points of the field scaled with a power law of the altitude. The method is thus particularly suited to local wavenumbers of high order, as it is able to overcome their known instability caused by the use of high-order derivatives. The DEXP transformation has a notable property when applied to the local wavenumber function: the scaling law is independent of the structural index. Thus, unlike the DEXP transformation applied directly to potential fields, the local-wavenumber DEXP transformation is fully automatic and may be implemented as a very fast imaging method, mapping every kind of source at the correct depth. The simultaneous presence of sources with different homogeneity degrees can also be treated easily and correctly. The method was applied to synthetic and real examples from Bulgaria and Italy, and the results agree well with known information about the causative sources.

  5. Reliable clarity automatic-evaluation method for optical remote sensing images

    Science.gov (United States)

    Qin, Bangyong; Shang, Ren; Li, Shengyang; Hei, Baoqin; Liu, Zhiwen

    2015-10-01

Image clarity, which reflects the sharpness of object edges in an image, is an important quality-evaluation index for optical remote sensing images. Researchers at home and abroad have done much work on clarity estimation. Common clarity-estimation methods for digital images include frequency-domain methods, statistical parametric methods, gradient-function methods, and edge-acutance methods. Frequency-domain methods are accurate but computationally involved and hard to automate. Statistical parametric methods and gradient-function methods are both sensitive to image clarity, but their results are easily affected by the complexity of the image. The edge-acutance method is an effective approach to clarity estimation, but it requires manual selection of edges. Owing to these limits in accuracy, consistency, or automation, the existing methods are not well suited to quality evaluation of optical remote sensing images. This article proposes a new clarity-evaluation method based on the principle of the edge-acutance algorithm. In the new method, edge-detection and gradient-search algorithms automatically locate object edges in the image, and the calculation of edge sharpness is improved. The new method has been tested on several groups of optical remote sensing images; compared with existing automatic evaluation methods, it performs better in both accuracy and consistency. Thus, the new method is an effective clarity-evaluation method for optical remote sensing images.
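An edge-acutance-style clarity score is, at heart, a gradient-energy measure over the image. A minimal finite-difference sketch; the paper's method adds automatic edge detection and an improved sharpness formula not shown here:

```python
def acutance(img):
    """Mean squared gradient magnitude (forward finite differences) over a
    grayscale image given as a list of rows; sharper edges score higher."""
    rows, cols = len(img), len(img[0])
    total, n = 0.0, 0
    for r in range(rows - 1):
        for c in range(cols - 1):
            gx = img[r][c + 1] - img[r][c]   # horizontal gradient
            gy = img[r + 1][c] - img[r][c]   # vertical gradient
            total += gx * gx + gy * gy
            n += 1
    return total / n
```

A step edge concentrates its intensity change into one pixel pair and scores higher than the same change spread over several pixels, which is why the measure tracks perceived sharpness.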

  6. A model based method for automatic facial expression recognition

    NARCIS (Netherlands)

    Kuilenburg, H. van; Wiering, M.A.; Uyl, M. den

    2006-01-01

    Automatic facial expression recognition is a research topic with interesting applications in the field of human-computer interaction, psychology and product marketing. The classification accuracy for an automatic system which uses static images as input is however largely limited by the image

  7. Research of x-ray automatic image mosaic method

    Science.gov (United States)

    Liu, Bin; Chen, Shunan; Guo, Lianpeng; Xu, Wanpeng

    2013-10-01

    Image mosaicking has wide application value in medical image analysis: it spatially matches a series of mutually overlapping images and builds from them a single seamless, high-quality image with high resolution and a wide field of view. In this paper, the method of grayscale-cut pseudo-color enhancement was first used to complete the mapping transformation from gray values to pseudo-color, and SIFT features were extracted from the images. Then, using normalized cross-correlation (NCC) as the similarity measure, RANSAC (Random Sample Consensus) was used to reject false feature matches and so complete the exact matching of feature points. Finally, seamless mosaicking and color fusion were completed using multi-level wavelet decomposition. Experiments show that this method can effectively improve the precision and automation of medical image mosaicking, and it provides an effective technical approach to automatic mosaicking of medical X-ray images.
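    A toy illustration of only the RANSAC step named above (the record combines it with SIFT features and NCC matching, which are not reproduced here), for the simplest possible motion model: a pure 2-D translation between putative point correspondences.

    ```python
    import random

    def ransac_translation(matches, iters=200, tol=2.0, seed=0):
        """matches: list of ((x1, y1), (x2, y2)) putative correspondences.
        Returns the estimated (dx, dy) translation and the inlier matches."""
        rng = random.Random(seed)
        best_inliers = []
        for _ in range(iters):
            (x1, y1), (x2, y2) = rng.choice(matches)   # minimal sample: 1 pair
            dx, dy = x2 - x1, y2 - y1
            inliers = [m for m in matches
                       if abs(m[1][0] - m[0][0] - dx) < tol
                       and abs(m[1][1] - m[0][1] - dy) < tol]
            if len(inliers) > len(best_inliers):
                best_inliers = inliers
        # Refit on all inliers: average offset over the consensus set.
        n = len(best_inliers)
        dx = sum(m[1][0] - m[0][0] for m in best_inliers) / n
        dy = sum(m[1][1] - m[0][1] for m in best_inliers) / n
        return (dx, dy), best_inliers
    ```

    With a few gross outliers mixed into otherwise consistent matches, the consensus set recovers the true offset, which is exactly why RANSAC is used to discard faulty feature matches before mosaicking.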

  8. Simple Methods for Scanner Drift Normalization Validated for Automatic Segmentation of Knee Magnetic Resonance Imaging

    DEFF Research Database (Denmark)

    Dam, Erik Bjørnager

    2018-01-01

    Scanner drift is a well-known magnetic resonance imaging (MRI) artifact characterized by gradual signal degradation and scan intensity changes over time. In addition, hardware and software updates may imply abrupt changes in signal. The combined effects are particularly challenging for automatic image analysis methods used in longitudinal studies. The implication is increased measurement variation and a risk of bias in the estimations (e.g. in the volume change for a structure). We proposed two quite different approaches for scanner drift normalization and demonstrated the performance for segmentation of knee MRI using the fully automatic KneeIQ framework. The validation included a total of 1975 scans from both high-field and low-field MRI. The results demonstrated that the pre-processing method denoted Atlas Affine Normalization significantly removed scanner drift effects and ensured…
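    The record names its method "Atlas Affine Normalization" without detailing it here, so the following is only a hedged sketch of the generic idea behind an affine (linear) intensity correction: remap each scan's intensities so their mean and standard deviation match a fixed atlas/reference, which cancels gradual intensity drift between time points.

    ```python
    def affine_normalize(scan, ref_mean, ref_std):
        """Linearly map intensities so their mean/std match a reference atlas."""
        n = len(scan)
        m = sum(scan) / n
        s = (sum((v - m) ** 2 for v in scan) / n) ** 0.5
        a = ref_std / s            # scale
        b = ref_mean - a * m       # offset
        return [a * v + b for v in scan]
    ```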

  9. A simple method for automatic measurement of excitation functions

    International Nuclear Information System (INIS)

    Ogawa, M.; Adachi, M.; Arai, E.

    1975-01-01

    An apparatus has been constructed to perform the sequence control of a beam-analysing magnet for automatic excitation function measurements. This device is also applied to the feedback control of the magnet to lock the beam energy. (Auth.)

  10. A method of automatic data processing in radiometric control

    International Nuclear Information System (INIS)

    Adonin, V.M.; Gulyukina, N.A.; Nemirov, Yu.V.; Mogil'nitskij, M.I.

    1980-01-01

    The algorithm for automatic data processing in gamma radiography of products is described. A specific feature of the processing is its rapidity, due to the use of recurrent evaluation. Experimental data from in-line inspection are presented. The results show that automatic signal processing is applicable to testing under industrial conditions, which would make it possible to increase testing efficiency, eliminate subjectivity in the assessment of test results and improve working conditions.

  11. Optical Methods For Automatic Rating Of Engine Test Components

    Science.gov (United States)

    Pritchard, James R.; Moss, Brian C.

    1989-03-01

    In recent years, increasing commercial and legislative pressures on automotive engine manufacturers, including increased oil drain intervals, cleaner exhaust emissions and high specific power outputs, have led to increasing demands on lubricating oil performance. Lubricant performance is defined by bench engine tests run under closely controlled conditions. After test, engines are dismantled and the parts rated for wear and accumulation of deposit. This rating must be carried out consistently in laboratories throughout the world in order to ensure lubricant quality meets the specified standards. To this end, rating technicians evaluate components following closely defined procedures. This process is time consuming, inaccurate and subject to drift, requiring regular recalibration of raters by means of international rating workshops. This paper describes two instruments for automatic rating of engine parts. The first uses a laser to determine the degree of polishing of the engine cylinder bore caused by the reciprocating action of the piston. This instrument has been developed to prototype stage by the NDT Centre at Harwell under contract to Exxon Chemical, and is planned for production within the next twelve months. The second instrument uses red and green filtered light to determine the type, quality and position of deposit formed on the piston surfaces. The latter device has undergone a feasibility study, but no prototype exists.

  12. CAD-based automatic modeling method for Geant4 geometry model through MCAM

    International Nuclear Information System (INIS)

    Wang, D.; Nie, F.; Wang, G.; Long, P.; LV, Z.

    2013-01-01

    The full text of publication follows. Geant4 is a widely used Monte Carlo transport simulation package. Before calculating with Geant4, the calculation model needs to be established, described either in the Geometry Description Markup Language (GDML) or in C++. However, it is time-consuming and error-prone to describe the models manually in GDML. Automatic modeling methods have been developed recently, but most existing modeling programs have problems: some are not accurate, or are not adapted to a specific CAD format. To convert CAD models into GDML models accurately, a Geant4 computer-aided design (CAD) based modeling method was developed for automatically converting complex CAD geometry models into GDML geometry models. The essence of this method is the conversion between the boundary representation (B-REP) used by CAD models and the constructive solid geometry (CSG) used by GDML models. First, the CAD model is decomposed into several simple solids, each with a single closed shell. Each simple solid is then decomposed into a set of convex shells, and the corresponding GDML convex basic solids are generated from the boundary surfaces obtained from the topological characteristics of each convex shell. After the generation of these solids, the GDML model is completed with a series of Boolean operations. This method was adopted in the CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport (MCAM) and tested with several models, including the examples in the Geant4 installation package. The results showed that this method can convert standard CAD models accurately and can be used for Geant4 automatic modeling. (authors)

  13. Catalyst support structure, catalyst including the structure, reactor including a catalyst, and methods of forming same

    Science.gov (United States)

    Van Norman, Staci A.; Aston, Victoria J.; Weimer, Alan W.

    2017-05-09

    Structures, catalysts, and reactors suitable for use for a variety of applications, including gas-to-liquid and coal-to-liquid processes and methods of forming the structures, catalysts, and reactors are disclosed. The catalyst material can be deposited onto an inner wall of a microtubular reactor and/or onto porous tungsten support structures using atomic layer deposition techniques.

  14. Development of automatic control method for cryopump system for JT-60 neutral beam injector

    International Nuclear Information System (INIS)

    Shibanuma, Kiyoshi; Akino, Noboru; Dairaku, Masayuki; Ohuchi, Yutaka; Shibata, Takemasa

    1991-10-01

    A cryopump system for the JT-60 neutral beam injector (NBI) is composed of 14 cryopumps with the largest total pumping speed in the world, 20000 m³/s, which are cooled by liquid helium through a long-distance liquid helium transfer line of about 500 m from a helium refrigerator with the largest capacity in Japan, 3000 W at 3.6 K. An automatic control method for the cryopump system has been developed and tested. Features of the automatic control method are as follows. 1) Suppression of the thermal imbalance during cool-down of the 14 cryopumps. 2) Stable cooling of the cryopumps by supplying liquid helium to the six cryopanels by natural circulation in steady-state mode. 3) Stable liquid helium supply to the cryopumps from the liquid helium dewar in all cryopump operation modes, taking into account the helium quantities held in the respective components of the closed helium loop. 4) Stable control of the helium refrigerator against fluctuations in the thermal load from the cryopumps and changes of cryopump operation mode. In automatic operation of the cryopump system by the newly developed control method, the system including the refrigerator was operated stably in all cryopump operation modes: the cool-down of the 14 cryopumps was completed within 16 hours from the start of system cool-down, and the cryopumps were cooled stably by natural circulation in steady-state mode. (author)

  15. Automatic intra-modality brain image registration method

    International Nuclear Information System (INIS)

    Whitaker, J.M.; Ardekani, B.A.; Braun, M.

    1996-01-01

    Full text: Registration of 3D brain images of the same or different subjects has potential importance in clinical diagnosis, treatment planning and neurological research. The broad aim of our work is to produce an automatic and robust intra-modality brain image registration algorithm for intra-subject and inter-subject studies. Our algorithm is composed of two stages. Initial alignment is achieved by finding the values of nine transformation parameters (representing translation, rotation and scale) that minimise the non-overlapping regions of the head. This is achieved by minimisation of the sum of the exclusive OR of two binary head images, produced using the head extraction procedure described by Ardekani et al. (J Comput Assist Tomogr, 19:613-623, 1995). The initial alignment successfully determines the scale parameters and the gross translation and rotation parameters. Fine alignment uses an objective function described for inter-modality registration in Ardekani et al. (ibid.). The algorithm segments one of the images to be aligned into a set of connected components using K-means clustering; registration is then achieved by minimising the K-means variance of the segmentation induced in the other image. The similarity of images of the same modality makes the method attractive for intra-modality registration. A 3D MR image with voxel dimensions of 2x2x6 mm was misaligned; the registered image shows visually accurate registration, and the average displacement of a pixel from its correct location was measured to be 3.3 mm. The algorithm was tested on intra-subject MR images and was found to produce good qualitative results. Using the data available, the algorithm produced promising qualitative results in intra-subject registration. Further work is necessary for its application to inter-subject registration, due to the large variability in brain structure between subjects. Clinical evaluation of the algorithm for selected applications is required
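    A toy illustration of the fine-alignment objective described in this record: one image is segmented (here the integer labels stand in for a K-means segmentation of image A), and a candidate alignment is scored by the within-class variance that this segmentation induces in the intensities of image B. Good alignment puts similar intensities in each class, so the induced variance is low; this sketch shows only the objective, not the optimization over transformation parameters.

    ```python
    def induced_variance(labels, values):
        """Within-class variance of `values` under the segmentation `labels`."""
        groups = {}
        for lab, v in zip(labels, values):
            groups.setdefault(lab, []).append(v)
        total = 0.0
        for vals in groups.values():
            m = sum(vals) / len(vals)
            total += sum((v - m) ** 2 for v in vals)
        return total / len(values)
    ```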

  16. A New Automatic Method of Urban Areas Mapping in East Asia from LANDSAT Data

    Science.gov (United States)

    XU, R.; Jia, G.

    2012-12-01

    Cities, as places where human activities are concentrated, account for a small percentage of global land cover but are frequently cited as chief causes of, and solutions to, climate, biogeochemistry and hydrology processes at local, regional and global scales. Accompanying uncontrolled economic growth, urban sprawl has been attributed to the accelerating integration of East Asia into the world economy and has involved dramatic changes in urban form and land use. To understand the impact of urban extent on biogeophysical processes, reliable mapping of built-up areas is particularly essential for East Asian cities, which are characterized by smaller patches, greater fragmentation and a lower fraction of natural cover within the urban landscape than cities in the West. Segmentation of urban land from other land-cover types in remote sensing imagery can be done by standard classification processes, as well as by a logic-rule calculation based on spectral indices and their derivations. Efforts to establish such a logic rule with no threshold, for fully automatic mapping, are highly worthwhile. Existing automatic methods are reviewed, and then a proposed approach is introduced, including the calculation of a new index and an improved logic rule. Following this, the existing automatic methods and the proposed approach are compared in a common context. The proposed approach is then tested separately on large, medium and small cities in East Asia selected from different LANDSAT images. The results are promising, as the approach can efficiently segment urban areas, even in more complex East Asian cities. Key words: urban extraction; automatic method; logic rule; LANDSAT images; East Asia. [Figure: the proposed approach applied to extraction of urban built-up areas in Guangzhou, China]

  17. R and D on automatic modeling methods for Monte Carlo codes FLUKA

    International Nuclear Information System (INIS)

    Wang Dianxi; Hu Liqin; Wang Guozhong; Zhao Zijia; Nie Fanzhi; Wu Yican; Long Pengcheng

    2013-01-01

    FLUKA is a fully integrated particle physics Monte Carlo simulation package. The geometry model must be created before calculation; however, it is time-consuming and error-prone to describe geometry models manually. This study developed an automatic modeling method which can automatically convert computer-aided design (CAD) geometry models into FLUKA models. The conversion program was integrated into the CAD/image-based automatic modeling program for nuclear and radiation transport simulation (MCAM), and its correctness has been demonstrated. (authors)

  18. Automatic Detection of Acromegaly From Facial Photographs Using Machine Learning Methods.

    Science.gov (United States)

    Kong, Xiangyi; Gong, Shun; Su, Lijuan; Howard, Newton; Kong, Yanguo

    2018-01-01

    Automatic early detection of acromegaly is theoretically possible from facial photographs, which could lessen its prevalence and increase the probability of cure. In this study, several popular machine learning algorithms were trained on a retrospective development dataset consisting of 527 acromegaly patients and 596 normal subjects. We first used OpenCV to detect the face bounding box, then cropped and resized it to the same pixel dimensions. From the detected faces, locations of facial landmarks, which are potential clinical indicators, were extracted. Frontalization was then adopted to synthesize frontal-facing views to improve performance. Several popular machine learning methods, including LM, KNN, SVM, RT, CNN and EM, were used to automatically identify acromegaly from the detected facial photographs, the extracted facial landmarks and the synthesized frontal faces. The trained models were evaluated on a separate dataset, half of which was diagnosed as acromegaly by the growth hormone suppression test. The best of the proposed methods showed a PPV of 96%, an NPV of 95%, a sensitivity of 96% and a specificity of 96%. Artificial intelligence can automatically detect acromegaly early with high sensitivity and specificity. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  19. Microfluidic devices and methods including porous polymer monoliths

    Science.gov (United States)

    Hatch, Anson V; Sommer, Gregory J; Singh, Anup K; Wang, Ying-Chih; Abhyankar, Vinay V

    2014-04-22

    Microfluidic devices and methods including porous polymer monoliths are described. Polymerization techniques may be used to generate porous polymer monoliths having pores defined by a liquid component of a fluid mixture. The fluid mixture may contain iniferters and the resulting porous polymer monolith may include surfaces terminated with iniferter species. Capture molecules may then be grafted to the monolith pores.

  20. Methods of producing adsorption media including a metal oxide

    Science.gov (United States)

    Mann, Nicholas R; Tranter, Troy J

    2014-03-04

    Methods of producing a metal oxide are disclosed. The method comprises dissolving a metal salt in a reaction solvent to form a metal salt/reaction solvent solution. The metal salt is converted to a metal oxide and a caustic solution is added to the metal oxide/reaction solvent solution to adjust the pH of the metal oxide/reaction solvent solution to less than approximately 7.0. The metal oxide is precipitated and recovered. A method of producing adsorption media including the metal oxide is also disclosed, as is a precursor of an active component including particles of a metal oxide.

  1. Evaluation of an automatic brain segmentation method developed for neonates on adult MR brain images

    Science.gov (United States)

    Moeskops, Pim; Viergever, Max A.; Benders, Manon J. N. L.; Išgum, Ivana

    2015-03-01

    Automatic brain tissue segmentation is of clinical relevance in images acquired at all ages. The literature presents a clear distinction between methods developed for MR images of infants, and methods developed for images of adults. The aim of this work is to evaluate a method developed for neonatal images in the segmentation of adult images. The evaluated method employs supervised voxel classification in subsequent stages, exploiting spatial and intensity information. Evaluation was performed using images available within the MRBrainS13 challenge. The obtained average Dice coefficients were 85.77% for grey matter, 88.66% for white matter, 81.08% for cerebrospinal fluid, 95.65% for cerebrum, and 96.92% for intracranial cavity, currently resulting in the best overall ranking. The possibility of applying the same method to neonatal as well as adult images can be of great value in cross-sectional studies that include a wide age range.

  2. Feature extraction and descriptor calculation methods for automatic georeferencing of Philippines' first microsatellite imagery

    Science.gov (United States)

    Tupas, M. E. A.; Dasallas, J. A.; Jiao, B. J. D.; Magallon, B. J. P.; Sempio, J. N. H.; Ramos, M. K. F.; Aranas, R. K. D.; Tamondong, A. M.

    2017-10-01

    The FAST-SIFT corner detector and descriptor extractor combination was used to automatically georeference DIWATA-1 Spaceborne Multispectral Imager (SMI) images. The Features from Accelerated Segment Test (FAST) algorithm detects corners, or keypoints, in an image, and these robustly detected keypoints have well-defined positions. Descriptors were computed using the Scale-Invariant Feature Transform (SIFT) extractor. The FAST-SIFT method effectively matched SMI same-subscene images acquired by the NIR sensor, and was also tested in stitching NIR images with varying subscenes swept by the camera. The slave images were matched to the master image, and the keypoints served as the ground control points. Keypoints are matched on the basis of their descriptor vectors: nearest-neighbor matching is employed using a metric distance between the descriptors, such as the Euclidean or city-block distance. Rough matching outputs not only correct matches but also faulty ones. A previous work on automatic georeferencing incorporated a geometric restriction; in this work, we applied a simplified version of that learning method. Random sample consensus (RANSAC) was used to eliminate faulty matches and ensure the accuracy of the feature points from which the transformation parameters were derived: it identifies whether a point fits the transformation function and returns the inlier matches. The transformation matrix was solved with affine, projective and polynomial models. The accuracy of the automatic georeferencing method was determined by calculating the RMSE of randomly selected interest points between the master image and the transformed slave image.
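    A small illustration of the nearest-neighbor descriptor matching step mentioned above, with the two metrics named in the abstract (Euclidean and city block). The descriptors here are short lists of floats standing in for real 128-D SIFT vectors; the matching logic is the same.

    ```python
    def match_descriptors(desc_a, desc_b, metric="euclidean"):
        """For each descriptor in desc_a, index of its nearest neighbor in desc_b."""
        def dist(u, v):
            if metric == "cityblock":
                return sum(abs(x - y) for x, y in zip(u, v))
            return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5
        return [min(range(len(desc_b)), key=lambda j: dist(u, desc_b[j]))
                for u in desc_a]
    ```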

  3. Unsteady panel method for complex configurations including wake modeling

    CSIR Research Space (South Africa)

    Van Zyl, Lourens H

    2008-01-01

    Full text available. Implementations of the DLM are, however, not very versatile in terms of the geometries that can be modeled. The ZONA6 code offers a versatile surface-panel body model including a separated wake model, but uses a pressure panel method for lifting surfaces. This paper...

  4. CURRENT STATE ANALYSIS OF AUTOMATIC BLOCK SYSTEM DEVICES, METHODS OF ITS SERVICE AND MONITORING

    Directory of Open Access Journals (Sweden)

    A. M. Beznarytnyy

    2014-01-01

    Full text available. Purpose. Development of a formalized description of the numerical-code automatic block system, based on an analysis of its characteristic failures and of its maintenance procedure. Methodology. Theoretical and analytical methods were used for this research. Findings. Typical failures of automatic block systems were analyzed and the main causes of failure were identified. It was determined that the majority of failures occur due to defects in the maintenance system. The advantages and disadvantages of the current service technology for the automatic block system were analyzed, and the operations that can be automated by means of technical diagnostics were identified. A formal description of the numerical-code automatic block system as a graph in the state space of the system was carried out. Originality. A state graph of the numerical-code automatic block system is offered that takes into account the gradual transition from the serviceable condition to the loss of efficiency. This allows diagnostic information to be selected according to attributes and increases the effectiveness of recovery operations in the case of a malfunction. Practical value. The obtained results of the analysis and the proposed state graph can be used as the basis for the development of new diagnostic devices for the automatic block system, which in turn will improve the efficiency and servicing of automatic block system devices in general.

  5. Initiation devices, initiation systems including initiation devices and related methods

    Energy Technology Data Exchange (ETDEWEB)

    Daniels, Michael A.; Condit, Reston A.; Rasmussen, Nikki; Wallace, Ronald S.

    2018-04-10

    Initiation devices may include at least one substrate, an initiation element positioned on a first side of the at least one substrate, and a spark gap electrically coupled to the initiation element and positioned on a second side of the at least one substrate. Initiation devices may include a plurality of substrates where at least one substrate of the plurality of substrates is electrically connected to at least one adjacent substrate of the plurality of substrates with at least one via extending through the at least one substrate. Initiation systems may include such initiation devices. Methods of igniting energetic materials include passing a current through a spark gap formed on at least one substrate of the initiation device, passing the current through at least one via formed through the at least one substrate, and passing the current through an explosive bridge wire of the initiation device.

  6. Method for automatic filling of nuclear fuel rod cladding tubes

    International Nuclear Information System (INIS)

    Bezold, H.

    1979-01-01

    Prior to welding on the end caps, the zirconium alloy cladding tubes are automatically filled with nuclear fuel pellets and ceramic insulating pellets. The pellets are loaded into magazine drums and led through a drying oven to a discharging station. The empty cladding tubes are brought to this discharging station and filled: a filling ram pushes the columns of pellets out of the magazine tubes of the magazine drum into the cladding tube. Weighing and length measurement verify the filled state of the cladding tube. The cladding tubes are then conveyed to the welding station on a conveyor belt. (DG) [de

  7. Manual versus automatic bladder wall thickness measurements: a method comparison study

    NARCIS (Netherlands)

    Oelke, M.; Mamoulakis, C.; Ubbink, D.T.; de la Rosette, J.J.; Wijkstra, H.

    2009-01-01

    Purpose To compare repeatability and agreement of conventional ultrasound bladder wall thickness (BWT) measurements with automatically obtained BWT measurements by the BVM 6500 device. Methods Adult patients with lower urinary tract symptoms, urinary incontinence, or postvoid residual urine were

  8. Robust methods for automatic image-to-world registration in cone-beam CT interventional guidance

    International Nuclear Information System (INIS)

    Dang, H.; Otake, Y.; Schafer, S.; Stayman, J. W.; Kleinszig, G.; Siewerdsen, J. H.

    2012-01-01

    Purpose: Real-time surgical navigation relies on accurate image-to-world registration to align the coordinate systems of the image and patient. Conventional manual registration can present a workflow bottleneck and is prone to manual error and intraoperator variability. This work reports alternative means of automatic image-to-world registration, each method involving an automatic registration marker (ARM) used in conjunction with C-arm cone-beam CT (CBCT). The first involves a Known-Model registration method in which the ARM is a predefined tool, and the second is a Free-Form method in which the ARM is freely configurable. Methods: Studies were performed using a prototype C-arm for CBCT and a surgical tracking system. A simple ARM was designed with markers comprising a tungsten sphere within infrared reflectors to permit detection of markers in both x-ray projections and by an infrared tracker. The Known-Model method exercised a predefined specification of the ARM in combination with 3D-2D registration to estimate the transformation that yields the optimal match between forward projection of the ARM and the measured projection images. The Free-Form method localizes markers individually in projection data by a robust Hough transform approach extended from previous work, backprojected to 3D image coordinates based on C-arm geometric calibration. Image-domain point sets were transformed to world coordinates by rigid-body point-based registration. The robustness and registration accuracy of each method were tested in comparison to manual registration across a range of body sites (head, thorax, and abdomen) of interest in CBCT-guided surgery, including cases with interventional tools in the radiographic scene. Results: The automatic methods exhibited similar target registration error (TRE) and were comparable or superior to manual registration for placement of the ARM within ∼200 mm of C-arm isocenter. Marker localization in projection data was robust across all

  9. Predictability of monthly temperature and precipitation using automatic time series forecasting methods

    Science.gov (United States)

    Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris

    2018-02-01

    We investigate the predictability of monthly temperature and precipitation by applying automatic univariate time series forecasting methods to a sample of 985 40-year-long monthly temperature and 1552 40-year-long monthly precipitation time series. The methods include a naïve one based on the monthly values of the last year, as well as the random walk (with drift), AutoRegressive Fractionally Integrated Moving Average (ARFIMA), exponential smoothing state-space model with Box-Cox transformation, ARMA errors, Trend and Seasonal components (BATS), simple exponential smoothing, Theta and Prophet methods. Prophet is a recently introduced model inspired by the nature of time series forecasted at Facebook and has not been applied to hydrometeorological time series before, while the use of random walk, BATS, simple exponential smoothing and Theta is rare in hydrology. The methods are tested in performing multi-step ahead forecasts for the last 48 months of the data. We further investigate how different choices of handling the seasonality and non-normality affect the performance of the models. The results indicate that: (a) all the examined methods apart from the naïve and random walk ones are accurate enough to be used in long-term applications; (b) monthly temperature and precipitation can be forecasted to a level of accuracy which can barely be improved using other methods; (c) the externally applied classical seasonal decomposition results mostly in better forecasts compared to the automatic seasonal decomposition used by the BATS and Prophet methods; and (d) Prophet is competitive, especially when it is combined with externally applied classical seasonal decomposition.
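    Sketches of the two simplest methods in the comparison above: the naïve method based on the monthly values of the last year, and simple exponential smoothing. These are the standard textbook forms, not the exact implementations used in the study.

    ```python
    def naive_monthly(series, horizon):
        """Forecast each future month by its value one year earlier."""
        last_year = series[-12:]
        return [last_year[h % 12] for h in range(horizon)]

    def simple_exp_smoothing(series, alpha, horizon):
        """Flat forecast at the exponentially smoothed level of the series."""
        level = series[0]
        for v in series[1:]:
            level = alpha * v + (1 - alpha) * level
        return [level] * horizon
    ```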

  10. 10 CFR Appendix J1 to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Automatic and Semi-Automatic Clothes...

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 3 2010-01-01 2010-01-01 false Uniform Test Method for Measuring the Energy Consumption... Energy Consumption of Automatic and Semi-Automatic Clothes Washers The provisions of this appendix J1... means for determining the energy consumption of a clothes washer with an adaptive control system...

  11. Method and apparatus for automatic control of a humanoid robot

    Science.gov (United States)

    Abdallah, Muhammad E (Inventor); Platt, Robert (Inventor); Wampler, II, Charles W. (Inventor); Reiland, Matthew J (Inventor); Sanders, Adam M (Inventor)

    2013-01-01

    A robotic system includes a humanoid robot having a plurality of joints adapted for force control with respect to an object acted upon by the robot, a graphical user interface (GUI) for receiving an input signal from a user, and a controller. The GUI provides the user with intuitive programming access to the controller. The controller controls the joints using an impedance-based control framework, which provides object level, end-effector level, and/or joint space-level control of the robot in response to the input signal. A method for controlling the robotic system includes receiving the input signal via the GUI, e.g., a desired force, and then processing the input signal using a host machine to control the joints via an impedance-based control framework. The framework provides object level, end-effector level, and/or joint space-level control of the robot, and allows for functional-based GUI to simplify implementation of a myriad of operating modes.
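    A hypothetical one-dimensional sketch of the impedance-based idea described above: the commanded force follows a virtual spring-damper between the actual state and a desired state. The real controller works at the object, end-effector and joint-space levels; none of its actual API is shown here, and the function name and gains are illustrative only.

    ```python
    def impedance_force(x_des, x, v, stiffness, damping):
        """F = K * (x_des - x) - B * v : restoring force toward the target."""
        return stiffness * (x_des - x) - damping * v
    ```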

  12. Automatic speech recognition (zero crossing method). Automatic recognition of isolated vowels

    International Nuclear Information System (INIS)

    Dupeyrat, Benoit

    1975-01-01

    This note describes a method for recognizing isolated vowels using preprocessing of the vocal signal. The processing extracts the extrema of the vocal signal and the time intervals separating them (the zero-crossing distances of the first derivative of the signal). Vowel recognition uses normalized histograms of these interval values: the program computes a distance between the histogram of the sound to be recognized and histogram models built during a learning phase. The results, processed in real time on a minicomputer, are relatively independent of the speaker, provided the fundamental frequency does not vary too much (i.e. speakers of the same sex). (author) [fr
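    A toy version of the preprocessing chain this note describes: locate the extrema of the signal (zero crossings of its first derivative), histogram the time intervals between them, and compare normalized histograms by a distance. The "vowels" here are simulated sinusoids, not real speech.

    ```python
    import math

    def extrema_intervals(signal):
        """Sample counts between successive extrema of the signal."""
        intervals, last = [], None
        for i in range(1, len(signal) - 1):
            d1 = signal[i] - signal[i - 1]
            d2 = signal[i + 1] - signal[i]
            if d1 * d2 < 0:                  # derivative changes sign: extremum
                if last is not None:
                    intervals.append(i - last)
                last = i
        return intervals

    def normalized_histogram(intervals, nbins=12):
        h = [0.0] * nbins
        for t in intervals:
            h[min(t, nbins) - 1] += 1       # clamp long intervals to last bin
        total = sum(h) or 1.0
        return [x / total for x in h]

    def l1_distance(h1, h2):
        return sum(abs(a - b) for a, b in zip(h1, h2))

    # Two "vowels" simulated as tones of different frequency: their interval
    # histograms differ, so the distance separates them.
    tone_a = [math.sin(0.5 * i) for i in range(200)]
    tone_b = [math.sin(0.9 * i) for i in range(200)]
    hist_a = normalized_histogram(extrema_intervals(tone_a))
    hist_b = normalized_histogram(extrema_intervals(tone_b))
    ```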

  13. An automatic registration method for frameless stereotaxy, image guided surgery, and enhanced reality visualization

    International Nuclear Information System (INIS)

    Grimson, W.E.L.; Lozano-Perez, T.; White, S.J.; Wells, W.M. III; Kikinis, R.

    1996-01-01

    There is a need for frameless guidance systems to help surgeons plan the exact location for incisions, to define the margins of tumors, and to precisely identify locations of neighboring critical structures. The authors have developed an automatic technique for registering clinical data, such as segmented magnetic resonance imaging (MRI) or computed tomography (CT) reconstructions, with any view of the patient on the operating table. They demonstrate on the specific example of neurosurgery. The method enables a visual mix of live video of the patient and the segmented three-dimensional (3-D) MRI or CT model. This supports enhanced reality techniques for planning and guiding neurosurgical procedures and allows them to interactively view extracranial or intracranial structures nonintrusively. Extensions of the method include image guided biopsies, focused therapeutic procedures, and clinical studies involving change detection over time sequences of images

  14. a Method for the Seamlines Network Automatic Selection Based on Building Vector

    Science.gov (United States)

    Li, P.; Dong, Y.; Hu, Y.; Li, X.; Tan, P.

    2018-04-01

In order to improve the efficiency of large-scale orthophoto production for cities, this paper presents a method for the automatic selection of a seamlines network in large-scale orthophotos based on building vectors. First, a simple model of each building is built by combining the building's vector, its height, and the DEM, and the imaging area of the building on a single DOM is obtained. Then, the initial Voronoi network of the survey area is automatically generated based on the positions of the bottoms of all images. Finally, the final seamlines network is obtained by automatically optimizing all nodes and seamlines in the network based on the imaging areas of the buildings. The experimental results show that the proposed method not only routes seamlines around buildings quickly, but also retains the minimum-projection-distortion property of the Voronoi network, effectively solving the problem of automatic seamline network selection in image mosaicking.
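The initial network of the second step — a Voronoi diagram over the image footprint centers — can be sketched with SciPy. The footprint centers below are invented for illustration.

```python
import numpy as np
from scipy.spatial import Voronoi

def initial_seamline_network(image_centers):
    """Build the initial seamline network as the Voronoi diagram of the
    image footprint centers: each cell is the region mosaicked from its
    nearest image, and the cell edges are the candidate seamlines."""
    vor = Voronoi(np.asarray(image_centers, dtype=float))
    # ridge_vertices holds index pairs into vor.vertices;
    # -1 marks a ridge extending to infinity (outside the block)
    segments = [
        (vor.vertices[a], vor.vertices[b])
        for a, b in vor.ridge_vertices
        if a != -1 and b != -1
    ]
    return vor, segments
```

The finite ridge segments are the starting seamlines that the paper's method would then push away from building imaging areas.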

  15. Extended morphological processing: a practical method for automatic spot detection of biological markers from microscopic images.

    Science.gov (United States)

    Kimori, Yoshitaka; Baba, Norio; Morone, Nobuhiro

    2010-07-08

A reliable extraction technique for resolving multiple spots in light or electron microscopic images is essential in investigations of the spatial distribution and dynamics of specific proteins inside cells and tissues. Currently, automatic spot extraction and characterization in complex microscopic images poses many challenges to conventional image processing methods. A new method to extract closely located, small target spots from biological images is proposed. This method starts with a simple but practical operation based on the extended morphological top-hat transformation to subtract an uneven background. The core of our novel approach is the following: first, the original image is rotated in an arbitrary direction and each rotated image is opened with a single straight line-segment structuring element. Second, the opened images are unified and then subtracted from the original image. To evaluate these procedures, model images of simulated spots with closely located targets were created and the efficacy of our method was compared to that of conventional morphological filtering methods. The results showed that our method performed better. Spots in real microscope images were also quantified, confirming that the method is applicable in practice. Our method achieved effective spot extraction under various image conditions, including aggregated target spots, poor signal-to-noise ratio, and large variations in the background intensity. Furthermore, it has no restrictions with respect to the shape of the extracted spots. The features of our method allow its broad application in biological and biomedical image information analysis.
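The rotate-open-unify-subtract core of the method can be sketched with SciPy. The structuring-element length, angle count, and interpolation settings below are illustrative choices, not the paper's parameters.

```python
import numpy as np
from scipy import ndimage

def rotating_line_tophat(image, length=15, n_angles=8):
    """Top-hat with line structuring elements at several orientations:
    open the rotated image with a straight line segment at each angle,
    take the pixelwise max of the back-rotated openings (the union),
    and subtract it from the original, so only structures thinner than
    `length` in every direction (small spots) survive."""
    union = np.zeros_like(image, dtype=float)
    se = np.ones((1, length))  # horizontal line-segment structuring element
    for angle in np.linspace(0, 180, n_angles, endpoint=False):
        rot = ndimage.rotate(image, angle, reshape=False, order=1,
                             mode='nearest')
        opened = ndimage.grey_opening(rot, footprint=se)
        back = ndimage.rotate(opened, -angle, reshape=False, order=1,
                              mode='nearest')
        union = np.maximum(union, back)
    return np.clip(image - union, 0, None)
```

A long bright line survives the opening in its own orientation and is removed by the subtraction, while a small spot is erased by every opening and therefore remains in the output.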

  16. An Automatic Cloud Detection Method for ZY-3 Satellite

    Directory of Open Access Journals (Sweden)

    CHEN Zhenwei

    2015-03-01

Automatic cloud detection for optical satellite remote sensing images is a significant step in the satellite product production system. For the browse images cataloged by the ZY-3 satellite, a tree discriminant structure is adopted to carry out cloud detection. The image is divided into sub-images, whose features are extracted to classify clouds versus ground. However, owing to the high complexity of clouds and surfaces and the low resolution of browse images, traditional classification algorithms based on image features are of limited use. In view of this problem, a prior enhancement of the original sub-images before classification is put forward in this paper to widen the texture difference between clouds and surfaces. Afterwards, using the second moment and first difference of the images, the feature vectors are extended in multi-scale space, and the cloud proportion in the image is estimated through comprehensive analysis. The presented cloud detection algorithm has already been applied to the ZY-3 application system project, and practical experiment results indicate that it significantly improves the accuracy of cloud detection.

  17. [Modeling and implementation method for the automatic biochemistry analyzer control system].

    Science.gov (United States)

    Wang, Dong; Ge, Wan-cheng; Song, Chun-lin; Wang, Yun-guang

    2009-03-01

The automatic biochemistry analyzer is a necessary instrument for clinical diagnostics. In this paper, the system structure is first analyzed. The system problems are described and the fundamental principles for dispatch are brought forward. The paper then puts emphasis on modeling the automatic biochemistry analyzer control system: an object model and a communications model are put forward. Finally, the implementation method is designed. The results indicate that the system based on this model has good performance.

  18. Automatic Morphological Sieving: Comparison between Different Methods, Application to DNA Ploidy Measurements

    Directory of Open Access Journals (Sweden)

    Christophe Boudry

    1999-01-01

The aim of the present study is to propose automatic alternatives to the time-consuming interactive sorting of elements for DNA ploidy measurements. One archival brain tumour and two archival breast carcinomas were studied, corresponding to 7120 elements (3764 nuclei, 3356 debris and aggregates). Three automatic classification methods were tested to eliminate debris and aggregates from DNA ploidy measurements: mathematical morphology (MM), multiparametric analysis (MA) and neural networks (NN). Performance was evaluated by reference to interactive sorting. The percentages of debris and aggregates automatically removed reach 63, 75 and 85% for the MM, MA and NN methods, respectively, with false positive rates of 6, 21 and 25%. Information about DNA ploidy abnormalities was globally preserved after automatic elimination of debris and aggregates by the MM and MA methods, as opposed to the NN method, showing that automatic classification methods can offer alternatives to tedious interactive elimination of debris and aggregates for DNA ploidy measurements of archival tumours.

  19. Automatic methods of the processing of data from track detectors on the basis of the PAVICOM facility

    Science.gov (United States)

    Aleksandrov, A. B.; Goncharova, L. A.; Davydov, D. A.; Publichenko, P. A.; Roganova, T. M.; Polukhina, N. G.; Feinberg, E. L.

    2007-02-01

New automatic methods essentially simplify and increase the rate of the processing of data from track detectors. This provides a possibility of processing large data arrays and considerably improves their statistical significance. This fact predetermines the development of new experiments which plan to use large-volume targets, large-area emulsion, and solid-state track detectors [1]. In this regard, the problem of training qualified physicists who are capable of operating modern automatic equipment is very important. Annually, about ten Moscow students master the new methods, working at the Lebedev Physical Institute at the PAVICOM facility [2-4]. Most students specializing in high-energy physics are only given an idea of archaic manual methods of processing data from track detectors. In 2005, on the basis of the PAVICOM facility and the physics training course of Moscow State University, a new training exercise was prepared. This exercise is devoted to determining the energy of neutrons passing through a nuclear emulsion. It provides the possibility of acquiring basic practical skills in processing data from track detectors using automatic equipment and can be included in the educational process of students of any physics faculty. Those who have mastered the methods of automatic data processing in the simple and pictorial example of track detectors will be able to apply their knowledge in various fields of science and technology. The formulation of training exercises for undergraduate and graduate students is a new additional aspect of the application of the PAVICOM facility described earlier in [4].

  20. Method for automatic control rod operation using rule-based control

    International Nuclear Information System (INIS)

    Kinoshita, Mitsuo; Yamada, Naoyuki; Kiguchi, Takashi

    1988-01-01

An automatic control rod operation method using rule-based control is proposed. Its features are as follows: (1) a production system to recognize plant events, determine control actions and realize fast inference (fast selection of a suitable production rule); (2) use of the fuzzy control technique to determine quantitative control variables. The method's performance was evaluated by simulation tests of automatic control rod operation at BWR plant start-up. The results were as follows: (1) the performance, in terms of stabilization of controlled variables and the time required for reactor start-up, was superior to that of other methods such as PID control and program control; (2) the process time to select and interpret the suitable production rule, which was the time required for event recognition or determination of a control action, was short enough (below 1 s) for real-time control. The results showed that the method is effective for automatic control rod operation. (author)
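A miniature sketch of the two ingredients — a rule base plus fuzzy determination of the quantitative control variable — might look like the following. The membership functions, rule outputs, and the period-error variable are invented for illustration; they are not the paper's rule base.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def rod_step(period_error):
    """Fuzzy rule base (illustrative): map a reactor-period error to a
    control-rod step via a weighted average of rule outputs
    (centroid-style defuzzification)."""
    rules = [
        (tri(period_error, -60, -30, 0), -2.0),  # error negative -> insert
        (tri(period_error, -30, 0, 30), 0.0),    # near target    -> hold
        (tri(period_error, 0, 30, 60), +2.0),    # error positive -> withdraw
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

Between rule peaks the output interpolates smoothly, which is the point of combining the crisp production rules with fuzzy quantities.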

  1. Application of a semi-automatic cartilage segmentation method for biomechanical modeling of the knee joint.

    Science.gov (United States)

    Liukkonen, Mimmi K; Mononen, Mika E; Tanska, Petri; Saarakkala, Simo; Nieminen, Miika T; Korhonen, Rami K

    2017-10-01

    Manual segmentation of articular cartilage from knee joint 3D magnetic resonance images (MRI) is a time consuming and laborious task. Thus, automatic methods are needed for faster and reproducible segmentations. In the present study, we developed a semi-automatic segmentation method based on radial intensity profiles to generate 3D geometries of knee joint cartilage which were then used in computational biomechanical models of the knee joint. Six healthy volunteers were imaged with a 3T MRI device and their knee cartilages were segmented both manually and semi-automatically. The values of cartilage thicknesses and volumes produced by these two methods were compared. Furthermore, the influences of possible geometrical differences on cartilage stresses and strains in the knee were evaluated with finite element modeling. The semi-automatic segmentation and 3D geometry construction of one knee joint (menisci, femoral and tibial cartilages) was approximately two times faster than with manual segmentation. Differences in cartilage thicknesses, volumes, contact pressures, stresses, and strains between segmentation methods in femoral and tibial cartilage were mostly insignificant (p > 0.05) and random, i.e. there were no systematic differences between the methods. In conclusion, the devised semi-automatic segmentation method is a quick and accurate way to determine cartilage geometries; it may become a valuable tool for biomechanical modeling applications with large patient groups.

  2. Automatic control logics to eliminate xenon oscillation based on Axial Offsets Trajectory Method

    International Nuclear Information System (INIS)

    Shimazu, Yoichiro

    1996-01-01

We have proposed the Axial Offset (AO) Trajectory Method for xenon oscillation control in pressurized water reactors. A feature of this method is that it clearly gives the control operations necessary to eliminate xenon oscillations; using this feature, automatic control logics for xenon oscillation can be kept simple and realized easily. We investigated such automatic control logics. The AO Trajectory Method yields a very simple logic when the goal is only to eliminate xenon oscillations; however, additional considerations were necessary to eliminate a xenon oscillation while reaching a given axial power distribution. Another control logic, based on modern control theory, was also studied for comparison of control performance. The results show that automatic control logics based on the AO Trajectory Method are very simple and effective. (author)

  3. AISLE: an automatic volumetric segmentation method for the study of lung allometry.

    Science.gov (United States)

    Ren, Hongliang; Kazanzides, Peter

    2011-01-01

We developed a fully automatic segmentation method for volumetric CT (computed tomography) datasets to support construction of a statistical atlas for the study of allometric laws of the lung. The proposed segmentation method, AISLE (Automated ITK-Snap based on Level-set), is based on the level-set implementation from an existing semi-automatic segmentation program, ITK-Snap. AISLE can segment the lung field without human interaction and provide intermediate graphical results as desired. The preliminary experimental results show that the proposed method can achieve accurate segmentation, in terms of the volumetric overlap metric, compared with the ground-truth segmentation performed by a radiologist.

  4. A Method for Modeling the Virtual Instrument Automatic Test System Based on the Petri Net

    Institute of Scientific and Technical Information of China (English)

    MA Min; CHEN Guang-ju

    2005-01-01

Virtual instruments play an important role in automatic test systems. This paper introduces the composition of a virtual-instrument automatic test system, taking as an example a VXIbus-based test software platform developed by the CAT lab of the UESTC. A method to model this system based on Petri nets is then proposed. Through this method, the test task scheduling can be analyzed to prevent deadlock or resource conflicts. Finally, the feasibility of the method is analyzed.

  5. [Improved methods for researching isolated carotid sinus baroreceptors automatically controlling for sinus pressure].

    Science.gov (United States)

    Wei, Hua; Zhao, Hai-Yan; Liu, Ping; Huang, Hai-Xia; Wang, Wei; Fu, Xiao-Suo; Niu, Wei-Zhen

    2013-01-01

To develop a system for automatically controlling carotid sinus pressure in studies of baroreceptors, preparations containing the carotid sinus with parts of the connected vessels and the carotid sinus nerve (CS-CSN) were isolated and perfused. A critical pressure-controlling component (PRE-U, Hoerbiger, Germany) dictated by a computer was integrated into the system to clamp the intrasinus pressure. The pressure command and the resulting intrasinus pressure were compared to evaluate the validity of the pressure-controlling system. A variety of sinus pressure-controlling patterns, including pulsation, ramp and step pressures, could be achieved accurately by the system, and the pressure-dependent discharge activities of the sinus nerve were confirmed. This system for clamping carotid sinus pressure realizes multiple pressure-controlling patterns and is a useful and flexible pressure-controlling method that can be applied to the study of mechano-electric transduction in baroreceptors.

  6. Transducer-actuator systems and methods for performing on-machine measurements and automatic part alignment

    Science.gov (United States)

    Barkman, William E.; Dow, Thomas A.; Garrard, Kenneth P.; Marston, Zachary

    2016-07-12

    Systems and methods for performing on-machine measurements and automatic part alignment, including: a measurement component operable for determining the position of a part on a machine; and an actuation component operable for adjusting the position of the part by contacting the part with a predetermined force responsive to the determined position of the part. The measurement component consists of a transducer. The actuation component consists of a linear actuator. Optionally, the measurement component and the actuation component consist of a single linear actuator operable for contacting the part with a first lighter force for determining the position of the part and with a second harder force for adjusting the position of the part. The actuation component is utilized in a substantially horizontal configuration and the effects of gravitational drop of the part are accounted for in the force applied and the timing of the contact.

  7. DEVELOPMENT OF AUTOMATIC EXTRACTION METHOD FOR ROAD UPDATE INFORMATION BASED ON PUBLIC WORK ORDER OUTLOOK

    Science.gov (United States)

    Sekimoto, Yoshihide; Nakajo, Satoru; Minami, Yoshitaka; Yamaguchi, Syohei; Yamada, Harutoshi; Fuse, Takashi

Recently, the disclosure of statistical data representing the financial effects or burden of public works, through the web sites of national and local governments, has enabled discussion of macroscopic financial trends. However, it is still difficult to grasp, nationwide, how each location was changed by public works. The purpose of this research is to reasonably collect the road update information provided by various road managers, in order to realize efficient updating of maps such as car navigation maps. In particular, we develop a system that automatically extracts the public works concerned and registers summaries, including position information, in a database from the public work order outlooks released by each local government, combining several web mining technologies. Finally, we collect and register several tens of thousands of records from web sites all over Japan, confirming the feasibility of our method.

  8. Automatic Traffic Advisory and Resolution Service (ATARS) Algorithms Including Resolution-Advisory-Register Logic. Volume 1. Sections 1 through 11,

    Science.gov (United States)

    1981-06-01

...span required or allowed for each task in a single scan is outlined in Table 3-1. The executive program controls the initiation and termination of each ... by-step manner throughout the ATARS process. At the same time, the executive program controls and determines when each task is ready to accept the next ... (MITRE Corp., McLean, VA)

  9. Automatic diagnostic methods of nuclear reactor collected signals

    International Nuclear Information System (INIS)

    Lavison, P.

    1978-03-01

This work is the first phase of an overall study of diagnosis, limited here to problems of monitoring the operating state; it shows what pattern recognition methods contribute at the processing level. The present problem is the recognition of control operations. The analysis of the state of the reactor gives a decision which is compared with the history of the control operations; if there is no correspondence, the state subjected to analysis is said to be 'abnormal'. The system subjected to analysis is described and the problem to be solved is defined. The Gaussian parametric approach is then treated, together with methods to evaluate the error probability, followed by non-parametric methods; an on-line detection scheme has been tested experimentally. Finally, a non-linear transformation has been studied to reduce the error probability previously obtained. All the methods presented have been tested and compared on a quality index: the error probability [fr

  10. Automatic Extraction of Urban Built-Up Area Based on Object-Oriented Method and Remote Sensing Data

    Science.gov (United States)

    Li, L.; Zhou, H.; Wen, Q.; Chen, T.; Guan, F.; Ren, B.; Yu, H.; Wang, Z.

    2018-04-01

The built-up area marks the use of urban construction land in different periods of development; its accurate extraction is key to studies of urban expansion. This paper studies the automatic extraction of the urban built-up area based on an object-oriented method and remote sensing data, and realizes automatic extraction of the main built-up area of a city, greatly saving manpower. First, construction land is extracted with the object-oriented method; the main technical steps include: (1) multi-resolution segmentation; (2) feature construction and selection; (3) extraction of construction-land information based on a rule set. The characteristic parameters used in the rule set mainly include the mean of the red band (Mean R), the Normalized Difference Vegetation Index (NDVI), the ratio of residential index (RRI), and the mean of the blue band (Mean B); through the combination of these parameters, construction-land information can be extracted. Based on the adaptability, distance and area of the object domain, the urban built-up area can then be quickly and accurately delineated from the construction-land information, without depending on other data or expert knowledge, achieving automatic extraction of the urban built-up area. Beijing was taken as the experimental area for the method, and the results show that the built-up area is extracted automatically with a boundary accuracy of 2359.65 m, meeting the requirements. The automatic extraction of the urban built-up area is highly practical and can be applied to monitoring changes in the main built-up area of a city.
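Rule-set step (3) can be sketched as a few threshold rules over per-segment features. The thresholds and the exact form of the features below are assumptions for illustration, not the paper's calibrated values (the RRI, in particular, is taken here as a precomputed input rather than re-derived).

```python
def classify_segment(seg):
    """Classify one image segment from its mean band values.

    seg: dict with per-segment means 'mean_r', 'mean_b', 'mean_nir'
    and a precomputed 'rri' feature. All thresholds are placeholders.
    """
    # NDVI from the segment's mean red and near-infrared values
    ndvi = (seg["mean_nir"] - seg["mean_r"]) / (
        seg["mean_nir"] + seg["mean_r"] + 1e-9)
    if ndvi > 0.3:
        return "vegetation"          # strong vegetation signal
    if seg["mean_b"] > 200 and seg["mean_r"] > 200:
        return "cloud/bare"          # very bright in blue and red
    if seg["rri"] > 1.0 and seg["mean_r"] > 80:
        return "construction"        # residential-index rule
    return "other"
```

Segments labelled "construction" would then be merged and filtered by area and distance to delineate the main built-up area.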

  11. An automatic and effective parameter optimization method for model tuning

    Directory of Open Access Journals (Sweden)

    T. Zhang

    2015-11-01

    simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9 %. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding unavoidable comprehensive parameter tuning during the model development stage.

  12. Method and device for automatic supervision of plants

    International Nuclear Information System (INIS)

    Pekrul, P.J.; Thiele, A.W.

    1976-01-01

    Method and device for the supervision of plants with respect to anomalous events and especially for monitoring dynamic signals from components of plants which are in operation, e.g. nuclear power plants, and not readily accessible for an inspection. (orig./RW) [de

  13. Automatic teleaudiometry: a low cost method to auditory screening

    Directory of Open Access Journals (Sweden)

    Campelo, Victor Eulálio Sousa

    2010-03-01

Introduction: The benefits of auditory screening have been demonstrated; however, these programs have been restricted to large centers. Objectives: (a) to develop a remote auditory screening method; (b) to test its accuracy against the screening audiometry test (AV). Method: Teleaudiometry (TA) consists of purpose-built software installed on a computer with TDH39 headphones. A study was carried out in 73 individuals between 17 and 50 years of age (57,%% of them female), randomly selected among patients and companions at the Hospital das Clínicas. After answering a symptom questionnaire and undergoing otoscopy, the individuals performed the TA and AV tests, with screening at 20 dB at the frequencies of 1, 2 and 4 kHz following the ASHA (1997) protocol, together with the gold standard test of pure-tone audiometry in a soundproof booth, in random order. Results: TA lasted on average 125 ± 11 s and AV 65 ± 18 s. 69 individuals (94.5%) declared the TA easy or very easy to perform and 61 (83.6%) considered the AV easy or very easy. The accuracy results of TA and AV were, respectively: sensitivity (86.7% / 86.7%), specificity (75.9% / 72.4%), negative predictive value (95.7% / 95.5%) and positive predictive value (48.1% / 55.2%). Conclusion: Teleaudiometry proved a good option as an auditory screening method, with accuracy close to that of screening audiometry. Compared with that method, teleaudiometry presented similar sensitivity; higher specificity, negative predictive value and testing time; and a lower positive predictive value.

  14. Towards Automatic Testing of Reference Point Based Interactive Methods

    OpenAIRE

    Ojalehto, Vesa; Podkopaev, Dmitry; Miettinen, Kaisa

    2016-01-01

    In order to understand strengths and weaknesses of optimization algorithms, it is important to have access to different types of test problems, well defined performance indicators and analysis tools. Such tools are widely available for testing evolutionary multiobjective optimization algorithms. To our knowledge, there do not exist tools for analyzing the performance of interactive multiobjective optimization methods based on the reference point approach to communicating ...

  15. Mitosis Counting in Breast Cancer: Object-Level Interobserver Agreement and Comparison to an Automatic Method.

    Science.gov (United States)

    Veta, Mitko; van Diest, Paul J; Jiwa, Mehdi; Al-Janabi, Shaimaa; Pluim, Josien P W

    2016-01-01

    Tumor proliferation speed, most commonly assessed by counting of mitotic figures in histological slide preparations, is an important biomarker for breast cancer. Although mitosis counting is routinely performed by pathologists, it is a tedious and subjective task with poor reproducibility, particularly among non-experts. Inter- and intraobserver reproducibility of mitosis counting can be improved when a strict protocol is defined and followed. Previous studies have examined only the agreement in terms of the mitotic count or the mitotic activity score. Studies of the observer agreement at the level of individual objects, which can provide more insight into the procedure, have not been performed thus far. The development of automatic mitosis detection methods has received large interest in recent years. Automatic image analysis is viewed as a solution for the problem of subjectivity of mitosis counting by pathologists. In this paper we describe the results from an interobserver agreement study between three human observers and an automatic method, and make two unique contributions. For the first time, we present an analysis of the object-level interobserver agreement on mitosis counting. Furthermore, we train an automatic mitosis detection method that is robust with respect to staining appearance variability and compare it with the performance of expert observers on an "external" dataset, i.e. on histopathology images that originate from pathology labs other than the pathology lab that provided the training data for the automatic method. The object-level interobserver study revealed that pathologists often do not agree on individual objects, even if this is not reflected in the mitotic count. The disagreement is larger for objects from smaller size, which suggests that adding a size constraint in the mitosis counting protocol can improve reproducibility. The automatic mitosis detection method can perform mitosis counting in an unbiased way, with substantial

  16. Generic and robust method for automatic segmentation of PET images using an active contour model

    Energy Technology Data Exchange (ETDEWEB)

    Zhuang, Mingzan [Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, 9700 RB Groningen (Netherlands)

    2016-08-15

    Purpose: Although positron emission tomography (PET) images have shown potential to improve the accuracy of targeting in radiation therapy planning and assessment of response to treatment, the boundaries of tumors are not easily distinguishable from surrounding normal tissue owing to the low spatial resolution and inherent noisy characteristics of PET images. The objective of this study is to develop a generic and robust method for automatic delineation of tumor volumes using an active contour model and to evaluate its performance using phantom and clinical studies. Methods: MASAC, a method for automatic segmentation using an active contour model, incorporates the histogram fuzzy C-means clustering, and localized and textural information to constrain the active contour to detect boundaries in an accurate and robust manner. Moreover, the lattice Boltzmann method is used as an alternative approach for solving the level set equation to make it faster and suitable for parallel programming. Twenty simulated phantom studies and 16 clinical studies, including six cases of pharyngolaryngeal squamous cell carcinoma and ten cases of nonsmall cell lung cancer, were included to evaluate its performance. Besides, the proposed method was also compared with the contourlet-based active contour algorithm (CAC) and Schaefer’s thresholding method (ST). The relative volume error (RE), Dice similarity coefficient (DSC), and classification error (CE) metrics were used to analyze the results quantitatively. Results: For the simulated phantom studies (PSs), MASAC and CAC provide similar segmentations of the different lesions, while ST fails to achieve reliable results. For the clinical datasets (2 cases with connected high-uptake regions excluded) (CSs), CAC provides for the lowest mean RE (−8.38% ± 27.49%), while MASAC achieves the best mean DSC (0.71 ± 0.09) and mean CE (53.92% ± 12.65%), respectively. MASAC could reliably quantify different types of lesions assessed in this work

  17. A new method for automatic discontinuity traces sampling on rock mass 3D model

    Science.gov (United States)

    Umili, G.; Ferrero, A.; Einstein, H. H.

    2013-02-01

A new automatic method for discontinuity trace mapping and sampling on a rock mass digital model is described in this work. The implemented procedure automatically identifies discontinuity traces on a Digital Surface Model: traces are detected directly as surface breaklines, by means of the maximum and minimum principal curvature values of the vertices that constitute the model surface. The color influence and user errors that usually characterize trace mapping on images are eliminated. Trace sampling procedures based on circular windows and circular scanlines have also been implemented: they are used to infer trace data and to calculate values of mean trace length, expected discontinuity diameter and intensity of rock discontinuities. The method is tested on a case study: results obtained by applying the automatic procedure to the DSM of a rock face are compared to those obtained by manual sampling on the orthophotograph of the same rock face.
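On a gridded surface model, curvature-based breakline detection can be sketched by approximating the principal curvatures with the eigenvalues of the height Hessian — a small-slope stand-in for the paper's mesh curvatures; the threshold below is illustrative.

```python
import numpy as np

def breakline_mask(dsm, thresh=0.25):
    """Flag candidate discontinuity-trace points on a gridded DSM.

    Approximates the two principal curvatures at each grid point by the
    eigenvalues of the Hessian of the height field (valid for gentle
    slopes) and keeps points where either curvature is extreme."""
    zy, zx = np.gradient(dsm)        # first derivatives (row, col)
    zyy, zyx = np.gradient(zy)       # second derivatives
    zxy, zxx = np.gradient(zx)
    # eigenvalues of [[zxx, zxy], [zyx, zyy]] via trace/determinant
    half_tr = (zxx + zyy) / 2.0
    det = zxx * zyy - zxy * zyx
    disc = np.sqrt(np.maximum(half_tr**2 - det, 0.0))
    k1, k2 = half_tr + disc, half_tr - disc
    return (np.abs(k1) > thresh) | (np.abs(k2) > thresh)
```

A crease in the surface (a slope break) produces a large second derivative along the crease's normal direction, so only those vertices are flagged.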

  18. A Method of Automatic Determination of the Number of Electrical Motors Simultaneously Working in a Group

    Directory of Open Access Journals (Sweden)

    A. V. Voloshko

    2016-11-01

    Purpose. To propose a method for automatically determining the number of operating high-voltage electric motors in a group of the same type, based on the determination and analysis of power-consumption data obtained from electricity meters installed at the motor connections. Results. An algorithm was developed for automatically determining the number of working motors in a group of the same type; it is based on determining the minimum power value at which a motor is considered to be on. Originality. For the first time, a method is proposed for automatically determining the number of working high-voltage motors of the same type in a group. Practical value. The results may be used to introduce automated accounting of the running time of each motor and to calculate the parameters of an equivalent induction or synchronous motor.
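The decision rule at the core of the algorithm above — a motor counts as running when its metered power exceeds a minimum threshold — can be sketched as follows (the function name, readings, and threshold value are illustrative assumptions, not taken from the paper):

```python
def count_running_motors(power_readings, p_min):
    """Count motors whose metered active power (kW) meets or exceeds p_min,
    the minimum value at which a motor is considered to be on."""
    return sum(1 for p in power_readings if p >= p_min)

# hypothetical per-motor meter data for a group of five same-type motors
readings = [0.4, 152.0, 148.5, 0.2, 150.7]  # kW
print(count_running_motors(readings, p_min=5.0))  # three motors draw real load
```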

  19. A semi-automatic method for peak and valley detection in free-breathing respiratory waveforms

    International Nuclear Information System (INIS)

    Lu Wei; Nystrom, Michelle M.; Parikh, Parag J.; Fooshee, David R.; Hubenschmidt, James P.; Bradley, Jeffrey D.; Low, Daniel A.

    2006-01-01

    Existing commercial software often inadequately determines respiratory peaks for patients in respiration-correlated computed tomography. A semi-automatic method was therefore developed for peak and valley detection in free-breathing respiratory waveforms. First, the waveform is separated into breath cycles by identifying intercepts of a moving average curve with the inspiration and expiration branches of the waveform. Peaks and valleys are then defined, respectively, as the maximum and minimum between pairs of alternating inspiration and expiration intercepts. Finally, automatic corrections and manual user interventions are employed. On average, for each of the 20 patients, 99% of 307 peaks and valleys were automatically detected in 2.8 s. The method was robust for bellows waveforms with large variations.
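The first two steps described above — moving-average intercepts, then extrema between successive intercepts — can be sketched in a few lines (a simplification under assumptions: fixed window size, a simple sign-change crossing test, and no automatic-correction or manual passes):

```python
import math

def detect_peaks_valleys(signal, window=5):
    """Locate peaks/valleys as the extrema between successive crossings of
    the waveform with its own moving average."""
    half = window // 2
    ma = [sum(signal[max(0, i - half):i + half + 1]) /
          len(signal[max(0, i - half):i + half + 1]) for i in range(len(signal))]
    # intercepts: indices where the waveform crosses the moving average
    crossings = [i for i in range(1, len(signal))
                 if (signal[i - 1] - ma[i - 1]) * (signal[i] - ma[i]) < 0]
    peaks, valleys = [], []
    for a, b in zip(crossings, crossings[1:]):
        idx = max(range(a, b), key=lambda i: signal[i])
        if signal[idx] > ma[idx]:      # segment lies above the average: peak
            peaks.append(idx)
        else:                          # segment lies below the average: valley
            valleys.append(min(range(a, b), key=lambda i: signal[i]))
    return peaks, valleys

resp = [math.sin(0.31 * t) for t in range(100)]  # synthetic breathing trace
peaks, valleys = detect_peaks_valleys(resp)
```

On this synthetic trace the detected peaks and valleys fall at the sinusoid's extrema, alternating as the paper's definition requires.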

  20. A new method for the automatic calculation of prosody

    International Nuclear Information System (INIS)

    GUIDINI, Annie

    1981-01-01

    An algorithm is presented for calculating the prosodic parameters for speech synthesis. It uses the melodic patterns, composed of rising and falling slopes, suggested by G. CAELEN, and rests on: (1) an analysis into units of meaning to determine a melodic pattern; (2) the calculation of numeric values for the prosodic variations of each syllable; and (3) the use of a table of vocalic values for the three parameters of each vowel according to its consonantal environment, and of a table of standard durations for consonants. The method was applied in the 'SARA' synthesis program with satisfactory results. (author) [fr

  1. Method for automatically evaluating a transition from a batch manufacturing technique to a lean manufacturing technique

    Science.gov (United States)

    Ivezic, Nenad; Potok, Thomas E.

    2003-09-30

    A method for automatically evaluating a manufacturing technique comprises the steps of: receiving from a user manufacturing process step parameters characterizing a manufacturing process; accepting from the user a selection for an analysis of a particular lean manufacturing technique; automatically compiling process step data for each process step in the manufacturing process; automatically calculating process metrics from a summation of the compiled process step data for each process step; and, presenting the automatically calculated process metrics to the user. A method for evaluating a transition from a batch manufacturing technique to a lean manufacturing technique can comprise the steps of: collecting manufacturing process step characterization parameters; selecting a lean manufacturing technique for analysis; communicating the selected lean manufacturing technique and the manufacturing process step characterization parameters to an automatic manufacturing technique evaluation engine having a mathematical model for generating manufacturing technique evaluation data; and, using the lean manufacturing technique evaluation data to determine whether to transition from an existing manufacturing technique to the selected lean manufacturing technique.
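The compile-then-summate flow in the claim above can be sketched minimally (the step parameters and field names are illustrative assumptions, not the patent's):

```python
# Per-step parameters characterizing a manufacturing process, as received
# from a user; the field names here are invented for illustration.
steps = [
    {"cycle_time_s": 40, "queue_time_s": 300},   # e.g., a machining step
    {"cycle_time_s": 25, "queue_time_s": 120},   # e.g., an assembly step
]

# Process metrics: a summation of the compiled data over all process steps,
# presented to the user for the batch-vs-lean comparison.
metrics = {key: sum(step[key] for step in steps) for key in steps[0]}
print(metrics)
```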

  2. A FILTRATION METHOD AND APPARATUS INCLUDING A ROLLER WITH PORES

    DEFF Research Database (Denmark)

    2008-01-01

    The present invention offers a method for separating dry matter from a medium. A separation chamber is at least partly defined by a plurality of rollers (2,7) and is capable of being pressure regulated. At least one of the rollers is a pore roller (7) having a surface with pores allowing permeabi...

  3. A method for the automatic control of the carbonation of alkylsalicylic acids

    Energy Technology Data Exchange (ETDEWEB)

    Manoilo, A M; Alekseev, A K; Antonov, V N; Gordash, Iu T; Mikhailov, Iu A; Vavilov, N E; Zvonarev, A P

    1980-03-17

    In this method for the automatic control of the carbonation of alkylsalicylic acids (in the production of alkylsalicylate additives for motor oils) by a hydrate of an alkaline-earth metal (AEM) oxide and CO2 in a petroleum-oil medium, the consumptions of CO2, oil, and AEM are adjusted to reduce reagent consumption while preserving a stable quality of the target product. The CO2 consumption is changed depending on the viscosity of the target product: the ratio of its viscosity values, measured at two different shear rates, must equal one, and when this ratio deviates from one, the CO2 supply is curtailed. In addition, the total consumption of oil and the consumption of AEM are changed in proportion to the change in the viscosity of the alkylsalicylic acids. The method makes it possible to stabilize the concentration of the active substance and the total alkalinity of the carbonation product and to maintain the specified properties with great accuracy, which improves the quality of the additives, economizes valuable reagents, and increases the productivity of the installation.
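The viscosity-ratio rule can be sketched as a simple proportional control function (the gain, the sign convention, and the function name are assumptions for illustration, not from the patent):

```python
def co2_correction(visc_shear_1, visc_shear_2, gain=1.0):
    """Return a CO2 feed adjustment from viscosities measured at two shear
    rates: their ratio should equal one, and a deviation from one curtails
    the CO2 supply. Proportional form and gain are illustrative assumptions."""
    ratio = visc_shear_1 / visc_shear_2
    return -gain * (ratio - 1.0)  # negative value: reduce the CO2 feed

# product thickening (ratio > 1) produces a negative correction
print(co2_correction(1.2, 1.0))
```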

  4. Automatic segmentation of MRI head images by 3-D region growing method which utilizes edge information

    International Nuclear Information System (INIS)

    Jiang, Hao; Suzuki, Hidetomo; Toriwaki, Jun-ichiro

    1991-01-01

    This paper presents a 3-D segmentation method that automatically extracts soft tissue from multi-slice MRI head images. MRI produces a sequence of two-dimensional (2-D) images containing three-dimensional (3-D) information about organs. To utilize such information, we need effective algorithms for processing 3-D digital images and extracting the organs and tissues of interest. We developed a method to extract the brain from MRI images that uses a region-growing procedure and integrates information on the uniformity of gray levels with information on the presence of edge segments in the local area around the pixel of interest. First, we generate a kernel region, a part of the brain tissue, by simple thresholding. Then we grow the region by means of a region-growing algorithm under the control of 3-D edge existence to obtain the region of the brain. Our method is rather simple because it uses basic 3-D image processing techniques such as spatial differencing. It is robust to variations of gray levels inside a tissue, since it also refers to edge information during region growing. Therefore, the method is flexible enough to be applicable to the segmentation of other images, including soft tissues that have complicated shapes and fluctuating gray levels. (author)

  5. A new robust markerless method for automatic image-to-patient registration in image-guided neurosurgery system.

    Science.gov (United States)

    Liu, Yinlong; Song, Zhijian; Wang, Manning

    2017-12-01

    Compared with traditional point-based registration in image-guided neurosurgery systems, surface-based registration is preferable because it does not use fiducial markers before image scanning and does not require image acquisition dedicated to navigation purposes. However, most existing surface-based registration methods include a manual step for coarse registration, which increases the registration time and introduces inconvenience and uncertainty. A new automatic surface-based registration method is proposed, which applies a 3D surface feature description and matching algorithm to obtain point correspondences for coarse registration, and uses the iterative closest point (ICP) algorithm in the last step to obtain the image-to-patient registration. Both phantom and clinical data were used to execute automatic registrations, and the target registration error (TRE) was calculated to verify the practicality and robustness of the proposed method. In phantom experiments, the registration accuracy was stable across different downsampling resolutions (18-26 mm) and different support radii (2-6 mm). In clinical experiments, the mean TREs for two patients obtained by registering full head surfaces were 1.30 mm and 1.85 mm. This study introduces a new robust automatic surface-based registration method based on 3D feature matching, which achieved sufficient registration accuracy with different real-world surface regions in phantom and clinical experiments.

  6. A fast and automatic mosaic method for high-resolution satellite images

    Science.gov (United States)

    Chen, Hongshun; He, Hui; Xiao, Hongyu; Huang, Jing

    2015-12-01

    We propose a fast and fully automatic mosaic method for high-resolution satellite images. First, the overlap rectangle is computed from the geographic locations of the reference and mosaic images, and feature points are extracted from the overlapped region of both images by the scale-invariant feature transform (SIFT) algorithm. Then, the RANSAC method is used to match the feature points of the two images. Finally, the images are fused into a seamless panoramic image by the simple linear weighted fusion method or another method. The proposed method is implemented in C++ based on OpenCV and GDAL, and tested on WorldView-2 multispectral images with a spatial resolution of 2 m. Results show that the proposed method can detect feature points efficiently and mosaic images automatically.
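The final fusion step — the "simple linear weighted fusion" — can be sketched for a horizontal overlap between two single-band strips (the array shapes, the linear ramp, and the function name are illustrative assumptions; a real mosaic blends inside the georeferenced overlap rectangle):

```python
import numpy as np

def linear_blend(ref, mos, overlap):
    """Feather two single-band image strips across `overlap` shared columns:
    the reference weight ramps 1 -> 0 while the mosaic weight ramps 0 -> 1."""
    h, w_ref = ref.shape
    out = np.zeros((h, w_ref + mos.shape[1] - overlap), dtype=float)
    out[:, :w_ref - overlap] = ref[:, :-overlap]   # reference-only region
    out[:, w_ref:] = mos[:, overlap:]              # mosaic-only region
    alpha = np.linspace(1.0, 0.0, overlap)         # per-column reference weight
    out[:, w_ref - overlap:w_ref] = (alpha * ref[:, -overlap:]
                                     + (1 - alpha) * mos[:, :overlap])
    return out

# two flat 2x4 strips sharing 2 columns fuse into a 2x6 panorama
seam = linear_blend(np.full((2, 4), 10.0), np.full((2, 4), 20.0), overlap=2)
```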

  7. Composite material including nanocrystals and methods of making

    Science.gov (United States)

    Bawendi, Moungi G.; Sundar, Vikram C.

    2010-04-06

    Temperature-sensing compositions can include an inorganic material, such as a semiconductor nanocrystal. The nanocrystal can be a dependable and accurate indicator of temperature. The intensity of emission of the nanocrystal varies with temperature and can be highly sensitive to surface temperature. The nanocrystals can be processed with a binder to form a matrix, which can be varied by altering the chemical nature of the surface of the nanocrystal. A nanocrystal with a compatibilizing outer layer can be incorporated into a coating formulation and retain its temperature sensitive emissive properties.

  8. The development of an automatic scanning method for CR-39 neutron dosimeter

    International Nuclear Information System (INIS)

    Tawara, Hiroko; Miyajima, Mitsuhiro; Sasaki, Shin-ichi; Hozumi, Ken-ichi

    1989-01-01

    A method for measuring low-level neutron dose has been developed with CR-39 track detectors using an automatic scanning system. The system is composed of an optical microscope with a video camera, an image processor, and a personal computer. The focus of the microscope and the X-Y stage are controlled from the computer. From the results of automatic measurements, the minimum detectable neutron dose is estimated at 4.6 mrem in a uniform neutron field with an energy spectrum equivalent to an Am-Be source. (author)

  9. Method and apparatus for mounting or dismounting a semi-automatic twist-lock

    NARCIS (Netherlands)

    Klein Breteler, A.J.; Tekeli, G.

    2001-01-01

    The invention relates to a method for mounting or dismounting a semi-automatic twistlock at a corner of a deck container, wherein the twistlock is mounted or dismounted on a quayside where a ship may be docked for loading or unloading, in a loading or unloading terminal installed on the quayside,

  10. Assessment of automatic segmentation of teeth using a watershed-based method.

    Science.gov (United States)

    Galibourg, Antoine; Dumoncel, Jean; Telmon, Norbert; Calvet, Adèle; Michetti, Jérôme; Maret, Delphine

    2018-01-01

    Tooth 3D automatic segmentation (AS) is being actively developed in research and clinical fields. Here, we assess the effect of automatic segmentation using a watershed-based method on the accuracy and reproducibility of 3D reconstructions in volumetric measurements, by comparing it with a semi-automatic segmentation (SAS) method that has already been validated. The study sample comprised 52 teeth, scanned with micro-CT (41 µm voxel size) and CBCT (76, 200, and 300 µm voxel sizes). Each tooth was segmented by AS based on a watershed method and by SAS. For all surface reconstructions, volumetric measurements were obtained and analysed statistically. Surfaces were then aligned using the SAS surfaces as the reference. The topography of the geometric discrepancies was displayed using a colour map, allowing the maximum differences to be located. AS reconstructions showed tooth volumes similar to SAS for the 41 µm voxel size. A difference in volumes was observed that increased with the voxel size for CBCT data. The maximum differences were mainly found at the cervical margins and incisal edges, but the general form was preserved. Micro-CT, a modality used in dental research, provides data that can be segmented automatically, which saves time. AS with CBCT data enables the general form of the region of interest to be displayed. However, our AS method can still be used for metrically reliable measurements in clinical dentistry if some manual refinements are applied.

  11. Method for Automatic Selection of Parameters in Normal Tissue Complication Probability Modeling.

    Science.gov (United States)

    Christophides, Damianos; Appelt, Ane L; Gusnanto, Arief; Lilley, John; Sebag-Montefiore, David

    2018-07-01

    To present a fully automatic method for generating multiparameter normal tissue complication probability (NTCP) models and to compare its results with those of a published model, using the same patient cohort. Data were analyzed from 345 rectal cancer patients treated with external radiation therapy to predict the risk of patients developing grade 1 or ≥2 cystitis. In total, 23 clinical factors were included in the analysis as candidate predictors of cystitis. Principal component analysis was used to decompose the bladder dose-volume histogram into 8 principal components, explaining more than 95% of the variance. The data set of clinical factors and principal components was divided into training (70%) and test (30%) data sets, with the training data set used by the algorithm to compute an NTCP model. The first step of the algorithm was to obtain a bootstrap sample, followed by multicollinearity reduction using the variance inflation factor and genetic algorithm optimization to determine an ordinal logistic regression model that minimizes the Bayesian information criterion. The process was repeated 100 times, and the model with the minimum Bayesian information criterion was recorded on each iteration. The most frequent model was selected as the final "automatically generated model" (AGM). The published model and the AGM were fitted on the training data sets, and the risk of cystitis was calculated. The 2 models had no significant differences in predictive performance, both for the training and test data sets (P value > .05), and found similar clinical and dosimetric factors as predictors. Both models exhibited good explanatory performance on the training data set (P values > .44), which was reduced on the test data sets (P values < .05). The predictive value of the AGM is equivalent to that of the expert-derived published model. It demonstrates potential in saving time, tackling problems with a large number of parameters, and standardizing variable selection in NTCP modeling.
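The quantity the genetic algorithm minimizes above is the Bayesian information criterion. A worked sketch of the criterion and of choosing between two candidate models is below; the candidate names, log-likelihoods, and parameter counts are invented for illustration, not taken from the study:

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion: lower values indicate a better
    trade-off between goodness of fit and model size."""
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

# two hypothetical candidate NTCP models fitted to the same 240 patients
candidates = {
    "dose_pc1_only": (-150.2, 2),            # (log-likelihood, n_params)
    "dose_pc1_plus_clinical": (-147.9, 5),
}
best = min(candidates, key=lambda m: bic(*candidates[m], n_obs=240))
print(best)
```

Here the richer model improves the likelihood slightly, but not enough to pay its BIC complexity penalty, so the smaller model is kept.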

  12. Methods for forming complex oxidation reaction products including superconducting articles

    International Nuclear Information System (INIS)

    Rapp, R.A.; Urquhart, A.W.; Nagelberg, A.S.; Newkirk, M.S.

    1992-01-01

    This patent describes a method for producing a superconducting complex oxidation reaction product of two or more metals in an oxidized state. It comprises positioning at least one parent metal source comprising one of the metals adjacent to a permeable mass comprising at least one metal-containing compound capable of reaction to form the complex oxidation reaction product in step below, the metal component of the at least one metal-containing compound comprising at least a second of the two or more metals, and orienting the parent metal source and the permeable mass relative to each other so that formation of the complex oxidation reaction product will occur in a direction towards and into the permeable mass; and heating the parent metal source in the presence of an oxidant to a temperature region above its melting point to form a body of molten parent metal to permit infiltration and reaction of the molten parent metal into the permeable mass and with the oxidant and the at least one metal-containing compound to form the complex oxidation reaction product, and progressively drawing the molten parent metal source through the complex oxidation reaction product towards the oxidant and towards and into the adjacent permeable mass so that fresh complex oxidation reaction product continues to form within the permeable mass; and recovering the resulting complex oxidation reaction product

  13. Membrane for distillation including nanostructures, methods of making membranes, and methods of desalination and separation

    KAUST Repository

    Lai, Zhiping; Huang, Kuo-Wei; Chen, Wei

    2016-01-01

    In accordance with the purpose(s) of the present disclosure, as embodied and broadly described herein, embodiments of the present disclosure provide membranes, methods of making the membrane, systems including the membrane, methods of separation, methods of desalination, and the like.

  14. Membrane for distillation including nanostructures, methods of making membranes, and methods of desalination and separation

    KAUST Repository

    Lai, Zhiping

    2016-01-21

    In accordance with the purpose(s) of the present disclosure, as embodied and broadly described herein, embodiments of the present disclosure provide membranes, methods of making the membrane, systems including the membrane, methods of separation, methods of desalination, and the like.

  15. Automatic registration method for multisensor datasets adopted for dimensional measurements on cutting tools

    International Nuclear Information System (INIS)

    Shaw, L; Mehari, F; Weckenmann, A; Ettl, S; Häusler, G

    2013-01-01

    Multisensor systems with optical 3D sensors are frequently employed to capture complete surface information by measuring workpieces from different views. During coarse and fine registration, the resulting datasets are transformed into one common coordinate system. Automatic fine registration methods are well established in dimensional metrology, whereas there is a deficit in automatic coarse registration methods. The advantage of a fully automatic registration procedure is twofold: it enables a fast and contact-free alignment, and it is flexibly applicable to datasets from any kind of optical 3D sensor. In this paper, an algorithm adapted for robust automatic coarse registration is presented. The method was originally developed for object reconstruction and localization. It is based on segmenting planes in the datasets to calculate the transformation parameters. The rotation is defined by the normals of three corresponding segmented planes of two overlapping datasets, while the translation is calculated via the intersection point of the segmented planes. First results showed that this translation is strongly shape-dependent: 3D data of objects with non-orthogonal planar flanks cannot be registered with the original method. In the novel extension of the algorithm, the translation is additionally calculated via the distance between centroids of corresponding segmented planes, which yields more than one candidate transformation. A newly introduced measure, considering the distance between the datasets after coarse registration, selects the best transformation. Results of the robust automatic registration method are presented for datasets taken from a cutting tool with a fringe-projection system and a focus-variation system. The successful application in dimensional metrology is proven by evaluations of shape parameters based on the registered datasets of a calibrated workpiece. (paper)
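The translation step described above uses the intersection point of three segmented planes as a common reference point. Under the usual plane parameterization n·x = d, that point solves a 3x3 linear system (a sketch with made-up plane parameters; the function name is ours):

```python
import numpy as np

def plane_intersection(normals, offsets):
    """Intersection point of three planes n_i . x = d_i, usable as a common
    reference point when computing a coarse-registration translation."""
    return np.linalg.solve(np.asarray(normals, dtype=float),
                           np.asarray(offsets, dtype=float))

# three mutually orthogonal planes: x = 1, y = 2, z = 3
point = plane_intersection([[1, 0, 0], [0, 1, 0], [0, 0, 1]], [1, 2, 3])
```

The system is solvable only when the three normals are linearly independent, which is exactly why near-parallel planar flanks make this translation estimate unstable.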

  16. Statistical Methods in Assembly Quality Management of Multi-Element Products on Automatic Rotor Lines

    Science.gov (United States)

    Pries, V. V.; Proskuriakov, N. E.

    2018-04-01

    Controlling the assembly quality of multi-element mass-produced products on automatic rotor lines requires control methods with operational feedback. However, because of possible failures in the devices and systems of an automatic rotor line, there is always a real probability that defective (incomplete) products enter the output process stream. Therefore, continuous sampling control of product completeness, based on statistical methods, remains an important element in managing the assembly quality of multi-element mass-produced products on automatic rotor lines. A feature of continuous sampling control of completeness during assembly is that it is a destructive sorting operation: component parts cannot be returned to the process stream after sampling control, which reduces the actual productivity of the assembly equipment. Therefore, statistical procedures for continuous sampling control of completeness on automatic rotor lines require sampling plans that ensure a minimum control sample size. Comparing the limit of the average outgoing defect level for the continuous sampling plan (CSP) and for the automated continuous sampling plan (ACSP) shows that ACSP-1 provides lower limit values. The average sample size with the ACSP-1 plan is also smaller than with the CSP-1 plan. Thus, applying statistical methods to the assembly quality management of multi-element products on automatic rotor lines, using the proposed plans and methods for continuous sampling control, makes it possible to automate sampling control procedures and to ensure the required quality level of assembled products while minimizing the sample size.

  17. Another Method of Building 2D Entropy to Realize Automatic Segmentation

    International Nuclear Information System (INIS)

    Zhang, Y F; Zhang, Y

    2006-01-01

    The 2D entropy formed while building a 2D histogram can be used to realize automatic segmentation. The traditional method builds the 2D histogram from the central pixel's grey value and the mean grey value of some or all pixels in its 4-neighbourhood. In fact, the change in grey value between two 'invariable position vectors' cannot represent the overall characteristics of neighbouring pixels very well. A new method is proposed that uses the minimum grey value in the 4-neighbourhood and the maximum grey value in the 3x3 neighbourhood excluding the 4-neighbourhood pixels. The new and traditional methods are contrasted in realizing automatic image segmentation. Experimental results on a classical test image show that the new method is effective.
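One plausible reading of the proposed histogram construction — pairing, for each interior pixel, the minimum grey value in its 4-neighbourhood with the maximum grey value among the remaining (corner) pixels of the 3x3 neighbourhood — can be sketched as follows (the axis choice is our assumption; the paper's exact formulation may differ):

```python
import numpy as np

def pair_histogram(img, levels=256):
    """2D histogram over (min of the 4-neighbours, max of the four corner
    pixels of the 3x3 neighbourhood), one count per interior pixel."""
    hist = np.zeros((levels, levels), dtype=int)
    rows, cols = img.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            n4_min = min(img[r - 1, c], img[r + 1, c],
                         img[r, c - 1], img[r, c + 1])
            corner_max = max(img[r - 1, c - 1], img[r - 1, c + 1],
                             img[r + 1, c - 1], img[r + 1, c + 1])
            hist[n4_min, corner_max] += 1
    return hist

tiny = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])  # one interior pixel
h = pair_histogram(tiny, levels=16)
```

The 2D entropy threshold would then be searched over this histogram rather than over a 1D grey-level histogram.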

  18. An automatic rat brain extraction method based on a deformable surface model.

    Science.gov (United States)

    Li, Jiehua; Liu, Xiaofeng; Zhuo, Jiachen; Gullapalli, Rao P; Zara, Jason M

    2013-08-15

    The extraction of the brain from the skull in medical images is a necessary first step before image registration or segmentation. While pre-clinical MR imaging studies on small animals, such as rats, are increasing, fully automatic image processing techniques specific to small animal studies remain lacking. In this paper, we present an automatic rat brain extraction method, the Rat Brain Deformable model method (RBD), which adapts the popular human brain extraction tool (BET) by incorporating information on the brain geometry and MR image characteristics of the rat brain. The robustness of the method was demonstrated on T2-weighted MR images of 64 rats and compared with other brain extraction methods (BET, PCNN, PCNN-3D). The results demonstrate that RBD reliably extracts the rat brain with high accuracy (>92% volume overlap) and is robust against signal inhomogeneity in the images. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. Comparing a novel automatic 3D method for LGE-CMR quantification of scar size with established methods.

    Science.gov (United States)

    Woie, Leik; Måløy, Frode; Eftestøl, Trygve; Engan, Kjersti; Edvardsen, Thor; Kvaløy, Jan Terje; Ørn, Stein

    2014-02-01

    Current methods for estimating infarct size by late gadolinium enhancement cardiac magnetic resonance imaging are based on 2D analysis that first determines the size of the infarction in each slice and then adds the infarct sizes from each slice to generate a volume. We present a novel, automatic 3D method that estimates infarct size by a simultaneous analysis of all pixels from all slices. In a population of 54 patients with ischemic scars, the infarct size estimated by the automatic 3D method was compared with four established 2D methods. The new 3D method defined scar as the sum of all pixels with signal intensity (SI) ≥35% of the maximum SI over the complete myocardium, the border zone as SI 35-50% of the maximum SI, and the core as SI ≥50% of the maximum SI. The 3D method yielded a smaller infarct size (-2.8 ± 2.3%) and core size (-3.0 ± 1.7%) than the 2D method most similar to ours. There was no difference in the size of the border zone (0.2 ± 1.4%). The 3D method demonstrated stronger correlations between scar size and left ventricular (LV) remodelling parameters (LV ejection fraction: r = -0.71). The 3D automatic method requires no manual demarcation of the scar, is less time-consuming, and has a stronger correlation with remodelling parameters than existing methods.
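The 3D cut-offs quoted above (scar ≥ 35% of the maximum SI, core ≥ 50%, border zone in between, with the maximum taken over all slices at once) translate directly into array operations; a sketch with our own variable names and a toy input:

```python
import numpy as np

def quantify_scar(volume):
    """Count scar, border-zone, and core voxels from a stack of LGE-CMR
    slices, using thresholds relative to the global maximum SI."""
    si_max = volume.max()                # one maximum across ALL slices (3D)
    scar = volume >= 0.35 * si_max       # total scar: SI >= 35% of max
    core = volume >= 0.50 * si_max       # core: SI >= 50% of max
    border = scar & ~core                # border zone: 35-50% of max
    return int(scar.sum()), int(border.sum()), int(core.sum())

stack = np.array([[[100.0, 60.0, 40.0, 20.0]]])  # a tiny 1-slice "stack"
print(quantify_scar(stack))
```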

  20. A semi-automatic calibration method for seismic arrays applied to an Alaskan array

    Science.gov (United States)

    Lindquist, K. G.; Tibuleac, I. M.; Hansen, R. A.

    2001-12-01

    Well-calibrated, small-aperture (less than 22 km) seismic arrays are of great importance for event location and characterization. We have implemented the cross-correlation method of Tibuleac and Herrin (Seis. Res. Lett., 1997) as a semi-automatic procedure applicable to any seismic array. With this we are able to process thousands of phases in several days of computer time on a Sun Blade 1000 workstation. Complicated geology beneath the elements and elevation differences amongst the array stations made station corrections necessary; 328 core phases (including PcP, PKiKP, PKP, PKKP) were used to determine the static corrections. To demonstrate the application and method, we analyzed P and PcP arrivals at the ILAR array (Eielson, Alaska) between 1995 and 2000. The arrivals were picked by the PIDC for events (mb > 4.0) well located by the USGS. We calculated backazimuth and horizontal velocity residuals for all events and observed large backazimuth residuals for regional and near-regional phases. We discuss the possibility of a dipping Moho (strike E-W, dip N) beneath the array versus other local structure that would produce the residuals.
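The cross-correlation at the heart of such calibration picks, per station pair, the lag that maximizes the correlation sum between two traces. A brute-force sketch is below (real implementations use FFT-based correlation and sub-sample interpolation; the traces here are synthetic):

```python
def xcorr_delay(a, b):
    """Return the integer lag (in samples) at which trace b best aligns
    with trace a, found by exhaustive cross-correlation."""
    n = len(a)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-(n - 1), n):
        s = sum(a[i] * b[i + lag] for i in range(n) if 0 <= i + lag < n)
        if s > best_val:
            best_val, best_lag = s, lag
    return best_lag

pulse = [0.0] * 20
pulse[5], pulse[6], pulse[7] = 1.0, 2.0, 1.0
delayed = [0.0] * 20
delayed[8], delayed[9], delayed[10] = 1.0, 2.0, 1.0  # same pulse, 3 samples later
print(xcorr_delay(pulse, delayed))
```

Relative delays measured this way across array elements feed the slowness and backazimuth estimates that the static corrections refine.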

  1. A Review on Energy-Saving Optimization Methods for Robotic and Automatic Systems

    Directory of Open Access Journals (Sweden)

    Giovanni Carabin

    2017-12-01

    In the last decades, increasing energy prices and growing environmental awareness have driven engineers and scientists to find new solutions for reducing energy consumption in manufacturing. Although many processes with high energy consumption (e.g., chemical and heating processes) are considered to have reached high levels of efficiency, this is not the case for many other industrial manufacturing activities. In particular, it is not the case for robotic and automatic systems, for which, in the past, minimization of energy demand was not considered a design objective. The proper design and operation of industrial robots and automation systems represent a great opportunity for reducing energy consumption in industry, for example by substituting more efficient systems and by energy-optimizing their operation. This review paper classifies and analyses several methodologies and technologies, with the aim of providing a reference of existing methods, techniques, and technologies for enhancing the energy performance of industrial robotic and mechatronic systems. Hardware and software methods, including several subcategories, are considered and compared, and emerging ideas and possible future perspectives are discussed.

  2. Semi-automatic version of the potentiometric titration method for characterization of uranium compounds.

    Science.gov (United States)

    Cristiano, Bárbara F G; Delgado, José Ubiratan; da Silva, José Wanderley S; de Barros, Pedro D; de Araújo, Radier M S; Dias, Fábio C; Lopes, Ricardo T

    2012-09-01

    The potentiometric titration method was used to characterize uranium compounds for application in intercomparison programs. The method is applied with traceability assured by a potassium dichromate primary standard. A semi-automatic version was developed to reduce the analysis time and the operator variability. The standard uncertainty in determining the total uranium concentration was around 0.01%, which is suitable for uranium characterization and compatible with values obtained by manual techniques. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. Comparison of automatic and visual methods used for image segmentation in Endodontics: a microCT study.

    Science.gov (United States)

    Queiroz, Polyane Mazucatto; Rovaris, Karla; Santaella, Gustavo Machado; Haiter-Neto, Francisco; Freitas, Deborah Queiroz

    2017-01-01

    To calculate root canal volume and surface area from microCT images, image segmentation by selecting threshold values is required; these values can be determined by visual or automatic methods. Visual determination is influenced by the operator's visual acuity, while the automatic method is performed entirely by computer algorithms. The aims were to compare visual and automatic segmentation and to determine the influence of the operator's visual acuity on the reproducibility of root canal volume and area measurements. Images from 31 extracted human anterior teeth were scanned with a μCT scanner. Three experienced examiners performed visual image segmentation, and the threshold values were recorded. Automatic segmentation was done using the "Automatic Threshold Tool" available in the dedicated software provided by the scanner's manufacturer. Volume and area measurements were performed using the threshold values determined both visually and automatically. The paired Student's t-test showed no significant difference between the visual and automatic segmentation methods for root canal volume (p=0.93) or root canal surface area (p=0.79). Although both visual and automatic segmentation methods can be used to determine the threshold and calculate root canal volume and surface area, the automatic method may be the most suitable for ensuring the reproducibility of threshold determination.

  4. A Method of Generating Indoor Map Spatial Data Automatically from Architectural Plans

    Directory of Open Access Journals (Sweden)

    SUN Weixin

    2016-06-01

    Full Text Available Taking architectural plans as the data source, we propose a method that automatically generates indoor map spatial data. First, with reference to the spatial data demands of indoor maps, we analyze the basic characteristics of architectural plans and introduce the concepts of wall segment, adjoining node and adjoining wall segment, on which the basic workflow of automatic indoor map spatial data generation is established. Then, according to the adjoining relations between wall lines at their intersections with columns, we construct a method for repairing wall connectivity at columns. Using gradual expansion and graphic reasoning to judge the local wall-symbol feature type on both sides of a door or window, and by updating the enclosing rectangle of the door or window, we develop a method for repairing wall connectivity at doors and windows, and a method for converting doors and windows into indoor map point features. Finally, based on the geometric relations between the median lines of adjoining wall segments, a wall center-line extraction algorithm is presented. Taking one exhibition hall's architectural plan as an example, experiments show that the proposed methods deal well with various complex situations and extract indoor map spatial data automatically and effectively.
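
    The median-line idea behind the center-line extraction step can be sketched minimally: given the two parallel edge segments of one wall, orient them consistently and average matched endpoints. This is a simplified stand-in for the paper's adjoining-wall-segment construction, not its actual algorithm:

```python
import numpy as np

def wall_centerline(seg_a, seg_b):
    """Center-line (median line) of a wall drawn as two parallel edge
    segments, obtained by averaging matched endpoints. A simplified
    stand-in for the adjoining-wall-segment median-line construction."""
    a0, a1 = np.asarray(seg_a, dtype=float)
    b0, b1 = np.asarray(seg_b, dtype=float)
    # orient the second segment the same way as the first before matching
    if np.dot(a1 - a0, b1 - b0) < 0:
        b0, b1 = b1, b0
    return (a0 + b0) / 2, (a1 + b1) / 2
```
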

  5. Advanced theoretical and experimental studies in automatic control and information systems. [including mathematical programming and game theory

    Science.gov (United States)

    Desoer, C. A.; Polak, E.; Zadeh, L. A.

    1974-01-01

    A series of research projects is briefly summarized which includes investigations in the following areas: (1) mathematical programming problems for large system and infinite-dimensional spaces, (2) bounded-input bounded-output stability, (3) non-parametric approximations, and (4) differential games. A list of reports and papers which were published over the ten year period of research is included.

  6. A Plant Control Technology Using Reinforcement Learning Method with Automatic Reward Adjustment

    Science.gov (United States)

    Eguchi, Toru; Sekiai, Takaaki; Yamada, Akihiro; Shimizu, Satoru; Fukai, Masayuki

    A control technology using Reinforcement Learning (RL) and a Radial Basis Function (RBF) Network has been developed to reduce environmental load substances exhausted from power and industrial plants. This technology consists of a statistical model using an RBF Network, which estimates the characteristics of plants with respect to environmental load substances, and an RL agent, which learns the control logic for the plants using the statistical model. To control plants flexibly with this technology, it is necessary to design an appropriate reward function for the agent according to the operating conditions and control goals. Therefore, we propose an automatic reward adjusting method of RL for plant control. This method adjusts the reward function automatically using information from the statistical model obtained during its learning process. In simulations, it is confirmed that the proposed method adjusts the reward function adaptively for several test functions, and achieves robust control of a thermal power plant under changing operating conditions and control goals.

  7. Sleep Spindles as an Electrographic Element: Description and Automatic Detection Methods

    Directory of Open Access Journals (Sweden)

    Dorothée Coppieters ’t Wallant

    2016-01-01

    Full Text Available The sleep spindle is a peculiar oscillatory brain pattern which has been associated with a number of sleep processes (isolation from exteroceptive stimuli, memory consolidation) and individual characteristics (intellectual quotient). Oddly enough, the definition of a spindle is both incomplete and restrictive. In consequence, there is no consensus about how to detect spindles. Visual scoring is cumbersome and user dependent. To analyze spindle activity in a more robust way, automatic sleep spindle detection methods are essential. Various algorithms were developed, depending on individual research interests, which hampers direct comparisons and meta-analyses. In this review, the sleep spindle is first defined physically and topographically. From this general description, we tentatively extract the main characteristics to be detected and analyzed. A non-exhaustive list of automatic spindle detection methods is provided along with a description of their main processing principles. Finally, we propose a technique to assess the detection methods in a robust and comparable way.
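
    Many of the reviewed detectors share a common skeleton: band-pass the EEG to the sigma band, compute an amplitude envelope, and keep supra-threshold runs of sufficient duration. The sketch below follows that fixed-threshold recipe with NumPy only; the band limits, threshold and minimum duration are illustrative assumptions, not values from any specific algorithm:

```python
import numpy as np

def detect_spindles(eeg, fs, band=(11.0, 16.0), thresh_sd=3.0, min_dur=0.5):
    """Minimal sigma-band amplitude detector in the spirit of fixed-threshold
    spindle algorithms (band, threshold and duration are assumptions)."""
    # band-pass filter in the frequency domain
    spec = np.fft.rfft(eeg)
    freqs = np.fft.rfftfreq(eeg.size, 1.0 / fs)
    spec[(freqs < band[0]) | (freqs > band[1])] = 0
    sigma = np.fft.irfft(spec, n=eeg.size)
    # smoothed amplitude envelope: moving RMS over ~100 ms
    win = max(1, int(0.1 * fs))
    env = np.sqrt(np.convolve(sigma ** 2, np.ones(win) / win, mode="same"))
    above = env > env.mean() + thresh_sd * env.std()
    # keep supra-threshold runs lasting at least min_dur seconds
    events, start = [], None
    for i, a in enumerate(np.append(above, False)):
        if a and start is None:
            start = i
        elif not a and start is not None:
            if (i - start) / fs >= min_dur:
                events.append((start / fs, i / fs))
            start = None
    return events
```
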

  8. Semi-automatic version of the potentiometric titration method for characterization of uranium compounds

    International Nuclear Information System (INIS)

    Cristiano, Bárbara F.G.; Delgado, José Ubiratan; Wanderley S da Silva, José; Barros, Pedro D. de; Araújo, Radier M.S. de; Dias, Fábio C.; Lopes, Ricardo T.

    2012-01-01

    The potentiometric titration method was used for characterization of uranium compounds to be applied in intercomparison programs. The method is applied with traceability assured using a potassium dichromate primary standard. A semi-automatic version was developed to reduce the analysis time and the operator variation. The standard uncertainty in determining the total concentration of uranium was around 0.01%, which is suitable for uranium characterization and compatible with those obtained by manual techniques. - Highlights: ► A semi-automatic potentiometric titration method was developed for U characterization. ► K2Cr2O7 was the only certified reference material used. ► Values obtained for U3O8 samples were consistent with certified values. ► An uncertainty of 0.01% was useful for characterization and intercomparison programs.

  9. 2D automatic body-fitted structured mesh generation using advancing extraction method

    Science.gov (United States)

    Zhang, Yaoxin; Jia, Yafei

    2018-01-01

    This paper presents an automatic mesh generation algorithm for body-fitted structured meshes in Computational Fluid Dynamics (CFD) analysis using the Advancing Extraction Method (AEM). The method is applicable to two-dimensional domains with complex geometries, which have a hierarchical tree-like topology with extrusion-like structures (i.e., branches or tributaries) and intrusion-like structures (i.e., peninsulas or dikes). With the AEM, the hierarchical levels of sub-domains can be identified, and the block boundary of each sub-domain, a convex polygon at each level, can be extracted in an advancing scheme. In this paper, several examples are used to illustrate the effectiveness and applicability of the proposed algorithm for automatic structured mesh generation, and the implementation of the method.

  10. A method for automatically constructing the initial contour of the common carotid artery

    Directory of Open Access Journals (Sweden)

    Yara Omran

    2013-10-01

    Full Text Available In this article we propose a novel method to automatically set the initial contour used by the active contours algorithm. The proposed method exploits accumulated intensity profiles to locate points on the arterial wall. The intensity profiles of sections that intersect the artery show distinguishable characteristics that make it possible to recognize them from the profiles of sections that do not intersect the artery walls. The method is applied to ultrasound images of the transverse section of the common carotid artery, but it can be extended to images of the longitudinal section. The intensity profiles are classified using the support vector machine algorithm, and the results of different kernels are compared. The features used for classification are basically statistical features of the intensity profiles. The low echogenicity of the arterial lumen gives the profiles that intersect the artery a special shape that helps distinguish them from other, general profiles. Outlining the arterial walls may seem a classic task in image processing; however, most existing methods start from a manual, or semi-automatic, initial contour. The proposed method is therefore valuable for automating the entire process of artery detection and segmentation.
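
    The classification step can be illustrated with simple statistical features of each accumulated profile. The paper uses a support vector machine; to keep this sketch dependency-free, a nearest-centroid classifier stands in for the SVM, and the feature set shown is an illustrative choice, not the paper's exact list:

```python
import numpy as np

def profile_features(profile):
    """Statistical features of an accumulated intensity profile (the paper's
    exact feature set is not listed; these are illustrative choices)."""
    p = np.asarray(profile, dtype=float)
    return np.array([p.mean(), p.std(), p.min(), p.max(), np.ptp(p)])

class NearestCentroid:
    """Dependency-free stand-in for the SVM classifier used in the paper."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        # assign each sample to the class of its nearest centroid
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[np.argmin(d, axis=1)]
```
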

  11. Quality assurance using outlier detection on an automatic segmentation method for the cerebellar peduncles

    Science.gov (United States)

    Li, Ke; Ye, Chuyang; Yang, Zhen; Carass, Aaron; Ying, Sarah H.; Prince, Jerry L.

    2016-03-01

    Cerebellar peduncles (CPs) are white matter tracts connecting the cerebellum to other brain regions. Automatic segmentation methods for the CPs have been proposed for studying their structure and function. Usually the performance of these methods is evaluated by comparing segmentation results with manual delineations (ground truth). However, when a segmentation method is run on new data (for which no ground truth exists) it is highly desirable to efficiently detect and assess algorithm failures so that these cases can be excluded from scientific analysis. In this work, two outlier detection methods aimed at assessing the performance of an automatic CP segmentation algorithm are presented. The first one is a univariate non-parametric method using a box-whisker plot. We first categorize automatic segmentation results of a dataset of diffusion tensor imaging (DTI) scans from 48 subjects as either a success or a failure. We then design three groups of features from the image data of nine categorized failures for failure detection. Results show that most of these features can efficiently detect the true failures. The second method—supervised classification—was employed on a larger DTI dataset of 249 manually categorized subjects. Four classifiers—linear discriminant analysis (LDA), logistic regression (LR), support vector machine (SVM), and random forest classification (RFC)—were trained using the designed features and evaluated using leave-one-out cross validation. Results show that the LR performs worst among the four classifiers and the other three perform comparably, which demonstrates the feasibility of automatically detecting segmentation failures using classification methods.
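
    The box-whisker criterion of the first method amounts to flagging feature values outside the standard Tukey fences. A minimal sketch, assuming one scalar quality feature per segmented case:

```python
import numpy as np

def box_whisker_outliers(values, k=1.5):
    """Univariate non-parametric failure flagging: indices of points outside
    [Q1 - k*IQR, Q3 + k*IQR], the standard box-and-whisker fences."""
    v = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(v, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return np.where((v < lo) | (v > hi))[0]
```
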

  12. A Review of Automatic Methods Based on Image Processing Techniques for Tuberculosis Detection from Microscopic Sputum Smear Images.

    Science.gov (United States)

    Panicker, Rani Oomman; Soman, Biju; Saini, Gagan; Rajan, Jeny

    2016-01-01

    Tuberculosis (TB) is an infectious disease caused by the bacteria Mycobacterium tuberculosis. It primarily affects the lungs, but it can also affect other parts of the body. TB remains one of the leading causes of death in developing countries, and its recent resurgences in both developed and developing countries warrant global attention. The number of deaths due to TB is very high (as per the WHO report, 1.5 million died in 2013), although most are preventable if diagnosed early and treated. There are many tools for TB detection, but the most widely used one is sputum smear microscopy. It is done manually and is often time consuming; a laboratory technician is expected to spend at least 15 min per slide, limiting the number of slides that can be screened. Many countries, including India, have a dearth of properly trained technicians, and they often fail to detect TB cases due to the stress of a heavy workload. Automatic methods are generally considered as a solution to this problem. Attempts have been made to develop automatic approaches to identify TB bacteria from microscopic sputum smear images. In this paper, we provide a review of automatic methods based on image processing techniques published between 1998 and 2014. The review shows that the accuracy of algorithms for the automatic detection of TB increased significantly over the years and gladly acknowledges that commercial products based on published works also started appearing in the market. This review could be useful to researchers and practitioners working in the field of TB automation, providing a comprehensive and accessible overview of methods of this field of research.

  13. A generic method for automatic translation between input models for different versions of simulation codes

    International Nuclear Information System (INIS)

    Serfontein, Dawid E.; Mulder, Eben J.; Reitsma, Frederik

    2014-01-01

    A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as those for the VSOP codes, are often very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. The task of, for instance, nuclear regulators to verify the accuracy of such translated files can therefore be very difficult and cumbersome, so translation errors may go undetected, which could have disastrous consequences later on if a reactor with such a faulty design is built. A generic algorithm for producing such automatic translation codes may therefore ease the translation and verification process to a great extent. It will also remove human error from the process, which may significantly enhance the accuracy and reliability of the process. The developed algorithm also automatically creates a verification log file which permanently records the names and values of each variable used, as well as the list of meanings of all the possible values. This should greatly facilitate reactor licensing applications.

  14. A generic method for automatic translation between input models for different versions of simulation codes

    Energy Technology Data Exchange (ETDEWEB)

    Serfontein, Dawid E., E-mail: Dawid.Serfontein@nwu.ac.za [School of Mechanical and Nuclear Engineering, North West University (PUK-Campus), PRIVATE BAG X6001 (Internal Post Box 360), Potchefstroom 2520 (South Africa); Mulder, Eben J. [School of Mechanical and Nuclear Engineering, North West University (South Africa); Reitsma, Frederik [Calvera Consultants (South Africa)

    2014-05-01

    A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as those for the VSOP codes, are often very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. The task of, for instance, nuclear regulators to verify the accuracy of such translated files can therefore be very difficult and cumbersome, so translation errors may go undetected, which could have disastrous consequences later on if a reactor with such a faulty design is built. A generic algorithm for producing such automatic translation codes may therefore ease the translation and verification process to a great extent. It will also remove human error from the process, which may significantly enhance the accuracy and reliability of the process. The developed algorithm also automatically creates a verification log file which permanently records the names and values of each variable used, as well as the list of meanings of all the possible values. This should greatly facilitate reactor licensing applications.
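
    The core of such a translator, including the verification log the abstract describes, can be sketched generically. The dict-based model format, the variable names and the log layout below are illustrative assumptions, not the actual VSOP input format:

```python
def translate_model(old_model, key_map, meanings, log_stream):
    """Generic one-to-one translation of an input model (dict of variable ->
    value) into a new code's variable names, writing a verification log of
    every name, value and meaning. Names and formats here are illustrative,
    not the actual VSOP input layout."""
    new_model = {}
    for old_key, value in old_model.items():
        new_key = key_map.get(old_key, old_key)   # untranslated keys pass through
        new_model[new_key] = value
        log_stream.write(f"{old_key} -> {new_key} = {value!r} "
                         f"({meanings.get(old_key, 'no description')})\n")
    return new_model
```

    The log makes the translation auditable line by line, which is the point the abstract makes about verification by regulators.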

  15. Development of automatic extraction method of left ventricular contours on long axis view MR cine images

    International Nuclear Information System (INIS)

    Utsunomiya, Shinichi; Iijima, Naoto; Yamasaki, Kazunari; Fujita, Akinori

    1995-01-01

    In MRI cardiac function analysis, left ventricular volume curves and diagnostic parameters are obtained by extracting the left ventricular cavities as regions of interest (ROI) from long axis view MR cine images. The ROI extraction previously had to be done manually, because automating it is difficult. A long axis view left ventricular contour consists of a cardiac wall part and an aortic valve part. The above-mentioned difficulty is due to the low contrast on the cardiac wall part and the disappearance of edges on the aortic valve part. In this paper, we report a new automatic extraction method for long axis view MR cine images, which needs only 3 manually indicated points on the 1st image to extract all the contours from the total sequence of images. First, candidate points of a contour are detected by edge detection. Then, by selecting the best-matched combination of candidate points using Dynamic Programming, the cardiac wall part is automatically extracted. The aortic valve part is manually extracted for the 1st image by indicating both of its end points, and is automatically extracted for the rest of the images by utilizing the aortic valve motion characteristics throughout a cardiac cycle. (author)
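
    The Dynamic Programming selection of the best-matched candidate combination can be sketched as a shortest-path problem: pick one candidate position per image row so that the summed edge cost plus a smoothness penalty is minimal. The cost matrix and penalty weight below are assumptions for illustration:

```python
import numpy as np

def dp_contour(candidate_cost, smooth=1.0):
    """Pick one candidate edge position per image row so that the summed
    edge cost plus a smoothness penalty is minimal (a sketch of the
    dynamic-programming selection step; costs and weight are illustrative)."""
    n_rows, n_cols = candidate_cost.shape
    dp = candidate_cost.astype(float).copy()
    back = np.zeros((n_rows, n_cols), dtype=int)
    cols = np.arange(n_cols)
    for r in range(1, n_rows):
        # trans[j, k]: cost of reaching column j at row r from column k at r-1
        trans = dp[r - 1][None, :] + smooth * np.abs(cols[:, None] - cols[None, :])
        back[r] = np.argmin(trans, axis=1)
        dp[r] += trans[cols, back[r]]
    # backtrack the optimal path from the cheapest final column
    path = [int(np.argmin(dp[-1]))]
    for r in range(n_rows - 1, 0, -1):
        path.append(int(back[r][path[-1]]))
    return path[::-1]
```
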

  16. A method for the automatic quantification of the completeness of pulmonary fissures: evaluation in a database of subjects with severe emphysema

    Energy Technology Data Exchange (ETDEWEB)

    Rikxoort, Eva M. van; Goldin, Jonathan G.; Galperin-Aizenberg, Maya; Abtin, Fereidoun; Kim, Hyun J.; Lu, Peiyun; Shaw, Greg; Brown, Matthew S. [University of California-Los Angeles, Center for Computer Vision and Imaging Biomarkers and Thoracic Imaging Research Group, Department of Radiological Sciences, David Geffen School of Medicine, Los Angeles, CA (United States); Ginneken, Bram van [Radboud University Nijmegen Medical Centre, Diagnostic Image Analysis Group, Department of Radiology, Nijmegen (Netherlands); University Medical Center Utrecht, Image Sciences Institute, Department of Radiology, Utrecht (Netherlands)

    2012-02-15

    To propose and evaluate a technique for automatic quantification of fissural completeness from chest computed tomography (CT) in a database of subjects with severe emphysema. Ninety-six CT studies of patients with severe emphysema were included. The lungs, fissures and lobes were automatically segmented. The completeness of the fissures was calculated as the percentage of the lobar border defined by a fissure. The completeness score of the automatic method was compared with a visual consensus read by three radiologists using boxplots, rank sum tests and ROC analysis. The consensus read found 49% (47/96), 15% (14/96) and 67% (64/96) of the right major, right minor and left major fissures to be complete. For all fissures visually assessed as being complete the automatic method resulted in significantly higher completeness scores (mean 92.78%) than for those assessed as being partial or absent (mean 77.16%; all p values <0.001). The areas under the curves for the automatic fissural completeness were 0.88, 0.91 and 0.83 for the right major, right minor and left major fissures respectively. An automatic method is able to quantify fissural completeness in a cohort of subjects with severe emphysema consistent with a visual consensus read of three radiologists. (orig.)
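
    The completeness score itself reduces to a simple geometric fraction: the percentage of lobar-border points lying close to a detected fissure point. A minimal sketch, where the distance tolerance is an assumed parameter:

```python
import numpy as np

def fissure_completeness(border_pts, fissure_pts, tol=1.0):
    """Completeness score: percentage of lobar-border points lying within
    `tol` (e.g. mm) of a detected fissure point (tolerance is an assumption)."""
    border_pts = np.asarray(border_pts, dtype=float)
    fissure_pts = np.asarray(fissure_pts, dtype=float)
    # distance from each border point to its nearest fissure point
    d = np.linalg.norm(border_pts[:, None, :] - fissure_pts[None, :, :], axis=2)
    return 100.0 * (d.min(axis=1) <= tol).mean()
```
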

  17. Method of semi-automatic high precision potentiometric titration for characterization of uranium compounds

    International Nuclear Information System (INIS)

    Cristiano, Barbara Fernandes G.; Dias, Fabio C.; Barros, Pedro D. de; Araujo, Radier Mario S. de; Delgado, Jose Ubiratan; Silva, Jose Wanderley S. da; Lopes, Ricardo T.

    2011-01-01

    The method of high precision potentiometric titration is widely used in the certification and characterization of uranium compounds. In order to reduce the analysis time and diminish the influence of the analyst, a semi-automatic version of the method was developed at the safeguards laboratory of CNEN-RJ, Brazil. The method was applied with traceability guaranteed by the use of a potassium dichromate primary standard. The combined standard uncertainty in the determination of the total uranium concentration was of the order of 0.01%, which is better than that of the methods traditionally used by nuclear installations, on the order of 0.1%.

  18. [An automatic peak detection method for LIBS spectrum based on continuous wavelet transform].

    Science.gov (United States)

    Chen, Peng-Fei; Tian, Di; Qiao, Shu-Jun; Yang, Guang

    2014-07-01

    Spectrum peak detection in laser-induced breakdown spectroscopy (LIBS) is an essential step, but the presence of background and noise seriously disturbs the accuracy of peak positions. The present paper proposes a method for automatic peak detection in LIBS spectra in order to enhance the ability to find overlapping peaks and the adaptivity of the search. We introduce the ridge peak detection method based on the continuous wavelet transform to LIBS, discuss the choice of the mother wavelet, and optimize the scale factor and the shift factor. The method also improves ridge peak detection with a ridge-correcting step. The experimental results show that, compared with other peak detection methods (the direct comparison method, the derivative method and the ridge peak search method), our method has a significant advantage in the ability to distinguish overlapping peaks and in the precision of peak detection, and can be applied to data processing in LIBS.
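
    SciPy ships a ready-made implementation of this CWT ridge approach (`find_peaks_cwt`, using a Ricker mother wavelet by default). The wrapper below only illustrates the idea on a synthetic spectrum; the scale range is an assumption, and the paper's mother-wavelet choice and ridge correction are not reproduced:

```python
import numpy as np
from scipy.signal import find_peaks_cwt

def detect_lines(spectrum, widths=np.arange(2, 12)):
    """Ridge-based peak detection via the continuous wavelet transform
    (SciPy's Ricker-wavelet implementation; the scale range is an assumption,
    and the paper's ridge-correction step is not reproduced here)."""
    return find_peaks_cwt(spectrum, widths)
```

    Because the wavelet responses are tracked across scales, a sloping background and moderate noise do not shift the detected peak positions the way a simple local-maximum search would.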

  19. Automatic crack detection method for loaded coal in vibration failure process.

    Directory of Open Access Journals (Sweden)

    Chengwu Li

    Full Text Available In the coal mining process, the destabilization of a loaded coal mass is a prerequisite for coal and rock dynamic disasters, and surface cracks of the coal and rock mass are important indicators, reflecting the current state of the coal body. The detection of surface cracks in the coal body plays an important role in coal mine safety monitoring. In this paper, a method for detecting the surface cracks of loaded coal during the vibration failure process is proposed based on the characteristics of the surface cracks of coal and a support vector machine (SVM). A large number of crack images are obtained by establishing a vibration-induced failure test system with an industrial camera. Histogram equalization and a hysteresis threshold algorithm were used to reduce the noise and emphasize the cracks; then, 600 images and regions, including cracks and non-cracks, were manually labelled. In the crack feature extraction stage, eight features of the cracks are extracted to distinguish cracks from other objects. Finally, a crack identification model with an accuracy over 95% was trained by inputting the labelled sample images into the SVM classifier. The experimental results show that the proposed algorithm has a higher accuracy than the conventional algorithm and can effectively identify cracks on the surface of the coal and rock mass automatically.
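
    The hysteresis-threshold step used to emphasize cracks keeps weak-response pixels only when they connect to a strong response. A small NumPy sketch with 4-connectivity (the two threshold values are illustrative):

```python
import numpy as np
from collections import deque

def hysteresis_threshold(img, low, high):
    """Keep weak-response pixels (> low) only if they are 4-connected to a
    strong pixel (> high); the hysteresis step used here to emphasize cracks."""
    strong = img > high
    weak = img > low
    out = np.zeros_like(strong)
    out[strong] = True
    q = deque(zip(*np.nonzero(strong)))
    # breadth-first flood from strong seeds through the weak mask
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]
                    and weak[rr, cc] and not out[rr, cc]):
                out[rr, cc] = True
                q.append((rr, cc))
    return out
```
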

  20. CAD-based Monte Carlo automatic modeling method based on primitive solid

    International Nuclear Information System (INIS)

    Wang, Dong; Song, Jing; Yu, Shengpeng; Long, Pengcheng; Wang, Yongliang

    2016-01-01

    Highlights: • We develop a method which bi-converts between CAD models and primitive solids. • The method was improved from a conversion method between CAD models and half spaces. • The method was tested with the ITER model, validating its correctness and efficiency. • The method was integrated in SuperMC and can build models for SuperMC and Geant4. - Abstract: The Monte Carlo method has been widely used in nuclear design and analysis, where geometries are described with primitive solids. However, it is time consuming and error prone to describe a primitive solid geometry, especially for a complicated model. To reuse the abundant existing CAD models and to model conveniently with CAD tools, an automatic method for accurate and prompt conversion between CAD models and primitive solids is needed. An automatic modeling method for Monte Carlo geometry described by primitive solids was therefore developed which can bi-convert between CAD models and Monte Carlo geometry represented by primitive solids. When converting from a CAD model to a primitive solid model, the CAD model is decomposed into several convex solid sets, and the corresponding primitive solids are then generated and exported. When converting from a primitive solid model to a CAD model, the basic primitive solids are created and the related operations are performed. This method was integrated in SuperMC and was benchmarked with the ITER benchmark model. The correctness and efficiency of the method were demonstrated.

  1. Method of automatic image registration of three-dimensional range of archaeological restoration

    International Nuclear Information System (INIS)

    Garcia, O.; Perez, M.; Morales, N.

    2012-01-01

    We propose an automatic registration system for the reconstruction of various positions of a large object based on a static structured light pattern. The system combines stereo vision technology, a structured light pattern, the positioning system of the vision sensor and an algorithm that simplifies the process of finding correspondences for the modeling of large objects. A new structured light pattern based on a Kautz sequence is proposed; using this pattern statically, we implement the proposed new registration method. (Author)

  2. AUTOMR: An automatic processing program system for the molecular replacement method

    International Nuclear Information System (INIS)

    Matsuura, Yoshiki

    1991-01-01

    An automatic processing program system for the molecular replacement method, AUTOMR, is presented. The program solves the initial model of the target crystal structure using a homologous molecule as the search model. It processes the structure-factor calculation of the model molecule, the rotation function, the translation function and the rigid-group refinement successively in one computer job. Test calculations were performed for six protein crystals and the structures were solved in all of these cases. (orig.)

  3. Automatic planning for robots: review of methods and some ideas about structure and learning

    Energy Technology Data Exchange (ETDEWEB)

    Cuena, J.; Salmeron, C.

    1983-01-01

    After a brief review of the problems involved in the design of an automatic planner system, attention is focused on the particular problems that appear when the planner is used to control the actions of a robot. In conclusion, the introduction of learning techniques to improve the efficiency of a planner is suggested, and a method for this, currently in development, is presented. 14 references.

  4. Method and Tool for Design Process Navigation and Automatic Generation of Simulation Models for Manufacturing Systems

    Science.gov (United States)

    Nakano, Masaru; Kubota, Fumiko; Inamori, Yutaka; Mitsuyuki, Keiji

    Manufacturing system designers should concentrate on designing and planning manufacturing systems instead of spending their efforts on creating the simulation models to verify the design. This paper proposes a method and its tool to navigate the designers through the engineering process and generate the simulation model automatically from the design results. The design agent also supports collaborative design projects among different companies or divisions with distributed engineering and distributed simulation techniques. The idea was implemented and applied to a factory planning process.

  5. Technical characterization by image analysis: an automatic method of mineralogical studies

    International Nuclear Information System (INIS)

    Oliveira, J.F. de

    1988-01-01

    The application of a modern, fully automated method of image analysis for the study of grain size distribution, modal assays, degree of liberation and mineralogical associations is discussed. The image analyser is interfaced with a scanning electron microscope and an energy dispersive X-ray analyser. The image generated by backscattered electrons is analysed automatically, and the system has been used in assessment studies of applied mineralogy as well as in process control in the mining industry. (author) [pt

  6. SU-E-I-24: Method for CT Automatic Exposure Control Verification

    Energy Technology Data Exchange (ETDEWEB)

    Gracia, M; Olasolo, J; Martin, M; Bragado, L; Gallardo, N; Miquelez, S; Maneru, F; Lozares, S; Pellejero, S; Rubio, A [Complejo Hospitalario de Navarra, Pamplona, Navarra (Spain)

    2015-06-15

    Purpose: To design a phantom and a simple method for automatic exposure control (AEC) verification in CT. This verification is included in the Spanish Quality Assurance Protocol for computed tomography (CT). Methods: The phantom is made from the head and body phantoms used for CTDI measurement and PMMA plates (35×35 cm2) of 10 cm thickness. Thereby, three different thicknesses along the longitudinal axis are obtained, which permit evaluation of the longitudinal AEC performance. In addition, the asymmetry in the PMMA layers helps to assess angular and 3D AEC operation. The recent acquisition in our hospital (August 2014) of a Nomex electrometer (PTW), together with a 10 cm pencil ionization chamber, made it possible to register dose rate as a function of time. Measurements with this chamber fixed at 0° and 90° on the gantry were made on five multidetector CTs from the principal manufacturers. Results: Individual analysis of the measurements shows dose rate variation as a function of phantom thickness. The comparative analysis shows that the dose rate is kept constant in the head and neck phantom while the PMMA phantom exhibits an abrupt variation between both results, with greater results at 90°, as the thickness of the phantom is 3.5 times larger than in the perpendicular direction. Conclusion: The proposed method is simple, quick and reproducible. The results allow a qualitative evaluation of the AEC and are consistent with the expected behavior. A line of future development is to quantitatively study the intensity modulation and image quality parameters, and a possible comparative study between different manufacturers.

  7. AN AUTOMATIC OPTICAL AND SAR IMAGE REGISTRATION METHOD USING ITERATIVE MULTI-LEVEL AND REFINEMENT MODEL

    Directory of Open Access Journals (Sweden)

    C. Xu

    2016-06-01

    Full Text Available Automatic image registration is a vital yet challenging task, particularly for multi-sensor remote sensing images. Given the diversity of the data, it is unlikely that a single registration algorithm or a single image feature will work satisfactorily for all applications. Focusing on this issue, the main contribution of this paper is to propose an automatic optical-to-SAR image registration method using an iterative multi-level and refinement model. Firstly, a multi-level coarse-to-fine registration strategy is presented: visual saliency features are used to acquire a coarse registration, specific area and line features are then used to refine the registration result, and after that, sub-pixel matching is applied using a KNN graph. Secondly, an iterative strategy that involves adaptive parameter adjustment for re-extracting and re-matching features is presented. Considering the fact that almost all feature-based registration methods rely on feature extraction results, the iterative strategy improves the robustness of feature matching, and all parameters can be automatically and adaptively adjusted in the iterative procedure. Thirdly, a uniform level set segmentation model for optical and SAR images is presented to segment conjugate features, and a Voronoi diagram is introduced into Spectral Point Matching (VSPM) to further enhance the matching accuracy between the two sets of matching points. Experimental results show that the proposed method can effectively and robustly generate sufficient, reliable point pairs and provide accurate registration.

  8. Most probable dimension value and most flat interval methods for automatic estimation of dimension from time series

    International Nuclear Information System (INIS)

    Corana, A.; Bortolan, G.; Casaleggio, A.

    2004-01-01

    We present and compare two automatic methods for dimension estimation from time series. Both methods, based on conceptually different approaches, work on the derivative of the bi-logarithmic plot of the correlation integral versus the correlation length (log-log plot). The first method searches for the most probable dimension values (MPDV) and associates to each of them a possible scaling region. The second one searches for the most flat intervals (MFI) in the derivative of the log-log plot. The automatic procedures include the evaluation of the candidate scaling regions using two reliability indices. The data set used to test the methods consists of time series from known model attractors with and without the addition of noise, structured time series, and electrocardiographic signals from the MIT-BIH ECG database. Statistical analysis of results was carried out by means of paired t-test, and no statistically significant differences were found in the large majority of the trials. Consistent results are also obtained dealing with 'difficult' time series. In general for a more robust and reliable estimate, the use of both methods may represent a good solution when time series from complex systems are analyzed. Although we present results for the correlation dimension only, the procedures can also be used for the automatic estimation of generalized q-order dimensions and pointwise dimension. We think that the proposed methods, eliminating the need of operator intervention, allow a faster and more objective analysis, thus improving the usefulness of dimension analysis for the characterization of time series obtained from complex dynamical systems
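The MFI idea above — take the derivative of the log-log correlation plot and look for its flattest window — can be sketched in a few lines. This is an illustrative reconstruction under our own naming, not the authors' code; the demo uses an idealized correlation integral C(r) ∝ r^1.8 with no noise.

```python
import numpy as np

def local_slopes(log_r, log_c):
    """Finite-difference derivative of the bi-logarithmic correlation plot."""
    return np.diff(log_c) / np.diff(log_r)

def most_flat_interval(slopes, window=5):
    """Start index and mean slope of the window with minimal slope variance,
    i.e. the 'most flat interval' used as the candidate scaling region."""
    start = min(range(len(slopes) - window + 1),
                key=lambda i: float(np.var(slopes[i:i + window])))
    return start, float(np.mean(slopes[start:start + window]))

# Synthetic correlation integral with known dimension D = 1.8
log_r = np.linspace(-3.0, 0.0, 40)
log_c = 1.8 * log_r
_, d_est = most_flat_interval(local_slopes(log_r, log_c))
```

On real data the slopes would be noisy and the reliability indices mentioned in the abstract would be needed to rank candidate windows; here the flat interval trivially recovers D = 1.8.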

  9. A method for the automatic separation of the images of galaxies and stars from measurements made with the COSMOS machine

    International Nuclear Information System (INIS)

    MacGillivray, H.T.; Martin, R.; Pratt, N.M.; Reddish, V.C.; Seddon, H.; Alexander, L.W.G.; Walker, G.S.; Williams, P.R.

    1976-01-01

    A method has been developed which allows the computer to distinguish automatically between the images of galaxies and those of stars from measurements made with the COSMOS automatic plate-measuring machine at the Royal Observatory, Edinburgh. Results have indicated that a 90 to 95 per cent separation between galaxies and stars is possible. (author)

  10. Interactive vs. automatic ultrasound image segmentation methods for staging hepatic lipidosis.

    Science.gov (United States)

    Weijers, Gert; Starke, Alexander; Haudum, Alois; Thijssen, Johan M; Rehage, Jürgen; De Korte, Chris L

    2010-07-01

    to predict TAG level in the liver. Receiver-operating-characteristics (ROC) analysis was applied to assess the performance and area under the curve (AUC) of predicting TAG and to compare the sensitivity and specificity of the methods. Best speckle-size estimates and overall performance (R2 = 0.71, AUC = 0.94) were achieved by using an SNR-based adaptive automatic-segmentation method (used TAG threshold: 50 mg/g liver wet weight). Automatic segmentation is thus feasible and profitable.
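The AUC reported above can be computed without any plotting, via the standard Mann–Whitney formulation: the AUC equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch (our own helper, not the study's pipeline; the toy scores stand in for speckle-based predictions against a TAG threshold label):

```python
def roc_auc(scores, labels):
    """ROC AUC via the Mann-Whitney U statistic: fraction of
    positive/negative pairs where the positive case scores higher
    (ties count as half)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfectly separated toy scores give AUC = 1.0
auc = roc_auc([0.9, 0.8, 0.7, 0.4, 0.3, 0.2], [1, 1, 1, 0, 0, 0])
```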

  11. A new automatic design method to develop multilayer thin film devices for high power laser applications

    International Nuclear Information System (INIS)

    Sahoo, N.K.; Apparao, K.V.S.R.

    1992-01-01

    Optical thin film devices play a major role in many areas of frontier technology like development of various laser systems to the designing of complex and precision optical systems. Design and development of these devices are really challenging when they are meant for high power laser applications. In these cases besides desired optical characteristics, the devices are expected to satisfy a whole range of different needs like high damage threshold, durability etc. In the present work a novel completely automatic design method based on Modified Complex Method has been developed for designing of high power thin film devices. Unlike most of the other methods it does not need any suitable starting design. A quarterwave design is sufficient to start with. If required, it is capable of generating its own starting design. The computer code of the method is very simple to implement. This report discusses this novel automatic design method and presents various practicable output designs generated by it. The relative efficiency of the method along with other powerful methods has been presented while designing a broadband IR antireflection coating. The method is also incorporated with 2D and 3D electric field analysis programmes to produce high damage threshold designs. Some experimental devices developed using such designs are also presented in the report. (author). 36 refs., 41 figs

  12. Automatic Recognition Method for Optical Measuring Instruments Based on Machine Vision

    Institute of Scientific and Technical Information of China (English)

    SONG Le; LIN Yuchi; HAO Liguo

    2008-01-01

    Based on a comprehensive study of various algorithms, the automatic recognition of traditional ocular optical measuring instruments is realized. Taking a universal tools microscope (UTM) lens view image as an example, a 2-layer automatic recognition model for data reading is established after adopting a series of pre-processing algorithms. This model is an optimal combination of the correlation-based template matching method and a concurrent back propagation (BP) neural network. Multiple complementary feature extraction is used in generating the eigenvectors of the concurrent network. In order to improve fault-tolerance capacity, rotation invariant features based on Zernike moments are extracted from digit characters and a 4-dimensional group of the outline features is also obtained. Moreover, the operating time and reading accuracy can be adjusted dynamically by setting the threshold value. The experimental result indicates that the newly developed algorithm has optimal recognition precision and working speed. The average reading ratio can achieve 97.23%. The recognition method can automatically obtain the results of optical measuring instruments rapidly and stably without modifying their original structure, which meets the application requirements.

  13. Semi-automatic watershed medical image segmentation methods for customized cancer radiation treatment planning simulation

    International Nuclear Information System (INIS)

    Kum Oyeon; Kim Hye Kyung; Max, N.

    2007-01-01

    A cancer radiation treatment planning simulation requires image segmentation to define the gross tumor volume, clinical target volume, and planning target volume. Manual segmentation, which is usual in clinical settings, depends on the operator's experience and may, in addition, change with every trial by the same operator. To overcome this difficulty, we developed semi-automatic watershed medical image segmentation tools using both the top-down watershed algorithm in the Insight Segmentation and Registration Toolkit (ITK) and Vincent-Soille's bottom-up watershed algorithm with region merging. We applied our algorithms to segment two- and three-dimensional head phantom CT data and to find the pixel (or voxel) numbers for each segmented area, which are needed for radiation treatment optimization. A semi-automatic method is useful to avoid errors incurred by both human and machine sources, and provides clear and visible information for pedagogical purposes. (orig.)
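The watershed algorithms themselves are involved, but the downstream step the abstract highlights — counting pixels or voxels per segmented area for treatment optimization — is simple once a labeled volume exists. A minimal sketch (assuming a NumPy integer label array as the watershed output; names are ours):

```python
import numpy as np

def region_voxel_counts(labels):
    """Voxel count per segmented region; label 0 is treated as background."""
    counts = np.bincount(labels.ravel())
    return {lab: int(n) for lab, n in enumerate(counts) if lab and n}

# Toy 3D label volume with two segmented structures
vol = np.zeros((4, 4, 4), dtype=int)
vol[0:2, 0:2, 0:2] = 1          # structure 1: 8 voxels
vol[3, 3, 0:3] = 2              # structure 2: 3 voxels
counts = region_voxel_counts(vol)
```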

  14. An Automatic Detection Method of Nanocomposite Film Element Based on GLCM and Adaboost M1

    Directory of Open Access Journals (Sweden)

    Hai Guo

    2015-01-01

    Full Text Available An automatic detection model adopting pattern recognition technology is proposed in this paper; it can measure the elemental composition of nanocomposite films. Features of the gray level co-occurrence matrix (GLCM) are extracted from different types of surface morphology images of the film; after that, dimension reduction is handled by principal component analysis (PCA). The film element can then be identified with an Adaboost M1 strong classifier built from ten decision tree classifiers. The experimental result shows that this model is superior to SVM (support vector machine), NN and BayesNet classifiers. The method proposed can be widely applied to the automatic detection of not only nanocomposite film elements but also other nanocomposite material elements.
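The GLCM feature extraction at the heart of this pipeline is easy to sketch from first principles: count how often gray level i occurs next to gray level j under a fixed displacement, normalize, and derive scalar texture statistics. A minimal NumPy version (our own simplified implementation, not the paper's; the demo texture is uniform, so contrast is 0 and energy is 1):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Normalized gray-level co-occurrence matrix for one displacement."""
    m = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1.0
    return m / m.sum()

def glcm_features(m):
    """Contrast, energy and homogeneity - typical GLCM texture features."""
    i, j = np.indices(m.shape)
    contrast = float(((i - j) ** 2 * m).sum())
    energy = float((m ** 2).sum())
    homogeneity = float((m / (1.0 + np.abs(i - j))).sum())
    return contrast, energy, homogeneity

flat = np.zeros((8, 8), dtype=int)            # perfectly uniform texture
contrast, energy, homogeneity = glcm_features(glcm(flat))
```

In the full pipeline such feature vectors, computed for several displacements, would feed PCA and then the AdaBoost M1 ensemble.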

  15. An automatic tuning method of a fuzzy logic controller for nuclear reactors

    International Nuclear Information System (INIS)

    Ramaswamy, P.; Lee, K.Y.; Edwards, R.M.

    1993-01-01

    The design and evaluation by simulation of an automatically tuned fuzzy logic controller is presented. Typically, fuzzy logic controllers are designed based on an expert's knowledge of the process. However, this approach has its limitations in the fact that the controller is hard to optimize or tune to get the desired control action. A method to automate the tuning process using a simplified Kalman filter approach is presented for the fuzzy logic controller to track a suitable reference trajectory. Here, for purposes of illustration an optimal controller's response is used as a reference trajectory to determine automatically the rules for the fuzzy logic controller. To demonstrate the robustness of this design approach, a nonlinear six-delayed neutron group plant is controlled using a fuzzy logic controller that utilizes estimated reactor temperatures from a one-delayed neutron group observer. The fuzzy logic controller displayed good stability and performance robustness characteristics for a wide range of operation

  16. An image-based automatic recognition method for the flowering stage of maize

    Science.gov (United States)

    Yu, Zhenghong; Zhou, Huabing; Li, Cuina

    2018-03-01

    In this paper, we propose an image-based approach for automatically recognizing the flowering stage of maize. A modified HOG/SVM detection framework is first adopted to detect the ears of maize. Then, low-rank matrix recovery technology is used to precisely extract the ears at the pixel level. Finally, a new feature called the color gradient histogram is proposed as an indicator to determine the flowering stage. A comparative experiment has been carried out to verify the validity of our method, and the results indicate that it can meet the demands of practical observation.
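The abstract does not define the "color gradient histogram" feature precisely; one plausible reading is a per-channel histogram of gradient magnitudes, concatenated into a single vector. The sketch below implements that reading as an assumption, not the authors' definition:

```python
import numpy as np

def color_gradient_histogram(img, bins=8):
    """One plausible 'color gradient histogram': for each color channel,
    a normalized histogram of gradient magnitudes, all concatenated."""
    feats = []
    for c in range(img.shape[2]):
        gy, gx = np.gradient(img[..., c].astype(float))
        mag = np.hypot(gx, gy)
        hist, _ = np.histogram(mag, bins=bins, range=(0.0, mag.max() + 1e-9))
        feats.append(hist / max(hist.sum(), 1))
    return np.concatenate(feats)

rng = np.random.default_rng(0)
feat = color_gradient_histogram(rng.random((32, 32, 3)))   # 3 x 8 = 24 dims
```

In the described system, a change in such a feature over the extracted ear pixels would act as the flowering-stage indicator.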

  17. A comparison of coronal mass ejections identified by manual and automatic methods

    Directory of Open Access Journals (Sweden)

    S. Yashiro

    2008-10-01

    Full Text Available Coronal mass ejections (CMEs) are related to many phenomena (e.g. flares, solar energetic particles, geomagnetic storms), so compiling event catalogs is important for a global understanding of these phenomena. CMEs have long been identified manually, but in the SOHO era automatic identification methods are being developed. In order to clarify the advantages and disadvantages of the manual and automatic CME catalogs, we examined the distributions of CME properties listed in the CDAW (manual) and CACTus (automatic) catalogs. The two catalogs agree well on wide CMEs (width>120°) in their properties, while there is a significant discrepancy for narrow CMEs (width≤30°): CACTus lists a larger number of narrow CMEs than CDAW. We carried out an event-by-event examination of a sample of events and found that the CDAW catalog has missed many narrow CMEs during the solar maximum. Another significant discrepancy was found for fast CMEs (speed>1000 km/s): the majority of the fast CDAW CMEs are wide and originate from low latitudes, while the fast CACTus CMEs are narrow and originate from all latitudes. Event-by-event examination of a sample of events suggests that CACTus has a problem with the detection of fast CMEs.

  18. A method of applying two-pump system in automatic transmissions for energy conservation

    Directory of Open Access Journals (Sweden)

    Peng Dong

    2015-06-01

    Full Text Available In order to improve hydraulic efficiency, modern automatic transmissions tend to employ an electric oil pump in their hydraulic system. The electric oil pump can support the mechanical oil pump for cooling, lubrication, and maintaining the line pressure at low engine speeds. In addition, the start–stop function can be realized by means of the electric oil pump, so fuel consumption can be further reduced. This article proposes a method of applying a two-pump system (one electric oil pump and one mechanical oil pump) in automatic transmissions based on forward driving simulation. A mathematical model for calculating the transmission power loss is developed. The power loss is converted to heat, which requires oil flow for cooling and lubrication. A leakage model is developed to calculate the leakage of the hydraulic system. In order to satisfy the flow requirement, a flow-based control strategy for the electric oil pump is developed. Simulation results for different driving cycles show that there is an optimal combination of electric and mechanical oil pump sizes with respect to energy conservation. The two-pump system can also satisfy the requirements of the start–stop function. This research is extremely valuable for the forward design of a two-pump system in automatic transmissions with respect to energy conservation and the start–stop function.
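The core of a flow-based strategy like the one described is a simple balance: the electric pump must cover the shortfall between the demanded flow (cooling, lubrication and leakage) and what the engine-driven mechanical pump delivers. The sketch below illustrates only that balance; the displacement and efficiency figures are invented placeholders, not values from the article:

```python
def electric_pump_flow(engine_rpm, demand_lpm,
                       mech_disp_l_per_rev=0.008, mech_eff=0.9):
    """Flow (L/min) the electric oil pump must supply: the shortfall
    between demanded flow and the mechanical pump's speed-proportional
    output. Displacement/efficiency values are hypothetical."""
    mech_flow = engine_rpm * mech_disp_l_per_rev * mech_eff
    return max(0.0, demand_lpm - mech_flow)

at_standstill = electric_pump_flow(0, 6.0)      # start-stop: engine off,
                                                # electric pump carries all flow
cruising = electric_pump_flow(2000, 6.0)        # mechanical pump suffices
```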

  19. Creation of voxel-based models for paediatric dosimetry from automatic segmentation methods

    International Nuclear Information System (INIS)

    Acosta, O.; Li, R.; Ourselin, S.; Caon, M.

    2006-01-01

    Full text: The first computational models representing human anatomy were mathematical phantoms, still far from accurate representations of the human body. These models have been used with radiation transport codes (Monte Carlo) to estimate organ doses from radiological procedures. Although new medical imaging techniques have recently allowed the construction of voxel-based models of the real anatomy, few child models built from individual CT or MRI data have been reported [1,3]. For paediatric dosimetry purposes, a large range of voxel models by age is required, since scaling the anatomy from existing models is not sufficiently accurate. The small number of models available arises from the small number of CT or MRI data sets of children available and the long time required to segment them. The existing models have been constructed by manual segmentation, slice by slice, using simple thresholding techniques. In medical image segmentation, considerable difficulties appear when applying classical techniques such as thresholding or simple edge detection, and to date there is no evidence of more accurate or near-automatic methods being used in the construction of child voxel models. We aim to construct a range of paediatric voxel models, integrating automatic or semi-automatic 3D segmentation techniques. In this paper we present the first stage of this work using paediatric CT data.

  20. Evaluation of advanced automatic PET segmentation methods using nonspherical thin-wall inserts

    International Nuclear Information System (INIS)

    Berthon, B.; Marshall, C.; Evans, M.; Spezi, E.

    2014-01-01

    Purpose: The use of positron emission tomography (PET) within radiotherapy treatment planning requires the availability of reliable and accurate segmentation tools. PET automatic segmentation (PET-AS) methods have been recommended for the delineation of tumors, but there is still a lack of thorough validation and cross-comparison of such methods using clinically relevant data. In particular, studies validating PET segmentation tools mainly use phantoms with thick-wall plastic inserts of simple spherical geometry and have not specifically investigated the effect of the target object geometry on the delineation accuracy. Our work therefore aimed at generating clinically realistic data using nonspherical thin-wall plastic inserts for the evaluation and comparison of a set of eight promising PET-AS approaches. Methods: Sixteen nonspherical inserts were manufactured with a plastic wall of 0.18 mm and scanned within a custom plastic phantom. These included ellipsoids and toroids of different volumes, as well as tubes, pear- and drop-shaped inserts with different aspect ratios. A set of six spheres of volumes ranging from 0.5 to 102 ml was used for a baseline study. A selection of eight PET-AS methods, written in house, was applied to the images obtained. The methods represented promising segmentation approaches such as adaptive iterative thresholding, region-growing, clustering and gradient-based schemes. The delineation accuracy was measured in terms of overlap with the computed tomography reference contour, using the dice similarity coefficient (DSC), and error in dimensions. Results: The delineation accuracy was lower for nonspherical inserts than for spheres of the same volume in 88% of cases. Slice-by-slice gradient-based methods showed particularly low DSC for tori (DSC ≤ 0.76) but showed the largest errors in recovering the dimensions of pear- and drop-shaped inserts (higher than 10% and 30% of the true length, respectively). Large errors were visible

  1. Automatic variable selection method and a comparison for quantitative analysis in laser-induced breakdown spectroscopy

    Science.gov (United States)

    Duan, Fajie; Fu, Xiao; Jiang, Jiajia; Huang, Tingting; Ma, Ling; Zhang, Cong

    2018-05-01

    In this work, an automatic variable selection method for quantitative analysis of soil samples using laser-induced breakdown spectroscopy (LIBS) is proposed, which is based on full spectrum correction (FSC) and modified iterative predictor weighting-partial least squares (mIPW-PLS). The method features automatic selection without artificial processes. To illustrate the feasibility and effectiveness of the method, a comparison with genetic algorithm (GA) and successive projections algorithm (SPA) for the detection of different elements (copper, barium and chromium) in soil was implemented. The experimental results showed that all three methods could accomplish variable selection effectively, among which FSC-mIPW-PLS required significantly shorter computation time (approximately 12 s for 40,000 initial variables) than the others. Moreover, improved quantification models were obtained with the variable selection approaches. The root mean square errors of prediction (RMSEP) of models utilizing the new method were 27.47 (copper), 37.15 (barium) and 39.70 (chromium) mg/kg, which showed prediction performance comparable with GA and SPA.
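The general idea behind iterative predictor weighting — refit a model, weight variables by the magnitude of their regression coefficients, and prune the weakest — can be illustrated with plain least squares in place of PLS. This is a deliberately simplified stand-in for mIPW-PLS, with our own names and pruning schedule:

```python
import numpy as np

def iterative_predictor_weighting(X, y, n_final=2, keep_frac=0.5):
    """Toy variable selection: repeatedly refit least squares on the
    surviving columns, rank them by |coefficient|, and keep the top
    fraction until n_final variables remain."""
    idx = np.arange(X.shape[1])
    while len(idx) > n_final:
        coef, *_ = np.linalg.lstsq(X[:, idx], y, rcond=None)
        order = np.argsort(np.abs(coef))[::-1]
        n_keep = max(n_final, int(np.ceil(len(idx) * keep_frac)))
        idx = idx[order[:n_keep]]
    return np.sort(idx)

rng = np.random.default_rng(1)
X = rng.standard_normal((60, 10))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.01 * rng.standard_normal(60)
selected = iterative_predictor_weighting(X, y)   # recovers columns 0 and 1
```

In the LIBS setting X would hold tens of thousands of spectral variables and the inner model would be PLS rather than ordinary least squares.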

  2. Automatic optimized reload and depletion method for a pressurized water reactor

    International Nuclear Information System (INIS)

    Ahn, D.H.; Levene, S.H.

    1985-01-01

    A new method has been developed to automatically reload and deplete a pressurized water reactor (PWR) so that both the enriched inventory requirements during the reactor cycle and the cost of reloading the core are minimized. This is achieved through four stepwise optimization calculations: (a) determination of the minimum fuel requirement for an equivalent three-region core model, (b) optimal selection and allocation of fuel assemblies for each of the three regions to minimize the reload cost, (c) optimal placement of fuel assemblies to conserve regionwise optimal conditions, and (d) optimal control through poison management to deplete individual fuel assemblies to maximize end-of-cycle k_eff. The new method differs from previous methods in that the optimization process automatically performs all tasks required to reload and deplete a PWR. In addition, the previous work that developed optimization methods principally for the initial reactor cycle was modified to handle subsequent cycles with fuel assemblies having burnup at beginning of cycle. Application of the method to the fourth reactor cycle at Three Mile Island Unit 1 has shown that both the enrichment and the number of fresh reload fuel assemblies can be decreased and fully amortized fuel assemblies can be reused to minimize the fuel cost of the reactor

  3. fgui: A Method for Automatically Creating Graphical User Interfaces for Command-Line R Packages

    Science.gov (United States)

    Hoffmann, Thomas J.; Laird, Nan M.

    2009-01-01

    The fgui R package is designed for developers of R packages, to help rapidly, and sometimes fully automatically, create a graphical user interface for a command line R package. The interface is built upon the Tcl/Tk graphical interface included in R. The package further facilitates the developer by loading in the help files from the command line functions to provide context sensitive help to the user with no additional effort from the developer. Passing a function as the argument to the routines in the fgui package creates a graphical interface for the function, and further options are available to tweak this interface for those who want more flexibility. PMID:21625291

  4. A semi-automatic method for developing an anthropomorphic numerical model of dielectric anatomy by MRI

    International Nuclear Information System (INIS)

    Mazzurana, M; Sandrini, L; Vaccari, A; Malacarne, C; Cristoforetti, L; Pontalti, R

    2003-01-01

    Complex permittivity values have a dominant role in the overall consideration of interaction between radiofrequency electromagnetic fields and living matter, and in related applications such as electromagnetic dosimetry. There are still some concerns about the accuracy of published data and about their variability due to the heterogeneous nature of biological tissues. The aim of this study is to provide an alternative semi-automatic method by which numerical dielectric human models for dosimetric studies can be obtained. Magnetic resonance imaging (MRI) tomography was used to acquire images. A new technique was employed to correct nonuniformities in the images, and frequency-dependent transfer functions were used to correlate image intensity with complex permittivity. The proposed method provides frequency-dependent models in which permittivity and conductivity vary with continuity, even within the same tissue, reflecting the intrinsic spatial dispersion of such parameters. The human model is tested with an FDTD (finite difference time domain) algorithm at different frequencies; the results of layer-averaged and whole-body-averaged SAR (specific absorption rate) are compared with published work, and reasonable agreement has been found. Due to the short time needed to obtain a whole body model, this semi-automatic method may be suitable for efficient study of various conditions that can determine large differences in the SAR distribution, such as body shape, posture, fat-to-muscle ratio, height and weight

  5. Adaptive and automatic red blood cell counting method based on microscopic hyperspectral imaging technology

    Science.gov (United States)

    Liu, Xi; Zhou, Mei; Qiu, Song; Sun, Li; Liu, Hongying; Li, Qingli; Wang, Yiting

    2017-12-01

    Red blood cell counting, as a routine examination, plays an important role in medical diagnosis. Although automated hematology analyzers are widely used, manual microscopic examination by a hematologist or pathologist is still unavoidable, which is time-consuming and error-prone. This paper proposes a fully automatic red blood cell counting method which is based on microscopic hyperspectral imaging of blood smears and combines spatial and spectral information to achieve high precision. The acquired hyperspectral image data of the blood smear in the visible and near-infrared spectral range are first preprocessed, and then a quadratic blind linear unmixing algorithm is used to obtain endmember abundance images. Based on mathematical morphological operations and an adaptive Otsu's method, a binarization process is performed on the abundance images. Finally, the connected component labeling algorithm with magnification-based parameter setting is applied to automatically select the binary images of red blood cell cytoplasm. Experimental results show that the proposed method performs well and has potential for clinical applications.
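The final counting stage — connected component labeling of the binarized image plus a size filter to reject specks — is straightforward to sketch. A minimal pure-NumPy BFS version (our own illustration of that step, not the paper's hyperspectral pipeline; the `min_area` threshold is a placeholder for the magnification-based parameter):

```python
import numpy as np
from collections import deque

def count_cells(binary, min_area=3):
    """Count 4-connected foreground components with at least min_area
    pixels, mirroring the label-then-filter step of automated counting."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    areas = []
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not labels[sy, sx]:
                labels[sy, sx] = len(areas) + 1
                area, q = 1, deque([(sy, sx)])
                while q:                      # breadth-first flood fill
                    y, x = q.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny, nx] and not labels[ny, nx]:
                            labels[ny, nx] = labels[sy, sx]
                            area += 1
                            q.append((ny, nx))
                areas.append(area)
    return sum(a >= min_area for a in areas)

img = np.zeros((8, 8), dtype=bool)
img[1:3, 1:3] = True    # cell of 4 pixels
img[5:7, 4:7] = True    # cell of 6 pixels
img[0, 7] = True        # 1-pixel speck, filtered out
n_cells = count_cells(img)
```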

  6. A FUZZY AUTOMATIC CAR DETECTION METHOD BASED ON HIGH RESOLUTION SATELLITE IMAGERY AND GEODESIC MORPHOLOGY

    Directory of Open Access Journals (Sweden)

    N. Zarrinpanjeh

    2017-09-01

    Full Text Available Automatic car detection and recognition from aerial and satellite images is mostly practiced for easy and fast traffic monitoring in cities and rural areas, where direct approaches have proved costly and inefficient. Towards the goal of automatic car detection, and in parallel with many other published solutions, in this paper morphological operators, specifically geodesic dilation, are studied and applied to GeoEye-1 images to extract car items in accordance with available vector maps. The results of geodesic dilation are then segmented and labeled to generate primitive car items, which are introduced to a fuzzy decision making system for verification. The verification inspects the major and minor axes of each region and the orientation of the cars with respect to the road direction. The proposed method is implemented and tested using GeoEye-1 pansharpened imagery. The results show that the proposed method is successful, with an overall accuracy of 83%. It is also concluded that the results are sensitive to the quality of the available vector map; to overcome the shortcomings of this method, it is recommended to consider spectral information in the process of hypothesis verification.
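Geodesic dilation — dilating a marker image while constraining it to stay inside a mask, repeated to stability — is the textbook morphological reconstruction operation this pipeline relies on. A minimal binary NumPy sketch (our own illustration; the toy arrays stand in for the vector-map marker and the image mask):

```python
import numpy as np

def binary_dilate(img):
    """One 4-connected binary dilation step."""
    out = img.copy()
    out[1:, :] |= img[:-1, :]
    out[:-1, :] |= img[1:, :]
    out[:, 1:] |= img[:, :-1]
    out[:, :-1] |= img[:, 1:]
    return out

def geodesic_reconstruction(marker, mask):
    """Iterate geodesic dilation (dilate, then intersect with the mask)
    until stability, recovering only the mask components the marker touches."""
    cur = marker & mask
    while True:
        nxt = binary_dilate(cur) & mask
        if np.array_equal(nxt, cur):
            return cur
        cur = nxt

mask = np.zeros((6, 6), dtype=bool)
mask[0:2, 0:2] = True           # component A (e.g. a car candidate)
mask[4:6, 4:6] = True           # component B, untouched by the marker
marker = np.zeros_like(mask)
marker[0, 0] = True             # seed inside component A only
recon = geodesic_reconstruction(marker, mask)
```

Only component A survives the reconstruction; in the described method, the surviving regions become the primitive car items passed to the fuzzy verifier.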

  7. a Fuzzy Automatic CAR Detection Method Based on High Resolution Satellite Imagery and Geodesic Morphology

    Science.gov (United States)

    Zarrinpanjeh, N.; Dadrassjavan, F.

    2017-09-01

    Automatic car detection and recognition from aerial and satellite images is mostly practiced for easy and fast traffic monitoring in cities and rural areas, where direct approaches have proved costly and inefficient. Towards the goal of automatic car detection, and in parallel with many other published solutions, in this paper morphological operators, specifically geodesic dilation, are studied and applied to GeoEye-1 images to extract car items in accordance with available vector maps. The results of geodesic dilation are then segmented and labeled to generate primitive car items, which are introduced to a fuzzy decision making system for verification. The verification inspects the major and minor axes of each region and the orientation of the cars with respect to the road direction. The proposed method is implemented and tested using GeoEye-1 pansharpened imagery. The results show that the proposed method is successful, with an overall accuracy of 83%. It is also concluded that the results are sensitive to the quality of the available vector map; to overcome the shortcomings of this method, it is recommended to consider spectral information in the process of hypothesis verification.

  8. The Fractal Patterns of Words in a Text: A Method for Automatic Keyword Extraction.

    Science.gov (United States)

    Najafi, Elham; Darooneh, Amir H

    2015-01-01

    A text can be considered as a one-dimensional array of words. The locations of each word type in this array form a fractal pattern with a certain fractal dimension. We observe that important words responsible for conveying the meaning of a text have dimensions considerably different from one, while the fractal dimensions of unimportant words are close to one. We introduce an index quantifying the importance of the words in a given text using their fractal dimensions and then ranking them according to their importance. This index measures the difference between the fractal pattern of a word in the original text relative to a shuffled version. Because the shuffled text is meaningless (i.e., words have no importance), the difference between the original and shuffled text can be used to ascertain degree of fractality. The degree of fractality may be used for automatic keyword detection. Words with a degree of fractality higher than a threshold value are assumed to be the retrieved keywords of the text. We measure the efficiency of our method for keyword extraction by comparing it with two other well-known methods of automatic keyword extraction.
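The key mechanism — compare how clustered a word's positions are in the original text versus a shuffled version — can be illustrated with a much cruder dispersion statistic than a fractal dimension. The sketch below uses the coefficient of variation of inter-occurrence gaps as a stand-in for the paper's fractal measure; the statistic and names are our simplification, not the authors' index:

```python
import random

def gap_cv(positions):
    """Coefficient of variation of gaps between successive occurrences:
    0 for perfectly regular spacing, large for clustered positions."""
    gaps = [b - a for a, b in zip(positions, positions[1:])]
    if len(gaps) < 2:
        return 0.0
    mean = sum(gaps) / len(gaps)
    var = sum((g - mean) ** 2 for g in gaps) / len(gaps)
    return var ** 0.5 / mean

def fractality_degree(words, word, n_shuffles=50, seed=0):
    """Clustering of a word's positions minus its shuffled-text baseline
    (a crude stand-in for the paper's original-vs-shuffled difference)."""
    original = gap_cv([i for i, w in enumerate(words) if w == word])
    rng = random.Random(seed)
    baseline, shuffled = 0.0, list(words)
    for _ in range(n_shuffles):
        rng.shuffle(shuffled)
        baseline += gap_cv([i for i, w in enumerate(shuffled) if w == word])
    return original - baseline / n_shuffles

words = ["w%d" % i for i in range(100)]
for i in range(0, 100, 10):
    words[i] = "the"                    # evenly spread function word
for i in (5, 6, 7, 55, 56, 57):
    words[i] = "topic"                  # clustered content word
```

As in the paper's intuition, the clustered content word scores above its shuffled baseline while the evenly spread function word does not.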

  9. The Fractal Patterns of Words in a Text: A Method for Automatic Keyword Extraction

    Science.gov (United States)

    Najafi, Elham; Darooneh, Amir H.

    2015-01-01

    A text can be considered as a one-dimensional array of words. The locations of each word type in this array form a fractal pattern with a certain fractal dimension. We observe that important words responsible for conveying the meaning of a text have dimensions considerably different from one, while the fractal dimensions of unimportant words are close to one. We introduce an index quantifying the importance of the words in a given text using their fractal dimensions and then ranking them according to their importance. This index measures the difference between the fractal pattern of a word in the original text relative to a shuffled version. Because the shuffled text is meaningless (i.e., words have no importance), the difference between the original and shuffled text can be used to ascertain degree of fractality. The degree of fractality may be used for automatic keyword detection. Words with a degree of fractality higher than a threshold value are assumed to be the retrieved keywords of the text. We measure the efficiency of our method for keyword extraction by comparing it with two other well-known methods of automatic keyword extraction. PMID:26091207

  10. Analysis of facial expressions in parkinson's disease through video-based automatic methods.

    Science.gov (United States)

    Bandini, Andrea; Orlandi, Silvia; Escalante, Hugo Jair; Giovannelli, Fabio; Cincotta, Massimo; Reyes-Garcia, Carlos A; Vanni, Paola; Zaccara, Gaetano; Manfredi, Claudia

    2017-04-01

    The automatic analysis of facial expressions is an evolving field with several clinical applications. One of these applications is the study of facial bradykinesia in Parkinson's disease (PD), which is a major motor sign of this neurodegenerative illness. Facial bradykinesia consists in the reduction/loss of facial movements and emotional facial expressions, called hypomimia. In this work we propose an automatic method for studying facial expressions in PD patients relying on video-based methods. Methods: 17 Parkinsonian patients and 17 healthy control subjects were asked to show basic facial expressions, upon request of the clinician and after imitation of a visual cue on a screen. Using an existing face tracker, the Euclidean distance of the facial model from a neutral baseline was computed in order to quantify the changes in facial expressivity during the tasks. Moreover, an automatic facial expression recognition algorithm was trained in order to study how PD expressions differed from the standard expressions. Results show that control subjects reported on average higher distances than PD patients across the tasks, confirming that control subjects show larger movements during both posed and imitated facial expressions. Moreover, our results demonstrate that anger and disgust are the two most impaired expressions in PD patients. Contactless video-based systems can be important techniques for analyzing facial expressions also in rehabilitation, in particular speech therapy, where patients could benefit from real-time feedback about the proper facial expressions/movements to perform. Copyright © 2017 Elsevier B.V. All rights reserved.
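The expressivity measure described — the Euclidean distance of the tracked facial model from a neutral baseline — reduces to a per-landmark distance average. A minimal sketch (our own reading of that measure; the 68-point model and the toy "mouth" offset are illustrative assumptions):

```python
import numpy as np

def expressivity(landmarks, neutral):
    """Mean per-landmark Euclidean distance of a tracked facial model
    from its neutral baseline."""
    return float(np.mean(np.linalg.norm(landmarks - neutral, axis=1)))

neutral = np.zeros((68, 2))                 # e.g. a 68-point 2D face model
frame = neutral.copy()
frame[48:68] += [0.0, 3.0]                  # 20 mouth points move 3 units
score = expressivity(frame, neutral)        # 20 * 3 / 68 of total motion
```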

  11. A Method for Automatic Image Rectification and Stitching for Vehicle Yaw Marks Trajectory Estimation

    Directory of Open Access Journals (Sweden)

    Vidas Žuraulis

    2016-02-01

    Full Text Available The aim of this study has been to propose a new method for automatic rectification and stitching of the images taken on the accident site. The proposed method does not require any measurements to be performed on the accident site and thus it is free of measurement errors. The experimental investigation was performed in order to compare the vehicle trajectory estimated from the yaw marks in the stitched image with the trajectory reconstructed using the GPS data. The overall mean error of the trajectory reconstruction produced by the method proposed in this paper was 0.086 m, which is only 0.18% of the whole trajectory length.

  12. A semi-automatic method for positioning a femoral bone reconstruction for strict view generation.

    Science.gov (United States)

    Milano, Federico; Ritacco, Lucas; Gomez, Adrian; Gonzalez Bernaldo de Quiros, Fernan; Risk, Marcelo

    2010-01-01

    In this paper we present a semi-automatic method for femoral bone positioning after 3D image reconstruction from Computed Tomography images. This serves as grounding for the definition of strict axial, longitudinal and anterior-posterior views, overcoming the problem of patient positioning biases in 2D femoral bone measuring methods. After the bone reconstruction is aligned to a standard reference frame, new tomographic slices can be generated, on which unbiased measures may be taken. This could allow not only accurate inter-patient comparisons but also intra-patient comparisons, i.e., comparisons of images of the same patient taken at different times. This method could enable medical doctors to diagnose and follow up several bone deformities more easily.

  13. Development on quantitative safety analysis method of accident scenario. The automatic scenario generator development for event sequence construction of accident

    International Nuclear Information System (INIS)

    Kojima, Shigeo; Onoue, Akira; Kawai, Katsunori

    1998-01-01

    This study intends to develop a more sophisticated tool that will advance the current event tree method used in all PSA, and to focus on non-catastrophic events, specifically a non-core melt sequence scenario not included in an ordinary PSA. In a non-catastrophic event PSA, it is necessary to consider various end states and failure combinations for the purpose of multiple scenario construction. It is therefore anticipated that the analysis workload must be reduced, and an automated method and tool are required. A scenario generator was developed that can automatically handle scenario construction logic and generate the enormous number of sequences logically identified by state-of-the-art methodology. To realize scenario generation in a technical tool, a simulation model combining AI techniques with a graphical interface was introduced. The AI simulation model in this study was verified for the feasibility of its capability to evaluate actual systems. In this feasibility study, a spurious SI signal was selected to test the model's applicability. As a result, the basic capability of the scenario generator could be demonstrated and important scenarios were generated. The human interface with a system and its operation, as well as time dependent factors and their quantification in scenario modeling, were added utilizing the human scenario generator concept. The feasibility of the improved scenario generator was then tested for actual use. Automatic scenario generation with a certain level of credibility was achieved by this study. (author)

  14. Automatic method for selective enhancement of different tissue densities at digital chest radiography

    International Nuclear Information System (INIS)

    McNitt-Gray, M.F.; Taira, R.K.; Eldredge, S.L.; Razavi, M.

    1991-01-01

    This paper reports that digital chest radiographs often are too bright and/or lack contrast when viewed on a video display. The authors have developed a method that can automatically provide a series of look-up tables that selectively enhance the radiographically soft or dense tissues on a digital chest radiograph. This reduces viewer interaction and improves displayed image quality. On the basis of a histogram analysis, gray-level ranges are approximated for the patient background, radiographically soft tissues, and radiographically dense tissues. A series of look-up tables is automatically created by varying the contrast in each range to achieve a level of enhancement for a selected tissue range. This is repeated for differing amounts of enhancement and for each tissue range. This allows the viewer to interactively select a tissue density range and degree of enhancement at the time of display via precalculated look-up tables. Preclinical trials in pediatric radiology using computed radiography images show that this method reduces viewer interaction and improves or maintains the displayed image quality
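One way such precalculated look-up tables can be built, assuming 8-bit gray levels and a simple window-style mapping; this is an illustrative sketch (the actual tissue ranges come from the histogram analysis, and the authors' enhancement curves may differ):

```python
def window_lut(lo, hi, levels=256):
    """Window-style look-up table: gray levels in [lo, hi] are stretched
    linearly across the full output range; levels outside are clipped.
    A series of such tables (one per tissue range and enhancement degree)
    can be precalculated for interactive selection at display time."""
    scale = (levels - 1) / (hi - lo)
    return [min(levels - 1, max(0, round((g - lo) * scale)))
            for g in range(levels)]

# Hypothetical "radiographically dense tissue" range of an 8-bit image.
lut = window_lut(64, 192)
print(lut[64], lut[128], lut[192])  # → 0 128 255
```

Applying `lut` per pixel at display time is cheap, which is what makes the precalculated-table approach interactive.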

  15. Automatic recognition of coronal type II radio bursts: The ARBIS 2 method and first observations

    Science.gov (United States)

    Lobzin, Vasili; Cairns, Iver; Robinson, Peter; Steward, Graham; Patterson, Garth

    Major space weather events such as solar flares and coronal mass ejections are usually accompanied by solar radio bursts, which can potentially be used for real-time space weather forecasts. Type II radio bursts are produced near the local plasma frequency and its harmonic by fast electrons accelerated by a shock wave moving through the corona and solar wind with a typical speed of 1000 km s-1. The coronal bursts have dynamic spectra with frequency gradually falling with time and durations of several minutes. We present a new method developed to detect type II coronal radio bursts automatically and describe its implementation in an extended Automated Radio Burst Identification System (ARBIS 2). Preliminary tests of the method with spectra obtained in 2002 show that the performance of the current implementation is quite high, ˜ 80%, while the probability of false positives is reasonably low, with one false positive per 100-200 hr for high solar activity and less than one false event per 10000 hr for low solar activity periods. The first automatically detected coronal type II radio bursts are also presented. ARBIS 2 is now operational with IPS Radio and Space Services, providing email alerts and event lists internationally.

  16. A novel automatic method for monitoring Tourette motor tics through a wearable device.

    Science.gov (United States)

    Bernabei, Michel; Preatoni, Ezio; Mendez, Martin; Piccini, Luca; Porta, Mauro; Andreoni, Giuseppe

    2010-09-15

    The aim of this study was to propose a novel automatic method for quantifying motor-tics caused by the Tourette Syndrome (TS). In this preliminary report, the feasibility of the monitoring process was tested over a series of standard clinical trials in a population of 12 subjects affected by TS. A wearable instrument with an embedded three-axial accelerometer was used to detect and classify motor tics during standing and walking activities. An algorithm was devised to analyze acceleration data by: eliminating noise; detecting peaks connected to pathological events; and classifying intensity and frequency of motor tics into quantitative scores. These indexes were compared with the video-based ones provided by expert clinicians, which were taken as the gold-standard. Sensitivity, specificity, and accuracy of tic detection were estimated, and an agreement analysis was performed through the least square regression and the Bland-Altman test. The tic recognition algorithm showed sensitivity = 80.8% ± 8.5% (mean ± SD), specificity = 75.8% ± 17.3%, and accuracy = 80.5% ± 12.2%. The agreement study showed that automatic detection tended to overestimate the number of tics that occurred. However, this appeared to be a systematic error due to the different recognition principles of the wearable and video-based systems. Furthermore, there was substantial concurrency with the gold-standard in estimating the severity indexes. The proposed methodology gave promising performances in terms of automatic motor-tics detection and classification in a standard clinical context. The system may provide physicians with a quantitative aid for TS assessment. Further developments will focus on the extension of its application to everyday long-term monitoring out of clinical environments. © 2010 Movement Disorder Society.
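The peak-detection step can be illustrated with a minimal picker on the three-axis acceleration magnitude. The threshold and refractory gap below are hypothetical values for a toy trace; the published algorithm also includes a denoising stage:

```python
import math

def detect_tics(samples, threshold, refractory=5):
    """Flag candidate motor tics as local peaks of the acceleration
    magnitude exceeding `threshold`, with a refractory gap (in samples)
    so that one movement burst is not counted repeatedly."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    events, last = [], -refractory
    for i in range(1, len(mags) - 1):
        if (mags[i] > threshold and mags[i] >= mags[i - 1]
                and mags[i] >= mags[i + 1] and i - last >= refractory):
            events.append(i)
            last = i
    return events

# Toy three-axis trace: quiet baseline (gravity only) with two bursts.
trace = ([(0, 0, 1)] * 10 + [(3, 0, 1)] + [(0, 0, 1)] * 10
         + [(0, 4, 1)] + [(0, 0, 1)] * 10)
print(detect_tics(trace, threshold=2.0))  # → [10, 21]
```

Counting and grading such events over a recording session yields the intensity and frequency scores that were compared against the clinicians' video-based gold standard.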

  17. An automatic method to determine cutoff frequency based on image power spectrum

    International Nuclear Information System (INIS)

    Beis, J.S.; Vancouver Hospital and Health Sciences Center, British Columbia; Celler, A.; Barney, J.S.

    1995-01-01

    The authors present an algorithm for automatically choosing the filter cutoff frequency (Fc) using the power spectrum of the projections. The method is based on the assumption that the expectation of the image power spectrum is the sum of the expectation of the blurred object power spectrum (dominant at low frequencies) plus a constant value due to Poisson noise. By considering the discrete components of the noise-dominated high-frequency spectrum as a Gaussian distribution N(μ,σ), the Student t-test determines Fc as the highest frequency for which the image frequency components are unlikely to be drawn from N(μ,σ). The method is general and can be applied to any filter. In this work, the authors tested the approach using the Metz restoration filter on simulated, phantom, and patient data with good results. Quantitative performance of the technique was evaluated by plotting recovery coefficient (RC) versus NMSE of reconstructed images.
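A minimal sketch of the idea on a synthetic spectrum; a fixed k-sigma criterion is used here as a stand-in for the per-bin Student t-test, and the tail fraction and spectrum values are illustrative:

```python
import math

def cutoff_frequency(power, tail_frac=0.25, k=3.0):
    """Estimate the cutoff Fc as the highest frequency whose power is
    unlikely to come from the noise distribution N(mu, sigma) fitted on
    the highest-frequency `tail_frac` of the spectrum."""
    n = len(power)
    tail = power[int(n * (1 - tail_frac)):]
    mu = sum(tail) / len(tail)
    sigma = math.sqrt(sum((p - mu) ** 2 for p in tail) / (len(tail) - 1))
    for f in range(n - 1, -1, -1):
        if power[f] > mu + k * sigma:
            return f
    return 0

# Synthetic power spectrum: decaying signal over a flat Poisson-like floor.
power = [50, 40, 30, 20, 10, 5] + [1.0, 1.1, 0.9] * 6
print(cutoff_frequency(power))  # → 5
```

Everything above the returned index is treated as noise-dominated, which is exactly the frequency range a restoration filter such as Metz should suppress.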

  18. Developing an Intelligent Automatic Appendix Extraction Method from Ultrasonography Based on Fuzzy ART and Image Processing

    Directory of Open Access Journals (Sweden)

    Kwang Baek Kim

    2015-01-01

    Full Text Available Ultrasound examination (US) plays a key role in the diagnosis and management of patients with clinically suspected appendicitis, which is the most common abdominal surgical emergency. Among the various sonographic findings of appendicitis, the outer diameter of the appendix is the most important. Therefore, clear delineation of the appendix on US images is essential. In this paper, we propose a new intelligent method to extract the appendix automatically from abdominal sonographic images as a basic building block of developing such an intelligent tool for medical practitioners. Knowing that the appendix is located at the lower organ area below the bottom fascia line, we conduct a series of image processing techniques to find the fascia line correctly. We then apply the fuzzy ART learning algorithm to the organ area in order to extract the appendix accurately. The experiment verifies that the proposed method is highly accurate (successful in 38 out of 40 cases) in extracting the appendix.

  19. Automatic Method for Controlling the Iodine Adsorption Number in Carbon Black Oil Furnaces

    Directory of Open Access Journals (Sweden)

    Zečević, N.

    2008-12-01

    Full Text Available There are numerous inlet process factors in carbon black oil furnaces which must be continuously and automatically adjusted to ensure stable quality of the final product. The six most important inlet process factors in carbon black oil-furnaces are: 1. volume flow of process air for combustion; 2. temperature of process air for combustion; 3. volume flow of natural gas supplying the heat necessary for the thermal conversion of the hydrocarbon oil feedstock into oil-furnace carbon black; 4. mass flow rate of hydrocarbon oil feedstock; 5. type and quantity of additive for adjusting the structure of oil-furnace carbon black; 6. quantity and position of the quench water for cooling the oil-furnace carbon black reaction. The adsorption capacity of oil-furnace carbon black is controlled with the mass flow rate of hydrocarbon feedstock, which is the most important inlet process factor. In the industrial process, the adsorption capacity of oil-furnace carbon black is determined by laboratory analysis of the iodine adsorption number. A continuous and automatic method for controlling the iodine adsorption number in carbon black oil-furnaces is presented, aiming at the most efficient possible control of adsorption capacity. The proposed method reveals the correlation between the qualitative-quantitative composition of the process tail gases in the production of oil-furnace carbon black and the ratio between air for combustion and hydrocarbon feedstock. It is shown that the ratio between air for combustion and hydrocarbon oil feedstock depends on the adsorption capacity, summarized by the iodine adsorption number, with regard to the BMCI index of the hydrocarbon oil feedstock. The mentioned correlation can be seen in figures 1 to 4. Of the whole composition of the process tail gases, the best correlation for continuous and automatic control of the iodine adsorption number is shown by the volume fraction of methane. The volume fraction of methane in the
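As a purely hypothetical illustration of ratio-based control driven by a tail-gas measurement (not the paper's scheme; sign, gain, and target values are invented), a single proportional correction step might look like:

```python
def adjust_ratio(ratio, methane_frac, target_frac, gain=0.05):
    """One step of a plain proportional controller: nudge the
    air/feedstock ratio so the methane volume fraction in the tail gas
    approaches the target associated with the desired iodine adsorption
    number. Sign and gain here are illustrative only."""
    return ratio + gain * (methane_frac - target_frac)

ratio = 5.0
for measured in (0.30, 0.27, 0.25):  # methane fraction converging to target
    ratio = adjust_ratio(ratio, measured, target_frac=0.25)
print(round(ratio, 4))  # → 5.0035
```

In a real plant the correction would of course be embedded in a calibrated control loop with the correlations shown in the paper's figures.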

  20. ISS Contingency Attitude Control Recovery Method for Loss of Automatic Thruster Control

    Science.gov (United States)

    Bedrossian, Nazareth; Bhatt, Sagar; Alaniz, Abran; McCants, Edward; Nguyen, Louis; Chamitoff, Greg

    2008-01-01

    In this paper, the attitude control issues associated with International Space Station (ISS) loss of automatic thruster control capability are discussed and methods for attitude control recovery are presented. This scenario was experienced recently during Shuttle mission STS-117 and ISS Stage 13A in June 2007 when the Russian GN&C computers, which command the ISS thrusters, failed. Without automatic propulsive attitude control, the ISS would not be able to regain attitude control after the Orbiter undocked. The core issues associated with recovering long-term attitude control using CMGs are described as well as the systems engineering analysis to identify recovery options. It is shown that the recovery method can be separated into a procedure for rate damping to a safe harbor gravity gradient stable orientation and a capability to maneuver the vehicle to the necessary initial conditions for long term attitude hold. A manual control option using Soyuz and Progress vehicle thrusters is investigated for rate damping and maneuvers. The issues with implementing such an option are presented and the key issue of closed-loop stability is addressed. A new non-propulsive alternative to thruster control, Zero Propellant Maneuver (ZPM) attitude control method is introduced and its rate damping and maneuver performance evaluated. It is shown that ZPM can meet the tight attitude and rate error tolerances needed for long term attitude control. A combination of manual thruster rate damping to a safe harbor attitude followed by a ZPM to Stage long term attitude control orientation was selected by the Anomaly Resolution Team as the alternate attitude control method for such a contingency.

  1. 3D automatic segmentation method for retinal optical coherence tomography volume data using boundary surface enhancement

    Directory of Open Access Journals (Sweden)

    Yankui Sun

    2016-03-01

    Full Text Available With the introduction of spectral-domain optical coherence tomography (SD-OCT), much larger image datasets are routinely acquired compared to what was possible using the previous generation of time-domain OCT. Thus, there is a critical need for the development of three-dimensional (3D) segmentation methods for processing these data. We present here a novel 3D automatic segmentation method for retinal OCT volume data. Briefly, to segment a boundary surface, two OCT volume datasets are obtained by using a 3D smoothing filter and a 3D differential filter. Their linear combination is then calculated to generate new volume data with an enhanced boundary surface, where pixel intensity, boundary position information, and intensity changes on both sides of the boundary surface are used simultaneously. Next, preliminary discrete boundary points are detected from the A-Scans of the volume data. Finally, surface smoothness constraints and a dynamic threshold are applied to obtain a smoothed boundary surface by correcting a small number of error points. Our method can extract retinal layer boundary surfaces sequentially with a decreasing search region of volume data. We performed automatic segmentation on eight human OCT volume datasets acquired from a commercial Spectralis OCT system, where each volume of datasets contains 97 OCT B-Scan images with a resolution of 496×512 (each B-Scan comprising 512 A-Scans containing 496 pixels); experimental results show that this method can accurately segment seven layer boundary surfaces in normal as well as some abnormal eyes.
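The boundary-enhancement idea (linear combination of a smoothed and a differential copy) can be illustrated in 1D along a single A-scan; the method itself applies 3D filters to the whole volume, and the filter kernels and weight below are simplified assumptions:

```python
def enhance_boundary(signal, alpha=0.5):
    """Linear combination of a smoothed copy (3-point mean) and a
    differential copy (central difference) of the input, boosting
    intensity changes at layer boundaries while keeping intensity
    information from both sides."""
    n = len(signal)
    smooth = [(signal[max(i - 1, 0)] + signal[i] + signal[min(i + 1, n - 1)]) / 3
              for i in range(n)]
    diff = [(signal[min(i + 1, n - 1)] - signal[max(i - 1, 0)]) / 2
            for i in range(n)]
    return [(1 - alpha) * s + alpha * d for s, d in zip(smooth, diff)]

# Step edge between two "layers": the combined response peaks at the boundary.
a_scan = [0, 0, 0, 10, 10, 10]
enhanced = enhance_boundary(a_scan)
print(enhanced.index(max(enhanced)))  # → 3
```

Detecting the maximum of the enhanced response per A-scan gives the preliminary discrete boundary points that the surface-smoothness step then cleans up.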

  2. An inverse method for non linear ablative thermics with experimentation of automatic differentiation

    Energy Technology Data Exchange (ETDEWEB)

    Alestra, S [Simulation Information Technology and Systems Engineering, EADS IW Toulouse (France); Collinet, J [Re-entry Systems and Technologies, EADS ASTRIUM ST, Les Mureaux (France); Dubois, F [Professor of Applied Mathematics, Conservatoire National des Arts et Metiers Paris (France)], E-mail: stephane.alestra@eads.net, E-mail: jean.collinet@astrium.eads.net, E-mail: fdubois@cnam.fr

    2008-11-01

    Thermal Protection System is a key element for atmospheric re-entry missions of aerospace vehicles. The high level of heat fluxes encountered in such missions has a direct effect on the mass balance of the heat shield. Consequently, the identification of heat fluxes is of great industrial interest but is available in flight only by indirect methods based on temperature measurements. This paper is concerned with inverse analyses of highly evolutive heat fluxes. An inverse problem is used to estimate transient surface heat fluxes (convection coefficient) for degradable thermal material (ablation and pyrolysis), by using time-domain temperature measurements on the thermal protection. The inverse problem is formulated as a minimization problem involving an objective functional, through an optimization loop. An optimal control formulation (Lagrangian, adjoint and gradient steepest descent method combined with quasi-Newton method computations) is then developed and applied, using Monopyro, a transient one-dimensional thermal model with one moving boundary (ablative surface) that has been developed over many years by ASTRIUM-ST. To compute numerically the adjoint and gradient quantities for the inverse problem in the heat convection coefficient, we have used both analytical manual differentiation and an Automatic Differentiation (AD) engine tool, Tapenade, developed at INRIA Sophia-Antipolis by the TROPICS team. Several validation test cases using synthetic temperature measurements are carried out by applying the inverse method with the minimization algorithm. Accurate identification results on high-flux test cases, and good agreement for temperature restitution, are obtained, without and with ablation and pyrolysis, using poor initial guesses for the fluxes. First encouraging results with an automatic differentiation procedure are also presented in this paper.

  3. Contrast-based fully automatic segmentation of white matter hyperintensities: method and validation.

    Directory of Open Access Journals (Sweden)

    Thomas Samaille

    Full Text Available White matter hyperintensities (WMH) on T2 or FLAIR sequences have been commonly observed on MR images of elderly people. They have been associated with various disorders and have been shown to be a strong risk factor for stroke and dementia. WMH studies usually required visual evaluation of WMH load or time-consuming manual delineation. This paper introduced WHASA (White matter Hyperintensities Automated Segmentation Algorithm), a new method for automatically segmenting WMH from FLAIR and T1 images in multicentre studies. Contrary to previous approaches that were based on intensities, this method relied on contrast: non-linear diffusion filtering alternated with watershed segmentation to obtain piecewise constant images with increased contrast between WMH and surrounding tissues. WMH were then selected based on a subject-dependent, automatically computed threshold and anatomical information. WHASA was evaluated on 67 patients from two studies, acquired on six different MRI scanners and displaying a wide range of lesion load. Accuracy of the segmentation was assessed through volume and spatial agreement measures with respect to manual segmentation; an intraclass correlation coefficient (ICC) of 0.96 and a mean similarity index (SI) of 0.72 were obtained. WHASA was compared to four other approaches: Freesurfer and a thresholding approach as unsupervised methods; k-nearest neighbours (kNN) and support vector machines (SVM) as supervised ones. For the latter, the influence of the training set was also investigated. WHASA clearly outperformed both unsupervised methods, while performing at least as well as the supervised approaches (ICC range: 0.87-0.91 for kNN; 0.89-0.94 for SVM; mean SI: 0.63-0.71 for kNN, 0.67-0.72 for SVM), and did not need any training set.

  4. Automatically classifying sentences in full-text biomedical articles into Introduction, Methods, Results and Discussion.

    Science.gov (United States)

    Agarwal, Shashank; Yu, Hong

    2009-12-01

    Biomedical texts can typically be represented by four rhetorical categories: Introduction, Methods, Results and Discussion (IMRAD). Classifying sentences into these categories can benefit many other text-mining tasks. Although many studies have applied different approaches for automatically classifying sentences in MEDLINE abstracts into the IMRAD categories, few have explored the classification of sentences that appear in full-text biomedical articles. We first evaluated whether sentences in full-text biomedical articles could be reliably annotated into the IMRAD format and then explored different approaches for automatically classifying these sentences into the IMRAD categories. Our results show an overall annotation agreement of 82.14% with a Kappa score of 0.756. The best classification system is a multinomial naïve Bayes classifier trained on manually annotated data that achieved 91.95% accuracy and an average F-score of 91.55%, which is significantly higher than baseline systems. A web version of this system is available online at http://wood.ims.uwm.edu/full_text_classifier/.
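A toy multinomial naïve Bayes classifier in the spirit of the study's best system; the training sentences below are invented, and the real classifier is trained on a large manually annotated corpus with richer features:

```python
import math
from collections import Counter, defaultdict

class TinyMultinomialNB:
    """Minimal multinomial naive Bayes with add-one smoothing,
    classifying sentences into rhetorical categories from word counts."""

    def fit(self, sentences, labels):
        self.counts = defaultdict(Counter)   # per-class word counts
        self.priors = Counter(labels)        # per-class document counts
        for sent, lab in zip(sentences, labels):
            self.counts[lab].update(sent.lower().split())
        self.vocab = {w for c in self.counts.values() for w in c}
        return self

    def predict(self, sentence):
        words = sentence.lower().split()

        def score(lab):
            total = sum(self.counts[lab].values())
            s = math.log(self.priors[lab] / sum(self.priors.values()))
            for w in words:
                s += math.log((self.counts[lab][w] + 1)
                              / (total + len(self.vocab)))
            return s

        return max(self.priors, key=score)

# Invented two-class training data (Methods vs. Results sentences).
train = ["we recruited 17 patients", "results show higher accuracy",
         "samples were analysed with pcr", "accuracy improved significantly"]
y = ["Methods", "Results", "Methods", "Results"]
clf = TinyMultinomialNB().fit(train, y)
print(clf.predict("patients were recruited for analysis"))  # → Methods
```

Scaling the same scheme to four IMRAD classes only means adding the other two labels to the training data.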

  5. Clinical Evaluation of a Fully-automatic Segmentation Method for Longitudinal Brain Tumor Volumetry

    Science.gov (United States)

    Meier, Raphael; Knecht, Urspeter; Loosli, Tina; Bauer, Stefan; Slotboom, Johannes; Wiest, Roland; Reyes, Mauricio

    2016-03-01

    Information about the size of a tumor and its temporal evolution is needed for diagnosis as well as treatment of brain tumor patients. The aim of the study was to investigate the potential of a fully-automatic segmentation method, called BraTumIA, for longitudinal brain tumor volumetry by comparing the automatically estimated volumes with ground truth data acquired via manual segmentation. Longitudinal Magnetic Resonance (MR) Imaging data of 14 patients with newly diagnosed glioblastoma encompassing 64 MR acquisitions, ranging from preoperative up to 12 month follow-up images, was analysed. Manual segmentation was performed by two human raters. Strong correlations (R = 0.83-0.96, p < 0.001) were observed between volumetric estimates of BraTumIA and of each of the human raters for the contrast-enhancing (CET) and non-enhancing T2-hyperintense tumor compartments (NCE-T2). A quantitative analysis of the inter-rater disagreement showed that the disagreement between BraTumIA and each of the human raters was comparable to the disagreement between the human raters. In summary, BraTumIA generated volumetric trend curves of contrast-enhancing and non-enhancing T2-hyperintense tumor compartments comparable to estimates of human raters. These findings suggest the potential of automated longitudinal tumor segmentation to substitute manual volumetric follow-up of contrast-enhancing and non-enhancing T2-hyperintense tumor compartments.

  6. An automatic method to generate domain-specific investigator networks using PubMed abstracts

    Directory of Open Access Journals (Sweden)

    Gwinn Marta

    2007-06-01

    Full Text Available Abstract Background Collaboration among investigators has become critical to scientific research. This includes ad hoc collaboration established through personal contacts as well as formal consortia established by funding agencies. Continued growth in online resources for scientific research and communication has promoted the development of highly networked research communities. Extending these networks globally requires identifying additional investigators in a given domain, profiling their research interests, and collecting current contact information. We present a novel strategy for building investigator networks dynamically and producing detailed investigator profiles using data available in PubMed abstracts. Results We developed a novel strategy to obtain detailed investigator information by automatically parsing the affiliation string in PubMed records. We illustrated the results by using a published literature database in human genome epidemiology (HuGE Pub Lit) as a test case. Our parsing strategy extracted country information from 92.1% of the affiliation strings in a random sample of PubMed records and in 97.0% of HuGE records, with accuracies of 94.0% and 91.0%, respectively. Institution information was parsed from 91.3% of the general PubMed records (accuracy 86.8%) and from 94.2% of HuGE PubMed records (accuracy 87.0%). We demonstrated the application of our approach to dynamic creation of investigator networks by creating a prototype information system containing a large database of PubMed abstracts relevant to human genome epidemiology (HuGE Pub Lit), indexed using PubMed medical subject headings converted to Unified Medical Language System concepts. Our method was able to identify 70–90% of the investigators/collaborators in three different human genetics fields; it also successfully identified 9 of 10 genetics investigators within the PREBIC network, an existing preterm birth research network. Conclusion We successfully created a

  7. An automatic method to generate domain-specific investigator networks using PubMed abstracts

    Science.gov (United States)

    Yu, Wei; Yesupriya, Ajay; Wulf, Anja; Qu, Junfeng; Gwinn, Marta; Khoury, Muin J

    2007-01-01

    Background Collaboration among investigators has become critical to scientific research. This includes ad hoc collaboration established through personal contacts as well as formal consortia established by funding agencies. Continued growth in online resources for scientific research and communication has promoted the development of highly networked research communities. Extending these networks globally requires identifying additional investigators in a given domain, profiling their research interests, and collecting current contact information. We present a novel strategy for building investigator networks dynamically and producing detailed investigator profiles using data available in PubMed abstracts. Results We developed a novel strategy to obtain detailed investigator information by automatically parsing the affiliation string in PubMed records. We illustrated the results by using a published literature database in human genome epidemiology (HuGE Pub Lit) as a test case. Our parsing strategy extracted country information from 92.1% of the affiliation strings in a random sample of PubMed records and in 97.0% of HuGE records, with accuracies of 94.0% and 91.0%, respectively. Institution information was parsed from 91.3% of the general PubMed records (accuracy 86.8%) and from 94.2% of HuGE PubMed records (accuracy 87.0). We demonstrated the application of our approach to dynamic creation of investigator networks by creating a prototype information system containing a large database of PubMed abstracts relevant to human genome epidemiology (HuGE Pub Lit), indexed using PubMed medical subject headings converted to Unified Medical Language System concepts. Our method was able to identify 70–90% of the investigators/collaborators in three different human genetics fields; it also successfully identified 9 of 10 genetics investigators within the PREBIC network, an existing preterm birth research network. Conclusion We successfully created a web-based prototype
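The affiliation-parsing idea can be sketched with a simple heuristic; the example string and field rules below are illustrative, and the study's actual parsing strategy is considerably more elaborate:

```python
import re

def parse_affiliation(affil):
    """Heuristic parse of a PubMed affiliation string: take the
    institution as the first comma-separated field and the country as
    the last, after stripping a trailing e-mail address."""
    affil = re.sub(r'\.?\s*(Electronic address:)?\s*\S+@\S+\s*$', '', affil)
    parts = [p.strip(' .') for p in affil.split(',') if p.strip(' .')]
    if not parts:
        return None, None
    return parts[0], parts[-1]

inst, country = parse_affiliation(
    "Office of Public Health Genomics, Centers for Disease Control and "
    "Prevention, Atlanta, GA, USA. wyu@cdc.gov")
print(inst)     # → Office of Public Health Genomics
print(country)  # → USA
```

Normalizing the extracted country and institution fields is what makes the parsed records linkable into an investigator network.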

  8. An evaluation of automatic coronary artery calcium scoring methods with cardiac CT using the orCaScore framework.

    Science.gov (United States)

    Wolterink, Jelmer M; Leiner, Tim; de Vos, Bob D; Coatrieux, Jean-Louis; Kelm, B Michael; Kondo, Satoshi; Salgado, Rodrigo A; Shahzad, Rahil; Shu, Huazhong; Snoeren, Miranda; Takx, Richard A P; van Vliet, Lucas J; van Walsum, Theo; Willems, Tineke P; Yang, Guanyu; Zheng, Yefeng; Viergever, Max A; Išgum, Ivana

    2016-05-01

    The amount of coronary artery calcification (CAC) is a strong and independent predictor of cardiovascular disease (CVD) events. In clinical practice, CAC is manually identified and automatically quantified in cardiac CT using commercially available software. This is a tedious and time-consuming process in large-scale studies. Therefore, a number of automatic methods that require no interaction and semiautomatic methods that require very limited interaction for the identification of CAC in cardiac CT have been proposed. Thus far, a comparison of their performance has been lacking. The objective of this study was to perform an independent evaluation of (semi)automatic methods for CAC scoring in cardiac CT using a publicly available standardized framework. Cardiac CT exams of 72 patients distributed over four CVD risk categories were provided for (semi)automatic CAC scoring. Each exam consisted of a noncontrast-enhanced calcium scoring CT (CSCT) and a corresponding coronary CT angiography (CCTA) scan. The exams were acquired in four different hospitals using state-of-the-art equipment from four major CT scanner vendors. The data were divided into 32 training exams and 40 test exams. A reference standard for CAC in CSCT was defined by consensus of two experts following a clinical protocol. The framework organizers evaluated the performance of (semi)automatic methods on test CSCT scans, per lesion, artery, and patient. Five (semi)automatic methods were evaluated. Four methods used both CSCT and CCTA to identify CAC, and one method used only CSCT. The evaluated methods correctly detected between 52% and 94% of CAC lesions with positive predictive values between 65% and 96%. Lesions in distal coronary arteries were most commonly missed and aortic calcifications close to the coronary ostia were the most common false positive errors. The majority (between 88% and 98%) of correctly identified CAC lesions were assigned to the correct artery. Linearly weighted Cohen's kappa
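The linearly weighted Cohen's kappa mentioned above can be computed directly; the two rating vectors below are synthetic (four CVD risk categories, one off-by-one disagreement), not data from the study:

```python
def linear_weighted_kappa(a, b, n_cat):
    """Linearly weighted Cohen's kappa between two raters' category
    assignments (0..n_cat-1): disagreements are penalized in proportion
    to their distance on the ordinal scale."""
    n = len(a)
    obs = [[0.0] * n_cat for _ in range(n_cat)]
    for i, j in zip(a, b):
        obs[i][j] += 1 / n
    pa = [sum(1 for x in a if x == k) / n for k in range(n_cat)]
    pb = [sum(1 for x in b if x == k) / n for k in range(n_cat)]
    num = den = 0.0
    for i in range(n_cat):
        for j in range(n_cat):
            w = abs(i - j) / (n_cat - 1)
            num += w * obs[i][j]
            den += w * pa[i] * pb[j]
    return 1 - num / den

auto = [0, 1, 2, 3, 3, 2, 1, 0]   # automatic risk categories
ref  = [0, 1, 2, 3, 2, 2, 1, 0]   # reference risk categories
print(round(linear_weighted_kappa(auto, ref, 4), 3))  # → 0.895
```

Because the weights grow with category distance, assigning a patient to an adjacent risk category costs far less than a two- or three-category error, which matches how risk-category agreement is usually judged.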

  9. Automatic Detection of Microaneurysms in Color Fundus Images using a Local Radon Transform Method

    Directory of Open Access Journals (Sweden)

    Hamid Reza Pourreza

    2009-03-01

    Full Text Available Introduction: Diabetic retinopathy (DR) is one of the most serious and most frequent eye diseases in the world and the most common cause of blindness in adults between 20 and 60 years of age. Following 15 years of diabetes, about 2% of diabetic patients are blind and 10% suffer from vision impairment due to DR complications. This paper addresses the automatic detection of microaneurysms (MA) in color fundus images, which plays a key role in computer-assisted early diagnosis of diabetic retinopathy. Materials and Methods: The algorithm can be divided into three main steps. The purpose of the first step, or pre-processing, is background normalization and contrast enhancement of the images. The second step aims to detect candidates, i.e., all patterns possibly corresponding to MA, which is achieved using a local Radon transform. Then, features are extracted, which are used in the last step to automatically classify the candidates into real MA or other objects using the SVM method. A database of 100 annotated images was used to test the algorithm. The algorithm was compared to manually obtained gradings of these images. Results: The sensitivity of diagnosis for DR was 100%, with specificity of 90%, and the sensitivity of precise MA localization was 97%, at an average number of 5 false positives per image. Discussion and Conclusion: The sensitivity and specificity of this algorithm make it one of the best methods in this field. Using the local Radon transform in this algorithm eliminates noise sensitivity for MA detection in retinal image analysis.

  10. Evaluating current automatic de-identification methods with Veteran’s health administration clinical documents

    Directory of Open Access Journals (Sweden)

    Ferrández Oscar

    2012-07-01

Full Text Available Abstract Background The increased use and adoption of Electronic Health Records (EHR) causes a tremendous growth in digital information useful for clinicians, researchers and many other operational purposes. However, this information is rich in Protected Health Information (PHI), which severely restricts its access and possible uses. A number of investigators have developed methods for automatically de-identifying EHR documents by removing PHI, as specified in the Health Insurance Portability and Accountability Act “Safe Harbor” method. This study focuses on the evaluation of existing automated text de-identification methods and tools, as applied to Veterans Health Administration (VHA) clinical documents, to assess which methods perform better with each category of PHI found in our clinical notes, and when new methods are needed to improve performance. Methods We installed and evaluated five text de-identification systems “out-of-the-box” using a corpus of VHA clinical documents. The systems based on machine learning methods were trained with the 2006 i2b2 de-identification corpora, evaluated with our VHA corpus, and also evaluated with a ten-fold cross-validation experiment using our VHA corpus. We counted exact, partial, and fully contained matches with reference annotations, considering each PHI type separately, or only one unique ‘PHI’ category. Performance of the systems was assessed using recall (equivalent to sensitivity) and precision (equivalent to positive predictive value) metrics, as well as the F2-measure. Results Overall, systems based on rules and pattern matching achieved better recall, and precision was always better with systems based on machine learning approaches. The highest “out-of-the-box” F2-measure was 67% for partial matches; the best precision and recall were 95% and 78%, respectively. Finally, the ten-fold cross validation experiment allowed for an increase of the F2-measure to 79% with partial matches
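
The evaluation protocol described here (exact, partial, and fully contained span matches, plus recall, precision, and the recall-weighted F2-measure) can be sketched minimally. The span representation and helper names below are hypothetical, not taken from any of the evaluated systems:

```python
def match_type(pred, ref):
    """Compare a predicted PHI span (start, end) with a reference span."""
    ps, pe = pred
    rs, re_ = ref
    if (ps, pe) == (rs, re_):
        return "exact"
    if rs <= ps and pe <= re_:
        return "contained"  # prediction fully inside the reference annotation
    if ps < re_ and rs < pe:
        return "partial"    # any other overlap
    return "none"

def recall_precision_f2(tp, fp, fn):
    """Recall (sensitivity), precision (PPV), and the F2-measure (beta = 2,
    which weights recall four times as heavily as precision)."""
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    if precision + recall == 0:
        return recall, precision, 0.0
    f2 = 5 * precision * recall / (4 * precision + recall)
    return recall, precision, f2
```

F2 is the natural headline metric for de-identification because a missed PHI instance (a recall failure) is far more costly than an over-redacted token.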

  11. Automatic interpretation of seismic micro facies using the fuzzy mathematics method

    Energy Technology Data Exchange (ETDEWEB)

    Dongrun, G.; Gardner, G.H.F.

    1988-01-01

The interpretation of seismic micro facies concentrates on changes involving a single reflection or several reflections, and endeavors to explain the relations between these changes and stratigraphic variation or hydrocarbon accumulation. In most cases, one cannot determine the geological significance of reflection character anomalies on single or several seismic sections. But when one maps them on a plane, their distribution may on the whole indicate the geological significance. This paper describes how the fuzzy method is used on a VAX computer to automatically construct a plane map of the reflection character changes in a time window. For a complete interpretation, the interpreter need only provide some parameters, such as the time window, threshold, and weight coefficients.

  12. AUTOMATIC GENERALIZABILITY METHOD OF URBAN DRAINAGE PIPE NETWORK CONSIDERING MULTI-FEATURES

    Directory of Open Access Journals (Sweden)

    S. Zhu

    2018-05-01

Full Text Available Urban drainage systems are an indispensable dataset for storm-flooding simulation. Given data availability and current computing power, the structure and complexity of urban drainage systems need to be simplified. To date, however, the simplification procedure has mainly depended on manual operation, which leads to mistakes and low work efficiency. This work draws on the classification methodology used for road systems and proposes the concept of a pipeline stroke. Pipeline length, the angle between two pipelines, the level of the road to which a pipeline belongs, and pipeline diameter are chosen as the similarity criteria for generating pipeline strokes. Finally, an automatic method is designed to generalize drainage systems with concern for these multiple features. This technique can improve the efficiency and accuracy of the generalization of drainage systems. In addition, it is beneficial to the study of urban storm-floods.
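
The stroke-building idea can be sketched under assumed data structures (pipes as dicts with `start`/`end` nodes and a `diam` attribute) and assumed thresholds; the paper's actual criteria also include pipeline length and road level, which are omitted here for brevity:

```python
import math

def direction(a, b):
    return math.atan2(b[1] - a[1], b[0] - a[0])

def deflection_deg(p, q):
    """Deflection angle (degrees) where pipe p's end meets pipe q's start."""
    d = abs(direction(p["start"], p["end"]) - direction(q["start"], q["end"]))
    d = d % (2 * math.pi)
    return math.degrees(min(d, 2 * math.pi - d))

def can_join(p, q, max_angle=45.0, max_diam_ratio=1.5):
    """Two pipes continue one stroke if they share a node, deflect only
    slightly, and have similar diameters (assumed thresholds)."""
    if p["end"] != q["start"]:
        return False
    if deflection_deg(p, q) > max_angle:
        return False
    ratio = max(p["diam"], q["diam"]) / min(p["diam"], q["diam"])
    return ratio <= max_diam_ratio

def build_stroke(pipes, start_id):
    """Greedily chain pipes into a stroke, taking the straightest joinable successor."""
    by_start = {}
    for p in pipes:
        by_start.setdefault(p["start"], []).append(p)
    cur = next(p for p in pipes if p["id"] == start_id)
    stroke = [cur["id"]]
    while True:
        candidates = [q for q in by_start.get(cur["end"], []) if can_join(cur, q)]
        if not candidates:
            break
        nxt = min(candidates, key=lambda q: deflection_deg(cur, q))
        if nxt["id"] in stroke:  # guard against cycles
            break
        stroke.append(nxt["id"])
        cur = nxt
    return stroke

pipes = [
    {"id": "a", "start": (0, 0), "end": (1, 0), "diam": 1.0},
    {"id": "b", "start": (1, 0), "end": (2, 0.1), "diam": 1.0},  # near-straight continuation
    {"id": "c", "start": (1, 0), "end": (1, 1), "diam": 1.0},    # 90-degree branch
]
```

Generalization would then keep or drop whole strokes rather than individual pipes, which is what preserves the network's main drainage paths.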

  13. Algorithm based on regional separation for automatic grain boundary extraction using improved mean shift method

    Science.gov (United States)

    Zhenying, Xu; Jiandong, Zhu; Qi, Zhang; Yamba, Philip

    2018-06-01

    Metallographic microscopy shows that the vast majority of metal materials are composed of many small grains; the grain size of a metal is important for determining the tensile strength, toughness, plasticity, and other mechanical properties. In order to quantitatively evaluate grain size in metals, grain boundaries must be identified in metallographic images. Based on the phenomenon of grain boundary blurring or disconnection in metallographic images, this study develops an algorithm based on regional separation for automatically extracting grain boundaries by an improved mean shift method. Experimental observation shows that the grain boundaries obtained by the proposed algorithm are highly complete and accurate. This research has practical value because the proposed algorithm is suitable for grain boundary extraction from most metallographic images.
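
The core mean shift iteration, independent of the paper's improvements, can be sketched in one dimension with a flat kernel; the bandwidth and sample data are illustrative assumptions:

```python
def mean_shift_1d(points, bandwidth, tol=1e-6, max_iter=100):
    """Shift every point to the mean of its neighbours (flat kernel) until it
    converges to a density mode; points sharing a mode form one cluster."""
    modes = []
    for x in points:
        for _ in range(max_iter):
            neighbours = [p for p in points if abs(p - x) <= bandwidth]
            m = sum(neighbours) / len(neighbours)
            if abs(m - x) < tol:
                break
            x = m
        modes.append(x)
    return modes

# Two intensity clusters, e.g. grain-interior vs grain-boundary grey levels.
samples = [1.0, 1.1, 0.9, 5.0, 5.2, 4.8]
modes = mean_shift_1d(samples, bandwidth=1.0)
```

In the metallographic setting the same iteration runs over joint (position, intensity) feature vectors, so that pixels converging to the same mode are grouped into one region and boundaries emerge between regions.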

  14. Unsupervised method for automatic construction of a disease dictionary from a large free text collection.

    Science.gov (United States)

    Xu, Rong; Supekar, Kaustubh; Morgan, Alex; Das, Amar; Garber, Alan

    2008-11-06

Concept-specific lexicons (e.g. diseases, drugs, anatomy) are a critical source of background knowledge for many medical language-processing systems. However, the rapid pace of biomedical research and the lack of constraints on usage ensure that such dictionaries are incomplete. Focusing on disease terminology, we have developed an automated, unsupervised, iterative pattern learning approach for constructing a comprehensive medical dictionary of disease terms from randomized clinical trial (RCT) abstracts, and we compared different ranking methods for automatically extracting contextual patterns and concept terms. When used to identify disease concepts from 100 randomly chosen, manually annotated clinical abstracts, our disease dictionary shows significant performance improvement (F1 increased by 35-88%) over available, manually created disease terminologies.

  15. Automatic Generalizability Method of Urban Drainage Pipe Network Considering Multi-Features

    Science.gov (United States)

    Zhu, S.; Yang, Q.; Shao, J.

    2018-05-01

Urban drainage systems are an indispensable dataset for storm-flooding simulation. Given data availability and current computing power, the structure and complexity of urban drainage systems need to be simplified. To date, however, the simplification procedure has mainly depended on manual operation, which leads to mistakes and low work efficiency. This work draws on the classification methodology used for road systems and proposes the concept of a pipeline stroke. Pipeline length, the angle between two pipelines, the level of the road to which a pipeline belongs, and pipeline diameter are chosen as the similarity criteria for generating pipeline strokes. Finally, an automatic method is designed to generalize drainage systems with concern for these multiple features. This technique can improve the efficiency and accuracy of the generalization of drainage systems. In addition, it is beneficial to the study of urban storm-floods.

  16. A novel method based on learning automata for automatic lesion detection in breast magnetic resonance imaging.

    Science.gov (United States)

    Salehi, Leila; Azmi, Reza

    2014-07-01

Breast cancer continues to be a significant public health problem in the world. Early detection is the key to improving breast cancer prognosis, and magnetic resonance imaging (MRI) is emerging as a powerful tool for the detection of breast cancer. Breast MRI presently has two major challenges. First, its specificity is relatively poor, and it detects many false positives (FPs). Second, the method involves acquiring several high-resolution image volumes before, during, and after the injection of a contrast agent. The large volume of data makes the task of interpretation by the radiologist both complex and time-consuming. These challenges have led to the development of computer-aided detection systems to improve the efficiency and accuracy of the interpretation process. Detection of suspicious regions of interest (ROIs) is a critical preprocessing step in dynamic contrast-enhanced (DCE)-MRI data evaluation. In this regard, this paper introduces a new automatic method, based on region growing, to detect suspicious ROIs in breast DCE-MRI. The results indicate that the proposed method reliably identifies suspicious regions (accuracy of 75.39 ± 3.37 on the PIDER breast MRI dataset). Furthermore, the method averages 7.92 FPs per image, a considerable improvement over other methods such as ROI hunter.
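
Region growing, the core of the proposed ROI detection, can be sketched for a 2-D intensity image; the seed, threshold, and 4-connectivity choice below are illustrative assumptions, not the paper's parameters:

```python
def region_grow(image, seed, threshold):
    """4-connected region growing: accept neighbours whose intensity differs
    from the seed intensity by at most `threshold`."""
    h, w = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region = {seed}
    stack = [seed]
    while stack:
        r, c = stack.pop()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in region
                    and abs(image[nr][nc] - seed_val) <= threshold):
                region.add((nr, nc))
                stack.append((nr, nc))
    return region

# Toy "enhancing lesion": a bright 2x2 block on a dark background.
image = [[200 if r in (1, 2) and c in (1, 2) else 10 for c in range(5)] for r in range(5)]
roi = region_grow(image, seed=(1, 1), threshold=30)
```

In DCE-MRI the grown criterion would typically be defined on the contrast-enhancement curve of each voxel rather than a single raw intensity.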

  17. Automatic Registration Method for Fusion of ZY-1-02C Satellite Images

    Directory of Open Access Journals (Sweden)

    Qi Chen

    2013-12-01

    Full Text Available Automatic image registration (AIR has been widely studied in the fields of medical imaging, computer vision, and remote sensing. In various cases, such as image fusion, high registration accuracy should be achieved to meet application requirements. For satellite images, the large image size and unstable positioning accuracy resulting from the limited manufacturing technology of charge-coupled device, focal plane distortion, and unrecorded spacecraft jitter lead to difficulty in obtaining agreeable corresponding points for registration using only area-based matching or feature-based matching. In this situation, a coarse-to-fine matching strategy integrating two types of algorithms is proven feasible and effective. In this paper, an AIR method for application to the fusion of ZY-1-02C satellite imagery is proposed. First, the images are geometrically corrected. Coarse matching, based on scale invariant feature transform, is performed for the subsampled corrected images, and a rough global estimation is made with the matching results. Harris feature points are then extracted, and the coordinates of the corresponding points are calculated according to the global estimation results. Precise matching is conducted, based on normalized cross correlation and least squares matching. As complex image distortion cannot be precisely estimated, a local estimation using the structure of triangulated irregular network is applied to eliminate the false matches. Finally, image resampling is conducted, based on local affine transformation, to achieve high-precision registration. Experiments with ZY-1-02C datasets demonstrate that the accuracy of the proposed method meets the requirements of fusion application, and its efficiency is also suitable for the commercial operation of the automatic satellite data process system.
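
The fine-matching stage based on normalized cross correlation can be illustrated in one dimension; a real implementation would slide a 2-D patch around the position predicted by the coarse SIFT-based global estimate and refine it further with least squares matching. Names and data below are hypothetical:

```python
def ncc(a, b):
    """Normalized cross-correlation of two equal-length intensity profiles."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def best_match(template, signal):
    """Slide the template over the signal and return the offset with the
    highest NCC score."""
    scores = [ncc(template, signal[i:i + len(template)])
              for i in range(len(signal) - len(template) + 1)]
    return max(range(len(scores)), key=scores.__getitem__)
```

Because NCC subtracts the mean and divides by the standard deviation of each window, it is insensitive to the linear brightness and contrast differences that are common between a corrected image pair.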

  18. Presenting automatic demand control (ADC) as a new frequency control method in smart grids

    Energy Technology Data Exchange (ETDEWEB)

    Ameli, Mohammad Taghi; Ameli, Ali; Maleki, Hamed [Power and Water Univ. of Technology, Tehran (Iran, Islamic Republic of); Mobarhani, Alireza [Amir Kabir Univ. of Technology, Tehran (Iran, Islamic Republic of)

    2011-07-01

Electric power is the most important part of human energy consumption, and since it has a low storage coefficient it is of particular importance to establish a balance in demand and generation in order to modify and optimize consumption patterns. The expression ''Smart Grid'' can be used to describe technologies which are applied for the automation and optimization of the generation, transmission and distribution network management. This technology requires the integration of information and communication technology in electrical network operation. This paper will study how the Smart Grid capabilities can be used to manage and optimize power network consumption, as well as how the consumers' collaboration process using an AGC (Automatic Generation Control) system acts to provide secondary frequency control through consumed load shedding. Reestablishing the balance between demand and generation in critical network operation is also investigated. In other words, utilizing the above method, a new system, ADC (Automatic Demand Control), is offered for use alongside the AGC system in Smart Grids to restore the frequency value to its nominal value. This can lead to a more competitive electricity market and reduce the system storage while maintaining adequate security and network reliability. One of the benefits of the proposed methods described in this paper, in addition to restoring the frequency value to its nominal value, is lower costs and a more economic network operation through reducing fuel consumption and CO2 emissions by managing and controlling the amount of the consumed load in the Smart Grid. Also consumers are given the capability to have a specific timetable to economize on their energy requirements, which will also reduce the load peak and the network losses. (orig.)

  19. A new ICA-based fingerprint method for the automatic removal of physiological artifacts from EEG recordings

    Science.gov (United States)

    Tamburro, Gabriella; Fiedler, Patrique; Stone, David; Haueisen, Jens

    2018-01-01

Background EEG may be affected by artefacts hindering the analysis of brain signals. Data-driven methods like independent component analysis (ICA) are successful approaches to remove artefacts from the EEG. However, the ICA-based methods developed so far are often affected by limitations, such as: the need for visual inspection of the separated independent components (subjectivity problem) and, in some cases, for the independent and simultaneous recording of the inspected artefacts to identify the artefactual independent components; a potentially heavy manipulation of the EEG signals; the use of linear classification methods; the use of simulated artefacts to validate the methods; no testing in dry electrode or high-density EEG datasets; applications limited to specific conditions and electrode layouts. Methods Our fingerprint method automatically identifies EEG ICs containing eyeblinks, eye movements, myogenic artefacts and cardiac interference by evaluating 14 temporal, spatial, spectral, and statistical features composing the IC fingerprint. Sixty-two real EEG datasets containing cued artefacts are recorded with wet and dry electrodes (128 wet and 97 dry channels). For each artefact, 10 nonlinear SVM classifiers are trained on fingerprints of expert-classified ICs. Training groups include randomly chosen wet and dry datasets decomposed into 80 ICs. The classifiers are tested on the IC-fingerprints of different datasets decomposed into 20, 50, or 80 ICs. The SVM performance is assessed in terms of accuracy, False Omission Rate (FOR), Hit Rate (HR), False Alarm Rate (FAR), and sensitivity (p). For each artefact, the quality of the artefact-free EEG reconstructed using the classification of the best SVM is assessed by visual inspection and SNR. Results The best SVM classifier for each artefact type achieved average accuracy of 1 (eyeblink), 0.98 (cardiac interference), and 0.97 (eye movement and myogenic artefact). Average classification sensitivity (p) was 1

  20. An automatic method to discriminate malignant masses from normal tissue in digital mammograms

    International Nuclear Information System (INIS)

    Brake, Guido M. te; Karssemeijer, Nico; Hendriks, Jan H.C.L.

    2000-01-01

Specificity levels of automatic mass detection methods in mammography are generally rather low, because suspicious-looking normal tissue is often hard to discriminate from real malignant masses. In this work a number of features were defined that are related to image characteristics that radiologists use to discriminate real lesions from normal tissue. An artificial neural network was used to map the computed features to a measure of suspiciousness for each region that was found suspicious by a mass detection method. Two data sets were used to test the method. The first set of 72 malignant cases (132 films) was a consecutive series taken from the Nijmegen screening programme; 208 normal films were added to improve the estimation of the specificity of the method. The second set was part of the new DDSM data set from the University of South Florida. A total of 193 cases (772 films) with 372 annotated malignancies was used. The measure of suspiciousness that was computed using the image characteristics was successful in discriminating tumours from false positive detections. Approximately 75% of all cancers were detected in at least one view at a specificity level of 0.1 false positive per image. (author)

  1. AN EFFICIENT METHOD FOR AUTOMATIC ROAD EXTRACTION BASED ON MULTIPLE FEATURES FROM LiDAR DATA

    Directory of Open Access Journals (Sweden)

    Y. Li

    2016-06-01

Full Text Available Road extraction in urban areas is a difficult task due to complicated patterns and many contextual objects. LiDAR data directly provides three-dimensional (3D) points with fewer occlusions and smaller shadows. Elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data has some disadvantages that are not beneficial to object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. To address these problems, this paper proposes an automatic road centerline extraction method with three major steps: (1) road center point detection based on multiple-feature spatial clustering to separate road points from ground points, (2) local principal component analysis with least-squares fitting to extract the primitives of road centerlines, and (3) hierarchical grouping to connect primitives into a complete road network. Compared with MTH (consisting of the mean shift algorithm, tensor voting, and the Hough transform) proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen dataset, benchmark test data provided by ISPRS for the “Urban Classification and 3D Building Reconstruction” project, was selected. The experimental results show that our method achieves the same performance in less time for road extraction from LiDAR data.
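
Step (2), local principal component analysis, reduces to an eigen-analysis of the 2x2 covariance matrix of neighbouring road points: the eigenvector of the largest eigenvalue gives the local centerline direction. A closed-form sketch with hypothetical names, not the authors' code:

```python
import math

def principal_direction(points):
    """Dominant direction (unit vector) of 2-D points, from the eigenvector of
    the largest eigenvalue of their 2x2 covariance matrix."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam = tr / 2 + math.sqrt(max(tr * tr / 4 - det, 0.0))  # largest eigenvalue
    if abs(sxy) > 1e-12:
        v = (lam - syy, sxy)  # eigenvector for lam
    else:  # covariance already diagonal
        v = (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    norm = math.hypot(*v)
    return (v[0] / norm, v[1] / norm)
```

The ratio of the two eigenvalues also indicates how line-like the neighbourhood is, which is useful for rejecting clusters that do not belong to a centerline primitive.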

  2. An Efficient Method for Automatic Road Extraction Based on Multiple Features from LiDAR Data

    Science.gov (United States)

    Li, Y.; Hu, X.; Guan, H.; Liu, P.

    2016-06-01

Road extraction in urban areas is a difficult task due to complicated patterns and many contextual objects. LiDAR data directly provides three-dimensional (3D) points with fewer occlusions and smaller shadows. Elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data has some disadvantages that are not beneficial to object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. To address these problems, this paper proposes an automatic road centerline extraction method with three major steps: (1) road center point detection based on multiple-feature spatial clustering to separate road points from ground points, (2) local principal component analysis with least-squares fitting to extract the primitives of road centerlines, and (3) hierarchical grouping to connect primitives into a complete road network. Compared with MTH (consisting of the mean shift algorithm, tensor voting, and the Hough transform) proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen dataset, benchmark test data provided by ISPRS for the "Urban Classification and 3D Building Reconstruction" project, was selected. The experimental results show that our method achieves the same performance in less time for road extraction from LiDAR data.

  3. Automatic Removal of Physiological Artifacts in EEG: The Optimized Fingerprint Method for Sports Science Applications.

    Science.gov (United States)

    Stone, David B; Tamburro, Gabriella; Fiedler, Patrique; Haueisen, Jens; Comani, Silvia

    2018-01-01

Data contamination due to physiological artifacts such as those generated by eyeblinks, eye movements, and muscle activity continues to be a central concern in the acquisition and analysis of electroencephalographic (EEG) data. This issue is further compounded in EEG sports science applications where the presence of artifacts is notoriously difficult to control because behaviors that generate these interferences are often the behaviors under investigation. Therefore, there is a need to develop effective and efficient methods to identify physiological artifacts in EEG recordings during sports applications so that they can be isolated from cerebral activity related to the activities of interest. We have developed an EEG artifact detection model, the Fingerprint Method, which identifies different spatial, temporal, spectral, and statistical features indicative of physiological artifacts and uses these features to automatically classify artifactual independent components in EEG based on a machine learning approach. Here, we optimized our method using artifact-rich training data and a procedure to determine which features were best suited to identify eyeblinks, eye movements, and muscle artifacts. We then applied our model to an experimental dataset collected during endurance cycling. Results reveal that unique sets of features are suitable for the detection of distinct types of artifacts and that the Optimized Fingerprint Method was able to correctly identify over 90% of the artifactual components with physiological origin present in the experimental data. These results represent a significant advancement in the search for effective means to address artifact contamination in EEG sports science applications.

  4. Computerization of reporting and data storage using automatic coding method in the department of radiology

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Byung Hee; Lee, Kyung Sang; Kim, Woo Ho; Han, Joon Koo; Choi, Byung Ihn; Han, Man Chung [College of Medicine, Seoul National Univ., Seoul (Korea, Republic of)

    1990-10-15

The authors developed a computer program for printing reports as well as storing and retrieving data in the radiology department. This program runs on an IBM PC AT and was written in the dBASE III Plus language. The automatic coding method of the ACR code, developed by Kim et al, was applied in this program, and the framework of the program is the same as that developed for the surgical pathology department. The working sheet, which contained the name card for X-ray film identification and the results of previous radiologic studies, was printed during registration. The word processing function was applied for issuing the formal report of a radiologic study, and data storage was carried out while the report was being typed. Two kinds of data files were stored on the hard disk: the temporary file contained full information, and the permanent file contained the patient's identification data and ACR codes. Searching for a specific case by chart number, patient's name, date of study, or ACR code was performed within a second. All cases were arranged by ACR procedure, anatomy, and pathology codes. All new data were copied to a diskette automatically after daily work, from which data could be restored in case of hard disk failure. The main advantage of this program in comparison to a larger computer system is its low price. Based on the experience in the Seoul District Armed Forces General Hospital, we assume that this program provides a solution to various problems in radiology departments where a large computer system with well-designed software is not available.

  5. Composite materials and bodies including silicon carbide and titanium diboride and methods of forming same

    Science.gov (United States)

    Lillo, Thomas M.; Chu, Henry S.; Harrison, William M.; Bailey, Derek

    2013-01-22

    Methods of forming composite materials include coating particles of titanium dioxide with a substance including boron (e.g., boron carbide) and a substance including carbon, and reacting the titanium dioxide with the substance including boron and the substance including carbon to form titanium diboride. The methods may be used to form ceramic composite bodies and materials, such as, for example, a ceramic composite body or material including silicon carbide and titanium diboride. Such bodies and materials may be used as armor bodies and armor materials. Such methods may include forming a green body and sintering the green body to a desirable final density. Green bodies formed in accordance with such methods may include particles comprising titanium dioxide and a coating at least partially covering exterior surfaces thereof, the coating comprising a substance including boron (e.g., boron carbide) and a substance including carbon.

  6. An Automatic Diagnosis Method of Facial Acne Vulgaris Based on Convolutional Neural Network.

    Science.gov (United States)

    Shen, Xiaolei; Zhang, Jiachi; Yan, Chenjun; Zhou, Hong

    2018-04-11

In this paper, we present a new automatic diagnosis method for facial acne vulgaris based on convolutional neural networks (CNNs), which overcomes a shortcoming of previous methods: the inability to classify enough types of acne vulgaris. The core of our method is to extract image features with CNNs and achieve classification with a classifier. A binary skin/non-skin classifier is used to detect the skin area, and a seven-class classifier handles the classification of facial acne vulgaris and healthy skin. In the experiments, we compare the effectiveness of our CNN with that of the VGG16 network pre-trained on the ImageNet dataset. We use a ROC curve to evaluate the performance of the binary classifier and a normalized confusion matrix to evaluate the performance of the seven-class classifier. The results of our experiments show that the pre-trained VGG16 network is effective in extracting features from facial acne vulgaris images, and these features are very useful for the follow-up classifiers. Finally, we apply the classifiers based on the pre-trained VGG16 network to assist doctors in facial acne vulgaris diagnosis.

  7. Automatic calibration method of voxel size for cone-beam 3D-CT scanning system

    International Nuclear Information System (INIS)

    Yang Min; Wang Xiaolong; Wei Dongbo; Liu Yipeng; Meng Fanyong; Li Xingdong; Liu Wenli

    2014-01-01

For a cone-beam three-dimensional computed tomography (3D-CT) scanning system, voxel size is an important indicator to guarantee the accuracy of data analysis and feature measurement based on 3D-CT images. Meanwhile, the voxel size changes with the movement of the rotary stage along the X-ray direction. In order to realize automatic calibration of the voxel size, a new and easily implemented method is proposed. According to this method, several projections of a spherical phantom are captured at different imaging positions and the corresponding voxel size values are calculated by non-linear least-squares fitting. From these values, a linear equation is obtained that reflects the relationship between the voxel size and the rotary stage translation distance from its nominal zero position. Finally, the linear equation is imported into the calibration module of the 3D-CT scanning system. When the rotary stage moves along the X-ray direction, the accurate value of the voxel size is dynamically exported. The experimental results prove that this method meets the requirements of the actual CT scanning system, and has the virtues of easy implementation and high accuracy. (authors)
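
The final calibration step, a linear relation between stage position and voxel size, can be sketched with an ordinary least-squares fit; the sample positions and voxel sizes below are made-up illustrative values, not measurements from the paper:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of ys ~= a*xs + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical calibration data: voxel size estimated from a spherical phantom
# imaged at several rotary-stage positions along the X-ray direction.
positions = [0.0, 50.0, 100.0, 150.0]        # stage travel from nominal zero (mm)
voxel_sizes = [0.200, 0.180, 0.160, 0.140]   # calibrated voxel size (mm)
a, b = fit_line(positions, voxel_sizes)

def voxel_size_at(position):
    """Voxel size exported dynamically for any stage position."""
    return a * position + b
```

Once `a` and `b` are stored in the scanner's calibration module, any stage translation immediately yields the correct voxel size without re-imaging the phantom.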

  8. A Method Based on Artificial Intelligence To Fully Automatize The Evaluation of Bovine Blastocyst Images.

    Science.gov (United States)

    Rocha, José Celso; Passalia, Felipe José; Matos, Felipe Delestro; Takahashi, Maria Beatriz; Ciniciato, Diego de Souza; Maserati, Marc Peter; Alves, Mayra Fernanda; de Almeida, Tamie Guibu; Cardoso, Bruna Lopes; Basso, Andrea Cristina; Nogueira, Marcelo Fábio Gouveia

    2017-08-09

Morphological analysis is the standard method of assessing embryo quality; however, its inherent subjectivity tends to generate discrepancies among evaluators. Using genetic algorithms and artificial neural networks (ANNs), we developed a new method for embryo analysis that is more robust and reliable than standard methods. Bovine blastocysts produced in vitro were classified as grade 1 (excellent or good), 2 (fair), or 3 (poor) by three experienced embryologists according to the International Embryo Technology Society (IETS) standard. The images (n = 482) were subjected to automatic feature extraction, and the results were used as input for a supervised learning process. One part of the dataset (15%) was held out for a blind test after fitting, in which the system achieved an accuracy of 76.4%. Interestingly, when the same embryologists evaluated a sub-sample (10%) of the dataset, there was only 54.0% agreement with the standard (mode for grades). However, when using the ANN to assess this sub-sample, there was 87.5% agreement with the modal values obtained by the evaluators. The presented methodology is covered by National Institute of Industrial Property (INPI) and World Intellectual Property Organization (WIPO) patents and is currently undergoing a commercial evaluation of its feasibility.

  9. A Method for Automatic Extracting Intracranial Region in MR Brain Image

    Science.gov (United States)

    Kurokawa, Keiji; Miura, Shin; Nishida, Makoto; Kageyama, Yoichi; Namura, Ikuro

It is well known that the temporal lobe in MR brain images is used for estimating the grade of Alzheimer-type dementia; however, it is difficult to estimate the grade from the temporal lobe region alone. From the standpoint of supporting medical specialists, this paper proposes a data-processing approach for the automatic extraction of the intracranial region from MR brain images. The method eliminates the cranium region with the Laplacian histogram method and the brainstem with feature points related to observations given by a medical specialist. In order to examine the usefulness of the proposed approach, the percentage of the temporal lobe in the intracranial region was calculated. As a result, this percentage across the grades was in agreement with the visual standards for temporal lobe atrophy given by the medical specialist. It became clear that the intracranial region extracted by the proposed method is well suited for estimating the grade of Alzheimer-type dementia.

  10. A multi-label learning based kernel automatic recommendation method for support vector machine.

    Science.gov (United States)

    Zhang, Xueying; Song, Qinbao

    2015-01-01

Choosing an appropriate kernel is very important and critical when classifying a new problem with Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function, but less to kernel selection. Furthermore, most current kernel selection methods focus on seeking a best kernel with the highest classification accuracy via cross-validation; they are time-consuming and ignore the differences among the number of support vectors and the CPU time of SVM with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, the meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, the appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance.

  11. Comparison of Document Index Graph Using TextRank and HITS Weighting Method in Automatic Text Summarization

    Science.gov (United States)

    Hadyan, Fadhlil; Shaufiah; Arif Bijaksana, Moch.

    2017-01-01

    Automatic summarization is a system that helps someone grasp the core information of a long text instantly by summarizing the text automatically. Many summarization systems have already been developed, but many problems remain in those systems. This final project proposes a summarization method using a document index graph. The method adapts the PageRank and HITS formulas, originally used to assess web pages, to assess the words in the sentences of a text document. The expected outcome is a system that can summarize a single document by utilizing a document index graph with TextRank and HITS to improve the quality of the summary results automatically.
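    The PageRank-style ranking described above can be sketched as follows: sentences are graph nodes, word-overlap similarity gives weighted edges, and scores are iterated until they settle. The similarity measure and damping factor are common TextRank defaults, not necessarily those of this work.

    ```python
    import math
    import re

    def similarity(s1, s2):
        """Word-overlap similarity, length-normalized as in classic TextRank."""
        w1, w2 = set(s1.lower().split()), set(s2.lower().split())
        if not w1 or not w2:
            return 0.0
        return len(w1 & w2) / (math.log(len(w1) + 1) + math.log(len(w2) + 1))

    def textrank_summary(text, n=1, d=0.85, iters=50):
        # Split into sentences on terminal punctuation (a simplification).
        sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]
        k = len(sentences)
        sim = [[similarity(a, b) for b in sentences] for a in sentences]
        scores = [1.0] * k
        for _ in range(iters):
            new = []
            for i in range(k):
                rank = 0.0
                for j in range(k):
                    if i == j or sim[j][i] == 0.0:
                        continue
                    denom = sum(sim[j][m] for m in range(k) if m != j)
                    if denom > 0:
                        rank += sim[j][i] / denom * scores[j]
                new.append((1 - d) + d * rank)
            scores = new
        # Return the n top-ranked sentences in document order.
        top = sorted(sorted(range(k), key=lambda i: scores[i], reverse=True)[:n])
        return [sentences[i] for i in top]
    ```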

  12. An automatic iterative decision-making method for intuitionistic fuzzy linguistic preference relations

    Science.gov (United States)

    Pei, Lidan; Jin, Feifei; Ni, Zhiwei; Chen, Huayou; Tao, Zhifu

    2017-10-01

    As a new preference structure, the intuitionistic fuzzy linguistic preference relation (IFLPR) was recently introduced to efficiently deal with situations in which the membership and non-membership are represented as linguistic terms. In this paper, we study the issues of additive consistency and the derivation of the intuitionistic fuzzy weight vector of an IFLPR. First, the new concepts of order consistency, additive consistency and weak transitivity for IFLPRs are introduced, followed by a discussion of the characterisation of additive consistent IFLPRs. Then, a parameterised transformation approach is investigated to convert the normalised intuitionistic fuzzy weight vector into additive consistent IFLPRs. After that, a linear optimisation model is established to derive the normalised intuitionistic fuzzy weights for IFLPRs, and a consistency index is defined to measure the deviation degree between an IFLPR and its additive consistent IFLPR. Furthermore, we develop an automatic iterative decision-making method that improves IFLPRs with unacceptable additive consistency until the adjusted IFLPRs are acceptably additive consistent, which helps the decision-maker obtain reasonable and reliable decision-making results. Finally, an illustrative example is provided to demonstrate the validity and applicability of the proposed method.

  13. A method for automatic control of the process of producing electrode pitch

    Energy Technology Data Exchange (ETDEWEB)

    Rozenman, E.S.; Bugaysen, I.M.; Chernyshov, Yu.A.; Klyusa, M.D.; Krysin, V.P.; Livshits, B.Ya.; Martynenko, V.V.; Meniovich, B.I.; Sklyar, M.G.; Voytenko, B.I.

    1983-01-01

    A method is proposed for automatic control of the process for producing electrode pitch through regulation of the feed of the starting raw material, with correction based on the pitch level in the last apparatus of the technological line, and through changes in the feed of air to the reactors based on the flow rate of the starting raw material and the temperature of the liquid phase in the reactors. To increase the stability of the quality of the electrode pitch under changes in the properties of the starting resin, the heating temperature of the dehydrated resin in the pipe furnace is regulated relative to the quality of the medium-temperature pitch produced from it, while the level of the liquid phase in the reactor is regulated relative to the quality of the final product. The proposed method improves the quality of process regulation, which makes it possible to improve the properties of the anode mass and to reduce its consumption in the production of aluminum.

  14. AN AUTOMATIC DETECTION METHOD FOR EXTREME-ULTRAVIOLET DIMMINGS ASSOCIATED WITH SMALL-SCALE ERUPTION

    Energy Technology Data Exchange (ETDEWEB)

    Alipour, N.; Safari, H. [Department of Physics, University of Zanjan, P.O. Box 45195-313, Zanjan (Iran, Islamic Republic of); Innes, D. E. [Max-Planck Institut fuer Sonnensystemforschung, 37191 Katlenburg-Lindau (Germany)

    2012-02-10

    Small-scale extreme-ultraviolet (EUV) dimming often surrounds sites of energy release in the quiet Sun. This paper describes a method for the automatic detection of these small-scale EUV dimmings using a feature-based classifier. The method is demonstrated using sequences of 171 Å images taken by the STEREO/Extreme UltraViolet Imager (EUVI) on 2007 June 13 and by the Solar Dynamics Observatory/Atmospheric Imaging Assembly (AIA) on 2010 August 27. The feature identification relies on recognizing structure in sequences of space-time 171 Å images using the Zernike moments of the images. The Zernike moments of space-time slices with and without events are distinctive enough to be separated using a support vector machine (SVM) classifier. The SVM is trained using 150 event and 700 non-event space-time slices. We find a total of 1217 events in the EUVI images and 2064 events in the AIA images on the days studied. Most of the events are found between latitudes -35° and +35°. The sizes and expansion speeds of central dimming regions are extracted using a region-grow algorithm. The histograms of the sizes in both EUVI and AIA follow a steep power law with a slope of about -5. The AIA slope extends to smaller sizes before turning over. The mean velocity of 1325 dimming regions seen by AIA is found to be about 14 km s⁻¹.

  15. Solar cells, structures including organometallic halide perovskite monocrystalline films, and methods of preparation thereof

    KAUST Repository

    Bakr, Osman; Peng, Wei; Wang, Lingfei

    2017-01-01

    Embodiments of the present disclosure provide for solar cells including an organometallic halide perovskite monocrystalline film (see fig. 1.1B), other devices including the organometallic halide perovskite monocrystalline film, methods of making organometallic halide perovskite monocrystalline films, and the like.

  16. Automatic Offline Formulation of Robust Model Predictive Control Based on Linear Matrix Inequalities Method

    Directory of Open Access Journals (Sweden)

    Longge Zhang

    2013-01-01

    Two automatic robust model predictive control strategies are presented for uncertain polytopic linear plants with input and output constraints. A sequence of nested, geometrically proportioned, asymptotically stable ellipsoids and controllers is first constructed offline. In the first strategy, the feedback controllers are then automatically selected online with the receding horizon. Finally, a modified automatic offline robust MPC approach is constructed to improve the closed-loop system's performance. The newly proposed strategies not only reduce conservatism but also decrease online computation. Numerical examples are given to illustrate their effectiveness.
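    A hedged sketch of the online portion of such an offline strategy: given a precomputed sequence of invariant ellipsoids {x : xᵀPᵢx ≤ 1} ordered from largest to smallest, each paired with a feedback gain Kᵢ, select the smallest ellipsoid containing the current state and apply its controller. The P and K values in use are illustrative placeholders; computing real ones requires the offline LMI synthesis the paper describes.

    ```python
    def select_controller(ellipsoids, x):
        """ellipsoids: list of (P, K) pairs ordered from largest to smallest set,
        where each ellipsoid is {x : x' P x <= 1}. Returns the gain K of the
        smallest ellipsoid containing x, or None if x lies outside all of them."""
        n = len(x)
        chosen = None
        for P, K in ellipsoids:
            value = sum(x[i] * sum(P[i][j] * x[j] for j in range(n))
                        for i in range(n))
            if value <= 1.0:
                chosen = K  # keep tightening while x is still inside
        return chosen

    def control_input(K, x):
        """Apply the selected state feedback u = K x."""
        return [sum(K[i][j] * x[j] for j in range(len(x))) for i in range(len(K))]
    ```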

  17. A method for automatically extracting infectious disease-related primers and probes from the literature

    Directory of Open Access Journals (Sweden)

    Pérez-Rey David

    2010-08-01

    Background: Primer and probe sequences are the main components of nucleic acid-based detection systems. Biologists use primers and probes for different tasks, some related to the diagnosis and prescription of infectious diseases. The biological literature is the main information source for empirically validated primer and probe sequences; it is therefore becoming increasingly important for researchers to navigate this information. In this paper, we present a four-phase method for extracting and annotating primer/probe sequences from the literature. These phases are: (1) convert each document into a tree of paper sections, (2) detect candidate sequences using a set of finite-state-machine-based recognizers, (3) refine problem sequences using a rule-based expert system, and (4) annotate the extracted sequences with their related organism/gene information. Results: We tested our approach using a test set of 297 manuscripts. The extracted sequences and their organism/gene annotations were manually evaluated by a panel of molecular biologists. The evaluation shows that our approach is suitable for automatically extracting DNA sequences, achieving precision/recall rates of 97.98% and 95.77%, respectively. In addition, 76.66% of the detected sequences were correctly annotated with their organism name. The system also provided correct gene-related information for 46.18% of the sequences assigned a correct organism name. Conclusions: We believe that the proposed method can facilitate routine tasks for biomedical researchers using molecular methods to diagnose and prescribe treatment for different infectious diseases. The method can also be expanded to detect and extract other biological sequences from the literature, and the extracted information can be used to readily update available primer/probe databases or to create new databases from scratch.
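    Phase (2) of the pipeline above can be illustrated with a simple pattern recognizer for candidate oligonucleotide sequences. The paper uses finite state machines; a regular expression (an equivalent formalism for this kind of pattern) is used here for brevity, and the length threshold is an assumption chosen for illustration.

    ```python
    import re

    # DNA letters plus IUPAC ambiguity codes; a minimum length of 15 characters
    # reduces false positives from ordinary capitalized words (illustrative).
    CANDIDATE = re.compile(r"\b[ACGTURYSWKMBDHVN]{15,}\b")

    def find_candidate_sequences(text):
        """Return candidate primer/probe sequences found in free text."""
        return CANDIDATE.findall(text.upper())
    ```

    A real recognizer would also handle lowercase sequences, internal spacing, and 5'/3' decorations before the rule-based refinement step.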

  18. A Noise-Assisted Data Analysis Method for Automatic EOG-Based Sleep Stage Classification Using Ensemble Learning.

    Science.gov (United States)

    Olesen, Alexander Neergaard; Christensen, Julie A E; Sorensen, Helge B D; Jennum, Poul J

    2016-08-01

    Reducing the number of recording modalities for sleep staging research can benefit both researchers and patients, under the condition that they provide as accurate results as conventional systems. This paper investigates the possibility of exploiting the multisource nature of the electrooculography (EOG) signals by presenting a method for automatic sleep staging using the complete ensemble empirical mode decomposition with adaptive noise algorithm, and a random forest classifier. It achieves a high overall accuracy of 82% and a Cohen's kappa of 0.74 indicating substantial agreement between automatic and manual scoring.

  19. Automatic Enhancement of the Reference Set for Multi-Criteria Sorting in The Frame of Theseus Method

    Directory of Open Access Journals (Sweden)

    Fernandez Eduardo

    2014-05-01

    Some recent works have established the importance of handling abundant reference information in multi-criteria sorting problems. More valid information allows a better characterization of the agent's assignment policy, which can lead to improved decision support. However, information for enhancing the reference set may sometimes not be available, or may be too expensive. This paper explores an automatic mode of enhancing the reference set in the framework of the THESEUS multi-criteria sorting method. Performance measures are defined in order to test the results of the enhancement. Several theoretical arguments and practical experiments are provided, supporting a basic advantage of automatic enhancement: a reduction of the vagueness measure that improves THESEUS accuracy without additional effort from the decision agent. The experiments suggest that errors arising from inadequate automatic assignments can be kept at a manageable level.

  20. An automatic contour propagation method to follow parotid gland deformation during head-and-neck cancer tomotherapy

    International Nuclear Information System (INIS)

    Faggiano, E; Scalco, E; Rizzo, G; Fiorino, C; Broggi, S; Cattaneo, M; Maggiulli, E; Calandrino, R; Dell'Oca, I; Di Muzio, N

    2011-01-01

    We developed an efficient technique to auto-propagate parotid gland contours from planning kVCT to daily MVCT images of head-and-neck cancer patients treated with helical tomotherapy. The method deformed a 3D surface mesh constructed from manual kVCT contours by B-spline free-form deformation to generate optimal and smooth contours. Deformation was calculated by elastic image registration between kVCT and MVCT images. Data from ten head-and-neck cancer patients were considered and manual contours by three observers were included in both kVCT and MVCT images. A preliminary inter-observer variability analysis demonstrated the importance of contour propagation in tomotherapy application: a high variability was reported in MVCT parotid volume estimation (p = 0.0176, ANOVA test) and a larger uncertainty of MVCT contouring compared with kVCT was demonstrated by DICE and volume variability indices (Wilcoxon signed rank test, p < 10⁻⁴ for both indices). The performance analysis of our method showed no significant differences between automatic and manual contours in terms of volumes (p > 0.05, in a multiple comparison Tukey test), center-of-mass distances (p = 0.3043, ANOVA test), DICE values (p = 0.1672, Wilcoxon signed rank test) and average and maximum symmetric distances (p = 0.2043, p = 0.8228 Wilcoxon signed rank tests). Results suggested that our contour propagation method could successfully substitute human contouring on MVCT images.

  1. An automatic contour propagation method to follow parotid gland deformation during head-and-neck cancer tomotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Faggiano, E; Scalco, E; Rizzo, G [Istituto di Bioimmagini e Fisiologia Molecolare (IBFM), CNR, Milan (Italy); Fiorino, C; Broggi, S; Cattaneo, M; Maggiulli, E; Calandrino, R [Department of Medical Physics, San Raffaele Scientific Institute, Milan (Italy); Dell' Oca, I; Di Muzio, N, E-mail: fiorino.claudio@hsr.it [Department of Radiotherapy, San Raffaele Scientific Institute, Milan (Italy)

    2011-02-07

    We developed an efficient technique to auto-propagate parotid gland contours from planning kVCT to daily MVCT images of head-and-neck cancer patients treated with helical tomotherapy. The method deformed a 3D surface mesh constructed from manual kVCT contours by B-spline free-form deformation to generate optimal and smooth contours. Deformation was calculated by elastic image registration between kVCT and MVCT images. Data from ten head-and-neck cancer patients were considered and manual contours by three observers were included in both kVCT and MVCT images. A preliminary inter-observer variability analysis demonstrated the importance of contour propagation in tomotherapy application: a high variability was reported in MVCT parotid volume estimation (p = 0.0176, ANOVA test) and a larger uncertainty of MVCT contouring compared with kVCT was demonstrated by DICE and volume variability indices (Wilcoxon signed rank test, p < 10⁻⁴ for both indices). The performance analysis of our method showed no significant differences between automatic and manual contours in terms of volumes (p > 0.05, in a multiple comparison Tukey test), center-of-mass distances (p = 0.3043, ANOVA test), DICE values (p = 0.1672, Wilcoxon signed rank test) and average and maximum symmetric distances (p = 0.2043, p = 0.8228 Wilcoxon signed rank tests). Results suggested that our contour propagation method could successfully substitute human contouring on MVCT images.

  2. A novel automatic quantification method for high-content screening analysis of DNA double strand-break response.

    Science.gov (United States)

    Feng, Jingwen; Lin, Jie; Zhang, Pengquan; Yang, Songnan; Sa, Yu; Feng, Yuanming

    2017-08-29

    High-content screening is commonly used in studies of the DNA damage response. The double-strand break (DSB) is one of the most harmful types of DNA damage lesions. The conventional method used to quantify DSBs is γH2AX foci counting, which requires manual adjustment and preset parameters and is usually regarded as imprecise, time-consuming, poorly reproducible, and inaccurate. Therefore, a robust automatic alternative method is highly desired. In this manuscript, we present a new method for quantifying DSBs which involves automatic image cropping, automatic foci-segmentation and fluorescent intensity measurement. Furthermore, an additional function was added for standardizing the measurement of DSB response inhibition based on co-localization analysis. We tested the method with a well-known inhibitor of DSB response. The new method requires only one preset parameter, which effectively minimizes operator-dependent variations. Compared with conventional methods, the new method detected a higher percentage difference of foci formation between different cells, which can improve measurement accuracy. The effects of the inhibitor on DSB response were successfully quantified with the new method (p = 0.000). The advantages of this method in terms of reliability, automation and simplicity show its potential in quantitative fluorescence imaging studies and high-content screening for compounds and factors involved in DSB response.
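    The foci-segmentation and intensity-measurement steps described above can be illustrated with a single intensity threshold, echoing the paper's claim of needing only one preset parameter. The flood-fill labeling below is a generic stand-in, not the authors' implementation.

    ```python
    def count_foci(image, threshold):
        """Count connected bright regions (foci) above `threshold` in a 2D
        image (list of lists) and sum their fluorescence intensity.
        Uses 4-connected flood fill to stay dependency-free."""
        rows, cols = len(image), len(image[0])
        seen = [[False] * cols for _ in range(rows)]
        foci, total_intensity = 0, 0.0
        for r in range(rows):
            for c in range(cols):
                if image[r][c] > threshold and not seen[r][c]:
                    foci += 1                      # new connected component
                    stack = [(r, c)]
                    seen[r][c] = True
                    while stack:
                        y, x = stack.pop()
                        total_intensity += image[y][x]
                        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < rows and 0 <= nx < cols
                                    and image[ny][nx] > threshold
                                    and not seen[ny][nx]):
                                seen[ny][nx] = True
                                stack.append((ny, nx))
        return foci, total_intensity
    ```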

  3. Using automatic calibration method for optimizing the performance of Pedotransfer functions of saturated hydraulic conductivity

    Directory of Open Access Journals (Sweden)

    Ahmed M. Abdelbaki

    2016-06-01

    Pedotransfer functions (PTFs) are an easy way to predict saturated hydraulic conductivity (Ksat) without measurements. This study aims to auto-calibrate 22 PTFs. The PTFs were divided into three groups according to their input requirements, and the shuffled complex evolution algorithm was used in calibration. The results showed substantial improvement in the performance of the functions compared to the originally published versions. For group 1 PTFs, the geometric mean error ratio (GMER) and the geometric standard deviation of error ratio (GSDER) values were improved from the ranges (1.27–6.09) and (5.2–7.01) to (0.91–1.15) and (4.88–5.85), respectively. For group 2 PTFs, the GMER and GSDER values were improved from (0.3–1.55) and (5.9–12.38) to (1.00–1.03) and (5.5–5.9), respectively. For group 3 PTFs, the GMER and GSDER values were improved from (0.11–2.06) and (5.55–16.42) to (0.82–1.01) and (5.1–6.17), respectively. The results show that automatic calibration is an efficient and accurate method for enhancing the performance of PTFs.
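    The two evaluation statistics quoted above can be computed as follows: GMER is the geometric mean of the ratio of predicted to measured Ksat (1.0 means no bias) and GSDER is the geometric standard deviation of that ratio (1.0 means perfect agreement). Population (1/n) variance is used here; some authors use the sample (1/(n-1)) form.

    ```python
    import math

    def gmer_gsder(predicted, measured):
        """Geometric mean and geometric standard deviation of the error ratio
        predicted/measured, computed in log space."""
        logs = [math.log(p / m) for p, m in zip(predicted, measured)]
        n = len(logs)
        mean = sum(logs) / n
        var = sum((x - mean) ** 2 for x in logs) / n
        return math.exp(mean), math.exp(math.sqrt(var))
    ```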

  4. A method of automatic control of the process of compressing pyrogas in olefin production

    Energy Technology Data Exchange (ETDEWEB)

    Podval' niy, M.L.; Bobrovnikov, N.R.; Kotler, L.D.; Shib, L.M.; Tuchinskiy, M.R.

    1982-01-01

    In the known method of automatically controlling the compression of pyrogas in olefin production, the supply of cooling agents to the interstage coolers of the compression unit is regulated according to the flow of hydrocarbons to the unit. To raise performance by reducing the deposition of polymers on the flow-through surfaces of the equipment, the coolant supply is also regulated as a function of the hydrocarbon flows from the upper and lower parts of the demethanizer and from the bottoms of the stripping tower. Specifically, the coolant supply is regulated in proportion to the difference between the flow of stripping-tower bottoms and the ratio of the hydrocarbon flow from the upper and lower parts of the demethanizer to the hydrocarbon flow in the compression unit. When the proportion of light hydrocarbons (the sum of the upper and lower demethanizer products) in the total flow of pyrogas going to compression increases, the flow of coolant to the compression unit is reduced; condensation of these fractions in the separators, and their amount in the condensate piped to the stripping tower, decreases. When the proportion of light hydrocarbons in the pyrogas decreases, the flow of coolant is increased, improving condensation of heavy hydrocarbons in the separators and their removal from the compression unit in the bottoms of the stripping tower.

  5. Influence of Sample Size on Automatic Positional Accuracy Assessment Methods for Urban Areas

    Directory of Open Access Journals (Sweden)

    Francisco J. Ariza-López

    2018-05-01

    In recent years, new approaches aimed at increasing the automation level of positional accuracy assessment processes for spatial data have been developed. However, an aspect as significant as sample size has not yet been addressed in such cases. In this paper, we study the influence of sample size when estimating the planimetric positional accuracy of urban databases by means of an automatic assessment using a polygon-based methodology. Our study is based on a simulation process, which extracts pairs of homologous polygons from the assessed and reference data sources and applies two buffer-based methods. The parameter used for determining the different sizes (which range from 5 km up to 100 km) was the length of the polygons' perimeter, and for each sample size 1000 simulations were run. After completing the simulation process, comparisons between the estimated distribution function for each sample and the population distribution function were carried out by means of the Kolmogorov–Smirnov test. Results show a significant reduction in the variability of estimations when sample size increases from 5 km to 100 km.
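    The Kolmogorov–Smirnov comparison used in the study can be sketched as the maximum vertical distance between two empirical CDFs. This computes only the statistic; a p-value would come from the Kolmogorov distribution or a library such as scipy.

    ```python
    import bisect

    def ks_statistic(sample_a, sample_b):
        """Max vertical distance between the two empirical CDFs (the K-S statistic)."""
        a, b = sorted(sample_a), sorted(sample_b)

        def ecdf(sorted_x, v):
            # Empirical P(X <= v) for a sorted sample.
            return bisect.bisect_right(sorted_x, v) / len(sorted_x)

        points = sorted(set(a) | set(b))   # the ECDFs only jump at sample points
        return max(abs(ecdf(a, p) - ecdf(b, p)) for p in points)
    ```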

  6. A semi-automatic method for extracting thin line structures in images as rooted tree network

    Energy Technology Data Exchange (ETDEWEB)

    Brazzini, Jacopo [Los Alamos National Laboratory; Dillard, Scott [Los Alamos National Laboratory; Soille, Pierre [EC - JRC

    2010-01-01

    This paper addresses the problem of semi-automatic extraction of line networks in digital images, e.g., road or hydrographic networks in satellite images, or blood vessels in medical images. For that purpose, we improve a generic method derived from morphological and hydrological concepts, consisting of minimum-cost path estimation and flow simulation. While this approach fully exploits the local contrast and shape of the network, as well as its arborescent nature, we further incorporate local directional information about the structures in the image. Namely, an appropriate anisotropic metric is designed by using both the characteristic features of the target network and the eigen-decomposition of the gradient structure tensor of the image. The geodesic propagation from a given seed under this metric is then combined with hydrological operators for overland flow simulation to extract the line network. The algorithm is demonstrated for the extraction of blood vessels in a retina image and of a river network in a satellite image.

  7. Comparison of different methods to include recycling in LCAs of aluminium cans and disposable polystyrene cups

    NARCIS (Netherlands)

    Harst-Wintraecken, van der Eugenie; Potting, José; Kroeze, Carolien

    2016-01-01

    Many methods have been reported and used to include recycling in life cycle assessments (LCAs). This paper evaluates six widely used methods: three substitution methods (i.e. substitution based on equal quality, a correction factor, and alternative material), allocation based on the number of

  8. Gray-Matter Volume Estimate Score: A Novel Semi-Automatic Method Measuring Early Ischemic Change on CT

    OpenAIRE

    Song, Dongbeom; Lee, Kijeong; Kim, Eun Hye; Kim, Young Dae; Lee, Hye Sun; Kim, Jinkwon; Song, Tae-Jin; Ahn, Sung Soo; Nam, Hyo Suk; Heo, Ji Hoe

    2015-01-01

    Background and Purpose We developed a novel method named Gray-matter Volume Estimate Score (GRAVES), measuring early ischemic changes on Computed Tomography (CT) semi-automatically by computer software. This study aimed to compare GRAVES and Alberta Stroke Program Early CT Score (ASPECTS) with regards to outcome prediction and inter-rater agreement. Methods This was a retrospective cohort study. Among consecutive patients with ischemic stroke in the anterior circulation who received intra-art...

  9. An automatic gain matching method for {gamma}-ray spectra obtained with a multi-detector array

    Energy Technology Data Exchange (ETDEWEB)

    Pattabiraman, N.S.; Chintalapudi, S.N.; Ghugre, S.S. E-mail: ssg@alpha.iuc.res.in

    2004-07-01

    The increasing size of data sets from large multi-detector arrays makes the traditional approach to the pre-evaluation of the data difficult and time consuming. The pre-sorting involves detection and correction of the observed on-line drifts followed by calibration of the raw data. A new method for automatic detection and correction of these instrumental drifts is presented. An application of this method to the data acquired using a multi-Clover array is discussed.
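    A minimal sketch of the gain-matching idea: locate a known calibration peak in a reference spectrum and in a drifted spectrum, derive a multiplicative gain factor, and remap the drifted channels. Real pre-sorting would fit peak centroids and possibly an offset term; the argmax peak finder here is a simplification.

    ```python
    def gain_match(spectrum, reference):
        """Align a drifted spectrum to a reference by rescaling its channel axis
        so that the dominant peak positions coincide. Both inputs are lists of
        counts per channel; returns (gain, corrected_spectrum)."""
        ref_peak = max(range(len(reference)), key=lambda i: reference[i])
        obs_peak = max(range(len(spectrum)), key=lambda i: spectrum[i])
        gain = ref_peak / obs_peak          # multiplicative drift correction
        corrected = [0.0] * len(spectrum)
        for ch, counts in enumerate(spectrum):
            new_ch = int(round(ch * gain))  # remap each channel
            if 0 <= new_ch < len(corrected):
                corrected[new_ch] += counts
        return gain, corrected
    ```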

  10. An automatic gain matching method for γ-ray spectra obtained with a multi-detector array

    International Nuclear Information System (INIS)

    Pattabiraman, N.S.; Chintalapudi, S.N.; Ghugre, S.S.

    2004-01-01

    The increasing size of data sets from large multi-detector arrays makes the traditional approach to the pre-evaluation of the data difficult and time consuming. The pre-sorting involves detection and correction of the observed on-line drifts followed by calibration of the raw data. A new method for automatic detection and correction of these instrumental drifts is presented. An application of this method to the data acquired using a multi-Clover array is discussed.

  11. Systems and Methods for Fabricating Structures Including Metallic Glass-Based Materials Using Low Pressure Casting

    Science.gov (United States)

    Hofmann, Douglas C. (Inventor); Kennett, Andrew (Inventor)

    2018-01-01

    Systems and methods to fabricate objects including metallic glass-based materials using low-pressure casting techniques are described. In one embodiment, a method of fabricating an object that includes a metallic glass-based material includes: introducing molten alloy into a mold cavity defined by a mold using a low enough pressure such that the molten alloy does not conform to features of the mold cavity that are smaller than 100 microns; and cooling the molten alloy such that it solidifies, the solid including a metallic glass-based material.

  12. Decoding Facial Esthetics to Recreate an Esthetic Hairline: A Method Which Includes Forehead Curvature.

    Science.gov (United States)

    Garg, Anil K; Garg, Seema

    2017-01-01

    The evidence suggests that our perception of physical beauty is based on how closely the features of one's face reflect phi (the golden ratio) in their proportions. By extension, it should be possible to use a mathematical parameter to design an anterior hairline for any face. Our objective was to establish a user-friendly method to design an anterior hairline in cases of male pattern alopecia. Only a flexible measuring tape and a skin marker are needed. A reference point A (glabella) is taken between the eyebrows. Point E is marked near the lateral canthus, 8 cm horizontally on either side from the central point A. A mid-frontal point (point B) is marked 8 cm from point A on the forehead in the mid-vertical plane. The frontotemporal points (C and C') are marked on the frontotemporal area, 8 cm in a horizontal plane from point B and 8 cm in a vertical plane from point E. The temporal peak points (D and D') are marked on the line joining frontotemporal point C to lateral canthus point E, slightly more than halfway toward the lateral canthus, usually 5 cm from frontotemporal point C. This line forms the anterior border of the temporal triangle. We conducted a study of 431 cases of male pattern alopecia. The average distance of the mid-frontal point from the glabella was 7.9 cm, and reported patient satisfaction was 94.7%. Our method gives a skeletal frame of the anterior hairline from minimal measurements, requiring neither visual imagination nor extensive experience from the surgeon. It automatically accounts for the curvature of the forehead and is easy for a novice surgeon to use.

  13. A method for the computation of turbulent polymeric liquids including hydrodynamic interactions and chain entanglements

    Energy Technology Data Exchange (ETDEWEB)

    Kivotides, Demosthenes, E-mail: demosthenes.kivotides@strath.ac.uk

    2017-02-12

    An asymptotically exact method for the direct computation of turbulent polymeric liquids that includes (a) fully resolved, creeping microflow fields due to hydrodynamic interactions between chains, (b) exact account of (subfilter) residual stresses, (c) polymer Brownian motion, and (d) direct calculation of chain entanglements, is formulated. Although developed in the context of polymeric fluids, the method is equally applicable to turbulent colloidal dispersions and aerosols. - Highlights: • An asymptotically exact method for the computation of polymer and colloidal fluids is developed. • The method is valid for all flow inertia and all polymer volume fractions. • The method models entanglements and hydrodynamic interactions between polymer chains.

  14. Automatic limit switch system for scintillation device and method of operation

    International Nuclear Information System (INIS)

    Brunnett, C.J.; Ioannou, B.N.

    1976-01-01

    A scintillation scanner is described having an automatic limit switch system for setting the limits of travel of the radiation detection device, which is carried by a scanning boom. The automatic limit switch system incorporates position-responsive circuitry for developing a signal representative of the position of the boom, reference signal circuitry for developing a signal representative of a selected limit of travel of the boom, and comparator circuitry for comparing these signals in order to control the operation of a boom drive and indexing mechanism. (author)

  15. Development of calculation method for one-dimensional kinetic analysis in fission reactors, including feedback effects

    International Nuclear Information System (INIS)

    Paixao, S.B.; Marzo, M.A.S.; Alvim, A.C.M.

    1986-01-01

    The calculation method used in the WIGLE code is studied. Because no detailed published account of this method was available, it is expounded here in detail. The method has been applied to the solution of the one-dimensional, two-group diffusion equations in slab geometry (axial analysis), including non-boiling heat transfer and accounting for feedback. A steady-state program (CITER-1D), written in FORTRAN IV, has been implemented, providing excellent results and confirming the quality of the work developed. (Author) [pt]

  16. Solar cells, structures including organometallic halide perovskite monocrystalline films, and methods of preparation thereof

    KAUST Repository

    Bakr, Osman M.

    2017-03-02

    Embodiments of the present disclosure provide for solar cells including an organometallic halide perovskite monocrystalline film (see fig. 1.1B), other devices including the organometallic halide perovskite monocrystalline film, methods of making organometallic halide perovskite monocrystalline film, and the like.

  17. Battery-powered transport systems. Possible methods of automatically charging drive batteries

    Energy Technology Data Exchange (ETDEWEB)

    1981-03-01

    In modern driverless transport systems, not only is easy maintenance of the drive battery important, but so is automatic charging during times of standstill. Several systems are presented; one system in particular allows 100 batteries to be charged at the same time.

  18. Automatic Methods in Image Processing and Their Relevance to Map-Making.

    Science.gov (United States)

    1981-02-11

    folding frequency = .5) and s is the "shaping factor" which controls the spatial frequency content of the signal; the signal bandwidth increases... ARIZONA UNIV TUCSON DIGITAL IMAGE ANALYSIS LAB. AUTOMATIC METHODS IN IMAGE PROCESSING AND THEIR RELEVANCE TO MAP-MAKING. FEB 81. S R HUNT. DAA629

  19. Interactive vs. automatic ultrasound image segmentation methods for staging hepatic lipidosis.

    NARCIS (Netherlands)

    Weijers, G.; Starke, A.; Haudum, A.; Thijssen, J.M.; Rehage, J.; Korte, C.L. de

    2010-01-01

    The aim of this study was to test the hypothesis that automatic segmentation of vessels in ultrasound (US) images can produce similar or better results in grading fatty livers than interactive segmentation. A study was performed in postpartum dairy cows (N=151), as an animal model of human fatty liver disease.

  20. Evaluation of an automatic MR-based gold fiducial marker localisation method for MR-only prostate radiotherapy

    Science.gov (United States)

    Maspero, Matteo; van den Berg, Cornelis A. T.; Zijlstra, Frank; Sikkes, Gonda G.; de Boer, Hans C. J.; Meijer, Gert J.; Kerkmeijer, Linda G. W.; Viergever, Max A.; Lagendijk, Jan J. W.; Seevinck, Peter R.

    2017-10-01

    An MR-only radiotherapy planning (RTP) workflow would reduce the cost, radiation exposure and uncertainties introduced by CT-MRI registrations. In the case of prostate treatment, one of the remaining challenges currently holding back the implementation of an RTP workflow is the MR-based localisation of intraprostatic gold fiducial markers (FMs), which is crucial for accurate patient positioning. Currently, MR-based FM localisation is clinically performed manually. This is sub-optimal, as manual interaction increases the workload. Attempts to perform automatic FM detection often rely on being able to detect signal voids induced by the FMs in magnitude images. However, signal voids may not always be sufficiently specific, hampering accurate and robust automatic FM localisation. Here, we present an approach that aims at automatic MR-based FM localisation. This method is based on template matching using a library of simulated complex-valued templates, and exploiting the behaviour of the complex MR signal in the vicinity of the FM. Clinical evaluation was performed on seventeen prostate cancer patients undergoing external beam radiotherapy treatment. Automatic MR-based FM localisation was compared to manual MR-based and semi-automatic CT-based localisation (the current gold standard) in terms of detection rate and the spatial accuracy and precision of localisation. The proposed method correctly detected all three FMs in 15/17 patients. The spatial accuracy (mean) and precision (STD) were 0.9 mm and 0.5 mm respectively, which is below the voxel size of 1.1 × 1.1 × 1.2 mm3 and comparable to MR-based manual localisation. FM localisation failed (3/51 FMs) in the presence of bleeding or calcifications in the direct vicinity of the FM. The method was found to be spatially accurate and precise, which is essential for clinical use. To overcome any missed detection, we envision the use of the proposed method along with verification by an observer. This will result in a

  2. An improved phase-locked loop method for automatic resonance frequency tracing based on static capacitance broadband compensation for a high-power ultrasonic transducer.

    Science.gov (United States)

    Dong, Hui-juan; Wu, Jian; Zhang, Guang-yu; Wu, Han-fu

    2012-02-01

    The phase-locked loop (PLL) method is widely used for automatic resonance frequency tracing (ARFT) of high-power ultrasonic transducers, which are usually vibrating systems with high mechanical quality factor (Qm). However, a heavily-loaded transducer usually has a low Qm because the load has a large mechanical loss. In this paper, a series of theoretical analyses is carried out to detail why the traditional PLL method could cause serious frequency tracing problems, including loss of lock, antiresonance frequency tracing, and large tracing errors. The authors propose an improved ARFT method based on static capacitance broadband compensation (SCBC), which is able to address these problems. Experiments using a generator based on the novel method were carried out using crude oil as the transducer load. The results obtained have demonstrated the effectiveness of the novel method, compared with the conventional PLL method, in terms of improved tracing accuracy (±9 Hz) and immunity to antiresonance frequency tracing and loss of lock.
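    The compensation idea can be illustrated on a Butterworth-Van Dyke equivalent circuit: subtracting the static-capacitance branch jωC0 from the measured admittance leaves a phase zero exactly at the mechanical (series) resonance, which is the point a PLL should lock to. A minimal sketch, with illustrative component values not taken from the paper:

```python
import numpy as np

# Butterworth-Van Dyke equivalent circuit of a transducer (illustrative values,
# not from the paper): motional branch R-L-C in parallel with static capacitance C0.
R, L, C = 50.0, 0.10, 633e-12
C0 = 5e-9

f = np.linspace(15e3, 25e3, 200001)
w = 2 * np.pi * f
Y_motional = 1.0 / (R + 1j * (w * L - 1.0 / (w * C)))
Y_total = 1j * w * C0 + Y_motional       # admittance the generator actually sees
Y_comp = Y_total - 1j * w * C0           # after static-capacitance compensation

def zero_phase_freq(Y):
    """First frequency where the admittance phase crosses zero (PLL lock point)."""
    ph = np.angle(Y)
    idx = np.where(np.diff(np.sign(ph)) != 0)[0][0]
    return f[idx]

f_series = 1.0 / (2 * np.pi * np.sqrt(L * C))   # true mechanical (series) resonance
print(round(f_series, 1), round(zero_phase_freq(Y_comp), 1))
```

    After compensation the phase-zero of `Y_comp` coincides with the series resonance regardless of C0, which is why the compensated PLL no longer drifts toward the antiresonance.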

  3. An automatic method to analyze the Capacity-Voltage and Current-Voltage curves of a sensor

    CERN Document Server

    AUTHOR|(CDS)2261553

    2017-01-01

    An automatic method to perform capacitance-versus-voltage analysis for all kinds of silicon sensors is provided. It successfully calculates the depletion voltage for unirradiated and irradiated sensors, including measurements with outliers or measurements reaching breakdown. It is built in C++ using ROOT trees, with a skeleton analogous to TRICS, where the data as well as the results of the fits are saved to allow further analysis.
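    Depletion-voltage extraction from a C-V scan is commonly done by fitting straight lines to the rising and flat regions of 1/C² versus bias voltage and intersecting them. A minimal sketch of that idea on synthetic data (the record does not specify the tool's actual fitting procedure, so this is only the textbook version):

```python
import numpy as np

# Synthetic C-V curve of a silicon sensor (assumed values): below full depletion
# 1/C^2 rises linearly with bias voltage; above it, 1/C^2 is flat.
V_dep = 60.0                                   # "true" depletion voltage for this sketch
V = np.linspace(5, 120, 50)
inv_C2 = np.where(V < V_dep, V / V_dep, 1.0)   # normalised 1/C^2

def depletion_voltage(V, inv_C2, split=0.5):
    """Fit straight lines to the rising and flat regions of 1/C^2(V)
    and return the voltage where they intersect."""
    lo = V < split * V.max()                   # crude region split for the sketch
    hi = V > 0.8 * V.max()
    a1, b1 = np.polyfit(V[lo], inv_C2[lo], 1)
    a2, b2 = np.polyfit(V[hi], inv_C2[hi], 1)
    return (b2 - b1) / (a1 - a2)

print(depletion_voltage(V, inv_C2))
```

    A robust tool would pick the two fit regions automatically and reject outliers before the line fits; the fixed split fractions here are a deliberate simplification.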

  4. A Survey of Automatic Protocol Reverse Engineering Approaches, Methods, and Tools on the Inputs and Outputs View

    OpenAIRE

    Baraka D. Sija; Young-Hoon Goo; Kyu-Seok Shim; Huru Hasanova; Myung-Sup Kim

    2018-01-01

    A network protocol defines rules that control communications between two or more machines on the Internet, whereas Automatic Protocol Reverse Engineering (APRE) defines the way of extracting the structure of a network protocol without accessing its specifications. Enough knowledge on undocumented protocols is essential for security purposes, network policy implementation, and management of network resources. This paper reviews and analyzes a total of 39 approaches, methods, and tools towards ...

  5. Comparison of sample preparation methods for reliable plutonium and neptunium urinalysis using automatic extraction chromatography

    DEFF Research Database (Denmark)

    Qiao, Jixin; Xu, Yihong; Hou, Xiaolin

    2014-01-01

    This paper describes improvement and comparison of analytical methods for simultaneous determination of trace-level plutonium and neptunium in urine samples by inductively coupled plasma mass spectrometry (ICP-MS). Four sample pre-concentration techniques, including calcium phosphate, iron......), it endows urinalysis methods with better reliability and repeatability compared with co-precipitation techniques. In view of the applicability of different pre-concentration techniques proposed previously in the literature, the main challenge behind relevant method development is pointed to be the release...

  6. Force measuring valve assemblies, systems including such valve assemblies and related methods

    Science.gov (United States)

    DeWall, Kevin George [Pocatello, ID; Garcia, Humberto Enrique [Idaho Falls, ID; McKellar, Michael George [Idaho Falls, ID

    2012-04-17

    Methods of evaluating a fluid condition may include stroking a valve member and measuring a force acting on the valve member during the stroke. Methods of evaluating a fluid condition may include measuring a force acting on a valve member in the presence of fluid flow over a period of time and evaluating at least one of the frequency of changes in the measured force over the period of time and the magnitude of the changes in the measured force over the period of time to identify the presence of an anomaly in a fluid flow and, optionally, its estimated location. Methods of evaluating a valve condition may include directing a fluid flow through a valve while stroking a valve member, measuring a force acting on the valve member during the stroke, and comparing the measured force to a reference force. Valve assemblies and related systems are also disclosed.

  7. Methods for CT automatic exposure control protocol translation between scanner platforms.

    Science.gov (United States)

    McKenney, Sarah E; Seibert, J Anthony; Lamba, Ramit; Boone, John M

    2014-03-01

    An imaging facility with a diverse fleet of CT scanners faces considerable challenges when propagating CT protocols with consistent image quality and patient dose across scanner makes and models. Although some protocol parameters can comfortably remain constant among scanners (eg, tube voltage, gantry rotation time), the automatic exposure control (AEC) parameter, which selects the overall mA level during tube current modulation, is difficult to match among scanners, especially from different CT manufacturers. Objective methods for converting tube current modulation protocols among CT scanners were developed. Three CT scanners were investigated, a GE LightSpeed 16 scanner, a GE VCT scanner, and a Siemens Definition AS+ scanner. Translation of the AEC parameters such as noise index and quality reference mAs across CT scanners was specifically investigated. A variable-diameter poly(methyl methacrylate) phantom was imaged on the 3 scanners using a range of AEC parameters for each scanner. The phantom consisted of 5 cylindrical sections with diameters of 13, 16, 20, 25, and 32 cm. The protocol translation scheme was based on matching either the volumetric CT dose index or image noise (in Hounsfield units) between two different CT scanners. A series of analytic fit functions, corresponding to different patient sizes (phantom diameters), were developed from the measured CT data. These functions relate the AEC metric of the reference scanner, the GE LightSpeed 16 in this case, to the AEC metric of a secondary scanner. When translating protocols between different models of CT scanners (from the GE LightSpeed 16 reference scanner to the GE VCT system), the translation functions were linear. However, a power-law function was necessary to convert the AEC functions of the GE LightSpeed 16 reference scanner to the Siemens Definition AS+ secondary scanner, because of differences in the AEC functionality designed by these two companies. Protocol translation on the basis of
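    The power-law translation described above can be reproduced in miniature: given paired AEC settings that yield matched dose or noise on two scanners, a straight-line fit in log-log space gives the translation function. The numbers below are invented for illustration, not the paper's fitted coefficients:

```python
import numpy as np

# Hypothetical paired AEC settings that give matched noise on two scanners:
# noise index (scanner A) vs quality reference mAs (scanner B).
qrm = np.array([80.0, 120.0, 160.0, 200.0, 250.0])
a_true, b_true = 150.0, -0.5                 # assumed power law NI = a * QRM**b
ni = a_true * qrm ** b_true

# Fit the power-law translation function in log-log space (linear least squares)
b_fit, log_a_fit = np.polyfit(np.log(qrm), np.log(ni), 1)
a_fit = np.exp(log_a_fit)

def translate(qrm_value):
    """Noise index on scanner A that matches a quality-reference-mAs on scanner B."""
    return a_fit * qrm_value ** b_fit

print(round(a_fit, 1), round(b_fit, 2), round(translate(100.0), 2))
```

    A linear translation between two scanners of the same make, as reported for the two GE systems, is the special case b = 1 of the same fit.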

  8. Electrode assemblies, plasma apparatuses and systems including electrode assemblies, and methods for generating plasma

    Science.gov (United States)

    Kong, Peter C; Grandy, Jon D; Detering, Brent A; Zuck, Larry D

    2013-09-17

    Electrode assemblies for plasma reactors include a structure or device for constraining an arc endpoint to a selected area or region on an electrode. In some embodiments, the structure or device may comprise one or more insulating members covering a portion of an electrode. In additional embodiments, the structure or device may provide a magnetic field configured to control a location of an arc endpoint on the electrode. Plasma generating modules, apparatus, and systems include such electrode assemblies. Methods for generating a plasma include covering at least a portion of a surface of an electrode with an electrically insulating member to constrain a location of an arc endpoint on the electrode. Additional methods for generating a plasma include generating a magnetic field to constrain a location of an arc endpoint on an electrode.

  9. Two Methods of Automatic Evaluation of Speech Signal Enhancement Recorded in the Open-Air MRI Environment

    Science.gov (United States)

    Přibil, Jiří; Přibilová, Anna; Frollo, Ivan

    2017-12-01

    The paper focuses on two methods of evaluating the success of speech signal enhancement recorded in the open-air magnetic resonance imager during phonation for 3D human vocal tract modeling. The first approach enables a comparison based on statistical analysis by ANOVA and hypothesis tests. The second method is based on classification by Gaussian mixture models (GMM). The performed experiments have confirmed that the proposed ANOVA and GMM classifiers for automatic evaluation of speech quality are functional and produce results fully comparable with the standard evaluation based on the listening test method.
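    As a toy version of the GMM-based evaluation, one can fit a single Gaussian per recording condition to a scalar quality feature and classify by likelihood. This one-component model is only a stand-in for a full GMM, and the feature and its values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D quality feature (e.g. a cepstral distance) for two conditions
clean = rng.normal(loc=0.0, scale=1.0, size=500)     # enhanced recordings
noisy = rng.normal(loc=3.0, scale=1.0, size=500)     # unprocessed recordings

def fit_gaussian(x):
    return x.mean(), x.std()

def classify(x, params_a, params_b):
    """Assign each sample to the class with the higher Gaussian log-likelihood."""
    def loglik(x, mu, sd):
        return -0.5 * ((x - mu) / sd) ** 2 - np.log(sd)
    return loglik(x, *params_a) > loglik(x, *params_b)

pa, pb = fit_gaussian(clean), fit_gaussian(noisy)
acc = (classify(clean, pa, pb).mean() + (~classify(noisy, pa, pb)).mean()) / 2
print(round(acc, 3))
```

    With well-separated conditions the classifier's accuracy approaches the Bayes limit for the two Gaussians, which is the sense in which classification accuracy can score enhancement success.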

  10. System and method of self-properties for an autonomous and automatic computer environment

    Science.gov (United States)

    Hinchey, Michael G. (Inventor); Sterritt, Roy (Inventor)

    2010-01-01

    Systems, methods and apparatus are provided through which in some embodiments self health/urgency data and environment health/urgency data may be transmitted externally from an autonomic element. Other embodiments may include transmitting the self health/urgency data and environment health/urgency data together on a regular basis similar to the lub-dub of a heartbeat. Yet other embodiments may include a method for managing a system based on the functioning state and operating status of the system, wherein the method may include processing received signals from the system indicative of the functioning state and the operating status to obtain an analysis of the condition of the system, generating one or more stay alive signals based on the functioning status and the operating state of the system, transmitting the stay-alive signal, transmitting self health/urgency data, and transmitting environment health/urgency data. Still other embodiments may include an autonomic element that includes a self monitor, a self adjuster, an environment monitor, and an autonomic manager.

  11. A multiparametric automatic method to monitor long-term reproducibility in digital mammography: results from a regional screening programme.

    Science.gov (United States)

    Gennaro, G; Ballaminut, A; Contento, G

    2017-09-01

    This study aims to illustrate a multiparametric automatic method for monitoring the long-term reproducibility of digital mammography systems, and its application on a large scale. Twenty-five digital mammography systems employed within a regional screening programme were controlled weekly using the same type of phantom, whose images were analysed by an automatic software tool. To assess system reproducibility levels, 15 image quality indices (IQIs) were extracted and compared with the corresponding indices previously determined by a baseline procedure. The coefficients of variation (COVs) of the IQIs were used to assess the overall variability. A total of 2553 phantom images were collected from the 25 digital mammography systems from March 2013 to December 2014. Most of the systems showed excellent image quality reproducibility over the surveillance interval, with mean variability below 5%. Variability of each IQI was below 5%, with the exception of one index associated with the smallest phantom objects (0.25 mm), which was below 10%. The method applied for reproducibility tests (multi-detail phantoms, a cloud-based automatic software tool measuring multiple image quality indices, and statistical process control) was proven to be effective, applicable on a large scale, and applicable to any type of digital mammography system. • Reproducibility of mammography image quality should be monitored by appropriate quality controls. • Use of automatic software tools allows image quality evaluation by multiple indices. • System reproducibility can be assessed comparing current index value with baseline data. • Overall system reproducibility of modern digital mammography systems is excellent. • The method proposed and applied is cost-effective and easily scalable.
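    The statistical-process-control step reduces to computing, for each IQI, its coefficient of variation over the surveillance period and flagging any index whose variability exceeds the tolerance. A sketch with simulated weekly phantom data (the baseline values and the ~2% scatter are made up):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated weekly phantom measurements of four image-quality indices (IQIs)
baseline = np.array([100.0, 50.0, 2.5, 0.8])            # baseline values (hypothetical)
weeks = baseline * (1 + 0.02 * rng.standard_normal((90, 4)))

cov = weeks.std(axis=0) / weeks.mean(axis=0) * 100      # coefficient of variation, %
flagged = np.where(cov > 5.0)[0]                        # 5% control rule from the abstract

print(np.round(cov, 2), flagged)
```

    In the reported programme the same computation runs per system and per index, so a single flagged index localises a drift to one scanner and one image-quality aspect.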

  12. Including mixed methods research in systematic reviews: Examples from qualitative syntheses in TB and malaria control

    Science.gov (United States)

    2012-01-01

    Background Health policy makers now have access to a greater number and variety of systematic reviews to inform different stages in the policy making process, including reviews of qualitative research. The inclusion of mixed methods studies in systematic reviews is increasing, but these studies pose particular challenges to methods of review. This article examines the quality of the reporting of mixed methods and qualitative-only studies. Methods We used two completed systematic reviews to generate a sample of qualitative studies and mixed method studies in order to make an assessment of how the quality of reporting and rigor of qualitative-only studies compares with that of mixed-methods studies. Results Overall, the reporting of qualitative studies in our sample was consistently better when compared with the reporting of mixed methods studies. We found that mixed methods studies are less likely to provide a description of the research conduct or qualitative data analysis procedures and less likely to be judged credible or provide rich data and thick description compared with standalone qualitative studies. Our time-related analysis shows that for both types of study, papers published since 2003 are more likely to report on the study context, describe analysis procedures, and be judged credible and provide rich data. However, the reporting of other aspects of research conduct (i.e. descriptions of the research question, the sampling strategy, and data collection methods) in mixed methods studies does not appear to have improved over time. Conclusions Mixed methods research makes an important contribution to health research in general, and could make a more substantial contribution to systematic reviews. Through our careful analysis of the quality of reporting of mixed methods and qualitative-only research, we have identified areas that deserve more attention in the conduct and reporting of mixed methods research. PMID:22545681

  13. A novel technique for including surface tension in PLIC-VOF methods

    Energy Technology Data Exchange (ETDEWEB)

    Meier, M.; Yadigaroglu, G. [Swiss Federal Institute of Technology, Nuclear Engineering Lab. ETH-Zentrum, CLT, Zurich (Switzerland); Smith, B. [Paul Scherrer Inst. (PSI), Villigen (Switzerland). Lab. for Thermal-Hydraulics

    2002-02-01

    Various versions of Volume-of-Fluid (VOF) methods have been used successfully for the numerical simulation of gas-liquid flows with an explicit tracking of the phase interface. Of these, Piecewise-Linear Interface Construction (PLIC-VOF) appears as a fairly accurate, although somewhat more involved, variant. Including effects due to surface tension remains a problem, however. The most prominent methods, the Continuum Surface Force (CSF) method of Brackbill et al. and the method of Zaleski and co-workers (both referenced later), both induce spurious or 'parasitic' currents, and offer only moderate accuracy in determining the curvature. We present here a new method to determine curvature accurately using an estimator function, which is tuned with a least-squares fit against reference data. Furthermore, we show how spurious currents may be drastically reduced using the reconstructed interfaces from the PLIC-VOF method. (authors)
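    For context, a common baseline for interface curvature in VOF-type methods is the height-function estimate: differentiate column "heights" of the interface and evaluate κ = |h''| / (1 + h'²)^{3/2}. This is not the authors' tuned estimator function, just a minimal sketch of the standard approach on an analytic circular interface:

```python
import numpy as np

# Height-function curvature estimate on an analytic circular interface.
Rc = 0.25                         # circle radius (assumed test interface)
dx = 0.01                         # grid spacing
x = dx * np.arange(-2, 3)         # 5 columns around x = 0
h = np.sqrt(Rc**2 - x**2)         # interface "heights" in each column

hp = (h[3] - h[1]) / (2 * dx)                 # central first derivative at x = 0
hpp = (h[3] - 2 * h[2] + h[1]) / dx**2        # central second derivative
kappa = abs(hpp) / (1 + hp**2) ** 1.5         # curvature of y = h(x)

print(round(kappa, 4), round(1 / Rc, 4))
```

    In a real PLIC-VOF code the heights come from summing volume fractions along grid columns rather than from an analytic formula, which is where the accuracy problems discussed above arise.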

  14. Control method and device for automatic drift stabilization in radiation detection

    International Nuclear Information System (INIS)

    Berthold, F.; Kubisiak, H.

    1979-01-01

    In the automatic control circuit individual electron peaks in the detectors, e.g. NaI crystals or proportional counters, are used. These peaks exhibit no drift dependence; they may be produced in the detectors in different ways. The control circuit may be applied in nuclear radiation measurement techniques, photometry, gamma cameras and for measuring the X-ray fine structure with proportional counters. (DG) [de

  15. Comparison of HMM and DTW methods in automatic recognition of pathological phoneme pronunciation

    OpenAIRE

    Wielgat, Robert; Zielinski, Tomasz P.; Swietojanski, Pawel; Zoladz, Piotr; Król, Daniel; Wozniak, Tomasz; Grabias, Stanislaw

    2007-01-01

    In the paper, the recently proposed Human Factor Cepstral Coefficients (HFCC) are used for automatic recognition of pathological phoneme pronunciation in the speech of impaired children, and the efficiency of this approach is compared to the application of the standard Mel-Frequency Cepstral Coefficients (MFCC) as a feature vector. Both dynamic time warping (DTW), working on whole words or embedded phoneme patterns, and hidden Markov models (HMM) are used as classifiers in the presented research. Obtained resul...
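    The DTW classifier mentioned above rests on the classic dynamic-programming alignment distance. A compact implementation shows why a time-stretched utterance of the same pattern scores much closer than a different pattern of the same length:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A time-stretched copy of a pattern should be far closer under DTW
# than a different pattern of the same length.
t = np.linspace(0, 1, 40)
ref = np.sin(2 * np.pi * t)
slow = np.sin(2 * np.pi * np.linspace(0, 1, 60))    # same shape, different rate
other = np.cos(2 * np.pi * np.linspace(0, 1, 60))   # different pattern

print(dtw_distance(ref, slow) < dtw_distance(ref, other))
```

    In speech work the scalar samples are replaced by MFCC or HFCC feature vectors and the per-frame cost by a vector distance, but the recursion is unchanged.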

  16. An Automatic Parameter Identification Method for a PMSM Drive with LC-Filter

    DEFF Research Database (Denmark)

    Bech, Michael Møller; Christensen, Jeppe Haals; Weber, Magnus L.

    2016-01-01

    of the PMSM fed through an LC-filter. Based on the measured current response, model parameters for both the filter (L, R, C) and the PMSM (L and R) are estimated: first, the frequency response of the system is estimated using the Welch modified periodogram method, and then an optimization algorithm is used to find the parameters in an analytical reference model that minimize the model error. To demonstrate the practical feasibility of the method, a fully functional drive including an embedded real-time controller has been built. In addition to modulation, data acquisition and control, the whole parameter identification method is also implemented on the real-time controller. Based on laboratory experiments on a 22 kW drive, it is concluded that the embedded identification method can estimate the five parameters in less than ten seconds.
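    The second stage, fitting an analytical model to an estimated frequency response, can be sketched with a simple R-L impedance model: with Z(ω) = R + jωL the fit is linear least squares on the complex samples. The model and its values are illustrative only; the actual drive fits five parameters of the full filter-plus-machine model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed simple R-L model of one phase: Z(w) = R + j*w*L (invented values).
R_true, L_true = 0.35, 2.4e-3
w = 2 * np.pi * np.logspace(1, 3, 50)                    # 10 Hz .. 1 kHz
Z = (R_true + 1j * w * L_true
     + 0.01 * (rng.standard_normal(50) + 1j * rng.standard_normal(50)))

# Linear least squares on the complex samples: Z ~ [1, jw] @ [R, L]
A = np.column_stack([np.ones_like(w) + 0j, 1j * w])
params, *_ = np.linalg.lstsq(A, Z, rcond=None)
R_est, L_est = params.real

print(round(R_est, 3), round(L_est * 1e3, 3))
```

    For the full five-parameter model the response is nonlinear in the parameters, which is why the paper uses an iterative optimization algorithm instead of a one-shot linear fit.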

  17. PHOTOGRAMMETRIC MODEL BASED METHOD OF AUTOMATIC ORIENTATION OF SPACE CARGO SHIP RELATIVE TO THE INTERNATIONAL SPACE STATION

    Directory of Open Access Journals (Sweden)

    Y. B. Blokhinov

    2012-07-01

    The technical problem of creating the new Russian version of an automatic Space Cargo Ship (SCS) for the International Space Station (ISS) is inseparably connected to the development of a digital video system for automatically measuring the SCS position relative to the ISS in the process of spacecraft docking. This paper presents a method for estimating the orientation elements based on the use of a highly detailed digital model of the ISS. The input data are digital frames from a calibrated video system and the initial values of the orientation elements; these can be estimated from navigation devices or by a fast-and-rough viewpoint-dependent algorithm. The orientation elements are then refined by means of algorithmic processing. The main idea is to solve the exterior orientation problem mainly on the basis of contour information from the frame image of the ISS instead of ground control points. A detailed digital model is used for generating raster templates of ISS nodes; the templates are used to detect and locate the nodes on the target image with the required accuracy. The process is performed for every frame, and the resulting parameters are taken as the orientation elements. The Kalman filter is used for statistical support of the estimation process and real-time pose tracking. Finally, the modeling results presented show that the proposed method can be regarded as one means to ensure the algorithmic support of automatic spacecraft docking.
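    The Kalman-filter tracking stage can be sketched in one dimension: a constant-velocity filter smoothing one noisy pose parameter (here a hypothetical range value; the real system tracks the full pose, and all noise settings below are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

# Constant-velocity Kalman filter smoothing one pose parameter.
dt, q, r = 0.1, 1e-3, 0.5          # time step, process noise, measurement noise
F = np.array([[1, dt], [0, 1]])    # state transition: [position, velocity]
H = np.array([[1.0, 0.0]])         # only position is measured
Q = q * np.eye(2)
R = np.array([[r]])

truth = 100 - 0.5 * dt * np.arange(200)             # steady approach at 0.5 m/s
meas = truth + np.sqrt(r) * rng.standard_normal(200)

x = np.array([[meas[0]], [0.0]])
P = np.eye(2)
est = []
for z in meas:
    x = F @ x                        # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R              # update
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    est.append(x[0, 0])

rmse_raw = np.sqrt(np.mean((meas - truth) ** 2))
rmse_kf = np.sqrt(np.mean((np.array(est) - truth) ** 2))
print(round(rmse_raw, 3), round(rmse_kf, 3))
```

    The filtered estimate tracks the approach with noticeably lower error than the raw per-frame measurements, which is exactly the "statistical support" role the abstract assigns to the filter.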

  18. Automatic mesh refinement and local multigrid methods for contact problems: application to the Pellet-Cladding mechanical Interaction

    International Nuclear Information System (INIS)

    Liu, Hao

    2016-01-01

    This Ph.D. work takes place within the framework of studies on Pellet-Cladding mechanical Interaction (PCI), which occurs in the fuel rods of pressurized water reactors. This manuscript focuses on automatic mesh refinement to simulate this phenomenon more accurately while maintaining acceptable computational time and memory space for industrial calculations. An automatic mesh refinement strategy based on the combination of the Local Defect Correction multigrid method (LDC) with the Zienkiewicz and Zhu a posteriori error estimator is proposed. The estimated error is used to detect the zones to be refined, where the local sub-grids of the LDC method are generated. Several stopping criteria are studied to end the refinement process when the solution is accurate enough or when further refinement no longer improves the global solution accuracy. Numerical results for elastic 2D test cases with pressure discontinuity show the efficiency of the proposed strategy. The automatic mesh refinement in the case of unilateral contact problems is then considered. The strategy previously introduced can be easily adapted to multi-body refinement by estimating the solution error on each body separately. Post-processing is often necessary to ensure the conformity of the refined areas with regard to the contact boundaries. A variety of numerical experiments with elastic contact (with or without friction, with or without an initial gap) confirms the efficiency and adaptability of the proposed strategy. (author) [fr
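    The detect-and-refine loop can be caricatured in 1D: compute a per-element error indicator, bisect the elements above a threshold, and repeat. The crude solution-jump indicator below merely stands in for the Zienkiewicz-Zhu estimator, and there is no multigrid correction here; the sketch only shows how refinement concentrates where the indicator is large:

```python
import numpy as np

# Error-indicator-driven refinement on a 1-D mesh (toy stand-in for the
# ZZ-estimator + local-defect-correction strategy described in the abstract).
def refine(nodes, f, tol):
    """Bisect every element whose solution-jump indicator exceeds tol."""
    u = f(nodes)
    jumps = np.abs(np.diff(u))               # crude per-element error indicator
    new_nodes = [nodes[0]]
    for i, j in enumerate(jumps):
        if j > tol:                          # refine: insert the midpoint
            new_nodes.append(0.5 * (nodes[i] + nodes[i + 1]))
        new_nodes.append(nodes[i + 1])
    return np.array(new_nodes)

f = lambda x: np.tanh(50 * (x - 0.5))        # sharp gradient near x = 0.5
nodes = np.linspace(0, 1, 11)
for _ in range(4):                           # a few refinement passes
    nodes = refine(nodes, f, tol=0.2)

print(len(nodes), round(np.diff(nodes).min(), 5))
```

    After a few passes the mesh is fine only around the steep gradient while the smooth regions keep their original spacing, which is the cost saving the thesis is after.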

  19. Comparison of different methods to include recycling in LCAs of aluminium cans and disposable polystyrene cups.

    Science.gov (United States)

    van der Harst, Eugenie; Potting, José; Kroeze, Carolien

    2016-02-01

    Many methods have been reported and used to include recycling in life cycle assessments (LCAs). This paper evaluates six widely used methods: three substitution methods (i.e. substitution based on equal quality, a correction factor, and alternative material), allocation based on the number of recycling loops, the recycled-content method, and the equal-share method. These six methods were first compared, with an assumed hypothetical 100% recycling rate, for an aluminium can and a disposable polystyrene (PS) cup. The substitution and recycled-content method were next applied with actual rates for recycling, incineration and landfilling for both product systems in selected countries. The six methods differ in their approaches to credit recycling. The three substitution methods stimulate the recyclability of the product and assign credits for the obtained recycled material. The choice to either apply a correction factor, or to account for alternative substituted material has a considerable influence on the LCA results, and is debatable. Nevertheless, we prefer incorporating quality reduction of the recycled material by either a correction factor or an alternative substituted material over simply ignoring quality loss. The allocation-on-number-of-recycling-loops method focusses on the life expectancy of material itself, rather than on a specific separate product. The recycled-content method stimulates the use of recycled material, i.e. credits the use of recycled material in products and ignores the recyclability of the products. The equal-share method is a compromise between the substitution methods and the recycled-content method. The results for the aluminium can follow the underlying philosophies of the methods. The results for the PS cup are additionally influenced by the correction factor or credits for the alternative material accounting for the drop in PS quality, the waste treatment management (recycling rate, incineration rate, landfilling rate), and the
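    The arithmetic behind two of the compared approaches can be made concrete with simplified textbook formulas; both the numbers and the exact credit formulas below are illustrative assumptions, not values or equations taken from the paper:

```python
# Cradle-to-grave impact of one product under two recycling allocation methods
# (simplified illustrative formulas; numbers are invented).
E_virgin, E_recycled, E_disposal = 10.0, 3.0, 1.0   # impact per kg of material
r_in = 0.4     # recycled content of the product
r_out = 0.6    # end-of-life recycling rate
Q = 0.8        # quality correction factor for the recovered material

# Recycled-content (cut-off) method: credit for *using* recycled input,
# no credit for recyclability at end of life.
E_cutoff = (1 - r_in) * E_virgin + r_in * E_recycled + (1 - r_out) * E_disposal

# Substitution method with a quality correction factor: credit for the
# virgin material displaced by the recovered material.
E_subst = E_virgin + (1 - r_out) * E_disposal - r_out * Q * (E_virgin - E_recycled)

print(E_cutoff, E_subst)
```

    The contrast mirrors the paper's point: the cut-off result improves when recycled input rises, while the substitution result improves when recyclability (and the quality factor Q) rises.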

  20. Methods of using structures including catalytic materials disposed within porous zeolite materials to synthesize hydrocarbons

    Science.gov (United States)

    Rollins, Harry W [Idaho Falls, ID; Petkovic, Lucia M [Idaho Falls, ID; Ginosar, Daniel M [Idaho Falls, ID

    2011-02-01

    Catalytic structures include a catalytic material disposed within a zeolite material. The catalytic material may be capable of catalyzing a formation of methanol from carbon monoxide and/or carbon dioxide, and the zeolite material may be capable of catalyzing a formation of hydrocarbon molecules from methanol. The catalytic material may include copper and zinc oxide. The zeolite material may include a first plurality of pores substantially defined by a crystal structure of the zeolite material and a second plurality of pores dispersed throughout the zeolite material. Systems for synthesizing hydrocarbon molecules also include catalytic structures. Methods for synthesizing hydrocarbon molecules include contacting hydrogen and at least one of carbon monoxide and carbon dioxide with such catalytic structures. Catalytic structures are fabricated by forming a zeolite material at least partially around a template structure, removing the template structure, and introducing a catalytic material into the zeolite material.

  1. Including mixed methods research in systematic reviews: examples from qualitative syntheses in TB and malaria control.

    Science.gov (United States)

    Atkins, Salla; Launiala, Annika; Kagaha, Alexander; Smith, Helen

    2012-04-30

    Health policy makers now have access to a greater number and variety of systematic reviews to inform different stages in the policy making process, including reviews of qualitative research. The inclusion of mixed methods studies in systematic reviews is increasing, but these studies pose particular challenges to methods of review. This article examines the quality of the reporting of mixed methods and qualitative-only studies. We used two completed systematic reviews to generate a sample of qualitative studies and mixed method studies in order to make an assessment of how the quality of reporting and rigor of qualitative-only studies compares with that of mixed-methods studies. Overall, the reporting of qualitative studies in our sample was consistently better when compared with the reporting of mixed methods studies. We found that mixed methods studies are less likely to provide a description of the research conduct or qualitative data analysis procedures and less likely to be judged credible or provide rich data and thick description compared with standalone qualitative studies. Our time-related analysis shows that for both types of study, papers published since 2003 are more likely to report on the study context, describe analysis procedures, and be judged credible and provide rich data. However, the reporting of other aspects of research conduct (i.e. descriptions of the research question, the sampling strategy, and data collection methods) in mixed methods studies does not appear to have improved over time. Mixed methods research makes an important contribution to health research in general, and could make a more substantial contribution to systematic reviews. Through our careful analysis of the quality of reporting of mixed methods and qualitative-only research, we have identified areas that deserve more attention in the conduct and reporting of mixed methods research.

  2. Development of a method for fast and automatic radiocarbon measurement of aerosol samples by online coupling of an elemental analyzer with a MICADAS AMS

    Energy Technology Data Exchange (ETDEWEB)

    Salazar, G., E-mail: gary.salazar@dcb.unibe.ch [Department of Chemistry and Biochemistry & Oeschger Centre for Climate Change Research, University of Bern, 3012 Bern (Switzerland); Zhang, Y.L.; Agrios, K. [Department of Chemistry and Biochemistry & Oeschger Centre for Climate Change Research, University of Bern, 3012 Bern (Switzerland); Paul Scherrer Institut (PSI), 5232 Villigen (Switzerland); Szidat, S. [Department of Chemistry and Biochemistry & Oeschger Centre for Climate Change Research, University of Bern, 3012 Bern (Switzerland)

    2015-10-15

    A fast and automatic method for radiocarbon analysis of aerosol samples is presented. This type of analysis requires a high number of measurements of samples with low carbon masses, but accepts lower precisions than carbon dating analysis. The method is based on online trapping of CO{sub 2} and coupling an elemental analyzer with a MICADAS AMS by means of a gas interface. It gives similar results to a previously validated reference method for the same set of samples. The method is fast and automatic and typically provides uncertainties of 1.5–5% for representative aerosol samples. It proves to be robust and reliable and allows for overnight and unattended measurements. A constant and cross contamination correction is included, which indicates a constant contamination of 1.4 ± 0.2 μg C with 70 ± 7 pMC and a cross contamination of (0.2 ± 0.1)% from the previous sample. A real-time online coupling version of the method was also investigated. It shows promising results for standard materials, with slightly higher uncertainties than the trapping online approach.
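
    The constant-contamination correction mentioned in this abstract follows the standard two-component isotopic mass balance: the measured value is a mass-weighted mixture of the sample and a fixed contaminant. A minimal sketch under that assumption (the function name and the 20 μg C / 75 pMC example are invented for illustration; the 1.4 μg C and 70 pMC figures come from the abstract):

    ```python
    def correct_constant_contamination(f_meas, m_meas, f_c=70.0, m_c=1.4):
        """Remove a constant contamination (mass m_c in ug C at f_c pMC)
        from a measurement of total carbon mass m_meas reading f_meas pMC.
        Mass balance: f_meas * m_meas = f_sample * m_sample + f_c * m_c.
        """
        m_sample = m_meas - m_c  # carbon mass attributable to the sample itself
        return (f_meas * m_meas - f_c * m_c) / m_sample

    # hypothetical 20 ug C aerosol measurement reading 75 pMC
    corrected = correct_constant_contamination(75.0, 20.0)
    ```

    The smaller the sample mass, the larger the relative weight of the constant contamination, which is why this correction matters most for the low-mass aerosol samples the method targets.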

  3. Complete Tangent Stiffness for eXtended Finite Element Method by including crack growth parameters

    DEFF Research Database (Denmark)

    Mougaard, J.F.; Poulsen, P.N.; Nielsen, L.O.

    2013-01-01

    The eXtended Finite Element Method (XFEM) is a useful tool for modeling the growth of discrete cracks in structures made of concrete and other quasi‐brittle and brittle materials. However, in a standard application of XFEM, the tangent stiffness is not complete. This is a result of not including the crack geometry parameters, such as the crack length and the crack direction, directly in the virtual work formulation. For efficiency, it is essential to obtain a complete tangent stiffness. A new method is presented in this work to include, in an incremental form, the crack growth parameters on equal terms with the degrees of freedom in the FEM‐equations. The complete tangential stiffness matrix is based on the virtual work together with the constitutive conditions at the crack tip. Introducing the crack growth parameters as direct unknowns, both equilibrium equations and the crack tip criterion can be handled…

  4. Automatic speech recognition (zero crossing method). Automatic recognition of isolated vowels; Reconnaissance automatique de la parole (methode des passages par zero). Reconnaissance automatique de voyelles isolees

    Energy Technology Data Exchange (ETDEWEB)

    Dupeyrat, Benoit

    1975-06-10

    This note describes a recognition method for isolated vowels that uses a preprocessing of the vocal signal. The preprocessing extracts the extrema of the vocal signal and the time intervals separating them (zero-crossing distances of the first derivative of the signal). The recognition of vowels uses normalized histograms of the values of these intervals. The program determines a distance between the histogram of the sound to be recognized and model histograms built during a learning phase. The results, obtained in real time on a minicomputer, are relatively independent of the speaker, provided the fundamental frequency does not vary too much (i.e., speakers of the same sex). (author)

  5. Data base structure and Management for Automatic Calculation of 210Pb Dating Methods Applying Different Models

    International Nuclear Information System (INIS)

    Gasco, C.; Anton, M. P.; Ampudia, J.

    2003-01-01

    The introduction of macros into the calculation sheets allows the automatic application of various dating models using unsupported 210Pb data from a data base. The calculation books that contain the models have been modified to permit the implementation of these macros. The Marine and Aquatic Radioecology group of CIEMAT (MARG) will be involved in new European projects, thus new models have been developed. This report contains a detailed description of: a) the newly implemented macros, b) the design of a dating menu in the calculation sheet, and c) the organization and structure of the data base. (Author) 4 refs

  6. Study on the Automatic Detection Method and System of Multifunctional Hydrocephalus Shunt

    Science.gov (United States)

    Sun, Xuan; Wang, Guangzhen; Dong, Quancheng; Li, Yuzhong

    2017-07-01

    Aiming at the difficulty of micro pressure detection and micro flow control in the testing of hydrocephalus shunts, the principle of shunt performance detection was analyzed. In this study, the authors analyzed the principles of several items of shunt performance detection, and used an advanced micro pressure sensor and a micro flow peristaltic pump to overcome the micro pressure detection and micro flow control difficulties. The study also integrated many common experimental projects and successfully developed an automatic detection system covering the shunt performance detection functions, achieving testing with high precision, high efficiency, and automation.

  7. Earthquake analysis of structures including structure-soil interaction by a substructure method

    International Nuclear Information System (INIS)

    Chopra, A.K.; Guttierrez, J.A.

    1977-01-01

    A general substructure method for analysis of the response of nuclear power plant structures to earthquake ground motion, including the effects of structure-soil interaction, is summarized. The method is applicable to complex structures idealized as finite element systems, with the soil region treated either as a continuum, for example as a viscoelastic halfspace, or idealized as a finite element system. The halfspace idealization permits reliable analysis for sites where essentially similar soils extend to large depths and there is no rigid boundary such as a soil-rock interface. For sites where layers of soft soil are underlain by rock at shallow depth, finite element idealization of the soil region is appropriate; in this case, the direct and substructure methods would lead to equivalent results, but the latter provides the better alternative. Treating the free field motion directly as the earthquake input in the substructure method eliminates the deconvolution calculations and the related assumption, regarding the type and direction of earthquake waves, required in the direct method. The substructure method is computationally efficient because the two substructures, the structure and the soil region, are analyzed separately; and, more important, it permits taking advantage of the important feature that the response to earthquake ground motion is essentially contained in the lower few natural modes of vibration of the structure on a fixed base. For sites where essentially similar soils extend to large depths and there is no obvious rigid boundary such as a soil-rock interface, numerical results for the earthquake response of a nuclear reactor structure are presented to demonstrate that the commonly used finite element method may lead to unacceptable errors, but the substructure method leads to reliable results.

  8. A Pressure Plate-Based Method for the Automatic Assessment of Foot Strike Patterns During Running.

    Science.gov (United States)

    Santuz, Alessandro; Ekizos, Antonis; Arampatzis, Adamantios

    2016-05-01

    The foot strike pattern (FSP, a description of how the foot touches the ground at impact) is recognized to be a predictor of both performance and injury risk. The objective of the current investigation was to validate an original foot strike pattern assessment technique based on the numerical analysis of foot pressure distribution. We analyzed the strike patterns during running of 145 healthy men and women (85 male, 60 female). The participants ran on a treadmill with an integrated pressure plate at three different speeds: preferred (shod and barefoot 2.8 ± 0.4 m/s), faster (shod 3.5 ± 0.6 m/s) and slower (shod 2.3 ± 0.3 m/s). A custom-designed algorithm allowed automatic footprint recognition and FSP evaluation. Incomplete footprints were simultaneously identified and corrected by the software itself. The widely used technique of analyzing high-speed video recordings was checked for its reliability and was used to validate the numerical technique. The automatic numerical approach showed good conformity with the reference video-based technique (ICC = 0.93, p < 0.01). The great improvement in data throughput and the increased completeness of results allow the use of this software as a powerful feedback tool in a simple experimental setup.
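
    The abstract does not spell out the classification rule, but pressure-plate FSP studies conventionally use the strike index: the center-of-pressure location at initial contact, expressed as a fraction of foot length from the heel, with thirds of the foot defining rearfoot, midfoot, and forefoot strikes. A minimal sketch under that assumption (the function and thresholds are the conventional ones, not necessarily the authors' algorithm):

    ```python
    def classify_fsp(strike_index):
        """Classify foot strike from the strike index: center-of-pressure
        position at initial contact as a fraction of foot length measured
        from the heel (0 = heel, 1 = toes). Conventional thirds are used."""
        if not 0.0 <= strike_index <= 1.0:
            raise ValueError("strike index must lie in [0, 1]")
        if strike_index < 1 / 3:
            return "rearfoot"
        if strike_index <= 2 / 3:
            return "midfoot"
        return "forefoot"

    label = classify_fsp(0.15)  # contact begins under the heel -> "rearfoot"
    ```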

  9. Automatic Crack Detection and Classification Method for Subway Tunnel Safety Monitoring

    Directory of Open Access Journals (Sweden)

    Wenyu Zhang

    2014-10-01

    Cracks are an important indicator reflecting the safety status of infrastructures. This paper presents an automatic crack detection and classification methodology for subway tunnel safety monitoring. With the application of high-speed complementary metal-oxide-semiconductor (CMOS) industrial cameras, the tunnel surface can be captured and stored in digital images. In the next step, the local dark regions with potential crack defects are segmented from the original gray-scale images by utilizing morphological image processing techniques and thresholding operations. In the feature extraction process, we present a distance-histogram-based shape descriptor that effectively describes the spatial shape difference between cracks and other irrelevant objects. Along with other features, the classification results successfully remove over 90% of misidentified objects. Also, compared with the original gray-scale images, over 90% of the crack length is preserved in the final output binary images. The proposed approach was tested on the safety monitoring for Beijing Subway Line 1. The experimental results revealed the rules of parameter settings and also proved that the proposed approach is effective and efficient for automatic crack detection and classification.

  10. Automatic crack detection and classification method for subway tunnel safety monitoring.

    Science.gov (United States)

    Zhang, Wenyu; Zhang, Zhenjiang; Qi, Dapeng; Liu, Yun

    2014-10-16

    Cracks are an important indicator reflecting the safety status of infrastructures. This paper presents an automatic crack detection and classification methodology for subway tunnel safety monitoring. With the application of high-speed complementary metal-oxide-semiconductor (CMOS) industrial cameras, the tunnel surface can be captured and stored in digital images. In the next step, the local dark regions with potential crack defects are segmented from the original gray-scale images by utilizing morphological image processing techniques and thresholding operations. In the feature extraction process, we present a distance-histogram-based shape descriptor that effectively describes the spatial shape difference between cracks and other irrelevant objects. Along with other features, the classification results successfully remove over 90% of misidentified objects. Also, compared with the original gray-scale images, over 90% of the crack length is preserved in the final output binary images. The proposed approach was tested on the safety monitoring for Beijing Subway Line 1. The experimental results revealed the rules of parameter settings and also proved that the proposed approach is effective and efficient for automatic crack detection and classification.

  11. Automatic and efficient methods applied to the binarization of a subway map

    Science.gov (United States)

    Durand, Philippe; Ghorbanzadeh, Dariush; Jaupi, Luan

    2015-12-01

    The purpose of this paper is the study of efficient methods for image binarization, with metro maps as the target application. The goal is to binarize while preventing noise from disturbing the reading of subway stations. Different methods have been tested; among them, the method given by Otsu gives particularly interesting results. The difficulty of binarization is the choice of the threshold, so that the reconstructed image stays as close as possible to reality. Vectorization is a step subsequent to binarization: it retrieves the coordinates of the points containing information and stores them in two matrices X and Y. Subsequently, these matrices can be exported to a 'CSV' (Comma Separated Value) file format, enabling us to process them in a variety of software, including Excel. The algorithm takes considerable computation time in Matlab because it is composed of two nested "for" loops, and "for" loops are poorly supported by Matlab, especially when nested. This penalizes the computation time, but it seems the only way to proceed.
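
    Otsu's method chooses the gray level that maximizes the between-class variance of the two resulting pixel classes, and the nested-loop cost the authors describe in Matlab disappears when the search over candidate thresholds is vectorized. A sketch in NumPy (the synthetic bimodal image is invented for illustration):

    ```python
    import numpy as np

    def otsu_threshold(img):
        """Return the gray level maximizing the between-class variance
        of the binarization, computed for all 256 thresholds at once."""
        hist = np.bincount(img.ravel(), minlength=256).astype(float)
        p = hist / hist.sum()                # gray-level probabilities
        omega = np.cumsum(p)                 # class-0 probability per threshold
        mu = np.cumsum(p * np.arange(256))   # class-0 mean times omega
        mu_t = mu[-1]                        # global mean gray level
        with np.errstate(divide="ignore", invalid="ignore"):
            sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
        return int(np.argmax(np.nan_to_num(sigma_b)))

    # synthetic bimodal "map": a dark cluster and a bright cluster
    rng = np.random.default_rng(0)
    img = np.concatenate([rng.integers(0, 80, 500), rng.integers(170, 255, 500)])
    img = img.reshape(25, 40).astype(np.uint8)
    t = otsu_threshold(img)
    binary = (img > t).astype(np.uint8)  # 1 = bright background, 0 = dark ink
    ```

    The vectorized cumulative sums replace the two nested loops: every candidate threshold is evaluated in a single pass over the 256-bin histogram.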

  12. Analytical Validation of a New Enzymatic and Automatable Method for d-Xylose Measurement in Human Urine Samples

    Directory of Open Access Journals (Sweden)

    Israel Sánchez-Moreno

    2017-01-01

    Hypolactasia, or intestinal lactase deficiency, affects more than half of the world population. Currently, xylose quantification in urine after gaxilose oral administration for the noninvasive diagnosis of hypolactasia is performed with the hand-operated, nonautomatable phloroglucinol reaction. This work demonstrates that a new enzymatic xylose quantification method, based on the activity of xylose dehydrogenase from Caulobacter crescentus, represents an excellent alternative to the manual phloroglucinol reaction. The new method is automatable and facilitates the use of the gaxilose test for hypolactasia diagnosis in clinical practice. The analytical validation of the new technique was performed in three different autoanalyzers, using buffer or urine samples spiked with different xylose concentrations. For the comparison between the phloroglucinol and the enzymatic assays, 224 urine samples of patients to whom the gaxilose test had been prescribed were assayed by both methods. A mean bias of −16.08 mg of xylose was observed when comparing the results obtained by both techniques. After adjusting the cut-off of the enzymatic method to 19.18 mg of xylose, the Kappa coefficient was found to be 0.9531, indicating an excellent level of agreement between both analytical procedures. This new assay represents the first automatable enzymatic technique validated for xylose quantification in urine.
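
    The Kappa coefficient of 0.9531 quoted above is Cohen's kappa: the observed agreement between the two assays, corrected for the agreement expected by chance from each method's positive rate. A minimal sketch for two binary classifications (the toy labels are invented):

    ```python
    def cohens_kappa(a, b):
        """Cohen's kappa for two binary raters/methods (0/1 labels)."""
        n = len(a)
        p_o = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
        p_a1 = sum(a) / n                            # method A positive rate
        p_b1 = sum(b) / n                            # method B positive rate
        p_e = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)  # chance agreement
        return (p_o - p_e) / (1 - p_e)

    # toy example: two methods agreeing on 9 of 10 samples
    a = [1, 1, 1, 1, 0, 0, 0, 0, 1, 1]
    b = [1, 1, 1, 1, 0, 0, 0, 1, 1, 1]
    k = cohens_kappa(a, b)
    ```

    Values above roughly 0.8 are conventionally read as excellent agreement, which is how the paper's 0.9531 is interpreted.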

  13. A method for studying the hunting oscillations of an airplane with a simple type of automatic control

    Science.gov (United States)

    Jones, R. T.

    1976-01-01

    A method is presented for predicting the amplitude and frequency, under certain simplifying conditions, of the hunting oscillations of an automatically controlled aircraft with lag in the control system or in the response of the aircraft to the controls. If the steering device is actuated by a simple right-left type of signal, the series of alternating fixed-amplitude signals occurring during the hunting may ordinarily be represented by a square wave. Formulas are given expressing the response to such a variation of signal in terms of the response to a unit signal.
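
    Expressing the response to a square-wave signal in terms of the response to a unit signal is linear superposition: the square wave is a sum of shifted, alternating steps, so the system response is the same alternating sum of shifted unit-step responses. A hedged sketch, using a first-order lag as a stand-in dynamic (the report's actual aircraft-plus-control dynamics are more involved):

    ```python
    import math

    def unit_step_response(t, tau=1.0):
        """Unit-step response of a first-order lag: h(t) = 1 - exp(-t/tau)."""
        return 0.0 if t < 0 else 1.0 - math.exp(-t / tau)

    def square_wave_response(t, half_period, tau=1.0):
        """Response to a +/-1 square wave starting at +1. The signal is
        s(t) = u(t) - 2u(t-T) + 2u(t-2T) - ..., so by linearity the
        response is the same alternating sum of unit-step responses."""
        y = unit_step_response(t, tau)
        sign, k = -1.0, 1
        while k * half_period <= t:
            y += sign * 2.0 * unit_step_response(t - k * half_period, tau)
            sign, k = -sign, k + 1
        return y
    ```

    With the lag much shorter than the half-period, the response settles near +1 or -1 before each sign reversal; as the lag grows, the limit-cycle amplitude shrinks, which is the kind of amplitude prediction the report formalizes.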

  14. A Noise-Assisted Data Analysis Method for Automatic EOG-Based Sleep Stage Classification Using Ensemble Learning

    DEFF Research Database (Denmark)

    Olesen, Alexander Neergaard; Christensen, Julie Anja Engelhard; Sørensen, Helge Bjarup Dissing

    2016-01-01

    Reducing the number of recording modalities for sleep staging research can benefit both researchers and patients, under the condition that they provide results as accurate as conventional systems. This paper investigates the possibility of exploiting the multisource nature of the electrooculography (EOG) signals by presenting a method for automatic sleep staging using the complete ensemble empirical mode decomposition with adaptive noise algorithm and a random forest classifier. It achieves a high overall accuracy of 82% and a Cohen's kappa of 0.74, indicating substantial agreement between…

  15. Spine surgeon's kinematics during discectomy, part II: operating table height and visualization methods, including microscope.

    Science.gov (United States)

    Park, Jeong Yoon; Kim, Kyung Hyun; Kuh, Sung Uk; Chin, Dong Kyu; Kim, Keun Su; Cho, Yong Eun

    2014-05-01

    Surgeon spine angle during surgery has been studied ergonomically, and the kinematics of the surgeon's spine have been related to musculoskeletal fatigue and pain. Spine angles varied depending on operating table height and visualization method, and in a previous paper we showed that the use of a loupe and a table height at the midpoint between the umbilicus and the sternum are optimal for reducing musculoskeletal loading. However, no studies have previously included a microscope as a possible visualization method. The objective of this study is to assess differences in surgeon spine angles depending on operating table height and visualization method, including microscope. We enrolled 18 experienced spine surgeons for this study, who each performed a discectomy using a spine surgery simulator. Three different methods were used to visualize the surgical field (naked eye, loupe, microscope) and three different operating table heights (anterior superior iliac spine, umbilicus, the midpoint between the umbilicus and the sternum) were studied. Whole spine angles were compared for three different views during the discectomy simulation: midline, ipsilateral, and contralateral. A 16-camera optoelectronic motion analysis system was used, and 16 markers were placed from the head to the pelvis. Lumbar lordosis, thoracic kyphosis, cervical lordosis, and occipital angle were compared between the different operating table heights and visualization methods as well as a natural standing position. Whole spine angles differed significantly depending on visualization method. All parameters were closer to natural standing values when discectomy was performed with a microscope, and there were no differences between the naked eye and the loupe. Whole spine angles were also found to differ from the natural standing position depending on operating table height, and became closer to natural standing position values as the operating table height increased, independent of the visualization method.

  16. Earthquake analysis of structures including structure-soil interaction by a substructure method

    International Nuclear Information System (INIS)

    Chopra, A.K.; Guttierrez, J.A.

    1977-01-01

    A general substructure method for analysis of the response of nuclear power plant structures to earthquake ground motion, including the effects of structure-soil interaction, is summarized. The method is applicable to complex structures idealized as finite element systems, with the soil region treated either as a continuum, for example as a viscoelastic halfspace, or idealized as a finite element system. The halfspace idealization permits reliable analysis for sites where essentially similar soils extend to large depths and there is no rigid boundary such as a soil-rock interface. For sites where layers of soft soil are underlain by rock at shallow depth, finite element idealization of the soil region is appropriate; in this case, the direct and substructure methods would lead to equivalent results, but the latter provides the better alternative. Treating the free field motion directly as the earthquake input in the substructure method eliminates the deconvolution calculations and the related assumption, regarding the type and direction of earthquake waves, required in the direct method. (Auth.)

  17. Method and apparatus for controlling a powertrain system including a multi-mode transmission

    Science.gov (United States)

    Hessell, Steven M.; Morris, Robert L.; McGrogan, Sean W.; Heap, Anthony H.; Mendoza, Gil J.

    2015-09-08

    A powertrain including an engine and torque machines is configured to transfer torque through a multi-mode transmission to an output member. A method for controlling the powertrain includes employing a closed-loop speed control system to control torque commands for the torque machines in response to a desired input speed. Upon approaching a power limit of a power storage device transferring power to the torque machines, power limited torque commands are determined for the torque machines in response to the power limit and the closed-loop speed control system is employed to determine an engine torque command in response to the desired input speed and the power limited torque commands for the torque machines.

  18. Automatic Optimizer Generation Method Based on Location and Context Information to Improve Mobile Services

    Directory of Open Access Journals (Sweden)

    Yunsik Son

    2017-01-01

    Several location-based services (LBSs) have been developed recently for smartphones. Among these are proactive LBSs, which provide services to smartphone users by periodically collecting background logs. However, because they consume considerable battery power, they are not widely used for various LBS-based services. Battery consumption, in particular, is a significant issue on account of the characteristics of mobile systems, and it imposes greater service restrictions when complex operations are performed. Therefore, to successfully enable various services based on location, this problem must be solved. In this paper, we introduce a technique to automatically generate a customized service optimizer for each application, service type, and platform using location and situation information. By using the proposed technique, energy and computing resources can be employed more efficiently for each service. Thus, users can receive more effective LBSs on mobile devices, such as smartphones.

  19. An automatic on-line 2,2-diphenyl-1-picrylhydrazyl-high performance liquid chromatography method for high-throughput screening of antioxidants from natural products.

    Science.gov (United States)

    Lu, Yanzhen; Wu, Nan; Fang, Yingtong; Shaheen, Nusrat; Wei, Yun

    2017-10-27

    Many natural products are rich in antioxidants, which play an important role in preventing or postponing a variety of diseases, such as cardiovascular and inflammatory disease, diabetes as well as breast cancer. In this paper, an automatic on-line 2,2-diphenyl-1-picrylhydrazyl-high performance liquid chromatography (DPPH-HPLC) method was established for antioxidant screening with nine standards including organic acids (4-hydroxyphenylacetic acid, p-coumaric acid, ferulic acid, and benzoic acid), alkaloids (coptisine and berberine), and flavonoids (quercitrin, astragalin, and quercetin). The optimal concentration of DPPH was determined, and six potential antioxidants including 4-hydroxyphenylacetic acid, p-coumaric acid, ferulic acid, quercitrin, astragalin, and quercetin, and three non-antioxidants including benzoic acid, coptisine, and berberine, were successfully screened out and validated by the conventional DPPH radical scavenging activity assay. The established method was then applied successively to crude samples of Saccharum officinarum rinds, Coptis chinensis powders, and Malus pumila leaves. Two potential antioxidant compounds from Saccharum officinarum rinds and five potential antioxidant compounds from Malus pumila leaves were rapidly screened out. These seven potential antioxidants were then purified and identified as p-coumaric acid, ferulic acid, phloridzin, isoquercitrin, quercetin-3-xyloside, quercetin-3-arabinoside, and quercetin-3-rhamnoside using countercurrent chromatography combined with mass spectrometry, and their antioxidant activities were further evaluated by the conventional DPPH radical scavenging assay. The activity result was in accordance with that of the established method. This established method is cheap and automatic, and could be used as an efficient tool for high-throughput antioxidant screening from various complex natural products. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. A frequency domain linearized Navier-Stokes method including acoustic damping by eddy viscosity using RANS

    Science.gov (United States)

    Holmberg, Andreas; Kierkegaard, Axel; Weng, Chenyang

    2015-06-01

    In this paper, a method for including damping of acoustic energy in regions of strong turbulence is derived for a linearized Navier-Stokes method in the frequency domain. The proposed method is validated and analyzed in 2D only, although the formulation is fully presented in 3D. The result is applied in a study of the linear interaction between the acoustic and the hydrodynamic field in a 2D T-junction, subject to grazing flow at Mach 0.1. Part of the acoustic energy at the upstream edge of the junction is shed as harmonically oscillating disturbances, which are conveyed across the shear layer over the junction, where they interact with the acoustic field. As the acoustic waves travel in regions of strong shear, there is a need to include the interaction between the background turbulence and the acoustic field. For this purpose, the oscillation of the background turbulence Reynolds stress, due to the acoustic field, is modeled using an eddy Newtonian model assumption. The time-averaged flow is first solved for using RANS along with a k-ε turbulence model. The spatially varying turbulent eddy viscosity is then added to the spatially invariant kinematic viscosity in the acoustic set of equations. The response of the 2D T-junction to an incident acoustic field is analyzed via a plane wave scattering matrix model, and the result is compared to experimental data for a T-junction of rectangular ducts. A strong improvement in the agreement between calculation and experimental data is found when the modification proposed in this paper is implemented. Remaining discrepancies are likely due to inaccuracies in the selected turbulence model, which is known to produce large errors, e.g., for flows with significant rotation, such as the grazing flow across the T-junction. A natural next step is therefore to test the proposed methodology together with more sophisticated turbulence models.

  1. Reliability and limitation of various diagnostic methods including nuclear medicine in myocardial disease

    International Nuclear Information System (INIS)

    Tokuyasu, Yoshiki; Kusakabe, Kiyoko; Yamazaki, Toshio

    1981-01-01

    Electrocardiography (ECG), echocardiography, nuclear methods, cardiac catheterization, left ventriculography and endomyocardial biopsy (biopsy) were performed in 40 cases of cardiomyopathy (CM), 9 of endocardial fibroelastosis and 19 of specific heart muscle disease, and the usefulness and limitation of each method was comparatively evaluated. In CM, various methods including biopsy were performed. The 40 patients were classified into 3 groups, i.e., hypertrophic (17), dilated (20) and non-hypertrophic, non-dilated (3), on the basis of left ventricular ejection fraction and hypertrophy of the ventricular wall. The hypertrophic group was divided into 4 subgroups: 9 septal, 4 apical, 2 posterior and 2 anterior. The nuclear study is useful in assessing the site of abnormal ventricular thickening, perfusion defects and ventricular function. Echocardiography is most useful in detecting asymmetric septal hypertrophy. The biopsy gives the sole diagnostic clue, especially in non-hypertrophic, non-dilated cardiomyopathy. ECG is useful in all cases, but no correlation with the site of disproportional hypertrophy was obtained. (J.P.N.)

  2. A method for including external feed in depletion calculations with CRAM and implementation into ORIGEN

    International Nuclear Information System (INIS)

    Isotalo, A.E.; Wieselquist, W.A.

    2015-01-01

    Highlights: • A method for handling external feed in depletion calculations with CRAM. • The source term can have polynomial or exponentially decaying time dependence. • CRAM with a source term and adjoint capability implemented in ORIGEN in SCALE. • The new solver is faster and more accurate than the original solver of ORIGEN. - Abstract: A method for including external feed with polynomial time dependence in depletion calculations with the Chebyshev Rational Approximation Method (CRAM) is presented, and the implementation of CRAM in the ORIGEN module of the SCALE suite is described. In addition to being able to handle time-dependent feed rates, the new solver also adds the capability to perform adjoint calculations. Results obtained with the new CRAM solver and the original depletion solver of ORIGEN are compared to high-precision reference calculations, which shows the new solver to be orders of magnitude more accurate. Furthermore, in most cases, the new solver is up to several times faster due to not requiring the same substepping as the original one.
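
    The core idea of depletion with external feed can be illustrated with the standard augmented-matrix trick: for a constant feed vector s, the inhomogeneous system dN/dt = AN + s is solved exactly by exponentiating the augmented matrix [[A, s], [0, 0]]. The sketch below uses a plain truncated-series matrix exponential rather than CRAM, and a toy two-nuclide chain; all names and numbers are invented for illustration:

    ```python
    import numpy as np

    def expm_taylor(M, terms=60):
        """Matrix exponential by truncated Taylor series; adequate for
        the small, well-scaled toy matrix here, not production quality."""
        E = np.eye(M.shape[0])
        term = np.eye(M.shape[0])
        for k in range(1, terms):
            term = term @ M / k
            E = E + term
        return E

    def deplete_with_feed(A, n0, s, t):
        """Solve dN/dt = A N + s (constant feed s) over time t by
        exponentiating the augmented matrix [[A, s], [0, 0]]."""
        m = A.shape[0]
        M = np.zeros((m + 1, m + 1))
        M[:m, :m] = A
        M[:m, m] = s
        v = np.append(n0, 1.0)  # augmented state: [N0; 1]
        return (expm_taylor(M * t) @ v)[:m]

    # toy chain: nuclide 1 decays to nuclide 2; constant feed of nuclide 1
    lam = 0.1
    A = np.array([[-lam, 0.0], [lam, 0.0]])
    n = deplete_with_feed(A, np.array([0.0, 0.0]), np.array([1.0, 0.0]), 10.0)
    ```

    Polynomial feed rates, as in the paper, can be handled by a similar augmentation with one extra column per polynomial term; the exponential itself is where CRAM replaces the series used here.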

  3. Improved Riccati Transfer Matrix Method for Free Vibration of Non-Cylindrical Helical Springs Including Warping

    Directory of Open Access Journals (Sweden)

    A.M. Yu

    2012-01-01

    Free vibration equations for non-cylindrical (conical, barrel, and hyperboloidal) helical springs with noncircular cross-sections, which consist of 14 first-order ordinary differential equations with variable coefficients, are theoretically derived using spatially curved beam theory. In the formulation, the warping effect upon natural frequencies and vibrating mode shapes is studied for the first time, in addition to including the rotary inertia, shear, and axial deformation influences. The natural frequencies of the springs are determined by the use of the improved Riccati transfer matrix method. The element transfer matrix used in the solution is calculated using the scaling and squaring method with Padé approximations. Three examples are presented for three types of springs with different cross-sectional shapes under the clamped-clamped boundary condition. The accuracy of the proposed method has been compared with FEM results using three-dimensional solid elements (Solid 45) in the ANSYS code. Numerical results reveal that the warping effect is more pronounced for non-cylindrical helical springs than for cylindrical ones, and should be taken into consideration in the free vibration analysis of such springs.
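
    The scaling and squaring step mentioned above exploits exp(M) = (exp(M/2^j))^(2^j): scale the matrix until its norm is small, approximate the exponential of the scaled matrix (here a short Taylor series stands in for the Padé approximant used in practice), then square the result back up. A minimal sketch, with an invented rotation-generator test matrix whose exponential is known in closed form:

    ```python
    import numpy as np

    def expm_scaling_squaring(M, order=10):
        """exp(M) by scaling and squaring: divide M by 2**j so the scaled
        1-norm is <= 1, approximate exp of the scaled matrix by a short
        Taylor series (a Pade approximant in production codes), then
        square the result j times."""
        norm = np.linalg.norm(M, 1)
        j = int(np.ceil(np.log2(norm))) if norm > 1.0 else 0
        Ms = M / (2.0 ** j)
        E = np.eye(M.shape[0])
        term = np.eye(M.shape[0])
        for k in range(1, order + 1):
            term = term @ Ms / k
            E = E + term
        for _ in range(j):
            E = E @ E  # undo the scaling by repeated squaring
        return E

    # rotation generator: exp(theta * J) is a plane rotation by theta
    theta = 5.0
    J = np.array([[0.0, 1.0], [-1.0, 0.0]])
    R = expm_scaling_squaring(theta * J)
    ```

    Without the scaling step, the plain series would need many more terms for a matrix of norm 5; scaling keeps the series short at the cost of a few cheap squarings, which is why transfer-matrix codes favor it.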

  4. A Framework for the Development of Automatic DFA Method to Minimize the Number of Components and Assembly Reorientations

    Science.gov (United States)

    Alfadhlani; Samadhi, T. M. A. Ari; Ma’ruf, Anas; Setiasyah Toha, Isa

    2018-03-01

    Assembly is a part of the manufacturing process that must be considered at the product design stage. Design for Assembly (DFA) is a method to evaluate a product design in order to make it simpler, easier, and quicker to assemble, so that assembly cost is reduced. This article discusses a framework for developing a computer-based DFA method. The method is expected to aid the product designer in extracting data, evaluating the assembly process, and providing recommendations for product design improvement. Ideally, these three tasks are performed without interactive processing or user intervention, so that the product design evaluation can be done automatically. Input for the proposed framework is a 3D solid engineering drawing. Product design evaluation is performed by: minimizing the number of components; generating assembly sequence alternatives; selecting the best assembly sequence based on the minimum number of assembly reorientations; and providing suggestions for design improvement.

  5. Automatic penalty continuation in structural topology optimization

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana; Stolpe, Mathias

    2015-01-01

    this issue is addressed. We propose an automatic continuation method, where the material penalization parameter is included as a new variable in the problem and a constraint guarantees that the requested penalty is eventually reached. The numerical results suggest that this approach is an appealing alternative to continuation methods. Automatic continuation also generally obtains better designs than the classical formulation using a reduced number of iterations.

  6. AUTOMATIC RECOGNITION OF CORONAL TYPE II RADIO BURSTS: THE AUTOMATED RADIO BURST IDENTIFICATION SYSTEM METHOD AND FIRST OBSERVATIONS

    International Nuclear Information System (INIS)

    Lobzin, Vasili V.; Cairns, Iver H.; Robinson, Peter A.; Steward, Graham; Patterson, Garth

    2010-01-01

    Major space weather events such as solar flares and coronal mass ejections are usually accompanied by solar radio bursts, which can potentially be used for real-time space weather forecasts. Type II radio bursts are produced near the local plasma frequency and its harmonic by fast electrons accelerated by a shock wave moving through the corona and solar wind with a typical speed of ∼1000 km s⁻¹. The coronal bursts have dynamic spectra with frequency gradually falling with time and durations of several minutes. This Letter presents a new method developed to detect type II coronal radio bursts automatically and describes its implementation in an extended Automated Radio Burst Identification System (ARBIS 2). Preliminary tests of the method with spectra obtained in 2002 show that the performance of the current implementation is quite high, ∼80%, while the probability of false positives is reasonably low, with one false positive per 100-200 hr for high solar activity and less than one false event per 10,000 hr for low solar activity periods. The first automatically detected coronal type II radio burst is also presented.

  7. Analysis of hydrogen and methane in seawater by "Headspace" method: Determination at trace level with an automatic headspace sampler.

    Science.gov (United States)

    Donval, J P; Guyader, V

    2017-01-01

    "Headspace" technique is one of the methods for the onboard measurement of hydrogen (H₂) and methane (CH₄) in deep seawater. Based on the principle of an automatic commercial headspace sampler, a specific device has been developed to automatically inject gas samples from 300 ml syringes (gas phase in equilibrium with seawater). As the valves, micro pump, oven, and detector are independent, a gas chromatograph is not necessary, allowing a reduction in the weight and dimensions of the analytical system. The different steps from seawater sampling to gas injection are described. Accuracy of the method is checked by comparison with the "purge and trap" technique. The detection limit is estimated at 0.3 nM for hydrogen and 0.1 nM for methane, which is close to the background value in deep seawater. It is also shown that this system can be used to analyze other gases such as nitrogen (N₂), carbon monoxide (CO), carbon dioxide (CO₂), and light hydrocarbons.

  8. Toward automatic phenotyping of retinal images from genetically determined mono- and dizygotic twins using amplitude modulation-frequency modulation methods

    Science.gov (United States)

    Soliz, P.; Davis, B.; Murray, V.; Pattichis, M.; Barriga, S.; Russell, S.

    2010-03-01

    This paper presents an image processing technique for automatically categorizing age-related macular degeneration (AMD) phenotypes from retinal images. Ultimately, an automated approach will be much more precise and consistent in phenotyping retinal diseases such as AMD. We have applied the automated phenotyping to retinal images from a cohort of mono- and dizygotic twins. The application of this technology will allow one to perform more quantitative studies that will lead to a better understanding of the genetic and environmental factors associated with diseases such as AMD. A method for classifying retinal images based on features derived from the application of amplitude-modulation frequency-modulation (AM-FM) methods is presented. Retinal images from identical and fraternal twins who presented with AMD were processed to determine whether AM-FM could be used to differentiate between the two types of twins. Results of the automatic classifier agreed with the findings of other researchers in explaining the variation of the disease between the related twins. AM-FM features classified 72% of the twins correctly. Visual grading found that genetics could explain between 46% and 71% of the variance.

  9. Automatic differentiation of functions

    International Nuclear Information System (INIS)

    Douglas, S.R.

    1990-06-01

    Automatic differentiation is a method of computing derivatives of functions to any order in any number of variables. The functions must be expressible as combinations of elementary functions. When evaluated at specific numerical points, the derivatives have no truncation error and are automatically found. The method is illustrated by simple examples. Source code in FORTRAN is provided
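
    The chain-rule propagation described above can be illustrated with a minimal forward-mode sketch using dual numbers (an illustrative example, not the report's FORTRAN source):

    ```python
    import math

    class Dual:
        """Dual number a + b*eps (eps**2 == 0): val carries f(x), der carries f'(x)."""
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der

        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.val + other.val, self.der + other.der)
        __radd__ = __add__

        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            # product rule, applied automatically
            return Dual(self.val * other.val,
                        self.der * other.val + self.val * other.der)
        __rmul__ = __mul__

    def sin(x):
        # chain rule for an elementary function
        return Dual(math.sin(x.val), math.cos(x.val) * x.der)

    # d/dx [x * sin(x)] at x = 2 is sin(2) + 2*cos(2), with no truncation error
    x = Dual(2.0, 1.0)   # seed the derivative of the input with 1
    y = x * sin(x)
    print(y.val, y.der)
    ```

    The derivative is exact to machine precision because only the chain rule is applied, never a finite-difference quotient.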

  10. SU-F-J-86: Method to Include Tissue Dose Response Effect in Deformable Image Registration

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, J; Liang, J; Chen, S; Qin, A; Yan, D [Beaumont Health System, Royal Oak, MI (United States)

    2016-06-15

    Purpose: Organs change shape and size during radiation treatment due to both mechanical stress and radiation dose response. However, dose-response-induced deformation has not been considered in conventional deformable image registration (DIR). A novel DIR approach is proposed to include both tissue elasticity and radiation-dose-induced organ deformation. Methods: Assuming that organ sub-volume shrinkage is proportional to radiation-dose-induced cell killing/absorption, the dose-induced organ volume change was simulated by applying a virtual temperature to each sub-volume. Hence, both stress and heterogeneous temperature induce organ deformation. A thermal-stress finite element method with an organ surface boundary condition was used to solve the deformation. The initial boundary correspondence on the organ surface was created from conventional DIR. The boundary condition was updated by an iterative optimization scheme to minimize the elastic deformation energy. The registration was validated on a numerical phantom. Treatment dose was constructed applying both the conventional DIR and the proposed method using daily CBCT images obtained from a head-and-neck (HN) treatment. Results: The phantom study showed a 2.7% maximal discrepancy with respect to the actual displacement. Compared with conventional DIR, the sub-volume displacement difference in a right parotid had mean ± SD (min, max) of 1.1±0.9 (−0.4∼4.8), −0.1±0.9 (−2.9∼2.4), and −0.1±0.9 (−3.4∼1.9) mm in the RL/PA/SI directions, respectively. Mean parotid dose and V30 constructed including the dose-response-induced shrinkage were 6.3% and 12.0% higher than those from the conventional DIR. Conclusion: A heterogeneous dose distribution in a normal organ causes non-uniform sub-volume shrinkage. A sub-volume in a high-dose region shrinks more than one in a low-dose region, causing more sub-volumes to move into the high-dose area during the treatment course. This leads to an unfavorable dose-volume relationship for the normal organ.

  11. Engine including hydraulically actuated valvetrain and method of valve overlap control

    Science.gov (United States)

    Cowgill, Joel [White Lake, MI

    2012-05-08

    An exhaust valve control method may include displacing an exhaust valve in communication with the combustion chamber of an engine to an open position using a hydraulic exhaust valve actuation system and returning the exhaust valve to a closed position using the hydraulic exhaust valve actuation assembly. During closing, the exhaust valve may be displaced for a first duration from the open position to an intermediate closing position at a first velocity by operating the hydraulic exhaust valve actuation assembly in a first mode. The exhaust valve may be displaced for a second duration greater than the first duration from the intermediate closing position to a fully closed position at a second velocity at least eighty percent less than the first velocity by operating the hydraulic exhaust valve actuation assembly in a second mode.

  12. Flexible barrier film, method of forming same, and organic electronic device including same

    Science.gov (United States)

    Blizzard, John; Tonge, James Steven; Weidner, William Kenneth

    2013-03-26

    A flexible barrier film has a thickness of from greater than zero to less than 5,000 nanometers and a water vapor transmission rate of no more than 1×10⁻² g/m²/day at 22 °C and 47% relative humidity. The flexible barrier film is formed from a composition, which comprises a multi-functional acrylate. The composition further comprises the reaction product of an alkoxy-functional organometallic compound and an alkoxy-functional organosilicon compound. A method of forming the flexible barrier film includes the steps of disposing the composition on a substrate and curing the composition to form the flexible barrier film. The flexible barrier film may be utilized in organic electronic devices.

  13. Reactor power automatically controlling method and device for BWR type reactor

    International Nuclear Information System (INIS)

    Murata, Akira; Miyamoto, Yoshiyuki; Tanigawa, Naoshi.

    1997-01-01

    For automatic control of the reactor power, when the deviation exceeds a predetermined value, the aimed value is kept at a predetermined value, and when the deviation decreases below the predetermined value, the aimed value is increased from the predetermined value again. Alternatively, when the reactor power variation coefficient decreases below a predetermined value, the aimed value is maintained at a predetermined value, and when the variation coefficient exceeds the predetermined value, the aimed value is increased. When the reactor power variation coefficient exceeds a first determined value, the aimed value is increased at a predetermined variation coefficient, and when the variation coefficient decreases below the first determined value and the deviation between the aimed value and the actual reactor power exceeds a second determined value, the aimed value is maintained at a constant value. When the deviation increases or when the reactor power variation coefficient decreases, since the aimed value is maintained at the predetermined value without being increased, the deviation does not grow excessively, thereby making it possible to avoid excessive overshoot. (N.H.)

  14. Zirconium-based alloys, nuclear fuel rods and nuclear reactors including such alloys, and related methods

    Science.gov (United States)

    Mariani, Robert Dominick

    2014-09-09

    Zirconium-based metal alloy compositions comprise zirconium, a first additive in which the permeability of hydrogen decreases with increasing temperature at least over a temperature range extending from 350°C to 750°C, and a second additive having a solubility in zirconium over the temperature range extending from 350°C to 750°C. At least one of a solubility of the first additive in the second additive over the temperature range extending from 350°C to 750°C and a solubility of the second additive in the first additive over the temperature range extending from 350°C to 750°C is higher than the solubility of the second additive in zirconium over the temperature range extending from 350°C to 750°C. Nuclear fuel rods include a cladding material comprising such metal alloy compositions, and nuclear reactors include such fuel rods. Methods are used to fabricate such zirconium-based metal alloy compositions.

  15. An intuitive method to automatically detect the common and not common frequencies for two or more time-varying signals

    International Nuclear Information System (INIS)

    Doca, C.; Paunoiu, C.; Doca, L.

    2013-01-01

    Sampling a time-varying signal and its spectral analysis are both subject to theoretical constraints, such as Shannon's theorem and the objective limits of frequency resolution. After obtaining the signal's (Fourier) spectrum, it is usually processed and interpreted by a scientist who, presumably, has sufficient prior information about the monitored signal to draw conclusions, for example, about the significant frequencies. Obviously, processing and interpretation of individual spectra are routine tasks that can be automated by suitable software (PC application). The problem becomes more complicated if we need to compare two or more spectra corresponding to different signals and/or phenomena. In the above context, this paper presents an intuitive method for the automatic identification of the common and not common frequencies for two or more congruent spectra. The method is illustrated by numerical simulations and by the results obtained in the analysis of the noise from some experimentally measured signals. (authors)

  16. Generic method for automatic bladder segmentation on cone beam CT using a patient-specific bladder shape model

    International Nuclear Information System (INIS)

    Schoot, A. J. A. J. van de; Schooneveldt, G.; Wognum, S.; Stalpers, L. J. A.; Rasch, C. R. N.; Bel, A.; Hoogeman, M. S.; Chai, X.

    2014-01-01

    Purpose: The aim of this study is to develop and validate a generic method for automatic bladder segmentation on cone beam computed tomography (CBCT), independent of gender and treatment position (prone or supine), using only pretreatment imaging data. Methods: Data of 20 patients, treated for tumors in the pelvic region with the entire bladder visible on CT and CBCT, were divided into four equally sized groups based on gender and treatment position. The full and empty bladder contour, that can be acquired with pretreatment CT imaging, were used to generate a patient-specific bladder shape model. This model was used to guide the segmentation process on CBCT. To obtain the bladder segmentation, the reference bladder contour was deformed iteratively by maximizing the cross-correlation between directional grey value gradients over the reference and CBCT bladder edge. To overcome incorrect segmentations caused by CBCT image artifacts, automatic adaptations were implemented. Moreover, locally incorrect segmentations could be adapted manually. After each adapted segmentation, the bladder shape model was expanded and new shape patterns were calculated for following segmentations. All available CBCTs were used to validate the segmentation algorithm. The bladder segmentations were validated by comparison with the manual delineations and the segmentation performance was quantified using the Dice similarity coefficient (DSC), surface distance error (SDE) and SD of contour-to-contour distances. Also, bladder volumes obtained by manual delineations and segmentations were compared using a Bland-Altman error analysis. Results: The mean DSC, mean SDE, and mean SD of contour-to-contour distances between segmentations and manual delineations were 0.87, 0.27 cm and 0.22 cm (female, prone), 0.85, 0.28 cm and 0.22 cm (female, supine), 0.89, 0.21 cm and 0.17 cm (male, supine) and 0.88, 0.23 cm and 0.17 cm (male, prone), respectively. Manual local adaptations improved the segmentation

  17. The sequentially discounting autoregressive (SDAR) method for on-line automatic seismic event detecting on long term observation

    Science.gov (United States)

    Wang, L.; Toshioka, T.; Nakajima, T.; Narita, A.; Xue, Z.

    2017-12-01

    In recent years, more and more Carbon Capture and Storage (CCS) studies focus on seismicity monitoring. For the safety management of geological CO2 storage at Tomakomai, Hokkaido, Japan, an Advanced Traffic Light System (ATLS) combining different seismic messages (magnitudes, phases, distributions, et al.) is proposed for injection control. The primary task for ATLS is seismic event detection in a long-term sustained time series record. Considering that the time-varying Signal to Noise Ratio (SNR) of a long-term record and the uneven energy distributions of seismic event waveforms increase the difficulty of automatic seismic detection, in this work an improved probabilistic autoregressive (AR) method for automatic seismic event detection is applied. This algorithm, called sequentially discounting AR learning (SDAR), can identify the effective seismic events in the time series through Change Point Detection (CPD) on the seismic record. In this method, an anomalous signal (seismic event) can be treated as a change point in the time series (seismic record). The statistical model of the signal in the neighborhood of the event point changes because of the seismic event occurrence. This means the SDAR aims to find the statistical irregularities of the record through CPD. There are 3 advantages of SDAR. 1. Anti-noise ability. The SDAR does not use waveform messages (such as amplitude, energy, polarization) for signal detection. Therefore, it is an appropriate technique for low-SNR data. 2. Real-time estimation. When new data appears in the record, the probability distribution models can be automatically updated by SDAR for on-line processing. 3. Discounting property. The SDAR introduces a discounting parameter to decrease the influence of present statistics on future data. This makes SDAR a robust algorithm for non-stationary signal processing. With these 3 advantages, the SDAR method can handle the non-stationary time-varying long
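
    The discounted AR change-point scoring idea above can be sketched in a toy form. The forgetting-factor recursive-least-squares formulation and all parameter values here are assumptions for illustration, not the authors' implementation:

    ```python
    import numpy as np

    def sdar_scores(x, order=2, r=0.02):
        """Toy sequentially-discounting AR scoring.

        Fits AR(order) coefficients on-line with a discounting (forgetting)
        parameter r via recursive least squares; each sample is scored by its
        squared one-step prediction error, so abrupt statistical changes
        (events) show up as score spikes."""
        lam = 1.0 - r                  # forgetting factor
        w = np.zeros(order)            # AR coefficients
        P = np.eye(order) * 10.0       # RLS inverse-correlation state
        scores = np.zeros(len(x))
        for t in range(order, len(x)):
            phi = x[t - order:t][::-1]               # recent history, newest first
            err = x[t] - w @ phi                     # one-step prediction error
            scores[t] = err ** 2                     # change-point score
            k = P @ phi / (lam + phi @ P @ phi)      # discounted RLS gain
            w = w + k * err
            P = (P - np.outer(k, phi @ P)) / lam
        return scores

    # Synthetic long-term record: background noise with a burst ("event") at t = 500
    rng = np.random.default_rng(1)
    x = rng.normal(0.0, 1.0, 1000)
    x[500:520] += rng.normal(0.0, 10.0, 20)
    s = sdar_scores(x)
    print(int(np.argmax(s)))   # the score peaks at or just after the event onset
    ```

    Because the score uses only prediction-error statistics, no waveform attributes (amplitude, energy, polarization) are needed, matching the anti-noise property claimed above.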

  18. A standardized method to create peripheral nerve injury in dogs using an automatic non-serrated forceps

    Institute of Scientific and Technical Information of China (English)

    Xuhui Wang; Shiting Li; Liang Wan; Xinyuan Li; Youqiang Meng; Ningxi Zhu; Min Yang; Baohui Feng; Wenchuan Zhang; Shugan Zhu

    2012-01-01

    This study describes a method that not only generates an automatic and standardized crush injury in the skull base, but also provides investigators with the option to choose from a range of varying pressure levels. We designed an automatic, non-serrated forceps that exerts a varying force of 0 to 100 g for a defined period of 0 to 60 seconds. This device was then used to generate a crush injury to the right oculomotor nerve of dogs with a force of 10 g for 15 seconds, resulting in a deficit in the pupil-light reflex and ptosis. Further testing of our model with Toluidine blue staining demonstrated that, at 2 weeks post-surgery, disordered oculomotor nerve fibers, axonal loss, and a thinner-than-normal myelin sheath were visible. Electrophysiological examination showed occasional spontaneous potentials. Together, these data verified that the model for oculomotor nerve injury was successful, and that the forceps we designed can be used to establish standard mechanical injury models of peripheral nerves.

  19. A comparison of performance of automatic cloud coverage assessment algorithm for Formosat-2 image using clustering-based and spatial thresholding methods

    Science.gov (United States)

    Hsu, Kuo-Hsien

    2012-11-01

    Formosat-2 imagery is a kind of high-spatial-resolution (2 m GSD) remote sensing satellite data, which includes one panchromatic band and four multispectral bands (blue, green, red, near-infrared). An essential step in the daily processing of received Formosat-2 images is to estimate the cloud statistic of an image using an Automatic Cloud Coverage Assessment (ACCA) algorithm. The cloud statistic of the image is subsequently recorded as important metadata for the image product catalog. In this paper, we propose an ACCA method with two consecutive stages: pre-processing and post-processing analysis. For the pre-processing analysis, unsupervised K-means classification, Sobel's method, a thresholding method, non-cloudy pixel reexamination, and a cross-band filter method are implemented in sequence for cloud statistic determination. For the post-processing analysis, a box-counting fractal method is implemented. In other words, the cloud statistic is first determined via the pre-processing analysis, and the correctness of the cloud statistic for each spectral band is then cross-examined qualitatively and quantitatively via the post-processing analysis. The selection of an appropriate thresholding method is critical to the result of the ACCA method. Therefore, in this work, we first conduct a series of experiments with clustering-based and spatial thresholding methods, including Otsu's, Local Entropy (LE), Joint Entropy (JE), Global Entropy (GE), and Global Relative Entropy (GRE) methods, for performance comparison. The results show that Otsu's and GE methods both perform better than the others for Formosat-2 imagery. Additionally, our proposed ACCA method, selecting Otsu's method as the thresholding method, has successfully extracted the cloudy pixels of Formosat-2 images for accurate cloud statistic estimation.
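
    Otsu's clustering-based thresholding, which the study above found to perform best, can be sketched in a few lines (an illustrative implementation, not the authors' code):

    ```python
    import numpy as np

    def otsu_threshold(img, nbins=256):
        """Otsu's method: choose the threshold maximizing between-class variance."""
        hist, edges = np.histogram(img.ravel(), bins=nbins)
        p = hist / hist.sum()
        centers = 0.5 * (edges[:-1] + edges[1:])
        w0 = np.cumsum(p)                  # probability of class 0 up to each bin
        mu = np.cumsum(p * centers)        # cumulative mean
        mu_t = mu[-1]                      # global mean
        with np.errstate(divide="ignore", invalid="ignore"):
            sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
        sigma_b[~np.isfinite(sigma_b)] = 0.0
        return centers[np.argmax(sigma_b)]

    # Toy bimodal "image": dark surface pixels and bright cloudy pixels
    rng = np.random.default_rng(0)
    img = np.concatenate([rng.normal(50, 10, 5000), rng.normal(200, 15, 2000)])
    t = otsu_threshold(img)
    print(t)   # lands between the two modes
    ```

    For a cloudy scene, pixels above the returned threshold would be counted toward the cloud statistic.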

  20. Automatic Imitation

    Science.gov (United States)

    Heyes, Cecilia

    2011-01-01

    "Automatic imitation" is a type of stimulus-response compatibility effect in which the topographical features of task-irrelevant action stimuli facilitate similar, and interfere with dissimilar, responses. This article reviews behavioral, neurophysiological, and neuroimaging research on automatic imitation, asking in what sense it is "automatic"…

  1. Development and Testing of a Decision Making Based Method to Adjust Automatically the Harrowing Intensity

    Science.gov (United States)

    Rueda-Ayala, Victor; Weis, Martin; Keller, Martina; Andújar, Dionisio; Gerhards, Roland

    2013-01-01

    Harrowing is often used to reduce weed competition, generally using a constant intensity across a whole field. The efficacy of weed harrowing in wheat and barley can be optimized, if site-specific conditions of soil, weed infestation and crop growth stage are taken into account. This study aimed to develop and test an algorithm to automatically adjust the harrowing intensity by varying the tine angle and number of passes. The field variability of crop leaf cover, weed density and soil density was acquired with geo-referenced sensors to investigate the harrowing selectivity and crop recovery. Crop leaf cover and weed density were assessed using bispectral cameras through differential images analysis. The draught force of the soil opposite to the direction of travel was measured with electronic load cell sensor connected to a rigid tine mounted in front of the harrow. Optimal harrowing intensity levels were derived in previously implemented experiments, based on the weed control efficacy and yield gain. The assessments of crop leaf cover, weed density and soil density were combined via rules with the aforementioned optimal intensities, in a linguistic fuzzy inference system (LFIS). The system was evaluated in two field experiments that compared constant intensities with variable intensities inferred by the system. A higher weed density reduction could be achieved when the harrowing intensity was not kept constant along the cultivated plot. Varying the intensity tended to reduce the crop leaf cover, though slightly improving crop yield. A real-time intensity adjustment with this system is achievable, if the cameras are attached in the front and at the rear or sides of the harrow. PMID:23669712

  2. Development and Testing of a Decision Making Based Method to Adjust Automatically the Harrowing Intensity

    Directory of Open Access Journals (Sweden)

    Roland Gerhards

    2013-05-01

    Full Text Available Harrowing is often used to reduce weed competition, generally using a constant intensity across a whole field. The efficacy of weed harrowing in wheat and barley can be optimized, if site-specific conditions of soil, weed infestation and crop growth stage are taken into account. This study aimed to develop and test an algorithm to automatically adjust the harrowing intensity by varying the tine angle and number of passes. The field variability of crop leaf cover, weed density and soil density was acquired with geo-referenced sensors to investigate the harrowing selectivity and crop recovery. Crop leaf cover and weed density were assessed using bispectral cameras through differential images analysis. The draught force of the soil opposite to the direction of travel was measured with electronic load cell sensor connected to a rigid tine mounted in front of the harrow. Optimal harrowing intensity levels were derived in previously implemented experiments, based on the weed control efficacy and yield gain. The assessments of crop leaf cover, weed density and soil density were combined via rules with the aforementioned optimal intensities, in a linguistic fuzzy inference system (LFIS). The system was evaluated in two field experiments that compared constant intensities with variable intensities inferred by the system. A higher weed density reduction could be achieved when the harrowing intensity was not kept constant along the cultivated plot. Varying the intensity tended to reduce the crop leaf cover, though slightly improving crop yield. A real-time intensity adjustment with this system is achievable, if the cameras are attached in the front and at the rear or sides of the harrow.
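
    The rule-based combination of sensor readings described in the two records above can be sketched as a toy fuzzy inference. The membership functions, rule set, and intensity scale below are all invented for illustration; the study's actual LFIS rules are not reproduced here:

    ```python
    def tri(x, a, b, c):
        """Triangular membership function on [a, c] peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def harrow_intensity(weed_density, crop_cover):
        """Toy two-rule inference: high weeds and robust crop call for intense
        harrowing; low weeds or a fragile crop call for gentle harrowing."""
        high_weeds = tri(weed_density, 20, 80, 140)   # plants/m^2 (assumed scale)
        robust_crop = tri(crop_cover, 30, 70, 110)    # % leaf cover (assumed scale)
        aggressive = min(high_weeds, robust_crop)     # rule firing strength
        gentle = 1.0 - aggressive
        # weighted-average defuzzification between two tine-angle settings
        return (aggressive * 75.0 + gentle * 15.0) / (aggressive + gentle)

    print(harrow_intensity(90, 65), harrow_intensity(10, 65))
    ```

    Heavily infested plots with a robust crop receive a steeper tine angle than clean plots, which is the site-specific adjustment the study evaluates.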

  3. Second-principles method for materials simulations including electron and lattice degrees of freedom

    Science.gov (United States)

    García-Fernández, Pablo; Wojdeł, Jacek C.; Íñiguez, Jorge; Junquera, Javier

    2016-05-01

    We present a first-principles-based (second-principles) scheme that permits large-scale materials simulations including both atomic and electronic degrees of freedom on the same footing. The method is based on a predictive quantum-mechanical theory—e.g., density functional theory—and its accuracy can be systematically improved at a very modest computational cost. Our approach is based on dividing the electron density of the system into a reference part—typically corresponding to the system's neutral, geometry-dependent ground state—and a deformation part—defined as the difference between the actual and reference densities. We then take advantage of the fact that the bulk part of the system's energy depends on the reference density alone; this part can be efficiently and accurately described by a force field, thus avoiding explicit consideration of the electrons. Then, the effects associated to the difference density can be treated perturbatively with good precision by working in a suitably chosen Wannier function basis. Further, the electronic model can be restricted to the bands of interest. All these features combined yield a very flexible and computationally very efficient scheme. Here we present the basic formulation of this approach, as well as a practical strategy to compute model parameters for realistic materials. We illustrate the accuracy and scope of the proposed method with two case studies, namely, the relative stability of various spin arrangements in NiO (featuring complex magnetic interactions in a strongly-correlated oxide) and the formation of a two-dimensional electron gas at the interface between band insulators LaAlO3 and SrTiO3 (featuring subtle electron-lattice couplings and screening effects). We conclude by discussing ways to overcome the limitations of the present approach (most notably, the assumption of a fixed bonding topology), as well as its many envisioned possibilities and future extensions.

  4. Application of the stress wave method to automatic signal matching and to statnamic predictions

    NARCIS (Netherlands)

    Esposito, G.; Courage, W.M.G.; Foeken, R.J. van

    2000-01-01

    The Statnamic method is an increasingly popular technique for carrying out loading tests on cast in-situ piles. The method has proved to be a cost-effective alternative to a static loading test. Associated with the Unloading Point Method (UPM) and with automatic signal matching, the Statnamic testing technique

  5. Automatic differentiation bibliography

    Energy Technology Data Exchange (ETDEWEB)

    Corliss, G.F. [comp.

    1992-07-01

    This is a bibliography of work related to automatic differentiation. Automatic differentiation is a technique for the fast, accurate propagation of derivative values using the chain rule. It is neither symbolic nor numeric. Automatic differentiation is a fundamental tool for scientific computation, with applications in optimization, nonlinear equations, nonlinear least squares approximation, stiff ordinary differential equations, partial differential equations, continuation methods, and sensitivity analysis. This report is an updated version of the bibliography which originally appeared in Automatic Differentiation of Algorithms: Theory, Implementation, and Application.

  6. Applicability of a panel method, which includes nonlinear effects, to a forward-swept-wing aircraft

    Science.gov (United States)

    Ross, J. C.

    1984-01-01

    The ability of a lower-order panel method, VSAERO, to accurately predict the lift and pitching moment of a complete forward-swept-wing/canard configuration was investigated. The program can simulate nonlinear effects including boundary-layer displacement thickness, wake roll-up, and, to a limited extent, separated wakes. The predictions were compared with experimental data obtained using a small-scale model in the 7- by 10-Foot Wind Tunnel at NASA Ames Research Center. For the particular configuration under investigation, wake roll-up had only a small effect on the force and moment predictions. The effect of the displacement thickness modeling was to reduce the lift curve slope slightly, thus bringing the predicted lift into good agreement with the measured value. Pitching moment predictions were also improved by the boundary-layer simulation. The separation modeling was found to be sensitive to user inputs, but appears to give a reasonable representation of a separated wake. In general, the nonlinear capabilities of the code were found to improve the agreement with experimental data. The usefulness of the code would be enhanced by improving the reliability of the separated-wake modeling and by the addition of a leading-edge separation model.

  7. Methods of forming aluminum oxynitride-comprising bodies, including methods of forming a sheet of transparent armor

    Science.gov (United States)

    Chu, Henry Shiu-Hung [Idaho Falls, ID; Lillo, Thomas Martin [Idaho Falls, ID

    2008-12-02

    The invention includes methods of forming an aluminum oxynitride-comprising body. For example, a mixture is formed which comprises A:B:C in a respective molar ratio in the range of 9:3.6-6.2:0.1-1.1, where "A" is Al₂O₃, "B" is AlN, and "C" is a total of one or more of B₂O₃, SiO₂, Si-Al-O-N, and TiO₂. The mixture is sintered at a temperature of at least 1,600°C at a pressure of no greater than 500 psia, effective to form an aluminum oxynitride-comprising body which is at least internally transparent and has at least 99% of maximum theoretical density.

  8. A fully automatic, threshold-based segmentation method for the estimation of the Metabolic Tumor Volume from PET images: validation on 3D printed anthropomorphic oncological lesions

    Science.gov (United States)

    Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.

    2016-01-01

    18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique for imaging cancer in vivo. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, Metabolic Tumor Volume (MTV) has gained an important role, in particular considering the development of patient-personalized radiotherapy treatment for non-homogeneous dose delivery. Different image processing methods have been developed to define MTV. The proposed PET segmentation strategies were mostly validated under ideal conditions (e.g., in spherical objects with uniform radioactivity concentration), while the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV, feasible in clinical practice; 2) to develop a strategy to obtain anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV and a k-means clustering algorithm for the estimation of the background. The method is based on parameters always available in clinical studies and was calibrated using the NEMA IQ phantom. Validation of the method was performed both under ideal (e.g., in spherical objects with uniform radioactivity concentration) and non-ideal (e.g., in non-spherical objects with a non-uniform radioactivity concentration) conditions. The strategy to obtain a phantom with synthetic realistic lesions (e.g., with irregular shape and non-homogeneous uptake) consisted of the combined use of commercially available standard anthropomorphic phantoms and irregular molds generated using 3D printer technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in a
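
    The combination of a k-means background estimate with an adaptive threshold, as described above, can be sketched in a toy form. The function name, cluster count, and threshold fraction are assumptions for illustration, not the study's calibrated values:

    ```python
    import numpy as np

    def segment_mtv(suv, bg_clusters=3, frac=0.41):
        """Illustrative threshold-based MTV segmentation.

        A 1-D k-means over voxel values estimates the background uptake
        (lowest cluster); voxels above bg + frac * (max - bg) are assigned
        to the tumor volume."""
        flat = suv.ravel()
        centers = np.quantile(flat, np.linspace(0.1, 0.9, bg_clusters))
        for _ in range(20):                       # plain Lloyd iterations
            labels = np.argmin(np.abs(flat[:, None] - centers[None, :]), axis=1)
            centers = np.array([flat[labels == k].mean() if np.any(labels == k)
                                else centers[k] for k in range(bg_clusters)])
        bg = centers.min()                        # lowest cluster = background
        thr = bg + frac * (flat.max() - bg)       # adaptive threshold
        return suv >= thr

    # Toy phantom: uniform background with a hot cubic "lesion"
    rng = np.random.default_rng(2)
    vol = rng.normal(1.0, 0.1, (20, 20, 20))
    vol[8:12, 8:12, 8:12] = 8.0
    mask = segment_mtv(vol)
    print(mask.sum())
    ```

    The MTV would then be the masked voxel count times the voxel volume.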

  9. SU-F-T-450: The Investigation of Radiotherapy Quality Assurance and Automatic Treatment Planning Based On the Kernel Density Estimation Method

    Energy Technology Data Exchange (ETDEWEB)

    Fan, J; Fan, J; Hu, W; Wang, J [Fudan University Shanghai Cancer Center, Shanghai, Shanghai (China)

    2016-06-15

    Purpose: To develop a fast automatic algorithm based on two-dimensional kernel density estimation (2D KDE) to predict the dose-volume histogram (DVH), which can be employed for the investigation of radiotherapy quality assurance and automatic treatment planning. Methods: We propose a machine learning method that uses previous treatment plans to predict the DVH. The key to the approach is the framing of the DVH in a probabilistic setting. The training consists of estimating, from the patients in the training set, the joint probability distribution of the dose and the predictive features. The joint distribution provides an estimation of the conditional probability of the dose given the values of the predictive features. For a new patient, the prediction consists of estimating the distribution of the predictive features and marginalizing the conditional probability from the training set over this distribution. Integrating the resulting probability distribution for the dose yields an estimation of the DVH. The 2D KDE is implemented to predict the joint probability distribution of the training set and the distribution of the predictive features for the new patient. Two variables, the signed minimal distance from each OAR (organ at risk) voxel to the target boundary and its opening angle with respect to the origin of the voxel coordinate system, are considered as the predictive features to represent the OAR-target spatial relationship. The feasibility of our method has been demonstrated with rectum, breast and head-and-neck cancer cases by comparing the predicted DVHs with the planned ones. Results: Consistent results were found between the two DVHs for each cancer site, and the average relative point-wise difference was about 5%, within a clinically acceptable extent. Conclusion: According to the results of this study, our method can be used to predict a clinically acceptable DVH and has the ability to evaluate the quality and consistency of treatment planning.
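The probabilistic pipeline described above (joint KDE over feature and dose, conditioning, marginalization over the patient's feature density, then integration to a DVH) can be sketched with a hand-rolled Gaussian KDE; the synthetic training data, the single 1D feature, the bandwidths, and the grids are all illustrative assumptions:

```python
import numpy as np

def gauss_kde(samples, points, bw):
    """Product-Gaussian kernel density estimate; samples (n, d), points (m, d)."""
    z = (points[:, None, :] - samples[None, :, :]) / bw
    return np.exp(-0.5 * (z ** 2).sum(axis=2)).mean(axis=1)

rng = np.random.default_rng(1)
dist = rng.uniform(0, 5, 500)                       # predictive feature (distance to target)
dose = 60 * np.exp(-0.5 * dist) + rng.normal(0, 2, 500)
train = np.column_stack([dist, dose])

d_grid = np.linspace(0, 5, 25)
u_grid = np.linspace(0, 70, 36)
grid = np.array([[d, u] for d in d_grid for u in u_grid])

# Joint density on the grid, then normalize rows -> p(dose | feature)
p_joint = gauss_kde(train, grid, bw=np.array([0.3, 3.0])).reshape(25, 36)
p_cond = p_joint / p_joint.sum(axis=1, keepdims=True)

# "New patient" feature density (training features reused as a stand-in)
w = gauss_kde(dist[:, None], d_grid[:, None], bw=np.array([0.3]))
w /= w.sum()
p_dose = w @ p_cond                                 # marginalized dose distribution

# Integrate: DVH(u) = fraction of voxels receiving at least dose u
dvh = p_dose[::-1].cumsum()[::-1]
print(round(dvh[0], 3))  # → 1.0 by construction
```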

  11. Automatic segmentation of Leishmania parasite in microscopic images using a modified CV level set method

    Science.gov (United States)

    Farahi, Maria; Rabbani, Hossein; Talebi, Ardeshir; Sarrafzadeh, Omid; Ensafi, Shahab

    2015-12-01

    Visceral Leishmaniasis is a parasitic disease that affects the liver, spleen and bone marrow. According to a World Health Organization report, definitive diagnosis is possible only by direct observation of the Leishman body in microscopic images taken from bone marrow samples. We utilize morphological operations and the CV (Chan-Vese) level set method to segment Leishman bodies in digital color microscopic images captured from bone marrow samples. Linear contrast stretching is used for image enhancement, and the morphological operations are applied to determine the parasite regions and remove unwanted objects. Modified global and local CV level set methods are proposed for segmentation, and a shape-based stopping factor is used to accelerate the algorithm. Manual segmentation is considered as ground truth to evaluate the proposed method. The method was tested on 28 samples and achieved a mean segmentation error of 10.90% for the global model and 9.76% for the local model.
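A minimal sketch of the two-phase piecewise-constant Chan-Vese (CV) idea, with the curvature/length regularization omitted and a simple convergence check standing in for the shape-based stopping factor (this is not the authors' modified global/local model):

```python
import numpy as np

def chan_vese_pc(image, n_iter=20):
    """Two-phase piecewise-constant Chan-Vese without the length term:
    alternately fit the inside/outside means c1/c2 and reassign each
    pixel to the closer mean."""
    mask = image > image.mean()              # initial contour: global-mean threshold
    for _ in range(n_iter):
        c1 = image[mask].mean() if mask.any() else 0.0
        c2 = image[~mask].mean() if (~mask).any() else 0.0
        new_mask = (image - c1) ** 2 < (image - c2) ** 2
        if np.array_equal(new_mask, mask):   # stopping criterion: mask unchanged
            break
        mask = new_mask
    return mask

img = np.full((40, 40), 0.2)
img[15:25, 15:25] = 0.9                      # bright "parasite" region
seg = chan_vese_pc(img)
print(seg.sum())  # → 100 pixels segmented
```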

  12. An automatic high precision registration method between large area aerial images and aerial light detection and ranging data

    Science.gov (United States)

    Du, Q.; Xie, D.; Sun, Y.

    2015-06-01

    The integration of digital aerial photogrammetry and Light Detection And Ranging (LiDAR) is an inevitable trend in the surveying and mapping field. We calculate the exterior orientation elements of images in the LiDAR coordinate system to realize automatic high-precision registration between aerial images and LiDAR data. There are two ways to calculate the orientation elements. One is single-image spatial resection using image-matched 3D points registered to the LiDAR data. The other is Position and Orientation System (POS)-supported aerotriangulation, in which high-precision registration points are selected as Ground Control Points (GCPs) instead of measuring GCPs manually. The registration experiments indicate that this method of registering aerial images and LiDAR points offers a great advantage in automation and precision compared with manual registration.

  13. Automatic MRI Quantifying Methods in Behavioral-Variant Frontotemporal Dementia Diagnosis

    DEFF Research Database (Denmark)

    Cajanus, Antti; Hall, Anette; Koikkalainen, Juha

    2018-01-01

    genetic status in the differentiation sensitivity. Methods: The MRI scans of 50 patients with bvFTD (17 C9ORF72 expansion carriers) were analyzed using 6 quantification methods as follows: voxel-based morphometry (VBM), tensor-based morphometry, volumetry (VOL), manifold learning, grading, and white...

  14. Latent variable method for automatic adaptation to background states in motor imagery BCI

    Science.gov (United States)

    Dagaev, Nikolay; Volkova, Ksenia; Ossadtchi, Alexei

    2018-02-01

    Objective. Brain-computer interface (BCI) systems are known to be vulnerable to variabilities in the background states of a user. Usually, no detailed information on these states is available even during the training stage. Thus there is a need for a method that is capable of taking background states into account in an unsupervised way. Approach. We propose a latent variable method that is based on a probabilistic model with a discrete latent variable. In order to estimate the model’s parameters, we suggest using the expectation-maximization algorithm. The proposed method is aimed at assessing characteristics of background states without any corresponding data labeling. In the context of the asynchronous motor imagery paradigm, we applied this method to real data from twelve able-bodied subjects, with open/closed eyes serving as background states. Main results. We found that the latent variable method improved classification of target states compared to the baseline method (in seven of twelve subjects). In addition, we found that our method was also capable of background state recognition (in six of twelve subjects). Significance. Without any supervised information on background states, the latent variable method provides a way to improve classification in BCI by taking background states into account at the training stage and then by making decisions on target states weighted by the posterior probabilities of background states at the prediction stage.
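A toy version of the approach: a discrete latent variable over two background states, fitted by EM to an unlabeled 1D feature. The Gaussian-mixture model form and the synthetic "eyes open/closed" data are illustrative assumptions, not the paper's model:

```python
import numpy as np

def em_two_states(x, n_iter=50):
    """EM for a two-component 1D Gaussian mixture: the discrete latent
    variable plays the role of the unlabeled background state."""
    mu = np.array([x.min(), x.max()], dtype=float)
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each state for each sample
        lik = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = lik / lik.sum(axis=1, keepdims=True)
        # M-step: re-estimate state priors, means, and variances
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma, resp

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 300),    # e.g. eyes-open feature values
                    rng.normal(5, 1, 300)])   # e.g. eyes-closed feature values
pi, mu, sigma, resp = em_two_states(x)
print(np.sort(mu).round(1))  # state means recovered near 0 and 5
```

At prediction time, `resp` gives the posterior probability of each background state, which the paper uses to weight target-state decisions.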

  15. Rapid Automatic Lighting Control of a Mixed Light Source for Image Acquisition using Derivative Optimum Search Methods

    Directory of Open Access Journals (Sweden)

    Kim HyungTae

    2015-01-01

    Full Text Available Automatic lighting (auto-lighting) is a function that maximizes the image quality of a vision inspection system by adjusting the light intensity and color. In most inspection systems, a single-color light source is used, and an equal-step search is employed to determine the maximum image quality. However, when a mixed light source is used, the number of iterations becomes large, and therefore a rapid search method must be applied to reduce it. Derivative optimum search methods follow the tangential direction of a function and are usually faster than other methods. In this study, multi-dimensional forms of derivative optimum search methods are applied to obtain the maximum image quality with a mixed light source. The auto-lighting algorithms were derived from the steepest descent and conjugate gradient methods, which have N-size inputs of driving voltage and one output of image quality. Experiments in which the proposed algorithm was applied to semiconductor patterns showed that a reduced number of iterations is required to determine the locally maximized image quality.
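The multi-dimensional steepest-descent search (here, ascent on image quality) can be sketched as follows; the quadratic quality surrogate, the step size, and the three-channel voltage vector are illustrative assumptions, not the paper's measured image-quality function:

```python
import numpy as np

def image_quality(v):
    """Stand-in quality metric: peaks at an (unknown to the search)
    optimal mix of driving voltages."""
    opt = np.array([3.0, 1.5, 2.0])
    return -np.sum((v - opt) ** 2)

def steepest_ascent(v, step=0.2, eps=1e-3, n_iter=200):
    """Auto-lighting search: follow the numerical gradient of image
    quality with respect to the N-channel driving voltages."""
    for _ in range(n_iter):
        grad = np.array([
            (image_quality(v + eps * e) - image_quality(v - eps * e)) / (2 * eps)
            for e in np.eye(len(v))])
        if np.linalg.norm(grad) < 1e-6:      # converged: quality locally maximized
            break
        v = v + step * grad
    return v

v = steepest_ascent(np.zeros(3))
print(v.round(2))  # converges to the optimal voltage mix [3, 1.5, 2]
```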

  16. An automatic evaluation method for the surface profile of a microlens array using an optical interferometric microscope

    International Nuclear Information System (INIS)

    Lin, Chern-Sheng; Loh, Guo-Hao; Fu, Shu-Hsien; Chang, Hsun-Kai; Yang, Shih-Wei; Yeh, Mau-Shiun

    2010-01-01

    In this paper, an automatic evaluation method for the surface profile of a microlens array using an optical interferometric microscope is presented. For inspecting the microlens array, an XY-table is used to position it. With a He–Ne laser beam and optical fiber as the probing light, the measured image is sent to the computer to analyze the surface profile. By binary image slicing and area recognition, this study located the center of each ring and determined the substrate of the microlens array image through the background of the entire microlens array interference image. The maximum and minimum values of every segment brightness curve were determined, corresponding to the change in the segment phase angle from 0° to 180°. According to the ratio of the actual ring area and the ideal ring area, the area ratio method was adopted to find the phase-angle variation of the interference ring. Based on the ratio of the actual ring brightness and the ideal ring brightness, the brightness ratio method was used to determine the phase-angle variation of the interference ring fringe. The area ratio and brightness ratio methods are interchangeable in precisely determining the phase angles of the innermost and outermost rings of the interference fringe and in obtaining the microlens surface altitudes of the respective pixels in each segment, greatly increasing the accuracy and quality of microlens array surface profile inspection.

  17. Wavelet-Based Bayesian Methods for Image Analysis and Automatic Target Recognition

    National Research Council Canada - National Science Library

    Nowak, Robert

    2001-01-01

    .... We have developed two new techniques. First, we have developed a wavelet-based approach to image restoration and deconvolution problems using Bayesian image models and an alternating-maximization method...

  18. Implementation aspects of the Boundary Element Method including viscous and thermal losses

    DEFF Research Database (Denmark)

    Cutanda Henriquez, Vicente; Juhl, Peter Møller

    2014-01-01

    The implementation of viscous and thermal losses using the Boundary Element Method (BEM) is based on Kirchhoff's dispersion relation and has been tested in previous work using analytical test cases and comparison with measurements. Numerical methods that can simulate sound fields in fluids...

  19. A method for automatic grain segmentation of multi-angle cross-polarized microscopic images of sandstone

    Science.gov (United States)

    Jiang, Feng; Gu, Qing; Hao, Huizhen; Li, Na; Wang, Bingqian; Hu, Xiumian

    2018-06-01

    Automatic grain segmentation of sandstone partitions mineral grains into separate regions in the thin section, which is the first step for computer-aided mineral identification and sandstone classification. Sandstone microscopic images contain a large number of mixed mineral grains where differences among adjacent grains, i.e., quartz, feldspar and lithic grains, are usually ambiguous, which makes grain segmentation difficult. In this paper, we take advantage of multi-angle cross-polarized microscopic images and propose a method for grain segmentation with high accuracy. The method consists of two stages. In the first stage, we enhance the SLIC (Simple Linear Iterative Clustering) algorithm, named MSLIC, to make use of multi-angle images and segment them into boundary-adherent superpixels. In the second stage, we propose a region merging technique which combines coarse and fine merging algorithms. The coarse merging merges adjacent superpixels with less evident boundaries, and the fine merging merges the ambiguous superpixels using spatially enhanced fuzzy clustering. Experiments were conducted on 9 sets of multi-angle cross-polarized images taken from the three major types of sandstones. The results demonstrate both the effectiveness and the potential of the proposed method compared to available segmentation methods.
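The coarse-merging stage (union adjacent superpixels with weak boundary evidence) can be illustrated with a union-find over the superpixel adjacency graph; the mean-intensity merge criterion and `tol` threshold below are simplifications standing in for the paper's boundary-evidence criteria:

```python
import numpy as np

def merge_superpixels(labels, image, tol=0.1):
    """Coarse merging: greedily union adjacent superpixels whose mean
    intensities differ by less than `tol` (union-find with path halving)."""
    ids = np.unique(labels)
    means = {i: image[labels == i].mean() for i in ids}
    parent = {i: i for i in ids}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Adjacency pairs from horizontal and vertical neighbors
    pairs = set()
    a, b = labels[:, :-1], labels[:, 1:]
    pairs |= {tuple(sorted(p)) for p in zip(a[a != b].ravel(), b[a != b].ravel())}
    a, b = labels[:-1, :], labels[1:, :]
    pairs |= {tuple(sorted(p)) for p in zip(a[a != b].ravel(), b[a != b].ravel())}

    for i, j in sorted(pairs):
        if abs(means[i] - means[j]) < tol:
            parent[find(i)] = find(j)
    return np.vectorize(find)(labels)

# Four initial "superpixels"; the two left ones share intensity and merge.
img = np.block([[np.full((4, 4), 0.2), np.full((4, 4), 0.9)],
                [np.full((4, 4), 0.2), np.full((4, 4), 0.5)]])
lab = np.block([[np.zeros((4, 4), int), np.ones((4, 4), int)],
                [np.full((4, 4), 2), np.full((4, 4), 3)]])
merged = merge_superpixels(lab, img)
print(len(np.unique(merged)))  # → 3 regions after merging
```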

  20. Research on a Hierarchical Dynamic Automatic Voltage Control System Based on the Discrete Event-Driven Method

    Directory of Open Access Journals (Sweden)

    Yong Min

    2013-06-01

    Full Text Available In this paper, concepts and methods of hybrid control systems are adopted to establish a hierarchical dynamic automatic voltage control (HD-AVC system, realizing the dynamic voltage stability of power grids. An HD-AVC system model consisting of three layers is built based on the hybrid control method and discrete event-driven mechanism. In the Top Layer, discrete events are designed to drive the corresponding control block so as to avoid solving complex multiple objective functions, the power system’s characteristic matrix is formed and the minimum amplitude eigenvalue (MAE is calculated through linearized differential-algebraic equations. MAE is applied to judge the system’s voltage stability and security and construct discrete events. The Middle Layer is responsible for management and operation, which is also driven by discrete events. Control values of the control buses are calculated based on the characteristics of power systems and the sensitivity method. Then control values generate control strategies through the interface block. In the Bottom Layer, various control devices receive and implement the control commands from the Middle Layer. In this way, a closed-loop power system voltage control is achieved. Computer simulations verify the validity and accuracy of the HD-AVC system, and verify that the proposed HD-AVC system is more effective than normal voltage control methods.
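The MAE-based discrete event in the Top Layer can be sketched directly: compute the eigenvalues of the linearized system matrix, take the minimum amplitude, and fire an event when it drops below a threshold. The matrices and threshold below are illustrative assumptions, not a real grid linearization:

```python
import numpy as np

def min_amplitude_eigenvalue(A):
    """Minimum amplitude eigenvalue (MAE) of the linearized system matrix;
    a small MAE signals proximity to voltage instability."""
    return np.abs(np.linalg.eigvals(A)).min()

def voltage_event(A, threshold=0.1):
    """Discrete event used to drive the upper control layer."""
    return min_amplitude_eigenvalue(A) < threshold

# Illustrative 2x2 linearizations: a stable state and a near-singular one.
A_ok = np.array([[-2.0, 0.3], [0.1, -1.5]])
A_weak = np.array([[-2.0, 0.3], [0.1, -0.02]])
print(voltage_event(A_ok), voltage_event(A_weak))
```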

  1. A comparison between the conventional manual ROI method and an automatic algorithm for semiquantitative analysis of SPECT studies

    International Nuclear Information System (INIS)

    Pagan, L; Novi, B; Guidarelli, G; Tranfaglia, C; Galli, S; Lucchi, G; Fagioli, G

    2011-01-01

    In this study, the performance of free software for automatic segmentation of striatal SPECT brain studies (BasGanV2 - www.aimn.it) and a standard manual Region Of Interest (ROI) method were compared. The anthropomorphic Alderson RSD phantom, filled with solutions at different concentrations of 123I-FP-CIT, with caudate-putamen to background ratios between 1 and 8.7 and caudate to putamen ratios between 1 and 2, was imaged on a Philips Irix triple-head gamma camera. Images were reconstructed using filtered back-projection and processed with both BasGanV2, which provides normalized striatal uptake values on volumetric anatomical ROIs, and a manual method based on average counts per voxel in ROIs drawn on a three-slice section. Caudate-putamen/background and caudate/putamen ratios obtained with the two methods were compared with the true experimental ratios. Good correlation was found for each method; BasGanV2, however, showed a higher correlation index (BasGan mean R = 0.95 vs. manual mean R = 0.89), making it well suited to the semiquantitative analysis of 123I-FP-CIT SPECT data, with, moreover, the advantage of the availability of a control subjects' database.

  2. Automatic method detection of artifacts for control of tomographic uniformity on SPECT; Metodo automatico de dteccion de artefactos para el control de la uniformidad tomografica en SPECT

    Energy Technology Data Exchange (ETDEWEB)

    Reynes Llompart, G.; Puchal, R.

    2013-07-01

    The objective of this work is to find an automatic method for the detection and classification of artifacts produced in tomographic uniformity, extracting the characteristics necessary to apply a classification algorithm using pattern-recognition techniques. The method has been trained and validated with synthetic images and tested with real images. (Author)

  3. Automatic Tree Data Removal Method for Topography Measurement Result Using Terrestrial Laser Scanner

    Science.gov (United States)

    Yokoyama, H.; Chikatsu, H.

    2017-02-01

    Recently, laser scanning has been receiving greater attention as a useful tool for real-time 3D data acquisition, and various applications such as city modelling, DTM generation and 3D modelling of cultural heritage sites have been proposed. Digital data processing has long been demanded for digital archiving of cultural heritage sites. However, robust filtering methods for distinguishing on- and off-terrain points with a terrestrial laser scanner still have many issues. Past investigations have reported digital data processing using airborne laser scanners, but efficient methods for removing trees from terrain points at cultural heritage sites were not considered. In this paper, the authors describe a new robust filtering method for cultural heritage sites using a terrestrial laser scanner with "the echo digital processing technology" as the latest data-processing technique of terrestrial laser scanners.

  4. An innovative exercise method to simulate orbital EVA work - Applications to PLSS automatic controls

    Science.gov (United States)

    Lantz, Renee; Vykukal, H.; Webbon, Bruce

    1987-01-01

    An exercise method has been proposed which may satisfy the current need for a laboratory simulation representative of muscular, cardiovascular, respiratory, and thermoregulatory responses to work during orbital extravehicular activity (EVA). The simulation incorporates arm crank ergometry with a unique body support mechanism that allows all body position stabilization forces to be reacted at the feet. By instituting this exercise method in laboratory experimentation, an advanced portable life support system (PLSS) thermoregulatory control system can be designed to more accurately reflect the specific work requirements of orbital EVA.

  5. Evaluation of the Patient Effective Dose in Whole Spine Scanography Based on the Automatic Image Pasting Method for Digital Radiography

    International Nuclear Information System (INIS)

    Kim, Jung-Su; Yoon, Sang-Wook; Seo, Deok-Nam; Nam, So-Ra; Kim, Jung-Min

    2016-01-01

    Whole spine scanography (WSS) is a radiologic examination that requires whole-body X-ray exposure. Consequently, the amount of patient radiation exposure is higher than the radiation dose from routine X-ray examination. Several studies have evaluated the patient effective dose (ED) following single-exposure film-screen WSS. The objective of this study was to evaluate patient ED during WSS based on the automatic image pasting method for multiple-exposure digital radiography (APMDR), and to compare the calculated EDs with the results of previous studies involving single-exposure film-screen WSS. We evaluated the ED of 50 consecutive patients (M:F = 28:22) who underwent WSS using APMDR. The anterior-posterior (AP) and lateral (LAT) projection EDs were evaluated based on Monte Carlo simulation. Using APMDR, the mean number of exposures was 6.1 for AP and 6.5 for LAT projections; LAT projections required 6.55% more exposures than AP projections. The mean ED was 0.6276 mSv (AP) and 0.6716 mSv (LAT). The mean ED for LAT projections was 0.6061 mSv in automatic exposure control (AEC) and 0.7694 mSv in manual mode. The relationship between dose-area product (DAP) and ED revealed a proportional correlation (AP, R² = 0.943; LAT, R² = 0.773). Compared to prior research involving single-exposure screen-film WSS, the patient ED following WSS using APMDR was lower on AP than on LAT projections. Despite multiple exposures, ED control is more effective if WSS is performed using APMDR in the AEC mode.

  6. A practical method to standardise and optimise the Philips DoseRight 2.0 CT automatic exposure control system.

    Science.gov (United States)

    Wood, T J; Moore, C S; Stephens, A; Saunderson, J R; Beavis, A W

    2015-09-01

    Given the increasing use of computed tomography (CT) in the UK over the last 30 years, it is essential to ensure that all imaging protocols are optimised to keep radiation doses as low as reasonably practicable, consistent with the intended clinical task. However, the complexity of modern CT equipment can make this task difficult to achieve in practice. Recent results of local patient dose audits have shown discrepancies between two Philips CT scanners that use the DoseRight 2.0 automatic exposure control (AEC) system in the 'automatic' mode of operation. The use of this system can result in drifting dose and image quality performance over time as it is designed to evolve based on operator technique. The purpose of this study was to develop a practical technique for configuring examination protocols on four CT scanners that use the DoseRight 2.0 AEC system in the 'manual' mode of operation. This method used a uniform phantom to generate reference images which form the basis for how the AEC system calculates exposure factors for any given patient. The results of this study have demonstrated excellent agreement in the configuration of the CT scanners in terms of average patient dose and image quality when using this technique. This work highlights the importance of CT protocol harmonisation in a modern Radiology department to ensure both consistent image quality and radiation dose. Following this study, the average radiation dose for a range of CT examinations has been reduced without any negative impact on clinical image quality.

  7. Automatic MRI Quantifying Methods in Behavioral-Variant Frontotemporal Dementia Diagnosis

    Directory of Open Access Journals (Sweden)

    Antti Cajanus

    2018-02-01

    Full Text Available Aims: We assessed the value of automated MRI quantification methods in the differential diagnosis of behavioral-variant frontotemporal dementia (bvFTD) from Alzheimer disease (AD), Lewy body dementia (LBD), and subjective memory complaints (SMC). We also examined the role of the C9ORF72-related genetic status in the differentiation sensitivity. Methods: The MRI scans of 50 patients with bvFTD (17 C9ORF72 expansion carriers) were analyzed using 6 quantification methods as follows: voxel-based morphometry (VBM), tensor-based morphometry, volumetry (VOL), manifold learning, grading, and white-matter hyperintensities. Each patient was then individually compared to an independent reference group in order to attain diagnostic suggestions. Results: Only VBM and VOL showed utility in correctly identifying bvFTD from our set of data. The overall classification of bvFTD with VOL + VBM achieved a total sensitivity of 60%. Using VOL + VBM, 32% were misclassified as having LBD. There was a trend toward higher classification sensitivity for the C9ORF72 expansion carriers than for noncarriers. Conclusion: VOL, VBM, and their combination are effective in differential diagnostics between bvFTD and AD or SMC. However, the MRI atrophy profiles for bvFTD and LBD are too similar for a reliable differentiation with the quantification methods tested in this study.

  8. Development of automatic image analysis methods for high-throughput and high-content screening

    NARCIS (Netherlands)

    Di, Zi

    2013-01-01

    This thesis focuses on the development of image analysis methods for ultra-high content analysis of high-throughput screens, in which cellular phenotype responses to various genetic or chemical perturbations are under investigation. Our primary goal is to deliver efficient and robust image analysis

  9. Automatic Power Control for Daily Load-following Operation using Model Predictive Control Method

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Keuk Jong; Kim, Han Gon [KH, Daejeon (Korea, Republic of)

    2009-10-15

    Given that nuclear power accounts for more than 50% of generation, nuclear power plants are required to operate in a load-following manner for effective management of the electric grid and enhanced responsiveness to rapid changes in power demand. Conventional reactors such as the OPR1000 and APR1400 have a regulating system that controls the average temperature of the reactor core in relation to the reference temperature. This conventional method has the advantages of proven technology and ease of implementation. However, it is unsuitable for controlling the axial power shape, particularly during load-following operation. Accordingly, this paper reports on the development of a model predictive control method which is able to control both the reactor power and the axial shape index. The purpose of this study is to analyze the behavior of the nuclear reactor power and the axial power shape when the power is increased and decreased for a daily load-following operation using a model predictive control method. The study confirms that deviations in the axial shape index (ASI) remain within the operating limit.
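The receding-horizon idea behind such a controller can be caricatured on a first-order power model: at each step, solve for the control sequence that tracks the load-following reference over the horizon, then apply only the first move. The model, horizon, and reference profile below are illustrative assumptions, not the reactor model from the paper:

```python
import numpy as np

# Toy first-order power dynamics: p[t+1] = a*p[t] + b*u[t]
a, b = 0.95, 0.5
horizon, n_steps = 5, 60

def mpc_step(p, ref, t):
    """Receding-horizon step: choose the control sequence minimizing squared
    tracking error over the horizon (least squares), apply its first element."""
    # Prediction: p[t+1+k] = a^(k+1)*p + sum_{j<=k} a^(k-j)*b*u[j]
    G = np.zeros((horizon, horizon))
    for k in range(horizon):
        for j in range(k + 1):
            G[k, j] = a ** (k - j) * b
    free = np.array([a ** (k + 1) * p for k in range(horizon)])
    target = ref[t + 1:t + 1 + horizon]
    u, *_ = np.linalg.lstsq(G, target - free, rcond=None)
    return u[0]

# Daily load-following reference: 100% -> 50% -> 100% power
ref = np.concatenate([np.full(20, 100.0), np.full(20, 50.0), np.full(25, 100.0)])
p, traj = 100.0, []
for t in range(n_steps):
    u = mpc_step(p, ref, t)
    p = a * p + b * u
    traj.append(p)
print(round(traj[-1], 1))  # → 100.0 (power restored after the daily cycle)
```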

  10. A method for unsupervised change detection and automatic radiometric normalization in multispectral data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Canty, Morton John

    2011-01-01

    Based on canonical correlation analysis the iteratively re-weighted multivariate alteration detection (MAD) method is used to successfully perform unsupervised change detection in bi-temporal Landsat ETM+ images covering an area with villages, woods, agricultural fields and open pit mines in North...... to carry out the analyses is available from the authors' websites....
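The MAD construction (differences of paired canonical variates from a canonical correlation analysis of the two acquisition dates) can be sketched in numpy; the iterative re-weighting of the IR-MAD variant is omitted, and the synthetic bands and change region are illustrative assumptions:

```python
import numpy as np

def mad_variates(X, Y):
    """Multivariate Alteration Detection: CCA of the two dates; MAD variates
    are differences of paired canonical variates (re-weighting omitted)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = len(X)
    Sxx, Syy, Sxy = X.T @ X / n, Y.T @ Y / n, X.T @ Y / n
    # CCA eigenproblem for the X-side weights
    M = np.linalg.solve(Sxx, Sxy) @ np.linalg.solve(Syy, Sxy.T)
    rho2, A = np.linalg.eig(M)
    A = np.real(A[:, np.argsort(np.real(rho2))[::-1]])
    B = np.linalg.solve(Syy, Sxy.T) @ A            # paired Y-side weights
    A /= np.sqrt(np.diag(A.T @ Sxx @ A))           # unit-variance variates
    B /= np.sqrt(np.diag(B.T @ Syy @ B))
    return X @ A - Y @ B                           # MAD variates

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                      # time-1 bands
Y = X + rng.normal(scale=0.1, size=(500, 3))       # time-2: mostly unchanged
Y[:50] += 2.0                                      # simulated change region
mads = mad_variates(X, Y)
print(np.abs(mads[:50]).mean() > np.abs(mads[50:]).mean())
```

Changed pixels produce MAD variates with larger magnitudes, which is the basis both for change detection and, on the unchanged pixels, for radiometric normalization.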

  11. Technical improvement and development of automatic detection method for genomic mutation

    International Nuclear Information System (INIS)

    Yamada, Kiyomi; Takai, Setsuo; Togashi, Chikako; Itami, Jun

    1999-01-01

    Fluorescent in situ hybridization (FISH) was improved to estimate the dose of radiation exposure. Cleavage of DNA molecules in lymphocytes was used as the detection parameter, and procedures for the preparation of samples suitable for atomic force microscopy and laser microscopy were developed. When ordinary commercial primers were used for PCR, the products were generally too small (about 200 bp) for detection by FISH. Therefore, dATP and biotin-labeled dUTP were linked to the 3'-end by treatment with TdT for 2 hours, resulting in a mean PCR product length of ca. 1.5 kb. After hybridization with this probe, signal amplification was carried out according to the biotin-avidin detection method. Thus, fluorescent signals on chromosomes could be easily detected. When the three primers D6S105, D6S291 and D6S282 were used, fluorescent signals were detectable at 3 sites on the chromatin fiber. These results indicate that this method is applicable to the analysis of chromosome cleavage. However, the backgrounds varied considerably depending on how the preparation was washed after incubation with fluorescent particles to form the biotin-avidin binding. Therefore, further improvement of this method is necessary before practical application. When chromosomes 13, 14 and 15 from lymphocytes exposed to X-rays were used as test samples, it was demonstrated that radiosensitivity varied depending on the content of the R band in each chromosome. (M.N.)

  12. A coupling method for a cardiovascular simulation model which includes the Kalman filter.

    Science.gov (United States)

    Hasegawa, Yuki; Shimayoshi, Takao; Amano, Akira; Matsuda, Tetsuya

    2012-01-01

    Multi-scale models of the cardiovascular system provide new insight that was unavailable with in vivo and in vitro experiments. For the cardiovascular system, multi-scale simulations provide a valuable perspective in analyzing the interaction of three phenomena occurring at different spatial scales: circulatory hemodynamics, ventricular structural dynamics, and myocardial excitation-contraction. In order to simulate these interactions, multi-scale cardiovascular simulation systems couple models that simulate different phenomena. However, coupling methods require a significant amount of calculation, since a system of non-linear equations must be solved at each timestep. Therefore, we propose a coupling method which decreases the amount of calculation by using the Kalman filter. In our method, the Kalman filter calculates approximations of the solution to the system of non-linear equations at each timestep. The approximations are then used as initial values for solving the system of non-linear equations. The proposed method decreases the number of iterations required by 94.0% compared to the conventional strong coupling method. When compared with a smoothing-spline predictor, the proposed method required 49.4% fewer iterations.
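The warm-start idea (a cheap predictor supplies the initial value for the per-timestep nonlinear solve) can be illustrated with Newton's method on a time-varying scalar equation; a linear extrapolator stands in for the Kalman filter here, and the equation itself is an illustrative assumption:

```python
import numpy as np

def newton_solve(f, df, x0, tol=1e-10, max_iter=100):
    """Newton's method; returns the solution and the iteration count."""
    x = x0
    for i in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x, i + 1
    return x, max_iter

# Per-timestep nonlinear equation: x^3 + x - c(t) = 0, with c drifting in time
cs = np.linspace(1.0, 4.0, 40)
f = lambda x, c: x ** 3 + x - c
df = lambda x: 3 * x ** 2 + 1

# Baseline initial guess: previous solution.
# Predictor initial guess: linear extrapolation of the last two solutions,
# standing in for the Kalman-filter approximation of the paper.
iters_base, iters_pred, sols = 0, 0, []
for c in cs:
    x_prev = sols[-1] if sols else 0.0
    x0_pred = 2 * sols[-1] - sols[-2] if len(sols) >= 2 else x_prev
    _, n1 = newton_solve(lambda x: f(x, c), df, x_prev)
    x, n2 = newton_solve(lambda x: f(x, c), df, x0_pred)
    iters_base += n1
    iters_pred += n2
    sols.append(x)
print(iters_pred < iters_base)  # warm start reduces total iterations
```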

  13. Turbomachine combustor nozzle including a monolithic nozzle component and method of forming the same

    Science.gov (United States)

    Stoia, Lucas John; Melton, Patrick Benedict; Johnson, Thomas Edward; Stevenson, Christian Xavier; Vanselow, John Drake; Westmoreland, James Harold

    2016-02-23

    A turbomachine combustor nozzle includes a monolithic nozzle component having a plate element and a plurality of nozzle elements. Each of the plurality of nozzle elements includes a first end extending from the plate element to a second end. The plate element and plurality of nozzle elements are formed as a unitary component. A plate member is joined with the nozzle component. The plate member includes an outer edge that defines first and second surfaces and a plurality of openings extending between the first and second surfaces. The plurality of openings are configured and disposed to register with and receive the second end of corresponding ones of the plurality of nozzle elements.

  14. Automatization of the neutron activation analysis method in the nuclear analysis laboratory

    International Nuclear Information System (INIS)

    Gonzalez, N.R.; Rivero, D del C.; Gonzalez, M.A.; Larramendi, F.

    1993-01-01

    In the present paper, the work done to automate the neutron activation analysis technique with a neutron generator is described. An interface between an IBM-compatible microcomputer and the equipment used to make this kind of measurement was developed, including the specialized software for this system

  15. 10 CFR 431.134 - Uniform test methods for the measurement of energy consumption and water consumption of automatic...

    Science.gov (United States)

    2010-01-01

    ... consumption and water consumption of automatic commercial ice makers. 431.134 Section 431.134 Energy... of energy consumption and water consumption of automatic commercial ice makers. (a) Scope. This... consumption, but instead calculate the energy use rate (kWh/100 lbs Ice) by dividing the energy consumed...

  16. Development of a quantitative safety assessment method for nuclear I and C systems including human operators

    International Nuclear Information System (INIS)

    Kim, Man Cheol

    2004-02-01

    Conventional PSA (probabilistic safety analysis) is performed in the framework of event tree analysis and fault tree analysis. In conventional PSA, I and C systems and human operators are assumed to be independent for simplicity. But the dependency of human operators on I and C systems, and the dependency of I and C systems on human operators, are gradually being recognized as significant. I believe that it is time to consider the interdependency between I and C systems and human operators in the framework of PSA. Unfortunately, it seems that we do not have appropriate methods for incorporating this interdependency in the framework of PSA. Conventional human reliability analysis (HRA) methods were not developed to consider the interdependency, and modeling the interdependency using conventional event tree analysis and fault tree analysis seems to be quite complex, even though it does not seem to be impossible. To incorporate the interdependency between I and C systems and human operators, we need a new method for HRA and a new method for modeling the I and C systems, man-machine interface (MMI), and human operators for quantitative safety assessment. As a new method for modeling the I and C systems, MMI and human operators, I develop a new system reliability analysis method, reliability graph with general gates (RGGG), which can substitute for conventional fault tree analysis. RGGG is an intuitive and easy-to-use method for system reliability analysis, while as powerful as conventional fault tree analysis. To demonstrate the usefulness of the RGGG method, it is applied to the reliability analysis of the Digital Plant Protection System (DPPS), which is the actual plant protection system of the Ulchin 5 and 6 nuclear power plants located in the Republic of Korea.
The latest version of the fault tree for DPPS, which is developed by the Integrated Safety Assessment team in Korea Atomic Energy Research Institute (KAERI), consists of 64
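For contrast with RGGG, conventional fault tree quantification reduces to combining basic-event probabilities through AND/OR gates under an independence assumption. The sketch below uses a hypothetical two-channel protection structure; the gates and failure probabilities are invented for illustration and are not taken from the DPPS model.

```python
def and_gate(*probs):
    """Output fails only if every input fails (independent basic events)."""
    out = 1.0
    for p in probs:
        out *= p
    return out

def or_gate(*probs):
    """Output fails if at least one input fails (independent basic events)."""
    ok = 1.0
    for p in probs:
        ok *= (1.0 - p)
    return 1.0 - ok

# hypothetical mini fault tree: the top event (failure to trip) occurs only
# if both redundant channels fail; a channel fails if its sensor OR its
# processor fails
p_sensor, p_processor = 1.0e-3, 5.0e-4
p_channel = or_gate(p_sensor, p_processor)
p_top = and_gate(p_channel, p_channel)
print(p_channel, p_top)
```

Modeling shared dependence on an operator or MMI breaks the independence assumption used here, which is exactly the limitation the thesis addresses.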

  17. A SEMI-AUTOMATIC RULE SET BUILDING METHOD FOR URBAN LAND COVER CLASSIFICATION BASED ON MACHINE LEARNING AND HUMAN KNOWLEDGE

    Directory of Open Access Journals (Sweden)

    H. Y. Gu

    2017-09-01

    Full Text Available A classification rule set is important for land cover classification; it refers to features and decision rules. The selection of features and decision rules is based on an iterative trial-and-error approach often utilized in GEOBIA; however, this approach is time-consuming and has poor versatility. This study puts forward a rule set building method for land cover classification based on human knowledge and machine learning. Machine learning is used to build rule sets effectively, overcoming the iterative trial-and-error approach. Human knowledge is used to address the shortcoming of existing machine learning methods, namely insufficient use of prior knowledge, and to improve the versatility of the rule sets. A two-step workflow is introduced: firstly, an initial rule set is built based on Random Forest and a CART decision tree; secondly, the initial rule set is analyzed and validated based on human knowledge, where a statistical confidence interval is used to determine its thresholds. The test site is located in Potsdam City. We utilised the TOP, DSM and ground truth data. The results show that the method can determine a rule set for land cover classification semi-automatically, and that there are static features for different land cover classes.

  18. An automatic scaling method for obtaining the trace and parameters from oblique ionogram based on hybrid genetic algorithm

    Science.gov (United States)

    Song, Huan; Hu, Yaogai; Jiang, Chunhua; Zhou, Chen; Zhao, Zhengyu; Zou, Xianjian

    2016-12-01

    Scaling an oblique ionogram plays an important role in obtaining the ionospheric structure at the midpoint of the oblique sounding path. The paper proposes an automatic scaling method to extract the trace and parameters of an oblique ionogram based on a hybrid genetic algorithm (HGA). The 10 extracted parameters come from the F2 layer and Es layer, such as maximum observation frequency, critical frequency, and virtual height. The method adopts the quasi-parabolic (QP) model to describe the F2 layer's electron density profile, which is used to synthesize the trace. It utilizes the secant theorem, Martyn's equivalent path theorem, image processing technology, and the echoes' characteristics to determine seven parameters' best-fit values, and the initial values of the remaining three QP-model parameters, from which their search spaces (the required input of the HGA) are set up. The HGA then searches for the three parameters' best-fit values within their search spaces based on the fitness between the synthesized trace and the real trace. In order to verify the performance of the method, 240 oblique ionograms were scaled and their results compared with manual scaling results and with the inversion results of the corresponding vertical ionograms. The comparison shows that the scaling results are accurate, or at least adequate, 60-90% of the time.
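The core search loop (synthesize a trace from candidate parameters, score it against the measured trace, and let a genetic search refine the parameters) can be sketched with stand-ins: a parabola replaces the QP ionospheric model, and a simple elitist GA with Gaussian mutation replaces the hybrid GA; all names and numbers are illustrative.

```python
import random

random.seed(0)

# stand-in "measured trace": y = a*(x - b)**2 + c plays the role of the
# trace synthesized from the QP model; the GA searches (a, b, c)
true_a, true_b, true_c = 0.5, 3.0, 10.0
xs = [0.5 * i for i in range(13)]
ys = [true_a * (x - true_b) ** 2 + true_c for x in xs]

def misfit(p):
    """Sum of squared differences between synthesized and measured trace."""
    a, b, c = p
    return sum((a * (x - b) ** 2 + c - y) ** 2 for x, y in zip(xs, ys))

def mutate(p, sigma):
    return tuple(v + random.gauss(0.0, sigma) for v in p)

# elitist GA: keep the best candidates, refill with mutated copies
pop = [(random.uniform(0, 1), random.uniform(0, 6), random.uniform(0, 20))
       for _ in range(40)]
for gen in range(200):
    pop.sort(key=misfit)
    sigma = 0.5 * 0.98 ** gen          # shrink the mutation step over time
    pop = pop[:10] + [mutate(random.choice(pop[:10]), sigma) for _ in range(30)]

best = min(pop, key=misfit)
print(best, misfit(best))
```

The fitness in the paper compares a whole synthesized trace against the extracted ionogram trace; the GA structure is the same, only the forward model and fitness differ.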

  19. Method of Automatic Ontology Mapping through Machine Learning and Logic Mining

    Institute of Scientific and Technical Information of China (English)

    王英林

    2004-01-01

    Ontology mapping is the bottleneck in handling conflicts among heterogeneous ontologies and in implementing reconfiguration or interoperability of legacy systems. We proposed an ontology mapping method using machine learning, type constraints and logic mining techniques. This method is able to find concept correspondences through instances, and the result is optimized by using an error function; it is able to find attribute correspondences between two equivalent concepts, and the mapping accuracy is enhanced by combining instance learning, type constraints and the logic relations that are embedded in instances; moreover, it solves the most common kind of categorization conflicts. We then proposed a merging algorithm to generate the shared ontology and a reconfigurable architecture for interoperation based on multiple agents. The legacy systems are encapsulated as information agents to participate in the integration system. Finally, we give a simplified case study.

  20. An automatic image-based modelling method applied to forensic infography.

    Directory of Open Access Journals (Sweden)

    Sandra Zancajo-Blazquez

    Full Text Available This paper presents a new method based on 3D reconstruction from images that demonstrates the utility and integration of close-range photogrammetry and computer vision as an efficient alternative to modelling complex objects and scenarios of forensic infography. The results obtained confirm the validity of the method compared to other existing alternatives as it guarantees the following: (i) flexibility, permitting work with any type of camera (calibrated and non-calibrated, smartphone or tablet) and image (visible, infrared, thermal, etc.); (ii) automation, allowing the reconstruction of three-dimensional scenarios in the absence of manual intervention; and (iii) high-quality results, sometimes providing higher resolution than modern laser scanning systems. As a result, each ocular inspection of a crime scene with any camera performed by the scientific police can be transformed into a scaled 3D model.

  1. Towards automatic global error control: Computable weak error expansion for the tau-leap method

    KAUST Repository

    Karlsson, Peer Jesper; Tempone, Raul

    2011-01-01

    This work develops novel error expansions with computable leading order terms for the global weak error in the tau-leap discretization of pure jump processes arising in kinetic Monte Carlo models. Accurate computable a posteriori error approximations are the basis for adaptive algorithms, a fundamental tool for numerical simulation of both deterministic and stochastic dynamical systems. These pure jump processes are simulated either by the tau-leap method, or by exact simulation, also referred to as dynamic Monte Carlo, the Gillespie Algorithm or the Stochastic Simulation Algorithm. Two types of estimates are presented: an a priori estimate for the relative error that gives a comparison between the work for the two methods depending on the propensity regime, and an a posteriori estimate with computable leading order term. © de Gruyter 2011.
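As a concrete illustration of the tau-leap discretization itself (not of the error expansion), consider a minimal pure-decay process with one species and propensity c*x, where the number of firings per step is Poisson distributed; the rates and step size below are arbitrary.

```python
import math
import random

random.seed(1)

def poisson(lam):
    """Knuth's Poisson sampler; adequate for modest lam."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def tau_leap(x0, c, tau, n_steps):
    """Tau-leap simulation of the pure decay A -> 0 with propensity c*x."""
    x = x0
    for _ in range(n_steps):
        if x == 0:
            break
        fired = poisson(c * x * tau)   # firings in one leap, propensity frozen
        x = max(x - fired, 0)
    return x

runs = [tau_leap(1000, 0.1, 0.1, 50) for _ in range(200)]
avg = sum(runs) / len(runs)
print(avg, 1000 * math.exp(-0.1 * 0.1 * 50))  # leap mean vs. exact expectation
```

Freezing the propensity over each leap introduces the O(tau) weak bias whose leading term the paper makes computable; exact SSA instead draws every individual firing time.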

  2. An automatic formulation of inverse free second moment method for algebraic systems

    International Nuclear Information System (INIS)

    Shakshuki, Elhadi; Ponnambalam, Kumaraswamy

    2002-01-01

    In systems with probabilistic uncertainties, an estimation of reliability requires at least the first two moments. In this paper, we focus on the probabilistic analysis of linear systems. The important tasks in this analysis are the formulation and the automation of the moment equations. The main objective of the formulation is to provide at least the means and variances of the output variables with at least second-order accuracy. The objective of the automation is to reduce the storage and computational complexities required for implementing (automating) those formulations. This paper extends recent work on calculating the first two moments of a set of random algebraic linear equations by developing a stamping procedure to facilitate its automation. The new method has the additional advantage of being able to solve problems when the mean matrix of a system is singular. Lastly, from the point of view of storage, computational complexity and accuracy, a comparison between the new method and another recently developed first-order second moment method is made with numerical examples

  3. A robust automatic leukocyte recognition method based on island-clustering texture

    Directory of Open Access Journals (Sweden)

    Xiaoshun Li

    2016-01-01

    Full Text Available A leukocyte recognition method for human peripheral blood smears based on island-clustering texture (ICT) is proposed. By analyzing the features of the five typical classes of leukocyte images, a new ICT model is established. Firstly, feature points are extracted from a gray leukocyte image by mean-shift clustering to be the centers of islands. Secondly, region growing is employed to create the regions of the islands, in which the seeds are just these feature points. The distribution of these islands describes a new texture. Finally, a distinguishing parameter vector of these islands is created as the ICT features, which are combined with the geometric features of the leukocyte. The five typical classes of leukocytes can then be recognized successfully at a correct recognition rate of more than 92.3% with a total sample of 1310 leukocytes. Experimental results show the feasibility of the proposed method. Further analysis reveals that the method is robust and the results can provide important information for disease diagnosis.

  4. Method of extruding and packaging a thin sample of reactive material including forming the extrusion die

    International Nuclear Information System (INIS)

    Lewandowski, E.F.; Peterson, L.L.

    1985-01-01

    This invention teaches a method of cutting a narrow slot in an extrusion die with an electrical discharge machine by first drilling spaced holes at the ends of where the slot will be, whereby the oil can flow through the holes and slot to flush the material eroded away as the slot is being cut. The invention further teaches a method of extruding a very thin ribbon of solid highly reactive material such as lithium or sodium through the die in an inert atmosphere of nitrogen, argon or the like as in a glovebox. The invention further teaches a method of stamping out sample discs from the ribbon and of packaging each disc by sandwiching it between two aluminum sheets and cold welding the sheets together along an annular seam beyond the outer periphery of the disc. This provides a sample of high purity reactive material that can have a long shelf life

  5. Fully automatic and reference-marker-free image stitching method for full-spine and full-leg imaging with computed radiography

    Science.gov (United States)

    Wang, Xiaohui; Foos, David H.; Doran, James; Rogers, Michael K.

    2004-05-01

    Full-leg and full-spine imaging with standard computed radiography (CR) systems requires several cassettes/storage phosphor screens to be placed in a staggered arrangement and exposed simultaneously to achieve an increased imaging area. A method has been developed that can automatically and accurately stitch the acquired sub-images without relying on any external reference markers. It can detect and correct the order, orientation, and overlap arrangement of the sub-images for stitching. The automatic determination of the order, orientation, and overlap arrangement of the sub-images consists of (1) constructing a hypothesis list that includes all cassette/screen arrangements, (2) refining hypotheses based on a set of rules derived from imaging physics, (3) correlating each consecutive sub-image pair in each hypothesis and establishing an overall figure-of-merit, and (4) selecting the hypothesis of maximum figure-of-merit. The stitching process requires the CR reader to overscan each CR screen so that the screen edges are completely visible in the acquired sub-images. The rotational and vertical displacements between two consecutive sub-images are calculated by matching the orientation and location of the screen edge in the front image and its corresponding shadow in the back image. The horizontal displacement is estimated by maximizing the correlation function between the two image sections in the overlap region. Accordingly, the two images are stitched together. This process is repeated for the newly stitched composite image and the next consecutive sub-image until a full-image composite is created. The method has been evaluated in both phantom experiments and clinical studies. The standard deviation of image misregistration is below one image pixel.
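The horizontal-displacement step, maximizing a correlation function between the two image sections in the overlap region, can be sketched in one dimension. A single synthetic image row stands in for the 2-D overlap regions of the real method; all data here are invented.

```python
import random

random.seed(2)

def best_shift(front, back, max_shift):
    """Shift of `back` (within [0, max_shift]) that maximizes overlap correlation."""
    def score(s):
        pairs = [(front[i + s], back[i])
                 for i in range(len(back)) if 0 <= i + s < len(front)]
        return sum(f * b for f, b in pairs) / len(pairs)
    return max(range(max_shift + 1), key=score)

# synthetic row: the back sub-image repeats the front row offset by 17 pixels
front_row = [random.gauss(0.0, 1.0) for _ in range(200)]
true_shift = 17
back_row = front_row[true_shift:true_shift + 120]

print(best_shift(front_row, back_row, 40))  # recovers the true shift of 17
```

In two dimensions the same search is run on the overlap regions after the rotational and vertical displacements have been fixed from the screen-edge match.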

  6. Method for optical 15N analysis of small amounts of nitrogen gas released from an automatic nitrogen analyzer

    International Nuclear Information System (INIS)

    Arima, Yasuhiro

    1981-01-01

    A method of optical 15N analysis is proposed for application to small amounts of nitrogen gas released from an automatic nitrogen analyzer (model ANA-1300, Carlo Erba, Milano) subjected to certain modifications. The ANA-1300 was combined with a vacuum line fitted with a molecular sieve 13X column. The nitrogen gas released from the ANA-1300 was introduced with a helium carrier gas into the molecular sieve column, which was pre-evacuated at 10^-4 Torr and cooled with liquid nitrogen. After removal of the helium by evacuation, the nitrogen gas fixed on the molecular sieve was released by warming the column, and then sealed into pre-evacuated Pyrex glass tubes at 4.5-5.0 Torr. In the preparation of discharge tubes, contamination with unlabelled nitrogen occurred from the standard-grade helium carrier gas; the relative lowering of the 15N value caused by it was estimated to be less than 1% when over 700 μg of nitrogen was charged on the ANA-1300, and about 3.5% when 200 μg of nitrogen was charged. However, the effect of the contamination could be corrected for by knowing the amount of contaminant nitrogen. In the analysis of plant materials by the proposed method, the coefficient of variation was less than 2%, and no significant difference was observed between results given by the present method and by the ordinary method in which samples were directly pyrolyzed in the discharge tubes by the Dumas method. The present method revealed about 1.5 μg of cross-contaminated nitrogen and was applicable to more than 200 μg of sample nitrogen. (author)

  7. Endotracheal tube cuff pressure monitoring during neurosurgery - Manual vs. automatic method

    Directory of Open Access Journals (Sweden)

    Mukul Kumar Jain

    2011-01-01

    Full Text Available Background: Inflation and assessment of the endotracheal tube cuff pressure are often not appreciated as a critical aspect of endotracheal intubation. With appropriate cuff pressure, endotracheal intubation seals the airway to prevent aspiration and provides for positive-pressure ventilation without air leak. Materials and Methods: Correlations between manual assessment of the pressure by experienced anesthesiologists and assessment, with maintenance of the pressure within the normal range, by an automated pressure controller device were studied in 100 patients divided into two groups. In Group M, the endotracheal tube cuff was inflated manually by a trained anesthesiologist and its pressure was checked hourly with a cuff pressure monitor until the end of surgery. In Group C, the cuff was inflated by the automated cuff pressure controller and the pressure was maintained at 25 cm H2O throughout surgery. Repeated-measures ANOVA was applied. Results: The repeated-measures ANOVA showed that the average endotracheal tube cuff pressure of 50 patients, taken at seven different time points, differed significantly (F-value: 171.102, P-value: 0.000). The Bonferroni correction test showed that the average cuff pressures in all six groups differed significantly from the constant group (P = 0.000). No case of laryngomalacia, tracheomalacia, tracheal stenosis, tracheoesophageal fistula or aspiration pneumonitis was observed. Conclusions: Endotracheal tube cuff pressure was significantly high when the cuff was inflated manually. The known complications of high cuff pressure can be avoided if the cuff pressure controller device is used; manual methods cannot be relied upon to keep the pressure within the recommended levels.

  8. Method for including detailed evaluation of daylight levels in Be06

    DEFF Research Database (Denmark)

    Petersen, Steffen

    2008-01-01

    Good daylight conditions in office buildings have become an important issue due to new European regulatory demands which include energy consumption for electrical lighting in the building energy frame. Good daylight conditions in offices are thus in increased focus as an energy conserving measure. In order to evaluate whether a certain design is good daylight design or not, building designers must perform detailed evaluation of daylight levels, including the daylight performance of dynamic solar shadings, and include these in the energy performance evaluation. However, the mandatory national calculation tool in Denmark (Be06) for evaluating the energy performance of buildings currently uses a simple representation of available daylight in a room and simple assumptions regarding the control of shading devices. In a case example, this leads to an overestimation of the energy consumption

  9. A human error taxonomy and its application to an automatic method accident analysis

    International Nuclear Information System (INIS)

    Matthews, R.H.; Winter, P.W.

    1983-01-01

    Commentary is provided on the quantification aspects of human factors analysis in risk assessment. Methods for quantifying human error in a plant environment are discussed and their application to system quantification explored. Such a programme entails consideration of the data base and a taxonomy of factors contributing to human error. A multi-levelled approach to system quantification is proposed, each level being treated differently drawing on the advantages of different techniques within the fault/event tree framework. Management, as controller of organization, planning and procedure, is assigned a dominant role. (author)

  10. Method for automatic recontouring in adaptive radiotherapy for prostate cancer

    International Nuclear Information System (INIS)

    Rodriguez Vila, B.; Garcia Vicente, F.; Aguilera, E. J.

    2011-01-01

    Outlining the rectal wall quickly and accurately is important in Image Guided Radiotherapy (IGRT), as the rectum is the organ with the greatest influence on dose limits in the planning of radiation therapy for prostate cancer. Deformable registration methods based on image intensity cannot create a correct spatial transformation if there is no correspondence between the planning image and the session image. The variation in rectal content creates a non-correspondence in image intensity that becomes a major obstacle for deformable registration based on image intensity.

  11. A novel method of including Landau level mixing in numerical studies of the quantum Hall effect

    International Nuclear Information System (INIS)

    Wooten, Rachel; Quinn, John; Macek, Joseph

    2013-01-01

    Landau level mixing should influence the quantum Hall effect for all except the strongest applied magnetic fields. We propose a simple method for examining the effects of Landau level mixing by incorporating multiple Landau levels into the Haldane pseudopotentials through exact numerical diagonalization. Some of the resulting pseudopotentials for the lowest and first excited Landau levels will be presented

  12. Development of Extended Ray-tracing method including diffraction, polarization and wave decay effects

    Science.gov (United States)

    Yanagihara, Kota; Kubo, Shin; Dodin, Ilya; Nakamura, Hiroaki; Tsujimura, Toru

    2017-10-01

    Geometrical Optics Ray-tracing is a reasonable numerical analytic approach for describing the Electron Cyclotron resonance Wave (ECW) in slowly varying, spatially inhomogeneous plasma. It is well known that results with this conventional method are adequate in most cases. However, in the case of helical fusion plasma, which has a complicated magnetic structure, strong magnetic shear with a large density scale length can cause mode coupling of waves outside the last closed flux surface, and the complicated absorption structure requires a strongly focused wave for ECH. Since the conventional ray equations describing ECW have no terms for diffraction, polarization and wave decay effects, we cannot accurately describe mode coupling of waves, strongly focused waves, the behavior of waves in inhomogeneous absorption regions, and so on. As a fundamental solution to these problems, we consider an extension of the ray-tracing method. The specific process is planned as follows: first, calculate the reference ray by the conventional method and define a local ray-based coordinate system along the reference ray; then, calculate the evolution of the distributions of amplitude and phase on the ray-based coordinates step by step. The progress of our extended method will be presented.

  13. Indication of Importance of Including Soil Microbial Characteristics into Biotope Valuation Method.

    Czech Academy of Sciences Publication Activity Database

    Trögl, J.; Pavlorková, Jana; Packová, P.; Seják, J.; Kuráň, P.; Kuráň, J.; Popelka, J.; Pacina, J.

    2016-01-01

    Roč. 8, č. 3 (2016), č. článku 253. ISSN 2071-1050 Institutional support: RVO:67985858 Keywords : biotope assessment * biotope valuation method * soil microbial communities Subject RIV: DJ - Water Pollution ; Quality Impact factor: 1.789, year: 2016

  14. Method for Increasing the Efficiency of Automatic Fire Extinguish System at Objects Of Power

    Directory of Open Access Journals (Sweden)

    Dmitrienko Margarita

    2015-01-01

    Full Text Available Operation of energy facilities requires compliance with all safety standards, especially fire safety. Emergency situations that arise during operation of power equipment damage not only technosphere objects but also the environment. In recent years, a trend of quite intensive development of the technological basis of water-mist fire extinguishing can be noted. Using the optical panoramic imaging methods PIV and IPI and the method of high-speed video recording, experimental studies were performed on the evaporation characteristics of large single water droplets as they pass through the flames of oil and oil products, with varying process parameters (initial droplet size 2–6 mm, velocity 2–4 m/s, water droplet temperature 290–300 K, combustion product temperature 185–2073 K). The decisive influence of droplet size, the velocity at which droplets enter the gaseous medium, and the initial water temperature on the heating rate and evaporation of droplets in a stream of high-temperature combustion products was established.

  15. Method of preparing a negative electrode including lithium alloy for use within a secondary electrochemical cell

    Science.gov (United States)

    Tomczuk, Zygmunt; Olszanski, Theodore W.; Battles, James E.

    1977-03-08

    A negative electrode that includes a lithium alloy as active material is prepared by briefly submerging a porous, electrically conductive substrate within a melt of the alloy. Prior to solidification, excess melt can be removed by vibrating or otherwise manipulating the filled substrate to expose interstitial surfaces. Electrodes of such as solid lithium-aluminum filled within a substrate of metal foam are provided.

  16. Thick electrodes including nanoparticles having electroactive materials and methods of making same

    Science.gov (United States)

    Xiao, Jie; Lu, Dongping; Liu, Jun; Zhang, Jiguang; Graff, Gordon L.

    2017-02-21

    Electrodes having nanostructure and/or utilizing nanoparticles of active materials and having high mass loadings of the active materials can be made to be physically robust and free of cracks and pinholes. The electrodes include nanoparticles having electroactive material, which nanoparticles are aggregated with carbon into larger secondary particles. The secondary particles can be bound with a binder to form the electrode.

  17. Modelling of the automatic stabilization system of the aircraft course by a fuzzy logic method

    Science.gov (United States)

    Mamonova, T.; Syryamkin, V.; Vasilyeva, T.

    2016-04-01

    The problem addressed in the present paper concerns the development of a fuzzy model of an aircraft course stabilization system. In this work, modelling of the aircraft course stabilization system with the application of fuzzy logic is specified, using data for an ordinary passenger plane. As a result of the study, the stabilization system models were realised in the Simulink environment of the Matlab package on the basis of a PID regulator and fuzzy logic. The authors show that the use of this artificial intelligence method allows reducing the regulation time to 1, which is 50 times faster than when standard techniques of control theory are used. This fact demonstrates the positive influence of fuzzy regulation.

  18. Automatic neuron segmentation and neural network analysis method for phase contrast microscopy images.

    Science.gov (United States)

    Pang, Jincheng; Özkucur, Nurdan; Ren, Michael; Kaplan, David L; Levin, Michael; Miller, Eric L

    2015-11-01

    Phase Contrast Microscopy (PCM) is an important tool for the long term study of living cells. Unlike fluorescence methods which suffer from photobleaching of fluorophore or dye molecules, PCM image contrast is generated by the natural variations in optical index of refraction. Unfortunately, the same physical principles which allow for these studies give rise to complex artifacts in the raw PCM imagery. Of particular interest in this paper are neuron images where these image imperfections manifest in very different ways for the two structures of specific interest: cell bodies (somas) and dendrites. To address these challenges, we introduce a novel parametric image model using the level set framework and an associated variational approach which simultaneously restores and segments this class of images. Using this technique as the basis for an automated image analysis pipeline, results for both the synthetic and real images validate and demonstrate the advantages of our approach.

  19. Kmeans-ICA based automatic method for ocular artifacts removal in a motorimagery classification.

    Science.gov (United States)

    Bou Assi, Elie; Rihana, Sandy; Sawan, Mohamad

    2014-01-01

    Electroencephalogram (EEG) recordings are used as inputs of a motor imagery based BCI system. Eye blinks contaminate the spectral content of the EEG signals. Independent Component Analysis (ICA) has already been proven effective for removing these artifacts, whose frequency band overlaps with the EEG of interest. However, previously developed ICA methods use a reference lead such as the ElectroOculoGram (EOG) to identify the ocular artifact components. In this study, artifactual components were identified using adaptive thresholding by means of Kmeans clustering. The denoised EEG signals were fed into a feature extraction algorithm extracting the band power, the coherence and the phase locking value, and then into a linear discriminant analysis classifier for motor imagery classification.
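The adaptive-thresholding step can be sketched as two-means clustering of a per-component artifact score. The scores below are invented; in the actual method the values would be features computed from the ICA components themselves.

```python
def two_means_threshold(vals, iters=50):
    """1-D two-means clustering; returns (threshold, artifact flags)."""
    lo, hi = min(vals), max(vals)
    for _ in range(iters):
        near_lo = [v for v in vals if abs(v - lo) <= abs(v - hi)]
        near_hi = [v for v in vals if abs(v - lo) > abs(v - hi)]
        if not near_lo or not near_hi:       # degenerate split, keep centers
            break
        lo, hi = sum(near_lo) / len(near_lo), sum(near_hi) / len(near_hi)
    threshold = (lo + hi) / 2.0
    return threshold, [v > threshold for v in vals]

# hypothetical per-component scores: ocular (blink) components score high
scores = [0.05, 0.08, 0.12, 0.07, 0.91, 0.85, 0.10, 0.06]
threshold, is_artifact = two_means_threshold(scores)
print(threshold, is_artifact)
```

The point of the clustering is that the cutoff adapts to each recording instead of relying on a fixed value or an EOG reference channel.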

  20. Automatic measurement for solid state track detectors

    International Nuclear Information System (INIS)

    Ogura, Koichi

    1982-01-01

    Since tracks in solid state track detectors are measured with a microscope, observers are forced to do hard work that consumes time and labour. This leads to poor statistical accuracy or to personal error. Therefore, much research has been done aiming at simplifying and automating track measurement. There are two categories of automated measurement: simple counting of the number of tracks, and measurement that also requires geometrical elements such as the size of tracks or their coordinates. The former is called automatic counting and the latter automatic analysis. The general approach for evaluating the number of tracks in automatic counting is estimation of the total number of tracks over the whole detector area or within a microscope field of view; it is suitable when the track density is high. Methods that count tracks one by one include spark counting and the scanning microdensitometer. Automatic analysis includes video image analysis, in which high-quality images obtained with a high-resolution video camera are processed with a micro-computer, and the tracks are automatically recognized and measured by feature extraction. This method is described in detail. Among the many kinds of automatic measurement reported so far, the most frequently used are ''spark counting'' and ''video image analysis''. (Wakatsuki, Y.)

  1. Method for pulse control in a laser including a stimulated brillouin scattering mirror system

    Science.gov (United States)

    Dane, C. Brent; Hackel, Lloyd; Harris, Fritz B.

    2007-10-23

    A laser system, such as a master oscillator/power amplifier system, comprises a gain medium and a stimulated Brillouin scattering SBS mirror system. The SBS mirror system includes an in situ filtered SBS medium that comprises a compound having a small negative non-linear index of refraction, such as a perfluoro compound. An SBS relay telescope having a telescope focal point includes a baffle at the telescope focal point which blocks off angle beams. A beam splitter is placed between the SBS mirror system and the SBS relay telescope, directing a fraction of the beam to an alternate beam path for an alignment fiducial. The SBS mirror system has a collimated SBS cell and a focused SBS cell. An adjustable attenuator is placed between the collimated SBS cell and the focused SBS cell, by which pulse width of the reflected beam can be adjusted.

  2. An innovative method for automatic determination of time of arrival for Lamb waves excited by impact events

    Science.gov (United States)

    Zhu, Junxiao; Parvasi, Seyed Mohammad; Ho, Siu Chun Michael; Patil, Devendra; Ge, Maochen; Li, Hongnan; Song, Gangbing

    2017-05-01

    Lamb waves have great potential as a diagnostic tool in structural health monitoring. The propagation properties of Lamb waves are affected by the state of the structure they travel through, so Lamb waves can carry information about the structure as they travel across it. However, the dispersive, multimodal and attenuating characteristics of Lamb waves make it difficult to determine their time of arrival. To deal with these characteristics, an innovative method to automatically determine the time of arrival of impact-induced Lamb waves without human intervention is proposed in this paper. Lead zirconate titanate sensors mounted on the surface of an aluminum plate were used to measure the Lamb waves excited by an impact. The time of arrival was determined based on wavelet decomposition, the Hilbert transform and statistics (Grubbs' test and maximum likelihood estimation). Both numerical analysis and physical measurements have verified the accuracy of this method for impacts on an aluminum plate.
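
    A much-simplified, hypothetical stand-in for the arrival-picking pipeline can convey the idea: instead of wavelet decomposition and the Hilbert transform, take a moving-average envelope of the rectified signal and flag the arrival as the first sample that is a statistical outlier relative to a leading noise window, the same notion that Grubbs' test formalizes. The window lengths and the outlier factor k below are arbitrary choices, not the paper's parameters.

```python
import statistics

def arrival_index(signal, noise_len=50, win=5, k=5.0):
    # Moving-average envelope of the rectified signal (a crude substitute
    # for a Hilbert-transform envelope).
    env = [statistics.fmean(abs(v) for v in signal[max(0, i - win):i + 1])
           for i in range(len(signal))]
    # Noise statistics from a leading window assumed to precede the impact.
    mu = statistics.fmean(env[:noise_len])
    sigma = statistics.pstdev(env[:noise_len]) or 1e-12
    # The first sample whose envelope is a k-sigma outlier is the pick.
    for i in range(noise_len, len(env)):
        if (env[i] - mu) / sigma > k:
            return i
    return None
```

    On a synthetic trace of low-level noise followed by a burst, the pick lands at the onset of the burst.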

  3. AUTOMATIC EXTRACTION OF ROCK JOINTS FROM LASER SCANNED DATA BY MOVING LEAST SQUARES METHOD AND FUZZY K-MEANS CLUSTERING

    Directory of Open Access Journals (Sweden)

    S. Oh

    2012-09-01

    Recent development of laser scanning devices has increased the capability of representing rock outcrops at very high resolution. An accurate 3D point cloud model with rock joint information can help geologists estimate the stability of a rock slope on-site or off-site. An automatic plane extraction method was developed by computing normal directions and grouping points with similar directions. Point normals were calculated by the moving least squares (MLS) method, considering every point within a given distance so as to minimize the error to the fitting plane. Normal directions were classified into a number of dominating clusters by fuzzy K-means clustering. A region growing approach was exploited to discriminate joints in the point cloud. The overall procedure was applied to a point cloud with about 120,000 points, and successfully extracted joints together with joint information. The extraction procedure was implemented to minimize the number of input parameters and to embed plane information into the existing point cloud, for less redundancy and higher usability of the point cloud itself.
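
    The normal-estimation step can be sketched numerically, with a plain least-squares plane fit standing in for the paper's weighted MLS fit: each point's normal is taken as the smallest-eigenvalue eigenvector of its local neighborhood covariance, after which the normals could be grouped by (fuzzy) K-means into joint sets. The radius and data below are invented for illustration.

```python
import numpy as np

def point_normals(points, radius):
    # Normal = eigenvector of the neighborhood covariance with the
    # smallest eigenvalue (the direction of least spread).
    pts = np.asarray(points, dtype=float)
    normals = []
    for p in pts:
        nbrs = pts[np.linalg.norm(pts - p, axis=1) <= radius]
        w, v = np.linalg.eigh(np.cov(nbrs.T))   # eigh: eigenvalues ascending
        n = v[:, 0]
        normals.append(n if n[2] >= 0 else -n)  # consistent orientation
    return np.array(normals)
```

    For points sampled on a horizontal plane, every estimated normal points (up to sign) along the z axis, which is what the subsequent clustering step relies on.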

  4. Impact of Including Authentic Inquiry Experiences in Methods Courses for Pre-Service Secondary Teachers

    Science.gov (United States)

    Slater, T. F.; Elfring, L.; Novodvorsky, I.; Talanquer, V.; Quintenz, J.

    2007-12-01

    Science education reform documents universally call for students to have authentic and meaningful experiences using real data in the context of their science education. The underlying philosophical position is that students analyzing data can have experiences that mimic actual research. In short, research experiences that reflect the scientific spirit of inquiry can potentially: prepare students to address real world complex problems; develop students' ability to use scientific methods; prepare students to critically evaluate the validity of data or evidence and of the consequent interpretations or conclusions; teach quantitative skills, technical methods, and scientific concepts; increase verbal, written, and graphical communication skills; and train students in the values and ethics of working with scientific data. However, it is unclear what the broader pre-service teacher preparation community is doing to prepare future teachers to promote, manage, and successfully facilitate their own students in conducting authentic scientific inquiry. Surveys of undergraduates in secondary science education programs suggest that students have had almost no experience themselves in conducting open scientific inquiry in which they develop researchable questions, design strategies to pursue evidence, and communicate data-based conclusions. In response, the College of Science Teacher Preparation Program at the University of Arizona requires all students enrolled in its various science teaching methods courses to complete an open inquiry research project and defend their findings at a specially designed inquiry science mini-conference at the end of the term. End-of-term surveys show that students enjoy their research experience and believe that this experience enhances their ability to facilitate their own future students in conducting open inquiry.

  5. An automatic iris occlusion estimation method based on high-dimensional density estimation.

    Science.gov (United States)

    Li, Yung-Hui; Savvides, Marios

    2013-04-01

    Iris masks play an important role in iris recognition. They indicate which part of the iris texture map is useful and which part is occluded or contaminated by noisy image artifacts such as eyelashes, eyelids, eyeglasses frames, and specular reflections. The accuracy of the iris mask is extremely important: the performance of the iris recognition system will decrease dramatically when the iris mask is inaccurate, even when the best recognition algorithm is used. Traditionally, rule-based algorithms have been used to estimate iris masks from iris images, but the accuracy of the masks generated this way is questionable. In this work, we propose to use Figueiredo and Jain's Gaussian Mixture Models (FJ-GMMs) to model the underlying probabilistic distributions of both valid and invalid regions on iris images. We also explored possible features and found that a Gabor Filter Bank (GFB) provides the most discriminative information for our goal. Finally, we applied the Simulated Annealing (SA) technique to optimize the parameters of the GFB in order to achieve the best recognition rate. Experimental results show that the masks generated by the proposed algorithm increase the iris recognition rate on both the ICE2 and UBIRIS datasets, verifying the effectiveness and importance of the proposed method for iris occlusion estimation.
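
    A deliberately minimal numeric stand-in for the masking idea: model each class (valid iris texture vs. occlusion) with a single Gaussian over one scalar feature and label each pixel by the higher likelihood. The actual method fits full Gaussian mixtures (FJ-GMMs) over Gabor-filter-bank responses; the feature values below are invented.

```python
import math
import statistics

def fit_gaussian(samples):
    # One Gaussian per class, fitted to labeled training feature values.
    return statistics.fmean(samples), statistics.pstdev(samples) or 1e-9

def log_pdf(x, mu, sigma):
    # Log-density up to a constant, enough for likelihood comparison.
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma)

def classify(pixels, valid_train, invalid_train):
    # Label each pixel feature by the class with higher likelihood.
    g_valid = fit_gaussian(valid_train)
    g_invalid = fit_gaussian(invalid_train)
    return ['valid' if log_pdf(p, *g_valid) >= log_pdf(p, *g_invalid)
            else 'occluded' for p in pixels]
```

    A pixel whose feature value sits near the valid-texture cluster is kept in the mask; one near the occlusion cluster is masked out.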

  6. Comparison of some biased estimation methods (including ordinary subset regression) in the linear model

    Science.gov (United States)

    Sidik, S. M.

    1975-01-01

    Ridge, Marquardt's generalized inverse, shrunken, and principal components estimators are discussed in terms of the objectives of point estimation of parameters, estimation of the predictive regression function, and hypothesis testing. It is found that as the normal equations approach singularity, more consideration must be given to estimable functions of the parameters as opposed to estimation of the full parameter vector; that biased estimators all introduce constraints on the parameter space; that adoption of mean squared error as a criterion of goodness should be independent of the degree of singularity; and that ordinary least-squares subset regression is the best overall method.
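
    The point about near-singular normal equations can be seen in a few lines of numpy (synthetic data, not from the report): with two almost-collinear predictors, ordinary least-squares coefficients inflate to fit noise, while a small ridge penalty shrinks them, trading bias for stability.

```python
import numpy as np

def ols(X, y):
    # Ordinary least squares via the normal equations.
    return np.linalg.solve(X.T @ X, X.T @ y)

def ridge(X, y, lam):
    # Ridge regression: add lam * I to the normal equations.
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
```

    With `x2` nearly identical to `x1`, the OLS coefficient vector is much longer than the ridge one, illustrating the constraint biased estimators place on the parameter space.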

  7. Position automatic determination technology

    International Nuclear Information System (INIS)

    1985-10-01

    This book covers methods of position determination and their characteristics, control methods for position determination and design considerations, sensor selection for position detectors, position determination in digital control systems, the application of clutches and brakes in high-frequency position determination, automation techniques for position determination, position determination by electromagnetic clutch and brake, air cylinder, cam and solenoid, stop-position control of automatic guided vehicles, and stacker cranes and automatic transfer control.

  8. Use of the potentiometric titration method to investigate heterogeneous systems including phosphorylated complexones

    International Nuclear Information System (INIS)

    Tereshin, G.S.; Kharitonova, L.K.; Kuznetsova, O.B.

    1979-01-01

    Heterogeneous systems Y(NO3)3 (YCl3)-HnL-KNO3 (KCl)-H2O are investigated by potentiometric titration (with coulometric generation of OH- ions). HnL is one of the following: oxyethylidenediphosphonic, aminobenzylidenediphosphonic, glycine-bis-methylphosphonic, nitrilotrimethylphosphonic (H6L) and ethylenediaminetetramethylphosphonic acids. The range of existence of YH(n-3)L·yH2O has been determined. The possibility of using potentiometric titration for investigating heterogeneous systems is demonstrated by the study of the system Y(NO3)3-H6L-KOH-H2O by the method of residual concentration. The two methods have shown that at pH 3, YH3L·yH2O forms; at pH = 6, KYH2L·y'H2O; and at pH = 7, K2YHL·y''H2O. The complete solubility products of the nitrilotrimethylphosphonates are evaluated

  9. A convolution method for predicting mean treatment dose including organ motion at imaging

    International Nuclear Information System (INIS)

    Booth, J.T.; Zavgorodni, S.F.; Royal Adelaide Hospital, SA

    2000-01-01

    The random treatment delivery errors (organ motion and set-up error) can be incorporated into the treatment planning software using a convolution method. Mean treatment dose is computed as the convolution of a static dose distribution with a variation kernel. Typically this variation kernel is Gaussian, with variance equal to the sum of the organ motion and set-up error variances. We propose a novel variation kernel for the convolution technique that additionally considers the position of the mobile organ in the planning CT image. The systematic error of organ position in the planning CT image can be considered random for each patient over a population. Thus the variance of the variation kernel will equal the sum of the treatment delivery variance and the organ motion variance at planning for the population of treatments. The kernel is extended to deal with multiple pre-treatment CT scans to improve tumour localisation for planning. Mean treatment doses calculated with the convolution technique are compared to benchmark Monte Carlo (MC) computations. Calculations of mean treatment dose using the convolution technique agreed with MC results for all cases to better than ±1 Gy in the planning treatment volume for a prescribed 60 Gy treatment. Convolution provides a quick method of incorporating random organ motion (captured in the planning CT image and during treatment delivery) and random set-up errors directly into the dose distribution. Copyright (2000) Australasian College of Physical Scientists and Engineers in Medicine
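
    The kernel-combination step can be sketched in one dimension (hypothetical numbers, not a clinical calculation): the static dose profile is convolved with a Gaussian whose variance is the sum of the organ-motion and set-up variances, giving the expected mean delivered dose.

```python
import math

def gaussian_kernel(sigma, half_width):
    # Discrete, normalized Gaussian kernel truncated at +/- half_width.
    k = [math.exp(-0.5 * (i / sigma) ** 2)
         for i in range(-half_width, half_width + 1)]
    s = sum(k)
    return [v / s for v in k]

def mean_dose(static_dose, sigma_motion, sigma_setup, half_width=15):
    # Variances of independent random errors add.
    sigma = math.sqrt(sigma_motion ** 2 + sigma_setup ** 2)
    kern = gaussian_kernel(sigma, half_width)
    n = len(static_dose)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kern):
            idx = min(max(i + j - half_width, 0), n - 1)  # clamp at the edges
            acc += w * static_dose[idx]
        out.append(acc)
    return out
```

    The flat central region of a 60 Gy profile is essentially unchanged, while the field edges are blurred, exactly the qualitative effect of random motion on the delivered dose.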

  10. System and method for detecting components of a mixture including a valving scheme for competition assays

    Energy Technology Data Exchange (ETDEWEB)

    Koh, Chung-Yan; Piccini, Matthew E.; Singh, Anup K.

    2017-09-19

    Examples are described including measurement systems for conducting competition assays. A first chamber of an assay device may be loaded with a sample containing a target antigen. The target antigen in the sample may be allowed to bind to antibody-coated beads in the first chamber. A control layer separating the first chamber from a second chamber may then be opened to allow a labeling agent loaded in a first portion of the second chamber to bind to any unoccupied sites on the antibodies. A centrifugal force may then be applied to transport the beads through a density medium to a detection region for measurement by a detection unit.

  11. System and method for detecting components of a mixture including a valving scheme for competition assays

    Science.gov (United States)

    Koh, Chung-Yan; Piccini, Matthew E.; Singh, Anup K.

    2017-07-11

    Examples are described including measurement systems for conducting competition assays. A first chamber of an assay device may be loaded with a sample containing a target antigen. The target antigen in the sample may be allowed to bind to antibody-coated beads in the first chamber. A control layer separating the first chamber from a second chamber may then be opened to allow a labeling agent loaded in a first portion of the second chamber to bind to any unoccupied sites on the antibodies. A centrifugal force may then be applied to transport the beads through a density medium to a detection region for measurement by a detection unit.

  12. Electromagnetic Radiation : Variational Methods, Waveguides and Accelerators Including seminal papers of Julian Schwinger

    CERN Document Server

    Milton, Kimball A

    2006-01-01

    This is a graduate level textbook on the theory of electromagnetic radiation and its application to waveguides, transmission lines, accelerator physics and synchrotron radiation. It has grown out of lectures and manuscripts by Julian Schwinger prepared during the war at MIT's Radiation Laboratory, updated with material developed by Schwinger at UCLA in the 1970s and 1980s, and by Milton at the University of Oklahoma since 1994. The book includes a great number of straightforward and challenging exercises and problems. It is addressed to students in physics, electrical engineering, and applied mathematics seeking a thorough introduction to electromagnetism with emphasis on radiation theory and its applications.

  13. Thai Automatic Speech Recognition

    National Research Council Canada - National Science Library

    Suebvisai, Sinaporn; Charoenpornsawat, Paisarn; Black, Alan; Woszczyna, Monika; Schultz, Tanja

    2005-01-01

    .... We focus on the discussion of the rapid deployment of ASR for Thai under limited time and data resources, including rapid data collection issues, acoustic model bootstrap, and automatic generation of pronunciations...

  14. Fetal Intelligent Navigation Echocardiography (FINE): a novel method for rapid, simple, and automatic examination of the fetal heart.

    Science.gov (United States)

    Yeo, Lami; Romero, Roberto

    2013-09-01

    To describe a novel method (Fetal Intelligent Navigation Echocardiography (FINE)) for visualization of standard fetal echocardiography views from volume datasets obtained with spatiotemporal image correlation (STIC) and application of 'intelligent navigation' technology. We developed a method to: 1) demonstrate nine cardiac diagnostic planes; and 2) spontaneously navigate the anatomy surrounding each of the nine cardiac diagnostic planes (Virtual Intelligent Sonographer Assistance (VIS-Assistance®)). The method consists of marking seven anatomical structures of the fetal heart. The following echocardiography views are then automatically generated: 1) four chamber; 2) five chamber; 3) left ventricular outflow tract; 4) short-axis view of great vessels/right ventricular outflow tract; 5) three vessels and trachea; 6) abdomen/stomach; 7) ductal arch; 8) aortic arch; and 9) superior and inferior vena cava. The FINE method was tested in a separate set of 50 STIC volumes of normal hearts (18.6-37.2 weeks of gestation), and visualization rates for fetal echocardiography views using diagnostic planes and/or VIS-Assistance® were calculated. To examine the feasibility of identifying abnormal cardiac anatomy, we tested the method in four cases with proven congenital heart defects (coarctation of aorta, tetralogy of Fallot, transposition of great vessels and pulmonary atresia with intact ventricular septum). In normal cases, the FINE method was able to generate nine fetal echocardiography views using: 1) diagnostic planes in 78-100% of cases; 2) VIS-Assistance® in 98-100% of cases; and 3) a combination of diagnostic planes and/or VIS-Assistance® in 98-100% of cases. In all four abnormal cases, the FINE method demonstrated evidence of abnormal fetal cardiac anatomy. The FINE method can be used to visualize nine standard fetal echocardiography views in normal hearts by applying 'intelligent navigation' technology to STIC volume datasets. This method can simplify

  15. Performance of human observers and an automatic 3-dimensional computer-vision-based locomotion scoring method to detect lameness and hoof lesions in dairy cows

    NARCIS (Netherlands)

    Schlageter-Tello, Andrés; Hertem, Van Tom; Bokkers, Eddie A.M.; Viazzi, Stefano; Bahr, Claudia; Lokhorst, Kees

    2018-01-01

    The objective of this study was to determine if a 3-dimensional computer vision automatic locomotion scoring (3D-ALS) method was able to outperform human observers for classifying cows as lame or nonlame and for detecting cows affected and nonaffected by specific type(s) of hoof lesion. Data

  16. Application of automatic change of interval to de Vogelaere's method of the solution of the differential equation y'' = f (x, y)

    International Nuclear Information System (INIS)

    Rogers, M.H.

    1960-11-01

    The paper gives an extension to de Vogelaere's method for the solution of systems of second order differential equations from which first derivatives are absent. The extension is a description of the way in which automatic change in step-length can be made to give a prescribed accuracy at each step. (author)
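
    The idea of automatic step-length change for y'' = f(x, y) can be sketched with step-doubling error control (using a generic second-order integrator, not de Vogelaere's actual formulas): compare one full step with two half steps, and halve or lengthen h to hold the local error near a prescribed tolerance.

```python
def leapfrog_step(f, x, y, v, h):
    # One velocity-Verlet step for y'' = f(x, y); v is y'.
    a = f(x, y)
    y2 = y + h * v + 0.5 * h * h * a
    v2 = v + 0.5 * h * (a + f(x + h, y2))
    return y2, v2

def integrate(f, x0, y0, v0, x_end, h=0.1, tol=1e-6):
    x, y, v = x0, y0, v0
    while x < x_end:
        h = min(h, x_end - x)
        y_full, v_full = leapfrog_step(f, x, y, v, h)
        y_half, v_half = leapfrog_step(f, x, y, v, h / 2)
        y_two, v_two = leapfrog_step(f, x + h / 2, y_half, v_half, h / 2)
        err = abs(y_two - y_full)         # step-doubling error estimate
        if err > tol and h > 1e-8:
            h /= 2                        # too inaccurate: retry with a smaller step
            continue
        x, y, v = x + h, y_two, v_two     # accept the more accurate two-step value
        if err < tol / 64:
            h *= 2                        # comfortably accurate: lengthen the step
    return y, v
```

    For y'' = -y with y(0) = 0, y'(0) = 1, the solution is sin(x), so integrating to pi/2 should return values close to (1, 0).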

  17. Consensus for nonmelanoma skin cancer treatment: basal cell carcinoma, including a cost analysis of treatment methods.

    Science.gov (United States)

    Kauvar, Arielle N B; Cronin, Terrence; Roenigk, Randall; Hruza, George; Bennett, Richard

    2015-05-01

    Basal cell carcinoma (BCC) is the most common cancer in the US population affecting approximately 2.8 million people per year. Basal cell carcinomas are usually slow-growing and rarely metastasize, but they do cause localized tissue destruction, compromised function, and cosmetic disfigurement. To provide clinicians with guidelines for the management of BCC based on evidence from a comprehensive literature review, and consensus among the authors. An extensive review of the medical literature was conducted to evaluate the optimal treatment methods for cutaneous BCC, taking into consideration cure rates, recurrence rates, aesthetic and functional outcomes, and cost-effectiveness of the procedures. Surgical approaches provide the best outcomes for BCCs. Mohs micrographic surgery provides the highest cure rates while maximizing tissue preservation, maintenance of function, and cosmesis. Mohs micrographic surgery is an efficient and cost-effective procedure and remains the treatment of choice for high-risk BCCs and for those in cosmetically sensitive locations. Nonsurgical modalities may be used for low-risk BCCs when surgery is contraindicated or impractical, but the cure rates are lower.

  18. Development method of Hybrid Energy Storage System, including PEM fuel cell and a battery

    International Nuclear Information System (INIS)

    Ustinov, A; Khayrullina, A; Khmelik, M; Sveshnikova, A; Borzenko, V

    2016-01-01

    Development of fuel cell (FC) and hydrogen metal-hydride (MH) storage technologies continuously demonstrates higher efficiency and safety, as hydrogen is stored in a bound state at low pressures of about 2 bar. Combining a FC/MH system with an electrolyser powered by a renewable source allows the creation of an almost fully autonomous power system, which could potentially replace a diesel generator as a back-up power supply. However, the system must be extended with an electrochemical battery to start up the FC and to compensate the electric load when the FC fails to deliver the necessary power. The present paper delivers the results of experimental and theoretical investigation of a hybrid energy system including a proton exchange membrane (PEM) FC, an MH accumulator and an electrochemical battery, a development methodology for such systems, and the modelling of different battery types using a hardware-in-the-loop approach. The economic efficiency of the proposed solution is discussed using the example of the power supply of the real town of Batamai in Russia. (paper)

  19. Development method of Hybrid Energy Storage System, including PEM fuel cell and a battery

    Science.gov (United States)

    Ustinov, A.; Khayrullina, A.; Borzenko, V.; Khmelik, M.; Sveshnikova, A.

    2016-09-01

    Development of fuel cell (FC) and hydrogen metal-hydride (MH) storage technologies continuously demonstrates higher efficiency and safety, as hydrogen is stored in a bound state at low pressures of about 2 bar. Combining a FC/MH system with an electrolyser powered by a renewable source allows the creation of an almost fully autonomous power system, which could potentially replace a diesel generator as a back-up power supply. However, the system must be extended with an electrochemical battery to start up the FC and to compensate the electric load when the FC fails to deliver the necessary power. The present paper delivers the results of experimental and theoretical investigation of a hybrid energy system including a proton exchange membrane (PEM) FC, an MH accumulator and an electrochemical battery, a development methodology for such systems, and the modelling of different battery types using a hardware-in-the-loop approach. The economic efficiency of the proposed solution is discussed using the example of the power supply of the real town of Batamai in Russia.

  20. A Survey of Automatic Protocol Reverse Engineering Approaches, Methods, and Tools on the Inputs and Outputs View

    Directory of Open Access Journals (Sweden)

    Baraka D. Sija

    2018-01-01

    A network protocol defines rules that control communications between two or more machines on the Internet, whereas Automatic Protocol Reverse Engineering (APRE) defines the way of extracting the structure of a network protocol without access to its specifications. Sufficient knowledge of undocumented protocols is essential for security purposes, network policy implementation, and management of network resources. This paper reviews and analyzes a total of 39 approaches, methods, and tools for Protocol Reverse Engineering (PRE) and classifies them into four divisions: approaches that reverse engineer protocol finite state machines, approaches that reverse engineer protocol formats, approaches that reverse engineer both, and approaches that focus directly on neither. The efficiency of each approach's outputs given its selected inputs is analyzed in general, along with the appropriate reverse engineering input formats. Additionally, we present a discussion and an extended classification in terms of automated versus manual approaches, known versus novel categories of reverse engineered protocols, and a literature review of reverse engineered protocols in relation to the seven-layer OSI (Open Systems Interconnection) model.

  1. Evaluation of automatic cloud removal method for high elevation areas in Landsat 8 OLI images to improve environmental indexes computation

    Science.gov (United States)

    Alvarez, César I.; Teodoro, Ana; Tierra, Alfonso

    2017-10-01

    Thin clouds are frequent in optical remote sensing data and in most cases prevent obtaining the pure surface measurements needed to calculate indexes such as the Normalized Difference Vegetation Index (NDVI). This paper evaluates the Automatic Cloud Removal Method (ACRM) algorithm over a high-elevation city, Quito (Ecuador), at an altitude of 2800 meters above sea level, where clouds are present throughout the year. The ACRM algorithm fits a linear regression between each Landsat 8 OLI band and the cirrus band and uses the resulting slope to remove the clouds; it was applied without any reference image or mask. Applied over Quito, the ACRM algorithm did not perform well, so the algorithm was improved by using a different slope value (improved ACRM). The NDVI computed after correction was then compared with reference NDVI MODIS data (MOD13Q1). The improved ACRM algorithm produced successful results compared with the original ACRM algorithm. In the future, the improved ACRM algorithm needs to be tested in different regions of the world, under different conditions, to evaluate whether it works successfully in all of them.
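
    The regression step described above can be sketched as follows (synthetic reflectance values, not Landsat 8 calibrations; the real ACRM works per band over whole scenes): fit a least-squares slope of an optical band against the cirrus band, then subtract slope x cirrus from the band to suppress the thin-cloud signal.

```python
def regression_slope(band, cirrus):
    # Ordinary least-squares slope of band values against cirrus values.
    n = len(band)
    mb, mc = sum(band) / n, sum(cirrus) / n
    cov = sum((b - mb) * (c - mc) for b, c in zip(band, cirrus))
    var = sum((c - mc) ** 2 for c in cirrus) or 1.0
    return cov / var

def remove_thin_cloud(band, cirrus):
    # Subtract the cirrus-correlated component from the band.
    a = regression_slope(band, cirrus)
    return [b - a * c for b, c in zip(band, cirrus)]
```

    If a band is surface reflectance plus a cloud term proportional to the cirrus band, the correction recovers the surface term.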

  2. Imaging different components of a tectonic tremor sequence in southwestern Japan using an automatic statistical detection and location method

    Science.gov (United States)

    Poiata, Natalia; Vilotte, Jean-Pierre; Bernard, Pascal; Satriano, Claudio; Obara, Kazushige

    2018-06-01

    In this study, we demonstrate the capability of an automatic network-based detection and location method to extract and analyse different components of tectonic tremor activity by analysing a 9-day energetic tectonic tremor sequence occurring at the downdip extension of the subducting slab in southwestern Japan. The applied method exploits the coherency of multiscale, frequency-selective characteristics of non-stationary signals recorded across the seismic network. Use of different characteristic functions in the signal-processing step of the method allows the extraction and location of the sources of short-duration impulsive signal transients associated with low-frequency earthquakes and of longer-duration energy transients during the tectonic tremor sequence. Frequency-dependent characteristic functions, based on higher-order statistics' properties of the seismic signals, are used for the detection and location of low-frequency earthquakes. This allows extracting a more complete (~6.5 times more events) and time-resolved catalogue of low-frequency earthquakes than the routine catalogue provided by the Japan Meteorological Agency. As such, this catalogue allows resolving the space-time evolution of the low-frequency earthquakes activity in great detail, unravelling spatial and temporal clustering, modulation in response to tide, and different scales of space-time migration patterns. In the second part of the study, the detection and source location of longer-duration signal energy transients within the tectonic tremor sequence is performed using characteristic functions built from smoothed frequency-dependent energy envelopes. This leads to a catalogue of longer-duration energy sources during the tectonic tremor sequence, characterized by their durations and 3-D spatial likelihood maps of the energy-release source regions. The summary 3-D likelihood map for the 9-day tectonic tremor sequence, built from this catalogue, exhibits an along-strike spatial segmentation of
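
    As a toy illustration of a higher-order-statistics characteristic function of the kind the method relies on (window length and data invented): sliding-window kurtosis of a seismogram spikes when an impulsive transient, such as a low-frequency earthquake, enters the window, turning the raw trace into a detection function.

```python
import statistics

def kurtosis_cf(signal, win=50):
    # Sliding-window excess kurtosis: roughly constant for stationary
    # noise, strongly positive when a window contains an impulsive event.
    cf = []
    for i in range(win, len(signal) + 1):
        w = signal[i - win:i]
        m = statistics.fmean(w)
        s = statistics.pstdev(w) or 1e-12
        cf.append(sum(((v - m) / s) ** 4 for v in w) / win - 3.0)
    return cf
```

    The characteristic function peaks only for windows containing the spike, which is what makes it usable as a detection and picking function across a network.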

  3. Imaging different components of a tectonic tremor sequence in southwestern Japan using an automatic statistical detection and location method

    Science.gov (United States)

    Poiata, Natalia; Vilotte, Jean-Pierre; Bernard, Pascal; Satriano, Claudio; Obara, Kazushige

    2018-02-01

    In this study, we demonstrate the capability of an automatic network-based detection and location method to extract and analyse different components of tectonic tremor activity by analysing a 9-day energetic tectonic tremor sequence occurring at the down-dip extension of the subducting slab in southwestern Japan. The applied method exploits the coherency of multi-scale, frequency-selective characteristics of non-stationary signals recorded across the seismic network. Use of different characteristic functions in the signal-processing step of the method allows the extraction and location of the sources of short-duration impulsive signal transients associated with low-frequency earthquakes and of longer-duration energy transients during the tectonic tremor sequence. Frequency-dependent characteristic functions, based on higher-order statistics' properties of the seismic signals, are used for the detection and location of low-frequency earthquakes. This allows extracting a more complete (~6.5 times more events) and time-resolved catalogue of low-frequency earthquakes than the routine catalogue provided by the Japan Meteorological Agency. As such, this catalogue allows resolving the space-time evolution of the low-frequency earthquakes activity in great detail, unravelling spatial and temporal clustering, modulation in response to tide, and different scales of space-time migration patterns. In the second part of the study, the detection and source location of longer-duration signal energy transients within the tectonic tremor sequence is performed using characteristic functions built from smoothed frequency-dependent energy envelopes. This leads to a catalogue of longer-duration energy sources during the tectonic tremor sequence, characterized by their durations and 3-D spatial likelihood maps of the energy-release source regions. The summary 3-D likelihood map for the 9-day tectonic tremor sequence, built from this catalogue, exhibits an along-strike spatial segmentation of

  4. Automatic Earthquake Shear Stress Measurement Method Developed for Accurate Time- Prediction Analysis of Forthcoming Major Earthquakes Along Shallow Active Faults

    Science.gov (United States)

    Serata, S.

    2006-12-01

    The Serata Stressmeter has been developed to measure and monitor earthquake shear stress build-up along shallow active faults. The development work of the past 25 years has established the Stressmeter as an automatic stress measurement system for studying the timing of forthcoming major earthquakes, in support of current earthquake prediction studies based on statistical analysis of seismological observations. In early 1982, a series of major man-made earthquakes (magnitude 4.5-5.0) suddenly occurred in an area over a deep underground potash mine in Saskatchewan, Canada. By measuring the underground stress condition of the mine, the direct cause of the earthquakes was disclosed, and the cause was successfully eliminated by controlling the stress condition of the mine. The Japanese government was interested in this development, and the Stressmeter was introduced into the Japanese government research program for earthquake stress studies. In Japan the Stressmeter was first utilized for direct measurement of the intrinsic lateral tectonic stress gradient G. The measurement, conducted at the Mt. Fuji Underground Research Center of the Japanese government, disclosed constant natural gradients of maximum and minimum lateral stresses in excellent agreement with the theoretical value, i.e., G = 0.25. All the conventional methods of overcoring, hydrofracturing and deformation, which were introduced to compete with the Serata method, failed, demonstrating the fundamental difficulties of the conventional methods. The intrinsic lateral stress gradient determined by the Stressmeter for the Japanese government was found to be the same as in all the other measurements made by the Stressmeter in Japan. The stress measurement results obtained by the major international stress measurement work in the Hot Dry Rock Projects conducted in the USA, England and Germany are in good agreement with the Stressmeter results obtained in Japan. Based on this broad agreement, a solid geomechanical

  5. Calculation of left ventricular volume and ejection fraction from ECG-gated myocardial SPECT. Automatic detection of endocardial borders by threshold method

    International Nuclear Information System (INIS)

    Fukushi, Shoji; Teraoka, Satomi.

    1997-01-01

    A new method has been designed that calculates end-diastolic volume (EDV), end-systolic volume (ESV) and ejection fraction (LVEF) of the left ventricle from myocardial short-axis images of ECG-gated SPECT using a 99mTc myocardial perfusion tracer. ECG-gated 180-degree SPECT was performed with eight frames per cardiac cycle. A threshold method was used to detect myocardial borders automatically; the optimal threshold, determined with a myocardial SPECT phantom, was 45%. To determine whether EDV, ESV and LVEF can be calculated reliably by this method, results in 12 patients were compared with left ventriculography (LVG) performed within 10 days. The correlation coefficient with LVG was 0.918 (EDV), 0.935 (ESV) and 0.900 (LVEF). This method offers excellent objectivity and reproducibility because the myocardial borders are detected automatically. It also provides useful information on heart function in addition to myocardial perfusion. (author)
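
    As a rough illustration of the border-detection step, a fixed-fraction threshold can be applied to a count image; the 45% figure comes from the abstract, while the image values and function names below are purely hypothetical:

```python
import numpy as np

def threshold_mask(image, fraction=0.45):
    # Binary mask of pixels at or above `fraction` of the maximum count;
    # a 45% threshold was reported as optimal for the SPECT phantom.
    return image >= fraction * image.max()

def ejection_fraction(edv, esv):
    # LVEF from end-diastolic and end-systolic left ventricular volumes.
    return (edv - esv) / edv

# Toy count image (not real SPECT data)
img = np.array([[10.0, 50.0, 100.0],
                [20.0, 60.0,  90.0],
                [ 5.0, 30.0,  40.0]])
mask = threshold_mask(img)              # True where counts >= 45
print(int(mask.sum()))                  # -> 4 pixels inside the border
print(ejection_fraction(120.0, 48.0))   # -> 0.6
```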

  6. Additivity methods for prediction of thermochemical properties. The Laidler method revisited. 2. Hydrocarbons including substituted cyclic compounds

    International Nuclear Information System (INIS)

    Santos, Rui C.; Leal, Joao P.; Martinho Simoes, Jose A.

    2009-01-01

    A revised parameterization of the extended Laidler method for predicting standard molar enthalpies of atomization and standard molar enthalpies of formation at T = 298.15 K for several families of hydrocarbons (alkanes, alkenes, alkynes, polyenes, poly-ynes, cycloalkanes, substituted cycloalkanes, cycloalkenes, substituted cycloalkenes, benzene derivatives, and bi- and polyphenyls) is presented. Data for a total of 265 gas-phase and 242 liquid-phase compounds were used for the calculation of the parameters. Comparison of the experimental values with those obtained using the additive scheme led to an average absolute difference of 0.73 kJ·mol⁻¹ for the gas-phase standard molar enthalpy of formation and 0.79 kJ·mol⁻¹ for the liquid-phase standard molar enthalpy of formation. The database used to establish the parameters was carefully reviewed by using, whenever possible, the original publications. A worksheet to simplify the calculation of standard molar enthalpies of formation and standard molar enthalpies of atomization at T = 298.15 K based on the extended Laidler parameters defined in this paper is provided as supplementary material.
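
    A Laidler-type additivity estimate is just a sum of bond/group contributions n_i·E_i over the groups present in a molecule. The sketch below shows the bookkeeping only; the group labels and parameter values are made-up placeholders, not the fitted parameters from this paper:

```python
# Illustrative Laidler-type additivity bookkeeping. The group terms are
# hypothetical placeholder values in kJ/mol, NOT the fitted parameters.
GROUP_TERMS = {
    "C-H(primary)": -18.5,
    "C-H(secondary)": -16.9,
    "C-C": 3.2,
}

def enthalpy_of_formation(group_counts):
    # Standard molar enthalpy of formation as the sum n_i * E_i over groups.
    return sum(n * GROUP_TERMS[g] for g, n in group_counts.items())

# A crude ethane-like decomposition: six primary C-H bonds and one C-C bond
print(enthalpy_of_formation({"C-H(primary)": 6, "C-C": 1}))
```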

  7. A comparison of automatic and intentional instructions when using the method of vanishing cues in acquired brain injury.

    Science.gov (United States)

    Riley, Gerard A; Venn, Paul

    2015-01-01

    Thirty-four participants with acquired brain injury learned word lists under two forms of vanishing cues - one in which the learning trial instructions encouraged intentional retrieval (i.e., explicit memory) and one in which they encouraged automatic retrieval (which encompasses implicit memory). The automatic instructions represented a novel approach in which the cooperation of participants was actively sought to avoid intentional retrieval. Intentional instructions resulted in fewer errors during the learning trials and better performance on immediate and delayed retrieval tests. The advantage of intentional over automatic instructions was generally less for those who had more severe memory and/or executive impairments. Most participants performed better under intentional instructions on both the immediate and the delayed tests. Although those who were more severely impaired in both memory and executive function also did better with intentional instructions on the immediate retrieval test, they were significantly more likely to show an advantage for automatic instructions on the delayed test. It is suggested that this pattern of results may reflect impairments in the consolidation of intentional memories in this group. When using vanishing cues, automatic instructions may be better for those with severe consolidation impairments, but otherwise intentional instructions may be better.

  8. A new method for automatic tracking of facial landmarks in 3D motion captured images (4D).

    Science.gov (United States)

    Al-Anezi, T; Khambay, B; Peng, M J; O'Leary, E; Ju, X; Ayoub, A

    2013-01-01

    The aim of this study was to validate the automatic tracking of facial landmarks in 3D image sequences. 32 subjects (16 males and 16 females) aged 18-35 years were recruited. 23 anthropometric landmarks were marked on the face of each subject with non-permanent ink using a 0.5 mm pen. The subjects were asked to perform three facial animations (maximal smile, lip purse and cheek puff) from rest position. Each animation was captured by the 3D imaging system. A single operator manually digitised the landmarks on the 3D facial models and their locations were compared with those of the automatically tracked ones. To investigate the accuracy of manual digitisation, the operator re-digitised the same set of 3D images of 10 subjects (5 male and 5 female) at 1 month interval. The discrepancies in x, y and z coordinates between the 3D position of the manually digitised landmarks and that of the automatically tracked facial landmarks were within 0.17 mm. The mean distance between the manually digitised and the automatically tracked landmarks using the tracking software was within 0.55 mm. The automatic tracking of facial landmarks demonstrated satisfactory accuracy which would facilitate the analysis of the dynamic motion during facial animations. Copyright © 2012 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
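
    The accuracy figures above are distances between paired landmark sets; the mean 3D Euclidean distance used for such a comparison can be sketched as follows (the coordinates are invented, not study data):

```python
import numpy as np

def mean_landmark_error(manual, tracked):
    # Mean 3D Euclidean distance between paired landmarks (N x 3 arrays).
    manual = np.asarray(manual, dtype=float)
    tracked = np.asarray(tracked, dtype=float)
    return float(np.linalg.norm(manual - tracked, axis=1).mean())

manual  = [[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]]   # manually digitised, mm
tracked = [[0.5, 0.0, 0.0], [1.0, 1.5, 1.0]]   # automatically tracked, mm
print(mean_landmark_error(manual, tracked))    # -> 0.5
```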

  9. 10 CFR Appendix J to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Automatic and Semi-Automatic Clothes...

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 3 2010-01-01 2010-01-01 false Uniform Test Method for Measuring the Energy Consumption.... Note: Appendix J does not provide a means for determining the energy consumption of a clothes washer... divided by the total clothes washer energy consumption per cycle, with such energy consumption expressed...

  10. Analysis of carotid artery plaque and wall boundaries on CT images by using a semi-automatic method based on level set model

    International Nuclear Information System (INIS)

    Saba, Luca; Sannia, Stefano; Ledda, Giuseppe; Gao, Hao; Acharya, U.R.; Suri, Jasjit S.

    2012-01-01

    The purpose of this study was to evaluate the potential of a semi-automated technique in the detection and measurement of the carotid artery plaque. Twenty-two consecutive patients (18 males, 4 females; mean age 62 years) examined with MDCTA from January 2011 to March 2011 were included in this retrospective study. Carotid arteries were examined with a 16-multi-detector-row CT system, and for each patient, the most diseased carotid was selected. In the first phase, the carotid plaque was identified and one experienced radiologist manually traced the inner and outer boundaries by using the polyline and radial distance methods (PDM and RDM, respectively). In the second phase, the carotid inner and outer boundaries were traced with an automated algorithm: the level-set method (LSM). Data were compared by using Pearson rho correlation, Bland-Altman, and regression. A total of 715 slices were analyzed. The mean thickness of the plaque using the reference PDM was 1.86 mm whereas using the LSM-PDM was 1.96 mm; using the reference RDM was 2.06 mm whereas using the LSM-RDM was 2.03 mm. The correlation values between the references, the LSM, the PDM and the RDM were 0.8428, 0.9921, 0.745 and 0.6425. Bland-Altman demonstrated a very good agreement, in particular with the RDM. Results of our study indicate that the LSM can automatically measure the thickness of the plaque and that the best results are obtained with the RDM. Our results suggest that advanced computer-based algorithms can identify and trace the plaque boundaries like an experienced human reader. (orig.)
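
    The agreement statistics used here, Pearson rho and Bland-Altman bias with 95% limits of agreement, can be sketched in a few lines; the thickness readings below are invented for illustration only:

```python
import numpy as np

def pearson_r(x, y):
    # Pearson correlation coefficient between two measurement series.
    return float(np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1])

def bland_altman(x, y):
    # Bland-Altman bias (mean difference) and 95% limits of agreement.
    diff = np.asarray(x, float) - np.asarray(y, float)
    bias = float(diff.mean())
    sd = float(diff.std(ddof=1))
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

manual = [1.8, 2.1, 1.9, 2.4, 2.0]   # invented manual thickness readings, mm
auto   = [1.9, 2.0, 2.0, 2.3, 2.1]   # invented level-set readings, mm
print(pearson_r(manual, auto))
print(bland_altman(manual, auto))
```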

  11. Analysis of carotid artery plaque and wall boundaries on CT images by using a semi-automatic method based on level set model

    Energy Technology Data Exchange (ETDEWEB)

    Saba, Luca; Sannia, Stefano; Ledda, Giuseppe [University of Cagliari - Azienda Ospedaliero Universitaria di Cagliari, Department of Radiology, Monserrato, Cagliari (Italy); Gao, Hao [University of Strathclyde, Signal Processing Centre for Excellence in Signal and Image Processing, Department of Electronic and Electrical Engineering, Glasgow (United Kingdom); Acharya, U.R. [Ngee Ann Polytechnic University, Department of Electronics and Computer Engineering, Clementi (Singapore); Suri, Jasjit S. [Biomedical Technologies Inc., Denver, CO (United States); Idaho State University (Aff.), Pocatello, ID (United States)

    2012-11-15

    The purpose of this study was to evaluate the potential of a semi-automated technique in the detection and measurement of the carotid artery plaque. Twenty-two consecutive patients (18 males, 4 females; mean age 62 years) examined with MDCTA from January 2011 to March 2011 were included in this retrospective study. Carotid arteries were examined with a 16-multi-detector-row CT system, and for each patient, the most diseased carotid was selected. In the first phase, the carotid plaque was identified and one experienced radiologist manually traced the inner and outer boundaries by using the polyline and radial distance methods (PDM and RDM, respectively). In the second phase, the carotid inner and outer boundaries were traced with an automated algorithm: the level-set method (LSM). Data were compared by using Pearson rho correlation, Bland-Altman, and regression. A total of 715 slices were analyzed. The mean thickness of the plaque using the reference PDM was 1.86 mm whereas using the LSM-PDM was 1.96 mm; using the reference RDM was 2.06 mm whereas using the LSM-RDM was 2.03 mm. The correlation values between the references, the LSM, the PDM and the RDM were 0.8428, 0.9921, 0.745 and 0.6425. Bland-Altman demonstrated a very good agreement, in particular with the RDM. Results of our study indicate that the LSM can automatically measure the thickness of the plaque and that the best results are obtained with the RDM. Our results suggest that advanced computer-based algorithms can identify and trace the plaque boundaries like an experienced human reader. (orig.)

  12. Automatic assessment of cardiac perfusion MRI

    DEFF Research Database (Denmark)

    Ólafsdóttir, Hildur; Stegmann, Mikkel Bille; Larsson, Henrik B.W.

    2004-01-01

    In this paper, a method based on Active Appearance Models (AAM) is applied for automatic registration of myocardial perfusion MRI. A semi-quantitative perfusion assessment of the registered image sequences is presented. This includes the formation of perfusion maps for three parameters: maximum up...

  13. Method for automatic determination of soybean actual evapotranspiration under open top chambers (OTC) subjected to effects of water stress and air ozone concentration.

    Science.gov (United States)

    Rana, Gianfranco; Katerji, Nader; Mastrorilli, Marcello

    2012-10-01

    The present study describes an operational method, based on the Katerji et al. (Eur J Agron 33:218-230, 2010) model, for determining the daily evapotranspiration (ET) for soybean inside open top chambers (OTCs). It includes two functions, calculated day by day, making it possible to separately take into account the effects of air ozone concentration and plant water stress. This last function was calibrated as a function of the daily values of the actual water reserve in the soil. The input variables of the method are (a) the diurnal values of global radiation and temperature, usually measured routinely in a standard weather station; (b) the daily values of the AOT40 index (accumulated ozone over a threshold of 40 ppb during daylight hours, when global radiation exceeds 50 W m⁻²) determined inside the OTC; and (c) the actual water reserve in the soil at the beginning of the trial. The acquisition of these input variables can be automated; thus, the proposed method could be applied routinely. The ability of the method to take into account contrasting conditions of air ozone concentration and water stress was evaluated over three successive years, for 513 days, in ten crop growth cycles, excluding the days employed to calibrate the method. Tests were carried out in several chambers for each year and take into account the intra- and inter-year variability of ET measured inside the OTCs. On the daily scale, the slopes of the linear regression between the ET measured by the soil water balance and that calculated by the proposed method, under different water conditions, are 0.98 and 1.05 for the filtered and unfiltered (or enriched) OTCs, with root mean square error (RMSE) equal to 0.77 and 1.07 mm, respectively. On the seasonal scale, the mean difference between measured and calculated ET is equal to +5% and +11% for the filtered and unfiltered OTCs, respectively. The ability of the proposed method to estimate the daily and seasonal ET inside the OTCs is

  14. A fully automatic end-to-end method for content-based image retrieval of CT scans with similar liver lesion annotations.

    Science.gov (United States)

    Spanier, A B; Caplan, N; Sosna, J; Acar, B; Joskowicz, L

    2018-01-01

    The goal of medical content-based image retrieval (M-CBIR) is to assist radiologists in the decision-making process by retrieving medical cases similar to a given image. One of the key interests of radiologists is lesions and their annotations, since the patient treatment depends on the lesion diagnosis. Therefore, a key feature of M-CBIR systems is the retrieval of scans with the most similar lesion annotations. To be of value, M-CBIR systems should be fully automatic to handle large case databases. We present a fully automatic end-to-end method for the retrieval of CT scans with similar liver lesion annotations. The input is a database of abdominal CT scans labeled with liver lesions, a query CT scan, and optionally one radiologist-specified lesion annotation of interest. The output is an ordered list of the database CT scans with the most similar liver lesion annotations. The method starts by automatically segmenting the liver in the scan. It then extracts a histogram-based features vector from the segmented region, learns the features' relative importance, and ranks the database scans according to the relative importance measure. The main advantages of our method are that it fully automates the end-to-end querying process, that it uses simple and efficient techniques that are scalable to large datasets, and that it produces quality retrieval results using an unannotated CT scan. Our experimental results on 9 CT queries on a dataset of 41 volumetric CT scans from the 2014 ImageCLEF Liver Annotation Task yield an average retrieval accuracy (Normalized Discounted Cumulative Gain index) of 0.77 and 0.84 without/with annotation, respectively. Fully automatic end-to-end retrieval of similar cases based on image information alone, rather than on disease diagnosis, may help radiologists to better diagnose liver lesions.
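
    The retrieval accuracy metric cited, the Normalized Discounted Cumulative Gain (NDCG), rewards rankings that place the most relevant cases first. A minimal sketch of the standard log2-discounted form, with invented relevance grades:

```python
import math

def ndcg(relevances):
    # Normalized DCG of a ranked list of graded relevance scores.
    def dcg(rels):
        return sum(r / math.log2(i + 2) for i, r in enumerate(rels))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Invented relevance grades for three retrieved scans
print(ndcg([3, 2, 1]))   # -> 1.0, ideal ordering
print(ndcg([1, 3, 2]))   # < 1.0, most relevant scan demoted to second place
```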

  15. Teaching Methods in Biology Education and Sustainability Education Including Outdoor Education for Promoting Sustainability—A Literature Review

    Directory of Open Access Journals (Sweden)

    Eila Jeronen

    2016-12-01

    There are very few studies concerning the importance of teaching methods in biology education and environmental education including outdoor education for promoting sustainability at the levels of primary and secondary schools and pre-service teacher education. The material was selected using special keywords from biology and sustainable education in several scientific databases. The article provides an overview of 24 selected articles published in peer-reviewed scientific journals from 2006–2016. The data was analyzed using qualitative content analysis. Altogether, 16 journals were selected and 24 articles were analyzed in detail. The foci of the analyses were teaching methods, learning environments, knowledge and thinking skills, psychomotor skills, emotions and attitudes, and evaluation methods. Additionally, features of good methods were investigated and their implications for teaching were emphasized. In total, 22 different teaching methods were found to improve sustainability education in different ways. The most emphasized teaching methods were those in which students worked in groups and participated actively in learning processes. Research points toward the value of teaching methods that provide a good introduction and supportive guidelines and include active participation and interactivity.

  16. Teaching Methods in Biology Education and Sustainability Education Including Outdoor Education for Promoting Sustainability--A Literature Review

    Science.gov (United States)

    Jeronen, Eila; Palmberg, Irmeli; Yli-Panula, Eija

    2017-01-01

    There are very few studies concerning the importance of teaching methods in biology education and environmental education including outdoor education for promoting sustainability at the levels of primary and secondary schools and pre-service teacher education. The material was selected using special keywords from biology and sustainable education…

  17. Method for assessment of stormwater treatment facilities - Synthetic road runoff addition including micro-pollutants and tracer.

    Science.gov (United States)

    Cederkvist, Karin; Jensen, Marina B; Holm, Peter E

    2017-08-01

    Stormwater treatment facilities (STFs) are becoming increasingly widespread, but knowledge of their performance is limited. This is due to difficulties in obtaining representative samples during storm events and documenting removal of the broad range of contaminants found in stormwater runoff. This paper presents a method to evaluate STFs by addition of synthetic runoff with representative concentrations of contaminant species, including the use of a tracer for correction of removal rates for losses not caused by the STF. A list of organic and inorganic contaminant species, including trace elements representative of runoff from roads, is suggested, as well as relevant concentration ranges. The method was used for adding contaminants to three different STFs: a curbstone extension with filter soil, a dual porosity filter, and six different permeable pavements. Evaluation of the method showed that it is possible to add a well-defined mixture of contaminants despite different field conditions by using a flexible system, mixing different stock solutions on site, and using a bromide tracer for correction of outlet concentrations. Bromide recovery ranged from only 12% in one of the permeable pavements to 97% in the dual porosity filter, stressing the importance of including a conservative tracer for correction of contaminant retention values. The method is considered useful in future treatment performance testing of STFs. The observed performance of the STFs is presented in coming papers. Copyright © 2017 Elsevier Ltd. All rights reserved.
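
    One common way a conservative tracer corrects removal rates is to scale the outlet concentration by the tracer recovery, so that losses not caused by the facility are not credited as treatment. This is a generic sketch with invented numbers and may not match the paper's exact procedure:

```python
def tracer_corrected_removal(c_in, c_out, tracer_recovery):
    # Fractional contaminant removal with the outlet concentration scaled
    # by the conservative-tracer recovery, so that leakage, sorption to
    # equipment, or incomplete collection are not credited as treatment.
    # One common correction scheme; the paper's procedure may differ.
    return 1.0 - (c_out / tracer_recovery) / c_in

# Invented numbers: 80% apparent removal, 97% bromide recovery
print(tracer_corrected_removal(c_in=100.0, c_out=20.0, tracer_recovery=0.97))
```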

  18. Automatic welding of stainless steel tubing

    Science.gov (United States)

    Clautice, W. E.

    1978-01-01

    The use of automatic welding for making girth welds in stainless steel tubing was investigated as well as the reduction in fabrication costs resulting from the elimination of radiographic inspection. Test methodology, materials, and techniques are discussed, and data sheets for individual tests are included. Process variables studied include welding amperes, revolutions per minute, and shielding gas flow. Strip chart recordings, as a definitive method of insuring weld quality, are studied. Test results, determined by both radiographic and visual inspection, are presented and indicate that once optimum welding procedures for specific sizes of tubing are established, and the welding machine operations are certified, then the automatic tube welding process produces good quality welds repeatedly, with a high degree of reliability. Revised specifications for welding tubing using the automatic process and weld visual inspection requirements at the Kennedy Space Center are enumerated.

  19. A robust two-node, 13 moment quadrature method of moments for dilute particle flows including wall bouncing

    Science.gov (United States)

    Sun, Dan; Garmory, Andrew; Page, Gary J.

    2017-02-01

    For flows where the particle number density is low and the Stokes number is relatively high, as found when sand or ice is ingested into aircraft gas turbine engines, streams of particles can cross each other's path or bounce from a solid surface without being influenced by inter-particle collisions. The aim of this work is to develop an Eulerian method to simulate these types of flow. To this end, a two-node quadrature-based moment method using 13 moments is proposed. In the proposed algorithm thirteen moments of particle velocity, including cross-moments of second order, are used to determine the weights and abscissas of the two nodes and to set up the association between the velocity components in each node. Previous Quadrature Method of Moments (QMOM) algorithms either use more than two nodes, leading to increased computational expense, or are shown here to give incorrect results under some circumstances. This method gives the computational efficiency advantages of only needing two particle phase velocity fields whilst ensuring that a correct combination of weights and abscissas is returned for any arbitrary combination of particle trajectories without the need for any further assumptions. Particle crossing and wall bouncing with arbitrary combinations of angles are demonstrated using the method in a two-dimensional scheme. The ability of the scheme to include the presence of drag from a carrier phase is also demonstrated, as is bouncing off surfaces with inelastic collisions. The method is also applied to the Taylor-Green vortex flow test case and is found to give results superior to the existing two-node QMOM method and is in good agreement with results from Lagrangian modelling of this case.
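
    In one dimension, the two nodes underlying such quadrature-based methods can be recovered from the first four velocity moments: the abscissas are the roots of the monic orthogonal polynomial p2(x) = x² + a·x + b, and the weights follow from matching m0 and m1. The sketch below is a 1D toy; the paper's 13-moment method additionally couples the velocity components, which this omits:

```python
import numpy as np

def two_node_quadrature(m):
    # Weights and abscissas of a two-node quadrature matching moments m0..m3.
    # The abscissas are the roots of p2(x) = x^2 + a*x + b, where
    #   a*m1 + b*m0 = -m2
    #   a*m2 + b*m1 = -m3
    m0, m1, m2, m3 = m
    a, b = np.linalg.solve([[m1, m0], [m2, m1]], [-m2, -m3])
    x = np.roots([1.0, a, b])            # the two abscissas
    V = np.vstack((np.ones(2), x))       # Vandermonde rows for m0 and m1
    w = np.linalg.solve(V, [m0, m1])     # weights reproduce m0 and m1
    return w, x

# Moments of two equal particle streams at velocities 1 and 3 (weights 0.5 each)
w, x = two_node_quadrature([1.0, 2.0, 5.0, 14.0])
print(np.sort(x))
print(w)
```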

  20. Automatic LOD selection

    OpenAIRE

    Forsman, Isabelle

    2017-01-01

    In this paper a method to automatically generate transition distances for LOD, improving image stability and performance is presented. Three different methods were tested all measuring the change between two level of details using the spatial frequency. The methods were implemented as an optional pre-processing step in order to determine the transition distances from multiple view directions. During run-time both view direction based selection and the furthest distance for each direction was ...

  1. The study of fingerprint characteristics of Dayi Pu-Erh tea using a fully automatic HS-SPME/GC-MS and combined chemometrics method.

    Directory of Open Access Journals (Sweden)

    Shidong Lv

    The quality of tea is presently evaluated by the sensory assessment of professional tea tasters; however, this approach is both inconsistent and inaccurate. A more standardized and efficient method is urgently needed to objectively evaluate tea quality. In this study, the chemical fingerprints of 7 different Dayi Pu-erh tea brands and 3 different Ya'an tea brands on the market were analyzed using fully automatic headspace solid-phase microextraction (HS-SPME) combined with gas chromatography-mass spectrometry (GC-MS). A total of 78 volatiles were separated, of which 75 were identified by GC-MS in the seven Dayi Pu-erh teas; the major chemical components included methoxyphenolic compounds, hydrocarbons, and alcohols, such as 1,2,3-trimethoxybenzene, 1,2,4-trimethoxybenzene, 2,6,10,14-tetramethyl-pentadecane, linalool and its oxides, α-terpineol, and phytol. The overlapping ratio of peaks (ORP) of the chromatogram in the seven Dayi Pu-erh tea samples was greater than 89.55%, whereas the ORP of the Ya'an tea samples was less than 79.10%. The similarity and differences of the Dayi Pu-erh tea samples were also characterized using correlation-coefficient similarity and principal component analysis (PCA). The results showed that the correlation coefficient of similarity of the seven Dayi Pu-erh tea samples was greater than 0.820 and the samples gathered in a specific area, which showed that samples from different brands were basically the same, despite slight differences in some chemical indexes. These results showed that the GC-MS fingerprint combined with the PCA approach can be used as an effective tool for the quality assessment and control of Pu-erh tea.
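
    PCA on peak-area fingerprints can be sketched as an SVD of the mean-centred data matrix, with similar samples clustering in score space. The sample matrix below is invented for illustration, not the GC-MS data from the study:

```python
import numpy as np

def pca_scores(X, n_components=2):
    # Project samples (rows) onto the first principal components,
    # computed via SVD of the mean-centred data matrix.
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Invented peak-area fingerprints (rows: tea samples, columns: volatiles);
# three similar "Dayi-like" samples and one clearly different sample.
X = np.array([[0.9, 0.1, 0.0],
              [1.0, 0.2, 0.1],
              [0.8, 0.1, 0.1],
              [0.1, 0.9, 0.8]])
scores = pca_scores(X)
print(scores.shape)   # (4, 2)
```

In score space, the fourth sample lands far from the cluster formed by the first three, which is the kind of grouping the study reports for same-brand samples.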

  2. Automatic differentiation algorithms in model analysis

    NARCIS (Netherlands)

    Huiskes, M.J.

    2002-01-01

    Title: Automatic differentiation algorithms in model analysis
    Author: M.J. Huiskes
    Date: 19 March, 2002

    In this thesis automatic differentiation algorithms and derivative-based methods

  3. Automatic methods for long-term tracking and the detection and decoding of communication dances in honeybees

    Directory of Open Access Journals (Sweden)

    Fernando Wario

    2015-09-01

    The honeybee waggle dance communication system is an intriguing example of abstract animal communication and has been investigated thoroughly throughout the last seven decades. Typically, observables such as durations or angles are extracted manually, directly from the observation hive or from video recordings, to quantify dance properties, particularly to determine where bees have foraged. In recent years, biology has profited from automation, improving measurement precision, removing human bias, and accelerating data collection. As a further step, we have developed technologies to track all individuals of a honeybee colony and to detect and decode communication dances automatically. In strong contrast to conventional approaches that focus on a small subset of the hive life, whether this regards time, space, or animal identity, our more inclusive system will help the understanding of the dance comprehensively in its spatial, temporal, and social context. In this contribution, we present full specifications of the recording setup and the software for automatic recognition and decoding of tags and dances, and we discuss potential research directions that may benefit from automation. Lastly, to exemplify the power of the methodology, we show experimental data and respective analyses for a continuous, experimental recording of nine weeks' duration.

  4. Automatic text summarization

    CERN Document Server

    Torres Moreno, Juan Manuel

    2014-01-01

    This new textbook examines the motivations and the different algorithms for automatic document summarization (ADS). We present a recent state of the art. The book shows the main problems of ADS, the difficulties involved, and the solutions provided by the community. It presents recent advances in ADS, as well as current applications and trends. The approaches are statistical, linguistic and symbolic. Several examples are included in order to clarify the theoretical concepts. The books currently available in the area of Automatic Document Summarization are not recent. Powerful algorithms have been develop

  5. Automatically stable discontinuous Petrov-Galerkin methods for stationary transport problems: Quasi-optimal test space norm

    KAUST Repository

    Niemi, Antti H.; Collier, Nathan; Calo, Victor M.

    2013-01-01

    We investigate the application of the discontinuous Petrov-Galerkin (DPG) finite element framework to stationary convection-diffusion problems. In particular, we demonstrate how the quasi-optimal test space norm improves the robustness of the DPG method with respect to vanishing diffusion. We numerically compare coarse-mesh accuracy of the approximation when using the quasi-optimal norm, the standard norm, and the weighted norm. Our results show that the quasi-optimal norm leads to more accurate results on three benchmark problems in two spatial dimensions. We address the problems associated to the resolution of the optimal test functions with respect to the quasi-optimal norm by studying their convergence numerically. In order to facilitate understanding of the method, we also include a detailed explanation of the methodology from the algorithmic point of view. © 2013 Elsevier Ltd. All rights reserved.

  6. Automatically stable discontinuous Petrov-Galerkin methods for stationary transport problems: Quasi-optimal test space norm

    KAUST Repository

    Niemi, Antti H.

    2013-12-01

    We investigate the application of the discontinuous Petrov-Galerkin (DPG) finite element framework to stationary convection-diffusion problems. In particular, we demonstrate how the quasi-optimal test space norm improves the robustness of the DPG method with respect to vanishing diffusion. We numerically compare coarse-mesh accuracy of the approximation when using the quasi-optimal norm, the standard norm, and the weighted norm. Our results show that the quasi-optimal norm leads to more accurate results on three benchmark problems in two spatial dimensions. We address the problems associated to the resolution of the optimal test functions with respect to the quasi-optimal norm by studying their convergence numerically. In order to facilitate understanding of the method, we also include a detailed explanation of the methodology from the algorithmic point of view. © 2013 Elsevier Ltd. All rights reserved.

  7. Automatic Program Development

    DEFF Research Database (Denmark)

    Automatic Program Development is a tribute to Robert Paige (1947-1999), our accomplished and respected colleague, and moreover our good friend, whose untimely passing was a loss to our academic and research community. We have collected the revised, updated versions of the papers published in his honor in the Higher-Order and Symbolic Computation Journal in the years 2003 and 2005. Among them there are two papers by Bob: (i) a retrospective view of his research lines, and (ii) a proposal for future studies in the area of the automatic program derivation. The book also includes some papers by members of the IFIP Working Group 2.1 of which Bob was an active member. All papers are related to some of the research interests of Bob and, in particular, to the transformational development of programs and their algorithmic derivation from formal specifications. Automatic Program Development offers

  8. TRANSAT-- method for detecting the conserved helices of functional RNA structures, including transient, pseudo-knotted and alternative structures.

    Science.gov (United States)

    Wiebe, Nicholas J P; Meyer, Irmtraud M

    2010-06-24

    The prediction of functional RNA structures has attracted increased interest, as it allows us to study the potential functional roles of many genes. RNA structure prediction methods, however, assume that there is a unique functional RNA structure and also do not predict functional features required for in vivo folding. In order to understand how functional RNA structures form in vivo, we require sophisticated experiments or reliable prediction methods. So far, there exist only a few experimentally validated transient RNA structures. On the computational side, there exist several computer programs which aim to predict the co-transcriptional folding pathway in vivo, but these make a range of simplifying assumptions and do not capture all features known to influence RNA folding in vivo. We want to investigate if evolutionarily related RNA genes fold in a similar way in vivo. To this end, we have developed a new computational method, Transat, which detects conserved helices of high statistical significance. We introduce the method, present a comprehensive performance evaluation and show that Transat is able to predict the structural features of known reference structures including pseudo-knotted ones as well as those of known alternative structural configurations. Transat can also identify unstructured sub-sequences bound by other molecules and provides evidence for new helices which may define folding pathways, supporting the notion that homologous RNA sequences not only assume a similar reference RNA structure, but also fold similarly. Finally, we show that the structural features predicted by Transat differ from those assuming thermodynamic equilibrium. Unlike the existing methods for predicting folding pathways, our method works in a comparative way. This has the disadvantage of not being able to predict features as function of time, but has the considerable advantage of highlighting conserved features and of not requiring a detailed knowledge of the cellular

  9. Automatic slice-alignment method in cardiac magnetic resonance imaging for evaluation of the right ventricle in patients with pulmonary hypertension

    Science.gov (United States)

    Yokoyama, Kenichi; Nitta, Shuhei; Kuhara, Shigehide; Ishimura, Rieko; Kariyasu, Toshiya; Imai, Masamichi; Nitatori, Toshiaki; Takeguchi, Tomoyuki; Shiodera, Taichiro

    2015-09-01

    We propose a new automatic slice-alignment method, which enables right ventricular scan planning in addition to the left ventricular scan planning developed in our previous work, to simplify right ventricular cardiac scan planning, and we assess its accuracy and the clinical acceptability of the acquired imaging planes in the evaluation of patients with pulmonary hypertension. Steady-state free precession (SSFP) sequences covering the whole heart in the end-diastolic phase with ECG gating were used to acquire 2D axial multislice images. To realize right ventricular scan planning, two additional morphological feature points are detected, so that a total of eight morphological features of the heart are extracted from this series of images; six left ventricular planes and four right ventricular planes are then calculated simultaneously based on the extracted features. The subjects were 33 patients (25 with chronic thromboembolic pulmonary hypertension and 8 with idiopathic pulmonary arterial hypertension). The four right ventricular reference planes, comprising right ventricular short-axis, 4-chamber, 2-chamber, and 3-chamber images, were evaluated. The acceptability of the acquired imaging planes was visually evaluated using a 4-point scale, and the angular differences between the results obtained by this method and by conventional manual annotation were measured for each view. The average visual scores were 3.9±0.4 for short-axis images, 3.8±0.4 for 4-chamber images, 3.8±0.4 for 2-chamber images, and 3.5±0.6 for 3-chamber images. The average angular differences were 8.7±5.3, 8.3±4.9, 8.1±4.8, and 7.9±5.3 degrees, respectively. The processing time was less than 2.5 seconds in all subjects. The proposed method, which enables right ventricular scan planning in addition to the left ventricular scan planning developed in our previous work, can provide clinically acceptable planes in a short time and is useful because special proficiency in performing cardiac MR for

  10. Method of fabricating electrodes including high-capacity, binder-free anodes for lithium-ion batteries

    Science.gov (United States)

    Ban, Chunmei; Wu, Zhuangchun; Dillon, Anne C.

    2017-01-10

    An electrode (110) is provided that may be used in an electrochemical device (100) such as an energy storage/discharge device, e.g., a lithium-ion battery, or an electrochromic device, e.g., a smart window. Hydrothermal techniques and vacuum filtration methods were applied to fabricate the electrode (110). The electrode (110) includes an active portion (140) that is made up of electrochemically active nanoparticles, with one embodiment utilizing 3d-transition metal oxides to provide the electrochemical capacity of the electrode (110). The active material (140) may include other electrochemical materials, such as silicon, tin, lithium manganese oxide, and lithium iron phosphate. The electrode (110) also includes a matrix or net (170) of electrically conductive nanomaterial that acts to connect and/or bind the active nanoparticles (140) such that no binder material is required in the electrode (110), which allows more active material (140) to be included to improve energy density and other desirable characteristics of the electrode. The matrix material (170) may take the form of carbon nanotubes, such as single-wall, double-wall, and/or multi-wall nanotubes, and be provided as about 2 to 30 percent by weight of the electrode (110), with the rest being the active material (140).

  11. Faraday rotation of Automatic Dependent Surveillance Broadcast (ADS-B) signals as a method of ionospheric characterization

    Science.gov (United States)

    Cushley, A. C.; Kabin, K.; Noel, J. M. A.

    2017-12-01

    Radio waves propagating through plasma in the Earth's ambient magnetic field experience Faraday rotation; the plane of the electric field of a linearly polarized wave changes as a function of the distance travelled through a plasma. Linearly polarized radio waves at 1090 MHz frequency are emitted by Automatic Dependent Surveillance Broadcast (ADS-B) devices which are installed on most commercial aircraft. These radio waves can be detected by satellites in low earth orbits, and the change of the polarization angle caused by propagation through the terrestrial ionosphere can be measured. In this work we discuss how these measurements can be used to characterize the ionospheric conditions. In the present study, we compute the amount of Faraday rotation from a prescribed total electron content value and two of the profile parameters of the NeQuick model.
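
The rotation computation described above can be sketched numerically. The thin-shell formula below (Ω ≈ K·B∥·TEC/f², with K ≈ 2.36×10⁴ in SI units) is the standard first-order ionospheric approximation, and the TEC and magnetic-field values are assumed for illustration, not taken from the study:

```python
# Ionospheric Faraday rotation of a 1090 MHz ADS-B signal (illustrative sketch).
# Uses the thin-shell, quasi-longitudinal approximation Omega = K * B_par * TEC / f**2.
import math

K = 2.36e4          # SI constant of the approximation (rad Hz^2 m^2 / T / electron)
F_ADSB = 1090e6     # ADS-B carrier frequency, Hz

def faraday_rotation_rad(tec_el_per_m2, b_parallel_tesla, freq_hz=F_ADSB):
    """Polarization rotation angle (radians) for a given slant TEC and
    field-aligned magnetic component at the ionospheric pierce point."""
    return K * b_parallel_tesla * tec_el_per_m2 / freq_hz**2

tec = 10 * 1e16     # 10 TECU expressed in electrons/m^2 (assumed)
b_par = 45e-6       # ~45 uT field-aligned component, mid-latitude value (assumed)
omega = faraday_rotation_rad(tec, b_par)
print(f"rotation: {omega:.4f} rad = {math.degrees(omega):.2f} deg")
```

On these assumed values the plane of polarization rotates by a few degrees, which indicates the scale of effect a satellite-based ADS-B receiver would need to resolve.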

  12. Faraday Rotation of Automatic Dependent Surveillance-Broadcast (ADS-B) Signals as a Method of Ionospheric Characterization

    Science.gov (United States)

    Cushley, A. C.; Kabin, K.; Noël, J.-M.

    2017-10-01

    Radio waves propagating through plasma in the Earth's ambient magnetic field experience Faraday rotation; the plane of the electric field of a linearly polarized wave changes as a function of the distance travelled through a plasma. Linearly polarized radio waves at 1090 MHz frequency are emitted by Automatic Dependent Surveillance Broadcast (ADS-B) devices that are installed on most commercial aircraft. These radio waves can be detected by satellites in low Earth orbits, and the change of the polarization angle caused by propagation through the terrestrial ionosphere can be measured. In this manuscript we discuss how these measurements can be used to characterize the ionospheric conditions. In the present study, we compute the amount of Faraday rotation from a prescribed total electron content value and two of the profile parameters of the NeQuick ionospheric model.

  13. Applications of the conjugate gradient FFT method in scattering and radiation including simulations with impedance boundary conditions

    Science.gov (United States)

    Barkeshli, Kasra; Volakis, John L.

    1991-01-01

    The theoretical and computational aspects related to the application of the Conjugate Gradient FFT (CGFFT) method in computational electromagnetics are examined. The advantages of applying the CGFFT method to a class of large-scale scattering and radiation problems are outlined. The main advantages of the method stem from its iterative nature, which eliminates the need to form the system matrix (thus reducing the computer memory allocation requirements) and guarantees convergence to the true solution in a finite number of steps. Results are presented for various radiators and scatterers including thin cylindrical dipole antennas, thin conductive and resistive strips and plates, as well as dielectric cylinders. Solutions of integral equations derived on the basis of generalized impedance boundary conditions (GIBC) are also examined. These boundary conditions can be used to replace the profile of a material coating by an impedance sheet or insert, thus eliminating the need to introduce unknown polarization currents within the volume of the layer. A general full-wave analysis of 2-D and 3-D rectangular grooves and cavities is presented which will also serve as a reference for future work.
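
The memory advantage highlighted above comes from applying the system operator with FFTs instead of storing the matrix. A minimal sketch of that idea, with an assumed symmetric Toeplitz kernel standing in for the discretized integral operator (the physics is omitted; only the matrix-free CG structure is shown):

```python
# Conjugate gradient with an FFT-applied operator: for convolution-type
# integral equations the system matrix is Toeplitz, so A @ x reduces to a
# zero-padded FFT convolution and the matrix is never formed.
import numpy as np

n = 64
k = np.exp(-0.3 * np.arange(n))            # assumed kernel values k(|i-j|)
c = np.concatenate([k, [0.0], k[:0:-1]])   # circulant embedding, length 2n
Chat = np.fft.fft(c)                       # transfer function of the embedding

def apply_A(x):
    """Symmetric-Toeplitz matvec via circulant embedding and the FFT."""
    y = np.fft.ifft(Chat * np.fft.fft(x, 2 * n)).real[:n]
    return y + 2.0 * x                     # diagonal shift keeps A positive definite

def cgfft(b, tol=1e-10, max_iter=200):
    """Standard CG; the only operator access is the FFT-based apply_A."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

b = np.random.default_rng(0).standard_normal(n)
x = cgfft(b)
residual = np.linalg.norm(b - apply_A(x))
print("residual:", residual)
```

Storage is O(n) (the kernel and a few vectors) rather than O(n²) for the explicit matrix, which is precisely the property the abstract attributes to CGFFT.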

  14. Evaluation and Comparison of Multiple Test Methods, Including Real-time PCR, for Legionella Detection in Clinical Specimens

    Science.gov (United States)

    Peci, Adriana; Winter, Anne-Luise; Gubbay, Jonathan B.

    2016-01-01

    Legionella is a Gram-negative bacterium that can cause Pontiac fever, a mild upper respiratory infection, and Legionnaires' disease, a more severe illness. We aimed to compare the performance of urine antigen, culture, and polymerase chain reaction (PCR) test methods and to determine if sputum is an acceptable alternative to the use of more invasive bronchoalveolar lavage (BAL). Data for this study included specimens tested for Legionella at Public Health Ontario Laboratories from 1st January, 2010 to 30th April, 2014, as part of routine clinical testing. We found the sensitivity of the urinary antigen test (UAT) compared to culture to be 87%, specificity 94.7%, positive predictive value (PPV) 63.8%, and negative predictive value (NPV) 98.5%. Sensitivity of UAT compared to PCR was 74.7%, specificity 98.3%, PPV 77.7%, and NPV 98.1%. Out of 146 patients who had a Legionella-positive result by PCR, only 66 (45.2%) also had a positive result by culture. Sensitivity for culture was the same using either sputum or BAL (13.6%); sensitivity for PCR was 10.3% for sputum and 12.8% for BAL. Both sputum and BAL yield similar results regardless of testing method (Fisher exact p-values = 1.0 for each test). In summary, all test methods have inherent weaknesses in identifying Legionella; therefore, more than one testing method should be used. Obtaining a single specimen type from patients with pneumonia limits the ability to diagnose Legionella, particularly when urine is the specimen type submitted. Given the ease of collection and similar sensitivity to BAL, clinicians are encouraged to submit sputum in addition to urine when BAL submission is not practical for patients being tested for Legionella. PMID:27630979
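
The four reported accuracy measures are simple functions of a 2×2 comparison table. The sketch below shows the arithmetic; the counts are hypothetical values chosen to reproduce the quoted UAT-versus-culture figures, not the study's raw data:

```python
# Sensitivity, specificity, PPV and NPV from a 2x2 diagnostic comparison,
# as used when comparing the urinary antigen test against culture.
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard test-accuracy measures from true/false positives/negatives."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

m = diagnostic_metrics(tp=87, fp=49, fn=13, tn=885)  # hypothetical counts
print({k: round(v, 3) for k, v in m.items()})
```

With these invented counts the function returns sensitivity 0.87, specificity ≈0.947, PPV ≈0.64 and NPV ≈0.985, matching the magnitudes reported in the abstract.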

  15. Evaluation and comparison of multiple test methods, including real-time PCR, for Legionella detection in clinical specimens.

    Directory of Open Access Journals (Sweden)

    Adriana Peci

    2016-08-01

    Legionella is a gram-negative bacterium that can cause Pontiac fever, a mild upper respiratory infection, and Legionnaires' disease, a more severe illness. We aimed to compare the performance of urine antigen, culture and PCR test methods and to determine if sputum is an alternative to the use of more invasive bronchoalveolar lavage (BAL). Data for this study included specimens tested for Legionella at PHOL from January 1, 2010 to April 30, 2014, as part of routine clinical testing. We found the sensitivity of the UAT compared to culture to be 87%, specificity 94.7%, positive predictive value (PPV) 63.8% and negative predictive value (NPV) 98.5%. Sensitivity of UAT compared to PCR was 74.7%, specificity 98.3%, PPV 77.7% and NPV 98.1%. Of 146 patients who had a Legionella-positive result by PCR, only 66 (45.2%) also had a positive result by culture. Sensitivity for culture was the same using either sputum or BAL (13.6%); sensitivity for PCR was 10.3% for sputum and 12.8% for BAL. Both sputum and BAL yield similar results regardless of testing method (Fisher exact p-values = 1.0 for each test). In summary, all test methods have inherent weaknesses in identifying Legionella; therefore, more than one testing method should be used. Obtaining a single specimen type from patients with pneumonia limits the ability to diagnose Legionella, particularly when urine is the specimen type submitted. Given the ease of collection and similar sensitivity to BAL, clinicians are encouraged to submit sputum in addition to urine when BAL submission is not practical from patients being tested for Legionella.

  16. An automatic respiratory gating method for the improvement of microcirculation evaluation: application to contrast-enhanced ultrasound studies of focal liver lesions

    Energy Technology Data Exchange (ETDEWEB)

    Mule, S; Kachenoura, N; Lucidarme, O; De Oliveira, A; Pellot-Barakat, C; Herment, A; Frouin, F, E-mail: Sebastien.Mule@gmail.com [INSERM UMR-S 678, 75634 Paris Cedex 13 (France)

    2011-08-21

    Contrast-enhanced ultrasound (CEUS), with the recent development of both contrast-specific imaging modalities and microbubble-based contrast agents, allows noninvasive quantification of microcirculation in vivo. Nevertheless, functional parameters obtained by modeling contrast uptake kinetics can be impaired by respiratory motion. Accordingly, we developed an automatic respiratory gating method and tested it on 35 CEUS hepatic datasets with focal lesions. Each dataset included fundamental-mode and cadence contrast pulse sequencing (CPS) mode sequences acquired simultaneously. The developed method consisted of (1) the estimation of the respiratory kinetics as a linear combination of the first components provided by a principal component analysis, constrained by prior knowledge of the respiratory rate in the frequency domain, and (2) the automated generation of two respiratory-gated subsequences from the CPS mode sequence by detecting end-of-inspiration and end-of-expiration phases from the respiratory kinetics. The fundamental mode enabled a more reliable estimation of the respiratory kinetics than the CPS mode. The k-means algorithm was applied to both the original CPS mode sequences and the respiratory-gated subsequences, resulting in clustering maps and associated mean kinetics. Our respiratory gating process allowed better superimposition of manually drawn lesion contours on k-means clustering maps as well as substantial improvement of the quality of contrast uptake kinetics. While the quality of maps and kinetics was satisfactory in only 11/35 datasets before gating, it was satisfactory in 34/35 datasets after gating. Moreover, the noise amplitude estimated within the delineated lesions was reduced from 62 ± 21 to 40 ± 10 (p < 0.01) after gating. These findings were supported by the low residual horizontal (0.44 ± 0.29 mm) and vertical (0.15 ± 0.16 mm) shifts found during manual motion correction of each respiratory-gated subsequence. The developed

  17. SU-D-BRC-01: An Automatic Beam Model Commissioning Method for Monte Carlo Simulations in Pencil-Beam Scanning Proton Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Qin, N; Shen, C; Tian, Z; Jiang, S; Jia, X [UT Southwestern Medical Ctr, Dallas, TX (United States)

    2016-06-15

    Purpose: Monte Carlo (MC) simulation is typically regarded as the most accurate dose calculation method for proton therapy. Yet for real clinical cases, the overall accuracy also depends on that of the MC beam model. Commissioning a beam model to faithfully represent a real beam requires finely tuning a set of model parameters, which can be tedious given the large number of pencil beams to commission. This abstract reports an automatic beam-model commissioning method for pencil-beam scanning proton therapy via an optimization approach. Methods: We modeled a real pencil beam with energy and spatial spread following Gaussian distributions. Mean energy, energy spread and spatial spread are the model parameters. To commission against a real beam, we first performed MC simulations to calculate dose distributions of a set of ideal (monoenergetic, zero-size) pencil beams. The dose distribution for a real pencil beam is hence a linear superposition of doses for those ideal pencil beams with weights in Gaussian form. We formulated the commissioning task as an optimization problem, such that the calculated central-axis depth dose and lateral profiles at several depths match the corresponding measurements. An iterative algorithm combining the conjugate gradient method and parameter fitting was employed to solve the optimization problem. We validated our method in simulation studies. Results: We calculated dose distributions for three real pencil beams with nominal energies 83, 147 and 199 MeV using realistic beam parameters. These data were regarded as measurements and used for commissioning. After commissioning, the average differences in energy and beam spread between determined values and ground truth were 4.6% and 0.2%. With the commissioned model, we recomputed the dose. Mean dose differences from measurements were 0.64%, 0.20% and 0.25%. Conclusion: The developed automatic MC beam-model commissioning method for pencil-beam scanning proton therapy can determine beam model parameters with
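
The commissioning idea, representing a measured beam as a Gaussian-weighted superposition of precomputed ideal-beam doses and optimizing the Gaussian parameters, can be illustrated with a toy depth-dose model. Here a grid search stands in for the abstract's conjugate-gradient/parameter-fitting iteration, and all dose curves are synthetic:

```python
# Toy commissioning: a "measured" depth dose is a Gaussian-weighted mixture of
# precomputed ideal (monoenergetic) beam doses; recover the mixture's mean
# energy and spread by least squares over a parameter grid.
import numpy as np

energies = np.linspace(80, 90, 41)           # MeV grid of precomputed ideal beams
depth = np.linspace(0, 60, 200)              # depth in water, mm

def ideal_dose(E):
    """Stand-in depth dose for an ideal monoenergetic beam (peak ~0.6*E mm)."""
    return np.exp(-0.5 * ((depth - 0.6 * E) / 3.0) ** 2)

D = np.array([ideal_dose(E) for E in energies])   # library of ideal-beam doses

def real_beam(mu, sigma):
    """Gaussian-weighted superposition over the ideal-beam library."""
    w = np.exp(-0.5 * ((energies - mu) / sigma) ** 2)
    return (w / w.sum()) @ D

measured = real_beam(85.0, 1.2)              # plays the role of the measurement

def sse(mu, sigma):
    return float(np.sum((real_beam(mu, sigma) - measured) ** 2))

mus = np.linspace(83, 87, 81)
sigmas = np.linspace(0.5, 2.5, 41)
mu_fit, sigma_fit = min(((m, s) for m in mus for s in sigmas),
                        key=lambda p: sse(*p))
print(f"recovered mean energy {mu_fit:.2f} MeV, energy spread {sigma_fit:.2f} MeV")
```

The search recovers the ground-truth parameters (85 MeV, 1.2 MeV) exactly because the "measurement" was generated by the same model; a gradient-based optimizer, as in the abstract, scales far better when many pencil beams and parameters are fitted jointly.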

  18. Automatic structural scene digitalization.

    Science.gov (United States)

    Tang, Rui; Wang, Yuhan; Cosker, Darren; Li, Wenbin

    2017-01-01

    In this paper, we present an automatic system for the analysis and labeling of structural scenes, i.e., floor plan drawings in Computer-Aided Design (CAD) format. The proposed system applies a fusion strategy to detect and recognize various components of CAD floor plans, such as walls, doors, windows and other ambiguous assets. Technically, a general rule-based filter parsing method is first adopted to extract effective information from the original floor plan. Then, an image-processing-based recovery method is employed to correct the information extracted in the first step. Our proposed method is fully automatic and real-time. The analysis system provides high accuracy and has been evaluated on a public website that, on average, achieves more than ten thousand effective uses per day and reaches a relatively high satisfaction rate.

  19. Automatic trend estimation

    CERN Document Server

    Vamoş, Călin

    2013-01-01

    Our book introduces a method to evaluate the accuracy of trend estimation algorithms under conditions similar to those encountered in real time series processing. This method is based on Monte Carlo experiments with artificial time series numerically generated by an original algorithm. The second part of the book contains several automatic algorithms for trend estimation and time series partitioning. The source codes of the computer programs implementing these original automatic algorithms are given in the appendix and will be freely available on the web. The book contains a clear statement of the conditions and the approximations under which the algorithms work, as well as the proper interpretation of their results. We illustrate the functioning of the analyzed algorithms by processing time series from astrophysics, finance, biophysics, and paleoclimatology. The numerical experiment method extensively used in our book is already in common use in computational and statistical physics.
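
The Monte Carlo evaluation strategy described above can be illustrated in miniature: generate artificial time series with a known trend, apply an estimator to each, and score it against the truth. The linear least-squares estimator and AR(1) noise model below are stand-ins for illustration, not the book's algorithms:

```python
# Monte Carlo accuracy evaluation of a trend estimator on artificial series:
# known linear trend + serially correlated AR(1) noise, many realizations.
import numpy as np

rng = np.random.default_rng(7)
n, trials = 500, 200
t = np.arange(n)
true_slope = 0.02

def ar1_noise(phi=0.6, sigma=1.0):
    """AR(1) noise x[i] = phi*x[i-1] + e[i], a simple serially correlated model."""
    e = rng.normal(0, sigma, n)
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + e[i]
    return x

errors = []
for _ in range(trials):
    series = true_slope * t + ar1_noise()
    slope_hat = np.polyfit(t, series, 1)[0]      # trend estimator under test
    errors.append(slope_hat - true_slope)

rmse = float(np.sqrt(np.mean(np.square(errors))))
print(f"slope RMSE over {trials} artificial series: {rmse:.4f}")
```

Because the generating trend is known exactly, the experiment yields an honest error distribution for the estimator, which is the core of the book's evaluation method.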

  20. Designing a Method for AN Automatic Earthquake Intensities Calculation System Based on Data Mining and On-Line Polls

    Science.gov (United States)

    Liendo Sanchez, A. K.; Rojas, R.

    2013-05-01

    Seismic intensities can be calculated using the Modified Mercalli Intensity (MMI) scale or the European Macroseismic Scale (EMS-98), among others, which are based on a series of qualitative aspects related to a group of subjective factors that describe human perception, effects on nature or objects, and structural damage due to the occurrence of an earthquake. On-line polls allow experts to get an overview of the consequences of an earthquake without going to the affected locations. However, this can be hard work if the polls are not properly automated. Taking into account that the answers given to these polls are subjective and that a number of them have already been classified for some past earthquakes, it is possible to use data mining techniques to automate this process and to obtain preliminary results based on the on-line polls. To achieve this goal, a predictive model has been built using a classifier based on supervised learning techniques, namely a decision tree algorithm, and a group of polls based on the MMI and EMS-98 scales. The model summarizes the most important questions of the poll and recursively divides the instance space: each node corresponds to a question and splits the space according to the possible answers. Its implementation was done with Weka, a collection of machine learning algorithms for data mining tasks, using the J48 algorithm, which is an implementation of the C4.5 algorithm for decision tree models. By doing this, it was possible to obtain a preliminary model able to identify up to 4 different seismic intensities with 73% of polls correctly classified. The error obtained is rather high; therefore, we will update the on-line poll in order to improve the results, based on just one scale, for instance the MMI. Besides, the integration of this automatic seismic intensity methodology with a low error probability and a basic georeferencing system will allow the generation of preliminary isoseismal maps
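
The split criterion underlying C4.5/J48 is information gain: at each node the tree picks the question whose answers best reduce label entropy. A minimal hand-rolled version on invented poll answers (a real deployment would use Weka, as described above):

```python
# Information gain for decision-tree induction, C4.5/J48-style: choose the
# poll question whose answers best separate intensity labels. The toy polls
# and MMI-like labels below are invented for illustration.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, q):
    """Entropy reduction from splitting the polls on question q."""
    base = entropy(labels)
    by_answer = {}
    for row, y in zip(rows, labels):
        by_answer.setdefault(row[q], []).append(y)
    remainder = sum(len(ys) / len(labels) * entropy(ys)
                    for ys in by_answer.values())
    return base - remainder

# answers to two questions ("felt", "damage") with intensity labels
polls = [{"felt": "yes", "damage": "none"},   {"felt": "yes", "damage": "cracks"},
         {"felt": "yes", "damage": "none"},   {"felt": "yes", "damage": "cracks"},
         {"felt": "no",  "damage": "none"},   {"felt": "yes", "damage": "none"}]
labels = ["II", "V", "II", "V", "I", "II"]

gains = {q: information_gain(polls, labels, q) for q in ("felt", "damage")}
root = max(gains, key=gains.get)
print(gains, "-> split on", root)
```

Here the damage question carries more information than merely having felt the quake, so it becomes the root split; C4.5 applies the same criterion (with refinements such as gain ratio) recursively to build the full tree.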

  1. Method for contamination control and barrier apparatus with filter for containing waste materials that include dangerous particulate matter

    Science.gov (United States)

    Pinson, Paul A.

    1998-01-01

    A container for hazardous waste materials that includes air or other gas carrying dangerous particulate matter has incorporated in barrier material, preferably in the form of a flexible sheet, one or more filters for the dangerous particulate matter sealably attached to such barrier material. The filter is preferably a HEPA type filter and is preferably chemically bonded to the barrier materials. The filter or filters are preferably flexibly bonded to the barrier material marginally and peripherally of the filter or marginally and peripherally of air or other gas outlet openings in the barrier material, which may be a plastic bag. The filter may be provided with a backing panel of barrier material having an opening or openings for the passage of air or other gas into the filter or filters. Such backing panel is bonded marginally and peripherally thereof to the barrier material or to both it and the filter or filters. A coupling or couplings for deflating and inflating the container may be incorporated. Confining a hazardous waste material in such a container, rapidly deflating the container and disposing of the container, constitutes one aspect of the method of the invention. The chemical bonding procedure for producing the container constitutes another aspect of the method of the invention.

  2. Contributors to Frequent Telehealth Alerts Including False Alerts for Patients with Heart Failure: A Mixed Methods Exploration

    Science.gov (United States)

    Radhakrishna, K.; Bowles, K.; Zettek-Sumner, A.

    2013-01-01

    Summary Background: Telehealth data overload through high alert generation is a significant barrier to sustained adoption of telehealth for managing heart failure (HF) patients. Objective: To explore the factors contributing to frequent telehealth alerts, including false alerts, for Medicare HF patients admitted to a home health agency. Materials and Methods: A mixed methods design was employed that combined quantitative correlation analysis of patient characteristic data with the number of telehealth alerts and qualitative analysis of telehealth and visiting nurses' notes on follow-up actions to patients' telehealth alerts. All the quantitative and qualitative data were collected through retrospective review of electronic records of the home health agency. Results: Subjects in the study had a mean age of 83 (SD = 7.6); 56% were female. Patient co-morbidities were significantly associated with alert frequency. Accounting for patient characteristics along with establishing patient-centered telehealth outcome goals may allow meaningful generation of telehealth alerts. Reducing avoidable telehealth alerts could vastly improve the efficiency and sustainability of telehealth programs for HF management. PMID:24454576

  3. Method for contamination control and barrier apparatus with filter for containing waste materials that include dangerous particulate matter

    International Nuclear Information System (INIS)

    Pinson, P.A.

    1998-01-01

    A container for hazardous waste materials that includes air or other gas carrying dangerous particulate matter has incorporated barrier material, preferably in the form of a flexible sheet, and one or more filters for the dangerous particulate matter sealably attached to such barrier material. The filter is preferably a HEPA type filter and is preferably chemically bonded to the barrier materials. The filter or filters are preferably flexibly bonded to the barrier material marginally and peripherally of the filter or marginally and peripherally of air or other gas outlet openings in the barrier material, which may be a plastic bag. The filter may be provided with a backing panel of barrier material having an opening or openings for the passage of air or other gas into the filter or filters. Such backing panel is bonded marginally and peripherally thereof to the barrier material or to both it and the filter or filters. A coupling or couplings for deflating and inflating the container may be incorporated. Confining a hazardous waste material in such a container, rapidly deflating the container and disposing of the container, constitutes one aspect of the method of the invention. The chemical bonding procedure for producing the container constitutes another aspect of the method of the invention. 3 figs

  4. Shannon Entropy and K-Means Method for Automatic Diagnosis of Broken Rotor Bars in Induction Motors Using Vibration Signals

    Directory of Open Access Journals (Sweden)

    David Camarena-Martinez

    2016-01-01

    For industry, induction motors are essential elements in production chains. Despite their robustness, induction motors are susceptible to failures. The broken rotor bar (BRB) fault in induction motors has received special attention since one of its characteristics is that the motor can continue operating with apparent normality; however, at a certain point the fault may cause severe damage to the motor. In this work, a methodology to detect BRBs using vibration signals is proposed. The methodology uses the Shannon entropy to quantify the amount of information provided by the vibration signals, which changes due to the presence of new frequency components associated with the fault. For automatic diagnosis, the K-means clustering algorithm and a decision-making unit that looks for the nearest cluster through the Euclidean distance are applied. Unlike other reported works, the proposal can diagnose the BRB condition during both the startup transient and steady-state regimes of operation. Additionally, the proposal is implemented in a field-programmable gate array in order to offer a low-cost and low-complexity online monitoring system. The obtained results demonstrate the effectiveness of the proposal in diagnosing half, one, and two BRBs.
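
The pipeline (entropy feature, K-means clustering, nearest-centroid decision) can be sketched end to end. The synthetic vibration model, sideband frequencies and amplitudes below are illustrative assumptions, not the paper's signals or its exact feature definition:

```python
# Shannon entropy of vibration signals as the feature, K-means to form
# condition clusters, and nearest-centroid (Euclidean) assignment for a new
# signal. A BRB fault is imitated by adding sideband components, which
# changes the amplitude distribution and hence its entropy.
import numpy as np

rng = np.random.default_rng(3)
fs = 2000
t = np.arange(0, 1, 1 / fs)

def vibration(broken_bars):
    """Synthetic vibration: 60 Hz line plus fault sidebands (illustrative)."""
    x = np.sin(2 * np.pi * 60 * t) + 0.05 * rng.standard_normal(t.size)
    for k in range(1, broken_bars + 1):
        x += 0.5 * np.sin(2 * np.pi * (60 + 7 * k) * t)
        x += 0.5 * np.sin(2 * np.pi * (60 - 7 * k) * t)
    return x

def shannon_entropy(x, bins=64):
    """Shannon entropy (bits) of the signal's amplitude histogram."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log2(p)))

# 1-D entropy features: three healthy and three one-broken-bar signals
feats = np.array([[shannon_entropy(vibration(b))] for b in (0, 0, 0, 1, 1, 1)])

def kmeans(X, k=2, iters=50):
    """Plain K-means seeded with the extreme feature values."""
    C = X[[int(np.argmin(X[:, 0])), int(np.argmax(X[:, 0]))]].copy()
    for _ in range(iters):
        lab = np.argmin(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([X[lab == j].mean(axis=0) for j in range(k)])
    return C

C = kmeans(feats)
new = np.array([shannon_entropy(vibration(1))])   # unseen faulty signal
cluster = int(np.argmin(((C - new) ** 2).sum(-1)))
print("cluster centroids:", C.ravel(), "-> new signal assigned to", cluster)
```

The decision-making unit is just the final `argmin`: the new signal joins whichever condition cluster has the nearest centroid in feature space.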

  5. Automatic inverse methods for the analysis of pulse tests: application to four pulse tests at the Leuggern borehole

    International Nuclear Information System (INIS)

    Carrera, J.; Samper, J.; Vives, L.; Kuhlmann, U.

    1989-07-01

    Four pulse tests performed for NAGRA at the Leuggern borehole were analyzed using automatic test sequence matching techniques. Severe identifiability problems were revealed during the process. Because of these problems, the identifiability of aquifer parameters (hydraulic conductivity, storativity and skin conductivity) from pulse tests similar to those performed in the Leuggern borehole was studied in two synthetic examples, the first with a positive skin effect and the second with a negative skin effect. These synthetic examples showed that, for the test conditions at the Leuggern borehole, estimating formation hydraulic conductivity may be nearly impossible for both positive and negative skin factors. In addition, identifiability appears to be quite sensitive to the values of the parameters and to other factors such as skin thickness. Nevertheless, largely because of the manner in which the tests were conducted (i.e. relatively long injection time and performance of both injection and withdrawal), the identifiability of the actual tests was much better than suggested by the synthetic examples. Only one of the four tests was nearly nonidentifiable. Overall, the match between measured and computed aquifer responses was excellent for all tests, and formation hydraulic conductivities were estimated within a relatively narrow uncertainty interval. (author) 19 refs., 59 figs., 28 tabs

  6. Difference in target definition using three different methods to include respiratory motion in radiotherapy of lung cancer.

    Science.gov (United States)

    Sloth Møller, Ditte; Knap, Marianne Marquard; Nyeng, Tine Bisballe; Khalil, Azza Ahmed; Holt, Marianne Ingerslev; Kandi, Maria; Hoffmann, Lone

    2017-11-01

    Minimizing the planning target volume (PTV) while ensuring sufficient target coverage during the entire respiratory cycle is essential for free-breathing radiotherapy of lung cancer. Different methods are used to incorporate the respiratory motion into the PTV. Fifteen patients were analyzed. Respiration can be included in the target delineation process, creating a respiratory GTV denoted iGTV. Alternatively, the respiratory amplitude (A) can be measured from the 4D-CT and incorporated in the margin expansion. The GTV expanded by A yielded GTV+resp, which was compared to iGTV in terms of overlap. Three methods for PTV generation were compared: PTV_del (delineated iGTV expanded to CTV plus PTV margin), PTV_σ (GTV expanded to CTV, with A included as a random uncertainty in the CTV-to-PTV margin) and PTV_Σ (GTV expanded to CTV, succeeded by linear expansion of the CTV by A to CTV+resp, which was finally expanded to PTV_Σ). Deformation of tumor and lymph nodes during respiration resulted in volume changes between the respiratory phases. The overlap between iGTV and GTV+resp showed that on average 7% of iGTV was outside GTV+resp, implying that GTV+resp did not capture the tumor during the full deformable respiration cycle. A comparison of the PTV volumes showed that PTV_σ was smallest and PTV_Σ largest for all patients. PTV_σ was on average 14% (31 cm³) smaller than PTV_del, while PTV_del was 7% (20 cm³) smaller than PTV_Σ. PTV_σ yields the smallest volumes but does not ensure coverage of the tumor during the full respiratory motion due to tumor deformation. Incorporating the respiratory motion in the delineation (PTV_del) takes the entire respiratory cycle, including deformation, into account, but at the cost of larger treatment volumes. PTV_Σ should not be used, since it incorporates the disadvantages of both PTV_del and PTV_σ.
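
The size ordering of the three PTVs comes down to adding the amplitude A in quadrature (as a random error) versus linearly. A hedged numeric sketch, using an assumed van Herk-type margin recipe and invented error values, none of which are taken from the study:

```python
# Why a quadrature (random-error) treatment of respiratory amplitude A gives a
# smaller margin than adding A linearly. The 2.5*Sigma + 0.7*sigma recipe, the
# A/3 conversion to an SD, and all numbers are illustrative assumptions.
import math

A = 8.0        # peak-to-peak respiratory amplitude, mm (assumed)
Sigma = 2.0    # systematic setup error SD, mm (assumed)
sigma = 3.0    # random setup error SD, mm (assumed)

margin_no_resp = 2.5 * Sigma + 0.7 * sigma
margin_quad = 2.5 * Sigma + 0.7 * math.sqrt(sigma**2 + (A / 3) ** 2)  # A in quadrature
margin_linear = margin_no_resp + A                                    # A added linearly

print(f"no respiration: {margin_no_resp:.1f} mm, quadrature: {margin_quad:.1f} mm, "
      f"linear: {margin_linear:.1f} mm")
```

Under these assumptions the quadrature margin grows by well under a millimeter per millimeter of amplitude, while the linear margin grows one-for-one, mirroring the reported PTV_σ < PTV_del < PTV_Σ ordering.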

  7. Grid Generating Automatically Method of Finite Element Analysis for Rotator Structure%旋转体结构有限元网格自动划分法

    Institute of Scientific and Technical Information of China (English)

    许贤泽

    2000-01-01

    The finite element method is applied to the analysis of structures and degree-of-freedom systems. The grid generation method is a key part of finite element modeling: the grid is generated automatically by a sectional division method to obtain the finite element grid model, thereby completing the pre-processing stage of the finite element analysis.

  8. ASPIRE In-Home: rationale, design, and methods of a study to evaluate the safety and efficacy of automatic insulin suspension for nocturnal hypoglycemia.

    Science.gov (United States)

    Klonoff, David C; Bergenstal, Richard M; Garg, Satish K; Bode, Bruce W; Meredith, Melissa; Slover, Robert H; Ahmann, Andrew; Welsh, John B; Lee, Scott W

    2013-07-01

    Nocturnal hypoglycemia is a barrier to therapy intensification efforts in diabetes. The Paradigm® Veo™ system may mitigate nocturnal hypoglycemia by automatically suspending insulin when a prespecified sensor glucose threshold is reached. ASPIRE (Automation to Simulate Pancreatic Insulin REsponse) In-Home (NCT01497938) was a multicenter, randomized, parallel, adaptive study of subjects with type 1 diabetes. The control arm used sensor-augmented pump therapy. The treatment arm used sensor-augmented pump therapy with threshold suspend, which automatically suspends the insulin pump in response to a sensor glucose value at or below a prespecified threshold. To be randomized, subjects had to have demonstrated ≥2 episodes of nocturnal hypoglycemia, defined as >20 consecutive minutes of sensor glucose values ≤65 mg/dl starting between 10:00 PM and 8:00 AM in the 2-week run-in phase. The 3-month study phase evaluated safety by comparing changes in glycated hemoglobin (A1C) values and evaluated efficacy by comparing the mean area under the glucose concentration time curves for nocturnal hypoglycemia events in the two groups. Other outcomes included the rate of nocturnal hypoglycemia events and the distribution of sensor glucose values. Data from the ASPIRE In-Home study should provide evidence on the safety of the threshold suspend feature with respect to A1C and its efficacy with respect to severity and duration of nocturnal hypoglycemia when used at home over a 3-month period. © 2013 Diabetes Technology Society.
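
The run-in qualification rule described above (episodes of >20 consecutive minutes of sensor glucose ≤65 mg/dl starting between 10:00 PM and 8:00 AM) can be sketched as a simple scan over sensor readings; the 5-minute sampling interval and the trace below are assumptions for illustration:

```python
# Count nocturnal hypoglycemia episodes in a sensor-glucose trace:
# a run of consecutive readings <= threshold lasting > min_minutes, whose
# start time falls in the overnight window.
def in_window(minute):                     # 10:00 PM (1320) .. 8:00 AM (480)
    return minute >= 22 * 60 or minute < 8 * 60

def nocturnal_episodes(readings, threshold=65, min_minutes=20, step=5):
    """readings: list of (minute_of_day, sensor_glucose) at `step`-min intervals."""
    episodes, run, start = 0, 0, None
    for minute, sg in readings + [(None, threshold + 1)]:  # sentinel flushes last run
        if sg <= threshold:
            if run == 0:
                start = minute             # remember when the low run began
            run += 1
        else:
            if run * step > min_minutes and in_window(start):
                episodes += 1
            run = 0
    return episodes

# 30 minutes below threshold starting at 2:00 AM, then recovery,
# plus a brief daytime dip that should not count
trace = ([(120 + 5 * i, 60) for i in range(6)] + [(150, 90)]
         + [(900, 55), (905, 70)])
print(nocturnal_episodes(trace))
```

A subject would qualify for randomization once this count reaches 2 during the run-in phase; the threshold-suspend feature itself acts on the same kind of comparison, suspending insulin when a reading falls at or below the prespecified threshold.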

  9. M-AMST: an automatic 3D neuron tracing method based on mean shift and adapted minimum spanning tree.

    Science.gov (United States)

    Wan, Zhijiang; He, Yishan; Hao, Ming; Yang, Jian; Zhong, Ning

    2017-03-29

    Understanding the working mechanism of the brain is one of the grandest challenges for modern science. Toward this end, the BigNeuron project was launched to gather a worldwide community to establish a big data resource and a set of state-of-the-art single-neuron reconstruction algorithms. Many groups contributed their own algorithms to the project, including our mean shift and minimum spanning tree (M-MST) method. Although M-MST is intuitive and easy to implement, the MST considers only the spatial information of the single neuron and ignores shape information, which may lead to less precise connections between some neuron segments. In this paper, we propose an improved algorithm, M-AMST, in which a rotating sphere model based on coordinate transformation is used to improve the weight calculation method of M-MST. Two experiments are designed to illustrate, respectively, the effect of the adapted minimum spanning tree algorithm and the adaptability of M-AMST in reconstructing a variety of neuron image datasets. In experiment 1, taking the reconstruction of APP2 as reference, we produce four difference scores (entire structure average (ESA), different structure average (DSA), percentage of different structure (PDS) and max distance of neurons' nodes (MDNN)) by comparing the neuron reconstructions of APP2 and the other 5 competing algorithms. The results show that M-AMST gets lower difference scores than M-MST in ESA, PDS and MDNN. Meanwhile, M-AMST is better than N-MST in ESA and MDNN. This indicates that utilizing the adapted minimum spanning tree algorithm, which takes the shape information of the neuron into account, can achieve better neuron reconstructions. In experiment 2, 7 neuron image datasets are reconstructed and the four difference scores are calculated by comparing the gold-standard reconstruction and the reconstructions produced by the 6 competing algorithms. Comparing the four difference scores of M-AMST and the other 5 algorithms, we can conclude that
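
The baseline M-MST step links candidate neuron nodes with a minimum spanning tree weighted purely by Euclidean distance, which is exactly the spatial-only behaviour M-AMST improves on. A minimal sketch of that step (plain Prim's algorithm over 3D points; this is not the authors' code, and the mean-shift node detection and shape-aware weighting are omitted):

```python
import math

def mst_prim(points):
    """Minimum spanning tree over 3D points with Euclidean edge weights
    (spatial information only, as in the baseline M-MST linking step)."""
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    in_tree = {0}
    edges = []
    # best[j] = (cheapest edge weight into the tree so far, its parent node)
    best = {i: (dist(0, i), 0) for i in range(1, n)}
    while len(in_tree) < n:
        i = min(best, key=lambda k: best[k][0])
        w, parent = best.pop(i)
        in_tree.add(i)
        edges.append((parent, i, w))
        for j in best:
            d = dist(i, j)
            if d < best[j][0]:
                best[j] = (d, i)
    return edges

# Example: four hypothetical neuron nodes (e.g., mean-shift cluster centers)
pts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (5, 5, 5)]
tree = mst_prim(pts)
total = sum(w for _, _, w in tree)
```

M-AMST replaces the pure distance `w` with a weight that also accounts for local shape, so segments are connected along the neuron's anatomy rather than merely to the nearest node.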

  10. Automatic food decisions

    DEFF Research Database (Denmark)

    Mueller Loose, Simone

    Consumers' food decisions are to a large extent shaped by automatic processes, which are either internally directed through learned habits and routines or externally influenced by context factors and visual information triggers. Innovative research methods such as eye tracking, choice experiments...... and food diaries allow us to better understand the impact of unconscious processes on consumers' food choices. Simone Mueller Loose will provide an overview of recent research insights into the effects of habit and context on consumers' food choices....

  11. Evaluation of a direct method for the identification and antibiotic susceptibility assessment of microrganisms isolated from blood cultures by automatic systems

    Directory of Open Access Journals (Sweden)

    Sergio Frugoni

    2008-03-01

    Full Text Available The purpose of blood cultures in the septic patient is to guide a correct therapeutic approach. Identification and antibiotic susceptibility testing carried out directly from the bottle can provide important information in a short time. The introduction of automatic instrumentation has improved the detection of pathogens in blood, but the time elapsing between a positive detection and the microbiological report is still long. The aim of this study is the evaluation of a fast, easy and cheap method, applicable to routine work, which could reduce the response time in the diagnosis of bacteraemia. The automatic systems Vitek Senior (bioMérieux) and Vitek 2 (bioMérieux) were used at Pio Albergo Trivulzio (Centre 1) and at Istituto dei Tumori (Centre 2), respectively. To remove blood cells, 7 ml of the culture was transferred by vacuum sampling into a test tube and centrifuged for 10 minutes at 1000 rpm; the supernatant was further centrifuged for 10 minutes at 3000 rpm. 0.5 ml of BHI was added to the pellet, and the concentration of the bacterial suspension was adjusted for inoculation. In parallel, standard cultures in suitable culture media were carried out for comparison. In Centre 1 and Centre 2, 63 and 31 Gram-negative and 32 and 40 Gram-positive microorganisms, respectively, were isolated and identified. The identification of Gram-negative and Gram-positive microorganisms showed agreements between the direct and the standard method of 100% and 86.2%, and 93.3% and 65.78%, respectively. For antibiotic susceptibility tests, 903 (Centre 1) and 491 (Centre 2) compounds were assessed in Gram-negative bacteria, and 396 and 509 in Gram-positive bacteria, respectively. The analysis highlighted that Centre 1 reported 0.30% very major errors (GE), 0.92% major errors (EM) and 1.23% minor errors (Em), while Centre 2 showed 0.57% very major errors (GE), 0.09% major errors

  12. Reproducibility of a semi-automatic method for 6-point vertebral morphometry in a multi-centre trial

    International Nuclear Information System (INIS)

    Guglielmi, Giuseppe; Stoppino, Luca Pio; Placentino, Maria Grazia; D'Errico, Francesco; Palmieri, Francesco

    2009-01-01

    Purpose: To evaluate the reproducibility of a semi-automated system for vertebral morphometry (MorphoXpress) in a large multi-centre trial. Materials and methods: The study involved 132 clinicians (none of them radiologists) with different levels of experience across 20 osteo-centres in Italy, all of whom had received training in using MorphoXpress. An expert radiologist was also involved, providing data used as the standard of reference. The test images originated from normal clinical activity and represented a variety of normal, under- and over-exposed films, depicting both normal anatomy and vertebral deformities. Each image was presented twice to the clinicians in a random order. Using the software, the clinicians initially marked the midpoints of the upper and lower vertebrae so as to include as many of the vertebrae (T5-L4) as practical within each given image. MorphoXpress then localises all morphometric points using a statistical model-based vision system. Intra-operator and inter-operator agreement were calculated using the coefficient of variation and the mean and standard deviation of the difference between two measurements. Results: The overall intra-operator mean difference in vertebral heights was 1.61 ± 4.27% (1 S.D.), with an intra-operator coefficient of variation of 3.95%. The overall inter-operator mean difference in vertebral heights was 2.93 ± 5.38% (1 S.D.), with an inter-operator coefficient of variation of 6.89%. Conclusions: The technology tested here can facilitate reproducible quantitative morphometry suitable for large studies of vertebral deformities

  13. Automatic tracking of wake vortices using ground-wind sensor data

    Science.gov (United States)

    1977-01-03

    Algorithms for automatic tracking of wake vortices using ground-wind anemometer data are developed. Methods of bad-data suppression, track initiation, and track termination are included. An effective sensor-failure detection-and-identification ...

  14. New Multigrid Method Including Elimination Algorithm Based on High-Order Vector Finite Elements in Three Dimensional Magnetostatic Field Analysis

    Science.gov (United States)

    Hano, Mitsuo; Hotta, Masashi

    A new multigrid method based on high-order vector finite elements is proposed in this paper. Low-level discretizations in this method are obtained by using low-order vector finite elements on the same mesh. The Gauss-Seidel method is used as a smoother, and the linear equation at the lowest level is solved by the ICCG method. However, it is often found that multigrid solutions do not converge to the ICCG solutions. An elimination algorithm for the constant term, using a null space of the coefficient matrix, is also described. For three-dimensional magnetostatic field analysis, the convergence time and the number of iterations of this multigrid method are compared with those of the conventional ICCG method.

  15. Automatic welding and cladding in heavy fabrication

    International Nuclear Information System (INIS)

    Altamer, A. de

    1980-01-01

    A description is given of the automatic welding processes used by an Italian fabricator of pressure vessels for petrochemical and nuclear plant. The automatic submerged arc welding, submerged arc strip cladding, pulsed TIG, hot wire TIG and MIG welding processes have proved satisfactory in terms of process reliability, metal deposition rate, and cost effectiveness for low alloy and carbon steels. An example shows sequences required during automatic butt welding, including heat treatments. Factors which govern satisfactory automatic welding include automatic anti-drift rotator device, electrode guidance and bead programming system, the capability of single and dual head operation, flux recovery and slag removal systems, operator environment and controls, maintaining continuity of welding and automatic reverse side grinding. Automatic welding is used for: joining vessel sections; joining tubes to tubeplate; cladding of vessel rings and tubes, dished ends and extruded nozzles; nozzle to shell and butt welds, including narrow gap welding. (author)

  16. Automatic contact in DYNA3D for vehicle crashworthiness

    International Nuclear Information System (INIS)

    Whirley, R.G.; Engelmann, B.E.

    1994-01-01

    This paper presents a new formulation for the automatic definition and treatment of mechanical contact in explicit, nonlinear, finite element analysis. Automatic contact offers the benefits of significantly reduced model construction time and fewer opportunities for user error, but faces significant challenges in reliability and computational costs. The authors have used a new four-step automatic contact algorithm. Key aspects of the proposed method include (1) automatic identification of adjacent and opposite surfaces in the global search phase, and (2) the use of a smoothly varying surface normal that allows a consistent treatment of shell intersection and corner contact conditions without ad hoc rules. Three examples are given to illustrate the performance of the newly proposed algorithm in the public DYNA3D code

  17. Automatic fault extraction using a modified ant-colony algorithm

    International Nuclear Information System (INIS)

    Zhao, Junsheng; Sun, Sam Zandong

    2013-01-01

    The basis of automatic fault extraction is seismic attributes, such as the coherence cube, in which a fault is typically identified by minimum values. The biggest challenge in automatic fault extraction is noise, including noise in the seismic data. A fault, however, has better spatial continuity in a certain direction, which makes it quite different from noise. Considering this characteristic, a modified ant-colony algorithm is introduced into automatic fault identification and tracking, where the gradient direction and direction consistency are used as constraints. Numerical model tests show that this method is feasible and effective for automatic fault extraction and noise suppression. Application to field data further illustrates its validity and superiority. (paper)

  18. An Automatic Segmentation Method Combining an Active Contour Model and a Classification Technique for Detecting Polycomb-group Proteins in High-Throughput Microscopy Images.

    Science.gov (United States)

    Gregoretti, Francesco; Cesarini, Elisa; Lanzuolo, Chiara; Oliva, Gennaro; Antonelli, Laura

    2016-01-01

    The large amount of data generated in biological experiments that rely on advanced microscopy can be handled only with automated image analysis. Most analyses require a reliable cell image segmentation, eventually capable of detecting subcellular structures. We present an automatic segmentation method to detect Polycomb group (PcG) protein areas isolated from nuclei regions in high-resolution fluorescent cell image stacks. It combines two segmentation algorithms that use an active contour model and a classification technique, serving as a tool to better understand the subcellular three-dimensional distribution of PcG proteins in live cell image sequences. We obtained accurate results throughout several cell image datasets, coming from different cell types and corresponding to different fluorescent labels, without requiring elaborate adjustments for each dataset.

  19. Semi-automatized segmentation method using image-based flow cytometry to study sperm physiology: the case of capacitation-induced tyrosine phosphorylation.

    Science.gov (United States)

    Matamoros-Volante, Arturo; Moreno-Irusta, Ayelen; Torres-Rodriguez, Paulina; Giojalas, Laura; Gervasi, María G; Visconti, Pablo E; Treviño, Claudia L

    2018-02-01

    Is image-based flow cytometry a useful tool to study intracellular events in human sperm such as protein tyrosine phosphorylation or signaling processes? Image-based flow cytometry is a powerful tool to study intracellular events in a relevant number of sperm cells, which enables a robust statistical analysis providing spatial resolution in terms of the specific subcellular localization of the labeling. Sperm capacitation is required for fertilization. During this process, spermatozoa undergo numerous physiological changes, via activation of different signaling pathways, which are not completely understood. Classical approaches for studying sperm physiology include conventional microscopy, flow cytometry and Western blotting. These techniques present disadvantages for obtaining detailed subcellular information on signaling pathways in a relevant number of cells. This work describes a new semi-automatized analysis using image-based flow cytometry which enables the study, at the subcellular and population levels, of different sperm parameters associated with signaling. The increase in protein tyrosine phosphorylation during capacitation is presented as an example. Sperm cells were isolated from seminal plasma by the swim-up technique. We evaluated the intensity and distribution of protein tyrosine phosphorylation in sperm incubated in non-capacitation and capacitation-supporting media for 1 and 18 h under different experimental conditions. We used an antibody against FER kinase and pharmacological inhibitors in an attempt to identify the kinases involved in protein tyrosine phosphorylation during human sperm capacitation. Semen samples from normospermic donors were obtained by masturbation after 2-3 days of sexual abstinence. We used the innovative image-based flow cytometry technique and image analysis tools to segment individual images of spermatozoa. We evaluated and quantified the regions of sperm where protein tyrosine phosphorylation takes place at the

  20. Assessing the Applicability of Currently Available Methods for Attributing Foodborne Disease to Sources, Including Food and Food Commodities

    DEFF Research Database (Denmark)

    Pires, Sara Monteiro

    2013-01-01

    on the public health question being addressed, on the data requirements, on advantages and limitations of the method, and on the data availability of the country or region in question. Previous articles have described available methods for source attribution, but have focused only on foodborne microbiological...

  1. Comparative evaluation of reproductive parameters between the automatic GEDIS cervical insemination method and the traditional in multicolor bristles

    Directory of Open Access Journals (Sweden)

    Núñez-Torres Oscar Patricio

    2017-04-01

    Full Text Available The research was carried out in Ecuador, in the province of Tungurahua, Cevallos county. A comparison of reproductive parameters between the automatic (GEDIS) cervical insemination method and the traditional one in multiparous sows was performed using 12 sows (hybrid females between the second and fourth farrowing), divided into two groups of 6 sows, using the insemination protocol 12 h - 24 h - 36 h. Fresh semen was prepared with long-term diluent plus bidistilled water, at a concentration of 3 × 10^9 spermatozoa/mL, in a total volume of 100 mL per straw. At the time of insemination the amount of seminal reflux was determined; applying Student's t-test with paired observations, the results showed a statistically significant difference at 5% between the evaluated methods, the calculated t value being 9.50, which is greater than the tabulated t at 5% of 2.57. The duration of each method was determined, with similar results for the two methods (15 min). At 21 days post-insemination, pregnancy was diagnosed by ultrasound and by evaluation of non-return to heat, with both methods showing 100% effectiveness. Subsequently, at farrowing, the number of total born piglets was evaluated; Student's t-test with paired observations showed no statistically significant difference at 5% between the two methods, the calculated t value being 0.14, which is less than the tabulated t at 5% of 2.57. We also determined the weight of piglets at birth; Student's t-test with paired observations showed a statistically significant difference at 5% between the evaluated methods, the calculated t value being 5.17, which is greater than the tabulated t at 5% of 2.57. As for costs, there is no considerable difference.

  2. Innovative Method for Automatic Shape Generation and 3D Printing of Reduced-Scale Models of Ultra-Thin Concrete Shells

    Directory of Open Access Journals (Sweden)

    Ana Tomé

    2018-02-01

    Full Text Available A research and development project has been conducted aiming to design and produce ultra-thin concrete shells. In this paper, the first part of the project is described, consisting of an innovative method for shape generation and the consequent production of reduced-scale models of the selected geometries. First, the shape generation is explained, consisting of a geometrically nonlinear analysis based on the Finite Element Method (FEM) to define the antifunicular of the shell's deadweight. Next, the scale model production is described, consisting of 3D printing, specifically developed to evaluate the aesthetics and visual impact, as well as to study the aerodynamic behaviour of the concrete shells in a wind tunnel. The goals and constraints of the method are identified and step-by-step guidelines are presented, aiming to be used as a reference in future studies. The printed geometry is validated by high-resolution assessment achieved by photogrammetry. The results are compared with the geometry computed through the geometrically nonlinear finite-element-based analysis, and no significant differences are recorded. The method is revealed to be an important tool for automatic shape generation and for building scale models of shells. The latter enables wind tunnel tests to be performed to obtain pressure coefficients, which are essential for the structural analysis of this type of structure.

  3. Adaptive thresholding and dynamic windowing method for automatic centroid detection of digital Shack-Hartmann wavefront sensor

    International Nuclear Information System (INIS)

    Yin Xiaoming; Li Xiang; Zhao Liping; Fang Zhongping

    2009-01-01

    A Shack-Hartmann wavefront sensor (SHWS) splits the incident wavefront into many subsections and transforms the distorted wavefront detection into a centroid measurement; the accuracy of the centroid measurement determines the accuracy of the SHWS. Many methods have been presented to improve the accuracy of the wavefront centroid measurement. However, most of these methods are discussed from the point of view of optics, based on the assumption that the spot intensity of the SHWS has a Gaussian distribution, which is not applicable to the digital SHWS. In this paper, we present a centroid measurement algorithm based on adaptive thresholding and dynamic windowing, utilizing image processing techniques, for practical application of the digital SHWS in surface profile measurement. The method can detect the centroid of each focal spot precisely and robustly by eliminating the influence of various noise sources, such as diffraction in the digital SHWS, unevenness and instability of the light source, as well as deviation between the centroid of the focal spot and the center of the detection area. The experimental results demonstrate that the algorithm has better precision, repeatability, and stability compared with other commonly used centroid methods, such as the statistical averaging, thresholding, and windowing algorithms.
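
As a rough illustration of the idea, the sketch below thresholds a focal spot relative to its own peak and computes an intensity-weighted centroid inside a window centred on the peak. The paper's actual adaptive-thresholding and dynamic-windowing rules are more elaborate; the window size and relative threshold here are arbitrary assumptions.

```python
import numpy as np

def spot_centroid(img, rel_threshold=0.5, window=7):
    """Centroid of one focal spot: threshold adapted to the spot's peak,
    then an intensity-weighted centroid inside a window around the peak.
    (Simplified stand-in for adaptive thresholding / dynamic windowing.)"""
    img = img.astype(float)
    py, px = np.unravel_index(np.argmax(img), img.shape)   # peak pixel
    h = window // 2
    y0, y1 = max(py - h, 0), min(py + h + 1, img.shape[0])
    x0, x1 = max(px - h, 0), min(px + h + 1, img.shape[1])
    roi = img[y0:y1, x0:x1].copy()
    roi[roi < rel_threshold * roi.max()] = 0.0             # suppress background
    ys, xs = np.mgrid[y0:y1, x0:x1]
    total = roi.sum()
    return (ys * roi).sum() / total, (xs * roi).sum() / total

# Synthetic symmetric spot centred on pixel (10, 12)
yy, xx = np.mgrid[0:21, 0:21]
img = np.exp(-((yy - 10.0) ** 2 + (xx - 12.0) ** 2) / 4.0)
cy, cx = spot_centroid(img)
```

For a symmetric spot the windowed, thresholded centroid coincides with the peak; the thresholding matters when background noise or neighbouring-spot diffraction leaks into the window.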

  4. A Novel Characteristic Frequency Bands Extraction Method for Automatic Bearing Fault Diagnosis Based on Hilbert Huang Transform

    Directory of Open Access Journals (Sweden)

    Xiao Yu

    2015-11-01

    Full Text Available Because roller element bearing (REB) failures cause unexpected machinery breakdowns, their fault diagnosis has attracted considerable research attention. Established fault feature extraction methods focus on statistical characteristics of the vibration signal, an approach that loses sight of the continuous waveform features. Considering this weakness, this article proposes a novel feature extraction method for frequency bands, named Window Marginal Spectrum Clustering (WMSC), to select salient features from the marginal spectrum of vibration signals obtained by the Hilbert-Huang Transform (HHT). In WMSC, a sliding window is used to divide the entire HHT marginal spectrum (HMS) into window spectrums, after which the Rand Index (RI) criterion of the clustering method is used to evaluate each window. The windows returning higher RI values are selected to construct characteristic frequency bands (CFBs). Next, a hybrid REB fault diagnosis model is constructed, termed by its elements HHT-WMSC-SVM (support vector machine). The effectiveness of HHT-WMSC-SVM is validated by running a series of experiments on REB defect datasets from the Bearing Data Center of Case Western Reserve University (CWRU). The test results evidence three major advantages of the novel method. First, the fault classification accuracy of the HHT-WMSC-SVM model is higher than that of HHT-SVM and of ST-SVM, a method that combines statistical characteristics with SVM. Second, with Gaussian white noise added to the original REB defect dataset, the HHT-WMSC-SVM model maintains high classification accuracy, while the classification accuracies of the ST-SVM and HHT-SVM models are significantly reduced. Third, the fault classification accuracy of HHT-WMSC-SVM can exceed 95% under a Pmin range of 500-800 and an m range of 50-300 for the REB defect dataset with Gaussian white noise added at Signal-to-Noise Ratio (SNR) = 5. Experimental results indicate that the proposed WMSC method yields a high REBs fault

  5. Automatic validation of numerical solutions

    DEFF Research Database (Denmark)

    Stauning, Ole

    1997-01-01

    This thesis is concerned with ``Automatic Validation of Numerical Solutions''. The basic theory of interval analysis and self-validating methods is introduced. The mean value enclosure is applied to discrete mappings for obtaining narrow enclosures of the iterates when applying these mappings...... differential equations, but in this thesis, we describe how to use the methods for enclosing iterates of discrete mappings, and then later use them for discretizing solutions of ordinary differential equations. The theory of automatic differentiation is introduced, and three methods for obtaining derivatives...... are described: The forward, the backward, and the Taylor expansion methods. The three methods have been implemented in the C++ program packages FADBAD/TADIFF. Some examples showing how to use the three methods are presented. A feature of FADBAD/TADIFF not present in other automatic differentiation packages

  6. Estimation of a condition of electromagnetic relays a method of acoustic diagnostics in systems of railway automatics

    Directory of Open Access Journals (Sweden)

    V.V. Laguta

    2012-04-01

    Full Text Available The paper deals with methods for preliminary analysis of sound signals aimed at extracting stable features for classifying the relays used in railway automation systems. An algorithm for describing a sound signal on the basis of these features is proposed.

  7. New method for the automatic control of decentralized system-tie frequency converters; Neues Verfahren zur Regelung von dezentralen Netzkupplungsumrichtern

    Energy Technology Data Exchange (ETDEWEB)

    Xie Jian [Ulm Univ. (Germany). Inst. fuer Energiewandlung und -speicherung; Wolf, T. [DB Energie GmbH, Chemnitz (Germany)

    2007-07-01

    The calculation of the optimum rectifier voltage for minimum losses is based on the line impedances between the substation and the neighbouring substations. A new method for the online determination of these impedances is proposed. The test results obtained support the viability of this approach. (orig.)

  8. Compositions of graphene materials with metal nanostructures and microstructures and methods of making and using including pressure sensors

    KAUST Repository

    Chen, Ye; Khashab, Niveen M.; Tao, Jing

    2017-01-01

    Composition comprising at least one graphene material and at least one metal. The metal can be in the form of nanoparticles as well as microflakes, including single crystal microflakes. The metal can be intercalated in the graphene sheets

  9. Comparison of 2D radiography and a semi-automatic CT-based 3D method for measuring change in dorsal angulation over time in distal radius fractures

    Energy Technology Data Exchange (ETDEWEB)

    Christersson, Albert; Larsson, Sune [Uppsala University, Department of Orthopaedics, Uppsala (Sweden); Nysjoe, Johan; Malmberg, Filip; Sintorn, Ida-Maria; Nystroem, Ingela [Uppsala University, Centre for Image Analysis, Uppsala (Sweden); Berglund, Lars [Uppsala University, Uppsala Clinical Research Centre, UCR Statistics, Uppsala (Sweden)

    2016-06-15

    The aim of the present study was to compare the reliability and agreement of a computed tomography-based method (CT) and digitalised 2D radiographs (XR) when measuring change in dorsal angulation over time in distal radius fractures. Radiographs from 33 distal radius fractures treated with external fixation were retrospectively analysed. All fractures had been examined using both XR and CT at six time points over 6 months postoperatively. The changes in dorsal angulation between the first reference images and the following examinations in every patient were calculated from 133 follow-up measurements by two assessors, and repeated at two different time points. The measurements were analysed using Bland-Altman plots, comparing intra- and inter-observer agreement within and between XR and CT. The mean differences in intra- and inter-observer measurements for XR, CT, and between XR and CT were close to zero, implying equal validity. The average intra- and inter-observer limits of agreement for XR, CT, and between XR and CT were ± 4.4°, ± 1.9° and ± 6.8°, respectively. For scientific purposes, the reliability of XR seems unacceptably low for measuring changes in dorsal angulation in distal radius fractures, whereas the reliability of the semi-automatic CT-based method was higher; the latter is therefore preferable when a more precise method is required. (orig.)
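
The agreement analysis described above can be sketched as a standard Bland-Altman computation on paired measurements. Note the study reports limits as mean ± 1 S.D.; the sketch below uses the conventional 1.96-SD limits of agreement, and the paired angulation values are made up for illustration.

```python
import statistics

def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement (bias ± 1.96 SD)
    between two sets of paired measurements, as in a Bland-Altman plot."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired dorsal-angulation changes (degrees) from two readings
first  = [2.0, 5.5, 3.1, 7.8, 4.2]
second = [2.4, 5.0, 3.5, 7.2, 4.6]
bias, lo, hi = bland_altman(first, second)
```

A bias near zero with narrow limits, as reported for the CT method, indicates that the two readings can be used interchangeably; wide limits, as for XR, mean individual re-measurements may disagree substantially even when the average difference is small.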

  10. Evaluation of methods to produce an image library for automatic patient model localization for dose mapping during fluoroscopically guided procedures

    Science.gov (United States)

    Kilian-Meneghin, Josh; Xiong, Z.; Rudin, S.; Oines, A.; Bednarek, D. R.

    2017-03-01

    The purpose of this work is to evaluate methods for producing a library of 2D-radiographic images to be correlated to clinical images obtained during a fluoroscopically-guided procedure for automated patient-model localization. The localization algorithm will be used to improve the accuracy of the skin-dose map superimposed on the 3D patient-model of the real-time Dose-Tracking-System (DTS). For the library, 2D images were generated from CT datasets of the SK-150 anthropomorphic phantom using two methods: Schmid's 3D-visualization tool and Plastimatch's digitally-reconstructed-radiograph (DRR) code. Those images, as well as a standard 2D-radiographic image, were correlated to a 2D-fluoroscopic image of a phantom, which represented the clinical-fluoroscopic image, using the Corr2 function in Matlab. The Corr2 function takes two images and outputs the relative correlation between them, which is fed into the localization algorithm. Higher correlation means better alignment of the 3D patient-model with the patient image. In this instance, it was determined that the localization algorithm will succeed when Corr2 returns a correlation of at least 50%. The 3D-visualization tool images returned 55-80% correlation relative to the fluoroscopic image, which was comparable to the correlation for the radiograph. The DRR images returned 61-90% correlation, again comparable to the radiograph. Both methods prove to be sufficient for the localization algorithm and can be produced quickly; however, the DRR method produces more accurate grey-levels.  Using the DRR code, a library at varying angles can be produced for the localization algorithm.
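
MATLAB's `corr2` computes the 2D Pearson correlation coefficient between two equal-size images. A minimal equivalent of that computation, with synthetic arrays standing in for the fluoroscopic image and a library entry:

```python
import numpy as np

def corr2(a, b):
    """2D Pearson correlation coefficient between two equal-size images
    (the quantity MATLAB's corr2 returns): mean-subtract, then normalize."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

rng = np.random.default_rng(0)
img = rng.random((64, 64))                       # stand-in "fluoroscopic" image
noisy = img + 0.1 * rng.standard_normal((64, 64))  # stand-in library image
r_self = corr2(img, img)    # identical images correlate perfectly
r_noisy = corr2(img, noisy)
```

In the workflow above, each library image would be scored this way against the clinical frame and the best-scoring pose kept, with 0.5 as the reported success threshold.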

  11. A new method for the automatic retrieval of medical cases based on the RadLex ontology.

    Science.gov (United States)

    Spanier, A B; Cohen, D; Joskowicz, L

    2017-03-01

    The goal of medical case-based image retrieval (M-CBIR) is to assist radiologists in the clinical decision-making process by finding the medical cases in large archives that most resemble a given case. Cases are described by radiology reports comprising radiological images and textual information on the anatomy and pathology findings. The textual information, when available in standardized terminology, e.g., the RadLex ontology, and used in conjunction with the radiological images, provides a substantial advantage for M-CBIR systems. We present a new method for incorporating textual radiological findings from medical case reports in M-CBIR. The input is a database of medical cases, a query case, and the number of desired relevant cases. The output is an ordered list of the most relevant cases in the database. The method is based on a new case formulation, the Augmented RadLex Graph, and an Anatomy-Pathology List. It uses a new case relatedness metric [Formula: see text] that prioritizes more specific medical terms in the RadLex tree over less specific ones and that incorporates the length of the query case. An experimental study on 8 CT queries from the 2015 VISCERAL 3D Case Retrieval Challenge database, consisting of 1497 volumetric CT scans, shows that our method has accuracy rates of 82% and 70% on the first 10 and 30 most relevant cases, respectively, thereby outperforming six other methods. The increasing amount of medical imaging data acquired in clinical practice constitutes a vast database of untapped diagnostically relevant information. This paper presents a new hybrid approach to retrieving the most relevant medical cases based on textual and image information.

  12. Evaluation, including effects of storage and repeated freezing and thawing, of a method for measurement of urinary creatinine

    DEFF Research Database (Denmark)

    Garde, A H; Hansen, Åse Marie; Kristiansen, J

    2003-01-01

    The aims of this study were to elucidate to what extent storage and repeated freezing and thawing influenced the concentration of creatinine in urine samples and to evaluate the method for determination of creatinine in urine. The creatinine method was based on the well-known Jaffe's reaction...... and measured on a COBAS Mira autoanalyser from Roche. The main findings were that samples for analysis of creatinine should be kept at a temperature of -20 degrees C or lower and frozen and thawed only once. The limit of detection, determined as 3 x SD of 20 determinations of a sample at a low concentration (6...
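
The limit-of-detection rule quoted above (3 × SD of repeated determinations of a low-concentration sample) can be sketched directly; the 20 replicate values below are invented for illustration, not the study's data.

```python
import statistics

# Limit of detection as 3 x SD of 20 repeated determinations of a
# low-concentration urine sample (replicate values are hypothetical).
replicates = [6.1, 5.9, 6.0, 6.2, 5.8, 6.0, 6.1, 5.9, 6.0, 6.0,
              6.2, 5.8, 6.1, 5.9, 6.0, 6.1, 5.9, 6.0, 6.2, 5.8]
lod = 3 * statistics.stdev(replicates)
```

Any measured concentration below `lod` would be reported as not reliably distinguishable from the assay's noise at that level.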

  13. A computer program for automatic gamma-ray spectra analysis

    International Nuclear Information System (INIS)

    Hiromura, Kazuyuki

    1975-01-01

A computer program for the automatic analysis of gamma-ray spectra obtained with a Ge(Li) detector is presented. The program includes a method that compares successive values of the experimental data for automatic peak finding, and a least-squares method for peak fitting. The peak shape in the fitting routine is a 'modified Gaussian', which consists of two different Gaussians with the same height joined at the centroid. A quadratic function is chosen to represent the background. A maximum of four peaks can be treated in the fitting routine by the program. Some improvements under consideration are also described. (auth.)
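The 'modified Gaussian' peak shape described here, two Gaussians of equal height with different widths joined at the centroid, is simple to state explicitly. A minimal sketch (parameter names are my own):

```python
import math

def modified_gaussian(x, height, centroid, sigma_left, sigma_right):
    """'Modified Gaussian' peak shape as described in the abstract: two
    Gaussians of equal height and different widths joined at the centroid,
    so the curve is continuous there but asymmetric."""
    sigma = sigma_left if x < centroid else sigma_right
    return height * math.exp(-((x - centroid) ** 2) / (2.0 * sigma ** 2))
```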

  14. Theory of linear physical systems theory of physical systems from the viewpoint of classical dynamics, including Fourier methods

    CERN Document Server

    Guillemin, Ernst A

    2013-01-01

    An eminent electrical engineer and authority on linear system theory presents this advanced treatise, which approaches the subject from the viewpoint of classical dynamics and covers Fourier methods. This volume will assist upper-level undergraduates and graduate students in moving from introductory courses toward an understanding of advanced network synthesis. 1963 edition.

  15. Sighting optics including an optical element having a first focal length and a second focal length and methods for sighting

    Science.gov (United States)

    Crandall, David Lynn

    2011-08-16

    Sighting optics include a front sight and a rear sight positioned in a spaced-apart relation. The rear sight includes an optical element having a first focal length and a second focal length. The first focal length is selected so that it is about equal to a distance separating the optical element and the front sight and the second focal length is selected so that it is about equal to a target distance. The optical element thus brings into simultaneous focus for a user images of the front sight and the target.

  16. Compositions of graphene materials with metal nanostructures and microstructures and methods of making and using including pressure sensors

    KAUST Repository

    Chen, Ye

    2017-01-26

Composition comprising at least one graphene material and at least one metal. The metal can be in the form of nanoparticles as well as microflakes, including single crystal microflakes. The metal can be intercalated in the graphene sheets. The composition has high conductivity and flexibility. The composition can be made by a one-pot synthesis in which a graphene material precursor is converted to the graphene material, and the metal precursor is converted to the metal. A reducing solvent or dispersant such as NMP can be used. Devices made from the composition include a pressure sensor which has high sensitivity. Two two-dimensional materials can be combined to form a hybrid material.

  17. Automatic Quantification of Tumour Hypoxia From Multi-Modal Microscopy Images Using Weakly-Supervised Learning Methods.

    Science.gov (United States)

    Carneiro, Gustavo; Peng, Tingying; Bayer, Christine; Navab, Nassir

    2017-07-01

In recently published clinical trial results, hypoxia-modified therapies have been shown to provide more positive outcomes for cancer patients, compared with standard cancer treatments. The development and validation of these hypoxia-modified therapies depend on an effective way of measuring tumor hypoxia, but a standardized measurement is currently unavailable in clinical practice. Different types of manual measurements have been proposed in clinical research, but in this paper we focus on a recently published approach that quantifies the number and proportion of hypoxic regions using high-resolution (immuno-)fluorescence (IF) and hematoxylin and eosin (HE) stained images of a histological specimen of a tumor. We introduce new machine learning-based methodologies to automate this measurement, where the main challenge is that the clinical annotations available for training consist of the total number of normoxic, chronically hypoxic, and acutely hypoxic regions without any indication of their location in the image. Therefore, this represents a weakly-supervised structured output classification problem, where training is based on a high-order loss function formed by the norm of the difference between the manual and estimated annotations mentioned above. We propose four methodologies to solve this problem: 1) a naive method that uses a majority classifier applied on the nodes of a fixed grid placed over the input images; 2) a baseline method based on a structured output learning formulation that relies on a fixed grid placed over the input images; 3) an extension of this baseline based on a latent structured output learning formulation that uses a graph that is flexible in terms of the number and positions of nodes; and 4) a pixel-wise labeling based on a fully-convolutional neural network. Using a data set of 89 weakly annotated pairs of IF and HE images from eight tumors, we show that the quantitative results of methods (3) and (4
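The high-order loss described above, the norm of the difference between the manual and estimated region counts, reduces to a short function. A sketch, assuming the Euclidean norm over the three region counts (normoxic, chronically hypoxic, acutely hypoxic):

```python
import math

def count_loss(pred_counts, true_counts):
    """Weakly-supervised loss in the spirit of the abstract: the norm of
    the difference between estimated and manual per-image region counts.
    The Euclidean norm is an assumption; the paper only says 'norm'."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred_counts, true_counts)))
```

Note that this supervises only the counts, never the region locations, which is exactly what makes the problem weakly supervised.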

  18. Difference in target definition using three different methods to include respiratory motion in radiotherapy of lung cancer

    DEFF Research Database (Denmark)

    Sloth Møller, Ditte; Knap, Marianne Marquard; Nyeng, Tine Bisballe

    2017-01-01

PTVσ yields the smallest volumes but does not ensure coverage of the tumor during the full respiratory motion due to tumor deformation. Incorporating the respiratory motion in the delineation (PTVdel) takes into account the entire respiratory cycle including deformation, but at the cost of larger...

  19. Application of a methane carbon isotope analyzer for the investigation of δ13C of methane emission measured by the automatic chamber method in an Arctic Tundra

    Science.gov (United States)

    Mastepanov, Mikhail; Christensen, Torben

    2014-05-01

Methane emissions have been monitored by an automatic chamber method in the Zackenberg valley, NE Greenland, since 2006 as a part of the Greenland Ecosystem Monitoring (GEM) program. During most seasons the measurements were carried out from the time of snow melt (June-July) until freezing of the active layer (October-November). Several years of data, obtained by the same method, with the same instrumentation and at exactly the same site, provided a unique opportunity for the analysis of interannual methane flux patterns and the factors affecting their temporal variability. The start of the growing-season emissions was found to be closely related to the date of snow melt at the site. Despite a large between-year variability of this date (sometimes more than a month), methane emission started within a few days after it and increased for about the next 30 days. After this peak of emission, it slowly decreased and then stayed more or less constant, or slightly decreasing, during the rest of the growing season (Mastepanov et al., Biogeosciences, 2013). During soil freezing, a second peak of methane emission was found (Mastepanov et al., Nature, 2008); its amplitude varied greatly between years, from almost undetectable to comparable with the total growing-season emissions. Analysis of the multiyear emission patterns (Mastepanov et al., Biogeosciences, 2013) led to hypotheses of different sources for the spring, summer and autumn methane emissions, and of multiyear cycles of accumulation and release of these components to the atmosphere. To investigate this further, it was decided to complement the monitoring system with a methane carbon isotope analyzer (Los Gatos Research, USA). The instrument was installed during the 2013 field season and operated successfully until the end of the measurement campaign (27 October).
Detecting both 12C-CH4 and 13C-CH4 concentrations in real time (0.5 Hz) during automatic chamber closures (15 min), the instrument provided data for the determination of
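Given the two isotopologue concentrations the analyzer reports, δ13C follows from the standard delta notation relative to the VPDB standard. A sketch; the VPDB 13C/12C ratio below is the commonly cited value, and this is not necessarily the instrument's internal calculation:

```python
# 13C/12C isotope ratio of the VPDB standard (commonly cited value).
R_VPDB = 0.011180

def delta13C(c13, c12):
    """delta-13C in per mil from measured 13C-CH4 and 12C-CH4
    concentrations, using standard delta notation vs. VPDB."""
    return ((c13 / c12) / R_VPDB - 1.0) * 1000.0
```

Biogenic methane is strongly 13C-depleted, so the δ13C values of interest here are large negative numbers (tens of per mil below the standard).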

  20. Validation of a New Method to Automatically Select Cases With Intraoperative Red Blood Cell Transfusion for Audit.

    Science.gov (United States)

    Dexter, Franklin; Epstein, Richard H; Ledolter, Johannes; Dasovich, Susan M; Herman, Jay H; Maga, Joni M; Schwenk, Eric S

    2018-05-01

    Hospitals review allogeneic red blood cell (RBC) transfusions for appropriateness. Audit criteria have been published that apply to 5 common procedures. We expanded on this work to study the management decision of selecting which cases involving transfusion of at least 1 RBC unit to audit (review) among all surgical procedures, including those previously studied. This retrospective, observational study included 400,000 cases among 1891 different procedures over an 11-year period. There were 12,616 cases with RBC transfusion. We studied the proportions of cases that would be audited based on criteria of nadir hemoglobin (Hb) greater than the hospital's selected transfusion threshold, or absent Hb or missing estimated blood loss (EBL) among procedures with median EBL 50%) that would be audited and most cases (>50%) with transfusion were among procedures with median EBL 9 g/dL, the procedure's median EBL was 9 g/dL and median EBL for the procedure ≥500 mL. An automated process to select cases for audit of intraoperative transfusion of RBC needs to consider the median EBL of the procedure, whether the nadir Hb is below the hospital's Hb transfusion threshold for surgical cases, and the absence of either a Hb or entry of the EBL for the case. This conclusion applies to all surgical cases and procedures.
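The conclusion's selection rule can be sketched as a predicate. The exact thresholds are garbled in the abstract (lost comparison signs), so the 500 mL cutoff and the structure of the rule below are assumptions rather than the paper's validated criteria:

```python
def select_for_audit(nadir_hb, hb_threshold, ebl_recorded, procedure_median_ebl,
                     low_ebl_cutoff=500.0):
    """Hypothetical audit rule distilled from the abstract's conclusion:
    flag a transfused case when the nadir Hb is missing or above the
    hospital's transfusion threshold, when no EBL was entered, or when
    the procedure's median EBL is low. The cutoff value is an assumption."""
    if nadir_hb is None or not ebl_recorded:
        return True           # missing Hb or missing EBL entry
    if nadir_hb > hb_threshold:
        return True           # transfused despite Hb above threshold
    return procedure_median_ebl < low_ebl_cutoff
```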

  1. Shining a light on LAMP assays--a comparison of LAMP visualization methods including the novel use of berberine.

    Science.gov (United States)

    Fischbach, Jens; Xander, Nina Carolin; Frohme, Marcus; Glökler, Jörn Felix

    2015-04-01

    The need for simple and effective assays for detecting nucleic acids by isothermal amplification reactions has led to a great variety of end point and real-time monitoring methods. Here we tested direct and indirect methods to visualize the amplification of potato spindle tuber viroid (PSTVd) by loop-mediated isothermal amplification (LAMP) and compared features important for one-pot in-field applications. We compared the performance of magnesium pyrophosphate, hydroxynaphthol blue (HNB), calcein, SYBR Green I, EvaGreen, and berberine. All assays could be used to distinguish between positive and negative samples in visible or UV light. Precipitation of magnesium-pyrophosphate resulted in a turbid reaction solution. The use of HNB resulted in a color change from violet to blue, whereas calcein induced a change from orange to yellow-green. We also investigated berberine as a nucleic acid-specific dye that emits a fluorescence signal under UV light after a positive LAMP reaction. It has a comparable sensitivity to SYBR Green I and EvaGreen. Based on our results, an optimal detection method can be chosen easily for isothermal real-time or end point screening applications.

  2. Measure Guideline: Summary of Interior Ducts in New Construction, Including an Efficient, Affordable Method to Install Fur-Down Interior Ducts

    Energy Technology Data Exchange (ETDEWEB)

    Beal, D. [BA-PIRC, Cocoa, FL (United States); McIlvaine, J. [BA-PIRC, Cocoa, FL (United States); Fonorow, K. [BA-PIRC, Cocoa, FL (United States); Martin, E. [BA-PIRC, Cocoa, FL (United States)

    2011-11-01

    This document illustrates guidelines for the efficient installation of interior duct systems in new housing, including the fur-up chase method, the fur-down chase method, and interior ducts positioned in sealed attics or sealed crawl spaces.

  3. A comparative study of methods for automatic detection of rapid eye movement abnormal muscular activity in narcolepsy

    DEFF Research Database (Denmark)

    Olesen, Alexander Neergaard; Cesari, Matteo; Christensen, Julie Anja Engelhard

    2018-01-01

atonia index (RAI), supra-threshold REM EMG activity metric (STREAM), and Frandsen method (FR) were calculated from polysomnography recordings of 20 healthy controls, 18 clinic controls (subjects suspected of narcolepsy but finally diagnosed without any sleep abnormality), 16 narcolepsy type 1 without...... REM sleep behavior disorder (RBD), 9 narcolepsy type 1 with RBD, and 18 narcolepsy type 2. The diagnostic value of the metrics in differentiating between groups was quantified by the area under the receiver operating characteristic curve (AUC). Correlations among the metrics and cerebrospinal fluid hypocretin-1...... in narcolepsy 1 compared to controls. This finding might play a supportive role in diagnosing narcolepsy and in discriminating narcolepsy subtypes. Moreover, the negative correlation between CSF-hcrt-1 level and REM muscular activity supported a role for hypocretin in the control of motor tone during REM sleep....

  4. Automatically Green

    DEFF Research Database (Denmark)

    Sunstein, Cass R.; Reisch, Lucia

    2014-01-01

    reasons include the power of suggestion; inertia and procrastination; and loss aversion. If well-chosen, green defaults are likely to have large effects in reducing the economic and environmental harms associated with various products and activities. Such defaults may or may not be more expensive...

  6. An automatic method for detection and classification of Ionospheric Alfvén Resonances using signal and image processing techniques

    Science.gov (United States)

    Beggan, Ciaran

    2014-05-01

Induction coils permit us to measure very rapid changes of the magnetic field. In June 2012, the British Geological Survey Geomagnetism team installed two high-frequency (100 Hz) induction coil magnetometers at the Eskdalemuir Observatory (55.3° N, 3.2° W, L~3), in the Scottish Borders of the United Kingdom. The Eskdalemuir Observatory is one of the longest running geophysical sites in the UK (beginning operation in 1908) and is located in a rural valley with a quiet magnetic environment. The coils record magnetic field changes over an effective frequency range of about 0.1-40 Hz, and encompass phenomena such as the Schumann resonances, magnetospheric pulsations and Ionospheric Alfvén Resonances (IAR). In this study we focus on the IAR, which are related to the vibration of magnetic field lines passing through the ionosphere, believed to be mainly excited by lower atmospheric electrical discharges. The IAR typically manifest as a series of spectral resonance structures (SRS) within the 1-6 Hz frequency range, usually appearing as fine bands or fringes in spectrogram plots. The SRS tend to occur daily between 18.00-06.00 UT at the Eskdalemuir site, disappearing during the daylight hours. They usually start as a single low frequency before bifurcating into 5-10 separate fringes, increasing in frequency until around midnight. The fringes also widen in frequency before fading around 06.00 UT. Occasionally, the fringes decrease in frequency slightly around 03.00 UT before fading. In order to quantify the daily, seasonal and annual changes of the SRS, we developed a new method to identify the fringes and to quantify their occurrence in frequency (f) and the change in frequency (Δf). The method uses short time series of 100 seconds to produce an FFT spectral plot from which the non-stationary peaks are identified using the residuals from a best-fit sixth-order spline. This is repeated for an entire day of data. The peaks from each time slice are placed into a matrix
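The fringe-identification step, fitting a smooth baseline to each 100-second FFT slice and flagging bins whose residuals stand out, can be sketched. Below, a degree-6 polynomial stands in for the paper's sixth-order spline, and the 2-sigma threshold is an assumption:

```python
import numpy as np

def find_resonance_peaks(freqs, power, order=6, n_sigma=2.0):
    """Sketch of the abstract's idea: fit a smooth baseline (here a
    degree-6 polynomial standing in for the sixth-order spline) to a
    spectrum slice and flag bins whose residual exceeds n_sigma
    residual-SDs as candidate SRS fringes."""
    coeffs = np.polyfit(freqs, power, order)
    baseline = np.polyval(coeffs, freqs)
    resid = power - baseline
    return np.flatnonzero(resid > n_sigma * resid.std())
```

Run over every 100-second slice in a day, the flagged (time, frequency) pairs form the matrix of fringe occurrences described at the end of the abstract.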

  7. Development of new quantitative mass spectrometry and semi-automatic isofocusing methods for the determination of Apolipoprotein E typing.

    Science.gov (United States)

    Hirtz, Christophe; Vialaret, Jerome; Nouadje, Georges; Schraen, Susanna; Benlian, Pascale; Mary, Sandrine; Philibert, Pascal; Tiers, Laurent; Bros, Pauline; Delaby, Constance; Gabelle, Audrey; Lehmann, Sylvain

    2016-02-15

Apolipoprotein E (Apo E) is a 36 kDa glycoprotein involved in lipid transport. It exists in 3 major isoforms: E2, E3 and E4. ApoE status is known to be a major risk factor for late-onset Alzheimer's and cardiovascular diseases. Genotyping is commonly used to obtain ApoE status but can show technical issues with ambiguous determinations. Phenotyping can be an alternative, not requiring genetic material. We evaluated the ability to accurately type ApoE isoforms by 2 phenotyping tests in comparison with genotyping. Two phenotyping techniques were used: (1) LC-MS/MS detection of 4 ApoE-specific peptides (6490 Agilent triple quadrupole): after its denaturation, serum was either reduced and alkylated, or only diluted, and then trypsin digested. Before analysis, desalting, evaporation and resuspension were performed. (2) Isoelectric focusing and immunoprecipitation: serum samples were neuraminidase digested, delipidated and electrophoresed on Hydragel ApoE (Sebia agarose gel) using the Hydrasys 2 Scan instrument (Sebia, Lisses, France). ApoE isoform bands were directly immunofixed in the gel using a polyclonal anti-human ApoE antibody. Then, incubation of the gel with HRP secondary antibody followed by TTF1/TTF2 substrate allowed the visualization of ApoE bands. The results of the two techniques were compared to genotyping. Sera from 35 patients previously genotyped were analyzed with the 2 phenotyping techniques. 100% concordance between both phenotyping assays was obtained for the tested phenotypes (E2/E2, E2/E3, E2/E4, E3/E3, E3/E4, E4/E4). When compared to genotyping, 3 samples were discordant. After reanalyzing them by both phenotyping tests and DNA sequencing, 2 of the 3 discrepancies were confirmed. These can be explained by variants or rare ApoE alleles or by unidentified technical issues. 102 additional samples were then tested on LC-MS/MS only and compared to genotyping. The data showed 100% concordance. Our 2 phenotyping methods represent a valuable alternative to

  8. Automatic evidence retrieval for systematic reviews.

    Science.gov (United States)

    Choong, Miew Keen; Galgani, Filippo; Dunn, Adam G; Tsafnat, Guy

    2014-10-01

    Snowballing involves recursively pursuing relevant references cited in the retrieved literature and adding them to the search results. Snowballing is an alternative approach to discover additional evidence that was not retrieved through conventional search. Snowballing's effectiveness makes it best practice in systematic reviews despite being time-consuming and tedious. Our goal was to evaluate an automatic method for citation snowballing's capacity to identify and retrieve the full text and/or abstracts of cited articles. Using 20 review articles that contained 949 citations to journal or conference articles, we manually searched Microsoft Academic Search (MAS) and identified 78.0% (740/949) of the cited articles that were present in the database. We compared the performance of the automatic citation snowballing method against the results of this manual search, measuring precision, recall, and F1 score. The automatic method was able to correctly identify 633 (as proportion of included citations: recall=66.7%, F1 score=79.3%; as proportion of citations in MAS: recall=85.5%, F1 score=91.2%) of citations with high precision (97.7%), and retrieved the full text or abstract for 490 (recall=82.9%, precision=92.1%, F1 score=87.3%) of the 633 correctly retrieved citations. The proposed method for automatic citation snowballing is accurate and is capable of obtaining the full texts or abstracts for a substantial proportion of the scholarly citations in review articles. By automating the process of citation snowballing, it may be possible to reduce the time and effort of common evidence surveillance tasks such as keeping trial registries up to date and conducting systematic reviews.
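The F1 scores quoted in the abstract follow directly from the stated precision and recall values (F1 is their harmonic mean). A quick check:

```python
def f1_score(precision, recall):
    """F1 score: the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)
```

With precision 97.7% and recall 633/949 (proportion of included citations), this reproduces the reported F1 of 79.3%; with recall 633/740 (proportion of citations present in MAS) it reproduces 91.2%.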

  9. A Quadrature Method of Moments for Polydisperse Flow in Bubble Columns Including Poly-Celerity, Breakup and Coalescence

    Directory of Open Access Journals (Sweden)

    Thomas Acher

    2014-12-01

A simulation model for 3D polydisperse bubble column flows in an Eulerian/Eulerian framework is presented. A computationally efficient and numerically stable algorithm is created by making use of quadrature method of moments (QMOM) functionalities, in conjunction with appropriate breakup and coalescence models. To account for size-dependent bubble motion, the constituent moments of the bubble size distribution function are transported with individual velocities. Validation of the simulation results against the experimental and numerical data of Hansen [1] shows the capability of the present model to accurately predict complex gas-liquid flows.
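The core QMOM step, inverting a finite set of moments of the bubble size distribution into quadrature nodes and weights, can be sketched with the Wheeler algorithm (a standard choice in the QMOM literature, though not necessarily the one used in this paper):

```python
import numpy as np

def wheeler_quadrature(moments):
    """Invert the first 2n moments of a distribution into an n-node
    Gaussian quadrature (nodes/abscissas and weights) via the Wheeler
    algorithm: build the recurrence coefficients, then solve the
    Jacobi-matrix eigenproblem."""
    m = np.asarray(moments, dtype=float)
    n = len(m) // 2
    sigma = np.zeros((n + 1, 2 * n))   # row 0: sigma_{-1}, row 1: sigma_0
    sigma[1] = m
    a = np.zeros(n)
    b = np.zeros(n)                    # b[0] stays 0 by convention
    a[0] = m[1] / m[0]
    for k in range(1, n):
        for l in range(k, 2 * n - k):
            sigma[k + 1, l] = (sigma[k, l + 1] - a[k - 1] * sigma[k, l]
                               - b[k - 1] * sigma[k - 1, l])
        a[k] = sigma[k + 1, k + 1] / sigma[k + 1, k] - sigma[k, k] / sigma[k, k - 1]
        b[k] = sigma[k + 1, k] / sigma[k, k - 1]
    jac = np.diag(a) + np.diag(np.sqrt(b[1:]), 1) + np.diag(np.sqrt(b[1:]), -1)
    nodes, vecs = np.linalg.eigh(jac)
    weights = m[0] * vecs[0] ** 2      # weights from first eigenvector components
    return nodes, weights
```

For example, the moments [1, 2.2, 5.8, 16.6] of a two-point distribution are inverted back to nodes {1, 3} with weights {0.4, 0.6}; transporting such moments with node-specific velocities is the idea behind the "poly-celerity" extension in the title.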

  10. Principles of cobalt-60 teletherapy including an introduction to the compendium. Guidelines for the documentation of radiation treatment methods

    International Nuclear Information System (INIS)

    Cohen, M.

    1984-01-01

A great deal of thought has been given in recent years to the documentation of individual patients and their diseases, especially since the computerization of registry systems facilitates the storage and retrieval of large amounts of data, but the documentation of radiation treatment methods has received surprisingly little attention. The guidelines which follow are intended for use both internally (within radiotherapy centres) and externally, when a treatment method is reported in the literature or transferred from one centre to another. The amount of detail reported externally will, of course, depend on the circumstances: for example, a published paper will usually mention only the most important of the radiation and physical parameters, but it is important for the department of origin to list all parameters in a separate document, available on request. These guidelines apply specifically to the documentation of treatment by external radiation beams, although many of the suggestions would also apply to treatment by small sealed sources (brachytherapy) and by unsealed radionuclides. Treatment techniques which involve a combination of external and internal sources (e.g. Ca. cervix uteri treated by intracavitary sources plus external beam therapy) require particularly careful documentation to indicate the relationship between the dose distribution (in both space and time) achieved by the two modalities

  11. Method to Determine Appropriate Source Models of Large Earthquakes Including Tsunami Earthquakes for Tsunami Early Warning in Central America

    Science.gov (United States)

    Tanioka, Yuichiro; Miranda, Greyving Jose Arguello; Gusman, Aditya Riadi; Fujii, Yushiro

    2017-08-01

Large earthquakes, such as the Mw 7.7 1992 Nicaragua earthquake, have occurred off the Pacific coasts of El Salvador and Nicaragua in Central America and have generated destructive tsunamis along these coasts. It is necessary to determine appropriate fault models before large tsunamis hit the coast. In this study, fault parameters were first estimated from the W-phase inversion, and then an appropriate fault model was determined from the fault parameters and scaling relationships with a depth-dependent rigidity. The method was tested on four large earthquakes that occurred off El Salvador and Nicaragua in Central America: the 1992 Nicaragua tsunami earthquake (Mw 7.7), the 2001 El Salvador earthquake (Mw 7.7), the 2004 El Astillero earthquake (Mw 7.0), and the 2012 El Salvador-Nicaragua earthquake (Mw 7.3). Tsunami numerical simulations were carried out from the determined fault models. We found that the observed tsunami heights, run-up heights, and inundation areas were reasonably well explained by the computed ones. Therefore, our method should work for tsunami early warning purposes, estimating a fault model that reproduces tsunami heights near the coasts of El Salvador and Nicaragua due to large earthquakes in the subduction zone.
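The scaling step, going from an estimated magnitude to fault slip via the moment definition M0 = μLWD, can be illustrated. The IASPEI Mw-M0 relation below is standard; the rigidity is left as a plain parameter, since the paper's depth-dependent rigidity model is not reproduced here:

```python
def seismic_moment(mw):
    """Seismic moment M0 [N*m] from moment magnitude (IASPEI convention):
    M0 = 10^(1.5*Mw + 9.1)."""
    return 10.0 ** (1.5 * mw + 9.1)

def average_slip(mw, length_m, width_m, rigidity_pa):
    """Average fault slip D from M0 = mu * L * W * D. The paper uses a
    depth-dependent rigidity; here rigidity is simply a parameter, so a
    shallow (low-rigidity) tsunami earthquake yields a larger slip for
    the same Mw and fault area."""
    return seismic_moment(mw) / (rigidity_pa * length_m * width_m)
```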

  12. Innovative Methods for Estimating Densities and Detection Probabilities of Secretive Reptiles Including Invasive Constrictors and Rare Upland Snakes

    Science.gov (United States)

    2018-01-30

home range maintenance or attraction to or avoidance of landscape features, including roads (Morales et al. 2004, McClintock et al. 2012). For example...radiotelemetry and extensive road survey data are used to generate the first density estimates available for the species. The results show that southern...secretive snakes that combines behavioral observations of snake road crossing speed, systematic road survey data, and simulations of spatial

  13. SU-F-R-05: Multidimensional Imaging Radiomics-Geodesics: A Novel Manifold Learning Based Automatic Feature Extraction Method for Diagnostic Prediction in Multiparametric Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Parekh, V [The Johns Hopkins University, Computer Science. Baltimore, MD (United States); Jacobs, MA [The Johns Hopkins University School of Medicine, Dept of Radiology and Oncology. Baltimore, MD (United States)

    2016-06-15

Purpose: Multiparametric radiological imaging is used for diagnosis in patients. Extracting useful features specific to a patient’s pathology would potentially be a crucial step towards personalized medicine and assessing treatment options. In order to automatically extract features directly from multiparametric radiological imaging datasets, we developed an advanced unsupervised machine learning algorithm called the multidimensional imaging radiomics-geodesics (MIRaGe). Methods: Seventy-six breast tumor patients who underwent 3T breast MRI were used for this study. We tested the MIRaGe algorithm by extracting features for classification of breast tumors into benign or malignant. The MRI parameters used were T1-weighted, T2-weighted, dynamic contrast enhanced MR imaging (DCE-MRI) and diffusion weighted imaging (DWI). The MIRaGe algorithm extracted the radiomics-geodesics features (RGFs) from the multiparametric MRI datasets. This enables our method to learn the intrinsic manifold representations corresponding to the patients. To determine the informative RGFs, a modified Isomap algorithm (t-Isomap) was created for a radiomics-geodesics feature space (tRGFS) to avoid overfitting. Final classification was performed using SVM. The predictive power of the RGFs was tested and validated using k-fold cross validation. Results: The RGFs extracted by the MIRaGe algorithm successfully classified malignant lesions from benign lesions with a sensitivity of 93% and a specificity of 91%. The top 50 RGFs identified as the most predictive by the t-Isomap procedure were consistent with the radiological parameters known to be associated with breast cancer diagnosis and were categorized as kinetic curve characterizing RGFs, wash-in rate characterizing RGFs, wash-out rate characterizing RGFs and morphology characterizing RGFs. Conclusion: In this paper, we developed a novel feature extraction algorithm for multiparametric radiological imaging. The results demonstrated the power of the MIRa
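The "geodesics" in MIRaGe refers to manifold learning of the Isomap family: estimate geodesic (along-manifold) distances from a neighbourhood graph, then embed. A toy, self-contained sketch of the geodesic-distance step (not the authors' implementation):

```python
import numpy as np

def geodesic_distances(points, k=2):
    """Isomap-style geodesic distance estimate: build a k-nearest-neighbour
    graph with Euclidean edge weights, then run Floyd-Warshall shortest
    paths. A toy stand-in for the manifold-learning step behind MIRaGe."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    eucl = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    graph = np.full((n, n), np.inf)
    np.fill_diagonal(graph, 0.0)
    for i in range(n):
        for j in np.argsort(eucl[i])[1:k + 1]:   # k nearest neighbours (skip self)
            graph[i, j] = graph[j, i] = eucl[i, j]
    for m in range(n):                            # Floyd-Warshall relaxation
        graph = np.minimum(graph, graph[:, m:m + 1] + graph[m:m + 1, :])
    return graph
```

On points sampled along a curve, the graph distance between the endpoints approximates the arc length rather than the straight-line distance, which is what makes a subsequent embedding respect the data manifold.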

  14. SU-F-R-05: Multidimensional Imaging Radiomics-Geodesics: A Novel Manifold Learning Based Automatic Feature Extraction Method for Diagnostic Prediction in Multiparametric Imaging

    International Nuclear Information System (INIS)

    Parekh, V; Jacobs, MA

    2016-01-01

Purpose: Multiparametric radiological imaging is used for diagnosis in patients. Extracting useful features specific to a patient’s pathology would potentially be a crucial step towards personalized medicine and assessing treatment options. In order to automatically extract features directly from multiparametric radiological imaging datasets, we developed an advanced unsupervised machine learning algorithm called the multidimensional imaging radiomics-geodesics (MIRaGe). Methods: Seventy-six breast tumor patients who underwent 3T breast MRI were used for this study. We tested the MIRaGe algorithm by extracting features for classification of breast tumors into benign or malignant. The MRI parameters used were T1-weighted, T2-weighted, dynamic contrast enhanced MR imaging (DCE-MRI) and diffusion weighted imaging (DWI). The MIRaGe algorithm extracted the radiomics-geodesics features (RGFs) from the multiparametric MRI datasets. This enables our method to learn the intrinsic manifold representations corresponding to the patients. To determine the informative RGFs, a modified Isomap algorithm (t-Isomap) was created for a radiomics-geodesics feature space (tRGFS) to avoid overfitting. Final classification was performed using SVM. The predictive power of the RGFs was tested and validated using k-fold cross validation. Results: The RGFs extracted by the MIRaGe algorithm successfully classified malignant lesions from benign lesions with a sensitivity of 93% and a specificity of 91%. The top 50 RGFs identified as the most predictive by the t-Isomap procedure were consistent with the radiological parameters known to be associated with breast cancer diagnosis and were categorized as kinetic curve characterizing RGFs, wash-in rate characterizing RGFs, wash-out rate characterizing RGFs and morphology characterizing RGFs. Conclusion: In this paper, we developed a novel feature extraction algorithm for multiparametric radiological imaging. The results demonstrated the power of the MIRa

  15. Validation of simple quantification methods for 18F FP CIT PET Using Automatic Delineation of volumes of interest based on statistical probabilistic anatomical mapping and isocontour margin setting

    International Nuclear Information System (INIS)

    Kim, Yong Il; Im, Hyung Jun; Paeng, Jin Chul; Lee, Jae Sung; Eo, Jae Seon; Kim, Dong Hyun; Kim, Euishin E.; Kang, Keon Wook; Chung, June Key; Lee Dong Soo

    2012-01-01

18F FP-CIT positron emission tomography (PET) is an effective imaging method for dopamine transporters. In usual clinical practice, 18F FP-CIT PET is analyzed visually or quantified using manual delineation of a volume of interest (VOI) for the striatum. In this study, we suggested and validated two simple quantitative methods based on automatic VOI delineation using statistical probabilistic anatomical mapping (SPAM) and isocontour margin setting. Seventy-five 18F FP-CIT images acquired in routine clinical practice were used for this study. A study-specific image template was made and the subject images were normalized to the template. Afterwards, uptakes in the striatal regions and cerebellum were quantified using probabilistic VOIs based on SPAM. A quantitative parameter, Q_SPAM, was calculated to simulate binding potential. Additionally, the functional volume of each striatal region and its uptake were measured in automatically delineated VOIs using isocontour margin setting. An uptake-volume product (Q_UVP) was calculated for each striatal region. Q_SPAM and Q_UVP were calculated for each visual grading, and the influence of cerebral atrophy on the measurements was tested. Image analyses were successful in all cases. Both Q_SPAM and Q_UVP were significantly different according to visual grading (P < 0.001). The agreements of Q_UVP and Q_SPAM with visual grading were slight to fair for the caudate nucleus (K = 0.421 and 0.291, respectively) and good to perfect for the putamen (K = 0.663 and 0.607, respectively). Also, Q_SPAM and Q_UVP had a significant correlation with each other (P < 0.001). Cerebral atrophy made a significant difference in the Q_SPAM and Q_UVP of the caudate nuclei regions with decreased 18F FP-CIT uptake. Simple quantitative measurements of Q_SPAM and Q_UVP showed acceptable agreement with visual grading. Although Q_SPAM in some groups may be influenced by cerebral atrophy, these simple methods are expected to be effective in the quantitative analysis of 18F FP-CIT PET.

  16. Staphylococcus aureus strains associated with food poisoning outbreaks in France: comparison of different molecular typing methods, including MLVA

    Science.gov (United States)

    Roussel, Sophie; Felix, Benjamin; Vingadassalon, Noémie; Grout, Joël; Hennekinne, Jacques-Antoine; Guillier, Laurent; Brisabois, Anne; Auvray, Fréderic

    2015-01-01

    Staphylococcal food poisoning outbreaks (SFPOs) are frequently reported in France. However, most of them remain unconfirmed, highlighting a need for better characterization of the isolated strains. Here we analyzed the genetic diversity of 112 Staphylococcus aureus strains isolated from 76 distinct SFPOs that occurred in France over the last 30 years. We used a recently developed multiple-locus variable-number tandem-repeat analysis (MLVA) protocol and compared this method with pulsed-field gel electrophoresis (PFGE), spa-typing, and carriage of genes (se genes) coding for 11 staphylococcal enterotoxins (i.e., SEA, SEB, SEC, SED, SEE, SEG, SEH, SEI, SEJ, SEP, SER). Strains known to have an epidemiological association with one another had identical MLVA types, PFGE profiles, spa-types, or se gene carriage. MLVA, PFGE, and spa-typing divided 103 epidemiologically unrelated strains into 84, 80, and 50 types respectively, demonstrating the high genetic diversity of S. aureus strains involved in SFPOs. Each MLVA type shared by more than one strain corresponded to a single spa-type, except for one MLVA type represented by four strains that showed two different, but closely related, spa-types. The 87 enterotoxigenic strains were distributed across 68 distinct MLVA types, all of which correlated with se gene carriage except for four MLVA types. The most frequent se gene detected was sea, followed by seg and sei, and the most frequently associated se genes were sea-seh and sea-sed-sej-ser. The discriminatory ability of MLVA was similar to that of PFGE and higher than that of spa-typing. This MLVA protocol was found to be compatible with high-throughput analysis, and was also faster and less labor-intensive than PFGE. MLVA holds promise as a suitable method for investigating SFPOs and tracking the source of contamination in food processing facilities in real time. PMID:26441849
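The "discriminatory ability" compared above is commonly quantified with the Hunter-Gaston index, a variant of Simpson's diversity index over the type counts; the record does not state which formula was used, so the sketch below is a generic illustration with invented type-count distributions (84 vs. 50 types over the same 103 strains).

```python
# Hunter-Gaston discriminatory index: probability that two randomly drawn
# strains fall into different types. Type counts below are invented for
# illustration and are NOT the paper's actual distributions.
def hunter_gaston(counts):
    n = sum(counts)
    return 1.0 - sum(c * (c - 1) for c in counts) / (n * (n - 1))

mlva = [1] * 71 + [2] * 10 + [3, 4, 5]                     # 84 types, 103 strains
spa = [1] * 25 + [2] * 15 + [3] * 5 + [4] * 3 + [5, 16]    # 50 types, 103 strains
print(round(hunter_gaston(mlva), 3), round(hunter_gaston(spa), 3))
```

More, smaller types give a higher index, matching the observation that MLVA (84 types) discriminates better than spa-typing (50 types).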

  17. A novel automatic flow method with direct-injection photometric detector for determination of dissolved reactive phosphorus in wastewater and freshwater samples.

    Science.gov (United States)

    Koronkiewicz, Stanislawa; Trifescu, Mihaela; Smoczynski, Lech; Ratnaweera, Harsha; Kalinowski, Slawomir

    2018-02-12

    A novel automatic flow system, a direct-injection detector (DID) integrated with a multi-pumping flow system (MPFS), dedicated to the photometric determination of orthophosphates in wastewater and freshwater samples, is described for the first time. All reagents and the sample were injected simultaneously, in counter-current, into the reaction-detection chamber by a system of solenoid micro-pumps specially selected for this purpose. The micro-pumps provided good precision and accuracy of the injected volumes. For the determination of orthophosphates, the molybdenum blue method was employed. The developed method can be used to detect orthophosphate in the range 0.1-12 mg L^-1, with a repeatability (RSD) of about 2.2% at 4 mg L^-1 and a very high injection throughput of 120 injections h^-1. It was possible to achieve a very small consumption of reagents (10 μL of ammonium molybdate and 10 μL of ascorbic acid) and sample (20 μL). The volume of generated waste was only 440 μL per analysis. The method has been successfully applied, with good accuracy, to the determination of orthophosphates in complex matrix samples: treated wastewater, lake water, and a reference sample of groundwater. The developed system is compact, small in both size and weight, and requires a 12 V supply voltage, qualities that are desirable for truly portable equipment used in routine analysis. The simplicity of the system should result in greater long-term reliability compared to other flow methods previously described.

  18. Automatic Cobb Angle Determination From Radiographic Images

    NARCIS (Netherlands)

    Sardjono, Tri Arief; Wilkinson, Michael H. F.; Veldhuizen, Albert G.; van Ooijen, Peter M. A.; Purnama, Ketut E.; Verkerke, Gijsbertus J.

    2013-01-01

    Study Design. Automatic measurement of Cobb angle in patients with scoliosis. Objective. To test the accuracy of an automatic Cobb angle determination method from frontal radiographical images. Summary of Background Data. Thirty-six frontal radiographical images of patients with scoliosis. Methods.

  19. Use of spectrophotometric readout method for free radical dosimetry in radiation processing including low energy electrons and bremsstrahlung

    International Nuclear Information System (INIS)

    Gupta, B.L.

    2000-01-01

    Our laboratory maintains standards for high doses in India. The glutamine powder dosimeter (spectrophotometric readout) is used for this purpose. Present studies show that 20 mg of unirradiated/irradiated glutamine dissolved in freshly prepared 10 ml of aerated aqueous acidic FX solution containing 2 × 10^-3 mol dm^-3 ferrous ammonium sulphate and 10^-4 mol dm^-3 xylenol orange in 0.033 mol dm^-3 sulphuric acid is suitable for dosimetry in the dose range of 0.1-100 kGy. Normally no corrections are required for the post-irradiation fading of the irradiated glutamine. The response of the glutamine dosimeter is independent of irradiation temperature in the range of about 23-30 deg. C; at other temperatures, a correction is necessary. The dose intercomparison results for photon, electron and bremsstrahlung radiations show that glutamine can be used as a reference standard dosimeter. The use of flat polyethylene bags containing glutamine powder has proved very successful for electron dosimetry over a wide range of energies. Several other amino acids like alanine, valine and threonine can also be used to cover a wide range of doses using the spectrophotometric readout method. (author)

  20. Analysis of alternative transportation methods for radioactive materials shipments including the use of special trains for spent fuel and wastes

    International Nuclear Information System (INIS)

    Smith, D.R.; Luna, R.E.; Taylor, J.M.

    1978-01-01

    Two studies were completed which evaluate the environmental impact of radioactive material transport. The first was a generic study which evaluated all radioactive materials and all transportation modes; the second addressed spent fuel and fuel-cycle wastes shipped by truck, rail and barge. A portion of each of those studies dealing with the change in impact resulting from alternative shipping methods is presented in this paper. Alternatives evaluated in each study were mode shifts, operational constraints, and, in the generic case, changes in material properties and package capabilities. Data for the analyses were obtained from a shipper survey and from projections of shipments that would occur in an equilibrium fuel cycle supporting one hundred 1000-MW(e) reactors. Population exposures were deduced from point-source radiation formulae using separation distances derived for scenarios appropriate to each shipping mode and to each exposed population group. Fourteen alternatives were investigated for the generic impact case. All showed relatively minor changes in the overall radiological impact. Since the impact of radioactive material transport is estimated at fewer than 3 latent cancer fatalities (LCF) per shipment year (compared to some 300,000 yearly cancer fatalities, or 5000 LCFs calculated for background radiation using the same radiological effects model), a 15% decrease caused by shifting from passenger air to cargo air is a relatively small effect. Eleven alternatives were considered for the fuel cycle/special train study, but only one produced a reduction in total special train baseline LCFs (0.047) that was larger than 5%
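The point-source exposure deduction mentioned above rests on the inverse-square fall-off of dose rate with distance. The following is a toy sketch of that relation only; the source term, units, and numbers are invented and are not the studies' actual formulae.

```python
# Toy inverse-square exposure estimate: dose rate from a point source
# scales as 1/r^2, so exposure over a time interval is
# (source term) * time / distance^2. Units and values are illustrative.
def exposure(source_term, distance_m, hours):
    # source_term in e.g. mrem * m^2 / h; result then in mrem
    return source_term * hours / distance_m ** 2

print(exposure(source_term=1.0, distance_m=10.0, hours=2.0))
```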

  1. State Token Petri Net modeling method for formal verification of computerized procedure including operator's interruptions of procedure execution flow

    International Nuclear Information System (INIS)

    Kim, Yun Goo; Seong, Poong Hyun

    2012-01-01

    The Computerized Procedure System (CPS) is one of the primary operating support systems in the digital Main Control Room. The CPS displays the procedure on the computer screen in the form of a flow chart, and displays plant operating information along with procedure instructions. It also supports operator decision making by providing a system decision. A procedure flow should be correct and reliable, as an error would lead to operator misjudgement and inadequate control. In this paper we present a modeling method for the CPS that enables formal verification based on Petri nets. The proposed State Token Petri Nets (STPN) also support modeling of a procedure flow that has various interruptions by the operator, according to the plant condition. STPN modeling is compared with Coloured Petri nets when applied to an Emergency Operating Computerized Procedure. A program for converting Computerized Procedures (CP) to STPN has also been developed. The formal verification and validation methods of CP with STPN increase the safety of a nuclear power plant and provide the digital quality assurance means that are needed as the role and function of the CPS increase.
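As a rough illustration of the token-firing semantics underlying Petri-net models of procedure flow (a minimal generic sketch, not the paper's STPN formalism), the example below models two sequential procedure steps with an operator interruption and resume:

```python
# Minimal Petri net: places hold tokens; a transition fires when all of
# its input places are marked, consuming and producing tokens.
# Place/transition names are illustrative only.
class PetriNet:
    def __init__(self, places):
        self.marking = dict(places)          # place -> token count
        self.transitions = {}                # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking[p] > 0 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise RuntimeError(f"transition {name} not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] += 1

# Two sequential steps with an operator interruption branch.
net = PetriNet({"step1": 1, "step2": 0, "interrupted": 0, "done": 0})
net.add_transition("complete_step1", ["step1"], ["step2"])
net.add_transition("interrupt", ["step2"], ["interrupted"])
net.add_transition("resume", ["interrupted"], ["step2"])
net.add_transition("complete_step2", ["step2"], ["done"])

net.fire("complete_step1")
net.fire("interrupt")       # operator interrupts the execution flow
net.fire("resume")
net.fire("complete_step2")
print(net.marking["done"])  # one token reaches the final place
```

Formal verification then amounts to exploring the reachable markings and checking, for example, that every interruption path can still reach the final place.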

  2. Age correction in monitoring audiometry: method to update OSHA age-correction tables to include older workers.

    Science.gov (United States)

    Dobie, Robert A; Wojcik, Nancy C

    2015-07-13

    The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999-2006) show that hearing thresholds in the US population have improved. Because hearing thresholds have improved, and because older people are increasingly represented in noisy occupations, the OSHA tables no longer represent the current US workforce. This paper presents 2 options for updating the age-correction tables and extending values to age 75 years using recent population-based hearing survey data from the US National Health and Nutrition Examination Survey (NHANES). Both options provide scientifically derived age-correction values that can be easily adopted by OSHA to expand their regulatory guidance to include older workers. Regression analysis was used to derive new age-correction values using audiometric data from the 1999-2006 US NHANES. Using the NHANES median better-ear thresholds fit to simple polynomial equations, new age-correction values were generated for both men and women for ages 20-75 years. The new age-correction values are presented as 2 options. The preferred option is to replace the current OSHA tables with the values derived from the NHANES median better-ear thresholds for ages 20-75 years. The alternative option is to retain the current OSHA age-correction values up to age 60 years and use the NHANES-based values for ages 61-75 years. Recent NHANES data offer a simple solution to the need for updated, population-based, age-correction tables for OSHA. The options presented here provide scientifically valid and relevant age-correction values which can be easily adopted by OSHA to expand their regulatory guidance to include older workers.
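The core computation described above (fitting median thresholds to a simple polynomial in age, then taking differences between ages as the correction) can be sketched as follows. The threshold values below are invented for illustration; they are not the NHANES data.

```python
import numpy as np

# Fit median hearing thresholds (dB HL) at one audiometric frequency to a
# simple polynomial in age, in the spirit of the NHANES-based approach.
# The threshold values are made up for this sketch.
ages = np.array([20, 30, 40, 50, 60, 70, 75])
thresholds = np.array([3.0, 4.0, 6.0, 10.0, 16.0, 25.0, 31.0])

coeffs = np.polyfit(ages, thresholds, deg=2)   # quadratic in age
model = np.poly1d(coeffs)

# An age-correction between two test ages is the difference of the
# fitted thresholds at those ages.
correction = model(65) - model(25)
print(round(float(correction), 1))
```

A table of corrections would then be generated by evaluating the fitted polynomial over ages 20-75 for each frequency and sex.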

  3. Optimal Control as a method for Diesel engine efficiency assessment including pressure and NO_x constraints

    International Nuclear Information System (INIS)

    Guardiola, Carlos; Climent, Héctor; Pla, Benjamín; Reig, Alberto

    2017-01-01

    Highlights: • Optimal Control is applied to heat release shaping in internal combustion engines. • Optimal Control allows the engine performance to be assessed against a realistic reference. • The proposed method gives a target heat release law from which control strategies can be defined. - Abstract: The present paper studies the optimal heat release law in a Diesel engine that maximises the indicated efficiency subject to different constraints, namely: maximum cylinder pressure, maximum cylinder pressure derivative, and NO_x emission restrictions. With this objective, a simple but representative model of the combustion process has been implemented. The model consists of a 0D energy balance model aimed at providing the pressure and temperature evolutions in the high-pressure loop of the engine thermodynamic cycle from the gas conditions at intake valve closing and the heat release law. The gas pressure and temperature evolutions allow the engine efficiency and NO_x emissions to be computed. The comparison between model and experimental results shows that, despite its simplicity, the model is able to reproduce the engine efficiency and NO_x emissions. After the model identification and validation, the optimal control problem is posed and solved by means of Dynamic Programming (DP). Also, if only pressure constraints are considered, the paper proposes a solution that reduces the computation cost of the DP strategy by two orders of magnitude for the case analysed. The solution provides a target heat release law for defining injection strategies, but also a more realistic maximum-efficiency boundary than the ideal thermodynamic cycles usually employed to estimate the maximum engine efficiency.
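Dynamic Programming over a discretized control can be sketched generically. The toy below is not the paper's engine model: the state (cumulative heat released), the stage reward, and the cap standing in for a pressure/emission constraint are all invented to show the DP mechanics only.

```python
# Toy DP: choose a heat-release increment at each stage to maximise a
# stage reward, subject to a cap on the cumulative release. The reward
# and cap are invented; only the DP structure is illustrated.
def dp_optimal_schedule(n_stages, increments, cap, reward):
    best = {0: (0.0, [])}                     # state -> (value, schedule)
    for _ in range(n_stages):
        nxt = {}
        for state, (value, sched) in best.items():
            for u in increments:
                s2 = state + u
                if s2 > cap:                  # constraint: skip infeasible moves
                    continue
                v2 = value + reward(state, u)
                if s2 not in nxt or v2 > nxt[s2][0]:
                    nxt[s2] = (v2, sched + [u])
        best = nxt
    return max(best.values())                 # (best value, best schedule)

# Diminishing returns: releasing early is worth more than releasing late.
value, schedule = dp_optimal_schedule(
    n_stages=4, increments=[0, 1, 2], cap=5,
    reward=lambda state, u: u / (1 + state))
print(value, schedule)
```

The speed-up the paper reports for the pressure-only case comes from pruning this state space, not from changing the recursion itself.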

  4. Multiscale approach including microfibril scale to assess elastic constants of cortical bone based on neural network computation and homogenization method.

    Science.gov (United States)

    Barkaoui, Abdelwahed; Chamekh, Abdessalem; Merzouki, Tarek; Hambli, Ridha; Mkaddem, Ali

    2014-03-01

    The complexity and heterogeneity of bone tissue require multiscale modeling to understand its mechanical behavior and its remodeling mechanisms. In this paper, a novel multiscale hierarchical approach including the microfibril scale, based on hybrid neural network (NN) computation and homogenization equations, was developed to link nanoscopic and macroscopic scales and estimate the elastic properties of human cortical bone. The multiscale model is divided into three main phases: (i) in step 0, the elastic constants of collagen-water and mineral-water composites are calculated by averaging the upper and lower Hill bounds; (ii) in step 1, the elastic properties of the collagen microfibril are computed using a trained NN simulation. Finite element calculations are performed at the nanoscopic level to provide a database to train an in-house NN program; and (iii) in steps 2-10, from fibril to continuum cortical bone tissue, homogenization equations are used to perform the computation at the higher scales. The NN outputs (elastic properties of the microfibril) are used as inputs for the homogenization computation to determine the properties of the mineralized collagen fibril. The mechanical and geometrical properties of the bone constituents (mineral, collagen, and cross-links) as well as the porosity were taken into consideration. This paper aims to predict analytically the effective elastic constants of cortical bone by modeling its elastic response at these different scales, ranging from the nanostructural to the mesostructural level. The output of each lower scale was well integrated with the higher levels and serves as input for the next higher scale's modeling. Good agreement was obtained between our predicted results and literature data. Copyright © 2013 John Wiley & Sons, Ltd.
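Step 0 above averages the upper (Voigt) and lower (Reuss) Hill bounds of a two-phase composite. A minimal sketch for a single scalar modulus is shown below; the phase moduli and volume fraction are invented for illustration, not the paper's collagen/mineral data.

```python
# Hill estimate for a two-phase composite: average of the Voigt
# (iso-strain, upper) and Reuss (iso-stress, lower) bounds.
# Moduli (GPa) and volume fraction are illustrative values only.
def hill_average(E1, E2, f1):
    f2 = 1.0 - f1
    voigt = f1 * E1 + f2 * E2               # upper bound
    reuss = 1.0 / (f1 / E1 + f2 / E2)       # lower bound
    return 0.5 * (voigt + reuss)

E = hill_average(E1=1.2, E2=114.0, f1=0.6)  # soft phase vs. stiff phase
print(round(E, 2))
```

The full model applies this kind of averaging tensorially; the scalar version only shows why the estimate always lies between the two bounds.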

  5. Method and apparatus for enhanced sensitivity filmless medical x-ray imaging, including three-dimensional imaging

    Science.gov (United States)

    Parker, Sherwood

    1995-01-01

    A filmless X-ray imaging system includes at least one X-ray source, upper and lower collimators, and a solid-state detector array, and can provide three-dimensional imaging capability. The X-ray source plane is distance z.sub.1 above the upper collimator plane, distance z.sub.2 above the lower collimator plane, and distance z.sub.3 above the plane of the detector array. The object to be X-rayed is located between the upper and lower collimator planes. The upper and lower collimators and the detector array are moved horizontally with scanning velocities v.sub.1, v.sub.2, v.sub.3 proportional to z.sub.1, z.sub.2 and z.sub.3, respectively. The pattern and size of openings in the collimators, and the spacing between detector positions, are proportioned such that similar triangles are always defined relative to the location of the X-ray source. X-rays that pass through openings in the upper collimator will always pass through corresponding and similar openings in the lower collimator, and thence to a corresponding detector in the underlying detector array. Substantially 100% of the X-rays irradiating the object (and neither absorbed nor scattered) pass through the lower collimator openings and are detected, which promotes enhanced sensitivity. A computer system coordinates repositioning of the collimators and detector array, and the X-ray source locations. The computer system can store detector array output, and can associate a known X-ray source location with detector array output data, to provide three-dimensional imaging. Detector output may be viewed instantly, stored digitally, and/or transmitted electronically for image viewing at a remote site.
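The similar-triangles constraint reduces to each plane moving with a velocity proportional to its distance below the source. A trivial sketch with invented numbers:

```python
# Similar-triangles scanning: given the top plane's velocity v1 at depth z1,
# the deeper planes must move at velocities scaled by their depths so that
# the source/opening geometry stays aligned. Values are illustrative.
def scan_velocities(v1, z1, z2, z3):
    k = v1 / z1                 # common proportionality factor
    return v1, k * z2, k * z3

v1, v2, v3 = scan_velocities(v1=10.0, z1=1.0, z2=1.5, z3=2.0)
print(v1, v2, v3)
```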

  6. A new generic method for the semi-automatic extraction of river and road networks in low and mid-resolution satellite images

    Energy Technology Data Exchange (ETDEWEB)

    Grazzini, Jacopo [Los Alamos National Laboratory; Dillard, Scott [PNNL; Soille, Pierre [EC JRC

    2010-10-21

    This paper addresses the problem of semi-automatic extraction of road or hydrographic networks in satellite images. For that purpose, we propose an approach combining concepts arising from mathematical morphology and hydrology. The method exploits both geometrical and topological characteristics of rivers/roads and their tributaries in order to reconstruct the complete networks. It assumes that the images satisfy the following two general assumptions, which are the minimum conditions for a road/river network to be identifiable and are usually verified in low- to mid-resolution satellite images: (i) visual constraint: most pixels composing the network have a similar spectral signature that is distinguishable from most of the surrounding areas; (ii) geometric constraint: a line is a region that is relatively long and narrow, compared with other objects in the image. While this approach fully exploits the local (roads/rivers are modeled as elongated regions with a smooth spectral signature in the image and a maximum width) and global (they are structured like a tree) characteristics of the networks, further directional information about the image structures is incorporated. Namely, an appropriate anisotropic metric is designed by using both the characteristic features of the target network and the eigen-decomposition of the gradient structure tensor of the image. Subsequently, geodesic propagation from a given network seed under this metric is combined with hydrological operators for overland flow simulation to extract the paths which contain most line evidence and identify them with the target network.
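The anisotropic metric mentioned above builds on the per-pixel eigen-decomposition of the gradient structure tensor. The following is a minimal numpy sketch of that ingredient (without the component smoothing a real pipeline would apply); the test image and the coherence measure are illustrative, not the paper's implementation.

```python
import numpy as np

# Gradient structure tensor of a 2D image and its per-pixel
# eigen-decomposition, plus a coherence measure that is high along
# elongated structures. No tensor smoothing is applied in this sketch.
def structure_tensor(img, eps=1e-12):
    gy, gx = np.gradient(img.astype(float))  # np.gradient: axis 0 first
    Jxx, Jxy, Jyy = gx * gx, gx * gy, gy * gy
    # Closed-form eigenvalues of the 2x2 tensor [[Jxx, Jxy], [Jxy, Jyy]].
    tr = Jxx + Jyy
    det = Jxx * Jyy - Jxy * Jxy
    disc = np.sqrt(np.maximum(tr * tr / 4 - det, 0.0))
    lam1, lam2 = tr / 2 + disc, tr / 2 - disc   # lam1 >= lam2 >= 0
    coherence = (lam1 - lam2) / (lam1 + lam2 + eps)
    return lam1, lam2, coherence

# A vertical bright line: gradients are horizontal, so coherence is
# high at the line's edges.
img = np.zeros((9, 9))
img[:, 4] = 1.0
lam1, lam2, coh = structure_tensor(img)
print(float(coh[4, 3]))
```

The anisotropic metric would then be built from these eigenvalues and eigenvectors so that geodesic propagation is cheap along the line direction and expensive across it.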

  7. Body composition estimation from selected slices: equations computed from a new semi-automatic thresholding method developed on whole-body CT scans

    Directory of Open Access Journals (Sweden)

    Alizé Lacoste Jeanson

    2017-05-01

    Background Estimating volumes and masses of total body components is important for the study and treatment monitoring of nutrition and nutrition-related disorders, cancer, joint replacement, energy-expenditure and exercise physiology. While several equations have been offered for estimating total body components from MRI slices, no reliable and tested method exists for CT scans. For the first time, body composition data was derived from 41 high-resolution whole-body CT scans. From these data, we defined equations for estimating volumes and masses of total body AT and LT from corresponding tissue areas measured in selected CT scan slices. Methods We present a new semi-automatic approach to defining the density cutoff between adipose tissue (AT) and lean tissue (LT) in such material. An intra-class correlation coefficient (ICC) was used to validate the method. The equations for estimating the whole-body composition volume and mass from areas measured in selected slices were modeled with ordinary least squares (OLS) linear regressions and support vector machine regression (SVMR). Results and Discussion The best predictive equation for total body AT volume was based on the AT area of a single slice located between the 4th and 5th lumbar vertebrae (L4-L5) and produced lower prediction errors (|PE| = 1.86 liters, %PE = 8.77) than previous equations also based on CT scans. The LT area of the mid-thigh provided the lowest prediction errors (|PE| = 2.52 liters, %PE = 7.08) for estimating whole-body LT volume. We also present equations to predict total body AT and LT masses from a slice located at L4-L5 that resulted in reduced error compared with the previously published equations based on CT scans. The multislice SVMR predictor gave the theoretical upper limit for prediction precision of volumes and cross-validated the results.

  8. Automatic image fusion of real-time ultrasound with computed tomography images: a prospective comparison between two auto-registration methods.

    Science.gov (United States)

    Cha, Dong Ik; Lee, Min Woo; Kim, Ah Yeong; Kang, Tae Wook; Oh, Young-Taek; Jeong, Ja-Yeon; Chang, Jung-Woo; Ryu, Jiwon; Lee, Kyong Joon; Kim, Jaeil; Bang, Won-Chul; Shin, Dong Kuk; Choi, Sung Jin; Koh, Dalkwon; Seo, Bong Koo; Kim, Kyunga

    2017-11-01

    Background A major drawback of conventional manual image fusion is that the process may be complex, especially for less-experienced operators. Recently, two automatic image fusion techniques called Positioning and Sweeping auto-registration have been developed. Purpose To compare the accuracy and required time for image fusion of real-time ultrasonography (US) and computed tomography (CT) images between Positioning and Sweeping auto-registration. Material and Methods Eighteen consecutive patients referred for planning US for radiofrequency ablation or biopsy for focal hepatic lesions were enrolled. Image fusion using both auto-registration methods was performed for each patient. Registration error, time required for image fusion, and number of point locks used were compared using the Wilcoxon signed rank test. Results Image fusion was successful in all patients. Positioning auto-registration was significantly faster than Sweeping auto-registration for both initial (median, 11 s [range, 3-16 s] vs. 32 s [range, 21-38 s]) and complete image fusion. Registration error of Positioning auto-registration was significantly higher for initial image fusion (median, 38.8 mm [range, 16.0-84.6 mm] vs. 18.2 mm [range, 6.7-73.4 mm]; P = 0.029), but not for complete image fusion (median, 4.75 mm [range, 1.7-9.9 mm] vs. 5.8 mm [range, 2.0-13.0 mm]; P = 0.338). Number of point locks required to refine the initially fused images was significantly higher with Positioning auto-registration (median, 2 [range, 2-3] vs. 1 [range, 1-2]; P = 0.012). Conclusion Positioning auto-registration offers faster image fusion between real-time US and pre-procedural CT images than Sweeping auto-registration. The final registration error is similar between the two methods.
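The paired comparison above uses the Wilcoxon signed rank test. The sketch below computes its test statistic from scratch on invented per-patient timing data (it is not the study's data, and ties in the absolute differences are not average-ranked here, a simplification over the standard test):

```python
import numpy as np

# Wilcoxon signed-rank statistic W for paired samples: rank the nonzero
# absolute differences, then take the smaller of the positive-rank and
# negative-rank sums. Timings below are invented for illustration.
def signed_rank_statistic(x, y):
    d = np.asarray(x, float) - np.asarray(y, float)
    d = d[d != 0]                                  # drop zero differences
    ranks = np.argsort(np.argsort(np.abs(d))) + 1  # ranks 1..n (no tie averaging)
    w_plus = ranks[d > 0].sum()
    w_minus = ranks[d < 0].sum()
    return min(w_plus, w_minus)

positioning = [11, 9, 13, 12, 10, 14, 8, 11]   # seconds per patient
sweeping = [32, 28, 35, 30, 33, 29, 31, 34]
print(signed_rank_statistic(positioning, sweeping))
```

When one method is faster for every patient, as here, all ranks land on one side and W = 0, the most extreme possible value.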

  9. Towards unifying inheritance and automatic program specialization

    DEFF Research Database (Denmark)

    Schultz, Ulrik Pagh

    2002-01-01

    with covariant specialization to control the automatic application of program specialization to class members. Lapis integrates object-oriented concepts, block structure, and techniques from automatic program specialization to provide both a language where object-oriented designs can be efficiently implemented......Inheritance allows a class to be specialized and its attributes refined, but implementation specialization can only take place by overriding with manually implemented methods. Automatic program specialization can generate a specialized, efficient implementation. However, specialization of programs

  10. Automatic Testing with Formal Methods

    NARCIS (Netherlands)

    Tretmans, G.J.; Belinfante, Axel

    1999-01-01

    The use of formal system specifications makes it possible to automate the derivation of test cases from specifications. This allows to automate the whole testing process, not only the test execution part of it. This paper presents the state of the art and future perspectives in testing based on

  11. Finding weak points automatically

    International Nuclear Information System (INIS)

    Archinger, P.; Wassenberg, M.

    1999-01-01

    Operators of nuclear power stations have to carry out material tests on selected components at regular intervals. A fully automated test, which achieves clearly higher reproducibility than partly automated variants, would therefore provide a solution. In addition, the fully automated test reduces the radiation dose for the test person. (orig.) [de]

  12. Body composition estimation from selected slices: equations computed from a new semi-automatic thresholding method developed on whole-body CT scans.

    Science.gov (United States)

    Lacoste Jeanson, Alizé; Dupej, Ján; Villa, Chiara; Brůžek, Jaroslav

    2017-01-01

    Estimating volumes and masses of total body components is important for the study and treatment monitoring of nutrition and nutrition-related disorders, cancer, joint replacement, energy-expenditure and exercise physiology. While several equations have been offered for estimating total body components from MRI slices, no reliable and tested method exists for CT scans. For the first time, body composition data was derived from 41 high-resolution whole-body CT scans. From these data, we defined equations for estimating volumes and masses of total body AT and LT from corresponding tissue areas measured in selected CT scan slices. We present a new semi-automatic approach to defining the density cutoff between adipose tissue (AT) and lean tissue (LT) in such material. An intra-class correlation coefficient (ICC) was used to validate the method. The equations for estimating the whole-body composition volume and mass from areas measured in selected slices were modeled with ordinary least squares (OLS) linear regressions and support vector machine regression (SVMR). The best predictive equation for total body AT volume was based on the AT area of a single slice located between the 4th and 5th lumbar vertebrae (L4-L5) and produced lower prediction errors (|PE| = 1.86 liters, %PE = 8.77) than previous equations also based on CT scans. The LT area of the mid-thigh provided the lowest prediction errors (|PE| = 2.52 liters, %PE = 7.08) for estimating whole-body LT volume. We also present equations to predict total body AT and LT masses from a slice located at L4-L5 that resulted in reduced error compared with the previously published equations based on CT scans. The multislice SVMR predictor gave the theoretical upper limit for prediction precision of volumes and cross-validated the results.
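The central idea above, an OLS regression of whole-body tissue volume on the tissue area of one selected slice, can be sketched with synthetic data (the areas, the slope, and the noise level below are invented; only the fitting procedure mirrors the record):

```python
import numpy as np

# OLS regression of total tissue volume (liters) on the tissue area
# (cm^2) of a single selected slice, for 41 synthetic "subjects".
rng = np.random.default_rng(0)
area_L4L5 = rng.uniform(100, 400, size=41)                      # slice areas
volume = 0.09 * area_L4L5 + 3.0 + rng.normal(0, 1.5, size=41)   # "true" volumes

# Fit volume = a * area + b by least squares.
A = np.column_stack([area_L4L5, np.ones_like(area_L4L5)])
(a, b), *_ = np.linalg.lstsq(A, volume, rcond=None)

pred = a * area_L4L5 + b
mean_abs_pe = float(np.mean(np.abs(pred - volume)))   # mean |PE| in liters
print(round(mean_abs_pe, 2))
```

The paper's |PE| figures play the role of `mean_abs_pe` here, evaluated against measured whole-body volumes rather than a synthetic ground truth.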

  13. Motor automaticity in Parkinson’s disease

    Science.gov (United States)

    Wu, Tao; Hallett, Mark; Chan, Piu

    2017-01-01

    Bradykinesia is the most important feature contributing to motor difficulties in Parkinson’s disease (PD). However, the pathophysiology underlying bradykinesia is not fully understood. One important aspect is that PD patients have difficulty in performing learned motor skills automatically, but this problem has been generally overlooked. Here we review motor automaticity associated motor deficits in PD, such as reduced arm swing, decreased stride length, freezing of gait, micrographia and reduced facial expression. Recent neuroimaging studies have revealed some neural mechanisms underlying impaired motor automaticity in PD, including less efficient neural coding of movement, failure to shift automated motor skills to the sensorimotor striatum, instability of the automatic mode within the striatum, and use of attentional control and/or compensatory efforts to execute movements usually performed automatically in healthy people. PD patients lose previously acquired automatic skills due to their impaired sensorimotor striatum, and have difficulty in acquiring new automatic skills or restoring lost motor skills. More investigations on the pathophysiology of motor automaticity, the effect of L-dopa or surgical treatments on automaticity, and the potential role of using measures of automaticity in early diagnosis of PD would be valuable. PMID:26102020

  14. A novel GLM-based method for the Automatic IDentification of functional Events (AIDE) in fNIRS data recorded in naturalistic environments.

    Science.gov (United States)

    Pinti, Paola; Merla, Arcangelo; Aichelburg, Clarisse; Lind, Frida; Power, Sarah; Swingler, Elizabeth; Hamilton, Antonia; Gilbert, Sam; Burgess, Paul W; Tachtsidis, Ilias

    2017-07-15

    Recent technological advances have allowed the development of portable functional Near-Infrared Spectroscopy (fNIRS) devices that can be used to perform neuroimaging in the real world. However, as real-world experiments are designed to mimic everyday life situations, the identification of event onsets can be extremely challenging and time-consuming. Here, we present a novel analysis method based on general linear model (GLM) least-squares fitting for the Automatic IDentification of functional Events (or AIDE) directly from real-world fNIRS neuroimaging data. In order to investigate the accuracy and feasibility of this method, as a proof of principle we applied the algorithm to (i) synthetic fNIRS data simulating block-, event-related and mixed-design experiments and (ii) experimental fNIRS data recorded during a conventional lab-based task (involving maths). AIDE was able to recover functional events from simulated fNIRS data with an accuracy of 89%, 97% and 91% for the simulated block-, event-related and mixed-design experiments respectively. For the lab-based experiment, AIDE recovered more than 66.7% of the functional events from the measured fNIRS data. To illustrate the strength of this method, we then applied AIDE to fNIRS data recorded by a wearable system on one participant during a complex real-world prospective memory experiment conducted outside the lab. As part of the experiment, there were four and six events (actions where participants had to interact with a target) for the two conditions respectively (condition 1: social, interact with a person; condition 2: non-social, interact with an object). AIDE managed to recover 3/4 events and 3/6 events for conditions 1 and 2 respectively. The identified functional events were then matched to behavioural data from the video recordings of the participant's movements and actions. Our results suggest that "brain-first" rather than "behaviour-first" analysis is
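The GLM least-squares machinery the method builds on can be sketched as follows: boxcar regressors at candidate onsets are convolved with a haemodynamic-like kernel and fit to the signal. Everything below (signal, kernel shape, onsets) is synthetic and only illustrates the fitting step, not AIDE's event search itself.

```python
import numpy as np

# GLM least-squares fit: regress a synthetic "fNIRS" signal onto boxcar
# regressors convolved with a crude exponential haemodynamic kernel.
rng = np.random.default_rng(1)
n = 200
onsets = [30, 90, 150]                       # known event onsets (samples)
boxcars = np.zeros((n, len(onsets)))
for j, t in enumerate(onsets):
    boxcars[t:t + 10, j] = 1.0               # 10-sample events

kernel = np.exp(-np.arange(20) / 5.0)        # stand-in haemodynamic response
X = np.column_stack([np.convolve(b, kernel)[:n] for b in boxcars.T])
beta_true = np.array([1.0, 0.8, 1.2])
y = X @ beta_true + rng.normal(0, 0.1, n)    # signal = model + noise

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 2))
```

Event identification then amounts to searching over candidate onset times for the design matrix whose least-squares fit explains the signal best.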

  15. Automatic Construction of Finite Algebras

    Institute of Scientific and Technical Information of China (English)

    张健

    1995-01-01

    This paper deals with model generation for equational theories, i.e., automatically generating (finite) models of a given set of (logical) equations. Our method of finite model generation and a tool for the automatic construction of finite algebras are described. Some examples are given to show the applications of our program. We argue that the combination of model generators and theorem provers enables us to get a better understanding of logical theories. A brief comparison between our tool and other similar tools is also presented.
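Finite model generation can be illustrated with a brute-force search: enumerate all binary operation tables over a small domain and keep those satisfying the equations. The theory below (commutativity plus idempotence) and the domain size are chosen for illustration; real model generators prune this search far more cleverly.

```python
from itertools import product

# Exhaustive finite model generation: find every binary operation on the
# given domain satisfying x*y = y*x (commutativity) and x*x = x
# (idempotence). Purely illustrative of the search, not the paper's tool.
def find_models(domain):
    models = []
    cells = list(product(domain, repeat=2))
    for values in product(domain, repeat=len(cells)):
        op = dict(zip(cells, values))
        if all(op[(x, y)] == op[(y, x)] for x, y in cells) \
           and all(op[(x, x)] == x for x in domain):
            models.append(op)
    return models

models = find_models((0, 1))
print(len(models))
```

Over {0, 1} the constraints pin down the diagonal and tie the two off-diagonal cells together, leaving exactly two models (the off-diagonal value can be 0 or 1).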

  16. Automatic identification in mining

    Energy Technology Data Exchange (ETDEWEB)

    Puckett, D; Patrick, C [Mine Computers and Electronics Inc., Morehead, KY (United States)

    1998-06-01

    The feasibility of monitoring the locations and vital statistics of equipment and personnel in surface and underground mining operations has increased with advancements in radio frequency identification (RFID) technology. This paper addresses the use of RFID technology, which is relatively new to the mining industry, to track surface equipment in mine pits, loading points and processing facilities. Specific applications are discussed, including both simplified and complex truck tracking systems and an automatic pit ticket system. This paper concludes with a discussion of the future possibilities of using RFID technology in mining including monitoring heart and respiration rates, body temperatures and exertion levels; monitoring repetitious movements for the study of work habits; and logging air quality via personnel sensors. 10 refs., 5 figs.

  17. The Automatic Measurement of Targets

    DEFF Research Database (Denmark)

    Höhle, Joachim

    1997-01-01

    The automatic measurement of targets is demonstrated by means of a theoretical example and by an interactive measuring program for real imagery from a réseau camera. The strategy used is a combination of two methods: the maximum correlation coefficient and correlation in the subpixel range… The interactive software is also part of a computer-assisted learning program on digital photogrammetry.
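
    The two-step strategy named above, an integer-pixel maximum of the correlation coefficient refined into the subpixel range, can be sketched in one dimension. The Gaussian target, template size and parabolic refinement are illustrative choices, not the paper's implementation.

```python
import numpy as np

def ncc(a, b):
    """Correlation coefficient of two equal-length windows."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def locate(signal, template):
    """Integer position of maximum correlation, refined into the
    subpixel range by a three-point parabolic fit."""
    m = len(template)
    scores = np.array([ncc(signal[i:i + m], template)
                       for i in range(len(signal) - m + 1)])
    k = int(np.argmax(scores))
    if 0 < k < len(scores) - 1:
        l, c, r = scores[k - 1], scores[k], scores[k + 1]
        k += 0.5 * (l - r) / (l - 2 * c + r)
    return k

x = np.arange(200, dtype=float)
signal = np.exp(-0.5 * ((x - 83.4) / 3.0) ** 2)          # target centred at 83.4
template = np.exp(-0.5 * ((np.arange(21) - 10.0) / 3.0) ** 2)
print(round(locate(signal, template), 2))  # close to 73.4 (83.4 minus template centre 10)
```

    The same parabolic refinement generalises to two dimensions by fitting along each axis of the correlation surface.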

  18. CLG for Automatic Image Segmentation

    OpenAIRE

    Christo Ananth; S.Santhana Priya; S.Manisha; T.Ezhil Jothi; M.S.Ramasubhaeswari

    2017-01-01

    This paper proposes an automatic segmentation method which effectively combines the Active Contour Model, the Live Wire method and the Graph Cut approach (CLG). The aim of the Live Wire method is to give the user control over the segmentation process during execution. The Active Contour Model fits a statistical model of object shape and appearance, built during a training phase, to a new image. In the graph cut technique, each pixel is represented as a node and the distance between those nodes is rep…

  19. Automatic face morphing for transferring facial animation

    NARCIS (Netherlands)

    Bui Huu Trung, B.H.T.; Bui, T.D.; Poel, Mannes; Heylen, Dirk K.J.; Nijholt, Antinus; Hamza, H.M.

    2003-01-01

    In this paper, we introduce a novel method of automatically finding the training set of RBF networks for morphing a prototype face to represent a new face. This is done by automatically specifying and adjusting corresponding feature points on a target face. The RBF networks are then used to transfer…
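
    An RBF network of the kind mentioned above interpolates the displacement of a set of corresponding feature points and extends it smoothly to every other point of the face. The Gaussian kernel, its width, and the toy feature points below are assumptions for illustration; the paper's network and its training-set selection are not reproduced here.

```python
import numpy as np

def rbf_warp(src, dst):
    """Fit an RBF interpolant mapping src feature points onto dst and
    return a warp for arbitrary 2-D points (Gaussian kernel; the kernel
    and its width are assumptions, not the paper's network)."""
    def phi(r):
        return np.exp(-(r / 20.0) ** 2)
    d = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=2)
    W = np.linalg.solve(phi(d), dst - src)      # one weight column per axis
    def warp(p):
        r = np.linalg.norm(src - p, axis=1)
        return p + phi(r) @ W
    return warp

src = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
dst = src + np.array([[1.0, 0.0], [0.0, 2.0], [0.0, 0.0], [-1.0, 1.0]])
warp = rbf_warp(src, dst)
print(warp(src[0]))   # a feature point is mapped (essentially) onto its target
```

    Because the interpolation is exact at the feature points, adjusting a feature point immediately moves the corresponding region of the morphed face.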

  20. Automatic personnel contamination monitor

    International Nuclear Information System (INIS)

    Lattin, Kenneth R.

    1978-01-01

    United Nuclear Industries, Inc. (UNI) has developed an automatic personnel contamination monitor (APCM), which uniquely combines the design features of both portal and hand and shoe monitors. In addition, this prototype system also has a number of new features, including: micro computer control and readout, nineteen large area gas flow detectors, real-time background compensation, self-checking for system failures, and card reader identification and control. UNI's experience in operating the Hanford N Reactor, located in Richland, Washington, has shown the necessity of automatically monitoring plant personnel for contamination after they have passed through the procedurally controlled radiation zones. This final check ensures that each radiation zone worker has been properly checked before leaving company controlled boundaries. Investigation of the commercially available portal and hand and shoe monitors indicated that they did not have the sensitivity or sophistication required for UNI's application; therefore, a development program was initiated, resulting in the subject monitor. Field testing shows good sensitivity to personnel contamination, with the majority of alarms showing contaminants on clothing, face and head areas. In general, the APCM has sensitivity comparable to portal survey instrumentation. The inherent stand-in, walk-on feature of the APCM not only makes it easy to use, but makes it difficult to bypass. (author)

  1. Some observations concerning blade-element-momentum (BEM) methods and vortex wake methods, including numerical experiments with a simple vortex model

    Energy Technology Data Exchange (ETDEWEB)

    Snel, H. [Netherlands Energy Research Foundation ECN, Renewable Energy, Wind Energy (Netherlands)

    1997-08-01

    Recently the Blade Element Momentum (BEM) method has been made more versatile. The inclusion of rotational effects on time-averaged profile coefficients has improved its performance for calculations in stalled flow. Time dependence as a result of turbulent inflow, pitching actions and yawed operation is now treated more correctly than before (although more improvement is needed). It is of interest to note that adaptations in the modelling of unsteady or periodic induction stem from qualitative and quantitative insights obtained from free vortex models. Free vortex methods and, further into the future, Navier-Stokes (NS) calculations, together with wind tunnel and field experiments, can be very useful in enhancing the potential of BEM for aero-elastic response calculations. It must be kept in mind, however, that extreme caution must be used with free vortex methods, as will be discussed in the following chapters. A discussion of the shortcomings and the strengths of BEM and of vortex wake models is given. Some ideas are presented on how BEM might be improved without too much loss of efficiency. (EG)
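
    The core of a BEM calculation is a fixed-point iteration between the blade-element and momentum relations for each annulus. The sketch below uses the textbook induction-factor equations with a thin-aerofoil lift slope and no drag, tip loss or unsteady corrections; all numerical values are illustrative and not taken from the report.

```python
import math

# One-annulus BEM fixed-point iteration (textbook form; illustrative numbers).
B, R, r, c = 3, 40.0, 30.0, 1.5        # blade count, rotor/local radius, chord [m]
tsr = 7.0                               # tip speed ratio
twist = math.radians(2.0)               # local twist angle
sigma = B * c / (2 * math.pi * r)       # local solidity

a, ap = 0.3, 0.0                        # axial and tangential induction factors
for _ in range(200):
    phi = math.atan((1 - a) / ((1 + ap) * tsr * r / R))   # inflow angle
    alpha = phi - twist                                    # angle of attack
    cl = 2 * math.pi * alpha            # thin-aerofoil lift slope, drag neglected
    cn, ct = cl * math.cos(phi), cl * math.sin(phi)
    a_new = 1.0 / (4 * math.sin(phi) ** 2 / (sigma * cn) + 1)
    ap_new = 1.0 / (4 * math.sin(phi) * math.cos(phi) / (sigma * ct) - 1)
    if abs(a_new - a) < 1e-10 and abs(ap_new - ap) < 1e-10:
        break
    a, ap = a_new, ap_new
print(round(a, 3), round(ap, 3))        # converged induction factors
```

    The time-averaged corrections discussed in the abstract enter exactly here, by replacing the idealised lift curve and momentum relations with corrected ones.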

  2. Automatic generation of combinatorial test data

    CERN Document Server

    Zhang, Jian; Ma, Feifei

    2014-01-01

    This book reviews the state-of-the-art in combinatorial testing, with particular emphasis on the automatic generation of test data. It describes the most commonly used approaches in this area - including algebraic construction, greedy methods, evolutionary computation, constraint solving and optimization - and explains major algorithms with examples. In addition, the book lists a number of test generation tools, as well as benchmarks and applications. Addressing a multidisciplinary topic, it will be of particular interest to researchers and professionals in the areas of software testing, combi…
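
    Of the approaches listed, the greedy method is the easiest to sketch: repeatedly add the test case that covers the most still-uncovered value pairs until every pair is covered. The one-test-at-a-time variant below enumerates all candidate tests, so it only suits small parameter spaces; the parameter names are invented for the example.

```python
from itertools import combinations, product

def pairwise(params):
    """Greedy one-test-at-a-time pairwise (2-way) covering array sketch."""
    uncovered = {((i, a), (j, b))
                 for i, j in combinations(range(len(params)), 2)
                 for a in params[i] for b in params[j]}
    tests = []
    while uncovered:
        # choose the full assignment covering the most still-uncovered pairs
        best = max(product(*params),
                   key=lambda t: sum(((i, t[i]), (j, t[j])) in uncovered
                                     for i, j in combinations(range(len(t)), 2)))
        tests.append(best)
        uncovered -= {((i, best[i]), (j, best[j]))
                      for i, j in combinations(range(len(best)), 2)}
    return tests

# hypothetical configuration space: 2 x 2 x 2 = 8 exhaustive combinations
params = [["linux", "mac"], ["chrome", "firefox"], ["ipv4", "ipv6"]]
suite = pairwise(params)
print(len(suite))   # 4 tests cover every value pair
```

    Production tools replace the exhaustive candidate scan with heuristic candidate construction, which is what makes greedy generation scale.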

  3. Methods for high-resolution anisotropic finite element modeling of the human head: automatic MR white matter anisotropy-adaptive mesh generation.

    Science.gov (United States)

    Lee, Won Hee; Kim, Tae-Seong

    2012-01-01

    This study proposes an advanced finite element (FE) head modeling technique through which high-resolution FE meshes adaptive to the degree of tissue anisotropy can be generated. Our adaptive meshing scheme (called wMesh) uses MRI structural information and fractional anisotropy maps derived from diffusion tensors in the FE mesh generation process, optimally reflecting the electrical properties of the human brain. We examined the characteristics of the wMeshes through various qualitative and quantitative comparisons to conventional FE regular-sized meshes that are non-adaptive to the degree of white matter anisotropy. We investigated numerical differences in the FE forward solutions, which include the electrical potential and current density generated by current sources in the brain. The quantitative difference was calculated by two statistical measures, the relative difference measure (RDM) and the magnification factor (MAG). The results show that the wMeshes are adaptive to the density of the WM anisotropy and better reflect the density and directionality of tissue conductivity anisotropy. Our comparisons between various anisotropic regular mesh and wMesh models show that there are substantial differences in the EEG forward solutions in the brain (up to RDM=0.48 and MAG=0.63 in the electrical potential, and RDM=0.65 and MAG=0.52 in the current density). These results indicate that the wMeshes produce forward solutions that differ from those of the conventional regular meshes. We present results showing that the wMesh head modeling approach enhances the sensitivity and accuracy of the FE solutions at interfaces and in regions where the anisotropic conductivities change sharply or their directional changes are complex. The fully automatic wMesh generation technique should be useful for modeling an individual-specific, high-resolution anisotropic FE head model incorporating realistic anisotropic conductivity distributions.
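
    The two error measures used above are standard in forward-solution comparisons: RDM captures the topography (shape) difference between two solutions after normalisation, and MAG the ratio of their magnitudes, so a pure scaling gives RDM 0 and MAG equal to the scale factor. A minimal sketch of one common formulation (the exact variant used in the study may differ):

```python
import numpy as np

def rdm(u, v):
    """Relative difference measure: topography error of normalised solutions."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return float(np.linalg.norm(u / np.linalg.norm(u) - v / np.linalg.norm(v)))

def mag(u, v):
    """Magnification factor: ratio of solution magnitudes."""
    return float(np.linalg.norm(v) / np.linalg.norm(u))

ref = np.array([1.0, 2.0, 3.0])
print(rdm(ref, 2 * ref), mag(ref, 2 * ref))  # pure scaling: RDM ~0, MAG 2
```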

  4. Neuro-fuzzy system modeling based on automatic fuzzy clustering

    Institute of Scientific and Technical Information of China (English)

    Yuangang TANG; Fuchun SUN; Zengqi SUN

    2005-01-01

    A neuro-fuzzy system model based on automatic fuzzy clustering is proposed. A hybrid model identification algorithm is also developed to decide the model structure and model parameters. The algorithm mainly includes three parts: 1) automatic fuzzy C-means (AFCM), which is applied to generate fuzzy rules automatically and then fix the size of the neuro-fuzzy network, by which the complexity of system design is reduced greatly at the price of some fitting capability; 2) recursive least squares estimation (RLSE), which is used to update the parameters of the Takagi-Sugeno model employed to describe the behavior of the system; 3) a gradient descent algorithm, proposed for the fuzzy values according to the back-propagation algorithm of neural networks. Finally, modeling the dynamical equation of a two-link manipulator with the proposed approach is illustrated to validate the feasibility of the method.
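
    Plain fuzzy C-means, the core on which AFCM builds its automatic rule generation, alternates between membership-weighted centre updates and distance-based membership updates. The sketch below is the standard algorithm, not the paper's AFCM; the fuzzifier, data and seed are illustrative.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Standard fuzzy C-means: returns memberships U (n x c) and centres V."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)           # membership rows sum to 1
    for _ in range(iters):
        W = U ** m                               # fuzzified memberships
        V = (W.T @ X) / W.sum(axis=0)[:, None]   # weighted cluster centres
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        p = 2.0 / (m - 1.0)
        U = d ** -p / (d ** -p).sum(axis=1, keepdims=True)
    return U, V

# two tight synthetic clusters around 0 and 5
X = np.vstack([np.random.default_rng(1).normal(0.0, 0.1, (20, 2)),
               np.random.default_rng(2).normal(5.0, 0.1, (20, 2))])
U, V = fcm(X, 2)
print(np.round(np.sort(V[:, 0])))   # centres land near 0 and 5
```

    In the paper's scheme, each resulting cluster seeds one fuzzy rule, so choosing c automatically also fixes the network size.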

  5. Automatic Residential/Commercial Classification of Parcels with Solar Panel Detections

    Energy Technology Data Exchange (ETDEWEB)

    2018-03-25

    A computational method to automatically detect solar panels on rooftops to aid policy and financial assessment of solar distributed generation. The code automatically classifies parcels containing solar panels in the U.S. as residential or commercial. The code allows the user to specify an input dataset containing parcels and detected solar panels, and then uses information about the parcels and solar panels to automatically classify the rooftops as residential or commercial using machine learning techniques. The zip file containing the code includes sample input and output datasets for the Boston and DC areas.

  6. UIO-based Fault Diagnosis for Hydraulic Automatic Gauge Control System of Magnesium Sheet Mill

    Directory of Open Access Journals (Sweden)

    Li-Ping FAN

    2014-02-01

    Full Text Available The hydraulic automatic gauge control system of a magnesium sheet mill is a complex integrated control system combining mechanical, hydraulic and electrical information. The failure rate of the AGC system is high, and its fault causes are complex. Based on an analysis of the faults of the main components of the automatic gauge control system, an unknown input observer is used to realize fault diagnosis and isolation. Simulation results show that the fault diagnosis method based on the unknown input observer is effective for the hydraulic automatic gauge control system of a magnesium sheet mill.

  7. A Method for the Automatic Exposure Control in Pediatric Abdominal CT: Application to the Standard Deviation Value and Tube Current Methods by Using Patient's Age and Body Size.

    Science.gov (United States)

    Furuya, Ken; Akiyama, Shinji; Nambu, Atushi; Suzuki, Yutaka; Hasebe, Yuusuke

    2017-01-01

    We aimed to apply the pediatric abdominal CT protocol of Donnelly et al. in the United States to pediatric abdominal CT-AEC. Examining CT images of 100 children, we found that the sectional area of the hepatic portal region (y) was strongly correlated with body weight (x) as follows: y=7.14x + 84.39 (correlation coefficient=0.9574). We scanned an elliptical cone phantom that simulates the human body using the pediatric abdominal CT scanning method of Donnelly et al., and measured SD values. We further scanned the same phantom under the settings for an adult CT-AEC scan and obtained the relationship between the sectional areas (y) and the SD values. Using these results, we obtained the following preset noise factors for CT-AEC at each body weight range: 6.90 at 4.5-8.9 kg, 8.40 at 9.0-17.9 kg, 8.68 at 18.0-26.9 kg, 9.89 at 27.0-35.9 kg, 12.22 at 36.0-45.0 kg, 13.52 at 45.1-70.0 kg, 15.29 at more than 70 kg. From the relation between age, weight and the distance from the liver to the tuber ischiadicum in 500 children, we obtained the CTDI vol values and DLP values under the scanning protocol of Donnelly et al. Almost all of the DRLs derived from these values turned out to be smaller than the DRL data of the IAEA and various countries. Thus, by setting the maximum current values of CT-AEC to Donnelly et al.'s age-wise current values and using our weight-wise noise factors, we think we can perform pediatric abdominal CT-AEC scans that are consistent with the same radiation safety and image quality as those proposed by Donnelly et al.
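
    The weight-to-area regression and the weight-banded noise factors reported above translate directly into a small lookup. The coefficients and bands are transcribed from the abstract; the function names (and the unit of the sectional area, which the abstract leaves implicit) are our own.

```python
def portal_area(weight_kg):
    """Sectional area of the hepatic portal region from body weight
    (regression quoted above: y = 7.14x + 84.39, r = 0.9574)."""
    return 7.14 * weight_kg + 84.39

# (low kg, high kg, preset noise factor) bands as quoted in the abstract;
# weights falling in the small gaps between bands are not specified there
BANDS = [(4.5, 8.9, 6.90), (9.0, 17.9, 8.40), (18.0, 26.9, 8.68),
         (27.0, 35.9, 9.89), (36.0, 45.0, 12.22), (45.1, 70.0, 13.52)]

def noise_factor(weight_kg):
    for lo, hi, nf in BANDS:
        if lo <= weight_kg <= hi:
            return nf
    return 15.29                        # more than 70 kg

print(portal_area(20.0), noise_factor(20.0))  # 227.19 (approx.) and 8.68
```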

  8. Method of semi-automatic high precision potentiometric titration for characterization of uranium compounds; Metodo de titulacao potenciometrica de alta precisao semi-automatizado para a caracterizacao de compostos de uranio

    Energy Technology Data Exchange (ETDEWEB)

    Cristiano, Barbara Fernandes G.; Dias, Fabio C.; Barros, Pedro D. de; Araujo, Radier Mario S. de; Delgado, Jose Ubiratan; Silva, Jose Wanderley S. da, E-mail: barbara@ird.gov.b, E-mail: fabio@ird.gov.b, E-mail: pedrodio@ird.gov.b, E-mail: radier@ird.gov.b, E-mail: delgado@ird.gov.b, E-mail: wanderley@ird.gov.b [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Lopes, Ricardo T., E-mail: ricardo@lin.ufrj.b [Universidade Federal do Rio de Janeiro (LIN/COPPE/UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-Graduacao de Engenharia. Lab. de Instrumentacao Nuclear

    2011-10-26

    The method of high-precision potentiometric titration is widely used in the certification and characterization of uranium compounds. In order to reduce the analysis time and diminish the influence of the analyst, a semi-automatic version of the method was developed at the safeguards laboratory of CNEN-RJ, Brazil. The method was applied with traceability guaranteed by the use of a primary standard of potassium dichromate. The combined standard uncertainty in the determination of the total uranium concentration was of the order of 0.01%, better than that of the methods traditionally used by nuclear installations, which is of the order of 0.1%

  9. Automatic quantitative metallography

    International Nuclear Information System (INIS)

    Barcelos, E.J.B.V.; Ambrozio Filho, F.; Cunha, R.C.

    1976-01-01

    The quantitative determination of metallographic parameters is analysed through a description of the Micro-Videomat automatic image analysis system and of the volumetric percentage of perlite in nodular cast irons, porosity and average grain size in high-density sintered pellets of UO 2 , and grain size of ferritic steel. The techniques adopted are described and the results obtained are compared with the corresponding ones from the direct counting process: counting of systematic points (grid) to measure volume, and the intersection method, utilizing a circumference of known radius, for the average grain size. The technique adopted for nodular cast iron resulted from the small difference in optical reflectivity between graphite and perlite. Porosity evaluation of sintered UO 2 pellets is also analyzed [pt

  10. Automatic surveying techniques

    International Nuclear Information System (INIS)

    Sah, R.

    1976-01-01

    In order to investigate the feasibility of automatic surveying methods in a more systematic manner, the PEP organization signed a contract in late 1975 for TRW Systems Group to undertake a feasibility study. The completion of this study resulted in TRW Report 6452.10-75-101, dated December 29, 1975, which was largely devoted to an analysis of a survey system based on an Inertial Navigation System. This PEP note is a review and, in some instances, an extension of that TRW report. A second survey system which employed an ''Image Processing System'' was also considered by TRW, and it will be reviewed in the last section of this note. 5 refs., 5 figs., 3 tabs

  11. Estimating patient dose from CT exams that use automatic exposure control: Development and validation of methods to accurately estimate tube current values.

    Science.gov (United States)

    McMillan, Kyle; Bostani, Maryam; Cagnon, Christopher H; Yu, Lifeng; Leng, Shuai; McCollough, Cynthia H; McNitt-Gray, Michael F

    2017-08-01

    The vast majority of body CT exams are performed with automatic exposure control (AEC), which adapts the mean tube current to the patient size and modulates the tube current angularly, longitudinally or both. However, most radiation dose estimation tools are based on fixed tube current scans. Accurate estimates of patient dose from AEC scans require knowledge of the tube current values, which is usually unavailable. The purpose of this work was to develop and validate methods to accurately estimate the tube current values prescribed by one manufacturer's AEC system to enable accurate estimates of patient dose. Methods were developed that took into account available patient attenuation information, user-selected image quality reference parameters and x-ray system limits to estimate tube current values for patient scans. Methods consistent with AAPM Report 220 were developed that used patient attenuation data that were: (a) supplied by the manufacturer in the CT localizer radiograph and (b) based on a simulated CT localizer radiograph derived from image data. For comparison, actual tube current values were extracted from the projection data of each patient. Validation of each approach was based on data collected from 40 pediatric and adult patients who received clinically indicated chest (n = 20) and abdomen/pelvis (n = 20) scans on a 64-slice multidetector row CT (Sensation 64, Siemens Healthcare, Forchheim, Germany). For each patient dataset, the following were collected with Institutional Review Board (IRB) approval: (a) projection data containing actual tube current values at each projection view, (b) the CT localizer radiograph (topogram) and (c) reconstructed image data. Tube current values were estimated based on the actual topogram (actual-topo) as well as the simulated topogram based on image data (sim-topo). Each of these was compared to the actual tube current values from the patient scan. In addition, to assess the accuracy of each method in estimating…

  12. Comparison of manual and automatic segmentation methods for brain structures in the presence of space-occupying lesions: a multi-expert study

    International Nuclear Information System (INIS)

    Deeley, M A; Cmelak, A J; Malcolm, A W; Moretti, L; Jaboin, J; Niermann, K; Yang, Eddy S; Yu, David S; Ding, G X; Chen, A; Datteri, R; Noble, J H; Dawant, B M; Donnelly, E F; Yei, F; Koyama, T

    2011-01-01

    The purpose of this work was to characterize expert variation in the segmentation of intracranial structures pertinent to radiation therapy, and to assess a registration-driven atlas-based segmentation algorithm in that context. Eight experts were recruited to segment the brainstem, optic chiasm, optic nerves, and eyes of 20 patients who underwent therapy for large space-occupying tumors. Performance variability was assessed through three geometric measures: volume, Dice similarity coefficient, and Euclidean distance. In addition, two simulated ground truth segmentations were calculated via the simultaneous truth and performance level estimation algorithm and a novel application of probability maps. The experts and the automatic system were found to generate structures of similar volume, though the experts exhibited higher variation with respect to tubular structures. No difference was found between the mean Dice similarity coefficient (DSC) of the automatic and expert delineations as a group at a 5% significance level over all cases and organs. The larger structures of the brainstem and eyes exhibited mean DSC of approximately 0.8-0.9, whereas the tubular chiasm and nerves were lower, approximately 0.4-0.5. Similarly low DSCs have been reported previously without the context of several experts and patient volumes. This study, however, provides evidence that experts are similarly challenged. The average maximum distances (maximum inside, maximum outside) from a simulated ground truth ranged from (-4.3, +5.4) mm for the automatic system to (-3.9, +7.5) mm for the experts considered as a group. In a ranking of true positive rates over all structures at a 2 mm threshold from the simulated ground truth, the automatic system ranked second of the nine raters. This work underscores the need for large scale studies utilizing statistically robust numbers of patients and experts in evaluating the quality of automatic algorithms.
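
    The Dice similarity coefficient used throughout the comparison is twice the overlap of two segmentations divided by the sum of their sizes. A minimal sketch on binary masks (the grid and the one-voxel shift are invented for illustration):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient of two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

expert = np.zeros((10, 10), bool)
expert[2:8, 2:8] = True                 # a 6 x 6 "expert" contour
auto = np.zeros((10, 10), bool)
auto[3:9, 2:8] = True                   # the same contour shifted one voxel
print(round(dice(expert, auto), 3))     # 0.833 = 2*30 / (36 + 36)
```

    The same one-voxel shift costs a thin tube far more DSC than a large organ, which is why the chiasm and nerves score so much lower than the brainstem and eyes above.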

  13. Automatic Photoelectric Telescope Service

    International Nuclear Information System (INIS)

    Genet, R.M.; Boyd, L.J.; Kissell, K.E.; Crawford, D.L.; Hall, D.S.; BDM Corp., McLean, VA; Kitt Peak National Observatory, Tucson, AZ; Dyer Observatory, Nashville, TN)

    1987-01-01

    Automatic observatories have the potential of gathering sizable amounts of high-quality astronomical data at low cost. The Automatic Photoelectric Telescope Service (APT Service) has realized this potential and is routinely making photometric observations of a large number of variable stars. However, without observers to provide on-site monitoring, it was necessary to incorporate special quality checks into the operation of the APT Service at its multiple automatic telescope installation on Mount Hopkins. 18 references

  14. Automatic Fiscal Stabilizers

    Directory of Open Access Journals (Sweden)

    Narcis Eduard Mitu

    2013-11-01

    Full Text Available Policies or institutions (built into an economic system) that automatically tend to dampen economic cycle fluctuations in income, employment, etc., without direct government intervention. For example, in boom times, progressive income tax automatically reduces the money supply as incomes and spending rise. Similarly, in recessionary times, the payment of unemployment benefits injects more money into the system and stimulates demand. Also called automatic stabilizers or built-in stabilizers.

  15. AuTom: a novel automatic platform for electron tomography reconstruction

    KAUST Repository

    Han, Renmin

    2017-07-26

    We have developed a software package towards automatic electron tomography (ET): Automatic Tomography (AuTom). The presented package has the following characteristics: accurate alignment modules for marker-free datasets containing substantial biological structures; fully automatic alignment modules for datasets with fiducial markers; wide coverage of reconstruction methods, including a new iterative method based on compressed-sensing theory that suppresses the "missing wedge" effect; and multi-platform acceleration solutions that support faster iterative algebraic reconstruction. AuTom aims to achieve fully automatic alignment and reconstruction for electron tomography and has already been successful for a variety of datasets. AuTom also offers a user-friendly interface and auxiliary designs for file management and workflow management, in which fiducial marker-based datasets and marker-free datasets are addressed with completely different subprocesses. With all of these features, AuTom can serve as a convenient and effective tool for processing in electron tomography.

  16. Evaluation on ultrasonic examination methods applied to Ni-base alloy weld including cracks due to stress corrosion cracking found in BWR reactor internal

    International Nuclear Information System (INIS)

    Aoki, Takayuki; Kobayashi, Hiroyuki; Higuchi, Shinichi; Shimizu, Sadato

    2005-01-01

    A Ni-base alloy weld including cracks due to stress corrosion cracking, found in the reactor internals of the oldest BWR in Japan, Tsuruga unit 1, in 1999, was examined by three (3) types of UT method. After this examination, the depth of each crack was confirmed by alternately carrying out slight excavation with a grinder and PT examination until each crack disappeared. Then, the depth measured by the former method was compared with the one measured by the latter method. In this fashion, the performance of the UT methods was verified. As a result, a combination of the three types of UT method was found to meet the acceptance criteria given by ASME Sec. XI Appendix VIII, Performance Demonstration for Ultrasonic Examination Systems - Supplement 6. In this paper, the results of the UT examination described above and their evaluation are discussed. (author)

  17. Comparison of liver volumetry on contrast-enhanced CT images: one semiautomatic and two automatic approaches.

    Science.gov (United States)

    Cai, Wei; He, Baochun; Fan, Yingfang; Fang, Chihua; Jia, Fucang

    2016-11-08

    The aim of this study was to evaluate the accuracy, consistency, and efficiency of three liver volumetry methods on clinical contrast-enhanced CT images: one semiautomatic interactive method, an in-house-developed 3D Medical Image Analysis (3DMIA) system; one automatic active shape model (ASM)-based segmentation; and one automatic probabilistic atlas (PA)-guided segmentation method. Forty-two datasets, including 27 normal-liver patients and 15 patients with space-occupying liver lesions, were retrospectively included in this study. The three methods - the semiautomatic 3DMIA, the automatic ASM-based, and the automatic PA-based liver volumetry - achieved an accuracy with volume difference (VD) of -1.69%, -2.75%, and 3.06% in the normal group, respectively, and with VD of -3.20%, -3.35%, and 4.14% in the space-occupying lesion group, respectively. The three methods required 27.63 min, 1.26 min, and 1.18 min on average, respectively, compared with 43.98 min for manual volumetry. The high intraclass correlation coefficients between the three methods and the manual method indicated excellent agreement on liver volumetry. Significant differences in segmentation time were observed between the three methods (3DMIA, ASM, and PA) and manual volumetry (p < 0.001), and between the automatic volumetries (ASM and PA) and the semiautomatic volumetry (3DMIA) (p < 0.001). The semiautomatic interactive 3DMIA, automatic ASM-based, and automatic PA-based liver volumetry agreed well with the manual gold standard in both the normal liver group and the space-occupying lesion group. The ASM- and PA-based automatic segmentations have better efficiency in clinical use. © 2016 The Authors.

  18. Neural Bases of Automaticity

    Science.gov (United States)

    Servant, Mathieu; Cassey, Peter; Woodman, Geoffrey F.; Logan, Gordon D.

    2018-01-01

    Automaticity allows us to perform tasks in a fast, efficient, and effortless manner after sufficient practice. Theories of automaticity propose that across practice processing transitions from being controlled by working memory to being controlled by long-term memory retrieval. Recent event-related potential (ERP) studies have sought to test this…

  19. Automatic control systems engineering

    International Nuclear Information System (INIS)

    Shin, Yun Gi

    2004-01-01

    This book describes automatic control for electrical and electronic systems, covering the history of automatic control, the Laplace transform, block diagrams and signal flow diagrams, electrometers, linearization of systems, state space and state-space analysis of electrical systems, sensors, hydraulic control systems, stability, the time response of linear dynamic systems, the concept of the root locus, the procedure for drawing a root locus, frequency response, and the design of control systems.

  20. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

    Automatically generating computer animations is a challenging and complex problem with applications in games and film production. In this paper, we investigate how to translate a shot list for a virtual scene into a series of virtual camera configurations, i.e. automatically controlling the virtual…