WorldWideScience

Sample records for normalized cut method

  1. Wedge cutting of mild steel by CO2 laser and cut-quality assessment in relation to normal cutting

    Science.gov (United States)

    Yilbas, B. S.; Karatas, C.; Uslan, I.; Keles, O.; Usta, Y.; Yilbas, Z.; Ahsan, M.

    2008-10-01

    In some applications, laser cutting of wedge surfaces cannot be avoided in sheet metal processing, and the quality of the end product defines the applicability of the laser-cutting process in such situations. In the present study, CO2 laser cutting of wedge surfaces as well as normal surfaces (normal to the laser beam axis) is considered, and the end product quality is assessed using the international standards for thermal cutting. The cut surfaces are examined by optical microscopy, and geometric features of the cut edges such as out-of-flatness and dross height are measured from the micrographs. A neural network is introduced to classify the striation patterns of the cut surfaces. It is found that the dross height and out-of-flatness are influenced significantly by the laser output power, particularly for the wedge-cutting situation. Moreover, the cut quality improves at a certain value of the laser power intensity.

  2. Spectral segmentation of polygonized images with normalized cuts

    Energy Technology Data Exchange (ETDEWEB)

    Matsekh, Anna [Los Alamos National Laboratory]; Skurikhin, Alexei [Los Alamos National Laboratory]; Rosten, Edward [UNIV OF CAMBRIDGE]

    2009-01-01

    We analyze numerical behavior of the eigenvectors corresponding to the lowest eigenvalues of the generalized graph Laplacians arising in the Normalized Cuts formulations of the image segmentation problem on coarse polygonal grids.
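
    To make the formulation above concrete, the following minimal Python sketch (an illustration under stated assumptions, not the authors' code) bipartitions a small graph by solving the Shi-Malik generalized eigenproblem (D - W)y = λDy and thresholding the second-smallest eigenvector:

```python
# Minimal sketch of Normalized Cuts spectral bipartition (Shi & Malik),
# assuming a small undirected graph given by a dense affinity matrix W.
import numpy as np
from scipy.linalg import eigh

def ncut_bipartition(W):
    """Split nodes in two using the second-smallest generalized eigenvector
    of (D - W) y = lambda * D y."""
    D = np.diag(W.sum(axis=1))
    L = D - W                      # graph Laplacian
    vals, vecs = eigh(L, D)        # generalized problem, ascending eigenvalues
    fiedler = vecs[:, 1]           # eigenvector of the 2nd smallest eigenvalue
    return fiedler >= 0            # zero threshold -> two segments

# Toy example: two loosely coupled clusters.
W = np.array([[0, 1, 1, 0.01, 0],
              [1, 0, 1, 0,    0],
              [1, 1, 0, 0,    0.01],
              [0.01, 0, 0, 0, 1],
              [0, 0, 0.01, 1, 0]], dtype=float)
print(ncut_bipartition(W))         # e.g. [ True  True  True False False]
```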

  3. Cutting method and device underwater

    International Nuclear Information System (INIS)

    Takano, Genta; Kamei, Hiromasa; Beppu, Seiji

    1998-01-01

    The area around the material to be cut is surrounded by an openable/closable box. The material is cut underwater, and the debris generated is removed from the cut portion by a pressurized water jet and then sucked and recovered together with water in the box. Among the materials produced by the underwater cutting, solid materials that do not float on water are made to stay midway along the suction and recovery channel. A large suction force would otherwise be required over the entire suction and recovery channel when recovering large solid materials that do not float, but according to the present invention even large materials can be recovered easily, since they are sucked in, held midway along the channel, and then recovered. (N.H.)

  4. Method of cutting radioactivated metal structures

    International Nuclear Information System (INIS)

    Takimoto, Yoshinori; Sakota, Kotaro; Hamamoto, Noboru; Harada, Keizo.

    1985-01-01

    Purpose: To improve the cutting performance to a level comparable with that in air, while suppressing the scattering of radioactive materials upon cutting to the same level as in underwater cutting. Method: After the gas cutting torch is ignited automatically, water spraying by the local water sprayer is started by actuating a submerged pump, while a gas cutting manipulator is operated to cut the nuclear reactor pressure vessel. In this way, the cutting exhaust gases produced by the gas cutting torch are water-washed by the spray from the local water sprayer and fall within the nuclear reactor pressure vessel in the form of water streams or droplets along the inner wall surface of the vessel. The water is then fed back to the local water sprayer by the submerged pump. (Kawakami, Y.)

  5. Correlation methods in cutting arcs

    Energy Technology Data Exchange (ETDEWEB)

    Prevosto, L; Kelly, H, E-mail: prevosto@waycom.com.ar [Grupo de Descargas Electricas, Departamento Ing. Electromecanica, Universidad Tecnologica Nacional, Regional Venado Tuerto, Laprida 651, Venado Tuerto (2600), Santa Fe (Argentina)

    2011-05-01

    The present work applies similarity theory to the plasma emanating from transferred-arc, gas-vortex stabilized plasma cutting torches, to analyze the existing correlation between the arc temperature and the physical parameters of such torches. It has been found that the enthalpy number significantly influences the temperature of the electric arc. The obtained correlation shows an average deviation of 3% from the temperature data points. Such a correlation can be used, for instance, to predict changes in the peak value of the arc temperature at the nozzle exit of a geometrically similar cutting torch due to changes in its operation parameters.

  6. Correlation methods in cutting arcs

    International Nuclear Information System (INIS)

    Prevosto, L; Kelly, H

    2011-01-01

    The present work applies similarity theory to the plasma emanating from transferred-arc, gas-vortex stabilized plasma cutting torches, to analyze the existing correlation between the arc temperature and the physical parameters of such torches. It has been found that the enthalpy number significantly influences the temperature of the electric arc. The obtained correlation shows an average deviation of 3% from the temperature data points. Such a correlation can be used, for instance, to predict changes in the peak value of the arc temperature at the nozzle exit of a geometrically similar cutting torch due to changes in its operation parameters.

  7. Method of dismantling cylindrical structure by cutting

    International Nuclear Information System (INIS)

    Harada, Minoru; Mitsuo, Kohei; Yokota, Isoya; Nakamura, Kenjiro.

    1989-01-01

    This invention concerns a method of cutting and removing cylindrical structures, for example, iron-reinforced concrete materials such as thermal shielding walls in BWR type power plants, in block-like form. That is, in a method of cutting and removing the cylindrical structure from the side of the outer wall, the structural material is cut from above to below successively in the axial direction and the circumferential direction by means of an abrasive jet under remote operation, and cut into blocks each of a predetermined size. The cut-out blocks are successively taken out. Cutting the material from above to below by remote operation and taking out small blocks poses no hazard to the human body. When practicing the present invention, it is preferable to use a processing device for slurry and exhaust gases to prevent scattering of activated dismantled pieces or powdery dusts. (K.M.)

  8. Making the cut for the contour method

    OpenAIRE

    Bouchard, P. John; Ledgard, Peter; Hiller, Stan; Hosseinzadh Torknezhad, Foroogh

    2012-01-01

    The contour method is becoming an increasingly popular measurement technique for mapping residual stress in engineering components. The accuracy of the technique is critically dependent on the quality of the cut performed. This paper presents results from blind cutting trials on austenitic stainless steel using electro-discharge machines made by three manufacturers. The suitability of the machines is assessed based on the surface finish achieved, risk of wire breakages and the nature of cutti...

  9. Cut Based Method for Comparing Complex Networks.

    Science.gov (United States)

    Liu, Qun; Dong, Zhishan; Wang, En

    2018-03-23

    Revealing the underlying similarity of various complex networks has become both a popular and interdisciplinary topic, with a plethora of relevant application domains. The essence of the similarity here is that network features of the same network type are highly similar, while the features of different kinds of networks present low similarity. In this paper, we introduce and explore a new method for comparing various complex networks based on the cut distance. We show correspondence between the cut distance and the similarity of two networks. This correspondence allows us to consider a broad range of complex networks and explicitly compare various networks with high accuracy. Various machine learning technologies such as genetic algorithms, nearest neighbor classification, and model selection are employed during the comparison process. Our cut method is shown to be suited for comparisons of undirected networks and directed networks, as well as weighted networks. In the model selection process, the results demonstrate that our approach outperforms other state-of-the-art methods with respect to accuracy.
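
    A brute-force illustration of the cut distance underlying the comparison above; the vertex-correspondence search (handled in the paper by genetic algorithms) is omitted, so this hypothetical helper applies only to small labeled graphs on a common vertex set:

```python
# Brute-force cut distance between two labeled graphs on the same vertex set:
# max over vertex subsets S, T of |e_A(S,T) - e_B(S,T)| / n^2. Exponential in
# n, so only for tiny illustrations.
import itertools
import numpy as np

def cut_distance(A, B):
    n = len(A)
    D = A - B
    best = 0.0
    for r in range(n + 1):
        for S in itertools.combinations(range(n), r):
            for t in range(n + 1):
                for T in itertools.combinations(range(n), t):
                    best = max(best, abs(D[np.ix_(S, T)].sum()))
    return best / n**2

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)  # path 1-0-2
B = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # path 0-1-2
print(cut_distance(A, B))  # 2/9 for these two 3-node paths
```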

  10. Normal Limits of Electrocardiogram and Cut-Off Values for Left ...

    African Journals Online (AJOL)

    Gender difference exists in some cut-off values for LVH. This study defined the normal limits for electrocardiographic variables for young adult Nigerians. Racial factor should be taken into consideration in interpretation of ECG. Keywords: Normal limits, Electrocardiogram, Cut-off values, Left ventricular hypertrophy, Young ...

  11. Development of liner cutting method for stainless steel liner

    International Nuclear Information System (INIS)

    Takahata, Masato; Wignarajah, Sivakmaran; Kamata, Hirofumi

    2005-01-01

    The present work is an attempt to develop a laser cutting method for cutting and removing stainless steel liners from concrete walls and floors in cells and fuel storage pools of nuclear facilities. The effects of basic laser cutting parameters such as cutting speed, assist gas flow etc. were first studied applying a 1 kW Nd:YAG laser to mock-up concrete specimens lined with 3 mm thick stainless steel sheets. These initial studies were followed by studies on the effect of unevenness of the liner surface and on methods of confining contamination during the cutting process. The results showed that laser cutting is superior to other conventional cutting methods from the point of view of safety from radioactivity and work efficiency when cutting contaminated stainless steel liners. In addition to the above results, this paper describes the design outline of a laser cutting system for cutting stainless liners at site and evaluates its merit and cost performance. (author)

  12. Development of laser cutting method for stainless steel liner

    International Nuclear Information System (INIS)

    Ishihara, Satoshi; Takahata, Masato; Wignarajah, Sivakumaran; Kamata, Hirofumi

    2007-01-01

    The present work is an attempt to develop a laser cutting method for cutting and removing stainless steel liners from concrete walls and floors in nuclear facilities. The effects of basic laser cutting parameters such as energy, cutting speed, assist gas flow etc. were first studied through cutting experiments on mock-up concrete specimens lined with 3 mm thick stainless steel sheets using a 1 kW Nd:YAG laser. These initial studies were followed by further studies on the effect of unevenness of the liner surface and on a new method of confining contamination during the cutting process using a sliding evacuation hood attached to the laser cutting head. The results showed that laser cutting is superior to other conventional cutting methods from the point of view of safety from radioactivity and work efficiency when cutting contaminated stainless steel liners. (author)

  13. Selection of Near Optimal Laser Cutting Parameters in CO2 Laser Cutting by the Taguchi Method

    Directory of Open Access Journals (Sweden)

    Miloš MADIĆ

    2013-12-01

    Identification of laser cutting conditions that are insensitive to parameter variations and noise is of great importance. This paper demonstrates the application of the Taguchi method for optimization of surface roughness in CO2 laser cutting of stainless steel. The laser cutting experiment was planned and conducted according to the Taguchi experimental design using the L27 orthogonal array. Four laser cutting parameters (laser power, cutting speed, assist gas pressure, and focus position) were considered in the experiment. Using the analysis of means and analysis of variance, the significant laser cutting parameters were identified, and subsequently the optimal combination of laser cutting parameter levels was determined. The results showed that the cutting speed is the most significant parameter affecting the surface roughness, whereas the influence of the assist gas pressure can be neglected. It was observed, however, that interaction effects have a predominant influence over the main effects on the surface roughness.
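
    As a hedged sketch of the Taguchi analysis step described above, the following snippet computes smaller-the-better signal-to-noise ratios and one main effect; the factor levels and roughness values are invented for illustration and are not the paper's L27 data:

```python
# Taguchi "smaller-the-better" analysis sketch for surface roughness:
# per-run S/N ratios, then the mean S/N per factor level (main effect).
import numpy as np

runs = [  # (power_level, speed_level, Ra_micrometres) -- made-up data
    (1, 1, 2.1), (1, 2, 1.6), (1, 3, 1.9),
    (2, 1, 1.4), (2, 2, 1.1), (2, 3, 1.3),
    (3, 1, 1.8), (3, 2, 1.5), (3, 3, 2.0),
]

def sn_smaller_is_better(y):
    """Per-run S/N ratio for smaller-the-better responses (dB)."""
    return -10.0 * np.log10(np.mean(np.square(y)))

# Main effect of "speed": mean per-run S/N at each speed level.
for level in (1, 2, 3):
    sns = [sn_smaller_is_better([ra]) for _, s, ra in runs if s == level]
    print(f"speed level {level}: mean S/N = {np.mean(sns):.2f} dB")
# The level with the highest mean S/N is the preferred (most robust) setting.
```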

  14. GPU-based normalized cuts for road extraction using satellite imagery

    Indian Academy of Sciences (India)

    on the framework NVIDIA CUDA. Apart from the ... quality and to generate the elongated road network for further ... the framework of normalized cuts introduced by Shi and Malik ... Youn J and Bethel J S 2004 Adaptive snakes for urban ...

  15. Experimental study for development of thermic lance cutting method

    International Nuclear Information System (INIS)

    Machida, N.; Katano, Y.; Kamiya, Y.

    1988-01-01

    A series of experiments on a thermic lance cutting method were carried out to obtain useful data for the practical application of this method to the dismantling of reinforced concrete. As a first step, a performance experiment was executed to study basic cutting performance relating to oxygen consumption, extent of bar loss and cutting speed, as well as by-products generated during cutting work such as powdered dust, gas, fumes and slag. An automated and remote-controlled cutting machine was then developed utilizing automated bar supply and ignition. This paper describes the result of these experiments. (author)

  16. Cutting

    Science.gov (United States)

    What Is Cutting? Emma's mom first noticed the cuts when Emma ...

  17. Methods for Optimisation of the Laser Cutting Process

    DEFF Research Database (Denmark)

    Dragsted, Birgitte

    This thesis deals with the adaptation and implementation of various optimisation methods, in the field of experimental design, for the laser cutting process. The problem in optimising the laser cutting process has been defined, and a structure for a Decision Support System (DSS) for the optimisation of the laser cutting process has been suggested. The DSS consists of a database with the currently used and old parameter settings. One of the optimisation methods has also been implemented in the DSS in order to facilitate the optimisation procedure for the laser operator. The Simplex Method has been adapted in two versions: a qualitative one, which optimises the process by comparing the laser-cut items, and a quantitative one, which uses a weighted quality response in order to achieve a satisfactory quality and then maximises the cutting speed, thus increasing the productivity of the process ...

  18. Development of contaminated concrete removing system 'Clean cut method'

    International Nuclear Information System (INIS)

    Kinoshita, Takehiko; Tanaka, Tsutomu; Funakawa, Naoyoshi; Idemura, Hajime; Sakashita, Fumio; Tajitsu, Yoshiteru

    1989-01-01

    When decommissioning nuclear facilities such as nuclear power stations, nuclear fuel facilities and RI handling facilities, or carrying out reconstruction works, any radioactive contamination on the surfaces of concrete structures such as the floors and walls of the buildings must be removed. Since concrete is porous, contamination infiltrates into the interior of the concrete, and wiping of surfaces or chemical decontamination alone cannot remove it; therefore, in most cases, contaminated concrete must be removed. The removal of concrete surfaces has been carried out with chipping hammers, grinders and so on, but this gives rise to many problems. In order to solve these problems, a mechanical cutting method was newly devised, and the clean cut method (CCRS) was completed. The depth of cutting from the concrete surface is set beforehand, the part to be removed is accurately cut and, at the same time, the concrete powder generated is collected almost completely and recovered into a drum. The outline of the method and the constitution of the system, the features of the clean cut method, the development of the technology for cutting concrete and the technology for recovering concrete powder, and the decontamination verification test are reported. (K.I.)

  19. Twice cutting method reduces tibial cutting error in unicompartmental knee arthroplasty.

    Science.gov (United States)

    Inui, Hiroshi; Taketomi, Shuji; Yamagami, Ryota; Sanada, Takaki; Tanaka, Sakae

    2016-01-01

    Bone cutting error can be one of the causes of malalignment in unicompartmental knee arthroplasty (UKA). The amount of cutting error in total knee arthroplasty has been reported. However, none have investigated cutting error in UKA. The purpose of this study was to reveal the amount of cutting error in UKA when an open cutting guide was used, and to clarify whether cutting the tibia horizontally twice using the same cutting guide reduced the cutting errors in UKA. We measured the alignment of the tibial cutting guides, the first-cut cutting surfaces and the second-cut cutting surfaces using the navigation system in 50 UKAs. Cutting error was defined as the angular difference between the cutting guide and the cutting surface. The mean absolute first-cut cutting error was 1.9° (1.1° varus) in the coronal plane and 1.1° (0.6° anterior slope) in the sagittal plane, whereas the mean absolute second-cut cutting error was 1.1° (0.6° varus) in the coronal plane and 1.1° (0.4° anterior slope) in the sagittal plane. Cutting the tibia horizontally twice reduced the cutting errors in the coronal plane significantly. In conclusion, cutting the tibia horizontally twice using the same cutting guide reduced cutting error in the coronal plane. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Cost Cutting in Hospitals. Innovative Methods & Techniques

    OpenAIRE

    Pandit, Abhijit

    2016-01-01

    There is an emerging need for hospitals to address efficiency issues confronting health care reforms in the present environment. The main objective is to investigate the cost efficiency of hospitals using various methods and variables, and to compare the results estimated by the different methods and variables. Reinforcing a common agenda between medical, paramedical and administrative staff, and sharing a common vision among professionals and decision makers in the planning of care, may be the grea...

  1. Normal Limits of Electrocardiogram and Cut-Off Values for Left ...

    African Journals Online (AJOL)


    The cut-off values for Sokolow-Lyon, Cornell and Araoye criteria for assessment of left ventricular hypertrophy (LVH) were higher than those previously in ... MATERIALS AND METHODS. This was a cross-sectional descriptive ... criteria, Araoye code system and Ogunlade criterion were derived from the addition of two or ...

  2. Vision-based method for tracking meat cuts in slaughterhouses

    DEFF Research Database (Denmark)

    Larsen, Anders Boesen Lindbo; Hviid, Marchen Sonja; Engbo Jørgensen, Mikkel

    2014-01-01

    Meat traceability is important for linking process and quality parameters from the individual meat cuts back to the production data from the farmer that produced the animal. Current tracking systems rely on physical tagging, which is too intrusive for individual meat cuts in a slaughterhouse environment ... (hanging, rough treatment and incorrect trimming) and our method is able to handle these perturbations gracefully. This study shows that the suggested vision-based approach to tracking is a promising alternative to the more intrusive methods currently available.

  3. Method and apparatus for jet-assisted drilling or cutting

    Science.gov (United States)

    Summers, David Archibold; Woelk, Klaus Hubert; Oglesby, Kenneth Doyle; Galecki, Grzegorz

    2012-09-04

    An abrasive cutting or drilling system, apparatus and method, which includes an upstream supercritical fluid and/or liquid carrier fluid, abrasive particles, a nozzle and a gaseous or low-density supercritical fluid exhaust abrasive stream. The nozzle includes a throat section and, optionally, a converging inlet section, a divergent discharge section, and a feed section.

  4. Cutting method and cutting device for spent fuel rod of nuclear reactor

    International Nuclear Information System (INIS)

    Komatsu, Masahiko; Ose, Toshihiko.

    1996-01-01

    A control rod transferred underwater in a vertically suspended state is postured horizontally at a water depth at which radiation can be shielded, and then it is cut into a dropping-speed-limiting portion and a cross-like main body. The separated cross-like main body portion is further cut in the longitudinal direction and separated into a pair of cut pieces, each having an L-shaped cross section. A disk-like metal saw is used as the cutting tool. Alternatively, a plasma jet cutter or a melting-type water jet cutter is used. Since the spent control rod to be cut is postured horizontally underwater, the water depth of the cutting position can be reduced. As a result, the cutting with the cutting tool can be observed by the naked eye from above the water surface, enabling the cutting operation to be performed reliably. (N.H.)

  5. Distribution network planning method considering distributed generation for peak cutting

    International Nuclear Information System (INIS)

    Ouyang Wu; Cheng Haozhong; Zhang Xiubin; Yao Liangzhong

    2010-01-01

    Conventional distribution planning methods based on peak load bring about large investment, high risk and low utilization efficiency. A distribution network planning method considering distributed generation (DG) for peak cutting is proposed in this paper. The new integrated distribution network planning method with DG implementation aims to minimize the sum of feeder investments, DG investments, energy loss cost and the additional cost of DG for peak cutting. Using a solution technique combining a genetic algorithm (GA) with a heuristic approach, the proposed model determines the optimal planning scheme, including the feeder network and the siting and sizing of DG. The strategy for the siting and sizing of DG, which is based on the radial structure of the distribution network, reduces the complexity of solving the optimization model and eases the computational burden substantially. Furthermore, the operation schedule of DG at different load levels is also provided.

  6. Monitoring Method of Cutting Force by Using Additional Spindle Sensors

    Science.gov (United States)

    Sarhan, Ahmed Aly Diaa; Matsubara, Atsushi; Sugihara, Motoyuki; Saraie, Hidenori; Ibaraki, Soichi; Kakino, Yoshiaki

    This paper describes a monitoring method for cutting forces in the end milling process using displacement sensors. Four eddy-current displacement sensors are installed on the spindle housing of a machining center so that they can detect the radial motion of the rotating spindle. Thermocouples are also attached to the spindle structure in order to examine the thermal effect on the displacement sensing. The change in the spindle stiffness due to the spindle temperature and speed is investigated as well. Finally, the estimation performance for cutting forces using the spindle displacement sensors is experimentally investigated by machining tests on carbon steel in end milling operations under different cutting conditions. It is found that the monitoring errors are attributable to the thermal displacement of the spindle, the time lag of the sensing system, and the modeling error of the spindle stiffness. It is also shown that the root mean square errors between estimated and measured amplitudes of cutting forces are reduced to less than 20 N with proper selection of the linear stiffness.
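
    A minimal sketch of the displacement-based force estimation described above, assuming the linear-stiffness model F = k·x with a temperature-adjusted stiffness; the stiffness and drift values are illustrative placeholders, not the authors' calibration:

```python
# Estimate the radial cutting force from a spindle displacement reading,
# assuming a linear, temperature-corrected spindle stiffness.
def estimate_force(x_radial_m, spindle_temp_C, k0=2.0e8, alpha=-2.0e5):
    """Estimated radial cutting force [N] from measured displacement [m].
    k0: nominal spindle stiffness [N/m]; alpha: assumed thermal drift [N/m/degC]."""
    k = k0 + alpha * (spindle_temp_C - 20.0)  # temperature-adjusted stiffness
    return k * x_radial_m

print(estimate_force(1.0e-6, 25.0))  # ~199 N for 1 micron displacement at 25 degC
```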

  7. Cutting method for structural component into block like shape, and device used for cutting

    International Nuclear Information System (INIS)

    Nakazawa, Koichi; Ito, Akira; Tateiwa, Masaaki.

    1995-01-01

    Two grooves, each of a predetermined depth, are formed along a surface of a structural component, and the portion between the two grooves is cut in the direction of the depth from the surface of the structural component by using the cutting wire of a wire saw device. The cutting wire is then moved in the extending direction of the grooves, while optionally changing its position in the direction of the depth, to cut the back face. Further, the cutting wire is moved in the direction of the depth of the groove toward the surface, to cut the portion between the two grooves. The wire saw device comprises a wire saw main body movable along the surface of the structural component, a pair of wire guide portions extending in the direction of the depth, guide pulleys revolvably and rotatably disposed at the top ends for guiding the cutting wire, and an endless annular cutting wire extending between the wire guide portions. Thus, it is possible to continuously cut out blocks of any chosen size and thickness. In addition, remote cutting is possible, with no requirement for an operator to access the vicinity of radioactivated portions. (N.H.)

  8. Optimisation Of Cutting Parameters Of Composite Material Laser Cutting Process By Taguchi Method

    Science.gov (United States)

    Lokesh, S.; Niresh, J.; Neelakrishnan, S.; Rahul, S. P. Deepak

    2018-03-01

    The aim of this work is to develop a laser cutting process model that can predict the relationship between the process input parameters and the resultant surface roughness and kerf width characteristics. The research conducted is based on Design of Experiments (DOE) analysis. Response Surface Methodology (RSM) is used in this work; it is one of the most practical and most effective techniques for developing a process model. Even though RSM has been used for the optimization of the laser process before, this research investigates laser cutting of materials such as composite wood (veneer) to find the best laser cutting circumstances using the RSM process. The input parameters evaluated are focal length, power supply and cutting speed, the output responses being kerf width, surface roughness and temperature. To efficiently optimize and customize the kerf width and surface roughness characteristics, a machine laser cutting process model using the Taguchi L9 orthogonal methodology was proposed.

  9. A Proposed Arabic Handwritten Text Normalization Method

    Directory of Open Access Journals (Sweden)

    Tarik Abu-Ain

    2014-11-01

    Text normalization is an important technique in document image analysis and recognition. It consists of many preprocessing stages, which include slope correction, text padding, skew correction, and straightening of the writing line. In this regard, text normalization plays an important role in many procedures such as text segmentation, feature extraction and character recognition. In the present article, a new method for text baseline detection, straightening, and slant correction for Arabic handwritten texts is proposed. The method comprises a set of sequential steps: first, component segmentation is done, followed by component thinning; then, the direction features of the skeletons are extracted, and the candidate baseline regions are determined. After that, selection of the correct baseline region is done, and finally, the baselines of all components are aligned with the writing line. The experiments are conducted on the IFN/ENIT benchmark Arabic dataset. The results show that the proposed method has a promising and encouraging performance.

  10. Development of connecting method for mechanically cut reinforced concrete blocks

    International Nuclear Information System (INIS)

    Nishiuchi, Tatsuo

    2005-01-01

    The purpose of the study is to develop a practical method of disposal and recycling for dismantled reinforced concrete structures. We have devised a new method in which mechanically cut reinforced concrete blocks are connected and reused as a structural beam. In this method, concrete blocks are connected with several steel bars and the connected surface is wrapped with a fiber sheet. We verified that the load capacity of the renewed beams was considerably large, comparable to that of continuous structural beams, on the basis of experimental as well as numerical analysis results. As far as the construction cost of reinforced concrete walls is concerned, we demonstrated that the cost of this method is slightly lower than that of a plan using new and recycled materials. (author)

  11. ANN modeling of kerf taper in CO2 laser cutting and optimization of cutting parameters using Monte Carlo method

    Directory of Open Access Journals (Sweden)

    Miloš Madić

    2015-01-01

    In this paper, an attempt has been made to develop a mathematical model in order to study the relationship between laser cutting parameters such as laser power, cutting speed, assist gas pressure and focus position, and the kerf taper angle obtained in CO2 laser cutting of AISI 304 stainless steel. To this aim, a single hidden layer artificial neural network (ANN) trained with the gradient descent with momentum algorithm was used. To obtain an experimental database for the ANN training, the laser cutting experiment was planned as per Taguchi's L27 orthogonal array with three levels for each of the cutting parameters. Statistically assessed as adequate, the ANN model was then used to investigate the effect of the laser cutting parameters on the kerf taper angle by generating 2D and 3D plots. It was observed that the kerf taper angle was highly sensitive to the selected laser cutting parameters, as well as their interactions. In addition to modeling, by applying the Monte Carlo method to the developed kerf taper angle ANN model, the near-optimal laser cutting parameter settings, which minimize kerf taper angle, were determined.
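
    The Monte Carlo step described above can be sketched as random sampling over a trained surrogate; here a stand-in quadratic function replaces the paper's ANN, and the parameter ranges and units are assumptions:

```python
# Monte Carlo search for near-optimal laser cutting parameters over a
# surrogate model (a placeholder quadratic stands in for the trained ANN).
import numpy as np

rng = np.random.default_rng(0)

def kerf_taper_surrogate(p):                      # placeholder for the ANN
    power, speed, pressure, focus = p
    return (0.002 * (power - 1.6)**2 + 0.004 * (speed - 2.5)**2
            + 0.003 * (pressure - 9)**2 + 0.05 * (focus + 1.5)**2 + 0.3)

lo = np.array([0.8, 1.0, 6.0, -2.5])              # kW, m/min, bar, mm (assumed)
hi = np.array([2.0, 4.0, 12.0, 0.0])

samples = rng.uniform(lo, hi, size=(100_000, 4))  # Monte Carlo draws
taper = np.apply_along_axis(kerf_taper_surrogate, 1, samples)
best = samples[np.argmin(taper)]
print("near-optimal settings:", best, "min taper:", taper.min())
```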

  12. Cutting Method of the CAD model of the Nuclear facility for Dismantling Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ikjune; Choi, ByungSeon; Hyun, Dongjun; Jeong, KwanSeong; Kim, GeunHo; Lee, Jonghwan [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-05-15

    Current methods for process simulation cannot simulate the cutting operation flexibly. As-is, to simulate a cutting operation, the user needs to prepare the resulting models of the cutting operation in advance, based on a pre-defined cutting path, depth and thickness with respect to a dismantling scenario, and those preparations must be built again whenever the scenario changes. To-be, the user can change parameters and scenarios dynamically within the simulation configuration process, saving time and effort in simulating cutting operations. This study presents a methodology for the cutting operation which can be applied to the whole procedure in the simulation of dismantling of nuclear facilities. We developed a cutting simulation module for the cutting operation in the dismantling of nuclear facilities based on the proposed cutting methodology. We defined the requirements of the model cutting methodology based on the requirements of the dismantling of nuclear facilities, and we implemented the cutting simulation module based on the API of a commercial CAD system.

  13. Sample normalization methods in quantitative metabolomics.

    Science.gov (United States)

    Wu, Yiman; Li, Liang

    2016-01-22

    To reveal metabolomic changes caused by a biological event in quantitative metabolomics, it is critical to use an analytical tool that can perform accurate and precise quantification to examine the true concentration differences of individual metabolites found in different samples. A number of steps are involved in metabolomic analysis, including pre-analytical work (e.g., sample collection and storage), analytical work (e.g., sample analysis) and data analysis (e.g., feature extraction and quantification). Each of them can influence the quantitative results significantly and thus should be performed with great care. Among them, the total sample amount or concentration of metabolites can be significantly different from one sample to another. Thus, it is critical to reduce or eliminate the effect of total sample amount variation on quantification of individual metabolites. In this review, we describe the importance of sample normalization in the analytical workflow with a focus on mass spectrometry (MS)-based platforms, discuss a number of methods recently reported in the literature and comment on their applicability in real world metabolomics applications. Sample normalization has sometimes been ignored in metabolomics, partially due to the lack of a convenient means of performing it. We show that several methods are now available and that sample normalization should be performed in quantitative metabolomics where the analyzed samples have significant variations in total sample amounts. Copyright © 2015 Elsevier B.V. All rights reserved.
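
    Two sample-normalization schemes commonly discussed in this literature, sketched on a toy intensity matrix; the data and the PQN-style reference choice are illustrative assumptions:

```python
# Total-sum and median fold-change (PQN-style) sample normalization on a
# toy matrix (rows = samples, columns = metabolite features).
import numpy as np

X = np.array([[100., 50., 25.],     # sample 1
              [220., 95., 60.]])    # sample 2 (roughly 2x total amount)

# Total-sum normalization: scale each sample so its intensities sum to 1.
X_sum = X / X.sum(axis=1, keepdims=True)

# PQN-style normalization: divide by the median fold change vs. a reference.
ref = X[0]
factors = np.median(X / ref, axis=1)        # per-sample dilution estimate
X_pqn = X / factors[:, None]

print(X_sum)
print(X_pqn)
```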

  14. Inner tubes cutting method by electrical arc saw

    International Nuclear Information System (INIS)

    Thome, P.

    1990-01-01

    The research program deals with the definition of the tools used for dismantling the steam generator tube bundles of PWRs and of the tools used for cutting pipes of large diameter, using the electric arc saw cutting process. The remote tools are used for cutting the pipes of contaminated circuits from the interior. [fr]

  15. Underwater transporting method and device for incore structure cutting piece

    International Nuclear Information System (INIS)

    Kurosawa, Koichi; Chiba, Noboru; Chiba, Isao; Takada, Hiroshi; Furukawa, Hideyasu; Chiba, Noboru.

    1996-01-01

    Cut pieces are handled using a pick-up device connected with wire ropes, a take-up drum, chains and a winch as the cut-piece handling means, and are moved freely on the water surface by the propulsion machine of the transporting means to transfer them underwater to a predetermined position. The pick-up device is lifted by taking up the rope through rotation of the take-up drum, driven by way of the winch and the chains. The cut pieces are stored in a cask by lowering them into the cask and releasing the grip. In addition, if the weight of the cut pieces is determined before cutting and the corresponding load is applied to the device beforehand, the balance between the device and the cut pieces can be kept, and the cut pieces can always be transported underwater stably. Further, if the cut pieces are supported during the cutting operation, they remain stable, and the cutting operation can be performed efficiently. (N.H.)

  16. Calcium Isotope Analysis with "Peak Cut" Method on Column Chemistry

    Science.gov (United States)

    Zhu, H.; Zhang, Z.; Liu, F.; Li, X.

    2017-12-01

    To eliminate isobaric interferences from elemental and molecular isobars (e.g., 40K+, 48Ti+, 88Sr2+, 24Mg16O+, 27Al16O+) on Ca isotopes during mass determination, samples should be purified through ion-exchange column chemistry before analysis. However, large Ca isotopic fractionation has been observed during column chemistry (Russell and Papanastassiou, 1978; Zhu et al., 2016). Therefore, full recovery during column chemistry is greatly needed; otherwise, uncertainties would be caused by poor recovery (Zhu et al., 2016). Generally, matrix effects can be enhanced by full recovery, as other elements might overlap with the Ca cut during column chemistry. Matrix effects and full recovery are difficult to balance, and both need to be considered for high-precision analysis of stable Ca isotopes. Here, we investigate the influence of poor recovery on δ44/40Ca using TIMS with the double spike technique. The δ44/40Ca values of IAPSO seawater, ML3B-G and BHVO-2 in different Ca subcuts (e.g., 0-20, 20-40, 40-60, 60-80, 80-100%) with 20% Ca recovery on column chemistry display limited variation after correction by the 42Ca-43Ca double spike technique with the exponential law. Notably, δ44/40Ca of each Ca subcut is quite consistent with δ44/40Ca of the Ca cut with full recovery within error. Our results indicate that the 42Ca-43Ca double spike technique can simultaneously and properly correct both the Ca isotopic fractionation that occurs during column chemistry and that during thermal ionization mass spectrometry (TIMS) determination, because both follow the exponential law well. Therefore, we propose the "peak cut" method for Ca column chemistry for samples with complex matrix effects. Briefly, for samples with low Ca contents, we can add the double spike before column chemistry, and collect only the middle of the Ca eluate, discarding both sides of the Ca eluate that might overlap with other elements (e.g., K, Sr). This method would ...

  17. Fourier phase analysis on equilibrium gated radionuclide ventriculography: range of phase spread and cut-off limits in normal individuals

    International Nuclear Information System (INIS)

    Ramaiah, Vijayaraghavan L.; Harish, B.; Sunil, H.V.; Selvakumar, Job; Ravi Kishore, A.G.; Nair, Gopinathan

    2011-01-01

    To define the range of phase spread on equilibrium gated radionuclide ventriculography (ERNV) in normal individuals and derive cut-off limits for the parameters used to detect cardiac dyssynchrony. ERNV was carried out in 30 individuals (age 53±23 years, 25 males and 5 females) who had no history of cardiovascular disease. They all had normal left ventricular ejection fraction (LVEF 55-70%) as determined by echocardiography, were in sinus rhythm, with normal QRS duration (≤120 msec) and normal coronary angiography. First-harmonic phase analysis was performed on scintigraphic data acquired in the best septal view. Left and right ventricular standard deviation (LVSD and RVSD, respectively) and interventricular mechanical delay (IVMD), the absolute difference of the mean phase angles of the right and left ventricle, were computed and expressed in milliseconds. Mean + 3 standard deviations (SD) was used to derive the cut-off limits. Average LVEF and duration of the cardiac cycle in the study group were 62.5%±5.44% and 868.9±114.5 msec, respectively. The observations of LVSD, RVSD and right and left ventricular mean phase angles were shown to be normally distributed by the Shapiro-Wilk test. Cut-off limits for LVSD, RVSD and IVMD were calculated to be 80 msec, 85 msec and 75 msec, respectively. Fourier phase analysis on ERNV is an effective tool for the evaluation of the synchronicity of cardiac contraction. The cut-off limits of the parameters of dyssynchrony can be used to separate heart failure patients with cardiac dyssynchrony from those without. ERNV can be used to select patients for cardiac resynchronization therapy. (author)
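
    A small sketch of the first-harmonic Fourier phase computation underlying LVSD, RVSD and IVMD, assuming a gated time-activity curve per region; the synthetic curves and 16-frame gating are illustrative, not the study's data:

```python
# First-harmonic Fourier phase of a gated time-activity curve: the phase of
# the first FFT bin gives contraction timing; the SD of regional phases (ms)
# corresponds to the LVSD/RVSD indices above.
import numpy as np

def first_harmonic_phase_ms(counts, cycle_ms):
    """Phase of the fundamental harmonic mapped to [0, cycle_ms)."""
    c1 = np.fft.fft(np.asarray(counts, float))[1]   # first harmonic
    phase = (-np.angle(c1)) % (2 * np.pi)           # delay convention
    return phase / (2 * np.pi) * cycle_ms

t = np.arange(16)
a = 100 + 20 * np.cos(2 * np.pi * t / 16)           # reference region
b = 100 + 20 * np.cos(2 * np.pi * (t - 1) / 16)     # contracts one frame later
cycle = 869.0                                       # ms, mean cycle length above
pa, pb = (first_harmonic_phase_ms(c, cycle) for c in (a, b))
print(pb - pa)                                      # ~54 ms (one frame delay)
```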

  18. Mechanical fragmentation of nuclear reactor fuel assemblies by the double cutting method

    International Nuclear Information System (INIS)

    Voitsekhovskii, B.V.; Istomin, V.L.; Mitrofanov, V.V.

    1995-01-01

    A method is described for cutting a spent fuel assembly with straight shears into pieces of a prescribed size. The method does not require separation of the casing and the lattices. The double cutting method is briefly described, and experiments designed for cutting BN-350 and VVER-440 fuel assemblies are outlined. The testing showed that the cutting method was suitable for the mechanical fragmentation of fuel assemblies. The investigations led to the development of turnkey industrial equipment for cutting spent fuel assemblies of different geometries with a maximum size of up to 170 mm. 6 refs., 8 figs., 1 tab

  19. Application of Taguchi method for cutting force optimization in rock

    Indian Academy of Sciences (India)

    In this paper, an optimization study was carried out for the cutting force (Fc) acting on circular diamond sawblades in rock sawing. The peripheral speed, traverse speed, cut depth and flow rate of cooling fluid were considered as operating variables and optimized by using the Taguchi approach for the Fc. L16(4⁴) orthogonal ...

  20. The analytic regularization ζ function method and the cut-off method in Casimir effect

    International Nuclear Information System (INIS)

    Svaiter, N.F.; Svaiter, B.F.

    1990-01-01

    The zero-point energy associated with a hermitian massless scalar field in the presence of perfectly reflecting plates in a three-dimensional flat space-time is discussed. A new technique to unify two different methods - the ζ function and a variant of the cut-off method - used to obtain the so-called Casimir energy is presented, and the proof of the analytic equivalence between both methods is given. (author)
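
    For orientation, the textbook one-dimensional analogue of the ζ-function regularization compared in this paper (a massless scalar field on an interval of length a, not the paper's three-dimensional plate geometry) assigns the divergent mode sum a finite value via ζ(-1) = -1/12:

```latex
% One-dimensional analogue: mode frequencies omega_n = n*pi*c/a, so the
% zero-point sum E = (hbar/2) * sum_n omega_n is regularized with zeta(-1).
\begin{aligned}
E(a) &= \frac{\hbar c\,\pi}{2a}\sum_{n=1}^{\infty} n
  \;\longrightarrow\; \frac{\hbar c\,\pi}{2a}\,\zeta(-1)
  = -\frac{\pi\hbar c}{24\,a},\\
F(a) &= -\frac{\partial E}{\partial a}
  = -\frac{\pi\hbar c}{24\,a^{2}}\quad\text{(attractive)}.
\end{aligned}
```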

  1. Laser beam cutting method. Laser ko ni yoru kaitai koho

    Energy Technology Data Exchange (ETDEWEB)

    Kutsumizu, A. (Obayashi Corp., Osaka (Japan))

    1991-07-01

    In this special issue paper concerning the demolition of concrete structures, a method for demolishing concrete structures using lasers was introduced, whose practical application is expected owing to the remarkable progress in the output power and efficiency of laser radiators. The characteristics of the laser beam, which can produce a temperature of one million degrees centigrade at the irradiated spot, the laser radiator consisting of the laser medium, laser resonator and pumping apparatus, and the laser types used for working, such as the CO2 laser, YAG laser and CO laser, were described. The basic constitution of laser cutting equipment, consisting of a high-output radiator, beam transmitter, beam condenser, and working nozzle, was also illustrated. Furthermore, strong and weak points of laser cutting for concrete and reinforcement were enumerated. Applications of lasers to the cutting of reinforced and unreinforced concrete constructions were shown, and the concept of and safety measures for the application of lasers to practical demolition were discussed. 5 refs., 8 figs.

  2. A review of virtual cutting methods and technology in deformable objects.

    Science.gov (United States)

    Wang, Monan; Ma, Yuzheng

    2018-06-05

    Virtual cutting of deformable objects has been a research topic for more than a decade and has been used in many areas, especially in surgery simulation. We refer to the relevant literature and briefly describe the related research. The virtual cutting method is introduced, and we discuss the benefits and limitations of these methods and explore possible research directions. Virtual cutting is a category of object deformation. It needs to represent the deformation of models in real time as accurately, robustly and efficiently as possible. To accurately represent models, the method must be able to: (1) model objects with different material properties; (2) handle collision detection and collision response; and (3) update the geometry and topology of the deformable model that is caused by cutting. Virtual cutting is widely used in surgery simulation, and research of the cutting method is important to the development of surgery simulation. Copyright © 2018 John Wiley & Sons, Ltd.

  3. An Investigation of Undefined Cut Scores with the Hofstee Standard-Setting Method

    Science.gov (United States)

    Wyse, Adam E.; Babcock, Ben

    2017-01-01

    This article provides an overview of the Hofstee standard-setting method and illustrates several situations where the Hofstee method will produce undefined cut scores. The situations where the cut scores will be undefined involve cases where the line segment derived from the Hofstee ratings does not intersect the score distribution curve based on…

  4. Remotely controlled cutting techniques in the field of nuclear decommissioning. Overview of effectively applied thermal cutting methods

    International Nuclear Information System (INIS)

    Bienia, H.; Klotz, B.

    2008-01-01

    This article describes three thermal cutting technologies that are effectively used in nuclear decommissioning projects: autonomous flame cutting, plasma arc cutting and contact arc metal cutting. The autonomous flame cutting technology is based on a high-pressure oxygen jet oxidizing the material in a small kerf. Not all metal types are suitable for this technology. Plasma arc cutting, in contrast to the previous technology, uses an electrically induced plasma arc to melt a kerf in the material. Inside the plasma arc, temperatures of up to 30,000 K exist, so in theory this temperature is sufficient to cut all materials. Contact arc metal cutting is a new thermal cutting technology for underwater cutting work. Here, a carbon blade cuts the components; an electric arc between the cutting blade and the component melts a kerf into the material, easing the cutting. This technology allows the cutting of complex structures with hollows. The applications of these three cutting technologies in nuclear facility dismantling and their limits are reported, and their requirements (staff, investment) are listed in a table. (A.C.)

  5. Logging costs and cutting methods in young-growth ponderosa pine in California

    Science.gov (United States)

    Philip M. McDonald; William A. Atkinson; Dale O. Hall

    1969-01-01

    Mixed-conifer stands at the Challenge Experimental Forest, Calif., were cut to four specifications: seed-tree, group selection, single tree selection, and clearcut. Logging costs and production rates were compared and evaluated. Cutting method had little effect on felling or skidding production; felling ranged from 1,802 to 2,019 bd ft per hour, and skidding from 3,138...

  6. Systems and Methods for Determining Water-Cut of a Fluid Mixture

    KAUST Repository

    Karimi, Muhammad Akram; Shamim, Atif; Arsalan, Muhammad

    2017-01-01

    Provided in some embodiments are systems and methods for measuring the water content (or water-cut) of a fluid mixture. Provided in some embodiments is a water-cut sensor system that includes a helical T-resonator, a helical ground conductor, and a

  7. Systems and Methods for Determining Water-Cut of a Fluid Mixture

    KAUST Repository

    Karimi, Muhammad Akram; Shamim, Atif; Arsalan, Muhammad

    2017-01-01

    Provided in some embodiments are systems and methods for measuring the water content (or water-cut) of a fluid mixture. Provided in some embodiments is a water-cut sensor system that includes a T-resonator, a ground conductor, and a separator. The T

  8. Integration of Small-Diameter Wood Harvesting in Early Thinnings Using the Two-Pile Cutting Method

    Energy Technology Data Exchange (ETDEWEB)

    Kaerhae, Kalle (Metsaeteho Oy, P.O. Box 101, FI-00171 Helsinki (Finland))

    2008-10-15

    Metsaeteho Oy studied the integrated harvesting of industrial roundwood (pulpwood) and energy wood based on a two-pile cutting method, i.e. pulpwood and energy wood fractions are stacked into two separate piles when cutting a first-thinning stand. The productivity and cost levels of the integrated, two-pile cutting method were determined, and the harvesting costs of the two-pile method were compared with those of conventional separate wood harvesting methods. In the time study, when the size of removal was 50 dm3, the productivity in conventional whole-tree cutting was 6% higher than in integrated cutting. With a stem size of 100 dm3, the productivity in whole-tree cutting was 7% higher than in integrated cutting. The results indicated, however, that integrated harvesting based on the two-pile method enables harvesting costs to be decreased below the current cost level of separate pulpwood harvesting in first-thinning stands. The greatest cost-saving potential lies in small-sized first thinnings. The results showed that, when integrated wood harvesting based on the two-pile method is applied, the removals of both energy wood and pulpwood should be more than 15-20 m3/ha at the harvesting sites in order to achieve economically viable integrated procurement.

  9. When I cut, you choose method implies intransitivity

    Science.gov (United States)

    Makowski, Marcin; Piotrowski, Edward W.

    2014-12-01

    There is a common belief that humans and many animals follow transitive inference (choosing A over C on the basis of knowing that A is better than B and B is better than C). Transitivity seems to be the essence of rational choice. We present a theoretical model of a repeated game in which the players make a choice between three goods (e.g. food). The rules of the game refer to the simple procedure of fair division among two players, known as the “I cut, you choose” mechanism which has been widely discussed in the literature. In this game one of the players has to make intransitive choices in order to achieve the optimal result (for him/her and his/her co-player). The point is that an intransitive choice can be rational. Previously, an increase in the significance of intransitive strategies was achieved by referring to models of quantum games. We show that relevant intransitive strategies also appear in the classic description of decision algorithms.
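
    A toy numeric sketch of the "I cut, you choose" procedure referred to above, assuming a divisible cake on [0, 1] and made-up taste densities for the two players:

```python
# "I cut, you choose" on a cake [0, 1]: the cutter cuts where she values both
# pieces equally; the chooser takes the piece worth more to him. Densities and
# discretisation are illustrative assumptions.
import numpy as np

grid = np.linspace(0, 1, 1001)
cutter_density = 1 + grid            # cutter likes the right side more
chooser_density = 2 - grid           # chooser likes the left side more

def cumulative(density):
    c = np.cumsum(density)
    return c / c[-1]                 # normalised cumulative value

# Cutter cuts at her median point (equal value on both sides in her eyes).
cut_ix = np.searchsorted(cumulative(cutter_density), 0.5)
ch = cumulative(chooser_density)
left_value = ch[cut_ix]              # chooser's value of the left piece
chooser_takes_left = left_value >= 1 - left_value
print(f"cut at x={grid[cut_ix]:.3f}; chooser takes the "
      f"{'left' if chooser_takes_left else 'right'} piece, worth "
      f"{max(left_value, 1 - left_value):.3f} of his total value")
```

    Note that both players end up with at least half of the cake by their own valuation, which is the fairness guarantee the mechanism is known for.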

  10. EVALUATION OF THE CARBON FOOTPRINT OF INNOVATIVE WATER MAIN REHABILITATION TECHNOLOGIES VS. OPEN CUT METHODS

    Science.gov (United States)

    A major benefit of trenchless rehabilitation technologies touted by many practitioners when comparing their products with traditional open-cut construction methods is lower carbon dioxide (CO2) emissions. In an attempt to verify these claims, multiple tools have been dev...

  11. A combination method of the theory and experiment in determination of cutting force coefficients in ball-end mill processes

    Directory of Open Access Journals (Sweden)

    Yung-Chou Kao

    2015-10-01

    In this paper, the cutting force calculation for ball-end mill processing was modeled mathematically. All derivations of cutting forces were directly based on the tangential, radial, and axial cutting force components. In the developed mathematical model of cutting forces, the relationship between average cutting force and feed per flute was characterized as a linear function. The cutting force coefficient model was formulated as a function of average cutting force and other parameters such as cutter geometry, cutting conditions, and so on. An experimental method was proposed, based on stable milling conditions, to estimate the cutting force coefficients for ball-end mills. This method can be applied for each pair of tool and workpiece. The developed cutting force model has been verified experimentally, with very promising results.
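
    The linear calibration described above can be sketched in a few lines: with the average force per tooth modeled as F_avg = Kc·f + Ke, a handful of stable cuts at different feeds lets ordinary least squares recover the two coefficients; the measured values below are invented for illustration:

```python
# Recover cutting-force coefficients from average force vs. feed-per-flute,
# assuming the linear model F_avg = Kc * f + Ke described above.
import numpy as np

feed_per_flute = np.array([0.05, 0.10, 0.15, 0.20])   # mm/tooth (assumed)
avg_force = np.array([38.0, 61.0, 83.0, 107.0])       # N, hypothetical readings

Kc, Ke = np.polyfit(feed_per_flute, avg_force, 1)     # slope, intercept
print(f"cutting coefficient Kc = {Kc:.1f} N/mm, edge coefficient Ke = {Ke:.1f} N")
```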

  12. Distinguishing butchery cut marks from crocodile bite marks through machine learning methods.

    Science.gov (United States)

    Domínguez-Rodrigo, Manuel; Baquedano, Enrique

    2018-04-10

    All models of evolution of human behaviour depend on the correct identification and interpretation of bone surface modifications (BSM) on archaeofaunal assemblages. Crucial evolutionary features, such as the origin of stone tool use, meat-eating, food-sharing, cooperation and sociality can only be addressed through confident identification and interpretation of BSM, and more specifically, cut marks. Recently, it has been argued that linear marks with the same properties as cut marks can be created by crocodiles, thereby questioning whether secure cut mark identifications can be made in the Early Pleistocene fossil record. Powerful classification methods based on multivariate statistics and machine learning (ML) algorithms have previously successfully discriminated cut marks from most other potentially confounding BSM. However, crocodile-made marks were marginal to or played no role in these comparative analyses. Here, for the first time, we apply state-of-the-art ML methods on crocodile linear BSM and experimental butchery cut marks, showing that the combination of multivariate taphonomy and ML methods provides accurate identification of BSM, including cut and crocodile bite marks. This enables empirically-supported hominin behavioural modelling, provided that these methods are applied to fossil assemblages.
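
    A sketch of the kind of multivariate BSM classifier the abstract describes, assuming marks are already scored on morphological features; the feature names, synthetic data and random-forest choice are placeholders, not the authors' dataset or exact algorithm:

```python
# Toy two-class BSM classifier: butchery cut marks vs. crocodile bite marks,
# with synthetic morphological feature scores standing in for real data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 200
# Assumed features: groove depth, shoulder symmetry, microstriation score.
cut_marks = rng.normal([0.3, 0.8, 0.9], 0.15, size=(n, 3))
croc_marks = rng.normal([0.6, 0.4, 0.2], 0.15, size=(n, 3))
X = np.vstack([cut_marks, croc_marks])
y = np.array([0] * n + [1] * n)    # 0 = butchery cut, 1 = crocodile bite

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # ~1.0 on this separable toy
```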

  13. The effect of silage cutting height on the nutritive value of a normal corn silage hybrid compared with brown midrib corn silage fed to lactating cows.

    Science.gov (United States)

    Kung, L; Moulder, B M; Mulrooney, C M; Teller, R S; Schmidt, R J

    2008-04-01

    A brown midrib (BMR) hybrid and a silage-specific non-BMR (7511FQ) hybrid were harvested at a normal cut height leaving 10 to 15 cm of stalk in the field. The non-BMR hybrid was also cut at a greater height leaving 45 to 50 cm of stalk. Cutting high increased the concentrations of dry matter (+4%), crude protein (+5%), net energy for lactation (+3%), and starch (+7%), but decreased the concentrations of acid detergent fiber (-9%), neutral detergent fiber (-8%), and acid detergent lignin (-13%) for 7511FQ. As expected, the BMR corn silage was 30% lower in lignin concentration than 7511FQ. After 30 h of in vitro ruminal fermentation, the digestibility of neutral detergent fiber for normal cut 7511FQ, the same hybrid cut high, and the normal cut BMR hybrid were 51.7, 51.4, and 63.5%, respectively. Twenty-seven multiparous lactating cows were fed a total mixed ration composed of the respective silages (45% of dry matter) with alfalfa haylage (5%), alfalfa hay (5%), and concentrate (45%) (to make the TMR isocaloric and isonitrogenous) in a study with a 3 x 3 Latin square design with 21-d periods. Milk production was greater for cows fed the BMR hybrid (48.8 kg/d) compared with those fed the normal cut 7511FQ (46.8 kg/d) or cut high (47.7 kg/d). Dry matter intake was not affected by treatment. Feed efficiency for cows fed the BMR silage (1.83) was greater than for those fed high-cut 7511FQ (1.75), but was not different from cows fed the normal cut 7511FQ (1.77). Cows fed the BMR silage had milk with greater concentrations of lactose but lower milk urea nitrogen than cows on other treatments. Harvesting a silage-specific, non-BMR corn hybrid at a high harvest height improved its nutritive content, but the improvement in feeding value was not equivalent to that found when cows were fed BMR corn silage.

  14. Application of Taguchi method for cutting force optimization in rock ...

    Indian Academy of Sciences (India)

    Mechanical properties, uniaxial compressive strength. Sawing characteristics ... texture, coarse-grained, grains between 0.08 mm and 4.80 mm, and the coarsest ... piezoelectric ceramic of Bi0.5Na0.5TiO3 using the Taguchi method. Powder ...

  15. Remotely controlled cutting techniques in the field of nuclear decommissioning. Overview of effectively applied thermal and non thermal cutting methods

    International Nuclear Information System (INIS)

    Bienia, H.

    2008-01-01

    Remote disassembly of radiologically burdened large components is among the most sophisticated and complex activities in the dismantling of nuclear installations. The space required for the technical equipment during dismantling operations, especially for the removal of larger components, is often an additional problem. Conventional cutting technologies such as sawing with a disk saw or band saw require large and heavy frameworks as well as guiding systems of high rigidity. These solutions are expensive and sometimes not applicable. The essential question for all cutting and dismantling tasks concerns the physical constitution of the component to be dismantled, that is, its size, material and structure. All these points are primarily technological questions. The final question concerns the estimated costs of the dismantling technology used. The following questions must therefore be answered: How large are the investments for the cutting equipment itself, and how large are the investments for the supporting equipment (e.g. necessary handling equipment)? Can the cutting equipment be used only for one special task, or is it applicable to many tasks, thereby saving money because other cutting or dismantling technologies become dispensable? How long is the cutting time, and what personnel are required to control the technique? Four different cutting and dismantling technologies are introduced and described. These four technologies differ in their principle of operation, but all of them are used in cutting and dismantling tasks in nuclear power plants. (author)

  16. Automatic registration method for multisensor datasets adopted for dimensional measurements on cutting tools

    International Nuclear Information System (INIS)

    Shaw, L; Mehari, F; Weckenmann, A; Ettl, S; Häusler, G

    2013-01-01

    Multisensor systems with optical 3D sensors are frequently employed to capture complete surface information by measuring workpieces from different views. During coarse and fine registration the resulting datasets are afterward transformed into one common coordinate system. Automatic fine registration methods are well established in dimensional metrology, whereas there is a deficit in automatic coarse registration methods. The advantage of a fully automatic registration procedure is twofold: it enables a fast and contact-free alignment and further a flexible application to datasets of any kind of optical 3D sensor. In this paper, an algorithm adapted for a robust automatic coarse registration is presented. The method was originally developed for the field of object reconstruction or localization. It is based on a segmentation of planes in the datasets to calculate the transformation parameters. The rotation is defined by the normals of three corresponding segmented planes of two overlapping datasets, while the translation is calculated via the intersection point of the segmented planes. First results have shown that the translation is strongly shape dependent: 3D data of objects with non-orthogonal planar flanks cannot be registered with the current method. In the novel supplement for the algorithm, the translation is additionally calculated via the distance between centroids of corresponding segmented planes, which results in more than one option for the transformation. A newly introduced measure considering the distance between the datasets after coarse registration evaluates the best possible transformation. Results of the robust automatic registration method are presented on the example of datasets taken from a cutting tool with a fringe-projection system and a focus-variation system. The successful application in dimensional metrology is proven with evaluations of shape parameters based on the registered datasets of a calibrated workpiece. (paper)
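
    The rotation-from-normals and translation-from-centroids steps described above can be sketched compactly. The following is a minimal illustration (not the authors' implementation), assuming three matched unit plane normals and the centroids of the corresponding segmented planes are already available; it uses the standard Kabsch/SVD solution:

        import numpy as np

        def rotation_from_plane_normals(normals_src, normals_dst):
            """Rotation aligning three matched plane normals (Kabsch/SVD)."""
            A = np.asarray(normals_src, dtype=float)  # 3x3, one unit normal per row
            B = np.asarray(normals_dst, dtype=float)
            U, _, Vt = np.linalg.svd(A.T @ B)
            d = np.sign(np.linalg.det(Vt.T @ U.T))    # enforce a proper rotation
            return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

        def translation_from_centroids(cents_src, cents_dst, R):
            """Translation from centroids of corresponding segmented planes."""
            c_src = np.mean(cents_src, axis=0)
            c_dst = np.mean(cents_dst, axis=0)
            return c_dst - R @ c_src

    Evaluating several candidate transformations and keeping the one that minimizes the residual distance between the datasets mirrors the selection measure mentioned in the abstract.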

  17. Phase- and size-adjusted CT cut-off for differentiating neoplastic lesions from normal colon in contrast-enhanced CT colonography

    International Nuclear Information System (INIS)

    Luboldt, W.; Kroll, M.; Wetter, A.; Vogl, T.J.; Toussaint, T.L.; Hoepffner, N.; Holzer, K.; Kluge, A.

    2004-01-01

    A computed tomography (CT) cut-off for differentiating neoplastic lesions (polyps/carcinoma) from normal colon in contrast-enhanced CT colonography (CTC), relating to the contrast phase and lesion size, is determined. CT values of 64 colonic lesions (27 polyps <10 mm, the remainder polyps ≥10 mm and carcinomas) and of the normal colonic wall were measured, and the cut-off was defined by the line y = mx + y0. The slope m was determined by linear regression in the correlation (lesion ∝ [xA + (1 - x)V]/H) and the Y-intercept y0 by the minimal shift of the line needed to maximize the accuracy of separating the colonic wall from the lesions. The CT value of the lesions correlated best with the intermediate phase 0.4A + 0.6V (r=0.8 for polyps ≥10 mm, r=0.6 for carcinomas, r=0.4 for polyps <10 mm). The accuracy in the differentiation between lesions and normal colonic wall increased with the lesion height H implemented as divisor, reached 91% and was obtained by the dynamic cut-off described by the formula: cut-off(A,V,H) = 1.1[0.4A + 0.6V]/H + 69.8. The CT value of colonic polyps or carcinomas can be increased extrinsically by scanning in the phase in which 0.4A + 0.6V reaches its maximum. Differentiating lesions from normal colon based on CT values is possible in contrast-enhanced CTC and improves when the cut-off is adjusted (normalized) to the contrast phase and lesion size. (orig.)
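
    The dynamic cut-off quoted above is straightforward to apply. A small illustration (variable names and example values are assumptions, with A and V the aortic and venous enhancements in HU and H the lesion height in mm):

        def ct_cutoff(A, V, H):
            # cut-off(A,V,H) = 1.1*[0.4*A + 0.6*V]/H + 69.8, from the abstract
            return 1.1 * (0.4 * A + 0.6 * V) / H + 69.8

        # A lesion whose CT value exceeds the cut-off is flagged as neoplastic.
        print(ct_cutoff(A=250.0, V=120.0, H=8.0))  # about 93.5 HU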

  18. Wear and breakage monitoring of cutting tools by an optical method: theory

    Science.gov (United States)

    Li, Jianfeng; Zhang, Yongqing; Chen, Fangrong; Tian, Zhiren; Wang, Yao

    1996-10-01

    An essential part of a machining system in an unmanned flexible manufacturing system is the ability to automatically change out tools that are worn or damaged. An optoelectronic method for in situ monitoring of the flank wear and breakage of cutting tools is presented. A flank wear estimation system is implemented in a laboratory environment, and its performance is evaluated through turning experiments. The flank wear model parameters that need to be known a priori are determined through several preliminary experiments, or from data available in the literature. The resulting cutting conditions are typical of those used in finishing cutting operations. Through time- and amplitude-domain analysis of the cutting tool wear and breakage states, it is found that the variance σ²x of the original signal and the autocorrelation coefficient ρ(m) can reflect the change regularity of cutting tool wear and breakage, but that they are not sufficient on their own owing to the complexity of the wear and breakage process of cutting tools. Time series analysis and frequency spectrum analysis will be carried out, and will be described in later papers.

  19. Determination of laser cutting process conditions using the preference selection index method

    Science.gov (United States)

    Madić, Miloš; Antucheviciene, Jurgita; Radovanović, Miroslav; Petković, Dušan

    2017-03-01

    Determination of adequate parameter settings for simultaneous improvement of multiple quality and productivity characteristics is of great practical importance in laser cutting. This paper discusses the application of the preference selection index (PSI) method for discrete optimization of the CO2 laser cutting of stainless steel. The main motivation for applying the PSI method is that it represents an almost unexplored multi-criteria decision making (MCDM) method; moreover, it does not require assessment of the relative significance of the considered criteria. After reviewing and comparing the existing approaches for determining laser cutting parameter settings, the application of the PSI method is explained in detail. The experiment was realized using Taguchi's L27 orthogonal array. Roughness of the cut surface, heat affected zone (HAZ), kerf width and material removal rate (MRR) were considered as optimization criteria. The proposed methodology is found to be very useful in a real manufacturing environment since it involves simple calculations which are easy to understand and implement. However, while applying the PSI method it was observed that it may not be useful in situations where a large number of alternatives have attribute values (performances) very close to those which are preferred.
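
    For reference, the PSI calculation itself is short. The sketch below follows the method as commonly described in the MCDM literature (normalize, derive criterion weights from preference variation, score the alternatives); the example data are invented:

        import numpy as np

        def psi_scores(X, benefit):
            """Preference selection index for an (alternatives x criteria) matrix.
            benefit[j] is True for larger-the-better criteria, False for cost ones."""
            X = np.asarray(X, dtype=float)
            R = np.where(benefit, X / X.max(axis=0), X.min(axis=0) / X)  # normalize
            phi = ((R - R.mean(axis=0)) ** 2).sum(axis=0)   # preference variation
            psi = (1.0 - phi) / (1.0 - phi).sum()           # criterion weights
            return R @ psi                                  # index per alternative

        # Invented example: 3 cutting regimes, criteria = [roughness (cost), MRR (benefit)]
        scores = psi_scores([[2.1, 40.0], [1.8, 35.0], [2.5, 48.0]],
                            benefit=[False, True])
        print(scores.argmax())  # index of the preferred regime

    The alternative with the largest index is selected, which is what makes the method attractive when criterion weights are hard to justify.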

  20. Empirical evaluation of data normalization methods for molecular classification.

    Science.gov (United States)

    Huang, Huei-Chung; Qin, Li-Xuan

    2018-01-01

    Data artifacts due to variations in experimental handling are ubiquitous in microarray studies, and they can lead to biased and irreproducible findings. A popular approach to correcting for such artifacts is post hoc data adjustment such as data normalization. Statistical methods for data normalization have been developed and evaluated primarily for the discovery of individual molecular biomarkers. Their performance has rarely been studied for the development of multi-marker molecular classifiers, an increasingly important application of microarrays in the era of personalized medicine. In this study, we set out to evaluate the performance of three commonly used methods for data normalization in the context of molecular classification, using extensive simulations based on re-sampling from a unique pair of microRNA microarray datasets for the same set of samples. The data and code for our simulations are freely available as R packages at GitHub. In the presence of confounding handling effects, all three normalization methods tended to improve the accuracy of the classifier when evaluated in independent test data. The level of improvement and the relative performance among the normalization methods depended on the relative level of molecular signal, the distributional pattern of handling effects (e.g., location shift vs scale change), and the statistical method used for building the classifier. In addition, cross-validation was associated with biased estimation of classification accuracy in the over-optimistic direction for all three normalization methods. Normalization may improve the accuracy of molecular classification for data with confounding handling effects; however, it cannot circumvent the over-optimistic findings associated with cross-validation for assessing classification accuracy.
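
    The abstract does not name the three normalization methods evaluated, but quantile normalization is a typical member of this family and illustrates what such a post hoc adjustment does: it forces every sample to share one reference distribution. A minimal sketch:

        import numpy as np

        def quantile_normalize(X):
            """Quantile-normalize a (features x samples) matrix so that every
            sample (column) shares the same empirical distribution."""
            ranks = np.argsort(np.argsort(X, axis=0), axis=0)  # per-column ranks
            reference = np.sort(X, axis=0).mean(axis=1)        # mean order statistics
            return reference[ranks]

    Whether such an adjustment helps or hurts a downstream classifier is exactly the question the simulations above address.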

  1. Investigation on welding and cutting methods for blanket support legs of fusion experimental reactors

    International Nuclear Information System (INIS)

    Tokami, Ikuhide; Nakahira, Masataka; Kurasawa, Toshimasa; Sato, Satoshi; Furuya, Kazuyuki; Hatano, Toshihisa; Takatsu, Hideyuki; Kuroda, Toshimasa.

    1996-07-01

    A toroidally and poloidally divided modular blanket has been proposed for a fusion experimental reactor, such as ITER, to enhance its maintainability as well as improve its fabricability. The blanket module, typically 1 m wide, 1-2 m high and 0.4 m deep and weighing 4 t, will be supported by support legs which protrude from the back of the module and are connected to a 70-100 mm thick strong back plate. The support leg has to withstand large electromagnetic forces during plasma disruption and provide the means for in-situ module replacement by remote handling. For the connection of the support leg to the back plate, a welding approach has been investigated here on account of its high reliability against the large electromagnetic loads. For the welding approach, the support leg needs to be 70 mm thick, and the working space for the welding/cutting heads is limited to 100 mm x 150 mm adjacent to the support leg. Based on a comparison of several welding methods, e.g. NGTIG, NGMIG and laser, NGTIG has been selected as the reference due to its well-established technology and the least R and D required. As for the cutting method, plasma cutting has been given the highest priority because of its compactness and high speed. Through preliminary design studies, the possibility of small welding/cutting heads that will work in the limited space has been shown, and the maintenance route for in-situ module replacement with pre- and post-fixture of the module has been investigated. Preliminary R and D has also shown that: (1) the welding distortion is predictable according to the shape of the weld groove and adjustable to meet the placement requirement of the module first wall; (2) the plasma-cut surface can be rewelded without machining; (3) the welding/cutting time will meet the maintenance time requirement. (author)

  2. Manufacturing Methods for Cutting, Machining and Drilling Composites. Volume 1. Composites Machining Handbook

    Science.gov (United States)

    1978-08-01

    [OCR fragments of a drill-geometry drawing: 12°±30' angle, 118°±2° point angle, optional .0005 in./in. back taper, .015 radius, lips to be within .002 of true angular position, land-width note; boroscope and dye-penetrant requirements. Program phases: Phase I cutting, Phase II drilling, Phase III machining, Phase IV nondestructive evaluation, by method and material.]

  3. Calculation of chiral determinants and multiloop amplitudes by cutting and sewing method

    International Nuclear Information System (INIS)

    Losev, A.

    1989-01-01

    Functional integrals over fermions on open Riemann surfaces are determined up to a multiplicative constant by conservation laws. These constants are found using a cutting and sewing method. Multiloop statistical sums and amplitudes are obtained as products of anomaly-free expressions in the Schottky parametrization and statistical sums on spheres. 5 refs

  4. Fast beam cut-off method in RF-knockout extraction for spot-scanning

    CERN Document Server

    Furukawa, T

    2002-01-01

    An irradiation method with magnetic scanning has been developed in order to provide accurate irradiation even for an irregular target shape. The scanning method strongly requires a low ripple of the beam spill and a fast response to beam-on/off in slow extraction from a synchrotron ring. At HIMAC, RF-knockout extraction has utilized a bunched beam to reduce the beam-spill ripple. Therefore, particles near the resonance can be spilled out from the separatrices by synchrotron oscillation as well as by a transverse RF field. From this point of view, a fast beam cut-off method has been proposed and verified by both simulations and experiments. The maximum delay from the beam cut-off signal to beam-off has been improved to around 60 μs, compared with 700 μs for the usual method. The unwanted dose has been reduced by around a factor of 10 compared with that of the usual method.

  5. The development of underwater remote cutting method for the disassembling of rotary specimen rack KRR-1 and 2

    Energy Technology Data Exchange (ETDEWEB)

    Lee, D. K.; Jung, K. H.; Lee, K. W.; Oh, W. J. [KAERI, Taejon (Korea, Republic of); Lee, K. Y. [Korea Institute of Industrial Technology, Kwangju (Korea, Republic of)

    2004-07-01

    The Rotary Specimen Racks (RSRs) were highly activated and therefore classified as intermediate-level radioactive waste in the decommissioning of KRR-1 and 2. The RSR can be treated as low-level radioactive waste after its stainless steel parts are removed. To reduce the volume of intermediate-level radioactive waste, underwater cutting is required to separate the stainless steel parts from the RSR because of the high radioactivity. In this study, an automatic remote cutting method was developed to disassemble the RSR under water. For the automatic remote cutting processes, a CAM (Computer Aided Manufacturing) system is employed. A computer inputs NC (Numerical Control) codes, which are based on the CAM model, to the controller, and the controller instructs the equipment to process according to the NC codes automatically. The cutting force model was also improved to cut the RSR stably. Automatic cutting was conducted on an imitation of the RSR, and the results showed that the developed automatic cutting method can safely disassemble the stainless steel parts of the RSR under water.

  6. Hydrothermal Upflow, Serpentinization and Talc Alteration Associated with a High Angle Normal Fault Cutting an Oceanic Detachment, Northern Apennines, Italy

    Science.gov (United States)

    Alt, J.; Crispini, L.; Gaggero, L.; Shanks, W. C., III; Gulbransen, C.; Lavagnino, G.

    2017-12-01

    Normal faults cutting oceanic core complexes are observed at the seafloor and through geophysics, and may act as flow pathways for hydrothermal fluids, but we know little about such faults in the subsurface. We present bulk rock geochemistry and stable isotope data for a fault that acted as a hydrothermal upflow zone in a seafloor ultramafic-hosted hydrothermal system in the northern Apennines, Italy. Peridotites were exposed on the seafloor by detachment faulting, intruded by MORB gabbros, and are overlain by MORB lavas and pelagic sediments. North of the village of Reppia are fault shear zones in serpentinite, oriented at a high angle to the detachment surface and extending 300 m below the paleo-seafloor. The paleo-seafloor strikes roughly east-west, dipping 30˚ to the north. At depth the fault zone occurs as an anticlinal form plunging 40˚ to the west. A second fault strikes approximately north-south, with a near vertical dip. The fault rock outcrops as reddish weathered talc + sulfide in 0.1-2 m wide anastomosing bands, with numerous splays. Talc replaces serpentinite in the fault rocks, and the talc rocks are enriched in Si, metals (Fe, Cu, Pb), Light Rare Earth Elements (LREE), have variable Eu anomalies, and have low Mg, Cr and Ni contents. In some cases gabbro dikes are associated with talc-alteration and may have enhanced fluid flow. Sulfide from a fault rock has δ34S = 5.7‰. The mineralogy and chemistry of the fault rocks indicate that the fault acted as the upflow pathway for high-T black-smoker type fluids. Traverses away from the fault (up to 1 km) and with depth below the seafloor (up to 500 m) reveal variable influences of hydrothermal fluids, but there are no consistent trends with distance. Background serpentinites 500 m beneath the paleoseafloor have LREE depleted trends. Other serpentinites exhibit correlations of LREE with HFSE as the result of melt percolation, but there is significant scatter, and hydrothermal effects include LREE enrichment

  7. Measuring multiple residual-stress components using the contour method and multiple cuts

    Energy Technology Data Exchange (ETDEWEB)

    Prime, Michael B [Los Alamos National Laboratory; Swenson, Hunter [Los Alamos National Laboratory; Pagliaro, Pierluigi [U. PALERMO; Zuccarello, Bernardo [U. PALERMO

    2009-01-01

    The conventional contour method determines one component of stress over the cross section of a part. The part is cut into two, the contour of the exposed surface is measured, and Bueckner's superposition principle is analytically applied to calculate stresses. In this paper, the contour method is extended to the measurement of multiple stress components by making multiple cuts with subsequent applications of superposition. The theory and limitations are described. The theory is experimentally tested on a 316L stainless steel disk with residual stresses induced by plastically indenting the central portion of the disk. The stress results are validated against independent measurements using neutron diffraction. The theory has implications beyond just multiple cuts. The contour method measurements and calculations for the first cut reveal how the residual stresses have changed throughout the part. Subsequent measurements of partially relaxed stresses by other techniques, such as laboratory x-rays, hole drilling, or neutron or synchrotron diffraction, can be superimposed back to the original state of the body.

  8. Cutting-edge statistical methods for a life-course approach.

    Science.gov (United States)

    Bub, Kristen L; Ferretti, Larissa K

    2014-01-01

    Advances in research methods, data collection and record keeping, and statistical software have substantially increased our ability to conduct rigorous research across the lifespan. In this article, we review a set of cutting-edge statistical methods that life-course researchers can use to rigorously address their research questions. For each technique, we describe the method, highlight the benefits and unique attributes of the strategy, offer a step-by-step guide on how to conduct the analysis, and illustrate the technique using data from the National Institute of Child Health and Human Development Study of Early Child Care and Youth Development. In addition, we recommend a set of technical and empirical readings for each technique. Our goal was not to address a substantive question of interest but instead to provide life-course researchers with a useful reference guide to cutting-edge statistical methods.

  9. Correction method for the error of diamond tool's radius in ultra-precision cutting

    Science.gov (United States)

    Wang, Yi; Yu, Jing-chi

    2010-10-01

    Compensation for the error of the diamond tool's cutting edge is a bottleneck technology hindering the direct formation of high-accuracy aspheric surfaces by single-point diamond turning. Traditionally, compensation was performed according to measurement results from a profile meter, which required a long measurement time and led to low processing efficiency. A new compensation method is put forward in this article, in which the error of the diamond tool's cutting edge is corrected according to measurement results from a digital interferometer. First, the detailed theoretical calculation related to the compensation method was deduced. Then, the effect after compensation was simulated by computer. Finally, a φ50 mm workpiece was diamond turned and then correction-turned on a Nanotech 250. The tested surface achieved a high shape accuracy of PV 0.137λ and RMS 0.011λ, which confirmed that the new compensation method agrees with the predictive analysis and offers high accuracy and fast error convergence.

  10. Casimir effect in a d-dimensional flat spacetime and the cut-off method

    International Nuclear Information System (INIS)

    Svaiter, N.F.; Svaiter, B.F.

    1989-01-01

    The Casimir effect in a D-dimensional spacetime produced by a Hermitian massless scalar field in the presence of a pair of perfectly reflecting parallel flat plates is discussed. The exponential cut-off regularization method is employed. The regularized vacuum energy and the Casimir energy of this field are evaluated and a detailed analysis of the divergent terms in the regularized vacuum energy is carried out. The two-dimensional version of the Casimir effect is discussed by means of the same cut-off method. A comparison between the above method and the zeta function regularization procedure is presented in a way which unifies the two methods in the present case. (author)
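
    To make the exponential cut-off procedure concrete, here is a standard textbook sketch of the two-dimensional (one space, one time) case, not taken from the paper itself. In units ħ = c = 1, with mode frequencies ω_n = nπ/a between plates separated by a distance a, the regularized vacuum energy is

        E(s) = \frac{1}{2}\sum_{n=1}^{\infty} \omega_n e^{-s\omega_n}
             = \frac{\pi}{2a}\,\frac{e^{-x}}{(1-e^{-x})^{2}}, \qquad x = \frac{s\pi}{a},
             = \frac{a}{2\pi s^{2}} - \frac{\pi}{24a} + O(s^{2}).

    The divergent s⁻² term is proportional to the plate separation (a bulk vacuum-energy contribution) and is discarded by the regularization; the finite remainder, E_C = -π/(24a), is the two-dimensional Casimir energy.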

  11. Method for construction of normalized cDNA libraries

    Science.gov (United States)

    Soares, Marcelo B.; Efstratiadis, Argiris

    1998-01-01

    This invention provides a method to normalize a directional cDNA library constructed in a vector that allows propagation in single-stranded circle form comprising: (a) propagating the directional cDNA library in single-stranded circles; (b) generating fragments complementary to the 3' noncoding sequence of the single-stranded circles in the library to produce partial duplexes; (c) purifying the partial duplexes; (d) melting and reassociating the purified partial duplexes to appropriate Cot; and (e) purifying the unassociated single-stranded circles, thereby generating a normalized cDNA library. This invention also provides normalized cDNA libraries generated by the above-described method and uses of the generated libraries.

  12. Normalization methods in time series of platelet function assays

    Science.gov (United States)

    Van Poucke, Sven; Zhang, Zhongheng; Roest, Mark; Vukicevic, Milan; Beran, Maud; Lauwereins, Bart; Zheng, Ming-Hua; Henskens, Yvonne; Lancé, Marcus; Marcus, Abraham

    2016-01-01

    Platelet function can be quantitatively assessed by specific assays such as light-transmission aggregometry, multiple-electrode aggregometry measuring the response to adenosine diphosphate (ADP), arachidonic acid, collagen, and thrombin-receptor activating peptide, and by viscoelastic tests such as rotational thromboelastometry (ROTEM). The task of extracting meaningful statistical and clinical information from the high-dimensional data spaces of temporal multivariate clinical data represented as multivariate time series is complex. Building insightful visualizations for multivariate time series demands adequate usage of normalization techniques. In this article, various methods for data normalization (z-transformation, range transformation, proportion transformation, and interquartile range) are presented and visualized, and the most suitable approach for platelet function data series is discussed. Normalization was calculated per assay (test) for all time points and per time point for all tests. Interquartile range, range transformation, and z-transformation preserved the correlations calculated by the Spearman correlation test when normalized per assay (test) for all time points. When normalizing per time point for all tests, no correlation could be abstracted from the charts, as was also the case when using all data as one dataset for normalization. PMID:27428217
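
    The four transformations named above are one-liners. A minimal sketch, assuming each assay is a 1-D numpy array of measurements over time (definitions follow common usage; the paper's exact formulas may differ in detail):

        import numpy as np

        def z_transform(x):          # zero mean, unit variance
            return (x - x.mean()) / x.std()

        def range_transform(x):      # min-max scaling to [0, 1]
            return (x - x.min()) / (x.max() - x.min())

        def proportion_transform(x): # each value as a share of the total
            return x / x.sum()

        def iqr_transform(x):        # center on the median, scale by the IQR
            q1, q3 = np.percentile(x, [25, 75])
            return (x - np.median(x)) / (q3 - q1)

    Applying these per assay across all time points, versus per time point across all assays, gives the two normalization directions compared in the study.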

  13. Combining Illumination Normalization Methods for Better Face Recognition

    NARCIS (Netherlands)

    Boom, B.J.; Tao, Q.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.

    2009-01-01

    Face Recognition under uncontrolled illumination conditions is partly an unsolved problem. There are two categories of illumination normalization methods. The first category performs a local preprocessing, in which a pixel value is corrected based on a local neighborhood in the images. The second

  14. A high precision method for normalization of cross sections

    International Nuclear Information System (INIS)

    Aguilera R, E.F.; Vega C, J.J.; Martinez Q, E.; Kolata, J.J.

    1988-08-01

    A system of four monitors and a program were developed to eliminate, in the process of normalizing cross sections, the dependence on equipment alignment and beam centering. A series of experiments was carried out with the systems 27Al + 70,72,74,76Ge, 35Cl + 58Ni, 37Cl + 58,60,62,64Ni and (81Br, 109Rh) + 60Ni. In these experiments a typical normalization precision of 1% was obtained. The advantage of this method over those using one or two monitors is demonstrated theoretically and experimentally. (Author)
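
    The abstract does not spell out the estimator, but a common way a symmetric monitor set cancels first-order beam-misalignment effects is to normalize detector yields to the geometric mean of the monitor yields: a small beam shift raises the count rate in one monitor while lowering it in the opposite one, so the product is stable to first order. A hypothetical sketch of that idea, not the paper's actual program:

        import numpy as np

        def normalized_yield(detector_counts, monitor_counts):
            """Normalize to the geometric mean of (e.g., left/right/up/down)
            monitor yields so first-order beam-shift effects cancel."""
            monitors = np.asarray(monitor_counts, dtype=float)
            return detector_counts / monitors.prod() ** (1.0 / monitors.size)

        print(normalized_yield(12500, [8100, 7900, 8050, 7950]))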

  15. Advanced cutting, welding and inspection methods for vacuum vessel assembly and maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Jones, L. E-mail: jonesl@ipp.mgg.de; Alfile, J.-P.; Aubert, Ph.; Punshon, C.; Daenner, W.; Kujanpaeae, V.; Maisonnier, D.; Serre, M.; Schreck, G.; Wykes, M

    2000-11-01

    ITER requires a 316L stainless steel, double-skinned vacuum vessel (VV), each shell being 60 mm thick. EFDA (European Fusion Development Agreement) is investigating methods to be used for performing welding and NDT during VV assembly and also cutting and re-welding for remote sector replacement, including the development of an Intersector Welding Robot (IWR) [Jones et al. This conference]. To reduce the welding time, distortions and residual stresses of conventional welding, previous work concentrated on CO{sub 2} laser welding and cutting processes [Jones et al. Proc. Symp. Fusion Technol., Marseilles, 1998]. Nd:YAG laser now provides the focus for welding of the rearside root and for completing the weld in overhead positions with multipass filling. Electron beam (E-beam) welding with local vacuum offers a single pass for most of the weld depth except in overhead positions. Plasma cutting has shown the capability to contain the backside dross, and preliminary work with Nd:YAG laser cutting has shown good results. Automated ultrasonic inspection of assembly welds will be improved by the use of a phased-array probe system that can focus the beam for accurate flaw location and sizing. This paper describes the recent results of process investigations in this R and D programme, involving five European sites and forming part of the overall VV/blanket research effort [W. Daenner et al. This conference].

  16. Paper Cuts.

    Science.gov (United States)

    Greene, Lisa A.

    1990-01-01

    Describes how to create paper cuts and suggests the most appropriate materials for young children that give good quality results. Describes the methods the author, a professional artist, uses to assemble her own paper cuts and how these can be adopted by older students. (KM)

  17. Modelling of tunnelling processes and rock cutting tool wear with the particle finite element method

    Science.gov (United States)

    Carbonell, Josep Maria; Oñate, Eugenio; Suárez, Benjamín

    2013-09-01

    Underground construction involves all sorts of challenges in the analysis, design, project and execution phases. The dimensions of tunnels and their structural requirements are growing, and so do safety and security demands. New engineering tools are needed to perform safer planning and design. This work presents the advances in the particle finite element method (PFEM) for the modelling and the analysis of tunnelling processes including the wear of the cutting tools. The PFEM has its foundation on the Lagrangian description of the motion of a continuum built from a set of particles with known physical properties. The method uses a remeshing process combined with the alpha-shape technique to detect the contacting surfaces and a finite element method for the mechanical computations. A contact procedure has been developed for the PFEM which is combined with a constitutive model for predicting the excavation front and the wear of cutting tools. The material parameters govern the coupling of frictional contact and wear between the interacting domains at the excavation front. The PFEM allows predicting several parameters which are relevant for estimating the performance of a tunnelling boring machine such as wear in the cutting tools, the pressure distribution on the face of the boring machine and the vibrations produced in the machinery and the adjacent soil/rock. The final aim is to help in the design of the excavating tools and in the planning of the tunnelling operations. The applications presented show that the PFEM is a promising technique for the analysis of tunnelling problems.

  18. NOLB: Nonlinear Rigid Block Normal Mode Analysis Method

    OpenAIRE

    Hoffmann , Alexandre; Grudinin , Sergei

    2017-01-01

    We present a new conceptually simple and computationally efficient method for nonlinear normal mode analysis called NOLB. It relies on the rotations-translations of blocks (RTB) theoretical basis developed by Y.-H. Sanejouand and colleagues. We demonstrate how to physically interpret the eigenvalues computed in the RTB basis in terms of angular and linear velocities applied to the rigid blocks and how to construct a nonlinear extrapolation of motion out of these veloci...

  19. Panel cutting method: new approach to generate panels on a hull in Rankine source potential approximation

    Directory of Open Access Journals (Sweden)

    Hee-Jong Choi

    2011-12-01

    In the present study, a new hull panel generation algorithm, namely the panel cutting method, was developed to predict flow phenomena around a ship using the Rankine source potential based panel method, where an iterative method was used to satisfy the nonlinear free surface condition and the trim and sinkage of the ship were taken into account. Numerical computations were performed to investigate the validity of the proposed hull panel generation algorithm for the Series 60 (CB=0.60) hull and the KRISO container ship (KCS), a container ship designed by the Maritime and Ocean Engineering Research Institute (MOERI). The computational results were validated by comparison with existing experimental data.

  20. Panel cutting method: new approach to generate panels on a hull in Rankine source potential approximation

    Science.gov (United States)

    Choi, Hee-Jong; Chun, Ho-Hwan; Park, Il-Ryong; Kim, Jin

    2011-12-01

    In the present study, a new hull panel generation algorithm, namely panel cutting method, was developed to predict flow phenomena around a ship using the Rankine source potential based panel method, where the iterative method was used to satisfy the nonlinear free surface condition and the trim and sinkage of the ship was taken into account. Numerical computations were performed to investigate the validity of the proposed hull panel generation algorithm for Series 60 (CB=0.60) hull and KRISO container ship (KCS), a container ship designed by Maritime and Ocean Engineering Research Institute (MOERI). The computational results were validated by comparing with the existing experimental data.

  1. [Efficacy on chronic obstructive pulmonary disease at stable stage treated with cutting method and western medication].

    Science.gov (United States)

    Xu, Jian-hua; Xu, Bin; Deng, Yan-qing

    2014-10-01

    To compare the difference in clinical efficacy on chronic obstructive pulmonary disease (COPD) at the stable stage among the combined therapy of cutting method and western medication (combined therapy), the simple cutting method and simple western medication. One hundred and twenty cases of COPD were randomized into three groups, 40 cases in each one. In the cutting method group, for the excessive phlegm pattern/syndrome, Feishu (BL 13), Danzhong (CV 17), Dingchuan (EX-B 1) and Yuji (LU 10) were selected as the main acupoints, and Lieque (LU 7) and Pianli (LI 6) as the supplementary acupoints. For the pattern/syndrome of failure to consolidate kidney primary, Shenshu (BL 23), Pishu (BL 20), Guanyuan (CV 4) and Yuji (LU 10) were selected as the main acupoints, and Jueyinshu (BL 14) and Zusanli (ST 36) as the supplementary acupoints. Three acupoints were selected alternately in each treatment and the cutting method was applied once every 10 days. Three treatments made one session, and two sessions of treatment were required. In the western medication group, salbutamol sulfate aerosol, one press (200 μg/press) each night, as well as salmeterol xinafoate and fluticasone propionate powder for inhalation, one inhalation each night, were used. One month of treatment made one session, and two sessions were required. In the combined therapy group, the cutting method and western medication were applied in combination. The clinical symptom score, lung function test, arterial blood gas analysis, degree of inflation as well as clinical efficacy were observed before and after treatment in each group. Except for the degree of lung inflation, the clinical symptom score, the indices of the lung function test, the arterial partial pressure of oxygen (PaO2) and the partial pressure of carbon dioxide (PaCO2) were all obviously improved after treatment as compared with those before treatment in each group (all P<0.05) ... syndrome differentiation and the combined therapy with western medication

  2. Systems and Methods for Determining Water-Cut of a Fluid Mixture

    KAUST Repository

    Karimi, Muhammad Akram

    2017-03-02

    Provided in some embodiments are systems and methods for measuring the water content (or water-cut) of a fluid mixture. Provided in some embodiments is a water-cut sensor system that includes a T-resonator, a ground conductor, and a separator. The T-resonator including a feed line, and an open shunt stub conductively coupled to the feed line. The ground conductor including a bottom ground plane opposite the T-resonator and a ground ring conductively coupled to the bottom ground plane, with the feed line overlapping at least a portion of the ground ring. The separator including a dielectric material disposed between the feed line and the portion of the ground ring overlapped by the feed line, and the separator being adapted to electrically isolate the T-resonator from the ground conductor.

  3. Systems and Methods for Determining Water-Cut of a Fluid Mixture

    KAUST Repository

    Karimi, Muhammad Akram

    2017-12-07

    Provided in some embodiments are systems and methods for measuring the water content (or water-cut) of a fluid mixture. Provided in some embodiments is a water-cut sensor system that includes a helical T-resonator, a helical ground conductor, and a separator provided at an exterior of a cylindrical pipe. The helical T-resonator including a feed line, and a helical open shunt stub conductively coupled to the feed line. The helical ground conductor including a helical ground plane opposite the helical open shunt stub and a ground ring conductively coupled to the helical ground plane. The feed line overlapping at least a portion of the ground ring, and the separator disposed between the feed line and the portion of the ground ring overlapped by the feed line to electrically isolate the helical T-resonator from the helical ground conductor.
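
    These records describe the sensor hardware rather than the inversion from resonance to water fraction. Purely as an illustration (not the patented method), a measured resonant frequency can be mapped to water cut with a toy model in which the resonance scales as 1/sqrt(eps_eff) and the effective permittivity mixes linearly with water fraction; all names and values below are assumptions:

        def water_cut_estimate(f_measured, f_oil, f_water):
            """Toy inversion: f ∝ 1/sqrt(eps_eff), eps_eff linear in water cut."""
            eps = lambda f: (f_oil / f) ** 2   # permittivity relative to pure oil
            return (eps(f_measured) - 1.0) / (eps(f_water) - 1.0)

        # 0.0 -> pure oil, 1.0 -> pure water
        print(water_cut_estimate(f_measured=95e6, f_oil=120e6, f_water=40e6))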

  4. The effect of cutting origin and organic plant growth regulator on the growth of Daun Ungu (Graptophyllum pictum) through stem cutting method

    Science.gov (United States)

    Pratama, S. P.; Yunus, A.; Purwanto, E.; Widyastuti, Y.

    2018-03-01

    Graptophyllum pictum is a medicinal plant whose chemical content is important for treating diseases. The leaf, bark and flower can be used to facilitate menstruation and to treat hemorrhoids, constipation, ulcers, swelling, and earache. G. pictum is difficult to propagate by seedling due to the long duration of seed formation, thus vegetative propagation is done by stem cutting. The aim of this study is to obtain the optimum combination of cutting origin and organic plant growth regulator at various concentrations for the growth of Daun Ungu through the stem cutting method. This research was conducted at the Research Center for Medicinal Plant and Traditional Drug, Tanjungsari, Tegal Gede, Karanganyar, from June to August 2016. Origin of cuttings and organic plant growth regulator were used as treatment factors. A completely randomized design (RAL) was used and data were analyzed by F test (ANOVA) with a confidence level of 95%. Significant differences among treatments were followed up with Duncan's test at α = 5%. The research indicates that the longest roots resulted from the treatment of 0.5 ml/l of organic plant growth regulator. The treatment of 1 ml/l was able to increase the fresh and dry weight of roots, and the treatment of 1.5 ml/l of organic plant growth regulator was able to increase the percentage of growing shoots. Treatment with the base part as the origin of cuttings increased the length, fresh weight and dry weight of shoots and increased the number of leaves. The interaction between the 1 ml/l concentration of organic plant growth regulator and the central part as the origin of cuttings was capable of increasing the leaf area, whereas treatment without organic plant growth regulator and with the base part as planting material resulted in the smallest leaf area.

  5. OPTIMIZATION OF LASER CUTTING MACHINE PARAMETERS FOR ROUGHNESS AND CUTTING RATE ON SUS 316L USING THE TAGUCHI GREY RELATIONAL ANALYSIS METHOD

    Directory of Open Access Journals (Sweden)

    Rakasita R

    2016-06-01

    Parameter optimization is a technique used in manufacturing processes to produce the best product. This study aims to optimize the CNC laser cutting parameters, namely the focal point of the laser beam, the cutting gas pressure and the cutting speed, in order to reduce variation in the responses of roughness and cutting rate on SUS 316L material. Each parameter has 3 levels, and this study uses an L9 (3^4) orthogonal array. ANOVA and the Taguchi method were used to analyze the experimental data. Optimization for minimum surface roughness and maximum cutting rate in the laser cutting process was carried out using Grey relational analysis. Confirmation experiments were used to verify the optimal results obtained from the Taguchi Grey relational analysis method. The experimental results show that Taguchi Grey relational analysis is effective for optimizing the machining parameters of laser cutting with multiple responses.   Abstract: Parameter optimization is used in manufacturing as an indicator to produce the best manufacturing product. This paper studies an optimization of the parameters of CNC laser cutting, such as the focus of the laser beam, the pressure of the cutting gases and the cutting speed, for reducing variation of surface roughness and cutting rate on material SUS 316L. Based on L9 (3^4) orthogonal array parameters, the results are analyzed using ANOVA based on the Taguchi method. In order to optimize the minimum surface roughness and maximum cutting rate in the laser cutting process, Grey relational analysis is used. The confirmation experiments are used to validate the optimal results obtained by the Taguchi method. The results show that the Taguchi Grey relational analysis is effective in optimizing the machining parameters for the laser cutting process with two responses.
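
    The grey relational grade that drives this optimization is easy to compute. A hedged sketch (standard formulation with distinguishing coefficient ζ = 0.5; the example numbers are invented, not the paper's data):

        import numpy as np

        def grey_relational_grade(X, larger_better, zeta=0.5):
            """Normalize each response, form grey relational coefficients
            against the ideal sequence, and average them per experiment."""
            X = np.asarray(X, dtype=float)
            rng = X.max(axis=0) - X.min(axis=0)
            norm = np.where(larger_better,
                            (X - X.min(axis=0)) / rng,   # larger-the-better
                            (X.max(axis=0) - X) / rng)   # smaller-the-better
            delta = 1.0 - norm                           # deviation from the ideal
            coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
            return coeff.mean(axis=1)                    # grade per run

        # Invented example: 3 runs, responses = [roughness (smaller), rate (larger)]
        grades = grey_relational_grade([[3.2, 20.0], [2.8, 18.0], [3.6, 25.0]],
                                       larger_better=[False, True])
        print(grades.argmax())  # run with the best multi-response compromise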

  6. Experimental study on variations in Charpy impact energies of low carbon steel, depending on welding and specimen cutting method

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Zhaorui; Kang, Hansaem; Lee, Young Seog [Chung-Ang University, Seoul (Korea, Republic of)

    2016-05-15

    This paper presents an experimental study that examines variations of Charpy impact energy of a welded steel plate, depending upon the welding method and the method for obtaining the Charpy specimens. Flux cored arc welding (FCAW) and Gas tungsten arc welding (GTAW) were employed to weld an SA516 Gr. 70 steel plate. The methods of wire cutting and water-jet cutting were adopted to take samples from the welded plate. The samples were machined according to the recommendations of ASTM SEC. II SA370, in order to fit the specimen dimension that the Charpy impact test requires. An X-ray diffraction (XRD) method was used to measure the as-weld residual stress and its redistribution after the samples were cut. The Charpy impact energy of specimens was considerably dependent on the cutting methods and locations in the welded plate where the specimens were taken. The specimens that were cut by water jet followed by FCAW have the greatest resistance-to-fracture (Charpy impact energy). Regardless of which welding method was used, redistributed transverse residual stress becomes compressive when the specimens are prepared using water-jet cutting. Meanwhile, redistributed transverse residual stress becomes tensile when the specimens are prepared using wire cutting.

  7. Using a cut-paste method to prepare a carbon nanotube fur electrode

    International Nuclear Information System (INIS)

    Zhang, H; Cao, G P; Yang, Y S

    2007-01-01

    We describe and realize an aligned carbon nanotube array based 'carbon nanotube fur (CNTF)' electrode. We removed an 800 μm long aligned carbon nanotube array from its silica substrate, and then pasted the array on a nickel foam current collector to obtain a CNTF electrode. The CNTF's characteristics and electrochemical properties were studied systematically in this paper. The cut-paste method is simple, and does not damage the microstructure of the aligned carbon nanotube array. The CNTF electrode achieved a specific capacitance of 14.1 F/g and excellent rate capability

  8. New Graphical Methods and Test Statistics for Testing Composite Normality

    Directory of Open Access Journals (Sweden)

    Marc S. Paolella

    2015-07-01

    Several graphical methods for testing univariate composite normality from an i.i.d. sample are presented. They are endowed with correct simultaneous error bounds and yield size-correct tests. As all are based on the empirical CDF, they are also consistent for all alternatives. For one test, called the modified stabilized probability test, or MSP, a highly simplified computational method is derived, which delivers the test statistic and also a highly accurate p-value approximation, essentially instantaneously. The MSP test is demonstrated to have higher power against asymmetric alternatives than the well-known and powerful Jarque-Bera test. A further size-correct test, based on combining two test statistics, is shown to have yet higher power. The methodology employed is fully general and can be applied to any i.i.d. univariate continuous distribution setting.
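
    The Jarque-Bera benchmark mentioned above is available off the shelf, which makes it a convenient baseline when experimenting with normality tests (this is standard scipy usage, not the MSP test itself):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        x = rng.standard_t(df=5, size=500)      # heavy-tailed, non-normal sample
        stat, p = stats.jarque_bera(x)          # JB = n/6 * (S^2 + (K - 3)^2 / 4)
        print(f"JB = {stat:.2f}, p = {p:.4f}")  # a small p rejects normality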

  9. Enlisting Clustering and Graph-Traversal Methods for Cutting Pattern and Net Topology Design in Pneumatic Hybrids

    DEFF Research Database (Denmark)

    Ayres, Phil; Vestartas, Petras; Ramsgaard Thomsen, Mette

    2017-01-01

    Cutting patterns for architectural membranes are generally characterised by rational approaches to surface discretisation and minimisation of geometric deviation between discrete elements that comprise the membrane. In this paper, we present an alternative approach for cutting pattern generation...... to the cutting pattern generation method and the net topology generation method used to produce a constraint net for a given membrane. We test our computational design approach through an iterative cycle of digital and physical prototyping before realising an air-inflated cable restrained pneumatic structural...

  10. Standard Test Method for Normal Spectral Emittance at Elevated Temperatures

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1972-01-01

    1.1 This test method describes a highly accurate technique for measuring the normal spectral emittance of electrically conducting materials or materials with electrically conducting substrates, in the temperature range from 600 to 1400 K, and at wavelengths from 1 to 35 μm. 1.2 The test method requires expensive equipment and rather elaborate precautions, but produces data that are accurate to within a few percent. It is suitable for research laboratories where the highest precision and accuracy are desired, but is not recommended for routine production or acceptance testing. However, because of its high accuracy this test method can be used as a referee method to be applied to production and acceptance testing in cases of dispute. 1.3 The values stated in SI units are to be regarded as the standard. The values in parentheses are for information only. 1.4 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this stan...

  11. Different methods of measuring ADC values in normal human brain

    International Nuclear Information System (INIS)

    Wei Youping; Sheng Junkang; Zhang Caiyuan

    2009-01-01

    Objective: To investigate a better method of measuring ADC values of the normal brain, and to provide a reference for further research. Methods: The MR imaging of twenty healthy people was reviewed. All of them underwent routine MRI scans and echo-planar diffusion-weighted imaging (DWI), and ADC maps were reconstructed on a workstation. Six regions of interest (ROI) were selected for each subject, and the mean ADC values were obtained for each position on the DWI and ADC maps, respectively. Results: On the anisotropic DWI maps, the ADC(M), ADC(P) and ADC(S) values calculated in the hypothalamus showed no significant difference (P>0.05); in the frontal white matter and the hindlimb of the internal capsule, there was a significant difference (P<0.05), and the ADC(ave) value differed significantly from direct measurement on the anisotropic (isotropic) ADC map (P<0.001). Conclusion: Diffusion of water in the frontal white matter and internal capsule is anisotropic, but it is isotropic in the hypothalamus; quantitative measurement of the four ADC values differs significantly among methods, but ADC values calculated through the DWI map are more accurate, and quantitative diffusion studies of brain tissue should also consider the diffusion measurement method. (authors)

  12. Method for 3D noncontact measurements of cut trees package area

    Science.gov (United States)

    Knyaz, Vladimir A.; Vizilter, Yuri V.

    2001-02-01

    Progress in imaging sensors and computers creates the background for numerous 3D imaging applications in a wide variety of manufacturing activity. Many demands for automated precise measurements exist in the wood industry. One of them is accurate volume determination for cut trees carried on a truck. The key point for volume estimation is determination of the front area of the cut-tree package. To eliminate the slow and inaccurate manual measurements now in practice, an experimental system for automated non-contact wood measurement was developed. The system includes two non-metric CCD video cameras, a PC as the central processing unit, frame grabbers and original software for image processing and 3D measurements. The proposed measurement method is based on capturing a stereo pair of the front of the tree package and performing an orthotransformation of the image into the front plane. This technique allows the transformed image to be processed for circle-shape recognition and calculation of their areas. The metric characteristics of the system are provided by a special camera calibration procedure. The paper presents the developed method of 3D measurements, describes the hardware used for image acquisition and the software implementing the developed algorithms, and gives the productivity and precision characteristics of the system.
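
    The circle-recognition step described above can be prototyped today with a Hough transform. An illustrative sketch using OpenCV (not the authors' original software; the file name and parameter values are assumptions):

        import cv2
        import numpy as np

        # Hypothetical rectified (orthotransformed) view of the package front
        img = cv2.imread("log_package_front.png", cv2.IMREAD_GRAYSCALE)
        img = cv2.medianBlur(img, 5)
        circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                                   param1=100, param2=30, minRadius=5, maxRadius=80)
        if circles is not None:
            areas = np.pi * circles[0, :, 2] ** 2   # area of each detected log end
            print(len(areas), "logs, front area (px^2):", areas.sum())

    Converting pixel areas to physical units then only needs the scale recovered from the camera calibration.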

  13. Laser capture microdissection: Arcturus(XT) infrared capture and UV cutting methods.

    Science.gov (United States)

    Gallagher, Rosa I; Blakely, Steven R; Liotta, Lance A; Espina, Virginia

    2012-01-01

    Laser capture microdissection (LCM) is a technique that allows the precise procurement of enriched cell populations from a heterogeneous tissue under direct microscopic visualization. LCM can be used to harvest the cells of interest directly or can be used to isolate specific cells by ablating the unwanted cells, resulting in histologically enriched cell populations. The fundamental components of laser microdissection technology are (a) visualization of the cells of interest via microscopy, (b) transfer of laser energy to a thermolabile polymer with either the formation of a polymer-cell composite (capture method) or transfer of laser energy via an ultraviolet laser to photovolatize a region of tissue (cutting method), and (c) removal of cells of interest from the heterogeneous tissue section. Laser energy supplied by LCM instruments can be infrared (810 nm) or ultraviolet (355 nm). Infrared lasers melt thermolabile polymers for cell capture, whereas ultraviolet lasers ablate cells for either removal of unwanted cells or excision of a defined area of cells. LCM technology is applicable to an array of applications including mass spectrometry, DNA genotyping and loss-of-heterozygosity analysis, RNA transcript profiling, cDNA library generation, proteomics discovery, and signal kinase pathway profiling. This chapter describes the unique features of the Arcturus(XT) laser capture microdissection instrument, which incorporates both infrared capture and ultraviolet cutting technology in one instrument, using a proteomic downstream assay as a model.

  14. Physicochemical, microbial and sensory quality of fresh-cut red beetroots in relation to sanitization method and storage duration

    Directory of Open Access Journals (Sweden)

    Dulal Chandra

    2015-06-01

    Effects of sanitization and storage on fresh-cut beetroots (Beta vulgaris L.) were evaluated following sanitation-peeling-cutting (SPC), peeling-sanitation-cutting (PSC) and peeling-cutting-sanitation (PCS) methods, with (Cl) or without (TW) a 100 ppm chlorine solution, after which the beetroots were packaged in polyethylene bags and stored at 5°C for up to 14 days. Chroma values of fresh-cut beetroots declined significantly whereas whiteness index and titratable acidity values increased; however, texture and total soluble solid contents showed no significant variation. Betalain contents decreased gradually and total phenol content showed an inconsistent trend. PCS-Cl treated samples accounted for a greater betalain decline and received lower visual quality scores despite their lower total aerobic bacterial count. Minimum microbial populations were observed with the PSC-Cl method, along with higher levels of betalain contents. Considering pigment retention and microbial and visual qualities, beetroots sanitized with chlorine water following the PSC method gave the best processing results for fresh-cut beetroots, and therefore the PSC-Cl treatment could be used commercially for the processing of fresh-cut beetroots.

  15. Novel composite cBN-TiN coating deposition method: structure and performance in metal cutting

    International Nuclear Information System (INIS)

    Russell, W.C.; Malshe, A.P.; Yedave, S.N.; Brown, W.D.

    2001-01-01

    Cubic boron nitride coatings are under development for a variety of applications, but stabilization of the pure cBN form and adhesion of films deposited by PVD and ion-based methods have been difficult. An alternative method for depositing a composite cBN-TiN film has been developed for wear-related applications. The coating is deposited in a two-stage process utilizing ESC (electrostatic spray coating) and CVI (chemical vapor infiltration). Fully dense films of cBN particles evenly dispersed in a continuous TiN matrix have been developed. Testing in metal cutting (turning of 4340 steel) has shown an increase in tool life of three to seven times, depending on machining parameters, in comparison with CVD-deposited TiN films. (author)

  16. Reliability analysis for thermal cutting method based non-explosive separation device

    International Nuclear Information System (INIS)

    Choi, Jun Woo; Hwang, Kuk Ha; Kim, Byung Kyu

    2016-01-01

    In order to increase the reliability of a separation device for a small satellite, a new non-explosive separation device was invented. This device is activated using a thermal cutting method with a Ni-Cr wire. A reliability analysis is carried out for the proposed non-explosive separation device by applying the fault tree analysis (FTA) method. In the FTA results for the separation device, only ten single-point failure modes are found. The reliability modeling and analysis for the device consider failure of the power supply, failure of the Ni-Cr wire to burn through and unwind, holder separation failure, ball separation failure, and pin release failure. Ultimately, the reliability of the proposed device is calculated as 0.999989 with five Ni-Cr wire coils

  17. Determination of the stresses and displacements in the cut off curtain body executed by the «…» method

    International Nuclear Information System (INIS)

    Snisarenko, V.I.; Mel'nikov, A.I.

    1994-01-01

    Construction of the cut-off curtain (COC) is analyzed as a possible variant for reducing the rate of horizontal migration of radioactive material. Such constructions can be executed by the «…» method. The theoretical analysis of the stress-strained state of the COC body was carried out using the methods of the theory of elasticity and of the limit equilibrium of granular media. Theoretical dependences are obtained and formulas are suggested for practical calculation of the stress-strained state of the COC body in the depth intervals of practical interest. The dependences obtained may be used to calculate the consolidation parameters and filtration coefficients, and to choose the materials for the COC body, the geometric dimensions and the film elements included

  18. Reliability analysis for thermal cutting method based non-explosive separation device

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jun Woo; Hwang, Kuk Ha; Kim, Byung Kyu [Korea Aerospace University, Goyang (Korea, Republic of)

    2016-12-15

    In order to increase the reliability of a separation device for a small satellite, a new non-explosive separation device was invented. This device is activated using a thermal cutting method with a Ni-Cr wire. A reliability analysis is carried out for the proposed non-explosive separation device by applying the fault tree analysis (FTA) method. In the FTA results for the separation device, only ten single-point failure modes are found. The reliability modeling and analysis for the device consider failure of the power supply, failure of the Ni-Cr wire to burn through and unwind, holder separation failure, ball separation failure, and pin release failure. Ultimately, the reliability of the proposed device is calculated as 0.999989 with five Ni-Cr wire coils.

  19. A NEW METHOD FOR 3D SHAPE INDEXING AND RETRIEVAL IN LARGE DATABASE BY USING THE LEVEL CUT

    OpenAIRE

    M. Elkhal; A. Lakehal; K. Satori

    2014-01-01

    In this study, we propose a new method for indexing and retrieval of 3D models in large databases based on binary images extracted from the 3D object, called "level cuts" (LC). These cuts are obtained by the intersection of a set of planes with the 3D object. A set of equidistant parallel planes generates, by intersection with the 3D object, a set of cuts that is used to index the 3D model. We rely on these cuts to describe the 3D object by using the descriptor vectors bas...

  20. Modeling of the Cutting Forces in Turning Process Using Various Methods of Cooling and Lubricating: An Artificial Intelligence Approach

    Directory of Open Access Journals (Sweden)

    Djordje Cica

    2013-01-01

    Cutting forces are one of the inherent phenomena and a very significant indicator of the metal cutting process. The work presented in this paper investigates the prediction of these parameters in turning using soft computing techniques. During the experimental research, focus was placed on the application of various methods of cooling and lubricating the cutting zone: the conventional method of cooling and lubricating, high-pressure jet-assisted machining, and the minimal quantity lubrication technique. The data obtained by experiment were used to create two different models, namely an artificial neural network and an adaptive network-based fuzzy inference system, for the prediction of cutting forces. Furthermore, both models are compared with the experimental data and the results are presented.
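
    As a sketch of the neural-network half of that idea (not the authors' model), scikit-learn's MLP can be fitted to map cutting conditions to the main cutting force; all records below are invented placeholders:

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Invented records: [speed m/min, feed mm/rev, depth mm, cooling code]
        X = np.array([[100, 0.10, 1.0, 0], [150, 0.20, 1.5, 1], [200, 0.30, 2.0, 2],
                      [120, 0.15, 1.2, 0], [180, 0.25, 1.8, 1], [160, 0.20, 1.4, 2]])
        y = np.array([520.0, 760.0, 980.0, 590.0, 850.0, 720.0])  # main force, N

        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(8,),
                                           max_iter=5000, random_state=0))
        model.fit(X, y)
        print(model.predict([[140, 0.18, 1.3, 1]]))  # predicted force, N

    A real study would of course use the full experimental design and held-out data to compare the two model families.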

  1. Indirect questioning method reveals hidden support for female genital cutting in South Central Ethiopia.

    Science.gov (United States)

    Gibson, Mhairi A; Gurmu, Eshetu; Cobo, Beatriz; Rueda, María M; Scott, Isabel M

    2018-01-01

    Female genital cutting (FGC) has major implications for women's physical, sexual and psychological health, and eliminating the practice is a key target for public health policy-makers. To date, one of the main barriers to achieving this has been an inability to infer privately-held views on FGC within communities where it is prevalent. As FGC is a sensitive (and often illegal) topic, people are expected to hide their true support for the practice when questioned directly. Here we use an indirect questioning method (unmatched count technique) to identify hidden support for FGC in a rural South Central Ethiopian community where the practice is common, but thought to be in decline. Employing a socio-demographic household survey of 1620 Arsi Oromo adults, which incorporated both direct and indirect response (unmatched count) techniques, we compare directly-stated versus privately-held views in support of FGC, and individual variation in responses by age, gender, education and target female (daughters versus daughters-in-law). Both genders express low support for FGC when questioned directly, while indirect methods reveal substantially higher acceptance (of cutting both daughters and daughters-in-law). Educated adults (those who have attended school) are privately more supportive of the practice than they are prepared to admit openly to an interviewer, indicating that education may heighten secrecy rather than decrease support for FGC. Older individuals hold the strongest views in favour of FGC (particularly educated older males), but they are also more inclined to conceal their support for FGC when questioned directly. As these elders represent the most influential members of society, their hidden support for FGC may constitute a pivotal barrier to eliminating the practice in this community. Our results demonstrate the great potential for indirect questioning methods to advance knowledge and inform policy on culturally-sensitive topics like FGC; providing more
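
    The unmatched count estimator reduces to a difference in mean item counts between the treatment list (innocuous items plus the sensitive item) and the control list (innocuous items only). A minimal simulation, with synthetic counts and an assumed 40% hidden support rate:

        import numpy as np

        rng = np.random.default_rng(0)
        # Control group: number of "yes" answers among 3 innocuous items.
        control = rng.integers(0, 4, size=800)
        # Treatment group: same innocuous counts plus the sensitive item,
        # simulated here with a hidden support rate of 40% (assumed).
        treatment = rng.integers(0, 4, size=800) + rng.binomial(1, 0.40, size=800)

        estimate = treatment.mean() - control.mean()
        se = np.sqrt(treatment.var(ddof=1) / treatment.size
                     + control.var(ddof=1) / control.size)
        print(f"estimated hidden support: {estimate:.2f} +/- {1.96 * se:.2f}")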

  2. Development of cutting and welding methods for thick-walled stainless steel support and containment structures for ITER

    International Nuclear Information System (INIS)

    Jones, L.; Maisonnier, D.; Goussain, J.; Johnson, G.; Petring, D.; Wernwag, L.

    1998-01-01

    In ITER the containment and support structures are made from 316L(N)-IG (ITER Grade) stainless steel plate, 40 to 70 mm thick. The structures are divided into twenty sectors which have to be welded together in situ. The three areas of work described in this paper are CO 2 laser welding, plasma cutting and CO 2 laser cutting. CO 2 laser welding offers significant advantages due to its high speed and low distortion, and the most powerful commercial laser in Europe has been used to investigate single pass welding of thick plates, with strong welds up to 35 mm thick being achieved in one pass. For cutting, the space available on the back-side to collect debris and protect fragile components from damage is limited to 30 mm. A static, water-cooled backside protection plate proved unable to contain the debris from plasma cutting, so a reciprocating backside protection system with a dry ceramic heat shield was demonstrated as a solution. A 10 kW CO 2 laser system for nitrogen-assisted laser cutting provided successful results at 40 mm thickness. This technique shows considerable promise, as significant reductions in through power and rate of debris production are expected compared with plasma cutting, and thicker cuts appear feasible. The results presented herein represent significant technical advances and will be strong candidates for the mix of methods which will have to be used for the assembly and maintenance of the ITER machine. (authors)

  3. Application of wire sawing method to decommissioning of nuclear power plant. Cutting test with turbine pedestal of thermal power plant

    International Nuclear Information System (INIS)

    Hasegawa, Hideki; Uchiyama, Noriyuki; Sugiyama, Kazuya; Yamashita, Yoshitaka; Watanabe, Morishige

    1995-01-01

    It is very important to reduce radioactive waste volume and radiation dose to workers and to the public when dismantling the activated concrete in the decommissioning stage of a nuclear power plant. To this end, we studied a dismantling method which can separate activated concrete from non-activated concrete safely and effectively. Considering the state of legal regulation of radioactive waste disposal and the state of development of decommissioning technologies, we came to the conclusion that the wire sawing method is feasible as a concrete cutting method. This study was carried out to evaluate the applicability of the wire sawing method to dismantling of concrete structures of nuclear power plants. The study consists of a concrete cutting rate test and a concrete block cutting test: the former obtains data on cutting rate for various steel ratios, while the latter obtains data on working time and man-hours of the whole wire sawing operation. A thirty-six-year-old turbine pedestal of a thermal power plant was selected as a test piece to simulate actual decommissioning work of a nuclear power plant, given its massive concrete volume and age. Taking account of handling in the building, a motor-driven wire sawing machine was used because it does not produce exhaust gas. The concrete cutting rate test was performed with the steel ratio in the concrete, the wire tension and the cutting direction as parameters. In the concrete block cutting test, reflecting the actual cutting situation, cubic blocks with sides of approximately 1 m were taken out, and a large block to be cut and taken out was a section of 1m x 1.5m x 10m. Test results are shown below. The difference in cutting rate was mainly caused by the difference in reinforcement steel ratio. Working time data for installation, removal of machines and cutting were obtained. Data on secondary waste (dust, drainage and sludge) and environmental effect (noise and

  4. Methods and Research for Multi-Component Cutting Force Sensing Devices and Approaches in Machining

    Directory of Open Access Journals (Sweden)

    Qiaokang Liang

    2016-11-01

    Multi-component cutting force sensing systems applied to cutting tools in manufacturing processes are gradually becoming the most significant monitoring indicator. Their signals have been extensively applied to evaluate the machinability of workpiece materials, predict cutter breakage, estimate cutting tool wear, control machine tool chatter, determine stable machining parameters, and improve surface finish. Robust and effective sensing systems capable of monitoring the cutting force in machine operations in real time are crucial for realizing the full potential of the cutting capabilities of computer numerically controlled (CNC) tools. The main objective of this paper is to present a brief review of the existing achievements in the field of multi-component cutting force sensing systems in modern manufacturing.

  5. Methods and Research for Multi-Component Cutting Force Sensing Devices and Approaches in Machining.

    Science.gov (United States)

    Liang, Qiaokang; Zhang, Dan; Wu, Wanneng; Zou, Kunlin

    2016-11-16

    Multi-component cutting force sensing systems applied to cutting tools in manufacturing processes are gradually becoming the most significant monitoring indicator. Their signals have been extensively applied to evaluate the machinability of workpiece materials, predict cutter breakage, estimate cutting tool wear, control machine tool chatter, determine stable machining parameters, and improve surface finish. Robust and effective sensing systems capable of monitoring the cutting force in machine operations in real time are crucial for realizing the full potential of the cutting capabilities of computer numerically controlled (CNC) tools. The main objective of this paper is to present a brief review of the existing achievements in the field of multi-component cutting force sensing systems in modern manufacturing.

  6. Graph cut-based method for segmenting the left ventricle from MRI or echocardiographic images.

    Science.gov (United States)

    Bernier, Michael; Jodoin, Pierre-Marc; Humbert, Olivier; Lalande, Alain

    2017-06-01

    In this paper, we present a fast and interactive graph cut method for 3D segmentation of the endocardial wall of the left ventricle (LV) adapted to work on two of the most widely used modalities: magnetic resonance imaging (MRI) and echocardiography. Our method accounts for the fundamentally different nature of both modalities: 3D echocardiographic images have a low contrast, a poor signal-to-noise ratio and frequent signal drop, while MR images are more detailed but also cluttered and contain highly anisotropic voxels. The main characteristic of our method is to work in a 3D Bezier coordinate system instead of the original Euclidean space. This comes with several advantages, including an implicit shape prior and a result guaranteed not to have any holes in it. The proposed method is made of 4 steps. First, a 3D sampling of the LV cavity is made based on a Bezier coordinate system. This allows warping the input 3D image to a Bezier space in which a plane corresponds to an anatomically plausible 3D Euclidean bullet shape. Second, a 3D graph is built and an energy term (which is based on the image gradient and a 3D probability map) is assigned to each edge of the graph, some of which are given an infinite energy to ensure the resulting 3D structure passes through key anatomical points. Third, a max-flow min-cut procedure is executed on the energy graph to delineate the endocardial surface. And fourth, the resulting surface is projected back to the Euclidean space, where a post-processing convex hull algorithm is applied on every short axis slice to remove local concavities. Results obtained on two datasets reveal that our method takes between 2 and 5 s to segment a 3D volume, has better results overall than most state-of-the-art methods on the CETUS echocardiographic dataset, and is statistically as good as a human operator on MR images.
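
    The max-flow/min-cut step at the core of such methods can be shown on a toy example. The sketch below runs networkx's minimum_cut on a tiny 1D "image" with gradient-based link capacities and seeded source/sink terminals; it is a generic graph-cut illustration, not the paper's Bezier-space pipeline.

        import networkx as nx
        import numpy as np

        img = np.array([10., 11., 12., 50., 52., 51.])  # step edge between pixels 2 and 3
        G = nx.DiGraph()
        for i in range(len(img) - 1):
            cap = 1.0 / (1.0 + abs(img[i + 1] - img[i]))  # weak link across a strong gradient
            G.add_edge(i, i + 1, capacity=cap)
            G.add_edge(i + 1, i, capacity=cap)
        G.add_edge("s", 0, capacity=float("inf"))             # seed: pixel 0 is "inside"
        G.add_edge(len(img) - 1, "t", capacity=float("inf"))  # seed: last pixel is "outside"

        cut_value, (inside, outside) = nx.minimum_cut(G, "s", "t")
        print(sorted(n for n in inside if n != "s"))          # -> [0, 1, 2]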

  7. Preliminary investigation on welding and cutting methods for first wall support leg in ITER blanket module

    International Nuclear Information System (INIS)

    Mohri, Kensuke; Suzuki, Satoshi; Enoeda, Mikio; Kakudate, Satoshi; Shibanuma, Kiyoshi; Akiba, Masato

    2006-08-01

    The concept of a module-type blanket has been applied to the ITER shield blanket, whose module size is typically 1 m (W) x 1 m (H) x 0.4 m (B) with a weight of 4 t, in order to enhance its maintainability and fabricability. Each shield blanket module consists of a shield block and four first walls, which are separable from the shield block for the purposes of reducing the electro-magnetic force in disruption events, reducing radioactive waste in maintenance work, and reducing cost in the fabrication process. The first wall support leg, a part of the first wall component located between the first wall and the shield block, is required not only to be connected metallurgically to the shield block in order to withstand the electro-magnetic force and coolant pressure, but also to allow the first wall to be replaced more than twice in the hot cell during the lifetime of the reactor. Therefore, a consistent structure is required in the shield blanket design, in which remote handling equipment can access the joint and carry out the welding/cutting work needed to replace the first wall in the hot cell. This study presents an investigation of the blanket module no.10 design with a new type of first wall support leg structure based on Disc-Cutter technology, which had been developed for main pipe cutting in the maintenance phase and was selected out of a number of candidate methods on account of its large advantages: 1) post-treatment can be eliminated in the hot cell because no material chips are produced and no lubricant is needed; 2) the cut surface can be rewelded without any machining. A design for a small type of Disc-Cutter applied to the new blanket module no.10 has also been investigated. In conclusion, not only the good performance of the Disc-Cutter technology applied to the updated blanket module, but also a consistent structure of the simplified shield blanket module including the first wall support leg in order to satisfy the requirements in the

  8. A multi objective optimization of gear cutting in WEDM of Inconel 718 using TOPSIS method

    Directory of Open Access Journals (Sweden)

    K.D. Mohapatra

    2017-07-01

    The present paper deals with the experimental analysis and multi-objective optimization of the gear cutting process of Inconel 718 using WEDM. The objective of the present work is to optimize the parameters in order to maximize the material removal rate (MRR) and minimize the kerf in a gear cutting process. The MRR and kerf play a major role in optimizing the parameters in the WEDM process. The experiment is carried out on a wire EDM machine using brass wire as the electrode, Inconel 718 as the work-piece material, and distilled water as the dielectric. The design array is created by using design of experiments in a Taguchi L16 orthogonal array repeated once. The gear has a base diameter of 20 mm, an addendum diameter of 22.5 mm and a pressure angle of 20º, with 16 teeth. The machining operation is carried out with 3 input parameters at 4 different levels each. The output parameters, material removal rate and kerf width, were obtained and optimized using the TOPSIS method to determine the optimum setting. Microstructural analysis of both material and wire was performed to identify the various defects arising during machining. Various plots were obtained to show the effects of the process parameters in WEDM. A regression model was also obtained to validate the statistical model values against the experimental ones. An ANOVA table and a response table were produced to identify the significant parameters and their ranks, respectively, in the wire EDM process. Surface roughness, addendum and tooth width of the gears were also determined at the optimum settings. The optimum gear setting obtained can be used to produce high quality gears and can also be applied in future work.
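
    For reference, a generic TOPSIS ranking over one benefit criterion (MRR) and one cost criterion (kerf) takes only a few lines; the decision matrix and weights below are illustrative, not the paper's measurements.

        import numpy as np

        X = np.array([[2.1, 0.30],    # each row: one parameter setting (MRR, kerf)
                      [2.8, 0.34],
                      [3.5, 0.41],
                      [3.2, 0.33]])
        w = np.array([0.5, 0.5])               # criteria weights (assumed equal)
        benefit = np.array([True, False])      # MRR is a benefit, kerf is a cost

        V = w * X / np.linalg.norm(X, axis=0)  # vector-normalized, weighted matrix
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_pos = np.linalg.norm(V - ideal, axis=1)
        d_neg = np.linalg.norm(V - anti,  axis=1)
        closeness = d_neg / (d_pos + d_neg)    # 1 = ideal solution, 0 = worst
        print("best setting:", int(np.argmax(closeness)))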

  9. Method of vertically and horizontally cutting steel pipe piles and removing them based on the development of a steel pipe pile vertically cutting machine; Kokanko tatehoko setsudanki no kaihatsu ni yoru kochi chubu no juo setsudan tekkyo koho

    Energy Technology Data Exchange (ETDEWEB)

    Watanabe, S.; Takeshita, A.; Kobayashi, K.

    1997-07-25

    A machine for vertically cutting steel pipe piles has newly been developed for the purpose of removing the end portions of the shore-protection steel pipe piles which interfere with the shield tunneling work in the Ohokagawa River tunneling section on the Minato Mirai 21 Line. This paper reports the development of the machine and a method of cutting the piles hindering the shield tunneling work underground by using it. The obstacle-constituting portions of the piles are removed by destroying the copings, excavating the interior of the piles to hollow them out so that a cutting machine can be inserted, and cutting the piles vertically and horizontally. The basic structure of the cutting machine comprises a lower cutting unit for moving a cutter forward/backward and upward/downward, and an upper movable unit for controlling the rotation of the cutting unit. A pile is cut by projecting the cutter with a cylinder whose base is joined to a cutter driver, and then moving the rotating cutter upward. The amounts of movement of these parts are detected by sensors, and an arbitrary range of the underground portion of a pile can be cut by remote control. 10 figs., 1 tab.

  10. Direct Extraction of InP/GaAsSb/InP DHBT Equivalent-Circuit Elements From S-Parameters Measured at Cut-Off and Normal Bias Conditions

    DEFF Research Database (Denmark)

    Johansen, Tom Keinicke; Leblanc, Rémy; Poulain, Julien

    2016-01-01

    A unique direct parameter extraction method for the small-signal equivalent-circuit model of InP/GaAsSb/InP double heterojunction bipolar transistors (DHBTs) is presented. $S$-parameters measured at cut-off bias are used, at first, to extract the distribution factor $X_{0}$ for the base-collector capacitance at zero collector current and the collector-to-emitter overlap capacitance $C_{ceo}$ present in InP DHBT devices. Low-frequency $S$-parameters measured at normal bias conditions then allow the extraction of the external access resistances $R_{bx}$, $R_{e}$, and $R_{cx}$ as well as the intrinsic...

  11. Evaluation of normalization methods in mammalian microRNA-Seq data

    Science.gov (United States)

    Garmire, Lana Xia; Subramaniam, Shankar

    2012-01-01

    Simple total tag count normalization is inadequate for microRNA sequencing data generated from the next generation sequencing technology. However, so far a systematic evaluation of normalization methods on microRNA sequencing data has been lacking. We comprehensively evaluate seven commonly used normalization methods including global normalization, Lowess normalization, the Trimmed Mean Method (TMM), quantile normalization, scaling normalization, variance stabilization, and the invariant method. We assess these methods on two individual experimental data sets with the empirical statistical metrics of mean square error (MSE) and the Kolmogorov-Smirnov (K-S) statistic. Additionally, we evaluate the methods with results from quantitative PCR validation. Our results consistently show that Lowess normalization and quantile normalization perform the best, whereas TMM, a method applied to RNA-Sequencing normalization, performs the worst. The poor performance of TMM normalization is further evidenced by abnormal results from the test of differential expression (DE) of microRNA-Seq data. Compared with the choice of model used for DE, the choice of normalization method is the primary factor affecting the results of DE. In summary, Lowess normalization and quantile normalization are recommended for normalizing microRNA-Seq data, whereas the TMM method should be used with caution. PMID:22532701
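
    Quantile normalization, one of the two best performers in this evaluation, forces every sample (column) to share the same empirical distribution; a numpy sketch with a synthetic count matrix (ties handled crudely):

        import numpy as np

        counts = np.array([[5., 4., 3.],
                           [2., 1., 4.],
                           [3., 4., 6.],
                           [4., 2., 8.]])                   # features x samples
        ranks = counts.argsort(axis=0).argsort(axis=0)      # per-column ranks
        mean_sorted = np.sort(counts, axis=0).mean(axis=1)  # reference distribution
        normalized = mean_sorted[ranks]
        print(normalized)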

  12. Design of Batch Distillation Columns Using Short-Cut Method at Constant Reflux

    Directory of Open Access Journals (Sweden)

    Asteria Narvaez-Garcia

    2013-01-01

    A short-cut method for batch distillation columns working at constant reflux was applied to solve a problem of four components that needed to be separated and purified to a mole fraction of 0.97 or better. Distillation columns with 10, 20, 30, 40, and 50 theoretical stages were used; the reflux ratio was varied between 2 and 20. Three quality indexes were used and compared: Luyben's capacity factor, total annual cost, and annual profit. The best combinations of theoretical stages and reflux ratio were obtained for each method. It was found that the best combinations always required reflux ratios close to the minimum. Overall, annual profit was the best quality index, while the best combination was a distillation column with 30 stages and reflux ratios of 2.0 for the separation of benzene (i), 5.0 for the separation of toluene (ii), and 20 for the separation of ethylbenzene (iii) and purification of o-xylene (iv).

  13. A One-Sample Test for Normality with Kernel Methods

    OpenAIRE

    Kellner, Jérémie; Celisse, Alain

    2015-01-01

    We propose a new one-sample test for normality in a Reproducing Kernel Hilbert Space (RKHS). Namely, we test the null-hypothesis of belonging to a given family of Gaussian distributions. Hence our procedure may be applied either to test data for normality or to test parameters (mean and covariance) if data are assumed Gaussian. Our test is based on the same principle as the MMD (Maximum Mean Discrepancy) which is usually used for two-sample tests such as homogeneity or independence testing. O...
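
    The MMD principle the test builds on can be sketched by comparing the sample with draws from a Gaussian fitted to it, calibrated by permutations; this rough 1D illustration is not the paper's RKHS construction.

        import numpy as np

        def rbf(a, b, gamma=0.5):
            return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

        def mmd2(x, y):   # (biased) squared maximum mean discrepancy estimate
            return rbf(x, x).mean() + rbf(y, y).mean() - 2.0 * rbf(x, y).mean()

        rng = np.random.default_rng(1)
        x = rng.exponential(size=200)                 # clearly non-Gaussian sample
        y = rng.normal(x.mean(), x.std(), size=200)   # Gaussian with fitted moments

        stat = mmd2(x, y)
        pooled = np.concatenate([x, y])
        null = []
        for _ in range(200):                          # permutation null distribution
            rng.shuffle(pooled)
            null.append(mmd2(pooled[:200], pooled[200:]))
        print("p-value ~", np.mean(np.array(null) >= stat))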

  14. α-Cut method based importance measure for criticality analysis in fuzzy probability – Based fault tree analysis

    International Nuclear Information System (INIS)

    Purba, Julwan Hendry; Sony Tjahyani, D.T.; Widodo, Surip; Tjahjono, Hendro

    2017-01-01

    Highlights: • FPFTA deals with epistemic uncertainty using fuzzy probability. • Criticality analysis is important for reliability improvement. • An α-cut method based importance measure is proposed for criticality analysis in FPFTA. • The α-cut method based importance measure utilises α-cut multiplication, α-cut subtraction, and the area defuzzification technique. • Benchmarking confirms that the proposed method is feasible for criticality analysis in FPFTA. -- Abstract: Fuzzy probability-based fault tree analysis (FPFTA) has recently been developed and proposed to deal with the limitations of conventional fault tree analysis. In FPFTA, the reliabilities of basic events, intermediate events and the top event are characterized by fuzzy probabilities. Furthermore, the quantification of the FPFTA is based on the fuzzy multiplication rule and the fuzzy complementation rule to propagate uncertainties from the basic events to the top event. Since the objective of fault tree analysis is to improve the reliability of the system being evaluated, it is necessary to find the weakest path in the system. For this purpose, criticality analysis can be implemented. Various importance measures based on conventional probabilities have been developed and proposed for criticality analysis in fault tree analysis. However, none of those importance measures can be applied for criticality analysis in FPFTA, which is based on fuzzy probability. To be fully applied in nuclear power plant probabilistic safety assessment, FPFTA needs to have its corresponding importance measure. The objective of this study is to develop an α-cut method based importance measure to evaluate and rank the importance of basic events for criticality analysis in FPFTA. To demonstrate the applicability of the proposed measure, a case study is performed and its results are then benchmarked against the results generated by four well-known importance measures in conventional fault tree analysis. The results
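
    The α-cut multiplication and subtraction named in the highlights amount to interval arithmetic at each membership level; a minimal sketch for triangular fuzzy probabilities (illustrative values; the area defuzzification step is omitted):

        import numpy as np

        def alpha_cut(tri, alpha):
            # Triangular fuzzy number (a, m, b): α-cut is [a + α(m-a), b - α(b-m)].
            a, m, b = tri
            return np.array([a + alpha * (m - a), b - alpha * (b - m)])

        def cut_mul(u, v):   # interval multiplication (non-negative endpoints here)
            return np.array([u[0] * v[0], u[1] * v[1]])

        def cut_sub(u, v):   # interval subtraction: [u_lo - v_hi, u_hi - v_lo]
            return np.array([u[0] - v[1], u[1] - v[0]])

        p1, p2 = (0.01, 0.02, 0.03), (0.02, 0.04, 0.06)  # fuzzy basic-event probabilities
        for alpha in (0.0, 0.5, 1.0):
            print(alpha, cut_mul(alpha_cut(p1, alpha), alpha_cut(p2, alpha)))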

  15. A systematic evaluation of normalization methods in quantitative label-free proteomics.

    Science.gov (United States)

    Välikangas, Tommi; Suomi, Tomi; Elo, Laura L

    2018-01-01

    To date, mass spectrometry (MS) data remain inherently biased as a result of reasons ranging from sample handling to differences caused by the instrumentation. Normalization is the process that aims to account for the bias and make samples more comparable. The selection of a proper normalization method is a pivotal task for the reliability of the downstream analysis and results. Many normalization methods commonly used in proteomics have been adapted from the DNA microarray techniques. Previous studies comparing normalization methods in proteomics have focused mainly on intragroup variation. In this study, several popular and widely used normalization methods representing different strategies in normalization are evaluated using three spike-in and one experimental mouse label-free proteomic data sets. The normalization methods are evaluated in terms of their ability to reduce variation between technical replicates, their effect on differential expression analysis and their effect on the estimation of logarithmic fold changes. Additionally, we examined whether normalizing the whole data globally or in segments for the differential expression analysis has an effect on the performance of the normalization methods. We found that variance stabilization normalization (Vsn) reduced variation the most between technical replicates in all examined data sets. Vsn also performed consistently well in the differential expression analysis. Linear regression normalization and local regression normalization performed also systematically well. Finally, we discuss the choice of a normalization method and some qualities of a suitable normalization method in the light of the results of our evaluation.

  16. The continuous cut-off method and the relativistic scattering of spin-1/2 particles

    International Nuclear Information System (INIS)

    Dolinszky, T.

    1979-07-01

    A high energy formula, obtained in the framework of the continuous cut-off approach, is shown to improve the correctness of the standard phase shift expression for Dirac scattering by two orders of magnitude in energy. (author)

  17. A new method of ergonomic testing of gloves protecting against cuts and stabs during knife use.

    Science.gov (United States)

    Irzmańska, Emilia; Tokarski, Tomasz

    2017-05-01

    The paper presents a new method of ergonomic evaluation of gloves protecting against cuts and stabs during knife use, consisting of five manual dexterity tests. Two of them were selected based on the available literature and relevant safety standards, and three were developed by the authors. All of the tests were designed to simulate occupational tasks associated with meat processing as performed by the gloved hand in actual workplaces. The tests involved the three most common types of protective gloves (knitted gloves made of a coverspun yarn, metal mesh gloves, and metal mesh gloves with an ergonomic polyurethane tightener) and were conducted on a group of 20 males. The loading on the muscles of the upper limb (adductor pollicis, flexor carpi ulnaris, flexor carpi radialis, and biceps brachii) was measured using surface electromyography. For the obtained muscle activity values, correlations were found between the glove type and loading of the upper limb. ANOVA showed that the activity of all muscles differed significantly between the five tests. A relationship between glove types and electromyographic results was confirmed at a significance level of α = 0.05.

  18. A non-linear branch and cut method for solving discrete minimum compliance problems to global optimality

    DEFF Research Database (Denmark)

    Stolpe, Mathias; Bendsøe, Martin P.

    2007-01-01

    This paper presents some initial results pertaining to a search for globally optimal solutions to a challenging benchmark example proposed by Zhou and Rozvany. This means that we are dealing with global optimization of the classical single load minimum compliance topology design problem with a fixed finite element discretization and with discrete design variables. Global optimality is achieved by the implementation of some specially constructed convergent nonlinear branch and cut methods, based on the use of natural relaxations and by applying strengthening constraints (linear valid inequalities) and cuts.

  19. A mathematical model and an approximate method for calculating the fracture characteristics of nonmetallic materials during laser cutting

    Energy Technology Data Exchange (ETDEWEB)

    Smorodin, F.K.; Druzhinin, G.V.

    1991-01-01

    A mathematical model is proposed which describes the fracture behavior of amorphous materials during laser cutting. The model, which is based on boundary layer equations, is reduced to ordinary differential equations with the corresponding boundary conditions. The reduced model is used to develop an approximate method for calculating the fracture characteristics of nonmetallic materials.

  20. Study by the disco method of critical components of a P.W.R. normal feedwater system

    International Nuclear Information System (INIS)

    Duchemin, B.; Villeneuve, M.J. de; Vallette, F.; Bruna, J.G.

    1983-03-01

    The objective of the DISCO (Determination of Importance Sensitivity of COmponents) method is to rank the components of a system in order to identify the most important ones with respect to availability. This method uses the fault tree description of the system and the cut set technique. It ranks the components by ordering the importances attributed to each one. The DISCO method was applied to the study of the 900 MWe P.W.R. normal feedwater system with insufficient flow in the steam generator. In order to take account of operating experience, several data banks were used and the results compared. This study made it possible to determine the most critical components (the turbo-pumps) and to propose and quantify modifications of the system in order to improve its availability
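
    A cut-set-based importance ranking of this general kind can be sketched with the Fussell-Vesely measure under the rare-event approximation; the cut sets and probabilities below are illustrative, not the feedwater-system data.

        # Rank components by the share of system unavailability carried by the
        # minimal cut sets that contain them (Fussell-Vesely importance).
        cut_sets = [{"pumpA", "pumpB"}, {"valve"}, {"pumpA", "control"}]
        p = {"pumpA": 1e-2, "pumpB": 1e-2, "valve": 1e-4, "control": 5e-3}

        def cs_prob(cs):
            q = 1.0
            for c in cs:
                q *= p[c]
            return q

        q_top = sum(cs_prob(cs) for cs in cut_sets)   # rare-event approximation
        for comp in sorted(p):
            fv = sum(cs_prob(cs) for cs in cut_sets if comp in cs) / q_top
            print(f"{comp:8s} FV importance = {fv:.3f}")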

  1. Method for Friction Force Estimation on the Flank of Cutting Tools

    Directory of Open Access Journals (Sweden)

    Luis Huerta

    2017-01-01

    Friction forces are present in any machining process and can play an important role in the dynamics of the system. In the cutting process, friction is mainly present on the rake face and the flank of the tool. Although the friction acting on the rake face has the major influence, the flank friction can also become important and take part in the stability of the system. In this work, experimental identification of the friction on the flank is presented. The experimental determination was carried out by machining aluminum samples on a CNC lathe. As a result, two friction functions were obtained as functions of the cutting speed and the relative motion of the contact elements. Experiments using a worn and a new insert were carried out. Force and acceleration were recorded simultaneously and, from these results, different friction levels were observed depending on the cutting parameters, such as cutting speed, feed rate, and tool condition. Finally, a friction model for the flank friction is presented.

  2. An analytical method on the surface residual stress for the cutting tool orientation

    Science.gov (United States)

    Li, Yueen; Zhao, Jun; Wang, Wei

    2010-03-01

    In the experiments reported in this paper, the residual stress was measured for 8 cutting tool orientations when cutting H13 die steel by high-speed machining (HSM). The measured data show that the residual stress varies periodically with the rake angle (β) and side rake angle (θ), and further study finds that the cutting tool orientations are closely related to the residual stresses. Since the machined surface residual stress originates from the cutting force and the axial force, a simple tool-workpiece force model can be constructed, from which a residual stress model can be deduced; this model makes it feasible to calculate the magnitude of the residual stress. Almost all of the measured residual stresses are compressive, and the magnitude and direction of the compressive stress can be confirmed from the input data for H13 on HSM. As a result, the residual stress model is the key to optimizing the rake angle (β) and side rake angle (θ) in theory, and with this theory more of the cutting mechanism can be explained.

  3. Welding and cutting

    International Nuclear Information System (INIS)

    Drews, P.; Schulze Frielinghaus, W.

    1978-01-01

    This is a survey, with 198 literature references, of the papers published in the fields of welding and cutting within the last three years. The subjects dealt with are: weldability of the materials - welding methods - thermal cutting - shaping and calculation of welded joints - environmental protection in welding and cutting. (orig.)

  4. Impact of statistical learning methods on the predictive power of multivariate normal tissue complication probability models

    NARCIS (Netherlands)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A.; van t Veld, Aart A.

    2012-01-01

    PURPOSE: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. METHODS AND MATERIALS: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator

  5. On a computer implementation of the block Gauss–Seidel method for normal systems of equations

    OpenAIRE

    Alexander I. Zhdanov; Ekaterina Yu. Bogdanova

    2016-01-01

    This article focuses on a modification of the block variant of the Gauss-Seidel method for normal systems of equations, which is a sufficiently effective method for solving generally overdetermined systems of linear algebraic equations of high dimensionality. The main disadvantage of methods based on normal systems of equations is the fact that the condition number of the normal system is equal to the square of the condition number of the original problem. This fact has a negative impact on the rate o...
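
    For orientation, a plain point (rather than block) Gauss-Seidel sweep on the normal equations A^T A x = A^T b looks as follows; the block modification the article develops is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.normal(size=(20, 5))          # overdetermined system
        b = rng.normal(size=20)
        M, c = A.T @ A, A.T @ b               # normal equations (SPD matrix)

        x = np.zeros(5)
        for _ in range(500):                  # Gauss-Seidel iterations
            for i in range(5):
                x[i] = (c[i] - M[i, :i] @ x[:i] - M[i, i + 1:] @ x[i + 1:]) / M[i, i]

        # agrees with the least-squares solution of the original problem
        print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0], atol=1e-6))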

  6. Fabricating 40 µm-thin silicon solar cells with different orientations by using SLiM-cut method

    Science.gov (United States)

    Wang, Teng-Yu; Chen, Chien-Hsun; Shiao, Jui-Chung; Chen, Sung-Yu; Du, Chen-Hsun

    2017-10-01

    Thin silicon foils with different crystal orientations were fabricated using the stress-induced lift-off (SLiM-cut) method. The thickness of the silicon foils was approximately 40 µm. The ⟨…⟩ foil had a smoother surface than the ⟨…⟩ foil. With surface passivation, the minority carrier lifetimes of the ⟨…⟩ and ⟨…⟩ silicon foils were 1.0 µs and 1.6 µs, respectively. In this study, 4 cm2 thin silicon solar cells with heterojunction structures were fabricated. The energy conversion efficiencies were determined to be 10.74% and 14.74% for the ⟨…⟩ and ⟨…⟩ solar cells, respectively. The surface quality of the silicon foils was determined to affect the solar cell characteristics. This study demonstrated that fabricating solar cells using silicon foil obtained from the SLiM-cut method is feasible.

  7. Correlation- and covariance-supported normalization method for estimating orthodontic trainer treatment for clenching activity.

    Science.gov (United States)

    Akdenur, B; Okkesum, S; Kara, S; Günes, S

    2009-11-01

    In this study, electromyography signals sampled from children undergoing orthodontic treatment were used to estimate the effect of an orthodontic trainer on the anterior temporal muscle. A novel data normalization method, called the correlation- and covariance-supported normalization method (CCSNM), based on correlation and covariance between features in a data set, is proposed to provide predictive guidance to the orthodontic technique. The method was tested in two stages: first, data normalization using the CCSNM; second, prediction of normalized values of anterior temporal muscles using an artificial neural network (ANN) with a Levenberg-Marquardt learning algorithm. The data set consists of electromyography signals from right anterior temporal muscles, recorded from 20 children aged 8-13 years with class II malocclusion. The signals were recorded at the start and end of a 6-month treatment. In order to train and test the ANN, two-fold cross-validation was used. The CCSNM was compared with four normalization methods: minimum-maximum normalization, z score, decimal scaling, and line base normalization. In order to demonstrate the performance of the proposed method, prevalent performance measures were examined: the mean square error and mean absolute error as mathematical methods, together with the statistical relation factor R2 and the average deviation. The results show that the CCSNM was the best of the normalization methods examined for estimating the effect of the trainer.

  8. Current Observational Constraints to Holographic Dark Energy Model with New Infrared cut-off via Markov Chain Monte Carlo Method

    OpenAIRE

    Wang, Yuting; Xu, Lixin

    2010-01-01

    In this paper, the holographic dark energy model with new infrared (IR) cut-off, for both the flat case and the non-flat case, is confronted with the combined constraints of current cosmological observations: type Ia supernovae, baryon acoustic oscillations, the current cosmic microwave background, and observational Hubble data. By utilizing the Markov Chain Monte Carlo (MCMC) method, we obtain the best fit values of the parameters with $1\sigma, 2\sigma$ errors in the flat model: $\Omega_{b}h...
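
    The MCMC machinery itself is generic; a random-walk Metropolis sketch with a stand-in Gaussian likelihood (not the actual SNe/BAO/CMB/H(z) likelihoods) shows how such chains yield best-fit values with errors.

        import numpy as np

        rng = np.random.default_rng(2)
        data_mean, data_sigma = 0.27, 0.03        # illustrative "measurement"

        def log_like(omega):
            if not 0.0 < omega < 1.0:             # flat prior on (0, 1)
                return -np.inf
            return -0.5 * ((omega - data_mean) / data_sigma) ** 2

        chain, omega = [], 0.5
        for _ in range(20000):
            prop = omega + rng.normal(0.0, 0.02)  # random-walk proposal
            if np.log(rng.uniform()) < log_like(prop) - log_like(omega):
                omega = prop                      # accept
            chain.append(omega)

        samples = np.array(chain[5000:])          # discard burn-in
        print(f"best fit ~ {samples.mean():.3f} +/- {samples.std():.3f}")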

  9. Evaluation of Combined Disinfection Methods for Reducing Escherichia coli O157:H7 Population on Fresh-Cut Vegetables.

    Science.gov (United States)

    Petri, Eva; Rodríguez, Mariola; García, Silvia

    2015-07-23

    Most current disinfection strategies for the fresh-cut industry are focused on the use of different chemical agents; however, very little has been reported on the effectiveness of the hurdle technology. The effect of combined decontamination methods based on the use of different sanitizers (peroxyacetic acid and chlorine dioxide) and the application of pressure (vacuum/positive pressure) on the inactivation of the foodborne pathogen E. coli O157:H7 on fresh-cut lettuce (Lactuca sativa) and carrots (Daucus carota) was studied. Fresh produce, inoculated with E. coli O157:H7, was immersed (4 °C, 2 min) in tap water (W), chlorine water (CW), chlorine dioxide (ClO2: 2 mg/L) and peroxyacetic acid (PAA: 100 mg/L) in combination with: (a) vacuum (V: 10 mbar) or (b) positive pressure application (P: 3 bar). The product quality and antimicrobial effects of the treatment on bacterial counts were determined both in process washing water and on fresh-cut produce. Evidence obtained in this study suggests that the use of combined methods (P/V + sanitizers) results in a reduction in the microorganism population on produce similar to that found at atmospheric pressure. Moreover, the application of physical methods led to a significant detrimental effect on the visual quality of lettuce regardless of the solution used. Concerning the process water, PAA proved to be an effective alternative to chlorine for the avoidance of cross-contamination.

  10. Evaluation of Combined Disinfection Methods for Reducing Escherichia coli O157:H7 Population on Fresh-Cut Vegetables

    Directory of Open Access Journals (Sweden)

    Eva Petri

    2015-07-01

    Most current disinfection strategies for the fresh-cut industry are focused on the use of different chemical agents; however, very little has been reported on the effectiveness of the hurdle technology. The effect of combined decontamination methods based on the use of different sanitizers (peroxyacetic acid and chlorine dioxide) and the application of pressure (vacuum/positive pressure) on the inactivation of the foodborne pathogen E. coli O157:H7 on fresh-cut lettuce (Lactuca sativa) and carrots (Daucus carota) was studied. Fresh produce, inoculated with E. coli O157:H7, was immersed (4 °C, 2 min) in tap water (W), chlorine water (CW), chlorine dioxide (ClO2: 2 mg/L) and peroxyacetic acid (PAA: 100 mg/L) in combination with: (a) vacuum (V: 10 mbar) or (b) positive pressure application (P: 3 bar). The product quality and antimicrobial effects of the treatment on bacterial counts were determined both in process washing water and on fresh-cut produce. Evidence obtained in this study suggests that the use of combined methods (P/V + sanitizers) results in a reduction in the microorganism population on produce similar to that found at atmospheric pressure. Moreover, the application of physical methods led to a significant detrimental effect on the visual quality of lettuce regardless of the solution used. Concerning the process water, PAA proved to be an effective alternative to chlorine for the avoidance of cross-contamination.

  11. New component-based normalization method to correct PET system models

    International Nuclear Information System (INIS)

    Kinouchi, Shoko; Miyoshi, Yuji; Suga, Mikio; Yamaya, Taiga; Yoshida, Eiji; Nishikido, Fumihiko; Tashima, Hideaki

    2011-01-01

    Normalization correction is necessary to obtain high-quality reconstructed images in positron emission tomography (PET). There are two basic types of normalization methods: the direct method and component-based methods. The former suffers from the problem that a huge count number in the blank scan data is required. Therefore, the latter methods have been proposed to obtain normalization coefficients of high statistical accuracy with a small count number in the blank scan data. In iterative image reconstruction methods, on the other hand, the quality of the obtained reconstructed images depends on the system modeling accuracy. Therefore, the normalization weighting approach, in which normalization coefficients are directly applied to the system matrix instead of a sinogram, has been proposed. In this paper, we propose a new component-based normalization method to correct system model accuracy. In the proposed method, two components are defined and calculated iteratively in such a way as to minimize errors of system modeling. To compare the proposed method and the direct method, we applied both methods to our small OpenPET prototype system. We achieved acceptable statistical accuracy of normalization coefficients while reducing the count number of the blank scan data to one-fortieth of that required in the direct method. (author)

  12. PROCESS CAPABILITY ESTIMATION FOR NON-NORMALLY DISTRIBUTED DATA USING ROBUST METHODS - A COMPARATIVE STUDY

    Directory of Open Access Journals (Sweden)

    Yerriswamy Wooluru

    2016-06-01

    Process capability indices (PCIs) are very important process quality assessment tools in the automotive industry. The common PCIs Cp, Cpk and Cpm are widely used in practice. The use of these PCIs is based on the assumption that the process is in control and its output is normally distributed. In practice, normality is not always fulfilled. Indices developed based on the normality assumption are very sensitive to non-normal processes. When the distribution of a product quality characteristic is non-normal, Cp and Cpk indices calculated using conventional methods often lead to erroneous interpretation of process capability. In the literature, various methods have been proposed for surrogate process capability indices under non-normality, but few literature sources offer a comprehensive evaluation and comparison of their ability to capture true capability in non-normal situations. In this paper, five methods are reviewed and a capability evaluation is carried out for data pertaining to the resistivity of silicon wafers. The final results revealed that the Burr-based percentile method is better than the Clements method. Modelling of non-normal data and the Box-Cox transformation method using statistical software (Minitab 14) provide reasonably good results, as they are very promising methods for non-normal and moderately skewed data (skewness <= 1.5).
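
    The percentile idea behind the Clements- and Burr-based indices is to replace the 6-sigma span with the 0.135th-99.865th percentile span of a fitted non-normal distribution; a sketch with an illustrative lognormal sample and assumed specification limits:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        x = rng.lognormal(mean=0.0, sigma=0.4, size=500)   # skewed "resistivity" data
        LSL, USL = 0.3, 3.0                                # assumed spec limits

        shape, loc, scale = stats.lognorm.fit(x, floc=0)
        p_lo, p_med, p_hi = stats.lognorm.ppf([0.00135, 0.5, 0.99865],
                                              shape, loc, scale)

        Cp  = (USL - LSL) / (p_hi - p_lo)
        Cpk = min((USL - p_med) / (p_hi - p_med), (p_med - LSL) / (p_med - p_lo))
        print(f"percentile-based Cp = {Cp:.2f}, Cpk = {Cpk:.2f}")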

  13. A Cutting Pattern Recognition Method for Shearers Based on Improved Ensemble Empirical Mode Decomposition and a Probabilistic Neural Network

    Directory of Open Access Journals (Sweden)

    Jing Xu

    2015-10-01

    In order to guarantee the stable operation of shearers and promote the construction of automatic coal mining working faces, an online cutting pattern recognition method with high accuracy and speed based on Improved Ensemble Empirical Mode Decomposition (IEEMD) and a Probabilistic Neural Network (PNN) is proposed. An industrial microphone is installed on the shearer and the cutting sound is collected as the recognition criterion to overcome the disadvantages of traditional detectors: large size, contact measurement and low identification rate. To avoid end-point effects and get rid of undesirable intrinsic mode function (IMF) components in the initial signal, IEEMD is applied to the sound. End-point continuation based on the practical storage data is performed first to overcome the end-point effect. Next, the average correlation coefficient, calculated from the correlation of the first IMF with the others, is introduced to select the essential IMFs. Then the energy and standard deviation of the remaining IMFs are extracted as features, and PNN is applied to classify the cutting patterns. Finally, a simulation example, with an accuracy of 92.67%, and an industrial application prove the efficiency and correctness of the proposed method.
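
    The classification stage can be sketched as follows: IMF energy and standard deviation as features, and a Parzen-window Gaussian-kernel PNN as the classifier. The IEEMD decomposition itself is not reproduced, and all data below are synthetic.

        import numpy as np

        def features(imfs):
            # energy and standard deviation of each retained IMF (one IMF per row)
            return np.concatenate([(imfs ** 2).sum(axis=1), imfs.std(axis=1)])

        def pnn_predict(x, train_X, train_y, sigma=0.5):
            # PNN: average Gaussian kernel per class, pick the largest score.
            scores = {}
            for label in np.unique(train_y):
                d2 = ((train_X[train_y == label] - x) ** 2).sum(axis=1)
                scores[label] = np.exp(-d2 / (2 * sigma ** 2)).mean()
            return max(scores, key=scores.get)

        # Synthetic feature vectors standing in for features(imfs) of labeled
        # cutting-sound segments: two well-separated cutting patterns.
        rng = np.random.default_rng(4)
        train_X = rng.normal(size=(40, 6)) + np.repeat([[0.0], [3.0]], 20, axis=0)
        train_y = np.repeat([0, 1], 20)
        print(pnn_predict(train_X[25] + 0.1, train_X, train_y))   # -> 1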

  14. Improvement of a separation method for the reduction of secondary waste from the water-jet abrasive suspension cutting technique

    International Nuclear Information System (INIS)

    Brandauer, M.; Gentes, S.; Heneka, A.; Krauss, C.O.; Geckeis, H.; Plaschke, M.; Schild, D.; Tobie, W.

    2017-01-01

    Full text of publication follows. Disassembling the reactor pressure vessel and its built-in components is a huge challenge in the deconstruction of a nuclear power plant. After being exposed to neutron irradiation for years, the activated components need to be disassembled and packed by remote controlled techniques. Underwater disassembling systems have the advantage of the shielding effect of water against radiation. To avoid the generation of aerosols, cold cutting processes are preferred. A cutting method that meets these requirements is the water-jet abrasive suspension cutting technique (WASS). This method provides high flexibility and is insensitive to mechanical stress in the components. During the cutting process, a mixture of abrasive particles and radioactive steel particles from the cut components is generated. Depending on the operational conditions, the amount of this secondary waste increases substantially. Therefore, despite its intrinsic technical benefits, WASS has a serious disadvantage compared with other cutting techniques due to the huge disposal costs of secondary waste. During our previous joint research project between KIT and AREVA GmbH called NENAWAS ('New Disposal Methods for the Secondary Waste Treatment of the Water-jet Abrasive Suspension Cutting Technique', funded by the German ministry for education and research, BMBF), a prototype separation device for WASS secondary waste was developed and tested. Using a magnetic filter, steel particles could be successfully separated from the rest of the secondary waste. The separation process was examined using elemental analysis (ICP-OES) to quantify the separation grade. Additionally, the morphologies of particles and particle aggregates before and after the separation process were examined by scanning electron microscopy (SEM). In the abrasive particle fraction remaining after separation of the steel particles, contamination by tiny steel particles could still be detected by elemental and

  15. Longitudinal Cut Method Revisited: A Survey on the Main Error Sources

    OpenAIRE

    Moriconi, Alessandro; Lalli, Francesco; Di Felice, Fabio; Esposito, Pier Giorgio; Piscopia, Rodolfo

    2000-01-01

    Some of the main error sources in wave pattern resistance determination were investigated. The experimental data obtained at the Italian Ship Model Basin (longitudinal wave cuts concerned with the steady motion of the Series 60 model and a hard-chine catamaran) were analyzed. It was found that, within the range of Froude numbers tested (0.225 ≤ Fr ≤ 0.345 for the Series 60 and 0.5 ≤ Fr ≤ 1 for the catamaran) two sources of uncertainty play a significant role: (i) the p...

  16. Global optimization of discrete truss topology design problems using a parallel cut-and-branch method

    DEFF Research Database (Denmark)

    Rasmussen, Marie-Louise Højlund; Stolpe, Mathias

    2008-01-01

    The subject of this article is solving discrete truss topology optimization problems with local stress and displacement constraints to global optimum. We consider a formulation based on the Simultaneous ANalysis and Design (SAND) approach. This intrinsically non-convex problem is reformulated ... the physics, and the cuts (Combinatorial Benders’ and projected Chvátal–Gomory) come from an understanding of the particular mathematical structure of the reformulation. The impact of a stronger representation is investigated on several truss topology optimization problems in two and three dimensions.

  17. Alternative normalization methods demonstrate widespread cortical hypometabolism in untreated de novo Parkinson's disease

    DEFF Research Database (Denmark)

    Berti, Valentina; Polito, C; Borghammer, Per

    2012-01-01

    Recent studies suggested that conventional data normalization procedures may not always be valid, and demonstrated that alternative normalization strategies better allow detection of low magnitude changes. We hypothesized that these alternative normalization procedures would disclose more widespread metabolic alterations in de novo PD. METHODS: [18F]FDG PET scans of 26 untreated de novo PD patients (Hoehn & Yahr stage I-II) and 21 age-matched controls were compared using voxel-based analysis. Normalization was performed using gray matter (GM) and white matter (WM) reference regions and Yakushev normalization. RESULTS: Compared to GM normalization, the WM and Yakushev normalization procedures disclosed much larger cortical regions of relative hypometabolism in the PD group, with extensive involvement of frontal and parieto-temporal-occipital cortices and several subcortical structures. Furthermore...

  18. Method of distilling machine-cut peat and other finely divided material

    Energy Technology Data Exchange (ETDEWEB)

    1942-03-03

    An apparatus arrangement is given for the dry distillation of machine-cut peat and similar materials in particle or powder form, consisting of a tunnel oven through which the material is led and in which it is heated with the help of the gases generated in the process. These gases are circulated through the interior of the oven, and heat exchangers installed outside the oven are flushed by hot combustion gases. A rotatable shaft mounted through the tunnel oven carries paddles which lift the material and let it fall, so that the particles make good contact with the circulating gases without offering too much resistance to the gas flow.

  20. Radioactive wear measurements of cutting tools made of metal in cutting aluminium alloys

    International Nuclear Information System (INIS)

    Frevert, E.

    1977-01-01

    The possibility of quickly checking the inhomogeneities of turning materials with radioactive wear measurements has been tested. After activation analysis of the long-lived radioisotopes of cutting tools made of hard metal, a method for loss-free collection of the turnings was developed. The detection limit of the abrasion is about 10 -8 g, and the measuring times are 5-10 minutes. Special radiation protection measures are not necessary. An analysis of the abrasion showed that at the beginning of cutting the amount of cobalt is 6 times higher than in the normal composition of the used cutting tools. (author)

  1. Strength on cut edge and ground edge glass beams with the failure analysis method

    Directory of Open Access Journals (Sweden)

    Stefano Agnetti

    2013-10-01

    The aim of this work is to study the effect of edge finishing on glass when it has a structural function. Experimental investigations carried out on glass specimens are presented. Various series of annealed glass beams were tested, with cut edges and with ground edges. The glass specimens were tested in four-point bending, with flaw detection performed on the tested specimens after failure in order to determine the glass strength. As a result, bending strength values were obtained for each specimen. By determining physical parameters such as the depth of the flaw and the mirror radius of the fracture after the failure of a glass element, it is possible to calculate the failure strength of that element. The experimental results were analyzed with LEFM theory, and the glass strength was analyzed statistically using a two-parameter Weibull distribution, which fitted the failure stress data quite well. The results obtained constitute a validation of the theoretical models and show the influence of edge processing on the failure strength of the glass. Furthermore, series with different sizes were tested in order to evaluate the size effect.
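
    A two-parameter Weibull fit of failure stresses is commonly done by the median-rank linearisation ln(-ln(1-F)) = m ln(s) - m ln(s0); the sketch below uses illustrative stress values, not the article's measurements.

        import numpy as np

        stress = np.sort(np.array([38., 42., 45., 47., 51., 53., 58., 62.]))  # MPa
        n = len(stress)
        F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # median-rank estimator

        x = np.log(stress)
        y = np.log(-np.log(1.0 - F))
        m, c = np.polyfit(x, y, 1)                    # slope = Weibull modulus m
        s0 = np.exp(-c / m)                           # characteristic strength
        print(f"Weibull modulus m ~ {m:.1f}, characteristic strength ~ {s0:.1f} MPa")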

  2. Treatment of Petroleum Drill Cuttings Using Stabilization/Solidification Method by Cement and Modified Clay Mixes

    Directory of Open Access Journals (Sweden)

    Soroush Ghasemi

    2017-04-01

    The high organic content of petroleum drill cuttings is a substantial obstacle which hinders cement hydration and subsequently decreases the clean-up efficiency of the stabilization/solidification (S/S) process. In this study, a modified clayey soil (montmorillonite) with low to moderate polarity was used as an additive to cement. Because of its high adsorption capacity, the clay is capable of mitigating the destructive role of organic materials and preventing their interference with the hydration process. Mixes containing different ratios of cement, waste and modified clay were prepared and tested for their mechanical and chemical characteristics. The total petroleum hydrocarbons (TPH) and Pb content of the samples were analyzed as well. For this purpose, the mixes were subjected to unconfined compressive strength (UCS) and toxicity characteristic leaching procedure (TCLP) tests. The results indicated that the specimens with 28-day curing time at a cement/waste ratio of 25% or higher (w/w) and 10% modified clay (w/w) met the Environmental Protection Agency (EPA) criterion for compressive strength. Moreover, a reduction of 94% in the leaching of TPH was observed for the specimens undergoing the TCLP with a cement/waste ratio of 30% (w/w) and a clay/waste ratio of 30% (w/w). Finally, the specimens with 30% cement/waste and 10% clay/waste ratios showed the lowest concentration (6.14%) of leached Pb.

  3. High hydrostatic pressure as a method to preserve fresh-cut Hachiya persimmons: A structural approach.

    Science.gov (United States)

    Vázquez-Gutiérrez, José Luis; Quiles, Amparo; Vonasek, Erica; Jernstedt, Judith A; Hernando, Isabel; Nitin, Nitin; Barrett, Diane M

    2016-12-01

    The "Hachiya" persimmon is the most common astringent cultivar grown in California and it is rich in tannins and carotenoids. Changes in the microstructure and some physicochemical properties during high hydrostatic pressure processing (200-400 MPa, 3 min, 25 ℃) and subsequent refrigerated storage were analyzed in this study in order to evaluate the suitability of this non-thermal technology for preservation of fresh-cut Hachiya persimmons. The effects of high-hydrostatic pressure treatment on the integrity and location of carotenoids and tannins during storage were also analyzed. Significant changes, in particular diffusion of soluble compounds which were released as a result of cell wall and membrane damage, were followed using confocal microscopy. The high-hydrostatic pressure process also induced changes in physicochemical properties, e.g. electrolyte leakage, texture, total soluble solids, pH and color, which were a function of the amount of applied hydrostatic pressure and may affect the consumer acceptance of the product. Nevertheless, the results indicate that the application of 200 MPa could be a suitable preservation treatment for Hachiya persimmon. This treatment seems to improve carotenoid extractability and tannin polymerization, which could improve functionality and remove astringency of the fruit, respectively. © The Author(s) 2016.

  4. Normalization Methods and Selection Strategies for Reference Materials in Stable Isotope Analyses - Review

    International Nuclear Information System (INIS)

    Skrzypek, G.; Sadler, R.; Paul, D.; Forizs, I.

    2011-01-01

    A stable isotope analyst has to make a number of important decisions regarding how best to determine the 'true' stable isotope composition of analysed samples in reference to an international scale: which reference materials should be used, how many reference materials and how many repetitions of each standard are most appropriate for a desired level of precision, and which normalization procedure should be selected. In this paper we summarise what is known about the propagation of uncertainties associated with normalization procedures and with the reference materials used as anchors for the determination of 'true' values for δ¹³C and δ¹⁸O. Normalization methods: Several normalization methods transforming the 'raw' value obtained from mass spectrometers to one of the internationally recognized scales have been developed. However, as summarised by Paul et al., different normalization transforms alone may lead to inconsistencies between laboratories. The most common normalization procedures are: single-point anchoring (versus working gas and certified reference standard), modified single-point normalization, linear shift between the measured and the true isotopic composition of two certified reference standards, and two-point and multipoint linear normalization methods. The accuracy of these various normalization methods has been compared using analytical laboratory data by Paul et al., with single-point normalization and normalization versus tank calibrations resulting in the largest normalization errors, which also exceed the analytical uncertainty recommended for δ¹³C. The normalization error depends greatly on the relative differences between the stable isotope composition of the reference material and the sample. On the other hand, the normalization methods using two or more certified reference standards produce a smaller normalization error, if the reference materials are bracketing the whole range of
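
    For concreteness, the two-point linear normalization mentioned above can be written compactly (a sketch of the general form, using generic symbols rather than the notation of the cited review): a measured raw value is mapped onto the international scale by the line through two certified reference materials, RM1 and RM2.

```latex
% Two-point normalization: anchor the raw scale to two certified
% reference materials with certified ("true") and measured ("raw") values.
\delta_{\mathrm{true}} = a\,\delta_{\mathrm{raw}} + b,
\qquad
a = \frac{\delta^{\mathrm{RM1}}_{\mathrm{true}} - \delta^{\mathrm{RM2}}_{\mathrm{true}}}
         {\delta^{\mathrm{RM1}}_{\mathrm{raw}} - \delta^{\mathrm{RM2}}_{\mathrm{raw}}},
\qquad
b = \delta^{\mathrm{RM1}}_{\mathrm{true}} - a\,\delta^{\mathrm{RM1}}_{\mathrm{raw}}
```

    This form makes the bracketing recommendation concrete: the slope a is fixed by the two anchors, so any error in it is amplified linearly as the sample's δ value moves away from the interval the anchors span.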

  5. Selecting between-sample RNA-Seq normalization methods from the perspective of their assumptions.

    Science.gov (United States)

    Evans, Ciaran; Hardin, Johanna; Stoebel, Daniel M

    2017-02-27

    RNA-Seq is a widely used method for studying the behavior of genes under different biological conditions. An essential step in an RNA-Seq study is normalization, in which raw data are adjusted to account for factors that prevent direct comparison of expression measures. Errors in normalization can have a significant impact on downstream analysis, such as inflated false positives in differential expression analysis. An underemphasized feature of normalization is the assumptions on which the methods rely and how the validity of these assumptions can have a substantial impact on the performance of the methods. In this article, we explain how assumptions provide the link between raw RNA-Seq read counts and meaningful measures of gene expression. We examine normalization methods from the perspective of their assumptions, as an understanding of methodological assumptions is necessary for choosing methods appropriate for the data at hand. Furthermore, we discuss why normalization methods perform poorly when their assumptions are violated and how this causes problems in subsequent analysis. To analyze a biological experiment, researchers must select a normalization method with assumptions that are met and that produces a meaningful measure of expression for the given experiment.

  6. Application of normal form methods to the analysis of resonances in particle accelerators

    International Nuclear Information System (INIS)

    Davies, W.G.

    1992-01-01

    The transformation to normal form in a Lie-algebraic framework provides a very powerful method for identifying and analysing non-linear behaviour and resonances in particle accelerators. The basic ideas are presented and illustrated. (author). 4 refs

  7. Drug Use Normalization: A Systematic and Critical Mixed-Methods Review.

    Science.gov (United States)

    Sznitman, Sharon R; Taubman, Danielle S

    2016-09-01

    Drug use normalization, which is a process whereby drug use becomes less stigmatized and more accepted as normative behavior, provides a conceptual framework for understanding contemporary drug issues and changes in drug use trends. Through a mixed-methods systematic review of the normalization literature, this article seeks to (a) critically examine how the normalization framework has been applied in empirical research and (b) make recommendations for future research in this area. Twenty quantitative, 26 qualitative, and 4 mixed-methods studies were identified through five electronic databases and reference lists of published studies. Studies were assessed for relevance, study characteristics, quality, and aspects of normalization examined. None of the studies applied the most rigorous research design (experiments) or examined all of the originally proposed normalization dimensions. The most commonly assessed dimension of drug use normalization was "experimentation." In addition to the original dimensions, the review identified the following new normalization dimensions in the literature: (a) breakdown of demographic boundaries and other risk factors in relation to drug use; (b) de-normalization; (c) drug use as a means to achieve normal goals; and (d) two broad forms of micro-politics associated with managing the stigma of illicit drug use: assimilative and transformational normalization. Further development in normalization theory and methodology promises to provide researchers with a novel framework for improving our understanding of drug use in contemporary society. Specifically, quasi-experimental designs that are currently being made feasible by swift changes in cannabis policy provide researchers with new and improved opportunities to examine normalization processes.

  8. On a computer implementation of the block Gauss–Seidel method for normal systems of equations

    Directory of Open Access Journals (Sweden)

    Alexander I. Zhdanov

    2016-12-01

    Full Text Available This article focuses on a modification of the block Gauss–Seidel method for normal systems of equations, an effective method for solving generally overdetermined systems of linear algebraic equations of high dimensionality. The main disadvantage of methods based on normal systems of equations is that the condition number of the normal system is the square of the condition number of the original problem. This fact has a negative impact on the rate of convergence of iterative methods based on normal systems of equations. To increase the speed of convergence of such iterative methods when solving ill-conditioned problems, various preconditioners are currently used to reduce the condition number of the original system of equations. However, no universal preconditioner exists for all applications. One effective approach that improves the speed of convergence of the iterative Gauss–Seidel method for normal systems of equations is to use its block version. The disadvantage of the block Gauss–Seidel method for normal systems is that a pseudoinverse matrix must be computed at each iteration, and finding the pseudoinverse is a computationally expensive procedure. In this paper, we propose a procedure that replaces the pseudoinverse computation with the solution, by the Cholesky method, of the normal systems of equations arising at each iteration of the Gauss–Seidel method; these systems have a relatively low dimension compared to the original system. The results of numerical experiments demonstrating the effectiveness of the proposed approach are given.
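
    A minimal sketch of the idea (hypothetical variable names and block partitioning; the article's convergence analysis is more elaborate): each Gauss–Seidel sweep updates one block of unknowns by solving its small normal system with a Cholesky factorization instead of forming a pseudoinverse.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def block_gauss_seidel_normal(A, b, blocks, n_iter=200):
    """Block Gauss-Seidel for the normal equations of min ||Ax - b||_2.

    `blocks` is a list of column-index arrays partitioning the unknowns.
    Each block update solves the small SPD system Ai^T Ai x_i = Ai^T r
    by Cholesky, avoiding the pseudoinverse (a sketch of the article's idea).
    """
    x = np.zeros(A.shape[1])
    factors = [cho_factor(A[:, idx].T @ A[:, idx]) for idx in blocks]
    for _ in range(n_iter):
        for idx, fac in zip(blocks, factors):
            Ai = A[:, idx]
            r = b - A @ x + Ai @ x[idx]         # residual excluding block i
            x[idx] = cho_solve(fac, Ai.T @ r)   # low-dimensional normal system
    return x

# Toy overdetermined system with two column blocks
rng = np.random.default_rng(0)
A, b = rng.standard_normal((200, 10)), rng.standard_normal(200)
x = block_gauss_seidel_normal(A, b, [np.arange(5), np.arange(5, 10)])
print(np.linalg.norm(A.T @ (A @ x - b)))  # normal-equation residual, near zero
```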

  9. An assessment of current methods for stabilising steep cut slopes by ...

    African Journals Online (AJOL)

    This paper examines the various current methods that are being used throughout South Africa and assesses the merits of such methods. Furthermore, factors that have an overriding influence on the success of a particular method in a certain set of circumstances are analysed. Cost comparisons are produced to assist in ...

  10. Normalization method for metabolomics data using optimal selection of multiple internal standards

    Directory of Open Access Journals (Sweden)

    Yetukuri Laxman

    2007-03-01

    Full Text Available Abstract Background Success of metabolomics as the phenotyping platform largely depends on its ability to detect various sources of biological variability. Removal of platform-specific sources of variability such as systematic error is therefore one of the foremost priorities in data preprocessing. However, the chemical diversity of molecular species included in typical metabolic profiling experiments leads to different responses to variations in experimental conditions, making normalization a very demanding task. Results With the aim of removing unwanted systematic variation, we present an approach that utilizes variability information from multiple internal standard compounds to find an optimal normalization factor for each individual molecular species detected by the metabolomics approach (NOMIS). We demonstrate the method on mouse liver lipidomic profiles using Ultra Performance Liquid Chromatography coupled to high resolution mass spectrometry, and compare its performance to two commonly utilized normalization methods: normalization by l2 norm and by retention time region specific standard compound profiles. The NOMIS method proved superior in its ability to reduce the effect of systematic error across the full spectrum of metabolite peaks. We also demonstrate that the method can be used to select the best combinations of standard compounds for normalization. Conclusion Depending on experiment design and biological matrix, the NOMIS method is applicable either as a one-step normalization method or as a two-step method where the normalization parameters, influenced by the variabilities of internal standard compounds and their correlation to metabolites, are first calculated from a study conducted under repeatability conditions. The method can also be used in the analytical development of metabolomics methods by helping to select the best combinations of standard compounds for a particular biological matrix and analytical platform.
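
    Not the full NOMIS model, but a simplified illustration of the core idea of normalizing with several internal standards at once (hypothetical data layout; NOMIS additionally weights the standards by their correlation with each metabolite):

```python
import numpy as np

def normalize_by_internal_standards(X, is_cols):
    """Scale each sample by the geometric mean of its internal standards.

    X: (samples x features) matrix of peak intensities.
    is_cols: column indices of the internal standard compounds.
    A simplified stand-in for NOMIS, which additionally weights the
    standards by their variability and correlation to each metabolite.
    """
    logX = np.log(X)
    factors = logX[:, is_cols].mean(axis=1)   # per-sample log scale factor
    factors -= factors.mean()                 # preserve the overall scale
    return np.exp(logX - factors[:, None])

# Toy usage: 4 samples, 6 peaks, the last two columns are internal standards
rng = np.random.default_rng(1)
X = np.exp(rng.standard_normal((4, 6)) + rng.standard_normal((4, 1)))
Xn = normalize_by_internal_standards(X, is_cols=[4, 5])
print(np.median(X, axis=1).round(2))   # raw per-sample levels differ
print(np.median(Xn, axis=1).round(2))  # systematic sample offsets removed
```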

  11. The LS-STAG immersed boundary/cut-cell method for non-Newtonian flows in 3D extruded geometries

    Science.gov (United States)

    Nikfarjam, F.; Cheny, Y.; Botella, O.

    2018-05-01

    The LS-STAG method is an immersed boundary/cut-cell method for viscous incompressible flows based on the staggered MAC arrangement for Cartesian grids, in which the irregular boundary is sharply represented by its level-set function; this results in a significant gain in computer resources (wall time, memory usage) compared to commercial body-fitted CFD codes. The 2D version of the LS-STAG method is now well established (Cheny and Botella, 2010), and this paper presents its extension to 3D geometries with translational symmetry in the z direction (hereinafter called 3D extruded configurations). This intermediate step towards the fully 3D implementation can be applied to a wide variety of canonical flows and will be regarded as the keystone for the full 3D solver, since both discretization and implementation issues on distributed memory machines are tackled at this stage of development. The LS-STAG method is then applied to various Newtonian and non-Newtonian flows in 3D extruded geometries (axisymmetric pipe, circular cylinder, duct with an abrupt expansion) for which benchmark results and experimental data are available. The purpose of these investigations is (a) to investigate the formal order of accuracy of the LS-STAG method, (b) to assess the versatility of the method for flow applications at various regimes (Newtonian and shear-thinning fluids, steady and unsteady laminar to turbulent flows), and (c) to compare its performance with well-established numerical methods (body-fitted and immersed boundary methods).

  12. Cutting cleaner

    International Nuclear Information System (INIS)

    Elsen, R.P.H. van; Smits, M.

    1991-01-01

    This paper presents the results of a long term field test of the Cutting Cleaner, which is used for the treatment of wet oil contaminated cuttings (WOCC) produced when drilling with Oil Based Mud (OBM). It was concluded that it is possible to reduce the oil content of cuttings to an average of 1 - 2%. The recovered base oil can be reused to make new oil based mud

  13. Effect of Various Management Methods of Apical Flower Bud on Cut Flower Quality in Three Cultivars of Greenhouse Roses

    Directory of Open Access Journals (Sweden)

    mansour matloobi

    2017-02-01

    Full Text Available Introduction: In greenhouse roses, canopy management has received considerable attention and emphasis during the past decades. It was recognized that improving canopy shape by implementing techniques such as stem bending and flower bud removal can strongly affect the marketable quality of cut roses. For most growers, the best method of flower bud treatment has not yet been described and determined physiologically. This experiment was designed to answer some questions related to this problem. Materials and Methods: A plastic commercial cut rose greenhouse was selected to carry out the trial. Three greenhouse rose cultivars, namely Eros, Cherry Brandy and Dancing Queen, were selected as the first factor, and three methods of flower bud treatment along with bending types were chosen as the second factor. Cuttings were taken from mother plants and rooted under mist conditions. The first shoot emerging from the cutting was treated at the pea-bud stage by one of the following methods: shoot bending at the stem base with the bud intact, immediate shoot bending at the stem base after removing the flower bud, and shoot bending at the stem base two weeks after flower bud removal. Some marketable stem properties, including stem length, diameter and weight, and characteristics related to bud growth potential were measured, and the data were then subjected to statistical analysis. Results and Discussion: Analysis of variance showed that the cultivars differ in their marketable features. Cherry Brandy produced longer cut flowers with greater stem diameter compared to the two other cultivars. This cultivar was also good in the stem weight trait; however, its difference from Eros was not significant. Dancing Queen did not perform well in producing high-quality stems on the whole. Regarding the number of days until bud release and growth, Cherry Brandy’s buds took the fewest days to begin growing. In many studies, the effect of cultivar on rose shoot growth quality has been documented and explained. For instance

  14. Effect of Various Management Methods of Apical Flower Bud on Cut Flower Quality in Three Cultivars of Greenhouse Roses

    Directory of Open Access Journals (Sweden)

    mansour matloobi

    2017-09-01

    Full Text Available Introduction: In greenhouse roses, canopy management has received considerable attention and emphasis during the past decades. It was recognized that improving canopy shape by implementing techniques such as stem bending and flower bud removal can strongly affect the marketable quality of cut roses. For most growers, the best method of flower bud treatment has not yet been described and determined physiologically. This experiment was designed to answer some questions related to this problem. Materials and Methods: A plastic commercial cut rose greenhouse was selected to carry out the trial. Three greenhouse rose cultivars, namely Eros, Cherry Brandy and Dancing Queen, were selected as the first factor, and three methods of flower bud treatment along with bending types were chosen as the second factor. Cuttings were taken from mother plants and rooted under mist conditions. The first shoot emerging from the cutting was treated at the pea-bud stage by one of the following methods: shoot bending at the stem base with the bud intact, immediate shoot bending at the stem base after removing the flower bud, and shoot bending at the stem base two weeks after flower bud removal. Some marketable stem properties, including stem length, diameter and weight, and characteristics related to bud growth potential were measured, and the data were then subjected to statistical analysis. Results and Discussion: Analysis of variance showed that the cultivars differ in their marketable features. Cherry Brandy produced longer cut flowers with greater stem diameter compared to the two other cultivars. This cultivar was also good in the stem weight trait; however, its difference from Eros was not significant. Dancing Queen did not perform well in producing high-quality stems on the whole. Regarding the number of days until bud release and growth, Cherry Brandy’s buds took the fewest days to begin growing. In many studies, the effect of cultivar on rose shoot growth quality has been documented and explained. For instance

  15. Comparison of normalization methods for the analysis of metagenomic gene abundance data.

    Science.gov (United States)

    Pereira, Mariana Buongermino; Wallroth, Mikael; Jonsson, Viktor; Kristiansson, Erik

    2018-04-20

    In shotgun metagenomics, microbial communities are studied through direct sequencing of DNA without any prior cultivation. By comparing gene abundances estimated from the generated sequencing reads, functional differences between the communities can be identified. However, gene abundance data is affected by high levels of systematic variability, which can greatly reduce the statistical power and introduce false positives. Normalization, which is the process where systematic variability is identified and removed, is therefore a vital part of the data analysis. A wide range of normalization methods for high-dimensional count data has been proposed, but their performance on the analysis of shotgun metagenomic data has not been evaluated. Here, we present a systematic evaluation of nine normalization methods for gene abundance data. The methods were evaluated through resampling of three comprehensive datasets, creating a realistic setting that preserved the unique characteristics of metagenomic data. Performance was measured in terms of the methods' ability to identify differentially abundant genes (DAGs), correctly calculate unbiased p-values and control the false discovery rate (FDR). Our results showed that the choice of normalization method has a large impact on the end results. When the DAGs were asymmetrically present between the experimental conditions, many normalization methods had a reduced true positive rate (TPR) and a high false positive rate (FPR). The methods trimmed mean of M-values (TMM) and relative log expression (RLE) had the overall highest performance and are therefore recommended for the analysis of gene abundance data. For larger sample sizes, CSS also showed satisfactory performance. This study emphasizes the importance of selecting a suitable normalization method in the analysis of data from shotgun metagenomics. Our results also demonstrate that improper methods may result in unacceptably high levels of false positives, which in turn may lead
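
    For orientation, the recommended RLE method reduces to a few lines of median-of-ratios arithmetic (a sketch of the idea, not the exact edgeR/DESeq implementation):

```python
import numpy as np

def rle_size_factors(counts):
    """Relative log expression (median-of-ratios) size factors.

    counts: (genes x samples) array of raw counts. Each sample is
    compared against a pseudo-reference (the per-gene geometric mean
    across samples), using only genes observed in every sample.
    """
    mask = (counts > 0).all(axis=1)              # genes with no zero counts
    logc = np.log(counts[mask])
    ref = logc.mean(axis=1, keepdims=True)       # log geometric mean reference
    return np.exp(np.median(logc - ref, axis=0)) # per-sample size factors

counts = np.array([[10, 20, 12], [100, 210, 95], [5, 9, 6], [0, 3, 1]])
sf = rle_size_factors(counts)
normalized = counts / sf                         # sequencing-depth adjusted
print(sf.round(3))
```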

  16. Automatic NC-Data generation method for 5-axis cutting of turbine-blades by finding Safe heel-angles and adaptive path-intervals

    International Nuclear Information System (INIS)

    Piao, Cheng Dao; Lee, Cheol Soo; Cho, Kyu Zong; Park, Gwang Ryeol

    2004-01-01

    In this paper, an efficient method for generating 5-axis cutting data for a turbine blade is presented. Interference elimination in 5-axis cutting is currently very complicated and time-consuming. The proposed method can generate an interference-free tool path within an allowance range. Generating the cutting data at each point of the cutting process, and using it to obtain NC data by calculating the feed rate, allows the proper feed rate of the 5-axis machine to be maintained. This paper includes the algorithms for: (1) CL data generation by detecting an interference-free heel angle, (2) finding the optimal tool path interval considering the cusp height (see the relation sketched below), (3) finding the adaptive feed rate values for each cutter path, and (4) the inverse kinematics, depending on the structure of the 5-axis machine, for generating the NC data
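
    The relation behind step (2) for a ball-end cutter on a locally flat surface is the standard geometric one (the textbook form, not necessarily the paper's exact formulation; the adaptive rule refines the interval per path):

```latex
% Ball-end cutter of radius R, path interval (stepover) s,
% resulting cusp (scallop) height h on a locally flat surface:
h = R - \sqrt{R^{2} - \left(\tfrac{s}{2}\right)^{2}}
  \;\approx\; \frac{s^{2}}{8R},
\qquad
s \approx 2\sqrt{2\,R\,h_{\max}} \quad \text{for a prescribed } h_{\max}.
```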

  17. Design of Normal Concrete Mixtures Using Workability-Dispersion-Cohesion Method

    OpenAIRE

    Qasrawi, Hisham

    2016-01-01

    The workability-dispersion-cohesion method is a newly proposed method for the design of normal concrete mixes. The method uses special coefficients called workability-dispersion and workability-cohesion factors. These coefficients relate workability to the mobility and stability of the concrete mix. The coefficients are obtained from special charts depending on mix requirements and aggregate properties. The method is practical because it covers various types of aggregates that may not be within sta...

  18. THE METHOD OF CONSTRUCTING A BOOLEAN FORMULA OF A POLYGON IN THE DISJUNCTIVE NORMAL FORM

    Directory of Open Access Journals (Sweden)

    A. A. Butov

    2014-01-01

    Full Text Available The paper focuses on finalizing the method of finding a polygon Boolean formula in disjunctive normal form, described in the previous article [1]. An improved method eliminates the drawback associated with the existence of a class of problems for which the solution is only approximate. The proposed method always makes it possible to find an exact solution. The method can be used, in particular, in systems for the computer-aided design of integrated circuit topology.

  19. A new normalization method based on electrical field lines for electrical capacitance tomography

    International Nuclear Information System (INIS)

    Zhang, L F; Wang, H X

    2009-01-01

    Electrical capacitance tomography (ECT) is considered to be one of the most promising process tomography techniques. Image reconstruction for ECT is an inverse problem: finding the spatially distributed permittivities in a pipe. Usually, the capacitance measurements obtained from the ECT system are normalized at the high and low permittivity for image reconstruction. The parallel normalization model, which assumes that the materials are distributed in parallel, is commonly used during the normalization process; the normalized capacitance is then a linear function of the measured capacitance. A more recently used model is the series normalization model, in which the normalized capacitance is a nonlinear function of the measured capacitance. The newest model is based on electric field centre lines (EFCL) and is a mixture of the two normalization models; its multi-threshold method is presented in this paper. Sensitivity matrices based on the different normalization models were obtained, and image reconstruction was carried out accordingly. Simulation results indicate that reconstructed images with higher quality can be obtained based on the presented model
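
    In the standard notation of the ECT literature (C_m the measured inter-electrode capacitance, C_l and C_h the calibration values at low and high permittivity), the two baseline models read:

```latex
% Parallel model: normalized capacitance linear in C_m
\lambda_{\mathrm{parallel}} = \frac{C_m - C_l}{C_h - C_l}
\qquad
% Series model: linear in the reciprocal capacitances,
% hence nonlinear in C_m itself
\lambda_{\mathrm{series}} = \frac{C_m^{-1} - C_l^{-1}}{C_h^{-1} - C_l^{-1}}
```

    The EFCL-based model of the paper then mixes these two forms according to its multi-threshold rule.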

  20. New method for computing ideal MHD normal modes in axisymmetric toroidal geometry

    International Nuclear Information System (INIS)

    Wysocki, F.; Grimm, R.C.

    1984-11-01

    Analytic elimination of the two magnetic surface components of the displacement vector permits the normal mode ideal MHD equations to be reduced to a scalar form. A Galerkin procedure, similar to that used in the PEST codes, is implemented to determine the normal modes computationally. The method retains the efficient stability capabilities of the PEST 2 energy principle code, while allowing computation of the normal mode frequencies and eigenfunctions, if desired. The procedure is illustrated by comparison with earlier versions of PEST and by application to tilting modes in spheromaks, and to stable discrete Alfvén waves in tokamak geometry

  1. A statistical analysis of count normalization methods used in positron-emission tomography

    International Nuclear Information System (INIS)

    Holmes, T.J.; Ficke, D.C.; Snyder, D.L.

    1984-01-01

    As part of the Positron-Emission Tomography (PET) reconstruction process, annihilation counts are normalized for photon absorption, detector efficiency and detector-pair duty-cycle. Several normalization methods of time-of-flight and conventional systems are analyzed mathematically for count bias and variance. The results of the study have some implications on hardware and software complexity and on image noise and distortion

  2. The analysis of carbohydrates in milk powder by a new "heart-cutting" two-dimensional liquid chromatography method.

    Science.gov (United States)

    Ma, Jing; Hou, Xiaofang; Zhang, Bing; Wang, Yunan; He, Langchong

    2014-03-01

    In this study, a new "heart-cutting" two-dimensional liquid chromatography method for the simultaneous determination of carbohydrate contents in milk powder was presented. In this two-dimensional liquid chromatography system, a Venusil XBP-C4 analysis column was used in the first dimension ((1)D) as a pre-separation column, and a ZORBAX carbohydrates analysis column was used in the second dimension ((2)D) as a final-analysis column. The whole process was completed in less than 35 min without a particular sample preparation procedure. The capability of the new two-dimensional HPLC method was demonstrated in the determination of carbohydrates in various brands of milk powder samples. A conventional one-dimensional chromatography method was also proposed. The two proposed methods were both validated in terms of linearity, limits of detection, accuracy and precision. The comparison between the results obtained with the two methods showed that the new and completely automated two-dimensional liquid chromatography method is more suitable for milk powder samples because of the online cleanup effect involved.

  3. Accuracy of unfolded map method for determining the left ventricular border. Evaluation of the cut-off value from autopsy finding

    International Nuclear Information System (INIS)

    Sugibayashi, Keiichi; Abe, Yoshiteru; Suga, Yutaka

    1996-01-01

    To improve the quantification of the left ventricular surface area (LVSA) by the unfolded map method, we evaluated the cut-off value for determining the left ventricular border. The LVSA measured by unfolded map was compared with values measured using a myocardial phantom and autopsy findings. The relative error (RE) was calculated as the difference between the LVSA of the phantom and the area of the unfolded map. In the phantom study, the cut-off value was calculated as 73.3±0.5% when the RE was zero. In the autopsy study, the cut-off value was 74.0±7.2%. The area of the unfolded map correlated well with the LVSA at autopsy when the cut-off value was 74% (r=0.83, p<0.003). The diameter of the left ventricle at autopsy was compared with that of the beating heart obtained by two-dimensional echocardiography, because the area of the unfolded map was greater than the LVSA at autopsy. The ratio of the LVSA of the beating heart to that at autopsy was calculated as 1.37. The suitable cut-off value was evaluated as 55.6% when the unfolded map area obtained at autopsy was magnified 1.37 times. There was a good correlation between the LVSA of the unfolded map (cut-off=56%) and the LVSA at autopsy (r=0.90, p<0.001). These results suggest that the cut-off value for determining the left ventricular border in vivo is 56%. (author)

  4. A rapid reprocessing method for electron microscopy of histological sections embedded in paraffin

    International Nuclear Information System (INIS)

    Hernandez Chavarri, F.; Vargas Montero, M.; Rivera, P.; Carranza, A.

    2000-01-01

    A simple and rapid method is described for reprocessing light microscopy paraffin sections to observe them under transmission electron microscopy (TEM) and scanning electron microscopy (SEM). The paraffin-embedded tissue is sectioned and deparaffinized in toluene, then exposed to osmium vapor under microwave irradiation using a domestic microwave oven. The tissues are then embedded in epoxy resin, polymerized, and ultrathin-sectioned. The method requires a relatively short time (about 30 minutes for TEM and 15 for SEM) and produces ultrastructure of reasonable quality for diagnostic purposes. (Author)

  5. A single method for recovery and concentration of enteric viruses and bacteria from fresh-cut vegetables.

    Science.gov (United States)

    Sánchez, G; Elizaquível, P; Aznar, R

    2012-01-03

    Fresh-cut vegetables are prone to contamination with foodborne pathogens during growth, harvest, transport and further processing and handling. As most of these products are generally eaten raw or mildly treated, there is an increase in the number of outbreaks caused by viruses and bacteria associated with fresh vegetables. Foodborne pathogens are usually present at very low levels and have to be concentrated (i.e. viruses) or enriched (i.e. bacteria) to enhance their detection. With this aim, a rapid concentration method has been developed for the simultaneous recovery of hepatitis A virus (HAV), norovirus (NV), murine norovirus (MNV) as a surrogate for NV, Escherichia coli O157:H7, Listeria monocytogenes and Salmonella enterica. Initial experiments focused on evaluating the elution conditions suitable for virus release from vegetables. Finally, elution with buffered peptone water (BPW) using a Pulsifier, and concentration by polyethylene glycol (PEG) precipitation, were the methods selected for the elution and concentration of both enteric viruses and bacteria from three different types of fresh-cut vegetables, with detection by quantitative PCR (qPCR) using specific primers. The average recoveries from inoculated parsley, spinach and salad were ca. 9.2%, 43.5% and 20.7% for NV, MNV and HAV, respectively. Detection limits were 132 RT-PCR units (PCRU) for NV, and 1.5 and 6.6 50% tissue culture infectious doses (TCID₅₀) for MNV and HAV, respectively. This protocol resulted in average recoveries of 57.4%, 64.5% and 64.6% in the three vegetables for E. coli O157:H7, L. monocytogenes and Salmonella, with corresponding detection limits of 10³, 10² and 10³ CFU/g, respectively. Based on these results, it can be concluded that the procedure herein is suitable to recover, detect and quantify enteric viruses and foodborne pathogenic bacteria within 5 h and can be applied for the simultaneous detection of both types of foodborne pathogens in fresh-cut vegetables.

  6. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van 't [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands)

    2012-03-15

    Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.

  7. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

    International Nuclear Information System (INIS)

    Xu Chengjian; Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van’t

    2012-01-01

    Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.

  8. Logging costs and production rates for the group selection cutting method

    Science.gov (United States)

    Philip M. McDonald

    1965-01-01

    Young-growth, mixed-conifer stands were logged by a group-selection method designed to create openings 30, 60, and 90 feet in diameter. Total costs for felling, limbing, bucking, and skidding on these openings ranged from $7.04 to $7.99 per thousand board feet. Cost differences between openings were not statistically significant. Logging costs for group selection...

  9. Datum Feature Extraction and Deformation Analysis Method Based on Normal Vector of Point Cloud

    Science.gov (United States)

    Sun, W.; Wang, J.; Jin, F.; Liang, Z.; Yang, Y.

    2018-04-01

    In order to address the lack of an applicable analysis method when applying three-dimensional laser scanning technology to deformation monitoring, an efficient method for extracting datum features and analysing deformation based on the normal vectors of a point cloud is proposed. First, a kd-tree is used to establish the topological relation. Datum points are detected by tracking the point cloud normal vectors, which are determined from the normal vectors of local planar fits. Then, cubic B-spline curve fitting is performed on the datum points. Finally, the datum elevation and the inclination angle of the radial point are calculated according to the fitted curve, and the deformation information is analysed. The proposed approach was verified on a real large-scale tank data set captured with a terrestrial laser scanner in a chemical plant. The results show that the method can obtain complete information about the monitored object quickly and comprehensively, and accurately reflect the deformation of the datum features.
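
    The first two steps (kd-tree neighbourhoods, normals from local planar fits) are standard and can be sketched as follows (parameter choices are illustrative, not those of the paper):

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=16):
    """Normal vector per point from a PCA plane fit of its k neighbours."""
    tree = cKDTree(points)                      # kd-tree topological relation
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        nbhd = points[nbrs] - points[nbrs].mean(axis=0)
        # The right singular vector of the smallest singular value is the
        # normal of the best-fitting local plane.
        _, _, vt = np.linalg.svd(nbhd, full_matrices=False)
        normals[i] = vt[-1]
    return normals

# Toy usage: a noisy horizontal plane -> normals near (0, 0, +/-1)
rng = np.random.default_rng(2)
pts = np.column_stack([rng.uniform(0, 1, 500), rng.uniform(0, 1, 500),
                       0.01 * rng.standard_normal(500)])
n = estimate_normals(pts)
print(np.abs(n[:, 2]).mean().round(3))  # close to 1
```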

  10. A laser-abrasive method for the cutting of enamel and dentin.

    Science.gov (United States)

    Altshuler, G B; Belikov, A V; Sinelnik, Y A

    2001-01-01

    This paper introduces a new method for the removal of hard dental tissue based upon the use of particles accelerated by laser irradiation, which the authors have called the laser-abrasive method. The particles used were sapphire, as a powder or an aqueous suspension. The effect of the products of enamel ablation was also investigated. The particles were accelerated using submillisecond pulses of Er:YAG and Nd:YAG lasers. A strobing CCD camera was used to measure the speed of the ejected particles. The additional contribution of these particles to the efficiency of laser ablation of enamel and dentin was also investigated. The results showed that the enamel particles produced by the beam-tissue interaction were also accelerated by this process of ablation and were effective in the removal of enamel and dentin. The use of an aqueous suspension of sapphire particles increased the efficiency of enamel removal threefold when compared with the use of an Er:YAG laser with water spray. The laser-abrasive method allowed for the removal of enamel and dentin at speeds approaching those of the high-speed turbine.

  11. The impact of sample non-normality on ANOVA and alternative methods.

    Science.gov (United States)

    Lantz, Björn

    2013-05-01

    In this journal, Zimmerman (2004, 2011) has discussed preliminary tests that researchers often use to choose an appropriate method for comparing locations when the assumption of normality is doubtful. The conceptual problem with this approach is that such a two-stage process makes both the power and the significance of the entire procedure uncertain, as type I and type II errors are possible at both stages. A type I error at the first stage, for example, will obviously increase the probability of a type II error at the second stage. Based on the idea of Schmider et al. (2010), which proposes that simulated sets of sample data be ranked with respect to their degree of normality, this paper investigates the relationship between population non-normality and sample non-normality with respect to the performance of the ANOVA, Brown-Forsythe test, Welch test, and Kruskal-Wallis test when used with different distributions, sample sizes, and effect sizes. The overall conclusion is that the Kruskal-Wallis test is considerably less sensitive to the degree of sample normality when populations are distinctly non-normal and should therefore be the primary tool used to compare locations when it is known that populations are not at least approximately normal.
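
    In practice the comparison comes down to a pair of library calls; a toy illustration with skewed (lognormal) samples, not the paper's simulation design:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Three distinctly non-normal (lognormal) groups, one shifted in location
groups = [rng.lognormal(mean=m, sigma=1.0, size=30) for m in (0.0, 0.0, 0.5)]

f_stat, p_anova = stats.f_oneway(*groups)   # classical one-way ANOVA
h_stat, p_kw = stats.kruskal(*groups)       # rank-based Kruskal-Wallis

print(f"ANOVA p = {p_anova:.4f}, Kruskal-Wallis p = {p_kw:.4f}")
# Under heavy skew, the rank-based test typically detects the shift
# at smaller sample sizes than the mean-based ANOVA.
```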

  12. A method for normalizing pathology images to improve feature extraction for quantitative pathology

    International Nuclear Information System (INIS)

    Tam, Allison; Barker, Jocelyn; Rubin, Daniel

    2016-01-01

    Purpose: With the advent of digital slide scanning technologies and the potential proliferation of large repositories of digital pathology images, many research studies can leverage these data for biomedical discovery and to develop clinical applications. However, quantitative analysis of digital pathology images is impeded by batch effects generated by varied staining protocols and staining conditions of pathological slides. Methods: To overcome this problem, this paper proposes a novel, fully automated stain normalization method to reduce batch effects and thus aid research in digital pathology applications. Their method, intensity centering and histogram equalization (ICHE), normalizes a diverse set of pathology images by first scaling the centroids of the intensity histograms to a common point and then applying a modified version of contrast-limited adaptive histogram equalization. Normalization was performed on two datasets of digitized hematoxylin and eosin (H&E) slides of different tissue slices from the same lung tumor, and one immunohistochemistry dataset of digitized slides created by restaining one of the H&E datasets. Results: The ICHE method was evaluated based on image intensity values, quantitative features, and the effect on downstream applications, such as a computer aided diagnosis. For comparison, three methods from the literature were reimplemented and evaluated using the same criteria. The authors found that ICHE not only improved performance compared with un-normalized images, but in most cases showed improvement compared with previous methods for correcting batch effects in the literature. Conclusions: ICHE may be a useful preprocessing step in a digital pathology image processing pipeline

  13. A method for normalizing pathology images to improve feature extraction for quantitative pathology

    Energy Technology Data Exchange (ETDEWEB)

    Tam, Allison [Stanford Institutes of Medical Research Program, Stanford University School of Medicine, Stanford, California 94305 (United States); Barker, Jocelyn [Department of Radiology, Stanford University School of Medicine, Stanford, California 94305 (United States); Rubin, Daniel [Department of Radiology, Stanford University School of Medicine, Stanford, California 94305 and Department of Medicine (Biomedical Informatics Research), Stanford University School of Medicine, Stanford, California 94305 (United States)

    2016-01-15

    Purpose: With the advent of digital slide scanning technologies and the potential proliferation of large repositories of digital pathology images, many research studies can leverage these data for biomedical discovery and to develop clinical applications. However, quantitative analysis of digital pathology images is impeded by batch effects generated by varied staining protocols and staining conditions of pathological slides. Methods: To overcome this problem, this paper proposes a novel, fully automated stain normalization method to reduce batch effects and thus aid research in digital pathology applications. Their method, intensity centering and histogram equalization (ICHE), normalizes a diverse set of pathology images by first scaling the centroids of the intensity histograms to a common point and then applying a modified version of contrast-limited adaptive histogram equalization. Normalization was performed on two datasets of digitized hematoxylin and eosin (H&E) slides of different tissue slices from the same lung tumor, and one immunohistochemistry dataset of digitized slides created by restaining one of the H&E datasets. Results: The ICHE method was evaluated based on image intensity values, quantitative features, and the effect on downstream applications, such as a computer aided diagnosis. For comparison, three methods from the literature were reimplemented and evaluated using the same criteria. The authors found that ICHE not only improved performance compared with un-normalized images, but in most cases showed improvement compared with previous methods for correcting batch effects in the literature. Conclusions: ICHE may be a useful preprocessing step in a digital pathology image processing pipeline.
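
    A rough sketch of the two ICHE stages on a single grayscale channel, using scikit-image's CLAHE as a stand-in for the authors' modified equalization (the published method differs in detail and operates on stain-specific channels):

```python
import numpy as np
from skimage import exposure

def iche_like(channel, target_center=0.5):
    """Intensity centering followed by adaptive histogram equalization.

    channel: float image in [0, 1]. First shift the histogram centroid
    to a common target point, then apply CLAHE - a stand-in for the
    modified equalization used by ICHE.
    """
    centroid = channel.mean()                             # histogram centroid
    shifted = np.clip(channel + (target_center - centroid), 0.0, 1.0)
    return exposure.equalize_adapthist(shifted, clip_limit=0.02)

rng = np.random.default_rng(4)
img = np.clip(rng.normal(0.3, 0.1, (64, 64)), 0, 1)   # a "dark batch" image
out = iche_like(img)
print(round(img.mean(), 2), round(out.mean(), 2))      # shifted toward target
```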

  14. Impact of statistical learning methods on the predictive power of multivariate normal tissue complication probability models.

    Science.gov (United States)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A; van't Veld, Aart A

    2012-03-15

    To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.
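
    The recommended LASSO approach amounts to L1-penalized logistic regression with cross-validated assessment; a sketch on synthetic data (the cohort and predictors are hypothetical, and the paper's repeated cross-validation scheme is more elaborate):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
# Synthetic cohort: dose/volume-like predictors, binary complication outcome
X = rng.standard_normal((200, 10))
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1]            # only two true predictors
y = (rng.uniform(size=200) < 1 / (1 + np.exp(-logit))).astype(int)

lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
auc = cross_val_score(lasso, X, y, cv=5, scoring="roc_auc").mean()
lasso.fit(X, y)
print(f"CV AUC = {auc:.2f}, nonzero coefficients = "
      f"{np.count_nonzero(lasso.coef_)}")        # sparse, interpretable model
```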

  15. A pseudospectra-based approach to non-normal stability of embedded boundary methods

    Science.gov (United States)

    Rapaka, Narsimha; Samtaney, Ravi

    2017-11-01

    We present a non-normal linear stability analysis of embedded boundary (EB) methods, employing pseudospectra and resolvent norms. The stability of the discrete linear wave equation is characterized in terms of the normalized distance of the EB to the nearest ghost node (α) in one and two dimensions. An important objective is that the CFL condition based on the Cartesian grid spacing remains unaffected by the EB. We consider various discretization methods, including both central and upwind-biased schemes. Stability is guaranteed when α ... (Work supported by Funds under Award No. URF/1/1394-01.)

  16. Chapter 10: Peak Demand and Time-Differentiated Energy Savings Cross-Cutting Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    Energy Technology Data Exchange (ETDEWEB)

    Kurnik, Charles W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Stern, Frank [Navigant, Boulder, CO (United States); Spencer, Justin [Navigant, Boulder, CO (United States)

    2017-10-03

    Savings from electric energy efficiency measures and programs are often expressed in terms of annual energy and presented as kilowatt-hours per year (kWh/year). However, for a full assessment of the value of these savings, it is usually necessary to consider the measure or program's impact on peak demand as well as time-differentiated energy savings. This cross-cutting protocol describes methods for estimating the peak demand and time-differentiated energy impacts of measures implemented through energy efficiency programs.

  17. Emulsions: the cutting edge of development in blasting agent technology - a method for economic comparison

    Energy Technology Data Exchange (ETDEWEB)

    Ayat, M.G.; Allen, S.G.

    1988-03-01

    This work examines the history and development of blasting agents, beginning with ANFO in the 1950s and concluding with a specific look at the blasting technology of the 1980s: the emulsion. Properties of emulsions and Emulsion Blend Explosive Systems are compared with ANFO, and a method of comparing their costs, useful for comparing any two explosives, is developed. Based on this comparison, the Emulsion Blend Explosive System is determined to be superior to ANFO on the basis of cost per unit of overburden broken. 4 refs.

  18. Analysis of a renormalization group method and normal form theory for perturbed ordinary differential equations

    Science.gov (United States)

    DeVille, R. E. Lee; Harkin, Anthony; Holzer, Matt; Josić, Krešimir; Kaper, Tasso J.

    2008-06-01

    For singular perturbation problems, the renormalization group (RG) method of Chen, Goldenfeld, and Oono [Phys. Rev. E 49 (1994) 4502-4511] has been shown to be an effective general approach for deriving reduced or amplitude equations that govern the long time dynamics of the system. It has been applied to a variety of problems traditionally analyzed using disparate methods, including the method of multiple scales, boundary layer theory, the WKBJ method, the Poincaré-Lindstedt method, the method of averaging, and others. In this article, we show how the RG method may be used to generate normal forms for large classes of ordinary differential equations. First, we apply the RG method to systems with autonomous perturbations, and we show that the reduced or amplitude equations generated by the RG method are equivalent to the classical Poincaré-Birkhoff normal forms for these systems up to and including terms of O(ε²), where ε is the perturbation parameter. This analysis establishes our approach and generalizes to higher order. Second, we apply the RG method to systems with nonautonomous perturbations, and we show that the reduced or amplitude equations so generated constitute time-asymptotic normal forms, which are based on KBM averages. Moreover, for both classes of problems, we show that the main coordinate changes are equivalent, up to translations between the spaces in which they are defined. In this manner, our results show that the RG method offers a new approach for deriving normal forms for nonautonomous systems, and it offers advantages since one can typically more readily identify resonant terms from naive perturbation expansions than from the nonautonomous vector fields themselves. Finally, we establish how well the solution to the RG equations approximates the solution of the original equations on time scales of O(1/ε).

  19. CALCULATION OF LASER CUTTING COSTS

    OpenAIRE

    Bogdan Nedic; Milan Eric; Marijana Aleksijevic

    2016-01-01

    The paper presents methods of metal cutting and the calculation of treatment costs, based on a model developed at the Faculty of Mechanical Engineering in Kragujevac. Based on the systematization and analysis of a large number of calculation models for cutting with unconventional methods, a mathematical model is derived and used to create software for calculating the costs of metal cutting. The software solution enables resolving the problem of calculating the cost of laser cutting, compar...

  20. Wet cutting

    Energy Technology Data Exchange (ETDEWEB)

    Hole, B. [IMC Technical Services (United Kingdom)

    1999-08-01

    Continuous miners create dust and methane problems in underground coal mining. Control has usually been achieved using ventilation techniques as experiments with water based suppression have led to flooding and electrical problems. Recent experience in the US has led to renewed interest in wet head systems. This paper describes tests of the Hydraphase system by IMC Technologies. Ventilation around the cutting zone, quenching of hot ignition sources, dust suppression, the surface trial gallery tests, the performance of the cutting bed, and flow of air and methane around the cutting head are reviewed. 1 ref., 2 figs., 2 photos.

  1. Cutting assembly

    Science.gov (United States)

    Racki, Daniel J.; Swenson, Clark E.; Bencloski, William A.; Wineman, Arthur L.

    1984-01-01

    A cutting apparatus includes a support table mounted for movement toward and away from a workpiece and carrying a mirror which directs a cutting laser beam onto the workpiece. A carrier is rotatably and pivotally mounted on the support table between the mirror and workpiece and supports a conduit discharging gas toward the point of impingement of the laser beam on the workpiece. Means are provided for rotating the carrier relative to the support table to place the gas discharging conduit in the proper positions for cuts made in different directions on the workpiece.

  2. The Impact of Normalization Methods on RNA-Seq Data Analysis

    Science.gov (United States)

    Zyprych-Walczak, J.; Szabelska, A.; Handschuh, L.; Górczak, K.; Klamecka, K.; Figlerowicz, M.; Siatkowski, I.

    2015-01-01

    High-throughput sequencing technologies, such as the Illumina Hi-seq, are powerful new tools for investigating a wide range of biological and medical problems. Massive and complex data sets produced by the sequencers create a need for development of statistical and computational methods that can tackle the analysis and management of data. The data normalization is one of the most crucial steps of data processing and this process must be carefully considered as it has a profound effect on the results of the analysis. In this work, we focus on a comprehensive comparison of five normalization methods related to sequencing depth, widely used for transcriptome sequencing (RNA-seq) data, and their impact on the results of gene expression analysis. Based on this study, we suggest a universal workflow that can be applied for the selection of the optimal normalization procedure for any particular data set. The described workflow includes calculation of the bias and variance values for the control genes, sensitivity and specificity of the methods, and classification errors as well as generation of the diagnostic plots. Combining the above information facilitates the selection of the most appropriate normalization method for the studied data sets and determines which methods can be used interchangeably. PMID:26176014

  3. Evaluation of directional normalization methods for Landsat TM/ETM+ over primary Amazonian lowland forests

    Science.gov (United States)

    Van doninck, Jasper; Tuomisto, Hanna

    2017-06-01

    Biodiversity mapping in extensive tropical forest areas poses a major challenge for the interpretation of Landsat images, because floristically clearly distinct forest types may show little difference in reflectance. In such cases, the effects of the bidirectional reflection distribution function (BRDF) can be sufficiently strong to cause erroneous image interpretation and classification. Since the opening of the Landsat archive in 2008, several BRDF normalization methods for Landsat have been developed. The simplest of these consist of an empirical view angle normalization, whereas more complex approaches apply the semi-empirical Ross-Li BRDF model and the MODIS MCD43-series of products to normalize directional Landsat reflectance to standard view and solar angles. Here we quantify the effect of surface anisotropy on Landsat TM/ETM+ images over old-growth Amazonian forests, and evaluate five angular normalization approaches. Even for the narrow swath of the Landsat sensors, we observed directional effects in all spectral bands. Those normalization methods that are based on removing the surface reflectance gradient as observed in each image were adequate to normalize TM/ETM+ imagery to nadir viewing, but were less suitable for multitemporal analysis when the solar vector varied strongly among images. Approaches based on the MODIS BRDF model parameters successfully reduced directional effects in the visible bands, but removed only half of the systematic errors in the infrared bands. The best results were obtained when the semi-empirical BRDF model was calibrated using pairs of Landsat observation. This method produces a single set of BRDF parameters, which can then be used to operationally normalize Landsat TM/ETM+ imagery over Amazonian forests to nadir viewing and a standard solar configuration.
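
    The simplest class of methods evaluated, empirical view-angle normalization, removes the across-track reflectance gradient observed in the image itself; a sketch (using across-track position as a proxy for view angle):

```python
import numpy as np

def view_angle_normalize(band, view_angle):
    """Remove the linear across-track gradient from one reflectance band.

    band: 2D reflectance array; view_angle: per-pixel view angle (or a
    proxy such as across-track column position). The gradient observed
    in the image itself is fitted and subtracted, normalizing to nadir.
    """
    slope, intercept = np.polyfit(view_angle.ravel(), band.ravel(), 1)
    return band - slope * view_angle      # nadir-equivalent reflectance

# Toy usage: flat forest reflectance plus an across-track gradient
rng = np.random.default_rng(6)
cols = np.tile(np.linspace(-7.5, 7.5, 200), (100, 1))   # pseudo view angle
band = 0.3 + 0.002 * cols + 0.005 * rng.standard_normal((100, 200))
print(view_angle_normalize(band, cols).std() < band.std())  # True
```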

  4. SYNTHESIS METHODS OF ALGEBRAIC NORMAL FORM OF MANY-VALUED LOGIC FUNCTIONS

    Directory of Open Access Journals (Sweden)

    A. V. Sokolov

    2016-01-01

    Full Text Available The rapid development of methods of error-correcting coding, cryptography, and signal synthesis theory based on the principles of many-valued logic determines the need for a more detailed study of the forms of representation of functions of many-valued logic. In particular, the algebraic normal form of Boolean functions, also known as the Zhegalkin polynomial, which describes many of the cryptographic properties of Boolean functions well, is widely used. In this article, we formalize the notion of the algebraic normal form for many-valued logic functions. We develop a fast method for synthesizing the algebraic normal form of 3-functions and 5-functions that works similarly to the Reed-Muller transform for Boolean functions, on the basis of recurrently synthesized transform matrices. We propose a hypothesis that determines the rules for synthesizing these matrices for the transformation from the truth table to the coefficients of the algebraic normal form, and for the inverse transform, for any given number of variables of 3-functions or 5-functions. The article also introduces the definition of the algebraic degree of nonlinearity of functions of many-valued logic and of S-boxes based on the principles of many-valued logic. The methods for synthesizing the algebraic normal form of 3-functions are thus applied to the known construction of recurrent synthesis of S-boxes of length N = 3^k, whereby their algebraic degrees of nonlinearity are computed. The results could be the basis for further theoretical research and practical applications such as the development of new cryptographic primitives, error-correcting codes, algorithms of data compression, signal structures, and algorithms of block and stream encryption, all based on the perspective principles of many-valued logic. In addition, the fast method of synthesis of the algebraic normal form of many-valued logic functions is the basis for their software and hardware implementation.
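
    For the two-valued (Boolean) case, the fast transform that the article generalizes is the classical Reed-Muller/Möbius butterfly; a compact sketch computing Zhegalkin coefficients from a truth table:

```python
def anf_coefficients(truth_table):
    """Zhegalkin (algebraic normal form) coefficients of a Boolean function.

    truth_table: list of 0/1 values of length 2**n, indexed by the input
    vector. The in-place XOR butterfly is the Boolean analogue of the
    recurrent matrix transform the article extends to 3- and 5-valued logic.
    """
    c = list(truth_table)
    n = len(c)
    step = 1
    while step < n:
        for i in range(n):
            if i & step:
                c[i] ^= c[i ^ step]   # Mobius (Reed-Muller) butterfly step
        step <<= 1
    return c

# x AND y: truth table [0, 0, 0, 1] -> single monomial x*y
print(anf_coefficients([0, 0, 0, 1]))   # [0, 0, 0, 1]
# x XOR y -> coefficients for the monomials x and y
print(anf_coefficients([0, 1, 1, 0]))   # [0, 1, 1, 0]
```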

  5. Silvicultural systems and cutting methods for ponderosa pine forests in the Front Range of the central Rocky Mountains

    Science.gov (United States)

    Robert R. Alexander

    1986-01-01

    Guidelines are provided to help forest managers and silviculturists develop even- and/or uneven-aged cutting practices needed to convert old-growth and mixed ponderosa pine forests in the Front Range into managed stands for a variety of resource needs. Guidelines consider stand conditions, and insect and disease susceptibility. Cutting practices are designed to...

  6. Discrimination methods of biological contamination on fresh-cut lettuce based on VNIR and NIR hyperspectral imaging

    Science.gov (United States)

    Multispectral imaging algorithms were developed using visible-near-infrared (VNIR) and near-infrared (NIR) hyperspectral imaging (HSI) techniques to detect worms on fresh-cut lettuce. The optimal wavebands that detect worm on fresh-cut lettuce for each type of HSI were investigated using the one-way...

  7. Short-cut math

    CERN Document Server

    Kelly, Gerard W

    1984-01-01

    Clear, concise compendium of about 150 time-saving math short-cuts features faster, easier ways to add, subtract, multiply, and divide. Each problem includes an explanation of the method. No special math ability needed.

  8. Algebraic method for analysis of nonlinear systems with a normal matrix

    International Nuclear Information System (INIS)

    Konyaev, Yu.A.; Salimova, A.F.

    2014-01-01

    A promising method has been proposed for analyzing a class of quasilinear nonautonomous systems of differential equations whose matrix can be represented as a sum of nonlinear normal matrices, which makes it possible to analyze stability without using Lyapunov functions. [ru]

  9. Method of normal coordinates in the formulation of a system with dissipation: The harmonic oscillator

    International Nuclear Information System (INIS)

    Mshelia, E.D.

    1994-07-01

    The method of normal coordinates of the theory of vibrations is used in decoupling the motion of n oscillators (1 ≤ n ≤ 4) representing intrinsic degrees of freedom coupled to collective motion in a quantum mechanical model that allows the determination of the probability for energy transfer from collective to intrinsic excitations in a dissipative system. (author). 21 refs

  10. A study of the up-and-down method for non-normal distribution functions

    DEFF Research Database (Denmark)

    Vibholm, Svend; Thyregod, Poul

    1988-01-01

    The assessment of breakdown probabilities is examined by the up-and-down method. The exact maximum-likelihood estimates for a number of response patterns are calculated for three different distribution functions and are compared with the estimates corresponding to the normal distribution. Estimates...

  11. An asymptotic expression for the eigenvalues of the normalization kernel of the resonating group method

    International Nuclear Information System (INIS)

    Lomnitz-Adler, J.; Brink, D.M.

    1976-01-01

    A generating function for the eigenvalues of the RGM Normalization Kernel is expressed in terms of the diagonal matrix elements of the GCM Overlap Kernel. An asymptotic expression for the eigenvalues is obtained by using the Method of Steepest Descent. (Auth.)

  12. Evaluation of normalization methods for cDNA microarray data by k-NN classification

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Wei; Xing, Eric P; Myers, Connie; Mian, Saira; Bissell, Mina J

    2004-12-17

    Non-biological factors give rise to unwanted variations in cDNA microarray data. There are many normalization methods designed to remove such variations. However, to date there have been few published systematic evaluations of these techniques for removing variations arising from dye biases in the context of downstream, higher-order analytical tasks such as classification. Ten location normalization methods that adjust spatial- and/or intensity-dependent dye biases, and three scale methods that adjust scale differences, were applied, individually and in combination, to five distinct, published, cancer biology-related cDNA microarray data sets. Leave-one-out cross-validation (LOOCV) classification error was employed as the quantitative end-point for assessing the effectiveness of a normalization method. In particular, a known classifier, k-nearest neighbor (k-NN), was estimated from data normalized using a given technique, and the LOOCV error rate of the ensuing model was computed. We found that k-NN classifiers are sensitive to dye biases in the data. Using NONRM and GMEDIAN as baseline methods, our results show that single-bias-removal techniques, which remove either the spatial-dependent dye bias (referred to later as the spatial effect) or the intensity-dependent dye bias (referred to later as the intensity effect), moderately reduce LOOCV classification errors, whereas double-bias-removal techniques, which remove both the spatial and the intensity effect, reduce LOOCV classification errors even further. Of the 41 different strategies examined, three two-step processes, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, all of which removed the intensity effect globally and the spatial effect locally, appear to reduce LOOCV classification errors most consistently and effectively across all data sets. We also found that the investigated scale normalization methods do not reduce LOOCV classification error. Using the LOOCV error of k-NNs as the evaluation criterion, the three double-bias-removal strategies thus performed best.
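    The evaluation criterion itself is easy to reproduce. Below is a minimal sketch, assuming scikit-learn is available; the normalization step is a stand-in, not one of the paper's ten methods, and the data are synthetic.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

def loocv_knn_error(X, y, k=3):
    """LOOCV error rate of a k-NN classifier; lower error after a given
    normalization suggests it removed more unwanted variation."""
    knn = KNeighborsClassifier(n_neighbors=k)
    accuracy = cross_val_score(knn, X, y, cv=LeaveOneOut()).mean()
    return 1.0 - accuracy

# Hypothetical comparison: raw expression matrix vs. a "normalized" one
rng = np.random.default_rng(0)
X_raw = rng.normal(size=(40, 200))                    # 40 arrays x 200 genes
y = np.repeat([0, 1], 20)                             # two tumour classes
X_norm = X_raw - X_raw.mean(axis=1, keepdims=True)    # stand-in for e.g. loess
print(loocv_knn_error(X_raw, y), loocv_knn_error(X_norm, y))
```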

  13. EMG normalization method based on grade 3 of manual muscle testing: Within- and between-day reliability of normalization tasks and application to gait analysis.

    Science.gov (United States)

    Tabard-Fougère, Anne; Rose-Dulcina, Kevin; Pittet, Vincent; Dayer, Romain; Vuillerme, Nicolas; Armand, Stéphane

    2018-02-01

    Electromyography (EMG) is an important parameter in Clinical Gait Analysis (CGA), and is generally interpreted together with the timing of activation. EMG amplitude comparisons between individuals, muscles or days require normalization. There is no consensus on existing methods. The gold standard, maximum voluntary isometric contraction (MVIC), is not adapted to pathological populations because patients are often unable to perform an MVIC. The normalization method inspired by the isometric grade 3 of manual muscle testing (isoMMT3), which is the ability of a muscle to maintain a position against gravity, could be an interesting alternative. The aim of this study was to evaluate the within- and between-day reliability of the isoMMT3 EMG normalization method during gait compared with the conventional MVIC method. Lower limb muscle EMG (gluteus medius, rectus femoris, tibialis anterior, semitendinosus) was recorded bilaterally in nine healthy participants (five males, aged 29.7±6.2 years, BMI 22.7±3.3 kg·m⁻²), giving a total of 18 independent legs. Three repeated measurements of the isoMMT3 and MVIC exercises were performed with EMG recording. The EMG amplitude of the muscles during gait was normalized by these two methods. This protocol was repeated one week later. Within- and between-day reliability of the normalization tasks was similar for the isoMMT3 and MVIC methods. Within- and between-day reliability of gait EMG normalized by isoMMT3 was higher than with MVIC normalization. These results indicate that EMG normalization using isoMMT3 is a reliable method requiring no special equipment and will support CGA interpretation. The next step will be to evaluate this method in pathological populations.

  14. Influence of Cutting Fluid Flow Rate and Cutting Parameters on the Surface Roughness and Flank Wear of TiAlN Coated Tool In Turning AISI 1015 Steel Using Taguchi Method

    Directory of Open Access Journals (Sweden)

    Moganapriya C.

    2017-09-01

    Full Text Available This paper presents the influence of cutting parameters (depth of cut, feed rate, spindle speed and cutting fluid flow rate) on the surface roughness and flank wear of a physical vapor deposition (PVD) cathodic arc evaporation coated TiAlN tungsten carbide cutting tool insert during CNC turning of AISI 1015 mild steel. Analysis of variance was applied to determine the critical influence of each cutting parameter. A Taguchi orthogonal test design was employed to optimize the process parameters affecting surface roughness and tool wear. Depth of cut was found to be the most dominant factor contributing to high surface roughness (67.5%) of the inserts, whereas cutting speed, feed rate and cutting fluid flow rate showed minimal contributions to surface roughness. On the other hand, cutting speed (45.6%) and cutting fluid flow rate (23%) were the dominant factors influencing tool wear. The optimum cutting conditions for the desired surface roughness are medium cutting speed, low feed rate, low depth of cut and high cutting fluid flow rate. Minimal tool wear was achieved with low cutting speed, low feed rate, medium depth of cut and high cutting fluid flow rate.
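    A Taguchi analysis of this kind rates each orthogonal-array run by a signal-to-noise ratio. A minimal sketch of the smaller-is-better form used for responses such as roughness and wear follows; the readings are hypothetical.

```python
import numpy as np

def sn_smaller_is_better(y):
    """Taguchi S/N ratio for smaller-is-better responses such as surface
    roughness or flank wear: -10 * log10(mean(y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical roughness readings (um) for one row of the orthogonal array
print(sn_smaller_is_better([1.82, 1.79, 1.85]))
# The parameter level with the highest mean S/N across its rows is optimal.
```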

  15. Emission computer tomographic orthopan display of the jaws - method and normal values

    International Nuclear Information System (INIS)

    Bockisch, A.; Koenig, R.; Biersack, H.J.; Wahl, G.

    1990-01-01

    A tomoscintigraphic method is described to create orthopan-like projections of the jaws from SPECT bone scans using cylinder projection. On the basis of this projection a numerical analysis of the dental regions is performed in the same computer code. For each dental region the activity relative to the contralateral region and relative to the average activity of the corresponding jaw is calculated. Using this method, a set of normal activity relations has been established by investigation of 24 patients. (orig.) [de]

  16. A systematic study of genome context methods: calibration, normalization and combination

    Directory of Open Access Journals (Sweden)

    Dale Joseph M

    2010-10-01

    Full Text Available Abstract Background Genome context methods have been introduced in the last decade as automatic methods to predict functional relatedness between genes in a target genome using the patterns of existence and relative locations of the homologs of those genes in a set of reference genomes. Much work has been done in the application of these methods to different bioinformatics tasks, but few papers present a systematic study of the methods and of the combinations necessary for their optimal use. Results We present a thorough study of the four main families of genome context methods found in the literature: phylogenetic profile, gene fusion, gene cluster, and gene neighbor. We find that for most organisms the gene neighbor method outperforms the phylogenetic profile method by as much as 40% in sensitivity, being competitive with the gene cluster method at low sensitivities. Gene fusion is generally the worst performing of the four methods. A thorough exploration of the parameter space for each method is performed and results across different target organisms are presented. We propose the use of normalization procedures, as those used on microarray data, for the genome context scores. We show that substantial gains can be achieved from the use of a simple normalization technique. In particular, the sensitivity of the phylogenetic profile method is improved by around 25% after normalization, resulting, to our knowledge, in the best-performing phylogenetic profile system in the literature. Finally, we show results from combining the various genome context methods into a single score. When using a cross-validation procedure to train the combiners, with both original and normalized scores as input, a decision tree combiner results in gains of up to 20% with respect to the gene neighbor method. Overall, this represents a gain of around 15% over what can be considered the state of the art in this area: the four original genome context methods combined using a decision tree.
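    The abstract does not name the normalization procedure; quantile normalization is a typical microarray-style choice and illustrates the idea of placing scores from different methods or organisms on a common scale. A sketch (our choice, not necessarily the authors'):

```python
import numpy as np

def quantile_normalize(S):
    """Quantile normalization, borrowed from microarray practice, of a
    matrix of genome-context scores (rows = gene pairs, columns =
    methods or organisms): each column is forced onto the common
    distribution of column-wise sorted means."""
    ranks = np.argsort(np.argsort(S, axis=0), axis=0)   # rank per column
    mean_sorted = np.sort(S, axis=0).mean(axis=1)       # shared distribution
    return mean_sorted[ranks]

S = np.random.default_rng(0).gamma(2.0, size=(1000, 4))  # 4 methods' raw scores
S_qn = quantile_normalize(S)   # columns now share one distribution
```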

  17. Experimental Method for Characterizing Electrical Steel Sheets in the Normal Direction

    Directory of Open Access Journals (Sweden)

    Thierry Belgrand

    2010-10-01

    Full Text Available This paper proposes an experimental method to characterise magnetic laminations in the direction normal to the sheet plane. The principle, which is based on a static excitation to avoid planar eddy currents, is explained and specific test benches are proposed. Measurements of the flux density are made with a sensor moving in and out of an air-gap. A simple analytical model is derived in order to determine the permeability in the normal direction. The experimental results for grain oriented steel sheets are presented and a comparison is provided with values obtained from literature.

  18. 26 CFR 1.585-7 - Elective cut-off method of changing from the reserve method of section 585.

    Science.gov (United States)

    2010-04-01

    ... its bad debt reserve for its pre-disqualification loans, as prescribed in paragraph (b) of this... maintain its bad debt reserve for its pre-disqualification loans (as defined in paragraph (b)(2) of this... outstanding pre-disqualification loans under the specific charge-off method of accounting for bad debts, as if...

  19. The research on AP1000 nuclear main pumps’ complete characteristics and the normalization method

    International Nuclear Information System (INIS)

    Zhu, Rongsheng; Liu, Yong; Wang, Xiuli; Fu, Qiang; Yang, Ailing; Long, Yun

    2017-01-01

    Highlights: • The complete characteristics of the main pump are investigated. • Head and torque show quadratic characteristics under some operating conditions. • The characteristics tend to be the same under certain conditions. • The normalization method gives proper estimations of external characteristics. • The normalization method can efficiently improve security computing. - Abstract: The paper summarizes the complete characteristics of nuclear main pumps based on experimental results, makes a detailed study, and draws a series of important conclusions: over the whole flow range, the runaway and zero-revolving-speed operating conditions of nuclear main pumps both show quadratic characteristics; as the flow tends to infinity, the braking and zero-revolving-speed operating conditions show consistent external characteristics. To remedy the shortcoming of the traditional complete-characteristic expression, which describes only limited flow sections at specific revolving speeds, the paper proposes a normalization method. As an important boundary condition for the security computing of the unstable transient processes of the primary reactor coolant pump and of the nuclear island primary and secondary circuits, the precision of the complete-characteristic data and curves affects the precision of the security computing. A normalization curve, obtained by applying the normalization method to the complete-characteristic data, can correctly, completely and precisely express the complete characteristics of the primary reactor coolant pump at any rotational speed and over the full flow range, and is capable of giving proper estimations of the external characteristics outside the tested flow range, and even for infinite flow. These advantages are of great significance for improving the security computing of transient processes of the primary reactor coolant pump and the circuit system.

  20. Design of Normal Concrete Mixtures Using Workability-Dispersion-Cohesion Method

    Directory of Open Access Journals (Sweden)

    Hisham Qasrawi

    2016-01-01

    Full Text Available The workability-dispersion-cohesion method is a newly proposed method for the design of normal concrete mixes. The method uses special coefficients called workability-dispersion and workability-cohesion factors. These coefficients relate workability to the mobility and stability of the concrete mix. The coefficients are obtained from special charts depending on mix requirements and aggregate properties. The method is practical because it covers various types of aggregates that may not be within standard specifications, different water to cement ratios, and various degrees of workability. Simple linear relationships were developed for the variables encountered in the mix design and were presented in graphical form. The method can be used in countries where the grading or fineness of the available materials differs from the common international specifications (such as ASTM or BS). Results were compared to the ACI and British methods of mix design. The method can be extended to cover all types of concrete.

  1. Cuttings of Morus alba treated with plant hormones and planted in plastic-covered propagators, in normal and inverted positions

    Directory of Open Access Journals (Sweden)

    Antônio Castilho Rúbia

    1965-01-01

    Full Text Available This paper reports the results obtained with plant hormones applied to cuttings of Morus alba, variety Catania 1, planted in plastic-covered propagators under two planting systems, inverted and normal. The inverted planting system gave better rooting of the cuttings. Among the plant hormones used, beta-indoleacetic acid at a concentration of 100 mg per litre of distilled water helped to increase the rooting percentage of the cuttings.

  2. Developing TOPSIS method using statistical normalization for selecting knowledge management strategies

    Directory of Open Access Journals (Sweden)

    Amin Zadeh Sarraf

    2013-09-01

    Full Text Available Purpose: Numerous companies expect their knowledge management (KM) to be performed effectively in order to leverage and transform knowledge into competitive advantage. This raises the critical issue of how companies can better evaluate and select a favorable KM strategy prior to a successful KM implementation. Design/methodology/approach: An extension of TOPSIS, a multi-attribute decision making (MADM) technique, to a group decision environment is investigated. TOPSIS is a practical and useful technique for ranking and selecting among a number of externally determined alternatives through distance measures. The entropy method is often used for assessing the weights in the TOPSIS method. Entropy in information theory is a criterion used for measuring the amount of disorder represented by a discrete probability distribution. To reduce employees' resistance to implementing a new strategy, it seems necessary to take the opinions of all managers into account. The normal distribution, the most prominent probability distribution in statistics, is used to normalize the gathered data. Findings: The results of this study show that, considering 6 criteria for evaluating the alternatives, the most appropriate KM strategy to implement in our company was "Personalization". Research limitations/implications: In this research, there are some assumptions that might affect the accuracy of the approach, such as the normal distribution of the sample and community. These assumptions can be changed in future work. Originality/value: This paper proposes an effective solution based on a combined entropy and TOPSIS approach to help companies that need to evaluate and select KM strategies. In the presented solution, the opinions of all managers are gathered and normalized by using the standard normal distribution and the central limit theorem. Keywords: Knowledge management; strategy; TOPSIS; normal distribution; entropy
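    The entropy-weighted TOPSIS core of such an approach is compact. The sketch below uses the usual vector normalization inside TOPSIS rather than the authors' statistical normalization of manager opinions, and all data are hypothetical.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weighting of criteria (columns); X must be positive."""
    P = X / X.sum(axis=0)
    k = 1.0 / np.log(X.shape[0])
    e = -k * (P * np.log(P)).sum(axis=0)     # entropy per criterion
    d = 1.0 - e                              # degree of divergence
    return d / d.sum()

def topsis(X, w, benefit):
    """Rank alternatives (rows) by closeness to the ideal solution;
    benefit[j] is True if criterion j is to be maximized."""
    R = X / np.linalg.norm(X, axis=0)        # vector normalization
    V = R * w
    best = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)      # higher = better

# Hypothetical scores of 4 KM strategies on 6 criteria
X = np.abs(np.random.default_rng(1).normal(5, 1, size=(4, 6)))
closeness = topsis(X, entropy_weights(X), benefit=np.ones(6, dtype=bool))
print(np.argsort(-closeness))                # ranking of the strategies
```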

  3. Core Cutting Test with Vertical Rock Cutting Rig (VRCR)

    Science.gov (United States)

    Yasar, Serdar; Osman Yilmaz, Ali

    2017-12-01

    Roadheaders are frequently used machines in mining and tunnelling, and performance prediction of roadheaders is important for project economics and stability. Several methods have been proposed for this purpose, and rock cutting tests are the best choice. Rock cutting tests are generally divided into two groups, namely full scale rock cutting tests and small scale rock cutting tests. Each group has its own advantages and shortcomings. In many cases where rock sampling becomes problematic, however, the small scale rock cutting test (core cutting test) is preferred for performance prediction, since small block samples and core samples can be subjected to rock cutting testing. A common problem with rock cutting tests is that they are available in only a very limited number of research centres. In this study, a new mobile rock cutting testing device, the vertical rock cutting rig (VRCR), is introduced. The standard testing procedure was conducted on seven rock samples which were part of a former study on cutting rocks with another small scale rock cutting test. Results, validated with a paired samples t-test, showed that the core cutting test can be realized successfully with the VRCR.

  4. An automatic method to discriminate malignant masses from normal tissue in digital mammograms

    International Nuclear Information System (INIS)

    Brake, Guido M. te; Karssemeijer, Nico; Hendriks, Jan H.C.L.

    2000-01-01

    Specificity levels of automatic mass detection methods in mammography are generally rather low, because suspicious-looking normal tissue is often hard to discriminate from real malignant masses. In this work a number of features were defined that are related to image characteristics that radiologists use to discriminate real lesions from normal tissue. An artificial neural network was used to map the computed features to a measure of suspiciousness for each region that was found suspicious by a mass detection method. Two data sets were used to test the method. The first set of 72 malignant cases (132 films) was a consecutive series taken from the Nijmegen screening programme; 208 normal films were added to improve the estimation of the specificity of the method. The second set was part of the new DDSM data set from the University of South Florida. A total of 193 cases (772 films) with 372 annotated malignancies was used. The measure of suspiciousness that was computed using the image characteristics was successful in discriminating tumours from false positive detections. Approximately 75% of all cancers were detected in at least one view at a specificity level of 0.1 false positives per image. (author)

  5. Evaluation on radioactive waste disposal amount of Kori Unit 1 reactor vessel considering cutting and packaging methods

    International Nuclear Information System (INIS)

    Choi, Yu Jong; Lee, Seong Cheol; Kim, Chang Lak

    2016-01-01

    Decommissioning of nuclear power plants has become a big issue in South Korea as some of the nuclear power plants in operation, including Kori unit 1 and Wolsung unit 1, are getting old. Recently, Wolsung unit 1 received permission to continue operation, while Kori unit 1 will shut down permanently in June 2017. Considering the segmentation method and the disposal containers, this paper evaluates the final disposal amount of radioactive waste generated from decommissioning the reactor pressure vessel of Kori unit 1, which will be the first plant in South Korea to be decommissioned. The evaluation results indicated that the final disposal amount from the hemispherical top and bottom heads of the reactor pressure vessel decreased when they were cut into smaller pieces, more effectively than for the cylindrical part of the vessel. It was also found that the 200 L and 320 L radioactive waste disposal containers used at the Kyung-Ju disposal facility had low payload efficiency because of their loading weight limitations.

  6. An imbalance fault detection method based on data normalization and EMD for marine current turbines.

    Science.gov (United States)

    Zhang, Milu; Wang, Tianzhen; Tang, Tianhao; Benbouzid, Mohamed; Diallo, Demba

    2017-05-01

    This paper proposes an imbalance fault detection method based on data normalization and Empirical Mode Decomposition (EMD) for variable speed direct-drive Marine Current Turbine (MCT) systems. The method is based on the MCT stator current under wave and turbulence conditions. The goal of this method is to extract the blade imbalance fault feature, which is concealed by the supply frequency and environmental noise. First, a Generalized Likelihood Ratio Test (GLRT) detector is developed and the monitoring variable is selected by analyzing the relationships between the variables. Then, the selected monitoring variable is converted into a time series through data normalization, which turns the imbalance fault characteristic frequency into a constant. Finally, the monitoring variable is filtered by the EMD method to eliminate the effect of turbulence. The experiments, comparing different fault severities and different turbulence intensities, show that the proposed method is robust against turbulence. Compared with other methods, the experimental results indicate the feasibility and efficacy of the proposed method.
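    The EMD filtering stage can be sketched briefly. The fragment below is not the authors' pipeline: it assumes the PyEMD package, skips the GLRT and normalization stages, and the choice of which IMFs to discard is a heuristic.

```python
import numpy as np
from PyEMD import EMD   # assumed: the PyEMD (EMD-signal) package

def imbalance_feature(signal, fs):
    """Decompose a monitoring variable with EMD, drop the highest-
    frequency IMFs (assumed to carry turbulence noise), and return the
    dominant remaining frequency, where a fault signature would sit."""
    imfs = EMD().emd(np.asarray(signal, dtype=float))
    denoised = imfs[2:].sum(axis=0)          # heuristic: keep low-freq IMFs
    spec = np.abs(np.fft.rfft(denoised))
    freqs = np.fft.rfftfreq(denoised.size, d=1.0 / fs)
    return freqs[np.argmax(spec[1:]) + 1]    # dominant non-DC frequency

fs = 1000
t = np.arange(0, 2, 1 / fs)
stator = np.sin(2 * np.pi * 1.0 * t) \
         + 0.4 * np.random.default_rng(0).normal(size=t.size)
print(imbalance_feature(stator, fs))         # ~1 Hz, a 1P-like signature
```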

  7. Cutting Cosmos

    DEFF Research Database (Denmark)

    Mikkelsen, Henrik Hvenegaard

    For the first time in over 30 years, a new ethnographic study emerges on the Bugkalot tribe, more widely known as the Ilongot of the northern Philippines. Exploring the notion of masculinity among the Bugkalot, Cutting Cosmos is not only an experimental, anthropological study of the paradoxes around which Bugkalot society revolves, but also a reflection on anthropological theory and writing. Focusing on the transgressive acts through which masculinity is performed, this book explores the idea of the cosmic cut, the ritual act that enables the Bugkalot man to momentarily hold still the chaotic...

  8. Performance Testing of Cutting Fluids

    DEFF Research Database (Denmark)

    Belluco, Walter

    The importance of cutting fluid performance testing has increased with documentation requirements of new cutting fluid formulations based on more sustainable products, as well as cutting with minimum quantity of lubrication and dry cutting. Two sub-problems have to be solved: i) which machining tests feature repeatability, reproducibility and sensitivity to cutting fluids, and ii) to what extent results of one test ensure relevance to a wider set of machining situations. The present work is aimed at assessing the range of validity of the different testing methods, investigating correlation within the whole range of operations, materials, cutting fluids, operating conditions, etc. Cutting fluid performance was evaluated in turning, drilling, reaming and tapping, and with respect to tool life, cutting forces, chip formation and product quality (dimensional accuracy and surface integrity...

  9. Application of specific gravity method for normalization of urinary excretion rates of radionuclides

    International Nuclear Information System (INIS)

    Thakur, Smita S.; Yadav, J.R.; Rao, D.D.

    2015-01-01

    In vitro bioassay monitoring is based on the determination of activity concentrations in biological samples excreted from the body and is most suitable for alpha and beta emitters. For occupational workers handling actinides in reprocessing facilities, the possibility of internal exposure exists, and urine assay is the preferred method for monitoring such exposure. A urine sample collected over 24 h is the true representative bioassay sample; hence, when the collection time is insufficient, normalization of the urine sample based on specific gravity is used. The present study reports specific gravity data generated for a control group of the Indian population using a densitometer, and their application to the normalization of urinary sample activity. The average specific gravity value obtained for the control group was 1.008±0.005 g/ml. (author)
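    A common form of the specific-gravity correction scales the measured activity by the ratio of the deviations of the reference and sample specific gravities from that of water. Whether the study used exactly this form is not stated in the abstract; the sketch below assumes it, with the cited population mean as the reference.

```python
def sg_normalized_activity(activity, sg_sample, sg_ref=1.008):
    """Normalize activity measured in a spot urine sample to a reference
    specific gravity (here the population mean cited in the study),
    using the common (SG_ref - 1)/(SG_sample - 1) correction.
    NOTE: assumed formula; the abstract does not spell it out."""
    return activity * (sg_ref - 1.0) / (sg_sample - 1.0)

# A dilute sample (SG 1.004) is scaled up toward the reference
print(sg_normalized_activity(activity=2.5, sg_sample=1.004))   # -> 5.0
```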

  10. Re-Normalization Method of Doppler Lidar Signal for Error Reduction

    Energy Technology Data Exchange (ETDEWEB)

    Park, Nakgyu; Baik, Sunghoon; Park, Seungkyu; Kim, Donglyul [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Dukhyeon [Hanbat National Univ., Daejeon (Korea, Republic of)

    2014-05-15

    In this paper, we present a re-normalization method that reduces fluctuations of Doppler signals caused by various noise sources, mainly the frequency locking error, in a Doppler lidar system. For the Doppler lidar system, we used an injection-seeded pulsed Nd:YAG laser as the transmitter and an iodine filter as the Doppler frequency discriminator. For the Doppler frequency shift measurement, the transmission ratio using the injection-seeded laser is locked to stabilize the frequency. If the frequency locking system is not perfect, the Doppler signal carries an error due to the frequency locking error. The re-normalization process of the Doppler signals was performed to reduce this error using an additional laser beam sent to an iodine cell. We confirmed that the re-normalized Doppler signal is much more stable than the averaged Doppler signal obtained with our calibration method; the standard deviation was reduced to 4.838 × 10⁻³.

  11. Simple Methods for Scanner Drift Normalization Validated for Automatic Segmentation of Knee Magnetic Resonance Imaging

    DEFF Research Database (Denmark)

    Dam, Erik Bjørnager

    2018-01-01

    Scanner drift is a well-known magnetic resonance imaging (MRI) artifact characterized by gradual signal degradation and scan intensity changes over time. In addition, hardware and software updates may imply abrupt changes in signal. The combined effects are particularly challenging for automatic image analysis methods used in longitudinal studies. The implication is increased measurement variation and a risk of bias in the estimations (e.g. in the volume change for a structure). We proposed two quite different approaches for scanner drift normalization and demonstrated the performance for segmentation of knee MRI using the fully automatic KneeIQ framework. The validation included a total of 1975 scans from both high-field and low-field MRI. The results demonstrated that the pre-processing method denoted Atlas Affine Normalization significantly removed scanner drift effects and ensured...

  12. A Normalized Transfer Matrix Method for the Free Vibration of Stepped Beams: Comparison with Experimental and FE(3D) Methods

    Directory of Open Access Journals (Sweden)

    Tamer Ahmed El-Sayed

    2017-01-01

    Full Text Available The exact solution for a multistepped Timoshenko beam is derived using a set of fundamental solutions. This set of solutions is derived so as to normalize the solution at the origin of the coordinates. The start, end, and intermediate boundary conditions involve concentrated masses and linear and rotational elastic supports. The beam start, end, and intermediate equations are assembled using the present normalized transfer matrix (NTM). The advantage of this method is that it is quicker than the standard method, because the size of the complete system coefficient matrix is 4 × 4 and no matrix inversion steps are required during its assembly. The validity of this method is tested by comparing the results of the current method with the literature. The validity of the exact stepped analysis is then checked using experimental and FE(3D) methods. The experimental results for stepped beams with a single step and with two steps, for sixteen different test samples, are in excellent agreement with those of the three-dimensional finite element method FE(3D). The comparison between the NTM and finite element results shows that the modal percentage deviation increases when a beam step location coincides with a peak point in the mode shape, and decreases when a beam step location coincides with a straight portion of the mode shape.
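    The mechanics of a transfer-matrix solution are easy to sketch. The fragment below is not the paper's normalized Timoshenko formulation: it uses the classical Euler-Bernoulli field matrix built from Krylov functions, simply supported ends, and illustrative section data, but it shows why the working matrix stays 4 × 4 regardless of how many steps the beam has.

```python
import numpy as np

def field_matrix(omega, L, E, I, rho, A):
    """Transfer (field) matrix of a uniform Euler-Bernoulli segment for
    the state vector [w, theta, M, V], built from Krylov functions."""
    beta = (rho * A * omega**2 / (E * I)) ** 0.25
    lam = beta * L
    S = (np.cosh(lam) + np.cos(lam)) / 2
    T = (np.sinh(lam) + np.sin(lam)) / 2
    U = (np.cosh(lam) - np.cos(lam)) / 2
    V = (np.sinh(lam) - np.sin(lam)) / 2
    EI = E * I
    return np.array([
        [S,                T / beta,         U / (EI * beta**2), V / (EI * beta**3)],
        [beta * V,         S,                T / (EI * beta),    U / (EI * beta**2)],
        [EI * beta**2 * U, EI * beta * V,    S,                  T / beta],
        [EI * beta**3 * T, EI * beta**2 * U, beta * V,           S],
    ])

def char_det(omega, segments):
    """Characteristic determinant for a simply supported stepped beam
    (w = M = 0 at both ends); segments = list of (L, E, I, rho, A).
    The product of 4x4 field matrices keeps the system size fixed."""
    G = np.eye(4)
    for seg in segments:
        G = field_matrix(omega, *seg) @ G
    return G[0, 1] * G[2, 3] - G[0, 3] * G[2, 1]

# Two-step steel beam (illustrative sections); scan for determinant roots
segs = [(0.5, 210e9, 8.3e-9, 7850, 1e-4), (0.5, 210e9, 2.1e-9, 7850, 5e-5)]
ws = np.linspace(10, 5000, 2000)
d = np.array([char_det(w, segs) for w in ws])
roots = ws[:-1][np.sign(d[:-1]) != np.sign(d[1:])]
print(roots[:3] / (2 * np.pi))   # first natural frequencies (Hz), approximate
```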

  13. Normal Values of Tissue-Muscle Perfusion Indexes of Lower Limbs Obtained with a Scintigraphic Method.

    Science.gov (United States)

    Manevska, Nevena; Stojanoski, Sinisa; Pop Gjorceva, Daniela; Todorovska, Lidija; Miladinova, Daniela; Zafirova, Beti

    2017-09-01

    Introduction: Muscle perfusion is a physiologic process that can undergo quantitative assessment and thus define the range of normal values of perfusion indexes and perfusion reserve. The investigation of the microcirculation has a crucial role in determining muscle perfusion. Materials and method: The study included 30 examinees, 24-74 years of age, without a history of confirmed peripheral artery disease; all had normal findings on Doppler ultrasonography and a normal pedo-brachial index (PBI) of the lower extremity. 99mTc-MIBI tissue muscle perfusion scintigraphy of the lower limbs evaluates tissue perfusion in the resting condition ("rest study") and after workload ("stress study") through quantitative parameters: the inter-extremity indexes (for both studies) left thigh/right thigh (LT/RT) and left calf/right calf (LC/RC), and the perfusion reserve (PR) for both thighs and calves. Results: In our investigated group we assessed the normal values of these quantitative parameters. The LT/RT index ranged from 0.91 to 1.05 in the rest study and from 0.92 to 1.04 in the stress study; LC/RC ranged from 0.93 to 1.07 at rest and from 0.93 to 1.09 under stress. Examinees older than 50 years had a non-significantly lower perfusion reserve than those younger than 50 (LC, p=0.98; RC, p=0.6). Conclusion: This non-invasive scintigraphic method allows the range of normal values of muscle perfusion at rest and under stress to be determined in individuals without peripheral artery disease, and these values to be clinically applied in the evaluation of patients with peripheral artery disease, differentiating patients with normal from those with impaired lower limb circulation.
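    The quantitative parameters reduce to simple ratios of regional counts. A sketch follows; the dictionary layout and count values are ours, and defining the perfusion reserve as the stress/rest ratio is an assumption the abstract does not spell out.

```python
def perfusion_indexes(counts_rest, counts_stress):
    """Quantitative indexes from regional 99mTc-MIBI counts, given as
    dicts with keys 'LT','RT','LC','RC' (thighs and calves):
    inter-extremity ratios for rest and stress, plus an assumed
    perfusion reserve PR = stress/rest per region."""
    idx = {}
    for a, b in (("LT", "RT"), ("LC", "RC")):
        idx[f"{a}/{b} rest"] = counts_rest[a] / counts_rest[b]
        idx[f"{a}/{b} stress"] = counts_stress[a] / counts_stress[b]
    for region in ("LT", "RT", "LC", "RC"):
        idx[f"PR {region}"] = counts_stress[region] / counts_rest[region]
    return idx

rest = {"LT": 1520, "RT": 1490, "LC": 1180, "RC": 1210}      # hypothetical
stress = {"LT": 2350, "RT": 2290, "LC": 1960, "RC": 2015}
print(perfusion_indexes(rest, stress))
```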

  14. Impact of the Choice of Normalization Method on Molecular Cancer Class Discovery Using Nonnegative Matrix Factorization.

    Science.gov (United States)

    Yang, Haixuan; Seoighe, Cathal

    2016-01-01

    Nonnegative Matrix Factorization (NMF) has proved to be an effective method for unsupervised clustering analysis of gene expression data. Through the nonnegativity constraint, NMF provides a decomposition of the data matrix into two matrices that have been used for clustering analysis. However, the decomposition is not unique. This allows different clustering results to be obtained, resulting in different interpretations of the decomposition. To alleviate this problem, some existing methods directly enforce uniqueness to some extent by adding regularization terms in the NMF objective function. Alternatively, various normalization methods have been applied to the factor matrices; however, the effects of the choice of normalization have not been carefully investigated. Here we investigate the performance of NMF for the task of cancer class discovery under a wide range of normalization choices. After extensive evaluations, we observe that the maximum norm showed the best performance, although the maximum norm has not previously been used for NMF. Matlab codes are freely available from: http://maths.nuigalway.ie/~haixuanyang/pNMF/pNMF.htm.
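    To make the role of factor normalization concrete, here is a minimal class-discovery sketch with scikit-learn's NMF, in which the columns of W are rescaled to unit maximum norm and the compensated H assigns samples to clusters. This mirrors the idea, not the paper's exact procedure.

```python
import numpy as np
from sklearn.decomposition import NMF

def nmf_class_discovery(X, k, seed=0):
    """Cluster samples (columns of a nonnegative expression matrix X)
    via NMF. Each column of W is rescaled to maximum norm 1, with H
    rescaled inversely so that W @ H is unchanged; samples are then
    assigned to the basis with the largest coefficient."""
    model = NMF(n_components=k, init='nndsvd', random_state=seed,
                max_iter=1000)
    W = model.fit_transform(X)            # genes x k
    H = model.components_                 # k x samples
    scale = W.max(axis=0)                 # max norm of each basis column
    H_norm = H * scale[:, None]           # compensate W /= scale
    return H_norm.argmax(axis=0)          # cluster label per sample

X = np.abs(np.random.default_rng(0).normal(size=(500, 60)))  # toy data
print(nmf_class_discovery(X, k=3))
```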

  15. Study on Damage Evaluation and Machinability of UD-CFRP for the Orthogonal Cutting Operation Using Scanning Acoustic Microscopy and the Finite Element Method.

    Science.gov (United States)

    Wang, Dongyao; He, Xiaodong; Xu, Zhonghai; Jiao, Weicheng; Yang, Fan; Jiang, Long; Li, Linlin; Liu, Wenbo; Wang, Rongguo

    2017-02-20

    Owing to their high specific strength and designability, unidirectional carbon fiber reinforced polymers (UD-CFRP) have been utilized in numerous fields to replace conventional metal materials. Post-machining processes are always required for UD-CFRP to achieve dimensional tolerances and assembly specifications. Due to its inhomogeneity and anisotropy, UD-CFRP differs greatly from metal materials in machining and failure mechanisms. To improve efficiency and avoid machining-induced damage, this paper studies the correlations between cutting parameters, fiber orientation angle, cutting forces, and cutting-induced damage for UD-CFRP laminate. Scanning acoustic microscopy (SAM) was employed, and one-/two-dimensional damage factors were created to quantitatively characterize the damage of the laminate workpieces. Based on the 3D Hashin criteria, a numerical model was further developed using the finite element method (FEM). A good agreement between simulation and experimental results was validated for the prediction and structural optimization of the UD-CFRP.
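    For reference, one common statement of the 3D Hashin criteria that such FEM damage models evaluate per element can be written compactly; the exact form and sign conventions used in the paper may differ, and the strength values below are illustrative.

```python
def hashin_3d(s, XT, XC, YT, YC, S12, S23):
    """Fiber and matrix failure indices (>= 1 means failure) for one
    common form of the 3D Hashin criteria; 1 is the fiber direction,
    s = (s11, s22, s33, s12, s13, s23) are stresses."""
    s11, s22, s33, s12, s13, s23 = s
    axial_shear = (s12**2 + s13**2) / S12**2
    if s11 >= 0:                                   # fiber tension
        fiber = (s11 / XT) ** 2 + axial_shear
    else:                                          # fiber compression
        fiber = (s11 / XC) ** 2
    tn = s22 + s33
    q = (s23**2 - s22 * s33) / S23**2
    if tn >= 0:                                    # matrix tension
        matrix = tn**2 / YT**2 + q + axial_shear
    else:                                          # matrix compression
        matrix = ((YC / (2 * S23)) ** 2 - 1) * tn / YC \
                 + tn**2 / (4 * S23**2) + q + axial_shear
    return fiber, matrix

# Hypothetical stress state (Pa) and strengths for a carbon/epoxy ply
print(hashin_3d((1200e6, 30e6, 0, 40e6, 0, 10e6),
                XT=2000e6, XC=1200e6, YT=50e6, YC=200e6,
                S12=80e6, S23=60e6))
```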

  16. A method for named entity normalization in biomedical articles: application to diseases and plants.

    Science.gov (United States)

    Cho, Hyejin; Choi, Wonjun; Lee, Hyunju

    2017-10-13

    In biomedical articles, a named entity recognition (NER) technique that identifies entity names in texts is an important element for extracting biological knowledge from articles. After NER is applied to articles, the next step is to normalize the identified names into standard concepts (e.g., disease names are mapped to the National Library of Medicine's Medical Subject Headings disease terms). In biomedical articles, many entity normalization methods rely on domain-specific dictionaries for resolving synonyms and abbreviations. However, the dictionaries are not comprehensive except for some entities such as genes. In recent years, biomedical articles have accumulated rapidly, and neural network-based algorithms that incorporate a large amount of unlabeled data have shown considerable success in several natural language processing problems. In this study, we propose an approach for normalizing biological entities, such as disease names and plant names, by using word embeddings to represent semantic spaces. For diseases, training data from the National Center for Biotechnology Information (NCBI) disease corpus and unlabeled data from PubMed abstracts were used to construct word representations. For plants, a training corpus that we manually constructed and unlabeled PubMed abstracts were used to represent word vectors. We showed that the proposed approach performed better than the use of only the training corpus or only the unlabeled data, and that the normalization accuracy was improved by using our model even when the dictionaries were not comprehensive. We obtained F-scores of 0.808 and 0.690 for normalizing the NCBI disease corpus and the manually constructed plant corpus, respectively. We further evaluated our approach using a data set from the disease normalization task of the BioCreative V challenge. When only the disease corpus was used as a dictionary, our approach significantly outperformed the best system of the task. The proposed approach shows robust performance.
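    The core matching step, mapping a mention embedding to the nearest concept embedding, can be sketched in a few lines. Everything here is illustrative: the vectors are random stand-ins for embeddings trained on PubMed abstracts, and the MeSH identifiers are just examples.

```python
import numpy as np

def normalize_entity(mention_vec, concept_vecs, concept_ids):
    """Map a recognized entity mention to the standard concept whose
    embedding is most similar under cosine similarity."""
    M = concept_vecs / np.linalg.norm(concept_vecs, axis=1, keepdims=True)
    v = mention_vec / np.linalg.norm(mention_vec)
    return concept_ids[int(np.argmax(M @ v))]

rng = np.random.default_rng(0)
concepts = ["MESH:D003924", "MESH:D001943"]        # example MeSH concepts
C = rng.normal(size=(2, 100))                      # stand-in embeddings
mention = C[0] + 0.1 * rng.normal(size=100)        # noisy synonym vector
print(normalize_entity(mention, C, concepts))      # -> MESH:D003924
```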

  17. Evaluation of Normalization Methods to Pave the Way Towards Large-Scale LC-MS-Based Metabolomics Profiling Experiments

    Science.gov (United States)

    Valkenborg, Dirk; Baggerman, Geert; Vanaerschot, Manu; Witters, Erwin; Dujardin, Jean-Claude; Burzykowski, Tomasz; Berg, Maya

    2013-01-01

    Abstract Combining liquid chromatography-mass spectrometry (LC-MS)-based metabolomics experiments that were collected over a long period of time remains problematic due to systematic variability between LC-MS measurements. Until now, most normalization methods for LC-MS data have been model-driven, based on internal standards or intermediate quality control runs, where an external model is extrapolated to the dataset of interest. In the first part of this article, we evaluate several existing data-driven normalization approaches on LC-MS metabolomics experiments, which do not require the use of internal standards. According to variability measures, each normalization method performs relatively well, showing that the use of any normalization method will greatly improve data analysis originating from multiple experimental runs. In the second part, we apply cyclic-loess normalization to a Leishmania sample. This normalization method allows the removal of systematic variability between two measurement blocks over time while maintaining the differential metabolites. In conclusion, normalization allows for pooling datasets from different measurement blocks over time and increases the statistical power of the analysis, hence paving the way to increase the scale of LC-MS metabolomics experiments. From our investigation, we recommend data-driven normalization methods over model-driven normalization methods if only a few internal standards were used. Moreover, data-driven normalization methods are the best option to normalize datasets from untargeted LC-MS experiments. PMID:23808607
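    Cyclic loess itself is straightforward to implement on log-intensities. A minimal sketch, assuming statsmodels is available; the smoothing fraction, iteration count and toy data are arbitrary choices, not the paper's settings.

```python
import numpy as np
import statsmodels.api as sm

def cyclic_loess(X, frac=0.4, n_iter=2):
    """Cyclic loess normalization of a log-intensity matrix X
    (features x samples): for every pair of samples, the loess fit of
    M = xi - xj on A = (xi + xj)/2 is split between the two samples."""
    X = X.copy()
    n = X.shape[1]
    for _ in range(n_iter):
        for i in range(n - 1):
            for j in range(i + 1, n):
                A = (X[:, i] + X[:, j]) / 2
                M = X[:, i] - X[:, j]
                fit = sm.nonparametric.lowess(M, A, frac=frac,
                                              return_sorted=False)
                X[:, i] -= fit / 2
                X[:, j] += fit / 2
    return X

rng = np.random.default_rng(0)
X = rng.normal(10, 1, size=(300, 4)) + np.array([0, .5, -.3, .2])  # offsets
X_norm = cyclic_loess(X)   # block offsets between samples are removed
```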

  18. Remote Laser Cutting of CFRP: Improvements in the Cut Surface

    Science.gov (United States)

    Stock, Johannes; Zaeh, Michael F.; Conrad, Markus

    In the automotive industry, carbon fibre reinforced plastics (CFRP) are considered a future key material for reducing vehicle weight. Capable production techniques are therefore required to process this material in mass production. State-of-the-art cutting methods, for example, are limited by high tool wear or by the feasible feed rate, and laser cutting processes are still under investigation. This paper presents detailed new studies on remote laser cutting of CFRP focusing on the influence of the material properties and the quality of the cut surface. By adding light-absorbing soot particles to the resin of the matrix, the cutting process is improved and fewer defects emerge.

  19. Liver segmentation in contrast enhanced CT data using graph cuts and interactive 3D segmentation refinement methods

    International Nuclear Information System (INIS)

    Beichel, Reinhard; Bornik, Alexander; Bauer, Christian; Sorantin, Erich

    2012-01-01

    Purpose: Liver segmentation is an important prerequisite for the assessment of liver cancer treatment options like tumor resection, image-guided radiation therapy (IGRT), radiofrequency ablation, etc. The purpose of this work was to evaluate a new approach for liver segmentation. Methods: A graph cuts segmentation method was combined with a three-dimensional virtual reality based segmentation refinement approach. The developed interactive segmentation system allowed the user to manipulate volume chunks and/or surfaces instead of 2D contours in cross-sectional images (i.e., slice-by-slice). The method was evaluated on twenty routinely acquired portal-phase contrast enhanced multislice computed tomography (CT) data sets. An independent reference was generated by utilizing a currently clinically utilized slice-by-slice segmentation method. After 1 h of introduction to the developed segmentation system, three experts were asked to segment all twenty data sets with the proposed method. Results: Compared to the independent standard, the relative volumetric segmentation overlap error averaged over all three experts and all twenty data sets was 3.74%. Liver segmentation required on average 16 min of user interaction per case. The calculated relative volumetric overlap errors were not found to be significantly different [analysis of variance (ANOVA) test, p = 0.82] between experts who utilized the proposed 3D system. In contrast, the time required by each expert for segmentation was found to be significantly different (ANOVA test, p = 0.0009). Major differences between generated segmentations and independent references were observed in areas where vessels enter or leave the liver and no accepted criteria for defining liver boundaries exist. In comparison, slice-by-slice based generation of the independent standard utilizing a live wire tool took 70.1 min on average. A standard 2D segmentation refinement approach applied to all twenty data sets required on average 38.2 min of user interaction.
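    A minimal two-dimensional sketch of the graph cuts building block such systems rely on, assuming the PyMaxflow package; the intensity model, edge weights and toy data are ours and far simpler than the paper's system.

```python
import numpy as np
import maxflow   # assumed: the PyMaxflow package

def graph_cut_segment(img, mu_fg, mu_bg, smooth=2.0):
    """Binary graph cut on a 2D slice: unary terms favour intensities
    near the foreground/background means, pairwise terms penalize label
    changes between 4-connected neighbours."""
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(img.shape)
    g.add_grid_edges(nodes, smooth)          # n-links (smoothness)
    d_fg = (img - mu_fg) ** 2                # cost of the FG label
    d_bg = (img - mu_bg) ** 2                # cost of the BG label
    g.add_grid_tedges(nodes, d_fg, d_bg)     # t-links (data terms)
    g.maxflow()
    # True where the pixel takes the foreground label under this
    # capacity assignment (sink-side segment in PyMaxflow terms)
    return g.get_grid_segments(nodes)

slice_ = np.random.default_rng(0).normal(0.2, 0.05, (64, 64))
slice_[16:48, 16:48] += 0.6                  # bright "organ" blob
mask = graph_cut_segment(slice_, mu_fg=0.8, mu_bg=0.2)
```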

  20. Liver segmentation in contrast enhanced CT data using graph cuts and interactive 3D segmentation refinement methods

    Energy Technology Data Exchange (ETDEWEB)

    Beichel, Reinhard; Bornik, Alexander; Bauer, Christian; Sorantin, Erich [Departments of Electrical and Computer Engineering and Internal Medicine, Iowa Institute for Biomedical Imaging, University of Iowa, Iowa City, Iowa 52242 (United States); Institute for Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, A-8010 Graz (Austria); Department of Electrical and Computer Engineering, Iowa Institute for Biomedical Imaging, University of Iowa, Iowa City, Iowa 52242 (United States); Department of Radiology, Medical University Graz, Auenbruggerplatz 34, A-8010 Graz (Austria)

    2012-03-15

    Purpose: Liver segmentation is an important prerequisite for the assessment of liver cancer treatment options like tumor resection, image-guided radiation therapy (IGRT), radiofrequency ablation, etc. The purpose of this work was to evaluate a new approach for liver segmentation. Methods: A graph cuts segmentation method was combined with a three-dimensional virtual reality based segmentation refinement approach. The developed interactive segmentation system allowed the user to manipulate volume chunks and/or surfaces instead of 2D contours in cross-sectional images (i.e., slice-by-slice). The method was evaluated on twenty routinely acquired portal-phase contrast enhanced multislice computed tomography (CT) data sets. An independent reference was generated by utilizing a currently clinically utilized slice-by-slice segmentation method. After 1 h of introduction to the developed segmentation system, three experts were asked to segment all twenty data sets with the proposed method. Results: Compared to the independent standard, the relative volumetric segmentation overlap error averaged over all three experts and all twenty data sets was 3.74%. Liver segmentation required on average 16 min of user interaction per case. The calculated relative volumetric overlap errors were not found to be significantly different [analysis of variance (ANOVA) test, p = 0.82] between experts who utilized the proposed 3D system. In contrast, the time required by each expert for segmentation was found to be significantly different (ANOVA test, p = 0.0009). Major differences between generated segmentations and independent references were observed in areas where vessels enter or leave the liver and no accepted criteria for defining liver boundaries exist. In comparison, slice-by-slice based generation of the independent standard utilizing a live wire tool took 70.1 min on average. A standard 2D segmentation refinement approach applied to all twenty data sets required on average 38.2 min of user interaction.

  1. Simulation of moving boundaries interacting with compressible reacting flows using a second-order adaptive Cartesian cut-cell method

    Science.gov (United States)

    Muralidharan, Balaji; Menon, Suresh

    2018-03-01

    A high-order adaptive Cartesian cut-cell method, developed in the past by the authors [1] for the simulation of compressible viscous flow over static embedded boundaries, is now extended to reacting flow simulations over moving interfaces. The main difficulty in simulating moving boundary problems with immersed boundary techniques is the loss of conservation of mass, momentum and energy during the transition of numerical grid cells from solid to fluid and vice versa. Gas phase reactions near solid boundaries can produce huge source terms in the governing equations which, if not properly treated for moving boundaries, can result in inaccurate numerical predictions. The small cell clustering algorithm proposed in our previous work is now extended to handle moving boundaries while enforcing strict conservation. In addition, the cell clustering algorithm also preserves the smoothness of the solution near moving surfaces. A second order Runge-Kutta scheme in which the boundaries are allowed to change during the sub-time steps is employed. This scheme improves the time accuracy of the calculations when the body motion is driven by hydrodynamic forces. Simple one-dimensional reacting and non-reacting studies of a moving piston are first performed in order to demonstrate the accuracy of the proposed method. Results are then reported for flow past moving cylinders at subsonic and supersonic velocities in a viscous compressible flow and are compared with theoretical and previously available experimental data. The ability of the scheme to handle deforming boundaries and the interaction of hydrodynamic forces with rigid body motion is demonstrated using different test cases. Finally, the method is applied to investigate the detonation initiation and stabilization mechanisms on a cylinder and a sphere when they are launched into a detonable mixture. The effect of the filling pressure on the detonation stabilization mechanisms over a hyper-velocity sphere launched into a hydrogen...

  2. Numerical Validation of the Delaunay Normalization and the Krylov-Bogoliubov-Mitropolsky Method

    Directory of Open Access Journals (Sweden)

    David Ortigosa

    2014-01-01

    Full Text Available A scalable second-order analytical orbit propagator programme based on modern and classical perturbation methods is being developed. As a first step in the validation and verification of part of our orbit propagator programme, we only consider the perturbation produced by zonal harmonic coefficients in the Earth’s gravity potential, so that it is possible to analyze the behaviour of the mathematical expressions involved in Delaunay normalization and the Krylov-Bogoliubov-Mitropolsky method in depth and determine their limits.

  3. Liver segmentation in contrast enhanced CT data using graph cuts and interactive 3D segmentation refinement methods.

    Science.gov (United States)

    Beichel, Reinhard; Bornik, Alexander; Bauer, Christian; Sorantin, Erich

    2012-03-01

    Liver segmentation is an important prerequisite for the assessment of liver cancer treatment options like tumor resection, image-guided radiation therapy (IGRT), radiofrequency ablation, etc. The purpose of this work was to evaluate a new approach for liver segmentation. A graph cuts segmentation method was combined with a three-dimensional virtual reality based segmentation refinement approach. The developed interactive segmentation system allowed the user to manipulate volume chunks and/or surfaces instead of 2D contours in cross-sectional images (i.e., slice-by-slice). The method was evaluated on twenty routinely acquired portal-phase contrast enhanced multislice computed tomography (CT) data sets. An independent reference was generated by utilizing a currently clinically utilized slice-by-slice segmentation method. After 1 h of introduction to the developed segmentation system, three experts were asked to segment all twenty data sets with the proposed method. Compared to the independent standard, the relative volumetric segmentation overlap error averaged over all three experts and all twenty data sets was 3.74%. Liver segmentation required on average 16 min of user interaction per case. The calculated relative volumetric overlap errors were not found to be significantly different [analysis of variance (ANOVA) test, p = 0.82] between experts who utilized the proposed 3D system. In contrast, the time required by each expert for segmentation was found to be significantly different (ANOVA test, p = 0.0009). Major differences between generated segmentations and independent references were observed in areas where vessels enter or leave the liver and no accepted criteria for defining liver boundaries exist. In comparison, slice-by-slice based generation of the independent standard utilizing a live wire tool took 70.1 min on average. A standard 2D segmentation refinement approach applied to all twenty data sets required on average 38.2 min of user interaction.

  4. Simplified calculation method for radiation dose under normal condition of transport

    International Nuclear Information System (INIS)

    Watabe, N.; Ozaki, S.; Sato, K.; Sugahara, A.

    1993-01-01

    In order to estimate the radiation dose during transportation of radioactive materials, the following computer codes are available: RADTRAN, INTERTRAN, J-TRAN. Because these codes include functions for estimating doses not only under normal conditions but also in the case of accidents, when radionuclides may leak and spread into the environment by atmospheric diffusion, the user needs special knowledge and experience. In this presentation, with a view to preparing a method by which a person in charge of transportation can calculate doses under normal conditions, we describe how the main parameters upon which the dose depends were extracted and how the dose for a unit of transportation was estimated. (J.P.N.)

  5. Cutting agents for special metals

    International Nuclear Information System (INIS)

    Sugito, Seiji; Sakakibara, Fumi

    1979-01-01

    The use of special metals has increased year after year in the Plasma Research Institute, Nagoya University, with the development of research on plasma and nuclear fusion. Most of these special metals are hard to cut, and considerable effort is required to secure surface smoothness and dimensional accuracy. The experimental conditions were as follows: cutting agents: salt water and acetone, rape-seed oil, sulfide and chloride oil, and water-soluble cutting oil W grade 3; metals to be cut: niobium, molybdenum, tantalum, titanium and tungsten; cutting conditions: cutting speed 4.7 to 90 m/min, feed 0.07 to 0.2 mm/rev, depth of cut 0.1 to 0.4 mm; tool: cemented carbide bit. Chemicals such as tetrachloromethane and trichloroethane give excellent cutting performance, but their toxicity is high and they have an irritating odor, so they are hard to use in practice. Cutting was easier with salt water plus acetone than with rape-seed oil, but salt water is corrosive. Recently, the machining of molybdenum has often been carried out, and the water-soluble cutting oil was the best. It is also good for cutting stainless steel, and its lubricating property is improved by additives such as sulfur, chlorine, phosphorus and molybdenum disulfide; however, washing is required after cutting with it. (Kako, I.)

  6. Standard Test Method for Normal Spectral Emittance at Elevated Temperatures of Nonconducting Specimens

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1971-01-01

    1.1 This test method describes an accurate technique for measuring the normal spectral emittance of electrically nonconducting materials in the temperature range from 1000 to 1800 K, and at wavelengths from 1 to 35 μm. It is particularly suitable for measuring the normal spectral emittance of materials such as ceramic oxides, which have relatively low thermal conductivity and are translucent to appreciable depths (several millimetres) below the surface, but which become essentially opaque at thicknesses of 10 mm or less. 1.2 This test method requires expensive equipment and rather elaborate precautions, but produces data that are accurate to within a few percent. It is particularly suitable for research laboratories, where the highest precision and accuracy are desired, and is not recommended for routine production or acceptance testing. Because of its high accuracy, this test method may be used as a reference method to be applied to production and acceptance testing in case of dispute. 1.3 This test metho...

  7. 1H MR spectroscopy of the normal human brains : comparison of automated prescan method with manual method

    International Nuclear Information System (INIS)

    Lim, Myung Kwan; Suh, Chang Hae; Cho, Young Kook; Kim, Jin Hee

    1998-01-01

    The purpose of this paper is to evaluate regional differences in relative metabolite ratios in the normal human brain by 1H MR spectroscopy (MRS), and to compare the spectral quality obtained by the automated prescan method (PROBE) with that of the manual method. A total of 61 reliable spectra were obtained by PROBE (28/34 = 82% success) and by the manual method (33/33 = 100% success). Regional differences in the spectral patterns of the five regions were clearly demonstrated by both the PROBE and manual methods. For prescanning, the manual method took slightly longer than PROBE (3-5 min and 2 min, respectively). There were no significant differences in spectral patterns or relative metabolite ratios between the two methods. However, auto-prescan by PROBE seemed to be very vulnerable to slight movement by patients, and in three cases an acceptable spectrum was thus not obtained. PROBE is a highly practical and reliable method for single-voxel 1H MRS of the human brain; the two methods of prescanning do not result in significantly different spectral patterns or relative metabolite ratios. PROBE, however, is vulnerable to slight movement by patients, and if the success rate for obtaining quality spectra is to be increased, regardless of the patient's condition and the region of the brain, it must be used in conjunction with the manual method. (author). 23 refs., 2 tabs., 3 figs

  8. Cutting Itch

    OpenAIRE

    Zellweger, Christoph

    2015-01-01

    “Cutting Itch” is a curatorial project by the artist duo Baltensperger & Siepert: an exhibition project about the essential need of art to be an active system that reflects and investigates social, cultural and political issues. It is about an existential necessity to shape one's environment, to think about relations and regulating structures, and about how we can locate ourselves in a more and more globalised world (from press material). Baltensperger & Siepert identified seven artists from Mexi...

  9. Study on compressive strength of self compacting mortar cubes under normal & electric oven curing methods

    Science.gov (United States)

    Prasanna Venkatesh, G. J.; Vivek, S. S.; Dhinakaran, G.

    2017-07-01

    In the majority of civil engineering applications, the basic building blocks are masonry units. Those masonry units are developed into a monolithic structure by the plastering process, with the help of binding agents, namely mud, lime, cement and their combinations. In recent advancements, the study of mortar plays an important role in crack repair, structural rehabilitation, retrofitting, and pointing and plastering operations. The rheology of mortar includes flowing, passing and filling properties, which are analogous to the behaviour of self compacting concrete. In the self compacting (SC) mortar cubes, cement was replaced by mineral admixtures, namely silica fume (SF) from 5% to 20% (with an increment of 5%), metakaolin (MK) from 10% to 30% (with an increment of 10%) and ground granulated blast furnace slag (GGBS) from 25% to 75% (with an increment of 25%). The ratio between cement and fine aggregate was kept constant at 1:2 for all normal and self compacting mortar mixes. Accelerated curing, namely electric oven curing at a differential temperature of 128°C for a period of 4 hours, was adopted. It was found that the compressive strength obtained under both the normal and the electric oven methods of curing was higher for self compacting mortar cubes than for normal mortar cubes. Cement replacement by 15% SF, 20% MK and 25% GGBS gave the highest strength under both curing conditions.

  10. Modeling the Circle of Willis Using Electrical Analogy Method under both Normal and Pathological Circumstances

    Science.gov (United States)

    Abdi, Mohsen; Karimi, Alireza; Navidbakhsh, Mahdi; Rahmati, Mohammadali; Hassani, Kamran; Razmkon, Ali

    2013-01-01

    Background and objective: The circle of Willis (COW) supports adequate blood supply to the brain. In the current study, the cardiovascular system is modeled using an equivalent electronic system focusing on the COW. Methods: In our previous study we used 42 compartments to model the whole cardiovascular system. In the current study we extended the model to 63 compartments. Each cardiovascular artery is modeled using electrical elements, including a resistor, capacitor, and inductor. The MATLAB Simulink software is used to obtain the left and right ventricular pressures as well as the pressure distribution at the efferent arteries of the circle of Willis. Firstly, the normal operation of the system is shown, and then stenosis of the cerebral arteries is induced in the circuit and its effects are studied. Results: In the normal condition, the difference between the pressure distributions of the right and left efferent arteries (left and right ACA–A2, left and right MCA, left and right PCA–P2) is calculated to indicate the effect of the anatomical difference between the left and right supplying arteries of the COW. In the stenosis cases, the effect of internal carotid artery occlusion on efferent artery pressures is investigated. The modeling results are verified by comparison with clinical observations reported in the literature. Conclusion: We believe the presented model is a useful tool for representing the normal operation of the cardiovascular system and for the study of its pathologies. PMID:25505747

  11. CALCULATION OF LASER CUTTING COSTS

    Directory of Open Access Journals (Sweden)

    Bogdan Nedic

    2016-09-01

    Full Text Available The paper presents a description of metal cutting methods and the calculation of processing costs based on a model developed at the Faculty of Mechanical Engineering in Kragujevac. Based on the systematization and analysis of a large number of cost-calculation models for unconventional cutting methods, a mathematical model is derived and used to create software for calculating the costs of metal cutting. The software solution enables calculating the cost of laser cutting, comparing it with the costs of other unconventional methods, and producing documentation consisting of reports on the estimated costs.

  12. Similarity measurement method of high-dimensional data based on normalized net lattice subspace

    Institute of Scientific and Technical Information of China (English)

    Li Wenfa; Wang Gongming; Li Ke; Huang Su

    2017-01-01

    The performance of conventional similarity measurement methods is seriously affected by the curse of dimensionality of high-dimensional data. The reason is that the data difference between sparse and noisy dimensions occupies a large proportion of the similarity, so that any two results appear dissimilar. A similarity measurement method for high-dimensional data based on a normalized net lattice subspace is proposed. The data range of each dimension is divided into several intervals, and the components in different dimensions are mapped onto the corresponding intervals. Only components in the same or adjacent intervals are used to calculate the similarity. To validate this method, three data types are used and seven common similarity measurement methods are compared. The experimental results indicate that the relative difference of the method increases with the dimensionality and is approximately two or three orders of magnitude higher than that of the conventional methods. In addition, the similarity range of this method in different dimensions is [0, 1], which is suitable for similarity analysis after dimensionality reduction.
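    The interval-mapping step described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the number of intervals, the per-dimension ranges, and the scoring rule (fraction of dimensions falling in the same or an adjacent interval) are all assumptions made for the example.

```python
import numpy as np

def interval_index(data, n_intervals=10):
    """Map each component onto the index of its interval, dimension by
    dimension. `data` is (n_samples, n_dims); ranges are per dimension."""
    lo, hi = data.min(axis=0), data.max(axis=0)
    width = np.where(hi > lo, (hi - lo) / n_intervals, 1.0)
    idx = np.floor((data - lo) / width).astype(int)
    return np.clip(idx, 0, n_intervals - 1)

def lattice_similarity(idx_a, idx_b):
    """Fraction of dimensions whose components land in the same or an
    adjacent interval: a normalized score in [0, 1]."""
    return (np.abs(idx_a - idx_b) <= 1).mean()

# usage: compare two points from a 50-dimensional dataset
data = np.random.rand(100, 50)
idx = interval_index(data)
print(lattice_similarity(idx[0], idx[1]))
```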

  13. Histological versus stereological methods applied at spermatogonia during normal human development

    DEFF Research Database (Denmark)

    Cortes, D

    1990-01-01

    The number of spermatogonia per tubular transverse section (S/T) and the percentage of seminiferous tubules containing spermatogonia (the fertility index (FI)) were measured in 40 pairs of normal autopsy testes aged from 28 weeks of gestation to 40 years. S/T and FI showed similar changes during the whole...... period, and were minimal between 1 and 4 years. The number of spermatogonia per testis (S/testis) and the number of spermatogonia per cm3 of testis tissue (S/cm3) were estimated by stereological methods in the same testes. S/T and FI respectively were significantly correlated both to S/testis and S/cm3. So...

  14. Discrimination methods for biological contaminants in fresh-cut lettuce based on VNIR and NIR hyperspectral imaging

    Science.gov (United States)

    Mo, Changyeun; Kim, Giyoung; Kim, Moon S.; Lim, Jongguk; Lee, Seung Hyun; Lee, Hong-Seok; Cho, Byoung-Kwan

    2017-09-01

    The rapid detection of biological contaminants such as worms in fresh-cut vegetables is necessary to improve the efficiency of visual inspections carried out by workers. Multispectral imaging algorithms were developed using visible-near-infrared (VNIR) and near-infrared (NIR) hyperspectral imaging (HSI) techniques to detect worms in fresh-cut lettuce. The optimal wavebands that can detect worms in fresh-cut lettuce were investigated for each type of HSI using one-way ANOVA. Worm-detection imaging algorithms for VNIR and NIR imaging exhibited prediction accuracies of 97.00% (RI547/945) and 100.0% (RI1064/1176, SI1064-1176, RSI-I(1064-1173)/1064, and RSI-II(1064-1176)/(1064+1176)), respectively. The two HSI techniques revealed that spectral images with a pixel size of 1 × 1 mm or 2 × 2 mm had the best classification accuracy for worms. The results demonstrate that hyperspectral reflectance imaging techniques have the potential to detect worms in fresh-cut lettuce. Future research relating to this work will focus on a real-time sorting system for lettuce that can simultaneously detect various defects such as browning, worms, and slugs.

  15. Analysis about diamond tool wear in nano-metric cutting of single crystal silicon using molecular dynamics method

    Science.gov (United States)

    Wang, Zhiguo; Liang, Yingchun; Chen, Mingjun; Tong, Zhen; Chen, Jiaxuan

    2010-10-01

    Tool wear not only changes the tool's geometric accuracy and integrity, but also decreases the machining precision and surface integrity of the workpiece, which affects the performance and service life of the workpiece in ultra-precision machining. Scholars have made many experimental studies and simulation analyses, but there are still great differences regarding the wear mechanism, especially the nano-scale wear mechanism. In this paper, a three-dimensional simulation model is built to simulate the nano-metric cutting of single crystal silicon with a non-rigid right-angle diamond tool with 0 rake angle and 0 clearance angle by the molecular dynamics (MD) simulation approach, which is used to investigate diamond tool wear during the nano-metric cutting process. A Tersoff potential is employed for the interactions between carbon-carbon atoms, silicon-silicon atoms and carbon-silicon atoms. The tool is subjected to high alternating shear stress, and tool wear first appears at the cutting edge, where the strength is low. At the corner the tool is split along the {1 1 1} crystal plane, which forms chipping. The wear at the flank face is a structural transformation of diamond, in which the diamond structure transforms into a sheet graphite structure. Owing to the tool wear, the cutting force increases.

  16. Automated PCR setup for forensic casework samples using the Normalization Wizard and PCR Setup robotic methods.

    Science.gov (United States)

    Greenspoon, S A; Sykes, K L V; Ban, J D; Pollard, A; Baisden, M; Farr, M; Graham, N; Collins, B L; Green, M M; Christenson, C C

    2006-12-20

    Human genome, pharmaceutical and research laboratories have long enjoyed the application of robotics to performing repetitive laboratory tasks. However, the utilization of robotics in forensic laboratories for processing casework samples is relatively new and poses particular challenges. Since the quantity and quality (a mixture versus a single source sample, the level of degradation, the presence of PCR inhibitors) of the DNA contained within a casework sample is unknown, particular attention must be paid to procedural susceptibility to contamination, as well as DNA yield, especially as it pertains to samples with little biological material. The Virginia Department of Forensic Science (VDFS) has successfully automated forensic casework DNA extraction utilizing the DNA IQ™ System in conjunction with the Biomek 2000 Automation Workstation. Human DNA quantitation is also performed in a near complete automated fashion utilizing the AluQuant Human DNA Quantitation System and the Biomek 2000 Automation Workstation. Recently, the PCR setup for casework samples has been automated, employing the Biomek 2000 Automation Workstation and Normalization Wizard, Genetic Identity version, which utilizes the quantitation data, imported into the software, to create a customized automated method for DNA dilution, unique to that plate of DNA samples. The PCR Setup software method, used in conjunction with the Normalization Wizard method and written for the Biomek 2000, functions to mix the diluted DNA samples, transfer the PCR master mix, and transfer the diluted DNA samples to PCR amplification tubes. Once the process is complete, the DNA extracts, still on the deck of the robot in PCR amplification strip tubes, are transferred to pre-labeled 1.5 mL tubes for long-term storage using an automated method. The automation of these steps in the process of forensic DNA casework analysis has been accomplished by performing extensive optimization, validation and testing of the
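    The normalization step this record automates reduces, at its core, to a per-sample dilution calculation. The sketch below is a toy version of that arithmetic only, under stated assumptions (a 1 ng target in a 10 µL final volume); it is not the Normalization Wizard's actual algorithm or parameters.

```python
def dilution_plan(conc_ng_per_ul, target_ng=1.0, final_vol_ul=10.0):
    """Return (sample_uL, diluent_uL) so that the final volume holds the
    target DNA mass. Target and volume are illustrative assumptions."""
    if conc_ng_per_ul * final_vol_ul <= target_ng:
        return final_vol_ul, 0.0  # sample too dilute: use it neat
    sample_ul = target_ng / conc_ng_per_ul
    return sample_ul, final_vol_ul - sample_ul

for conc in (0.05, 0.5, 5.0):  # ng/uL from the quantitation step
    print(conc, dilution_plan(conc))
```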

  17. Attenuation correction of myocardial SPECT by scatter-photopeak window method in normal subjects

    International Nuclear Information System (INIS)

    Okuda, Koichi; Nakajima, Kenichi; Matsuo, Shinro; Kinuya, Seigo; Motomura, Nobutoku; Kubota, Masahiro; Yamaki, Noriyasu; Maeda, Hisato

    2009-01-01

    The segmentation with scatter and photopeak window data using attenuation correction (SSPAC) method can provide a patient-specific non-uniform attenuation coefficient map using only photopeak and scatter images, without X-ray computed tomography (CT). The purpose of this study is to evaluate the performance of attenuation correction (AC) by the SSPAC method on a normal myocardial perfusion database. A total of 32 sets of exercise-rest myocardial images with Tc-99m-sestamibi were acquired in both photopeak (140 keV±10%) and scatter (7% of the lower side of the photopeak window) energy windows. Myocardial perfusion databases for the SSPAC method and non-AC (NC) were created from 15 female and 17 male subjects with a low likelihood of cardiac disease using quantitative perfusion SPECT software. Segmental myocardial counts of a 17-segment model from these databases were compared on the basis of the paired t test. The AC average myocardial perfusion count was significantly higher than that of NC in the septal and inferior regions (P<0.02). On the contrary, the AC average count was significantly lower in the anterolateral and apical regions (P<0.01). The coefficient of variation of the AC count in the mid, apical and apex regions was lower than that of NC. The SSPAC method can improve average myocardial perfusion uptake in the septal and inferior regions and provide a uniform distribution of myocardial perfusion. The SSPAC method could be a practical method of attenuation correction without X-ray CT. (author)

  18. Antimicrobial Susceptibility of Flavobacterium psychrophilum from Chilean Salmon Farms and Their Epidemiological Cut-Off Values Using Agar Dilution and Disk Diffusion Methods.

    Science.gov (United States)

    Miranda, Claudio D; Smith, Peter; Rojas, Rodrigo; Contreras-Lynch, Sergio; Vega, J M Alonso

    2016-01-01

    Flavobacterium psychrophilum is the most important bacterial pathogen for freshwater farmed salmonids in Chile. The aims of this study were to determine the susceptibility of Chilean isolates to antimicrobials used in fish farming and to calculate their epidemiological cut-off (COWT) values. A total of 125 Chilean isolates of F. psychrophilum were isolated from reared salmonids presenting clinical symptoms indicative of flavobacteriosis, and their identities were confirmed by 16S rRNA polymerase chain reaction. Susceptibility to antibacterials was tested on diluted Mueller-Hinton by using an agar dilution MIC method and a disk diffusion method. The COWT values calculated by Normalized Resistance Interpretation (NRI) analysis allow isolates to be categorized either as wild-type fully susceptible (WT) or as manifesting reduced susceptibility (NWT). When MIC data were used, NRI analysis calculated a COWT of ≤0.125, ≤2, and ≤0.5 μg mL−1 for amoxicillin, florfenicol, and oxytetracycline, respectively. For the quinolones, the COWT were ≤1, ≤0.5, and ≤0.125 μg mL−1 for oxolinic acid, flumequine, and enrofloxacin, respectively. The disk diffusion data sets obtained in this work were extremely diverse and were spread over a wide range. For the quinolones there was close agreement between the frequencies of NWT isolates calculated using MIC and disk data. For oxolinic acid, flumequine, and enrofloxacin the frequencies were 45, 39, and 38% using MIC data, and 42, 41, and 44% when disk data were used. There was less agreement for the other antimicrobials, because the NWT frequencies obtained using MIC and disk data, respectively, were 24 and 10% for amoxicillin, 8 and 2% for florfenicol, and 70 and 64% for oxytetracycline. Considering that the MIC data were more precise than the disk diffusion data, MIC determination would be the preferred method of susceptibility testing for this species, and the NWT frequencies derived from the MIC data sets should be

  19. Optimization of accelerator parameters using normal form methods on high-order transfer maps

    Energy Technology Data Exchange (ETDEWEB)

    Snopok, Pavel [Michigan State Univ., East Lansing, MI (United States)

    2007-05-01

    Methods for analyzing the dynamics of ensembles of charged particles in collider rings are developed. The following problems are posed and solved using normal form transformations and other methods of perturbative nonlinear dynamics: (1) Optimization of the Tevatron dynamics: (a) skew quadrupole correction of the dynamics of particles in the Tevatron in the presence of systematic skew quadrupole errors in dipoles; (b) calculation of the nonlinear tune shift with amplitude based on the results of measurements and the linear lattice information; (2) Optimization of the Muon Collider storage ring: (a) computation and optimization of the dynamic aperture of the Muon Collider 50 x 50 GeV storage ring using higher order correctors; (b) 750 x 750 GeV Muon Collider storage ring lattice design matching the Tevatron footprint. The normal form coordinates have a very important advantage over the particle optical coordinates: if the transformation can be carried out successfully (the general restrictions for that are not much stronger than the typical restrictions imposed on the behavior of particles in the accelerator), then the motion in the new coordinates has a very clean representation, allowing one to extract more information about the dynamics of particles, and they are very convenient for the purposes of visualization. All the problem formulations include the derivation of the objective functions, which are later used in the optimization process with various optimization algorithms. The algorithms used to solve the problems are specific to collider rings, and applicable to similar problems arising on other machines of the same type. The details of the long-term behavior of the systems are studied to ensure their stability for the desired number of turns. The algorithm of the normal form transformation is of great value for such problems, as it gives much extra information about the disturbing factors. In addition to the fact that the dynamics of particles is represented

  20. Water-Cut Sensor System

    KAUST Repository

    Karimi, Muhammad Akram; Shamim, Atif; Arsalan, Muhammad

    2018-01-01

    Provided in some embodiments is a method of manufacturing a pipe-conformable water-cut sensor system. Provided in some embodiments is a method for manufacturing a water-cut sensor system that includes providing a helical T-resonator, a helical ground

  1. Non-binary decomposition trees - a method of reliability computation for systems with known minimal paths/cuts

    Energy Technology Data Exchange (ETDEWEB)

    Malinowski, Jacek

    2004-05-01

    A coherent system with independent components and known minimal paths (cuts) is considered. In order to compute its reliability, a tree structure T is constructed whose nodes contain the modified minimal paths (cuts) and numerical values. The value of a non-leaf node is a function of its child nodes' values. The values of leaf nodes are calculated from a simple formula. The value of the root node is the system's failure probability (reliability). Subsequently, an algorithm computing the system's failure probability (reliability) is constructed. The algorithm scans all nodes of T using a stack structure for this purpose. The nodes of T are alternately put on and removed from the stack, their data being modified in the process. Once the algorithm has terminated, the stack contains only the final modification of the root node of T, and its value is equal to the system's failure probability (reliability)

  2. Non-binary decomposition trees - a method of reliability computation for systems with known minimal paths/cuts

    International Nuclear Information System (INIS)

    Malinowski, Jacek

    2004-01-01

    A coherent system with independent components and known minimal paths (cuts) is considered. In order to compute its reliability, a tree structure T is constructed whose nodes contain the modified minimal paths (cuts) and numerical values. The value of a non-leaf node is a function of its child nodes' values. The values of leaf nodes are calculated from a simple formula. The value of the root node is the system's failure probability (reliability). Subsequently, an algorithm computing the system's failure probability (reliability) is constructed. The algorithm scans all nodes of T using a stack structure for this purpose. The nodes of T are alternately put on and removed from the stack, their data being modified in the process. Once the algorithm has terminated, the stack contains only the final modification of the root node of T, and its value is equal to the system's failure probability (reliability)
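    For comparison with the tree-based algorithm described in these two records, system reliability can also be computed from known minimal path sets by classical inclusion-exclusion. The sketch below implements that textbook baseline, not the paper's decomposition-tree algorithm; it is exact but exponential in the number of paths, which is precisely the cost the tree/stack approach is designed to avoid.

```python
from itertools import combinations
from math import prod

def reliability(min_paths, p):
    """System reliability from minimal path sets via inclusion-exclusion.
    `min_paths`: list of sets of component ids; `p`: component -> reliability.
    Components are assumed independent, as in the records above."""
    total = 0.0
    for k in range(1, len(min_paths) + 1):
        for subset in combinations(min_paths, k):
            union = set().union(*subset)
            term = prod(p[c] for c in union)
            total += term if k % 2 else -term
    return total

# 2-out-of-3 system: minimal paths {1,2}, {1,3}, {2,3}
print(reliability([{1, 2}, {1, 3}, {2, 3}], {1: 0.9, 2: 0.9, 3: 0.9}))  # 0.972
```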

  3. Effects of Different Cutting Patterns and Experimental Conditions on the Performance of a Conical Drag Tool

    Science.gov (United States)

    Copur, Hanifi; Bilgin, Nuh; Balci, Cemal; Tumac, Deniz; Avunduk, Emre

    2017-06-01

    This study aims at determining the effects of single-, double-, and triple-spiral cutting patterns; the effects of tool cutting speeds on the experimental scale; and the effects of the method of yield estimation on cutting performance by performing a set of full-scale linear cutting tests with a conical cutting tool. The average and maximum normal, cutting and side forces; specific energy; yield; and coarseness index are measured and compared in each cutting pattern at a 25-mm line spacing, at varying depths of cut per revolution, and using two cutting speeds on five different rock samples. The results indicate that the optimum specific energy decreases by approximately 25% with an increasing number of spirals from the single- to the double-spiral cutting pattern for the hard rocks, whereas generally little effect was observed for the soft- and medium-strength rocks. The double-spiral cutting pattern appeared to be more effective than the single- or triple-spiral cutting pattern and had an advantage of lower side forces. The tool cutting speed had no apparent effect on the cutting performance. The estimation of the specific energy by the yield based on the theoretical swept area was not significantly different from that estimated by the yield based on the muck weighing, especially for the double- and triple-spiral cutting patterns and with the optimum ratio of line spacing to depth of cut per revolution. This study also demonstrated that the cutterhead and mechanical miner designs, semi-theoretical deterministic computer simulations and empirical performance predictions and optimization models should be based on realistic experimental simulations. Studies should be continued to obtain more reliable results by creating a larger database of laboratory tests and field performance records for mechanical miners using drag tools.

  4. Preparation and Characterization of TiO2 Nanofluid by Sol-gel Method for Cutting Tools

    OpenAIRE

    BİRLİK, Işıl; AZEM, N.Funda Ak; YİĞİT, Recep; EROL, Mustafa; YILDIRIM, Serdar; YURDDAŞKAL, Metin; SANCAKOĞLU, Orkut; ÇELİK, Erdal

    2014-01-01

    In the past few decades, rapid advances in nanotechnology have led to the emergence of a new generation of coolants called nanofluids. Nanofluids are defined as suspensions of nanoparticles in a base fluid. Machining generates high temperatures due to friction between the tool and workpiece, thus influencing the workpiece's dimensional accuracy and surface quality. Further, cutting fluids also incur a major portion of the total manufacturing cost. Nanofluids contain oxides including MgO...

  5. Preparation and Characterization of TiO2 Nanofluid by Sol-gel Method for Cutting Tools

    OpenAIRE

    BİRLİK, Işıl; AZEM, N.Funda Ak; YİĞİT, Recep; EROL, Mustafa; YILDIRIM, Serdar; YURDDAŞKAL, Metin; SANCAKOĞLU, Orkut; ÇELİK, Erdal

    2015-01-01

    In the past few decades, rapid advances in nanotechnology have led to the emergence of a new generation of coolants called nanofluids. Nanofluids are defined as suspensions of nanoparticles in a base fluid. Machining generates high temperatures due to friction between the tool and workpiece, thus influencing the workpiece's dimensional accuracy and surface quality. Further, cutting fluids also incur a major portion of the total manufacturing cost. Nanofluids contain oxides including MgO...

  6. A new method for detection of the electron temperature in laser-plasma from the short-wave cut-off of the stimulated Raman scattering spectrum

    International Nuclear Information System (INIS)

    Zhang Jiatai

    1994-01-01

    From the theory of stimulated Raman scattering (SRS) three-wave interaction, a new method of detecting the electron temperature in laser-plasma is obtained. SRS spectra obtained from Shenguang No. 12 Nd-laser experiments are analysed. Using the wavelength of the short-wave cut-off of SRS, the electron temperature in the corona plasma region is calculated consistently. These results agree reasonably well with X-ray spectrum experiments

  7. Tubing and cable cutting tool

    Science.gov (United States)

    Mcsmith, D. D.; Richardson, J. I. (Inventor)

    1984-01-01

    A hand-held hydraulic cutting tool was developed which is particularly useful in deactivating ejection seats in military aircraft rescue operations. The tool consists primarily of a hydraulic system composed of a fluid reservoir, a pumping piston, and an actuator piston. Mechanical cutting jaws are attached to the actuator piston rod. The hydraulic system is controlled by a pump handle. As the pump handle is operated, the actuator piston rod is forced outward and the cutting jaws are thus forced together. The frame of the device is a flexible metal tube which permits easy positioning of the tool cutting jaws in remote and normally inaccessible locations. Bifurcated cutting edges ensure removal of a section of the tubing or cable, thereby reducing the possibility of accidental reactivation of the tubing or cable being severed.

  8. Fabricating TiO2 nanocolloids by electric spark discharge method at normal temperature and pressure

    Science.gov (United States)

    Tseng, Kuo-Hsiung; Chang, Chaur-Yang; Chung, Meng-Yun; Cheng, Ting-Shou

    2017-11-01

    In this study, TiO2 nanocolloids were successfully fabricated in deionized water without using suspending agents through the electric spark discharge method at room temperature and under normal atmospheric pressure. This method was exceptional because it did not create nanoparticle dispersion and the produced colloids contained no derivatives. The proposed method requires only traditional electrical discharge machines (EDMs), self-made magnetic stirrers, and Ti wires (purity, 99.99%). The EDM pulse on time (Ton) and pulse off time (Toff) were respectively set at 50 and 100 μs, 100 and 100 μs, 150 and 100 μs, and 200 and 100 μs to produce four types of TiO2 nanocolloids. Zetasizer analysis of the nanocolloids showed that a decrease in Ton increased the suspension stability, but there were no significant correlations between Ton and particle size. Colloids produced from the four production configurations showed a minimum particle size between 29.39 and 52.85 nm and a zeta-potential between -51.2 and -46.8 mV, confirming that the method introduced in this study can be used to produce TiO2 nanocolloids with excellent suspension stability. Scanning electron microscopy with energy dispersive spectroscopy also indicated that the TiO2 colloids did not contain elements other than Ti and oxygen.

  9. Fabricating TiO2 nanocolloids by electric spark discharge method at normal temperature and pressure.

    Science.gov (United States)

    Tseng, Kuo-Hsiung; Chang, Chaur-Yang; Chung, Meng-Yun; Cheng, Ting-Shou

    2017-11-17

    In this study, TiO2 nanocolloids were successfully fabricated in deionized water without using suspending agents through the electric spark discharge method at room temperature and under normal atmospheric pressure. This method was exceptional because it did not create nanoparticle dispersion and the produced colloids contained no derivatives. The proposed method requires only traditional electrical discharge machines (EDMs), self-made magnetic stirrers, and Ti wires (purity, 99.99%). The EDM pulse on time (Ton) and pulse off time (Toff) were respectively set at 50 and 100 μs, 100 and 100 μs, 150 and 100 μs, and 200 and 100 μs to produce four types of TiO2 nanocolloids. Zetasizer analysis of the nanocolloids showed that a decrease in Ton increased the suspension stability, but there were no significant correlations between Ton and particle size. Colloids produced from the four production configurations showed a minimum particle size between 29.39 and 52.85 nm and a zeta-potential between -51.2 and -46.8 mV, confirming that the method introduced in this study can be used to produce TiO2 nanocolloids with excellent suspension stability. Scanning electron microscopy with energy dispersive spectroscopy also indicated that the TiO2 colloids did not contain elements other than Ti and oxygen.

  10. Contrast sensitivity measured by two different test methods in healthy, young adults with normal visual acuity.

    Science.gov (United States)

    Koefoed, Vilhelm F; Baste, Valborg; Roumes, Corinne; Høvding, Gunnar

    2015-03-01

    This study reports contrast sensitivity (CS) reference values obtained by two different test methods in a strictly selected population of healthy, young adults with normal uncorrected visual acuity. Based on these results, the index of contrast sensitivity (ICS) is calculated, aiming to establish ICS reference values for this population and to evaluate the possible usefulness of ICS as a tool to compare the degree of agreement between different CS test methods. Military recruits with best-eye uncorrected visual acuity of 0.00 LogMAR or better, normal colour vision and age 18-25 years were included in a study to record contrast sensitivity using the Optec 6500 (FACT) at spatial frequencies of 1.5, 3, 6, 12 and 18 cpd in photopic and mesopic light and the CSV-1000E at spatial frequencies of 3, 6, 12 and 18 cpd in photopic light. The index of contrast sensitivity was calculated based on data from the three tests, and the Bland-Altman technique was used to analyse the agreement between ICS obtained by the different test methods. A total of 180 recruits were included. Contrast sensitivity frequency data for all tests were highly skewed, with a marked ceiling effect for the photopic tests. The median ICS for the Optec 6500 at 85 cd/m2 was -0.15 (95% percentile 0.45), compared with -0.00 (95% percentile 1.62) for the Optec at 3 cd/m2 and 0.30 (95% percentile 1.20) for the CSV-1000E. The mean difference between ICSFACT85 and ICSCSV was -0.43 (95% CI -0.56 to -0.30, p<0.00) with limits of agreement (LoA) within -2.10 and 1.22. The regression line on the difference of average was near to zero (R2=0.03). The results provide reference CS and ICS values in a young, adult population with normal visual acuity. The agreement between the photopic tests indicated that they may be used interchangeably. There was little agreement between the mesopic and photopic tests. The mesopic test seemed best suited to differentiate between candidates and may therefore be useful for medical selection purposes.
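    The Bland-Altman computation used here (bias and 95% limits of agreement) is compact enough to sketch. The data below are fabricated placeholders chosen only to mimic the reported bias of -0.43; the function itself is the standard technique, not code from the study.

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between paired measurements."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

rng = np.random.default_rng(0)
ics_fact85 = rng.normal(-0.15, 0.40, 180)            # placeholder ICS values
ics_csv = ics_fact85 + rng.normal(0.43, 0.85, 180)   # offset mimics the paper
bias, loa = bland_altman(ics_fact85, ics_csv)
print(round(bias, 2), [round(x, 2) for x in loa])
```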

  11. A Classification Method of Normal and Overweight Females Based on Facial Features for Automated Medical Applications

    Directory of Open Access Journals (Sweden)

    Bum Ju Lee

    2012-01-01

    Full Text Available Obesity and overweight have become serious public health problems worldwide. Obesity and abdominal obesity are associated with type 2 diabetes, cardiovascular diseases, and metabolic syndrome. In this paper, we first suggest a method of predicting normal and overweight females according to body mass index (BMI) based on facial features. A total of 688 subjects participated in this study. We obtained an area under the ROC curve (AUC) value of 0.861 and a kappa value of 0.521 in the Female: 21–40 (females aged 21–40 years) group, and an AUC value of 0.76 and kappa value of 0.401 in the Female: 41–60 (females aged 41–60 years) group. In the two groups, we found many features showing statistical differences between normal and overweight subjects by using an independent two-sample t-test. We demonstrated that it is possible to predict BMI status using facial characteristics. Our results provide useful information for studies of obesity and facial characteristics, and may provide useful clues in the development of applications for alternative diagnosis of obesity in remote healthcare.

  12. Normalized impact factor (NIF): an adjusted method for calculating the citation rate of biomedical journals.

    Science.gov (United States)

    Owlia, P; Vasei, M; Goliaei, B; Nassiri, I

    2011-04-01

    Interest in the journal impact factor (JIF) in scientific communities has grown over the last decades. JIFs are used to evaluate the quality of journals and the papers published therein. The JIF is a discipline-specific measure, and comparison between the JIFs of different disciplines is inadequate unless a normalization process is performed. In this study, the normalized impact factor (NIF) was introduced as a relatively simple method enabling JIFs to be used when evaluating the quality of journals and research works in different disciplines. The NIF index was established based on the multiplication of the JIF by a constant factor. The constants were calculated for all 54 disciplines of the biomedical field for the years 2005, 2006, 2007, 2008 and 2009. Also, rankings of 393 journals in different biomedical disciplines according to the NIF and the JIF were compared to illustrate how the NIF index can be used for the evaluation of publications in different disciplines. The findings show that the use of the NIF enhances equality in assessing the quality of research works produced by researchers who work in different disciplines. Copyright © 2010 Elsevier Inc. All rights reserved.
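    The abstract specifies only that the NIF is the JIF multiplied by a per-discipline constant; it does not say how the constants are derived. The sketch below therefore makes one explicit assumption, that each discipline's constant rescales its mean JIF to a common reference value, purely to illustrate the mechanics.

```python
from collections import defaultdict

def nif_scores(jif, discipline, reference_mean=1.0):
    """Normalized impact factors under the stated assumption: the
    per-discipline constant maps that discipline's mean JIF onto
    `reference_mean`. `jif` and `discipline` are dicts keyed by journal."""
    by_disc = defaultdict(list)
    for journal, value in jif.items():
        by_disc[discipline[journal]].append(value)
    const = {d: reference_mean * len(v) / sum(v) for d, v in by_disc.items()}
    return {j: v * const[discipline[j]] for j, v in jif.items()}

jif = {"A": 2.0, "B": 4.0, "C": 0.5, "D": 1.5}           # toy JIFs
disc = {"A": "oncology", "B": "oncology", "C": "mycology", "D": "mycology"}
print(nif_scores(jif, disc))  # mean NIF within each discipline is 1.0
```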

  13. ADVANCED CUTTINGS TRANSPORT STUDY

    Energy Technology Data Exchange (ETDEWEB)

    Stefan Miska; Nicholas Takach; Kaveh Ashenayi; Mengjiao Yu; Ramadan Ahmed; Mark Pickell; Len Volk; Lei Zhou; Zhu Chen; Aimee Washington; Crystal Redden

    2003-09-30

    The quarter began with installing the new drill pipe, hooking up the new hydraulic power unit, completing the pipe rotation system (Task 4 has been completed), and making the SWACO choke operational. Detailed design and procurement work is proceeding on a system to elevate the drill-string section. The prototype foam generator cell has been completed by Temco and delivered. Work is currently underway to calibrate the system. A literature review and preliminary model development for cuttings transportation with polymer foam under EPET conditions are in progress. Preparations for preliminary cuttings transport experiments with polymer foam have been completed. Two nuclear densitometers were re-calibrated. The drill pipe rotation system was tested up to 250 RPM. Water flow tests were conducted while rotating the drill pipe up to 100 RPM. The accuracy of weight measurements for cuttings in the annulus was evaluated. Additional modifications of the cuttings collection system are being considered in order to obtain the desired accurate measurement of cuttings weight in the annular test section. Cuttings transport experiments with aerated fluids are being conducted at EPET, and analyses of the collected data are in progress. The printed circuit board is functioning with an acceptable noise level for measuring cuttings concentration at static conditions using the ultrasonic method. We were able to conduct several tests using a standard low-pass filter to eliminate high-frequency noise. We verified that we can distinguish between different depths of sand in a static bed of sand. We tested with water, air and a mix of the two media. Major modifications to the DTF have almost been completed. A stop-flow cell is being designed for the DTF, the ACTF and the Foam Generator/Viscometer, which will allow us to capture bubble images without the need for ultra-fast shutter speeds or a microsecond flash system.

  14. Environmental dose-assessment methods for normal operations at DOE nuclear sites

    International Nuclear Information System (INIS)

    Strenge, D.L.; Kennedy, W.E. Jr.; Corley, J.P.

    1982-09-01

    Methods for assessing public exposure to radiation from normal operations at DOE facilities are reviewed in this report. The report includes a discussion of environmental doses to be calculated, a review of currently available environmental pathway models and a set of recommended models for use when environmental pathway modeling is necessary. Currently available models reviewed include those used by DOE contractors, the Environmental Protection Agency (EPA), the Nuclear Regulatory Commission (NRC), and other organizations involved in environmental assessments. General modeling areas considered for routine releases are atmospheric transport, airborne pathways, waterborne pathways, direct exposure to penetrating radiation, and internal dosimetry. The pathway models discussed in this report are applicable to long-term (annual) uniform releases to the environment: they do not apply to acute releases resulting from accidents or emergency situations

  15. Review of clinically accessible methods to determine lean body mass for normalization of standardized uptake values

    International Nuclear Information System (INIS)

    DEVRIESE, Joke; POTTEL, Hans; BEELS, Laurence; MAES, Alex; VAN DE WIELE, Christophe; GHEYSENS, Olivier

    2016-01-01

    With the routine use of 2-deoxy-2-[18F]-fluoro-D-glucose (18F-FDG) positron emission tomography/computed tomography (PET/CT) scans, the metabolic activity of tumors can be quantitatively assessed through the calculation of SUVs. One possible normalization parameter for the standardized uptake value (SUV) is lean body mass (LBM), which is generally calculated through predictive equations based on height and body weight. (Semi-)direct measurements of LBM could provide more accurate results in cancer populations than predictive equations based on healthy populations. In this context, four methods to determine LBM are reviewed: bioelectrical impedance analysis, dual-energy X-ray absorptiometry, CT, and magnetic resonance imaging. These methods were selected based on clinical accessibility and are compared in terms of methodology, precision and accuracy. By assessing each method's specific advantages and limitations, a well-considered choice of method can hopefully lead to more accurate SUVLBM values, hence more accurate quantitative assessment of 18F-FDG PET images.
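    As a concrete instance of the height/weight predictive equations the review contrasts with (semi-)direct measurement, the James formula is one widely used choice. The sketch below pairs it with the usual LBM-normalized SUV expression (assuming a tissue density of 1 g/mL); both are standard formulas, not the review's own method.

```python
def lbm_james(weight_kg, height_cm, female=False):
    """James predictive equation: 1.10*W - 128*(W/H)^2 for men,
    1.07*W - 148*(W/H)^2 for women (W in kg, H in cm)."""
    ratio_sq = (weight_kg / height_cm) ** 2
    if female:
        return 1.07 * weight_kg - 148.0 * ratio_sq
    return 1.10 * weight_kg - 128.0 * ratio_sq

def suv_lbm(conc_bq_ml, injected_bq, weight_kg, height_cm, female=False):
    """SUV normalized by LBM in grams (1 g/mL tissue density assumed)."""
    lbm_g = lbm_james(weight_kg, height_cm, female) * 1000.0
    return conc_bq_ml * lbm_g / injected_bq

print(round(lbm_james(70, 170), 1), round(lbm_james(70, 170, female=True), 1))
```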

  16. Specific algorithm method of scoring the Clock Drawing Test applied in cognitively normal elderly

    Directory of Open Access Journals (Sweden)

    Liana Chaves Mendes-Santos

    Full Text Available The Clock Drawing Test (CDT) is an inexpensive, fast and easily administered measure of cognitive function, especially in the elderly. This instrument is a popular clinical tool widely used in screening for cognitive disorders and dementia. The CDT can be applied in different ways, and scoring procedures also vary. OBJECTIVE: The aims of this study were to analyze the performance of the elderly on the CDT and to evaluate the inter-rater reliability of the CDT scored by using a specific algorithm method adapted from Sunderland et al. (1989). METHODS: We analyzed the CDTs of 100 cognitively normal elderly aged 60 years or older. The CDT ("free-drawn") and Mini-Mental State Examination (MMSE) were administered to all participants. Six independent examiners scored the CDTs of 30 participants to evaluate inter-rater reliability. RESULTS AND CONCLUSION: A score of 5 on the proposed algorithm ("Numbers in reverse order or concentrated"), equivalent to 5 points on the original Sunderland scale, was the most frequent (53.5%). The CDT specific algorithm method used had high inter-rater reliability (p<0.01), and the mean score ranged from 5.06 to 5.96. The high frequency of an overall score of 5 points may suggest the need to create more nuanced evaluation criteria, which are sensitive to differences in levels of impairment in visuoconstructive and executive abilities during aging.

  17. Adjustment technique without explicit formation of normal equations /conjugate gradient method/

    Science.gov (United States)

    Saxena, N. K.

    1974-01-01

    For the simultaneous adjustment of a large geodetic triangulation system, a semi-iterative technique known as the conjugate gradient (CG) method is modified and used successfully. In this technique the original observation equations are used, and thus the explicit formation of the normal equations is avoided, saving 'huge' computer storage space in the case of triangulation systems. The method is suitable even for very poorly conditioned systems, where a solution is obtained only after more iterations. A detailed study of the CG method for its application to large geodetic triangulation systems was carried out, which also considered constraint equations together with observation equations. It was programmed and tested on systems as small as two unknowns and three equations, and up to those as large as 804 unknowns and 1397 equations. When real data (573 unknowns, 965 equations) from a 1858-km-long triangulation system were used, a solution vector accurate to four decimal places was obtained in 2.96 min after 1171 iterations (i.e., 2.0 times the number of unknowns).
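    The idea of running conjugate gradients on the least-squares problem without ever forming the normal equations survives today as the CGLS algorithm. The sketch below is a generic CGLS implementation for illustration; the random test system only mirrors the 965 x 573 dimensions mentioned above and is not the geodetic data.

```python
import numpy as np

def cgls(A, b, iters=2000, tol=1e-10):
    """Solve min ||Ax - b|| with conjugate gradients, using only products
    with A and A.T so that A.T @ A is never formed explicitly."""
    x = np.zeros(A.shape[1])
    r = b.astype(float).copy()          # residual b - A x (x starts at 0)
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    for _ in range(iters):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if np.sqrt(gamma_new) < tol:
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

A = np.random.rand(965, 573)            # dimensions echo the record above
b = np.random.rand(965)
x = cgls(A, b)
print(np.linalg.norm(A.T @ (A @ x - b)))  # normal-equations residual -> ~0
```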

  18. Shack-Hartmann centroid detection method based on high dynamic range imaging and normalization techniques

    International Nuclear Information System (INIS)

    Vargas, Javier; Gonzalez-Fernandez, Luis; Quiroga, Juan Antonio; Belenguer, Tomas

    2010-01-01

    In the optical quality measuring process of an optical system, including diamond-turned components, the use of a laser light source can produce an undesirable speckle effect in a Shack-Hartmann (SH) CCD sensor. This speckle noise can deteriorate the precision and accuracy of the wavefront sensor measurement. Here we present a SH centroid detection method based on computer-based techniques and capable of measurement in the presence of strong speckle noise. The method extends the dynamic range imaging capabilities of the SH sensor through the use of a set of different CCD integration times. The resultant extended-range spot map is normalized to accurately obtain the spot centroids. The proposed method has been applied to measure the optical quality of the main optical system (MOS) of the mid-infrared instrument telescope simulator. The wavefront at the exit of this optical system is affected by speckle noise when it is illuminated by a laser source, and by air turbulence because it has a long back focal length (3017 mm). Using the proposed technique, the MOS wavefront error was measured and satisfactory results were obtained.
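    The two computational ingredients named in the abstract, combining exposures into an extended-range spot map and normalizing windows before centroiding, can be sketched generically. The weighting scheme and saturation threshold below are assumptions for illustration, not the paper's exact recipe.

```python
import numpy as np

def hdr_spot_map(frames, exposure_times, saturation=0.95):
    """Merge SH frames taken at several integration times: each pixel is
    averaged over the exposures in which it is unsaturated, after scaling
    intensities by exposure time (a generic HDR combination rule)."""
    frames = np.asarray(frames, float)
    t = np.asarray(exposure_times, float)[:, None, None]
    valid = frames < saturation
    scaled = frames / t
    return (scaled * valid).sum(axis=0) / np.maximum(valid.sum(axis=0), 1)

def centroid(window):
    """Intensity-weighted center of mass of one normalized spot window."""
    w = window - window.min()
    ys, xs = np.indices(w.shape)
    return (ys * w).sum() / w.sum(), (xs * w).sum() / w.sum()

frames = np.random.rand(3, 64, 64)       # placeholder frames, 3 exposures
spot_map = hdr_spot_map(frames, [1.0, 4.0, 16.0])
print(centroid(spot_map[16:32, 16:32]))  # centroid of one subaperture window
```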

  19. Cutting temperature measurement and material machinability

    Directory of Open Access Journals (Sweden)

    Nedić Bogdan P.

    2014-01-01

    Full Text Available Cutting temperature is a very important parameter of the cutting process. Around 90% of the heat generated during the cutting process is carried away by the chips, and the rest is transferred to the tool and workpiece. In this research the cutting temperature was measured with artificial thermocouples, and the question of investigating metal machinability from the aspect of cutting temperature was analyzed. For the investigation of material machinability during turning, an artificial thermocouple was placed just below the cutting tip of the insert, and for drilling, thermocouples were placed through screw holes on the face surface. In this way a simple, reliable, economical and accurate method for the investigation of machinability was obtained.

  20. Reducing the nonconforming products by using the Six Sigma method: A case study of a polyester short cut fiber manufacturing in Indonesia

    Directory of Open Access Journals (Sweden)

    Oky Syafwiratama

    2017-03-01

    Full Text Available Polyester short cut fiber is a textile product that has rarely been explored or researched. This research explains the steps of improvement needed, using the Six Sigma method, to reduce nonconforming products in a polyester short cut fiber manufacturing plant in Indonesia. An increase in nonconforming products in the short cut fiber production process created quality problems from January to May 2015. Define, measure, analyze, improve, control (DMAIC) steps were implemented to determine the root causes of the problems and to improve the production process using a statistical approach. The results of the Six Sigma improvement indicate that process capability increased from 2.2 to 3.1 sigma, saving $18,394.2 USD per month.
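    The sigma levels quoted above follow the usual Six Sigma convention of converting a long-term defect rate into a short-term sigma value with a 1.5-sigma shift. The conversion is a one-liner; the shift and the example defect rate are the conventional textbook assumptions, not figures from the paper.

```python
from scipy.stats import norm

def sigma_level(defects, opportunities, shift=1.5):
    """Short-term sigma level implied by an observed defect rate,
    using the conventional 1.5-sigma long-term shift."""
    yield_rate = 1.0 - defects / opportunities
    return norm.ppf(yield_rate) + shift

print(round(sigma_level(66_807, 1_000_000), 1))  # ~3.0 sigma benchmark
```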

  1. GMPR: A robust normalization method for zero-inflated count data with application to microbiome sequencing data.

    Science.gov (United States)

    Chen, Li; Reeve, James; Zhang, Lujun; Huang, Shengbing; Wang, Xuefeng; Chen, Jun

    2018-01-01

    Normalization is the first critical step in microbiome sequencing data analysis, used to account for variable library sizes. Current RNA-Seq based normalization methods that have been adapted for microbiome data fail to consider the unique characteristics of microbiome data, which contain a vast number of zeros due to the physical absence or under-sampling of the microbes. Normalization methods that specifically address the zero-inflation remain largely undeveloped. Here we propose the geometric mean of pairwise ratios, a simple but effective normalization method for zero-inflated sequencing data such as microbiome data. Simulation studies and real dataset analyses demonstrate that the proposed method is more robust than competing methods, leading to more powerful detection of differentially abundant taxa and higher reproducibility of the relative abundances of taxa.
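    Read literally, the method's name dictates its two steps: a median of count ratios over taxa shared by each pair of samples, then a geometric mean of those medians per sample. The sketch below follows that reading; the final rescaling of size factors to geometric mean 1 is an assumption added for convenience, and the corner case of two samples sharing no taxa is not handled.

```python
import numpy as np

def gmpr_size_factors(counts):
    """GMPR-style size factors for a (taxa x samples) count matrix:
    for each sample pair, take the median ratio over taxa nonzero in
    both; per sample, take the geometric mean of its pairwise medians."""
    counts = np.asarray(counts, float)
    n = counts.shape[1]
    log_sf = np.zeros(n)
    for i in range(n):
        medians = []
        for j in range(n):
            if i == j:
                continue
            shared = (counts[:, i] > 0) & (counts[:, j] > 0)
            medians.append(np.median(counts[shared, i] / counts[shared, j]))
        log_sf[i] = np.mean(np.log(medians))
    return np.exp(log_sf - log_sf.mean())   # rescale: geometric mean = 1

counts = np.random.poisson(5, (200, 6)) * (np.random.rand(200, 6) < 0.7)
print(gmpr_size_factors(counts))  # divide each sample's counts by these
```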

  2. Analytical energy gradient for the two-component normalized elimination of the small component method

    Energy Technology Data Exchange (ETDEWEB)

    Zou, Wenli; Filatov, Michael; Cremer, Dieter, E-mail: dcremer@smu.edu [Computational and Theoretical Chemistry Group (CATCO), Department of Chemistry, Southern Methodist University, 3215 Daniel Ave, Dallas, Texas 75275-0314 (United States)

    2015-06-07

    The analytical gradient for the two-component Normalized Elimination of the Small Component (2c-NESC) method is presented. The 2c-NESC is a Dirac-exact method that employs the exact two-component one-electron Hamiltonian and thus leads to exact Dirac spin-orbit (SO) splittings for one-electron atoms. For many-electron atoms and molecules, the effect of the two-electron SO interaction is modeled by a screened nucleus potential using effective nuclear charges as proposed by Boettger [Phys. Rev. B 62, 7809 (2000)]. The effect of spin-orbit coupling (SOC) on molecular geometries is analyzed utilizing the properties of the frontier orbitals and calculated SO couplings. It is shown that bond lengths can either be lengthened or shortened under the impact of SOC where in the first case the influence of low lying excited states with occupied antibonding orbitals plays a role and in the second case the jj-coupling between occupied antibonding and unoccupied bonding orbitals dominates. In general, the effect of SOC on bond lengths is relatively small (≤5% of the scalar relativistic changes in the bond length). However, large effects are found for van der Waals complexes Hg2 and Cn2, which are due to the admixture of more bonding character to the highest occupied spinors.

  3. Analytical energy gradient for the two-component normalized elimination of the small component method

    Science.gov (United States)

    Zou, Wenli; Filatov, Michael; Cremer, Dieter

    2015-06-01

    The analytical gradient for the two-component Normalized Elimination of the Small Component (2c-NESC) method is presented. The 2c-NESC is a Dirac-exact method that employs the exact two-component one-electron Hamiltonian and thus leads to exact Dirac spin-orbit (SO) splittings for one-electron atoms. For many-electron atoms and molecules, the effect of the two-electron SO interaction is modeled by a screened nucleus potential using effective nuclear charges as proposed by Boettger [Phys. Rev. B 62, 7809 (2000)]. The effect of spin-orbit coupling (SOC) on molecular geometries is analyzed utilizing the properties of the frontier orbitals and calculated SO couplings. It is shown that bond lengths can either be lengthened or shortened under the impact of SOC where in the first case the influence of low lying excited states with occupied antibonding orbitals plays a role and in the second case the jj-coupling between occupied antibonding and unoccupied bonding orbitals dominates. In general, the effect of SOC on bond lengths is relatively small (≤5% of the scalar relativistic changes in the bond length). However, large effects are found for van der Waals complexes Hg2 and Cn2, which are due to the admixture of more bonding character to the highest occupied spinors.

  4. A Gauss-Newton method for the integration of spatial normal fields in shape space

    KAUST Repository

    Balzer, Jonathan

    2011-08-09

    We address the task of adjusting a surface to a vector field of desired surface normals in space. The described method is entirely geometric in the sense that it does not depend on a particular parametrization of the surface in question. It amounts to solving a nonlinear least-squares problem in shape space. Previously, the corresponding minimization has been performed by gradient descent, which suffers from slow convergence and susceptibility to local minima. Newton-type methods, although significantly more robust and efficient, have not been attempted as they require second-order Hadamard differentials. These are difficult to compute for the problem of interest and in general fail to be positive-definite symmetric. We propose a novel approximation of the shape Hessian, which is not only rigorously justified but also leads to excellent numerical performance of the actual optimization. Moreover, a remarkable connection to Sobolev flows is exposed. Three other established algorithms from image and geometry processing turn out to be special cases of ours. Our numerical implementation is founded on a fast finite-element formulation on the minimizing sequence of triangulated shapes. A series of examples from a wide range of different applications is discussed to underline the flexibility and efficiency of the approach. © 2011 Springer Science+Business Media, LLC.

  5. A Bootstrap Based Measure Robust to the Choice of Normalization Methods for Detecting Rhythmic Features in High Dimensional Data.

    Science.gov (United States)

    Larriba, Yolanda; Rueda, Cristina; Fernández, Miguel A; Peddada, Shyamal D

    2018-01-01

    Motivation: Gene-expression data obtained from high-throughput technologies are subject to various sources of noise, and accordingly the raw data are pre-processed before being formally analyzed. Normalization of the data is a key pre-processing step, since it removes systematic variations across arrays. There are numerous normalization methods available in the literature. Based on our experience, in the context of oscillatory systems such as the cell cycle, the circadian clock, etc., the choice of normalization method may substantially impact whether a gene is determined to be rhythmic. Thus the rhythmicity of a gene can be purely an artifact of how the data were normalized. Since the determination of rhythmic genes is an important component of modern toxicological and pharmacological studies, it is important to determine truly rhythmic genes that are robust to the choice of normalization method. Results: In this paper we introduce a rhythmicity measure and a bootstrap methodology to detect rhythmic genes in an oscillatory system. Although the proposed methodology can be used for any high-throughput gene expression data, in this paper we illustrate it using several publicly available circadian clock microarray gene-expression datasets. We demonstrate that the choice of normalization method has very little effect on the proposed methodology. Specifically, for any pair of normalization methods considered in this paper, the resulting values of the rhythmicity measure are highly correlated. This suggests that the proposed measure is robust to the choice of normalization method and, consequently, that the rhythmicity of a gene is potentially not a mere artifact of the normalization method used. Lastly, as demonstrated in the paper, the proposed bootstrap methodology can also be used for simulating data for genes participating in an oscillatory system using a reference dataset. Availability: A user-friendly code implemented in R language can be downloaded from http://www.eio.uva.es/~miguel/robustdetectionprocedure.html.

  6. Evaluation of four methods for separation of lymphocytes from normal individuals and patients with cancer and tuberculosis.

    Science.gov (United States)

    Patrick, C C; Graber, C D; Loadholt, C B

    1976-01-01

    An optimal technique was sought for lymphocyte recovery from normal and chronically diseased individuals. Lymphocytes were separated by four techniques: Plasmagel, Ficoll-Hypaque, a commercial semiautomatic method, and simple centrifugation, using blood drawn from ten normal individuals, ten cancer patients, and ten tuberculosis patients. The lymphocyte mixture obtained after using each method was analyzed for percent recovery, amount of contamination by erythrocytes and neutrophils, and percent viability. The results show that the semiautomatic method yielded the best percent recovery of lymphocytes for normal individuals, while the simple centrifugation method gave the highest percent recovery for cancer and tuberculosis patients. The Ficoll-Hypaque method gave the lowest erythrocyte contamination for all three types of individuals tested, while the Plasmagel method gave the lowest neutrophil contamination for all three types of individuals. The simple centrifugation method yielded all viable lymphocytes and thus gave the highest percent viability.

  7. GMPR: A robust normalization method for zero-inflated count data with application to microbiome sequencing data

    Directory of Open Access Journals (Sweden)

    Li Chen

    2018-04-01

    Full Text Available Normalization is the first critical step in microbiome sequencing data analysis, used to account for variable library sizes. Current RNA-Seq based normalization methods that have been adapted for microbiome data fail to consider the unique characteristics of microbiome data, which contain a vast number of zeros due to the physical absence or under-sampling of the microbes. Normalization methods that specifically address the zero-inflation remain largely undeveloped. Here we propose the geometric mean of pairwise ratios, a simple but effective normalization method for zero-inflated sequencing data such as microbiome data. Simulation studies and real dataset analyses demonstrate that the proposed method is more robust than competing methods, leading to more powerful detection of differentially abundant taxa and higher reproducibility of the relative abundances of taxa.

  8. Application of Finite Element Method to Analyze the Influences of Process Parameters on the Cut Surface in Fine Blanking Processes by Using Clearance-Dependent Critical Fracture Criteria

    Directory of Open Access Journals (Sweden)

    Phyo Wai Myint

    2018-04-01

    Full Text Available The correct choice of process parameters is important in predicting the cut surface and obtaining a fully fine-sheared surface in the fine blanking process. Researchers have used values of the critical fracture criterion obtained by long-duration experiments to predict the condition of cut surfaces in the fine blanking process. In this study, clearance-dependent critical ductile fracture criteria obtained from the Cockcroft-Latham and Oyane criteria were used to reduce the time and cost of the experiments needed to obtain the value of the critical fracture criterion. The Finite Element Method (FEM) was applied to fine blanking processes to study the influences of process parameters such as the initial compression, the punch and die corner radii, and the shape and size of the V-ring indenter on the length of the sheared surface. The effects of stress triaxiality and punch diameters on the cut surface produced by the fine blanking process are also discussed. The verified process parameters and tool geometry for obtaining a fully fine-sheared SPCC surface are described. The results showed that accurate and stable prediction of ductile fracture initiation can be achieved using the Oyane criterion.

  9. Study on Damage Evaluation and Machinability of UD-CFRP for the Orthogonal Cutting Operation Using Scanning Acoustic Microscopy and the Finite Element Method

    Directory of Open Access Journals (Sweden)

    Dongyao Wang

    2017-02-01

    Full Text Available Owing to high specific strength and designability, unidirectional carbon fiber reinforced polymer (UD-CFRP) has been utilized in numerous fields to replace conventional metal materials. Post-machining processes are always required for UD-CFRP to achieve dimensional tolerance and assembly specifications. Due to inhomogeneity and anisotropy, UD-CFRP differs greatly from metal materials in machining and failure mechanism. To improve efficiency and avoid machining-induced damage, this paper studies the correlations between cutting parameters, fiber orientation angle, cutting forces, and cutting-induced damage for UD-CFRP laminate. Scanning acoustic microscopy (SAM) was employed, and one-/two-dimensional damage factors were then created to quantitatively characterize the damage of the laminate workpieces. According to the 3D Hashin's criteria, a numerical model was further proposed in terms of the finite element method (FEM). Good agreement between simulation and experimental results was found, validating the model for the prediction and structural optimization of UD-CFRP.

  10. Normal mode analysis of macromolecular systems with the mobile block Hessian method

    International Nuclear Information System (INIS)

    Ghysels, An; Van Speybroeck, Veronique; Van Neck, Dimitri; Waroquier, Michel; Brooks, Bernard R.

    2015-01-01

    Until recently, normal mode analysis (NMA) was limited to small proteins, not only because the required energy minimization is a computationally exhausting task, but also because NMA requires the expensive diagonalization of a 3N_a × 3N_a matrix, with N_a the number of atoms. A series of simplified models has been proposed, in particular the Rotation-Translation Blocks (RTB) method by Tama et al. for the simulation of proteins. It makes use of the concept that a peptide chain or protein can be seen as a consecutive set of rigid components, i.e. the peptide units. A peptide chain is thus divided into rigid blocks with six degrees of freedom each. Recently we developed the Mobile Block Hessian (MBH) method, which has features similar to the RTB method. The main difference is that MBH was developed to deal with partially optimized systems. The position/orientation of each block is optimized while the internal geometry is kept fixed at a plausible - but not necessarily optimized - geometry. This reduces the computational cost of the energy minimization. Applying standard NMA to a partially optimized structure, however, results in spurious imaginary frequencies and unwanted coordinate dependence. The MBH avoids these unphysical effects by taking into account energy gradient corrections. Moreover, the number of variables is reduced, which facilitates the diagonalization of the Hessian. In the original implementation of MBH, atoms could only be part of one rigid block. The MBH is now extended to the case where atoms can be part of two or more blocks. Two basic linkages can be realized: (1) blocks connected by one link atom, or (2) by two link atoms, where the latter is referred to as the hinge type connection. In this work we present the MBH concept and illustrate its performance with the crambin protein as an example.

  11. Feasibility of Computed Tomography-Guided Methods for Spatial Normalization of Dopamine Transporter Positron Emission Tomography Image.

    Science.gov (United States)

    Kim, Jin Su; Cho, Hanna; Choi, Jae Yong; Lee, Seung Ha; Ryu, Young Hoon; Lyoo, Chul Hyoung; Lee, Myung Sik

    2015-01-01

    Spatial normalization is a prerequisite step for analyzing positron emission tomography (PET) images both by using a volume-of-interest (VOI) template and by voxel-based analysis. Magnetic resonance (MR) or ligand-specific PET templates are currently used for spatial normalization of PET images. We used computed tomography (CT) images acquired with a PET/CT scanner for the spatial normalization of [18F]-N-3-fluoropropyl-2-betacarboxymethoxy-3-beta-(4-iodophenyl) nortropane (FP-CIT) PET images and compared target-to-cerebellar standardized uptake value ratio (SUVR) values with those obtained from MR- or PET-guided spatial normalization methods in healthy controls and patients with Parkinson's disease (PD). We included 71 healthy controls and 56 patients with PD who underwent [18F]-FP-CIT PET scans with a PET/CT scanner and T1-weighted MR scans. Spatial normalization of MR images was done with a conventional spatial normalization tool (cvMR) and with the DARTEL toolbox (dtMR) in statistical parametric mapping software. The CT images were modified in two ways, skull-stripping (ssCT) and intensity transformation (itCT). We normalized PET images with cvMR-, dtMR-, ssCT-, itCT-, and PET-guided methods by using specific templates for each modality and measured striatal SUVR with a VOI template. The SUVR values measured with FreeSurfer-generated VOIs (FSVOI) overlaid on the original PET images were also used as a gold standard for comparison. The SUVR values derived from all four structure-guided spatial normalization methods were highly correlated with those measured with FSVOI, and the CT-guided methods provided reliable striatal SUVR values comparable to those obtained with MR-guided methods. CT-guided methods can be useful for analyzing dopamine transporter PET images when MR images are unavailable.

  12. Comparing the normalization methods for the differential analysis of Illumina high-throughput RNA-Seq data.

    Science.gov (United States)

    Li, Peipei; Piao, Yongjun; Shon, Ho Sun; Ryu, Keun Ho

    2015-10-28

    Recently, rapid improvements in technology and decreases in sequencing costs have made RNA-Seq a widely used technique to quantify gene expression levels. Various normalization approaches have been proposed, owing to the importance of normalization in the analysis of RNA-Seq data. A comparison of recently proposed normalization methods is required to generate suitable guidelines for the selection of the most appropriate approach for future experiments. In this paper, we compared eight non-abundance (RC, UQ, Med, TMM, DESeq, Q, RPKM, and ERPKM) and two abundance estimation normalization methods (RSEM and Sailfish). The experiments were based on real Illumina high-throughput RNA-Seq of 35- and 76-nucleotide sequences produced in the MAQC project and simulation reads. Reads were mapped to the human genome obtained from the UCSC Genome Browser Database. For precise evaluation, we investigated the Spearman correlation between the normalization results from RNA-Seq and MAQC qRT-PCR values for 996 genes. Based on this work, we showed that out of the eight non-abundance estimation normalization methods, RC, UQ, Med, TMM, DESeq, and Q gave similar normalization results for all data sets. For RNA-Seq of a 35-nucleotide sequence, RPKM showed the highest correlation, but for RNA-Seq of a 76-nucleotide sequence it showed the lowest correlation among the methods. ERPKM did not improve on RPKM. Between the two abundance estimation normalization methods, for RNA-Seq of a 35-nucleotide sequence, higher correlation was obtained with Sailfish than with RSEM, both better than without using abundance estimation. However, for RNA-Seq of a 76-nucleotide sequence, the results achieved by RSEM were similar to those without abundance estimation, and much better than those with Sailfish. Furthermore, we found that adding a poly-A tail increased alignment numbers but did not improve normalization results. Spearman correlation analysis revealed that RC, UQ
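
    As a concrete illustration of the kind of comparison described above, the sketch below computes RPKM (reads per kilobase of transcript per million mapped reads) and rank-correlates it against matched qRT-PCR values; the numbers are made up for illustration and are not from the MAQC data.

```python
import numpy as np
from scipy.stats import spearmanr

def rpkm(counts, lengths_bp):
    """RPKM = reads * 1e9 / (total mapped reads * gene length in bp)."""
    return counts * 1e9 / (counts.sum() * lengths_bp)

# illustrative values only
counts = np.array([120.0, 30.0, 4500.0, 9.0])       # mapped reads per gene
lengths = np.array([1500.0, 900.0, 3200.0, 600.0])  # gene lengths (bp)
qpcr = np.array([2.1, 0.9, 35.0, 0.3])              # matched qRT-PCR values

rho, _ = spearmanr(rpkm(counts, lengths), qpcr)
print(f"Spearman rho vs qRT-PCR: {rho:.2f}")
```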

  13. Distinguishing hyperhidrosis and normal physiological sweat production

    DEFF Research Database (Denmark)

    Thorlacius, Linnea; Gyldenløve, Mette; Zachariae, Claus

    2015-01-01

    The aim of this study was to establish reference intervals for normal physiological axillary and palmar sweat production. METHODS: Gravimetric testing was performed in 75 healthy control subjects. Subsequently, these results were compared with findings in a cohort of patients with hyperhidrosis and with the results ... 100 mg/5 min. CONCLUSIONS: A sweat production rate of 100 mg/5 min as measured by gravimetric testing may be a reasonable cut-off value for distinguishing axillary and palmar hyperhidrosis from normal physiological sweat production.

  14. Quality Analysis of Cutting Steel Using Laser

    Directory of Open Access Journals (Sweden)

    Vladislav Markovič

    2013-02-01

    Full Text Available The article explores how the quality of the edge surface of steel C45 LST EN 10083-1 cut with a laser depends on the cutting regime and on variations in the thickness of the trial steel. The paper presents the influence of the main operating modes of the Trulaser 3030 laser cutting machine, including cutting speed, pressure, angle and surface thickness, on the quality characteristics of the sample. The quality of the edge after laser cutting is the most important indicator influencing the spread of this technology in industry worldwide; laser cutting is the most popular method of material cutting. The article therefore focuses on cutting equipment, cutting defects and methods of analysis. Research on microstructure, roughness and micro-toughness has been performed on the sample edges. Conclusions are drawn at the end of the publication. Article in Lithuanian.

  16. Quantitative Analysis of Differential Proteome Expression in Bladder Cancer vs. Normal Bladder Cells Using SILAC Method.

    Directory of Open Access Journals (Sweden)

    Ganglong Yang

    Full Text Available The best way to increase patient survival rate is to identify patients who are likely to progress to muscle-invasive or metastatic disease upfront and treat them more aggressively. The human cell lines HCV29 (normal bladder epithelia), KK47 (low grade nonmuscle invasive bladder cancer, NMIBC), and YTS1 (metastatic bladder cancer) have been widely used in studies of molecular mechanisms and cell signaling during bladder cancer (BC) progression. However, little attention has been paid to global quantitative proteome analysis of these three cell lines. We labeled HCV29, KK47, and YTS1 cells by the SILAC method using three stable isotopes each of arginine and lysine. Labeled proteins were analyzed by 2D ultrahigh-resolution liquid chromatography LTQ Orbitrap mass spectrometry. Among 3721 unique identified and annotated proteins in KK47 and YTS1 cells, 36 were significantly upregulated and 74 were significantly downregulated with >95% confidence. Differential expression of these proteins was confirmed by western blotting, quantitative RT-PCR, and cell staining with specific antibodies. Gene ontology (GO) term and pathway analysis indicated that the differentially regulated proteins were involved in DNA replication and molecular transport, cell growth and proliferation, cellular movement, immune cell trafficking, and cell death and survival. These proteins and the advanced proteome techniques described here will be useful for further elucidation of molecular mechanisms in BC and other types of cancer.

  17. NDT-Bobath method in normalization of muscle tone in post-stroke patients.

    Science.gov (United States)

    Mikołajewska, Emilia

    2012-01-01

    Ischaemic stroke is responsible for 80-85% of strokes. There is great interest in finding effective methods of rehabilitation for post-stroke patients. The aim of this study was to assess the results of rehabilitation carried out in the normalization of upper limb muscle tonus in patients, estimated on the Ashworth Scale for Grading Spasticity. The examined group consisted of 60 patients after ischaemic stroke. 10 sessions of NDT-Bobath therapy were provided within 2 weeks (ten days of therapy). Patient examinations using the Ashworth Scale for Grading Spasticity were done twice: the first time on admission and the second after the last session of the therapy to assess rehabilitation effects. Among the patients involved in the study, the results measured on the Ashworth Scale (where possible) were as follows: recovery in 16 cases (26.67%), relapse in 1 case (1.67%), no measurable changes (or change within the same grade of the scale) in 8 cases (13.33%). Statistically significant changes were observed in the health status of the patients. These changes, in the area of muscle tone, were favorable and reflected in the outcomes of the assessment using the Ashworth Scale for Grading Spasticity.

  18. A design method for two-layer beams consisting of normal and fibered high strength concrete

    International Nuclear Information System (INIS)

    Iskhakov, I.; Ribakov, Y.

    2007-01-01

    Two-layer fibered concrete beams can be analyzed using conventional methods for composite elements. The compressed zone of such a beam section is made of high strength concrete (HSC), and the tensile one of normal strength concrete (NSC). The problems related to this type of beam are revealed and studied, and an appropriate depth for each layer is prescribed. Compatibility conditions between the HSC and NSC layers are found, based on the equality of shear deformations at the layer border in the section with maximal depth of the compression zone. For the first time a rigorous definition of HSC is given, using a comparative analysis of the deformability and strength characteristics of different concrete classes. According to this definition, HSC has no descending branch in the stress-strain diagram, the stress-strain function has minimum exponent, the ductility parameter is minimal, and the concrete tensile strength remains constant with an increase in concrete compression strength. The fields of application of two-layer concrete beams under different static schemes and load conditions are described. The main disadvantage of HSCs is their low ductility; in order to overcome this problem, fibers are added to the HSC layer. The influence of different fiber volume ratios on structural ductility is discussed, and an upper limit on the required fiber volume ratio is found, based on a compatibility equation between transverse tensile concrete deformations and fiber deformations.

  19. Statistical methods for estimating normal blood chemistry ranges and variance in rainbow trout (Salmo gairdneri), Shasta Strain

    Science.gov (United States)

    Wedemeyer, Gary A.; Nelson, Nancy C.

    1975-01-01

    Gaussian and nonparametric (percentile estimate and tolerance interval) statistical methods were used to estimate normal ranges for blood chemistry (bicarbonate, bilirubin, calcium, hematocrit, hemoglobin, magnesium, mean cell hemoglobin concentration, osmolality, inorganic phosphorus, and pH) for juvenile rainbow trout (Salmo gairdneri, Shasta strain) held under defined environmental conditions. The percentile estimate and Gaussian methods gave similar normal ranges, whereas the tolerance interval method gave consistently wider ranges for all blood variables except hemoglobin. If the underlying frequency distribution is unknown, the percentile estimate procedure would be the method of choice.
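
    The two simplest estimators compared above can be sketched in a few lines; this is a generic illustration of the Gaussian versus percentile-estimate reference intervals, not the authors' code.

```python
import numpy as np

def normal_range_gaussian(x):
    """Gaussian 95% reference interval: mean +/- 1.96 SD."""
    m, s = x.mean(), x.std(ddof=1)
    return m - 1.96 * s, m + 1.96 * s

def normal_range_percentile(x):
    """Nonparametric percentile estimate of the central 95% range."""
    return tuple(np.percentile(x, [2.5, 97.5]))

# e.g. hematocrit measurements from a reference group (illustrative values)
hct = np.random.default_rng(0).normal(35.0, 3.0, size=200)
print(normal_range_gaussian(hct), normal_range_percentile(hct))
```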

  20. A normalization method for combination of laboratory test results from different electronic healthcare databases in a distributed research network.

    Science.gov (United States)

    Yoon, Dukyong; Schuemie, Martijn J; Kim, Ju Han; Kim, Dong Ki; Park, Man Young; Ahn, Eun Kyoung; Jung, Eun-Young; Park, Dong Kyun; Cho, Soo Yeon; Shin, Dahye; Hwang, Yeonsoo; Park, Rae Woong

    2016-03-01

    Distributed research networks (DRNs) afford statistical power by integrating observational data from multiple partners for retrospective studies. However, laboratory test results across care sites are derived using different assays from varying patient populations, making it difficult to simply combine the data for analysis. Additionally, existing normalization methods are not suitable for retrospective studies. We normalized laboratory results from different data sources by adjusting for the heterogeneous clinico-epidemiologic characteristics of the data and called this the subgroup-adjusted normalization (SAN) method. Subgroup-adjusted normalization renders the means and standard deviations of the distributions identical under population structure-adjusted conditions. To evaluate its performance, we compared SAN with existing methods on simulated and real datasets consisting of blood urea nitrogen, serum creatinine, hematocrit, hemoglobin, serum potassium, and total bilirubin. Various clinico-epidemiologic characteristics can be applied together in SAN; for simplicity of comparison, age and gender were used to adjust for population heterogeneity in this study. In simulations, SAN had the lowest standardized difference in means (SDM) and Kolmogorov-Smirnov values for all tests, performing better than the other normalization methods. The SAN method is applicable in a DRN environment and should facilitate the analysis of data integrated across DRN partners for retrospective observational studies. Copyright © 2015 John Wiley & Sons, Ltd.
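
    A rough sketch of the subgroup-adjusted idea, assuming pandas DataFrames with illustrative column names (the paper's exact adjustment procedure may differ): within each age-band/gender subgroup, the local distribution is shifted and scaled so its mean and SD match the corresponding subgroup of a reference dataset.

```python
import pandas as pd

def san_normalize(df, ref, value="result", by=("age_band", "gender")):
    """Match each subgroup's mean/SD to the reference dataset's subgroup."""
    out = df.copy()
    for key, g in df.groupby(list(by)):
        r = ref.groupby(list(by)).get_group(key)[value]
        z = (g[value] - g[value].mean()) / g[value].std()
        out.loc[g.index, value] = z * r.std() + r.mean()
    return out
```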

  1. Effects of Different LiDAR Intensity Normalization Methods on Scotch Pine Forest Leaf Area Index Estimation

    Directory of Open Access Journals (Sweden)

    YOU Haotian

    2018-02-01

    Full Text Available The intensity data of airborne light detection and ranging (LiDAR) are affected by many factors during the acquisition process. Effective quantification and normalization of each factor's effect is of great significance for the normalization and application of LiDAR intensity data. In this paper, the LiDAR data were normalized for range, for angle of incidence, and for both range and angle of incidence based on the radar equation. Two metrics, canopy intensity sum and ratio of intensity, were then extracted and used to estimate forest LAI, with the aim of quantifying the effect of intensity normalization on forest LAI estimation. It was found that range normalization improved the accuracy of forest LAI estimation, while angle-of-incidence normalization did not improve the accuracy and made the results worse. Although intensity data normalized for both range and incidence angle improved the accuracy, the improvement was smaller than that from range normalization alone. Meanwhile, the differences between the LAI estimates from raw and normalized intensity data were relatively large for the canopy intensity sum metric but relatively small for the ratio of intensity metric. The results demonstrate that the effect of intensity normalization on forest LAI estimation depends on the choice of affecting factor, and that the magnitude of the influence is closely related to the characteristics of the metrics used. The appropriate intensity normalization method should therefore be chosen in future research according to the metrics used, which could avoid wasted cost and reduced estimation accuracy caused by introducing inappropriate affecting factors into the intensity normalization.
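
    In the standard radar-equation form, range normalization rescales each return by the squared ratio of its range to a reference range, and incidence-angle normalization divides by the cosine of the angle. A minimal sketch (the reference range and the squared exponent are typical defaults, not values from the paper):

```python
import numpy as np

def normalize_intensity(intensity, rng, incidence_rad, ref_range=1000.0):
    """Range and incidence-angle normalization of LiDAR intensity.

    intensity: raw return intensity
    rng: sensor-to-target range (same units as ref_range)
    incidence_rad: angle of incidence in radians
    """
    range_corrected = intensity * (rng / ref_range) ** 2
    return range_corrected / np.cos(incidence_rad)
```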

  2. Stem Cuttings as a Quick In Vitro Screening Method of Sodium Chloride Tolerance in Potato (Solanum) Genotypes

    International Nuclear Information System (INIS)

    Elhag, A. Z.; Mix-Wagnar, G.; Elbassam, N.; Horst, W.

    2008-01-01

    This study was conducted to determine how far an in vitro stem-cutting technique is suitable for quick screening of NaCl tolerance in Solanum genotypes, and to identify some aspects of their NaCl tolerance. Fifteen Solanum genotypes were tested at four NaCl concentrations, both in vitro and in vivo. Two-node stem cuttings of in vitro produced explants were grown on Murashige and Skoog (MS) salts supplemented with four NaCl concentrations (0, 40, 80 and 120 mM) for six weeks in vitro. The other part of the in vitro grown explants was transplanted into Kick-Brauckmann pots containing sandy loam soil, also supplemented with four NaCl concentrations (0, 0.1, 0.2 and 0.3% NaCl, w/w), and grown further either for eight weeks or until harvest in a greenhouse. Both experiments used a completely randomized design with four replicates. The main stem length, shoot dry matter and tuber yield, as well as mineral element (Na+, K+, Ca2+ and Cl-) concentrations, were measured. The growth of all genotypes was affected by increasing NaCl. There was a close correlation between the growth response (length of the explant main stem) in vitro and shoot dry matter and tuber yield in vivo (r=0.81** for dry matter and r=0.72** for tuber yield). Na+ and Cl- concentrations in shoots were inversely correlated with vegetative growth (r=-0.73** for both in vitro, and r=-0.89** and r=-0.88** in vivo, respectively). The genotypes showed varied ability to reduce the transport of Na+ and Cl- to the shoots, whereby NaCl-tolerant genotypes showed lower contents of both elements than the sensitive ones. K+ and Ca2+ concentrations decreased with increasing NaCl concentration. The responses for mineral element (Na+ and Cl-) accumulation or restriction of explants in vitro and intact plants in vivo were also closely correlated (r=0.79** and 0.71**, respectively), especially at the medium NaCl concentrations (80 mM and 0.2% NaCl). The similar response of the explant and the intact plant

  3. Experimental improvement of the technology of cutting of high-pressure hoses with metal braid on hand cutting machine

    OpenAIRE

    Karpenko, Mykola; Bogdevicius, Marijonas; Prentkovskis, Olegas

    2016-01-01

    The article reviews the problem of improving the technology for cutting high-pressure hoses on hand cutting machines. Different methods of cutting high-pressure hoses into billets are surveyed, and the quality of the hose cut edges is analyzed. A comparison between treatment on automatic cutting machines and on hand cutting machines is carried out. Different experimental techniques for improving the quality of the cut edges of high-pressure hoses are presented...

  4. Method for Automatic Selection of Parameters in Normal Tissue Complication Probability Modeling.

    Science.gov (United States)

    Christophides, Damianos; Appelt, Ane L; Gusnanto, Arief; Lilley, John; Sebag-Montefiore, David

    2018-07-01

    To present a fully automatic method to generate multiparameter normal tissue complication probability (NTCP) models and compare its results with those of a published model, using the same patient cohort. Data were analyzed from 345 rectal cancer patients treated with external radiation therapy to predict the risk of patients developing grade 1 or ≥2 cystitis. In total, 23 clinical factors were included in the analysis as candidate predictors of cystitis. Principal component analysis was used to decompose the bladder dose-volume histogram into 8 principal components, explaining more than 95% of the variance. The data set of clinical factors and principal components was divided into training (70%) and test (30%) data sets, with the training data set used by the algorithm to compute an NTCP model. The first step of the algorithm was to obtain a bootstrap sample, followed by multicollinearity reduction using the variance inflation factor and genetic algorithm optimization to determine an ordinal logistic regression model that minimizes the Bayesian information criterion. The process was repeated 100 times, and the model with the minimum Bayesian information criterion was recorded on each iteration. The most frequent model was selected as the final "automatically generated model" (AGM). The published model and AGM were fitted on the training data sets, and the risk of cystitis was calculated. The 2 models had no significant differences in predictive performance, both for the training and test data sets (P value > .05), and found similar clinical and dosimetric factors as predictors. Both models exhibited good explanatory performance on the training data set (P values > .44), which was reduced on the test data sets (P values < .05). The predictive value of the AGM is equivalent to that of the expert-derived published model. It demonstrates potential in saving time, tackling problems with a large number of parameters, and standardizing variable selection in NTCP modeling.
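
    A compact stand-in for the search loop described above (illustrative only): dose-volume principal components and clinical factors are screened, and the logistic model minimizing the Bayesian information criterion is kept. An exhaustive search over small subsets replaces the genetic algorithm, and a binary logit replaces the ordinal one, for brevity.

```python
import numpy as np
import statsmodels.api as sm
from itertools import combinations

def best_bic_logit(X, y, max_vars=3):
    """Return (BIC, column subset) of the BIC-minimizing logistic model."""
    best_bic, best_cols = np.inf, None
    for k in range(1, max_vars + 1):
        for cols in combinations(range(X.shape[1]), k):
            design = sm.add_constant(X[:, list(cols)])
            res = sm.Logit(y, design).fit(disp=0)
            if res.bic < best_bic:
                best_bic, best_cols = res.bic, cols
    return best_bic, best_cols
```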

  5. Normalization Methods and Selection Strategies for Reference Materials in Stable Isotope Analyses. Review

    Energy Technology Data Exchange (ETDEWEB)

    Skrzypek, G. [West Australian Biogeochemistry Centre, John de Laeter Centre of Mass Spectrometry, School of Plant Biology, University of Western Australia, Crawley (Australia); Sadler, R. [School of Agricultural and Resource Economics, University of Western Australia, Crawley (Australia); Paul, D. [Department of Civil Engineering (Geosciences), Indian Institute of Technology Kanpur, Kanpur (India); Forizs, I. [Institute for Geochemical Research, Hungarian Academy of Sciences, Budapest (Hungary)

    2013-07-15

    Stable isotope ratio mass spectrometers are highly precise, but not accurate, instruments. Therefore, results have to be normalized to one of the isotope scales (e.g., VSMOW, VPDB) based on well calibrated reference materials. The selection of reference materials, the number of replicates, the δ-values of these reference materials and the normalization technique have been identified as crucial in determining the uncertainty associated with the final results. The most common normalization techniques and reference materials have been tested using both Monte Carlo simulations and laboratory experiments to investigate aspects of error propagation during the normalization of isotope data. The range of observed differences justifies the need to employ the same sets of standards worldwide for each element and each stable isotope analytical technique. (author)
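
    The workhorse among the techniques mentioned is two-point (shift-and-stretch) normalization: a line through the measured versus accepted δ-values of two reference materials maps raw measurements onto the scale. A worked sketch (the numeric example is illustrative; the SLAP2 value is the accepted δ2H on the VSMOW scale):

```python
def two_point_normalization(delta_raw, measured, accepted):
    """Normalize raw delta-values onto an isotope scale (e.g. VSMOW).

    measured: (m1, m2) measured delta-values of two reference materials
    accepted: (a1, a2) their accepted values on the scale
    """
    (m1, m2), (a1, a2) = measured, accepted
    slope = (a2 - a1) / (m2 - m1)
    return a1 + slope * (delta_raw - m1)

# e.g. VSMOW2 (0.0 per mil) and SLAP2 (-427.5 per mil) for delta-2H
print(two_point_normalization(-55.0, measured=(1.2, -425.1),
                              accepted=(0.0, -427.5)))
```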

  6. Integrating atlas and graph cut methods for right ventricle blood-pool segmentation from cardiac cine MRI

    Science.gov (United States)

    Dangi, Shusil; Linte, Cristian A.

    2017-03-01

    Segmentation of the right ventricle from cardiac MRI images can be used to build pre-operative anatomical heart models to precisely identify regions of interest during minimally invasive therapy. Furthermore, many functional parameters of the right heart, such as right ventricular volume, ejection fraction, myocardial mass and thickness, can also be assessed from the segmented images. To obtain an accurate and computationally efficient segmentation of the right ventricle from cardiac cine MRI, we propose a segmentation algorithm formulated as an energy minimization problem in a graph. A shape prior, obtained by propagating labels from an average atlas using affine registration, is incorporated into the graph framework to overcome problems in ill-defined image regions. The optimal segmentation, corresponding to the labeling with minimum energy configuration of the graph, is obtained via graph-cuts and is iteratively refined to produce the final right ventricle blood pool segmentation. We quantitatively compare the segmentation results obtained from our algorithm to the provided gold-standard expert manual segmentation for 16 cine-MRI datasets available through the MICCAI 2012 Cardiac MR Right Ventricle Segmentation Challenge according to several similarity metrics, including Dice coefficient, Jaccard coefficient, Hausdorff distance, and mean absolute distance error.
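
    The energy-minimization step described above can be prototyped with a generic binary graph cut; the sketch below uses the PyMaxflow package, folding the image data term (blended with an atlas-propagated shape prior) into the terminal capacities and a uniform smoothness term into the neighborhood edges. Weights and the probability blend are illustrative, not the authors' formulation.

```python
import numpy as np
import maxflow  # PyMaxflow

def binary_graph_cut(prob_fg, smoothness=1.0):
    """Segment a 2D foreground-probability map by s/t graph cut."""
    eps = 1e-6
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(prob_fg.shape)
    g.add_grid_edges(nodes, smoothness)  # pairwise (smoothness) term
    # terminal capacities encode the unary costs for the two labels
    g.add_grid_tedges(nodes,
                      -np.log(1.0 - prob_fg + eps),
                      -np.log(prob_fg + eps))
    g.maxflow()
    return g.get_grid_segments(nodes)  # boolean label map
```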

  7. Inexpensive, rapid prototyping of microfluidic devices using overhead transparencies and a laser print, cut and laminate fabrication method.

    Science.gov (United States)

    Thompson, Brandon L; Ouyang, Yiwen; Duarte, Gabriela R M; Carrilho, Emanuel; Krauss, Shannon T; Landers, James P

    2015-06-01

    We describe a technique for fabricating microfluidic devices with complex multilayer architectures using a laser printer, a CO2 laser cutter, an office laminator and common overhead transparencies as a printable substrate via a laser print, cut and laminate (PCL) methodology. The printer toner serves three functions: (i) it defines the microfluidic architecture, which is printed on the overhead transparencies; (ii) it acts as the adhesive agent for the bonding of multiple transparency layers; and (iii) it provides, in its unmodified state, printable, hydrophobic 'valves' for fluidic flow control. By using common graphics software, e.g., CorelDRAW or AutoCAD, the protocol produces microfluidic devices with a design-to-device time of ∼40 min. Devices of any shape can be generated for an array of multistep assays, with colorimetric detection of molecular species ranging from small molecules to proteins. Channels with varying depths can be formed using multiple transparency layers in which a CO2 laser is used to remove the polyester from the channel sections of the internal layers. The simplicity of the protocol, availability of the equipment and substrate and cost-effective nature of the process make microfluidic devices available to those who might benefit most from expedited, microscale chemistry.

  8. Methods to evaluate normal rainfall for short-term wetland hydrology assessment

    Science.gov (United States)

    Jaclyn Sumner; Michael J. Vepraskas; Randall K. Kolka

    2009-01-01

    Identifying sites meeting wetland hydrology requirements is simple when long-term (>10 years) records are available. Because such data are rare, we hypothesized that a single year of hydrology data could be used to reach the same conclusion as with long-term data, if the data were obtained during a period of normal or below normal rainfall. Long-term (40-45 years)...

  9. Group vector space method for estimating enthalpy of vaporization of organic compounds at the normal boiling point.

    Science.gov (United States)

    Wenying, Wei; Jinyu, Han; Wen, Xu

    2004-01-01

    The specific position of a group in the molecule has been considered, and a group vector space method for estimating the enthalpy of vaporization at the normal boiling point of organic compounds has been developed. An expression for the enthalpy of vaporization Δ_vap H(T_b) has been established and numerical values of the relative group parameters obtained. The average percent deviation of the estimated Δ_vap H(T_b) is 1.16, which shows that the present method offers a significant improvement in applicability for predicting the enthalpy of vaporization at the normal boiling point compared with conventional group methods.

  10. Plasma arc cutting: speed and cut quality

    International Nuclear Information System (INIS)

    Nemchinsky, V A; Severance, W S

    2009-01-01

    When cutting metal with plasma arc cutting, the walls of the cut are narrower at the bottom than at the top. This lack of squareness increases as the cutting speed increases. A model of this phenomenon, affecting cut quality, is suggested. A thin liquid layer, which separates the plasma from the solid metal to be melted, plays a key role in the suggested model. This layer decreases heat transfer from the plasma to the solid metal; the decrease is more pronounced the higher the speed and the thicker the liquid metal layer. Since the layer is thicker at the bottom of the cut, the heat transfer effectiveness is lower at the bottom. The decrease in heat transfer effectiveness is compensated by the narrowness of the cut. The suggested model allows one to calculate the profile of the cut. The result of the calculations of the cutting speeds for plates of various thicknesses, at which the squareness of the cut is acceptable, agrees well with the speeds recommended by manufacturers. The second effect considered in the paper is the deflection of the plasma jet from the vertical at a high cutting speed. A qualitative explanation of this phenomenon is given. We believe the considerations of this paper are pertinent to other types of cutting with moving heat sources.

  11. An adaptive simplex cut-cell method for high-order discontinuous Galerkin discretizations of elliptic interface problems and conjugate heat transfer problems

    Science.gov (United States)

    Sun, Huafei; Darmofal, David L.

    2014-12-01

    In this paper we propose a new high-order solution framework for interface problems on non-interface-conforming meshes. The framework consists of a discontinuous Galerkin (DG) discretization, a simplex cut-cell technique, and an output-based adaptive scheme. We first present a DG discretization with a dual-consistent output evaluation for elliptic interface problems on interface-conforming meshes, and then extend the method to handle multi-physics interface problems, in particular conjugate heat transfer (CHT) problems. The method is then applied to non-interface-conforming meshes using a cut-cell technique, where the interface definition is completely separate from the mesh generation process. No assumption is made on the interface shape (other than Lipschitz continuity). We then equip our strategy with an output-based adaptive scheme for an accurate output prediction. Through numerical examples, we demonstrate high-order convergence for elliptic interface problems and CHT problems with both smooth and non-smooth interface shapes.

  12. Drilling and coring methods that minimize the disturbance of cuttings, core, and rock formation in the unsaturated zone, Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Hammermeister, D.P.; Blout, D.O.; McDaniel, J.C.

    1985-01-01

    A drilling-and-casing method (Odex 115 system) utilizing air as a drilling fluid was used successfully to drill through various rock types within the unsaturated zone at Yucca Mountain, Nevada. This paper describes this method and the equipment used to rapidly penetrate bouldery alluvial-colluvial deposits, poorly consolidated bedded and nonwelded tuff, and fractured, densely welded tuff to depths of about 130 meters. A comparison of water-content and water-potential data from drill cuttings with similar measurements on rock cores indicates that drill cuttings were only slightly disturbed for several of the rock types penetrated. Coring, sampling, and handling methods were devised to obtain minimally disturbed drive core from bouldery alluvial-colluvial deposits. Bulk-density values obtained from bulk samples dug from nearby trenches were compared to bulk-density values obtained from drive core to determine the effects of drive coring on the porosity of the core. Rotary coring methods utilizing a triple-tube core barrel and air as the drilling fluid were used to obtain core from welded and nonwelded tuff. Results indicate that the disturbance of the water content of the core was minimal. Water-content distributions in alluvium-colluvium were determined before drilling occurred by drive-core methods. After drilling, water-content distributions were determined by nuclear-logging methods. A comparison of the water-content distributions made before and after drilling indicates that Odex 115 drilling minimally disturbs the water content of the formation rock. 10 refs., 12 figs., 4 tabs

  13. Different percentages of false-positive results obtained using five methods for the calculation of reference change values based on simulated normal and ln-normal distributions of data

    DEFF Research Database (Denmark)

    Lund, Flemming; Petersen, Per Hyltoft; Fraser, Callum G

    2016-01-01

    a homeostatic set point that follows a normal (Gaussian) distribution. This set point (or baseline in steady-state) should be estimated from a set of previous samples, but, in practice, decisions based on reference change value are often based on only two consecutive results. The original reference change value......-positive results. The aim of this study was to investigate false-positive results using five different published methods for calculation of reference change value. METHODS: The five reference change value methods were examined using normally and ln-normally distributed simulated data. RESULTS: One method performed...... best in approaching the theoretical false-positive percentages on normally distributed data and another method performed best on ln-normally distributed data. The commonly used reference change value method based on two results (without use of estimated set point) performed worst both on normally...
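
    For orientation, the classical two-result reference change value mentioned above combines analytical and within-subject variation; a pair of consecutive results differing by more than the RCV is flagged as a real change. A worked sketch (Z = 1.96 for two-sided 5% significance):

```python
from math import sqrt

def rcv_percent(cv_analytical, cv_within_subject, z=1.96):
    """Classical reference change value, in percent:
    RCV = sqrt(2) * Z * sqrt(CVa^2 + CVi^2)."""
    return sqrt(2.0) * z * sqrt(cv_analytical**2 + cv_within_subject**2)

# e.g. CVa = 3%, CVi = 6% -> differences beyond ~18.6% are flagged
print(f"{rcv_percent(3.0, 6.0):.1f} %")
```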

  14. A simple method to calculate the influence of dose inhomogeneity and fractionation in normal tissue complication probability evaluation

    International Nuclear Information System (INIS)

    Begnozzi, L.; Gentile, F.P.; Di Nallo, A.M.; Chiatti, L.; Zicari, C.; Consorti, R.; Benassi, M.

    1994-01-01

    Since volumetric dose distributions are available with 3-dimensional radiotherapy treatment planning, they can be used in the statistical evaluation of response to radiation. This report presents a method to calculate the influence of dose inhomogeneity and fractionation in normal tissue complication probability evaluation. The mathematical expression for the calculation of normal tissue complication probability has been derived by combining the Lyman model with the histogram reduction method of Kutcher et al. and using the normalized total dose (NTD) instead of the total dose. The fitting of published tolerance data, in the case of homogeneous or partial brain irradiation, has been considered. For the same total or partial volume homogeneous irradiation of the brain, curves of normal tissue complication probability have been calculated with fraction sizes of 1.5 Gy and 3 Gy instead of 2 Gy, to show the influence of fraction size. The influence of dose distribution inhomogeneity and of the α/β value has also been simulated: considering α/β=1.6 Gy or α/β=4.1 Gy for kidney clinical nephritis, the calculated curves of normal tissue complication probability are shown. Combining NTD calculations and histogram reduction techniques, normal tissue complication probability can be estimated taking into account the most relevant contributing factors, including the volume effect. (orig.)
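
    The two building blocks named above have standard closed forms; a minimal sketch (the parameter values in the usage line are examples, not the paper's fits):

```python
from math import erf, sqrt

def ntd(total_dose, dose_per_fraction, alpha_beta):
    """Normalized total dose: the biologically equivalent dose delivered
    in 2 Gy fractions, NTD = D * (a/b + d) / (a/b + 2)."""
    return total_dose * (alpha_beta + dose_per_fraction) / (alpha_beta + 2.0)

def lyman_ntcp(dose, td50, m):
    """Lyman sigmoid: NTCP = Phi(t), with t = (D - TD50) / (m * TD50)."""
    t = (dose - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

# e.g. 60 Gy in 3 Gy fractions with a/b = 2 Gy for late-responding tissue
d_eq = ntd(60.0, 3.0, 2.0)  # = 75 Gy in 2 Gy equivalents
print(lyman_ntcp(d_eq, td50=65.0, m=0.15))
```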

  15. Investigating the Effect of Normalization Norms in Flexible Manufacturing Sytem Selection Using Multi-Criteria Decision-Making Methods

    Directory of Open Access Journals (Sweden)

    Prasenjit Chatterjee

    2014-07-01

    Full Text Available The main objective of this paper is to assess the effect of different normalization norms within multi-criteria decision-making (MCDM) models. Three well accepted MCDM tools, namely preference ranking organization method for enrichment evaluation (PROMETHEE), grey relation analysis (GRA) and technique for order preference by similarity to ideal solution (TOPSIS), are applied to solve a flexible manufacturing system (FMS) selection problem in a discrete manufacturing environment. Finally, by introducing different normalization norms into the decision algorithms, their effect on the FMS selection problem under these MCDM models is studied.
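
    The norms in question are simple column-wise rescalings of the decision matrix applied before ranking; two of the most common are sketched below as a generic illustration (not necessarily the paper's exact set).

```python
import numpy as np

def vector_norm(X):
    """Vector (Euclidean) normalization, as classically used in TOPSIS."""
    return X / np.sqrt((X ** 2).sum(axis=0))

def linear_minmax_norm(X):
    """Linear max-min normalization onto [0, 1] per criterion."""
    return (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# rows = FMS alternatives, columns = criteria (illustrative values)
X = np.array([[3.0, 200.0, 0.8],
              [4.5, 150.0, 0.6],
              [2.5, 180.0, 0.9]])
print(vector_norm(X))
print(linear_minmax_norm(X))
```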

  16. Gluebond strength of laser cut wood

    Science.gov (United States)

    Charles W. McMillin; Henry A. Huber

    1985-01-01

    The degree of strength loss when gluing laser cut wood as compared to conventionally sawn wood, and the amount of additional surface treatment needed to improve bond quality, were assessed under normal furniture plant operating conditions. The strength of laser cut oak glued with polyvinyl acetate adhesive was reduced to 75 percent of that of sawn joints, and gum was reduced to 43...

  17. Histological versus stereological methods applied at spermatogonia during normal human development

    DEFF Research Database (Denmark)

    Cortes, Dina

    1990-01-01

    The number of spermatogonia per tubular transverse section (S/T) and the percentage of seminiferous tubules containing spermatogonia (the fertility index, FI) were measured in 40 pairs of normal autopsy testes aged 28 weeks of gestation to 40 years. S/T and FI showed similar changes during the whole...

  18. An analysis of normalization methods for Drosophila RNAi genomic screens and development of a robust validation scheme

    Science.gov (United States)

    Wiles, Amy M.; Ravi, Dashnamoorthy; Bhavani, Selvaraj; Bishop, Alexander J.R.

    2010-01-01

    Genome-wide RNAi screening is a powerful, yet relatively immature technology that allows investigation into the role of individual genes in a process of choice. Most RNAi screens identify a large number of genes with a continuous gradient in the assessed phenotype. Screeners must then decide whether to examine just those genes with the most robust phenotype or to examine the full gradient of genes that cause an effect and how to identify the candidate genes to be validated. We have used RNAi in Drosophila cells to examine viability in a 384-well plate format and compare two screens, untreated control and treatment. We compare multiple normalization methods, which take advantage of different features within the data, including quantile normalization, background subtraction, scaling, cellHTS2, and interquartile range measurement. Considering the false-positive potential that arises from RNAi technology, a robust validation method was designed for the purpose of gene selection for future investigations. In a retrospective analysis, we describe the use of validation data to evaluate each normalization method. While no normalization method worked ideally, we found that a combination of two methods, background subtraction followed by quantile normalization and cellHTS2, at different thresholds, captures the most dependable and diverse candidate genes. Thresholds are suggested depending on whether a few candidate genes are desired or a more extensive systems level analysis is sought. In summary, our normalization approaches and experimental design to perform validation experiments are likely to apply to those high-throughput screening systems attempting to identify genes for systems level analysis. PMID:18753689
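
    Of the methods compared above, quantile normalization and background subtraction are the easiest to sketch: each plate (column) is forced onto the common sorted-mean profile, and a per-plate background estimate is subtracted. A minimal version (assumes no ties; illustrative, not the authors' pipeline):

```python
import numpy as np

def quantile_normalize(X):
    """Quantile-normalize columns of X (wells x plates/replicates)."""
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)  # per-column ranks
    mean_profile = np.sort(X, axis=0).mean(axis=1)     # target distribution
    return mean_profile[ranks]

def background_subtract(X, control_rows):
    """Subtract a per-plate background estimate taken from control wells."""
    return X - np.median(X[control_rows], axis=0)
```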

  19. Inside-sediment partitioning of PAH, PCB and organochlorine compounds and inferences on sampling and normalization methods

    International Nuclear Information System (INIS)

    Opel, Oliver; Palm, Wolf-Ulrich; Steffen, Dieter; Ruck, Wolfgang K.L.

    2011-01-01

    Comparability of sediment analyses for semivolatile organic substances is still low. Neither screening of the sediments nor organic-carbon based normalization is sufficient to obtain comparable results. We show the interdependency of grain-size effects and inside-sediment organic-matter distribution for PAH, PCB and organochlorine compounds. Surface sediment samples collected by Van-Veen grab were sieved and analyzed for 16 PAH, 6 PCB and 18 organochlorine pesticides (OCP) as well as organic-matter content. Since bulk concentrations are influenced by grain-size effects themselves, we used a novel normalization method based on the sum of concentrations in the separate grain-size fractions of the sediments. By calculating relative normalized concentrations, it was possible to clearly show underlying mechanisms throughout a heterogeneous set of samples. Furthermore, we were able to show that, for comparability, screening at <125 μm is best suited and can be further improved by additional organic-carbon normalization. - Research highlights: → New method for the comparison of heterogeneous sets of sediment samples. → Assessment of organic pollutants partitioning mechanisms in sediments. → Proposed method for more comparable sediment sampling. - Inside-sediment partitioning mechanisms are shown using a new mathematical approach and discussed in terms of sediment sampling and normalization.

  20. Laser cutting of Kevlar laminates

    Energy Technology Data Exchange (ETDEWEB)

    VanCleave, R.A.

    1977-09-01

    An investigation has been conducted of the use of laser energy for cutting contours, diameters, and holes in flat and shaped Kevlar 49 fiber-reinforced epoxy laminates as an alternative to conventional machining. The investigation has shown that flat laminates 6.35 mm thick may be cut without backup by using a high-powered (1000-watt) continuous wave CO2 laser at high feedrates (33.87 mm per second). The cut produced was free of the burrs and delaminations resulting from conventional machining methods without intimate contact backup. In addition, the process cycle time was greatly reduced.

  1. International feedback experience on the cutting of reactor internal components

    International Nuclear Information System (INIS)

    Boucau, J.

    2014-01-01

    Westinghouse has accumulated more than 30 years of experience in the cutting of reactor internal components and their packaging for storage. Westinghouse has developed and validated different cutting methods: plasma torch cutting, high pressure abrasive water jet cutting, electric discharge cutting and mechanical cutting. Long feedback experience has enabled Westinghouse to list the pros and cons of each cutting technology. Plasma torch cutting is fast but raises dosimetry concerns linked to the control of the cuttings and to water clarity. Abrasive water jet cutting requires the installation of costly safety devices and of equipment for filtering the water, but this technology allows accurate cutting in hard-to-reach zones. Mechanical cutting is the most favourable technology in terms of waste generation and water clarity, but the cutting speed is low. (A.C.)

  2. A feasibility study in adapting Shamos Bickel and Hodges Lehman estimator into T-Method for normalization

    Science.gov (United States)

    Harudin, N.; Jamaludin, K. R.; Muhtazaruddin, M. Nabil; Ramlie, F.; Muhamad, Wan Zuki Azman Wan

    2018-03-01

    The T-Method is one of the techniques governed under the Mahalanobis Taguchi System that was developed specifically for multivariate data prediction. Prediction using the T-Method is possible even with a very limited sample size. Users of the T-Method are required to understand the population data trend clearly, since the method does not consider the effect of outliers within it. Outliers may cause apparent non-normality, and classical methods break down in their presence. There exist robust parameter estimates that provide satisfactory results both when the data contain outliers and when they are free of them; among these are the robust location and scale estimators Shamos Bickel (SB) and Hodges Lehman (HL), which can be used in place of the classical mean and standard deviation. Embedding these into the normalization stage of the T-Method may help enhance its accuracy, and also allows the robustness of the T-Method itself to be analysed. However, the higher-sample-size case study shows that the T-Method has the lowest average error percentage (3.09%) on data with extreme outliers, while HL and SB have the lowest error percentage (4.67%) for data without extreme outliers, with minimal error differences compared to the T-Method. The prediction error trend is reversed for the lower-sample-size case study. The results show that with a minimum sample size, where outliers are always at low risk, the T-Method performs better, while for a higher sample size with extreme outliers the T-Method likewise shows better prediction than the alternatives. For the case studies conducted in this research, normalization with the T-Method shows satisfactory results, and it is not feasible to adapt HL and SB, or the normal mean and standard deviation, into it, since they change the error percentages only minimally. Normalization using the T-Method is still considered to carry lower risk towards the effect of outliers.
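
    The two robust estimators named above have simple O(n^2) forms; a sketch using the strictly pairwise (i < j) variant of the definitions, where the 1.048 factor makes the Shamos estimator consistent with the standard deviation under normality:

```python
import numpy as np
from itertools import combinations

def hodges_lehmann(x):
    """Hodges-Lehmann location: median of pairwise (Walsh) averages."""
    return float(np.median([(a + b) / 2.0 for a, b in combinations(x, 2)]))

def shamos(x):
    """Shamos scale: median of pairwise absolute differences,
    rescaled to estimate the SD under normality."""
    return 1.048 * float(np.median([abs(a - b) for a, b in combinations(x, 2)]))

x = [10.1, 9.8, 10.3, 9.9, 35.0]  # one extreme outlier
print(hodges_lehmann(x), shamos(x))  # barely moved by the outlier
```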

  3. Study of propagation of Berberis thunbergii L. by cuttings, with using less-known methods of stimulation

    Directory of Open Access Journals (Sweden)

    Martin Říha

    2007-01-01

    Full Text Available The different type of own produce stimulators were tested at Berberis thunbergii L. 'Green Carpet', Berberis thunbergii 'Red Shift' and Berberis thunbergii 'Aureum'. We used the combination of growing inhibitors and quick-dip method, single quick-dip metod in solution of acetone and stimulant in form of gel. Groving inhibitors is including paclobutrazol and CCC in test. We used IBK, NAA, IAA and nicotin acid as auxins in quick-dip method. Medium was aceton solution.

  4. [Quantitative analysis method based on fractal theory for medical imaging of normal brain development in infants].

    Science.gov (United States)

    Li, Heheng; Luo, Liangping; Huang, Li

    2011-02-01

    The present paper studies the fractal spectrum of cerebral computerized tomography in 158 normal infants of different age groups, based on calculations from chaos theory. The distribution range in the neonatal period was 1.88-1.90 (mean = 1.8913 +/- 0.0064); it reached a stable condition at the level of 1.89-1.90 during 1-12 months of age (mean = 1.8927 +/- 0.0045); the normal range for infants 1-2 years old was 1.86-1.90 (mean = 1.8863 +/- 0.0085); and the quantitative value remained invariant within 1.88-1.91 (mean = 1.8958 +/- 0.0083) during 2-3 years of age. ANOVA indicated no significant difference between boys and girls (F = 0.243, P > 0.05), but the difference among age groups was significant (F = 8.947), consistent with normal brain development.
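
    Fractal dimension of a segmented CT structure is typically estimated by box counting: cover the binary image with boxes of decreasing size and regress log(occupied boxes) on log(1/size). A generic sketch (not necessarily the paper's exact procedure):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal (box-counting) dimension of a 2D binary mask."""
    counts = []
    for s in sizes:
        h = mask.shape[0] // s * s
        w = mask.shape[1] // s * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())  # occupied boxes
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```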

  5. Multibeam Fibre Laser Cutting

    DEFF Research Database (Denmark)

    Olsen, Flemming Ove

    The appearance of the high power, high brilliance fibre laser has opened new possibilities in laser materials processing. In laser cutting this laser has demonstrated high cutting performance compared to the dominating cutting laser, the CO2-laser. However, quality problems in fibre-laser cutting have until now limited its application in metal cutting. In this paper the first results of proof-of-principle studies applying a new approach (patent pending) for laser cutting with high brightness short wavelength lasers will be presented. In the approach, multi-beam patterns are applied to control the melt flow out of the cut kerf, resulting in improved cut quality in metal cutting. The beam patterns in this study are created by splitting up beams from 2 single mode fibre lasers and combining these beams into a pattern in the cut kerf. The results are obtained with a total of 550 W of single-mode laser power.

  6. Multibeam fiber laser cutting

    DEFF Research Database (Denmark)

    Olsen, Flemming Ove; Hansen, Klaus Schütt; Nielsen, Jakob Skov

    2009-01-01

    The appearance of the high power, high brilliance fiber laser has opened new possibilities in laser materials processing. In laser cutting this laser has demonstrated high cutting performance compared to the dominating cutting laser, the CO2 laser. However, quality problems in fiber-laser cutting have until now limited its application in metal cutting. In this paper the first results of proof-of-principle studies applying a new approach (patent pending) for laser cutting with high brightness and short wavelength lasers will be presented. In the approach, multibeam patterns are applied to control the melt flow out of the cut kerf, resulting in improved cut quality in metal cutting. The beam patterns in this study are created by splitting up beams from two single mode fiber lasers and combining these beams into a pattern in the cut kerf. The results are obtained with a total of 550 W

  7. Study of normal and shear material properties for viscoelastic model of asphalt mixture by discrete element method

    DEFF Research Database (Denmark)

    Feng, Huan; Pettinari, Matteo; Stang, Henrik

    2015-01-01

    In this paper, the viscoelastic behavior of asphalt mixture was studied by using the discrete element method. The dynamic properties of the asphalt mixture were captured by implementing Burger's contact model. Different ways of taking into account the normal and shear material properties of asphalt mixture...
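
    For reference, the Burger's model named above chains a Maxwell and a Kelvin-Voigt element; its creep compliance has the standard closed form sketched below (parameter names and values are generic, not the paper's calibrated contact parameters).

```python
import numpy as np

def burgers_creep_compliance(t, E_m, eta_m, E_k, eta_k):
    """Creep compliance J(t) of the Burger's (Maxwell + Kelvin) model:
    J(t) = 1/E_m + t/eta_m + (1 - exp(-E_k * t / eta_k)) / E_k."""
    return 1.0 / E_m + t / eta_m + (1.0 - np.exp(-E_k * t / eta_k)) / E_k

t = np.linspace(0.0, 10.0, 5)  # seconds
print(burgers_creep_compliance(t, E_m=1e3, eta_m=5e3, E_k=2e3, eta_k=1e3))
```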

  8. Development and application of the analytical energy gradient for the normalized elimination of the small component method

    NARCIS (Netherlands)

    Zou, Wenli; Filatov, Michael; Cremer, Dieter

    2011-01-01

    The analytical energy gradient of the normalized elimination of the small component (NESC) method is derived for the first time and implemented for the routine calculation of NESC geometries and other first order molecular properties. Essential for the derivation is the correct calculation of the

  9. Innovative methods to study human intestinal drug metabolism in vitro : Precision-cut slices compared with Ussing chamber preparations

    NARCIS (Netherlands)

    van de Kerkhof, Esther G.; Ungell, Anna-Lena B.; Sjoberg, Asa K.; de Jager, Marina H.; Hilgendorf, Constanze; de Graaf, Inge A. M.; Groothuis, Geny M. M.

    2006-01-01

    Predictive in vitro methods to investigate drug metabolism in the human intestine using intact tissue are of high importance. Therefore, we studied the metabolic activity of human small intestinal and colon slices and compared it with the metabolic activity of the same human intestinal segments

  10. SU-E-J-178: A Normalization Method Can Remove Discrepancy in Ventilation Function Due to Different Breathing Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Qu, H; Yu, N; Stephans, K; Xia, P [Cleveland Clinic, Cleveland, OH (United States)

    2014-06-01

    Purpose: To develop a normalization method to remove discrepancy in ventilation function due to different breathing patterns. Methods: Twenty-five early-stage non-small cell lung cancer patients were included in this study. For each patient, a ten phase 4D-CT and the voluntary maximum inhale and exhale CTs were acquired clinically and retrospectively used for this study. For each patient, two ventilation maps were calculated from voxel-to-voxel CT density variations from two phases of the quiet breathing and two phases of the extreme breathing. For the quiet breathing, 0% (inhale) and 50% (exhale) phases from 4D-CT were used. An in-house tool was developed to calculate and display the ventilation maps. To enable normalization, the whole lung of each patient was evenly divided into three parts in the longitude direction at a coronal image with a maximum lung cross section. The ratio of cumulated ventilation from the top one-third region to the middle one-third region of the lung was calculated for each breathing pattern. Pearson's correlation coefficient was calculated on the ratios of the two breathing patterns for the group. Results: For each patient, the ventilation map from the quiet breathing was different from that of the extreme breathing. When the cumulative ventilation was normalized to the middle one-third of the lung region for each patient, the normalized ventilation functions from the two breathing patterns were consistent. For this group of patients, the correlation coefficient of the normalized ventilations for the two breathing patterns was 0.76 (p < 0.01), indicating a strong correlation in the ventilation function measured from the two breathing patterns. Conclusion: For each patient, the ventilation map is dependent of the breathing pattern. Using a regional normalization method, the discrepancy in ventilation function induced by the different breathing patterns, and thus different tidal volumes, can be removed.
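
    A common way to compute such a density-change ventilation map, sketched here under the simple assumption that a voxel's air fraction is -HU/1000 and that the two phases are spatially registered (this is a generic model, not necessarily the authors' exact formula), with the middle-third regional normalization described above:

```python
import numpy as np

def air_fraction(hu):
    """Approximate fractional air content: HU = -1000 (air) .. 0 (tissue)."""
    return np.clip(-hu / 1000.0, 0.0, 1.0)

def normalized_ventilation(hu_inhale, hu_exhale, region):
    """Voxel ventilation from CT density change between two phases,
    normalized by the cumulative ventilation of the middle-third region.

    region: integer labels per voxel, e.g. 0=top, 1=middle, 2=bottom third.
    """
    vent = air_fraction(hu_inhale) - air_fraction(hu_exhale)
    return vent / vent[region == 1].sum()
```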

  11. SU-E-J-178: A Normalization Method Can Remove Discrepancy in Ventilation Function Due to Different Breathing Patterns

    International Nuclear Information System (INIS)

    Qu, H; Yu, N; Stephans, K; Xia, P

    2014-01-01

    Purpose: To develop a normalization method to remove discrepancy in ventilation function due to different breathing patterns. Methods: Twenty-five early-stage non-small cell lung cancer patients were included in this study. For each patient, a ten-phase 4D-CT and the voluntary maximum inhale and exhale CTs were acquired clinically and retrospectively used for this study. For each patient, two ventilation maps were calculated from voxel-to-voxel CT density variations: one from two phases of the quiet breathing and one from two phases of the extreme breathing. For the quiet breathing, the 0% (inhale) and 50% (exhale) phases from 4D-CT were used. An in-house tool was developed to calculate and display the ventilation maps. To enable normalization, the whole lung of each patient was evenly divided into three parts in the longitudinal direction at a coronal image with the maximum lung cross section. The ratio of cumulated ventilation from the top one-third region to the middle one-third region of the lung was calculated for each breathing pattern. Pearson's correlation coefficient was calculated on the ratios of the two breathing patterns for the group. Results: For each patient, the ventilation map from the quiet breathing was different from that of the extreme breathing. When the cumulative ventilation was normalized to the middle one-third of the lung region for each patient, the normalized ventilation functions from the two breathing patterns were consistent. For this group of patients, the correlation coefficient of the normalized ventilations for the two breathing patterns was 0.76 (p < 0.01), indicating a strong correlation between the ventilation functions measured from the two breathing patterns. Conclusion: For each patient, the ventilation map depends on the breathing pattern. Using a regional normalization method, the discrepancy in ventilation function induced by the different breathing patterns, and thus different tidal volumes, can be removed.

  12. NormaCurve: a SuperCurve-based method that simultaneously quantifies and normalizes reverse phase protein array data.

    Directory of Open Access Journals (Sweden)

    Sylvie Troncale

    MOTIVATION: Reverse phase protein array (RPPA) is a powerful dot-blot technology that allows studying protein expression levels as well as post-translational modifications in a large number of samples simultaneously. Yet, correct interpretation of RPPA data has remained a major challenge for its broad-scale application and its translation into clinical research. Satisfactory quantification tools are available to assess a relative protein expression level from a serial dilution curve. However, appropriate tools allowing the normalization of the data for external sources of variation are currently missing. RESULTS: Here we propose a new method, called NormaCurve, that allows simultaneous quantification and normalization of RPPA data. For this, we modified the quantification method SuperCurve in order to include normalization for (i) background fluorescence, (ii) variation in the total amount of spotted protein and (iii) spatial bias on the arrays. Using a spike-in design with a purified protein, we tested the capacity of different models to properly estimate normalized relative expression levels. The best performing model, NormaCurve, takes into account a negative control array without primary antibody, an array stained with a total protein stain and spatial covariates. We show that this normalization is reproducible and we discuss the number of serial dilutions and the number of replicates that are required to obtain robust data. We thus provide a ready-to-use method for reliable and reproducible normalization of RPPA data, which should facilitate the interpretation and the development of this promising technology. AVAILABILITY: The raw data, the scripts and the normacurve package are available at the following web site: http://microarrays.curie.fr.
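
    NormaCurve itself extends the nonlinear SuperCurve response model; as a rough linear stand-in for the normalization step only, the sketch below regresses the log-signal on a negative-control term, a total-protein term and spatial covariates, keeping the residual as the normalized level (all names are hypothetical, and this is not the authors' package):

        import numpy as np

        def normalize_rppa(log_signal, log_neg_control, log_total_protein, row, col):
            X = np.column_stack([
                np.ones_like(log_signal),  # intercept
                log_neg_control,           # (i) background fluorescence
                log_total_protein,         # (ii) spotted protein amount
                row, col,                  # (iii) linear spatial bias
            ])
            coef, *_ = np.linalg.lstsq(X, log_signal, rcond=None)
            return log_signal - X @ coef   # residual = normalized level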

  13. [Comparison of surface light scattering of acrylic intraocular lenses made by lathe-cutting and cast-molding methods--long-term observation and experimental study].

    Science.gov (United States)

    Nishihara, Hitoshi; Ayaki, Masahiko; Watanabe, Tomiko; Ohnishi, Takeo; Kageyama, Toshiyuki; Yaguchi, Shigeo

    2004-03-01

    To compare the long-term clinical and experimental results of soft acrylic intraocular lenses (IOLs) manufactured by the lathe-cut (LC) method and by the cast-molding (CM) method. This was a retrospective study of 20 patients (22 eyes) who were examined in a 5- and 7-year follow-up study. Sixteen eyes were implanted with polyacrylic IOLs manufactured by the LC method and 6 eyes with polyacrylic IOLs manufactured by the CM method. Postoperative measurements included best-corrected visual acuity, contrast sensitivity, biomicroscopic examination, and Scheimpflug slit-lamp images to evaluate surface light scattering. Scanning electron microscopy and three-dimensional surface analysis were also conducted. At 7 years, the mean visual acuity was 1.08 +/- 0.24 (mean +/- standard deviation) in the LC group and 1.22 +/- 0.27 in the CM group. Surface light scatter was 12.0 +/- 4.0 computer-compatible tape (CCT) units in the LC group and 37.4 +/- 5.4 CCT units in the CM group. Mean surface roughness was 0.70 +/- 0.07 nm in the LC group and 6.16 +/- 0.97 nm in the CM group. Acrylic IOLs manufactured by the LC method are more stable in long-term use.

  14. Multi-satellites normalization of the FengYun-2s visible detectors by the MVP method

    Science.gov (United States)

    Li, Yuan; Rong, Zhi-guo; Zhang, Li-jun; Sun, Ling; Xu, Na

    2013-08-01

    After FY-2F was successfully launched on January 13, 2012, the number of FengYun-2 geostationary meteorological satellites operating in orbit reached three. For accurate and efficient application of multi-satellite observation data, a study of the multi-satellite normalization of the visible detectors became urgent. The method was required not to rely on in-orbit calibration, so as to validate the calibration results before and after launch, calculate the daily updated surface bidirectional reflectance distribution function (BRDF), and at the same time track the long-term decay of the detectors' linearity and responsivity. By studying typical BRDF models, a normalization method was designed that effectively removes the interference of surface directional reflectance characteristics without relying on in-orbit calibration of the visible detectors: the Median Vertical Plane (MVP) method. The MVP method is based on the symmetry about the principal plane of the directional reflective properties of general surface targets. Two geostationary satellites are taken as the endpoints of a segment; targets on the intersection of the segment's median vertical plane with the earth's surface can be used as normalization reference targets (NRT). Observation of an NRT by the two satellites at the moment the sun passes through the MVP yields the same observation zenith and solar zenith angles and opposite relative azimuth angles. At that moment, the linear regression coefficients of the two satellites' output data are the required normalization coefficients. The normalization coefficients between FY-2D, FY-2E and FY-2F were calculated, and a self-test method for the normalized results was designed and realized. The results showed that the differences in responsivity between satellites reach up to 10.1% (FY-2E to FY-2F); the differences in output reflectance calculated by the broadcast calibration look-up table reach up to 21.1% (FY-2D to FY-2F); the differences of the output

  15. Distance Determination Method for Normally Distributed Obstacle Avoidance of Mobile Robots in Stochastic Environments

    Directory of Open Access Journals (Sweden)

    Jinhong Noh

    2016-04-01

    Obstacle avoidance methods require knowledge of the distance between a mobile robot and the obstacles in its environment. However, in stochastic environments, distance determination is difficult because objects have position uncertainty. The purpose of this paper is to determine the distance between a robot and obstacles represented by probability distributions. Distance determination for obstacle avoidance should consider position uncertainty, computational cost and collision probability. The proposed method considers all of these conditions, unlike conventional methods. It determines the obstacle region using a collision probability density threshold. Furthermore, it defines a minimum distance function to the boundary of the obstacle region with a Lagrange multiplier method. Finally, it computes the distance numerically. Simulations were executed in order to compare the performance of the distance determination methods. Our method demonstrated faster and more accurate performance than conventional methods. It may help overcome position uncertainty issues pertaining to obstacle avoidance, such as low-accuracy sensors, environments with poor visibility or unpredictable obstacle motion.
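
    A minimal sketch of the idea for a Gaussian obstacle, assuming the robot lies outside the obstacle region: the region is where the position density exceeds a collision probability density threshold, and the distance to its boundary is found by constrained minimization (the paper uses a Lagrange multiplier formulation; here SLSQP handles the equality constraint numerically, and all numbers are illustrative):

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import multivariate_normal

        robot = np.array([3.0, 0.0])
        obstacle = multivariate_normal(mean=[0.0, 0.0],
                                       cov=[[0.5, 0.0], [0.0, 0.2]])
        threshold = 0.05  # density level defining the obstacle region (assumed)

        res = minimize(
            lambda x: np.linalg.norm(x - robot),   # distance to the robot
            x0=obstacle.mean,                      # start inside the region
            constraints={"type": "eq",
                         "fun": lambda x: obstacle.pdf(x) - threshold},
            method="SLSQP",
        )
        print("distance to obstacle region boundary:", res.fun)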

  16. Cut elimination in multifocused linear logic

    DEFF Research Database (Denmark)

    Guenot, Nicolas; Brock-Nannestad, Taus

    2015-01-01

    We study cut elimination for a multifocused variant of full linear logic in the sequent calculus. The multifocused normal form of proofs yields problems that do not appear in a standard focused system, related to the constraints in grouping rule instances in focusing phases. We show that cut elimination can be performed in a sensible way even though the proof requires some specific lemmas to deal with multifocusing phases, and discuss the difficulties arising with cut elimination when considering normal forms of proofs in linear logic.

  17. Novel Approach to Design Ultra Wideband Microwave Amplifiers: Normalized Gain Function Method

    Directory of Open Access Journals (Sweden)

    R. Kopru

    2013-09-01

    In this work, we propose a novel approach called the "Normalized Gain Function (NGF) method" to design low/medium power single-stage ultra-wideband microwave amplifiers based on linear S-parameters of the active device. The normalized gain function TNGF is defined as the ratio TNGF = T/|S21|^2, where T is the desired shape or frequency response of the gain function of the amplifier to be designed and |S21|^2 is the shape of the transistor forward gain function. Synthesis of the input/output matching networks (IMN/OMN) of the amplifier requires mathematically generated target gain functions to be tracked in two different nonlinear optimization processes. In this manner, the NGF not only provides a mathematical basis to split the amplifier gain function into two such distinct target gain functions, but also allows their precise computation in terms of TNGF = T/|S21|^2 at the very beginning of the design. The particular amplifier presented as the design example operates over 800-5200 MHz to target GSM, UMTS, Wi-Fi and WiMAX applications. An SRFT (Simplified Real Frequency Technique) based design example, supported by simulations in MWO (MicroWave Office from AWR Corporation), is given using a 1400 mW pHEMT transistor, TGF2021-01 from TriQuint Semiconductor.
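
    The central quantity is straightforward to evaluate: TNGF = T/|S21|^2, the desired gain shape divided by the transistor forward gain, sampled over the design band. The sketch below uses illustrative placeholder values, not measured TGF2021-01 data:

        import numpy as np

        freq = np.linspace(0.8e9, 5.2e9, 45)        # 800-5200 MHz design band
        T = np.full_like(freq, 10 ** (12.0 / 10))   # flat 12 dB target gain (assumed)
        s21 = np.interp(freq, [0.8e9, 5.2e9], [8.0, 2.5])  # |S21| roll-off (assumed)
        tngf = T / np.abs(s21) ** 2                 # gain the IMN/OMN must supply
        print(10 * np.log10(tngf[[0, -1]]))         # dB values at the band edges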

  18. Application of in situ current normalized PIGE method for determination of total boron and its isotopic composition

    International Nuclear Information System (INIS)

    Chhillar, Sumit; Acharya, R.; Sodaye, S.; Pujari, P.K.

    2014-01-01

    A particle induced gamma-ray emission (PIGE) method using a proton beam has been standardized for the determination of the isotopic composition of natural boron and enriched boron samples. Target pellets of the boron standard and samples were prepared in a cellulose matrix. The prompt gamma rays of 429 keV, 718 keV and 2125 keV were measured from the ^10B(p,αγ)^7Be, ^10B(p,p'γ)^10B and ^11B(p,p'γ)^11B nuclear reactions, respectively. To normalize for beam current variations, an in situ current normalization method was used. Validation of the method was carried out using synthetic samples of boron carbide, borax, borazine and lithium metaborate in a cellulose matrix. (author)

  19. Staining Methods for Normal and Regenerative Myelin in the Nervous System.

    Science.gov (United States)

    Carriel, Víctor; Campos, Antonio; Alaminos, Miguel; Raimondo, Stefania; Geuna, Stefano

    2017-01-01

    Histochemical techniques enable the specific identification of myelin by light microscopy. Here we describe three histochemical methods for the staining of myelin suitable for formalin-fixed and paraffin-embedded material. The first is the conventional luxol fast blue (LFB) method, which stains myelin in blue and Nissl bodies and mast cells in purple. The second is an LFB-based method called MCOLL, which specifically stains myelin as well as collagen fibers and cells, giving an integrated overview of the histology and myelin content of the tissue. Finally, we describe the osmium tetroxide method, which consists of the osmication of previously fixed tissues. Osmication is performed prior to embedding the tissues in paraffin, giving a permanent positive reaction for myelin as well as other lipids present in the tissue.

  20. Cutting state identification

    International Nuclear Information System (INIS)

    Berger, B.S.; Minis, I.; Rokni, M.

    1997-01-01

    Cutting states associated with the orthogonal cutting of stiff cylinders are identified through an analysis of the singular values of a Toeplitz matrix of third-order cumulants of acceleration measurements. The ratio of the two pairs of largest singular values is shown to differentiate between light cutting, medium cutting, pre-chatter and chatter states. Sequences of cutting experiments were performed in which either the depth of cut or the turning frequency was varied. Two sequences of experiments with variable turning frequency and five with variable depth of cut, 42 cutting experiments in all, provided a database for the calculation of third-order cumulants. Ratios of singular values of cumulant matrices find application in the analysis and control of orthogonal cutting.
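
    A hedged sketch of the diagnostic: estimate third-order cumulants of the zero-mean acceleration signal, arrange a slice of them in a Toeplitz matrix, and compare the two pairs of largest singular values (the particular cumulant slice, the matrix order and the data file are assumptions and may differ from the paper's construction):

        import numpy as np
        from scipy.linalg import toeplitz

        def third_order_cumulant(x, tau1, tau2):
            n = len(x) - max(tau1, tau2)
            return np.mean(x[:n] * x[tau1:tau1 + n] * x[tau2:tau2 + n])

        x = np.loadtxt("acceleration.txt")   # hypothetical measurement file
        x = x - x.mean()
        c = [third_order_cumulant(x, k, k) for k in range(4)]  # diagonal slice
        s = np.linalg.svd(toeplitz(c), compute_uv=False)       # sorted descending
        print("singular-value pair ratio:", (s[0] + s[1]) / (s[2] + s[3]))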

  1. Flexible Laser Metal Cutting

    DEFF Research Database (Denmark)

    Villumsen, Sigurd; Jørgensen, Steffen Nordahl; Kristiansen, Morten

    2014-01-01

    This paper describes a new flexible and fast approach to laser cutting called ROBOCUT. Combined with CAD/CAM technology, laser cutting of metal provides the flexibility to perform one-of-a-kind cutting and hereby realises mass production of customised products. Today's laser cutting techniques possess, despite their wide use in industry, limitations regarding speed and geometry. Research trends point towards remote laser cutting techniques which can improve speed and geometrical freedom and hereby the competitiveness of laser cutting compared to fixed-tool-based cutting technology such as punching. This paper presents the concepts and preliminary test results of the ROBOCUT laser cutting technology, a technology which potentially can revolutionise laser cutting.

  2. Laser Cutting of Carbon Fiber Fabrics

    Science.gov (United States)

    Fuchs, A. N.; Schoeberl, M.; Tremmer, J.; Zaeh, M. F.

    Due to their high weight-specific mechanical stiffness and strength, parts made from carbon fiber reinforced polymers (CFRP) are increasingly used as structural components in the aircraft and automotive industry. However, the cutting of preforms, as with most automated manufacturing processes for CFRP components, has not yet been fully optimized. This paper discusses laser cutting, an alternative method to the mechanical cutting of preforms. Experiments with remote laser cutting and gas assisted laser cutting were carried out in order to identify achievable machining speeds. The advantages of the two different processes as well as their fitness for use in mass production are discussed.

  3. The normalization of surface anisotropy effects present in SEVIRI reflectances by using the MODIS BRDF method

    DEFF Research Database (Denmark)

    Proud, Simon Richard; Zhang, Qingling; Schaaf, Crystal

    2014-01-01

    A modified version of the MODerate resolution Imaging Spectroradiometer (MODIS) bidirectional reflectance distribution function (BRDF) algorithm is presented for use in the angular normalization of surface reflectance data gathered by the Spinning Enhanced Visible and InfraRed Imager (SEVIRI) aboard the geostationary Meteosat Second Generation (MSG) satellites. The resulting daily nadir BRDF-adjusted reflectance (NBAR) data utilize the high temporal resolution of MSG to produce BRDF retrievals with a greatly reduced acquisition period compared to the comparable MODIS products while, at the same time, removing many of the angular perturbations present within the original MSG data. The NBAR data are validated against reflectance data from the MODIS instrument and in situ data gathered at a field location in Africa throughout 2008. It is found that the MSG retrievals are stable and are of high quality across much of the SEVIRI disk while maintaining a higher temporal resolution than the MODIS BRDF products. However, a number of circumstances are discovered whereby the BRDF model is unable to function correctly with the SEVIRI observations.

  4. Exploring Normalization and Network Reconstruction Methods using In Silico and In Vivo Models

    Science.gov (United States)

    Abstract: Lessons learned from the recent DREAM competitions include: The search for the best network reconstruction method continues, and we need more complete datasets with ground truth from more complex organisms. It has become obvious that the network reconstruction methods t...

  5. Measurement of plasma histamine: description of an improved method and normal values

    International Nuclear Information System (INIS)

    Dyer, J.; Warren, K.; Merlin, S.; Metcalfe, D.D.; Kaliner, M.

    1982-01-01

    The single isotopic-enzymatic assay of histamine was modified to increase its sensitivity and to facilitate measurement of plasma histamine levels. The modification involved extracting ^3H-1-methylhistamine (generated by the enzyme N-methyltransferase acting on histamine in the presence of S-[methyl-^3H]-adenosyl-L-methionine) into chloroform and isolating the ^3H-1-methylhistamine by thin-layer chromatography (TLC). The TLC was developed in acetone:ammonium hydroxide (95:10), and the methylhistamine spot (Rf = 0.50) was identified with an o-phthalaldehyde spray, scraped from the plate, and assayed in a scintillation counter. The assay in plasma demonstrated a linear relationship from 200 to 5000 pg histamine/ml. Plasma always gave higher readings than buffer, and dialysis of plasma returned these values to the same level as buffer, suggesting that the baseline elevations might be attributable to histamine. However, all histamine standard curves were run in dialyzed plasma to negate any additional influences plasma might exert on the assay. The arithmetic mean (+/- SEM) of normal plasma histamine was 318.4 +/- 25 pg/ml (n = 51), and the geometric mean was 280 +/- 35 pg/ml. Plasma histamine was significantly elevated by infusion of histamine at 0.05 to 1.0 micrograms/kg/min or by cold immersion of the hand of a cold-urticaria patient. Therefore this modified isotopic-enzymatic assay of histamine is extremely sensitive, capable of measuring fluctuations in plasma histamine levels within the normal range, and potentially useful in analysis of the role histamine plays in human physiology.

  6. A method for detecting nonlinear determinism in normal and epileptic brain EEG signals.

    Science.gov (United States)

    Meghdadi, Amir H; Fazel-Rezai, Reza; Aghakhani, Yahya

    2007-01-01

    A robust method of detecting determinism for short time series is proposed and applied to both healthy and epileptic EEG signals. The method provides a robust measure of determinism by characterizing the trajectories of the signal components, which are obtained through singular value decomposition. Robustness of the method is shown by calculating the proposed index of determinism at different levels of white and colored noise added to a simulated chaotic signal. The method is shown to be able to detect determinism at considerably high levels of additive noise. The method was then applied to both intracranial and scalp EEG recordings collected in different data sets for healthy and epileptic brain signals. The results show that for all of the studied EEG data sets there is sufficient evidence of determinism. The determinism is more significant for intracranial EEG recordings, particularly during seizure activity.
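
    As a sketch of the decomposition step only: the EEG segment is embedded in a trajectory matrix whose singular value decomposition yields the signal components from which the determinism index is built (the index itself is not reproduced here; the window length and file name are illustrative):

        import numpy as np

        def trajectory_matrix(x, window):
            return np.array([x[i:i + window] for i in range(len(x) - window + 1)])

        eeg = np.loadtxt("eeg_segment.txt")          # hypothetical short segment
        X = trajectory_matrix(eeg - eeg.mean(), window=20)
        singular_values = np.linalg.svd(X, compute_uv=False)
        print(singular_values / singular_values.sum())  # normalized spectrum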

  7. Evaluating new methods for direct measurement of the moderator temperature coefficient in nuclear power plants during normal operation

    International Nuclear Information System (INIS)

    Makai, M.; Kalya, Z.; Nemes, I.; Pos, I.; Por, G.

    2007-01-01

    The moderator temperature coefficient of reactivity is not monitored during fuel cycles in WWER reactors, because it is difficult or impossible to measure without disturbing normal operation. Two new methods were tested at our WWER-type nuclear power plant that enable this safety-relevant parameter to be measured during the fuel cycle. One is based on small perturbations and requires only small changes in operation; the other is based on noise methods and does not interfere with reactor operation at all. Both methods are novel in that they use plant computer (VERONA) data and signals calculated by the C-PORCA diffusion code. (Authors)

  8. Determination of moderate-to-severe postoperative pain on the numeric rating scale: a cut-off point analysis applying four different methods.

    Science.gov (United States)

    Gerbershagen, H J; Rothaug, J; Kalkman, C J; Meissner, W

    2011-10-01

    Cut-off points (CPs) of the numeric rating scale (NRS 0-10) are regularly used in postoperative pain treatment. However, there is insufficient evidence to identify the optimal CP between mild and moderate pain. A total of 435 patients undergoing general, trauma, or oral and maxillofacial surgery were studied. To determine the optimal CP for pain treatment, four approaches were used: first, patients estimated their tolerable postoperative pain intensity before operation; secondly, 24 h after surgery, they indicated whether they would have preferred to receive more analgesics; thirdly, satisfaction with pain treatment was analysed; and fourthly, multivariate analysis was used to calculate the optimal CP for pain intensities in relation to pain-related interference with movement, breathing, sleep, and mood. The estimated tolerable postoperative pain before operation was median (range) NRS 4.0 (0-10). Patients who would have liked more analgesics reported significantly higher average pain since surgery [median NRS 5.0 (0-9)] compared with those without this request [NRS 3.0 (0-8)]. Patients satisfied with pain treatment reported an average pain intensity of median NRS 3.0 (0-8) compared with less satisfied patients with NRS 5.0 (2-9). Analysis of average postoperative pain in relation to pain-related interference with mood and activity indicated pain categories of NRS 0-2, mild; 3-4, moderate; and 5-10, severe pain. Three of the four methods identified a treatment threshold of average pain of NRS >= 4. This was considered to identify patients with pain of moderate-to-severe intensity, and this cut-off was identified as the tolerable pain threshold.

  9. Libraries for spectrum identification: Method of normalized coordinates versus linear correlation

    International Nuclear Information System (INIS)

    Ferrero, A.; Lucena, P.; Herrera, R.G.; Dona, A.; Fernandez-Reyes, R.; Laserna, J.J.

    2008-01-01

    In this work, an easy solution based directly on linear algebra is proposed to obtain the relation between a spectrum and a spectral base. This solution is based on the algebraic determination of an unknown spectrum's coordinates with respect to a spectral library base. A comparison of the identification capacity of this algebraic method and the linear correlation method is shown using experimental spectra of polymers. Unlike linear correlation (where the presence of impurities may decrease the discrimination capacity), this method allows the existence of a mixture of several substances in a sample to be detected quantitatively and, consequently, impurities to be taken into account to improve the identification.
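
    A minimal sketch of the normalized-coordinates idea, assuming the library spectra form the columns of a matrix: the unknown spectrum's coordinates with respect to the base follow from least squares, with plain linear correlation shown for comparison (file names and shapes are hypothetical):

        import numpy as np

        library = np.loadtxt("library.txt")   # shape (n_channels, n_refs), assumed
        unknown = np.loadtxt("unknown.txt")   # shape (n_channels,), assumed

        coords, *_ = np.linalg.lstsq(library, unknown, rcond=None)
        coords /= coords.sum()                # normalized coordinates w.r.t. the base
        print("mixture composition estimate:", coords)

        # Linear correlation against each library entry, for comparison:
        corr = [np.corrcoef(unknown, library[:, j])[0, 1]
                for j in range(library.shape[1])]
        print("best single match:", int(np.argmax(corr)))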

  10. Métodos de conservação aplicados a melão minimamente processado Conservation methods applied to fresh-cut melon

    Directory of Open Access Journals (Sweden)

    Anaí Peter Batista

    2013-05-01

    The objective of this review is to present some conservation methods that can be used to prolong the shelf life of fresh-cut melon. Among the methods, edible coatings, irradiation, natural antimicrobials, antioxidants, firming agents, modified atmosphere, blanching, ultraviolet light and high pressure are discussed. Depending on the method, the changes associated with minimal processing of melon, such as water loss, changes in color and firmness, altered metabolism and growth of micro-organisms, can be reduced; the result is often dependent on the melon cultivar used.

  11. A Gauss-Newton method for the integration of spatial normal fields in shape Space

    KAUST Repository

    Balzer, Jonathan

    2011-01-01

    The integration of spatial normal fields reduces to solving a nonlinear least-squares problem in shape space. Previously, the corresponding minimization has been performed by gradient descent, which suffers from slow convergence and susceptibility to local minima. Newton-type methods, although significantly

  12. Normal boundary intersection method for suppliers' strategic bidding in electricity markets: An environmental/economic approach

    International Nuclear Information System (INIS)

    Vahidinasab, V.; Jadid, S.

    2010-01-01

    In this paper the problem of developing optimal bidding strategies for the participants of oligopolistic energy markets is studied. Special attention is given to the impact of suppliers' emission of pollutants on their bidding strategies. The proposed methodology employs a supply function equilibrium (SFE) model to represent the strategic behavior of each supplier and a locational marginal pricing mechanism for the market clearing. The optimal bidding strategies are developed mathematically using a bilevel optimization problem, where the upper-level subproblem maximizes the individual supplier payoff and the lower-level subproblem solves the Independent System Operator's market-clearing problem. To solve the market-clearing mechanism, a multiobjective optimal power flow is used with the suppliers' emission of pollutants as an extra objective, subject to the suppliers' physical constraints. This paper uses the normal boundary intersection (NBI) approach for generating the Pareto optimal set and then fuzzy decision making to select the best compromise solution. The developed algorithm is applied to an IEEE 30-bus test system. Numerical results demonstrate the potential and effectiveness of the proposed multiobjective approach to develop successful bidding strategies in those energy markets that minimize generation cost and emission of pollutants simultaneously.
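
    The final selection step lends itself to a small example: given a Pareto set of (cost, emission) points from the NBI sweep, fuzzy membership rates each objective between its best and worst value, and the point with the highest aggregate membership is taken as the best compromise (the numbers below are placeholders, not results from the IEEE 30-bus case):

        import numpy as np

        # Pareto points as (generation cost, emission); illustrative values.
        pareto = np.array([[100.0, 9.0], [110.0, 6.0], [125.0, 4.5], [150.0, 4.0]])
        f_min, f_max = pareto.min(axis=0), pareto.max(axis=0)
        membership = (f_max - pareto) / (f_max - f_min)   # 1 = best, 0 = worst
        best = int(np.argmax(membership.sum(axis=1)))
        print("best compromise solution:", pareto[best])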

  13. The Normalization of Surface Anisotropy Effects Present in SEVIRI Reflectances by Using the MODIS BRDF Method

    Science.gov (United States)

    Proud, Simon Richard; Zhang, Qingling; Schaaf, Crystal; Fensholt, Rasmus; Rasmussen, Mads Olander; Shisanya, Chris; Mutero, Wycliffe; Mbow, Cheikh; Anyamba, Assaf; Pak, Ed

    2014-01-01

    A modified version of the MODerate resolution Imaging Spectroradiometer (MODIS) bidirectional reflectance distribution function (BRDF) algorithm is presented for use in the angular normalization of surface reflectance data gathered by the Spinning Enhanced Visible and InfraRed Imager (SEVIRI) aboard the geostationary Meteosat Second Generation (MSG) satellites. We present early and provisional daily nadir BRDF-adjusted reflectance (NBAR) data in the visible and near-infrared MSG channels. These utilize the high temporal resolution of MSG to produce BRDF retrievals with a greatly reduced acquisition period compared to the comparable MODIS products while, at the same time, removing many of the angular perturbations present within the original MSG data. The NBAR data are validated against reflectance data from the MODIS instrument and in situ data gathered at a field location in Africa throughout 2008. It is found that the MSG retrievals are stable and are of high quality across much of the SEVIRI disk while maintaining a higher temporal resolution than the MODIS BRDF products. However, a number of circumstances are discovered whereby the BRDF model is unable to function correctly with the SEVIRI observations, primarily because of an insufficient spread of angular data due to the fixed sensor location or localized cloud contamination.

  14. The Normalized-Rate Iterative Algorithm: A Practical Dynamic Spectrum Management Method for DSL

    Directory of Open Access Journals (Sweden)

    Statovci Driton

    2006-01-01

    We present a practical solution for dynamic spectrum management (DSM) in digital subscriber line systems: the normalized-rate iterative algorithm (NRIA). Supported by a novel optimization problem formulation, the NRIA is the only DSM algorithm that jointly addresses spectrum balancing for frequency division duplexing systems and power allocation for the users sharing a common cable bundle. With a focus on being implementable rather than obtaining the highest possible theoretical performance, the NRIA is designed to efficiently solve the DSM optimization problem with the operators' business models in mind. This is achieved with the help of two types of parameters: the desired network asymmetry and the desired user priorities. The NRIA is a centralized DSM algorithm based on the iterative water-filling algorithm (IWFA) for finding efficient power allocations, but extends the IWFA by finding the achievable bitrates and by optimizing the bandplan. It is compared with three other DSM proposals: the IWFA, the optimal spectrum balancing algorithm (OSBA), and the bidirectional IWFA (bi-IWFA). We show that the NRIA achieves better bitrate performance than the IWFA and the bi-IWFA. It can even achieve performance almost as good as the OSBA, but with dramatically lower requirements on complexity. Additionally, the NRIA can achieve bitrate combinations that cannot be supported by any other DSM algorithm.

  15. The Normalized-Rate Iterative Algorithm: A Practical Dynamic Spectrum Management Method for DSL

    Science.gov (United States)

    Statovci, Driton; Nordström, Tomas; Nilsson, Rickard

    2006-12-01

    We present a practical solution for dynamic spectrum management (DSM) in digital subscriber line systems: the normalized-rate iterative algorithm (NRIA). Supported by a novel optimization problem formulation, the NRIA is the only DSM algorithm that jointly addresses spectrum balancing for frequency division duplexing systems and power allocation for the users sharing a common cable bundle. With a focus on being implementable rather than obtaining the highest possible theoretical performance, the NRIA is designed to efficiently solve the DSM optimization problem with the operators' business models in mind. This is achieved with the help of two types of parameters: the desired network asymmetry and the desired user priorities. The NRIA is a centralized DSM algorithm based on the iterative water-filling algorithm (IWFA) for finding efficient power allocations, but extends the IWFA by finding the achievable bitrates and by optimizing the bandplan. It is compared with three other DSM proposals: the IWFA, the optimal spectrum balancing algorithm (OSBA), and the bidirectional IWFA (bi-IWFA). We show that the NRIA achieves better bitrate performance than the IWFA and the bi-IWFA. It can even achieve performance almost as good as the OSBA, but with dramatically lower requirements on complexity. Additionally, the NRIA can achieve bitrate combinations that cannot be supported by any other DSM algorithm.
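
    Since the NRIA extends iterative water-filling, the underlying single-user water-filling step is sketched below, bisecting on the water level over per-subcarrier noise-to-gain ratios (this is only the classical building block, not the NRIA itself, which adds bandplan optimization and user priorities on top):

        import numpy as np

        def waterfill(inv_gains, total_power, iters=60):
            # inv_gains: per-subcarrier noise-to-gain ratios N_k / g_k.
            lo, hi = inv_gains.min(), inv_gains.max() + total_power
            for _ in range(iters):          # bisect on the water level mu
                mu = 0.5 * (lo + hi)
                if np.maximum(mu - inv_gains, 0.0).sum() > total_power:
                    hi = mu
                else:
                    lo = mu
            return np.maximum(mu - inv_gains, 0.0)

        p = waterfill(np.array([0.1, 0.5, 1.2, 0.3]), total_power=2.0)
        print(p, p.sum())   # power goes first to the "cheapest" subcarriers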

  16. First-order systems of linear partial differential equations: normal forms, canonical systems, transform methods

    Directory of Open Access Journals (Sweden)

    Heinz Toparkus

    2014-04-01

    In this paper we consider first-order systems with constant coefficients for two real-valued functions of two real variables. This is both a problem in itself and an alternative view of the classical linear partial differential equations of second order with constant coefficients. The classification of the systems is done using elementary methods of linear algebra. Each type has its special canonical form in the associated characteristic coordinate system. Initial value problems can then be formulated in appropriate basic domains and solved by means of transform methods.

  17. A fast simulation method for the Log-normal sum distribution using a hazard rate twisting technique

    KAUST Repository

    Rached, Nadhir B.

    2015-06-08

    The probability density function of the sum of Log-normally distributed random variables (RVs) is a well-known challenging problem. For instance, an analytical closed-form expression of the Log-normal sum distribution does not exist and is still an open problem. A crude Monte Carlo (MC) simulation is of course an alternative approach. However, this technique is computationally expensive especially when dealing with rare events (i.e. events with very small probabilities). Importance Sampling (IS) is a method that improves the computational efficiency of MC simulations. In this paper, we develop an efficient IS method for the estimation of the Complementary Cumulative Distribution Function (CCDF) of the sum of independent and not identically distributed Log-normal RVs. This technique is based on constructing a sampling distribution via twisting the hazard rate of the original probability measure. Our main result is that the estimation of the CCDF is asymptotically optimal using the proposed IS hazard rate twisting technique. We also offer some selected simulation results illustrating the considerable computational gain of the IS method compared to the naive MC simulation approach.
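
    For orientation, the crude MC baseline that the proposed IS scheme improves on fits in a few lines; for large thresholds the event becomes rare and the relative error of the estimator blows up, which is exactly the regime where hazard rate twisting (not reproduced here) pays off. Parameters are illustrative:

        import numpy as np

        rng = np.random.default_rng(1)
        mu = np.array([0.0, 0.3, -0.2])      # per-RV log-normal parameters
        sigma = np.array([1.0, 0.8, 1.2])    # (not identically distributed)
        gamma, n = 40.0, 10 ** 6             # threshold and sample size

        sums = np.exp(rng.normal(mu, sigma, size=(n, mu.size))).sum(axis=1)
        hits = sums > gamma
        ccdf = hits.mean()                   # estimate of P(sum > gamma)
        rel_err = hits.std(ddof=1) / np.sqrt(n) / max(ccdf, 1e-300)
        print(f"CCDF ~ {ccdf:.3g}, relative error ~ {rel_err:.2f}")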

  18. A fast simulation method for the Log-normal sum distribution using a hazard rate twisting technique

    KAUST Repository

    Rached, Nadhir B.; Benkhelifa, Fatma; Alouini, Mohamed-Slim; Tempone, Raul

    2015-01-01

    The probability density function of the sum of Log-normally distributed random variables (RVs) is a well-known challenging problem. For instance, an analytical closed-form expression of the Log-normal sum distribution does not exist and is still an open problem. A crude Monte Carlo (MC) simulation is of course an alternative approach. However, this technique is computationally expensive especially when dealing with rare events (i.e. events with very small probabilities). Importance Sampling (IS) is a method that improves the computational efficiency of MC simulations. In this paper, we develop an efficient IS method for the estimation of the Complementary Cumulative Distribution Function (CCDF) of the sum of independent and not identically distributed Log-normal RVs. This technique is based on constructing a sampling distribution via twisting the hazard rate of the original probability measure. Our main result is that the estimation of the CCDF is asymptotically optimal using the proposed IS hazard rate twisting technique. We also offer some selected simulation results illustrating the considerable computational gain of the IS method compared to the naive MC simulation approach.

  19. Normal Science and the Paranormal: The Effect of a Scientific Method Course on Students' Beliefs.

    Science.gov (United States)

    Morier, Dean; Keeports, David

    1994-01-01

    A study investigated the effects of an interdisciplinary course on the scientific method on the attitudes of 34 college students toward the paranormal. Results indicated that the course substantially reduced belief in the paranormal, relative to a control group. Student beliefs in their own paranormal powers, however, did not change. (Author/MSE)

  20. The Stochastic Galerkin Method for Darcy Flow Problem with Log-Normal Random

    Czech Academy of Sciences Publication Activity Database

    Beres, Michal; Domesová, Simona

    2017-01-01

    Roč. 15, č. 2 (2017), s. 267-279 ISSN 1336-1376 R&D Projects: GA MŠk LQ1602 Institutional support: RVO:68145535 Keywords : Darcy flow * Gaussian random field * Karhunen-Loeve decomposition * polynomial chaos * Stochastic Galerkin method Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics http://advances.utc.sk/index.php/AEEE/article/view/2280

  1. A method for unsupervised change detection and automatic radiometric normalization in multispectral data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Canty, Morton John

    2011-01-01

    Based on canonical correlation analysis, the iteratively re-weighted multivariate alteration detection (MAD) method is used to successfully perform unsupervised change detection in bi-temporal Landsat ETM+ images covering an area with villages, woods, agricultural fields and open pit mines in North Rhine-Westphalia, Germany. [...] Software to carry out the analyses is available from the authors' websites.

  2. Normal mode analysis as a method to derive protein dynamics information from the Protein Data Bank.

    Science.gov (United States)

    Wako, Hiroshi; Endo, Shigeru

    2017-12-01

    Normal mode analysis (NMA) can facilitate quick and systematic investigation of protein dynamics using data from the Protein Data Bank (PDB). We developed an elastic network model-based NMA program using dihedral angles as independent variables. Compared to the NMA programs that use Cartesian coordinates as independent variables, key attributes of the proposed program are as follows: (1) chain connectivity related to the folding pattern of a polypeptide chain is naturally embedded in the model; (2) the full-atom system is acceptable, and owing to a considerably smaller number of independent variables, the PDB data can be used without further manipulation; (3) the number of variables can be easily reduced by some of the rotatable dihedral angles; (4) the PDB data for any molecule besides proteins can be considered without coarse-graining; and (5) individual motions of constituent subunits and ligand molecules can be easily decomposed into external and internal motions to examine their mutual and intrinsic motions. Its performance is illustrated with an example of a DNA-binding allosteric protein, a catabolite activator protein. In particular, the focus is on the conformational change upon cAMP and DNA binding, and on the communication between their binding sites remotely located from each other. In this illustration, NMA creates a vivid picture of the protein dynamics at various levels of the structures, i.e., atoms, residues, secondary structures, domains, subunits, and the complete system, including DNA and cAMP. Comparative studies of the specific protein in different states, e.g., apo- and holo-conformations, and free and complexed configurations, provide useful information for studying structurally and functionally important aspects of the protein.
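
    For orientation only, a textbook Cartesian elastic-network (anisotropic network model) mode calculation on C-alpha coordinates is sketched below; the program described in this record instead uses dihedral angles as independent variables, which this sketch does not reproduce (the cutoff, spring constant and input file are assumptions):

        import numpy as np

        def anm_modes(coords, cutoff=15.0, gamma=1.0):
            n = len(coords)
            hessian = np.zeros((3 * n, 3 * n))
            for i in range(n):
                for j in range(i + 1, n):
                    d = coords[j] - coords[i]
                    r2 = d @ d
                    if r2 > cutoff ** 2:
                        continue
                    block = -gamma * np.outer(d, d) / r2   # 3x3 super-element
                    hessian[3*i:3*i+3, 3*j:3*j+3] = block
                    hessian[3*j:3*j+3, 3*i:3*i+3] = block
                    hessian[3*i:3*i+3, 3*i:3*i+3] -= block
                    hessian[3*j:3*j+3, 3*j:3*j+3] -= block
            return np.linalg.eigh(hessian)   # first six modes are rigid-body

        coords = np.loadtxt("ca_coords.txt") # hypothetical (n, 3) CA coordinates
        evals, evecs = anm_modes(coords)
        print("lowest non-trivial eigenvalues:", evals[6:10])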

  3. A four dimensional separation method based on continuous heart-cutting gas chromatography with ion mobility and high resolution mass spectrometry.

    Science.gov (United States)

    Lipok, Christian; Hippler, Jörg; Schmitz, Oliver J

    2018-02-09

    A two-dimensional GC (2D-GC) method was developed and coupled to an ion mobility-high resolution mass spectrometer, which enables the separation of complex samples in four dimensions (2D-GC, ion mobility spectrometry and mass spectrometry). This approach works as a continuous multi-heart-cutting GC system (GC+GC), using a long modulation time of 20 s, which allows the complete transfer of most of the first-dimension peaks to the second-dimension column without fractionation, in contrast to comprehensive two-dimensional gas chromatography (GCxGC). Hence, each compound delivers only one peak in the second dimension, which simplifies the data handling even when ion mobility spectrometry as a third and mass spectrometry as a fourth dimension are introduced. The analysis of a plant extract from Calendula officinalis shows the separation power of this four-dimensional separation method. The introduction of ion mobility spectrometry provides an additional separation dimension and allows the determination of collision cross sections (CCS) of the analytes as a further physicochemical constant supporting identification. A CCS database with more than 800 standard substances, including drug-like compounds and pesticides, was used for the CCS database search in this work. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Single-Phase Full-Wave Rectifier as an Effective Example to Teach Normalization, Conduction Modes, and Circuit Analysis Methods

    Directory of Open Access Journals (Sweden)

    Predrag Pejovic

    2013-12-01

    Application of a single-phase rectifier as an example in teaching circuit modeling, normalization, operating modes of nonlinear circuits, and circuit analysis methods is proposed. The rectifier, supplied from a voltage source through an inductive impedance, is analyzed in the discontinuous as well as in the continuous conduction mode. A completely analytical solution for the continuous conduction mode is derived. Appropriate numerical methods are proposed to obtain the circuit waveforms in both operating modes and to compute the performance parameters. Source code of the program that performs such computation is provided.

  5. Exploitation study of the ore-body ''Tigre III''. Open-cut design and study of high-recovery underground mining method for the Tigre III ore-body

    International Nuclear Information System (INIS)

    Baluszka, J.C.

    1980-01-01

    The paper first carries out an analysis to determine the limiting sterile/ore ratio for open-cut and underground mining in the specific case of Tigre III. In this connection it considers a high-recovery method of underground mining (involving the use of cemented hydropneumatic chambers) and a general mine plan covering access, transport, ventilation and removal of ore, as well as auxiliary services relating to the Tigre III ore body as a whole. The costs of this method of mining are determined for comparison with the open-cut method. Similarly, the limiting sterile/ore ratio is taken as the basis for an analysis of different types of pit, and a design suited to the limiting ratio is adopted. As a final solution the paper favours a method which combines open-cut and underground mining. It proposes the use of the open-cut method up to the limiting ratio (in accordance with the pit design chosen) and of the underground method (the filling chamber method) for the rest of the area. (author)

  6. Normalization of shielding structure quality and the method of its studying

    International Nuclear Information System (INIS)

    Bychkov, Ya.A.; Lavdanskij, P.A.

    1987-01-01

    A method for evaluating the quality of nuclear facility radiation shielding is suggested. Indexes of shielding structure radiation efficiency and face efficiency are used as the shielding structure quality indexes. The first index is connected with the radiation dose rate to personnel behind the shield, and the second with the stresses in the shielding structure. Introduction of the presented indexes allows an objective evaluation of the quality of nuclear facility shielding structure design, construction and operation, and economizes labour and material resources.

  7. Low flow measurement for infusion pumps: implementation and uncertainty determination of the normalized method

    International Nuclear Information System (INIS)

    Cebeiro, J; Musacchio, A; Sardá, E Fernández

    2011-01-01

    Intravenous drug delivery is a standard practice in hospitalized patients. As the blood concentration reached depends directly on the infusion rate, it is important to use safe devices that guarantee output accuracy. In pediatric intensive care units, low infusion rates (i.e. lower than 10.0 ml/h) are frequently used; thus, it is necessary to use control programs to search for deviations in this flow range. We describe the implementation of a gravimetric method to test infusion pumps at low flow delivery. The procedure recommended by the ISO/IEC 60601-2-24 standard was used, being a reasonable option among the methods frequently used in hospitals, such as infusion pump analyzers and volumetric cylinders. The main uncertainty sources affecting this method are reviewed, and a numerical and graphical uncertainty analysis is presented in order to show its dependence on flow. Additionally, the obtained uncertainties are compared to those presented by an automatic flow analyzer. Finally, the results of a series of tests performed on a syringe infusion pump operating at low rates are shown.
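
    The gravimetric flow estimate and its combined standard uncertainty follow from Q = m/(rho*t) and first-order propagation of the mass, density and time uncertainties, as in the sketch below (the numbers are illustrative, and corrections prescribed by ISO/IEC 60601-2-24, e.g. for evaporation, are omitted):

        import math

        m, u_m = 5.0, 0.01          # collected mass and its uncertainty, g
        rho, u_rho = 0.998, 0.0005  # water density and its uncertainty, g/ml
        t, u_t = 3600.0, 1.0        # collection time and its uncertainty, s

        q = m / (rho * t) * 3600.0  # flow rate in ml/h
        rel_u = math.sqrt((u_m / m) ** 2 + (u_rho / rho) ** 2 + (u_t / t) ** 2)
        print(f"Q = {q:.3f} ml/h +/- {q * rel_u:.3f} ml/h (k=1)")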

  8. Detection of normal plantar fascia thickness in adults via the ultrasonographic method.

    Science.gov (United States)

    Abul, Kadir; Ozer, Devrim; Sakizlioglu, Secil Sezgin; Buyuk, Abdul Fettah; Kaygusuz, Mehmet Akif

    2015-01-01

    Heel pain is a prevalent concern in orthopedic clinics, and there are numerous pathologic abnormalities that can cause heel pain. Plantar fasciitis is the most common cause of heel pain, and the plantar fascia thickens in this process. It has been found that thickening to greater than 4 mm in ultrasonographic measurements can be accepted as meaningful in diagnoses. Herein, we aimed to measure normal plantar fascia thickness in adults using ultrasonography. We used ultrasonography to measure the plantar fascia thickness of 156 healthy adults in both feet between April 1, 2011, and June 30, 2011. These adults had no previous heel pain. The 156 participants comprised 88 women (56.4%) and 68 men (43.6%) (mean age, 37.9 years; range, 18-65 years). The weight, height, and body mass index of the participants were recorded, and statistical analyses were conducted. The mean ± SD (range) plantar fascia thickness measurements for subgroups of the sample were as follows: 3.284 ± 0.56 mm (2.4-5.1 mm) for male right feet, 3.3 ± 0.55 mm (2.5-5.0 mm) for male left feet, 2.842 ± 0.42 mm (1.8-4.1 mm) for female right feet, and 2.8 ± 0.44 mm (1.8-4.3 mm) for female left feet. The overall mean ± SD (range) thickness for the right foot was 3.035 ± 0.53 mm (1.8-5.1 mm) and for the left foot was 3.053 ± 0.54 mm (1.8-5.0 mm). There was a statistically significant and positive correlation between plantar fascia thickness and participant age, weight, height, and body mass index. The plantar fascia thickness of adults without heel pain was measured to be less than 4 mm in most participants (~92%). There was no statistically significant difference between the thickness of the right and left foot plantar fascia.

  9. Image reconstruction method for electrical capacitance tomography based on the combined series and parallel normalization model

    International Nuclear Information System (INIS)

    Dong, Xiangyuan; Guo, Shuqing

    2008-01-01

    In this paper, a novel image reconstruction method for electrical capacitance tomography (ECT) based on the combined series and parallel normalization model is presented. A regularization technique is used to obtain a stabilized solution of the inverse problem, and the adaptive coefficient of the combined model is deduced by numerical optimization. Simulation results indicate that it produces higher-quality images than the algorithms based on the parallel or series models alone for the cases tested in this paper. It provides a new algorithm for ECT applications.
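
    The two classical normalization models and their convex combination are compact enough to state directly; in the paper the adaptive coefficient is deduced by numerical optimization, whereas in this sketch it is left as a free parameter (capacitance values are illustrative):

        import numpy as np

        def normalize_combined(c, c_low, c_high, alpha):
            lam_parallel = (c - c_low) / (c_high - c_low)
            lam_series = (1 / c - 1 / c_low) / (1 / c_high - 1 / c_low)
            return alpha * lam_parallel + (1 - alpha) * lam_series

        c = np.array([2.1, 2.6, 3.0])   # measured capacitances (illustrative)
        print(normalize_combined(c, c_low=2.0, c_high=3.2, alpha=0.4))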

  10. Mandibulary dental arch form differences between level four polynomial method and pentamorphic pattern for normal occlusion sample

    Directory of Open Access Journals (Sweden)

    Y. Yuliana

    2011-07-01

    The aim of an orthodontic treatment is to achieve aesthetics, health of the teeth and surrounding tissues, a functional occlusal relationship, and stability. The success of an orthodontic treatment is influenced by many factors, such as diagnosis and treatment plan. In order to make a diagnosis and a treatment plan, medical records, clinical examination, radiographic examination, extraoral and intraoral photographs, as well as study model analysis are needed. The purpose of this study was to evaluate the differences in dental arch form between the level four polynomial method and the pentamorphic arch pattern and to determine which one is best suited for a normal occlusion sample. This analytic comparative study was conducted at the Faculty of Dentistry, Universitas Padjadjaran, on 13 models, comparing the dental arch form obtained by the level four polynomial method (based on mathematical calculations) and by the pentamorphic arch pattern, with the mandibular normal occlusion arch form as a control. The results were tested using Student's t-test. The results indicate a significant difference for both the level four polynomial method and the pentamorphic arch form when compared with the mandibular normal occlusion dental arch form. The level four polynomial fits better than the pentamorphic arch form.

  11. Alcohol and illicit drugs in drivers involved in road traffic crashes in the Milan area. A comparison with normal traffic reveals the possible inadequacy of current cut-off limits.

    Science.gov (United States)

    Ferrari, Davide; Manca, Monica; Banfi, Giuseppe; Locatelli, Massimo

    2018-01-01

    Driving under the influence of alcohol and/or illicit drugs in Italy is regulated by articles 186 and 187 of the National Street Code. Epidemiological studies on drivers involved in road traffic crashes (RTC) provide useful information about the use/abuse of these substances in the general population. Comparison with case-control studies may reveal important information, such as the adequacy of the cut-off limits. Data from 1587 blood tests for alcohol and 1258 blood tests for illicit drugs on drivers involved in RTC around Milan between 2012 and 2016 were analyzed and compared with a published random survey (DRUID) from the European Community. Our data from RTC-involved drivers show that alcohol abuse is not age-related, whereas illicit drugs are more common in young people. Cannabinoids are frequent among younger drivers (median age 27), whereas cocaine is more often detected in adults (median age 34). The odds ratios calculated after comparison with the DRUID survey show that a blood alcohol concentration below the legal limit does not represent a risk factor for having a car accident, whereas concentrations of cocaine and cannabinoids within the legal limits are associated with being involved in a car accident. Despite the authorities' efforts, the abuse of alcohol and illicit drugs is still common in young drivers. We suspect that the cut-off limits for cannabinoids and cocaine and/or the pre-analytical procedures for these substances are inadequate. We suggest a better standardization of the procedure by shortening the time interval between the request for investigation and blood collection, and propose the adoption of more stringent cut-off limits. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. ADVANCED CUTTINGS TRANSPORT STUDY

    Energy Technology Data Exchange (ETDEWEB)

    Stefan Miska; Nicholas Takach; Kaveh Ashenayi

    2004-07-31

    We have tested the loop elevation system. We raised the mast to approximately 25 to 30 degrees from horizontal. All went well. However, while lowering the mast, it moved laterally a couple of degrees. Upon visual inspection, severe spalling of the concrete on the face of the support pillar and deformation of the steel support structure were observed. At this time, the facility is ready for testing in the horizontal position. A new air compressor has been received and set in place for the ACTS test loop. A new laboratory has been built near the ACTS test loop. Roughened cups and rotors for the viscometer (RS300) were obtained. Rheologies of aqueous foams were measured using three different cup-rotor assemblies that have different surface roughness. The relationship between surface roughness and foam rheology was investigated. Re-calibration of the nuclear densitometers has been finished. The re-calibration was also performed with 1% surfactant foam. A new cuttings injection system was installed at the bottom of the injection tower. It replaced the previous injection auger. A mechanistic model for cuttings transport with aerated mud has been developed. Cuttings transport mechanisms with aerated water at various conditions were experimentally investigated. A total of 39 tests were performed. Comparisons between the model predictions and experimental measurements show a satisfactory agreement. Results from the ultrasonic monitoring system indicated that we could distinguish between different sand levels. We also devised ways to achieve consistency of performance by securing the sensors in the caps in exactly the same manner, as long as the sensors are not removed from the caps. A preliminary test was conducted on the main flow loop at a 100 gpm flow rate and a 20 lb/min cuttings injection rate. The measured bed thickness using the ultrasonic method showed a satisfactory agreement with nuclear densitometer readings. Thirty different data points were collected after the test

  13. Performance improvement of two-dimensional EUV spectroscopy based on high frame rate CCD and signal normalization method

    International Nuclear Information System (INIS)

    Zhang, H.M.; Morita, S.; Ohishi, T.; Goto, M.; Huang, X.L.

    2014-01-01

    In the Large Helical Device (LHD), the performance of two-dimensional (2-D) extreme ultraviolet (EUV) spectroscopy with a wavelength range of 30-650 Å has been improved by installing a high frame rate CCD and applying a signal intensity normalization method. With the upgraded 2-D space-resolved EUV spectrometer, measurement of 2-D impurity emission profiles with high horizontal resolution is possible in high-density NBI discharges. The variation in intensities of EUV emission among a few discharges is significantly reduced by normalizing the signal to the spectral intensity from the EUV_Long spectrometer, which works as an impurity monitor with high time resolution. As a result, high-resolution 2-D intensity distributions have been obtained from CIV (384.176 Å), CV (2x40.27 Å), CVI (2x33.73 Å) and HeII (303.78 Å). (author)

  14. Laser cutting: industrial relevance, process optimization, and laser safety

    Science.gov (United States)

    Haferkamp, Heinz; Goede, Martin; von Busse, Alexander; Thuerk, Oliver

    1998-09-01

    Compared to other technologically relevant laser machining processes, laser cutting is up to now the most frequently used application. With respect to the large number of possible fields of application and the variety of different materials that can be machined, this technology has reached a stable position within the world market of material processing. The machining quality achievable with laser beam cutting is influenced by various laser and process parameters. Process-integrated quality techniques have to be applied to ensure high-quality products and a cost-effective use of the laser manufacturing plant. Therefore, rugged and versatile online process monitoring techniques at an affordable price are desirable. Methods for the characterization of single plant components (e.g. laser source and optical path) have to be substituted by an omnivalent control system, capable of process data acquisition and analysis as well as the automatic adaptation of machining and laser parameters to changes in process and ambient conditions. At the Laser Zentrum Hannover eV, locally highly resolved thermographic measurements of the temperature distribution within the processing zone are performed using cost-effective measuring devices. Characteristic values for cutting quality and plunge control, as well as for the optimization of the surface roughness at the cutting edges, can be deduced from the spatial distribution of the temperature field and the measured temperature gradients. The main parameters influencing the temperature characteristic within the cutting zone are the laser beam intensity and the pulse duration in pulsed operation mode. For continuous operation mode, the temperature distribution is mainly determined by the laser output power related to the cutting velocity. With higher cutting velocities, temperatures at the cutting front increase, reaching their maximum at the optimum cutting velocity. Here absorption of the incident laser radiation is drastically increased due to

  15. Cut-off values of distal forearm bone density for the diagnosis of ...

    African Journals Online (AJOL)

    Background: The objective of this study was to establish a triage cut-off point or threshold for peripheral bone mineral density (BMD), applicable to black postmenopausal women, and that could be used as a screening method to differentiate between women with normal BMD, and those with possible central osteoporosis.

  16. Device for cutting protrusions

    Science.gov (United States)

    Bzorgi, Fariborz M [Knoxville, TN

    2011-07-05

    An apparatus for clipping a protrusion of material is provided. The protrusion may, for example, be a bolt head, a nut, a rivet, a weld bead, or a temporary assembly alignment tab protruding from a substrate surface of assembled components. The apparatus typically includes a cleaver having a cleaving edge and a cutting blade having a cutting edge. Generally, a mounting structure configured to confine the cleaver and the cutting blade and permit a range of relative movement between the cleaving edge and the cutting edge is provided. Also typically included is a power device coupled to the cutting blade. The power device is configured to move the cutting edge toward the cleaving edge. In some embodiments the power device is activated by a momentary switch. A retraction device is also generally provided, where the retraction device is configured to move the cutting edge away from the cleaving edge.

  17. Thermophysical problems of laser cutting of metals

    Directory of Open Access Journals (Sweden)

    Orishich Anatoliy

    2017-01-01

    The variety and complex interaction of physical processes during laser cutting is a critical characteristic of the laser cutting of metals. Small spatial and temporal scales significantly complicate experimental investigation of the multi-phase fluid flow under laser-cutting conditions. Under these conditions, the surface formed during the cutting is an indicator of the melt flow character. The quantitative parameter reflecting the peculiarities of the multi-phase fluid flow is normally the roughness of the formed surface, and minimal roughness is the criterion of qualitative flow [1-2]. The purpose of this work is to perform an experimental comparative investigation of the thermophysical pattern of the multi-phase melt flow under the conditions of laser cutting of metals at laser wavelengths of 10.6 μm and 1.07 μm.

  18. A non-Hertzian method for solving wheel-rail normal contact problem taking into account the effect of yaw

    Science.gov (United States)

    Liu, Binbin; Bruni, Stefano; Vollebregt, Edwin

    2016-09-01

    A novel approach is proposed in this paper to deal with non-Hertzian normal contact at the wheel-rail interface, extending the widely used Kik-Piotrowski method. The new approach is able to consider the effect of the yaw angle of the wheelset against the rail on the shape of the contact patch and on the pressure distribution. Furthermore, the method considers the variation of profile curvature across the contact patch, enhancing the correspondence to CONTACT for highly non-Hertzian contact conditions. The simulation results show that the proposed method provides more accurate estimates than the original algorithm when compared to Kalker's CONTACT, and that the influence of yaw on the contact results is significant under certain circumstances.

  19. CAD for cutting head exchange of roadheader

    Energy Technology Data Exchange (ETDEWEB)

    Tao, Z.; Wu, Z.; Qian, P. [China Coal Research Institute (China). Shanghai Branch

    1999-08-01

    Improving the cutting method according to the actual operating conditions is an effective way to raise production efficiency. A cutting head designed by means of computer and CAD software is characterized by short design cycle but high design quality. Taking the AM-50 road header as an example, this paper shows that it is feasible to design an interchangeable cutting head for the machine without interfering with the main technical parameters. 2 refs., 5 figs., 1 tab.

  20. ChIPnorm: a statistical method for normalizing and identifying differential regions in histone modification ChIP-seq libraries.

    Science.gov (United States)

    Nair, Nishanth Ulhas; Sahu, Avinash Das; Bucher, Philipp; Moret, Bernard M E

    2012-01-01

    The advent of high-throughput technologies such as ChIP-seq has made possible the study of histone modifications. A problem of particular interest is the identification of regions of the genome where different cell types from the same organism exhibit different patterns of histone enrichment. This problem turns out to be surprisingly difficult, even in simple pairwise comparisons, because of the significant level of noise in ChIP-seq data. In this paper we propose a two-stage statistical method, called ChIPnorm, to normalize ChIP-seq data and to find differential regions in the genome, given two libraries of histone modifications of different cell types. We show that the ChIPnorm method removes most of the noise and bias in the data and outperforms other normalization methods. We correlate the histone marks with gene expression data and confirm that the histone modifications H3K27me3 and H3K4me3 act, respectively, as a repressor and an activator of genes. Compared to what was previously reported in the literature, we find that a substantially higher fraction of bivalent marks in ES cells for H3K27me3 and H3K4me3 move into a K27-only state. We find that most of the promoter regions in protein-coding genes have differential histone-modification sites. The software for this work can be downloaded from http://lcbb.epfl.ch/software.html.
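
    The abstract does not spell out the two stages, so the following is only a generic depth-scaling comparison of two binned histone-modification libraries, not the ChIPnorm algorithm itself (bin counts and the pseudocount are illustrative assumptions):

    ```python
    import numpy as np

    def compare_libraries(counts_a, counts_b, pseudo=1.0):
        """Naive two-library comparison: scale binned read counts to equal
        sequencing depth, then report a per-bin log2 enrichment ratio."""
        counts_a = np.asarray(counts_a, dtype=float)
        counts_b = np.asarray(counts_b, dtype=float)
        # Stage 1 (normalization): put both libraries on a common depth.
        freq_a = (counts_a + pseudo) / counts_a.sum()
        freq_b = (counts_b + pseudo) / counts_b.sum()
        # Stage 2 (comparison): bins with extreme ratios are candidate
        # differential regions between the two cell types.
        return np.log2(freq_a / freq_b)
    ```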

  1. Methods and data for HTGR fuel performance and radionuclide release modeling during normal operation and accidents for safety analysis

    International Nuclear Information System (INIS)

    Verfondern, K.; Martin, R.C.; Moormann, R.

    1993-01-01

    The previous status report released in 1987 on reference data and calculation models for fission product transport in High-Temperature, Gas-Cooled Reactor (HTGR) safety analyses has been updated to reflect the current state of knowledge in the German HTGR program. The content of the status report has been expanded to include information from other national programs in HTGRs to provide comparative information on methods of analysis and the underlying database for fuel performance and fission product transport. The release and transport of fission products during normal operating conditions and during the accident scenarios of core heatup, water and air ingress, and depressurization are discussed. (orig.)

  2. Method for the determination of the equation of state of advanced fuels based on the properties of normal fluids

    International Nuclear Information System (INIS)

    Hecht, M.J.; Catton, I.; Kastenberg, W.E.

    1976-12-01

    An equation of state based on the properties of normal fluids, the law of rectilinear averages, and the second law of thermodynamics can be derived for advanced LMFBR fuels on the basis of the vapor pressure, the enthalpy of vaporization, the change in heat capacity upon vaporization, and the liquid density at the melting point. The method consists of estimating an equation of state by means of the law of rectilinear averages and the second law of thermodynamics, integrating by means of the second law until an instability is reached, and then extrapolating by means of a self-consistent estimation of the enthalpy of vaporization.
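
    For orientation, the two ingredients named above can be written compactly (a sketch in our own notation, not the authors' derivation): the law of rectilinear averages for the saturated liquid and vapor densities, and the second law entering through the Clausius-Clapeyron relation,

    $$\frac{\rho_l(T)+\rho_v(T)}{2} = a + bT, \qquad \frac{dP_{\mathrm{sat}}}{dT} = \frac{\Delta H_{\mathrm{vap}}}{T\,(v_v - v_l)},$$

    so that the vapor pressure, the enthalpy of vaporization and the liquid density at the melting point together constrain both saturated branches of the equation of state.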

  3. The method of normal forms for singularly perturbed systems of Fredholm integro-differential equations with rapidly varying kernels

    Energy Technology Data Exchange (ETDEWEB)

    Bobodzhanov, A A; Safonov, V F [National Research University " Moscow Power Engineering Institute" , Moscow (Russian Federation)

    2013-07-31

    The paper deals with extending the Lomov regularization method to classes of singularly perturbed Fredholm-type integro-differential systems which have not so far been studied. In these systems the limiting operator is discretely noninvertible. Such systems are commonly known as problems with unstable spectrum. Separating out the essential singularities in the solutions to these problems presents great difficulties, the principal one being to give an adequate description of the singularities induced by 'instability points' of the spectrum. A methodology for separating singularities by using normal forms is developed. It is applied to systems of the above type and substantiated for them. Bibliography: 10 titles.

  4. Impact of PET/CT image reconstruction methods and liver uptake normalization strategies on quantitative image analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kuhnert, Georg; Sterzer, Sergej; Kahraman, Deniz; Dietlein, Markus; Drzezga, Alexander; Kobe, Carsten [University Hospital of Cologne, Department of Nuclear Medicine, Cologne (Germany); Boellaard, Ronald [VU University Medical Centre, Department of Radiology and Nuclear Medicine, Amsterdam (Netherlands); Scheffler, Matthias; Wolf, Juergen [University Hospital of Cologne, Lung Cancer Group Cologne, Department I of Internal Medicine, Center for Integrated Oncology Cologne Bonn, Cologne (Germany)

    2016-02-15

    In oncological imaging using PET/CT, the standardized uptake value has become the most common parameter used to measure tracer accumulation. The aim of this analysis was to evaluate ultra high definition (UHD) and ordered subset expectation maximization (OSEM) PET/CT reconstructions for their potential impact on quantification. We analyzed 40 PET/CT scans of lung cancer patients who had undergone PET/CT. Standardized uptake values corrected for body weight (SUV) and lean body mass (SUL) were determined in the single hottest lesion in the lung and normalized to the liver for UHD and OSEM reconstructions. Quantitative uptake values and their normalized ratios for the two reconstruction settings were compared using the Wilcoxon test. The distributions of quantitative uptake values and their ratios in relation to the reconstruction method used were demonstrated in the form of frequency distribution curves, box plots and scatter plots. The agreement between OSEM and UHD reconstructions was assessed through Bland-Altman analysis. A significant difference was observed between OSEM and UHD reconstruction for all SUV and SUL data tested (p < 0.0005 in all cases). The mean values of the ratios after OSEM and UHD reconstruction showed equally significant differences (p < 0.0005 in all cases). Bland-Altman analysis showed that the SUV and SUL and their normalized values were, on average, up to 60% higher after UHD reconstruction as compared to OSEM reconstruction. OSEM and UHD reconstruction produced significantly different SUV and SUL values, and the difference remained constantly high after normalization to the liver, indicating that standardization of reconstruction and the use of comparable SUV measurements are crucial when using PET/CT. (orig.)
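
    A minimal sketch of the Bland-Altman comparison used here (our illustration; the two SUV arrays are hypothetical inputs, not the study data):

    ```python
    import numpy as np

    def bland_altman(suv_osem, suv_uhd):
        """Return the mean percent difference (bias) and 95% limits of
        agreement between two reconstructions of the same lesions."""
        a = np.asarray(suv_osem, dtype=float)
        b = np.asarray(suv_uhd, dtype=float)
        diff = (b - a) / ((a + b) / 2.0) * 100.0  # percent difference per lesion
        bias = diff.mean()                        # systematic offset, UHD vs OSEM
        loa = 1.96 * diff.std(ddof=1)             # half-width of the 95% limits
        return bias, bias - loa, bias + loa
    ```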

  5. Influences of Normalization Method on Biomarker Discovery in Gas Chromatography-Mass Spectrometry-Based Untargeted Metabolomics: What Should Be Considered?

    Science.gov (United States)

    Chen, Jiaqing; Zhang, Pei; Lv, Mengying; Guo, Huimin; Huang, Yin; Zhang, Zunjian; Xu, Fengguo

    2017-05-16

    Data reduction techniques in gas chromatography-mass spectrometry-based untargeted metabolomics have made the subsequent data analysis workflow more lucid. However, the normalization process still perplexes researchers, and its effects are often ignored. In order to reveal the influence of the normalization method, five representative normalization methods (mass spectrometry total useful signal, median, probabilistic quotient normalization, remove unwanted variation-random, and systematic ratio normalization) were compared on three real data sets of different types. First, data reduction techniques were used to refine the original data. Then, quality control samples and relative log abundance plots were utilized to evaluate the unwanted variations and the efficiency of the normalization process. Furthermore, the potential biomarkers screened out by the Mann-Whitney U test, receiver operating characteristic curve analysis, random forest, and the feature selection algorithm Boruta in the differently normalized data sets were compared. The results indicated that the choice of normalization method is difficult because the commonly accepted rules are easy to fulfill, yet different normalization methods have unforeseen influences on both the kind and number of potential biomarkers. Lastly, an integrated strategy for normalization method selection is recommended.
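
    Of the five methods compared, probabilistic quotient normalization is the easiest to state; a minimal sketch (our generic implementation, assuming a samples-by-features intensity matrix with no zero medians):

    ```python
    import numpy as np

    def pqn(x):
        """Probabilistic quotient normalization: divide each sample by the
        median of its feature-wise quotients against a reference spectrum."""
        x = np.asarray(x, dtype=float)
        reference = np.median(x, axis=0)           # reference (median) spectrum
        quotients = x / reference                  # feature-wise quotients
        dilution = np.median(quotients, axis=1)    # most probable dilution factor
        return x / dilution[:, None]
    ```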

  6. Plasma cutting or laser cutting. Plasma setsudan ka laser setsudan ka

    Energy Technology Data Exchange (ETDEWEB)

    Nakamura, A. (Tanaka Engineering Works Ltd., Saitama (Japan))

    1991-05-01

    Comparisons and discussions were made on plasma cutting and laser cutting of sheet steel, referring partly to gas cutting. Historically, cutting has developed from gas to plasma to laser, in that order, and currently these three methods are used in combination. Generally, plasma cutting is superior in cutting speed but inferior in cut face quality, and it requires dust collection measures. Due to its high accuracy and cut face quality, laser cutting has been in practical use for quite some time in the thin sheet industry, but medium to thick sheet cutting had the problem that no high-output laser suitable for these ranges was available. Recent technology has overcome this problem with the development at the authors' company of a 2 kW class laser cutter capable of cutting 19 mm thick sheet. The cutter has proven particularly excellent in controllability. The choice between plasma and laser depends on which priority is to be taken: cost or accuracy. 15 figs., 3 tabs.

  7. Probing the effect of human normal sperm morphology rate on cycle outcomes and assisted reproductive methods selection.

    Directory of Open Access Journals (Sweden)

    Bo Li

    Sperm morphology is the best predictor of fertilization potential and critical predictive information for supporting the selection of assisted reproductive methods. Given its important predictive value and the decline in semen quality in recent years, the threshold of normal sperm morphology rate (NSMR) is constantly being corrected and remains controversial, from the 4th edition (14%) to the 5th edition (4%). We retrospectively analyzed 4756 cases of infertility patients treated with conventional IVF (c-IVF) or ICSI, divided into three groups according to NSMR: ≥14%, 4%-14% and <4%. Here, we demonstrate that, with decreasing NSMR (≥14%, 4%-14%, <4%), in the c-IVF group, the rates of fertilization, normal fertilization, high-quality embryos and multi-pregnancy, and the birth weight of twins, decreased significantly (P<0.05), while the miscarriage rate increased significantly (p<0.01); the implantation rate, clinical pregnancy rate, ectopic pregnancy rate, preterm birth rate, live birth rate, sex ratio, and singleton birth weight showed no significant change. In the ICSI group, with decreasing NSMR (≥14%, 4%-14%, <4%), the high-quality embryo rate, multi-pregnancy rate and birth weight of twins decreased significantly (p<0.05), while other parameters showed no significant difference. Considering clinical assisted-method selection, in the NSMR ≥14% group the normal fertilization rate of c-IVF was significantly higher than in the ICSI group (P<0.05); in the 4%-14% group, the birth weight of twins with c-IVF was significantly higher than in the ICSI group; in the <4% group, miscarriage with IVF was significantly higher than in the ICSI group. Therefore, we conclude that NSMR is positively related to embryo reproductive potential, and when NSMR<4% (5th edition), ICSI should be considered first, while for NSMR≥4%, c-IVF assisted reproduction might be preferred.

  8. Extension without Cut

    OpenAIRE

    Straßburger , Lutz

    2012-01-01

    In proof theory one distinguishes sequent proofs with cut and cut-free sequent proofs, while for proof complexity one distinguishes Frege-systems and extended Frege-systems. In this paper we show how deep inference can provide a uniform treatment for both classifications, such that we can define cut-free systems with extension, which is neither possible with Frege-systems, nor with the sequent calculus. We show that the propositional pigeon-hole principle admits polyno...

  9. Model-free methods of analyzing domain motions in proteins from simulation : A comparison of normal mode analysis and molecular dynamics simulation of lysozyme

    NARCIS (Netherlands)

    Hayward, S.; Kitao, A.; Berendsen, H.J.C.

    Model-free methods are introduced to determine quantities pertaining to protein domain motions from normal mode analyses and molecular dynamics simulations, For the normal mode analysis, the methods are based on the assumption that in low frequency modes, domain motions can be well approximated by

  10. Chapter 9: Metering Cross-Cutting Protocol. The Uniform Methods Project: Methods for Determining Energy-Efficiency Savings for Specific Measures

    Energy Technology Data Exchange (ETDEWEB)

    Mort, Dan [ADM Associates, Inc. Sacramento, CA (United States)

    2017-09-01

    Estimated energy savings are calculated as the difference between the energy use during the baseline period and the energy use during the post-installation period of the EEM. This chapter describes the physical properties measured in the process of evaluating EEMs and the specific metering methods for several types of measurements. Skill-level requirements and other operating considerations are discussed, including where, when, and how often measurements should be made. The subsequent section identifies metering equipment types and their respective measurement accuracies. This is followed by sections containing suggestions regarding proper data-handling procedures and the categorization and definition of several load types.

  11. Chapter 13: Assessing Persistence and Other Evaluation Issues Cross-Cutting Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    Energy Technology Data Exchange (ETDEWEB)

    Kurnik, Charles W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Violette, Daniel M. [Navigant, Boulder, CO (United States)

    2017-09-01

    Addressing other evaluation issues that have been raised in the context of energy efficiency programs, this chapter focuses on methods used to address the persistence of energy savings, which is an important input to the benefit/cost analysis of energy efficiency programs and portfolios. In addition to discussing 'persistence' (which refers to the stream of benefits over time from an energy efficiency measure or program), this chapter provides a summary treatment of related issues: synergies across programs, rebound, dual baselines, and errors in variables (the measurement and/or accuracy of input variables to the evaluation).

  12. Chapter 12: Survey Design and Implementation for Estimating Gross Savings Cross-Cutting Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    Energy Technology Data Exchange (ETDEWEB)

    Kurnik, Charles W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Baumgartner, Robert [Tetra Tech, Madison, WI (United States)

    2017-10-05

    This chapter presents an overview of best practices for designing and executing survey research to estimate gross energy savings in energy efficiency evaluations. A detailed description of the specific techniques and strategies for designing questions, implementing a survey, and analyzing and reporting the survey procedures and results is beyond the scope of this chapter. So for each topic covered below, readers are encouraged to consult articles and books cited in References, as well as other sources that cover the specific topics in greater depth. This chapter focuses on the use of survey methods to collect data for estimating gross savings from energy efficiency programs.

  13. Chapter 11: Sample Design Cross-Cutting Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    Energy Technology Data Exchange (ETDEWEB)

    Kurnik, Charles W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Khawaja, M. Sami [The Cadmus Group, Portland, OR (United States); Rushton, Josh [The Cadmus Group, Portland, OR (United States); Keeling, Josh [The Cadmus Group, Portland, OR (United States)

    2017-09-01

    Evaluating an energy efficiency program requires assessing the total energy and demand saved through all of the energy efficiency measures provided by the program. For large programs, the direct assessment of savings for each participant would be cost-prohibitive. Even if a program is small enough that a full census could be managed, such an undertaking would almost always be an inefficient use of evaluation resources. The bulk of this chapter describes methods for minimizing and quantifying sampling error. Measurement error and regression error are discussed in various contexts in other chapters.
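
    As an illustration of the kind of calculation involved (standard evaluation practice, not a quotation from this protocol), the initial sample size for a target relative precision $e$ at confidence level $z$, given an assumed coefficient of variation $cv$, is

    $$n_0=\frac{z^2\,cv^2}{e^2},\qquad n=\frac{n_0}{1+n_0/N},$$

    where the second formula applies the finite-population correction for a program with $N$ participants; the common 90/10 criterion with $cv=0.5$ gives $n_0\approx 68$.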

  14. Development of cutting techniques of steel pipe by wire sawing

    International Nuclear Information System (INIS)

    Kamiyama, Yoshinori; Inai, Shinsuke

    2004-01-01

    A cutting method with high cutting efficiency that enables safe cutting is required. The wire saw cutting method is used for dismantling massive concrete structures such as nuclear power plants as an effective and safe means. For dismantling structures with multiple pipes installed at these facilities, an effective method is also demanded. If a wire saw method that remotely cuts target objects in bulk as a large block is applicable, effective dismantling work can be expected under severe radioactive conditions. Although the wire saw method is adaptable to any shape of cutting target and is widely adopted in dismantling concrete structures, it has few actual achievements in dismantling steel structures such as steel pipe bundles. This study aims to verify its cutting characteristics and adaptability as a cutting method by conducting basic cutting tests, in order to develop a diamond wire saw method to efficiently cut structures with multiple pipes in a bundle. The tests proved that the wire saw cutting method can be applied to dismantling structures with steel pipe bundles. A wire saw for metal cutting is adaptable to dismantling bundles of thick carbon steel and stainless steel pipes, and a wire saw for concrete cutting is adaptable to dismantling pipe-bundle structures with mortar. (author)

  15. Underwater cutting techniques developments

    International Nuclear Information System (INIS)

    Bach, F.-W.

    1990-01-01

    The primary circuit structures of different nuclear power plants are constructed of stainless steels, ferritic steels, plated ferritic steels and aluminium alloys. Depending on the level of specific radiation of these structures, dismantling requires remote-controlled cutting techniques. The most successful way to protect the working crew against radiation exposure is to operate under water at various depths. The following thermal cutting processes are more or less developed for underwater work: for ferritic steels only, flame cutting; for ferritic steels, stainless steels, cladded steels and aluminium alloys, oxy-arc cutting, arc-waterjet cutting with a consumable electrode, arc-saw cutting, plasma-arc cutting and the plasma-arc saw. Flame cutting is a combustion process; all the other processes are melt-cutting processes. This paper explains the different techniques, giving a short introduction to the theory, a discussion of the possibilities with the advantages and disadvantages of these processes, and a view of further research work in this interesting field. (author)

  16. The Effect of Muscle Fiber Direction on the Cut Surface Angle of Frozen Fish Muscular Tissue Cut by Bending Force

    OpenAIRE

    岡本, 清; 羽倉, 義雄; 鈴木, 寛一; 久保田, 清

    1996-01-01

    We have proposed a new cutting method named "Cryo-cutting" for frozen foodstuffs, applying a bending force instead of conventional cutting with a band saw. This paper investigated the effect of the muscle fiber angle (θf) on the cut surface angle (θs) of frozen tuna muscular tissue at -70, -100 and -130°C, for the purpose of evaluating the applicability of the cryo-cutting method to frozen fish. The results were as follows: (1) There were two typical cutting patterns ("across the muscle fib...

  17. Cut Locus Construction using Deformable Simplicial Complexes

    DEFF Research Database (Denmark)

    Misztal, Marek Krzysztof; Bærentzen, Jakob Andreas; Anton, François

    2011-01-01

    In this paper we present a method for approximating cut loci for a given point p on Riemannian 2D manifolds, closely related to the notion of Voronoi diagrams. Our method finds the cut locus by advecting a front of points equally distant from p along the geodesics originating at p and finding... the domain to have disk topology. We test our method for tori of revolution and compare our results to the benchmark ones from [2]. The method, however, is generic and can be easily adapted to construct cut loci for other manifolds of genera other than 1...

  18. Designing for hot-blade cutting

    DEFF Research Database (Denmark)

    Brander, David; Bærentzen, Jakob Andreas; Clausen, Kenn

    2016-01-01

    In this paper we present a novel method for the generation of doubly-curved, architectural design surfaces using swept Euler elastica and cubic splines. The method enables a direct design-to-production workflow with robotic hot-blade cutting, a novel robotic fabrication method under development... non-trivial constraints of blade-cutting in a bottom-up fashion, enabling an exploration of the unique architectural potential of this fabrication approach. The method is implemented as prototype design tools in MATLAB, C++, GhPython, and Python and demonstrated through cutting of expanded polystyrene foam design...

  19. Antimicrobial Susceptibility of Flavobacterium psychrophilum from Chilean Salmon Farms and their Epidemiological Cut-off Values using Agar Dilution and Disk Diffusion Methods

    Directory of Open Access Journals (Sweden)

    Claudio D Miranda

    2016-11-01

    Flavobacterium psychrophilum is the most important bacterial pathogen for freshwater farmed salmonids in Chile. The aims of this study were to determine the susceptibility of Chilean isolates to antimicrobials used in fish farming and to calculate their epidemiological cut-off (COWT) values. A total of 125 Chilean isolates of F. psychrophilum were recovered from reared salmonids presenting clinical symptoms indicative of flavobacteriosis, and their identities were confirmed by 16S rRNA polymerase chain reaction. Susceptibility to antibacterials was tested on diluted Mueller-Hinton agar using an agar dilution MIC method and a disk diffusion method. The COWT values calculated by Normalised Resistance Interpretation (NRI) analysis allow isolates to be categorized either as wild-type fully susceptible (WT) or as manifesting reduced susceptibility (NWT). When MIC data were used, NRI analysis gave COWT values of ≤ 0.125 μg mL-1, ≤ 2 μg mL-1 and ≤ 0.5 μg mL-1 for amoxicillin, florfenicol and oxytetracycline, respectively. For the quinolones, the COWT values were ≤ 1 μg mL-1, ≤ 0.5 μg mL-1 and ≤ 0.125 μg mL-1 for oxolinic acid, flumequine and enrofloxacin, respectively. The disc diffusion data sets obtained in this work were extremely diverse and spread over a wide range. For the quinolones there was close agreement between the frequencies of NWT isolates calculated using MIC and disc data: for oxolinic acid, flumequine and enrofloxacin the frequencies were 45, 39 and 38% using MIC data, and 42, 41 and 44% when disc data were used. There was less agreement for the other antimicrobials, where the NWT frequencies obtained using MIC and disc data, respectively, were 24% and 10% for amoxicillin, 8% and 2% for florfenicol and 70% and 64% for oxytetracycline. Considering that the MIC data were more precise than the disc diffusion data, MIC determination would be the preferred method of susceptibility testing for this species and the NWT frequencies

  20. Propagation by Cuttings, Layering and Division

    OpenAIRE

    Relf, Diane; Ball, Elizabeth Carter

    2009-01-01

    The major methods of asexual propagation are cuttings, layering, division, and budding/grafting. Cuttings involve rooting a severed piece of the parent plant; layering involves rooting a part of the parent and then severing it; and budding and grafting are joining two plant parts from different varieties.

  1. Application of the Finite Element Method to Reveal the Causes of Loss of Planeness of Hot-Rolled Steel Sheets during Laser Cutting

    Science.gov (United States)

    Garber, E. A.; Bolobanova, N. L.; Trusov, K. A.

    2018-01-01

    A finite element technique is developed to simulate the stresses and the strains during strip flattening to reveal the causes of the cutting-assisted loss of planeness of hot-rolled steel sheets processed in roller levelers. The loss of planeness is found to be caused by a nonuniform distribution of the flattening-induced longitudinal tensile stresses over the strip thickness and width. The application of tensile forces to a strip in a roller leveler decreases this nonuniformity and prevents loss of planeness in cutting.

  2. Uncertainty analysis of the radiological characteristics of radioactive waste using a method based on log-normal distributions

    International Nuclear Information System (INIS)

    Gigase, Yves

    2007-01-01

    Available in abstract form only. Full text of publication follows: The uncertainty on the characteristics of radioactive LILW waste packages is difficult to determine and often very large. This results from a lack of knowledge of the constitution of the waste package and of the composition of the radioactive sources inside. To calculate a quantitative estimate of the uncertainty on a characteristic of a waste package, one has to combine these various uncertainties. This paper discusses an approach to this problem, based on the use of the log-normal distribution, which is both elegant and easy to use. It can provide, for example, quantitative estimates of uncertainty intervals that 'make sense'. The purpose is to develop a pragmatic approach that can be integrated into existing characterization methods. In this paper we show how our method can be applied to the scaling factor method. We also explain how it can be used when estimating other, more complex characteristics, such as the total uncertainty of a collection of waste packages. This method could have applications in radioactive waste management, in particular in decision processes where the uncertainty on the amount of activity is considered important, such as probabilistic risk assessment or the definition of criteria for acceptance or categorization. (author)
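
    A sketch of why the log-normal is convenient here (our illustration with made-up factor uncertainties): if a package characteristic is a product of independent log-normal factors, the log-parameters simply add, giving closed-form uncertainty intervals.

    ```python
    import numpy as np

    # Each factor (e.g. a scaling factor and a measured key-nuclide activity,
    # both hypothetical) is log-normal with parameters (mu, sigma) of its log.
    factors = [(np.log(5.0), 0.40),
               (np.log(2.0), 0.25)]

    mu = sum(m for m, s in factors)                  # log-mean of the product
    sigma = np.sqrt(sum(s**2 for m, s in factors))   # log-sd of the product

    median = np.exp(mu)
    lo, hi = np.exp(mu - 1.96 * sigma), np.exp(mu + 1.96 * sigma)
    print(f"median {median:.1f}, 95% interval [{lo:.1f}, {hi:.1f}]")
    ```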

  3. Depth Estimates for Slingram Electromagnetic Anomalies from Dipping Sheet-like Bodies by the Normalized Full Gradient Method

    Science.gov (United States)

    Dondurur, Derman

    2005-11-01

    The Normalized Full Gradient (NFG) method was proposed in the mid-1960s and has generally been used for the downward continuation of potential field data. The method eliminates the side oscillations that appear on the continuation curves when passing through the anomalous body depth. In this study, the NFG method was applied to Slingram electromagnetic anomalies to obtain the depth of the anomalous body. Experiments were performed on theoretical Slingram model anomalies in a free-space environment using a perfectly conductive thin tabular conductor with infinite depth extent. The theoretical Slingram responses were obtained for different depths, dip angles and coil separations, and it was observed from the NFG fields of the theoretical anomalies that the NFG sections yield depth information for the top of the conductor at low harmonic numbers. The NFG sections consist of two main local maxima located on either side of the central negative Slingram anomaly. These two maxima also locate the maximum anomaly gradient points, which indicate the depth of the anomaly target directly. For both theoretical and field data, the depth of the maximum value on the NFG sections corresponds to the depth of the upper edge of the anomalous conductor. The NFG method was applied to the in-phase component, and correct depth estimates were obtained even for the horizontal tabular conductor. Depth values could be estimated with a relatively small error percentage when the conductive model was near-vertical and/or the conductor depth was larger.
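
    In the form usually quoted (our summary, consistent with the description above rather than the paper's exact notation), the full gradient of the downward-continued field $V$ is normalized by its mean along the profile at each depth level:

    $$G_N(x,z)=\frac{\sqrt{V_x^2(x,z)+V_z^2(x,z)}}{\dfrac{1}{M}\sum_{i=1}^{M}\sqrt{V_x^2(x_i,z)+V_z^2(x_i,z)}},$$

    which suppresses the growth of the continued field with depth, so the position of the $G_N$ maximum can be read directly as the depth of the conductor's upper edge.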

  4. Cutting Class Harms Grades

    Science.gov (United States)

    Taylor, Lewis A., III

    2012-01-01

    An accessible business school population of undergraduate students was investigated in three independent, but related studies to determine effects on grades due to cutting class and failing to take advantage of optional reviews and study quizzes. It was hypothesized that cutting classes harms exam scores, attending preexam reviews helps exam…

  5. A new plan-scoring method using normal tissue complication probability for personalized treatment plan decisions in prostate cancer

    Science.gov (United States)

    Kim, Kwang Hyeon; Lee, Suk; Shim, Jang Bo; Yang, Dae Sik; Yoon, Won Sup; Park, Young Je; Kim, Chul Yong; Cao, Yuan Jie; Chang, Kyung Hwan

    2018-01-01

    The aim of this study was to derive a new plan-scoring index using normal tissue complication probabilities to verify different plans in the selection of personalized treatment. Plans for 12 patients treated with tomotherapy were used to compare scoring for ranking. Dosimetric and biological indexes were analyzed for the plans for a clearly distinguishable group (n = 7) and a similar group (n = 12), using treatment plan verification software that we developed. The quality factor (QF) of our support software for treatment decisions was consistent with the final treatment plan for the clearly distinguishable group (average QF = 1.202, 100% match rate, n = 7) and the similar group (average QF = 1.058, 33% match rate, n = 12). Therefore, we propose a normal tissue complication probability (NTCP)-based plan-scoring index for verification of different plans in personalized treatment-plan selection. Scoring using the new QF showed a 100% match rate (average NTCP QF = 1.0420). The NTCP-based QF scoring method was adequate for obtaining biological verification quality and organ-risk saving using the treatment-planning decision-support software we developed for prostate cancer.

  6. Fundamentals of cutting.

    Science.gov (United States)

    Williams, J G; Patel, Y

    2016-06-06

    The process of cutting is analysed in fracture mechanics terms with a view to quantifying the various parameters involved. The model used is that of orthogonal cutting with a wedge removing a layer of material or chip. The behaviour of the chip is governed by its thickness: for large radii of curvature the chip is elastic and smooth cutting occurs, while for smaller thicknesses there is a transition, first to plastic bending and then to plastic shear, and smooth chips are formed. The governing parameters are the tool geometry, which is principally the wedge angle, and the material properties of elastic modulus, yield stress and fracture toughness. Friction can also be important. It is demonstrated that the cutting process may be quantified via these parameters, which could be useful in the study of cutting in biology.
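
    The parameter list above can be gathered into a schematic steady-state energy balance per unit width of cut (our sketch of the general form, not the authors' exact equations):

    $$\frac{F_c}{b} \,=\, G_c \,+\, u_{\mathrm{chip}}(E,\sigma_Y,t,\alpha) \,+\, u_{\mathrm{friction}},$$

    where $F_c$ is the cutting force, $b$ the cut width, $G_c$ the fracture toughness, $t$ the chip thickness and $\alpha$ the wedge angle; the chip term passes from elastic bending for thick chips to plastic bending and shear as $t$ decreases.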

  7. Laser circular cutting of Kevlar sheets: Analysis of thermal stress filed and assessment of cutting geometry

    Science.gov (United States)

    Yilbas, B. S.; Akhtar, S. S.; Karatas, C.

    2017-11-01

    A Kevlar laminate has a negative thermal expansion coefficient, which makes it difficult to machine at room temperatures using conventional cutting tools. By contrast, laser machining of a Kevlar laminate provides advantages over the conventional methods because of the non-mechanical contact between the cutting tool and the workpiece. In the present study, laser circular cutting of a Kevlar laminate is considered. An experiment is carried out to examine and evaluate the cut sections. Temperature and stress fields formed in the cutting section are simulated in line with the experimental study. The influence of hole diameter on the temperature and stress fields is investigated using two different hole diameters. It is found that the Kevlar laminate cut section is free from large-size asperities such as large-scale sideways burning and attachment of charred residues. The maximum temperature along the cutting circumference remains higher for the large-diameter hole than for the small-diameter hole. Temperature decay is sharp around the cutting section in the region where the cut terminates. This, in turn, results in high temperature gradients and thermal strain in the cutting region. The von Mises stress remains high in the region where temperature gradients are high, and follows a trend similar to that of the temperature decay around the cutting edges.

  8. Electric arc, water jet cutting of metals

    International Nuclear Information System (INIS)

    Bruening, D.

    1991-01-01

    For thermal dismantling and cutting of metallic components, an electric arc, water jet cutting method was developed that can be used for underwater cutting work down to a depth of 20 m. Short-circuiting of a continuously fed electrode wire in contact with the metal generates an electric arc which induces partial melting of the metal, and the water jet surrounding the wire rinses away the molten material, thus making a continuous kerf in the material. The method was also tested and modified to allow larger-area surface cutting and removal of metallic surface coatings. This is achieved by melting parts of the surface with the electric arc and subsequent rinsing by the water jet. The cutting and melting depth for surface removal can be accurately controlled by the operating parameters chosen. (orig./DG)

  9. A New Quantitative Method for the Non-Invasive Documentation of Morphological Damage in Paintings Using RTI Surface Normals

    Directory of Open Access Journals (Sweden)

    Marcello Manfredi

    2014-07-01

    In this paper we propose a reliable surface imaging method for the non-invasive detection of morphological changes in paintings. Usually, the evaluation and quantification of changes and defects result mostly from an optical and subjective assessment, through comparison of the previous and subsequent states of conservation and by means of condition reports. Using quantitative Reflectance Transformation Imaging (RTI), we obtain detailed information on the geometry and morphology of the painting surface with a fast, precise and non-invasive method. Accurate and quantitative measurements of deterioration were acquired after the painting experienced artificial damage. Morphological changes were documented using normal vector images, while the intensity map succeeded in highlighting, quantifying and describing the physical changes. We estimate that the technique can detect morphological damage slightly smaller than 0.3 mm, which would be difficult to detect by eye, considering the painting size. This non-invasive tool could be very useful, for example, to examine paintings and artwork before they travel on loan or during a restoration. The method lends itself to automated analysis of large images and datasets. Quantitative RTI thus eases the transition of extending human vision into the realm of measuring change over time.
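
    A minimal sketch of the normal-map comparison implied here (our illustration; the two normal maps are assumed to be registered (ny, nx, 3) arrays of unit vectors):

    ```python
    import numpy as np

    def angular_change(n_before, n_after):
        """Per-pixel angular deviation (degrees) between two registered RTI
        surface-normal maps; large values flag morphological change."""
        dot = np.sum(n_before * n_after, axis=-1)
        dot = np.clip(dot, -1.0, 1.0)      # guard against rounding error
        return np.degrees(np.arccos(dot))

    # Example: flag pixels whose normals rotated by more than 5 degrees.
    # damage_mask = angular_change(n0, n1) > 5.0
    ```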

  10. Spin-orbit coupling calculations with the two-component normalized elimination of the small component method

    Science.gov (United States)

    Filatov, Michael; Zou, Wenli; Cremer, Dieter

    2013-07-01

    A new algorithm for the two-component Normalized Elimination of the Small Component (2cNESC) method is presented and tested in the calculation of spin-orbit (SO) splittings for a series of heavy atoms and their molecules. The 2cNESC is a Dirac-exact method that employs the exact two-component one-electron Hamiltonian and thus leads to exact Dirac SO splittings for one-electron atoms. For many-electron atoms and molecules, the effect of the two-electron SO interaction is modeled by a screened nucleus potential using effective nuclear charges as proposed by Boettger [Phys. Rev. B 62, 7809 (2000), 10.1103/PhysRevB.62.7809]. The use of the screened nucleus potential for the two-electron SO interaction leads to accurate spinor energy splittings, for which the deviations from the accurate Dirac Fock-Coulomb values are on the average far below the deviations observed for other effective one-electron SO operators. For hydrogen halides HX (X = F, Cl, Br, I, At, and Uus) and mercury dihalides HgX2 (X = F, Cl, Br, I) trends in spinor energies and SO splittings as obtained with the 2cNESC method are analyzed and discussed on the basis of coupling schemes and the electronegativity of X.

  11. Transformation of an empirical distribution to normal distribution by the use of Johnson system of translation and symmetrical quantile method

    OpenAIRE

    Ludvík Friebel; Jana Friebelová

    2006-01-01

    This article deals with the approximation of an empirical distribution to the standard normal distribution using the Johnson transformation. This transformation enables us to approximate a wide spectrum of continuous distributions with a normal distribution. The estimation of the parameters of the transformation formulas is based on percentiles of the empirical distribution. Theoretical probability distribution functions of the random variable are derived on the basis of the backward transformation of the standard normal ...
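
    For reference, the three Johnson translation families (our summary of the standard system, not the article's worked example) map a variable $x$ to an approximately standard normal $z$:

    $$z=\gamma+\delta\,\sinh^{-1}\!\left(\frac{x-\xi}{\lambda}\right)\;(S_U),\qquad z=\gamma+\delta\,\ln\!\left(\frac{x-\xi}{\xi+\lambda-x}\right)\;(S_B),\qquad z=\gamma+\delta\,\ln\!\left(\frac{x-\xi}{\lambda}\right)\;(S_L),$$

    with the parameters $\gamma,\delta,\xi,\lambda$ estimated, as here, from percentiles of the empirical distribution.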

  12. Status on underwater plasma arc cutting in KHI, 3

    International Nuclear Information System (INIS)

    Abe, Tadashi; Aota, Toshiichi; Nishizaki, Tadashi; Nakayama, Shigeru; Yamashita, Seiji

    1983-01-01

    In Kawasaki Heavy Industries, Ltd., development of a remote dismantling system based on the underwater plasma arc cutting process has been advanced, with a view to its application to the dismantling and removal of nuclear reactor facilities. The previous two reports presented fundamental experimental results such as a comparison of cutting capability in air and in water; this time, remote automatic cutting of wedge-shaped specimens was carried out using a newly installed manipulator for underwater work, and its outline is reported here. Cutting experiments in the overhead and vertical positions were also performed using the same equipment, and the results were compared with the cutting capability in the downhand and horizontal positions. It is important to grasp the cutting characteristics of upward and downward cutting in the overhead and vertical positions when the cutting of pressure vessels and horizontal pipes into rings is considered. The experimental apparatus, cutting conditions, testing method and test results of the cutting capability test, the test of changing direction during cutting, and the remote cutting of pipes into rings are described. Underwater plasma arc cutting can cut all metals, its cutting speed is relatively high, and the apparatus is simple and compact. (Kako, I.)

  13. Advanced cutting techniques: laser and fissuration cutting

    International Nuclear Information System (INIS)

    Migliorati, B.; Gay, P.

    1984-01-01

    Experimental tests have been performed using a CO2 laser with output power of 1 to 15 kW to evaluate the effect of varying the following parameters: material (carbon steel Fe 42 C, stainless steel AISI 304, concrete), laser power, beam characteristics, workpiece velocity, and gas type and distribution in the laser interaction zone. In the case of concrete, drilling depths of 80 mm were obtained in a few seconds using a 10 kW laser beam; moreover, pieces of 160 mm were cut at 0.01 meters per minute. Results with carbon steel indicated maximum thicknesses of 110 mm, cut at 0.01 meters per minute with 10 kW; depths about 20% lower were obtained with AISI 304 stainless steel. A parallel investigation aimed at characterizing particulate emission during the laser cutting process. At the end of the research it was possible to elaborate a preliminary proposal for a laser-based dismantling system for application to a typical nuclear power station. (author)

  14. Laser Cutting, Development Trends

    DEFF Research Database (Denmark)

    Olsen, Flemming Ove

    1999-01-01

    In this paper a short review of the development trends in laser cutting is given. The technology, which is the fastest expanding industrial production technology, will develop both in its core market segment, flat-bed cutting of sheet metal, and by expanding into heavy industry and the cutting of 3-dimensional shapes. The CO2 laser will remain the dominating laser source in the market in the near future, although new developments in Nd:YAG lasers open new possibilities for this laser type.

  15. A novel mean-centering method for normalizing microRNA expression from high-throughput RT-qPCR data

    Directory of Open Access Journals (Sweden)

    Wylie Dennis

    2011-12-01

    Background: Normalization is critical for accurate gene expression analysis. A significant challenge in the quantitation of gene expression from biofluid samples is the inability to quantify RNA concentration prior to analysis, underscoring the need for robust normalization tools for this sample type. In this investigation, we evaluated various methods of normalization to determine the optimal approach for quantifying microRNA (miRNA) expression from biofluid and tissue samples when using the TaqMan® Megaplex™ high-throughput RT-qPCR platform with low RNA inputs. Findings: We compared seven normalization methods in the analysis of variation of miRNA expression from biofluid and tissue samples. We developed a novel variant of the common mean-centering normalization strategy, herein referred to as mean-centering restricted (MCR) normalization, which is adapted to the TaqMan Megaplex RT-qPCR platform but is likely applicable to other high-throughput RT-qPCR-based platforms. Our results indicate that MCR normalization performs comparably to or better than both standard mean-centering and other normalization methods. We also propose an extension of this method to be used when migrating biomarker signatures from Megaplex to singleplex RT-qPCR platforms, based on the identification of a small number of normalizer miRNAs that closely track the mean of expressed miRNAs. Conclusions: We developed the MCR method for normalizing miRNA expression from biofluid samples when using the TaqMan Megaplex RT-qPCR platform. Our results suggest that normalization based on the mean of all fully observed (fully detected) miRNAs minimizes technical variance in normalized expression values, and that a small number of normalizer miRNAs can be selected when migrating from Megaplex to singleplex assays. In our study, we find that normalization methods that focus on a restricted set of miRNAs tend to perform better than methods that focus on all miRNAs, including
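
    A minimal sketch of the restricted mean-centering idea described above (our illustration of the strategy, not the authors' code; the Cq matrix layout is an assumption):

    ```python
    import numpy as np

    def mcr_normalize(cq):
        """Mean-centering restricted (MCR)-style normalization sketch.

        cq : (n_samples, n_mirnas) raw Cq values, NaN where not detected.
        Centers each sample on its mean Cq over the miRNAs detected in
        every sample, removing global technical offsets between samples.
        """
        cq = np.asarray(cq, dtype=float)
        fully_detected = ~np.isnan(cq).any(axis=0)   # observed in all samples
        sample_means = cq[:, fully_detected].mean(axis=1)
        return cq - sample_means[:, None]
    ```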

  16. Automated method to compute Evans index for diagnosis of idiopathic normal pressure hydrocephalus on brain CT images

    Science.gov (United States)

    Takahashi, Noriyuki; Kinoshita, Toshibumi; Ohmura, Tomomi; Matsuyama, Eri; Toyoshima, Hideto

    2017-03-01

    The early diagnosis of idiopathic normal pressure hydrocephalus (iNPH), considered a treatable dementia, is important. iNPH causes enlargement of the lateral ventricles (LVs). The degree of enlargement of the LVs on CT or MR images is evaluated using a diagnostic imaging criterion, the Evans index, defined as the ratio of the maximal width of the frontal horns (FH) of the LVs to the maximal width of the inner skull (IS). The Evans index is the most commonly used parameter for the evaluation of ventricular enlargement. However, manual measurement of the Evans index is a time-consuming process. In this study, we present an automated method to compute the Evans index on brain CT images. The algorithm consists of five major steps: standardization of CT data to an atlas, extraction of the FH and IS regions, a search for the outermost points of the bilateral FH regions, determination of the maximal widths of both the FH and the IS, and calculation of the Evans index. The standardization to the atlas was performed using linear affine transformation and non-linear warping techniques. The FH regions were segmented using a three-dimensional region growing technique. This scheme was applied to CT scans from 44 subjects, including 13 iNPH patients. The average difference in the Evans index between the proposed method and manual measurement was 0.01 (1.6%), and the correlation coefficient of these data for the Evans index was 0.98. Therefore, this computerized method may have the potential to accurately compute the Evans index for the diagnosis of iNPH on CT images.
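
    The final ratio is simple once the two widths are found; a sketch (the millimetre measurements are hypothetical):

    ```python
    def evans_index(fh_width_mm, is_width_mm):
        """Evans index: maximal frontal-horn width divided by maximal
        inner-skull width, measured on the same axial CT volume."""
        return fh_width_mm / is_width_mm

    # e.g. 38 mm frontal horns in a 120 mm inner skull:
    # evans_index(38.0, 120.0) -> 0.32 (> 0.3 suggests ventricular enlargement)
    ```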

  17. Regularities of dust formation during stone cutting for construction works

    Directory of Open Access Journals (Sweden)

    V.G. Lebedev

    2016-09-01

    When cutting stone, a large amount of dust is released as a mixture of small, mostly sharp, mineral particles. Inhaled fine dry dust causes pathological changes in organs as a consequence of the infiltration of sharp, solid particles. Despite the importance of this problem, dust generation during the various working processes and its distribution over size fractions are practically not considered, although they determine the time dust remains suspended in the air and its negative impact on a person. Aim: The aim of this research is to study the process of dusting during stone cutting and the regularities of dust distribution over fractions, and to quantify the dust formation process in order to improve production equipment and individual and collective safety equipment. Materials and Methods: The many types of cutting can be divided into two kinds: "dry" cutting and cutting with fluid. During "dry" cutting the dust is a set of micro-chips cut off by the abrasive grains. The size of such chips is very small, from fractions of a micrometer to a few micrometers, so a dust suspension with a low fall velocity can form, present in the working space in large concentrations. Results: The following characteristic dependences were obtained as a result of the research: the dependence of dust fall on dust particle size, of particle size on minute feed and wheel grain range, of the specific amount of dust on the abrasive wheel grit number, and of the dust particle temperature on the feed per wheel revolution. It was shown that the distribution of chips (dust) by size follows a normal distribution law. The dimensions of chips produced during cutting are in the range of 0.4...6 μm. Thus, a dust suspension is formed whose particles take several hours to settle, creating a considerable concentration of fine dust, within 0.28∙10^8...1.68∙10^8 particles/m3.
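
    The hours-long settling times quoted are consistent with Stokes' law for micrometre particles; a quick check (the material constants for quartz-like dust in room air are our illustrative assumptions):

    ```python
    # Stokes settling velocity: v = g * d**2 * (rho_p - rho_a) / (18 * mu)
    g, mu, rho_a = 9.81, 1.8e-5, 1.2   # gravity, air viscosity (Pa s), air density
    rho_p = 2650.0                     # quartz-like mineral dust (kg/m^3)

    for d in (0.4e-6, 1.0e-6, 6.0e-6):  # chip sizes from the 0.4...6 um range
        v = g * d**2 * (rho_p - rho_a) / (18 * mu)
        print(f"d = {d*1e6:4.1f} um: v = {v*1e3:.4f} mm/s, "
              f"2 m fall in {2 / v / 3600:.1f} h")
    ```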

  18. Experimental testing of exchangeable cutting inserts cutting ability

    OpenAIRE

    Čep, Robert; Janásek, Adam; Čepová, Lenka; Petrů, Jana; Hlavatý, Ivo; Car, Zlatan; Hatala, Michal

    2013-01-01

    The article deals with experimental testing of the cutting ability of exchangeable cutting inserts. Eleven types of exchangeable cutting inserts from five different manufacturers were tested. The tested inserts were of the same shape and differed mainly in material and coating type. The main aim was both to select a suitable test for determining the cutting ability of exchangeable cutting inserts and to design a testing procedure that could make it possible...

  19. Water-Cut Sensor System

    KAUST Repository

    Karimi, Muhammad Akram

    2018-01-11

    Provided in some embodiments is a method of manufacturing a pipe-conformable water-cut sensor system. Provided in some embodiments is a method for manufacturing a water-cut sensor system that includes providing a helical T-resonator, a helical ground conductor, and a separator at an exterior of a cylindrical pipe. The helical T-resonator includes a feed line and a helical open shunt stub conductively coupled to the feed line. The helical ground conductor includes a helical ground plane opposite the helical open shunt stub and a ground ring conductively coupled to the helical ground plane. The feed line overlaps at least a portion of the ground ring, and the separator is disposed between the feed line and the portion of the ground ring overlapped by the feed line to electrically isolate the helical T-resonator from the helical ground conductor.

  20. Methods for identifying the cost effective case definition cut-off for sequential monitoring tests: an extension of Phelps and Mushlin

    Science.gov (United States)

    Longo, Roberta; Baxter, Paul; Hall, Peter; Hewison, Jenny; Afshar, Mehran; Hall, Geoff; McCabe, Christopher

    2015-01-01

    The arrival of personalized medicine in the clinic means that treatment decisions will increasingly rely on test results. The challenge of limited health care resources means that the dissemination of these technologies will depend on their value in relation to their cost, i.e. their cost effectiveness. Phelps and Mushlin have described how to optimize tests to meet a cost-effectiveness target. However, when tests are applied repeatedly, the case mix of the patients tested changes with each administration, and this impacts the value of each subsequent test administration. In this paper we present a modification of Phelps and Mushlin's framework for diagnostic tests to identify the cost-effective cut-off for monitoring tests. Using the Ca125 test monitoring for relapse in ovarian cancer, we show how repeated use of the initial cut-off can lead to a substantially increased false negative rate compared to the monitoring cut-off (over 20% higher in this example), with the associated harms for individual and population health. PMID:24488576
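
    The case-mix effect is easy to demonstrate with a toy simulation (our illustration, not the authors' framework; the log-scale marker model and all numbers are assumptions). Because diseased patients who previously tested negative tend to have persistently low marker values, a fixed cut-off misses more of them on each successive round:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    # Diseased patients: marker = persistent patient level + per-test noise.
    level = rng.normal(1.0, 0.6, n)      # patient-specific component
    cutoff = 0.8                         # fixed case-definition cut-off

    still_negative = np.ones(n, dtype=bool)
    for test in range(1, 4):
        marker = level + rng.normal(0.0, 0.3, n)  # this round's measurement
        negative = marker < cutoff
        fn_rate = (negative & still_negative).sum() / still_negative.sum()
        print(f"test {test}: false-negative rate = {fn_rate:.2f}")
        still_negative &= negative
    ```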

  1. Development of cutting machine for disposal of highly activated equipments

    International Nuclear Information System (INIS)

    Iimura, Katumichi; Kitajima, Toshio; Hosokawa, Jinsaku; Abe, Shinichi; Takahashi, Kiyoshi; Ogawa, Mituhiro; Iwai, Takashi

    1994-01-01

    The JMTR (Japan Materials Testing Reactor) Project has developed a cutting machine which can cut a highly activated in-pile tube under water, and its performance and safety have been confirmed. The machine is intended for cutting multiple-structure pipes and makes underwater cutting possible by adopting an underwater discharge method. Furthermore, contamination of canal water and the atmosphere is prevented by combining a filter with the machine. This report describes the outline and performance of the developed cutting machine, along with the results of cutting highly activated in-pile tubes. (author)

  2. Fractal characteristic study of shearer cutter cutting resistance curves

    Energy Technology Data Exchange (ETDEWEB)

    Liu, C. [Heilongjiang Scientific and Technical Institute, Haerbin (China). Dept of Mechanical Engineering

    2004-02-01

    The cutting resistance curve is the most useful tool for reflecting the overall cutting performance of a cutting machine. It is influenced by many factors such as the pick structure and arrangement, the cutter operating parameters, coal quality and geological conditions. This paper discusses the use of fractal geometry to study the properties of the cutting resistance curve, and the use of fractal dimensions to evaluate cutting performance. On the basis of fractal theory, the general form and calculation method of the fractal characteristics are given. 4 refs., 3 figs., 1 tab.
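
    A common way to put a number on such a curve is the box-counting dimension; a minimal generic sketch (not the paper's procedure; the curve is assumed densely sampled and scaled to the unit square):

    ```python
    import numpy as np

    def box_counting_dimension(x, y, sizes=(1/4, 1/8, 1/16, 1/32, 1/64)):
        """Estimate a curve's fractal dimension by counting occupied grid
        boxes at several scales and fitting log N against log(1/s)."""
        counts = []
        for s in sizes:
            boxes = {(int(xi / s), int(yi / s)) for xi, yi in zip(x, y)}
            counts.append(len(boxes))
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
        return slope

    # A rough cutting-resistance record gives a dimension between 1 (smooth)
    # and 2 (plane-filling); higher values indicate a more irregular load.
    ```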

  3. Cut without Killing.

    Science.gov (United States)

    Black, Susan

    1991-01-01

    The zero-based curriculum model can help school boards and administrators make decisions about what to keep and what to cut. All instructional programs are ranked and judged in categories ranging from required to optional. (MLF)

  4. Laser cutting system

    Science.gov (United States)

    Dougherty, Thomas J

    2015-03-03

    A workpiece cutting apparatus includes a laser source, a first suction system, and a first finger configured to guide a workpiece as it moves past the laser source. The first finger includes a first end provided adjacent a point where a laser from the laser source cuts the workpiece, and the first end of the first finger includes an aperture in fluid communication with the first suction system.

  5. How Can I Stop Cutting?

    Science.gov (United States)

    ... How Can I Stop Cutting? ... in a soft, cozy blanket Substitutes for the Cutting Sensation You'll notice that all the tips ...

  6. Cutting and Self-Harm

    Science.gov (United States)

    ... Self-harm, sometimes called self- ... There are many types of self-injury, and cutting is one type that you may have heard ...

  7. Flux cutting in superconductors

    International Nuclear Information System (INIS)

    Campbell, A M

    2011-01-01

    This paper describes experiments and theories of flux cutting in superconductors. The use of the flux-line picture in free space is discussed. In superconductors, cutting can occur either between flux at an angle to other layers of flux, as in longitudinal current experiments, or through shearing of the vortex lattice, as at grain boundaries in YBCO. Experiments on longitudinal currents can be interpreted in terms of flux rings penetrating axial lines. More physical models of flux cutting are discussed, but all predict much larger flux-cutting forces than are observed. Also, cutting occurs at angles between vortices of about one millidegree, which is hard to explain. The double critical state model and its developments are discussed in relation to experiments on crossed and rotating fields. A new experiment suggested by Clem gives more direct information. It shows that an elliptical yield surface of the critical state works well, but none of the theoretical proposals for determining the direction of E are universally applicable. It appears that, as soon as any flux flow takes place, cutting also occurs. The conclusion is that new theories are required. (perspective)

  8. PREP KITT, System Reliability by Fault Tree Analysis. PREP, Min Path Set and Min Cut Set for Fault Tree Analysis, Monte-Carlo Method. KITT, Component and System Reliability Information from Kinetic Fault Tree Theory

    International Nuclear Information System (INIS)

    Vesely, W.E.; Narum, R.E.

    1997-01-01

    1 - Description of problem or function: The PREP/KITT computer program package obtains system reliability information from a system fault tree. The PREP program finds the minimal cut sets and/or the minimal path sets of the system fault tree. (A minimal cut set is a smallest set of components such that if all the components are simultaneously failed the system is failed. A minimal path set is a smallest set of components such that if all of the components are simultaneously functioning the system is functioning.) The KITT programs determine reliability information for the components of each minimal cut or path set, for each minimal cut or path set, and for the system. Exact, time-dependent reliability information is determined for each component and for each minimal cut set or path set. For the system, reliability results are obtained by upper bound approximations or by a bracketing procedure in which various upper and lower bounds may be obtained as close to one another as desired. The KITT programs can handle independent components which are non-repairable or which have a constant repair time. Any assortment of non-repairable components and components having constant repair times can be considered. Any inhibit conditions having constant probabilities of occurrence can be handled. The failure intensity of each component is assumed to be constant with respect to time. The KITT2 program can also handle components which during different time intervals, called phases, may have different reliability properties. 2 - Method of solution: The PREP program obtains minimal cut sets by either direct deterministic testing or by an efficient Monte Carlo algorithm. The minimal path sets are obtained using the Monte Carlo algorithm. The reliability information is obtained by the KITT programs from numerical solution of the simple integral balance equations of kinetic tree theory. 3 - Restrictions on the complexity of the problem: The PREP program will obtain the minimal cut and
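
    The system-level upper bound mentioned above can be illustrated with the standard minimal cut set bound Q_sys <= 1 - prod_k (1 - Q_k), where Q_k is the product of the unavailabilities of the components in minimal cut set k. The sketch below uses hypothetical component names and numbers and shows the textbook bound, not PREP/KITT's exact numerical procedure.

```python
from math import prod

def system_unavailability_upper_bound(min_cut_sets, q):
    """Upper bound on system unavailability from minimal cut sets.

    min_cut_sets: iterable of tuples of component names;
    q: mapping from component name to unavailability.
    """
    cut_probs = [prod(q[c] for c in cut) for cut in min_cut_sets]
    return 1.0 - prod(1.0 - p for p in cut_probs)

q = {"pump_A": 1e-3, "pump_B": 1e-3, "valve": 5e-4}  # hypothetical components
cuts = [("pump_A", "pump_B"), ("valve",)]            # hypothetical min cut sets
print(system_unavailability_upper_bound(cuts, q))    # ~5.01e-04
```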

  9. On Permuting Cut with Contraction

    OpenAIRE

    Borisavljevic, Mirjana; Dosen, Kosta; Petric, Zoran

    1999-01-01

    The paper presents a cut-elimination procedure for intuitionistic propositional logic in which cut is eliminated directly, without introducing the multiple-cut rule mix, and in which pushing cut above contraction is one of the reduction steps. The presentation of this procedure is preceded by an analysis of Gentzen's mix-elimination procedure, made in the perspective of permuting cut with contraction. It is also shown that in the absence of implication, pushing cut above contraction doesn't p...

  10. A comparison of methods used to calculate normal background concentrations of potentially toxic elements for urban soil

    Energy Technology Data Exchange (ETDEWEB)

    Rothwell, Katherine A., E-mail: k.rothwell@ncl.ac.uk; Cooke, Martin P., E-mail: martin.cooke@ncl.ac.uk

    2015-11-01

    To meet the requirements of regulation and to provide realistic remedial targets, the background concentration of potentially toxic elements (PTEs) in soils needs to be considered when assessing contaminated land. In England, normal background concentrations (NBCs) have been published for several priority contaminants for a number of spatial domains; however, updated regulatory guidance places the responsibility on Local Authorities to set NBCs for their jurisdiction. Due to the unique geochemical nature of urban areas, Local Authorities need to define NBC values specific to their area, which the national data are unable to provide. This study aims to calculate NBC levels for Gateshead, an urban Metropolitan Borough in the North East of England, using freely available data. The ‘median + 2MAD’, boxplot upper whisker and English NBC (according to the method adopted by the British Geological Survey) methods were compared for the test PTEs lead, arsenic and cadmium. Due to the lack of systematically collected data for Gateshead in the national soil chemistry database, the use of site investigation (SI) data collected during the planning process was investigated. 12,087 SI soil chemistry data points were incorporated into a database and 27 comparison samples were taken from undisturbed locations across Gateshead. The SI data gave high-resolution coverage of the area, and Mann–Whitney tests confirmed statistical similarity between the undisturbed comparison samples and the SI data. SI data were successfully used to calculate NBCs for Gateshead, and the median + 2MAD method was selected as most appropriate by the Local Authority according to the precautionary principle, as it consistently provided the most conservative NBC values. The use of this data set provides a freely available, high-resolution source of data that can be used for a range of environmental applications. - Highlights: • The use of site investigation data is proposed for land contamination studies
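
    For reference, a minimal sketch of the 'median + 2MAD' threshold is given below. The 1.4826 scaling of the MAD (making it a consistent estimator of the standard deviation for normal data) is a common convention assumed here, and the concentrations are invented.

```python
import numpy as np

def nbc_median_2mad(concentrations):
    """Upper threshold 'median + 2*MAD' used as a normal background concentration."""
    x = np.asarray(concentrations, dtype=float)
    med = np.median(x)
    mad = 1.4826 * np.median(np.abs(x - med))  # scaled median absolute deviation
    return med + 2.0 * mad

lead_mg_kg = [45, 60, 52, 300, 75, 58, 66, 49, 120, 83]  # illustrative SI data
print(f"NBC estimate: {nbc_median_2mad(lead_mg_kg):.0f} mg/kg")
```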

  11. Benefits of explosive cutting for nuclear-facility applications

    International Nuclear Information System (INIS)

    Hazelton, R.F.; Lundgren, R.A.; Allen, R.P.

    1981-06-01

    The study discussed in this report was a cost/benefit analysis to determine: (1) whether explosive cutting is cost effective in comparison with alternative metal sectioning methods and (2) whether explosive cutting would reduce radiation exposure or provide other benefits. Two separate approaches were pursued. The first was to qualitatively assess cutting methods and factors involved in typical sectioning cases and then compare the results for the cutting methods. The second was to prepare estimates of work schedules and potential radiation exposures for candidate sectioning methods for two hypothetical, but typical, sectioning tasks. The analysis shows that explosive cutting would be cost effective and would also reduce radiation exposure when used for typical nuclear facility sectioning tasks. These results indicate that explosive cutting should be one of the principal cutting methods considered whenever steel or similar metal structures or equipment in a nuclear facility are to be sectioned for repair or decommissioning. 13 figures, 7 tables

  12. Experimental study on concrete cutting by CO2 laser beam

    International Nuclear Information System (INIS)

    Kutsumizu, Akira; Tomura, Hidemasa; Wakizaka, Tatsuya; Hishikawa, Kyoichi; Moriya, Masahiro

    1994-01-01

    Methods for dismantling nuclear reactor facilities must meet particularly exacting requirements imposed by heavily reinforced and radioactivated reactor shield walls, and conventional methods do not meet all such requirements. Prompted by the excellent characteristics of the laser cutting method for nuclear facility demolition, we carried out an experimental study to comprehensively evaluate those characteristics, especially for deep cutting, and succeeded in identifying the main factors affecting the cutting depth of a laser and in characterizing its cutting behavior. The study results indicate that a 50 kW class CO 2 laser has the potential to provide a practicable cutting speed and depth. (author)

  13. Calculations of atomic magnetic nuclear shielding constants based on the two-component normalized elimination of the small component method

    Science.gov (United States)

    Yoshizawa, Terutaka; Zou, Wenli; Cremer, Dieter

    2017-04-01

    A new method for calculating nuclear magnetic resonance shielding constants of relativistic atoms based on the two-component (2c), spin-orbit coupling including Dirac-exact NESC (Normalized Elimination of the Small Component) approach is developed where each term of the diamagnetic and paramagnetic contribution to the isotropic shielding constant σ_iso is expressed in terms of analytical energy derivatives with regard to the magnetic field B and the nuclear magnetic moment μ. The picture change caused by renormalization of the wave function is correctly described. 2c-NESC/HF (Hartree-Fock) results for the σ_iso values of 13 atoms with a closed-shell ground state reveal a deviation from 4c-DHF (Dirac-HF) values by 0.01%-0.76%. Since the 2-electron part is effectively calculated using a modified screened nuclear shielding approach, the calculation is efficient and based on a series of matrix manipulations scaling with (2M)^3 (M: number of basis functions).

  14. Female genital cutting.

    Science.gov (United States)

    Perron, Liette; Senikas, Vyta; Burnett, Margaret; Davis, Victoria

    2013-11-01

    To strengthen the national framework for care of adolescents and women affected by female genital cutting (FGC) in Canada by providing health care professionals with: (1) information intended to strengthen their knowledge and understanding of the practice; (2) directions with regard to the legal issues related to the practice; (3) clinical guidelines for the management of obstetric and gynaecological care, including FGC related complications; and (4) guidance on the provision of culturally competent care to adolescents and women with FGC. Published literature was retrieved through searches of PubMed, CINAHL, and The Cochrane Library in September 2010 using appropriate controlled vocabulary (e.g., Circumcision, Female) and keywords (e.g., female genital mutilation, clitoridectomy, infibulation). We also searched Social Science Abstracts, Sociological Abstracts, Gender Studies Database, and ProQuest Dissertations and Theses in 2010 and 2011. There were no date or language restrictions. Searches were updated on a regular basis and incorporated in the guideline to December 2011. Grey (unpublished) literature was identified through searching the websites of health technology assessment and health technology-related agencies, clinical practice guideline collections, clinical trial registries, and national and international medical specialty societies. The quality of evidence in this document was rated using the criteria described in the Report of the Canadian Task Force on Preventive Health Care (Table 1). Summary Statements 1. Female genital cutting is internationally recognized as a harmful practice and a violation of girls' and women's rights to life, physical integrity, and health. (II-3) 2. The immediate and long-term health risks and complications of female genital cutting can be serious and life threatening. (II-3) 3. Female genital cutting continues to be practised in many countries, particularly in sub-Saharan Africa, Egypt, and Sudan. (II-3) 4. Global migration

  15. Normalized Tritium Quantification Approach (NoTQA) a Method for Quantifying Tritium Contaminated Trash and Debris at LLNL

    International Nuclear Information System (INIS)

    Dominick, J.L.; Rasmussen, C.L.

    2008-01-01

    Several facilities and many projects at LLNL work exclusively with tritium. These operations have the potential to generate large quantities of Low-Level Radioactive Waste (LLW) with the same or similar radiological characteristics. A standardized documented approach to characterizing these waste materials for disposal as radioactive waste will enhance the ability of the Laboratory to manage them in an efficient and timely manner while ensuring compliance with all applicable regulatory requirements. This standardized characterization approach couples documented process knowledge with analytical verification and is very conservative, overestimating the radioactivity concentration of the waste. The characterization approach documented here is the Normalized Tritium Quantification Approach (NoTQA). This document will serve as a Technical Basis Document which can be referenced in radioactive waste characterization documentation packages such as the Information Gathering Document. In general, radiological characterization of waste consists of both developing an isotopic breakdown (distribution) of radionuclides contaminating the waste and using an appropriate method to quantify the radionuclides in the waste. Characterization approaches require varying degrees of rigor depending upon the radionuclides contaminating the waste and the concentration of the radionuclide contaminants as related to regulatory thresholds. Generally, as activity levels in the waste approach a regulatory or disposal facility threshold the degree of required precision and accuracy, and therefore the level of rigor, increases. In the case of tritium, thresholds of concern for control, contamination, transportation, and waste acceptance are relatively high. Due to the benign nature of tritium and the resulting higher regulatory thresholds, this less rigorous yet conservative characterization approach is appropriate. The scope of this document is to define an appropriate and acceptable

  16. DEVELOPMENT OF THE METHOD AND U.S. NORMALIZATION DATABASE FOR LIFE CYCLE IMPACT ASSESSMENT AND SUSTAINABILITY METRICS

    Science.gov (United States)

    Normalization is an optional step within Life Cycle Impact Assessment (LCIA) that may be used to assist in the interpretation of life cycle inventory data as well as, life cycle impact assessment results. Normalization transforms the magnitude of LCI and LCIA results into relati...

  17. A Study on the Allowable Safety Factor of Cut-Slopes for Nuclear Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Myung Soo; Yee, Eric [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of)

    2015-10-15

    In this study, the issue of allowable safety factor design criteria for cut-slopes in nuclear facilities is examined through case analysis. A proposed construction-work slope design criterion that provides relatively detailed conditions can be applied for the dry season, and some unclear parts of the slope design criteria should be modified for the rainy season. The safety factor is subdivided into two cases, normal and earthquake: a factor of 1.5 is applied for normal conditions and a factor of 1.2 for seismic conditions. This safety factor takes into consideration the effect of ground water and rainfall conditions. However, no criteria are clearly defined for cut-slopes in nuclear facilities and their response to seismic conditions, which can cause uncertainty in design. Therefore, this paper investigates the allowable safety factor for cut-slopes in nuclear facilities, reviews conditions of both local and international cut-slope models, and finally suggests an alternative method of analysis. It is expected that the new design criteria adequately ensure the stability of the cut-slope and reflect clear conditions for both the supervising and design engineers.

  18. Control of the kerf size and microstructure in Inconel 738 superalloy by femtosecond laser beam cutting

    Energy Technology Data Exchange (ETDEWEB)

    Wei, J.; Ye, Y.; Sun, Z. [Department of Mechanical Engineering, Tsinghua University, Beijing (China); Liu, L., E-mail: liulei@tsinghua.edu.cn [The State Key Laboratory of Tribology, Tsinghua University, Beijing (China); Zou, G., E-mail: sunzhg@tsinghua.edu.cn [Department of Mechanical Engineering, Tsinghua University, Beijing (China)

    2016-05-01

    Highlights: • Effects of processing parameters on the kerf size in Inconel 738 are investigated. • Defocus is a key parameter affecting the kerf width due to the intensity clamping. • The internal surface microstructures at different scanning speeds are presented. • The material removal mechanism involves normal vaporization and phase explosion. • The oxidation mechanism is attributed to the trapping effect of the dangling bonds. - Abstract: Femtosecond laser beam cutting is becoming widely used to meet demands for increasing accuracy in micro-machining. In this paper, the effects of the processing parameters of femtosecond laser beam cutting on the kerf size and microstructure in Inconel 738 have been investigated. The defocus, pulse width and scanning speed were selected to study the controllability of the cutting process. Adjusting and matching the processing parameters was a basic method for obtaining a well-defined kerf size and high-quality ablation of microstructures, an outcome attributed to the intensity clamping effect. The morphology and chemical compositions of the microstructures on the cut surface have been characterized by scanning electron microscopy equipped with energy-dispersive X-ray spectroscopy, X-ray diffraction and X-ray photoelectron spectroscopy. Additionally, the material removal mechanism and oxidation mechanism on the Inconel 738 cut surface are discussed on the basis of femtosecond-laser-induced normal vaporization or phase explosion and the trapping effect of the dangling bonds.

  19. KCUT, code to generate minimal cut sets for fault trees

    International Nuclear Information System (INIS)

    Han, Sang Hoon

    2008-01-01

    1 - Description of program or function: KCUT is a software package that generates minimal cut sets for fault trees. 2 - Methods: Expand the fault tree into cut sets and delete the non-minimal cut sets. 3 - Restrictions on the complexity of the problem: size and complexity of the fault tree
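
    The two-step method stated above (expand the tree, then delete non-minimal sets) can be sketched as follows; the gate representation and the example tree are hypothetical and do not reflect KCUT's actual input format.

```python
def minimal_cut_sets(top, gates):
    """Expand a fault tree into cut sets, then delete the non-minimal ones.

    `gates` maps a gate name to ('AND' | 'OR', [children]); any name that is
    not a key of `gates` is treated as a basic event.
    """
    def expand(node):
        if node not in gates:              # basic event
            return [frozenset([node])]
        op, children = gates[node]
        child_sets = [expand(c) for c in children]
        if op == "OR":                     # union of the children's cut sets
            return [cs for sets in child_sets for cs in sets]
        combos = [frozenset()]             # AND: combine one set per child
        for sets in child_sets:
            combos = [a | b for a in combos for b in sets]
        return combos

    cuts = set(expand(top))
    # A cut set is non-minimal if it strictly contains another cut set.
    return sorted(sorted(c) for c in cuts if not any(o < c for o in cuts))

# Hypothetical tree: TOP fails if E3 fails, or if E1 and E2 both fail.
tree = {"TOP": ("OR", ["G1", "E3"]), "G1": ("AND", ["E1", "E2"])}
print(minimal_cut_sets("TOP", tree))       # [['E1', 'E2'], ['E3']]
```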

  20. Functional anatomy of the water transport system in cut chrysanthemum

    NARCIS (Netherlands)

    Nijsse, J.

    2001-01-01

    Cut flowers vary widely in keepability, and the market increasingly demands guaranteed quality. Therefore, methods must be developed to predict the vase life of cut flowers. Chrysanthemum ( Dendranthema x grandiflorum Tzvelev) and some other cut flowers suffer from

  1. Improved cutting performance in high power laser cutting

    DEFF Research Database (Denmark)

    Olsen, Flemming Ove

    2003-01-01

    Recent results in high power laser cutting, especially with focus on cutting of mild grade steel types for shipbuilding, are described.

  2. Determination of cut front position in laser cutting

    International Nuclear Information System (INIS)

    Pereira, M; Thombansen, U

    2016-01-01

    Laser cutting is of major importance to the manufacturing industry. Laser cutting machines operate with fixed technological parameters, which does not guarantee the best productivity. Adjusting the cutting parameters during operation can improve machine performance. Based on a coaxial measuring device, it is possible to identify the cut front position during the cutting process. This paper describes the data analysis approach used to determine the cut front position for different feed rates. The cut front position was determined with good resolution, but improvements are needed to make the whole process more stable. (paper)

  3. Determination of cut front position in laser cutting

    Science.gov (United States)

    Pereira, M.; Thombansen, U.

    2016-07-01

    Laser cutting is of major importance to the manufacturing industry. Laser cutting machines operate with fixed technological parameters, which does not guarantee the best productivity. Adjusting the cutting parameters during operation can improve machine performance. Based on a coaxial measuring device, it is possible to identify the cut front position during the cutting process. This paper describes the data analysis approach used to determine the cut front position for different feed rates. The cut front position was determined with good resolution, but improvements are needed to make the whole process more stable.

  4. Design and Selection of Machine Learning Methods Using Radiomics and Dosiomics for Normal Tissue Complication Probability Modeling of Xerostomia

    Directory of Open Access Journals (Sweden)

    Hubert S. Gabryś

    2018-03-01

    Full Text Available Purpose: The purpose of this study is to investigate whether machine learning with dosiomic, radiomic, and demographic features allows for xerostomia risk assessment more precise than normal tissue complication probability (NTCP) models based on the mean radiation dose to parotid glands. Material and methods: A cohort of 153 head-and-neck cancer patients was used to model xerostomia at 0–6 months (early), 6–15 months (late), 15–24 months (long-term), and at any time (a longitudinal model) after radiotherapy. Predictive power of the features was evaluated by the area under the receiver operating characteristic curve (AUC) of univariate logistic regression models. The multivariate NTCP models were tuned and tested with single and nested cross-validation, respectively. We compared predictive performance of seven classification algorithms, six feature selection methods, and ten data cleaning/class balancing techniques using the Friedman test and the Nemenyi post hoc analysis. Results: NTCP models based on the parotid mean dose failed to predict xerostomia (AUCs < 0.60). The most informative predictors were found for late and long-term xerostomia. Late xerostomia correlated with the contralateral dose gradient in the anterior–posterior (AUC = 0.72) and the right–left (AUC = 0.68) direction, whereas long-term xerostomia was associated with parotid volumes (AUCs > 0.85), dose gradients in the right–left (AUCs > 0.78), and the anterior–posterior (AUCs > 0.72) direction. Multivariate models of long-term xerostomia were typically based on the parotid volume, the parotid eccentricity, and the dose–volume histogram (DVH) spread with the generalization AUCs ranging from 0.74 to 0.88. On average, support vector machines and extra-trees were the top performing classifiers, whereas the algorithms based on logistic regression were the best choice for feature selection. We found no advantage in using data cleaning or class balancing methods.
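
    The univariate screening step described above can be sketched in a few lines; synthetic data and scikit-learn stand in for the study's features and pipeline (the study additionally used single and nested cross-validation for its multivariate models).

```python
# Requires numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Toy stand-ins: one dosiomic feature per patient, binary xerostomia label.
n = 153
feature = rng.normal(size=(n, 1))          # e.g., a dose-gradient feature
prob = 1.0 / (1.0 + np.exp(-(0.9 * feature[:, 0] - 0.3)))
label = rng.binomial(1, prob)

# Univariate logistic regression scored by AUC, as in the screening step.
model = LogisticRegression().fit(feature, label)
auc = roc_auc_score(label, model.predict_proba(feature)[:, 1])
print(f"univariate AUC: {auc:.2f}")        # in-sample AUC, for illustration
```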

  5. Probability distribution of atmospheric pollutants: comparison among four methods for the determination of the log-normal distribution parameters; La distribuzione di probabilità degli inquinanti atmosferici: confronto tra quattro metodi per la determinazione dei parametri della distribuzione log-normale

    Energy Technology Data Exchange (ETDEWEB)

    Bellasio, R [Enviroware s.r.l., Agrate Brianza, Milan (Italy). Centro Direzionale Colleoni; Lanzani, G; Ripamonti, M; Valore, M [Amministrazione Provinciale, Como (Italy)

    1998-04-01

    This work illustrates the possibility of interpolating the concentrations of CO, NO, NO2, O3 and SO2 measured during one year (1995) at the 13 stations of the air quality monitoring network of the Provinces of Como and Lecco (Italy) by means of a log-normal distribution. Particular attention was given to the choice of method for determining the two log-normal distribution parameters among four possible methods: I natural, II percentiles, III moments, IV maximum likelihood. In order to evaluate the goodness of fit, a ranking procedure was carried out over the values of four indices: absolute deviation, weighted absolute deviation, the Kolmogorov-Smirnov index and the Cramer-von Mises-Smirnov index. The capability of the log-normal distribution to fit the measured data is then discussed as a function of the pollutant and of the monitoring station. Finally an example of application is given: the effect of an emission reduction strategy in the Lombardy Region (the so-called 'bollino blu') is evaluated using a log-normal distribution.
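
    Two of the four estimation methods compared above are easy to sketch. The fragment below (synthetic data, standard textbook formulas) contrasts the maximum-likelihood and method-of-moments estimates of the log-normal parameters (mu, sigma).

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(mean=1.2, sigma=0.5, size=1_000)  # synthetic concentrations

# Maximum likelihood: for a log-normal sample, the MLEs of (mu, sigma) are
# simply the mean and standard deviation of log(x).
mu_ml, sigma_ml = np.log(x).mean(), np.log(x).std()

# Method of moments: invert the log-normal mean and variance,
#   E[X] = exp(mu + sigma^2 / 2),
#   Var[X] = (exp(sigma^2) - 1) * exp(2 * mu + sigma^2).
m, v = x.mean(), x.var()
sigma_mom = np.sqrt(np.log(1.0 + v / m**2))
mu_mom = np.log(m) - sigma_mom**2 / 2.0

print(f"MLE:     mu = {mu_ml:.3f}, sigma = {sigma_ml:.3f}")
print(f"moments: mu = {mu_mom:.3f}, sigma = {sigma_mom:.3f}")
```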

  6. Investigation of cutting-induced damage in CMC bend bars

    Directory of Open Access Journals (Sweden)

    Neubrand A.

    2015-01-01

    Full Text Available Ceramic matrix composites ("CMC") with a strong fibre-matrix interface can be made damage-tolerant by introducing a highly porous matrix. Such composites typically have only a low interlaminar shear strength, which can potentially promote damage when preparing specimens or components by cutting. In order to investigate the damage induced by different cutting methods, waterjet cutting with and without abrasives, laser cutting, wire eroding and cut-off grinding were used to cut plates of two different CMCs with a matrix porosity of up to 35 vol.%. For each combination of cutting method and composite, the flexural and interlaminar shear strength of the resulting specimens was determined. Additionally, the integrity of the regions near the cut surfaces was investigated by high-resolution X-ray computed tomography. It could be shown that the geometrical quality of the cut is strongly affected by the cutting method employed. Laser-cut and waterjet-cut specimens showed damage and delaminations near the cut surface, leading in extreme cases to a reduced interlaminar shear strength of short bend bars.

  7. Tension Behaviour on the Connection of the Cold-Formed Cut-Curved Steel Channel Section

    Science.gov (United States)

    Sani, Mohd Syahrul Hisyam Mohd; Muftah, Fadhluhartini; Fakri Muda, Mohd; Siang Tan, Cher

    2017-08-01

    Cold-formed steel (CFS) is utilised as a non-structural and structural element in construction, especially in residential houses and small-building roof truss systems. CFS offers many advantages but also disadvantages such as buckling, which must be prevented in roof truss production and is therefore studied here. CFS is used as the top chord of the roof truss system; being a slender section, it is strongly susceptible to buckling failure and instability of the structure, so a curved section is produced for the top chord to address the compression member of the roof truss. Moreover, design and production information about curved CFS channel sections is scarce. In this study, the CFS is bent using a cut-curved method because of its ease of production, without the need for skilled labour or high-cost machinery. The tension behaviour of the strengthening method for the cut-curved section, which can be regarded as a connection of the cut-curved section, was tested and analysed. Seven types of connection were selected. From the testing and observation, the specimen with a full weld along the cut section plus a flange element plate with two self-drilling screws (F7A) was noted to have the highest ultimate load. Finally, three alternative methods of connection for CFS cut-curved sections are proposed that could serve as a reference for contractors and for further design.

  8. Sequence analysis of annually normalized citation counts: an empirical analysis based on the characteristic scores and scales (CSS) method.

    Science.gov (United States)

    Bornmann, Lutz; Ye, Adam Y; Ye, Fred Y

    2017-01-01

    In bibliometrics, only a few publications have focused on the citation histories of publications, where the citations for each citing year are assessed. In this study, therefore, annual categories of field- and time-normalized citation scores (based on the characteristic scores and scales method: 0 = poorly cited, 1 = fairly cited, 2 = remarkably cited, and 3 = outstandingly cited) are used to study the citation histories of papers. As our dataset, we used all articles published in 2000 and their annual citation scores until 2015. We generated annual sequences of citation scores (e.g., [Formula: see text]) and compared the sequences of annual citation scores of six broader fields (natural sciences, engineering and technology, medical and health sciences, agricultural sciences, social sciences, and humanities). In agreement with previous studies, our results demonstrate that sequences with poorly cited (0) and fairly cited (1) elements dominate the publication set; sequences with remarkably cited (2) and outstandingly cited (3) periods are rare. The highest percentages of constantly poorly cited papers can be found in the social sciences; the lowest percentages are in the agricultural sciences and humanities. The largest group of papers with remarkably cited (2) and/or outstandingly cited (3) periods shows an increasing impact over the citing years with the following orders of sequences: [Formula: see text] (6.01%), which is followed by [Formula: see text] (1.62%). Only 0.11% of the papers (n = 909) are constantly on the outstandingly cited level.

  9. Underwater plasma arc cutting

    International Nuclear Information System (INIS)

    Leautier, R.; Pilot, G.

    1991-01-01

    This report describes the work done to develop underwater plasma arc cutting techniques, to characterise aerosols from cutting operations on radioactive and non-radioactive work-pieces, and to develop suitable ventilation and filtration techniques. The work has been carried out in the framework of a contract between CEA-CEN Cadarache and the Commission of the European Communities, and in close cooperation with CEA-CEN Saclay, mainly on secondary emissions and radioactive analysis. The contract started in May 1986 and was completed in December 1988 under a supplementary agreement. This report has been compiled from several progress reports submitted during the work period; it contains the main findings of the work and encloses the results of comparative tests on plasma arc cutting

  10. Evaluation of Normalization Methods on GeLC-MS/MS Label-Free Spectral Counting Data to Correct for Variation during Proteomic Workflows

    Science.gov (United States)

    Gokce, Emine; Shuford, Christopher M.; Franck, William L.; Dean, Ralph A.; Muddiman, David C.

    2011-12-01

    Normalization of spectral counts (SpCs) in label-free shotgun proteomic approaches is important to achieve reliable relative quantification. Three different SpC normalization methods, total spectral count (TSpC) normalization, normalized spectral abundance factor (NSAF) normalization, and normalization to selected proteins (NSP) were evaluated based on their ability to correct for day-to-day variation between gel-based sample preparation and chromatographic performance. Three spectral counting data sets obtained from the same biological conidia sample of the rice blast fungus Magnaporthe oryzae were analyzed by 1D gel and liquid chromatography-tandem mass spectrometry (GeLC-MS/MS). Equine myoglobin and chicken ovalbumin were spiked into the protein extracts prior to 1D-SDS-PAGE as internal protein standards for NSP. The correlation between SpCs of the same proteins across the different data sets was investigated. We report that TSpC normalization and NSAF normalization yielded almost ideal slopes of unity for normalized SpC versus average normalized SpC plots, while NSP did not afford effective corrections of the unnormalized data. Furthermore, when utilizing TSpC normalization prior to relative protein quantification, t-testing and fold-change revealed the cutoff limits for determining real biological change to be a function of the absolute number of SpCs. For instance, we observed the variance decreased as the number of SpCs increased, which resulted in a higher propensity for detecting statistically significant, yet artificial, change for highly abundant proteins. Thus, we suggest applying higher confidence level and lower fold-change cutoffs for proteins with higher SpCs, rather than using a single criterion for the entire data set. By choosing appropriate cutoff values to maintain a constant false positive rate across different protein levels (i.e., SpC levels), it is expected this will reduce the overall false negative rate, particularly for proteins with
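
    For reference, the TSpC and NSAF normalizations described above reduce to a few lines; the toy spectral counts and protein lengths below are invented, and the exact TSpC scaling target (here, the mean run total) is an assumption of this sketch.

```python
import numpy as np

# Spectral counts: rows = proteins, columns = replicate runs (toy values).
spc = np.array([[120.0, 150.0],
                [30.0, 45.0],
                [8.0, 5.0]])
lengths = np.array([450.0, 900.0, 300.0])  # protein lengths (residues)

# Total spectral count (TSpC) normalization: rescale each run so that its
# total matches the mean total across runs.
tspc = spc * (spc.sum(axis=0).mean() / spc.sum(axis=0))

# Normalized spectral abundance factor (NSAF): (SpC / length) divided by
# the sum of SpC / length over all proteins in the same run.
saf = spc / lengths[:, None]
nsaf = saf / saf.sum(axis=0)

print(tspc.round(1))
print(nsaf.round(4))
```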

  11. Which are the cut-off values of 2D-Shear Wave Elastography (2D-SWE) liver stiffness measurements predicting different stages of liver fibrosis, considering Transient Elastography (TE) as the reference method?

    Energy Technology Data Exchange (ETDEWEB)

    Sporea, Ioan, E-mail: isporea@umft.ro; Bota, Simona, E-mail: bota_simona1982@yahoo.com; Gradinaru-Taşcău, Oana, E-mail: bluonmyown@yahoo.com; Şirli, Roxana, E-mail: roxanasirli@gmail.com; Popescu, Alina, E-mail: alinamircea.popescu@gmail.com; Jurchiş, Ana, E-mail: ana.jurchis@yahoo.com

    2014-03-15

    Introduction: To identify liver stiffness (LS) cut-off values assessed by means of 2D-Shear Wave Elastography (2D-SWE) for predicting different stages of liver fibrosis, considering Transient Elastography (TE) as the reference method. Methods: Our prospective study included 383 consecutive subjects, with or without hepatopathies, in whom LS was evaluated by means of TE and 2D-SWE. To discriminate between the various stages of fibrosis by TE we used the following LS cut-offs (kPa): F1: 6, F2: 7.2, F3: 9.6 and F4: 14.5. Results: The rate of reliable LS measurements was similar for TE and 2D-SWE: 73.9% vs. 79.9%, p = 0.06. Older age and higher BMI were associated, for both TE and 2D-SWE, with the impossibility of obtaining reliable LS measurements. Reliable LS measurements by both elastographic methods were obtained in 65.2% of patients. A significant correlation was found between TE and 2D-SWE measurements (r = 0.68). The best LS cut-off values assessed by 2D-SWE for predicting different stages of liver fibrosis were: F ≥ 1: >7.1 kPa (AUROC = 0.825); F ≥ 2: >7.8 kPa (AUROC = 0.859); F ≥ 3: >8 kPa (AUROC = 0.897) and for F = 4: >11.5 kPa (AUROC = 0.914). Conclusions: 2D-SWE is a reliable method for the non-invasive evaluation of liver fibrosis, considering TE as the reference method. The accuracy of 2D-SWE measurements increased with the severity of liver fibrosis.

  12. ADVANCED CUTTINGS TRANSPORT STUDY

    Energy Technology Data Exchange (ETDEWEB)

    Troy Reed; Stefan Miska; Nicholas Takach; Kaveh Ashenayi; Gerald Kane; Mark Pickell; Len Volk; Mike Volk; Barkim Demirdal; Affonso Lourenco; Evren Ozbayoglu; Paco Vieira; Lei Zhou

    2000-01-30

    This is the second quarterly progress report for Year 2 of the ACTS project. It includes a review of progress made in Flow Loop development and research during the period of time between Oct 1, 2000 and December 31, 2000. This report presents a review of progress on the following specific tasks: (a) Design and development of an Advanced Cuttings Transport Facility (Task 2: Addition of a foam generation and breaker system), (b) Research project (Task 6): "Study of Cuttings Transport with Foam Under LPAT Conditions (Joint Project with TUDRP)", (c) Research project (Task 7): "Study of Cuttings Transport with Aerated Muds Under LPAT Conditions (Joint Project with TUDRP)", (d) Research project (Task 8): "Study of Flow of Synthetic Drilling Fluids Under Elevated Pressure and Temperature Conditions", (e) Research project (Task 9): "Study of Foam Flow Behavior Under EPET Conditions", (f) Research project (Task 10): "Study of Cuttings Transport with Aerated Mud Under Elevated Pressure and Temperature Conditions", (g) Research on instrumentation tasks to measure cuttings concentration and distribution in a flowing slurry (Task 11) and foam properties while transporting cuttings (Task 12), (h) Development of a safety program for the ACTS Flow Loop, with progress on a comprehensive safety review of all flow-loop components and operational procedures (Task 1S), (i) Activities towards technology transfer and developing contacts with petroleum and service company members, and increasing the number of JIP members. The tasks completed during this quarter are Task 7 and Task 8.

  13. Theoretical Models for Orthogonal Cutting

    DEFF Research Database (Denmark)

    De Chiffre, Leonardo

    This review of simple models for orthogonal cutting was extracted from: “L. De Chiffre: Metal Cutting Mechanics and Applications, D.Sc. Thesis, Technical University of Denmark, 1990.”

  14. Shroud cutting techniques and collection systems for secondary radioactivity release

    International Nuclear Information System (INIS)

    Yokoi, H.; Watanabe, A.; Uetake, N.; Shimura, T.; Omote, T.; Adachi, H.; Murakami, S.; Kobayashi, H.; Gotoh, M.

    2001-01-01

    Replacement of the in-core shroud has been conducted as part of the preventive maintenance program at Tsuruga-1. The EDM (electric discharge machining) and plasma cutting methods were applied to in-core shroud cutting and to secondary cutting in the DSP (dryer/separator pool), respectively. The cutting systems were improved in order to decrease radioactive secondary products. 1) Fundamental EDM cutting tests: fundamental EDM cutting tests were carried out in order to study the secondary products. It could be presumed that a volatile Co-carbonyl compound was generated when using a carbon electrode. An Ag/W electrode was effective as the EDM electrode for in-core shroud cutting, preventing generation of the Co-carbonyl compound and decreasing the total amount of secondary products. 2) In-core shroud cutting in the RPV (reactor pressure vessel): the EDM cutting system with the Ag/W electrode and the collection system maintained a good environment during in-core shroud cutting at Tsuruga-1. Activity concentration remained below the mask-wearing limit of 4E-6 Bq/cm3, even near the water surface. 3) Secondary plasma cutting in the DSP: the secondary cutting work succeeded in reducing both the working period and radiation exposure. The amount of radiation exposure was reduced to 60% of the planned value, owing to adequate decontamination of the working environment and a reduced number of torch maintenance tasks enabled by improvements to the underwater cutting device

  15. Design and Selection of Machine Learning Methods Using Radiomics and Dosiomics for Normal Tissue Complication Probability Modeling of Xerostomia.

    Science.gov (United States)

    Gabryś, Hubert S; Buettner, Florian; Sterzing, Florian; Hauswald, Henrik; Bangert, Mark

    2018-01-01

    The purpose of this study is to investigate whether machine learning with dosiomic, radiomic, and demographic features allows for xerostomia risk assessment more precise than normal tissue complication probability (NTCP) models based on the mean radiation dose to parotid glands. A cohort of 153 head-and-neck cancer patients was used to model xerostomia at 0-6 months (early), 6-15 months (late), 15-24 months (long-term), and at any time (a longitudinal model) after radiotherapy. Predictive power of the features was evaluated by the area under the receiver operating characteristic curve (AUC) of univariate logistic regression models. The multivariate NTCP models were tuned and tested with single and nested cross-validation, respectively. We compared predictive performance of seven classification algorithms, six feature selection methods, and ten data cleaning/class balancing techniques using the Friedman test and the Nemenyi post hoc analysis. NTCP models based on the parotid mean dose failed to predict xerostomia (AUCs < 0.60). The most informative predictors were found for late and long-term xerostomia. Late xerostomia correlated with the contralateral dose gradient in the anterior-posterior (AUC = 0.72) and the right-left (AUC = 0.68) direction, whereas long-term xerostomia was associated with parotid volumes (AUCs > 0.85), dose gradients in the right-left (AUCs > 0.78), and the anterior-posterior (AUCs > 0.72) direction. Multivariate models of long-term xerostomia were typically based on the parotid volume, the parotid eccentricity, and the dose-volume histogram (DVH) spread with the generalization AUCs ranging from 0.74 to 0.88. On average, support vector machines and extra-trees were the top performing classifiers, whereas the algorithms based on logistic regression were the best choice for feature selection. We found no advantage in using data cleaning or class balancing methods. We demonstrated that incorporation of organ- and dose-shape descriptors is beneficial for xerostomia prediction in highly conformal radiotherapy treatments. Due to strong reliance on patient-specific, dose-independent factors, our results underscore the need for development of personalized data-driven risk profiles for NTCP models of xerostomia. The facilitated

  16. An efficient reliable method to estimate the vaporization enthalpy of pure substances according to the normal boiling temperature and critical properties

    OpenAIRE

    Mehmandoust, Babak; Sanjari, Ehsan; Vatani, Mostafa

    2014-01-01

    The heat of vaporization of a pure substance at its normal boiling temperature is a very important property in many chemical processes. In this work, a new empirical method was developed to predict vaporization enthalpy of pure substances. This equation is a function of normal boiling temperature, critical temperature, and critical pressure. The presented model is simple to use and provides an improvement over the existing equations for 452 pure substances in wide boiling range. The results s...

  17. Assessment of a novel multi-array normalization method based on spike-in control probes suitable for microRNA datasets with global decreases in expression.

    Science.gov (United States)

    Sewer, Alain; Gubian, Sylvain; Kogel, Ulrike; Veljkovic, Emilija; Han, Wanjiang; Hengstermann, Arnd; Peitsch, Manuel C; Hoeng, Julia

    2014-05-17

    High-quality expression data are required to investigate the biological effects of microRNAs (miRNAs). The goal of this study was, first, to assess the quality of miRNA expression data based on microarray technologies and, second, to consolidate it by applying a novel normalization method. Indeed, because of significant differences in platform designs, miRNA raw data cannot be normalized blindly with standard methods developed for gene expression. This fundamental observation motivated the development of a novel multi-array normalization method based on controllable assumptions, which uses the spike-in control probes to adjust the measured intensities across arrays. Raw expression data were obtained with the Exiqon dual-channel miRCURY LNA™ platform in the "common reference design" and processed as "pseudo-single-channel". They were used to apply several quality metrics based on the coefficient of variation and to test the novel spike-in controls based normalization method. Most of the considerations presented here could be applied to raw data obtained with other platforms. To assess the normalization method, it was compared with 13 other available approaches from both data quality and biological outcome perspectives. The results showed that the novel multi-array normalization method reduced the data variability in the most consistent way. Further, the reliability of the obtained differential expression values was confirmed based on a quantitative reverse transcription-polymerase chain reaction experiment performed for a subset of miRNAs. The results reported here support the applicability of the novel normalization method, in particular to datasets that display global decreases in miRNA expression similarly to the cigarette smoke-exposed mouse lung dataset considered in this study. Quality metrics to assess between-array variability were used to confirm that the novel spike-in controls based normalization method provided high-quality miRNA expression data
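
    A strongly simplified sketch of the idea, aligning the arrays on their spike-in control probes so that genuine global decreases in miRNA expression are preserved, is shown below; the paper's actual method is more elaborate, and the intensities here are invented.

```python
import numpy as np

# Toy log2 intensities: rows = probes, columns = arrays. The first three
# rows are spike-in controls present at identical amounts on every array.
data = np.array([[8.0, 8.6, 7.7],
                 [10.1, 10.8, 9.8],
                 [12.0, 12.5, 11.6],   # spike-in controls end here
                 [5.2, 4.1, 5.0],
                 [7.9, 6.5, 7.6]])
spike = data[:3]

# Shift each array so its spike-in median matches the across-array median;
# unlike global normalization, this leaves real global decreases intact.
spike_medians = np.median(spike, axis=0)
offsets = spike_medians - np.median(spike_medians)
normalized = data - offsets
print(normalized)
```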

  18. Innovations: laser-cutting nickel-titanium

    Energy Technology Data Exchange (ETDEWEB)

    Dickson, T.R.; Moore, B.; Toyama, N. [LPL Systems, Inc., Mountain View, CA (United States)

    2002-07-01

    Laser-cutting is well established as the preferred method for manufacturing many endovascular medical devices. Sometimes laser processing has been poorly understood by nickel-titanium (NiTi) material suppliers, medical device manufacturers, and device designers, but the field has made important strides in the past several years. A variety of sample, nonspecific applications are presented for cutting tubing and sheet stock. Limiting constraints, key considerations, and areas for future development are identified. (orig.)

  19. ADVANCED CUTTINGS TRANSPORT STUDY

    Energy Technology Data Exchange (ETDEWEB)

    Stefan Miska; Troy Reed; Ergun Kuru

    2004-09-30

    The Advanced Cuttings Transport Study (ACTS) was a 5-year JIP project undertaken at the University of Tulsa (TU). The project was sponsored by the U.S. Department of Energy (DOE) and JIP member companies. The objectives of the project were: (1) to develop and construct a new research facility that would allow three-phase (gas, liquid and cuttings) flow experiments under ambient and EPET (elevated pressure and temperature) conditions, and at different angles of inclination and drill pipe rotation speeds; (2) to conduct experiments and develop a database for the industry and academia; and (3) to develop mechanistic models for optimization of drilling hydraulics and cuttings transport. This project consisted of research studies, flow loop construction and instrumentation development. Following a one-year period for basic flow loop construction, a proposal was submitted by TU to the DOE for a five-year project that was organized in such a manner as to provide a logical progression of research experiments as well as additions to the basic flow loop. The flow loop additions and improvements included: (1) elevated temperature capability; (2) two-phase (gas and liquid, foam etc.) capability; (3) cuttings injection and removal system; (4) drill pipe rotation system; and (5) drilling section elevation system. In parallel with the flow loop construction, hydraulics and cuttings transport studies were performed using drilling foams and aerated muds. In addition, hydraulics and rheology of synthetic drilling fluids were investigated. The studies were performed under ambient and EPET conditions. The effects of temperature and pressure on the hydraulics and cuttings transport were investigated. Mechanistic models were developed to predict frictional pressure loss and cuttings transport in horizontal and near-horizontal configurations. Model predictions were compared with the measured data. Predominantly, model predictions show satisfactory agreements with the measured data. As a

  20. Maintaining knife sharpness in industrial meat cutting: A matter of knife or meat cutter ability.

    Science.gov (United States)

    Karltun, J; Vogel, K; Bergstrand, M; Eklund, J

    2016-09-01

    Knife sharpness is imperative in meat cutting. The aim of this quasi-experimental study was to compare the impact of knife blade steel quality with meat cutters' individual ability to keep the cutting edge sharp in an industrial production setting. Twelve meat cutters in two different companies using three different knives during normal production were studied. Methods included measuring the knife cutting force before and after knife use, the time the knives were used, ratings of sharpness and discomfort, and interviews. Results showed that the meat cutters' skill in maintaining sharpness had a much larger effect on knife sharpness during work than differences in knife steel. This ability was also related to feelings of discomfort and to physical exertion. Meat cutters using more knives were more likely to suffer from discomfort in the upper limbs, which is a risk factor for developing MSD.

  1. Optimizing the parameters of heat transmission in a small heat exchanger with spiral tapes cut as triangles and Aluminum oxide nanofluid using central composite design method

    Science.gov (United States)

    Ghasemi, Nahid; Aghayari, Reza; Maddah, Heydar

    2018-02-01

    The present study aims at optimizing heat transmission parameters such as the Nusselt number and friction factor in a small double-pipe heat exchanger equipped with rotating spiral tapes cut as triangles and filled with aluminum oxide nanofluid. The effects of the Reynolds number, twist ratio (y/w), rotating twisted tape and concentration (w%) on the Nusselt number and friction factor are also investigated. The central composite design and the response surface methodology are used for evaluating the responses necessary for optimization. According to the optimal curves, the optimized values obtained for the Nusselt number and friction factor were 146.6675 and 0.06020, respectively. Finally, an appropriate correlation is provided to achieve the optimal model at minimum cost. The optimization results showed that the cost decreased in the best case.
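
    The response-surface step can be sketched generically: fit a second-order polynomial in coded factors and locate its stationary point. The data below are synthetic, and the two factors merely stand in for, e.g., Reynolds number and twist ratio; this is not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy two-factor data set in coded units (40 hypothetical runs).
x1 = rng.uniform(-1.0, 1.0, 40)
x2 = rng.uniform(-1.0, 1.0, 40)
y = (100 + 20 * x1 - 8 * x2 - 10 * x1**2 - 5 * x2**2 + 4 * x1 * x2
     + rng.normal(0.0, 1.0, 40))               # pretend Nusselt numbers

# Second-order response surface:
#   y ~ b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Stationary point of the fitted quadratic (candidate optimum).
b = beta[1:3]
B = np.array([[2.0 * beta[3], beta[5]],
              [beta[5], 2.0 * beta[4]]])
print("stationary point (coded units):", np.linalg.solve(B, -b))
```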

  2. Skin Cut Construction

    DEFF Research Database (Denmark)

    2017-01-01

    of the exhibition is to create a connection between the artistic and technological development through Danish firms and researchers who represent the newest technology in concrete treatment. The first part of the exhibition (skin) will focus on the surface treatment of concrete (’graphical concrete’), the second (cut...

  3. Cutting Cakes Carefully

    Science.gov (United States)

    Hill, Theodore P.; Morrison, Kent E.

    2010-01-01

    This paper surveys the fascinating mathematics of fair division, and provides a suite of examples using basic ideas from algebra, calculus, and probability which can be used to examine and test new and sometimes complex mathematical theories and claims involving fair division. Conversely, the classical cut-and-choose and moving-knife algorithms…

  4. Classroom Cut Ups

    Science.gov (United States)

    Lord, Stacy

    2011-01-01

    Discovering identity can be a lifelong challenge for some people, while others seem to figure it out right away. During the middle school years, finding one's identity can be a daunting task. Most students will spend a considerable amount of time during these middle years looking for it. This lesson on cut-paper self-portraits lets students delve…

  5. Abrasive water jet cutting

    International Nuclear Information System (INIS)

    Leist, K.J.; Funnell, G.J.

    1988-01-01

    In the process of selecting a failed equipment cut-up tool for the process facility modifications (PFM) project, a system using an abrasive water jet (AWJ) was developed and tested for remote disassembly of failed equipment. It is presented in this paper

  6. After the Ribbon Cutting

    DEFF Research Database (Denmark)

    Hodge, Graeme A.; Boulot, Emille; Duffield, Colin

    2017-01-01

    Much attention has gone towards ‘up-front’ processes when delivering infrastructure public–private partnerships (PPPs), but less on how to best govern after the ribbon is cut and the infrastructure built. This paper identifies the primary contractual and institutional governance challenges arising...

  7. Simultaneous Cake Cutting

    DEFF Research Database (Denmark)

    Balkanski, Eric; Branzei, Simina; Kurokawa, David

    2014-01-01

    We introduce the simultaneous model for cake cutting (the fair allocation of a divisible good), in which agents simultaneously send messages containing a sketch of their preferences over the cake. We show that this model enables the computation of divisions that satisfy proportionality — a popular...

  8. Methods for Reducing Normal Tissue Complication Probabilities in Oropharyngeal Cancer: Dose Reduction or Planning Target Volume Elimination

    Energy Technology Data Exchange (ETDEWEB)

    Samuels, Stuart E.; Eisbruch, Avraham; Vineberg, Karen; Lee, Jae; Lee, Choonik; Matuszak, Martha M.; Ten Haken, Randall K.; Brock, Kristy K., E-mail: kbrock@med.umich.edu

    2016-11-01

    Purpose: Strategies to reduce the toxicities of head and neck radiation (ie, dysphagia [difficulty swallowing] and xerostomia [dry mouth]) are currently underway. However, the predicted benefit of dose and planning target volume (PTV) reduction strategies is unknown. The purpose of the present study was to compare the normal tissue complication probabilities (NTCP) for swallowing and salivary structures in standard plans (70 Gy [P70]), dose-reduced plans (60 Gy [P60]), and plans eliminating the PTV margin. Methods and Materials: A total of 38 oropharyngeal cancer (OPC) plans were analyzed. Standard organ-sparing volumetric modulated arc therapy plans (P70) were created and then modified by eliminating the PTVs and treating the clinical tumor volumes (CTVs) only (C70) or maintaining the PTV but reducing the dose to 60 Gy (P60). NTCP dose models for the pharyngeal constrictors, glottis/supraglottic larynx, parotid glands (PGs), and submandibular glands (SMGs) were analyzed. The minimal clinically important benefit was defined as a mean change in NTCP of >5%. The P70 NTCP thresholds and overlap percentages of the organs at risk with the PTVs (56-59 Gy, vPTV56) were evaluated to identify the predictors for NTCP improvement. Results: With the P60 plans, only the ipsilateral PG (iPG) benefited (23.9% vs 16.2%; P<.01). With the C70 plans, only the iPG (23.9% vs 17.5%; P<.01) and contralateral SMG (cSMG) (NTCP 32.1% vs 22.9%; P<.01) benefited. An iPG NTCP threshold of 20% and 30% predicted NTCP benefits for the P60 and C70 plans, respectively (P<.001). A cSMG NTCP threshold of 30% predicted an NTCP benefit with the C70 plans (P<.001). Furthermore, for the iPG, a vPTV56 >13% predicted benefit with P60 (P<.001) and C70 (P=.002). For the cSMG, a vPTV56 >22% predicted benefit with C70 (P<.01). Conclusions: PTV elimination and dose reduction lowered the NTCP of the iPG, and PTV elimination lowered the NTCP of the cSMG. NTCP thresholds and the
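
    For orientation, NTCP dose models of the kind referred to above are often of the Lyman-Kutcher-Burman (LKB) form. The sketch below implements a generic LKB model; the DVH and the parotid-style parameters are illustrative and not necessarily those used in the study.

```python
import numpy as np
from math import erf, sqrt

def lkb_ntcp(doses, volumes, td50, m, n):
    """Lyman-Kutcher-Burman NTCP from a differential DVH.

    doses: bin doses (Gy); volumes: fractional organ volume per bin.
    """
    # Generalized equivalent uniform dose (gEUD) with volume parameter n.
    geud = np.sum(volumes * doses ** (1.0 / n)) ** n
    t = (geud - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))  # standard normal CDF

# Hypothetical parotid DVH; td50, m, n are illustrative literature-style values.
doses = np.array([10.0, 20.0, 30.0, 40.0])
volumes = np.array([0.4, 0.3, 0.2, 0.1])
print(f"NTCP = {lkb_ntcp(doses, volumes, td50=39.9, m=0.40, n=1.0):.2f}")
```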

  9. Pursuing Normality

    DEFF Research Database (Denmark)

    Madsen, Louise Sofia; Handberg, Charlotte

    2018-01-01

    BACKGROUND: The present study explored the reflections on cancer survivorship care of lymphoma survivors in active treatment. Lymphoma survivors have survivorship care needs, yet their participation in cancer survivorship care programs is still reported as low. OBJECTIVE: The aim of this study was to understand the reflections on cancer survivorship care of lymphoma survivors to aid the future planning of cancer survivorship care and overcome barriers to participation. METHODS: Data were generated in a hematological ward during 4 months of ethnographic fieldwork, including participant observation and 46... Because of "pursuing normality," 8 of 9 participants opted out of cancer survivorship care programming due to prospects of "being cured" and perceptions of cancer survivorship care as "a continuation of the disease," ... implying an influence on whether to participate in cancer survivorship care programs.

  10. Epidemiological cut-off values for Flavobacterium psychrophilum MIC data generated by a standard test protocol.

    Science.gov (United States)

    Smith, P; Endris, R; Kronvall, G; Thomas, V; Verner-Jeffreys, D; Wilhelm, C; Dalsgaard, I

    2016-02-01

    Epidemiological cut-off values were developed for application to antibiotic susceptibility data for Flavobacterium psychrophilum generated by standard CLSI test protocols. The MIC values for ten antibiotic agents against Flavobacterium psychrophilum were determined in two laboratories. For five antibiotics, the data sets were of sufficient quality and quantity to allow the setting of valid epidemiological cut-off values. For these agents, the cut-off values, calculated by the application of the statistically based normalized resistance interpretation method, were ≤16 mg L⁻¹ for erythromycin, ≤2 mg L⁻¹ for florfenicol, ≤0.025 mg L⁻¹ for oxolinic acid (OXO), ≤0.125 mg L⁻¹ for oxytetracycline and ≤20 (1/19) mg L⁻¹ for trimethoprim/sulphamethoxazole. For ampicillin and amoxicillin, the majority of putative wild-type observations were 'off scale', and therefore, statistically valid cut-off values could not be calculated. For ormetoprim/sulphadimethoxine, the data were excessively diverse and a valid cut-off could not be determined. For flumequine, the putative wild-type data were extremely skewed, and for enrofloxacin, there was inadequate separation in the MIC values for putative wild-type and non-wild-type strains. It is argued that the adoption of OXO as a class representative for the quinolone group would be a valid method of determining susceptibilities to these agents. © 2014 John Wiley & Sons Ltd.

  11. An efficient reliable method to estimate the vaporization enthalpy of pure substances according to the normal boiling temperature and critical properties

    Directory of Open Access Journals (Sweden)

    Babak Mehmandoust

    2014-03-01

    The heat of vaporization of a pure substance at its normal boiling temperature is a very important property in many chemical processes. In this work, a new empirical method was developed to predict the vaporization enthalpy of pure substances. This equation is a function of normal boiling temperature, critical temperature, and critical pressure. The presented model is simple to use and provides an improvement over the existing equations for 452 pure substances in a wide boiling range. The results showed that the proposed correlation is more accurate than the literature methods for pure substances in a wide boiling range (20.3–722 K).

  12. An efficient reliable method to estimate the vaporization enthalpy of pure substances according to the normal boiling temperature and critical properties.

    Science.gov (United States)

    Mehmandoust, Babak; Sanjari, Ehsan; Vatani, Mostafa

    2014-03-01

    The heat of vaporization of a pure substance at its normal boiling temperature is a very important property in many chemical processes. In this work, a new empirical method was developed to predict the vaporization enthalpy of pure substances. This equation is a function of normal boiling temperature, critical temperature, and critical pressure. The presented model is simple to use and provides an improvement over the existing equations for 452 pure substances in a wide boiling range. The results showed that the proposed correlation is more accurate than the literature methods for pure substances in a wide boiling range (20.3-722 K).
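
    The correlation itself is not reproduced in either record, but its inputs (normal boiling temperature, critical temperature, critical pressure) are exactly those of the classical Riedel estimator, which can serve as a hedged baseline sketch of this class of method (Python; Pc in bar):

        import math

        R = 8.314  # gas constant, J mol^-1 K^-1

        def riedel_dhvap(tb_k, tc_k, pc_bar):
            """Riedel estimate of the vaporization enthalpy at the normal boiling point (J/mol)."""
            tbr = tb_k / tc_k  # reduced boiling temperature
            return 1.093 * R * tc_k * tbr * (math.log(pc_bar) - 1.013) / (0.930 - tbr)

        # Water: Tb = 373.15 K, Tc = 647.1 K, Pc = 220.6 bar -> ~42 kJ/mol (exp. ~40.7 kJ/mol)
        print(riedel_dhvap(373.15, 647.1, 220.6) / 1000.0)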

  13. Clarifying Normalization

    Science.gov (United States)

    Carpenter, Donald A.

    2008-01-01

    Confusion exists among database textbooks as to the goal of normalization as well as to which normal form a designer should aspire. This article discusses such discrepancies with the intention of simplifying normalization for both teacher and student. This author's industry and classroom experiences indicate such simplification yields quicker…

  14. Algorithm for finding minimal cut sets in a fault tree

    International Nuclear Information System (INIS)

    Rosenberg, Ladislav

    1996-01-01

    This paper presents several algorithms that have been used in a computer code for fault-tree analysis by the minimal cut sets method. The main algorithm is a more efficient version of the new CARA algorithm, which finds minimal cut sets with an auxiliary dynamical structure. The presented algorithm enables one to find the minimal cut sets according to defined requirements - by the order of minimal cut sets, by the number of minimal cut sets, or both. This algorithm is three to six times faster than the primary version of the CARA algorithm.
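
    The CARA algorithm itself is not given in the abstract. As a minimal illustration of the underlying task - top-down expansion of a fault tree into minimal cut sets - here is a MOCUS-style sketch in Python with a hypothetical two-gate tree; CARA's auxiliary dynamical structure and its order/number cut-offs are not modeled:

        def minimal_cut_sets(gates, top):
            """MOCUS-style expansion of a fault tree given as {gate: ('AND'|'OR', children)};
            names absent from `gates` are basic events."""
            cut_sets = [frozenset([top])]
            while True:
                expanded = False
                next_sets = []
                for cs in cut_sets:
                    gate = next((g for g in cs if g in gates), None)
                    if gate is None:
                        next_sets.append(cs)          # fully expanded cut set
                        continue
                    expanded = True
                    op, children = gates[gate]
                    rest = cs - {gate}
                    if op == 'AND':                   # AND widens the cut set
                        next_sets.append(rest | set(children))
                    else:                             # OR splits into one set per child
                        next_sets.extend(rest | {c} for c in children)
                cut_sets = list(set(next_sets))       # drop duplicates
                if not expanded:
                    break
            return [set(s) for s in cut_sets
                    if not any(t < s for t in cut_sets)]   # keep only minimal sets

        gates = {'TOP': ('OR', ['G1', 'e3']), 'G1': ('AND', ['e1', 'e2'])}
        print(minimal_cut_sets(gates, 'TOP'))  # [{'e1', 'e2'}, {'e3'}] (order may vary)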

  15. Automated Normalized Cut Segmentation of Aortic Root in CT Angiography

    NARCIS (Netherlands)

    Elattar, Mustafa; Wiegerinck, Esther; Planken, Nils; VanBavel, Ed; van Assen, Hans; Baan, Jan Jr; Marquering, Henk

    2014-01-01

    Transcatheter Aortic Valve Implantation (TAVI) is a new minimal-invasive intervention for implanting prosthetic valves in patients with aortic stenosis. This procedure is associated with adverse effects like paravalvular leakage, stroke, and coronary obstruction. Accurate automated sizing for
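
    The record is truncated before the method details, but the title names normalized cut segmentation, the common thread of these records. A minimal two-way normalized cut in the Shi-Malik formulation thresholds the second-smallest generalized eigenvector of (D - W)y = lambda * D * y; the toy affinity matrix below is an assumption for demonstration only:

        import numpy as np
        from scipy.linalg import eigh

        def ncut_bipartition(W):
            """Two-way normalized cut: threshold the 2nd-smallest generalized
            eigenvector of (D - W) y = lambda D y."""
            D = np.diag(W.sum(axis=1))
            _, vecs = eigh(D - W, D)          # generalized symmetric eigenproblem
            fiedler = vecs[:, 1]              # eigenvector of 2nd-smallest eigenvalue
            return fiedler > np.median(fiedler)

        # Toy affinity matrix: two 3-node clusters joined by weak links
        W = np.full((6, 6), 0.01)
        W[:3, :3] = W[3:, 3:] = 1.0
        np.fill_diagonal(W, 0.0)
        print(ncut_bipartition(W))            # separates nodes 0-2 from 3-5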

  16. An improved method for sacro-iliac joint imaging: a study of normal subjects, patients with sacro-iliitis and patients with low back pain

    International Nuclear Information System (INIS)

    Ayres, J.; Hilson, A.J.W.; Maisey, M.N.; Laurent, R.; Panayi, G.S.; Saunders, A.J.

    1981-01-01

    A new method is described for quantitative measurement of the uptake of 99mTc-methylene diphosphonate (MDP) by the sacro-iliac joints. The method uses 'regions of interest' providing advantages over the previously described 'slice' method; the two methods are compared in normal subjects, patients with known sacro-iliitis and patients with low back pain. Sacro-iliac activity, as calculated by the sacro-iliac index (SII) in normal patients, was shown to decrease with age in females but not in males. The SII was compared with radiographs of the sacro-iliac joints in the patients with known sacro-iliac joint disease and in those with low back pain. The method is useful for the exclusion of sacro-iliitis as a specific cause of back pain. (author)

  17. Thickness-Independent Ultrasonic Imaging Applied to Abrasive Cut-Off Wheels: An Advanced Aerospace Materials Characterization Method for the Abrasives Industry. A NASA Lewis Research Center Technology Transfer Case History

    Science.gov (United States)

    Roth, Don J.; Farmer, Donald A.

    1998-01-01

    Abrasive cut-off wheels are at times unintentionally manufactured with nonuniformity that is difficult to identify and sufficiently characterize without time-consuming, destructive examination. One particular nonuniformity is a density variation condition occurring around the wheel circumference or along the radius, or both. This density variation, depending on its severity, can cause wheel warpage and wheel vibration resulting in unacceptable performance and perhaps premature failure of the wheel. Conventional nondestructive evaluation methods such as ultrasonic c-scan imaging and film radiography are inaccurate in their attempts at characterizing the density variation because a superimposing thickness variation exists as well in the wheel. In this article, the single transducer thickness-independent ultrasonic imaging method, developed specifically to allow more accurate characterization of aerospace components, is shown to precisely characterize the extent of the density variation in a cut-off wheel having a superimposing thickness variation. The method thereby has potential as an effective quality control tool in the abrasives industry for the wheel manufacturer.

  18. Development and verification test on remote plasma cutting of large metallic waste

    International Nuclear Information System (INIS)

    Ozawa, Tamotsu; Yamada, Kunitaka; Abe, Tadashi

    1979-01-01

    Plasma cutting melts and expels the material being cut by striking an arc between an electrode in a nozzle and the object being cut, turning the working gas fed around the arc into a high-temperature, high-speed plasma jet. In remote plasma cutting, the plasma torch is operated remotely with a manipulator from outside a cell. When planning the method of breaking up solid wastes, the type of cutting machine and the method of remotely operating the cutting machines and the objects to be cut were examined. Plasma cutting machines were adopted because their cutting capability (materials, thickness and cutting speed) is excellent and their construction and handling are simple. Because the solid wastes to be cut are not uniform in shape, manipulator operation was adopted so as to respond flexibly to various forms. Objects to be cut are placed on a turntable to change their position successively. In remote plasma cutting, torch speed and gap must be controlled with the manipulator. Light-shielding glasses greatly reduce the visibility of the objects being cut and hinder the operation. For safety, a suitable cutting gas that does not contain hydrogen must be selected. The tests carried out over the two years from November 1977 are reported in this paper, and most of the problems have been solved. (Kako, I.)

  19. The optimization of the cutting process of diamonds with a YAG laser

    Directory of Open Access Journals (Sweden)

    A. J. Lubbe

    1993-07-01

    A laser cannot, as generally assumed by the layman, cut right through a diamond with a single cut. A couple of hundred cuts may be necessary to "chip carve" through a diamond. There are several parameters, for example cutting speed, focus point, overlapping of cuts, etc., that influence the cutting process. With a view to optimizing the cutting process, laser cuts in diamonds were studied in a systematic way with the aid of an electron microscope. The method, technique and the results of the research are discussed in this article.

  20. Some possibilities for determining cutting data when using laser cutting:

    OpenAIRE

    Radovanović, Miroslav

    2006-01-01

    The technological problems faced in the field of the application of laser-cutting machines lie in insufficient knowledge of the laser technique and the absence of both sufficiently reliable practical data and knowledge about the parameters affecting the work process itself. A significant parameter that is necessary to determine and to enter in an NC-program is the cutting speed. Various authors analyze the laser-cutting process and give mathematical models where laser cutting is modeled by us...

  1. A Simple and Effective Image Normalization Method to Monitor Boreal Forest Change in a Siberian Burn Chronosequence across Sensors and across Time

    Science.gov (United States)

    Chen, X.; Vierling, L. A.; Deering, D. W.

    2004-12-01

    Satellite data offer unique perspectives for monitoring and quantifying land cover change, however, the radiometric consistency among co-located multi-temporal images is difficult to maintain due to variations in sensors and atmosphere. To detect accurate landscape change using multi-temporal images, we developed a new relative radiometric normalization scheme: the temporally invariant cluster (TIC) method. Image data were acquired on 9 June 1990 (Landsat 4), 20 June 2000, and 26 August 2001 (Landsat 7) for analyses over boreal forests near the Siberian city of Krasnoyarsk. Normalized Difference Vegetation Index (NDVI), Enhanced Vegetation Index (EVI), and Reduced Simple Ratio (RSR) were investigated in the normalization study. The temporally invariant cluster (TIC) centers were identified through a point density map of the base image and the target image and a normalization regression line was created through all TIC centers. The target image digital data were then converted using the regression function so that the two images could be compared using the resulting common radiometric scale. We found that EVI was very sensitive to vegetation structure and could thus be used to separate conifer forests from deciduous forests and grass/crop lands. NDVI was a very effective vegetation index to reduce the influence of shadow, while EVI was very sensitive to shadowing. After normalization, correlations of NDVI and EVI with field collected total Leaf Area Index (LAI) data in 2000 and 2001 were significantly improved; the r-square values in these regressions increased from 0.49 to 0.69 and from 0.46 to 0.61, respectively. An EVI "cancellation effect" where EVI was positively related to understory greenness but negatively related to forest canopy coverage was evident across a post fire chronosequence. These findings indicate that the TIC method provides a simple, effective and repeatable method to create radiometrically comparable data sets for remote detection of
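
    The TIC procedure is described concretely enough to sketch: locate high-density clusters in the joint scatter of the base and target images and regress a radiometric mapping through those cluster centers. The Python/NumPy sketch below is a simplified, hypothetical rendering; the 2D-histogram peak picking and the 95th-percentile density threshold are assumptions, not the paper's exact rule:

        import numpy as np

        def tic_normalize(base, target, bins=100):
            """Relative radiometric normalization: regress base on target through
            high-density (temporally invariant) cells of their joint histogram."""
            h, te, be = np.histogram2d(target.ravel(), base.ravel(), bins=bins)
            r, c = np.where(h > np.percentile(h[h > 0], 95))  # densest cells ~ TIC centers
            t_centers = 0.5 * (te[r] + te[r + 1])
            b_centers = 0.5 * (be[c] + be[c + 1])
            slope, intercept = np.polyfit(t_centers, b_centers, 1)
            return slope * target + intercept                 # target on the base scale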

  2. Cutting to the chase

    International Nuclear Information System (INIS)

    Snieckus, D.

    2001-01-01

    This article reports on the development of the cost-effective Sabre abrasive cutting system, which came about as a result of UWG's work on the decommissioning of the Phillips Maureen wells and adds to UWG's 'total severance solution' tools. The advantages of the system are highlighted and include the ability to operate from a platform or diving support vessel, to cut internal casings, and to eliminate the use of environmentally damaging explosives and the need to operate from a rig. The new Mark II version of the Sabre, designed to work at greater water depths, the range of the severance tools, UWG's well abandonment hole assembly system, and its aim to enter the Gulf of Mexico market are discussed. Details are given of the decommissioning of the Schwedeneck-See platforms in Kiel Bay off Germany and Phillips' UK decommissioning plans for the Maureen platform.

  3. Hemoglobin cut-off values in healthy Turkish infants

    Institute of Scientific and Technical Information of China (English)

    Ahmet Arvas; Emel Gür; DurmuşDoğan

    2014-01-01

    Background: Anemia is a widespread public health problem associated with an increased risk of morbidity and mortality. This study was undertaken to determine the cut-off value of hemoglobin for infant anemia. Methods: A cross-sectional retrospective study was carried out at the well-baby clinics of a tertiary care hospital. A total of 1484 healthy infants aged between 4 and 24 months were included in the study. The relationship of hemoglobin (Hb) levels with mother's age, birth weight, weight gain rate, feeding, and gender was evaluated. Results: The Hb levels were assessed in four age groups (4 months, 6 months, 9-12 months, and 15-24 months) and the cut-off values of Hb were determined. Hb cut-off values (5th percentile for age) were detected as 97 g/L and 93 g/L at 4 months and 6 months, respectively. In older infants, the 5th percentile was 90.5 g/L and 93.4 g/L at 9-12 months and 15-24 months, respectively. The two values were lower than the World Health Organization criteria for anemia, which could be partly due to the lack of information on iron status in our population. However, this difference highlights the need for further studies on normal Hb levels in healthy infants in developing countries. Hb levels of females were higher in all age groups; however, a statistically significant difference between genders was found only in 6-month-old infants. No statistically significant difference was found among Hb levels, mother's age, birth weight, weight gain rate, and nutritional status. Conclusion: Hb cut-off values in infants should be re-evaluated and be compatible with the growth and development of children in that community.
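
    Since the cut-offs here are simply the age-specific 5th percentile of the hemoglobin distribution, the computation is direct; the values below are illustrative stand-ins, not the study's data:

        import numpy as np

        # Hypothetical Hb values (g/L) for one age group, e.g. 9-12 months
        hb_g_l = np.array([105, 98, 112, 121, 96, 108, 117, 101, 95, 110])
        cutoff = np.percentile(hb_g_l, 5)   # 5th percentile = age-specific anemia cut-off
        print(f"Hb cut-off: {cutoff:.1f} g/L")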

  4. ADVANCED CUTTINGS TRANSPORT STUDY

    Energy Technology Data Exchange (ETDEWEB)

    Ergun Kuru; Stefan Miska; Nicholas Takach; Kaveh Ashenayi; Gerald Kane; Mark Pickell; Len Volk; Mike Volk; Barkim Demirdal; Affonso Lourenco; Evren Ozbayoglu; Paco Vieira; Neelima Godugu

    2000-07-30

    ACTS flow loop is now operational under elevated pressure and temperature. Currently, experiments with synthetic based drilling fluids under pressure and temperature are being conducted. Based on the analysis of Fann 70 data, empirical correlations defining the shear stress as a function of temperature, pressure and the shear rate have been developed for Petrobras synthetic drilling fluids. PVT equipment has been modified for testing synthetic oil base drilling fluids. PVT tests with Petrobras synthetic base mud have been conducted and the results are being analyzed. Foam flow experiments have been conducted and the analysis of the data has been carried out to characterize the rheology of the foam. A comparison of pressure loss predictions from the available foam hydraulic models with the test results has been made. Cuttings transport experiments in the horizontal annulus section have been conducted using air, water and cuttings. Currently, cuttings transport tests in the inclined test section are being conducted. Foam PVT analysis tests have been conducted. Foam stability experiments have also been conducted. Effects of salt and oil concentration on foam stability have been investigated. Design of the ACTS flow loop modification for foam and aerated mud flow has been completed. A flow loop operation procedure for conducting foam flow experiments under EPET conditions has been prepared. Design of the lab-scale flow loop for dynamic foam characterization and cuttings monitoring instrumentation tests has been completed. The construction of the test loop is underway. As part of the technology transfer efforts, an Advisory Board Meeting with ACTS-JIP industry members was organized on May 13, 2000.

  5. Making the cut

    Energy Technology Data Exchange (ETDEWEB)

    Mcshannon, G. [Hydra Mining Tools International Ltd. (United Kingdom)

    2006-04-15

    The paper explains how coal mines around the world can benefit from the use of cowl-less, radial shearer drums. Hydra Mining has designed and manufactured a range of shearer drums to combat problems ranging from dust, frictional ignitions and geological problems to low production rates, allowing the mine operator to maximise production efficiency. The company tailor-makes shearer drums for each longwall face to optimise the cutting performance of every installation. 8 figs.

  6. Cutting Out Continuations

    DEFF Research Database (Denmark)

    Bahr, Patrick; Hutton, Graham

    2016-01-01

    In the field of program transformation, one often transforms programs into continuation-passing style to make their flow of control explicit, and then immediately removes the resulting continuations using defunctionalisation to make the programs first-order. In this article, we show how these two transformations can be fused together into a single transformation step that cuts out the need to first introduce and then eliminate continuations. Our approach is calculational, uses standard equational reasoning techniques, and is widely applicable.

  7. A design of a mode converter for electron cyclotron heating by the method of normal mode expansion

    International Nuclear Information System (INIS)

    Hoshino, Katsumichi; Kawashima, Hisato; Hata, Kenichiro; Yamamoto, Takumi

    1983-09-01

    Mode conversion of an electromagnetic wave propagating in an over-size circular waveguide is attained by giving a periodic perturbation to the guide wall. The mode coupling equation is expressed by 'generalized telegraphist's equations', which are derived from the Maxwell equations using a normal mode expansion. A computer code to solve the coupling equations is developed, and the mode amplitude, conversion efficiency, etc. of a particular type of mode converter for 60 GHz electron cyclotron heating are obtained. (author)

  8. Cutting forces during turning with variable depth of cut

    Directory of Open Access Journals (Sweden)

    M. Sadílek

    2016-03-01

    The research proposed in this paper is experimental work: measuring cutting forces and monitoring tool wear on the cutting edge. It compares turning with a standard roughing cycle and turning with the proposed roughing cycle with variable depth of cut applied.

  9. Comparison of different methods of spatial normalization of FDG-PET brain images in the voxel-wise analysis of MCI patients and controls

    International Nuclear Information System (INIS)

    Martino, M.E.; Villoria, J.G. de; Lacalle-Aurioles, M.; Olazaran, J.; Navarro, E.; Desco, M.; Cruz, I.; Garcia-Vazquez, V.; Carreras, J.L.

    2013-01-01

    One of the most interesting clinical applications of 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET) imaging in neurodegenerative pathologies is that of establishing the prognosis of patients with mild cognitive impairment (MCI), some of whom have a high risk of progressing to Alzheimer's disease (AD). One method of analyzing these images is to perform statistical parametric mapping (SPM) analysis. Spatial normalization is a critical step in such an analysis. The purpose of this study was to assess the effect of using different methods of spatial normalization on the results of SPM analysis of 18F-FDG PET images by comparing patients with MCI and controls. We evaluated the results of three spatial normalization methods in an SPM analysis by comparing patients diagnosed with MCI with a group of control subjects. We tested three methods of spatial normalization: MRI-diffeomorphic anatomical registration through exponentiated Lie algebra (DARTEL) and MRI-SPM8, which combine structural and functional images, and FDG-SPM8, which is based on the functional images only. The results obtained with the three methods were consistent in terms of the main pattern of functional alterations detected; namely, a bilateral reduction in glucose metabolism in the frontal and parietal cortices in the patient group. However, MRI-SPM8 also revealed differences in the left temporal cortex, and MRI-DARTEL revealed further differences in the left temporal cortex, precuneus, and left posterior cingulate. The results obtained with MRI-DARTEL were the most consistent with the pattern of changes in AD. When we compared our observations with those of previous reports, MRI-SPM8 and FDG-SPM8 seemed to show an incomplete pattern. Our results suggest that basing the spatial normalization method on functional images only can considerably impair the results of SPM analysis of 18F-FDG PET studies. (author)

  10. A general native-state method for determination of proliferation capacity of human normal and tumor tissues in vitro

    International Nuclear Information System (INIS)

    Hoffman, R.M.; Connors, K.M.; Meerson-Monosov, A.Z.; Herrera, H.; Price, J.H.

    1989-01-01

    An important need in cancer research and treatment is a physiological means in vitro by which to assess the proliferation capacity of human tumors and corresponding normal tissue for comparison. The authors have recently developed a native-state, three-dimensional, gel-supported primary culture system that allows every type of human cancer to grow in vitro at more than 90% frequency, with maintenance of tissue architecture, tumor-stromal interaction, and differentiated functions. Here they demonstrate that the native-state culture system allows proliferation indices to be determined for all solid cancer types explanted directly from surgery into long-term culture. Normal tissues also proliferate readily in this system. The degree of resolution of measurement of cell proliferation by histological autoradiography within the cultured tissues is greatly enhanced with the use of epi-illumination polarization microscopy. The histological status of the cultured tissues can be assessed simultaneously with the proliferation status. Carcinomas generally have areas of high epithelial proliferation with quiescent stromal cells. Sarcomas have high proliferation of cells of mesenchymal origin. Normal tissues can also proliferate at high rates. An image analysis system has been developed to automate proliferation determination. The high-resolution physiological means described here to measure the proliferation capacity of tissues will be important in further understanding of the deregulation of cell proliferation in cancer as well as in cancer prognosis and treatment.

  11. ADVANCED CUTTINGS TRANSPORT STUDY

    Energy Technology Data Exchange (ETDEWEB)

    Ergun Kuru; Stefan Miska; Nicholas Takach; Kaveh Ashenayi; Gerald Kane; Len Volk; Mark Pickell; Evren Ozbayoglu; Barkim Demirdal; Paco Vieira; Affonso Lourenco

    1999-10-15

    This report includes a review of the progress made in ACTF Flow Loop development and research during the 90-day pre-award period (May 15-July 14, 1999) and the following three months after the project approval date (July 15-October 15, 1999). The report presents information on the following specific subjects: (a) Progress in Advanced Cuttings Transport Facility design and development, (b) Progress report on the research project 'Study of Flow of Synthetic Drilling Fluids Under Elevated Pressure and Temperature Conditions', (c) Progress report on the research project 'Study of Cuttings Transport with Foam Under LPAT Conditions (Joint Project with TUDRP)', (d) Progress report on the research project 'Study of Cuttings Transport with Aerated Muds Under LPAT Conditions (Joint Project with TUDRP)', (e) Progress report on the research project 'Study of Foam Flow Behavior Under EPET Conditions', (f) Progress report on the instrumentation tasks (Tasks 11 and 12), (g) Activities towards technology transfer and developing contacts with oil and service company members.

  12. Vortex cutting in superconductors

    Science.gov (United States)

    Vlasko-Vlasov, Vitalii K.; Koshelev, Alexei E.; Glatz, Andreas; Welp, Ulrich; Kwok, Wai-K.

    2015-03-01

    Unlike illusive magnetic field lines in vacuum, magnetic vortices in superconductors are real physical strings, which interact with the sample surface, crystal structure defects, and with each other. We address the complex and poorly understood process of vortex cutting via a comprehensive set of magneto-optic experiments which allow us to visualize vortex patterns at magnetization of a nearly twin-free YBCO crystal by crossing magnetic fields of different orientations. We observe a pronounced anisotropy in the flux dynamics under crossing fields and the filamentation of induced supercurrents associated with the staircase vortex structure expected in layered cuprates, flux cutting effects, and angular vortex instabilities predicted for anisotropic superconductors. At some field angles, we find formation of the vortex domains following a type-I phase transition in the vortex state accompanied by an abrupt change in the vortex orientation. To clarify the vortex cutting scenario we performed time-dependent Ginzburg-Landau simulations, which confirmed formation of sharp vortex fronts observed in the experiment and revealed a left-handed helical instability responsible for the rotation of vortices. This work was supported by the U.S. Department of Energy, Office of Science, Materials Sciences and Engineering Division.

  13. An on-line normal-phase high performance liquid chromatography method for the rapid detection of radical scavengers in non-polar food matrixes

    NARCIS (Netherlands)

    Zhang, Q.; Klift, van der E.J.C.; Janssen, H.G.; Beek, van T.A.

    2009-01-01

    An on-line method for the rapid pinpointing of radical scavengers in non-polar mixtures like vegetable oils was developed. To avoid problems with dissolving the sample, normal-phase chromatography on bare silica gel was used with mixtures of hexane and methyl tert-butyl ether as the eluent. The high

  14. Evaluation of the standard normal variate method for Laser-Induced Breakdown Spectroscopy data treatment applied to the discrimination of painting layers

    Science.gov (United States)

    Syvilay, D.; Wilkie-Chancellier, N.; Trichereau, B.; Texier, A.; Martinez, L.; Serfaty, S.; Detalle, V.

    2015-12-01

    Nowadays, Laser-Induced Breakdown Spectroscopy (LIBS) is frequently used for in situ analyses to identify pigments in mural paintings. Nonetheless, in situ analyses require robust instrumentation able to cope with harsh experimental conditions. These conditions may cause variations in fluence and hence in the LIBS signal, which degrade the spectra and then the results. Usually, to overcome these experimental errors, the LIBS signal is processed. The most commonly used signal processing methods are baseline subtraction and normalization to a spectral line. However, the latter assumes that the chosen element is a constant component of the material, which may not be the case in paints organized in stratigraphic layers. For this reason, it is sometimes difficult to apply this normalization. In this study, another normalization was carried out to remove these signal variations. The standard normal variate (SNV) is a normalization designed for these conditions. It is sometimes implemented in Diffuse Reflectance Infrared Fourier Transform Spectroscopy and in Raman Spectroscopy, but rarely in LIBS. SNV has been applied to LIBS data before, but for the first time the effect of SNV on LIBS spectra was evaluated in detail (laser energy, shot-by-shot behavior, quantification). The aim of this paper is the quick visualization of the different layers of a stratigraphic painting sample by simple data representations (3D or 2D) after SNV normalization. In this investigation, we showed the potential power of the SNV transformation to overcome undesired LIBS signal variations, but also its limits of application. This method appears to be a promising way to normalize LIBS data, which may be interesting for in situ depth analyses.
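
    The SNV transform referenced here is just the row-wise standardization of each spectrum by its own mean and standard deviation, which is what makes it insensitive to shot-to-shot intensity (fluence) fluctuations; a minimal NumPy version:

        import numpy as np

        def snv(spectra):
            """Standard normal variate: centre and scale each spectrum (row)
            by its own mean and standard deviation."""
            s = np.asarray(spectra, dtype=float)
            return (s - s.mean(axis=1, keepdims=True)) / s.std(axis=1, keepdims=True)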

  15. Automated Quantification of Optic Nerve Axons in Primate Glaucomatous and Normal Eyes—Method and Comparison to Semi-Automated Manual Quantification

    Science.gov (United States)

    Reynaud, Juan; Cull, Grant; Wang, Lin; Fortune, Brad; Gardiner, Stuart; Burgoyne, Claude F; Cioffi, George A

    2012-01-01

    Purpose. To describe an algorithm and software application (APP) for 100% optic nerve axon counting and to compare its performance with a semi-automated manual (SAM) method in optic nerve cross-section images (images) from normal and experimental glaucoma (EG) nonhuman primate (NHP) eyes. Methods. ON cross sections from eight EG eyes from eight NHPs, five EG and five normal eyes from five NHPs, and 12 normal eyes from 12 NHPs were imaged at 100×. Calibration (n = 500) and validation (n = 50) image sets ranging from normal to end-stage damage were assembled. Correlation between APP and SAM axon counts was assessed by Deming regression within the calibration set and a compensation formula was generated to account for the subtle, systematic differences. Then, compensated APP counts for each validation image were compared with the mean and 95% confidence interval of five SAM counts of the validation set performed by a single observer. Results. Calibration set APP counts linearly correlated to SAM counts (APP = 10.77 + 1.03 [SAM]; R² = 0.94, P < 0.0001) in normal to end-stage damage images. In the validation set, compensated APP counts fell within the 95% confidence interval of the SAM counts in 42 of the 50 images and were within 12 axons of the confidence intervals in six of the eight remaining images. Uncompensated axon density maps for the normal and EG eyes of a representative NHP were generated. Conclusions. An APP for 100% ON axon counts has been calibrated and validated relative to SAM counts in normal and EG NHP eyes. PMID:22467571
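
    The compensation step described here - a Deming regression of APP on SAM counts, then inversion to put APP counts on the SAM scale - can be sketched as below. The implementation assumes equal error variances in the two methods, and the synthetic data merely mimic the reported relation APP = 10.77 + 1.03 [SAM]:

        import numpy as np

        def deming(x, y):
            """Deming regression with equal error variances in x and y
            (orthogonal regression); returns (slope, intercept)."""
            x, y = np.asarray(x, float), np.asarray(y, float)
            sxx, syy = x.var(), y.var()
            sxy = np.mean((x - x.mean()) * (y - y.mean()))
            slope = (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
            return slope, y.mean() - slope * x.mean()

        rng = np.random.default_rng(0)
        sam = np.array([1000., 5000., 20000., 60000., 110000.])   # manual counts (synthetic)
        app = 10.77 + 1.03 * sam + rng.normal(0, 200, sam.size)   # mimics the reported fit
        slope, intercept = deming(sam, app)
        compensated = (app - intercept) / slope                   # APP back on the SAM scale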

  16. A comparative study of deficit pattern in theory of mind and emotion regulation methods in evaluating patients with bipolar disorder and normal individuals

    OpenAIRE

    Ali Fakhari; Khalegh Minashiri; Abolfazl Fallahi; Mohammad Taher Panah

    2013-01-01

    BACKGROUND: This study compared patterns of deficit in "theory of mind" and "emotion regulation" in patients with bipolar disorder and normal individuals. METHODS: In this causal-comparative study, subjects were 20 patients with bipolar disorder and 20 normal individuals. Patients were selected via a convenience sampling method among hospitalized patients at Razi hospital of Tabriz, Iran. The data were collected through two scales: the Reading the Mind in the Eyes Test and the Emotion Regulation Questionnai...

  17. Optimisation of the Laser Cutting Process

    DEFF Research Database (Denmark)

    Dragsted, Birgitte; Olsen, Flemmming Ove

    1996-01-01

    The problem in optimising the laser cutting process is outlined. Basic optimisation criteria and principles for adapting an optimisation method, the simplex method, are presented. The results of implementing a response function in the optimisation are discussed with respect to the quality as well...

  18. Quarter-Sweep Iteration Concept on Conjugate Gradient Normal Residual Method via Second Order Quadrature - Finite Difference Schemes for Solving Fredholm Integro-Differential Equations

    International Nuclear Information System (INIS)

    Aruchunan, E.

    2015-01-01

    In this paper, we have examined the effectiveness of the quarter-sweep iteration concept on the conjugate gradient normal residual (CGNR) iterative method by using composite Simpson's (CS) and finite difference (FD) discretization schemes in solving Fredholm integro-differential equations. For comparison purposes, Gauss-Seidel (GS) and the standard or full- and half-sweep CGNR methods, namely FSCGNR and HSCGNR, are also presented. To validate the efficacy of the proposed method, several analyses were carried out, such as computational complexity and percentage reduction on the proposed and existing methods. (author)
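
    The CGNR core of the method - conjugate gradient applied to the normal equations A^T A x = A^T b - is standard; a plain (full-sweep) sketch in Python/NumPy follows. The quarter-sweep grid reduction and the composite Simpson/finite difference discretization of the integro-differential equation are omitted here:

        import numpy as np

        def cgnr(A, b, tol=1e-10, max_iter=500):
            """Conjugate gradient on the normal equations A^T A x = A^T b (CGNR)."""
            x = np.zeros(A.shape[1])
            r = b - A @ x                # residual of the original system
            z = A.T @ r                  # residual of the normal equations
            p = z.copy()
            zz = z @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = zz / (Ap @ Ap)
                x += alpha * p
                r -= alpha * Ap
                z = A.T @ r
                zz_new = z @ z
                if np.sqrt(zz_new) < tol:
                    break
                p = z + (zz_new / zz) * p
                zz = zz_new
            return x

        A = np.array([[3., 1.], [1., 2.], [0., 1.]])   # overdetermined toy system
        print(cgnr(A, np.array([5., 5., 2.])))         # least-squares solution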

  19. A review on ductile mode cutting of brittle materials

    Science.gov (United States)

    Antwi, Elijah Kwabena; Liu, Kui; Wang, Hao

    2018-06-01

    Brittle materials have been widely employed for industrial applications due to their excellent mechanical, optical, physical and chemical properties. But obtaining smooth and damage-free surface on brittle materials by traditional machining methods like grinding, lapping and polishing is very costly and extremely time consuming. Ductile mode cutting is a very promising way to achieve high quality and crack-free surfaces of brittle materials. Thus the study of ductile mode cutting of brittle materials has been attracting more and more efforts. This paper provides an overview of ductile mode cutting of brittle materials including ductile nature and plasticity of brittle materials, cutting mechanism, cutting characteristics, molecular dynamic simulation, critical undeformed chip thickness, brittle-ductile transition, subsurface damage, as well as a detailed discussion of ductile mode cutting enhancement. It is believed that ductile mode cutting of brittle materials could be achieved when both crack-free and no subsurface damage are obtained simultaneously.

  20. Nanometric mechanical cutting of metallic glass investigated using atomistic simulation

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Cheng-Da, E-mail: nanowu@cycu.edu.tw [Department of Mechanical Engineering, Chung Yuan Christian University, 200, Chung Pei Rd., Chung Li District, Taoyuan City 32023, Taiwan (China); Fang, Te-Hua, E-mail: fang.tehua@msa.hinet.net [Department of Mechanical Engineering, National Kaohsiung University of Applied Sciences, Kaohsiung 807, Taiwan (China); Su, Jih-Kai, E-mail: yummy_2468@yahoo.com.tw [Department of Mechanical Engineering, National Kaohsiung University of Applied Sciences, Kaohsiung 807, Taiwan (China)

    2017-02-28

    Highlights: • A nanoscale chip with a shear plane of 135° is extruded by the tool. • Tangential force and normal force increase with increasing tool nose radius. • Resistance factor increases with increasing cutting depth and temperature. - Abstract: The effects of cutting depth, tool nose radius, and temperature on the cutting mechanism and mechanics of amorphous NiAl workpieces are studied using molecular dynamics simulations based on the second-moment approximation of the many-body tight-binding potential. These effects are investigated in terms of atomic trajectories and flow field, shear strain, cutting force, resistance factor, cutting ratio, and pile-up characteristics. The simulation results show that a nanoscale chip with a shear plane of 135° is extruded by the tool from a workpiece surface during the cutting process. The workpiece atoms underneath the tool flow upward due to the adhesion force and elastic recovery. The required tangential force and normal force increase with increasing cutting depth and tool nose radius; both forces also increase with decreasing temperature. The resistance factor increases with increasing cutting depth and temperature, and decreases with increasing tool nose radius.

  1. A strand specific high resolution normalization method for chip-sequencing data employing multiple experimental control measurements

    DEFF Research Database (Denmark)

    Enroth, Stefan; Andersson, Claes; Andersson, Robin

    2012-01-01

    High-throughput sequencing is becoming the standard tool for investigating protein-DNA interactions or epigenetic modifications. However, the data generated will always contain noise due to e.g. repetitive regions or non-specific antibody interactions. The noise will appear in the form of a background... ...the background is only used to adjust peak calling and not as a pre-processing step that aims at discerning the signal from the background noise. A normalization procedure that extracts the signal of interest would be of universal use when investigating genomic patterns.

  2. Computational Methods for Quality Check, Preprocessing and Normalization of RNA-Seq Data for Systems Biology and Analysis

    DEFF Research Database (Denmark)

    Mazzoni, Gianluca; Kadarmideen, Haja N.

    2016-01-01

    The use of RNA sequencing (RNA-Seq) technologies is increasing mainly due to the development of new next-generation sequencing machines that have reduced the costs and the time needed for data generation. Nevertheless, microarrays are still the more common choice and one of the reasons... ...quality control, trimming and filtering procedures, alignment, post-mapping quality control, counting, normalization and differential expression testing. For each step, we present the most common tools and we give a complete description of their main characteristics and advantages, focusing on the statistics...

  3. A Study on Ultrasonic Elliptical Vibration Cutting of Inconel 718

    Directory of Open Access Journals (Sweden)

    Zhao Haidong

    2016-01-01

    Inconel 718 is a nickel-based alloy widely used in the aerospace and nuclear industries owing to its high-temperature mechanical properties. Conventional cutting (CC) of Inconel 718 is a major challenge in modern industry. Few studies have examined the cutting of Inconel 718 with a single-point diamond tool using the ultrasonic elliptical vibration cutting (UEVC) method. This paper presents an experimental study on UEVC of Inconel 718 using polycrystalline diamond (PCD)-coated tools. First, cutting tests were carried out to study the effect of machining parameters in UEVC on surface finish and flank wear during machining of Inconel 718. The tests clearly show that PCD-coated tools cutting Inconel 718 by UEVC perform better at a 0.1 mm depth of cut than at the lower 0.05 mm depth of cut or the higher 0.12 or 0.15 mm depths of cut. Second, as in the CC method, cutting performance in UEVC increases as the feed rate and cutting speed decrease. CC tests were also carried out to compare the performance of CC with the UEVC method.

  4. Development of plasma cutting process at observation of environmental requirements

    International Nuclear Information System (INIS)

    Czech, J.; Matusiak, J.; Pasek-Siurek, H.

    1997-01-01

    Plasma cutting is one of the basic methods for thermal cutting of metals. It is characterized by high productivity and quality of the cut surface. However, plasma cutting is among the most harmful processes for the environment and human health, owing to many agents that pose a potential environmental risk. The large amounts of dust and gases emitted during the process, together with the intensive radiation of the electric arc and excessive noise, are considered the most harmful hazards. Existing ventilation and filtration systems cannot solve all the problems the process causes. Underwater plasma cutting is therefore worthy of notice, for human safety and environmental protection. Such a solution considerably reduces the emission of dust and gases and decreases the noise level and ultraviolet radiation. An additional advantage of underwater plasma cutting is a reduction in the width of the heated zone and a decrease in the strains of the elements being cut. However, the productivity of this process is somewhat lower, which increases the cutting cost. The paper presents the results of investigations made at the Institute of Welding in Gliwice on plasma cutting equipment with energy-saving inverter power supplies used in automated underwater plasma cutting, as well as the results of testing of welding environment contamination and safety hazards. (author)

  5. Birkhoff normalization

    NARCIS (Netherlands)

    Broer, H.; Hoveijn, I.; Lunter, G.; Vegter, G.

    2003-01-01

    The Birkhoff normal form procedure is a widely used tool for approximating a Hamiltonian systems by a simpler one. This chapter starts out with an introduction to Hamiltonian mechanics, followed by an explanation of the Birkhoff normal form procedure. Finally we discuss several algorithms for

  6. The usefulness and the problems of attenuation correction using simultaneous transmission and emission data acquisition method. Studies on normal volunteers and phantom

    International Nuclear Information System (INIS)

    Kijima, Tetsuji; Kumita, Shin-ichiro; Mizumura, Sunao; Cho, Keiichi; Ishihara, Makiko; Toba, Masahiro; Kumazaki, Tatsuo; Takahashi, Munehiro.

    1997-01-01

    Attenuation correction using a simultaneous transmission data (TCT) and emission data (ECT) acquisition method was applied to 201Tl myocardial SPECT in ten normal adults and a phantom in order to validate its efficacy. The normal adult study demonstrated improved 201Tl accumulation in the septal wall and the posterior wall of the left ventricle and relatively decreased activities in the lateral wall with attenuation correction (p...). High 201Tl uptake organs such as the liver and the stomach pushed up the activities in the septal wall and the posterior wall. Cardiac dynamic phantom studies showed that the partial volume effect due to cardiac motion contributed to under-correction of the apex, which might be overcome using gated SPECT. Although simultaneous TCT and ECT acquisition is conceived as an advantageous method for attenuation correction, mis-correction of particular myocardial segments should be taken into account when assessing attenuation-corrected images. (author)

  7. Application methods of indolebutyric acid in cutting rooting of peach cultivars

    Directory of Open Access Journals (Sweden)

    Mauro Brasil Dias Tofanelli

    2003-10-01

    This work was carried out to evaluate the vegetative propagation potential of semihardwood cuttings of peach cultivars treated with indolebutyric acid (IBA) applied by two different methods. The experiment was conducted at the Departamento de Botânica, Instituto de Biociências, Universidade Estadual Paulista "Júlio de Mesquita Filho" (UNESP), Botucatu campus (SP), Brazil. Semihardwood cuttings 10-15 cm long and 5 mm in diameter, stripped of leaves, were prepared in December 2001 from stems collected from mother plants of the cultivars Delicioso Precoce, Jóia 1 and Okinawa. The cuttings were treated either by quick dipping for 5 seconds in concentrated IBA solutions (0; 1,250; 2,500 and 3,750 mg L⁻¹) or by slow immersion for 24 hours in diluted IBA solutions (0, 100, 200 and 300 mg L⁻¹). They were then planted in expanded polystyrene trays, using fine-grained vermiculite as substrate, and kept in a greenhouse under intermittent mist for 45 days. The cultivar Okinawa (29%) and the quick-dip IBA method (9%) gave the best rooting results. Propagation of the peach cultivars Delicioso Precoce, Jóia 1 and Okinawa by semihardwood cuttings is not recommended.

  8. CO 2 laser cutting of MDF . 2. Estimation of power distribution

    Science.gov (United States)

    Ng, S. L.; Lum, K. C. P.; Black, I.

    2000-02-01

    Part 2 of this paper details an experimentally-based method to evaluate the power distribution for both CW and PM cutting. Variations in power distribution with different cutting speeds, material thickness and pulse ratios are presented. The paper also provides information on both the cutting efficiency and absorptivity index for MDF, and comments on the beam dispersion characteristics after the cutting process.

  9. Determination of the stresses and displacements in the cut-off curtain body executed by the 'wall-in-the-ground' method

    Energy Technology Data Exchange (ETDEWEB)

    Snisarenko, V I; Mel'nikov, A I [Myinyisterstvo Budyivel'noyi Arkhyitekturi, Kyiv (Ukraine); Myizhgaluzevij Naukovo-Tekhnyichnij Tsentr 'Ukrittya', Natsyional'na Akademyiya Nauk Ukrayini, Chornobil' (Ukraine)]

    1994-12-31

    Construction of a cut-off curtain (COC) is analyzed as a possible variant to reduce the rate of horizontal radioactive migration. Such constructions can be executed by the 'wall-in-the-ground' method. The theoretical analysis of the stress-strain state of the COC body was carried out using the methods of the theory of elasticity and of the limit equilibrium of a granular medium. Theoretical dependences are obtained, and formulas are suggested for practical calculations of the COC-body stress-strain state in the depth intervals of practical interest. The dependences obtained may be used to calculate the consolidation parameters and filtration coefficients and to choose the materials for the COC body, its geometrical size and the film elements included.

  10. A scalable platform for biomechanical studies of tissue cutting forces

    International Nuclear Information System (INIS)

    Valdastri, P; Tognarelli, S; Menciassi, A; Dario, P

    2009-01-01

    This paper presents a novel and scalable experimental platform for biomechanical analysis of tissue cutting that exploits a triaxial force-sensitive scalpel and a high resolution vision system. Real-time measurements of cutting forces can be used together with accurate visual information to extract important biomechanical clues that would aid the surgeon during minimally invasive intervention in preserving healthy tissues. Furthermore, the in vivo data gathered can be used for modeling the viscoelastic behavior of soft tissues, which is an important issue in surgical simulator development. Thanks to a modular approach, this platform can be scaled down, thus enabling in vivo real-time robotic applications. Several cutting experiments were conducted with soft porcine tissues (lung, liver and kidney) chosen as ideal candidates for biopsy procedures. The cutting force curves show repeated self-similar units of localized loading followed by unloading. With regard to tissue properties, the depth of cut plays a significant role in the magnitude of the cutting force acting on the blade. Image processing techniques and dedicated algorithms were used to outline the surface of the tissues and estimate the time variation of the depth of cut. The depth of cut was finally used to obtain the normalized cutting force, thus allowing comparative biomechanical analysis.

  11. Melt Flow and Energy Limitation of Laser Cutting

    Directory of Open Access Journals (Sweden)

    Pavel Hudeček

    2016-01-01

    Laser technology is a versatile technology applicable to a wide range of parts in most materials. Laser material processing for industrial manufacturing applications is today a widespread procedure for welding, cutting, marking and micro-machining of metal and plastic parts and components. To support the huge mass-production laser cutting industry, new laser technologies and dry processes have been, and are being, actively developed. Fundamentally, industrial laser cutting and other industrial applications should satisfy four key practical issues: "Quality or Performance", "Throughput or Speed", "Cost or Total Ownership Cost", and "Reliability". Laser cutting requires, for example, several complicated physical factors to be resolved, including die strength that enables good wire bonding and survival of severe cycling tests, a clean cut wall surface, good cutting of direct attach film, and a proper cutting speed that achieves economical throughput. This paper introduces examples of the maximum cutting rate, which is normally limited by the laser energy, and shows how the cutting speed depends on the laser type, on cutting with a single laser beam versus a beam pattern, and on the applied laser power and material thickness.

  12. Automated quantification of optic nerve axons in primate glaucomatous and normal eyes--method and comparison to semi-automated manual quantification.

    Science.gov (United States)

    Reynaud, Juan; Cull, Grant; Wang, Lin; Fortune, Brad; Gardiner, Stuart; Burgoyne, Claude F; Cioffi, George A

    2012-05-01

    To describe an algorithm and software application (APP) for 100% optic nerve axon counting and to compare its performance with a semi-automated manual (SAM) method in optic nerve cross-section images (images) from normal and experimental glaucoma (EG) nonhuman primate (NHP) eyes. ON cross sections from eight EG eyes from eight NHPs, five EG and five normal eyes from five NHPs, and 12 normal eyes from 12 NHPs were imaged at 100×. Calibration (n = 500) and validation (n = 50) image sets ranging from normal to end-stage damage were assembled. Correlation between APP and SAM axon counts was assessed by Deming regression within the calibration set and a compensation formula was generated to account for the subtle, systematic differences. Then, compensated APP counts for each validation image were compared with the mean and 95% confidence interval of five SAM counts of the validation set performed by a single observer. Calibration set APP counts linearly correlated to SAM counts (APP = 10.77 + 1.03 [SAM]; R² = 0.94, P < 0.0001) in normal to end-stage damage images. In the validation set, compensated APP counts fell within the 95% confidence interval of the SAM counts in 42 of the 50 images and were within 12 axons of the confidence intervals in six of the eight remaining images. Uncompensated axon density maps for the normal and EG eyes of a representative NHP were generated. An APP for 100% ON axon counts has been calibrated and validated relative to SAM counts in normal and EG NHP eyes.

  13. Feasibility Study of Cryogenic Cutting Technology by Using a Computer Simulation and Manufacture of Main Components for Cryogenic Cutting System

    International Nuclear Information System (INIS)

    Kim, Sung Kyun; Lee, Dong Gyu; Lee, Kune Woo; Song, Oh Seop

    2009-01-01

    Cryogenic cutting technology is one of the most suitable technologies for dismantling nuclear facilities because no secondary waste is generated during the cutting process. In this paper, the feasibility of cryogenic cutting technology was investigated by using a computer simulation. In the computer simulation, a hybrid method combining the SPH (smoothed particle hydrodynamics) method and the FE (finite element) method was used. A penetration depth equation was used for the design of the cryogenic cutting system, and the design variables and operation conditions to cut a 10 mm thickness of steel were determined. Finally, the main components of the cryogenic cutting system were manufactured on the basis of the obtained design variables and operation conditions.

  14. Can You Cut It?

    DEFF Research Database (Denmark)

    Kjær, Tina; Lillelund, Christoffer Bredo; Moth-Poulsen, Mie

    2017-01-01

    The advent of affordable virtual reality (VR) displays and 360° video cameras has sparked an interest in bringing cinematic experiences from the screen into VR. However, it remains uncertain whether traditional approaches to filmmaking can be directly applied to cinematic VR. Historically... ...' sense of disorientation and their ability to follow the story, during exposure to fictional 360° films experienced using a head-mounted display. The results revealed no effects of increased cut frequency, which leads us to conclude that editing need not pose a problem in relation to cinematic VR, as long...

  15. Cutting the Cord-2

    Science.gov (United States)

    2004-01-01

    This animation shows the view from the rear hazard avoidance cameras on the Mars Exploration Rover Spirit as the rover turns 45 degrees clockwise. This maneuver is the first step in a 3-point turn that will rotate the rover 115 degrees to face west. The rover must make this turn before rolling off the lander because airbags are blocking it from exiting from the front lander petal. Before this crucial turn took place, engineers instructed the rover to cut the final cord linking it to the lander. The turn took around 30 minutes to complete.

  16. Cutting the Cord

    Science.gov (United States)

    2004-01-01

    This animation shows the view from the front hazard avoidance cameras on the Mars Exploration Rover Spirit as the rover turns 45 degrees clockwise. This maneuver is the first step in a 3-point turn that will rotate the rover 115 degrees to face west. The rover must make this turn before rolling off the lander because airbags are blocking it from exiting from the front lander petal. Before this crucial turn could take place, engineers instructed the rover to cut the final cord linking it to the lander. The turn took around 30 minutes to complete.

  17. A methodology for generating normal and pathological brain perfusion SPECT images for evaluation of MRI/SPECT fusion methods: application in epilepsy

    Energy Technology Data Exchange (ETDEWEB)

    Grova, C [Laboratoire IDM, Faculte de Medecine, Universite de Rennes 1, Rennes (France); Jannin, P [Laboratoire IDM, Faculte de Medecine, Universite de Rennes 1, Rennes (France); Biraben, A [Laboratoire IDM, Faculte de Medecine, Universite de Rennes 1, Rennes (France); Buvat, I [INSERM U494, CHU Pitie Salpetriere, Paris (France); Benali, H [INSERM U494, CHU Pitie Salpetriere, Paris (France); Bernard, A M [Service de Medecine Nucleaire, Centre Eugene Marquis, Rennes (France); Scarabin, J M [Laboratoire IDM, Faculte de Medecine, Universite de Rennes 1, Rennes (France); Gibaud, B [Laboratoire IDM, Faculte de Medecine, Universite de Rennes 1, Rennes (France)

    2003-12-21

    Quantitative evaluation of brain MRI/SPECT fusion methods for normal and, in particular, pathological datasets is difficult, due to the frequent lack of relevant ground truth. We propose a methodology to generate MRI and SPECT datasets dedicated to the evaluation of MRI/SPECT fusion methods and illustrate the method when dealing with ictal SPECT. The method consists of generating normal or pathological SPECT data perfectly aligned with a high-resolution 3D T1-weighted MRI, using realistic Monte Carlo simulations that closely reproduce the response of a SPECT imaging system. Anatomical input data for the SPECT simulations are obtained from this 3D T1-weighted MRI, while functional input data result from an inter-individual analysis of anatomically standardized SPECT data. The method makes it possible to control the 'brain perfusion' function by proposing a theoretical model of brain perfusion derived from measurements performed on real SPECT images. Our method provides an absolute gold standard for assessing MRI/SPECT registration accuracy since, by construction, the SPECT data are perfectly registered with the MRI data. The proposed methodology has been applied to create a theoretical model of normal brain perfusion and of the ictal brain perfusion characteristic of mesial temporal lobe epilepsy. To approach realistic and unbiased perfusion models, real SPECT data were corrected for uniform attenuation, scatter and partial volume effect. An anatomic standardization was used to account for anatomic variability between subjects. Realistic simulations of normal and ictal SPECT deduced from these perfusion models are presented. The comparison of real and simulated SPECT images showed relative differences in regional activity concentration of less than 20% in most anatomical structures, for both normal and ictal data, suggesting realistic models of perfusion distributions for evaluation purposes. Inter-hemispheric asymmetry coefficients measured on simulated data were
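
    The two figures of merit quoted above can be computed per anatomical structure with the conventional definitions assumed in the Python sketch below: percent relative difference between real and simulated activity, and an asymmetry index of 100·2(L−R)/(L+R). These formulas are standard conventions, not taken verbatim from the paper, and the regional values are invented.

        import numpy as np

        def relative_difference_pct(real_activity, simulated_activity):
            """Percent relative difference in regional activity concentration
            between real and simulated SPECT, per anatomical structure."""
            real = np.asarray(real_activity, float)
            sim = np.asarray(simulated_activity, float)
            return 100.0 * (sim - real) / real

        def asymmetry_index_pct(left_activity, right_activity):
            """Inter-hemispheric asymmetry coefficient, as commonly defined:
            100 * 2 * (L - R) / (L + R) for paired left/right structures."""
            left = np.asarray(left_activity, float)
            right = np.asarray(right_activity, float)
            return 100.0 * 2.0 * (left - right) / (left + right)

        # Toy regional mean counts for a few structures (illustrative only):
        real = [120.0, 98.0, 143.0]
        sim = [112.0, 103.0, 130.0]
        print(relative_difference_pct(real, sim))    # expect magnitudes < 20%
        print(asymmetry_index_pct([120.0], [98.0]))  # asymmetry of one L/R pair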

  18. ENVIRONMENTALLY REDUCING OF COOLANTS IN METAL CUTTING

    Directory of Open Access Journals (Sweden)

    Veijo KAUPPINEN

    2012-11-01

    Environmental strain is a global problem. In the metal industries, the use of coolant has become increasingly problematic in terms of both employee health and environmental pollution, and coolant use is estimated to account for approximately 8-16% of total production costs. Traditional coolant-based methods are therefore becoming obsolete, and it is clear that a dry cutting system has great implications for resource preservation and waste reduction. For this purpose, a new cooling system was designed for dry cutting. This paper presents this eco-friendly cooling innovation and the benefits gained by using it. The new cooling system relies on a unit that ionises the ejected air. To assess its performance, cutting experiments were carried out: a series of tests were performed on a horizontal turning machine and on a horizontal machining centre.

  19. A comparison of two methods of measuring static coefficient of friction at low normal forces: a pilot study.

    Science.gov (United States)

    Seo, Na Jin; Armstrong, Thomas J; Drinkaus, Philip

    2009-01-01

    This study compares two methods for estimating static friction coefficients for skin. In the first method, referred to as the 'tilt method', a hand supporting a flat object is tilted until the object slides. The friction coefficient is estimated as the tangent of the angle of the object at the slip. The second method estimates the friction coefficient as the pull force required to begin moving a flat object over the surface of the hand, divided by object weight. Both methods were used to estimate friction coefficients for 12 subjects and three materials (cardboard, aluminium, rubber) against a flat hand and against fingertips. No differences in static friction coefficients were found between the two methods, except for that of rubber, where friction coefficient was 11% greater for the tilt method. As with previous studies, the friction coefficients varied with contact force and contact area. Static friction coefficient data are needed for analysis and design of objects that are grasped or manipulated with the hand. The tilt method described in this study can easily be used by ergonomic practitioners to estimate static friction coefficients in the field in a timely manner.
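
    Both estimators described above reduce to one-line formulas (the tangent of the slip angle, and the pull force divided by the object's weight), so a brief Python sketch is enough; the numbers in the example are invented for illustration.

        import math

        def mu_tilt(tilt_angle_deg):
            """Tilt method: the static friction coefficient is the tangent
            of the tilt angle at which the object first slips."""
            return math.tan(math.radians(tilt_angle_deg))

        def mu_pull(pull_force_n, object_weight_n):
            """Pull method: the friction coefficient is the pull force needed
            to initiate sliding, divided by the object's weight (normal force)."""
            return pull_force_n / object_weight_n

        # Example: an object slips at a 25 degree tilt, or starts to slide
        # under a 2.2 N pull when it weighs 4.7 N.
        print(f"tilt method: mu = {mu_tilt(25.0):.3f}")    # tan(25 deg) ~ 0.466
        print(f"pull method: mu = {mu_pull(2.2, 4.7):.3f}")  # ~ 0.468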

  20. CELL AVERAGING CFAR DETECTOR WITH SCALE FACTOR CORRECTION THROUGH THE METHOD OF MOMENTS FOR THE LOG-NORMAL DISTRIBUTION

    Directory of Open Access Journals (Sweden)

    José Raúl Machado Fernández

    2018-01-01

    The new LN-MoM-CA-CFAR detector is presented; it exhibits a reduced deviation of the operational false alarm probability from its intended design value. The solution corrects a fundamental problem of CFAR processors that has been ignored in many developments: most previously proposed schemes deal with abrupt changes in the clutter level, whereas the present solution corrects for slow statistical changes in the background signal. These slow changes have been shown to have a marked influence on the selection of the CFAR multiplicative adjustment factor, and consequently on the maintenance of the false alarm probability. The authors took advantage of the high precision attainable when estimating the Log-Normal shape parameter with the MoM, and of the wide use of this distribution in clutter modelling, to create an architecture that offers accurate results at low computational cost. After intensive processing of 100 million Log-Normal samples, a scheme was created that, by improving the performance of the classical CA-CFAR through continuous correction of its adjustment factor, operates with excellent stability, achieving a deviation of only 0.2884% for a design false alarm probability of 0.01.
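
    As a rough illustration of the idea (not the authors' exact LN-MoM-CA-CFAR architecture), the Python sketch below estimates the Log-Normal parameters of the CFAR reference window by the method of moments and converts a target false alarm probability into a detection threshold through the Gaussian quantile; the window size, seed and cell values are arbitrary.

        import math
        import random
        from statistics import NormalDist

        def ln_mom_threshold(reference_cells, pfa):
            """Estimate Log-Normal parameters from the reference window by the
            method of moments, then return the detection threshold that yields
            the requested false alarm probability for Log-Normal clutter."""
            n = len(reference_cells)
            m1 = sum(reference_cells) / n                 # first raw moment
            m2 = sum(x * x for x in reference_cells) / n  # second raw moment
            sigma2 = math.log(m2 / (m1 * m1))             # shape parameter^2 via MoM
            mu = math.log(m1) - sigma2 / 2.0              # log-scale parameter
            z = NormalDist().inv_cdf(1.0 - pfa)           # standard normal quantile
            return math.exp(mu + z * math.sqrt(sigma2))

        # Quick check on synthetic Log-Normal clutter:
        random.seed(1)
        clutter = [random.lognormvariate(0.0, 0.6) for _ in range(64)]
        threshold = ln_mom_threshold(clutter, pfa=0.01)
        cell_under_test = 5.0
        print("threshold:", round(threshold, 3))
        print("detection:", cell_under_test > threshold)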