WorldWideScience

Sample records for cloud point preconcentration

  1. Cloud point extraction, preconcentration and spectrophotometric determination of nickel in water samples using dimethylglyoxime

    Directory of Open Access Journals (Sweden)

    Morteza Bahram

    2013-01-01

    Full Text Available A new and simple method for the preconcentration and spectrophotometric determination of trace amounts of nickel was developed by cloud point extraction (CPE). In the proposed work, dimethylglyoxime (DMG) was used as the chelating agent and Triton X-114 was selected as a non-ionic surfactant for CPE. The parameters affecting the cloud point extraction, including the pH of the sample solution, the concentrations of the chelating agent and surfactant, and the equilibration temperature and time, were optimized. Under the optimum conditions, the calibration graph was linear in the range of 10-150 ng mL−1 with a detection limit of 4 ng mL−1. The relative standard deviation for 9 replicates of 100 ng mL−1 Ni(II) was 1.04%. The interference effect of some anions and cations was studied. The method was applied to the determination of Ni(II) in water samples with satisfactory results.

  2. Cloud-point extraction and spectrophotometric determination of ...

    African Journals Online (AJOL)

    An eco-friendly, simple and very sensitive method was developed for the preconcentration and determination of clonazepam (CLO) in pharmaceutical dosage forms using the cloud point extraction (CPE) technique. The method is based on cloud point extraction of the product from oxidative coupling between reduced CLO and ...

  3. Lead preconcentration in synthetic samples with Triton X-114 by cloud point extraction and flame atomic absorption analysis (EAAF)

    International Nuclear Information System (INIS)

    Zegarra Pisconti, Marixa; Cjuno Huanca, Jesus

    2015-01-01

    A methodology was developed for the preconcentration of lead in water samples to which dithizone, previously dissolved in the nonionic surfactant Triton X-114, was added as a complexing agent, until formation of the critical micelle concentration and attainment of the cloud point temperature. Centrifugation of the system gave a precipitate with high concentrations of Pb(II) that was measured by flame atomic absorption spectrometry (EAAF). The method proved feasible as a means of preconcentration and analysis of Pb in aqueous samples with concentrations below 1 ppm. Several parameters were evaluated, giving a recovery of 89.8%. (author)

  4. Cloud point extraction-flame atomic absorption spectrometry for pre-concentration and determination of trace amounts of silver ions in water samples.

    Science.gov (United States)

    Yang, Xiupei; Jia, Zhihui; Yang, Xiaocui; Li, Gu; Liao, Xiangjun

    2017-03-01

    A cloud point extraction (CPE) method was used as a pre-concentration strategy prior to the determination of trace levels of silver in water by flame atomic absorption spectrometry (FAAS). The pre-concentration is based on the clouding phenomenon of the non-ionic surfactant Triton X-114 with Ag(I)/diethyldithiocarbamate (DDTC) complexes, in which the latter are soluble in a micellar phase composed of the former. When the temperature rises above the cloud point, the Ag(I)/DDTC complexes are extracted into the surfactant-rich phase. The factors affecting the extraction efficiency, including the pH of the aqueous solution, the concentration of DDTC, the amount of surfactant, and the incubation temperature and time, were investigated and optimized. Under the optimal experimental conditions, no interference was observed in the determination of 100 ng·mL−1 Ag+ in the presence of various cations below their maximum concentrations allowed by this method, for instance 50 μg·mL−1 for both Zn2+ and Cu2+, 80 μg·mL−1 for Pb2+, 1000 μg·mL−1 for Mn2+, and 100 μg·mL−1 for both Cd2+ and Ni2+. The calibration curve was linear in the range of 1-500 ng·mL−1 with a limit of detection (LOD) of 0.3 ng·mL−1. The developed method was successfully applied to the determination of trace levels of silver in water samples such as river water and tap water.

  5. Cloud point extraction-flame atomic absorption spectrometry for pre-concentration and determination of trace amounts of silver ions in water samples

    Directory of Open Access Journals (Sweden)

    Xiupei Yang

    2017-03-01

    Full Text Available A cloud point extraction (CPE) method was used as a pre-concentration strategy prior to the determination of trace levels of silver in water by flame atomic absorption spectrometry (FAAS). The pre-concentration is based on the clouding phenomenon of the non-ionic surfactant Triton X-114 with Ag(I)/diethyldithiocarbamate (DDTC) complexes, in which the latter are soluble in a micellar phase composed of the former. When the temperature rises above the cloud point, the Ag(I)/DDTC complexes are extracted into the surfactant-rich phase. The factors affecting the extraction efficiency, including the pH of the aqueous solution, the concentration of DDTC, the amount of surfactant, and the incubation temperature and time, were investigated and optimized. Under the optimal experimental conditions, no interference was observed in the determination of 100 ng·mL−1 Ag+ in the presence of various cations below their maximum concentrations allowed by this method, for instance 50 μg·mL−1 for both Zn2+ and Cu2+, 80 μg·mL−1 for Pb2+, 1000 μg·mL−1 for Mn2+, and 100 μg·mL−1 for both Cd2+ and Ni2+. The calibration curve was linear in the range of 1-500 ng·mL−1 with a limit of detection (LOD) of 0.3 ng·mL−1. The developed method was successfully applied to the determination of trace levels of silver in water samples such as river water and tap water.

  6. On-line complexation/cloud point preconcentration for the sensitive determination of dysprosium in urine by flow injection inductively coupled plasma-optical emission spectrometry

    International Nuclear Information System (INIS)

    Ortega, Claudia; Cerutti, Soledad; Silva, Maria F.; Olsina, Roberto A.; Martinez, Luis D.

    2003-01-01

    An on-line dysprosium preconcentration and determination system based on the coupling of cloud point extraction (CPE) to flow injection analysis (FIA) associated with ICP-OES was studied. For the preconcentration of dysprosium, a Dy(III)-2-(5-bromo-2-pyridylazo)-5-diethylaminophenol complex was formed on-line at pH 9.22 in the presence of nonionic micelles of PONPE-7.5. The micellar system containing the complex was thermostated at 30 °C to promote phase separation, and the surfactant-rich phase was retained in a microcolumn packed with cotton at pH 9.2. The surfactant-rich phase was eluted with 4 mol L−1 nitric acid at a flow rate of 1.5 mL min−1, directly into the nebulizer of the plasma. An enhancement factor of 50 was obtained for the preconcentration of 50 mL of sample solution. The detection limit for the preconcentration of 50 mL of aqueous Dy solution was 0.03 μg L−1. The precision for 10 replicate determinations at the 2.0 μg L−1 Dy level was 2.2% relative standard deviation (RSD), calculated from the peak heights obtained. The calibration graph using the preconcentration system for dysprosium was linear, with a correlation coefficient of 0.9994, from levels near the detection limit up to at least 100 μg L−1. The method was successfully applied to the determination of dysprosium in urine. (orig.)

  7. Species selective preconcentration and quantification of gold nanoparticles using cloud point extraction and electrothermal atomic absorption spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Hartmann, Georg, E-mail: georg.hartmann@tum.de [Department of Chemistry, Technische Universitaet Muenchen, 85748 Garching (Germany); Schuster, Michael, E-mail: michael.schuster@tum.de [Department of Chemistry, Technische Universitaet Muenchen, 85748 Garching (Germany)

    2013-01-25

    Highlights: ► We optimized cloud point extraction and ET-AAS parameters for Au-NP measurement. ► A selective ligand (sodium thiosulphate) is introduced for species separation. ► A limit of detection of 5 ng Au-NP per L is achieved for aqueous samples. ► Measurement of samples with high natural organic matter content is possible. ► Real water samples including wastewater treatment plant effluent were analyzed. - Abstract: The determination of metallic nanoparticles in environmental samples requires sample pretreatment that ideally combines pre-concentration and species selectivity. With cloud point extraction (CPE) using the surfactant Triton X-114 we present a simple and cost-effective separation technique that meets both criteria. Effective separation of ionic gold species and Au nanoparticles (Au-NPs) is achieved by using sodium thiosulphate as a complexing agent. The extraction efficiency for Au-NPs ranged from 1.01 ± 0.06 (particle size 2 nm) to 0.52 ± 0.16 (particle size 150 nm). An enrichment factor of 80 and a low limit of detection of 5 ng L−1 are achieved using electrothermal atomic absorption spectrometry (ET-AAS) for quantification. TEM measurements showed that the particle size is not affected by the CPE process. Natural organic matter (NOM) is tolerated up to a concentration of 10 mg L−1. The precision of the method, expressed as the standard deviation of 12 replicates at an Au-NP concentration of 100 ng L−1, is 9.5%. A relation between particle concentration and extraction efficiency was not observed. Spiking experiments showed recoveries higher than 91% for environmental water samples.

  8. A New Spectrophotometric Method for Determination of Selenium in Cosmetic and Pharmaceutical Preparations after Preconcentration with Cloud Point Extraction

    Directory of Open Access Journals (Sweden)

    Mohammad Hosein Soruraddin

    2011-01-01

    Full Text Available A simple, rapid, and sensitive spectrophotometric method for the determination of trace amounts of selenium(IV) is described. In this method, all selenium species are reduced to selenium(IV) using 6 M HCl. Cloud point extraction was applied as a preconcentration step for the spectrophotometric determination of selenium(IV) in aqueous solution. The proposed method is based on the complexation of selenium(IV) with dithizone at pH < 1 in a micellar medium (Triton X-100). After complexation with dithizone, the analyte was quantitatively extracted into the surfactant-rich phase by centrifugation and diluted to 5 mL with methanol. Since the absorption maxima of the complex (424 nm) and of dithizone (434 nm) overlap, the corrected absorbance, Acorr, was used to overcome this problem. With regard to the preconcentration, the tested parameters were the extraction pH, the concentration of the surfactant, the concentration of dithizone, and the equilibration temperature and time. The detection limit is 4.4 ng mL−1; the relative standard deviation for six replicate measurements is 2.18% for 50 ng mL−1 of selenium. The procedure was applied successfully to the determination of selenium in two kinds of pharmaceutical samples.
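The corrected-absorbance step above can be sketched numerically. The sketch below assumes the correction is a simple reagent-blank subtraction at the analytical wavelength (424 nm), where excess dithizone still absorbs; the absorbance values are invented for illustration, not taken from the paper.

```python
# Hedged sketch: blank-corrected absorbance for overlapping bands.
# Assumption: A_corr = A(sample, 424 nm) - A(reagent blank, 424 nm).
def corrected_absorbance(a_sample: float, a_blank: float) -> float:
    """Subtract the reagent-blank absorbance measured at 424 nm."""
    return a_sample - a_blank

# Illustrative readings only (not from the abstract):
a_corr = corrected_absorbance(0.512, 0.147)
print(round(a_corr, 3))
```

Only the corrected value is then read against the calibration graph, so the reagent's own contribution at 424 nm cancels.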

  9. Novel analytical reagent for the application of cloud-point preconcentration and flame atomic absorption spectrometric determination of nickel in natural water samples

    International Nuclear Information System (INIS)

    Suvardhan, K.; Rekha, D.; Kumar, K. Suresh; Prasad, P. Reddy; Kumar, J. Dilip; Jayaraj, B.; Chiranjeevi, P.

    2007-01-01

    Cloud-point extraction was applied for the preconcentration of nickel after formation of a complex with the newly synthesized ligand N-quino[8,7-b]azin-5-yl-2,3,5,6,8,9,11,12-octahydrobenzo[b][1,4,7,10,13]pentaoxacyclopentadecin-15-yl-methanimine, followed by determination by flame atomic absorption spectrometry (FAAS), using octyl phenoxy polyethoxy ethanol (Triton X-114) as surfactant. Nickel was complexed with the ligand in an aqueous phase and kept for 15 min in a thermostated bath at 40 °C. Separation of the two phases was accomplished by centrifugation for 15 min at 4000 rpm. The chemical variables affecting the cloud-point extraction were evaluated and optimized, and the method was successfully applied to nickel determination in various water samples. Under the optimized conditions, the preconcentration system, using a 100 mL sample, permitted a 50-fold enhancement factor. A detailed study of various interferences made the method more selective. The detection limit obtained under optimal conditions was 0.042 ng mL−1. The extraction efficiency was investigated at different nickel concentrations (20-80 ng mL−1) and good recoveries (99.05-99.93%) were obtained with the present method. The proposed method was applied successfully to the determination of nickel in various water samples and compared with a reported method in terms of Student's t-test and the variance-ratio F-test, which indicate the significance of the present method over the reported spectrophotometric method at the 95% confidence level
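The statistical comparison mentioned above (Student's t-test on means, variance-ratio F-test on precision) can be sketched as follows. The recovery data below are invented for illustration, and the computed statistics would still have to be compared against tabulated critical values at the chosen confidence level.

```python
# Hedged sketch: pooled two-sample t statistic and variance-ratio F
# statistic for comparing two analytical methods. Data are invented.
from statistics import mean, variance

proposed = [99.1, 99.5, 99.8, 99.3, 99.6]   # % recovery, proposed method
reported = [98.7, 99.9, 99.2, 98.9, 100.1]  # % recovery, reported method

n1, n2 = len(proposed), len(reported)
v1, v2 = variance(proposed), variance(reported)

# Pooled (equal-variance) two-sample t statistic
sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
t_stat = (mean(proposed) - mean(reported)) / (sp2 * (1 / n1 + 1 / n2)) ** 0.5

# Variance-ratio F statistic, larger variance in the numerator
f_stat = max(v1, v2) / min(v1, v2)

print(f"t = {t_stat:.3f}, F = {f_stat:.3f}")
```

If |t| and F stay below the tabulated critical values at the 95% confidence level, the two methods do not differ significantly in mean result or in precision.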

  10. Species selective preconcentration and quantification of gold nanoparticles using cloud point extraction and electrothermal atomic absorption spectrometry

    International Nuclear Information System (INIS)

    Hartmann, Georg; Schuster, Michael

    2013-01-01

    Highlights: ► We optimized cloud point extraction and ET-AAS parameters for Au-NP measurement. ► A selective ligand (sodium thiosulphate) is introduced for species separation. ► A limit of detection of 5 ng Au-NP per L is achieved for aqueous samples. ► Measurement of samples with high natural organic matter content is possible. ► Real water samples including wastewater treatment plant effluent were analyzed. - Abstract: The determination of metallic nanoparticles in environmental samples requires sample pretreatment that ideally combines pre-concentration and species selectivity. With cloud point extraction (CPE) using the surfactant Triton X-114 we present a simple and cost-effective separation technique that meets both criteria. Effective separation of ionic gold species and Au nanoparticles (Au-NPs) is achieved by using sodium thiosulphate as a complexing agent. The extraction efficiency for Au-NPs ranged from 1.01 ± 0.06 (particle size 2 nm) to 0.52 ± 0.16 (particle size 150 nm). An enrichment factor of 80 and a low limit of detection of 5 ng L−1 are achieved using electrothermal atomic absorption spectrometry (ET-AAS) for quantification. TEM measurements showed that the particle size is not affected by the CPE process. Natural organic matter (NOM) is tolerated up to a concentration of 10 mg L−1. The precision of the method, expressed as the standard deviation of 12 replicates at an Au-NP concentration of 100 ng L−1, is 9.5%. A relation between particle concentration and extraction efficiency was not observed. Spiking experiments showed recoveries higher than 91% for environmental water samples.

  11. Simultaneous cloud point extraction of low levels of Cd, Cr and Hg in ...

    African Journals Online (AJOL)

    A one-step preconcentration cloud point extraction (CPE) method has been developed for the simultaneous determination of Cd, Cr, and Hg using a mixture of 1-(2-pyridylazo)-2-naphthol (PAN) and 1-(2-thiazolylazo)-2-naphthol (TAN) chelating agents and polyoxyethylene nonylphenylether-20 (PONPE-20) surfactant.

  12. Determination of isoquercitrin in rat plasma by high performance liquid chromatography coupled with a novel synergistic cloud point extraction.

    Science.gov (United States)

    Zhou, Jun; Sun, Jiang Bing; Wang, Qiao Feng

    2018-01-01

    A novel improved preconcentration method known as synergistic cloud point extraction was established for isoquercitrin preconcentration in rat plasma prior to its determination by high performance liquid chromatography. Synergistic cloud point extraction greatly simplified isoquercitrin extraction and detection. The method was accomplished at room temperature (about 22 °C) in 1 min with the nonionic surfactant Tergitol TMN-6 as the extractant and n-octanol as cloud point revulsant and synergic reagent. Parameters that affect the synergistic cloud point extraction process, such as the concentration of Tergitol TMN-6, the volume of n-octanol, sample pH, salt content and extraction time, were investigated and optimized. Under the optimum conditions, the calibration curve for the analyte was linear in the range from 5 to 500 ng mL−1 with a correlation coefficient greater than 0.9996. The limit of detection (S/N = 3) was less than 1.6 ng mL−1 and the limit of quantification (S/N = 10) was less than 5 ng mL−1. The method can be successfully applied to the pharmacokinetic investigation of isoquercitrin.

  13. Species selective preconcentration and quantification of gold nanoparticles using cloud point extraction and electrothermal atomic absorption spectrometry.

    Science.gov (United States)

    Hartmann, Georg; Schuster, Michael

    2013-01-25

    The determination of metallic nanoparticles in environmental samples requires sample pretreatment that ideally combines pre-concentration and species selectivity. With cloud point extraction (CPE) using the surfactant Triton X-114 we present a simple and cost-effective separation technique that meets both criteria. Effective separation of ionic gold species and Au nanoparticles (Au-NPs) is achieved by using sodium thiosulphate as a complexing agent. The extraction efficiency for Au-NPs ranged from 1.01 ± 0.06 (particle size 2 nm) to 0.52 ± 0.16 (particle size 150 nm). An enrichment factor of 80 and a low limit of detection of 5 ng L−1 are achieved using electrothermal atomic absorption spectrometry (ET-AAS) for quantification. TEM measurements showed that the particle size is not affected by the CPE process. Natural organic matter (NOM) is tolerated up to a concentration of 10 mg L−1. The precision of the method, expressed as the standard deviation of 12 replicates at an Au-NP concentration of 100 ng L−1, is 9.5%. A relation between particle concentration and extraction efficiency was not observed. Spiking experiments showed recoveries higher than 91% for environmental water samples.

  14. Preconcentrative separation of chromium(III) species from chromium(VI) by cloud point extraction and determination by flame atomic absorption spectrometry

    International Nuclear Information System (INIS)

    Yildiz, Z.; Arslan, G.; Tor, A.

    2011-01-01

    We describe a high-throughput technique for the determination of chromium species in water samples by flame atomic absorption spectrometry (FAAS) after preconcentrative separation of Cr(III) species from Cr(VI) by cloud point extraction (CPE), using diethyldithiocarbamate (DDTC) as the chelating agent and the nonionic surfactant Triton X-100 as the extractant. The Cr(III)-DDTC complex is extracted if the temperature is higher than the CPE temperature of Triton X-100, while Cr(VI) remains in the aqueous phase. The Cr(III) in the surfactant phase was analyzed by FAAS, and the concentration of Cr(VI) was calculated by subtracting Cr(III) from the total chromium, which was directly determined by FAAS. The effects of pH, chelating agent concentration, surfactant concentration, and equilibration temperature were investigated. The detection limit for Cr(III) was 0.08 μg L−1 with an enrichment factor of 98, and the relative standard deviation was 1.2% (n = 3, c = 100 μg L−1). A certified reference material and several water samples were analyzed with satisfactory results. (author)

  15. Application of cloud point preconcentration and flame atomic absorption spectrometry for the determination of cadmium and zinc ions in urine, blood serum and water samples

    Directory of Open Access Journals (Sweden)

    Ardeshir Shokrollahi

    2013-01-01

    Full Text Available A simple, sensitive and selective cloud point extraction procedure is described for the preconcentration and atomic absorption spectrometric determination of Zn2+ and Cd2+ ions in water and biological samples, after complexation with 3,3',3'',3'''-tetraindolyl(terephthaloyl)dimethane (TTDM) in basic medium, using Triton X-114 as nonionic surfactant. Detection limits of 3.0 and 2.0 µg L−1 and quantification limits of 10.0 and 7.0 µg L−1 were obtained for Zn2+ and Cd2+ ions, respectively. Relative standard deviations were 2.9% and 3.3%, and enrichment factors 23.9 and 25.6, for Zn2+ and Cd2+ ions, respectively. The method enabled determination of low levels of Zn2+ and Cd2+ ions in urine, blood serum and water samples.

  16. Preconcentration and determination of zinc and lead ions by a combination of cloud point extraction and flame atomic absorption spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Tavallali, H. [Chemistry Department, Payamenore University, Shiraz (Iran); Shokrollahi, A.; Zahedi, M. [Chemistry Department, Yasouj University, Yasouj (Iran); Niknam, K. [Chemistry Department, Persian Gulf University, Bushehr (Iran); Soylak, M. [Chemistry Department, University of Erciyes, Kayseri (Turkey); Ghaedi, M.

    2009-04-15

    The phase-separation phenomenon of non-ionic surfactants in aqueous solution was used for the extraction of lead(II) and zinc(II). After complexation with 3-[(4-bromophenyl)(1H-inden-3-yl)methyl]-1H-indene (BPIMI), the analytes were quantitatively extracted into the Triton X-114-rich phase after centrifugation. Methanol acidified with 1 mol/L HNO3 was added to the surfactant-rich phase prior to its analysis by flame atomic absorption spectrometry (FAAS). The concentration of the ligand, the concentration of Triton X-114, the pH and the amount of surfactant were all optimized. Detection limits (3 SDb/m) of 2.5 and 1.6 ng/mL for Pb2+ and Zn2+, along with preconcentration factors of 30 and enrichment factors of 32 and 48 for Pb2+ and Zn2+ ions, were obtained, respectively. The proposed cloud point extraction has been successfully applied, with high efficiency, to the determination of these ions in real samples with complicated matrices such as food and soil. (Abstract Copyright [2009], Wiley Periodicals, Inc.)

  17. Simultaneous preconcentration of copper, zinc, cadmium, and nickel in water samples by cloud point extraction using 4-(2-pyridylazo)-resorcinol and their determination by inductively coupled plasma optical emission spectrometry

    International Nuclear Information System (INIS)

    Silva, Edson Luiz; Santos Roldan, Paulo dos; Gine, Maria Fernanda

    2009-01-01

    A procedure for the simultaneous separation/preconcentration of copper, zinc, cadmium, and nickel in water samples, based on cloud point extraction (CPE) as a step prior to their determination by inductively coupled plasma optical emission spectrometry (ICP-OES), has been developed. The analytes reacted with 4-(2-pyridylazo)-resorcinol (PAR) at pH 5 to form hydrophobic chelates, which were separated and preconcentrated in a surfactant-rich phase of octylphenoxypolyethoxyethanol (Triton X-114). The parameters affecting the extraction efficiency of the proposed method, such as sample pH, complexing agent concentration, buffer amount, surfactant concentration, temperature, kinetics of the complexation reaction, and incubation time, were optimized; their respective values were 5, 0.6 mmol L−1, 0.3 mL, 0.15% (w/v), 50 °C, 40 min, and 10 min for 15 mL of preconcentrated solution. The method showed precision (R.S.D.) between 1.3% and 2.6% (n = 9). The concentration factors with and without dilution of the surfactant-rich phase ranged from 9.4 to 10.1 and from 94.0 to 100.1, respectively. The limits of detection (L.O.D.) obtained for copper, zinc, cadmium, and nickel were 1.2, 1.1, 1.0, and 6.3 μg L−1, respectively. The accuracy of the procedure was evaluated through recovery experiments on aqueous samples.
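The two concentration-factor ranges quoted above (roughly 10 with dilution of the surfactant-rich phase and roughly 100 without) follow directly from volume ratios. The sketch below uses assumed phase volumes chosen only to reproduce that order of magnitude; the actual final volumes are not given in the abstract.

```python
# Hedged sketch: concentration factor as a volume ratio.
# The final volumes below are assumptions for illustration.
def concentration_factor(v_sample_ml: float, v_measured_ml: float) -> float:
    """Ratio of initial sample volume to final measured volume."""
    return v_sample_ml / v_measured_ml

v_sample = 15.0          # mL of preconcentrated solution (from the abstract)
v_rich_phase = 0.15      # mL surfactant-rich phase (assumed)
v_after_dilution = 1.5   # mL after a 10-fold dilution (assumed)

print(concentration_factor(v_sample, v_rich_phase))      # ~100, undiluted
print(concentration_factor(v_sample, v_after_dilution))  # ~10, diluted
```

Diluting the viscous surfactant-rich phase trades sensitivity for easier nebulization, which is why both factors are reported.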

  18. Determination of trace inorganic mercury species in water samples by cloud point extraction and UV-vis spectrophotometry.

    Science.gov (United States)

    Ulusoy, Halil Ibrahim

    2014-01-01

    A new micelle-mediated extraction method was developed for the preconcentration of ultratrace Hg(II) ions prior to spectrophotometric determination. 2-(2'-Thiazolylazo)-p-cresol (TAC) and PONPE 7.5 were used as the chelating agent and nonionic surfactant, respectively. Hg(II) ions form a hydrophobic complex with TAC in a micellar medium. The main factors affecting the cloud point extraction efficiency, such as the pH of the medium, the concentrations of TAC and PONPE 7.5, and the equilibration temperature and time, were investigated in detail. An overall preconcentration factor of 33.3 was obtained upon preconcentration of a 50 mL sample. The LOD obtained under the optimal conditions was 0.86 µg/L, and the RSD for five replicate measurements of 100 µg/L Hg(II) was 3.12%. The method was successfully applied to the determination of Hg in environmental water samples.

  19. Determination of ultra-trace aluminum in human albumin by cloud point extraction and graphite furnace atomic absorption spectrometry

    International Nuclear Information System (INIS)

    Sun Mei; Wu Qianghua

    2010-01-01

    A cloud point extraction (CPE) method for the preconcentration of ultra-trace aluminum in human albumin prior to its determination by graphite furnace atomic absorption spectrometry (GFAAS) has been developed. The CPE method is based on the complex of Al(III) with 1-(2-pyridylazo)-2-naphthol (PAN), with Triton X-114 used as the non-ionic surfactant. The main factors affecting the cloud point extraction efficiency, such as the pH of the solution, the concentration and kind of complexing agent, the concentration of the non-ionic surfactant, and the equilibration temperature and time, were investigated in detail. An enrichment factor of 34.8 was obtained for the preconcentration of Al(III) from a 10 mL solution. Under the optimal conditions, the detection limit for Al(III) was 0.06 ng mL−1. The relative standard deviation (n = 7) was 3.6%, and recoveries of aluminum ranged from 92.3% to 94.7% for three samples. The method is simple, accurate and sensitive, and can be applied to the determination of ultra-trace aluminum in human albumin.

  20. Determination of ultra-trace aluminum in human albumin by cloud point extraction and graphite furnace atomic absorption spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Sun Mei, E-mail: sunmei@ustc.edu.cn [Hefei National Laboratory for Physical Sciences on Microscale, University of Science and Technology of China, No. 96, Jinzhai Road, Hefei 230026 (China); Wu Qianghua [Department of Polymer Science and Engineering, University of Science and Technology of China, Hefei 230026 (China)

    2010-04-15

    A cloud point extraction (CPE) method for the preconcentration of ultra-trace aluminum in human albumin prior to its determination by graphite furnace atomic absorption spectrometry (GFAAS) has been developed. The CPE method is based on the complex of Al(III) with 1-(2-pyridylazo)-2-naphthol (PAN), with Triton X-114 used as the non-ionic surfactant. The main factors affecting the cloud point extraction efficiency, such as the pH of the solution, the concentration and kind of complexing agent, the concentration of the non-ionic surfactant, and the equilibration temperature and time, were investigated in detail. An enrichment factor of 34.8 was obtained for the preconcentration of Al(III) from a 10 mL solution. Under the optimal conditions, the detection limit for Al(III) was 0.06 ng mL−1. The relative standard deviation (n = 7) was 3.6%, and recoveries of aluminum ranged from 92.3% to 94.7% for three samples. The method is simple, accurate and sensitive, and can be applied to the determination of ultra-trace aluminum in human albumin.

  21. Determination of synthetic phenolic antioxidants in cake by HPLC/DAD after mixed micelle-mediated cloud point extraction

    International Nuclear Information System (INIS)

    Wang, P.; Liu, C.

    2013-01-01

    A mixed micelle-mediated cloud point extraction (MMCPE) system was developed for the extraction and preconcentration of four synthetic phenolic antioxidants (SPAs) (propyl gallate (PG), tert-butylhydroquinone (TBHQ), butylated hydroxyanisole (BHA) and octyl gallate (OG)) in cake. A mixture of two non-ionic surfactants, the polyoxyethylene nonyl phenyl ethers NP-7 and NP-9, was used as the micellar medium for preconcentration and extraction of the SPAs. The surfactant-rich phase was then analyzed by high performance liquid chromatography with diode array detection (HPLC-DAD). The effects of parameters such as surfactant concentration, the proportion of NP-7 to NP-9, and the equilibration time and temperature on the cloud point extraction (CPE) were carefully optimized. Under the studied conditions, the four SPAs were separated within 12 min. The relative standard deviations (RSD, n = 6) were 1.2-2.0% and the limits of detection (LOD) were 1.5 ng mL−1 for PG, 3.6 ng mL−1 for TBHQ, 2.9 ng mL−1 for BHA, and 0.8 ng mL−1 for OG. Recoveries of the SPAs in spiked cake samples were in the range of 92% to 99%. The MMCPE method showed a clear advantage for the preconcentration of the SPAs, with an enrichment factor of 25. Moreover, the method is simple, highly sensitive, consumes much less solvent, and offers a significant advantage in extraction efficiency compared with traditional CPE methods. (author)

  22. Preconcentration and determination of iron and copper in spice samples by cloud point extraction and flow injection flame atomic absorption spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Sahin, Cigdem Arpa, E-mail: carpa@hacettepe.edu.tr [Hacettepe University, Chemistry Department, 06800 Beytepe, Ankara (Turkey); Tokgoez, Ilknur; Bektas, Sema [Hacettepe University, Chemistry Department, 06800 Beytepe, Ankara (Turkey)

    2010-09-15

    A flow injection (FI) cloud point extraction (CPE) method for the determination of iron and copper by flame atomic absorption spectrometry (FAAS) has been developed. The analytes were complexed with 3-amino-7-dimethylamino-2-methylphenazine (Neutral Red, NR), and octylphenoxypolyethoxyethanol (Triton X-114) was added as a surfactant. The micellar solution was heated above 50 °C and loaded through a column packed with cotton for phase separation. The surfactant-rich phase was then eluted with 0.05 mol L−1 H2SO4 and the analytes were determined by FAAS. The chemical and flow variables influencing the instrumental and extraction conditions were optimized. Under the optimized conditions, for 25 mL of preconcentrated solution, the enrichment factors were 98 and 69, the limits of detection (3s) were 0.7 and 0.3 ng mL−1, and the limits of quantification (10s) were 2.2 and 1.0 ng mL−1 for iron and copper, respectively. The relative standard deviations (RSD) for ten replicate measurements of 10 ng mL−1 iron and copper were 2.1% and 1.8%, respectively. The proposed method was successfully applied to the determination of iron and copper in spice samples.
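The "(3s)" and "(10s)" limits quoted above follow the usual convention LOD = 3·s_blank/m and LOQ = 10·s_blank/m, where s_blank is the standard deviation of replicate blank readings and m the calibration slope. A minimal sketch with invented numbers:

```python
# Hedged sketch: 3s/10s detection and quantification limits.
# Blank signals and calibration slope are invented for illustration.
from statistics import stdev

blank_signals = [0.0021, 0.0024, 0.0019, 0.0022, 0.0020,
                 0.0023, 0.0021, 0.0025, 0.0020, 0.0022]
slope = 0.0009  # absorbance per (ng/mL), assumed calibration slope

s_blank = stdev(blank_signals)
lod = 3 * s_blank / slope   # limit of detection (3s)
loq = 10 * s_blank / slope  # limit of quantification (10s)

print(f"LOD = {lod:.2f} ng/mL, LOQ = {loq:.2f} ng/mL")
```

By construction the LOQ is 10/3 times the LOD, which matches the roughly threefold gap between the paired limits reported in the abstract.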

  23. Determination of Cd in urine by cloud point extraction-tungsten coil atomic absorption spectrometry.

    Science.gov (United States)

    Donati, George L; Pharr, Kathryn E; Calloway, Clifton P; Nóbrega, Joaquim A; Jones, Bradley T

    2008-09-15

    Cadmium concentrations in human urine are typically at or below the 1 µg L−1 level, so only a handful of techniques may be appropriate for this application. These include sophisticated methods such as graphite furnace atomic absorption spectrometry and inductively coupled plasma mass spectrometry. While tungsten coil atomic absorption spectrometry is a simpler and less expensive technique, its practical detection limits often prohibit the detection of Cd in normal urine samples. In addition, the nature of the urine matrix often necessitates accurate background correction techniques, which would add expense and complexity to the tungsten coil instrument. This manuscript describes a cloud point extraction method that reduces matrix interference while preconcentrating Cd by a factor of 15. Ammonium pyrrolidinedithiocarbamate and Triton X-114 are used as complexing agent and surfactant, respectively, in the extraction procedure. Triton X-114 forms an extractant coacervate surfactant-rich phase that is denser than water, so the aqueous supernatant is easily removed, leaving the metal-containing surfactant layer intact. A 25 µL aliquot of this preconcentrated sample is placed directly onto the tungsten coil for analysis. The cloud point extraction procedure allows for simple background correction based either on the measurement of absorption at a nearby wavelength, or on measurement of absorption at a time in the atomization step immediately prior to the onset of the Cd signal. Seven human urine samples were analyzed by this technique and the results compared to those found by inductively coupled plasma mass spectrometry analysis of the same samples performed at a different institution. The limit of detection for Cd in urine is 5 ng L−1 for cloud point extraction tungsten coil atomic absorption spectrometry. The accuracy of the method was determined with a standard reference material (toxic metals in freeze-dried urine) and the determined values agree with the certified values.

  4. Determination of cadmium in real water samples by flame atomic absorption spectrometry after cloud point extraction

    International Nuclear Information System (INIS)

    Naeemullah, A.; Kazi, T.G.

    2011-01-01

    Water pollution is a global threat and a leading worldwide cause of death and disease. Awareness of the potential danger posed by heavy metals to ecosystems, and in particular to human health, has grown tremendously in the past decades. Separation and preconcentration procedures are of great importance in analytical and environmental chemistry. Cloud point extraction is one of the most reliable and sophisticated separation methods for the determination of trace quantities of heavy metals. Cloud point methodology was successfully employed for the preconcentration of trace quantities of cadmium prior to its determination by flame atomic absorption spectrometry (FAAS). The metal reacts with 8-hydroxyquinoline in a surfactant (Triton X-114) medium. Parameters such as pH, concentration of the reagent and Triton X-114, equilibration temperature and centrifuging time were evaluated and optimized to enhance the sensitivity and extraction efficiency of the proposed method. Dilution of the surfactant-rich phase with acidified ethanol was performed after phase separation and the cadmium content was measured by FAAS. Validation of the procedure was carried out by spike addition methods. The method was applied to the determination of Cd in water samples from different ecosystems (lake and river). (author)

  5. Georeferenced Point Clouds: A Survey of Features and Point Cloud Management

    Directory of Open Access Journals (Sweden)

    Johannes Otepka

    2013-10-01

    Full Text Available This paper presents a survey of georeferenced point clouds. The focus is, on the one hand, on features that originate in the measurement process itself and on features derived by processing the point cloud. On the other hand, approaches for the processing of georeferenced point clouds are reviewed, including data structures as well as spatial processing concepts. We suggest a categorization of features into levels that reflect the amount of processing. Point clouds are found across many disciplines, which is reflected in the versatility of the literature suggesting specific features.

  6. Determination of ultra-trace aluminum in human albumin by cloud point extraction and graphite furnace atomic absorption spectrometry.

    Science.gov (United States)

    Sun, Mei; Wu, Qianghua

    2010-04-15

    A cloud point extraction (CPE) method for the preconcentration of ultra-trace aluminum in human albumin prior to its determination by graphite furnace atomic absorption spectrometry (GFAAS) has been developed. The CPE method is based on the complex of Al(III) with 1-(2-pyridylazo)-2-naphthol (PAN), with Triton X-114 used as the non-ionic surfactant. The main factors affecting cloud point extraction efficiency, such as pH of the solution, concentration and kind of complexing agent, concentration of non-ionic surfactant, and equilibration temperature and time, were investigated in detail. An enrichment factor of 34.8 was obtained for the preconcentration of Al(III) from a 10 mL solution. Under the optimal conditions, the detection limit for Al(III) was 0.06 ng mL(-1). The relative standard deviation (n=7) was 3.6%, and recoveries of aluminum ranged from 92.3% to 94.7% for three samples. This method is simple, accurate and sensitive, and can be applied to the determination of ultra-trace aluminum in human albumin. 2009 Elsevier B.V. All rights reserved.

  7. Micelle-mediated methodology for the preconcentration of uranium prior to its determination by flow injection

    Energy Technology Data Exchange (ETDEWEB)

    Fernandez Laespada, M E; Perez Pavon, J L; Moreno Cordero, B [Univ. de Salamanca (Spain). Dept. de Quimica Analitica, Nutricion y Bromatologia

    1993-02-01

    Cloud point extraction has been used for the preconcentration of uranium, prior to its determination by flow injection. The non-ionic surfactant employed was Triton X-114 and the reagent chosen to form a hydrophobic chelate of uranium was 1-(2-pyridylazo)-2-naphthol. The optimum conditions for the preconcentration and determination of uranium have been studied. This methodology has been applied to the determination of trace amounts of uranium in tap and river waters from Salamanca. (Author).

  8. Cloud point extraction of palladium in water samples and alloy mixtures using new synthesized reagent with flame atomic absorption spectrometry (FAAS)

    International Nuclear Information System (INIS)

    Priya, B. Krishna; Subrahmanayam, P.; Suvardhan, K.; Kumar, K. Suresh; Rekha, D.; Rao, A. Venkata; Rao, G.C.; Chiranjeevi, P.

    2007-01-01

    The present paper outlines a novel, simple and sensitive method for the determination of palladium by flame atomic absorption spectrometry (FAAS) after separation and preconcentration by cloud point extraction (CPE). The cloud point methodology was successfully applied to palladium determination in water samples and alloys using the new reagent 4-(2-naphthalenyl)thiazol-2-yl azo chromotropic acid (NTACA) as chelating agent and Triton X-114 as non-ionic surfactant. Parameters such as pH, concentration of the reagent and Triton X-114, equilibration temperature and centrifuging time were evaluated and optimized to enhance the sensitivity and extraction efficiency of the proposed method. The preconcentration factor was found to be 50-fold for a 250 mL water sample. Under optimum conditions the detection limit was 0.067 ng mL(-1) for palladium in various environmental matrices. The present method was applied to the determination of palladium in various water samples and alloys; the results show good agreement with a reported method, and recoveries are in the range of 96.7-99.4%.

  9. Preconcentration and determination of vanadium and molybdenum in milk, vegetables and foodstuffs by ultrasonic-thermostatic-assisted cloud point extraction coupled to flame atomic absorption spectrometry.

    Science.gov (United States)

    Gürkan, Ramazan; Korkmaz, Sema; Altunay, Nail

    2016-08-01

    A new ultrasonic-thermostatic-assisted cloud point extraction procedure (UTA-CPE) was developed for the preconcentration of trace levels of vanadium (V) and molybdenum (Mo) in milk, vegetables and foodstuffs prior to determination by flame atomic absorption spectrometry (FAAS). The method is based on the ion association of the stable anionic oxalate complexes of V(V) and Mo(VI) with [9-(diethylamino)benzo[a]phenoxazin-5-ylidene]azanium sulfate (Nile blue A) at pH 4.5, followed by extraction of the formed ion-association complexes into the micellar phase of polyoxyethylene(7.5)nonylphenyl ether (PONPE 7.5). UTA-CPE is greatly simplified and accelerated compared with traditional cloud point extraction (CPE). The analytical parameters optimized were solution pH, the concentrations of the complexing reagents (oxalate and Nile blue A), the PONPE 7.5 concentration, electrolyte concentration, sample volume, temperature and ultrasonic power. Under the optimum conditions, the calibration curves for Mo(VI) and V(V) were linear in the concentration ranges of 3-340 µg L(-1) and 5-250 µg L(-1), with high sensitivity enhancement factors (EFs) of 145 and 115, respectively. The limits of detection (LODs) for Mo(VI) and V(V) were 0.86 and 1.55 µg L(-1), respectively. The proposed method demonstrated good performance, with relative standard deviations (RSD %) ≤3.5% and spike recoveries of 95.7-102.3%. The accuracy of the method was assessed by analysis of two standard reference materials (SRMs) and recoveries of spiked solutions. The method was successfully applied to the determination of trace amounts of Mo(VI) and V(V) in milk, vegetables and foodstuffs with satisfactory results. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Environmentally safe separation and pre-concentration of rhodium and ruthenium from spent nuclear fuel using mixed-micelle cloud point extraction and determination by ICP-MS

    International Nuclear Information System (INIS)

    Ranjit, M.; Meeravali, N.N.; Kumar, S.J.

    2010-01-01

    Full text: Recently, spent nuclear fuel waste from thermal and fast reactors has emerged as an alternative, valuable resource for Rh, Ru and Pd. In addition, the presence of these metals causes difficulties in the vitrification process. Hence, their safe extraction from these wastes should be carried out using an environmentally friendly extraction procedure. In this study, we report a simple mixed-micelle cloud point extraction (MM-CPE) procedure for the separation and pre-concentration of Rh, Ru and Pd. The MM-CPE is carried out from aqueous chloride medium with Aliquat-336/Triton X-114 mixed micelles in the absence and presence of tin(II) chloride. In chloride medium alone, only Pd is extracted quantitatively, while extraction of Rh and Ru is negligible. In the presence of tin chloride, the extraction of Rh and Ru increases and becomes quantitative, without affecting the extraction of Pd. The MM-CPE conditions are optimized with respect to variables such as HCl, Aliquat-336, Triton X-114 and tin chloride concentrations, and incubation time and temperature. Under the optimized conditions, the accuracy of the procedure is verified by recovery studies carried out on real water samples. Work is in progress to apply the procedure to real nuclear fuel waste samples.

  11. Environmental monitoring of phenolic pollutants in water by cloud point extraction prior to micellar electrokinetic chromatography.

    Science.gov (United States)

    Stege, Patricia W; Sombra, Lorena L; Messina, Germán A; Martinez, Luis D; Silva, María F

    2009-05-01

    Many aromatic compounds can be found in the environment as a result of anthropogenic activities and some of them are highly toxic. The need to determine low concentrations of pollutants requires analytical methods with high sensitivity, selectivity, and resolution for application to soil, sediment, water, and other environmental samples. Complex sample preparation involving analyte isolation and enrichment is generally necessary before the final analysis. The present paper outlines a novel, simple, low-cost, and environmentally friendly method for the simultaneous determination of p-nitrophenol (PNP), p-aminophenol (PAP), and hydroquinone (HQ) by micellar electrokinetic capillary chromatography after preconcentration by cloud point extraction. Enrichment factors of 180 to 200 were achieved. The limits of detection of the analytes for the preconcentration of 50-ml sample volume were 0.10 microg L(-1) for PNP, 0.20 microg L(-1) for PAP, and 0.16 microg L(-1) for HQ. The optimized procedure was applied to the determination of phenolic pollutants in natural waters from San Luis, Argentina.

  12. The algorithm to generate color point-cloud with the registration between panoramic image and laser point-cloud

    International Nuclear Information System (INIS)

    Zeng, Fanyang; Zhong, Ruofei

    2014-01-01

    A laser point cloud contains only intensity information, so it is necessary for visual interpretation to obtain color information from another sensor. Cameras can provide texture, color, and other information about the corresponding objects. Points assigned the color of the corresponding pixels in digital images can be used to generate a color point cloud, which is conducive to the visualization, classification and modeling of point clouds. Different types of digital cameras are used in different Mobile Measurement Systems (MMS), and the principles and processes for generating color point clouds differ between systems. The most prominent feature of panoramic images is their 360-degree field of view in the horizontal direction, which captures as much of the image information around the camera as possible. In this paper, we introduce a method to generate a color point cloud from a panoramic image and a laser point cloud, and derive the equations for the correspondence between points in panoramic images and laser point clouds. The fusion of panoramic image and laser point cloud rests on the collinearity of three points: the center of the omnidirectional multi-camera system, the image point on the sphere, and the object point. The experimental results show that the proposed algorithm and formulae are correct.
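The collinearity condition described above (camera center, image point on the sphere, object point) reduces, for an idealized panoramic camera, to mapping the direction of each laser point onto the image sphere. The sketch below assumes a standard equirectangular panorama; a real omnidirectional multi-camera rig would use its own calibrated model.

```python
import math

def project_to_panorama(point, camera_center, width, height):
    """Project a 3D object point onto an equirectangular panoramic image.

    Assumes camera center, sphere point and object point are collinear and
    that the panorama uses a plain equirectangular mapping (an assumption,
    not the specific calibration of any particular MMS camera).
    """
    x = point[0] - camera_center[0]
    y = point[1] - camera_center[1]
    z = point[2] - camera_center[2]
    r = math.sqrt(x * x + y * y + z * z)
    azimuth = math.atan2(y, x)      # -pi..pi around the vertical axis
    elevation = math.asin(z / r)    # -pi/2..pi/2 above/below the horizon
    u = (azimuth / (2 * math.pi) + 0.5) * width
    v = (0.5 - elevation / math.pi) * height
    return u, v

# A point on the horizon straight ahead of the camera lands on the image center line:
u, v = project_to_panorama((1.0, 0.0, 0.0), (0.0, 0.0, 0.0), 8000, 4000)
```

Looking up the color at pixel (u, v) for every laser point then yields the color point cloud.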

  13. Cloud point extraction and flame atomic absorption spectrometric determination of cadmium and nickel in drinking and wastewater samples.

    Science.gov (United States)

    Naeemullah; Kazi, Tasneem G; Shah, Faheem; Afridi, Hassan I; Baig, Jameel Ahmed; Soomro, Abdul Sattar

    2013-01-01

    A simple method for the preconcentration of cadmium (Cd) and nickel (Ni) in drinking and wastewater samples was developed. Cloud point extraction has been used for the preconcentration of both metals, after formation of complexes with 8-hydroxyquinoline (8-HQ) and extraction with the surfactant octylphenoxypolyethoxyethanol (Triton X-114). Dilution of the surfactant-rich phase with acidified ethanol was performed after phase separation, and the Cd and Ni contents were measured by flame atomic absorption spectrometry. The experimental variables, such as pH, amounts of reagents (8-HQ and Triton X-114), temperature, incubation time, and sample volume, were optimized. After optimization of the complexation and extraction conditions, enhancement factors of 80 and 61, with LOD values of 0.22 and 0.52 microg/L, were obtained for Cd and Ni, respectively. The proposed method was applied satisfactorily for the determination of both elements in drinking and wastewater samples.

  14. MOVING WINDOW SEGMENTATION FRAMEWORK FOR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    G. Sithole

    2012-07-01

    Full Text Available As lidar point clouds become larger, streamed processing becomes more attractive. This paper presents a framework for the streamed segmentation of point clouds, with the intention of segmenting unstructured point clouds in real time. The framework is composed of two main components. The first component segments points within a window shifting over the point cloud. The second component stitches the segments within the windows together. In this fashion a point cloud can be streamed through these two components in sequence, thus producing a segmentation. The algorithm has been tested on an airborne lidar point cloud, and some results on the performance of the framework are presented.
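The two-component design described above (segment within a shifting window, then stitch) can be illustrated with a deliberately simplified single-linkage variant. This is not the paper's algorithm, only a sketch of the windowed-streaming idea; the eps and window values are arbitrary.

```python
import math

def stream_segments(points, eps=1.0, window=5.0):
    """Toy streamed segmentation: points arrive sorted by x; a point joins an
    active segment when it lies within eps of that segment's most recent
    point, and segments the window has passed are emitted as finished."""
    active, finished = [], []
    for p in sorted(points):
        # retire segments that have fallen behind the moving window
        still_active = []
        for seg in active:
            if p[0] - seg[-1][0] > window:
                finished.append(seg)
            else:
                still_active.append(seg)
        active = still_active
        # attach to the nearest active segment within eps, else start a new one
        best = None
        for seg in active:
            d = math.dist(p, seg[-1])
            if d <= eps and (best is None or d < math.dist(p, best[-1])):
                best = seg
        if best is not None:
            best.append(p)
        else:
            active.append([p])
    return finished + active

# two well-separated clusters -> two segments
segs = stream_segments([(0, 0), (0.5, 0.1), (10, 0), (10.4, 0.2)], eps=1.0, window=5.0)
```

Because only the active window's segments are held in memory, the same loop works on a stream that never fits in RAM, which is the point of the framework.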

  15. Applicability of cloud point extraction for the separation trace amount of lead ion in environmental and biological samples prior to determination by flame atomic absorption spectrometry

    Directory of Open Access Journals (Sweden)

    Sayed Zia Mohammadi

    2016-09-01

    Full Text Available A sensitive cloud point extraction procedure (CPE) for the preconcentration of trace lead prior to its determination by flame atomic absorption spectrometry (FAAS) has been developed. The CPE method is based on the complex of Pb(II) ion with 1-(2-pyridylazo)-2-naphthol (PAN), which is then entrapped in the non-ionic surfactant Triton X-114. The main factors affecting CPE efficiency, such as pH of the sample solution, concentration of PAN and Triton X-114, and equilibration temperature and time, were investigated in detail. A preconcentration factor of 30 was obtained for the preconcentration of Pb(II) ion from a 15.0 mL solution. Under the optimal conditions, the calibration curve was linear in the range of 7.5 ng mL−1–3.5 μg mL−1 of lead with R2 = 0.9998 (n = 10). The detection limit, based on three times the standard deviation of the blank (3Sb), was 5.27 ng mL−1. Eight replicate determinations of 1.0 μg mL−1 lead gave a mean absorbance of 0.275 with a relative standard deviation of 1.6%. The high efficiency of cloud point extraction for the determination of analytes in complex matrices was demonstrated. The proposed method has been applied to the determination of trace amounts of lead in biological and water samples with satisfactory results.
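The record above spells out two figures of merit that recur throughout these abstracts: a detection limit taken as three times the standard deviation of the blank (3Sb) over the calibration slope, and a preconcentration factor of 30 obtained from a 15.0 mL sample. A minimal sketch of both calculations; the blank readings and slope below are hypothetical, and the 0.5 mL surfactant-phase volume is an assumed value, not one reported in the abstract.

```python
import statistics

def lod_3sb(blank_signals, slope):
    """Detection limit as 3 x standard deviation of the blank / calibration slope."""
    sb = statistics.stdev(blank_signals)
    return 3 * sb / slope

def preconcentration_factor(initial_volume_ml, final_volume_ml):
    """Volume-based preconcentration factor (sample volume / rich-phase volume)."""
    return initial_volume_ml / final_volume_ml

# Hypothetical replicate blank absorbances and calibration slope:
blanks = [0.0101, 0.0098, 0.0105, 0.0099, 0.0102, 0.0097, 0.0100, 0.0104, 0.0103]
slope = 0.0006  # absorbance per (ng/mL), hypothetical
lod = lod_3sb(blanks, slope)                # LOD in ng/mL
pf = preconcentration_factor(15.0, 0.5)     # -> 30.0
```

The same 3σ convention underlies the LOD figures quoted in most of the extraction records in this listing.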

  16. Cloud Point Extraction and Determination of Silver Ion in Real Sample using Bis((1H-benzo[d]imidazol-2-yl)methyl)sulfane

    Directory of Open Access Journals (Sweden)

    Farshid Ahmadi

    2011-01-01

    Full Text Available Bis((1H-benzo[d]imidazol-2-yl)methyl)sulfane (BHIS) was used as a complexing agent in cloud point extraction for the first time and applied for the selective pre-concentration of trace amounts of silver. The method is based on the extraction of silver at pH 8.0 using the non-ionic surfactant Triton X-114 and bis((1H-benzo[d]imidazol-2-yl)methyl)sulfane as a chelating agent. The concentrations of BHIS, Triton X-114 and HNO3, the bath temperature, and the centrifuge rate and time were optimized. A detection limit (3SDb/m) of 1.7 along with an enrichment factor of 39 for silver ion was achieved. The high efficiency of cloud point extraction for the determination of analytes in complex matrices was demonstrated. The proposed method was successfully applied to the ultra-trace determination of silver in real samples.

  17. A reliable method of quantification of trace copper in beverages with and without alcohol by spectrophotometry after cloud point extraction

    Directory of Open Access Journals (Sweden)

    Ramazan Gürkan

    2013-01-01

    Full Text Available A new cloud point extraction (CPE) method was developed for the separation and preconcentration of copper (II) prior to spectrophotometric analysis. For this purpose, 1-(2,4-dimethylphenyl)azonaphthalen-2-ol (Sudan II) was used as a chelating agent and the solution pH was adjusted to 10.0 with borate buffer. Polyethylene glycol tert-octylphenyl ether (Triton X-114) was used as the extracting agent in the presence of sodium dodecylsulphate (SDS). After phase separation, based on the cloud point of the mixture, the surfactant-rich phase was diluted with acetone, and the enriched analyte was determined spectrophotometrically at 537 nm. The variables affecting CPE efficiency were optimized. The calibration curve was linear within the range 0.285-20 µg L-1, with a detection limit of 0.085 µg L-1. The method was successfully applied to the quantification of copper in different beverage samples.

  18. Review of procedures involving separation and preconcentration for the determination of cadmium using spectrometric techniques

    International Nuclear Information System (INIS)

    Ferreira, Sergio L.C.; Andrade, Jailson B. de; Korn, Maria das Gracas A.; Pereira, Madson de G.; Lemos, Valfredo A.; Santos, Walter N.L. dos; Rodrigues, Frederico de Medeiros; Souza, Anderson S.; Ferreira, Hadla S.; Silva, Erik G.P. da

    2007-01-01

    Spectrometric techniques for the analysis of trace cadmium have developed rapidly due to the increasing need for accurate measurements at extremely low levels of this element in diverse matrices. This review covers separation and preconcentration procedures, such as electrochemical deposition, precipitation, coprecipitation, solid phase extraction, liquid-liquid extraction (LLE) and cloud point extraction (CPE), and considers the features of their application with several spectrometric techniques.

  19. Development of a cloud-point extraction method for copper and nickel determination in food samples

    International Nuclear Information System (INIS)

    Azevedo Lemos, Valfredo; Selis Santos, Moacy; Teixeira David, Graciete; Vasconcelos Maciel, Mardson; Almeida Bezerra, Marcos de

    2008-01-01

    A new, simple and versatile cloud-point extraction (CPE) methodology has been developed for the separation and preconcentration of copper and nickel. The metals in the initial aqueous solution were complexed with 2-(2'-benzothiazolylazo)-5-(N,N-diethyl)aminophenol (BDAP), and Triton X-114 was added as surfactant. Dilution of the surfactant-rich phase with acidified methanol was performed after phase separation, and the copper and nickel contents were measured by flame atomic absorption spectrometry. The variables affecting the cloud-point extraction were optimized using a Box-Behnken design. Under the optimum experimental conditions, enrichment factors of 29 and 25 were achieved for copper and nickel, respectively. The accuracy of the method was evaluated and confirmed by analysis of the following certified reference materials: Apple Leaves, Spinach Leaves and Tomato Leaves. The limits of detection for solid sample analysis were 0.1 μg g -1 (Cu) and 0.4 μg g -1 (Ni). The precision for 10 replicate measurements of 75 μg L -1 Cu or Ni was 6.4% and 1.0%, respectively. The method has been successfully applied to the analysis of food samples.

  20. An improved approach for flow-based cloud point extraction.

    Science.gov (United States)

    Frizzarin, Rejane M; Rocha, Fábio R P

    2014-04-11

    Novel strategies are proposed to circumvent the main drawbacks of flow-based cloud point extraction (CPE). The surfactant-rich phase (SRP) was directly retained in the optical path of the spectrophotometric cell, thus avoiding its dilution prior to the measurement and yielding higher sensitivity. Solenoid micro-pumps were exploited to improve mixing by the pulsed flow and also to modulate the flow-rate for retention and removal of the SRP, thus avoiding the elution step, often carried out with organic solvents. The heat released and the increase in salt concentration provided by an on-line neutralization reaction were exploited to induce the cloud point without an external heating device. These innovations were demonstrated by the spectrophotometric determination of iron, yielding a linear response from 10 to 200 μg L(-1) with a coefficient of variation of 2.3% (n=7). Detection limit and sampling rate were estimated at 5 μg L(-1) (95% confidence level) and 26 samples per hour, respectively. The enrichment factor was 8.9 and the procedure consumed only 6 μg of TAN and 390 μg of Triton X-114 per determination. At the 95% confidence level, the results obtained for freshwater samples agreed with the reference procedure, and those obtained for digests of bovine muscle, rice flour, brown bread and tort lobster agreed with the certified reference values. The proposed procedure thus shows advantages over previously proposed approaches for flow-based CPE, being a fast and environmentally friendly alternative for on-line separation and pre-concentration. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Electrolytic preconcentration in instrumental analysis.

    Science.gov (United States)

    Sioda, R E; Batley, G E; Lund, W; Wang, J; Leach, S C

    1986-05-01

    The use of electrolytic deposition as a separation and preconcentration step in trace metal analysis is reviewed. Both the principles and applications of the technique are dealt with in some detail. Electrolytic preconcentration can be combined with a variety of instrumental techniques. Special attention is given to stripping voltammetry, potentiometric stripping analysis, different combinations with atomic-absorption spectrometry, and the use of flow-through porous electrodes. It is pointed out that the electrolytic preconcentration technique deserves more extensive use as well as fundamental investigation.
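The mass of metal plated out during electrolytic deposition follows Faraday's law, m = M·I·t·η/(n·F). A small worked example with hypothetical values (not taken from the review above):

```python
# Faraday's law estimate of the mass deposited during electrolytic
# preconcentration; the current, time and efficiency below are hypothetical.
FARADAY = 96485.0  # Faraday constant, C/mol

def deposited_mass(current_a, time_s, molar_mass, n_electrons, efficiency=1.0):
    """m = M * I * t * efficiency / (n * F), returned in grams."""
    return molar_mass * current_a * time_s * efficiency / (n_electrons * FARADAY)

# e.g. 10 uA of deposition current for 300 s, Cd (M = 112.4 g/mol, n = 2):
m = deposited_mass(10e-6, 300, 112.4, 2)  # about 1.75e-6 g (~1.75 ug)
```

The linear dependence on current and time is what makes controlled-potential deposition useful as a preconcentration step: longer deposition accumulates proportionally more analyte on the electrode.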

  2. Moving window segmentation framework for point clouds

    NARCIS (Netherlands)

    Sithole, G.; Gorte, B.G.H.

    2012-01-01

    As lidar point clouds become larger, streamed processing becomes more attractive. This paper presents a framework for the streamed segmentation of point clouds with the intention of segmenting unstructured point clouds in real-time. The framework is composed of two main components. The first component segments points within a window shifting over the point cloud; the second stitches the segments within the windows together.

  3. The registration of non-cooperative moving targets laser point cloud in different view point

    Science.gov (United States)

    Wang, Shuai; Sun, Huayan; Guo, Huichao

    2018-01-01

    Multi-view point cloud registration of non-cooperative moving targets is a key technology for 3D reconstruction in laser three-dimensional imaging. The main problem is that point cloud density changes greatly, and noise is present, under different acquisition conditions. In this paper, a feature descriptor is first used to find the most similar point cloud. Then, in a registration algorithm based on region segmentation, the geometric structure of each point is extracted from point-to-point geometric similarity and the point cloud is divided into regions by spectral clustering. Feature descriptors are created for each region, the most similar regions are sought in the most similar view's point cloud, and the pair of point clouds is aligned by aligning their minimum bounding boxes. These steps are repeated until registration of all point clouds is complete. Experiments show that this method is insensitive to point cloud density and performs well under the noise of laser three-dimensional imaging.
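The bounding-box alignment step mentioned above can be sketched as a coarse translation estimate. This is a simplification: the paper combines it with descriptor matching and region segmentation, and a full pipeline would refine the result with, e.g., ICP.

```python
def bbox_center(cloud):
    """Center of the axis-aligned minimum bounding box of a point cloud."""
    xs, ys, zs = zip(*cloud)
    return tuple((min(a) + max(a)) / 2 for a in (xs, ys, zs))

def align_by_bbox(source, target):
    """Translate `source` so its bounding-box center coincides with `target`'s.

    A coarse, translation-only alignment in the spirit of the bounding-box
    step described above; rotation and refinement are out of scope here.
    """
    sc, tc = bbox_center(source), bbox_center(target)
    shift = tuple(t - s for s, t in zip(sc, tc))
    return [tuple(p[i] + shift[i] for i in range(3)) for p in source]

src = [(0, 0, 0), (2, 2, 2)]
tgt = [(10, 10, 10), (12, 12, 12)]
aligned = align_by_bbox(src, tgt)  # src shifted by (10, 10, 10)
```

Because the bounding box depends only on extreme coordinates, this estimate is cheap and insensitive to point density, though sensitive to outliers.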

  4. Cloud Point Extraction for Electroanalysis: Anodic Stripping Voltammetry of Cadmium.

    Science.gov (United States)

    Rusinek, Cory A; Bange, Adam; Papautsky, Ian; Heineman, William R

    2015-06-16

    Cloud point extraction (CPE) is a well-established technique for the preconcentration of hydrophobic species from water without the use of organic solvents. Subsequent analysis is then typically performed via atomic absorption spectroscopy (AAS), UV-vis spectroscopy, or high performance liquid chromatography (HPLC). However, the suitability of CPE for electroanalytical methods such as stripping voltammetry has not been reported. We demonstrate the use of CPE for electroanalysis using the determination of cadmium (Cd(2+)) by anodic stripping voltammetry (ASV). Rather than using the chelating agents which are commonly used in CPE to form a hydrophobic, extractable metal complex, we used iodide and sulfuric acid to neutralize the charge on Cd(2+) to form an extractable ion pair. This offers good selectivity for Cd(2+) as no interferences were observed from other heavy metal ions. Triton X-114 was chosen as the surfactant for the extraction because its cloud point temperature is near room temperature (22-25 °C). Bare glassy carbon (GC), bismuth-coated glassy carbon (Bi-GC), and mercury-coated glassy carbon (Hg-GC) electrodes were compared for the CPE-ASV. A detection limit for Cd(2+) of 1.7 nM (0.2 ppb) was obtained with the Hg-GC electrode. ASV with CPE lowered the detection limit 20-fold compared to ASV without CPE (4.0 ppb). The suitability of this procedure for the analysis of tap and river water samples was demonstrated. This simple, versatile, environmentally friendly, and cost-effective extraction method is potentially applicable to a wide variety of transition metals and organic compounds that are amenable to detection by electroanalytical methods.

  5. A scalable and multi-purpose point cloud server (PCS) for easier and faster point cloud data management and processing

    Science.gov (United States)

    Cura, Rémi; Perret, Julien; Paparoditis, Nicolas

    2017-05-01

    In addition to more traditional geographical data such as images (rasters) and vectors, point cloud data are becoming increasingly available. Such data are appreciated for their precision and true three-dimensional (3D) nature. However, managing point clouds can be difficult due to scaling problems and the specificities of this data type. Several methods exist but are usually fairly specialised and solve only one aspect of the management problem. In this work, we propose a comprehensive and efficient point cloud management system based on a database server that works on groups of points (patches) rather than individual points. This system is specifically designed to cover the basic needs of point cloud users: fast loading, compressed storage, powerful patch and point filtering, easy data access and exporting, and integrated processing. Moreover, the proposed system fully integrates metadata (like sensor position) and can conjointly use point clouds with other geospatial data, such as images, vectors, topology and other point clouds. Point cloud (parallel) processing can be done in-base with fast prototyping capabilities. Lastly, the system is built on open source technologies; therefore it can be easily extended and customised. We test the proposed system with several billion points obtained from Lidar (aerial and terrestrial) and stereo-vision. We demonstrate loading speeds in the ~50 million pts/h per process range, transparent-for-user compression at ratios greater than 2 to 4:1, patch filtering in the 0.1 to 1 s range, and output in the 0.1 million pts/s per process range, along with classical processing methods, such as object detection.
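The patch-based design described above can be imitated in a few lines: group points into grid-cell patches, then filter at the patch level before testing individual points. The cell size and in-memory layout here are purely illustrative, not the PCS schema.

```python
from collections import defaultdict

def build_patches(points, cell=10.0):
    """Group (x, y, z) points into grid-cell 'patches' keyed by cell index.

    A toy version of patch-based storage: queries can then discard whole
    patches before ever touching individual points.
    """
    patches = defaultdict(list)
    for x, y, z in points:
        patches[(int(x // cell), int(y // cell))].append((x, y, z))
    return patches

def points_in_box(patches, xmin, xmax, ymin, ymax, cell=10.0):
    """Two-stage filter: prune candidate patches by key, then test points."""
    out = []
    for (i, j), pts in patches.items():
        # patch-level pre-filter: skip cells wholly outside the query box
        if (i + 1) * cell < xmin or i * cell > xmax:
            continue
        if (j + 1) * cell < ymin or j * cell > ymax:
            continue
        out.extend(p for p in pts if xmin <= p[0] <= xmax and ymin <= p[1] <= ymax)
    return out

patches = build_patches([(1, 1, 0), (5, 5, 2), (25, 25, 1)])
hits = points_in_box(patches, 0, 10, 0, 10)  # only the two points in the first cell
```

The patch-level pruning is what makes the approach scale: with billions of points, most work happens on a few thousand patch keys rather than on every point.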

  6. Application of dual-cloud point extraction for the trace levels of copper in serum of different viral hepatitis patients by flame atomic absorption spectrometry: A multivariate study

    Science.gov (United States)

    Arain, Salma Aslam; Kazi, Tasneem G.; Afridi, Hassan Imran; Abbasi, Abdul Rasool; Panhwar, Abdul Haleem; Naeemullah; Shanker, Bhawani; Arain, Mohammad Balal

    2014-12-01

    An efficient, innovative preconcentration method, dual-cloud point extraction (d-CPE), has been developed for the extraction and preconcentration of copper (Cu2+) in serum samples of different viral hepatitis patients prior to coupling with flame atomic absorption spectrometry (FAAS). The d-CPE procedure was based on forming complexes of the metal ions with the complexing reagent 1-(2-pyridylazo)-2-naphthol (PAN), and subsequently entrapping the complexes in a nonionic surfactant (Triton X-114). The surfactant-rich phase containing the metal complexes was then treated with aqueous nitric acid solution, and the metal ions were back-extracted into the aqueous phase as a second cloud point extraction stage, and finally determined by flame atomic absorption spectrometry using conventional nebulization. A multivariate strategy was applied to estimate the optimum values of the experimental variables for the recovery of Cu2+ using d-CPE. Under optimum experimental conditions, the limit of detection and the enrichment factor were 0.046 μg L-1 and 78, respectively. The validity and accuracy of the proposed method were checked by analysis of Cu2+ in a certified serum sample (CRM) by both the d-CPE and conventional CPE procedures on the same CRM. The proposed method was successfully applied to the determination of Cu2+ in serum samples of different viral hepatitis patients and healthy controls.

  7. Simultaneous determination of antimony and boron in beverage and dairy products by flame atomic absorption spectrometry after separation and pre-concentration by cloud-point extraction.

    Science.gov (United States)

    Altunay, Nail; Gürkan, Ramazan

    2016-01-01

    A new cloud-point extraction (CPE) method was developed for the pre-concentration and simultaneous determination of Sb(III) and B(III) by flame atomic absorption spectrometry (FAAS). The method was based on complexation of Sb(III) and B(III) with azomethine-H in the presence of cetylpyridinium chloride (CPC) as a signal-enhancing agent, and then extraction into the micellar phase of Triton X-114. Under optimised conditions, linear calibration was obtained for Sb(III) and B(III) in the concentration ranges of 0.5-180 and 2.5-600 μg l(-1) with LODs of 0.15 and 0.75 μg l(-1), respectively. Relative standard deviations (RSDs) (25 and 100 μg l(-1) of Sb(III) and B(III), n = 6) were in a range of 2.1-3.8% and 1.9-2.3%, respectively. Recoveries of spiked samples of Sb(III) and B(III) were in the range of 98-103% and 99-102%, respectively. Measured values for Sb and B in three standard reference materials were within the 95% confidence limit of the certified values. Also, the method was used for the speciation of inorganic antimony. Sb(III), Sb(V) and total Sb were measured in the presence of excess boron before and after pre-reduction with an acidic mixture of KI-ascorbic acid. The method was successfully applied to the simultaneous determination of total Sb and B in selected beverage and dairy products.

  8. Model for Semantically Rich Point Cloud Data

    Science.gov (United States)

    Poux, F.; Neuville, R.; Hallot, P.; Billen, R.

    2017-10-01

    This paper proposes an interoperable model for managing high-dimensional point clouds while integrating semantics. Point clouds from sensors are a direct source of information physically describing a 3D state of the recorded environment. As such, they are an exhaustive representation of the real world at every scale: 3D reality-based spatial data. Their generation is increasingly fast, but processing routines and data models lack the knowledge needed to reason from information extraction rather than interpretation. The enhanced Smart Point Cloud model developed here brings intelligence to point clouds via three connected meta-models, linking available knowledge and classification procedures that permit semantic injection. Interoperability drives the model's adaptation to potentially many applications through specialized domain ontologies. A first prototype is implemented in Python and a PostgreSQL database and allows combining semantic and spatial concepts for basic hybrid queries on different point clouds.

  9. MODEL FOR SEMANTICALLY RICH POINT CLOUD DATA

    Directory of Open Access Journals (Sweden)

    F. Poux

    2017-10-01

    Full Text Available This paper proposes an interoperable model for managing high-dimensional point clouds while integrating semantics. Point clouds from sensors are a direct source of information physically describing a 3D state of the recorded environment. As such, they are an exhaustive representation of the real world at every scale: 3D reality-based spatial data. Their generation is increasingly fast, but processing routines and data models lack the knowledge needed to reason from information extraction rather than interpretation. The enhanced Smart Point Cloud model developed here brings intelligence to point clouds via three connected meta-models, linking available knowledge and classification procedures that permit semantic injection. Interoperability drives the model's adaptation to potentially many applications through specialized domain ontologies. A first prototype is implemented in Python and a PostgreSQL database and allows combining semantic and spatial concepts for basic hybrid queries on different point clouds.

  10. Point cloud processing for smart systems

    Directory of Open Access Journals (Sweden)

    Jaromír Landa

    2013-01-01

    Full Text Available High population density as well as economic pressure emphasises the necessity of effective city management – from land use planning to urban green maintenance. The effectiveness of this management is based on precise knowledge of the city environment. Point clouds generated by mobile and terrestrial laser scanners provide precise data about objects in the scanner's vicinity. From these data the state of roads, buildings, trees and other objects important for the decision-making process can be obtained. In general, they can support the idea of "smart", or at least "smarter", cities. Unfortunately, point clouds do not provide this type of information automatically; it has to be extracted, either by expert personnel or by object-recognition software. As point clouds can represent large areas (streets or even whole cities), using expert personnel to identify the required objects can be very time-consuming and therefore cost-ineffective. Object-recognition software allows us to detect and identify the required objects semi-automatically or automatically. The first part of the article reviews and analyses the current state of the art in point cloud object-recognition techniques. The following part presents common formats used for point cloud storage and frequently used software tools for point cloud processing. Further, a method for extraction of geospatial information about detected objects is proposed, so the method can be used not only to recognize the existence and shape of certain objects, but also to retrieve their geospatial properties. These objects can later be used directly in various GIS systems for further analyses.
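The proposed extraction of geospatial properties from detected objects can be illustrated with a toy sketch: once the points belonging to one object (say, a tree) have been identified, basic properties such as the centroid and axis-aligned bounding box follow directly. All names and coordinates here are illustrative, not from the article:

```python
def object_properties(points):
    """Derive basic geospatial properties (centroid and axis-aligned
    bounding box) from the 3D points belonging to one detected object."""
    xs, ys, zs = zip(*points)
    n = len(points)
    centroid = (sum(xs) / n, sum(ys) / n, sum(zs) / n)
    bbox = ((min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs)))
    return centroid, bbox

# Invented points of a single detected object (e.g. a tree trunk).
tree = [(2.0, 4.0, 0.0), (2.5, 4.5, 3.0), (3.0, 5.0, 6.0)]
centroid, bbox = object_properties(tree)
print(centroid)  # (2.5, 4.5, 3.0)
```

In a GIS workflow the centroid would then be exported as the object's point geometry and the bounding box as its footprint.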

  11. Parametric Architectural Design with Point-clouds

    DEFF Research Database (Denmark)

    Zwierzycki, Mateusz; Evers, Henrik Leander; Tamke, Martin

    2016-01-01

    This paper investigates the efforts and benefits of the implementation of point clouds into architectural design processes and tools. Based on a study on the principal work processes of designers with point clouds the prototypical plugin/library - Volvox - was developed for the parametric modelling...

  12. Point cloud data management (extended abstract)

    NARCIS (Netherlands)

    Van Oosterom, P.J.M.; Ravada, S.; Horhammer, M.; Martinez Rubi, O.; Ivanova, M.; Kodde, M.; Tijssen, T.P.M.

    2014-01-01

    Point cloud data are important sources for 3D geo-information. The point cloud data sets are growing in popularity and in size. Modern Big Data acquisition and processing technologies, such as laser scanning from airborne, mobile, or static platforms, dense image matching from photos, multi-beam

  13. Professional SharePoint 2010 Cloud-Based Solutions

    CERN Document Server

    Fox, Steve; Stubbs, Paul; Follette, Donovan

    2011-01-01

    An authoritative guide to extending SharePoint's power with cloud-based services If you want to be part of the next major shift in the IT industry, you'll want this book. Melding two of the hottest trends in the industry—the widespread popularity of the SharePoint collaboration platform and the rapid rise of cloud computing—this practical guide shows developers how to extend their SharePoint solutions with the cloud's almost limitless capabilities. See how to get started, discover smart ways to leverage cloud data and services through Azure, start incorporating Twitter or LinkedIn

  14. SMART POINT CLOUD: DEFINITION AND REMAINING CHALLENGES

    Directory of Open Access Journals (Sweden)

    F. Poux

    2016-10-01

    Full Text Available Dealing with coloured point clouds acquired from terrestrial laser scanners, this paper identifies remaining challenges for a new data structure: the smart point cloud. This concept arises from the observation that massive, discretized spatial information from active remote sensing technology is often underused due to data-mining limitations. The generalisation of point cloud data, together with the heterogeneity and temporality of such datasets, is the main issue regarding structure, segmentation, classification, and interaction for an immediate understanding. We propose to use both point cloud properties and human knowledge, through machine learning, to rapidly extract pertinent information, using user-centered information (smart data) rather than raw data. A review of feature detection, machine learning frameworks, and database systems indexed both for mining queries and data visualisation is presented. Based on existing approaches, we propose a new flexible three-block framework around device expertise, analytic expertise and domain-based reflexion. This contribution serves as the first step towards the realisation of a comprehensive smart point cloud data structure.

  15. A new ultrasonic-assisted cloud-point-extraction procedure for pre-concentration and determination of ultra-trace levels of copper in selected beverages and foods by flame atomic absorption spectrometry.

    Science.gov (United States)

    Altunay, Nail; Gürkan, Ramazan; Orhan, Ulaş

    2015-01-01

    A new ultrasonic-assisted cloud-point-extraction (UA-CPE) method was developed for the pre-concentration of Cu(II) in selected beverage and food samples prior to flame atomic absorption spectrometric (FAAS) analysis. For this purpose, Safranin T was used as an ion-pairing reagent based on charge transfer in the presence of oxalate as the primary chelating agent at pH 10. The non-ionic surfactant poly(ethyleneglycol-mono-p-nonylphenylether) (PONPE 7.5) was used as an extracting agent in the presence of NH4Cl as the salting-out agent. The variables affecting UA-CPE efficiency were optimised in detail. The linear range for Cu(II) at pH 10 was 0.02-70 µg l-1 with a very low detection limit of 6.10 ng l-1, while the linear range for Cu(I) at pH 8.5 was 0.08-125 µg l-1 with a detection limit of 24.4 ng l-1. The relative standard deviation (RSD %) was in the range of 2.15-4.80% (n = 5). The method was successfully applied to the quantification of Cu(II), Cu(I) and total Cu in selected beverage and food samples. The accuracy of the developed method was demonstrated by the analysis of two standard reference materials (SRMs) as well as recoveries of spiked samples.
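Quantification over a linear range such as the 0.02-70 µg l-1 reported here rests on inverting a linear calibration function (signal = slope × concentration + intercept). A minimal sketch with a hypothetical slope and intercept, not values from the paper:

```python
def concentration(signal, slope, intercept):
    """Invert a linear FAAS calibration: signal = slope*conc + intercept."""
    return (signal - intercept) / slope

# Hypothetical calibration: slope 0.004 absorbance units per ug/L,
# zero intercept; an absorbance of 0.12 maps to about 30 ug/L,
# well inside the reported linear range.
print(concentration(0.12, 0.004, 0.0))
```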

  16. Self-Similar Spin Images for Point Cloud Matching

    Science.gov (United States)

    Pulido, Daniel

    The rapid growth of Light Detection And Ranging (Lidar) technologies that collect, process, and disseminate 3D point clouds has allowed for increasingly accurate spatial modeling and analysis of the real world. Lidar sensors can generate massive 3D point clouds of a collection area that provide highly detailed spatial and radiometric information. However, a Lidar collection can be expensive and time consuming. Simultaneously, the growth of crowdsourced Web 2.0 data (e.g., Flickr, OpenStreetMap) has provided researchers with a wealth of freely available data sources that cover a variety of geographic areas. Crowdsourced data can be of varying quality and density. In addition, since it is typically volunteered rather than collected as part of a dedicated experiment, when and where the data is collected is arbitrary. The integration of these two sources of geoinformation can give researchers the ability to generate products and derive intelligence that mitigate their respective disadvantages and combine their advantages. Therefore, this research addresses the problem of fusing two point clouds from potentially different sources. Specifically, we consider two problems: scale matching and feature matching. Scale matching consists of computing feature metrics of each point cloud and analyzing their distributions to determine scale differences. Feature matching consists of defining local descriptors that are invariant to common dataset distortions (e.g., rotation and translation). Additionally, after matching the point clouds they can be registered and processed further (e.g., change detection). The objective of this research is to develop novel methods to fuse and enhance two point clouds from potentially disparate sources (e.g., Lidar and crowdsourced Web 2.0 datasets). The scope of this research is to investigate both scale and feature matching between two point clouds, with a specific focus on developing a novel local descriptor
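Scale matching as described, computing a feature metric of each cloud and comparing distributions, can be sketched with one simple metric: the median nearest-neighbour spacing. This is an illustrative O(n²) toy, not the dissertation's method; a real implementation would use a spatial index:

```python
import math

def median_nn_spacing(points):
    """Median nearest-neighbour distance of a point set: a crude
    per-cloud scale metric (brute force, fine for a sketch)."""
    def nn(i):
        return min(math.dist(points[i], q)
                   for j, q in enumerate(points) if j != i)
    ds = sorted(nn(i) for i in range(len(points)))
    return ds[len(ds) // 2]

def scale_ratio(cloud_a, cloud_b):
    """Estimated scale factor between two clouds of the same scene."""
    return median_nn_spacing(cloud_a) / median_nn_spacing(cloud_b)

# A regular grid and the same grid scaled by 2: the ratio recovers 2.
grid = [(float(x), float(y)) for x in range(4) for y in range(4)]
scaled = [(2.0 * x, 2.0 * y) for x, y in grid]
print(scale_ratio(scaled, grid))  # 2.0
```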

  17. Point Cloud Management Through the Realization of the Intelligent Cloud Viewer Software

    Science.gov (United States)

    Costantino, D.; Angelini, M. G.; Settembrini, F.

    2017-05-01

    The paper presents software dedicated to the elaboration of point clouds, called Intelligent Cloud Viewer (ICV), made in-house by AESEI software (a spin-off of Politecnico di Bari), allowing point clouds of several tens of millions of points to be viewed, even on systems that are not of very high performance. Elaborations are carried out on the whole point cloud, while only part of it is displayed in order to speed up rendering. It is designed for 64-bit Windows, is fully written in C++, and integrates different specialized modules for computer graphics (Open Inventor by SGI, Silicon Graphics Inc.), maths (BLAS, EIGEN), computational geometry (CGAL, the Computational Geometry Algorithms Library), registration and advanced algorithms for point clouds (PCL, the Point Cloud Library), advanced data structures (BOOST, Basic Object Oriented Supporting Tools), etc. ICV incorporates a number of features such as cropping, transformation and georeferencing, matching, registration, decimation, sections, distance calculation between clouds, etc. It has been tested on photographic and TLS (Terrestrial Laser Scanner) data, obtaining satisfactory results. The potential of the software was tested by carrying out a photogrammetric survey of Castel del Monte, which was already available in a previous ground-based laser scanner survey by the same authors. For the aerial photogrammetric survey, a flight height of approximately 1000 ft AGL (Above Ground Level) was adopted and, overall, over 800 photos were acquired in just over 15 minutes, with coverage of not less than 80% and a planned speed of about 90 knots.

  18. POINT CLOUD MANAGEMENT THROUGH THE REALIZATION OF THE INTELLIGENT CLOUD VIEWER SOFTWARE

    Directory of Open Access Journals (Sweden)

    D. Costantino

    2017-05-01

    Full Text Available The paper presents software dedicated to the elaboration of point clouds, called Intelligent Cloud Viewer (ICV), made in-house by AESEI software (a spin-off of Politecnico di Bari), allowing point clouds of several tens of millions of points to be viewed, even on systems that are not of very high performance. Elaborations are carried out on the whole point cloud, while only part of it is displayed in order to speed up rendering. It is designed for 64-bit Windows, is fully written in C++, and integrates different specialized modules for computer graphics (Open Inventor by SGI, Silicon Graphics Inc.), maths (BLAS, EIGEN), computational geometry (CGAL, the Computational Geometry Algorithms Library), registration and advanced algorithms for point clouds (PCL, the Point Cloud Library), advanced data structures (BOOST, Basic Object Oriented Supporting Tools), etc. ICV incorporates a number of features such as cropping, transformation and georeferencing, matching, registration, decimation, sections, distance calculation between clouds, etc. It has been tested on photographic and TLS (Terrestrial Laser Scanner) data, obtaining satisfactory results. The potential of the software was tested by carrying out a photogrammetric survey of Castel del Monte, which was already available in a previous ground-based laser scanner survey by the same authors. For the aerial photogrammetric survey, a flight height of approximately 1000 ft AGL (Above Ground Level) was adopted and, overall, over 800 photos were acquired in just over 15 minutes, with coverage of not less than 80% and a planned speed of about 90 knots.

  19. Application of dual-cloud point extraction for the trace levels of copper in serum of different viral hepatitis patients by flame atomic absorption spectrometry: a multivariate study.

    Science.gov (United States)

    Arain, Salma Aslam; Kazi, Tasneem G; Afridi, Hassan Imran; Abbasi, Abdul Rasool; Panhwar, Abdul Haleem; Naeemullah; Shanker, Bhawani; Arain, Mohammad Balal

    2014-12-10

    An efficient, innovative preconcentration method, dual-cloud point extraction (d-CPE), has been developed for the extraction and preconcentration of copper (Cu2+) in serum samples of different viral hepatitis patients prior to coupling with flame atomic absorption spectrometry (FAAS). The d-CPE procedure is based on forming complexes of the metal ions with the complexing reagent 1-(2-pyridylazo)-2-naphthol (PAN) and subsequently entrapping the complexes in a nonionic surfactant (Triton X-114). The surfactant-rich phase containing the metal complexes is then treated with aqueous nitric acid solution, and the metal ions are back-extracted into the aqueous phase as a second cloud point extraction stage, and finally determined by flame atomic absorption spectrometry using conventional nebulization. A multivariate strategy was applied to estimate the optimum values of the experimental variables for the recovery of Cu2+ using d-CPE. Under optimum experimental conditions, the limit of detection and the enrichment factor were 0.046 μg L-1 and 78, respectively. The validity and accuracy of the proposed method were checked by analysis of Cu2+ in a certified serum sample (CRM) by both d-CPE and the conventional CPE procedure on the same CRM. The proposed method was successfully applied to the determination of Cu2+ in serum samples of different viral hepatitis patients and healthy controls.

  20. MICELLE-MEDIATED EXTRACTION AS A TOOL FOR SEPARATION AND PRECONCENTRATION IN COPPER ANALYSIS

    Directory of Open Access Journals (Sweden)

    F. Ahmadia

    2010-06-01

    Full Text Available A cloud point extraction method is presented for the preconcentration of copper in various samples. After complexation with 4-Amino-2,3-dimethyl-1-phenyl-3-pyrazoline-5-one (ADPP) or N-Benzoyl-N-phenylhydroxylamine (BPA) in water, analyte ions are quantitatively extracted into the Triton X-114-rich phase after centrifugation. A 2.0 mol L-1 HNO3 solution in methanol was added to the surfactant-rich phase prior to its analysis by flame atomic absorption spectrometry (FAAS). The adopted concentrations of ADPP, Triton X-114 and HNO3, and parameters such as bath temperature, centrifuge rate and time, were optimized. Detection limits (3SDb/m) of 1.3 and 1.9 ng mL-1 for ADPP and BPA, along with enrichment factors of 30 and 38, respectively, were achieved. The high efficiency of cloud point extraction for the determination of analytes in complex matrices was demonstrated. The proposed method was applied to the analysis of biological, industrial, natural and wastewater, soil and blood samples. Keywords: 4-Amino-2,3-dimethyl-1-phenyl-3-pyrazoline-5-one (ADPP), N-Benzoyl-N-phenylhydroxylamine (BPA), Cloud Point Extraction, Triton X-114, Flame Atomic Absorption Spectrometry.
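The 3SDb/m detection limits quoted above follow the usual formula: three times the standard deviation of blank measurements (SDb) divided by the calibration slope (m). Enrichment factors are often expressed as the ratio of calibration slopes with and without preconcentration, though other definitions exist. A sketch with invented numbers only:

```python
def detection_limit(sd_blank, slope, k=3):
    """LOD = k*SDb/m: k times the standard deviation of blank signals
    divided by the calibration slope (k = 3 for the 3SDb/m criterion)."""
    return k * sd_blank / slope

def enrichment_factor(slope_preconc, slope_direct):
    """One common definition: ratio of the calibration slopes obtained
    with and without the preconcentration step."""
    return slope_preconc / slope_direct

# Invented values: blank SD of 0.0013 absorbance units, slope of
# 0.003 absorbance units per ng/mL -> LOD of 1.3 ng/mL.
print(detection_limit(0.0013, 0.003))
print(enrichment_factor(0.090, 0.003))
```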

  1. Determination of cadmium(II), cobalt(II), nickel(II), lead(II), zinc(II), and copper(II) in water samples using dual-cloud point extraction and inductively coupled plasma emission spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Lingling; Zhong, Shuxian; Fang, Keming; Qian, Zhaosheng [College of Chemistry and Life Sciences, Zhejiang Normal University, Jinhua 321004 (China); Chen, Jianrong, E-mail: cjr@zjnu.cn [College of Chemistry and Life Sciences, Zhejiang Normal University, Jinhua 321004 (China); College of Geography and Environmental Sciences, Zhejiang Normal University, Jinhua 321004 (China)

    2012-11-15

    Highlights: ► A dual-cloud point extraction (d-CPE) procedure was first developed for simultaneous pre-concentration and separation of trace metal ions in combination with ICP-OES. ► The developed d-CPE can significantly eliminate the surfactant Triton X-114 and was successfully extended to the determination of water samples with good performance. ► The designed method is simple, highly efficient, low cost, and in accordance with the green chemistry concept. - Abstract: A dual-cloud point extraction (d-CPE) procedure has been developed for simultaneous pre-concentration and separation of heavy metal ions (Cd2+, Co2+, Ni2+, Pb2+, Zn2+, and Cu2+) in water samples by inductively coupled plasma optical emission spectrometry (ICP-OES). The procedure is based on forming complexes of the metal ions with 8-hydroxyquinoline (8-HQ) in the as-formed Triton X-114 surfactant-rich phase. Instead of direct injection or analysis, the surfactant-rich phase containing the complexes was treated with nitric acid, and the metal ions were back-extracted into the aqueous phase in a second cloud point extraction stage, and finally determined by ICP-OES. Under the optimum conditions (pH = 7.0, Triton X-114 = 0.05% (w/v), 8-HQ = 2.0 × 10-4 mol L-1, HNO3 = 0.8 mol L-1), the detection limits for Cd2+, Co2+, Ni2+, Pb2+, Zn2+, and Cu2+ were 0.01, 0.04, 0.01, 0.34, 0.05, and 0.04 μg L-1, respectively. Relative standard deviation (RSD) values for 10 replicates at 100 μg L-1 were lower than 6.0%. The proposed method could be successfully applied to the determination of Cd2+, Co2+, Ni2+, Pb2+, Zn2+, and Cu2+ in water samples.
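The RSD figures quoted here (below 6.0% for 10 replicates) are the sample standard deviation expressed as a percentage of the mean. A sketch with invented replicate readings, not the paper's data:

```python
from statistics import mean, stdev

def rsd_percent(values):
    """Relative standard deviation: sample SD as a percentage of the mean."""
    return 100.0 * stdev(values) / mean(values)

# Invented replicate concentrations (ug/L) around a 100 ug/L spike.
replicates = [98.0, 101.0, 100.0, 99.0, 102.0]
print(round(rsd_percent(replicates), 2))  # 1.58
```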

  2. CLASSIFICATION BY USING MULTISPECTRAL POINT CLOUD DATA

    Directory of Open Access Journals (Sweden)

    C. T. Liao

    2012-07-01

    Full Text Available Remote sensing images are generally recorded in a two-dimensional format containing multispectral information. The semantic information is clearly visualized, so ground features can easily be recognized and classified via supervised or unsupervised classification methods. Nevertheless, the shortcomings of multispectral images are a strong dependence on light conditions and classification results that lack three-dimensional semantic information. On the other hand, LiDAR has become a main technology for acquiring high-accuracy point cloud data. The advantages of LiDAR are a high data-acquisition rate, independence from light conditions, and the ability to directly produce three-dimensional coordinates. However, compared with multispectral images, its disadvantage is the lack of multispectral information, which remains a challenge for ground feature classification from massive point cloud data. Consequently, by combining the advantages of both LiDAR and multispectral images, point cloud data with three-dimensional coordinates and multispectral information can provide an integrated solution for point cloud classification. Therefore, this research acquires visible light and near-infrared images via close-range photogrammetry, matching the images automatically through a free online service for multispectral point cloud generation. A three-dimensional affine coordinate transformation is then used to compare the data increment. Finally, given thresholds on height and color information are used for classification.

  3. Classification by Using Multispectral Point Cloud Data

    Science.gov (United States)

    Liao, C. T.; Huang, H. H.

    2012-07-01

    Remote sensing images are generally recorded in a two-dimensional format containing multispectral information. The semantic information is clearly visualized, so ground features can easily be recognized and classified via supervised or unsupervised classification methods. Nevertheless, the shortcomings of multispectral images are a strong dependence on light conditions and classification results that lack three-dimensional semantic information. On the other hand, LiDAR has become a main technology for acquiring high-accuracy point cloud data. The advantages of LiDAR are a high data-acquisition rate, independence from light conditions, and the ability to directly produce three-dimensional coordinates. However, compared with multispectral images, its disadvantage is the lack of multispectral information, which remains a challenge for ground feature classification from massive point cloud data. Consequently, by combining the advantages of both LiDAR and multispectral images, point cloud data with three-dimensional coordinates and multispectral information can provide an integrated solution for point cloud classification. Therefore, this research acquires visible light and near-infrared images via close-range photogrammetry, matching the images automatically through a free online service for multispectral point cloud generation. A three-dimensional affine coordinate transformation is then used to compare the data increment. Finally, given thresholds on height and color information are used for classification.
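The final thresholding step, combining height with colour/near-infrared information, can be sketched as a toy rule-based classifier. The thresholds, attributes, and class names below are purely illustrative, not the paper's values:

```python
def classify(point):
    """Toy two-rule classifier over (x, y, z, nir) points: high NIR
    response suggests vegetation, tall low-NIR points suggest buildings,
    everything else is treated as ground."""
    x, y, z, nir = point
    if nir > 0.6:          # invented NIR reflectance threshold
        return "vegetation"
    if z > 3.0:            # invented height threshold in metres
        return "building"
    return "ground"

cloud = [(0, 0, 0.1, 0.2), (1, 1, 8.0, 0.1), (2, 2, 2.0, 0.8)]
print([classify(p) for p in cloud])  # ['ground', 'building', 'vegetation']
```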

  4. Point Cloud Based Change Detection - an Automated Approach for Cloud-based Services

    Science.gov (United States)

    Collins, Patrick; Bahr, Thomas

    2016-04-01

    The fusion of stereo photogrammetric point clouds with LiDAR data or terrain information derived from SAR interferometry has significant potential for 3D topographic change detection. The present case study uses the latest point cloud generation and analysis capabilities to examine a landslide that occurred in the village of Malin in Maharashtra, India, on 30 July 2014, and affected an area of ca. 44,000 m2. It focuses on Pléiades high-resolution satellite imagery and the Airbus DS WorldDEM™ as a product of the TanDEM-X mission. The case study was performed using the COTS software package ENVI 5.3. Integration of custom processes and automation is supported by IDL (Interactive Data Language); thus, ENVI analytics runs via the object-oriented and IDL-based ENVITask API. The pre-event topography is represented by the WorldDEM™ product, delivered on a 12 m x 12 m raster and based on the EGM2008 geoid (called the pre-DEM). For the post-event situation a Pléiades 1B stereo image pair of the affected AOI was obtained. The ENVITask "GeneratePointCloudsByDenseImageMatching" was implemented to extract passive point clouds in LAS format from the panchromatic stereo datasets: • A dense image-matching algorithm is used to identify corresponding points in the two images. • A block adjustment is applied to refine the 3D coordinates that describe the scene geometry. • Additionally, the WorldDEM™ was input to constrain the range of heights in the matching area, and subsequently the length of the epipolar line. The "PointCloudFeatureExtraction" task was executed to generate the post-event digital surface model from the photogrammetric point clouds (called the post-DEM). Post-processing consisted of the following steps: • Adding the geoid component (EGM2008) to the post-DEM. • Reprojecting the pre-DEM to the UTM Zone 43N (WGS-84) coordinate system and resizing. • Subtracting the pre-DEM from the post-DEM. • Filtering and threshold-based classification of
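The core of the post-processing, subtracting the pre-event DEM from the post-event DSM and then applying a threshold, amounts to cell-wise raster arithmetic. A minimal sketch with invented elevations (not the Malin data, and not the ENVI/IDL implementation):

```python
def dem_difference(pre, post, threshold=2.0):
    """Subtract the pre-event DEM from the post-event DSM cell by cell
    and flag cells whose absolute elevation change (metres) exceeds a
    threshold, as a crude change-detection mask."""
    change = [[post[r][c] - pre[r][c] for c in range(len(pre[0]))]
              for r in range(len(pre))]
    flags = [[abs(v) > threshold for v in row] for row in change]
    return change, flags

# Invented 2x2 rasters; the -5 m drop mimics material removed by a slide.
pre = [[100.0, 101.0], [102.0, 103.0]]
post = [[100.5, 95.0], [102.2, 103.1]]
change, flags = dem_difference(pre, post)
print(flags)  # [[False, True], [False, False]]
```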

  5. Cloud point extraction for the determination of heavy metals by nonionic surfactant Triton X-100 and PAN

    International Nuclear Information System (INIS)

    Cabrera Puig, I.; Perez Gramatges, A.

    2006-01-01

    A novel methodology for the extraction and preconcentration of trace metals based on the cloud point phenomenon was applied to the analysis of Co(II), Cu(II), Cd(II), Pb(II) and Ni(II) in a certified reference material (CRM), using Triton X-100 as the nonionic surfactant and AAS for the determination. Different parameters that can influence the extraction efficiency were studied, such as the pH and ionic strength of the solution. The precision, accuracy and detection limits of the method were determined using a CRM from the Environmental Analysis Laboratory of InSTEC. We applied our methodology to the detection of the metals in natural waters (the Almendares river and tap water). The data presented in this work form part of the validation file of the proposed analytical procedure for the determination of heavy metals

  6. Cloud-point extraction and reversed-phase high-performance liquid chromatography for the determination of synthetic phenolic antioxidants in edible oils.

    Science.gov (United States)

    Chen, Miao; Xia, Qinghai; Liu, Mousheng; Yang, Yaling

    2011-01-01

    A cloud-point extraction (CPE) method using the nonionic surfactant Triton X-114 (TX-114) was developed for the extraction and preconcentration of propyl gallate (PG), tertiary butyl hydroquinone (TBHQ), butylated hydroxyanisole (BHA), and butylated hydroxytoluene (BHT) from edible oils. The optimum conditions of CPE were 2.5% (v/v) TX-114, 0.5% (w/v) NaCl and a 40 min equilibration time at 50 °C. The surfactant-rich phase was then analyzed by reversed-phase high-performance liquid chromatography with ultraviolet detection at 280 nm, using a gradient mobile phase consisting of methanol and 1.5% (v/v) acetic acid. Under the studied conditions, the 4 synthetic phenolic antioxidants (SPAs) were successfully separated within 24 min. The limits of detection (LODs) were 1.9 ng mL-1 for PG, 11 ng mL-1 for TBHQ, 2.3 ng mL-1 for BHA, and 5.9 ng mL-1 for BHT. Recoveries of the SPAs spiked into edible oil were in the range of 81% to 88%. The CPE method was shown to be potentially useful for the preconcentration of the target analytes, with a preconcentration factor of 14. Moreover, the method is simple, has high sensitivity, consumes much less solvent than traditional methods, and is environmentally friendly. Practical Application: The method established in this article uses less organic solvent to extract SPAs from edible oils; it is simple, highly sensitive and causes no pollution to the environment.

  7. Flame atomic absorption spectrometric determination of trace quantities of cadmium in water samples after cloud point extraction in Triton X-114 without added chelating agents

    International Nuclear Information System (INIS)

    Afkhami, Abbas; Madrakian, Tayyebeh; Siampour, Hajar

    2006-01-01

    A new micelle-mediated phase separation method for the preconcentration of ultra-trace quantities of cadmium, as a prior step to its determination by flame atomic absorption spectrometry, has been developed. The method is based on the cloud point extraction (CPE) of cadmium in iodide media with Triton X-114 in the absence of any chelating agent. The optimal extraction and reaction conditions (e.g., acid concentration, iodide concentration, effect of time) were studied, and the analytical characteristics of the method (e.g., limit of detection, linear range, preconcentration and improvement factors) were obtained. Linearity was obeyed in the range of 3-300 ng mL-1 of cadmium. The detection limit of the method is 1.0 ng mL-1 of cadmium. The interference effect of some anions and cations was also tested. The method was applied to the determination of cadmium in tap water, waste water, and sea water samples

  8. IMAGE TO POINT CLOUD METHOD OF 3D-MODELING

    Directory of Open Access Journals (Sweden)

    A. G. Chibunichev

    2012-07-01

    Full Text Available This article describes a method for constructing 3D models of objects (buildings, monuments) based on digital images and a point cloud obtained by a terrestrial laser scanner. The first step is the automated determination of the exterior orientation parameters of the digital image. This requires finding corresponding points between the image and the point cloud. Before the correspondence search, a quasi-image of the point cloud is generated; the SIFT algorithm is then applied to the quasi-image and the real image to find corresponding points, from which the exterior orientation parameters of the image are calculated. The second step is the construction of the vector object model. Vectorization is performed by a PC operator in interactive mode using a single image; the spatial coordinates of the model are calculated automatically from the cloud points. In addition, automatic edge detection with interactive editing is available. Edge detection is performed on the point cloud and on the image, with subsequent identification of the correct edges. Experimental studies of the method have demonstrated its efficiency in the case of building facade modeling.
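The SIFT correspondence step pairs keypoints whose descriptors are nearest neighbours; a common acceptance criterion is Lowe's ratio test, which keeps a match only when the best candidate is clearly closer than the second best. A toy sketch with 2-D stand-in descriptors instead of real 128-D SIFT vectors:

```python
import math

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour descriptor matching with Lowe's ratio test.
    Descriptors here are plain numeric tuples, not real SIFT output."""
    matches = []
    for i, d in enumerate(desc_a):
        # Distances from descriptor i to every descriptor in the other set.
        dists = sorted((math.dist(d, e), j) for j, e in enumerate(desc_b))
        best, second = dists[0], dists[1]
        if best[0] < ratio * second[0]:  # unambiguous -> keep the match
            matches.append((i, best[1]))
    return matches

a = [(0.0, 0.0), (1.0, 0.0), (5.0, 5.0)]
b = [(0.1, 0.0), (1.1, 0.0), (5.0, 5.1)]
print(match_descriptors(a, b))  # [(0, 0), (1, 1), (2, 2)]
```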

  9. Determination of cadmium(II), cobalt(II), nickel(II), lead(II), zinc(II), and copper(II) in water samples using dual-cloud point extraction and inductively coupled plasma emission spectrometry

    International Nuclear Information System (INIS)

    Zhao, Lingling; Zhong, Shuxian; Fang, Keming; Qian, Zhaosheng; Chen, Jianrong

    2012-01-01

    Highlights: ► A dual-cloud point extraction (d-CPE) procedure was first developed for simultaneous pre-concentration and separation of trace metal ions in combination with ICP-OES. ► The developed d-CPE can significantly eliminate the surfactant Triton X-114 and was successfully extended to the determination of water samples with good performance. ► The designed method is simple, highly efficient, low cost, and in accordance with the green chemistry concept. - Abstract: A dual-cloud point extraction (d-CPE) procedure has been developed for simultaneous pre-concentration and separation of heavy metal ions (Cd2+, Co2+, Ni2+, Pb2+, Zn2+, and Cu2+) in water samples by inductively coupled plasma optical emission spectrometry (ICP-OES). The procedure is based on forming complexes of the metal ions with 8-hydroxyquinoline (8-HQ) in the as-formed Triton X-114 surfactant-rich phase. Instead of direct injection or analysis, the surfactant-rich phase containing the complexes was treated with nitric acid, and the metal ions were back-extracted into the aqueous phase in a second cloud point extraction stage, and finally determined by ICP-OES. Under the optimum conditions (pH = 7.0, Triton X-114 = 0.05% (w/v), 8-HQ = 2.0 × 10−4 mol L−1, HNO3 = 0.8 mol L−1), the detection limits for Cd2+, Co2+, Ni2+, Pb2+, Zn2+, and Cu2+ were 0.01, 0.04, 0.01, 0.34, 0.05, and 0.04 μg L−1, respectively. Relative standard deviation (RSD) values for 10 replicates at 100 μg L−1 were lower than 6.0%. The proposed method could be successfully applied to the determination of Cd2+, Co2+, Ni2+, Pb2+, Zn2+, and Cu2+ in water samples.

  10. PROCESSING UAV AND LIDAR POINT CLOUDS IN GRASS GIS

    Directory of Open Access Journals (Sweden)

    V. Petras

    2016-06-01

    Full Text Available Today’s methods of acquiring Earth surface data, namely lidar and unmanned aerial vehicle (UAV) imagery, non-selectively collect or generate large amounts of points. Point clouds from different sources vary in their properties such as number of returns, density, or quality. We present a set of tools with applications for different types of point clouds obtained by a lidar scanner, the structure from motion (SfM) technique, and a low-cost 3D scanner. To take advantage of the vertical structure of multiple-return lidar point clouds, we demonstrate tools to process them using 3D raster techniques which allow, for example, the development of custom vegetation classification methods. Dense point clouds obtained from UAV imagery, often containing redundant points, can be decimated using various techniques before further processing. We implemented and compared several decimation techniques in regard to their performance and the final digital surface model (DSM). Finally, we describe the processing of a point cloud from a low-cost 3D scanner, namely Microsoft Kinect, and its application for interaction with physical models. All the presented tools are open source and integrated in GRASS GIS, a multi-purpose open source GIS with remote sensing capabilities. The tools integrate with other open source projects, specifically the Point Data Abstraction Library (PDAL), the Point Cloud Library (PCL), and the OpenKinect libfreenect2 library, to benefit from the open source point cloud ecosystem. The implementation in GRASS GIS ensures long-term maintenance and reproducibility by the scientific community but also by the original authors themselves.
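
Grid-based decimation of the kind compared in this record can be sketched in a few lines of numpy; this is a generic illustration (keeping one centroid per occupied cell), not the actual GRASS GIS implementation:

```python
import numpy as np

def voxel_decimate(points, cell):
    """Keep one representative point (the centroid) per occupied 3D grid cell."""
    idx = np.floor(points / cell).astype(np.int64)
    # Group points by their cell index and average each group
    _, inverse, counts = np.unique(idx, axis=0, return_inverse=True, return_counts=True)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

rng = np.random.default_rng(0)
pts = rng.random((1000, 3))            # dense cloud in a unit cube
thin = voxel_decimate(pts, cell=0.25)  # at most 4^3 = 64 representatives
print(len(thin))
```

Binning by cell index preserves the overall shape while the per-cell centroid suppresses redundant points.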

  11. Determination of ultra trace arsenic species in water samples by hydride generation atomic absorption spectrometry after cloud point extraction

    Energy Technology Data Exchange (ETDEWEB)

    Ulusoy, Halil Ibrahim, E-mail: hiulusoy@yahoo.com [University of Cumhuriyet, Faculty of Science, Department of Chemistry, TR-58140, Sivas (Turkey); Akcay, Mehmet; Ulusoy, Songuel; Guerkan, Ramazan [University of Cumhuriyet, Faculty of Science, Department of Chemistry, TR-58140, Sivas (Turkey)

    2011-10-10

    Graphical abstract: The possible complex formation mechanism for ultra-trace As determination. Highlights: → A CPE/HGAAS system for arsenic determination and speciation in real samples has been applied for the first time. → The proposed method has the lowest detection limit compared with those of similar CPE studies in the literature. → The linear range of the method is very wide and suitable for application to real samples. - Abstract: Cloud point extraction (CPE) methodology has successfully been employed for the preconcentration of ultra-trace arsenic species in aqueous samples prior to hydride generation atomic absorption spectrometry (HGAAS). As(III) formed an ion-pairing complex with Pyronine B in the presence of sodium dodecyl sulfate (SDS) at pH 10.0 and was extracted into the non-ionic surfactant, polyethylene glycol tert-octylphenyl ether (Triton X-114). After phase separation, the surfactant-rich phase was diluted with 2 mL of 1 M HCl and 0.5 mL of 3.0% (w/v) Antifoam A. Under the optimized conditions, a preconcentration factor of 60 and a detection limit of 0.008 μg L⁻¹ (correlation coefficient 0.9918) were obtained with a calibration curve in the range of 0.03-4.00 μg L⁻¹. The proposed preconcentration procedure was successfully applied to the determination of As(III) ions in certified standard water samples (TMDA-53.3 and NIST 1643e, a low-level fortified standard for trace elements) and some real samples, including natural drinking water and tap water.

  12. A new dispersive liquid-liquid microextraction using ionic liquid based microemulsion coupled with cloud point extraction for determination of copper in serum and water samples.

    Science.gov (United States)

    Arain, Salma Aslam; Kazi, Tasneem Gul; Afridi, Hassan Imran; Arain, Mariam Shahzadi; Panhwar, Abdul Haleem; Khan, Naeemullah; Baig, Jameel Ahmed; Shah, Faheem

    2016-04-01

    A simple and rapid dispersive liquid-liquid microextraction procedure based on ionic liquid assisted microemulsion (IL-µE-DLLME) combined with cloud point extraction has been developed for the preconcentration of copper (Cu(2+)) in drinking water and serum samples of adolescent female hepatitis C (HCV) patients. In this method a ternary system was developed to form a microemulsion (µE) by the phase inversion method (PIM), using the ionic liquid 1-butyl-3-methylimidazolium hexafluorophosphate ([C4mim][PF6]) and the nonionic surfactant TX-100 (as a stabilizer in aqueous media). The ionic liquid microemulsion (IL-µE) was evaluated through visual assessment, optical light microscopy and spectrophotometry. The Cu(2+) in real water and acid-digested serum samples was complexed with 8-hydroxyquinoline (oxine) and extracted into the IL-µE medium. The phase separation of the stable IL-µE was carried out by the micellar cloud point extraction approach. The influence of different parameters such as pH, oxine concentration, and centrifugation time and rate was investigated. At optimized experimental conditions, the limit of detection and enhancement factor were found to be 0.132 µg/L and 70, respectively, with relative standard deviation <5%. In order to validate the developed method, certified reference materials (SLRS-4 Riverine water) and human serum (Sero-M10181) were analyzed. The resulting data indicated a non-significant difference between the obtained and certified values of Cu(2+). The developed procedure was successfully applied for the preconcentration and determination of trace levels of Cu(2+) in environmental and biological samples.

  13. Pointo - a Low Cost Solution to Point Cloud Processing

    Science.gov (United States)

    Houshiar, H.; Winkler, S.

    2017-11-01

    With advances in technology, access to data, especially 3D point cloud data, becomes more and more an everyday task. 3D point clouds are usually captured with very expensive tools such as 3D laser scanners, or with very time-consuming methods such as photogrammetry. Most of the available software for 3D point cloud processing is designed for experts and specialists in this field, and usually comes as very large packages containing a variety of methods and tools. This results in software that is usually very expensive to acquire and also very difficult to use. The difficulty of use is caused by the complicated user interfaces required to accommodate a large list of features. The aim of these complex packages is to provide a powerful tool for a specific group of specialists. However, they are not necessarily required by the majority of upcoming average users of point clouds. In addition to their complexity and high cost, they generally rely on expensive, modern hardware and are only compatible with one specific operating system. Many point cloud customers are not point cloud processing experts and are not willing to pay the high acquisition costs of this expensive software and hardware. In this paper we introduce a solution for low-cost point cloud processing. Our approach is designed to accommodate the needs of the average point cloud user. To reduce the cost and complexity of software, our approach focuses on one functionality at a time, in contrast with most available software and tools that aim to solve as many problems as possible at the same time. Our simple and user-oriented design improves the user experience and allows us to optimize our methods to create efficient software. In this paper we introduce the Pointo family as a series of connected programs providing easy-to-use tools with a simple design for different point cloud processing requirements.
    PointoVIEWER and PointoCAD are introduced as the first components of the Pointo family to provide a

  14. Cleaning Massive Sonar Point Clouds

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Larsen, Kasper Green; Mølhave, Thomas

    2010-01-01

    We consider the problem of automatically cleaning massive sonar data point clouds, that is, the problem of automatically removing noisy points that for example appear as a result of scans of (shoals of) fish, multiple reflections, scanner self-reflections, refraction in gas bubbles, and so on. We...
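
One standard building block for this kind of automatic cleaning is statistical outlier removal: a point whose mean distance to its k nearest neighbours is far above the cloud-wide average is likely noise. A brute-force numpy sketch of the idea (illustrative only, not the authors' algorithm, which is engineered for massive data):

```python
import numpy as np

def remove_outliers(points, k=5, n_sigma=2.0):
    """Drop points whose mean k-NN distance exceeds mean + n_sigma * std."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    d.sort(axis=1)
    knn_mean = d[:, 1:k + 1].mean(axis=1)   # skip column 0 (distance to self)
    keep = knn_mean <= knn_mean.mean() + n_sigma * knn_mean.std()
    return points[keep]

rng = np.random.default_rng(1)
cloud = rng.normal(0, 0.1, size=(200, 3))        # dense seabed patch
noisy = np.vstack([cloud, [[5.0, 5.0, 5.0]]])    # one spurious return
print(len(remove_outliers(noisy)))               # the isolated point is dropped
```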

  15. Benchmarking and improving point cloud data management in MonetDB

    NARCIS (Netherlands)

    O. Martinez-Rubi (Oscar); P. van Oosterom; R.A. Goncalves (Romulo); T. Tijssen; M.G. Ivanova (Milena); M.L. Kersten (Martin); F. Alvanaki (Foteini)

    2014-01-01

    The popularity, availability and sizes of point cloud data sets are increasing, thus raising interesting data management and processing challenges. Various software solutions are available for the management of point cloud data. A benchmark for point cloud data management systems was

  16. Benchmarking and improving point cloud data management in MonetDB

    NARCIS (Netherlands)

    Martinez-Rubi, O.; Van Oosterom, P.J.M.; Goncalves, R.; Tijssen, T.P.M.; Ivanova, M.; Kersten, M.L.; Alvanaki, F.

    2015-01-01

    The popularity, availability and sizes of point cloud data sets are increasing, thus raising interesting data management and processing challenges. Various software solutions are available for the management of point cloud data. A benchmark for point cloud data management systems was defined and it

  17. Characterizing Sorghum Panicles using 3D Point Clouds

    Science.gov (United States)

    Lonesome, M.; Popescu, S. C.; Horne, D. W.; Pugh, N. A.; Rooney, W.

    2017-12-01

    To address demands of population growth and impacts of global climate change, plant breeders must increase crop yield through genetic improvement. However, plant phenotyping, the characterization of a plant's physical attributes, remains a primary bottleneck in modern crop improvement programs. 3D point clouds generated from terrestrial laser scanning (TLS) and unmanned aerial systems (UAS) based structure from motion (SfM) are a promising data source to increase the efficiency of screening plant material in breeding programs. This study develops and evaluates methods for characterizing sorghum (Sorghum bicolor) panicles (heads) in field plots from both TLS and UAS-based SfM point clouds. The TLS point cloud over an experimental sorghum field at the Texas A&M farm in Burleson County, TX, was collected using a FARO Focus X330 3D laser scanner. The SfM point cloud was generated from UAS imagery captured with a Phantom 3 Professional UAS at 10 m altitude and 85% image overlap. The panicle detection method applies point cloud reflectance, height and point density attributes characteristic of sorghum panicles to detect them and estimate their dimensions (panicle length and width) through image classification and clustering procedures. We compare the derived panicle counts and panicle sizes with field-based and manually digitized measurements in selected plots and study the strengths and limitations of each data source for sorghum panicle characterization.

  18. Large-scale urban point cloud labeling and reconstruction

    Science.gov (United States)

    Zhang, Liqiang; Li, Zhuqiang; Li, Anjian; Liu, Fangyu

    2018-04-01

    The large number of object categories and many overlapping or closely neighboring objects in large-scale urban scenes pose great challenges in point cloud classification. In this paper, a novel framework is proposed for the classification and reconstruction of airborne laser scanning point cloud data. To label point clouds, we present a rectified linear units neural network named ReLu-NN, where rectified linear units (ReLu) instead of the traditional sigmoid are taken as the activation function in order to speed up convergence. Since the features of the point cloud are sparse, we reduce the number of neurons by dropout to avoid over-fitting during training. The set of feature descriptors for each 3D point is encoded through self-taught learning, and forms a discriminative feature representation which is taken as the input of the ReLu-NN. The segmented building points are consolidated through an edge-aware point set resampling algorithm, and then they are reconstructed into 3D lightweight models using the 2.5D contouring method (Zhou and Neumann, 2010). Compared with deep learning approaches, the ReLu-NN introduced can easily classify unorganized point clouds without rasterizing the data, and it does not need a large number of training samples. Most of the parameters in the network are learned, and thus the intensive parameter tuning cost is significantly reduced. Experimental results on various datasets demonstrate that the proposed framework achieves better performance than other related algorithms in terms of classification accuracy and reconstruction quality.
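
The stated reason for preferring ReLU over the traditional sigmoid, faster convergence, comes largely from gradient behaviour: the sigmoid derivative vanishes for large |x|, while the ReLU derivative is exactly 1 for any active unit. A small numeric illustration (generic, not the ReLu-NN itself):

```python
import numpy as np

relu = lambda x: np.maximum(0.0, x)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

x = np.array([-4.0, -1.0, 0.5, 4.0])
relu_grad = (x > 0).astype(float)          # 1 wherever the unit is active
sig_grad = sigmoid(x) * (1 - sigmoid(x))   # vanishes as |x| grows
print(relu_grad)                           # [0. 0. 1. 1.]
print(sig_grad.round(3))
```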

  19. Multiview point clouds denoising based on interference elimination

    Science.gov (United States)

    Hu, Yang; Wu, Qian; Wang, Le; Jiang, Huanyu

    2018-03-01

    Newly emerging low-cost depth sensors offer huge potentials for three-dimensional (3-D) modeling, but existing high noise restricts these sensors from obtaining accurate results. Thus, we proposed a method for denoising registered multiview point clouds with high noise to solve that problem. The proposed method is aimed at fully using redundant information to eliminate the interferences among point clouds of different views based on an iterative procedure. In each iteration, noisy points are either deleted or moved to their weighted average targets in accordance with two cases. Simulated data and practical data captured by a Kinect v2 sensor were tested in experiments qualitatively and quantitatively. Results showed that the proposed method can effectively reduce noise and recover local features from highly noisy multiview point clouds with good robustness, compared to truncated signed distance function and moving least squares (MLS). Moreover, the resulting low-noise point clouds can be further smoothed by the MLS to achieve improved results. This study provides the feasibility of obtaining fine 3-D models with high-noise devices, especially for depth sensors, such as Kinect.

  20. Ifcwall Reconstruction from Unstructured Point Clouds

    Science.gov (United States)

    Bassier, M.; Klein, R.; Van Genechten, B.; Vergauwen, M.

    2018-05-01

    The automated reconstruction of Building Information Modeling (BIM) objects from point cloud data is still ongoing research. A key aspect is the creation of accurate wall geometry as it forms the basis for further reconstruction of objects in a BIM. After segmenting and classifying the initial point cloud, the labelled segments are processed and the wall topology is reconstructed. However, the procedure is challenging due to noise, occlusions and the complexity of the input data. In this work, a method is presented to automatically reconstruct consistent wall geometry from point clouds. More specifically, the use of room information is proposed to aid the wall topology creation. First, a set of partial walls is constructed based on classified planar primitives. Next, the rooms are identified using the retrieved wall information along with the floors and ceilings. The wall topology is computed by the intersection of the partial walls conditioned on the room information. The final wall geometry is defined by creating IfcWallStandardCase objects conforming to the IFC4 standard. The result is a set of walls according to the as-built conditions of a building. The experiments prove that the method used is a reliable framework for wall reconstruction from unstructured point cloud data. Also, the use of room information reduces the rate of false positives for the wall topology. Given the walls, ceilings and floors, 94% of the rooms are correctly identified. A key advantage of the proposed method is that it deals with complex rooms and is not bound to single storeys.

  1. Development of a Cloud-Point Extraction Method for Cobalt Determination in Natural Water Samples

    Directory of Open Access Journals (Sweden)

    Mohammad Reza Jamali

    2013-01-01

    Full Text Available A new, simple, and versatile cloud-point extraction (CPE) methodology has been developed for the separation and preconcentration of cobalt. The cobalt ions in the initial aqueous solution were complexed with 4-benzylpiperidinedithiocarbamate, and Triton X-114 was added as surfactant. Dilution of the surfactant-rich phase with acidified ethanol was performed after phase separation, and the cobalt content was measured by flame atomic absorption spectrometry. The main factors affecting the CPE procedure, such as pH, concentration of ligand, amount of Triton X-114, equilibrium temperature, and incubation time, were investigated and optimized. Under the optimal conditions, the limit of detection (LOD) for cobalt was 0.5 μg L-1, with a sensitivity enhancement factor (EF) of 67. The calibration curve was linear in the range of 2–150 μg L-1, and the relative standard deviation was 3.2% (c=100 μg L-1; n=10). The proposed method was applied to the determination of trace cobalt in real water samples with satisfactory analytical results.

  2. Geographical point cloud modelling with the 3D medial axis transform

    NARCIS (Netherlands)

    Peters, R.Y.

    2018-01-01

    A geographical point cloud is a detailed three-dimensional representation of the geometry of our geographic environment.
    Using geographical point cloud modelling, we are able to extract valuable information from geographical point clouds that can be used for applications in asset management,

  3. Processing Terrain Point Cloud Data

    KAUST Repository

    DeVore, Ronald; Petrova, Guergana; Hielsberg, Matthew; Owens, Luke; Clack, Billy; Sood, Alok

    2013-01-01

    Terrain point cloud data are typically acquired through some form of Light Detection And Ranging sensing. They form a rich resource that is important in a variety of applications including navigation, line of sight, and terrain visualization

  4. Point-Cloud Compression for Vehicle-Based Mobile Mapping Systems Using Portable Network Graphics

    Science.gov (United States)

    Kohira, K.; Masuda, H.

    2017-09-01

    A mobile mapping system is effective for capturing dense point-clouds of roads and roadside objects. Point-clouds of urban areas, residential areas, and arterial roads are useful for maintenance of infrastructure, map creation, and automatic driving. However, the data size of point-clouds measured over large areas is enormous. A large storage capacity is required to store such point-clouds, and heavy loads are placed on the network if point-clouds are transferred through it. Therefore, it is desirable to reduce the data size of point-clouds without deterioration of quality. In this research, we propose a novel point-cloud compression method for vehicle-based mobile mapping systems. In our compression method, point-clouds are mapped onto 2D pixels using GPS time and the parameters of the laser scanner. Then, the images are encoded in the Portable Network Graphics (PNG) format and compressed using the PNG algorithm. In our experiments, our method could efficiently compress point-clouds without deteriorating the quality.

  5. POINT-CLOUD COMPRESSION FOR VEHICLE-BASED MOBILE MAPPING SYSTEMS USING PORTABLE NETWORK GRAPHICS

    Directory of Open Access Journals (Sweden)

    K. Kohira

    2017-09-01

    Full Text Available A mobile mapping system is effective for capturing dense point-clouds of roads and roadside objects. Point-clouds of urban areas, residential areas, and arterial roads are useful for maintenance of infrastructure, map creation, and automatic driving. However, the data size of point-clouds measured over large areas is enormous. A large storage capacity is required to store such point-clouds, and heavy loads are placed on the network if point-clouds are transferred through it. Therefore, it is desirable to reduce the data size of point-clouds without deterioration of quality. In this research, we propose a novel point-cloud compression method for vehicle-based mobile mapping systems. In our compression method, point-clouds are mapped onto 2D pixels using GPS time and the parameters of the laser scanner. Then, the images are encoded in the Portable Network Graphics (PNG) format and compressed using the PNG algorithm. In our experiments, our method could efficiently compress point-clouds without deteriorating the quality.
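
The key trick here, mapping scan samples onto a lossless 2D image so that an image codec can exploit their spatial coherence, can be imitated generically. The sketch below quantizes a synthetic range image to 16 bits and compresses it with DEFLATE (the algorithm inside PNG) via zlib; the scan layout and millimetre scaling are invented for illustration:

```python
import numpy as np
import zlib

# Synthetic "range image": one row per scan line, one column per laser pulse.
rows, cols = 200, 500
y, x = np.mgrid[0:rows, 0:cols]
ranges = 20.0 + 5.0 * np.sin(x / 40.0) + 0.01 * y   # smooth road-like surface

# Quantize to 16-bit (1 mm resolution up to ~65 m), then DEFLATE-compress.
img = np.round(ranges * 1000).astype(np.uint16)
raw = img.tobytes()
packed = zlib.compress(raw, level=9)
print(f"raw {len(raw)} bytes -> compressed {len(packed)} bytes")
```

Because neighbouring pixels of a coherent surface have similar ranges, the lossless stream compresses far below its raw size, which is the effect the paper exploits through PNG.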

  6. DETECTION OF SLOPE MOVEMENT BY COMPARING POINT CLOUDS CREATED BY SFM SOFTWARE

    Directory of Open Access Journals (Sweden)

    K. Oda

    2016-06-01

    Full Text Available This paper proposes a movement detection method between point clouds created by SfM software, without setting any onsite georeferenced points. SfM software, like Smart3DCapture, PhotoScan, and Pix4D, is convenient for non-professional operators of photogrammetry, because these systems simply require a sequence of photos and output point clouds with a colour index corresponding to the colour of the original image pixel where each point is projected. SfM software can execute aerial triangulation and create dense point clouds fully automatically. This is useful when monitoring the motion of unstable slopes, or loose rocks on slopes along roads or railroads. Most existing methods, however, use mesh-based DSMs for comparing point clouds before/after movement, and they cannot be applied in cases where part of the slope forms overhangs. And in some cases the movement is smaller than the precision of the ground control points, and registering two point clouds with GCPs is not appropriate. The change detection method in this paper adopts the CCICP (Classification and Combined ICP) algorithm for registering point clouds before/after movement. The CCICP algorithm is a type of ICP (Iterative Closest Point), which minimizes point-to-plane and point-to-point distances simultaneously, and also rejects incorrect correspondences based on point classification by PCA (Principal Component Analysis). Precision tests show that the CCICP method can register two point clouds to within the order of 1 pixel in the original images. Ground control points set on site are useful for the initial alignment of two point clouds. If there are no GCPs on the site of the slopes, initial alignment is achieved by measuring feature points as ground control points in the point cloud before movement, and creating the point cloud after movement with these ground control points.
    When the motion is a rigid transformation, as in the case of a loose rock moving on a slope, motion including rotation can be analysed by executing CCICP for a
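
CCICP combines point-to-plane and point-to-point minimization with PCA-based correspondence rejection; as a baseline, plain point-to-point ICP with an SVD (Kabsch) transform estimate can be sketched as follows (a generic illustration, not the authors' algorithm):

```python
import numpy as np

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Plain point-to-point ICP: closest-point matching + best-fit transform."""
    cur = src.copy()
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None] - dst[None, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        R, t = best_fit_transform(cur, matched)
        cur = cur @ R.T + t
    return cur

# Recover a small known rotation + shift applied to a grid of points
axis = np.array([-1.5, -0.5, 0.5, 1.5])
before = np.stack(np.meshgrid(axis, axis, axis), -1).reshape(-1, 3)
theta = 0.05
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
after = before @ Rz.T + np.array([0.02, -0.01, 0.01])
aligned = icp(before, after)
print(np.abs(aligned - after).max())   # should be near zero after convergence
```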

  7. Buildings and Terrain of Urban Area Point Cloud Segmentation based on PCL

    International Nuclear Information System (INIS)

    Liu, Ying; Zhong, Ruofei

    2014-01-01

    A current problem in laser radar point data classification is building and urban terrain segmentation; this paper proposes a point cloud segmentation method based on the PCL library. PCL is a large cross-platform open source C++ programming library, which implements a large number of efficient data structures and generic algorithms related to point clouds, involving point cloud retrieval, filtering, segmentation, registration, feature extraction, curved surface reconstruction, visualization, etc. Because laser radar point clouds are characterized by large data volumes and unsymmetrical distribution, this paper proposes using the kd-tree data structure to organize the data; then using a Voxel Grid filter for point cloud resampling, namely to reduce the amount of point cloud data while keeping the shape characteristics of the point cloud; and then using the PCL segmentation module's Euclidean Cluster Extraction class for Euclidean clustering to segment the three-dimensional point clouds of buildings and ground. The experimental results show that this method avoids the need for multiple copies of the existing data, saves program storage space through calls to PCL library methods and classes, shortens program compile time and improves the running speed of the program.
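
The Euclidean clustering step (PCL's EuclideanClusterExtraction) amounts to region growing with a distance threshold: points closer than a tolerance end up in the same cluster. A brute-force numpy sketch of that idea, without the kd-tree acceleration PCL uses:

```python
import numpy as np

def euclidean_clusters(points, tol):
    """Greedy Euclidean clustering: points closer than `tol` share a cluster."""
    n = len(points)
    labels = np.full(n, -1)
    cluster = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = cluster
        while stack:
            i = stack.pop()
            near = np.where((np.linalg.norm(points - points[i], axis=1) < tol)
                            & (labels == -1))[0]
            labels[near] = cluster
            stack.extend(near.tolist())
        cluster += 1
    return labels

# Two well-separated blobs (e.g. two buildings) -> two clusters
a = np.random.default_rng(3).normal(0, 0.1, (50, 3))
pts = np.vstack([a, a + [10, 0, 0]])
labels = euclidean_clusters(pts, tol=1.0)
print(len(set(labels.tolist())))   # 2
```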

  8. Georeferencing UAS Derivatives Through Point Cloud Registration with Archived Lidar Datasets

    Science.gov (United States)

    Magtalas, M. S. L. Y.; Aves, J. C. L.; Blanco, A. C.

    2016-10-01

    Georeferencing gathered images is a common step before performing spatial analysis and other processes on datasets acquired using unmanned aerial systems (UAS). Methods of applying spatial information to aerial images or their derivatives are through onboard GPS (Global Positioning System) geotagging, or through tying of models to GCPs (Ground Control Points) acquired in the field. Currently, UAS derivatives are limited to meter levels of accuracy when their generation is unaided by points of known position on the ground. The use of ground control points established using survey-grade GPS or GNSS receivers can greatly reduce model errors to centimeter levels. However, this comes with additional costs, not only for instrument acquisition and survey operations, but also in actual time spent in the field. This study uses a workflow for cloud-based post-processing of UAS data in combination with already existing LiDAR data. The georeferencing of the UAV point cloud is executed using the Iterative Closest Point (ICP) algorithm. It is applied through the open-source CloudCompare software (Girardeau-Montaut, 2006) on a `skeleton point cloud'. This skeleton point cloud consists of manually extracted features consistent in both the LiDAR and UAV data. For this cloud, roads and buildings with minimal deviations given their differing dates of acquisition are considered consistent. Transformation parameters are computed for the skeleton cloud and can then be applied to the whole UAS dataset. In addition, a separate cloud consisting of non-vegetation features automatically derived using the CANUPO classification algorithm (Brodu and Lague, 2012) was used to generate a separate set of parameters. A ground survey was done to validate the transformed cloud. An RMSE value of around 16 centimeters was found when comparing validation data to the models georeferenced using the CANUPO cloud and the manual skeleton cloud.
Cloud-to-cloud distance computations of
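
The validation against ground survey points reduces to a cloud-to-cloud nearest-neighbour distance computation followed by an RMSE. A brute-force numpy sketch (illustrative only, with invented coordinates):

```python
import numpy as np

def cloud_to_cloud_rmse(reference, model):
    """RMSE of each reference point's distance to its nearest model point."""
    d = np.linalg.norm(reference[:, None, :] - model[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return float(np.sqrt(np.mean(nearest ** 2)))

rng = np.random.default_rng(4)
model = rng.random((500, 3))                          # georeferenced model cloud
survey = model[:25] + rng.normal(0, 0.01, (25, 3))    # noisy survey check points
print(f"RMSE = {cloud_to_cloud_rmse(survey, model):.4f} m")
```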

  9. Automatic markerless registration of point clouds with semantic-keypoint-based 4-points congruent sets

    Science.gov (United States)

    Ge, Xuming

    2017-08-01

    The coarse registration of point clouds from urban building scenes has become a key topic in applications of terrestrial laser scanning technology. Sampling-based algorithms in the random sample consensus (RANSAC) model have emerged as mainstream solutions to address coarse registration problems. In this paper, we propose a novel combined solution to automatically align two markerless point clouds from building scenes. Firstly, the method segments non-ground points from ground points. Secondly, the proposed method detects feature points from each cross section and then obtains semantic keypoints by connecting feature points with specific rules. Finally, the detected semantic keypoints from two point clouds act as inputs to a modified 4PCS algorithm. Examples are presented and the results compared with those of K-4PCS to demonstrate the main contributions of the proposed method, which are the extension of the original 4PCS to handle heavy datasets and the use of semantic keypoints to improve K-4PCS in relation to registration accuracy and computational efficiency.

  10. SEMANTIC SEGMENTATION OF BUILDING ELEMENTS USING POINT CLOUD HASHING

    Directory of Open Access Journals (Sweden)

    M. Chizhova

    2018-05-01

    Full Text Available For the interpretation of point clouds, the semantic definition of segments extracted from point clouds or images is a common problem. Usually, the semantics of geometrically pre-segmented point cloud elements are determined using probabilistic networks and scene databases. The proposed semantic segmentation method is based on the psychological human interpretation of geometric objects, especially on fundamental rules of primary comprehension. Starting from these rules, buildings can be quite well and simply classified by a human operator (e.g. an architect) into different building types and structural elements (dome, nave, transept, etc.), including particular building parts which are visually detected. The key part of the procedure is a novel method based on hashing, where point cloud projections are transformed into binary pixel representations. The segmentation approach, demonstrated on the example of classical Orthodox churches, is suitable for other buildings and objects characterized by a particular typology in their construction (e.g. industrial objects in standardized environments with strict component design allowing clear semantic modelling).
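
The hashing step, turning a point cloud projection into a binary pixel image and comparing compact codes, can be illustrated generically; the grid resolution, normalization, and SHA-1 digest below are assumptions for the sketch, not the authors' exact procedure:

```python
import hashlib
import numpy as np

def projection_hash(points, res=32):
    """Hash a binary top-down occupancy image of the point cloud."""
    xy = points[:, :2]
    span = np.ptp(xy, axis=0) + 1e-12          # normalize to the bounding box
    ij = np.minimum(((xy - xy.min(0)) / span * res).astype(int), res - 1)
    img = np.zeros((res, res), dtype=np.uint8)
    img[ij[:, 0], ij[:, 1]] = 1                # binary pixel representation
    return hashlib.sha1(img.tobytes()).hexdigest()[:12]

pts = np.random.default_rng(6).random((100, 3))
print(projection_hash(pts))
```

Because the projection is normalized to its own bounding box, the code is invariant to translation, so two scans of the same structure at different positions hash alike.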

  11. LSAH: a fast and efficient local surface feature for point cloud registration

    Science.gov (United States)

    Lu, Rongrong; Zhu, Feng; Wu, Qingxiao; Kong, Yanzi

    2018-04-01

    Point cloud registration is a fundamental task in high-level three-dimensional applications. Noise, uneven point density and varying point cloud resolutions are the three main challenges for point cloud registration. In this paper, we design a robust and compact local surface descriptor called the Local Surface Angles Histogram (LSAH) and propose an effective coarse-to-fine algorithm for point cloud registration. The LSAH descriptor is formed by concatenating five normalized sub-histograms into one histogram. The five sub-histograms are created by each accumulating a different type of angle from a local surface patch. The experimental results show that our LSAH is more robust to uneven point density and point cloud resolutions than four state-of-the-art local descriptors in terms of feature matching. Moreover, we tested our LSAH-based coarse-to-fine algorithm for point cloud registration. The experimental results demonstrate that our algorithm is robust and efficient as well.
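
The general construction of such a descriptor, accumulating angles measured on a local surface patch into a normalized histogram, can be sketched as follows; this illustrates one angle type only, not the authors' exact five-angle LSAH definition:

```python
import numpy as np

def angle_histogram(patch, center, normal, bins=8):
    """Histogram of angles between the patch normal and directions to neighbours."""
    vecs = patch - center
    vecs = vecs[np.linalg.norm(vecs, axis=1) > 0]          # drop the center itself
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
    cosines = np.clip(vecs @ normal, -1.0, 1.0)
    hist, _ = np.histogram(np.arccos(cosines), bins=bins, range=(0, np.pi))
    return hist / hist.sum()    # normalized, as in the concatenated LSAH

patch = np.random.default_rng(5).normal(size=(100, 3))
desc = angle_histogram(patch, center=np.zeros(3), normal=np.array([0.0, 0.0, 1.0]))
print(desc.round(2))
```

Concatenating several such normalized sub-histograms, each built from a different angle type, yields a compact signature that can be compared between clouds for feature matching.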

  12. Speciation of silver nanoparticles and Ag(I) species using cloud point extraction followed by electrothermal atomic absorption spectrometry

    International Nuclear Information System (INIS)

    López-García, Ignacio; Vicente-Martínez, Yesica; Hernández-Córdoba, Manuel

    2014-01-01

    Silver nanoparticles in the presence of Triton-X114 were extracted into a micellar phase obtained after incubation at 40 °C for 10 min followed by centrifugation. After injection of an aliquot (30 μL) of the surfactant-rich phase into the electrothermal atomizer, the enrichment effect due to cloud point extraction allowed a detection limit of 2 ng L⁻¹ silver to be achieved. The preconcentration factor was 242, and the repeatability for ten measurements at a 50 ng L⁻¹ silver level was 4.6%. Ag(I) species were adsorbed onto the silver nanoparticles and were also extracted in the micellar phase. The incorporation of 0.01 mol L⁻¹ ammonium thiocyanate to the sample solution prevented the extraction of Ag(I) species. Speciation was carried out using two extractions, one in the absence and the other in the presence of thiocyanate, the concentration of Ag(I) species being obtained by difference. The procedure was applied to the determination of silver nanoparticles and Ag(I) species in waters and in lixiviates obtained from sticking plasters and cleaning cloths. - Highlights: • Silver nanoparticles and Ag(I) species are separated into a surfactant-rich phase. • The Ag(I) species are not extracted in the presence of thiocyanate. • The cloud point extraction of two aliquots allows speciation to be carried out. • Extreme sensitivity (detection limit 2 ng L⁻¹) is achieved

  13. Speciation of silver nanoparticles and Ag(I) species using cloud point extraction followed by electrothermal atomic absorption spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    López-García, Ignacio; Vicente-Martínez, Yesica; Hernández-Córdoba, Manuel, E-mail: hcordoba@um.es

    2014-11-01

    Silver nanoparticles in the presence of Triton-X114 were extracted into a micellar phase obtained after incubation at 40 °C for 10 min followed by centrifugation. After injection of an aliquot (30 μL) of the surfactant-rich phase into the electrothermal atomizer, the enrichment effect due to cloud point extraction allowed a detection limit of 2 ng L⁻¹ silver to be achieved. The preconcentration factor was 242, and the repeatability for ten measurements at a 50 ng L⁻¹ silver level was 4.6%. Ag(I) species were adsorbed onto the silver nanoparticles and were also extracted in the micellar phase. The incorporation of 0.01 mol L⁻¹ ammonium thiocyanate to the sample solution prevented the extraction of Ag(I) species. Speciation was carried out using two extractions, one in the absence and the other in the presence of thiocyanate, the concentration of Ag(I) species being obtained by difference. The procedure was applied to the determination of silver nanoparticles and Ag(I) species in waters and in lixiviates obtained from sticking plasters and cleaning cloths. - Highlights: • Silver nanoparticles and Ag(I) species are separated into a surfactant-rich phase. • The Ag(I) species are not extracted in the presence of thiocyanate. • The cloud point extraction of two aliquots allows speciation to be carried out. • Extreme sensitivity (detection limit 2 ng L⁻¹) is achieved.

  14. FPFH-based graph matching for 3D point cloud registration

    Science.gov (United States)

    Zhao, Jiapeng; Li, Chen; Tian, Lihua; Zhu, Jihua

    2018-04-01

    Correspondence detection is a vital step in point cloud registration, as it helps obtain a reliable initial alignment. In this paper, we put forward an advanced point-feature-based graph matching algorithm to solve the initial alignment problem of rigid 3D point cloud registration with partial overlap. Specifically, Fast Point Feature Histograms are first used to determine the initial candidate correspondences. Next, a new objective function is provided to make the graph matching more suitable for partially overlapping point clouds. The objective function is optimized by the simulated annealing algorithm to obtain the final set of correct correspondences. Finally, we present a novel set partitioning method which transforms the NP-hard optimization problem into an O(n³)-solvable one. Experiments on the Stanford and UWA public data sets indicate that our method obtains better results in terms of both accuracy and time cost compared with other point cloud registration methods.
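    The abstract does not give the paper's objective function, but the simulated annealing step it relies on follows a standard pattern: always accept improvements, sometimes accept worse states, and cool the acceptance temperature over time. A generic, self-contained sketch (the API and the toy objective below are inventions for illustration, not the paper's formulation):

```python
import math
import random

def anneal(score, state, neighbor, t0=1.0, cooling=0.995, steps=2000, seed=0):
    """Generic simulated annealing for maximizing `score`: improvements are always
    accepted; worse states are accepted with probability exp(delta / T), so the
    search can escape local optima while the temperature T cools geometrically."""
    rng = random.Random(seed)
    best = cur = state
    best_s = cur_s = score(cur)
    t = t0
    for _ in range(steps):
        cand = neighbor(cur, rng)
        s = score(cand)
        if s >= cur_s or rng.random() < math.exp((s - cur_s) / t):
            cur, cur_s = cand, s
            if s > best_s:
                best, best_s = cand, s
        t *= cooling
    return best, best_s
```

    In the paper's setting the state would be a selection of candidate correspondences and the score its graph-matching objective; here any `score`/`neighbor` pair can be plugged in.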

  15. Towards extending IFC with point cloud data

    NARCIS (Netherlands)

    Krijnen, T.F.; Beetz, J.; Ochmann, S.; Vock, R.; Wessel, R.

    2015-01-01

    In this paper we suggest an extension to the Industry Foundation Classes model to integrate point cloud datasets. The proposal includes a schema extension to the core model allowing the storage of points either as Cartesian coordinates, points in parametric space of a surface associated with a

  16. APPLICABILITY ANALYSIS OF CLOTH SIMULATION FILTERING ALGORITHM FOR MOBILE LIDAR POINT CLOUD

    Directory of Open Access Journals (Sweden)

    S. Cai

    2018-04-01

    Full Text Available Classifying the original point clouds into ground and non-ground points is a key step in LiDAR (light detection and ranging) data post-processing. The cloth simulation filtering (CSF) algorithm, which is based on a physical process, has been validated to be an accurate, automatic and easy-to-use algorithm for airborne LiDAR point clouds. As a new technique of three-dimensional data collection, mobile laser scanning (MLS) has been gradually applied in various fields, such as reconstruction of digital terrain models (DTM), 3D building modeling, and forest inventory and management. Compared with airborne LiDAR point clouds, mobile LiDAR point clouds have different characteristics in terms of point density, distribution and complexity. Some filtering algorithms for airborne LiDAR data have been applied directly to mobile LiDAR point clouds, but they did not give satisfactory results. In this paper, we explore the ability of the CSF algorithm to handle mobile LiDAR point clouds. Three samples with different terrain shapes are selected to test the performance of this algorithm, which respectively yields total errors of 0.44 %, 0.77 % and 1.20 %. Additionally, a large-area dataset is also tested to further validate the effectiveness of this algorithm, and the results show that it can quickly and accurately separate point clouds into ground and non-ground points. In summary, this algorithm is efficient and reliable for mobile LiDAR point clouds.
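    The total errors quoted above follow the standard filter-evaluation convention: Type I error is the fraction of true ground points rejected, Type II error is the fraction of true non-ground points accepted, and total error pools both against all points. A small sketch of that arithmetic (names and the example counts are illustrative):

```python
def filtering_errors(ground_rejected, nonground_accepted, n_ground, n_nonground):
    """Standard ground-filter metrics: Type I, Type II and total error rates."""
    type1 = ground_rejected / n_ground          # bare-ground points lost
    type2 = nonground_accepted / n_nonground    # object points kept as ground
    total = (ground_rejected + nonground_accepted) / (n_ground + n_nonground)
    return type1, type2, total
```

    A total error of 0.44 % therefore means that fewer than five points per thousand, ground and non-ground combined, were misclassified.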

  17. Microfluidic paper-based biomolecule preconcentrator based on ion concentration polarization.

    Science.gov (United States)

    Han, Sung Il; Hwang, Kyo Seon; Kwak, Rhokyun; Lee, Jeong Hoon

    2016-06-21

    Microfluidic paper-based analytical devices (μPADs) for molecular detection have great potential in the field of point-of-care diagnostics. Currently, a critical problem being faced by μPADs is improving their detection sensitivity. Various preconcentration processes have been developed, but they still involve structures and fabrication processes too complicated to integrate into μPADs. To address this issue, we have developed a novel paper-based preconcentrator utilizing ion concentration polarization (ICP) with minimal addition on lateral-flow paper. The cation selective membrane (i.e., Nafion) is patterned on adhesive tape, and this tape is then attached to paper-based channels. When an electric field is applied across the Nafion, ICP is initiated to preconcentrate the biomolecules in the paper channel. Departing from previous paper-based preconcentrators, we maintain steady lateral fluid flow with the separated Nafion layer; as a result, fluorescent dyes and proteins (FITC-albumin and bovine serum albumin) are continuously delivered to the preconcentration zone, achieving high preconcentration performance up to 1000-fold. In addition, we demonstrate that the Nafion-patterned tape can be integrated with various geometries (multiplexed preconcentrator) and platforms (string and polymer microfluidic channel). This work would facilitate integration of various ICP devices, including preconcentrators, pH/concentration modulators, and micro mixers, with steady lateral flows in paper-based platforms.

  18. Spectrophotometric determination of low-level arsenic species in beverages after ion-pairing vortex-assisted cloud-point extraction with acridine red.

    Science.gov (United States)

    Altunay, Nail; Gürkan, Ramazan; Kır, Ufuk

    2016-01-01

    A new, low-cost, micellar-sensitive and selective spectrophotometric method was developed for the determination of inorganic arsenic (As) species in beverage samples. Vortex-assisted cloud-point extraction (VA-CPE) was used for the efficient pre-concentration of As(V) in the selected samples. The method is based on selective and sensitive ion-pairing of As(V) with acridine red (ARH⁺) in the presence of pyrogallol and sequential extraction into the micellar phase of Triton X-45 at pH 6.0. Under the optimised conditions, the calibration curve was highly linear in the range of 0.8-280 µg l⁻¹ for As(V). The limits of detection and quantification of the method were 0.25 and 0.83 µg l⁻¹, respectively. The method was successfully applied to the determination of trace As in the pre-treated and digested samples under microwave and ultrasonic power. As(V) and total As levels in the samples were spectrophotometrically determined after pre-concentration with VA-CPE at 494 nm before and after oxidation with acidic KMnO₄. The As(III) levels were calculated from the difference between As(V) and total As levels. The accuracy of the method was demonstrated by analysis of two certified reference materials (CRMs) where the measured values for As were statistically within the 95% confidence limit for the certified values.
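    The calibration and detection-limit figures quoted in such abstracts follow standard conventions: an ordinary least-squares line through the calibration points, and an LOD of k times the blank standard deviation divided by the slope (k = 3 for LOD, k = 10 for LOQ). A sketch of that arithmetic with invented example numbers, not the paper's data:

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for a calibration line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def detection_limit(blank_sd, slope, k=3.0):
    """IUPAC-style limit: k times the blank standard deviation over the slope."""
    return k * blank_sd / slope
```

    The speciation itself is again a difference calculation: As(III) = total As (after KMnO₄ oxidation) minus As(V) (measured directly).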

  19. Visualization and labeling of point clouds in virtual reality

    DEFF Research Database (Denmark)

    Stets, Jonathan Dyssel; Sun, Yongbin; Greenwald, Scott W.

    2017-01-01

    We present a Virtual Reality (VR) application for labeling and handling point cloud data sets. A series of room-scale point clouds are recorded as a video sequence using a Microsoft Kinect. The data can be played and paused, and frames can be skipped just like in a video player. The user can walk...

  20. Lidar Point Cloud - USGS National Map 3DEP Downloadable Data Collection

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This data collection consists of Lidar Point Cloud (LPC) projects as provided to the USGS. These point cloud files contain all the original lidar points collected,...

  1. Evaluation of terrestrial photogrammetric point clouds derived from thermal imagery

    Science.gov (United States)

    Metcalf, Jeremy P.; Olsen, Richard C.

    2016-05-01

    Computer vision and photogrammetric techniques have been widely applied to digital imagery producing high density 3D point clouds. Using thermal imagery as input, the same techniques can be applied to infrared data to produce point clouds in 3D space, providing surface temperature information. The work presented here is an evaluation of the accuracy of 3D reconstruction of point clouds produced using thermal imagery. An urban scene was imaged over an area at the Naval Postgraduate School, Monterey, CA, viewing from above as with an airborne system. Terrestrial thermal and RGB imagery were collected from a rooftop overlooking the site using a FLIR SC8200 MWIR camera and a Canon T1i DSLR. In order to spatially align each dataset, ground control points were placed throughout the study area using Trimble R10 GNSS receivers operating in RTK mode. Each image dataset is processed to produce a dense point cloud for 3D evaluation.

  2. Preconcentration of trace elements

    International Nuclear Information System (INIS)

    Zolotov, Yu. A.; Kuz'min, N.M.

    1990-01-01

    This monograph deals with the theory and practical applications of trace metals preconcentration. It gives general characteristics of the process and describes in detail the methods of preconcentration: solvent extraction, sorption, co-precipitation, volatilization, and others. Special attention is given to preconcentration in combination with subsequent determination methods. The use of preconcentration in analysis of environmental and biological samples, mineral raw materials, high purity substances, and various industrial materials is also considered

  3. TUNNEL POINT CLOUD FILTERING METHOD BASED ON ELLIPTIC CYLINDRICAL MODEL

    Directory of Open Access Journals (Sweden)

    N. Zhu

    2016-06-01

    Full Text Available The large number of bolts and screws attached to the subway shield ring plates, along with the many accessories such as metal supports and electrical equipment mounted on the tunnel walls, make the laser point cloud data include many non-tunnel-section points (hereinafter referred to as non-points), which affects the accuracy of modeling and deformation monitoring. This paper proposes a filtering method for the point cloud based on an elliptic cylindrical model. The original laser point cloud data are first projected onto a horizontal plane, and a searching algorithm is given to extract the edge points of both sides, which are used further to fit the tunnel central axis. Along the axis the point cloud is segmented regionally, and then fitted as a smooth elliptic cylindrical surface by means of iteration. This processing enables the automatic filtering of the inner-wall non-points. Two groups of experiments showed consistent results: the elliptic cylindrical model-based method can effectively filter out the non-points and meet the accuracy requirements for subway deformation monitoring. The method provides a new mode for the periodic, all-around deformation monitoring of tunnel sections in routine subway operation and maintenance.
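    The core idea, fit a cross-section model and reject points whose residual from it exceeds a tolerance, can be illustrated with a circular section as a simplified stand-in for the paper's elliptic cylinder. The sketch below uses the Kåsa algebraic circle fit; all names and the tolerance are illustrative:

```python
import math

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 linear system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    x = [0.0] * 3
    for i in (2, 1, 0):
        x[i] = (M[i][3] - sum(M[i][c] * x[c] for c in range(i + 1, 3))) / M[i][i]
    return x

def fit_circle(pts):
    """Kasa algebraic fit of x^2 + y^2 = 2ax + 2by + c; returns (cx, cy, r)."""
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for x, y in pts:
        row = (2 * x, 2 * y, 1.0)
        z = x * x + y * y
        for i in range(3):
            for j in range(3):
                A[i][j] += row[i] * row[j]
            b[i] += row[i] * z
    a, bb, c = solve3(A, b)
    return a, bb, math.sqrt(c + a * a + bb * bb)

def filter_section(pts, tol):
    """Keep points whose radial residual from the fitted section is within tol."""
    cx, cy, r = fit_circle(pts)
    return [p for p in pts if abs(math.hypot(p[0] - cx, p[1] - cy) - r) <= tol]
```

    Points from bolts, brackets and equipment sit well inside the fitted wall surface, so their residuals are large and they are discarded, which is the behaviour the paper's elliptic model achieves section by section along the fitted axis.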

  4. Automatic registration of terrestrial point cloud using panoramic reflectance images

    NARCIS (Netherlands)

    Kang, Z.

    2008-01-01

    Much attention is paid to registration of terrestrial point clouds nowadays. Research is carried out towards improved efficiency and automation of the registration process. This paper reports a new approach for point clouds registration utilizing reflectance panoramic images. The approach follows a

  5. Filtering Photogrammetric Point Clouds Using Standard LIDAR Filters Towards DTM Generation

    Science.gov (United States)

    Zhang, Z.; Gerke, M.; Vosselman, G.; Yang, M. Y.

    2018-05-01

    Digital Terrain Models (DTMs) can be generated from point clouds acquired by laser scanning or photogrammetric dense matching. During the last two decades, much effort has been devoted to developing robust filtering algorithms for airborne laser scanning (ALS) data. With the quality of point clouds from dense image matching (DIM) getting better and better, the research question that arises is whether those standard Lidar filters can be used to filter photogrammetric point clouds as well. Experiments are implemented to filter two dense matching point clouds with different noise levels. Results show that the standard Lidar filter is robust to random noise. However, artefacts and blunders in the DIM points often appear due to low contrast or poor texture in the images, and filtering will be erroneous in these locations. Filtering the DIM points pre-processed by a ranking filter will bring a higher Type II error (i.e. non-ground points actually labelled as ground points) but a much lower Type I error (i.e. bare-ground points labelled as non-ground points). Finally, the potential DTM accuracy that can be achieved by DIM points is evaluated. Two DIM point clouds derived by Pix4Dmapper and SURE are compared. On grassland, dense matching generates points higher than the true terrain surface, which results in incorrectly elevated DTMs. The application of the ranking filter leads to a reduced bias in the DTM height, but a slightly increased noise level.

  6. From Point Clouds to Definitions of Architectural Space

    DEFF Research Database (Denmark)

    Tamke, Martin; Blümel, Ina; Ochmann, Sebastian

    2014-01-01

    Regarding interior building topology as an important aspect in building design and management, several approaches to indoor point cloud structuring have been introduced recently. Apart from a high-level semantic segmentation of the formerly unstructured point clouds into stories and rooms...... possible applications of these approaches in architectural design and building management and comment on the possible benefits for the building profession. While contemporary practice of spatial arrangement is predominantly based on the manual iteration of spatial topologies, we show that the segmentation...

  7. LIDAR, Point Clouds, and their Archaeological Applications

    Energy Technology Data Exchange (ETDEWEB)

    White, Devin A [ORNL

    2013-01-01

    It is common in contemporary archaeological literature, in papers at archaeological conferences, and in grant proposals to see heritage professionals use the term LIDAR to refer to high spatial resolution digital elevation models and the technology used to produce them. The goal of this chapter is to break that association and introduce archaeologists to the world of point clouds, in which LIDAR is only one member of a larger family of techniques to obtain, visualize, and analyze three-dimensional measurements of archaeological features. After describing how point clouds are constructed, there is a brief discussion on the currently available software and analytical techniques designed to make sense of them.

  8. Performance testing of 3D point cloud software

    Science.gov (United States)

    Varela-González, M.; González-Jorge, H.; Riveiro, B.; Arias, P.

    2013-10-01

    LiDAR systems have been used widely in recent years for many applications in the engineering field: civil engineering, cultural heritage, mining, industry and environmental engineering. One of the most important limitations of this technology is the large computational requirement involved in data processing, especially for large mobile LiDAR datasets. Several software solutions for data management are available in the market, including open source suites; however, users often lack methodologies to verify their performance properly. In this work a methodology for LiDAR software performance testing is presented and four different suites are studied: QT Modeler, VR Mesh, AutoCAD 3D Civil and the Point Cloud Library running in software developed at the University of Vigo (SITEGI). The software based on the Point Cloud Library shows better results in the loading time of the point clouds and CPU usage. However, it is not as strong as the commercial suites in the working set and commit size tests.
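    A loading-time test of the kind described can be sketched with the standard library alone: time the operation several times and report the median, which is less sensitive to one-off cache or scheduler effects than a single run. The helper name and workload below are illustrative:

```python
import time

def benchmark(task, repeats=5):
    """Median wall-clock time of `task` over several runs, the kind of
    loading-time measurement used when comparing point cloud suites."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        task()
        times.append(time.perf_counter() - start)
    times.sort()
    return times[len(times) // 2]

# Example: time a stand-in workload (a real test would time the suite's loader).
median_s = benchmark(lambda: sum(x * x for x in range(100_000)))
```

    A real test would pass the suite's loader instead, e.g. `benchmark(lambda: load_cloud("scan.las"))` with `load_cloud` a hypothetical reader; the working-set and commit-size measurements mentioned above require OS-specific memory counters and are not sketched here.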

  9. Performance testing of 3D point cloud software

    Directory of Open Access Journals (Sweden)

    M. Varela-González

    2013-10-01

    Full Text Available LiDAR systems have been used widely in recent years for many applications in the engineering field: civil engineering, cultural heritage, mining, industry and environmental engineering. One of the most important limitations of this technology is the large computational requirement involved in data processing, especially for large mobile LiDAR datasets. Several software solutions for data management are available in the market, including open source suites; however, users often lack methodologies to verify their performance properly. In this work a methodology for LiDAR software performance testing is presented and four different suites are studied: QT Modeler, VR Mesh, AutoCAD 3D Civil and the Point Cloud Library running in software developed at the University of Vigo (SITEGI). The software based on the Point Cloud Library shows better results in the loading time of the point clouds and CPU usage. However, it is not as strong as the commercial suites in the working set and commit size tests.

  10. AUTOMATIC RECOGNITION OF INDOOR NAVIGATION ELEMENTS FROM KINECT POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    L. Zeng

    2017-09-01

    Full Text Available This paper automatically recognizes the navigation elements defined by the IndoorGML data standard – door, stairway and wall. The data used are indoor 3D point clouds collected by a Kinect v2 by means of ORB-SLAM. Compared with lidar, the Kinect is cheaper and more convenient, but the point clouds also suffer from noise, registration errors and large data volumes. Hence, we adopt a shape descriptor – the histogram of distances between two randomly chosen points, proposed by Osada – merged with other descriptors, in conjunction with a random forest classifier, to recognize the navigation elements (door, stairway and wall) from Kinect point clouds. This research acquires navigation elements and their 3D location information from each single data frame through segmentation of point clouds, boundary extraction, feature calculation and classification. Finally, this paper utilizes the acquired navigation elements and their information to generate the state data of the indoor navigation module automatically. The experimental results demonstrate a high recognition accuracy of the proposed method.
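    The distance-histogram descriptor the abstract cites is Osada's D2 shape distribution: sample many random point pairs, histogram their Euclidean distances, and normalize. A compact sketch (sample count, bin count and the max-distance scaling are common choices, not necessarily the paper's):

```python
import math
import random

def d2_descriptor(points, pairs=2000, bins=32, seed=0):
    """Osada's D2 shape distribution: a normalized histogram of Euclidean
    distances between randomly chosen point pairs, scaled by the largest
    sampled distance so shapes of different size remain comparable."""
    rng = random.Random(seed)
    dists = []
    for _ in range(pairs):
        p, q = rng.sample(points, 2)
        dists.append(math.dist(p, q))
    dmax = max(dists) or 1.0
    hist = [0.0] * bins
    for d in dists:
        hist[min(bins - 1, int(d / dmax * bins))] += 1.0
    return [h / pairs for h in hist]
```

    Because it depends only on pairwise distances, the descriptor is invariant to rotation and translation, which makes it a robust per-segment feature to feed the random forest classifier.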

  11. Automatic Recognition of Indoor Navigation Elements from Kinect Point Clouds

    Science.gov (United States)

    Zeng, L.; Kang, Z.

    2017-09-01

    This paper automatically recognizes the navigation elements defined by the IndoorGML data standard - door, stairway and wall. The data used are indoor 3D point clouds collected by a Kinect v2 by means of ORB-SLAM. Compared with lidar, the Kinect is cheaper and more convenient, but the point clouds also suffer from noise, registration errors and large data volumes. Hence, we adopt a shape descriptor - the histogram of distances between two randomly chosen points, proposed by Osada - merged with other descriptors, in conjunction with a random forest classifier, to recognize the navigation elements (door, stairway and wall) from Kinect point clouds. This research acquires navigation elements and their 3D location information from each single data frame through segmentation of point clouds, boundary extraction, feature calculation and classification. Finally, this paper utilizes the acquired navigation elements and their information to generate the state data of the indoor navigation module automatically. The experimental results demonstrate a high recognition accuracy of the proposed method.

  12. ALIGNMENT OF POINT CLOUD DSMs FROM TLS AND UAV PLATFORMS

    Directory of Open Access Journals (Sweden)

    R. A. Persad

    2015-08-01

    Full Text Available The co-registration of 3D point clouds has received considerable attention from various communities, particularly those in photogrammetry, computer graphics and computer vision. Although significant progress has been made, various challenges still exist, such as coarse alignment using multi-sensory data with different point densities and minimal overlap. There is a need to address such data integration issues, particularly with the advent of new data collection platforms such as unmanned aerial vehicles (UAVs). In this study, we propose an approach to align 3D point clouds derived photogrammetrically from approximately vertical UAV images with point clouds measured by terrestrial laser scanners (TLS). The method begins by automatically extracting 3D surface keypoints on both point cloud datasets. Afterwards, regions of interest around each keypoint are established to facilitate the computation of scale-invariant descriptors for each of them. We use the popular SURF descriptor for matching the keypoints. In our experiments, we report the accuracies of the automatically derived transformation parameters in comparison to manually derived reference parameters.
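    The abstract does not say which matching criterion is applied to the SURF descriptors; a common choice (assumed here, not confirmed by the paper) is brute-force nearest-neighbour matching with a Lowe-style ratio test, which discards ambiguous matches. A minimal sketch with invented names:

```python
import math

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Brute-force nearest-neighbour descriptor matching with a ratio test:
    keep a match only when the best distance is clearly below the second best."""
    matches = []
    for i, da in enumerate(desc_a):
        scored = sorted((math.dist(da, db), j) for j, db in enumerate(desc_b))
        if len(scored) > 1 and scored[0][0] < ratio * scored[1][0]:
            matches.append((i, scored[0][1]))
    return matches
```

    The surviving keypoint pairs would then feed the estimation of the transformation between the UAV and TLS clouds.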

  13. High-Precision Registration of Point Clouds Based on Sphere Feature Constraints

    Directory of Open Access Journals (Sweden)

    Junhui Huang

    2016-12-01

    Full Text Available Point cloud registration is a key process in multi-view 3D measurements, and its precision affects the measurement precision directly. However, for point clouds with non-overlapping areas or curvature-invariant surfaces, it is difficult to achieve high precision. A high-precision registration method based on sphere feature constraints is presented in this paper to overcome this difficulty. Some known sphere features with constraints are used to construct virtual overlapping areas. The virtual overlapping areas provide more accurate corresponding point pairs and reduce the influence of noise. The transformation parameters between the registered point clouds are then solved by an optimization method with a weight function. In this way, the impact of large noise in the point clouds is reduced and high-precision registration is achieved. Simulations and experiments validate the proposed method.

  14. COMPARISON OF POINT CLOUDS DERIVED FROM AERIAL IMAGE MATCHING WITH DATA FROM AIRBORNE LASER SCANNING

    Directory of Open Access Journals (Sweden)

    Dominik Wojciech

    2017-04-01

    Full Text Available The aim of this study was to investigate the properties of point clouds derived from aerial image matching and to compare them with point clouds from airborne laser scanning. A set of aerial images acquired in 2010-2013 over the city of Elblag were used for the analysis. Images were acquired with three digital cameras: DMC II 230, DMC I and DigiCAM60, with a GSD varying from 4.5 cm to 15 cm. The eight sets of images used in the study were acquired at different stages of the growing season, from March to December. Two LiDAR point clouds were used for the comparison: one with a density of 1.3 p/m² and a second with a density of 10 p/m². Based on the input images, point clouds were created with the semi-global matching (SGM) method. The properties of the obtained point clouds were analyzed in three ways: by comparing the vertical accuracy of the point clouds with reference to a terrain profile surveyed on bare ground with the GPS-RTK method; by visual assessment of point cloud profiles generated from both SGM and LiDAR point clouds; and by visual assessment of a digital surface model generated from an SGM point cloud with reference to a digital surface model generated from a LiDAR point cloud. The conducted studies allowed a number of observations about the quality of SGM point clouds to be formulated with respect to different factors. The main factors influencing the quality of SGM point clouds are the GSD and the base/height ratio. The essential problem for SGM point clouds is areas covered with vegetation, where SGM point clouds are visibly worse in terms of both accuracy and the representation of the terrain surface. It is difficult to expect that in these areas SGM point clouds could replace LiDAR point clouds. This leads to a general conclusion that SGM point clouds are less reliable, more unpredictable and dependent on more factors than LiDAR point clouds. Nevertheless, SGM point ...
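    The vertical-accuracy comparison against a surveyed bare-ground profile is, at its core, an RMSE computation over matched heights. A minimal sketch (the function name and example heights are illustrative):

```python
import math

def vertical_rmse(cloud_heights, reference_heights):
    """Root-mean-square difference between matched point cloud heights and a
    reference profile surveyed on bare ground (e.g. with GPS-RTK)."""
    diffs = [c - r for c, r in zip(cloud_heights, reference_heights)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```

    A systematic positive mean difference, as reported for SGM points on grassland, shows up as bias rather than noise, so in practice the mean and standard deviation of the differences are reported alongside the RMSE.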

  15. The Segmentation of Point Clouds with K-Means and ANN (artifical Neural Network)

    Science.gov (United States)

    Kuçak, R. A.; Özdemir, E.; Erol, S.

    2017-05-01

    Segmentation of point clouds has recently been used in many Geomatics Engineering applications, such as building extraction in urban areas, Digital Terrain Model (DTM) generation and the extraction of roads or urban furniture. Segmentation is the process of dividing point clouds into layers according to their characteristics. The present paper discusses K-means and the self-organizing map (SOM), a type of ANN (Artificial Neural Network) algorithm, for the segmentation of point clouds. The point clouds, generated with the photogrammetric method and a Terrestrial Lidar System (TLS), were segmented according to surface normal, intensity and curvature, and the results were evaluated. LIDAR (Light Detection and Ranging) and photogrammetry are commonly used to obtain point clouds in many remote sensing and geodesy applications; with either method, point clouds can be obtained from terrestrial or airborne systems. In this study, the LIDAR measurements were made with a Leica C10 laser scanner, while in the photogrammetric method the point cloud was obtained from photographs taken from the ground with a 13 MP non-metric camera.
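    K-means segmentation on per-point features (surface normal, intensity, curvature) follows the standard assign-then-update loop. A self-contained sketch; the deterministic first-k initialization is a simplification for illustration, whereas random or k-means++ seeding is the more common practical choice:

```python
import math

def kmeans(features, k, iters=50):
    """Plain k-means on per-point feature vectors (e.g. normal, intensity,
    curvature). Uses the first k points as initial centers for determinism."""
    centers = [tuple(features[i]) for i in range(k)]
    labels = [0] * len(features)
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        labels = [min(range(k), key=lambda c: math.dist(f, centers[c]))
                  for f in features]
        # Update step: move each center to the mean of its members.
        for c in range(k):
            members = [f for f, lab in zip(features, labels) if lab == c]
            if members:
                centers[c] = tuple(sum(v) / len(members) for v in zip(*members))
    return labels, centers
```

    Each resulting label corresponds to one segment layer; the SOM alternative discussed in the paper replaces the hard nearest-center assignment with a trained neuron grid.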

  16. THE SEGMENTATION OF POINT CLOUDS WITH K-MEANS AND ANN (ARTIFICAL NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    R. A. Kuçak

    2017-05-01

    Full Text Available Segmentation of point clouds has recently been used in many Geomatics Engineering applications, such as building extraction in urban areas, Digital Terrain Model (DTM) generation and the extraction of roads or urban furniture. Segmentation is the process of dividing point clouds into layers according to their characteristics. The present paper discusses K-means and the self-organizing map (SOM), a type of ANN (Artificial Neural Network) algorithm, for the segmentation of point clouds. The point clouds, generated with the photogrammetric method and a Terrestrial Lidar System (TLS), were segmented according to surface normal, intensity and curvature, and the results were evaluated. LIDAR (Light Detection and Ranging) and photogrammetry are commonly used to obtain point clouds in many remote sensing and geodesy applications; with either method, point clouds can be obtained from terrestrial or airborne systems. In this study, the LIDAR measurements were made with a Leica C10 laser scanner, while in the photogrammetric method the point cloud was obtained from photographs taken from the ground with a 13 MP non-metric camera.

  17. A MARKED POINT PROCESS MODEL FOR VEHICLE DETECTION IN AERIAL LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    A. Börcs

    2012-07-01

    Full Text Available In this paper we present an automated method for vehicle detection in LiDAR point clouds of crowded urban areas collected from an aerial platform. We assume that the input cloud is unordered, but it contains additional intensity and return number information which are jointly exploited by the proposed solution. Firstly, the 3-D point set is segmented into ground, vehicle, building roof, vegetation and clutter classes. Then the points with the corresponding class labels and intensity values are projected to the ground plane, where the optimal vehicle configuration is described by a Marked Point Process (MPP) model of 2-D rectangles. Finally, the Multiple Birth and Death algorithm is utilized to find the configuration with the highest confidence.

  18. KNOWLEDGE-BASED OBJECT DETECTION IN LASER SCANNING POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    F. Boochs

    2012-07-01

    Full Text Available Object identification and object processing in 3D point clouds have always posed challenges in terms of effectiveness and efficiency. In practice, this process is highly dependent on human interpretation of the scene represented by the point cloud data, as well as the set of modeling tools available for use. Such modeling algorithms are data-driven and concentrate on specific features of the objects, being accessible to numerical models. We present an approach that brings the human expert knowledge about the scene, the objects inside, and their representation by the data and the behavior of algorithms to the machine. This “understanding” enables the machine to assist human interpretation of the scene inside the point cloud. Furthermore, it allows the machine to understand possibilities and limitations of algorithms and to take this into account within the processing chain. This not only assists the researchers in defining optimal processing steps, but also provides suggestions when certain changes or new details emerge from the point cloud. Our approach benefits from the advancement in knowledge technologies within the Semantic Web framework. This advancement has provided a strong base for applications based on knowledge management. In the article we will present and describe the knowledge technologies used for our approach such as the Web Ontology Language (OWL), used for formulating the knowledge base, and the Semantic Web Rule Language (SWRL) with 3D processing and topologic built-ins, aiming to combine geometrical analysis of 3D point clouds, and specialists’ knowledge of the scene and algorithmic processing.

  19. Knowledge-Based Object Detection in Laser Scanning Point Clouds

    Science.gov (United States)

    Boochs, F.; Karmacharya, A.; Marbs, A.

    2012-07-01

    Object identification and object processing in 3D point clouds have always posed challenges in terms of effectiveness and efficiency. In practice, this process is highly dependent on human interpretation of the scene represented by the point cloud data, as well as the set of modeling tools available for use. Such modeling algorithms are data-driven and concentrate on specific features of the objects, being accessible to numerical models. We present an approach that brings the human expert knowledge about the scene, the objects inside, and their representation by the data and the behavior of algorithms to the machine. This "understanding" enables the machine to assist human interpretation of the scene inside the point cloud. Furthermore, it allows the machine to understand possibilities and limitations of algorithms and to take this into account within the processing chain. This not only assists the researchers in defining optimal processing steps, but also provides suggestions when certain changes or new details emerge from the point cloud. Our approach benefits from the advancement in knowledge technologies within the Semantic Web framework. This advancement has provided a strong base for applications based on knowledge management. In the article we will present and describe the knowledge technologies used for our approach such as Web Ontology Language (OWL), used for formulating the knowledge base and the Semantic Web Rule Language (SWRL) with 3D processing and topologic built-ins, aiming to combine geometrical analysis of 3D point clouds, and specialists' knowledge of the scene and algorithmic processing.

  20. Manhattan-World Urban Reconstruction from Point Clouds

    KAUST Repository

    Li, Minglei; Wonka, Peter; Nan, Liangliang

    2016-01-01

    Manhattan-world urban scenes are common in the real world. We propose a fully automatic approach for reconstructing such scenes from 3D point samples. Our key idea is to represent the geometry of the buildings in the scene using a set of well-aligned boxes. We first extract plane hypotheses from the points, followed by an iterative refinement step. Then, candidate boxes are obtained by partitioning the space of the point cloud into a non-uniform grid. After that, we choose an optimal subset of the candidate boxes to approximate the geometry of the buildings. The contribution of our work is that we transform scene reconstruction into a labeling problem that is solved based on a novel Markov Random Field formulation. Unlike previous methods designed for particular types of input point clouds, our method can obtain faithful reconstructions from a variety of data sources. Experiments demonstrate that our method is superior to state-of-the-art methods. © Springer International Publishing AG 2016.

  1. Manhattan-World Urban Reconstruction from Point Clouds

    KAUST Repository

    Li, Minglei

    2016-09-16

    Manhattan-world urban scenes are common in the real world. We propose a fully automatic approach for reconstructing such scenes from 3D point samples. Our key idea is to represent the geometry of the buildings in the scene using a set of well-aligned boxes. We first extract plane hypotheses from the points, followed by an iterative refinement step. Then, candidate boxes are obtained by partitioning the space of the point cloud into a non-uniform grid. After that, we choose an optimal subset of the candidate boxes to approximate the geometry of the buildings. The contribution of our work is that we transform scene reconstruction into a labeling problem that is solved based on a novel Markov Random Field formulation. Unlike previous methods designed for particular types of input point clouds, our method can obtain faithful reconstructions from a variety of data sources. Experiments demonstrate that our method is superior to state-of-the-art methods. © Springer International Publishing AG 2016.

  2. Processing Terrain Point Cloud Data

    KAUST Repository

    DeVore, Ronald

    2013-01-10

    Terrain point cloud data are typically acquired through some form of Light Detection And Ranging sensing. They form a rich resource that is important in a variety of applications including navigation, line of sight, and terrain visualization. Processing terrain data has not received the attention given to other forms of surface reconstruction or to image processing. The goal of terrain data processing is to convert the point cloud into a succinct representation system that is amenable to the various application demands. The present paper presents a platform for terrain processing built on the following principles: (i) measuring distortion in the Hausdorff metric, which we argue is a good match for the application demands, (ii) a multiscale representation based on tree approximation using local polynomial fitting. The basic elements held in the nodes of the tree can be efficiently encoded, transmitted, visualized, and utilized for the various target applications. Several challenges emerge because of the variable resolution of the data, missing data, occlusions, and noise. Techniques for identifying and handling these challenges are developed. © 2013 Society for Industrial and Applied Mathematics.
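
    Principle (i) above, measuring distortion in the Hausdorff metric, can be sketched in a few lines of Python. This is a brute-force illustration only (the point sets and function name are invented for the example, not taken from the paper); real terrain clouds would need spatial indexing.

    ```python
    import math

    def hausdorff(A, B):
        """Symmetric Hausdorff distance between two finite 3D point sets.

        d_H(A, B) = max( max_a min_b |a-b|, max_b min_a |a-b| ).
        Brute force O(|A|*|B|); fine for small sets, illustrative only.
        """
        def directed(X, Y):
            return max(min(math.dist(x, y) for y in Y) for x in X)

        return max(directed(A, B), directed(B, A))

    # A small terrain patch and a decimated approximation of it
    dense  = [(0, 0, 0), (1, 0, 0), (2, 0, 0), (1, 1, 0.2)]
    coarse = [(0, 0, 0), (2, 0, 0)]
    print(hausdorff(dense, coarse))  # distortion introduced by decimation
    ```

    A tree-based multiscale representation would evaluate this distortion per node to decide where further refinement is needed.
    
    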

  3. Joint classification and contour extraction of large 3D point clouds

    Science.gov (United States)

    Hackel, Timo; Wegner, Jan D.; Schindler, Konrad

    2017-08-01

    We present an effective and efficient method for point-wise semantic classification and extraction of object contours of large-scale 3D point clouds. What makes point cloud interpretation challenging is the sheer size of several millions of points per scan and the non-grid, sparse, and uneven distribution of points. Standard image processing tools like texture filters, for example, cannot handle such data efficiently, which calls for dedicated point cloud labeling methods. It turns out that one of the major drivers for efficient computation and handling of strong variations in point density is a careful formulation of per-point neighborhoods at multiple scales. This allows us both to define an expressive feature set and to extract topologically meaningful object contours. Semantic classification and contour extraction are interlaced problems. Point-wise semantic classification enables extracting a meaningful candidate set of contour points, while contours help generate a rich feature representation that benefits point-wise classification. These methods are tailored to have fast run time and small memory footprint for processing large-scale, unstructured, and inhomogeneous point clouds, while still achieving high classification accuracy. We evaluate our methods on the semantic3d.net benchmark for terrestrial laser scans with more than 10^9 points.

  4. User requirements Massive Point Clouds for eSciences (WP1)

    NARCIS (Netherlands)

    Suijker, P.M.; Alkemade, I.; Kodde, M.P.; Nonhebel, A.E.

    2014-01-01

    This report is a milestone in work package 1 (WP1) of the project Massive point clouds for eSciences. In WP1 the basic functionalities needed for a new Point Cloud Spatial Database Management System are identified. This is achieved by (1) literature research, (2) discussions with the project

  5. Generating Free-Form Grid Truss Structures from 3D Scanned Point Clouds

    Directory of Open Access Journals (Sweden)

    Hui Ding

    2017-01-01

    Full Text Available Reconstruction according to physical shape is a novel way to generate free-form grid truss structures. 3D scanning is an effective means of acquiring physical form information, and it generates dense point clouds on the surfaces of objects. However, generating grid truss structures from point clouds is still a challenge. Based on the advancing front technique (AFT), which is widely used in the Finite Element Method (FEM), a scheme for generating grid truss structures from 3D scanned point clouds is proposed in this paper. Based on the characteristics of point cloud data, a search box is adopted to reduce the search space in grid generation. A front advancing procedure suited to point clouds is established. The Delaunay method and the Laplacian method are used to improve the quality of the generated grids, and an adjustment strategy that locates grid nodes at appointed places is proposed. Several examples of generating grid truss structures from 3D scanned point clouds of seashells are carried out to verify the proposed scheme. Physical models of the grid truss structures generated in the examples are manufactured by 3D printing, which confirms the feasibility of the scheme.
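
    The Laplacian grid-quality improvement mentioned above moves each free node toward the centroid of its neighbors. A minimal one-step sketch (the toy mesh, relaxation factor, and function names are assumptions for illustration, not the authors' implementation):

    ```python
    def laplacian_smooth(points, neighbors, lam=0.5, iters=1):
        """Move each free node toward the centroid of its neighbors.

        points:    dict node_id -> (x, y, z)
        neighbors: dict node_id -> list of adjacent node ids
        lam:       relaxation factor in (0, 1]
        """
        pts = dict(points)
        for _ in range(iters):
            new = {}
            for i, p in pts.items():
                nbs = neighbors.get(i, [])
                if not nbs:              # node with no listed neighbors: keep fixed
                    new[i] = p
                    continue
                cen = [sum(pts[j][k] for j in nbs) / len(nbs) for k in range(3)]
                new[i] = tuple(p[k] + lam * (cen[k] - p[k]) for k in range(3))
            pts = new
        return pts

    # A noisy interior node surrounded by four fixed grid nodes
    points = {0: (0, 0, 0), 1: (2, 0, 0), 2: (0, 2, 0), 3: (2, 2, 0),
              4: (1.2, 0.7, 0.4)}
    neighbors = {4: [0, 1, 2, 3]}        # only the interior node is smoothed
    print(laplacian_smooth(points, neighbors)[4])
    ```

    In an AFT-generated grid, boundary nodes would be pinned and interior nodes iterated until the grid quality converges.
    
    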

  6. PolyFit: Polygonal Surface Reconstruction from Point Clouds

    KAUST Repository

    Nan, Liangliang; Wonka, Peter

    2017-01-01

    We propose a novel framework for reconstructing lightweight polygonal surfaces from point clouds. Unlike traditional methods that focus on either extracting good geometric primitives or obtaining proper arrangements of primitives, the emphasis of this work lies in intersecting the primitives (planes only) and seeking an appropriate combination of them to obtain a manifold polygonal surface model without boundary. We show that reconstruction from point clouds can be cast as a binary labeling problem. Our method is based on a hypothesizing and selection strategy. We first generate a reasonably large set of face candidates by intersecting the extracted planar primitives. Then an optimal subset of the candidate faces is selected through optimization. Our optimization is based on a binary linear programming formulation under hard constraints that enforce the final polygonal surface model to be manifold and watertight. Experiments on point clouds from various sources demonstrate that our method can generate lightweight polygonal surface models of arbitrary piecewise planar objects. In addition, our method is capable of recovering sharp features and is robust to noise, outliers, and missing data.
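
    The candidate faces above come from pairwise intersections of planar primitives. As a minimal geometric sketch of that step (a standard closed form, not the authors' code), the line where two planes n·x = d meet can be computed as follows:

    ```python
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])

    def plane_plane_intersection(n1, d1, n2, d2, eps=1e-9):
        """Line of intersection of planes n1.x = d1 and n2.x = d2.

        Returns (point_on_line, direction) or None for (near-)parallel planes.
        """
        u = cross(n1, n2)                      # line direction
        den = sum(c * c for c in u)
        if den < eps:
            return None                        # parallel planes: no candidate edge
        # point = ((d1*n2 - d2*n1) x u) / |u|^2   (standard closed form)
        w = tuple(d1 * b - d2 * a for a, b in zip(n1, n2))
        p = tuple(c / den for c in cross(w, u))
        return p, u

    # Floor (z = 0) against a wall (x = 1): their edge is the line x = 1, z = 0
    print(plane_plane_intersection((0, 0, 1), 0.0, (1, 0, 0), 1.0))
    ```

    PolyFit then selects among the faces bounded by such intersection lines via binary linear programming with manifold/watertight constraints.
    
    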

  7. PolyFit: Polygonal Surface Reconstruction from Point Clouds

    KAUST Repository

    Nan, Liangliang

    2017-12-25

    We propose a novel framework for reconstructing lightweight polygonal surfaces from point clouds. Unlike traditional methods that focus on either extracting good geometric primitives or obtaining proper arrangements of primitives, the emphasis of this work lies in intersecting the primitives (planes only) and seeking an appropriate combination of them to obtain a manifold polygonal surface model without boundary. We show that reconstruction from point clouds can be cast as a binary labeling problem. Our method is based on a hypothesizing and selection strategy. We first generate a reasonably large set of face candidates by intersecting the extracted planar primitives. Then an optimal subset of the candidate faces is selected through optimization. Our optimization is based on a binary linear programming formulation under hard constraints that enforce the final polygonal surface model to be manifold and watertight. Experiments on point clouds from various sources demonstrate that our method can generate lightweight polygonal surface models of arbitrary piecewise planar objects. In addition, our method is capable of recovering sharp features and is robust to noise, outliers, and missing data.

  8. The Feasibility of 3d Point Cloud Generation from Smartphones

    Science.gov (United States)

    Alsubaie, N.; El-Sheimy, N.

    2016-06-01

    This paper proposes a new technique for increasing the accuracy of direct geo-referenced image-based 3D point clouds generated from low-cost sensors in smartphones. The smartphone's motion sensors are used to directly acquire the Exterior Orientation Parameters (EOPs) of the captured images. These EOPs, along with the Interior Orientation Parameters (IOPs) of the camera/phone, are used to reconstruct the image-based 3D point cloud. However, because smartphone motion sensors suffer from poor GPS accuracy, accumulated drift and high signal noise, inaccurate 3D mapping solutions often result. Therefore, horizontal and vertical linear features, visible in each image, are extracted and used as constraints in the bundle adjustment procedure. These constraints correct the relative position and orientation of the 3D mapping solution. Once the enhanced EOPs are estimated, the semi-global matching algorithm (SGM) is used to generate the image-based dense 3D point cloud. Statistical analysis and assessment are implemented herein, in order to demonstrate the feasibility of 3D point cloud generation from the consumer-grade sensors in smartphones.

  9. FINDING CUBOID-BASED BUILDING MODELS IN POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    W. Nguatem

    2012-07-01

    Full Text Available In this paper, we present an automatic approach for the derivation of 3D building models of level-of-detail 1 (LOD 1) from point clouds obtained from (dense) image matching or, for comparison only, from LIDAR. Our approach makes use of the predominance of vertical structures and orthogonal intersections in architectural scenes. After robustly determining the scene's vertical direction based on the 3D points, we use it as a constraint for a RANSAC-based search for vertical planes in the point cloud. The planes are further analyzed to segment reliable outlines of rectangular surfaces within these planes, which are connected to construct cuboid-based building models. We demonstrate that our approach is robust and effective over a range of real-world input data sets with varying point density, amount of noise, and outliers.
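
    The RANSAC-based plane search at the core of this approach can be sketched in plain Python. This is a generic, unconstrained RANSAC plane fit (the vertical-direction constraint of the paper is omitted; data and names are invented for illustration):

    ```python
    import random

    def fit_plane(p, q, r):
        """Plane through three points -> unit normal n and offset d with n.x = d."""
        u = tuple(q[i] - p[i] for i in range(3))
        v = tuple(r[i] - p[i] for i in range(3))
        n = (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])
        norm = sum(c * c for c in n) ** 0.5
        if norm == 0:
            return None                       # degenerate (collinear) sample
        n = tuple(c / norm for c in n)
        return n, sum(n[i] * p[i] for i in range(3))

    def ransac_plane(pts, n_iter=200, tol=0.05, seed=0):
        """Return (n, d, inliers) of the plane supported by the most points."""
        rng = random.Random(seed)
        best = (None, None, [])
        for _ in range(n_iter):
            model = fit_plane(*rng.sample(pts, 3))
            if model is None:
                continue
            n, d = model
            inl = [p for p in pts
                   if abs(sum(n[i] * p[i] for i in range(3)) - d) < tol]
            if len(inl) > len(best[2]):
                best = (n, d, inl)
        return best

    # A vertical wall x = 2 plus a few outliers
    wall = [(2.0, y / 4, z / 4) for y in range(8) for z in range(8)]
    noise = [(0.3, 1.0, 0.2), (1.1, 0.5, 1.4), (3.7, 1.9, 0.8)]
    n, d, inliers = ransac_plane(wall + noise)
    print(len(inliers), n, d)
    ```

    In the paper's setting, candidate normals would additionally be required to be orthogonal to the estimated vertical direction.
    
    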

  10. Interactive Trunk Extraction from Forest Point Cloud

    Directory of Open Access Journals (Sweden)

    T. Mizoguchi

    2014-06-01

    Full Text Available For forest management or monitoring, several parameters of each tree, such as height, diameter at breast height, and trunk volume, must be measured regularly. Terrestrial laser scanners have been used for this purpose instead of human workers to reduce the time and cost of measurement. In order to use point clouds captured by a terrestrial laser scanner in these applications, an important step is to extract all trees, or their trunks, separately. For this purpose, we propose an interactive system in which a user can intuitively and efficiently extract each trunk through simple editing of the distance image created from the point cloud. We demonstrate the effectiveness of the proposed system through various experiments.

  11. CURB-BASED STREET FLOOR EXTRACTION FROM MOBILE TERRESTRIAL LIDAR POINT CLOUD

    Directory of Open Access Journals (Sweden)

    S. Ibrahim

    2012-07-01

    Full Text Available Mobile terrestrial laser scanners (MTLS) produce huge 3D point clouds describing the terrestrial surface, from which objects like different street furniture can be generated. Extraction and modelling of the street curb and the street floor from MTLS point clouds is important for many applications such as right-of-way asset inventory, road maintenance and city planning. The proposed pipeline for curb and street floor extraction consists of a sequence of five steps: organizing the 3D point cloud and nearest neighbour search; 3D density-based segmentation to segment the ground; morphological analysis to refine the ground segment; derivative of Gaussian filtering to detect the curb; solving the travelling salesman problem to form a closed polygon of the curb and a point-in-polygon test to extract the street floor. Two mobile laser scanning datasets of different scenes are tested with the proposed pipeline. The results of the extracted curb and street floor are evaluated against truth data. The obtained detection rates for the extracted street floor for the datasets are 95% and 96.53%. This study presents a novel approach to the detection and extraction of the road curb and the street floor from unorganized 3D point clouds captured by MTLS. It utilizes only the 3D coordinates of the point cloud.
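
    The final point-in-polygon step of the pipeline is a classic test. A minimal even-odd ray-casting sketch (the curb polygon and test points are invented for illustration):

    ```python
    def point_in_polygon(pt, poly):
        """Even-odd ray casting: count crossings of a ray going right from pt."""
        x, y = pt
        inside = False
        n = len(poly)
        for i in range(n):
            (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
            if (y1 > y) != (y2 > y):                 # edge straddles the ray
                x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x_cross > x:
                    inside = not inside
        return inside

    curb = [(0, 0), (10, 0), (10, 4), (0, 4)]        # closed curb polygon
    print(point_in_polygon((5, 2), curb))   # True: a street-floor point
    print(point_in_polygon((5, 9), curb))   # False: outside the street floor
    ```

    Applied to the XY projection of every ground point, this classifies points as inside or outside the closed curb polygon.
    
    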

  12. Estimating Aircraft Heading Based on Laserscanner Derived Point Clouds

    Science.gov (United States)

    Koppanyi, Z.; Toth, C., K.

    2015-03-01

    Using LiDAR sensors for tracking and monitoring an operating aircraft is a new application. In this paper, we present data processing methods to estimate the heading of a taxiing aircraft using laser point clouds. During the data acquisition, a Velodyne HDL-32E laser scanner tracked a moving Cessna 172 airplane. The point clouds captured at different times were used for heading estimation. After addressing the problem and specifying the equation of motion to reconstruct the aircraft point cloud from the consecutive scans, three methods are investigated here. The first requires a reference model to estimate the relative angle from the captured data by fitting different cross-sections (horizontal profiles). In the second approach, the iterative closest point (ICP) method is used between the consecutive point clouds to determine the horizontal translation of the captured aircraft body. Regarding the ICP, three different versions were compared, namely, the ordinary 3D, 3-DoF 3D and 2-DoF 3D ICP. It was found that 2-DoF 3D ICP provides the best performance. Finally, the last algorithm searches for the unknown heading and velocity parameters by minimizing the volume of the reconstructed airplane. The three methods were compared using three test data types, which are distinguished by object-sensor distance, heading and velocity. We found that the ICP algorithm fails at long distances and when the aircraft motion direction is perpendicular to the scan plane, but the first and the third methods give robust and accurate results at a 40 m object distance and at ~12 knots for a small Cessna airplane.
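
    The basic idea of deriving heading from the horizontal translation between consecutive scans can be illustrated with a crude centroid-based stand-in (this is not the paper's ICP; the scans, convention, and function names are assumptions for the example):

    ```python
    import math

    def centroid(pts):
        n = len(pts)
        return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

    def heading_from_scans(scan_t0, scan_t1):
        """Heading in degrees, clockwise from +y ('north'), of the horizontal
        translation between the centroids of two consecutive scans."""
        c0, c1 = centroid(scan_t0), centroid(scan_t1)
        dx, dy = c1[0] - c0[0], c1[1] - c0[1]
        return math.degrees(math.atan2(dx, dy)) % 360.0

    # Toy aircraft body moving northeast between two epochs
    t0 = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.5)]
    t1 = [(p[0] + 2.0, p[1] + 2.0) for p in t0]
    print(heading_from_scans(t0, t1))  # close to 45 degrees
    ```

    A real pipeline must first compensate for partial overlap and occlusion between scans, which is exactly what the ICP translation estimate provides.
    
    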

  13. A simple method for determination of carmine in food samples based on cloud point extraction and spectrophotometric detection.

    Science.gov (United States)

    Heydari, Rouhollah; Hosseini, Mohammad; Zarabi, Sanaz

    2015-01-01

    In this paper, a simple and cost-effective method was developed for the extraction and pre-concentration of carmine in food samples using cloud point extraction (CPE) prior to its spectrophotometric determination. Carmine was extracted from aqueous solution using Triton X-100 as the extracting solvent. The effects of the main parameters, such as solution pH, surfactant and salt concentrations, incubation time and temperature, were investigated and optimized. The calibration graph was linear in the range of 0.04-5.0 μg mL(-1) of carmine in the initial solution with a regression coefficient of 0.9995. The limit of detection (LOD) and limit of quantification were 0.012 and 0.04 μg mL(-1), respectively. The relative standard deviation (RSD) at a low concentration level (0.05 μg mL(-1)) of carmine was 4.8% (n=7). Recovery values at different concentration levels were in the range of 93.7-105.8%. The results demonstrate that the proposed method can be applied satisfactorily to the determination of carmine in food samples. Copyright © 2015 Elsevier B.V. All rights reserved.
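
    The calibration and detection-limit figures reported in such CPE methods typically come from a least-squares calibration line plus a 3-sigma blank criterion. A sketch with entirely hypothetical absorbance readings (not the paper's data; the blank standard deviation is an assumed value):

    ```python
    def linfit(x, y):
        """Ordinary least squares fit y = a*x + b; returns (a, b)."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        a = sxy / sxx
        return a, my - a * mx

    # Hypothetical absorbance readings for carmine standards (ug/mL)
    conc = [0.04, 0.5, 1.0, 2.5, 5.0]
    absb = [0.012, 0.150, 0.300, 0.750, 1.500]
    slope, intercept = linfit(conc, absb)

    sd_blank = 0.0012            # assumed std. dev. of repeated blank readings
    lod = 3 * sd_blank / slope   # common 3-sigma detection-limit criterion
    loq = 10 * sd_blank / slope  # 10-sigma quantification limit
    print(round(slope, 3), round(lod, 3), round(loq, 3))
    ```

    The same two-line computation underlies most spectrophotometric LOD/LOQ reports, whatever the analyte.
    
    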

  14. EXTRACTION OF BUILDING BOUNDARY LINES FROM AIRBORNE LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    Y.-H. Tseng

    2016-10-01

    Full Text Available Building boundary lines are important spatial features that characterize topographic maps and three-dimensional (3D) city models. Airborne LiDAR point clouds provide adequate 3D spatial information for building boundary mapping. However, the information on boundary features contained in point clouds is implicit. This study focuses on developing an automatic algorithm for building boundary line extraction from airborne LiDAR data. In an airborne LiDAR dataset, top surfaces of buildings, such as roofs, tend to have densely distributed points, but vertical surfaces, such as walls, usually have sparsely distributed points or even no points. The intersection lines of roof and wall planes are, therefore, not clearly defined in point clouds. This paper proposes a novel method to extract those boundary lines of building edges. The extracted line features can be used as fundamental data to generate topographic maps and 3D city models of an urban area. The proposed method includes two major process steps. The first step is to extract building boundary points from the point clouds. The second step is to form building boundary line features based on the extracted boundary points. In this step, a line fitting algorithm is developed to improve the edge extraction from LiDAR data. Eight test objects, including 4 simple low buildings and 4 complicated tall buildings, were selected from the buildings on the NCKU campus. The test results demonstrate the feasibility of the proposed method in extracting complicated building boundary lines. Some results that are not as good as expected suggest the need for further improvement of the method.
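
    A generic way to fit a line to extracted boundary points (not necessarily the paper's algorithm) is total least squares via the principal axis of the 2D covariance, which, unlike ordinary regression, handles vertical edges:

    ```python
    import math

    def fit_line_tls(pts):
        """Total-least-squares line through 2D boundary points.

        Returns (point_on_line, unit_direction); minimizes perpendicular
        distances, so edges of any orientation are handled.
        """
        n = len(pts)
        mx = sum(p[0] for p in pts) / n
        my = sum(p[1] for p in pts) / n
        sxx = sum((p[0] - mx) ** 2 for p in pts)
        syy = sum((p[1] - my) ** 2 for p in pts)
        sxy = sum((p[0] - mx) * (p[1] - my) for p in pts)
        # direction = principal eigenvector of the 2x2 covariance matrix
        theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
        return (mx, my), (math.cos(theta), math.sin(theta))

    # Slightly noisy roof-edge points along y = x (invented data)
    edge = [(0, 0.02), (1, 0.98), (2, 2.01), (3, 3.0)]
    (cx, cy), (dx, dy) = fit_line_tls(edge)
    print(round(dy / dx, 2))  # slope close to 1
    ```

    Chaining such fitted segments and intersecting adjacent ones yields closed boundary polygons.
    
    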

  15. Genomic cloud computing: legal and ethical points to consider.

    Science.gov (United States)

    Dove, Edward S; Joly, Yann; Tassé, Anne-Marie; Knoppers, Bartha M

    2015-10-01

    The biggest challenge in twenty-first century data-intensive genomic science is developing vast computer infrastructure and advanced software tools to perform comprehensive analyses of genomic data sets for biomedical research and clinical practice. Researchers are increasingly turning to cloud computing both as a solution to integrate data from genomics, systems biology and biomedical data mining and as an approach to analyze data to solve biomedical problems. Although cloud computing provides several benefits such as lower costs and greater efficiency, it also raises legal and ethical issues. In this article, we discuss three key 'points to consider' (data control; data security, confidentiality and transfer; and accountability) based on a preliminary review of several publicly available cloud service providers' Terms of Service. These 'points to consider' should be borne in mind by genomic research organizations when negotiating legal arrangements to store genomic data on a large commercial cloud service provider's servers. Diligent genomic cloud computing means leveraging security standards and evaluation processes as a means to protect data and entails many of the same good practices that researchers should always consider in securing their local infrastructure.

  16. A Classification-oriented Method of Feature Image Generation for Vehicle-borne Laser Scanning Point Clouds

    Directory of Open Access Journals (Sweden)

    YANG Bisheng

    2016-02-01

    Full Text Available An efficient method of feature image generation for point clouds is proposed to automatically classify dense point clouds into different categories, such as terrain points and building points. The method first uses planar projection to sort points into different grids, then calculates the weights and feature values of the grids according to the distribution of the laser scanning points, and finally generates the feature image of the point cloud. Based on the generated image, the method then applies contour extraction and tracing to extract the boundaries and point clouds of man-made objects (e.g. buildings and trees) in 3D. Experiments show that the proposed method provides a promising solution for classifying and extracting man-made objects from vehicle-borne laser scanning point clouds.
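
    The planar-projection step can be sketched as a sparse raster: project points onto the XY plane and keep one feature value per cell. This toy version uses maximum height as the feature (the paper computes grid weights and feature values; cell size and data here are invented):

    ```python
    def feature_image(points, cell=1.0):
        """Project points onto the XY plane and keep the max height per grid cell.

        Returns a dict (ix, iy) -> max z, a sparse 'feature image' of the cloud.
        """
        img = {}
        for x, y, z in points:
            key = (int(x // cell), int(y // cell))
            img[key] = max(img.get(key, float("-inf")), z)
        return img

    cloud = [(0.2, 0.3, 0.0), (0.7, 0.4, 0.1),   # terrain points
             (0.5, 1.6, 8.0), (0.4, 1.2, 7.5)]   # building roof points
    img = feature_image(cloud)
    print(img)  # {(0, 0): 0.1, (0, 1): 8.0}
    ```

    On such an image, standard contour extraction and tracing can separate high cells (buildings, trees) from low terrain cells.
    
    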

  17. A GLOBAL REGISTRATION ALGORITHM OF THE SINGLE-CLOSED RING MULTI-STATIONS POINT CLOUD

    Directory of Open Access Journals (Sweden)

    R. Yang

    2018-04-01

    Full Text Available Aimed at the global registration problem of a single-closed-ring multi-station point cloud, a formula for calculating the rotation matrix error was constructed according to the definition of error. A global registration algorithm for multi-station point clouds was derived to minimize the rotation matrix error, and fast formulas for computing the transformation matrix were given, together with implementation steps and a simulation experiment scheme. Comparing three different processing schemes for multi-station point clouds, the experimental results verified the effectiveness of the new global registration method, which could effectively complete the global registration of the point cloud.
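
    The closure idea behind such ring registration can be illustrated in a simplified 2D form: composing all pairwise rotations around the closed ring should give the identity, and any residual rotation is the mis-closure to be distributed. This sketch (angles and function names invented) is a 2D analogue, not the paper's rotation-matrix formula:

    ```python
    import math

    def ring_closure_error(angles):
        """Residual rotation (radians) after composing pairwise registrations
        around a single closed ring; zero for perfectly consistent registrations."""
        total = sum(angles) % (2 * math.pi)
        return min(total, 2 * math.pi - total)   # smallest equivalent rotation

    # Pairwise rotation angles between 4 stations closing a ring (radians);
    # a small error was injected into one registration
    pairwise = [math.pi / 2, math.pi / 2, math.pi / 2, math.pi / 2 + 0.01]
    print(ring_closure_error(pairwise))  # about 0.01 rad of mis-closure
    ```

    A global adjustment would spread this residual over the ring instead of accumulating it at the last station.
    
    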

  18. Microwave assisted aqua regia extraction of thallium from sediment and coal fly ash samples and interference free determination by continuum source ETAAS after cloud point extraction.

    Science.gov (United States)

    Meeravali, Noorbasha N; Madhavi, K; Kumar, Sunil Jai

    2013-01-30

    A simple cloud point extraction method is described for the separation and pre-concentration of thallium from the microwave assisted aqua regia extracts of sediment and coal fly ash samples. The method is based on the formation of extractable species of thallium and its interaction with hydrophobic solubilizing sites of Triton X-114 micelles in the presence of aqua regia and electrolyte NaCl. These interactions of micelles are used for extraction of thallium from a bulk aqueous phase into a small micelles-rich phase. The potential chloride interferences are eliminated effectively, which enabled interference free determination of thallium from aqua regia extracts using continuum source ETAAS. The parameters affecting the extraction process are optimized. Under the optimized conditions, pre-concentration factor and limit of detection are 40 and 0.2 ng g(-1), respectively. The recoveries are in the range of 95-102%. A characteristic mass, 13 pg was obtained. The accuracy of the method is verified by analyzing certified reference materials such as NIST 1633b coal fly ash, NIST 1944 marine sediment and GBW 07312 stream sediments. The results obtained are in good agreement with the certified values and method is also applied to real samples. Copyright © 2012 Elsevier B.V. All rights reserved.

  19. Coarse Point Cloud Registration by Egi Matching of Voxel Clusters

    Science.gov (United States)

    Wang, Jinhu; Lindenbergh, Roderik; Shen, Yueqian; Menenti, Massimo

    2016-06-01

    Laser scanning samples the surface geometry of objects efficiently and records versatile information as point clouds. However, several scans are often required to fully cover a scene. Therefore, a registration step is required that transforms the different scans into a common coordinate system. The registration of point clouds is usually conducted in two steps, i.e. coarse registration followed by fine registration. In this study an automatic marker-free coarse registration method for pair-wise scans is presented. First the two input point clouds are re-sampled as voxels and dimensionality features of the voxels are determined by principal component analysis (PCA). Then voxel cells with the same dimensionality are clustered. Next, the Extended Gaussian Image (EGI) descriptors of those voxel clusters are constructed using the significant eigenvectors of each voxel in the cluster. Correspondences between clusters in the source and target data are obtained according to the similarity between their EGI descriptors. The random sample consensus (RANSAC) algorithm is employed to remove outlying correspondences until a coarse alignment is obtained. If necessary, a fine registration is performed in a final step. This new method is illustrated on scan data sampling two indoor scenarios. The results of the tests are evaluated by computing the point-to-point distance between the two input point clouds. The two tests resulted in mean distances of 7.6 mm and 9.5 mm respectively, which are adequate for fine registration.
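
    The PCA dimensionality features used above are commonly defined from the eigenvalues λ1 ≥ λ2 ≥ λ3 of a neighborhood's covariance matrix: linearity (λ1-λ2)/λ1, planarity (λ2-λ3)/λ1, scattering λ3/λ1. A self-contained sketch using the standard analytic eigenvalue method for symmetric 3x3 matrices (generic definitions, not the paper's exact feature set):

    ```python
    import math

    def cov3(pts):
        n = len(pts)
        m = [sum(p[k] for p in pts) / n for k in range(3)]
        return [[sum((p[i] - m[i]) * (p[j] - m[j]) for p in pts) / n
                 for j in range(3)] for i in range(3)]

    def eig_sym3(a):
        """Eigenvalues of a symmetric 3x3 matrix, descending (analytic method)."""
        p1 = a[0][1]**2 + a[0][2]**2 + a[1][2]**2
        if p1 == 0:                               # already diagonal
            return sorted((a[0][0], a[1][1], a[2][2]), reverse=True)
        q = (a[0][0] + a[1][1] + a[2][2]) / 3
        p = math.sqrt((sum((a[i][i] - q)**2 for i in range(3)) + 2 * p1) / 6)
        b = [[(a[i][j] - (q if i == j else 0)) / p for j in range(3)]
             for i in range(3)]
        detb = (b[0][0] * (b[1][1]*b[2][2] - b[1][2]*b[2][1])
                - b[0][1] * (b[1][0]*b[2][2] - b[1][2]*b[2][0])
                + b[0][2] * (b[1][0]*b[2][1] - b[1][1]*b[2][0]))
        phi = math.acos(max(-1.0, min(1.0, detb / 2))) / 3
        e1 = q + 2 * p * math.cos(phi)
        e3 = q + 2 * p * math.cos(phi + 2 * math.pi / 3)
        return [e1, 3 * q - e1 - e3, e3]

    def dimensionality(pts):
        """(linearity, planarity, scattering) from covariance eigenvalues."""
        l1, l2, l3 = eig_sym3(cov3(pts))
        return ((l1 - l2) / l1, (l2 - l3) / l1, l3 / l1)

    # Points along a straight pole: linearity should dominate
    pole = [(0, 0, z) for z in range(5)]
    print(dimensionality(pole))
    ```

    Voxels classified as predominantly linear, planar, or scattered can then be clustered before building the EGI descriptors.
    
    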

  20. Classification of Mobile Laser Scanning Point Clouds from Height Features

    Science.gov (United States)

    Zheng, M.; Lemmens, M.; van Oosterom, P.

    2017-09-01

    The demand for 3D maps of cities and road networks is steadily growing and mobile laser scanning (MLS) systems are often the preferred geo-data acquisition method for capturing such scenes. Because MLS systems are mounted on cars or vans they can acquire billions of points of road scenes within a few hours of survey. Manual processing of point clouds is labour intensive and thus time consuming and expensive. Hence, the need for rapid and automated methods for 3D mapping of dense point clouds is growing exponentially. Over the last five years, research on automated 3D mapping of MLS data has intensified tremendously. In this paper, we present our work on automated classification of MLS point clouds. In the present stage of the research we exploited three features - two height components and one reflectance value - and achieved an overall accuracy of 73 %, which is encouraging for further refinement of our approach.

  1. Frame Filtering and Skipping for Point Cloud Data Video Transmission

    Directory of Open Access Journals (Sweden)

    Carlos Moreno

    2017-01-01

    Full Text Available Sensors for collecting 3D spatial data from the real world are becoming more important. They are a prime research topic and have applications in consumer markets, such as medical, entertainment, and robotics. However, a primary concern with collecting this data is the vast amount of information being generated, which must be processed before being transmitted. To address the issue, we propose the use of filtering methods and frame skipping. To collect the 3D spatial data, called point clouds, we used the Microsoft Kinect sensor. In addition, we utilized the Point Cloud Library to process and filter the data being generated by the Kinect. Two different computers were used: a client, which collects, filters, and transmits the point clouds; and a server, which receives and visualizes the point clouds. The client also checks for similarity in consecutive frames, skipping those that reach a similarity threshold. In order to compare the filtering methods and test the effectiveness of the frame skipping technique, quality of service (QoS) metrics such as frame rate and percentage of points filtered were introduced. These metrics indicate how well a certain combination of filtering method and frame skipping accomplishes the goal of transmitting point clouds from one location to another. We found that the pass-through filter in conjunction with frame skipping provides the best relative QoS. However, the results also show that there is still too much data for a satisfactory QoS. For a real-time system to provide reasonable end-to-end quality, dynamic compression and progressive transmission need to be utilized.
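
    The frame-skipping idea above, skipping frames too similar to the last transmitted one, can be sketched with a brute-force similarity measure (the threshold, tolerance, and toy frames are assumptions for illustration; a real client would use the Point Cloud Library's filters instead):

    ```python
    def frame_similarity(f1, f2, tol=0.05):
        """Fraction of points in f1 with a point of f2 within tol (brute force)."""
        if not f1:
            return 1.0
        def near(p, q):
            return sum((a - b) ** 2 for a, b in zip(p, q)) <= tol * tol
        hits = sum(1 for p in f1 if any(near(p, q) for q in f2))
        return hits / len(f1)

    def transmit(frames, threshold=0.9):
        """Send a frame only if it is not too similar to the last one sent."""
        sent = []
        for f in frames:
            if not sent or frame_similarity(f, sent[-1]) < threshold:
                sent.append(f)
        return sent

    static = [(0.0, 0.0, 1.0), (0.5, 0.2, 1.1)]
    moved  = [(2.0, 0.0, 1.0), (2.5, 0.2, 1.1)]
    frames = [static, static, static, moved]
    print(len(transmit(frames)))  # 2: duplicates of the static scene are skipped
    ```

    Combined with a spatial filter that thins each frame before comparison, this directly reduces the bandwidth the server must absorb.
    
    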

  2. ESTIMATING AIRCRAFT HEADING BASED ON LASERSCANNER DERIVED POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    Z. Koppanyi

    2015-03-01

    Full Text Available Using LiDAR sensors for tracking and monitoring an operating aircraft is a new application. In this paper, we present data processing methods to estimate the heading of a taxiing aircraft using laser point clouds. During the data acquisition, a Velodyne HDL-32E laser scanner tracked a moving Cessna 172 airplane. The point clouds captured at different times were used for heading estimation. After addressing the problem and specifying the equation of motion to reconstruct the aircraft point cloud from the consecutive scans, three methods are investigated here. The first requires a reference model to estimate the relative angle from the captured data by fitting different cross-sections (horizontal profiles). In the second approach, the iterative closest point (ICP) method is used between the consecutive point clouds to determine the horizontal translation of the captured aircraft body. Regarding the ICP, three different versions were compared, namely, the ordinary 3D, 3-DoF 3D and 2-DoF 3D ICP. It was found that 2-DoF 3D ICP provides the best performance. Finally, the last algorithm searches for the unknown heading and velocity parameters by minimizing the volume of the reconstructed airplane. The three methods were compared using three test data types, which are distinguished by object-sensor distance, heading and velocity. We found that the ICP algorithm fails at long distances and when the aircraft motion direction is perpendicular to the scan plane, but the first and the third methods give robust and accurate results at a 40 m object distance and at ~12 knots for a small Cessna airplane.

  3. MIN-CUT BASED SEGMENTATION OF AIRBORNE LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    S. Ural

    2012-07-01

    Full Text Available Introducing an organization to the unstructured point cloud before extracting information from airborne lidar data is common in many applications. Aggregating points with similar features into segments in 3-D that comply with the nature of actual objects is affected by the neighborhood, scale, features and noise, among other aspects. In this study, we present a min-cut based method for segmenting the point cloud. We first assess the neighborhood of each point in 3-D by investigating the local geometric and statistical properties of the candidates. Neighborhood selection is essential since point features are calculated within their local neighborhood. Following neighborhood determination, we calculate point features and determine the clusters in the feature space. We adapt a graph representation from image processing, which is especially used in pixel labeling problems, and establish it for unstructured 3-D point clouds. The edges of the graph, which connect the points with each other and with nodes representing feature clusters, hold the smoothness costs in the spatial domain and data costs in the feature domain. Smoothness costs ensure spatial coherence, while data costs control consistency with the representative feature clusters. This graph representation formalizes the segmentation task as an energy minimization problem. It allows the implementation of an approximate solution by min-cuts for a global minimum of this NP-hard minimization problem in low-order polynomial time. We test our method with an airborne lidar point cloud acquired with a maximum planned post spacing of 1.4 m and a vertical accuracy of 10.5 cm RMSE. We present the effects of neighborhood and feature determination on the segmentation results and assess the accuracy and efficiency of the implemented min-cut algorithm, as well as its sensitivity to the parameters of the smoothness and data cost functions. We find that smoothness cost that only considers simple distance

  4. Fast Semantic Segmentation of 3d Point Clouds with Strongly Varying Density

    Science.gov (United States)

    Hackel, Timo; Wegner, Jan D.; Schindler, Konrad

    2016-06-01

    We describe an effective and efficient method for point-wise semantic classification of 3D point clouds. The method can handle unstructured and inhomogeneous point clouds such as those derived from static terrestrial LiDAR or photogrammetric reconstruction, and it is computationally efficient, making it possible to process point clouds with many millions of points in a matter of minutes. The key issue, both to cope with strong variations in point density and to bring down computation time, turns out to be careful handling of neighborhood relations. By choosing appropriate definitions of a point's (multi-scale) neighborhood, we obtain a feature set that is both expressive and fast to compute. We evaluate our classification method both on benchmark data from a mobile mapping platform and on a variety of large, terrestrial laser scans with greatly varying point density. The proposed feature set outperforms the state of the art with respect to per-point classification accuracy, while at the same time being much faster to compute.
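The multi-scale neighborhood features described above can be sketched with standard covariance (eigenvalue) descriptors. The radii and the linearity/planarity/scattering definitions below are common choices in this literature, not necessarily the exact feature set of the paper:

```python
import numpy as np
from scipy.spatial import cKDTree

def eigen_features(points, radii=(0.25, 0.5, 1.0)):
    """Per-point [linearity, planarity, scattering] at each radius."""
    tree = cKDTree(points)
    n = len(points)
    out = np.zeros((n, 3 * len(radii)))
    for k, r in enumerate(radii):
        for i in range(n):
            nb = points[tree.query_ball_point(points[i], r)]
            if len(nb) < 3:
                continue                      # too few neighbors at this scale
            cov = np.cov(nb.T)
            w = np.sort(np.linalg.eigvalsh(cov))[::-1]   # l1 >= l2 >= l3
            l1 = max(w[0], 1e-12)
            out[i, 3 * k:3 * k + 3] = [(w[0] - w[1]) / l1,  # linearity
                                       (w[1] - w[2]) / l1,  # planarity
                                       w[2] / l1]           # scattering
    return out
```

On a planar patch the planarity channel dominates, on a wire-like structure the linearity channel does, which is what makes such descriptors useful for per-point classification.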

  5. Fully Convolutional Networks for Ground Classification from LIDAR Point Clouds

    Science.gov (United States)

    Rizaldy, A.; Persello, C.; Gevaert, C. M.; Oude Elberink, S. J.

    2018-05-01

    Deep Learning has been massively used for image classification in recent years. The use of deep learning for ground classification from LIDAR point clouds has also been recently studied. However, point clouds need to be converted into an image in order to use Convolutional Neural Networks (CNNs). In state-of-the-art techniques, this conversion is slow because each point is converted into a separate image. This approach leads to highly redundant computation during conversion and classification. The goal of this study is to design a more efficient data conversion and ground classification. This goal is achieved by first converting the whole point cloud into a single image. The classification is then performed by a Fully Convolutional Network (FCN), a modified version of CNN designed for pixel-wise image classification. The proposed method is significantly faster than state-of-the-art techniques. On the ISPRS Filter Test dataset, it is 78 times faster for conversion and 16 times faster for classification. Our experimental analysis on the same dataset shows that the proposed method results in 5.22 % of total error, 4.10 % of type I error, and 15.07 % of type II error. Compared to the previous CNN-based technique and LAStools software, the proposed method reduces the total error and type I error (while type II error is slightly higher). The method was also tested on a very high point density LIDAR point cloud, resulting in 4.02 % of total error, 2.15 % of type I error and 6.14 % of type II error.

  6. a Point Cloud Classification Approach Based on Vertical Structures of Ground Objects

    Science.gov (United States)

    Zhao, Y.; Hu, Q.; Hu, W.

    2018-04-01

    This paper proposes a novel method for point cloud classification using vertical structural characteristics of ground objects. Since urbanization develops rapidly nowadays, urban ground objects also change frequently. Conventional photogrammetric methods cannot satisfy the requirements of updating the ground objects' information efficiently, so LiDAR (Light Detection and Ranging) technology is employed to accomplish this task. LiDAR data, namely point cloud data, provide detailed three-dimensional coordinates of ground objects, but this kind of data is discrete and unorganized. To accomplish ground object classification with a point cloud, we first construct horizontal grids and vertical layers to organize the point cloud data, and then calculate vertical characteristics, including density and measures of dispersion, and form characteristic curves for each grid. With the help of PCA processing and the K-means algorithm, we analyze the similarities and differences of the characteristic curves. Curves that have similar features will be classified into the same class, and the points corresponding to these curves will be classified as well. The whole process is simple but effective, and this approach does not need the assistance of other data sources. In this study, point cloud data are classified into three classes, which are vegetation, buildings, and roads. When the horizontal grid spacing and vertical layer spacing are 3 m and 1 m respectively, the vertical characteristic is set as density, and the number of dimensions after PCA processing is 11, the overall precision of the classification result is about 86.31 %. The result can help us quickly understand the distribution of various ground objects.
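A toy version of this grid-and-layer pipeline can be sketched as follows. The 3 m cells, 1 m layers and k = 3 classes come from the abstract; the plain NumPy/SciPy PCA and k-means plumbing is a stand-in for whatever implementation the authors used:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def classify_cells(points, cell=3.0, layer=1.0, n_layers=12, k=3, dims=5, seed=0):
    """Cluster horizontal grid cells by their vertical point-density curves."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    lay = np.clip((points[:, 2] / layer).astype(int), 0, n_layers - 1)
    cells, inv = np.unique(ij, axis=0, return_inverse=True)
    inv = inv.ravel()
    curves = np.zeros((len(cells), n_layers))
    np.add.at(curves, (inv, lay), 1.0)             # per-cell layer histogram
    curves /= curves.sum(axis=1, keepdims=True)    # normalized density curve
    centred = curves - curves.mean(axis=0)         # PCA via SVD
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    z = centred @ vt[:min(dims, vt.shape[0])].T
    _, labels = kmeans2(z, k, minit='++', seed=seed)
    return cells, labels
```

Cells whose curves concentrate in the lowest layer behave like ground, roof-like cells peak in a high layer, and vegetation spreads across many layers, so k-means separates them cleanly.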

  7. Morphological Operations to Extract Urban Curbs in 3D MLS Point Clouds

    Directory of Open Access Journals (Sweden)

    Borja Rodríguez-Cuenca

    2016-06-01

    Full Text Available Automatic curb detection is an important issue in road maintenance, three-dimensional (3D) urban modeling, and autonomous navigation fields. This paper is focused on the segmentation of curbs and street boundaries using a 3D point cloud captured by a mobile laser scanner (MLS) system. Our method provides a solution based on the projection of the measured point cloud on the XY plane. Over that plane, a segmentation algorithm is carried out based on morphological operations to determine the location of street boundaries. In addition, a solution to extract curb edges based on the roughness of the point cloud is proposed. The proposed method is valid in both straight and curved road sections and applicable both to laser scanner and stereo vision 3D data due to the independence of its scanning geometry. The proposed method has been successfully tested with two datasets measured by different sensors. The first dataset corresponds to a point cloud measured by a TOPCON sensor in the Spanish town of Cudillero. The second dataset corresponds to a point cloud measured by a RIEGL sensor in the Austrian town of Horn. The extraction method provides completeness and correctness rates above 90% and quality values higher than 85% in both studied datasets.
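The projection-and-morphology idea can be illustrated with a small occupancy-grid sketch. The cell size, the opening/closing sequence and the erosion-based boundary rule are illustrative assumptions, not the paper's exact operators:

```python
import numpy as np
from scipy import ndimage

def occupancy_boundary(points, cell=0.2):
    """Project points to XY, clean the occupancy mask, return (mask, boundary)."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    ij -= ij.min(axis=0)
    grid = np.zeros(ij.max(axis=0) + 3, dtype=bool)   # +3: one-cell margin
    grid[ij[:, 0] + 1, ij[:, 1] + 1] = True           # so morphology isn't clipped
    s = np.ones((3, 3), dtype=bool)
    clean = ndimage.binary_closing(ndimage.binary_opening(grid, s), s)
    # boundary = cells of the cleaned mask that vanish under erosion
    return clean, clean & ~ndimage.binary_erosion(clean, s)
```

Opening removes isolated occupied cells (noise), closing fills small gaps, and the erosion residue traces a one-cell boundary ring, which is the raw material for curb/boundary vectorization.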

  8. SURFACE FITTING FILTERING OF LIDAR POINT CLOUD WITH WAVEFORM INFORMATION

    Directory of Open Access Journals (Sweden)

    S. Xing

    2017-09-01

    Full Text Available Full-waveform LiDAR is an active technology of photogrammetry and remote sensing. It provides more detailed information about objects along the path of a laser pulse than discrete-return topographic LiDAR. High-quality point cloud and waveform information can be obtained by waveform decomposition, which can contribute to accurate filtering. A surface fitting filtering method using waveform information is proposed to exploit this advantage. Firstly, the discrete point cloud and waveform parameters are resolved by globally convergent Levenberg-Marquardt decomposition. Secondly, the ground seed points are selected, and abnormal ones are detected by waveform parameters and robust estimation. Thirdly, the terrain surface is fitted and the height difference threshold is determined in consideration of window size and mean square error. Finally, the points are classified gradually as the window size increases; the filtering process finishes when the window size exceeds the threshold. Waveform data from urban, farmland and mountain areas of the "WATER" (Watershed Allied Telemetry Experimental Research) experiment are selected for experiments. Results show that, compared with the traditional method, the accuracy of point cloud filtering is further improved and the proposed method has high practical value.
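The decomposition step can be sketched as fitting a sum of Gaussian echoes to a recorded waveform with least squares. This uses SciPy's plain Levenberg-Marquardt solver, whereas the paper uses a globally convergent LM variant, and a real pipeline would also need automatic peak detection to produce the initial guesses:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_gaussians(t, w, guesses):
    """Fit w(t) ~ sum of Gaussians; guesses = [(amplitude, centre, width), ...]."""
    def model(p):
        return sum(a * np.exp(-(t - m) ** 2 / (2 * s ** 2))
                   for a, m, s in p.reshape(-1, 3))
    res = least_squares(lambda p: model(p) - w, np.ravel(guesses), method='lm')
    return res.x.reshape(-1, 3)
```

Each recovered centre corresponds to an echo (and hence a range measurement), while the amplitude and width are the waveform parameters that later feed the robust seed-point screening.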

  9. Methods for registration laser scanner point clouds in forest stands

    International Nuclear Information System (INIS)

    Bienert, A.; Pech, K.; Maas, H.-G.

    2011-01-01

    Laser scanning is a fast and efficient 3-D measurement technique to capture surface points describing the geometry of a complex object in an accurate and reliable way. Besides airborne laser scanning, terrestrial laser scanning finds growing interest for forestry applications. These two different recording platforms show large differences in resolution, recording area and scan viewing direction. Using both datasets for a combined point cloud analysis may yield advantages because of their largely complementary information. In this paper, methods will be presented to automatically register airborne and terrestrial laser scanner point clouds of a forest stand. In a first step, tree detection is performed in both datasets in an automatic manner. In a second step, corresponding tree positions are determined using RANSAC. Finally, the geometric transformation is performed, divided into a coarse and a fine registration. After the coarse registration, the fine registration is done in an iterative manner (ICP) using the point clouds themselves. The methods are tested and validated with a dataset of a forest stand. The presented registration results provide accuracies which fulfill the forestry requirements.
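The fine-registration stage can be sketched with a textbook point-to-point ICP, using an SVD-based rigid-transform estimate per iteration. This simplified version assumes the coarse registration (from corresponding tree positions) has already brought the two clouds close:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=20):
    """Iteratively align src to dst; returns accumulated rotation R and translation t."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)              # nearest-neighbor correspondences
        q = dst[idx]
        mp, mq = cur.mean(0), q.mean(0)
        H = (cur - mp).T @ (q - mq)           # cross-covariance (Kabsch)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        Ri = Vt.T @ D @ U.T                   # best rotation, reflection-safe
        ti = mq - Ri @ mp
        cur = cur @ Ri.T + ti
        R, t = Ri @ R, Ri @ t + ti            # compose incremental transforms
    return R, t
```

Without a reasonable coarse alignment the nearest-neighbor correspondences are wrong and ICP converges to a local minimum, which is exactly why the paper's RANSAC-over-tree-positions step comes first.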

  10. A point cloud based pipeline for depth reconstruction from autostereoscopic sets

    Science.gov (United States)

    Niquin, Cédric; Prévost, Stéphanie; Remion, Yannick

    2010-02-01

    This is a three-step pipeline to construct a 3D mesh of a scene from a set of N images, destined to be viewed on auto-stereoscopic displays. The first step matches the pixels to create a point cloud using a new algorithm based on graph-cuts. It exploits the data redundancy of the N images to ensure the geometric consistency of the scene and to reduce the graph complexity, in order to speed up the computation. It performs an accurate detection of occlusions and its results can then be used in applications like view synthesis. The second step slightly moves the points along the Z-axis to refine the point cloud. It uses a new cost including both occlusion positions and light variations deduced from the matching. The Z values are selected using a dynamic programming algorithm. This step finally generates a point cloud, which is fine enough for applications like augmented reality. From any of the two previously defined point clouds, the last step creates a colored mesh, which is a convenient data structure to be used in graphics APIs. It also generates N depth maps, allowing a comparison between the results of our method and those of other methods.

  11. On the performance of metrics to predict quality in point cloud representations

    Science.gov (United States)

    Alexiou, Evangelos; Ebrahimi, Touradj

    2017-09-01

    Point clouds are a promising alternative for immersive representation of visual contents. Recently, an increased interest has been observed in the acquisition, processing and rendering of this modality. Although subjective and objective evaluations are critical in order to assess the visual quality of media content, they still remain open problems for point cloud representation. In this paper we focus our efforts on subjective quality assessment of point cloud geometry, subject to typical types of impairments such as noise corruption and compression-like distortions. In particular, we propose a subjective methodology that is closer to real-life scenarios of point cloud visualization. The performance of the state-of-the-art objective metrics is assessed by considering the subjective scores as the ground truth. Moreover, we investigate the impact of adopting different test methodologies by comparing them. Advantages and drawbacks of every approach are reported, based on statistical analysis. The results and conclusions of this work provide useful insights that could be considered in future experimentation.
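One family of objective metrics whose predictive power such studies evaluate is point-to-point (often called D1) geometric distance. A symmetric RMS version can be sketched as:

```python
import numpy as np
from scipy.spatial import cKDTree

def d1_rms(a, b):
    """Symmetric RMS point-to-point error between clouds a and b."""
    da, _ = cKDTree(b).query(a)   # each point of a to its nearest in b
    db, _ = cKDTree(a).query(b)   # and vice versa, for symmetry
    return max(np.sqrt(np.mean(da ** 2)), np.sqrt(np.mean(db ** 2)))
```

Such purely geometric metrics are exactly the kind whose correlation with subjective scores the paper assesses; their weakness is that they ignore how distortions are perceived once the cloud is rendered.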

  12. Feature Extraction from 3D Point Cloud Data Based on Discrete Curves

    Directory of Open Access Journals (Sweden)

    Yi An

    2013-01-01

    Full Text Available Reliable feature extraction from 3D point cloud data is an important problem in many application domains, such as reverse engineering, object recognition, industrial inspection, and autonomous navigation. In this paper, a novel method is proposed for extracting the geometric features from 3D point cloud data based on discrete curves. We extract the discrete curves from 3D point cloud data and research the behaviors of chord lengths, angle variations, and principal curvatures at the geometric features in the discrete curves. Then, the corresponding similarity indicators are defined. Based on the similarity indicators, the geometric features can be extracted from the discrete curves, which are also the geometric features of 3D point cloud data. The threshold values of the similarity indicators are taken from [0,1], which characterize the relative relationship and make the threshold setting easier and more reasonable. The experimental results demonstrate that the proposed method is efficient and reliable.
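The idea of a similarity indicator in [0, 1] can be illustrated with the angle-variation case alone: the turning angle at each interior vertex of a discrete curve, normalized so a single threshold flags corner-like feature points. The paper's full indicators also use chord lengths and principal curvatures; this sketch shows only the angle term:

```python
import numpy as np

def angle_indicator(curve):
    """Turning angle at interior vertices of a polyline, scaled to [0, 1]."""
    a, b, c = curve[:-2], curve[1:-1], curve[2:]
    u, v = b - a, c - b                              # incoming/outgoing segments
    cosang = np.einsum('ij,ij->i', u, v) / (
        np.linalg.norm(u, axis=1) * np.linalg.norm(v, axis=1))
    ang = np.arccos(np.clip(cosang, -1.0, 1.0))      # 0 = straight continuation
    return ang / np.pi                               # normalized indicator
```

A straight run scores 0, a right-angle corner scores 0.5, so a threshold like 0.25 picks out sharp features regardless of the curve's absolute scale.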

  13. HIERARCHICAL REGULARIZATION OF POLYGONS FOR PHOTOGRAMMETRIC POINT CLOUDS OF OBLIQUE IMAGES

    Directory of Open Access Journals (Sweden)

    L. Xie

    2017-05-01

    Full Text Available Despite the success of multi-view stereo (MVS) reconstruction from massive oblique images at city scale, only point clouds and triangulated meshes are available from existing MVS pipelines, which are topologically defect-laden, free of semantic information and hard to edit and manipulate interactively in further applications. On the other hand, 2D polygons and polygonal models are still the industrial standard. However, extraction of the 2D polygons from MVS point clouds is still a non-trivial task, given the fact that the boundaries of the detected planes are zigzagged and regularities, such as parallelism and orthogonality, cannot be preserved. Aiming to solve these issues, this paper proposes a hierarchical polygon regularization method for the photogrammetric point clouds from existing MVS pipelines, which comprises local and global levels. After boundary point extraction, e.g. using alpha shapes, the local level is used to consolidate the original points by refining the orientation and position of the points using linear priors. The points are then grouped into local segments by forward searching. At the global level, regularities are enforced through a labeling process, which encourages segments to share the same label, where segments with the same label are parallel or orthogonal. This is formulated as a Markov Random Field and solved efficiently. Preliminary results are obtained with point clouds from aerial oblique images and compared with two classical regularization methods, which reveals that the proposed method is more powerful in abstracting a single building and is promising for further 3D polygonal model reconstruction and GIS applications.

  14. A simple method for simultaneous spectrophotometric determination of brilliant blue fcf and sunset yellow fcf in food samples after cloud point extraction

    International Nuclear Information System (INIS)

    Heydari, R.

    2016-01-01

    In this study, a simple and low-cost method for extraction and pre-concentration of brilliant blue FCF and sunset yellow FCF in food samples using cloud point extraction (CPE) and spectrophotometric detection was developed. The effects of main factors such as solution pH, surfactant concentration, salt and its concentration, incubation time and temperature on the CPE of both dyes were investigated and optimized. The calibration graphs were linear in the range of 16.0-1300 ng mL-1 for brilliant blue FCF and 25.0-1300 ng mL-1 for sunset yellow FCF under the optimum conditions. Limit of detection values for brilliant blue FCF and sunset yellow FCF were 3 and 6 ng mL-1, respectively. The relative standard deviation (RSD) values of both dyes for repeated measurements (n=6) were less than 4.57 %. The obtained results demonstrated that the proposed method can be applied satisfactorily to determine these dyes in different food samples. (author)
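The analytical figures of merit quoted here (calibration line, detection limit, RSD) follow standard formulas, sketched below with made-up numbers. The 3-sigma LOD criterion is the usual IUPAC convention, assumed here rather than taken from the paper:

```python
import numpy as np

def calibration(conc, absorbance):
    """Least-squares calibration line: absorbance = slope * conc + intercept."""
    slope, intercept = np.polyfit(conc, absorbance, 1)
    return slope, intercept

def lod(blank_sd, slope):
    """Limit of detection by the 3-sigma criterion."""
    return 3.0 * blank_sd / slope

def rsd_percent(replicates):
    """Relative standard deviation of replicate measurements, in percent."""
    r = np.asarray(replicates, dtype=float)
    return 100.0 * r.std(ddof=1) / r.mean()
```

With a slope of 5e-4 absorbance units per ng mL-1 and a blank standard deviation of 0.001, the 3-sigma LOD works out to 6 ng mL-1; the concentrations and uncertainties here are purely illustrative.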

  15. Efficient point cloud data processing in shipbuilding: Reformative component extraction method and registration method

    Directory of Open Access Journals (Sweden)

    Jingyu Sun

    2014-07-01

    Full Text Available To survive in the current shipbuilding industry, it is of vital importance for shipyards to have the ship components’ accuracy evaluated efficiently during most of the manufacturing steps. Evaluating components’ accuracy by comparing each component’s point cloud data, scanned by laser scanners, with the ship’s design data, formatted in CAD, cannot be performed efficiently when (1) the components extracted from the point cloud data include irregular obstacles, or (2) the registration of the two data sets has no clear direction setting. This paper presents reformative point cloud data processing methods to solve these problems. K-d tree construction of the point cloud data speeds up neighbor searching for each point. The region growing method performed on the neighbor points of the seed point extracts the continuous part of the component, while curved surface fitting and B-spline curved line fitting at the edge of the continuous part recognize the neighbor domains of the same component divided by obstacles’ shadows. The ICP (Iterative Closest Point) algorithm conducts a registration of the two sets of data after the proper registration direction is decided by principal component analysis. In experiments conducted at the shipyard, 200 curved shell plates were extracted from the scanned point cloud data, and registrations were conducted between them and the designed CAD data using the proposed methods for an accuracy evaluation. Results show that the methods proposed in this paper support accuracy-evaluation-targeted point cloud data processing efficiently in practice.
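The k-d-tree-plus-region-growing step can be sketched as follows: from a seed index, collect every point reachable through neighbors within a fixed radius. The radius and the flat Euclidean growth criterion are assumptions for illustration; the paper grows over the continuous surface of a component:

```python
import numpy as np
from scipy.spatial import cKDTree

def grow_region(points, seed, radius=0.5):
    """Indices of all points connected to `seed` via neighbors within `radius`."""
    tree = cKDTree(points)
    visited = np.zeros(len(points), dtype=bool)
    stack, region = [seed], []
    visited[seed] = True
    while stack:
        i = stack.pop()
        region.append(i)
        for j in tree.query_ball_point(points[i], radius):
            if not visited[j]:
                visited[j] = True
                stack.append(j)
    return np.sort(np.array(region))
```

Because each neighborhood query is logarithmic-time against the k-d tree, the growth stays efficient even on shipyard-scale scans, which is the point the abstract makes about the k-d tree construction.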

  16. FULLY CONVOLUTIONAL NETWORKS FOR GROUND CLASSIFICATION FROM LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    A. Rizaldy

    2018-05-01

    Full Text Available Deep Learning has been massively used for image classification in recent years. The use of deep learning for ground classification from LIDAR point clouds has also been recently studied. However, point clouds need to be converted into an image in order to use Convolutional Neural Networks (CNNs). In state-of-the-art techniques, this conversion is slow because each point is converted into a separate image. This approach leads to highly redundant computation during conversion and classification. The goal of this study is to design a more efficient data conversion and ground classification. This goal is achieved by first converting the whole point cloud into a single image. The classification is then performed by a Fully Convolutional Network (FCN), a modified version of CNN designed for pixel-wise image classification. The proposed method is significantly faster than state-of-the-art techniques. On the ISPRS Filter Test dataset, it is 78 times faster for conversion and 16 times faster for classification. Our experimental analysis on the same dataset shows that the proposed method results in 5.22 % of total error, 4.10 % of type I error, and 15.07 % of type II error. Compared to the previous CNN-based technique and LAStools software, the proposed method reduces the total error and type I error (while type II error is slightly higher). The method was also tested on a very high point density LIDAR point cloud, resulting in 4.02 % of total error, 2.15 % of type I error and 6.14 % of type II error.

  17. Robust Spacecraft Component Detection in Point Clouds

    Directory of Open Access Journals (Sweden)

    Quanmao Wei

    2018-03-01

    Full Text Available Automatic component detection of spacecraft can assist in on-orbit operation and space situational awareness. Spacecraft are generally composed of solar panels and cuboidal or cylindrical modules. These components can be simply represented by geometric primitives like plane, cuboid and cylinder. Based on this prior, we propose a robust automatic detection scheme to automatically detect such basic components of spacecraft in three-dimensional (3D) point clouds. In the proposed scheme, cylinders are first detected in the iteration of the energy-based geometric model fitting and cylinder parameter estimation. Then, planes are detected by Hough transform and further described as bounded patches with their minimum bounding rectangles. Finally, the cuboids are detected with pair-wise geometry relations from the detected patches. After successive detection of cylinders, planar patches and cuboids, a mid-level geometry representation of the spacecraft can be delivered. We tested the proposed component detection scheme on spacecraft 3D point clouds synthesized by computer-aided design (CAD) models and those recovered by image-based reconstruction, respectively. Experimental results illustrate that the proposed scheme can detect the basic geometric components effectively and has fine robustness against noise and point distribution density.
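The plane-detection stage can be conveyed with a minimal RANSAC plane detector. The paper itself uses a Hough transform; RANSAC is a common stand-in shown here for brevity, and the trial count and inlier tolerance are assumptions:

```python
import numpy as np

def ransac_plane(points, n_trials=200, tol=0.02, seed=0):
    """Boolean inlier mask of the dominant plane found by RANSAC."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_trials):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                        # degenerate (collinear) sample
        n = n / norm
        dist = np.abs((points - p0) @ n)    # point-to-plane distances
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

The detected inlier set would then be bounded by its minimum bounding rectangle to form a patch, and pairs of patches checked for the parallel/orthogonal relations that indicate a cuboid.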

  18. Robust Spacecraft Component Detection in Point Clouds.

    Science.gov (United States)

    Wei, Quanmao; Jiang, Zhiguo; Zhang, Haopeng

    2018-03-21

    Automatic component detection of spacecraft can assist in on-orbit operation and space situational awareness. Spacecraft are generally composed of solar panels and cuboidal or cylindrical modules. These components can be simply represented by geometric primitives like plane, cuboid and cylinder. Based on this prior, we propose a robust automatic detection scheme to automatically detect such basic components of spacecraft in three-dimensional (3D) point clouds. In the proposed scheme, cylinders are first detected in the iteration of the energy-based geometric model fitting and cylinder parameter estimation. Then, planes are detected by Hough transform and further described as bounded patches with their minimum bounding rectangles. Finally, the cuboids are detected with pair-wise geometry relations from the detected patches. After successive detection of cylinders, planar patches and cuboids, a mid-level geometry representation of the spacecraft can be delivered. We tested the proposed component detection scheme on spacecraft 3D point clouds synthesized by computer-aided design (CAD) models and those recovered by image-based reconstruction, respectively. Experimental results illustrate that the proposed scheme can detect the basic geometric components effectively and has fine robustness against noise and point distribution density.

  19. Visibility Analysis in a Point Cloud Based on the Medial Axis Transform

    NARCIS (Netherlands)

    Peters, R.; Ledoux, H.; Biljecki, F.

    2015-01-01

    Visibility analysis is an important application of 3D GIS data. Current approaches require 3D city models that are often derived from detailed aerial point clouds. We present an approach to visibility analysis that does not require a city model but works directly on the point cloud. Our approach is

  20. Taming the beast : Free and open-source massive point cloud web visualization

    NARCIS (Netherlands)

    Martinez-Rubi, O.; Verhoeven, S.; Van Meersbergen, M.; Schütz, M.; Van Oosterom, P.; Gonçalves, R.; Tijssen, T.

    2015-01-01

    Powered by WebGL, some renderers have recently become available for the visualization of point cloud data over the web, for example Plasio or Potree. We have extended Potree to be able to visualize massive point clouds and we have successfully used it with the second national Lidar survey of the

  1. Feature curve extraction from point clouds via developable strip intersection

    Directory of Open Access Journals (Sweden)

    Kai Wah Lee

    2016-04-01

    Full Text Available In this paper, we study the problem of computing smooth feature curves from CAD-type point cloud models. The proposed method reconstructs feature curves from the intersections of developable strip pairs which approximate the regions along both sides of the features. The generation of developable surfaces is based on a linear approximation of the given point cloud through a variational shape approximation approach. A line segment sequencing algorithm is proposed for collecting feature line segments into different feature sequences as well as sequential groups of data points. A developable surface approximation procedure is employed to refine incident approximation planes of data points into developable strips. Some experimental results are included to demonstrate the performance of the proposed method.

  2. Point Cluster Analysis Using a 3D Voronoi Diagram with Applications in Point Cloud Segmentation

    Directory of Open Access Journals (Sweden)

    Shen Ying

    2015-08-01

    Full Text Available Three-dimensional (3D) point analysis and visualization is one of the most effective methods of point cluster detection and segmentation in geospatial datasets. However, serious scattering and clotting characteristics interfere with the visual detection of 3D point clusters. To overcome this problem, this study proposes the use of 3D Voronoi diagrams to analyze and visualize 3D points instead of the original data items. The proposed algorithm computes the cluster of 3D points by applying a set of 3D Voronoi cells to describe and quantify 3D points. The decompositions of the point clouds of 3D models are guided by the 3D Voronoi cell parameters. The parameter values are mapped from the Voronoi cells to 3D points to show the spatial pattern and relationships; thus, a 3D point cluster pattern can be highlighted and easily recognized. To capture different cluster patterns, continuous progressive clusters and segmentations are tested. The 3D spatial relationship is shown to facilitate cluster detection. Furthermore, the generated segmentations of real 3D data cases are exploited to demonstrate the feasibility of our approach in detecting different spatial clusters for continuous point cloud segmentation.

  3. An approach of point cloud denoising based on improved bilateral filtering

    Science.gov (United States)

    Zheng, Zeling; Jia, Songmin; Zhang, Guoliang; Li, Xiuzhi; Zhang, Xiangyin

    2018-04-01

    An omnidirectional mobile platform is designed for building point clouds based on an improved filtering algorithm employed to handle the depth image. First, the mobile platform can move flexibly and its control interface is convenient. Then, because the traditional bilateral filtering algorithm is time-consuming and inefficient, a novel method called local bilateral filtering (LBF) is proposed. LBF is applied to process the depth images obtained by the Kinect sensor. The results show that noise removal is improved compared with standard bilateral filtering. Offline, the color images and the processed depth images are used to build point clouds. Finally, experimental results demonstrate that our method reduces the depth image processing time and improves the quality of the resulting point cloud.
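The classic bilateral filter the authors start from can be sketched in plain NumPy: each output pixel is a weighted average whose weights combine spatial closeness and depth similarity, so noise is smoothed while depth discontinuities survive. This is the standard formulation, not the paper's faster "local" variant, and the sigma values are illustrative:

```python
import numpy as np

def bilateral_depth(depth, radius=2, sigma_s=1.5, sigma_r=0.1):
    """Classic bilateral filter applied to a depth image."""
    h, w = depth.shape
    pad = np.pad(depth, radius, mode='edge')
    out = np.zeros((h, w))
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))  # fixed kernel
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rangew = np.exp(-(win - depth[i, j]) ** 2 / (2 * sigma_r ** 2))
            wgt = spatial * rangew          # near in space AND similar in depth
            out[i, j] = (wgt * win).sum() / wgt.sum()
    return out
```

The nested per-pixel loop is exactly the inefficiency that motivates variants like the paper's LBF; a production version would vectorize or restrict the computation.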

  4. UAV-BASED PHOTOGRAMMETRIC POINT CLOUDS – TREE STEM MAPPING IN OPEN STANDS IN COMPARISON TO TERRESTRIAL LASER SCANNER POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    A. Fritz

    2013-08-01

    Full Text Available In both ecology and forestry, there is a high demand for structural information of forest stands. Forest structures, due to their heterogeneity and density, are often difficult to assess. Hence, a variety of technologies are being applied to account for this "difficult to come by" information. Common techniques are aerial images or ground- and airborne-Lidar. In the present study we evaluate the potential use of unmanned aerial vehicles (UAVs) as a platform for tree stem detection in open stands. A flight campaign over a test site near Freiburg, Germany covering a target area of 120 × 75 [m2] was conducted. The dominant tree species of the site is oak (quercus robur) with almost no understory growth. Over 1000 images with a tilt angle of 45° were shot. The flight pattern applied consisted of two antipodal staggered flight routes at a height of 55 [m] above the ground. We used a Panasonic G3 consumer camera equipped with a 14–42 [mm] standard lens and a 16.6 megapixel sensor. The data collection took place in leaf-off state in April 2013. The area was prepared with artificial ground control points for transformation of the structure-from-motion (SFM) point cloud into real world coordinates. After processing, the results were compared with a terrestrial laser scanner (TLS) point cloud of the same area. In the 0.9 [ha] test area, 102 individual trees above 7 [cm] diameter at breast height were located in the TLS cloud. We chose the software CMVS/PMVS-2 since its algorithms are developed with a focus on dense reconstruction. The processing chain for the UAV-acquired images consists of six steps: a. cleaning the data: removing blurry, under- or overexposed and off-site images; b. applying the SIFT operator [Lowe, 2004]; c. image matching; d. bundle adjustment; e. clustering; and f. dense reconstruction. In total, 73 stems were considered as reconstructed and located within one meter of the reference trees. In general stems were far less accurate

  5. Preconcentration of traces of radionuclides and elements with foamed polyurethane sorbents in the analysis of environmental samples

    International Nuclear Information System (INIS)

    Palagyi, S.; Braun, T.

    1986-01-01

    The importance of preconcentration and the permanent need of efficient preconcentrating agents in environmental analysis are pointed out. Foamed polyurethane sorbents draw attention as novel agents in separation chemistry. A survey is presented of recent applications of unloaded and reagent-loaded open-cell type resilient polyurethane foams in the separation and preconcentration of radionuclides from environmental samples, and of the latest uses of these foams in the preconcentration and detection of traces of some, mainly inorganic materials in environmental samples, using radioanalytical techniques. Possible future uses of polyurethane foams in trace element detection in environmental analysis are outlined. (author)

  6. Multi-Class Simultaneous Adaptive Segmentation and Quality Control of Point Cloud Data

    Directory of Open Access Journals (Sweden)

    Ayman Habib

    2016-01-01

    Full Text Available 3D modeling of a given site is an important activity for a wide range of applications including urban planning, as-built mapping of industrial sites, heritage documentation, military simulation, and outdoor/indoor analysis of airflow. Point clouds, which could be either derived from passive or active imaging systems, are an important source for 3D modeling. Such point clouds need to undergo a sequence of data processing steps to derive the necessary information for the 3D modeling process. Segmentation is usually the first step in the data processing chain. This paper presents a region-growing multi-class simultaneous segmentation procedure, where planar, pole-like, and rough regions are identified while considering the internal characteristics (i.e., local point density/spacing and noise level) of the point cloud in question. The segmentation starts with point cloud organization into a kd-tree data structure and characterization process to estimate the local point density/spacing. Then, proceeding from randomly-distributed seed points, a set of seed regions is derived through distance-based region growing, which is followed by modeling of such seed regions into planar and pole-like features. Starting from optimally-selected seed regions, planar and pole-like features are then segmented. The paper also introduces a list of hypothesized artifacts/problems that might take place during the region-growing process. Finally, a quality control process is devised to detect, quantify, and mitigate instances of partially/fully misclassified planar and pole-like features. Experimental results from airborne and terrestrial laser scanning as well as image-based point clouds are presented to illustrate the performance of the proposed segmentation and quality control framework.

  7. A portable low-cost 3D point cloud acquiring method based on structure light

    Science.gov (United States)

    Gui, Li; Zheng, Shunyi; Huang, Xia; Zhao, Like; Ma, Hao; Ge, Chao; Tang, Qiuxia

    2018-03-01

    A fast and low-cost method of acquiring 3D point cloud data is proposed in this paper, which can solve the problems of lack of texture information and low efficiency of acquiring point cloud data with only one pair of cheap cameras and projector. Firstly, we put forward a scene adaptive design method of random encoding pattern, that is, a coding pattern is projected onto the target surface in order to form texture information, which is favorable for image matching. Subsequently, we design an efficient dense matching algorithm that fits the projected texture. After the optimization of global algorithm and multi-kernel parallel development with the fusion of hardware and software, a fast acquisition system of point-cloud data is accomplished. Through the evaluation of point cloud accuracy, the results show that point cloud acquired by the method proposed in this paper has higher precision. What`s more, the scanning speed meets the demand of dynamic occasion and has better practical application value.

  8. A column-store meets the point clouds

    NARCIS (Netherlands)

    Martinez-Rubi, O.; Kersten, M.L.; Goncalves, R.; Ivanova, M.

    2014-01-01

    Column stores have become the de-facto standard for most datawarehouse solutions. The regularity and query patterns of LIDAR data are a potential application area well suited for this technology. In this short work in progress paper we report on our experiences in supporting point cloud data using

  9. Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method.

    Science.gov (United States)

    Shen, Yueqian; Lindenbergh, Roderik; Wang, Jinhu

    2016-12-24

    A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system, with the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. As feature points, either targets or virtual points corresponding to some reconstructable feature in the scene are used. The new method is implemented on two scans sampling a masonry laboratory building before and after seismic testing, which resulted in damage on the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud-to-cloud change analysis, demonstrating the potential of the new method for structural analysis.
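    The baseline idea lends itself to a compact numeric illustration: distances between the same pairs of feature points are compared across two scans expressed in unrelated coordinate systems, so no registration is needed. All coordinates below are invented:

```python
# Sketch of baseline comparison between two unregistered scans.
import numpy as np

def baselines(feature_pts):
    """All pairwise distances between feature points within one scan."""
    n = len(feature_pts)
    return {(i, j): float(np.linalg.norm(feature_pts[i] - feature_pts[j]))
            for i in range(n) for j in range(i + 1, n)}

scan_a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 2.00, 0.0]])
# same scene in an unrelated coordinate system, third point displaced 5 cm:
scan_b = np.array([[10.0, 0.0, 0.0], [11.0, 0.0, 0.0], [10.0, 2.05, 0.0]])

for pair, d_a in baselines(scan_a).items():
    # a nonzero difference flags a change between the epochs
    print(pair, f"{baselines(scan_b)[pair] - d_a:+.3f} m")
```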

  10. Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method

    Directory of Open Access Journals (Sweden)

    Yueqian Shen

    2016-12-01

    Full Text Available A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system, with the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. As feature points, either targets or virtual points corresponding to some reconstructable feature in the scene are used. The new method is implemented on two scans sampling a masonry laboratory building before and after seismic testing, which resulted in damage on the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud-to-cloud change analysis, demonstrating the potential of the new method for structural analysis.

  11. Towards 4d Virtual City Reconstruction from LIDAR Point Cloud Sequences

    Science.gov (United States)

    Józsa, O.; Börcs, A.; Benedek, C.

    2013-05-01

    In this paper we propose a joint approach on virtual city reconstruction and dynamic scene analysis based on point cloud sequences of a single car-mounted Rotating Multi-Beam (RMB) Lidar sensor. The aim of the addressed work is to create 4D spatio-temporal models of large dynamic urban scenes containing various moving and static objects. Standalone RMB Lidar devices have been frequently applied in robot navigation tasks and proved to be efficient in moving object detection and recognition. However, they have not been widely exploited yet for geometric approximation of ground surfaces and building facades due to the sparseness and inhomogeneous density of the individual point cloud scans. In our approach we propose an automatic registration method of the consecutive scans without any additional sensor information such as IMU, and introduce a process for simultaneously extracting reconstructed surfaces, motion information and objects from the registered dense point cloud completed with point time stamp information.

  12. Fast calculation method of computer-generated hologram using a depth camera with point cloud gridding

    Science.gov (United States)

    Zhao, Yu; Shi, Chen-Xiao; Kwon, Ki-Chul; Piao, Yan-Ling; Piao, Mei-Lan; Kim, Nam

    2018-03-01

    We propose a fast calculation method for a computer-generated hologram (CGH) of real objects that uses a point cloud gridding method. The depth information of the scene is acquired using a depth camera and the point cloud model is reconstructed virtually. Because each point of the point cloud is distributed precisely to the exact coordinates of each layer, each point of the point cloud can be classified into grids according to its depth. A diffraction calculation is performed on the grids using a fast Fourier transform (FFT) to obtain a CGH. The computational complexity is reduced dramatically in comparison with conventional methods. The feasibility of the proposed method was confirmed by numerical and optical experiments.
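    A rough sketch of the gridding idea follows: points are binned into depth layers, each layer becomes a 2D grid, and one FFT-based propagation step is applied per layer instead of per point. The angular-spectrum-style kernel and all parameter values here are schematic assumptions, not the authors' implementation:

```python
# Point-cloud-gridding sketch: per-depth-layer FFT diffraction.
import numpy as np

N, wavelength, pitch = 256, 532e-9, 8e-6   # illustrative hologram parameters
k = 2 * np.pi / wavelength

def layer_hologram(points, depths):
    """points: rows of (ix, iy, z); depths: the distinct layer z values."""
    hologram = np.zeros((N, N), dtype=complex)
    fx = np.fft.fftfreq(N, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    for z in depths:
        grid = np.zeros((N, N))
        for ix, iy, pz in points:
            if np.isclose(pz, z):          # classify the point into this depth layer
                grid[int(iy), int(ix)] = 1.0
        # one FFT propagates the whole layer at once instead of point by point
        kz = np.sqrt(np.maximum(k**2 - (2*np.pi*FX)**2 - (2*np.pi*FY)**2, 0.0))
        hologram += np.fft.ifft2(np.fft.fft2(grid) * np.exp(1j * z * kz))
    return hologram

pts = np.array([[64, 64, 0.01], [128, 128, 0.02]])
cgh = layer_hologram(pts, depths=[0.01, 0.02])
print(cgh.shape)  # (256, 256)
```

    The complexity gain comes from the per-layer FFT: the cost depends on the number of occupied depth layers rather than on the number of points.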

  13. Optimization of cloud point extraction and solid phase extraction methods for speciation of arsenic in natural water using multivariate technique.

    Science.gov (United States)

    Baig, Jameel A; Kazi, Tasneem G; Shah, Abdul Q; Arain, Mohammad B; Afridi, Hassan I; Kandhro, Ghulam A; Khan, Sumaira

    2009-09-28

    The simple and rapid pre-concentration techniques viz. cloud point extraction (CPE) and solid phase extraction (SPE) were applied for the determination of As(3+) and total inorganic arsenic (iAs) in surface and ground water samples. As(3+) formed a complex with ammonium pyrrolidinedithiocarbamate (APDC) and was extracted into the surfactant-rich phase of the non-ionic surfactant Triton X-114; after centrifugation, the surfactant-rich phase was diluted with 0.1 mol L(-1) HNO(3) in methanol. Total iAs in water samples was adsorbed on titanium dioxide (TiO(2)); after centrifugation, the solid phase was prepared as a slurry for determination. The extracted As species were determined by electrothermal atomic absorption spectrometry. A multivariate strategy was applied to estimate the optimum values of the experimental factors for the recovery of As(3+) and total iAs by CPE and SPE. The standard addition method was used to validate the optimized methods. The obtained results showed sufficient recoveries for As(3+) and iAs (>98.0%). The concentration factor in both cases was found to be 40.

  14. a Gross Error Elimination Method for Point Cloud Data Based on Kd-Tree

    Science.gov (United States)

    Kang, Q.; Huang, G.; Yang, S.

    2018-04-01

    Point cloud data has become one of the most widely used data sources in the field of remote sensing. Key steps of point cloud pre-processing focus on gross error elimination and quality control. Owing to the volume of point cloud data, existing gross error elimination methods consume massive amounts of memory and time. This paper employs a new method that uses a Kd-tree to organize the data and a k-nearest neighbor search, with an appropriately chosen threshold, to judge whether a target point is an outlier. Experimental results show that the proposed algorithm deletes gross errors in point cloud data while decreasing memory consumption and improving efficiency.
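    A compact statistical version of this Kd-tree/k-nearest-neighbour scheme might look as follows; the mean-plus-two-sigma threshold rule is a common convention and an assumption here, not necessarily the paper's exact criterion:

```python
# kNN-based gross error (outlier) elimination sketch using a kd-tree.
import numpy as np
from scipy.spatial import cKDTree

def remove_gross_errors(points, k=4, n_sigma=2.0):
    """Drop points whose mean distance to their k nearest neighbours is extreme."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)      # first neighbour is the point itself
    mean_d = dists[:, 1:].mean(axis=1)
    keep = mean_d <= mean_d.mean() + n_sigma * mean_d.std()
    return points[keep], np.flatnonzero(~keep)

rng = np.random.default_rng(0)
cloud = rng.normal(size=(200, 3))
cloud = np.vstack([cloud, [[50.0, 50.0, 50.0]]])  # one obvious gross error
clean, outliers = remove_gross_errors(cloud)
print(outliers)  # → [200]
```

    Building the tree once makes each neighbour query logarithmic in the number of points, which is where the memory/time savings over naive all-pairs checks come from.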

  15. A GROSS ERROR ELIMINATION METHOD FOR POINT CLOUD DATA BASED ON KD-TREE

    Directory of Open Access Journals (Sweden)

    Q. Kang

    2018-04-01

    Full Text Available Point cloud data has become one of the most widely used data sources in the field of remote sensing. Key steps of point cloud pre-processing focus on gross error elimination and quality control. Owing to the volume of point cloud data, existing gross error elimination methods consume massive amounts of memory and time. This paper employs a new method that uses a Kd-tree to organize the data and a k-nearest neighbor search, with an appropriately chosen threshold, to judge whether a target point is an outlier. Experimental results show that the proposed algorithm deletes gross errors in point cloud data while decreasing memory consumption and improving efficiency.

  16. SEMANTIC3D.NET: a New Large-Scale Point Cloud Classification Benchmark

    Science.gov (United States)

    Hackel, T.; Savinov, N.; Ladicky, L.; Wegner, J. D.; Schindler, K.; Pollefeys, M.

    2017-05-01

    This paper presents a new 3D point cloud classification benchmark data set with over four billion manually labelled points, meant as input for data-hungry (deep) learning methods. We also discuss first submissions to the benchmark that use deep convolutional neural networks (CNNs) as a workhorse, which already show remarkable performance improvements over the state of the art. CNNs have become the de-facto standard for many tasks in computer vision and machine learning, like semantic segmentation or object detection in images, but have not yet led to a true breakthrough for 3D point cloud labelling tasks due to the lack of training data. With the massive data set presented in this paper, we aim at closing this data gap to help unleash the full potential of deep learning methods for 3D labelling tasks. Our semantic3D.net data set consists of dense point clouds acquired with static terrestrial laser scanners. It contains 8 semantic classes and covers a wide range of urban outdoor scenes: churches, streets, railroad tracks, squares, villages, soccer fields and castles. We describe our labelling interface and show that our data set provides more dense and complete point clouds with a much higher overall number of labelled points compared to those already available to the research community. We further provide baseline method descriptions and comparisons between methods submitted to our online system. We hope semantic3D.net will pave the way for deep learning methods in 3D point cloud labelling to learn richer, more general 3D representations, and first submissions after only a few months indicate that this might indeed be the case.

  17. Object-Based Coregistration of Terrestrial Photogrammetric and ALS Point Clouds in Forested Areas

    Science.gov (United States)

    Polewski, P.; Erickson, A.; Yao, W.; Coops, N.; Krzystek, P.; Stilla, U.

    2016-06-01

    Airborne Laser Scanning (ALS) and terrestrial photogrammetry are methods applicable for mapping forested environments. While ground-based techniques provide valuable information about the forest understory, the measured point clouds are normally expressed in a local coordinate system, whose transformation into a georeferenced system requires additional effort. In contrast, ALS point clouds are usually georeferenced, yet the point density near the ground may be poor under dense overstory conditions. In this work, we propose to combine the strengths of the two data sources by co-registering the respective point clouds, thus enriching the georeferenced ALS point cloud with detailed understory information in a fully automatic manner. Due to markedly different sensor characteristics, coregistration methods which expect a high geometric similarity between keypoints are not suitable in this setting. Instead, our method focuses on the object (tree stem) level. We first calculate approximate stem positions in the terrestrial and ALS point clouds and construct, for each stem, a descriptor which quantifies the 2D and vertical distances to other stem centers (at ground height). Then, the similarities between all descriptor pairs from the two point clouds are calculated, and standard graph maximum matching techniques are employed to compute corresponding stem pairs (tiepoints). Finally, the tiepoint subset yielding the optimal rigid transformation between the terrestrial and ALS coordinate systems is determined. We test our method on simulated tree positions and a plot situated in the northern interior of the Coast Range in western Oregon, USA, using ALS data (76 x 121 m2) and a photogrammetric point cloud (33 x 35 m2) derived from terrestrial photographs taken with a handheld camera. Results on both simulated and real data show that the proposed stem descriptors are discriminative enough to derive good correspondences. Specifically, for the real plot data, 24

  18. Mapping with Small UAS: A Point Cloud Accuracy Assessment

    Science.gov (United States)

    Toth, Charles; Jozkow, Grzegorz; Grejner-Brzezinska, Dorota

    2015-12-01

    Interest in using inexpensive Unmanned Aerial System (UAS) technology for topographic mapping has recently significantly increased. Small UAS platforms equipped with consumer grade cameras can easily acquire high-resolution aerial imagery allowing for dense point cloud generation, followed by surface model creation and orthophoto production. In contrast to conventional airborne mapping systems, UAS has limited ground coverage due to low flying height and limited flying time, yet it offers an attractive alternative to high performance airborne systems, as the cost of the sensors and platform, and the flight logistics, is relatively low. In addition, UAS is better suited for small area data acquisitions and to acquire data in difficult to access areas, such as urban canyons or densely built-up environments. The main question with respect to the use of UAS is whether the inexpensive consumer sensors installed in UAS platforms can provide the geospatial data quality comparable to that provided by conventional systems. This study aims at the performance evaluation of the current practice of UAS-based topographic mapping by reviewing the practical aspects of sensor configuration, georeferencing and point cloud generation, including comparisons between sensor types and processing tools. The main objective is to provide accuracy characterization and practical information for selecting and using UAS solutions in general mapping applications. The analysis is based on statistical evaluation as well as visual examination of experimental data acquired by a Bergen octocopter with three different image sensor configurations, including a GoPro HERO3+ Black Edition, a Nikon D800 DSLR and a Velodyne HDL-32. In addition, georeferencing data of varying quality were acquired and evaluated. The optical imagery was processed by using three commercial point cloud generation tools. 
Comparing point clouds created by active and passive sensors by using different quality sensors, and finally

  19. AN ADAPTIVE APPROACH FOR SEGMENTATION OF 3D LASER POINT CLOUD

    Directory of Open Access Journals (Sweden)

    Z. Lari

    2012-09-01

    Full Text Available Automatic processing and object extraction from 3D laser point cloud is one of the major research topics in the field of photogrammetry. Segmentation is an essential step in the processing of laser point cloud, and the quality of extracted objects from laser data is highly dependent on the validity of the segmentation results. This paper presents a new approach for reliable and efficient segmentation of planar patches from a 3D laser point cloud. In this method, the neighbourhood of each point is firstly established using an adaptive cylinder while considering the local point density and surface trend. This neighbourhood definition has a major effect on the computational accuracy of the segmentation attributes. In order to efficiently cluster planar surfaces and prevent introducing ambiguities, the coordinates of the origin's projection on each point's best fitted plane are used as the clustering attributes. Then, an octree space partitioning method is utilized to detect and extract peaks from the attribute space. Each detected peak represents a specific cluster of points which are located on a distinct planar surface in the object space. Experimental results show the potential and feasibility of applying this method for segmentation of both airborne and terrestrial laser data.
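    The clustering attribute described above, the projection of the origin onto each point's best-fitted plane, can be sketched with a PCA-style plane fit. The helper below is illustrative, not the authors' code:

```python
# Origin-projection attribute for planar-patch clustering: coplanar points
# share (nearly) the same attribute, so peaks in attribute space correspond
# to distinct planes in object space.
import numpy as np

def origin_projection(neighbourhood):
    """Project the coordinate origin onto the best-fit plane of a point neighbourhood."""
    centroid = neighbourhood.mean(axis=0)
    _, _, vt = np.linalg.svd(neighbourhood - centroid)   # PCA plane fit
    normal = vt[-1]                                      # direction of least variance
    d = float(np.dot(normal, centroid))                  # plane: normal . x = d
    return d * normal                                    # foot of the perpendicular

# two patches of the same plane z = 2 map to (almost) the same attribute:
patch_a = np.array([[0.0, 0, 2], [1, 0, 2], [0, 1, 2], [1, 1, 2]])
patch_b = patch_a + np.array([10.0, 10.0, 0.0])
print(origin_projection(patch_a))  # ≈ [0, 0, 2]
print(origin_projection(patch_b))  # ≈ [0, 0, 2]
```

    Using this 3D point rather than raw plane parameters avoids the ambiguities of angle-based plane representations when clustering in attribute space.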

  20. Automatic Registration of Vehicle-borne Mobile Mapping Laser Point Cloud and Sequent Panoramas

    Directory of Open Access Journals (Sweden)

    CHEN Chi

    2018-02-01

    Full Text Available An automatic registration method for mobile mapping system laser point clouds and sequence panoramic images is proposed in this paper. Firstly, a hierarchical object extraction method is applied to the LiDAR data to extract the building façades, and outline polygons are generated to construct the skyline vectors. A virtual imaging method is proposed to resolve the distortion on panoramas, and corners on skylines are further detected on the virtual images by combining segmentation and corner detection results. Secondly, the detected skyline vectors are taken as the registration primitives. Registration graphs are built according to the extracted skyline vectors and matched under a graph edit distance minimization criterion. The matched conjugate primitives are utilized to solve the 2D-3D rough registration model to obtain the initial transformation between the sequence panoramic image coordinate system and the LiDAR point cloud coordinate system. Finally, to reduce the impact of registration primitive extraction and matching errors on the registration results, the optimal transformation between the multi-view stereo matching dense point cloud generated from the virtual imaging of the sequence panoramas and the LiDAR point cloud is solved by a 3D-3D ICP registration algorithm variant, thus refining the exterior orientation parameters of the panoramas indirectly. Experiments are undertaken to validate the proposed method and the results show that 1.5 pixel level registration results are achieved on the experiment dataset. The registration results can be applied to point cloud and panorama fusion applications such as true color point cloud generation.

  1. Simultaneous colour visualizations of multiple ALS point cloud attributes for land cover and vegetation analysis

    Science.gov (United States)

    Zlinszky, András; Schroiff, Anke; Otepka, Johannes; Mandlburger, Gottfried; Pfeifer, Norbert

    2014-05-01

    LIDAR point clouds hold valuable information for land cover and vegetation analysis, not only in the spatial distribution of the points but also in their various attributes. However, LIDAR point clouds are rarely used for visual interpretation, since for most users, the point cloud is difficult to interpret compared to passive optical imagery. Meanwhile, point cloud viewing software is available allowing interactive 3D interpretation, but typically only one attribute at a time. This results in a large number of points with the same colour, crowding the scene and often obscuring detail. We developed a scheme for mapping information from multiple LIDAR point attributes to the Red, Green, and Blue channels of a widely used LIDAR data format, which are otherwise mostly used to add information from imagery to create "photorealistic" point clouds. The possible combinations of parameters are therefore represented in a wide range of colours, but relative differences in individual parameter values of points can be well understood. The visualization was implemented in OPALS software, using a simple and robust batch script, and is viewer independent since the information is stored in the point cloud data file itself. In our case, the following colour channel assignment delivered best results: Echo amplitude in the Red, echo width in the Green and normalized height above a Digital Terrain Model in the Blue channel. With correct parameter scaling (but completely without point classification), points belonging to asphalt and bare soil are dark red, low grassland and crop vegetation are bright red to yellow, shrubs and low trees are green and high trees are blue. Depending on roof material and DTM quality, buildings are shown from red through purple to dark blue. Erroneously high or low points, or points with incorrect amplitude or echo width usually have colours contrasting from terrain or vegetation. This allows efficient visual interpretation of the point cloud in planar
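    The colour-channel assignment described above can be reproduced with a few lines of scaling code; the value ranges below are illustrative placeholders, not the calibrated scalings used by the authors:

```python
# Map three LIDAR point attributes onto RGB: amplitude -> Red,
# echo width -> Green, normalized height above DTM -> Blue.
import numpy as np

def attributes_to_rgb(amplitude, echo_width, height,
                      amp_range=(0, 300), width_range=(1, 10), h_range=(0, 30)):
    """Linearly scale each attribute into [0, 1] and stack as RGB."""
    def scale(x, lo, hi):
        return np.clip((np.asarray(x, float) - lo) / (hi - lo), 0.0, 1.0)
    return np.stack([scale(amplitude, *amp_range),
                     scale(echo_width, *width_range),
                     scale(height, *h_range)], axis=-1)

# a terrain point (strong, narrow echo at ground level) vs. a canopy point
# (weaker, wider echo well above the terrain model):
rgb = attributes_to_rgb([250.0, 80.0], [2.0, 8.0], [0.0, 25.0])
print(np.round(rgb, 2))
```

    Because the mapping is stored in the point cloud file's RGB fields, any standard viewer displays the combined attribute information without needing the original software.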

  2. Outdoor Illegal Construction Identification Algorithm Based on 3D Point Cloud Segmentation

    Science.gov (United States)

    An, Lu; Guo, Baolong

    2018-03-01

    Recently, illegal constructions have been occurring frequently in our surroundings, seriously restricting the orderly development of urban modernization. 3D point cloud data technology can be used to identify illegal buildings, addressing this problem effectively. This paper proposes an outdoor illegal construction identification algorithm based on 3D point cloud segmentation. Initially, in order to save memory space and reduce processing time, a lossless point cloud compression method based on a minimum spanning tree is proposed. Then, a ground point removal method based on multi-scale filtering is introduced to increase accuracy. Finally, building clusters on the ground are obtained using a region growing method, and as a result, illegal constructions can be marked. The effectiveness of the proposed algorithm is verified using a public data set collected from the International Society for Photogrammetry and Remote Sensing (ISPRS).

  3. Automatic Generation of Structural Building Descriptions from 3D Point Cloud Scans

    DEFF Research Database (Denmark)

    Ochmann, Sebastian; Vock, Richard; Wessel, Raoul

    2013-01-01

    We present a new method for automatic semantic structuring of 3D point clouds representing buildings. In contrast to existing approaches which either target the outside appearance like the facade structure or rather low-level geometric structures, we focus on the building's interior using indoor scans to derive high-level architectural entities like rooms and doors. Starting with a registered 3D point cloud, we probabilistically model the affiliation of each measured point to a certain room in the building. We solve the resulting clustering problem using an iterative algorithm that relies on the estimated visibilities between any two locations within the point cloud. With the segmentation into rooms at hand, we subsequently determine the locations and extents of doors between adjacent rooms. In our experiments, we demonstrate the feasibility of our method by applying it to synthetic as well...

  4. Study on Huizhou architecture of point cloud registration based on optimized ICP algorithm

    Science.gov (United States)

    Zhang, Runmei; Wu, Yulu; Zhang, Guangbin; Zhou, Wei; Tao, Yuqian

    2018-03-01

    In view of the facts that current point cloud registration software has high hardware requirements and a heavy workload, requires multiple interactive definitions, and that the source code of software with better processing results is not open, a two-step registration method based on normal vector distribution features and a coarse-feature-based iterative closest point (ICP) algorithm is proposed in this paper. The method combines the fast point feature histogram (FPFH) algorithm, defines the adjacency region of the point cloud and a calculation model for the distribution of normal vectors, sets up a local coordinate system for each key point, and obtains the transformation matrix to finish rough registration; the rough registration results of the two stations are then accurately registered using the ICP algorithm. Experimental results show that, compared with the traditional ICP algorithm, the method used in this paper has obvious time and precision advantages for large point clouds.
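    For reference, the ICP refinement stage on which such two-step methods rely can be sketched compactly. The snippet below is a generic point-to-point ICP (kd-tree correspondences plus an SVD-based rigid update), assuming a rough pre-alignment such as the FPFH-based coarse step provides; it is not the paper's implementation:

```python
# Minimal point-to-point ICP: nearest-neighbour matching + SVD rigid update.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=20):
    """Iteratively align `source` onto `target`; both are (n, 3) arrays."""
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iters):
        _, idx = tree.query(src)                 # current correspondences
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_t))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                 # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        src = (src - mu_s) @ R.T + mu_t          # apply the rigid update
    return src

# a well-separated grid as target, a slightly misaligned copy as source:
target = np.stack(np.meshgrid(np.arange(5.0), np.arange(5.0), np.arange(4.0)),
                  axis=-1).reshape(-1, 3)
theta = 0.05                                     # small rotation about z
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
source = target @ Rz.T + np.array([0.05, -0.02, 0.03])
aligned = icp(source, target)
print(np.abs(aligned - target).max())  # ≈ 0
```

    ICP converges only from a reasonable initial guess, which is exactly why coarse feature-based registration is performed first.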

  5. Using a space filling curve approach for the management of dynamic point clouds

    NARCIS (Netherlands)

    Psomadaki, S; van Oosterom, P.J.M.; Tijssen, T.P.M.; Baart, F.

    2016-01-01

    Point cloud usage has increased over the years. The development of low-cost sensors makes it now possible to acquire frequent point cloud measurements on a short time period (day, hour, second). Based on the requirements coming from the coastal monitoring domain, we have developed, implemented and

  6. Point Cloud Based Relative Pose Estimation of a Satellite in Close Range

    Directory of Open Access Journals (Sweden)

    Lujiang Liu

    2016-06-01

    Full Text Available Determination of the relative pose of satellites is essential in space rendezvous operations and on-orbit servicing missions. The key problems are the adoption of a suitable sensor on board the chaser and efficient techniques for pose estimation. This paper aims to estimate the pose of a target satellite in close range, on the basis of its known model, by using point cloud data generated by a flash LIDAR sensor. A novel model-based pose estimation method is proposed; it includes a fast and reliable initial pose acquisition method based on global optimal searching that processes the dense point cloud data directly, and a pose tracking method based on the Iterative Closest Point algorithm. A simulation system is also presented in order to evaluate the performance of the sensor and generate simulated sensor point cloud data. It also provides the true pose of the test target so that the pose estimation error can be quantified. To investigate the effectiveness of the proposed approach and the achievable pose accuracy, numerical simulation experiments are performed; results demonstrate the algorithm's capability of operating directly on point clouds and handling large pose variations. A field testing experiment is also conducted, and the results show that the proposed method is effective.

  7. Biological preconcentrator

    Science.gov (United States)

    Manginell, Ronald P [Albuquerque, NM; Bunker, Bruce C [Albuquerque, NM; Huber, Dale L [Albuquerque, NM

    2008-09-09

    A biological preconcentrator comprises a stimulus-responsive active film on a stimulus-producing microfabricated platform. The active film can comprise a thermally switchable polymer film that can be used to selectively absorb and desorb proteins from a protein mixture. The biological microfabricated platform can comprise a thin membrane suspended on a substrate with an integral resistive heater and/or thermoelectric cooler for thermal switching of the active polymer film disposed on the membrane. The active polymer film can comprise hydrogel-like polymers, such as poly(ethylene oxide) or poly(n-isopropylacrylamide), that are tethered to the membrane. The biological preconcentrator can be fabricated with semiconductor materials and technologies.

  8. GPU-Based Point Cloud Superpositioning for Structural Comparisons of Protein Binding Sites.

    Science.gov (United States)

    Leinweber, Matthias; Fober, Thomas; Freisleben, Bernd

    2018-01-01

    In this paper, we present a novel approach to solve the labeled point cloud superpositioning problem for performing structural comparisons of protein binding sites. The solution is based on a parallel evolution strategy that operates on large populations and runs on GPU hardware. The proposed evolution strategy reduces the likelihood of getting stuck in a local optimum of the multimodal real-valued optimization problem represented by labeled point cloud superpositioning. The performance of the GPU-based parallel evolution strategy is compared to a previously proposed CPU-based sequential approach for labeled point cloud superpositioning, indicating that the GPU-based parallel evolution strategy leads to qualitatively better results and significantly shorter runtimes, with speed improvements of up to a factor of 1,500 for large populations. Binary classification tests based on the ATP, NADH, and FAD protein subsets of CavBase, a database containing putative binding sites, show average classification rate improvements from about 92 percent (CPU) to 96 percent (GPU). Further experiments indicate that the proposed GPU-based labeled point cloud superpositioning approach can be superior to traditional protein comparison approaches based on sequence alignments.
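    To make the optimization formulation concrete, here is a toy (1+λ) evolution strategy for rigid 2D point-set superpositioning. The paper's method is a large-population, GPU-parallel ES on labelled 3D clouds, so this scaled-down sketch only illustrates the idea:

```python
# Toy evolution strategy minimizing the superposition residual over
# (rotation angle, tx, ty). Population size and annealing schedule are
# arbitrary choices for the sketch.
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(30, 2))                    # "binding site" points (labels omitted)
theta_true = 0.7
R = np.array([[np.cos(theta_true), -np.sin(theta_true)],
              [np.sin(theta_true),  np.cos(theta_true)]])
B = A @ R.T + np.array([1.0, -2.0])             # the same cloud, rigidly transformed

def cost(params):
    """Sum of squared residuals of A under (angle, tx, ty) against B."""
    t, tx, ty = params
    Rp = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    return float(np.sum((A @ Rp.T + np.array([tx, ty]) - B) ** 2))

parent, sigma = np.zeros(3), 0.5
for _ in range(400):                            # greedy (1+20)-ES with annealed step size
    offspring = parent + sigma * rng.normal(size=(20, 3))
    best = min(offspring, key=cost)
    if cost(best) < cost(parent):
        parent = best
    sigma *= 0.99
print(round(cost(parent), 3))
```

    Because every offspring is evaluated independently, this kind of search parallelizes naturally, which is what the GPU implementation exploits at much larger population sizes.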

  9. 3D Point Cloud Reconstruction from Single Plenoptic Image

    Directory of Open Access Journals (Sweden)

    F. Murgia

    2016-06-01

    Full Text Available Novel plenoptic cameras sample the light field crossing the main camera lens. The information available in a plenoptic image must be processed in order to create the depth map of the scene from a single camera shot. In this paper, a novel algorithm for the reconstruction of the 3D point cloud of a scene from a single plenoptic image, taken with a consumer plenoptic camera, is proposed. Experimental analysis is conducted on several test images, and results are compared with state-of-the-art methodologies. The results are very promising, as the quality of the 3D point cloud from the plenoptic image is comparable with the quality obtained with current non-plenoptic methodologies that require more than one image.

  10. Design, implementation and evaluation of a point cloud codec for Tele-Immersive Video

    NARCIS (Netherlands)

    R.N. Mekuria (Rufael); C.L. Blom (Kees); P.S. Cesar Garcia (Pablo Santiago)

    2017-01-01

    htmlabstractwe present a generic and real-time time-varying point cloud codec for 3D immersive video. This codec is suitable for mixed reality applications where 3D point clouds are acquired at a fast rate. In this codec, intra frames are coded progressively in an octree subdivision. To further

  11. Towards 3D Matching of Point Clouds Derived from Oblique and Nadir Airborne Imagery

    Science.gov (United States)

    Zhang, Ming

    Because of the low-expense high-efficient image collection process and the rich 3D and texture information presented in the images, a combined use of 2D airborne nadir and oblique images to reconstruct 3D geometric scene has a promising market for future commercial usage like urban planning or first responders. The methodology introduced in this thesis provides a feasible way towards fully automated 3D city modeling from oblique and nadir airborne imagery. In this thesis, the difficulty of matching 2D images with large disparity is avoided by grouping the images first and applying the 3D registration afterward. The procedure starts with the extraction of point clouds using a modified version of the RIT 3D Extraction Workflow. Then the point clouds are refined by noise removal and surface smoothing processes. Since the point clouds extracted from different image groups use independent coordinate systems, there are translation, rotation and scale differences existing. To figure out these differences, 3D keypoints and their features are extracted. For each pair of point clouds, an initial alignment and a more accurate registration are applied in succession. The final transform matrix presents the parameters describing the translation, rotation and scale requirements. The methodology presented in the thesis has been shown to behave well for test data. The robustness of this method is discussed by adding artificial noise to the test data. For Pictometry oblique aerial imagery, the initial alignment provides a rough alignment result, which contains a larger offset compared to that of test data because of the low quality of the point clouds themselves, but it can be further refined through the final optimization. The accuracy of the final registration result is evaluated by comparing it to the result obtained from manual selection of matched points. 
Using the method introduced, point clouds extracted from different image groups could be combined with each other to build a
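The pairwise registration step described above must recover a translation, rotation and scale between point-cloud coordinate systems. As a minimal sketch (not the thesis's actual implementation), the closed-form Umeyama solution estimates such a similarity transform from matched 3D keypoints; the function name and conventions below are illustrative:

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate scale s, rotation R, translation t such that dst ~ s*R*src + t.

    src, dst: (N, 3) arrays of corresponding 3D points.
    Closed-form (Umeyama) solution via SVD of the cross-covariance matrix.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    sc, dc = src - mu_s, dst - mu_d
    cov = dc.T @ sc / len(src)                 # cross-covariance of centered sets
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))         # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    var_s = (sc ** 2).sum() / len(src)         # variance of the source cloud
    s = np.trace(np.diag(S) @ D) / var_s
    t = mu_d - s * R @ mu_s
    return s, R, t
```

In a pipeline like the one described, such a closed-form estimate from keypoint matches would serve as the initial alignment, to be refined by an iterative registration such as ICP.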

  12. Determination of rhodium in metallic alloy and water samples using cloud point extraction coupled with spectrophotometric technique

    Science.gov (United States)

    Kassem, Mohammed A.; Amin, Alaa S.

    2015-02-01

    A new method for estimating rhodium in different samples at trace levels has been developed. Rhodium was complexed with 5-(4′-nitro-2′,6′-dichlorophenylazo)-6-hydroxypyrimidine-2,4-dione (NDPHPD) in an aqueous medium and preconcentrated by cloud point extraction with the nonionic surfactant Triton X-114, which extracts the rhodium complex from aqueous solutions at pH 4.75. After phase separation at 50 °C, the surfactant-rich phase was decanted, heated at 100 °C to remove residual water, and the remaining phase was dissolved in 0.5 mL of acetonitrile. Under optimum conditions, the calibration curve was linear over the concentration range 0.5-75 ng mL-1 and the detection limit was 0.15 ng mL-1 of the original solution. An enhancement factor of 500 was achieved for 250 mL samples containing the analyte, and relative standard deviations were ≤1.50%. The method was found to be highly selective, sensitive, simple, rapid and economical, and was safely applied to rhodium determination in complex materials such as synthetic alloy mixtures and environmental water samples.
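The calibration figures quoted above (linear range, detection limit) follow the standard least-squares treatment of a calibration curve. The sketch below uses entirely hypothetical absorbance data, not the paper's measurements, together with the common 3·s/slope detection-limit estimate:

```python
import numpy as np

# Hypothetical calibration data: concentration (ng/mL) vs. absorbance.
conc = np.array([0.5, 5.0, 15.0, 30.0, 50.0, 75.0])
absorbance = np.array([0.004, 0.041, 0.120, 0.242, 0.401, 0.603])

slope, intercept = np.polyfit(conc, absorbance, 1)
pred = slope * conc + intercept
# Residual standard deviation with n - 2 degrees of freedom.
s_res = np.sqrt(((absorbance - pred) ** 2).sum() / (len(conc) - 2))

# A common detection-limit estimate: 3 x residual std dev / slope.
lod = 3 * s_res / slope
print(f"slope={slope:.5f}, LOD ~ {lod:.2f} ng/mL")
```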

  13. Registration of vehicle based panoramic image and LiDAR point cloud

    Science.gov (United States)

    Chen, Changjun; Cao, Liang; Xie, Hong; Zhuo, Xiangyu

    2013-10-01

    Higher-quality surface information can be obtained when data from optical images and LiDAR are integrated, since each source has unique characteristics that make it preferable in different applications. However, most previous work focuses on the registration of pinhole perspective cameras to 2D or 3D LiDAR data. In this paper, a method for the registration of a vehicle-based panoramic image and a LiDAR point cloud is proposed. Using the transformations among the panoramic image, the single CCD images, the laser scanners and the Position and Orientation System (POS), along with the GPS/IMU data, precise co-registration between the panoramic image and the LiDAR point cloud in the world system is achieved. Results are presented on a real-world data set collected by a newly developed Mobile Mapping System (MMS) integrating a high-resolution panoramic camera, two laser scanners and a POS.
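Co-registration ultimately maps a 3D point expressed in the panoramic camera frame to panorama pixel coordinates. A minimal sketch under an assumed ideal equirectangular camera model (the paper's actual calibration chain through the CCD images and POS is more involved):

```python
import math

def project_equirectangular(x, y, z, width, height):
    """Map a 3D point in the panoramic camera frame to pixel coordinates.

    Assumes an ideal equirectangular panorama: longitude maps to columns,
    latitude to rows. Axis convention (an assumption): x forward, y left, z up.
    """
    lon = math.atan2(y, x)                    # [-pi, pi], 0 = straight ahead
    lat = math.atan2(z, math.hypot(x, y))     # [-pi/2, pi/2], 0 = horizon
    u = (0.5 - lon / (2 * math.pi)) * width   # column
    v = (0.5 - lat / math.pi) * height        # row, 0 at zenith
    return u, v
```

A LiDAR point in the world frame would first be transformed into this camera frame using the mounting calibration and the GPS/IMU pose before projection.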

  14. Evaluation of a cloud point extraction approach for the preconcentration and quantification of trace CuO nanoparticles in environmental waters

    International Nuclear Information System (INIS)

    Majedi, Seyed Mohammad; Kelly, Barry C.; Lee, Hian Kee

    2014-01-01

    Graphical abstract: - Highlights: • The robustness of the cloud point extraction approach was investigated for the analysis of trace CuO NPs in water. • The behavior and fate, and therefore the recovery, of CuO NPs varied substantially under different extraction conditions. • The effects of environmental factors on NP behavior and extraction were determined and minimized. • Limits of detection of 0.02 and 0.06 μg L−1 were achieved using ICP-MS and GF-AAS, respectively. • Environmental water samples were successfully pre-treated and analyzed. - Abstract: The cloud point extraction (CPE) of commercial copper(II) oxide nanoparticles (CuO NPs, mean diameter of 28 nm) in water samples was fully investigated. Factors such as Triton X-114 (TX-114) concentration, pH, and incubation temperature and time were optimized. The effects of CuO NP behavior, such as agglomeration, dissolution, and surface adsorption of natural organic matter, Cu2+, and coating chemicals, on its recovery were studied. The results indicated that all the CPE factors had significant effects on the extraction efficiency. An enrichment factor of ∼89 was obtained under optimum CPE conditions. The hydrodynamic diameter of CuO NPs increased to 4-5 μm upon agglomeration of NP-micelle assemblies, and decreased at pH >10.0, at which the extraction efficiency was also lowered. The solubility, and therefore the loss, of NPs was greatly enhanced at acidic pH. Dissolved organic carbon (DOC) >5 mg C L−1 and Cu2+ at >2 times the CuO NP concentration lowered and enhanced the extraction efficiency, respectively. Pre-treatment of samples with 3% w/v hydrogen peroxide and 10 mM ethylenediaminetetraacetic acid minimized the interferences posed by DOC and Cu2+, respectively. The decrease in CPE efficiency was also evident for ligands like poly(ethylene glycol). The TX-114-rich phase could be analyzed either by inductively coupled plasma mass spectrometry following microwave digestion, or by graphite furnace atomic absorption spectrometry

  15. Parallel Processing of Big Point Clouds Using Z-Order Partitioning

    Science.gov (United States)

    Alis, C.; Boehm, J.; Liu, K.

    2016-06-01

    As laser scanning technology improves and costs come down, the amount of point cloud data being generated can be prohibitively difficult and expensive to process on a single machine. This data explosion is not limited to point cloud data. Voluminous amounts of high-dimensionality and quickly accumulating data, collectively known as Big Data, such as those generated by social media, Internet of Things devices and commercial transactions, are becoming more prevalent as well. New computing paradigms and frameworks are being developed to efficiently handle the processing of Big Data, many of which utilize a compute cluster composed of several commodity-grade machines to process chunks of data in parallel. A central concept in many of these frameworks is data locality. By its nature, Big Data is large enough that the entire dataset would not fit on the memory and hard drives of a single node, hence replicating the entire dataset to each worker node is impractical. The data must then be partitioned across worker nodes in a manner that minimises data transfer across the network. This is a challenge for point cloud data because there exist different ways to partition data and they may require data transfer. We propose a partitioning based on Z-order, which is a form of locality-sensitive hashing. The Z-order or Morton code is computed by dividing each dimension to form a grid, then interleaving the binary representations of the dimensions. For example, the Z-order code for the grid square with coordinates (x = 1 = 01₂, y = 3 = 11₂) is 1011₂ = 11. The number of points in each partition is controlled by the number of bits per dimension: the more bits, the fewer the points. The number of bits per dimension also controls the level of detail, with more bits yielding finer partitioning. We present this partitioning method by implementing it on Apache Spark and investigating how different parameters affect the accuracy and running time of the k nearest neighbour algorithm.
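The bit interleaving described above can be sketched directly; `morton2d` is an illustrative name, and the example reproduces the abstract's x = 1, y = 3 grid square:

```python
def morton2d(x, y, bits):
    """Interleave the bits of grid coordinates x and y into a 2D Z-order
    (Morton) code, with x occupying the even bit positions."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)       # x bit i -> even position
        code |= ((y >> i) & 1) << (2 * i + 1)   # y bit i -> odd position
    return code
```

In a cluster setting, points sharing a Morton-code prefix fall into the same partition, so spatially close points tend to land on the same worker node.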

  16. COMPARATIVE ANALYSIS OF 3D POINT CLOUDS GENERATED FROM A FREEWARE AND TERRESTRIAL LASER SCANNER

    Directory of Open Access Journals (Sweden)

    K. R. Dayal

    2017-07-01

    Full Text Available In the recent past, several heritage structures have faced destruction due to both human-made incidents and natural calamities, causing a great loss to the human race regarding its cultural achievements. In this context, the importance of documenting such structures to create a substantial database cannot be emphasised enough. The Clock Tower of Dehradun, India is one such structure. There is a lack of sufficient information about it in the digital domain, which justified the need to carry out this study. Thus, an attempt has been made to gauge the possibilities of using open-source 3D tools such as VSfM to quickly and easily obtain point clouds of an object and assess their quality. The photographs were collected using consumer-grade cameras with reasonable effort to ensure overlap. Sparse and dense reconstructions were carried out to generate a 3D point cloud model of the tower. A terrestrial laser scanner (TLS) was also used to obtain a point cloud of the tower. The point clouds obtained from the two methods were analysed to understand the quality of the information present, the TLS-acquired point cloud serving as a benchmark to assess the VSfM point cloud. They were compared in terms of point density and subjected to a plane-fitting test on sample flat portions of the structure. The plane-fitting test revealed the planarity of the point clouds: a Gauss distribution fit yielded a standard deviation of 0.002 and 0.01 for TLS and VSfM, respectively. For more insight, comparisons with Agisoft Photoscan results were also made.
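The plane-fitting test reported above can be reproduced with a least-squares plane fit; a common approach (assumed here, not necessarily the authors' tool chain) takes the plane normal from the smallest singular vector of the centered points and reports the spread of the point-to-plane residuals:

```python
import numpy as np

def plane_fit_residual_std(points):
    """Fit a least-squares plane through a point set via SVD and return
    the standard deviation of the point-to-plane distances."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The right singular vector for the smallest singular value of the
    # centered cloud is the normal of the best-fit plane.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    dists = (pts - centroid) @ normal   # signed point-to-plane distances
    return dists.std()
```

Applied to a flat patch, a small returned value indicates a planar, low-noise point cloud, which is how the TLS and VSfM clouds were compared.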

  17. Cloud-point measurement for (sulphate salts + polyethylene glycol 15000 + water) systems by the particle counting method

    International Nuclear Information System (INIS)

    Imani, A.; Modarress, H.; Eliassi, A.; Abdous, M.

    2009-01-01

    The phase separation of (water + salt + polyethylene glycol 15000) systems was studied by cloud-point measurements using the particle counting method. The effects of the concentration of three sulphate salts (Na2SO4, K2SO4, (NH4)2SO4), the polyethylene glycol 15000 concentration, and the polymer-to-salt mass ratio on the cloud-point temperature of these systems have been investigated. The results obtained indicate that the cloud-point temperatures decrease linearly with increasing polyethylene glycol concentration for the different salts. The cloud points also decrease with an increasing mass ratio of salt to polymer.

  18. Automatic Generation of Structural Building Descriptions from 3D Point Cloud Scans

    DEFF Research Database (Denmark)

    Ochmann, Sebastian; Vock, Richard; Wessel, Raoul

    2013-01-01

    We present a new method for automatic semantic structuring of 3D point clouds representing buildings. In contrast to existing approaches which either target the outside appearance like the facade structure or rather low-level geometric structures, we focus on the building’s interior using indoor...... scans to derive high-level architectural entities like rooms and doors. Starting with a registered 3D point cloud, we probabilistically model the affiliation of each measured point to a certain room in the building. We solve the resulting clustering problem using an iterative algorithm that relies...

  19. A FAST METHOD FOR MEASURING THE SIMILARITY BETWEEN 3D MODEL AND 3D POINT CLOUD

    Directory of Open Access Journals (Sweden)

    Z. Zhang

    2016-06-01

    Full Text Available This paper proposes a fast method for measuring the partial Similarity between 3D Model and 3D point Cloud (SimMC). Measuring SimMC is crucial for many point cloud-related applications such as 3D object retrieval and inverse procedural modelling. In our proposed method, the surface area of the model and the Distance from Model to point Cloud (DistMC) are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted average of the distances between points sampled from the model and the point cloud. Similarly, the Distance from point Cloud to Model (DistCM) is defined as the average of the distances between points in the point cloud and the model. In order to avoid the huge computational burden brought by the calculation of DistCM in some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared to those traditional SimMC measuring methods that are only able to measure global similarity, our method is capable of measuring partial similarity by employing a distance-weighted strategy. Moreover, our method is faster than other partial similarity assessment methods. We demonstrate the superiority of our method both on synthetic data and on laser scanning data.
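The point-to-model distances underlying DistMC/DistCM reduce to nearest-neighbour queries once the model surface is densely sampled. A brute-force sketch of the cloud-to-model direction (illustrative only; the paper's weighting strategy is not reproduced):

```python
import numpy as np

def dist_cloud_to_points(cloud, model_samples):
    """Approximate the distance from a point cloud to a model surface by the
    mean nearest-neighbour distance to points sampled on the model.
    Brute force; a k-d tree would be used for large clouds."""
    diff = cloud[:, None, :] - model_samples[None, :, :]
    nearest = np.sqrt((diff ** 2).sum(axis=-1)).min(axis=1)
    return nearest.mean()
```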

  20. Determination of the impact of RGB points cloud attribute quality on color-based segmentation process

    Directory of Open Access Journals (Sweden)

    Bartłomiej Kraszewski

    2015-06-01

    Full Text Available The article presents the results of research on the effect that the radiometric quality of point cloud RGB attributes has on color-based segmentation. In the research, a point cloud with a resolution of 5 mm, acquired with a FARO Photon 120 scanner, described a fragment of an office room, and color images were taken by various digital cameras. The images were acquired by an SLR Nikon D3X and an SLR Canon D200 integrated with the laser scanner, a compact camera Panasonic TZ-30 and a mobile phone digital camera. Color information from the images was spatially related to the point cloud in FARO Scene software. The color-based segmentation of the test data was performed with a developed application named "RGB Segmentation". The application is based on the public Point Cloud Library (PCL) and extracts subsets of points fulfilling the segmentation criteria from the source point cloud using the region growing method. Using the developed application, the segmentation of four test point clouds containing different RGB attributes from the various images was performed. The segmentation process was evaluated by comparing segments acquired using the developed application with those extracted manually by an operator. The following items were compared: the number of obtained segments, the number of correctly identified objects and the correctness of the segmentation process. The best segmentation correctness and the most identified objects were obtained using the data with RGB attributes from the Nikon D3X images. Based on the results it was found that the quality of the RGB attributes of the point cloud had an impact only on the number of identified objects. For the correctness of the segmentation, as well as its error, no apparent relationship between the quality of color information and the result of the process was found. Keywords: terrestrial laser scanning, color-based segmentation, RGB attribute, region growing method, digital images, point cloud
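The region growing segmentation used in the study can be sketched as a breadth-first flood fill over spatial neighbours with similar colour. The following is a brute-force illustration, not the PCL implementation; the function and parameter names are hypothetical:

```python
import numpy as np
from collections import deque

def region_grow(points, colors, seed, radius, color_tol):
    """Grow a segment from index `seed`: repeatedly add spatial neighbours
    whose RGB colour is within `color_tol` (Euclidean) of the seed colour.
    Brute-force neighbour search; PCL uses k-d trees for the same idea."""
    n = len(points)
    in_segment = np.zeros(n, dtype=bool)
    in_segment[seed] = True
    queue = deque([seed])
    while queue:
        i = queue.popleft()
        near = np.linalg.norm(points - points[i], axis=1) <= radius
        similar = np.linalg.norm(colors - colors[seed], axis=1) <= color_tol
        for j in np.nonzero(near & similar & ~in_segment)[0]:
            in_segment[j] = True
            queue.append(j)
    return np.nonzero(in_segment)[0]
```

Better RGB quality tightens the colour clusters, which is why the colour tolerance separates objects more reliably on the higher-grade imagery.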

  1. ACCURACY ASSESSMENT OF MOBILE MAPPING POINT CLOUDS USING THE EXISTING ENVIRONMENT AS TERRESTRIAL REFERENCE

    Directory of Open Access Journals (Sweden)

    S. Hofmann

    2016-06-01

    Full Text Available Mobile mapping data is widely used in various applications, which makes it especially important for data users to get a statistically verified quality statement on the geometric accuracy of the acquired point clouds or their processed products. The accuracy of point clouds can be divided into an absolute and a relative quality, where the absolute quality describes the position of the point cloud in a world coordinate system such as WGS84 or UTM, whereas the relative accuracy describes the accuracy within the point cloud itself. Furthermore, the quality of processed products such as segmented features depends on the global accuracy of the point cloud but mainly on the quality of the processing steps. Several data sources with different characteristics and quality can be thought of as potential reference data, such as cadastral maps, orthophotos, artificial control objects or terrestrial surveys using a total station. In this work a test field in a selected residential area was acquired as reference data in a terrestrial survey using a total station. In order to reach high accuracy, the stationing of the total station was based on a newly established geodetic network with a local accuracy of less than 3 mm. The global position of the network was determined using a long-term GNSS survey reaching an accuracy of 8 mm. Based on this geodetic network, a 3D test field with facades and street profiles was measured with a total station, each point with a two-dimensional position and altitude. In addition, the surfaces of poles of street lights, traffic signs and trees were acquired using the scanning mode of the total station. By comparing this reference data to the mobile mapping point clouds acquired in several measurement campaigns, a detailed quality statement on the accuracy of the point cloud data is made. Additionally, the advantages and disadvantages of the described reference data source concerning availability, cost, accuracy and applicability are discussed.

  2. POINT CLOUD ORIENTED SHOULDER LINE EXTRACTION IN LOESS HILLY AREA

    Directory of Open Access Journals (Sweden)

    L. Min

    2016-06-01

    Full Text Available The shoulder line is a significant line in the hilly areas of the Loess Plateau in China, dividing the surface into positive and negative terrain (P-N terrains). Because the point cloud vegetation removal methods for P-N terrains differ, there is an imperative need for shoulder line extraction. In this paper, we propose an automatic shoulder line extraction method based on point clouds. The workflow is as follows: (i) ground points are selected using a grid filter in order to remove most of the noisy points. (ii) Based on a DEM interpolated from those ground points, slope is mapped and classified into two classes (P-N terrains) using the Natural Breaks classification method. (iii) The common boundary between the two slope classes is extracted as the shoulder line candidate. (iv) The filter grid size is adjusted and steps i-iii repeated until the shoulder line candidate matches its real location. (v) The shoulder line of the whole area is generated. The test area is located in Madigou, Jingbian County of Shaanxi Province, China. A total of 600 million points were acquired in the test area of 0.23 km2, using a Riegl VZ400 3D laser scanner in August 2014. Due to limited computing performance, the test area was divided into 60 blocks, and 13 of them around the shoulder line were selected for filter grid size optimization. The experimental results show that the optimal filter grid size varies across sample areas, and that a power function relation exists between filter grid size and point density. The optimal grid size was determined by the above relation, and the shoulder lines of all 60 blocks were then extracted. Compared with manual interpretation results, the accuracy of the whole result reaches 85%. This method can be applied to shoulder line extraction in hilly areas, which is crucial for point cloud denoising and high-accuracy DEM generation.

  3. Point Cloud Oriented Shoulder Line Extraction in Loess Hilly Area

    Science.gov (United States)

    Min, Li; Xin, Yang; Liyang, Xiong

    2016-06-01

    The shoulder line is a significant line in the hilly areas of the Loess Plateau in China, dividing the surface into positive and negative terrain (P-N terrains). Because the point cloud vegetation removal methods for P-N terrains differ, there is an imperative need for shoulder line extraction. In this paper, we propose an automatic shoulder line extraction method based on point clouds. The workflow is as follows: (i) ground points are selected using a grid filter in order to remove most of the noisy points. (ii) Based on a DEM interpolated from those ground points, slope is mapped and classified into two classes (P-N terrains) using the Natural Breaks classification method. (iii) The common boundary between the two slope classes is extracted as the shoulder line candidate. (iv) The filter grid size is adjusted and steps i-iii repeated until the shoulder line candidate matches its real location. (v) The shoulder line of the whole area is generated. The test area is located in Madigou, Jingbian County of Shaanxi Province, China. A total of 600 million points were acquired in the test area of 0.23 km2, using a Riegl VZ400 3D laser scanner in August 2014. Due to limited computing performance, the test area was divided into 60 blocks, and 13 of them around the shoulder line were selected for filter grid size optimization. The experimental results show that the optimal filter grid size varies across sample areas, and that a power function relation exists between filter grid size and point density. The optimal grid size was determined by the above relation, and the shoulder lines of all 60 blocks were then extracted. Compared with manual interpretation results, the accuracy of the whole result reaches 85%. This method can be applied to shoulder line extraction in hilly areas, which is crucial for point cloud denoising and high-accuracy DEM generation.
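Step (i) of the workflow, the grid filter for ground-point selection, is commonly implemented by keeping the lowest point per planimetric grid cell. A minimal sketch under that assumption (the authors' exact filter is not specified in the abstract):

```python
import numpy as np

def grid_min_filter(points, cell):
    """Return indices of candidate ground points: the lowest point (smallest z)
    within each XY grid cell of side length `cell`."""
    ij = np.floor(points[:, :2] / cell).astype(int)   # planimetric cell index
    ground = {}
    for idx, key in enumerate(map(tuple, ij)):
        if key not in ground or points[idx, 2] < points[ground[key], 2]:
            ground[key] = idx
    return np.array(sorted(ground.values()))
```

Varying `cell` corresponds to the grid-size adjustment in step (iv): larger cells remove more vegetation but also flatten real terrain detail.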

  4. On-line Incorporation of Cloud Point Extraction in Flame Atomic Absorption Spectrometric Determination of Silver

    OpenAIRE

    DALALI, Nasser; JAVADI, Nasrin; AGRAWAL, Yadvendra KUMAR

    2008-01-01

    A cloud point extraction method was incorporated into a flow injection system, coupled with flame atomic absorption spectrometry, for determination of trace amounts of silver. The analyte in the aqueous solution was acidified with 0.2 mol L-1 sulfuric acid and complexed with dithizone. The cloud point extraction was performed using the non-ionic surfactant Triton X-114. After obtaining the cloud point, the surfactant-rich phase containing the dithizonate complex was collected in a m...

  5. Preconcentration of a low grade uranium ore in CPDU and laboratory investigation to optimize the dewatering conditions of the preconcentration products

    International Nuclear Information System (INIS)

    Cristovici, M.A.; Berry, T.F.; Raicevic, M.M.; Brady, E.L.; Bredin, E.L.; Leigh, G.W.; Rouleau, J.P.

    1982-04-01

    A process consisting of pyrite flotation and magnetic concentration of radionuclides was developed by CANMET over several years to preconcentrate low grade uranium ores prior to leaching. When the economics of the preconcentration-leaching technology were compared with leaching of the entire ore after pyrite flotation (Base Case variant), the preconcentration method appeared to be economically less advantageous than expected, due to the high cost of dewatering the preconcentration products. Further investigations examined in depth the metallurgy and dewatering of the two variants: preconcentration and base case. A typical low grade uranium ore from the Elliot Lake area was used. The metallurgy was compared based on data from continuous operation (CPDU). In the preconcentration variant the amount of ore directed to leaching was reduced to more than one third of that processed in the base case, while the radionuclide concentration became more than three times higher. However, by preconcentration 7% of the uranium was lost before leaching. Systematic laboratory-scale settling and filter tests optimized the dewatering conditions of the preconcentration technology to the extent that rates similar to those of the base case were obtained

  6. Robotic Online Path Planning on Point Cloud.

    Science.gov (United States)

    Liu, Ming

    2016-05-01

    This paper deals with the path-planning problem for mobile wheeled or tracked robots which drive in 2.5-D environments, where the traversable surface is usually considered as a 2-D manifold embedded in a 3-D ambient space. Specifically, we aim at solving the 2.5-D navigation problem using a raw point cloud as input. The proposed method is independent of traditional surface parametrization or reconstruction methods, such as a meshing process, which generally have high computational complexity. Instead, we utilize the output of a 3-D tensor voting framework on the raw point clouds. The computation of tensor voting is accelerated by an optimized implementation on a graphics processing unit. Based on the tensor voting results, a novel local Riemannian metric is defined using the saliency components, which helps the modeling of the latent traversable surface. Using the proposed metric, we show by experiments that the geodesic in the 3-D tensor space leads to rational path-planning results. Compared to traditional methods, the results reveal the advantages of the proposed method in terms of smoothing the robot's maneuvers while considering the minimum travel distance.
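Once the tensor-voting metric assigns a cost to moving between neighbouring points, the geodesic computation reduces to a shortest-path search over a weighted neighbourhood graph. A generic Dijkstra sketch (illustrative; the paper's metric and implementation are not reproduced):

```python
import heapq

def dijkstra(adj, start, goal):
    """Shortest path on a weighted graph {node: [(neighbour, cost), ...]}.
    On a point cloud, nodes would be points and costs the edge lengths
    under the traversability metric."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [goal], goal             # reconstruct goal -> start
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]
```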

  7. A New Perspective on the Relationship Between Cloud Shade and Point Cloudiness

    Czech Academy of Sciences Publication Activity Database

    Brabec, Marek; Badescu, V.; Paulescu, M.; Dumitrescu, A.

    172-173, 15 May (2016), s. 136-146 ISSN 0169-8095 Institutional support: RVO:67985807 Keywords: point cloudiness * cloud shade * statistical analysis * semi-parametric modeling Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 3.778, year: 2016

  8. A curvature-based weighted fuzzy c-means algorithm for point clouds de-noising

    Science.gov (United States)

    Cui, Xin; Li, Shipeng; Yan, Xiutian; He, Xinhua

    2018-04-01

    In order to remove noise from three-dimensional scattered point clouds and smooth the data without damaging sharp geometric features, a novel algorithm is proposed in this paper. A feature-preserving weight is added to the fuzzy c-means algorithm, yielding a curvature-weighted fuzzy c-means clustering algorithm. Firstly, large-scale outliers are removed based on the statistics of neighboring points within a radius r. Then, the algorithm estimates the curvature of the point cloud data using a conicoid parabolic fitting method and calculates the curvature feature value. Finally, the proposed clustering algorithm is applied to calculate the weighted cluster centers, which are regarded as the new points. The experimental results show that this approach handles noise of different scales and intensities in point clouds efficiently and with high precision, while preserving features at the same time. It is also robust to different noise models.
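The first stage, removing large-scale outliers from the statistics of points within a radius r, can be sketched as follows (brute-force distances; a spatial index would be used in practice, and the parameter names are illustrative):

```python
import numpy as np

def remove_outliers(points, r, min_neighbors):
    """Discard points that have fewer than `min_neighbors` other points
    within radius r -- a radius-based outlier removal step."""
    # Pairwise distance matrix; O(n^2) memory, fine for a small sketch.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    counts = (d <= r).sum(axis=1) - 1   # exclude the point itself
    return points[counts >= min_neighbors]
```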

  9. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Wenyang [Department of Bioengineering, University of California, Los Angeles, Los Angeles, California 90095 (United States); Cheung, Yam [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas 75390 (United States); Sawant, Amit [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas, 75390 and Department of Radiation Oncology, University of Maryland, College Park, Maryland 20742 (United States); Ruan, Dan, E-mail: druan@mednet.ucla.edu [Department of Bioengineering, University of California, Los Angeles, Los Angeles, California 90095 and Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, California 90095 (United States)

    2016-05-15

    Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) algorithm are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or the presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared error, by comparing the reconstruction results against those from the variational method. Results: On clinical point clouds, both the SR and MSR models achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model demonstrated its advantage in achieving consistent and robust performance despite the introduced

  10. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system.

    Science.gov (United States)

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-05-01

    To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) algorithm are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or the presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared error, by comparing the reconstruction results against those from the variational method. On clinical point clouds, both the SR and MSR models achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. 
The authors have
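Dropping the sparsity and robustness priors, the core of the SR model is a linear approximation of the target cloud from the training clouds. An ordinary least-squares sketch of that computation, assuming point correspondences are already established:

```python
import numpy as np

def reconstruct(training, target):
    """Approximate `target` (N, 3) as a linear combination of the clouds in
    `training` (K, N, 3), solving ordinary least squares over the K weights.
    The paper's SR/MSR models add sparsity and robust (Laplacian) priors."""
    K = training.shape[0]
    A = training.reshape(K, -1).T          # (3N, K) design matrix
    b = target.reshape(-1)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return (A @ w).reshape(target.shape), w
```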

  11. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    International Nuclear Information System (INIS)

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-01-01

    Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) algorithm are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or the presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared error, by comparing the reconstruction results against those from the variational method. Results: On clinical point clouds, both the SR and MSR models achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model demonstrated its advantage in achieving consistent and robust performance despite the introduced

  12. Economic evaluation of preconcentration of uranium ores

    International Nuclear Information System (INIS)

    1981-04-01

    The economics of two options for the preconcentration of low-grade uranium ores prior to hydrochloric acid leaching were studied. The first option uses flotation followed by wet high-intensity magnetic separation. The second option omits the flotation step. In each case it was assumed that most of the pyrite in the ore would be recovered by froth flotation, dewatered, and roasted to produce sulphuric acid and a calcine suitable for acid leaching. Savings in operating costs from preconcentration are offset by the value of uranium losses. However, a capital saving of approximately 6 million dollars is indicated for each preconcentration option. As a result of the capital saving, preconcentration appears to be economically attractive when combined with hydrochloric acid leaching. There appears to be no economic advantage to preconcentration in combination with sulphuric acid leaching of the ore.

  13. Cloud point extraction of vanadium in pharmaceutical formulations, dialysate and parenteral solutions using 8-hydroxyquinoline and nonionic surfactant

    International Nuclear Information System (INIS)

    Khan, Sumaira; Kazi, Tasneem G.; Baig, Jameel A.; Kolachi, Nida F.; Afridi, Hassan I.; Wadhwa, Sham Kumar; Shah, Abdul Q.; Kandhro, Ghulam A.; Shah, Faheem

    2010-01-01

    A cloud point extraction (CPE) method has been developed for the determination of trace quantities of vanadium ions in pharmaceutical formulations (PF), dialysate (DS) and parenteral solutions (PS). The CPE of vanadium (V) using 8-hydroxyquinoline (oxine) as complexing reagent, mediated by the nonionic surfactant Triton X-114, was investigated. The parameters that affect the extraction efficiency of CPE, such as pH of the sample solution, concentrations of oxine and Triton X-114, equilibration temperature and shaking time, were investigated in detail. The validity of the CPE of V was checked by the standard addition method in real samples. The extracted surfactant-rich phase was diluted with nitric acid in ethanol prior to its analysis by electrothermal atomic absorption spectrometry. Under these conditions, the preconcentration of 50 mL sample solutions gave an enrichment factor of 125-fold. The lower limit of detection obtained under the optimal conditions was 42 ng/L. The proposed method has been successfully applied to the determination of trace quantities of V in various pharmaceutical preparations with satisfactory results. The concentrations of V in PF, DS and PS samples were found in the ranges of 10.5-15.2, 0.65-1.32 and 1.76-6.93 μg/L, respectively.
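    As a quick arithmetic aside, the theoretical preconcentration factor in CPE is simply the ratio of the initial sample volume to the final measured-phase volume; below is a minimal sketch with an illustrative final volume (the 0.5 mL figure is an assumption for demonstration, not a value reported in this record):

```python
def preconcentration_factor(v_sample_ml: float, v_final_ml: float) -> float:
    """Volume-based (theoretical) preconcentration factor.

    The experimental enrichment factor, usually measured as the ratio of
    calibration slopes with and without preconcentration, is typically
    somewhat lower than this volume ratio.
    """
    return v_sample_ml / v_final_ml

# e.g. 50 mL of sample concentrated into a 0.5 mL measured phase (illustrative)
print(preconcentration_factor(50.0, 0.5))  # -> 100.0
```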

  14. A Green Preconcentration Method for Determination of Cobalt and Lead in Fresh Surface and Waste Water Samples Prior to Flame Atomic Absorption Spectrometry

    Directory of Open Access Journals (Sweden)

    Naeemullah

    2012-01-01

    Full Text Available Cloud point extraction (CPE) has been used for the preconcentration and simultaneous determination of cobalt (Co) and lead (Pb) in fresh and wastewater samples. The extraction of analytes from aqueous samples was performed in the presence of 8-hydroxyquinoline (oxine) as a chelating agent and Triton X-114 as a nonionic surfactant. Experiments were conducted to assess the effect of different chemical variables such as pH, amounts of reagents (oxine and Triton X-114), temperature, incubation time, and sample volume. After phase separation based on the cloud point, the surfactant-rich phase was diluted with acidic ethanol prior to its analysis by flame atomic absorption spectrometry (FAAS). Enhancement factors of 70 and 50, with detection limits of 0.26 μg L−1 and 0.44 μg L−1, were obtained for Co and Pb, respectively. In order to validate the developed method, a certified reference material (SRM 1643e) was analyzed and the determined values were in good agreement with the certified values. The proposed method was applied successfully to the determination of Co and Pb in fresh surface and waste water samples.

  15. The Pose Estimation of Mobile Robot Based on Improved Point Cloud Registration

    Directory of Open Access Journals (Sweden)

    Yanzi Miao

    2016-03-01

    Full Text Available Due to GPS restrictions, an inertial sensor is usually used to estimate the location of indoor mobile robots. However, it is difficult to achieve high-accuracy localization and control by inertial sensors alone. In this paper, a new method is proposed to estimate an indoor mobile robot pose with six degrees of freedom based on an improved 3D Normal Distributions Transform algorithm (3D-NDT). First, point cloud data are captured by a Kinect sensor and segmented according to the distance to the robot. After the segmentation, the input point cloud data are processed by the Approximate Voxel Grid Filter algorithm with different voxel-grid sizes. Second, initial registration and precise registration are performed according to the distance to the sensor. The most distant point cloud data are registered initially using the 3D-NDT algorithm with large voxel grids, starting from the transformation matrix given by the odometry method. The closest point cloud data are then registered precisely using the 3D-NDT algorithm with small voxel grids. From these registrations, a final transformation matrix is obtained. Based on this transformation matrix, the pose estimation problem of the indoor mobile robot is solved. Test results show that this method can obtain accurate robot pose estimation and has better robustness.
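    The voxel-grid downsampling step used above can be illustrated with a simplified centroid-per-voxel filter (an illustrative sketch under synthetic data, not the authors' implementation, which is PCL's Approximate Voxel Grid Filter):

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel: float) -> np.ndarray:
    """Simplified voxel-grid filter: replace all points falling in each
    occupied voxel of side `voxel` by their centroid."""
    keys = np.floor(points / voxel).astype(np.int64)
    # Group points by voxel index; `inv` maps each point to its voxel's id
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.reshape(-1)  # guard against versions returning a 2-D inverse
    counts = np.bincount(inv).astype(float)
    out = np.zeros((len(counts), 3))
    for d in range(3):  # centroid per voxel, one coordinate at a time
        out[:, d] = np.bincount(inv, weights=points[:, d]) / counts
    return out

pts = np.random.default_rng(2).uniform(0, 1, (10000, 3))
coarse = voxel_downsample(pts, 0.25)  # 4x4x4 grid -> at most 64 centroids
fine = voxel_downsample(pts, 0.05)    # finer grid keeps more detail
print(len(coarse), len(fine))
```

Using a larger voxel for distant (initial) registration and a smaller one for close-range (precise) registration, as the abstract describes, trades point density against registration cost.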

  16. Three-dimensional reconstruction of indoor whole elements based on mobile LiDAR point cloud data

    Science.gov (United States)

    Gong, Yuejian; Mao, Wenbo; Bi, Jiantao; Ji, Wei; He, Zhanjun

    2014-11-01

    Ground-based LiDAR is one of the most effective city modeling tools at present and has been widely used for three-dimensional reconstruction of outdoor objects. However, for indoor objects there are technical bottlenecks due to the lack of a GPS signal. In this paper, based on high-precision indoor point cloud data obtained by LiDAR-based indoor mobile measuring equipment, high-precision models were built for all indoor ancillary facilities. The point cloud data employed also contain a color feature, extracted by fusion with CCD images; the data thus carry both spatial geometric features and spectral information, which can be used to construct object surfaces and restore the color and texture of the geometric model. Based on the Autodesk CAD platform, with the help of the PointSence plug-in, three-dimensional reconstruction of all indoor elements was realized. Specifically, Pointools Edit Pro was adopted to edit the point cloud, and different types of indoor point cloud data were processed, including data format conversion, outline extraction and texture mapping of the point cloud model. Finally, three-dimensional visualization of the real-world indoor scene was completed. Experimental results showed that high-precision 3D point cloud data obtained by indoor mobile measuring equipment can be used for 3D reconstruction of all indoor elements, and that the methods proposed in this paper can efficiently realize this reconstruction. Moreover, the modeling precision could be controlled within 5 cm, which proved to be a satisfactory result.

  17. COMPARISON OF 2D AND 3D APPROACHES FOR THE ALIGNMENT OF UAV AND LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    R. A. Persad

    2017-08-01

    Full Text Available The automatic alignment of 3D point clouds acquired or generated from different sensors is a challenging problem. The objective of the alignment is to estimate the 3D similarity transformation parameters, including a global scale factor, 3 rotations and 3 translations. To do so, corresponding anchor features are required in both data sets. There are two main types of alignment: (i) coarse alignment and (ii) refined alignment. Coarse alignment issues include the lack of any prior knowledge of the respective coordinate systems for a source and target point cloud pair, and the difficulty of extracting and matching corresponding control features (e.g., points, lines or planes) co-located on both point cloud pairs to be aligned. With the increasing use of UAVs, there is a need to automatically co-register their generated point-cloud-based digital surface models with those from other data acquisition systems such as terrestrial or airborne lidar point clouds. This work presents a comparative study of two independent feature matching techniques for addressing 3D conformal point cloud alignment of UAV and lidar data in different 3D coordinate systems without any prior knowledge of the seven transformation parameters.

  18. Cloud point enhancement profile of libraries of modified Poly(N-isopropylmethacrylamide)

    International Nuclear Information System (INIS)

    Tavares, Alexandre Guilherme Silva; Silveira, Kelly Cristine da; Lucas, Elizabete Fernandes

    2016-01-01

    Full text: Poly(N-isopropyl methacrylamide) (PNIPMAM) based polymers are commercially available. These polymers present a low cloud point, which may result in precipitation problems when applied in petroleum exploration [1]. Oil and gas production involves high temperatures, which can induce loss of activity of kinetic hydrate inhibitors (KHI), causing blockages by hydrates in pipes, fittings or valves. Hydrophobic groups can be added to modify PNIPMAM based polymers for hydrate inhibition during petroleum production. The cloud point enhancement profile of a series of modified polymers was studied in this work. We synthesized poly(N-isopropyl methacrylamide-co-acrylic acid), P(NIPMAM-co-AA), by standard polymerization using AIBN as initiator. Series of polymers modified with two different groups (tert-butyl and cyclopentyl) were studied. Characterization was made by nuclear magnetic resonance (NMR) to confirm the chemical structure; titration was used to determine the acrylic acid content of all synthesized polymers; gel permeation chromatography (GPC) was applied to determine molar mass and polydispersity. A carbodiimide-mediated coupling reaction (CMC) [2] was used to post-synthetically modify the base polymer P(NIPMAM-co-AA), with N-(3-dimethylaminopropyl)-N’-ethylcarbodiimide hydrochloride (EDC) and N-hydroxysuccinimide (NHS) as activation agents. The cloud point experiment was carried out with deionized water and brine, where small vials with polymer solution were heated at a 12 deg C/min rate. The temperature at which the solution became turbid was monitored. The modified PNIPMAM based polymers presented a significant enhancement of the cloud point temperature, up to 80 deg C, in comparison to the unmodified polymer, P(NIPMAM-co-AA). References: [1] Mady, M. F.; Kelland, M.A. Energy and Fuels, 28, 5714 (2014). [2] Silveira, K.C.; Sheng, Q.; Tian, W.; Lucas, E.F.; Wood, C.D. J. Appl. Polym. Sci., 132, 42797 (2015). (author)

  19. Multiview 3D sensing and analysis for high quality point cloud reconstruction

    Science.gov (United States)

    Satnik, Andrej; Izquierdo, Ebroul; Orjesek, Richard

    2018-04-01

    Multiview 3D reconstruction techniques enable digital reconstruction of 3D objects from the real world by fusing different viewpoints of the same object into a single 3D representation. This process is by no means trivial and the acquisition of high quality point cloud representations of dynamic 3D objects is still an open problem. In this paper, an approach for high fidelity 3D point cloud generation using low cost 3D sensing hardware is presented. The proposed approach runs in an efficient low-cost hardware setting based on several Kinect v2 scanners connected to a single PC. It performs autocalibration and runs in real-time exploiting an efficient composition of several filtering methods including Radius Outlier Removal (ROR), Weighted Median filter (WM) and Weighted Inter-Frame Average filtering (WIFA). The performance of the proposed method has been demonstrated through efficient acquisition of dense 3D point clouds of moving objects.
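    Radius Outlier Removal (ROR), one of the filters named above, can be sketched as a brute-force neighbor count: a point survives only if enough other points lie within a given radius. This is an illustrative O(n²) sketch on synthetic data (production implementations, e.g. in PCL, use spatial indexing), not the authors' pipeline:

```python
import numpy as np

def radius_outlier_removal(points: np.ndarray, radius: float,
                           min_neighbors: int) -> np.ndarray:
    """Keep only points with at least `min_neighbors` other points
    within `radius` (brute-force sketch of ROR)."""
    keep = []
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        # subtract 1 to exclude the point itself from its neighbor count
        if (d < radius).sum() - 1 >= min_neighbors:
            keep.append(i)
    return points[keep]

rng = np.random.default_rng(4)
dense = rng.normal(0, 0.05, (200, 3))   # dense surface patch near the origin
stray = rng.uniform(5, 10, (5, 3))      # isolated noise points far away
cleaned = radius_outlier_removal(np.vstack([dense, stray]), 0.2, 5)
print(len(cleaned))  # the isolated stray points are removed
```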

  20. Automated Extraction of 3D Trees from Mobile LiDAR Point Clouds

    Directory of Open Access Journals (Sweden)

    Y. Yu

    2014-06-01

    Full Text Available This paper presents an automated algorithm for extracting 3D trees directly from 3D mobile light detection and ranging (LiDAR) data. To reduce both computational and spatial complexities, ground points are first filtered out from a raw 3D point cloud via block-based elevation filtering. Off-ground points are then grouped into clusters representing individual objects through Euclidean distance clustering and voxel-based normalized cut segmentation. Finally, a model-driven method is proposed to achieve the extraction of 3D trees based on a pairwise 3D shape descriptor. The proposed algorithm is tested using a set of mobile LiDAR point clouds acquired by a RIEGL VMX-450 system. The results demonstrate the feasibility and effectiveness of the proposed algorithm.
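    Euclidean distance clustering, as used above to group off-ground points into objects, amounts to transitive-closure grouping: any two points closer than a tolerance (directly or through a chain of neighbors) share a cluster. A brute-force O(n²) illustration on synthetic blobs (real implementations use a k-d tree; function names here are hypothetical):

```python
import numpy as np
from collections import deque

def euclidean_cluster(points: np.ndarray, tol: float) -> np.ndarray:
    """Label points so that points within `tol` of each other
    (transitively) share a cluster label."""
    n = len(points)
    labels = -np.ones(n, dtype=int)
    cur = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        q = deque([seed])            # breadth-first flood fill from the seed
        labels[seed] = cur
        while q:
            i = q.popleft()
            d = np.linalg.norm(points - points[i], axis=1)
            for j in np.where((d < tol) & (labels == -1))[0]:
                labels[j] = cur
                q.append(j)
        cur += 1
    return labels

# Two well-separated blobs -> two clusters
rng = np.random.default_rng(3)
pts = np.vstack([rng.normal(0, 0.1, (50, 3)), rng.normal(5, 0.1, (50, 3))])
labels = euclidean_cluster(pts, 1.0)
print(len(set(labels)))  # -> 2
```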

  1. A Novel Method for the Filterless Preconcentration of Iron

    DEFF Research Database (Denmark)

    Andersen, Jens Enevold Thaulov

    2005-01-01

    A novel method of analysis of iron by filterless preconcentration is presented. This is the first example of efficient preconcentration of a refractory transition metal in which coprecipitation and columns were omitted. The method applies a manifold of flow injection analysis (FIA) to iron species...... that are preconcentrated on the inner walls of a tubular reactor. It was found that the adsorption of iron species to the walls was particularly pronounced in reactors of nylon material, and enrichment factors of 30-35 could be attained, depending on the length of the reactor and on the time of preconcentration....... In the preconcentration step of the FIA accessory, the optimum efficacy was obtained when the acidity of the samples was adjusted by HCl to pH = 2.5, whereas the ammonia preconcentration buffer should be kept alkaline at pH = 9.8. After being preconcentrated on the tubular reactor, the iron species were eluted...

  2. High precision target center determination from a point cloud

    Directory of Open Access Journals (Sweden)

    K. Kregar

    2013-10-01

    Full Text Available Many applications of terrestrial laser scanners (TLS) require the determination of a specific point from a point cloud. In this paper, a procedure for high-precision planar target center determination from a point cloud is presented. The process is based on an image matching algorithm, but before a raster image can be used to fit a target to it, the best-fitting plane must be properly determined and the points projected onto it. The main emphasis of this paper is on the precision estimation and its propagation through the whole procedure, which allows a precision assessment of the final results (target center coordinates). The theoretical precision estimates obtained through the procedure were rather high, so we compared them with the empirical precision estimates obtained as standard deviations of the results of 60 independently scanned targets. A χ2-test confirmed that the theoretical precisions are overestimated. The problem most probably lies in the overestimated precisions of the plane parameters due to the vast redundancy of points. However, the empirical precisions also confirmed that the proposed procedure can ensure a submillimeter precision level. The algorithm can automatically detect grossly erroneous results to some extent. It can operate with incidence angles of the laser beam as high as 80°, which is a desirable property if planar targets are to be used as tie points in scan registration. The proposed algorithm will also contribute to improved TLS calibration procedures.
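    The best-fitting-plane step described above is commonly done as a least-squares fit via the centroid and the singular value decomposition of the centered points; the following is a minimal sketch on synthetic data (not the authors' implementation, and without the precision propagation that is the paper's focus):

```python
import numpy as np

def fit_plane(points: np.ndarray):
    """Least-squares plane through a point cloud patch.

    Returns (centroid, unit normal). The right-singular vector with the
    smallest singular value of the centered points is the plane normal;
    signed distances along the normal are the fit residuals."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c, full_matrices=False)
    return c, vt[-1]

# Noisy samples of the plane z = 0.1x + 0.2y + 1
rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, (500, 2))
z = 0.1 * xy[:, 0] + 0.2 * xy[:, 1] + 1 + rng.normal(0, 1e-3, 500)
pts = np.column_stack([xy, z])
c, n = fit_plane(pts)
res = np.abs((pts - c) @ n)       # point-to-plane distances
print(res.max())                  # on the order of the injected noise
```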

  3. PARALLEL PROCESSING OF BIG POINT CLOUDS USING Z-ORDER-BASED PARTITIONING

    Directory of Open Access Journals (Sweden)

    C. Alis

    2016-06-01

    Full Text Available As laser scanning technology improves and costs are coming down, the amount of point cloud data being generated can be prohibitively difficult and expensive to process on a single machine. This data explosion is not only limited to point cloud data. Voluminous amounts of high-dimensionality and quickly accumulating data, collectively known as Big Data, such as those generated by social media, Internet of Things devices and commercial transactions, are becoming more prevalent as well. New computing paradigms and frameworks are being developed to efficiently handle the processing of Big Data, many of which utilize a compute cluster composed of several commodity grade machines to process chunks of data in parallel. A central concept in many of these frameworks is data locality. By its nature, Big Data is large enough that the entire dataset would not fit on the memory and hard drives of a single node hence replicating the entire dataset to each worker node is impractical. The data must then be partitioned across worker nodes in a manner that minimises data transfer across the network. This is a challenge for point cloud data because there exist different ways to partition data and they may require data transfer. We propose a partitioning based on Z-order which is a form of locality-sensitive hashing. The Z-order or Morton code is computed by dividing each dimension to form a grid then interleaving the binary representation of each dimension. For example, the Z-order code for the grid square with coordinates (x = 1 = 01₂, y = 3 = 11₂) is 1011₂ = 11. The number of points in each partition is controlled by the number of bits per dimension: the more bits, the fewer the points. The number of bits per dimension also controls the level of detail with more bits yielding finer partitioning. We present this partitioning method by implementing it on Apache Spark and investigating how different parameters affect the accuracy and running time of the k nearest
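    The bit interleaving described above can be sketched as follows (a hypothetical `morton2d` helper, not the authors' Spark code); it reproduces the abstract's example of grid square (x = 1, y = 3) mapping to code 11:

```python
def morton2d(x: int, y: int, bits: int) -> int:
    """Interleave the binary representations of x and y to form a 2D
    Z-order (Morton) code; x occupies the even bit positions."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)       # bit i of x -> bit 2i
        code |= ((y >> i) & 1) << (2 * i + 1)   # bit i of y -> bit 2i+1
    return code

# The abstract's example: (x = 1 = 01b, y = 3 = 11b) -> 1011b = 11
print(morton2d(1, 3, 2))  # -> 11
```

Because nearby grid squares tend to share code prefixes, sorting or range-partitioning points by this code keeps spatial neighbors on the same worker, which is the data-locality property the partitioning exploits.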

  4. Efficient 3D Volume Reconstruction from a Point Cloud Using a Phase-Field Method

    Directory of Open Access Journals (Sweden)

    Darae Jeong

    2018-01-01

    Full Text Available We propose an explicit hybrid numerical method for the efficient 3D volume reconstruction from unorganized point clouds using a phase-field method. The proposed three-dimensional volume reconstruction algorithm is based on the 3D binary image segmentation method. First, we define a narrow band domain embedding the unorganized point cloud and an edge indicating function. Second, we define a good initial phase-field function which speeds up the computation significantly. Third, we use a recently developed explicit hybrid numerical method for solving the three-dimensional image segmentation model to obtain efficient volume reconstruction from point cloud data. In order to demonstrate the practical applicability of the proposed method, we perform various numerical experiments.

  5. STRUCTURE LINE DETECTION FROM LIDAR POINT CLOUDS USING TOPOLOGICAL ELEVATION ANALYSIS

    Directory of Open Access Journals (Sweden)

    C. Y. Lo

    2012-07-01

    Full Text Available Airborne LIDAR point clouds, which have considerable points on object surfaces, are essential to building modeling. In the last two decades, studies have developed approaches to identify structure lines following two main strategies, data-driven and model-driven. These studies have shown that automatic modeling processes depend on certain considerations, such as the thresholds used, initial values, designed formulas, and predefined cues. Following the development of laser scanning systems, scanning rates have increased and can provide point clouds with higher point density. Therefore, this study proposes using topological elevation analysis (TEA) to detect structure lines instead of threshold-dependent concepts and predefined constraints. This analysis contains two parts: data pre-processing and structure line detection. To preserve the original elevation information, a pseudo-grid for generating digital surface models is produced during the first part. The highest point in each grid cell is set as the elevation value, and its original three-dimensional position is preserved. In the second part, using TEA, the structure lines are identified based on the topology of local elevation changes in two directions. Because structure lines possess certain geometric properties, their locations have small reliefs in the radial direction and steep elevation changes in the circular direction. Following the proposed approach, TEA can be used to determine 3D line information without selecting thresholds. For validation, the TEA results are compared with those of the region growing approach. The results indicate that the proposed method can produce structure lines using dense point clouds.

  6. Three-dimensional point-cloud room model in room acoustics simulations

    DEFF Research Database (Denmark)

    Markovic, Milos; Olesen, Søren Krarup; Hammershøi, Dorte

    2013-01-01

    acquisition and its representation with a 3D point-cloud model, as well as utilization of such a model for the room acoustics simulations. A room is scanned with a commercially available input device (Kinect for Xbox360) in two different ways; the first one involves the device placed in the middle of the room...... and rotated around the vertical axis while for the second one the device is moved within the room. Benefits of both approaches were analyzed. The device's depth sensor provides a set of points in a three-dimensional coordinate system which represents scanned surfaces of the room interior. These data are used...... to build a 3D point-cloud model of the room. Several models are created to meet requirements of different room acoustics simulation algorithms: plane fitting and uniform voxel grid for geometric methods and triangulation mesh for the numerical methods. Advantages of the proposed method over the traditional...

  8. An efficient global energy optimization approach for robust 3D plane segmentation of point clouds

    Science.gov (United States)

    Dong, Zhen; Yang, Bisheng; Hu, Pingbo; Scherer, Sebastian

    2018-03-01

    Automatic 3D plane segmentation is necessary for many applications including point cloud registration, building information model (BIM) reconstruction, simultaneous localization and mapping (SLAM), and point cloud compression. However, most of the existing 3D plane segmentation methods still suffer from low precision and recall, and inaccurate and incomplete boundaries, especially for low-quality point clouds collected by RGB-D sensors. To overcome these challenges, this paper formulates the plane segmentation problem as a global energy optimization, because that formulation is robust to high levels of noise and clutter. First, the proposed method divides the raw point cloud into multiscale supervoxels, and considers planar supervoxels and individual points corresponding to nonplanar supervoxels as basic units. Then, an efficient hybrid region growing algorithm is utilized to generate an initial plane set by incrementally merging adjacent basic units with similar features. Next, the initial plane set is further enriched and refined in a mutually reinforcing manner under the framework of global energy optimization. Finally, the performance of the proposed method is evaluated with respect to six metrics (i.e., plane precision, plane recall, under-segmentation rate, over-segmentation rate, boundary precision, and boundary recall) on two benchmark datasets. Comprehensive experiments demonstrate that the proposed method obtained good performance both in high-quality TLS point clouds (i.e., http://SEMANTIC3D.NET)

  9. a Threshold-Free Filtering Algorithm for Airborne LIDAR Point Clouds Based on Expectation-Maximization

    Science.gov (United States)

    Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.

    2018-04-01

    Filtering is a key step for most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most of them require parameter setting or threshold adjusting, which is time-consuming and reduces the degree of automation of the algorithm. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization. The proposed algorithm is developed on the assumption that a point cloud can be seen as a mixture of Gaussian models, so that separating ground points from non-ground points can be recast as separating the components of a mixed Gaussian model. Expectation-maximization (EM) is applied to perform the separation: EM computes maximum likelihood estimates of the mixture parameters, and, using the estimated parameters, the likelihood of each point belonging to ground or object can be computed. After several iterations, each point can be labelled with the component of larger likelihood. Furthermore, intensity information is utilized to optimize the filtering results acquired by the EM method. The proposed algorithm was tested using two different datasets used in practice. Experimental results showed that the proposed method can filter non-ground points effectively. To quantitatively evaluate the proposed method, the dataset provided by the ISPRS was adopted for the test. The proposed algorithm obtained a 4.48 % total error, which is much lower than most of the eight classical filtering algorithms reported by the ISPRS.
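    The core idea can be sketched as fitting a two-component 1-D Gaussian mixture to elevations by EM and labelling each point by the larger responsibility. The sketch below uses synthetic, well-separated data and is not the authors' implementation (which also exploits intensity):

```python
import numpy as np

def em_two_gaussians(z: np.ndarray, iters: int = 50) -> np.ndarray:
    """EM for a 2-component 1-D Gaussian mixture over elevations z.
    Returns each point's responsibility for the lower-mean (ground-like)
    component."""
    lo, hi = z.min(), z.max()
    mu = np.array([lo + 0.25 * (hi - lo), lo + 0.75 * (hi - lo)])
    var = np.array([z.var(), z.var()]) + 1e-9
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior probability of each component for every point
        pdf = pi / np.sqrt(2 * np.pi * var) * np.exp(-(z[:, None] - mu) ** 2 / (2 * var))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances from responsibilities
        n = r.sum(axis=0)
        mu = (r * z[:, None]).sum(axis=0) / n
        var = (r * (z[:, None] - mu) ** 2).sum(axis=0) / n + 1e-9
        pi = n / len(z)
    return r[:, np.argmin(mu)]

# Synthetic scene: 500 low (ground-like) and 200 high (object-like) elevations
rng = np.random.default_rng(0)
z = np.concatenate([rng.normal(2.0, 0.3, 500), rng.normal(10.0, 1.0, 200)])
is_ground = em_two_gaussians(z) > 0.5
print(is_ground.sum())  # most of the 500 low points are labelled ground
```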

  10. Combined discrete nebulization and microextraction process for molybdenum determination by flame atomic absorption spectrometry (FAAS)

    International Nuclear Information System (INIS)

    Oviedo, Jenny A.; Jesus, Amanda M.D. de; Fialho, Lucimar L.; Pereira-Filho, Edenir R.

    2014-01-01

    Simple and sensitive procedures for the extraction/preconcentration of molybdenum based on vortex-assisted solidified floating organic drop microextraction (VA-SFODME) and cloud point extraction, combined with flame atomic absorption spectrometry (FAAS) and discrete nebulization, were developed. The influence of the discrete nebulization on the sensitivity of the molybdenum preconcentration processes was studied. An injection volume of 200 μL resulted in a lower relative standard deviation with both preconcentration procedures. Enrichment factors of 31 and 67 and limits of detection of 25 and 5 μg L−1 were obtained for cloud point extraction and VA-SFODME, respectively. The developed procedures were applied to the determination of Mo in mineral water and multivitamin samples. (author)

  11. Combined discrete nebulization and microextraction process for molybdenum determination by flame atomic absorption spectrometry (FAAS); Avaliacao da combinacao da nebulizacao discreta e processos de microextracao aplicados a determinacao de molibdenio por espectrometria de absorcao atomica com chama (FAAS)

    Energy Technology Data Exchange (ETDEWEB)

    Oviedo, Jenny A.; Jesus, Amanda M.D. de; Fialho, Lucimar L.; Pereira-Filho, Edenir R., E-mail: erpf@ufscar.br [Universidade Federal de Sao Carlos (UFSCar), SP (Brazil). Departamento de Quimica

    2014-04-15

    Simple and sensitive procedures for the extraction/preconcentration of molybdenum based on vortex-assisted solidified floating organic drop microextraction (VA-SFODME) and cloud point extraction, combined with flame atomic absorption spectrometry (FAAS) and discrete nebulization, were developed. The influence of the discrete nebulization on the sensitivity of the molybdenum preconcentration processes was studied. An injection volume of 200 μL resulted in a lower relative standard deviation with both preconcentration procedures. Enrichment factors of 31 and 67 and limits of detection of 25 and 5 μg L−1 were obtained for cloud point extraction and VA-SFODME, respectively. The developed procedures were applied to the determination of Mo in mineral water and multivitamin samples. (author)

  12. Biotoxicity and bioavailability of hydrophobic organic compounds solubilized in nonionic surfactant micelle phase and cloud point system.

    Science.gov (United States)

    Pan, Tao; Liu, Chunyan; Zeng, Xinying; Xin, Qiao; Xu, Meiying; Deng, Yangwu; Dong, Wei

    2017-06-01

    A recent work has shown that hydrophobic organic compounds solubilized in the micelle phase of some nonionic surfactants present substrate toxicity to microorganisms with increasing bioavailability. However, in cloud point systems, biotoxicity is prevented, because the compounds are solubilized into a coacervate phase, thereby leaving a fraction of compounds with cells in a dilute phase. This study extends the understanding of the relationship between substrate toxicity and bioavailability of hydrophobic organic compounds solubilized in nonionic surfactant micelle phase and cloud point system. Biotoxicity experiments were conducted with naphthalene and phenanthrene in the presence of mixed nonionic surfactants Brij30 and TMN-3, which formed a micelle phase or cloud point system at different concentrations. Saccharomyces cerevisiae, unable to degrade these compounds, was used for the biotoxicity experiments. Glucose in the cloud point system was consumed faster than in the nonionic surfactant micelle phase, indicating that the solubilized compounds had increased toxicity to cells in the nonionic surfactant micelle phase. The results were verified by subsequent biodegradation experiments. The compounds were degraded faster by PAH-degrading bacterium in the cloud point system than in the micelle phase. All these results showed that biotoxicity of the hydrophobic organic compounds increases with bioavailability in the surfactant micelle phase but remains at a low level in the cloud point system. These results provide a guideline for the application of cloud point systems as novel media for microbial transformation or biodegradation.

  13. Cloud point extraction coupled with microwave-assisted back-extraction (CPE-MABE) for determination of Eszopiclone (Z-drug) using UV-Visible, HPLC and mass spectroscopic (MS) techniques: Spiked and in vivo analysis.

    Science.gov (United States)

    Kori, Shivpoojan; Parmar, Ankush; Goyal, Jony; Sharma, Shweta

    2018-02-01

    A procedure for the determination of Eszopiclone (ESZ) from complex matrices, both in vitro (spiked matrices) and in vivo (mice model), was developed using cloud point extraction coupled with microwave-assisted back-extraction (CPE-MABE). Analytical measurements were carried out using UV-Visible, HPLC and MS techniques. The proposed method has been validated according to ICH guidelines, and the reproducibility and reliability of the protocol were assessed through intra-day and inter-day precision (UV-Visible) over the assessed linearity range. The coacervate phase in CPE was back-extracted under microwave exposure with isooctane, at a preconcentration factor of ~50 when 5 mL of sample solution was preconcentrated to 0.1 mL. Under optimized conditions, i.e. aqueous Triton X-114 4% (w/v), pH 4.0, NaCl 4% (w/v) and an equilibrium temperature of 45°C for 20 min, average extraction recoveries between 89.8 and 99.2% and 84.0-99.2% were obtained from UV-Visible and HPLC analysis, respectively. The method has been successfully applied to the pharmacokinetic estimation (post intraperitoneal administration) of ESZ in mice. MS analysis precisely depicted the presence of active N‑desmethyl zopiclone in samples as well as in mice plasma.

  14. High-throughput liquid-absorption preconcentrator sampling methods

    Science.gov (United States)

    Zaromb, Solomon

    1994-01-01

    A system for detecting trace concentrations of an analyte in air includes a preconcentrator for the analyte and an analyte detector. The preconcentrator includes an elongated tubular container comprising a wettable material. The wettable material is continuously wetted with an analyte-sorbing liquid which flows from one part of the container to a lower end. Sampled air flows through the container in contact with the wetted material with a swirling motion which results in efficient transfer of analyte vapors or aerosol particles to the sorbing liquid and preconcentration of traces of analyte in the liquid. The preconcentrated traces of analyte may be either detected within the container or removed therefrom for injection into a separate detection means or for subsequent analysis.

  15. CO-REGISTRATION AIRBORNE LIDAR POINT CLOUD DATA AND SYNCHRONOUS DIGITAL IMAGE REGISTRATION BASED ON COMBINED ADJUSTMENT

    Directory of Open Access Journals (Sweden)

    Z. H. Yang

    2016-06-01

    Full Text Available To address the problem of co-registering airborne laser point cloud data with synchronously acquired digital images, this paper proposes a registration method based on combined adjustment. By integrating tie points and point cloud data with elevation-constraint pseudo-observations, and using the principle of least-squares adjustment to solve for corrections to the exterior orientation elements of each image, high-precision registration results can be obtained. In order to ensure the reliability of the tie points and the effectiveness of the pseudo-observations, this paper also proposes a point-cloud-constrained SIFT matching and optimization method, which ensures that the tie points are located on flat terrain. In experiments with airborne laser point cloud data and synchronous digital images, there is about 43 pixels of error in image space when using the original POS data. If only the bore-sight of the POS system is considered, 1.3 pixels of error remain in image space. The proposed method treats the corrections of the exterior orientation elements of each image as unknowns, and the errors are reduced to 0.15 pixels.
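The least-squares machinery behind such a combined adjustment can be shown in miniature (a hypothetical two-parameter system with one heavily weighted pseudo-observation, not the paper's full exterior-orientation model):

```python
# Minimal sketch of the combined-adjustment idea: observation equations for
# tie points and pseudo-observations are stacked into one weighted
# least-squares system, and the corrections x solve (A^T P A) x = A^T P l.
# The 2-parameter system below is a made-up toy example.

def solve(A, P, l):
    """Weighted least squares via normal equations (tiny dense solver)."""
    n = len(A[0])
    # N = A^T P A, u = A^T P l  (P is a diagonal weight list)
    N = [[sum(A[k][i] * P[k] * A[k][j] for k in range(len(A)))
          for j in range(n)] for i in range(n)]
    u = [sum(A[k][i] * P[k] * l[k] for k in range(len(A))) for i in range(n)]
    # Gaussian elimination with back substitution
    for i in range(n):
        for j in range(i + 1, n):
            f = N[j][i] / N[i][i]
            for c in range(n):
                N[j][c] -= f * N[i][c]
            u[j] -= f * u[i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (u[i] - sum(N[i][j] * x[j] for j in range(i + 1, n))) / N[i][i]
    return x

# Two tie-point equations plus one heavily weighted elevation
# pseudo-observation constraining the sum of the two corrections.
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
P = [1.0, 1.0, 100.0]   # the pseudo-observation gets a large weight
l = [0.1, 0.2, 0.3]
print(solve(A, P, l))   # ≈ [0.1, 0.2]
```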

  16. MULTISEASONAL TREE CROWN STRUCTURE MAPPING WITH POINT CLOUDS FROM OTS QUADROCOPTER SYSTEMS

    Directory of Open Access Journals (Sweden)

    S. Hese

    2017-08-01

    Full Text Available OTS (Off The Shelf) quadrocopter systems provide a cost effective (below 2000 Euro), flexible and mobile platform for high resolution point cloud mapping. Various studies have shown the full potential of these small and flexible platforms. Especially in very tight and complex 3D environments, the automatic obstacle avoidance, low copter weight, long flight times and precise maneuvering are important advantages of these small OTS systems in comparison with larger octocopter systems. This study examines the potential of the DJI Phantom 4 Pro series and the Phantom 3A series for within-stand and forest tree crown 3D point cloud mapping, using both within-stand oblique imaging at different altitude levels and data captured from a nadir perspective. On a test site in Brandenburg/Germany a beech crown was selected and measured at 3 different altitude levels in Point Of Interest (POI) mode with oblique data capturing, and one nadir mosaic was derived with 85/85 % overlap using Drone Deploy automatic mapping software. Three different flight campaigns were performed, one in September 2016 (leaf-on), one in March 2017 (leaf-off) and one in May 2017 (leaf-on), to derive point clouds from different crown structure and phenological situations – covering the leaf-on and leaf-off status of the tree crown. After height correction, the point clouds were used with GPS georeferencing to calculate voxel based densities on 50 × 10 × 10 cm voxel definitions using a topological network of chessboard image objects in 0.5 m height steps in an object based image processing environment. Comparison between leaf-off and leaf-on status was done on volume pixel definitions, comparing the attributed point densities per volume and plotting the resulting values as a function of distance to the crown center. In the leaf-off status, SFM (structure from motion) algorithms clearly identified the central stem and also secondary branch systems. While the penetration into the

  17. Multiseasonal Tree Crown Structure Mapping with Point Clouds from OTS Quadrocopter Systems

    Science.gov (United States)

    Hese, S.; Behrendt, F.

    2017-08-01

    OTS (Off The Shelf) quadrocopter systems provide a cost effective (below 2000 Euro), flexible and mobile platform for high resolution point cloud mapping. Various studies showed the full potential of these small and flexible platforms. Especially in very tight and complex 3D environments the automatic obstacle avoidance, low copter weight, long flight times and precise maneuvering are important advantages of these small OTS systems in comparison with larger octocopter systems. This study examines the potential of the DJI Phantom 4 Pro series and the Phantom 3A series for within-stand and forest tree crown 3D point cloud mapping using both within-stand oblique imaging at different altitude levels and data captured from a nadir perspective. On a test site in Brandenburg/Germany a beech crown was selected and measured at 3 different altitude levels in Point Of Interest (POI) mode with oblique data capturing and deriving one nadir mosaic created with 85/85 % overlap using Drone Deploy automatic mapping software. Three different flight campaigns were performed, one in September 2016 (leaf-on), one in March 2017 (leaf-off) and one in May 2017 (leaf-on) to derive point clouds from different crown structure and phenological situations - covering the leaf-on and leaf-off status of the tree crown. After height correction, the point clouds were used with GPS georeferencing to calculate voxel based densities on 50 × 10 × 10 cm voxel definitions using a topological network of chessboard image objects in 0.5 m height steps in an object based image processing environment. Comparison between leaf-off and leaf-on status was done on volume pixel definitions comparing the attributed point densities per volume and plotting the resulting values as a function of distance to the crown center. In the leaf-off status SFM (structure from motion) algorithms clearly identified the central stem and also secondary branch systems. While the penetration into the crown

  18. Sloped Terrain Segmentation for Autonomous Drive Using Sparse 3D Point Cloud

    Directory of Open Access Journals (Sweden)

    Seoungjae Cho

    2014-01-01

    Full Text Available A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous drive. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data is eliminated. We reduce nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We execute ground segmentation in real time by proposing an approach to minimize the comparison between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed at about 19.31 ms per frame.
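The voxel-plus-lowermost-heightmap idea can be sketched as follows (parameter values are illustrative, not the authors'):

```python
# Sketch of the core idea: quantize 3D points into voxels, collapse each
# (x, y) column to its lowermost height (a "lowermost heightmap"), then keep
# a point as ground if it lies near the lowest height of its column.
# VOXEL and height_tol are assumed values, not from the paper.

VOXEL = 0.5  # voxel edge length in metres (assumed)

def segment_ground(points, height_tol=0.3):
    """Return the subset of (x, y, z) points classified as ground."""
    lowest = {}  # (ix, iy) column index -> lowest z seen in that column
    for x, y, z in points:
        key = (int(x // VOXEL), int(y // VOXEL))
        if key not in lowest or z < lowest[key]:
            lowest[key] = z
    # a point is ground if it sits within height_tol of its column minimum
    return [p for p in points
            if p[2] - lowest[(int(p[0] // VOXEL), int(p[1] // VOXEL))] <= height_tol]

pts = [(0.1, 0.1, 0.0), (0.2, 0.15, 0.05), (0.1, 0.2, 1.8),  # 1.8 m: obstacle
       (3.0, 3.0, 0.2), (3.1, 3.0, 0.25)]
print(segment_ground(pts))  # the 1.8 m point is rejected as non-ground
```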

  19. A hierarchical methodology for urban facade parsing from TLS point clouds

    Science.gov (United States)

    Li, Zhuqiang; Zhang, Liqiang; Mathiopoulos, P. Takis; Liu, Fangyu; Zhang, Liang; Li, Shuaipeng; Liu, Hao

    2017-01-01

    The effective and automated parsing of building facades from terrestrial laser scanning (TLS) point clouds of urban environments is an important research topic in the GIS and remote sensing fields. It is also challenging because of the complexity and great variety of the available 3D building facade layouts, as well as the noise and missing data of the input TLS point clouds. In this paper, we introduce a novel methodology for the accurate and computationally efficient parsing of urban building facades from TLS point clouds. The main novelty of the proposed methodology is that it is a systematic and hierarchical approach that considers, in an adaptive way, the semantic and underlying structures of the urban facades for segmentation and subsequent accurate modeling. Firstly, the available input point cloud is decomposed into depth planes based on a data-driven method; such layer decomposition enables similarity detection in each depth plane layer. Secondly, the labeling of the facade elements is performed using the SVM classifier in combination with our proposed BieS-ScSPM algorithm. The labeling outcome is then augmented with weak architectural knowledge. Thirdly, least-squares fitted normalized gray accumulative curves are applied to detect regular structures, and a binarization dilation extraction algorithm is used to partition facade elements. A dynamic line-by-line division is further applied to extract the boundaries of the elements. The 3D geometrical facade models are then reconstructed by optimizing facade elements across depth plane layers. We have evaluated the performance of the proposed method using several TLS facade datasets. Qualitative and quantitative performance comparisons with several other state-of-the-art methods dealing with the same facade parsing problem have demonstrated its superiority in performance and its effectiveness in improving segmentation accuracy.

  20. POINT CLOUD DERIVED FROM VIDEO FRAMES: ACCURACY ASSESSMENT IN RELATION TO TERRESTRIAL LASER SCANNING AND DIGITAL CAMERA DATA

    Directory of Open Access Journals (Sweden)

    P. Delis

    2017-02-01

    Full Text Available The use of image sequences in the form of video frames recorded on data storage is especially useful when working with large and complex structures. Two cameras were used in this study: a Sony NEX-5N (for the test object) and a Sony NEX-VG10 E (for the historic building). In both cases, a Sony α f = 16 mm fixed focus wide-angle lens was used. Single frames with sufficient overlap were selected from the video sequence using an equation for automatic frame selection. In order to improve the quality of the generated point clouds, each video frame underwent histogram equalization and image sharpening. Point clouds were generated from the video frames using an SGM-like image matching algorithm. The accuracy assessment was based on two reference point clouds: the first from terrestrial laser scanning and the second generated from images acquired using a high resolution camera, the Nikon D800. The research has shown that the highest accuracies are obtained for point clouds generated from video frames to which high-pass filtering and histogram equalization had been applied. The studies have also shown that, to obtain a point cloud density comparable to TLS, the overlap between subsequent video frames must be 85 % or more. Based on the point cloud generated from video data, a parametric 3D model can be generated. This type of 3D model can be used in HBIM construction.
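The histogram equalization applied to each video frame is the classic 8-bit procedure, sketched here for a toy "image" (the paper does not specify its exact implementation):

```python
# Classic 8-bit histogram equalization: map each intensity through the
# normalized cumulative histogram so a low-contrast frame uses the full
# 0..255 range. The 5-pixel "frame" below is a made-up illustration.

def equalize(gray):
    """Equalize a flat list of 8-bit intensities; returns a new list."""
    hist = [0] * 256
    for v in gray:
        hist[v] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)  # first non-zero CDF value
    n = len(gray)
    # classic formula: round((cdf - cdf_min) / (n - cdf_min) * 255)
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * 255) for c in cdf]
    return [lut[v] for v in gray]

frame = [50, 50, 51, 52, 200]  # low-contrast toy "image"
print(equalize(frame))         # [0, 0, 85, 170, 255]: stretched to full range
```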

  1. Automated extraction and analysis of rock discontinuity characteristics from 3D point clouds

    Science.gov (United States)

    Bianchetti, Matteo; Villa, Alberto; Agliardi, Federico; Crosta, Giovanni B.

    2016-04-01

    A reliable characterization of fractured rock masses requires an exhaustive geometrical description of discontinuities, including orientation, spacing, and size. These are required to describe discontinuum rock mass structure, perform Discrete Fracture Network and DEM modelling, or provide input for rock mass classification or equivalent continuum estimate of rock mass properties. Although several advanced methodologies have been developed in the last decades, a complete characterization of discontinuity geometry in practice is still challenging, due to scale-dependent variability of fracture patterns and difficult accessibility to large outcrops. Recent advances in remote survey techniques, such as terrestrial laser scanning and digital photogrammetry, allow a fast and accurate acquisition of dense 3D point clouds, which promoted the development of several semi-automatic approaches to extract discontinuity features. Nevertheless, these often need user supervision on algorithm parameters which can be difficult to assess. To overcome this problem, we developed an original Matlab tool, allowing fast, fully automatic extraction and analysis of discontinuity features with no requirements on point cloud accuracy, density and homogeneity. The tool consists of a set of algorithms which: (i) process raw 3D point clouds, (ii) automatically characterize discontinuity sets, (iii) identify individual discontinuity surfaces, and (iv) analyse their spacing and persistence. The tool operates in either a supervised or unsupervised mode, starting from an automatic preliminary exploration data analysis. The identification and geometrical characterization of discontinuity features is divided in steps. First, coplanar surfaces are identified in the whole point cloud using K-Nearest Neighbor and Principal Component Analysis algorithms optimized on point cloud accuracy and specified typical facet size. 
Then, discontinuity set orientation is calculated using Kernel Density Estimation and

  2. Determination of cadmium(II), cobalt(II), nickel(II), lead(II), zinc(II), and copper(II) in water samples using dual-cloud point extraction and inductively coupled plasma emission spectrometry.

    Science.gov (United States)

    Zhao, Lingling; Zhong, Shuxian; Fang, Keming; Qian, Zhaosheng; Chen, Jianrong

    2012-11-15

    A dual-cloud point extraction (d-CPE) procedure has been developed for the simultaneous pre-concentration and separation of heavy metal ions (Cd²⁺, Co²⁺, Ni²⁺, Pb²⁺, Zn²⁺, and Cu²⁺) in water samples prior to their determination by inductively coupled plasma optical emission spectrometry (ICP-OES). The procedure is based on extracting complexes of the metal ions with 8-hydroxyquinoline (8-HQ) into the as-formed Triton X-114 surfactant-rich phase. Instead of direct injection or analysis, the surfactant-rich phase containing the complexes was treated with nitric acid, and the analyte ions were back-extracted into the aqueous phase in a second cloud point extraction stage and finally determined by ICP-OES. Under the optimum conditions (pH = 7.0, Triton X-114 = 0.05% (w/v), 8-HQ = 2.0 × 10⁻⁴ mol L⁻¹, HNO₃ = 0.8 mol L⁻¹), the detection limits for Cd²⁺, Co²⁺, Ni²⁺, Pb²⁺, Zn²⁺, and Cu²⁺ were 0.01, 0.04, 0.01, 0.34, 0.05, and 0.04 μg L⁻¹, respectively. Relative standard deviation (RSD) values for 10 replicates at 100 μg L⁻¹ were lower than 6.0%. The proposed method could be successfully applied to the determination of Cd²⁺, Co²⁺, Ni²⁺, Pb²⁺, Zn²⁺, and Cu²⁺ in water samples. Copyright © 2012 Elsevier B.V. All rights reserved.

  3. Feasibility of Smartphone Based Photogrammetric Point Clouds for the Generation of Accessibility Maps

    Science.gov (United States)

    Angelats, E.; Parés, M. E.; Kumar, P.

    2018-05-01

    Accessible cities with accessible services are a long-standing demand of people with reduced mobility. But this demand is still far from becoming a reality, as much work remains to be done. The first step towards accessible cities is to know the real situation of the cities and their pavement infrastructure. Detailed maps or databases on street slopes, access to sidewalks, mobility in public parks and gardens, etc. are required. In this paper, we propose to use smartphone based photogrammetric point clouds as a starting point to create accessibility maps or databases. The paper analyses the performance of these point clouds and the complexity of the image acquisition procedure required to obtain them. It proves, through two test cases, that smartphone technology is an economical and feasible solution to obtain the required information, which is quite often sought by city planners to generate accessibility maps. The proposed approach paves the way to generating, in the near term, accessibility maps through the use of point clouds derived from crowdsourced smartphone imagery.

  4. Study into Point Cloud Geometric Rigidity and Accuracy of TLS-Based Identification of Geometric Bodies

    Science.gov (United States)

    Klapa, Przemyslaw; Mitka, Bartosz; Zygmunt, Mariusz

    2017-12-01

    The capability of obtaining a multimillion point cloud in a very short time has made Terrestrial Laser Scanning (TLS) a widely used tool in many fields of science and technology. The TLS accuracy matches traditional devices used in land surveying (tacheometry, GNSS - RTK), but like any measurement it is burdened with error, which affects the precise identification of objects based on their image in the form of a point cloud. The coordinates of each point are determined indirectly, by measuring angles and calculating the travel time of the electromagnetic wave. Each such component has a measurement error which is translated into the final result. The XYZ coordinates of a measured point are therefore determined with some uncertainty, and the accuracy of these coordinates decreases as the distance to the instrument increases. The paper presents the results of an examination of the geometrical stability of a point cloud obtained by means of a terrestrial laser scanner, and an accuracy evaluation of solids determined using the cloud. A Leica P40 scanner and two different configurations of measuring points were used in the tests. The first concept involved placing a few balls in the field and then scanning them from various sides at similar distances. The second part of the measurement involved placing balls and scanning them a few times from one side but at varying distances from the instrument to the object. Each measurement encompassed a scan of the object with automatic determination of its position and geometry. The desk studies involved a semiautomatic fitting of solids, measurement of their geometrical elements, and comparison of the parameters that determine their geometry and location in space. The differences in the measured geometrical elements of the balls and in the translation vectors of the solids' centres indicate the geometrical changes of the point cloud depending on the scanning distance and parameters. The results indicate the changes in the geometry of scanned objects

  5. Effect of target color and scanning geometry on terrestrial LiDAR point-cloud noise and plane fitting

    Science.gov (United States)

    Bolkas, Dimitrios; Martinez, Aaron

    2018-01-01

    Point-cloud coordinate information derived from terrestrial Light Detection And Ranging (LiDAR) is important for several applications in surveying and civil engineering. Plane fitting and segmentation of target-surfaces is an important step in several applications such as in the monitoring of structures. Reliable parametric modeling and segmentation relies on the underlying quality of the point-cloud. Therefore, understanding how point-cloud errors affect fitting of planes and segmentation is important. Point-cloud intensity, which accompanies the point-cloud data, often goes hand-in-hand with point-cloud noise. This study uses industrial particle boards painted with eight different colors (black, white, grey, red, green, blue, brown, and yellow) and two different sheens (flat and semi-gloss) to explore how noise and plane residuals vary with scanning geometry (i.e., distance and incidence angle) and target-color. Results show that darker colors, such as black and brown, can produce point clouds that are several times noisier than bright targets, such as white. In addition, semi-gloss targets manage to reduce noise in dark targets by about 2-3 times. The study of plane residuals with scanning geometry reveals that, in many of the cases tested, residuals decrease with increasing incidence angles, which can assist in understanding the distribution of plane residuals in a dataset. Finally, a scheme is developed to derive survey guidelines based on the data collected in this experiment. Three examples demonstrate that users should consider instrument specification, required precision of plane residuals, required point-spacing, target-color, and target-sheen, when selecting scanning locations. Outcomes of this study can aid users to select appropriate instrumentation and improve planning of terrestrial LiDAR data-acquisition.
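Plane fitting and residual computation of the kind this study relies on can be sketched as follows (a generic least-squares fit of z = ax + by + c in pure Python; the study does not publish its fitting code, and the sample points are invented):

```python
# Least-squares plane fit z = a*x + b*y + c via 3x3 normal equations,
# returning the coefficients and the RMS of the residuals that studies like
# this one compare across target colors and scanning geometries.

def fit_plane(points):
    """Fit z = a*x + b*y + c to (x, y, z) points; return (a, b, c, rms)."""
    N = [[0.0] * 3 for _ in range(3)]  # normal matrix A^T A
    u = [0.0] * 3                      # right-hand side A^T z
    for x, y, z in points:
        row = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                N[i][j] += row[i] * row[j]
            u[i] += row[i] * z
    # Gaussian elimination with back substitution
    for i in range(3):
        for j in range(i + 1, 3):
            f = N[j][i] / N[i][i]
            for c in range(3):
                N[j][c] -= f * N[i][c]
            u[j] -= f * u[i]
    coef = [0.0] * 3
    for i in range(2, -1, -1):
        coef[i] = (u[i] - sum(N[i][j] * coef[j] for j in range(i + 1, 3))) / N[i][i]
    a, b, c = coef
    res = [z - (a * x + b * y + c) for x, y, z in points]
    rms = (sum(r * r for r in res) / len(res)) ** 0.5
    return a, b, c, rms

pts = [(0, 0, 1.0), (1, 0, 1.1), (0, 1, 1.2), (2, 1, 1.4)]  # exact plane
a, b, c, rms = fit_plane(pts)
print(round(a, 3), round(b, 3), round(c, 3), round(rms, 6))
```

On a noisy scan the residuals would not vanish; their RMS is the quantity the study tracks against target color and incidence angle.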

  6. An Efficient Method to Create Digital Terrain Models from Point Clouds Collected by Mobile LiDAR Systems

    Science.gov (United States)

    Gézero, L.; Antunes, C.

    2017-05-01

    Digital terrain models (DTM) play an essential role in all types of road maintenance, water supply and sanitation projects. The demand for such information is greater in developing countries, where the lack of infrastructure is higher. In recent years, the use of Mobile LiDAR Systems (MLS) has proved to be a very efficient technique for the acquisition of precise and dense point clouds. These point clouds can be a solution for obtaining the data needed to produce DTM in remote areas, mainly due to the safety, precision, speed of acquisition and detail of the information gathered. However, filtering the point clouds and finding algorithms that separate "terrain points" from "non-terrain points" quickly and consistently remain a challenge that has caught the interest of researchers. This work presents a method to create a DTM from point clouds collected by MLS. The method is based on two interactive steps. The first step reduces the point cloud to a set of points that represent the terrain's shape, with the distance between points inversely proportional to the terrain variation. The second step is based on the Delaunay triangulation of the points resulting from the first step. The achieved results encourage a wider use of this technology as a solution for large scale DTM production in remote areas.
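The first of the two steps, thinning the cloud to points that represent the terrain's shape, can be sketched with a simple lowest-point-per-cell filter (the fixed cell size below is an assumption; the paper varies point spacing with terrain variation, and the second step would triangulate the kept points, e.g. with a Delaunay routine):

```python
# Sketch of the terrain-point reduction step: keep only the lowest point in
# each planimetric grid cell, discarding vegetation/vehicle returns above it.
# The cell size is an assumed parameter, not the paper's adaptive spacing.

def lowest_per_cell(points, cell=1.0):
    """Reduce an (x, y, z) cloud to the lowest point per (x, y) grid cell."""
    best = {}
    for p in points:
        key = (int(p[0] // cell), int(p[1] // cell))
        if key not in best or p[2] < best[key][2]:
            best[key] = p
    return sorted(best.values())

cloud = [(0.2, 0.3, 10.0), (0.4, 0.1, 9.7),   # same cell: keep z = 9.7
         (1.5, 0.2, 9.9), (1.7, 0.9, 11.5)]   # 11.5 m: likely vegetation
print(lowest_per_cell(cloud))  # one candidate terrain point per cell
```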

  7. Raster Vs. Point Cloud LiDAR Data Classification

    Science.gov (United States)

    El-Ashmawy, N.; Shaker, A.

    2014-09-01

    Airborne Laser Scanning systems with light detection and ranging (LiDAR) technology are one of the fast and accurate 3D point data acquisition techniques. Generating accurate digital terrain and/or surface models (DTM/DSM) is the main application of collecting LiDAR range data. Recently, LiDAR range and intensity data have also been used for land cover classification applications. Range and intensity data (the strength of the backscattered signals measured by the LiDAR systems) are affected by the flying height, the ground elevation, the scanning angle and the physical characteristics of the object surfaces. These effects may lead to an uneven distribution of the point cloud, or to gaps that may affect the classification process. Researchers have investigated the conversion of LiDAR range point data to raster images for terrain modelling. Interpolation techniques have been used to achieve the best representation of surfaces and to fill the gaps between the LiDAR footprints. Interpolation methods have also been investigated to generate LiDAR range and intensity image data for land cover classification applications. In this paper, a different approach is followed to classify the LiDAR data (range and intensity) for land cover mapping. The methodology relies on classifying the point cloud data based on their range and intensity and then converting the classified points into a raster image. The gaps in the data are filled based on the classes of the nearest neighbour. Land cover maps are produced using two approaches: (a) the conventional raster image data based on point interpolation; and (b) the proposed point data classification. A study area covering an urban district in Burnaby, British Columbia, Canada, is selected to compare the results of the two approaches. Five different land cover classes can be distinguished in that area: buildings, roads and parking areas, trees, low vegetation (grass), and bare soil.
The results show that an improvement of around 10 % in the

  8. Cloud point extraction and flame atomic absorption spectrometric determination of cadmium(II), lead(II), palladium(II) and silver(I) in environmental samples

    International Nuclear Information System (INIS)

    Ghaedi, Mehrorang; Shokrollahi, Ardeshir; Niknam, Khodabakhsh; Niknam, Ebrahim; Najibi, Asma; Soylak, Mustafa

    2009-01-01

    The phase-separation phenomenon of non-ionic surfactants occurring in aqueous solution was used for the extraction of cadmium(II), lead(II), palladium(II) and silver(I). The analytical procedure involved the formation of complexes of the metals under study with bis((1H-benzo[d]imidazol-2-yl)ethyl) sulfane (BIES), which were quantitatively extracted into the phase rich in octylphenoxypolyethoxyethanol (Triton X-114) after centrifugation. Methanol acidified with 1 mol L⁻¹ HNO₃ was added to the surfactant-rich phase prior to its analysis by flame atomic absorption spectrometry (FAAS). The concentration of BIES, the pH and the amount of surfactant (Triton X-114) were optimized. At optimum conditions, detection limits (3s_b/m) of 1.4, 2.8, 1.6 and 1.4 ng mL⁻¹ for Cd²⁺, Pb²⁺, Pd²⁺ and Ag⁺, along with preconcentration factors of 30 and enrichment factors of 48, 39, 32 and 42 for Cd²⁺, Pb²⁺, Pd²⁺ and Ag⁺, respectively, were obtained. The proposed cloud point extraction has been successfully applied to the determination of the metal ions in real samples with complicated matrices such as radiology waste, vegetables, blood and urine.

  9. Registration of TLS and MLS Point Cloud Combining Genetic Algorithm with ICP

    Directory of Open Access Journals (Sweden)

    YAN Li

    2018-04-01

    Full Text Available Large scene point clouds can be quickly acquired by mobile laser scanning (MLS) technology, which needs to be supplemented by terrestrial laser scanning (TLS) point clouds because of its limited field of view and occlusions. MLS and TLS point clouds are located in a geodetic coordinate system and a local coordinate system, respectively. This paper proposes an automatic registration method that combines a genetic algorithm (GA) and iterative closest point (ICP) to achieve a uniform coordinate reference frame. ICP uses a local optimizer: its efficiency is higher than that of GA registration, but it depends on an initial solution. GA is a global optimizer, but it is inefficient. The combining strategy is that ICP is enabled to complete the registration once the GA tends towards local search. The rough position measured by the built-in GPS of the terrestrial laser scanner is used in the GA registration to limit its optimizing search space. To improve the GA registration accuracy, a maximum registration model called the normalized sum of matching scores (NSMS) is presented. The results for measured data show that the NSMS model is effective, the root mean square error (RMSE) of GA registration is 1-5 cm, and the registration efficiency can be improved by about 50% by combining GA with ICP.
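As a toy illustration of why ICP needs a reasonable initial solution and how it refines one, here is a translation-only 2D ICP sketch (the paper's method estimates full 3D rigid transforms and seeds ICP with the GA/GPS-based coarse alignment; everything below, including the point sets, is simplified and invented):

```python
# Translation-only 2D ICP: repeatedly pair each source point with its nearest
# destination point, then apply the closed-form update (the mean residual).
# A coarse guess (here just (0, 0)) plays the role of the GA/GPS seed.

def icp_translation(src, dst, iters=20):
    """Estimate the translation (tx, ty) aligning src onto dst."""
    tx, ty = 0.0, 0.0
    for _ in range(iters):
        pairs = []
        for x, y in src:
            sx, sy = x + tx, y + ty
            nx, ny = min(dst, key=lambda q: (q[0] - sx) ** 2 + (q[1] - sy) ** 2)
            pairs.append(((sx, sy), (nx, ny)))
        # closed-form step: mean residual of the current pairing
        dx = sum(q[0] - p[0] for p, q in pairs) / len(pairs)
        dy = sum(q[1] - p[1] for p, q in pairs) / len(pairs)
        tx, ty = tx + dx, ty + dy
        if abs(dx) + abs(dy) < 1e-9:
            break  # converged
    return tx, ty

dst = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 2.0)]
src = [(x - 0.4, y + 0.3) for x, y in dst]  # dst shifted by (-0.4, +0.3)
print(icp_translation(src, dst))            # converges near (0.4, -0.3)
```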

  10. Continuous Extraction of Subway Tunnel Cross Sections Based on Terrestrial Point Clouds

    Directory of Open Access Journals (Sweden)

    Zhizhong Kang

    2014-01-01

    Full Text Available An efficient method for the continuous extraction of subway tunnel cross sections using terrestrial point clouds is proposed. First, the continuous central axis of the tunnel is extracted using a 2D projection of the point cloud and curve fitting using the RANSAC (RANdom SAmple Consensus) algorithm, and the axis is optimized using a global extraction strategy based on segment-wise fitting. The cross-sectional planes, which are orthogonal to the central axis, are then determined for every interval. The cross-sectional points are extracted by intersecting straight lines that rotate orthogonally around the central axis within the cross-sectional plane with the tunnel point cloud. An interpolation algorithm based on quadric parametric surface fitting, using the BaySAC (Bayesian SAmpling Consensus) algorithm, is proposed to compute the cross-sectional point when it cannot be acquired directly from the tunnel points along the extraction direction of interest. Because the standard shape of the tunnel cross section is a circle, circle fitting is implemented using RANSAC to reduce the noise. The proposed approach is tested on terrestrial point clouds that cover a 150-m-long segment of a Shanghai subway tunnel, which were acquired using a LMS VZ-400 laser scanner. The results indicate that the proposed quadric parametric surface fitting using the optimized BaySAC achieves a higher overall fitting accuracy (0.9 mm) than the accuracy (1.6 mm) obtained by the plain RANSAC. The results also show that the proposed cross section extraction algorithm can achieve high accuracy (millimeter level, which was assessed by comparing the fitted radii with the designed radius of the cross section and comparing corresponding chord lengths in different cross sections) and high efficiency (less than 3 s/section on average).
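The RANSAC circle-fitting step described above can be sketched as follows (a standard RANSAC loop in Python; the iteration count, inlier tolerance and data are illustrative, not the paper's):

```python
# RANSAC circle fitting: repeatedly pick 3 points, build their circumscribed
# circle, and keep the circle that explains the most inliers, so a gross
# outlier in the cross-sectional points cannot bias the fitted radius.
import math
import random

def circle_from_3(p1, p2, p3):
    """Circumscribed circle (cx, cy, r) of three non-collinear 2D points."""
    ax, ay = p1
    bx, by = p2
    cx_, cy_ = p3
    d = 2 * (ax * (by - cy_) + bx * (cy_ - ay) + cx_ * (ay - by))
    ux = ((ax * ax + ay * ay) * (by - cy_) + (bx * bx + by * by) * (cy_ - ay)
          + (cx_ * cx_ + cy_ * cy_) * (ay - by)) / d
    uy = ((ax * ax + ay * ay) * (cx_ - bx) + (bx * bx + by * by) * (ax - cx_)
          + (cx_ * cx_ + cy_ * cy_) * (bx - ax)) / d
    return ux, uy, math.hypot(ax - ux, ay - uy)

def ransac_circle(pts, iters=200, tol=0.01, seed=1):
    random.seed(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        try:
            cx, cy, r = circle_from_3(*random.sample(pts, 3))
        except ZeroDivisionError:
            continue  # collinear sample
        inl = sum(abs(math.hypot(x - cx, y - cy) - r) < tol for x, y in pts)
        if inl > best_inliers:
            best, best_inliers = (cx, cy, r), inl
    return best

# 20 points on a unit circle plus one gross outlier
pts = [(math.cos(t), math.sin(t)) for t in [i * 0.3 for i in range(20)]] + [(3.0, 3.0)]
cx, cy, r = ransac_circle(pts)
print(cx, cy, r)  # center near (0, 0), radius near 1; outlier ignored
```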

  11. A Green and Efficient Method for the Preconcentration and Determination of Gallic Acid, Bergenin, Quercitrin, and Embelin from Ardisia japonica Using Nonionic Surfactant Genapol X-080 as the Extraction Solvent

    Science.gov (United States)

    Chen, Ying; Du, Kunze; Li, Jin; Bai, Yun; An, Mingrui; Tan, Zhijing

    2018-01-01

    A simple cloud point preconcentration method was developed and validated for the determination of gallic acid, bergenin, quercitrin, and embelin in Ardisia japonica by high-performance liquid chromatography (HPLC) using ultrasonic assisted micellar extraction. Nonionic surfactant Genapol X-080 was selected as the extraction solvent. The effects of various experimental conditions such as the type and concentration of surfactant and salt, temperature, and solution pH on the extraction of these components were studied to optimize the conditions of Ardisia japonica. The solution was incubated in a thermostatic water bath at 60°C for 10 min, and 35% NaH2PO4 (w/v) was added to the solution to promote the phase separation and increase the preconcentration factor. The intraday and interday precision (RSD) were both below 5.0% and the limits of detection (LOD) for the analytes were between 10 and 20 ng·mL−1. The proposed method provides a simple, efficient, and organic solvent-free method to analyze gallic acid, bergenin, quercitrin, and embelin for the quality control of Ardisia japonica. PMID:29487621

  12. A Green and Efficient Method for the Preconcentration and Determination of Gallic Acid, Bergenin, Quercitrin, and Embelin from Ardisia japonica Using Nonionic Surfactant Genapol X-080 as the Extraction Solvent

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2018-01-01

    Full Text Available A simple cloud point preconcentration method was developed and validated for the determination of gallic acid, bergenin, quercitrin, and embelin in Ardisia japonica by high-performance liquid chromatography (HPLC) using ultrasonic assisted micellar extraction. Nonionic surfactant Genapol X-080 was selected as the extraction solvent. The effects of various experimental conditions such as the type and concentration of surfactant and salt, temperature, and solution pH on the extraction of these components were studied to optimize the conditions of Ardisia japonica. The solution was incubated in a thermostatic water bath at 60°C for 10 min, and 35% NaH2PO4 (w/v) was added to the solution to promote the phase separation and increase the preconcentration factor. The intraday and interday precision (RSD) were both below 5.0% and the limits of detection (LOD) for the analytes were between 10 and 20 ng·mL−1. The proposed method provides a simple, efficient, and organic solvent-free method to analyze gallic acid, bergenin, quercitrin, and embelin for the quality control of Ardisia japonica.

  13. Ultratrace determination of lead by hydride generation in-atomizer trapping atomic absorption spectrometry: Optimization of plumbane generation and analyte preconcentration in a quartz trap-and-atomizer device

    Energy Technology Data Exchange (ETDEWEB)

    Kratzer, Jan, E-mail: jkratzer@biomed.cas.cz

    2012-05-15

    A compact trap-and-atomizer device and a preconcentration procedure based on hydride trapping in an excess of oxygen over hydrogen in the collection step, both constructed and developed previously in our laboratory, were employed to optimize plumbane trapping in this device and subsequently to develop a routine method for ultratrace lead determination. The inherent advantage of this preconcentration approach is that 100% preconcentration efficiency for lead is reached in this device, which has never been reported before using quartz or metal traps. Plumbane is completely retained in the trap-and-atomizer device at 290 °C in an oxygen-rich atmosphere, and the trapped species are subsequently volatilized at 830 °C in a hydrogen-rich atmosphere. The effects of relevant experimental parameters on plumbane trapping and lead volatilization are discussed, and possible trapping mechanisms are hypothesized. Plumbane trapping in the trap-and-atomizer device can be routinely used for lead determination at ultratrace levels, reaching a detection limit of 0.21 ng ml−1 Pb (30 s preconcentration, sample volume 2 ml). Further improvement of the detection limit is feasible by reducing the blank signal and increasing the trapping time. - Highlights: • In-atomizer trapping HG-AAS was optimized for Pb. • A compact quartz trap-and-atomizer device was employed. • Generation, preconcentration and atomization steps were investigated in detail. • 100% preconcentration efficiency for lead was reached. • A routine analytical method was developed for Pb determination (LOD of 0.2 ng ml−1 Pb).

  14. Cloud point extraction and spectrophotometric determination of mercury species at trace levels in environmental samples.

    Science.gov (United States)

    Ulusoy, Halil İbrahim; Gürkan, Ramazan; Ulusoy, Songül

    2012-01-15

    A new micelle-mediated separation and preconcentration method was developed for ultra-trace quantities of mercury ions prior to spectrophotometric determination. The method is based on cloud point extraction (CPE) of Hg(II) ions with polyethylene glycol tert-octylphenyl ether (Triton X-114) in the presence of chelating agents such as 1-(2-pyridylazo)-2-naphthol (PAN) and 4-(2-thiazolylazo) resorcinol (TAR). Hg(II) ions react with both PAN and TAR in a surfactant solution, yielding a hydrophobic complex at pH 9.0 and 8.0, respectively. The phase separation was accomplished by centrifugation for 5 min at 3500 rpm. The calibration graphs obtained from the Hg(II)-PAN and Hg(II)-TAR complexes were linear in the concentration ranges of 10-1000 μg L(-1) and 50-2500 μg L(-1), with detection limits of 1.65 and 14.5 μg L(-1), respectively. The relative standard deviations (RSDs) were 1.85% and 2.35% in determinations of 25 and 250 μg L(-1) Hg(II), respectively. The interference effects of several ions were studied; ions commonly present in water samples were found to have no significant effect on the determination of Hg(II). The developed methods were successfully applied to determine mercury concentrations in environmental water samples. The accuracy and validity of the proposed methods were tested by means of five replicate analyses of certified standard materials such as QC Metal LL3 (VWR, drinking water) and IAEA W-4 (NIST, simulated fresh water). Copyright © 2011 Elsevier B.V. All rights reserved.

  15. Preconcentration of traces of radionuclides with sorbents based on spherical polyurethane membrane systems in the analysis of environmental samples

    International Nuclear Information System (INIS)

    Palagyi, S.; Braun, T.

    1986-01-01

    The paper points out the importance of preconcentration and the permanent need for efficient preconcentrating agents in environmental analysis. Increased attention is devoted to foamed polyurethane sorbents as a novel advance in separation chemistry. The paper has two main aims. The first is to survey recent applications of unloaded and reagent-loaded open-cell resilient polyurethane foams to the separation and preconcentration of radionuclides from environmental samples. The second is to present the newest results on the use of these foams for the preconcentration and determination of traces of some, mainly inorganic, species in environmental samples by radioanalytical techniques. Some future possibilities for the use of polyurethane foams in trace element determination in environmental analysis are also outlined. (author)

  16. Speeding up coarse point cloud registration by threshold-independent BaySAC match selection

    NARCIS (Netherlands)

    Kang, Z.; Lindenbergh, R.C.; Pu, S.

    2016-01-01

    This paper presents an algorithm for the automatic registration of terrestrial point clouds by match selection using an efficient conditional sampling method, Threshold-independent BaySAC (BAYes SAmpling Consensus), and employs the error metric of average point-to-surface residual to reduce

  17. Extracting valley-ridge lines from point-cloud-based 3D fingerprint models.

    Science.gov (United States)

    Pang, Xufang; Song, Zhan; Xie, Wuyuan

    2013-01-01

    3D fingerprinting is an emerging technology with the distinct advantage of touchless operation. More important, 3D fingerprint models contain more biometric information than traditional 2D fingerprint images. However, current approaches to fingerprint feature detection usually must transform the 3D models to a 2D space through unwrapping or other methods, which might introduce distortions. A new approach directly extracts valley-ridge features from point-cloud-based 3D fingerprint models. It first applies the moving least-squares method to fit a local paraboloid surface and represent the local point cloud area. It then computes the local surface's curvatures and curvature tensors to facilitate detection of the potential valley and ridge points. The approach projects those points to the most likely valley-ridge lines, using statistical means such as covariance analysis and cross correlation. To finally extract the valley-ridge lines, it grows the polylines that approximate the projected feature points and removes the perturbations between the sampled points. Experiments with different 3D fingerprint models demonstrate this approach's feasibility and performance.
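
The curvature computation at the heart of this approach can be sketched with a local quadric fit. A minimal illustration (the function name and the assumption that the patch is already expressed in a local tangent frame are ours, not the paper's):

```python
import numpy as np

def principal_curvatures(patch):
    """Estimate principal curvatures of a local point-cloud patch by
    least-squares fitting of a paraboloid z = ax^2 + bxy + cy^2 + dx + ey + f.
    `patch` is an (n, 3) array expressed in a frame whose z axis roughly
    follows the local surface normal."""
    x, y, z = patch[:, 0], patch[:, 1], patch[:, 2]
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    a, b, c, d, e, f = np.linalg.lstsq(A, z, rcond=None)[0]
    # At a point with (near-)zero gradient, the principal curvatures are
    # the eigenvalues of the Hessian of the fitted surface.
    H = np.array([[2 * a, b], [b, 2 * c]])
    k1, k2 = np.linalg.eigvalsh(H)
    return k1, k2
```

Valley and ridge candidates would then be points whose curvature pair marks a strong concave or convex bend along one principal direction.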

  18. A Knowledge Base for Automatic Feature Recognition from Point Clouds in an Urban Scene

    Directory of Open Access Journals (Sweden)

    Xu-Feng Xing

    2018-01-01

    Full Text Available LiDAR technology can provide very detailed and highly accurate geospatial information on an urban scene for the creation of Virtual Geographic Environments (VGEs) for different applications. However, automatic 3D modeling and feature recognition from LiDAR point clouds are very complex tasks. This becomes even more complex when the data is incomplete (occlusion problem) or uncertain. In this paper, we propose to build a knowledge base comprising an ontology and semantic rules, aiming at automatic feature recognition from point clouds in support of 3D modeling. First, several ontology modules are defined from different perspectives to describe an urban scene. For instance, the spatial relations module allows the formalized representation of possible topological relations extracted from point clouds. Then, a knowledge base is proposed that contains different concepts, their properties and their relations, together with constraints and semantic rules. Next, instances and their specific relations form an urban scene and are added to the knowledge base as facts. Based on the knowledge and semantic rules, a reasoning process is carried out to extract semantic features of the objects and their components in the urban scene. Finally, several experiments are presented to show the validity of our approach to recognize different semantic features of buildings from LiDAR point clouds.

  19. QUALITY ASSESSMENT AND COMPARISON OF SMARTPHONE AND LEICA C10 LASER SCANNER BASED POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    B. Sirmacek

    2016-06-01

    Full Text Available 3D urban models are valuable for urban map generation, environment monitoring, safety planning and educational purposes. For 3D measurement of urban structures, generally airborne laser scanning sensors or multi-view satellite images are used as a data source. However, close-range sensors (such as terrestrial laser scanners) and low cost cameras (which can generate point clouds based on photogrammetry) can provide denser sampling of 3D surface geometry. Unfortunately, terrestrial laser scanning sensors are expensive and trained persons are needed to use them for point cloud acquisition. A potential effective 3D modelling can be generated based on a low cost smartphone sensor. Herein, we show examples of using smartphone camera images to generate 3D models of urban structures. We compare a smartphone based 3D model of an example structure with a terrestrial laser scanning point cloud of the structure. This comparison gives us opportunity to discuss the differences in terms of geometrical correctness, as well as the advantages, disadvantages and limitations in data acquisition and processing. We also discuss how smartphone based point clouds can help to solve further problems with 3D urban model generation in a practical way. We show that terrestrial laser scanning point clouds which do not have color information can be colored using smartphones. The experiments, discussions and scientific findings might be insightful for the future studies in fast, easy and low-cost 3D urban model generation field.

  20. Quality Assessment and Comparison of Smartphone and Leica C10 Laser Scanner Based Point Clouds

    Science.gov (United States)

    Sirmacek, Beril; Lindenbergh, Roderik; Wang, Jinhu

    2016-06-01

    3D urban models are valuable for urban map generation, environment monitoring, safety planning and educational purposes. For 3D measurement of urban structures, generally airborne laser scanning sensors or multi-view satellite images are used as a data source. However, close-range sensors (such as terrestrial laser scanners) and low cost cameras (which can generate point clouds based on photogrammetry) can provide denser sampling of 3D surface geometry. Unfortunately, terrestrial laser scanning sensors are expensive and trained persons are needed to use them for point cloud acquisition. A potential effective 3D modelling can be generated based on a low cost smartphone sensor. Herein, we show examples of using smartphone camera images to generate 3D models of urban structures. We compare a smartphone based 3D model of an example structure with a terrestrial laser scanning point cloud of the structure. This comparison gives us opportunity to discuss the differences in terms of geometrical correctness, as well as the advantages, disadvantages and limitations in data acquisition and processing. We also discuss how smartphone based point clouds can help to solve further problems with 3D urban model generation in a practical way. We show that terrestrial laser scanning point clouds which do not have color information can be colored using smartphones. The experiments, discussions and scientific findings might be insightful for the future studies in fast, easy and low-cost 3D urban model generation field.

  1. Assessing the Accuracy of Georeferenced Point Clouds Produced via Multi-View Stereopsis from Unmanned Aerial Vehicle (UAV) Imagery

    Directory of Open Access Journals (Sweden)

    Arko Lucieer

    2012-05-01

    Full Text Available Sensor miniaturisation, improved battery technology and the availability of low-cost yet advanced Unmanned Aerial Vehicles (UAVs) have provided new opportunities for environmental remote sensing. The UAV provides a platform for close-range aerial photography. Detailed imagery captured from micro-UAV can produce dense point clouds using multi-view stereopsis (MVS) techniques combining photogrammetry and computer vision. This study applies MVS techniques to imagery acquired from a multi-rotor micro-UAV of a natural coastal site in southeastern Tasmania, Australia. A very dense point cloud (<1–3 cm point spacing) is produced in an arbitrary coordinate system using full resolution imagery, whereas other studies usually downsample the original imagery. The point cloud is sparse in areas of complex vegetation and where surfaces have a homogeneous texture. Ground control points collected with Differential Global Positioning System (DGPS) are identified and used for georeferencing via a Helmert transformation. This study compared georeferenced point clouds to a Total Station survey in order to assess and quantify their geometric accuracy. The results indicate that a georeferenced point cloud accurate to 25–40 mm can be obtained from imagery acquired from 50 m. UAV-based image capture provides the spatial and temporal resolution required to map and monitor natural landscapes. This paper assesses the accuracy of the generated point clouds based on field survey points. Based on our key findings we conclude that sub-decimetre terrain change (in this case coastal erosion) can be monitored.
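
The Helmert (similarity) transformation used here for georeferencing has a closed-form least-squares solution (Umeyama's method). A minimal sketch, assuming matched pairs of arbitrary-frame and DGPS coordinates; names are illustrative:

```python
import numpy as np

def helmert_transform(src, dst):
    """Estimate a 3D similarity (Helmert) transform dst ≈ s * R @ src + t
    from matched control points, via the Umeyama/Kabsch SVD method.
    `src` and `dst` are (n, 3) arrays of corresponding points."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    # Cross-covariance between the centred point sets
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:   # guard against reflections
        D[2, 2] = -1
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (A ** 2).sum() * len(src)
    t = mu_d - s * R @ mu_s
    return s, R, t
```

With clean correspondences this recovers scale, rotation and translation exactly; in practice the residuals after the fit quantify the georeferencing accuracy discussed above.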

  2. Sideloading - Ingestion of Large Point Clouds Into the Apache Spark Big Data Engine

    Science.gov (United States)

    Boehm, J.; Liu, K.; Alis, C.

    2016-06-01

    In the geospatial domain we have now reached the point where data volumes we handle have clearly grown beyond the capacity of most desktop computers. This is particularly true in the area of point cloud processing. It is therefore naturally lucrative to explore established big data frameworks for big geospatial data. The very first hurdle is the import of geospatial data into big data frameworks, commonly referred to as data ingestion. Geospatial data is typically encoded in specialised binary file formats, which are not naturally supported by the existing big data frameworks. Instead such file formats are supported by software libraries that are restricted to single CPU execution. We present an approach that allows the use of existing point cloud file format libraries on the Apache Spark big data framework. We demonstrate the ingestion of large volumes of point cloud data into a compute cluster. The approach uses a map function to distribute the data ingestion across the nodes of a cluster. We test the capabilities of the proposed method to load billions of points into a commodity hardware compute cluster and we discuss the implications on scalability and performance. The performance is benchmarked against an existing native Apache Spark data import implementation.
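
The ingestion pattern described, wrapping a single-CPU file-format parser in a map function over files, can be sketched without Spark itself. Here Python's built-in map stands in for the cluster-distributed map, and the 12-byte x, y, z record layout is an illustrative stand-in for a real format library such as one for LAS:

```python
import struct
from pathlib import Path

RECORD = struct.Struct("<fff")  # illustrative 12-byte x, y, z record

def ingest_file(path):
    """Parse one binary point file into (x, y, z) tuples.
    In the paper's setting this function would run as a map task on a
    Spark worker, wrapping a single-CPU point cloud format library."""
    data = Path(path).read_bytes()
    return [RECORD.unpack_from(data, off)
            for off in range(0, len(data), RECORD.size)]

def ingest(paths, mapper=map):
    """Distribute ingestion over files; a Spark- or Pool-backed mapper
    can be passed in place of the builtin `map`."""
    return [pt for pts in mapper(ingest_file, paths) for pt in pts]
```

Because each file is parsed independently, swapping the mapper for a parallel one changes only where the work runs, not the parsing logic.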

  3. 3D Maize Plant Reconstruction Based on Georeferenced Overlapping LiDAR Point Clouds

    Directory of Open Access Journals (Sweden)

    Miguel Garrido

    2015-12-01

    Full Text Available 3D crop reconstruction with a high temporal resolution and by the use of non-destructive measuring technologies can support the automation of plant phenotyping processes. Thereby, the availability of such 3D data can give valuable information about the plant development and the interaction of the plant genotype with the environment. This article presents a new methodology for georeferenced 3D reconstruction of maize plant structure. For this purpose a total station, an IMU, and several 2D LiDARs with different orientations were mounted on an autonomous vehicle. The multistep methodology presented, based on the application of the ICP algorithm for point cloud fusion, made it possible to overlap the georeferenced point clouds. The overlapping point cloud algorithm showed that the aerial points (corresponding mainly to plant parts) were reduced to 1.5%–9% of the total registered data. The remaining were redundant or ground points. Through the inclusion of different LiDAR points of view of the scene, a more realistic representation of the surroundings is obtained by the incorporation of new useful information but also of noise. The use of georeferenced 3D maize plant reconstruction at different growth stages, combined with the total station accuracy, could be highly useful when performing precision agriculture at the crop plant level.

  4. Registration of Laser Scanning Point Clouds: A Review

    Science.gov (United States)

    Cheng, Liang; Chen, Song; Xu, Hao; Wu, Yang; Li, Manchun

    2018-01-01

    The integration of multi-platform, multi-angle, and multi-temporal LiDAR data has become important for geospatial data applications. This paper presents a comprehensive review of LiDAR data registration in the fields of photogrammetry and remote sensing. At present, a coarse-to-fine registration strategy is commonly used for LiDAR point clouds registration. The coarse registration method is first used to achieve a good initial position, based on which registration is then refined utilizing the fine registration method. According to the coarse-to-fine framework, this paper reviews current registration methods and their methodologies, and identifies important differences between them. The lack of standard data and unified evaluation systems is identified as a factor limiting objective comparison of different methods. The paper also describes the most commonly-used point cloud registration error analysis methods. Finally, avenues for future work on LiDAR data registration in terms of applications, data, and technology are discussed. In particular, there is a need to address registration of multi-angle and multi-scale data from various newly available types of LiDAR hardware, which will play an important role in diverse applications such as forest resource surveys, urban energy use, cultural heritage protection, and unmanned vehicles.
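
The fine-registration stage of the coarse-to-fine framework reviewed here is typically some variant of ICP. A minimal point-to-point sketch (brute-force correspondences, no outlier rejection; an illustration of the idea, not any specific reviewed method):

```python
import numpy as np

def icp(src, dst, iters=20):
    """Minimal point-to-point ICP refinement. Assumes a coarse alignment
    has already brought `src` near `dst`, as in the coarse-to-fine
    strategy. Returns (R, t) with dst ≈ src @ R.T + t."""
    src = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # Correspondence step: nearest dst point for every src point.
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        pairs = dst[d2.argmin(1)]
        # Alignment step: best rigid motion for the correspondences (Kabsch).
        mu_s, mu_d = src.mean(0), pairs.mean(0)
        U, _, Vt = np.linalg.svd((pairs - mu_d).T @ (src - mu_s))
        D = np.eye(3)
        D[2, 2] = np.sign(np.linalg.det(U @ Vt))  # avoid reflections
        R = U @ D @ Vt
        t = mu_d - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

The brute-force nearest-neighbour search is O(n²); production registration pipelines replace it with a k-d tree and add robust weighting.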

  5. Calibrated HDRI in 3D point clouds

    DEFF Research Database (Denmark)

    Bülow, Katja; Tamke, Martin

    2017-01-01

    3D-scanning technologies and point clouds as means for spatial representation introduce a new paradigm to the measuring and mapping of physical artefacts and space. This technology also offers possibilities for the measuring and mapping of outdoor urban lighting and has the potential to meet the challenges of dynamic smart lighting planning in outdoor urban space. This paper presents findings on how 3D capturing of outdoor environments combined with HDRI establishes a new way for analysing and representing the spatial distribution of light in combination with luminance data.

  6. Coarse point cloud registration by EGI matching of voxel clusters

    NARCIS (Netherlands)

    Wang, J.; Lindenbergh, R.C.; Shen, Y.; Menenti, M.

    2016-01-01

    Laser scanning samples the surface geometry of objects efficiently and records versatile information as point clouds. However, often more scans are required to fully cover a scene. Therefore, a registration step is required that transforms the different scans into a common coordinate system.

  7. Thermodynamics of non-ionic surfactant Triton X-100-cationic surfactants mixtures at the cloud point

    International Nuclear Information System (INIS)

    Batigoec, Cigdem; Akbas, Halide; Boz, Mesut

    2011-01-01

    Highlights: • Non-ionic surfactants are used as emulsifiers and solubilizers in, for example, textiles, detergents and cosmetics. • Non-ionic surfactants undergo phase separation in solution at a temperature named the cloud point. • Dimeric surfactants have attracted increasing attention due to their superior surface activity. • The positive values of ΔG°cp indicate that the process is nonspontaneous. - Abstract: This study investigates the effects of gemini and conventional cationic surfactants on the cloud point (CP) of the non-ionic surfactant Triton X-100 (TX-100) in aqueous solutions. Instead of visual observation, a spectrophotometer was used for measurement of the cloud point temperatures. The thermodynamic parameters of these mixtures were calculated at different cationic surfactant concentrations. The gemini surfactants of the alkanediyl-α-ω-bis(alkyldimethylammonium) dibromide type, on the one hand, with different alkyl groups containing m carbon atoms and an ethanediyl spacer, referred to as 'm-2-m' (m = 10, 12, and 16) and, on the other hand, with C16 alkyl groups and different spacers containing s carbon atoms, referred to as '16-s-16' (s = 6 and 10), were synthesized, purified and characterized. Addition of the cationic surfactants to the TX-100 solution increased its cloud point temperature. It was accepted that the solubility of a non-ionic surfactant containing a polyoxyethylene (POE) hydrophilic chain is at a maximum at the cloud point, so the thermodynamic parameters were calculated at this temperature. The results showed that the standard Gibbs free energy (ΔG°cp), the enthalpy (ΔH°cp) and the entropy (ΔS°cp) of the clouding phenomenon were positive in all cases. The standard free energy (ΔG°cp) increased with increasing hydrophobic alkyl chain length for both gemini and conventional cationic surfactants; however, it decreased with increasing surfactant concentration.
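
The reported signs are mutually consistent under the standard Gibbs relation (general thermodynamics, not a result specific to this study):

```latex
\Delta G^{0}_{\mathrm{cp}} = \Delta H^{0}_{\mathrm{cp}} - T\,\Delta S^{0}_{\mathrm{cp}}
```

With ΔH°cp and ΔS°cp both positive, ΔG°cp remains positive (nonspontaneous clouding) for any temperature below ΔH°cp/ΔS°cp.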

  8. Quantifying Biomass from Point Clouds by Connecting Representations of Ecosystem Structure

    Science.gov (United States)

    Hendryx, S. M.; Barron-Gafford, G.

    2017-12-01

    Quantifying terrestrial ecosystem biomass is an essential part of monitoring carbon stocks and fluxes within the global carbon cycle and optimizing natural resource management. Point cloud data such as from lidar and structure from motion can be effective for quantifying biomass over large areas, but significant challenges remain in developing effective models that allow for such predictions. Inference models that estimate biomass from point clouds are established in many environments, yet, are often scale-dependent, needing to be fitted and applied at the same spatial scale and grid size at which they were developed. Furthermore, training such models typically requires large in situ datasets that are often prohibitively costly or time-consuming to obtain. We present here a scale- and sensor-invariant framework for efficiently estimating biomass from point clouds. Central to this framework, we present a new algorithm, assignPointsToExistingClusters, that has been developed for finding matches between in situ data and clusters in remotely-sensed point clouds. The algorithm can be used for assessing canopy segmentation accuracy and for training and validating machine learning models for predicting biophysical variables. We demonstrate the algorithm's efficacy by using it to train a random forest model of above ground biomass in a shrubland environment in Southern Arizona. We show that by learning a nonlinear function to estimate biomass from segmented canopy features we can reduce error, especially in the presence of inaccurate clusterings, when compared to a traditional, deterministic technique to estimate biomass from remotely measured canopies. Our random forest on cluster features model extends established methods of training random forest regressions to predict biomass of subplots but requires significantly less training data and is scale invariant. The random forest on cluster features model reduced mean absolute error, when evaluated on all test data in leave
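
The abstract does not spell out assignPointsToExistingClusters; one plausible minimal reading is a thresholded nearest-centroid assignment (entirely our sketch, not the authors' code; the function name and threshold are illustrative):

```python
import numpy as np

def assign_points_to_existing_clusters(obs_xy, centroids, max_dist=2.0):
    """Hypothetical sketch of the matching step: pair each in situ
    observation with the nearest remotely sensed cluster centroid,
    rejecting pairs farther apart than `max_dist` (same units as xy).
    Returns a list of (obs_index, cluster_index) matches."""
    matches = []
    for i, p in enumerate(obs_xy):
        d = np.linalg.norm(centroids - p, axis=1)
        j = int(d.argmin())
        if d[j] <= max_dist:
            matches.append((i, j))
    return matches
```

Matched pairs would then supply (cluster features, measured biomass) training rows for the random forest described above.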

  9. Dual cloud point extraction coupled with hydrodynamic-electrokinetic two-step injection followed by micellar electrokinetic chromatography for simultaneous determination of trace phenolic estrogens in water samples.

    Science.gov (United States)

    Wen, Yingying; Li, Jinhua; Liu, Junshen; Lu, Wenhui; Ma, Jiping; Chen, Lingxin

    2013-07-01

    A dual cloud point extraction (dCPE) off-line enrichment procedure coupled with a hydrodynamic-electrokinetic two-step injection online enrichment technique was successfully developed for simultaneous preconcentration of trace phenolic estrogens (hexestrol, dienestrol, and diethylstilbestrol) in water samples followed by micellar electrokinetic chromatography (MEKC) analysis. Several parameters affecting the extraction and online injection conditions were optimized. Under optimal dCPE-two-step injection-MEKC conditions, detection limits of 7.9-8.9 ng/mL and good linearity in the range from 0.05 to 5 μg/mL with correlation coefficients R(2) ≥ 0.9990 were achieved. Satisfactory recoveries ranging from 83 to 108% were obtained with lake and tap water spiked at 0.1 and 0.5 μg/mL, respectively, with relative standard deviations (n = 6) of 1.3-3.1%. This method was demonstrated to be convenient, rapid, cost-effective, and environmentally benign, and could be used as an alternative to existing methods for analyzing trace residues of phenolic estrogens in water samples.

  10. Automatic Extraction of Road Markings from Mobile Laser-Point Cloud Using Intensity Data

    Science.gov (United States)

    Yao, L.; Chen, Q.; Qin, C.; Wu, H.; Zhang, S.

    2018-04-01

    With the development of intelligent transportation, high-precision road information has been widely applied in many fields. This paper proposes a concise and practical way to extract road marking information from point cloud data collected by a mobile mapping system (MMS). The method contains three steps. Firstly, the road surface is segmented through edge detection on scan lines. Then an intensity image is generated by inverse distance weighted (IDW) interpolation, and the road markings are extracted using adaptive threshold segmentation based on an integral image, without intensity calibration. Moreover, noise is reduced by removing small plaque pixels from the binary image. Finally, the point cloud mapped from the binary image is clustered into marking objects according to Euclidean distance, and a series of algorithms, including template matching and feature attribute filtering, is used for the classification of linear markings, arrow markings and guidelines. Processing the point cloud data collected by a RIEGL VUX-1 in the case area shows that the F-score of marking extraction is 0.83 and the average classification rate is 0.9.
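
The IDW step that turns scattered intensity samples into a raster can be sketched directly (a brute-force illustration; the grid naming and the absence of a k-nearest-neighbour cutoff are our simplifications):

```python
import numpy as np

def idw_raster(points, intensity, grid_x, grid_y, power=2.0, eps=1e-12):
    """Rasterize scattered (x, y) intensity samples onto a grid by inverse
    distance weighting. Brute force over all points per cell; real
    pipelines restrict the sum to the k nearest points."""
    img = np.empty((len(grid_y), len(grid_x)))
    for r, gy in enumerate(grid_y):
        for c, gx in enumerate(grid_x):
            d = np.hypot(points[:, 0] - gx, points[:, 1] - gy) + eps
            w = d ** -power          # closer samples weigh more
            img[r, c] = (w * intensity).sum() / w.sum()
    return img
```

The resulting image is what the adaptive (integral-image) thresholding then binarizes to isolate the bright markings.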

  11. An Investigation of the High Efficiency Estimation Approach of the Large-Scale Scattered Point Cloud Normal Vector

    Directory of Open Access Journals (Sweden)

    Xianglin Meng

    2018-03-01

    Full Text Available The normal vector estimation of the large-scale scattered point cloud (LSSPC) plays an important role in point-based shape editing. However, normal vector estimation for LSSPC cannot keep up with the sharp increase in point cloud size, mainly because of its low computational efficiency. In this paper, a novel, fast method based on bi-linear interpolation is reported for normal vector estimation of LSSPC. We divide the point sets into many small cubes to speed up the local point search and construct interpolation nodes on the isosurface expressed by the point cloud. After calculating the normal vectors of these interpolation nodes, a bi-linear interpolation of the normal vectors of the points in each cube is realized. The proposed approach is accurate, simple, and highly efficient, because the algorithm only needs to search neighbors and calculate normal vectors for the interpolation nodes, which are usually far fewer than the points of the cloud. The experimental results on several real and simulated point sets show that our method is over three times faster than the Elliptic Gabriel Graph-based method, with an average deviation of less than 0.01 mm.
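
The cheap per-point step, blending the normals of the surrounding interpolation nodes, can be sketched in a few lines (an illustrative reading of the method; node layout and names are ours):

```python
import numpy as np

def bilerp_normal(n00, n10, n01, n11, u, v):
    """Bilinearly blend the unit normals of four interpolation nodes for a
    point at fractional cell coordinates (u, v) in [0, 1]^2, then
    renormalize. This replaces a full neighbourhood fit per point."""
    n = ((1 - u) * (1 - v) * n00 + u * (1 - v) * n10
         + (1 - u) * v * n01 + u * v * n11)
    return n / np.linalg.norm(n)
```

Only the node normals require an expensive neighbourhood computation; every in-cell point reuses them through this blend.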

  12. Point Clouds to Indoor/outdoor Accessibility Diagnosis

    Science.gov (United States)

    Balado, J.; Díaz-Vilariño, L.; Arias, P.; Garrido, I.

    2017-09-01

    This work presents an approach to automatically detect structural floor elements such as steps or ramps in the immediate environment of buildings, elements that may affect the accessibility of buildings. The methodology is based on Mobile Laser Scanner (MLS) point cloud and trajectory information. First, the street is segmented into stretches along the trajectory of the MLS to work in regular spaces. Next, the lower region of each stretch (the ground zone) is selected as the ROI, and the normal, curvature and tilt are calculated for each point. With this information, points in the ROI are classified as horizontal, inclined or vertical. Points are refined and grouped into structural elements using raster processing and connected components, in different phases for each type of previously classified point. Finally, the trajectory data is used to distinguish between road and sidewalks. Adjacency information is used to classify structural elements as steps, ramps, curbs and curb-ramps. The methodology is tested in a real case study, consisting of 100 m of an urban street. Ground elements are correctly classified in an acceptable computation time. Steps and ramps are also exported to GIS software to enrich building models from OpenStreetMap with information about accessible/inaccessible entrances and their locations.

  13. RAPID INSPECTION OF PAVEMENT MARKINGS USING MOBILE LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    H. Zhang

    2016-06-01

    Full Text Available This study aims at building a robust semi-automated pavement marking extraction workflow based on the use of mobile LiDAR point clouds. The proposed workflow consists of three components: preprocessing, extraction, and classification. In preprocessing, the mobile LiDAR point clouds are converted into the radiometrically corrected intensity imagery of the road surface. Then the pavement markings are automatically extracted with the intensity using a set of algorithms, including Otsu’s thresholding, neighbor-counting filtering, and region growing. Finally, the extracted pavement markings are classified with the geometric parameters using a manually defined decision tree. Case studies are conducted using the mobile LiDAR dataset acquired in Xiamen (Fujian, China) with different road environments by the RIEGL VMX-450 system. The results demonstrated that the proposed workflow and our software tool can achieve 93% in completeness, 95% in correctness, and 94% in F-score when using Xiamen dataset.

  14. Automatic co-registration of 3D multi-sensor point clouds

    Science.gov (United States)

    Persad, Ravi Ancil; Armenakis, Costas

    2017-08-01

    We propose an approach for the automatic coarse alignment of 3D point clouds which have been acquired from various platforms. The method is based on 2D keypoint matching performed on height map images of the point clouds. Initially, a multi-scale wavelet keypoint detector is applied, followed by adaptive non-maxima suppression. A scale, rotation and translation-invariant descriptor is then computed for all keypoints. The descriptor is built using the log-polar mapping of Gabor filter derivatives in combination with the so-called Rapid Transform. In the final step, source and target height map keypoint correspondences are determined using a bi-directional nearest neighbour similarity check, together with a threshold-free modified-RANSAC. Experiments with urban and non-urban scenes are presented and results show scale errors ranging from 0.01 to 0.03, 3D rotation errors in the order of 0.2° to 0.3° and 3D translation errors from 0.09 m to 1.1 m.
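
The bi-directional nearest-neighbour similarity check has a compact formulation; a minimal sketch over descriptor arrays (names are illustrative, and the subsequent RANSAC step is omitted):

```python
import numpy as np

def mutual_matches(desc_a, desc_b):
    """Bi-directional nearest-neighbour check: keep a match (i, j) only if
    j is i's nearest descriptor in B *and* i is j's nearest in A.
    `desc_a` is (n, d), `desc_b` is (m, d)."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=-1)
    ab = d.argmin(axis=1)   # best B index for each A descriptor
    ba = d.argmin(axis=0)   # best A index for each B descriptor
    return [(i, j) for i, j in enumerate(ab) if ba[j] == i]
```

The mutual constraint discards one-sided matches cheaply before any geometric verification, which is why it pairs well with a threshold-free RANSAC.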

  15. Point Cloud Classification of Tesserae from Terrestrial Laser Data Combined with Dense Image Matching for Archaeological Information Extraction

    Science.gov (United States)

    Poux, F.; Neuville, R.; Billen, R.

    2017-08-01

    Reasoning from information extraction given by point cloud data mining allows contextual adaptation and fast decision making. However, to achieve this perceptive level, a point cloud must be semantically rich, retaining relevant information for the end user. This paper presents an automatic knowledge-based method for pre-processing multi-sensory data and classifying a hybrid point cloud obtained from both terrestrial laser scanning and dense image matching. Using 18 features, including the sensor's biased data, each tessera in the high-density point cloud from the 3D-captured complex mosaics of Germigny-des-prés (France) is segmented via a colour-based multi-scale abstraction that extracts connectivity. A 2D surface and outline polygon of each tessera is generated by RANSAC plane extraction and convex hull fitting. Knowledge is then used to classify every tessera based on its size, surface, shape, material properties and its neighbours' classes. The detection and semantic enrichment method shows promising results of 94% correct semantization, a first step toward the creation of an archaeological smart point cloud.
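The convex hull fitting used to derive each tessera's outline polygon can be sketched with Andrew's monotone chain on plane-projected points; the coordinates below are toy data, and the RANSAC plane projection is assumed to have happened already:

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                       # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# Projected tessera points: square corners plus interior points.
pts = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1), (1, 0.5)]
outline = convex_hull(pts)
```

The interior points drop out and only the outline polygon vertices remain.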

  16. An Approach for Automatic Orientation of Big Point Clouds from the Stationary Scanners Based on the Spherical Targets

    Directory of Open Access Journals (Sweden)

    YAO Jili

    2015-04-01

    Full Text Available Terrestrial laser scanning (TLS) technology offers high data-acquisition speed, large point cloud volumes and long measuring range. However, it has disadvantages such as limited distance for target detection, lag in point cloud processing, low automation and poor suitability for long-distance topographic survey. We therefore put forward a method for long-range target detection in the orientation of big point clouds. The method first searches the point cloud rings that contain targets according to the engineering coordinate system. The detected rings are then divided into sectors, so that targets are detected in a very short time and the central coordinates of these targets are obtained. Finally, the position and orientation parameters of the scanner are calculated and the point clouds in the scanner's own coordinate system (SOCS) are converted into the engineering coordinate system. The method can be applied on ordinary computers for long-distance topographic survey (scanner-to-target distances ranging from 180 to 700 m) in mountainous areas with a target radius of 0.162 m.
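The sector subdivision used for fast target search might look like the following sketch, which bins scan points by azimuth around the scanner; the sector count and the toy points are hypothetical, not values from the paper:

```python
import math
from collections import defaultdict

def split_into_sectors(points, n_sectors=36):
    """Group 2D points (relative to the scanner) into angular sectors by azimuth."""
    sectors = defaultdict(list)
    width = 2 * math.pi / n_sectors
    for x, y in points:
        az = math.atan2(y, x) % (2 * math.pi)   # azimuth in [0, 2*pi)
        sectors[int(az / width)].append((x, y))
    return sectors

# Toy ring points; each sector can then be searched for target clusters independently.
points = [(10.0, 0.1), (10.0, -0.1), (1.0, 9.0), (-7.0, -7.0)]
sectors = split_into_sectors(points)
```

Searching each small sector instead of the whole ring is what keeps the target detection time short.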

  17. Generation of Ground Truth Datasets for the Analysis of 3d Point Clouds in Urban Scenes Acquired via Different Sensors

    Science.gov (United States)

    Xu, Y.; Sun, Z.; Boerner, R.; Koch, T.; Hoegner, L.; Stilla, U.

    2018-04-01

    In this work, we report a novel way of generating a ground truth dataset for analyzing point clouds from different sensors and for the validation of algorithms. Instead of directly labeling a large number of 3D points, which requires time-consuming manual work, a multi-resolution 3D voxel grid for the testing site is generated. Then, with the help of a set of basic labeled points from the reference dataset, we can generate a 3D labeled space of the entire testing site at different resolutions. Specifically, an octree-based voxel structure is applied to voxelize the annotated reference point cloud, by which all the points are organized in 3D grids of multiple resolutions. When automatically annotating the new testing point clouds, a voting-based approach is applied to the labeled points within each voxel, in order to assign a semantic label to the 3D space represented by that voxel. Lastly, robust line- and plane-based fast registration methods are developed for aligning point clouds obtained via various sensors. Benefiting from the labeled 3D spatial information, we can easily create new annotated 3D point clouds of different sensors of the same scene by looking up the labels of the 3D spaces in which the points are located, which is convenient for the validation and evaluation of algorithms related to point cloud interpretation and semantic segmentation.
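The voxel labeling and voting transfer described above can be sketched at a single resolution; the octree multi-resolution part is omitted, and the labels and coordinates are toy data:

```python
from collections import Counter

def voxel_key(p, size):
    """Integer voxel index of a 3D point for a given voxel size."""
    return (int(p[0] // size), int(p[1] // size), int(p[2] // size))

def label_voxels(ref_points, ref_labels, size):
    """Majority vote of reference labels inside each occupied voxel."""
    votes = {}
    for p, lab in zip(ref_points, ref_labels):
        votes.setdefault(voxel_key(p, size), Counter())[lab] += 1
    return {k: c.most_common(1)[0][0] for k, c in votes.items()}

def annotate(new_points, voxel_labels, size, default="unknown"):
    """Transfer voxel labels to a new (already aligned) scan of the same scene."""
    return [voxel_labels.get(voxel_key(p, size), default) for p in new_points]

ref = [(0.1, 0.1, 0.0), (0.2, 0.3, 0.1), (5.0, 5.0, 0.0)]
labels = ["ground", "ground", "building"]
vox = label_voxels(ref, labels, size=1.0)
new = [(0.4, 0.4, 0.2), (5.3, 5.1, 0.4), (9.0, 9.0, 9.0)]
out = annotate(new, vox, size=1.0)
```

Points of the new scan falling into unlabeled space keep a default label, which is where the coarser octree levels would take over in the full method.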

  18. Feature relevance assessment for the semantic interpretation of 3D point cloud data

    Directory of Open Access Journals (Sweden)

    M. Weinmann

    2013-10-01

    Full Text Available The automatic analysis of large 3D point clouds represents a crucial task in photogrammetry, remote sensing and computer vision. In this paper, we propose a new methodology for the semantic interpretation of such point clouds which involves feature relevance assessment in order to reduce both processing time and memory consumption. Given a standard benchmark dataset with 1.3 million 3D points, we first extract a set of 21 geometric 3D and 2D features. Subsequently, we apply a classifier-independent ranking procedure which involves a general relevance metric in order to derive compact and robust subsets of versatile features which are generally applicable for a large variety of subsequent tasks. This metric is based on 7 different feature selection strategies and thus addresses different intrinsic properties of the given data. For the example of semantically interpreting 3D point cloud data, we demonstrate the great potential of smaller subsets consisting of only the most relevant features with 4 different state-of-the-art classifiers. The results reveal that, instead of including as many features as possible in order to compensate for lack of knowledge, a crucial task such as scene interpretation can be carried out with only few versatile features and even improved accuracy.
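A classifier-independent relevance ranking of this kind can be illustrated with one simple filter metric, a Fisher-style separation score; the paper combines 7 selection strategies, so this shows only the ranking principle, on hypothetical toy features:

```python
import statistics

def fisher_score(values, labels):
    """Two-class separation score: squared mean gap over summed class variance."""
    a = [v for v, l in zip(values, labels) if l == 0]
    b = [v for v, l in zip(values, labels) if l == 1]
    var = statistics.pvariance(a) + statistics.pvariance(b)
    return (statistics.mean(a) - statistics.mean(b)) ** 2 / var if var else float("inf")

# Three toy per-point features; "planarity" separates the classes, "noise" does not.
labels = [0, 0, 0, 1, 1, 1]
features = {
    "planarity": [0.1, 0.2, 0.15, 0.9, 0.8, 0.85],
    "height":    [1.0, 2.0, 1.5, 1.6, 1.4, 1.8],
    "noise":     [0.5, 0.4, 0.6, 0.5, 0.6, 0.4],
}
ranking = sorted(features, key=lambda f: fisher_score(features[f], labels), reverse=True)
```

Keeping only the top of such a ranking is what yields the compact feature subsets the paper argues for.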

  19. Segmentation of Large Unstructured Point Clouds Using Octree-Based Region Growing and Conditional Random Fields

    Science.gov (United States)

    Bassier, M.; Bonduel, M.; Van Genechten, B.; Vergauwen, M.

    2017-11-01

    Point cloud segmentation is a crucial step in scene understanding and interpretation. The goal is to decompose the initial data into sets of workable clusters with similar properties. Additionally, it is a key aspect in the automated procedure from point cloud data to BIM. Current approaches typically only segment a single type of primitive such as planes or cylinders. Also, current algorithms suffer from oversegmenting the data and are often sensor or scene dependent. In this work, a method is presented to automatically segment large unstructured point clouds of buildings. More specifically, the segmentation is formulated as a graph optimisation problem. First, the data is oversegmented with a greedy octree-based region growing method. The growing is conditioned on the segmentation of planes as well as smooth surfaces. Next, the candidate clusters are represented by a Conditional Random Field after which the most likely configuration of candidate clusters is computed given a set of local and contextual features. The experiments prove that the used method is a fast and reliable framework for unstructured point cloud segmentation. Processing speeds up to 40,000 points per second are recorded for the region growing. Additionally, the recall and precision of the graph clustering is approximately 80%. Overall, nearly 22% of oversegmentation is reduced by clustering the data. These clusters will be classified and used as a basis for the reconstruction of BIM models.
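The greedy oversegmentation step can be illustrated with a minimal 6-connected growing pass over occupied voxels; the plane and smooth-surface growing conditions of the actual method are omitted here:

```python
from collections import deque

def region_grow(voxels):
    """Greedy growing over occupied voxels, 6-connected; returns a list of clusters."""
    remaining = set(voxels)
    neighbours = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    clusters = []
    while remaining:
        seed = remaining.pop()
        queue, cluster = deque([seed]), [seed]
        while queue:
            x, y, z = queue.popleft()
            for dx, dy, dz in neighbours:
                n = (x + dx, y + dy, z + dz)
                if n in remaining:          # grow into untouched neighbours only
                    remaining.remove(n)
                    queue.append(n)
                    cluster.append(n)
        clusters.append(cluster)
    return clusters

# Two separate slabs of voxels -> two candidate clusters for the CRF stage.
slab_a = [(x, y, 0) for x in range(3) for y in range(3)]
slab_b = [(x, y, 5) for x in range(2) for y in range(2)]
clusters = region_grow(slab_a + slab_b)
```

In the full pipeline these clusters become the nodes of the Conditional Random Field.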

  20. 3d object segmentation of point clouds using profiling techniques

    African Journals Online (AJOL)

    Administrator


  1. Hierarchical model generation for architecture reconstruction using laser-scanned point clouds

    Science.gov (United States)

    Ning, Xiaojuan; Wang, Yinghui; Zhang, Xiaopeng

    2014-06-01

    Architecture reconstruction using a terrestrial laser scanner is a prevalent and challenging research topic. We introduce an automatic, hierarchical architecture generation framework to produce the full geometry of architecture based on a novel combination of facade structure detection, detailed window propagation, and hierarchical model consolidation. Our method highlights the generation of geometric models automatically fitting the design information of the architecture from sparse, incomplete, and noisy point clouds. First, the planar regions detected in raw point clouds are interpreted as three-dimensional clusters. Then, the boundary of each region, extracted by projecting the points into its corresponding two-dimensional plane, is classified to obtain detailed shape structure elements (e.g., windows and doors). Finally, a polyhedron model is generated by calculating the proposed local structure model, consolidated structure model, and detailed window model. Experiments on modeling scanned real-life buildings demonstrate the advantages of our method, in which the reconstructed models not only correspond to the information of architectural design accurately, but also satisfy the requirements for visualization and analysis.

  2. 3-D OBJECT RECOGNITION FROM POINT CLOUD DATA

    Directory of Open Access Journals (Sweden)

    W. Smith

    2012-09-01

    Full Text Available The market for real-time 3-D mapping includes not only traditional geospatial applications but also navigation of unmanned autonomous vehicles (UAVs). Massively parallel processes such as graphics processing unit (GPU) computing make real-time 3-D object recognition and mapping achievable. Geospatial technologies such as digital photogrammetry and GIS offer advanced capabilities to produce 2-D and 3-D static maps using UAV data. The goal is to develop real-time UAV navigation through increased automation. It is challenging for a computer to identify a 3-D object such as a car, a tree or a house, yet automatic 3-D object recognition is essential to increasing the productivity of geospatial data such as 3-D city site models. In the past three decades, researchers have used radiometric properties to identify objects in digital imagery with limited success, because these properties vary considerably from image to image. Consequently, our team has developed software that recognizes certain types of 3-D objects within 3-D point clouds. Although our software is developed for modeling, simulation and visualization, it has the potential to be valuable in robotics and UAV applications. The locations and shapes of 3-D objects such as buildings and trees are easily recognizable by a human from a brief glance at a representation of a point cloud such as terrain-shaded relief. The algorithms to extract these objects have been developed and require only the point cloud and minimal human inputs such as a set of limits on building size and a request to turn on a squaring option. The algorithms use both digital surface model (DSM) and digital elevation model (DEM), so software has also been developed to derive the latter from the former. The process continues through the following steps: identify and group 3-D object points into regions; separate buildings and houses from trees; trace region boundaries; regularize and simplify boundary polygons; construct complex

  3. 3-D Object Recognition from Point Cloud Data

    Science.gov (United States)

    Smith, W.; Walker, A. S.; Zhang, B.

    2011-09-01

    The market for real-time 3-D mapping includes not only traditional geospatial applications but also navigation of unmanned autonomous vehicles (UAVs). Massively parallel processes such as graphics processing unit (GPU) computing make real-time 3-D object recognition and mapping achievable. Geospatial technologies such as digital photogrammetry and GIS offer advanced capabilities to produce 2-D and 3-D static maps using UAV data. The goal is to develop real-time UAV navigation through increased automation. It is challenging for a computer to identify a 3-D object such as a car, a tree or a house, yet automatic 3-D object recognition is essential to increasing the productivity of geospatial data such as 3-D city site models. In the past three decades, researchers have used radiometric properties to identify objects in digital imagery with limited success, because these properties vary considerably from image to image. Consequently, our team has developed software that recognizes certain types of 3-D objects within 3-D point clouds. Although our software is developed for modeling, simulation and visualization, it has the potential to be valuable in robotics and UAV applications. The locations and shapes of 3-D objects such as buildings and trees are easily recognizable by a human from a brief glance at a representation of a point cloud such as terrain-shaded relief. The algorithms to extract these objects have been developed and require only the point cloud and minimal human inputs such as a set of limits on building size and a request to turn on a squaring option. The algorithms use both digital surface model (DSM) and digital elevation model (DEM), so software has also been developed to derive the latter from the former. The process continues through the following steps: identify and group 3-D object points into regions; separate buildings and houses from trees; trace region boundaries; regularize and simplify boundary polygons; construct complex roofs. 
Several case
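The DEM-from-DSM derivation mentioned in the abstract can be approximated by a local minimum filter, shown here as a hedged sketch on a toy grid; real implementations add morphological filtering and interpolation on top of this idea:

```python
def dem_from_dsm(dsm, radius=1):
    """Crude DEM: a local minimum filter suppresses objects standing above terrain."""
    rows, cols = len(dsm), len(dsm[0])
    dem = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            window = [
                dsm[rr][cc]
                for rr in range(max(0, r - radius), min(rows, r + radius + 1))
                for cc in range(max(0, c - radius), min(cols, c + radius + 1))
            ]
            dem[r][c] = min(window)   # terrain = lowest surface in the neighbourhood
    return dem

# Flat terrain at 10 m with a narrow "building" row at 18 m.
dsm = [
    [10, 10, 10, 10, 10],
    [10, 18, 18, 18, 10],
    [10, 10, 10, 10, 10],
]
dem = dem_from_dsm(dsm)
```

The filter radius must exceed half the footprint of the largest object, which is why production tools combine it with further terrain filtering.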

  4. SIDELOADING – INGESTION OF LARGE POINT CLOUDS INTO THE APACHE SPARK BIG DATA ENGINE

    Directory of Open Access Journals (Sweden)

    J. Boehm

    2016-06-01

    Full Text Available In the geospatial domain we have now reached the point where data volumes we handle have clearly grown beyond the capacity of most desktop computers. This is particularly true in the area of point cloud processing. It is therefore naturally lucrative to explore established big data frameworks for big geospatial data. The very first hurdle is the import of geospatial data into big data frameworks, commonly referred to as data ingestion. Geospatial data is typically encoded in specialised binary file formats, which are not naturally supported by the existing big data frameworks. Instead such file formats are supported by software libraries that are restricted to single CPU execution. We present an approach that allows the use of existing point cloud file format libraries on the Apache Spark big data framework. We demonstrate the ingestion of large volumes of point cloud data into a compute cluster. The approach uses a map function to distribute the data ingestion across the nodes of a cluster. We test the capabilities of the proposed method to load billions of points into a commodity hardware compute cluster and we discuss the implications on scalability and performance. The performance is benchmarked against an existing native Apache Spark data import implementation.
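The map-based distribution of a single-threaded reader can be sketched as follows; threads over in-memory byte chunks stand in here for Spark's map() over cluster nodes, and the packed 12-byte x,y,z records are a stand-in for a real LAS/LAZ format library:

```python
import struct
from concurrent.futures import ThreadPoolExecutor

def parse_chunk(blob):
    """Stand-in for a single-CPU point cloud reader: unpack x,y,z float triples."""
    n = len(blob) // 12
    return [struct.unpack_from("<3f", blob, i * 12) for i in range(n)]

def ingest(blobs, workers=4):
    """Map the existing reader over chunks in parallel, as Spark distributes files over nodes."""
    with ThreadPoolExecutor(workers) as pool:
        parts = list(pool.map(parse_chunk, blobs))
    return [p for part in parts for p in part]

# Two synthetic "files" of 100 and 50 points.
blobs = [
    b"".join(struct.pack("<3f", float(i), 0.0, 0.0) for i in range(100)),
    b"".join(struct.pack("<3f", 0.0, float(i), 0.0) for i in range(50)),
]
points = ingest(blobs)
```

The key design point matches the paper: the format library itself stays single-threaded, and parallelism comes purely from mapping it over independent inputs.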

  5. Continuously deformation monitoring of subway tunnel based on terrestrial point clouds

    NARCIS (Netherlands)

    Kang, Z.; Tuo, L.; Zlatanova, S.

    2012-01-01

    The deformation monitoring of subway tunnel is of extraordinary necessity. Therefore, a method for deformation monitoring based on terrestrial point clouds is proposed in this paper. First, the traditional adjacent stations registration is replaced by sectioncontrolled registration, so that the

  6. COMPREHENSIVE COMPARISON OF TWO IMAGE-BASED POINT CLOUDS FROM AERIAL PHOTOS WITH AIRBORNE LIDAR FOR LARGE-SCALE MAPPING

    Directory of Open Access Journals (Sweden)

    E. Widyaningrum

    2017-09-01

    Full Text Available The integration of computer vision and photogrammetry to generate three-dimensional (3D) information from images has contributed to a wider use of point clouds for mapping purposes. Large-scale topographic map production requires 3D data with high precision and accuracy to represent the real conditions of the earth surface. Apart from LiDAR point clouds, image-based matching is also believed to have the ability to generate reliable and detailed point clouds from multiple-view images. In order to examine and analyze possible fusion of LiDAR and image-based matching for large-scale detailed mapping purposes, point clouds are generated by Semi-Global Matching (SGM) and by Structure from Motion (SfM). In order to conduct a comprehensive and fair comparison, this study uses aerial photos and LiDAR data that were acquired at the same time. Qualitative and quantitative assessments have been applied to evaluate the LiDAR and image-matching point cloud data in terms of visualization, geometric accuracy, and classification result. The comparison results conclude that LiDAR is the best data for large-scale mapping.
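A basic geometric accuracy check between an image-matching cloud and a LiDAR reference can be sketched as a nearest-neighbour RMSE; this is a brute-force toy version (real comparisons use spatial indexing and much larger clouds):

```python
import math

def cloud_to_cloud_rmse(eval_pts, ref_pts):
    """RMSE of each evaluated point's distance to its nearest reference point."""
    sq = []
    for p in eval_pts:
        d = min(math.dist(p, q) for q in ref_pts)   # brute-force nearest neighbour
        sq.append(d * d)
    return math.sqrt(sum(sq) / len(sq))

# LiDAR reference grid vs an image-based cloud with a uniform 10 cm offset.
lidar = [(float(x), float(y), 0.0) for x in range(5) for y in range(5)]
matched = [(x + 0.1, y, 0.0) for x, y, _ in lidar]
rmse = cloud_to_cloud_rmse(matched, lidar)
```

Systematic offsets like this one are exactly what the quantitative assessment in such comparisons is meant to surface.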

  7. Rapid, semi-automatic fracture and contact mapping for point clouds, images and geophysical data

    Science.gov (United States)

    Thiele, Samuel T.; Grose, Lachlan; Samsu, Anindita; Micklethwaite, Steven; Vollgger, Stefan A.; Cruden, Alexander R.

    2017-12-01

    The advent of large digital datasets from unmanned aerial vehicle (UAV) and satellite platforms now challenges our ability to extract information across multiple scales in a timely manner, often meaning that the full value of the data is not realised. Here we adapt a least-cost-path solver and specially tailored cost functions to rapidly interpolate structural features between manually defined control points in point cloud and raster datasets. We implement the method in the geographic information system QGIS and the point cloud and mesh processing software CloudCompare. Using these implementations, the method can be applied to a variety of three-dimensional (3-D) and two-dimensional (2-D) datasets, including high-resolution aerial imagery, digital outcrop models, digital elevation models (DEMs) and geophysical grids. We demonstrate the algorithm with four diverse applications in which we extract (1) joint and contact patterns in high-resolution orthophotographs, (2) fracture patterns in a dense 3-D point cloud, (3) earthquake surface ruptures of the Greendale Fault associated with the Mw7.1 Darfield earthquake (New Zealand) from high-resolution light detection and ranging (lidar) data, and (4) oceanic fracture zones from bathymetric data of the North Atlantic. The approach improves the consistency of the interpretation process while retaining expert guidance and achieves significant improvements (35-65 %) in digitisation time compared to traditional methods. Furthermore, it opens up new possibilities for data synthesis and can quantify the agreement between datasets and an interpretation.
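A least-cost-path solver of the kind adapted here can be sketched with Dijkstra's algorithm on a 2D cost raster; the cost function below (cost of the entered cell, 4-connected moves) is a simplification of the paper's specially tailored cost functions:

```python
import heapq

def least_cost_path(cost, start, goal):
    """Dijkstra over a 2D cost grid; stepping into a cell costs that cell's value."""
    rows, cols = len(cost), len(cost[0])
    dist, prev = {start: 0}, {}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist[node]:          # stale heap entry
            continue
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [goal], goal       # walk back from goal to start
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Low cost along the middle row mimics a fracture trace in an attribute raster.
grid = [[9, 9, 9, 9],
        [1, 1, 1, 1],
        [9, 9, 9, 9]]
path = least_cost_path(grid, (1, 0), (1, 3))
```

The manually defined control points of the method act as the start and goal nodes, and the path interpolates the structure between them.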

  8. Automatic Generation of Indoor Navigable Space Using a Point Cloud and its Scanner Trajectory

    Science.gov (United States)

    Staats, B. R.; Diakité, A. A.; Voûte, R. L.; Zlatanova, S.

    2017-09-01

    Automatic generation of indoor navigable models is mostly based on 2D floor plans. However, in many cases the floor plans are out of date. Buildings are not always built according to their blueprints, interiors might change after a few years because of modified walls and doors, and furniture may be repositioned to the user's preferences. Therefore, new approaches for the quick recording of indoor environments should be investigated. This paper concentrates on laser scanning with a Mobile Laser Scanner (MLS) device. The MLS device stores a point cloud and its trajectory. If the MLS device is operated by a human, the trajectory contains information which can be used to distinguish different surfaces. In this paper a method is presented for the identification of walkable surfaces based on the analysis of the point cloud and the trajectory of the MLS scanner. This method consists of several steps. First, the point cloud is voxelized. Second, the trajectory is analysed and projected to acquire seed voxels. Third, these seed voxels are grown into floor regions by use of a region-growing process. By identifying dynamic objects, doors and furniture, these floor regions can be modified so that each region represents a specific navigable space inside a building as a free navigable voxel space. By combining the point cloud and its corresponding trajectory, the walkable space can be identified for any type of building, even if the interior is scanned during business hours.
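Deriving seed voxels from the scanner trajectory can be sketched as follows, assuming a fixed sensor height above the walked floor; the height, voxel size and trajectory values are hypothetical:

```python
def seed_voxels(trajectory, sensor_height, voxel_size):
    """Project each trajectory pose down by the carrier's sensor height to seed floor voxels."""
    seeds = set()
    for x, y, z in trajectory:
        floor = (x, y, z - sensor_height)   # point assumed to lie on the walked floor
        seeds.add(tuple(int(c // voxel_size) for c in floor))
    return seeds

# Toy trajectory of a human-carried MLS device at roughly 1.9 m height.
traj = [(0.2, 0.1, 1.9), (1.1, 0.1, 1.9), (2.3, 0.2, 1.95)]
seeds = seed_voxels(traj, sensor_height=1.8, voxel_size=0.5)
```

These seeds are exactly the starting voxels of the subsequent region-growing step that produces the floor regions.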

  9. Comparison of 3D point clouds produced by LIDAR and UAV photoscan in the Rochefort cave (Belgium)

    Science.gov (United States)

    Watlet, Arnaud; Triantafyllou, Antoine; Kaufmann, Olivier; Le Mouelic, Stéphane

    2016-04-01

    Amongst today's techniques that are able to produce 3D point clouds, LIDAR and UAV (Unmanned Aerial Vehicle) photogrammetry are probably the most commonly used. Both methods have their own advantages and limitations. LIDAR scans create high resolution and high precision 3D point clouds, but such methods are generally costly, especially for sporadic surveys. Compared to LIDAR, UAVs (e.g. drones) are cheap and flexible to use in different kinds of environments. Moreover, the photogrammetric processing workflow for digital images taken with UAVs becomes easier with the rise of many affordable software packages (e.g. Agisoft, PhotoModeler3D, VisualSFM). We present here a challenging study made at the Rochefort Cave Laboratory (southern Belgium) comprising surface and underground surveys. The site is located in the Belgian Variscan fold-and-thrust belt, a region that shows many karstic networks within Devonian limestone units. A LIDAR scan has been acquired in the main chamber of the cave (~ 15000 m³) to spatialize a 3D point cloud of its inner walls and infer geological beds and structures. Even if the use of the LIDAR instrument was not really comfortable in such a caving environment, the collected data showed a remarkable precision according to a few control points' geometry. We also decided to perform another challenging survey of the same cave chamber by modelling a 3D point cloud using photogrammetry of a set of DSLR camera pictures taken from the ground and UAV pictures. The aim was to compare both techniques in terms of (i) implementation of data acquisition and processing, (ii) quality of the resulting 3D point clouds (point density, field vs cloud recovery and point precision), (iii) their application for geological purposes. Through the Rochefort case study, the main conclusions are that the LIDAR technique provides higher density point clouds with slightly higher precision than the photogrammetry method. However, 3D data modeled by photogrammetry provide visible light spectral information

  10. Automated Coarse Registration of Point Clouds in 3d Urban Scenes Using Voxel Based Plane Constraint

    Science.gov (United States)

    Xu, Y.; Boerner, R.; Yao, W.; Hoegner, L.; Stilla, U.

    2017-09-01

    For obtaining a full coverage of 3D scans in a large-scale urban area, the registration between point clouds acquired via terrestrial laser scanning (TLS) is normally mandatory. However, due to the complex urban environment, the automatic registration of different scans is still a challenging problem. In this work, we propose an automatic marker-free method for fast and coarse registration between point clouds using the geometric constraints of planar patches under a voxel structure. Our proposed method consists of four major steps: the voxelization of the point cloud, the approximation of planar patches, the matching of corresponding patches, and the estimation of transformation parameters. In the voxelization step, the point cloud of each scan is organized with a 3D voxel structure, by which the entire point cloud is partitioned into small individual patches. In the following step, we represent the points of each voxel with an approximated plane function, and select those patches resembling planar surfaces. Afterwards, for matching the corresponding patches, a RANSAC-based strategy is applied. Among all the planar patches of a scan, we randomly select a set of three planar surfaces, in order to build a coordinate frame via their normal vectors and their intersection points. The transformation parameters between scans are calculated from these two coordinate frames. The planar patch set whose transformation parameters yield the largest number of coplanar patches is identified as the optimal candidate set for estimating the correct transformation parameters. The experimental results using TLS datasets of different scenes reveal that our proposed method can be both effective and efficient for the coarse registration task. Especially, for the fast orientation between scans, our proposed method can achieve a registration error of around 2 degrees or less using the testing datasets, and is much more efficient than the classical baseline methods.
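Estimating the rotation between two scans from triples of plane normals can be sketched as follows, assuming the three normals in each scan are orthonormal so each triple directly forms a rotation matrix (the normals below are hypothetical, not from the paper's datasets):

```python
def matmul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def frame_from_normals(n1, n2, n3):
    """Columns of the frame are the three (assumed orthonormal) plane normals."""
    return transpose([list(n1), list(n2), list(n3)])

# Scan B's planes are scan A's planes rotated 90 degrees about the z-axis.
A = frame_from_normals((1, 0, 0), (0, 1, 0), (0, 0, 1))
B = frame_from_normals((0, 1, 0), (-1, 0, 0), (0, 0, 1))
# For orthonormal frames, the rotation taking A's frame onto B's is R = B * A^T.
R = matmul(B, transpose(A))
```

In the full RANSAC loop, many such random triples are scored by how many other patches become coplanar under the resulting transformation.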

  11. Micelle-Mediated Extraction and Cloud Point Pre-concentration for ...

    African Journals Online (AJOL)

    NICO

    A. Reza Zarei, F. Gholamian. Determination of phenol in water samples after micelle-mediated extraction and cloud point pre-concentration.

  12. Cloud point extraction and flame atomic absorption spectrometric determination of cadmium(II), lead(II), palladium(II) and silver(I) in environmental samples

    Energy Technology Data Exchange (ETDEWEB)

    Ghaedi, Mehrorang, E-mail: m_ghaedi@mail.yu.ac.ir [Chemistry Department, Yasouj University, Yasouj 75914-353 (Iran, Islamic Republic of); Shokrollahi, Ardeshir [Chemistry Department, Yasouj University, Yasouj 75914-353 (Iran, Islamic Republic of); Niknam, Khodabakhsh [Chemistry Department, Persian Gulf University, Bushehr (Iran, Islamic Republic of); Niknam, Ebrahim; Najibi, Asma [Chemistry Department, Yasouj University, Yasouj 75914-353 (Iran, Islamic Republic of); Soylak, Mustafa [Chemistry Department, University of Erciyes, 38039 Kayseri (Turkey)

    2009-09-15

    The phase-separation phenomenon of non-ionic surfactants occurring in aqueous solution was used for the extraction of cadmium(II), lead(II), palladium(II) and silver(I). The analytical procedure involved the formation of complexes of the studied metals with bis((1H-benzo [d] imidazol-2yl)ethyl) sulfane (BIES), which were quantitatively extracted into the phase rich in octylphenoxypolyethoxyethanol (Triton X-114) after centrifugation. Methanol acidified with 1 mol L{sup -1} HNO{sub 3} was added to the surfactant-rich phase prior to its analysis by flame atomic absorption spectrometry (FAAS). The concentration of BIES, the pH and the amount of surfactant (Triton X-114) were optimized. At optimum conditions, detection limits (3s{sub b}/m) of 1.4, 2.8, 1.6 and 1.4 ng mL{sup -1} for Cd{sup 2+}, Pb{sup 2+}, Pd{sup 2+} and Ag{sup +}, along with preconcentration factors of 30 and enrichment factors of 48, 39, 32 and 42 for Cd{sup 2+}, Pb{sup 2+}, Pd{sup 2+} and Ag{sup +}, respectively, were obtained. The proposed cloud point extraction has been successfully applied for the determination of metal ions in real samples with complicated matrices such as radiology waste, vegetable, blood and urine samples.

  13. Dense range images from sparse point clouds using multi-scale processing

    NARCIS (Netherlands)

    Do, Q.L.; Ma, L.; With, de P.H.N.

    2013-01-01

    Multi-modal data processing based on visual and depth/range images has become relevant in computer vision for 3D reconstruction applications such as city modeling, robot navigation etc. In this paper, we generate highaccuracy dense range images from sparse point clouds to facilitate such

  14. EVALUATION OF METHODS FOR COREGISTRATION AND FUSION OF RPAS-BASED 3D POINT CLOUDS AND THERMAL INFRARED IMAGES

    Directory of Open Access Journals (Sweden)

    L. Hoegner

    2016-06-01

    Full Text Available This paper discusses the automatic coregistration and fusion of 3D point clouds generated from aerial image sequences and corresponding thermal infrared (TIR) images. Both RGB and TIR images have been taken from a RPAS platform with a predefined flight path where every RGB image has a corresponding TIR image taken from the same position and with the same orientation with respect to the accuracy of the RPAS system and the inertial measurement unit. To remove remaining differences in the exterior orientation, different strategies for coregistering RGB and TIR images are discussed: (i) coregistration based on 2D line segments for every single TIR image and the corresponding RGB image. This method implies a mainly planar scene to avoid mismatches; (ii) coregistration of both the dense 3D point clouds from RGB images and from TIR images by coregistering 2D image projections of both point clouds; (iii) coregistration based on 2D line segments in every single TIR image and 3D line segments extracted from intersections of planes fitted in the segmented dense 3D point cloud; (iv) coregistration of both the dense 3D point clouds from RGB images and from TIR images using both ICP and an adapted version based on corresponding segmented planes; (v) coregistration of both image sets based on point features. The quality is measured by comparing the differences of the back projection of homologous points in both corrected RGB and TIR images.

  15. 2.5D Multi-View Gait Recognition Based on Point Cloud Registration

    Science.gov (United States)

    Tang, Jin; Luo, Jian; Tjahjadi, Tardi; Gao, Yan

    2014-01-01

    This paper presents a method for modeling a 2.5-dimensional (2.5D) human body and extracting gait features for identifying the human subject. To achieve view-invariant gait recognition, a multi-view synthesizing method based on point cloud registration (MVSM) is proposed to generate multi-view training galleries. The concept of a density- and curvature-based Color Gait Curvature Image is introduced to map 2.5D data onto a 2D space to enable data dimension reduction by discrete cosine transform and 2D principal component analysis. Gait recognition is achieved via a 2.5D view-invariant gait recognition method based on point cloud registration. Experimental results on the in-house database captured by a Microsoft Kinect camera show a significant performance gain when using MVSM. PMID:24686727

  16. FIRST PRISMATIC BUILDING MODEL RECONSTRUCTION FROM TOMOSAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    Y. Sun

    2016-06-01

    Full Text Available This paper demonstrates for the first time the potential of explicitly modelling the individual roof surfaces to reconstruct 3-D prismatic building models using spaceborne tomographic synthetic aperture radar (TomoSAR) point clouds. The proposed approach is modular and works as follows: it first extracts the buildings via DSM generation and cutting-off the ground terrain. The DSM is smoothed using the BM3D denoising method proposed in (Dabov et al., 2007) and a gradient map of the smoothed DSM is generated based on height jumps. Watershed segmentation is then adopted to oversegment the DSM into different regions. Subsequently, height and polygon complexity constrained merging is employed to refine (i.e., to reduce) the retrieved number of roof segments. The coarse outline of each roof segment is then reconstructed and later refined using quadtree based regularization plus a zig-zag line simplification scheme. Finally, height is associated to each refined roof segment to obtain the 3-D prismatic model of the building. The proposed approach is illustrated and validated over a large building (convention center) in the city of Las Vegas using TomoSAR point clouds generated from a stack of 25 images using the Tomo-GENESIS software developed at DLR.

  17. COMPARISON OF UAS-BASED PHOTOGRAMMETRY SOFTWARE FOR 3D POINT CLOUD GENERATION: A SURVEY OVER A HISTORICAL SITE

    Directory of Open Access Journals (Sweden)

    F. Alidoost

    2017-11-01

    Full Text Available Nowadays, Unmanned Aerial System (UAS)-based photogrammetry offers an affordable, fast and effective approach to real-time acquisition of high-resolution geospatial information and automatic 3D modelling of objects for numerous applications such as topographic mapping, 3D city modelling, orthophoto generation, and cultural heritage preservation. In this paper, the capability of four different state-of-the-art software packages, namely 3DSurvey, Agisoft Photoscan, Pix4Dmapper Pro and SURE, is examined to generate high-density point clouds as well as a Digital Surface Model (DSM) over a historical site. The main steps of this study are image acquisition, point cloud generation, and accuracy assessment. The overlapping images are first captured using a quadcopter and are next processed by the different software packages to generate point clouds and DSMs. In order to evaluate the accuracy and quality of the point clouds and DSMs, both visual and geometric assessments are carried out, and the comparison results are reported.

  18. Comparison of Uas-Based Photogrammetry Software for 3d Point Cloud Generation: a Survey Over a Historical Site

    Science.gov (United States)

    Alidoost, F.; Arefi, H.

    2017-11-01

    Nowadays, Unmanned Aerial System (UAS)-based photogrammetry offers an affordable, fast and effective approach to real-time acquisition of high-resolution geospatial information and automatic 3D modelling of objects for numerous applications such as topographic mapping, 3D city modelling, orthophoto generation, and cultural heritage preservation. In this paper, the capability of four different state-of-the-art software packages, namely 3DSurvey, Agisoft Photoscan, Pix4Dmapper Pro and SURE, is examined to generate high-density point clouds as well as a Digital Surface Model (DSM) over a historical site. The main steps of this study are image acquisition, point cloud generation, and accuracy assessment. The overlapping images are first captured using a quadcopter and are next processed by the different software packages to generate point clouds and DSMs. In order to evaluate the accuracy and quality of the point clouds and DSMs, both visual and geometric assessments are carried out, and the comparison results are reported.

  19. Indoor Navigation from Point Clouds: 3d Modelling and Obstacle Detection

    Science.gov (United States)

    Díaz-Vilariño, L.; Boguslawski, P.; Khoshelham, K.; Lorenzo, H.; Mahdjoubi, L.

    2016-06-01

    In recent years, indoor modelling and navigation has become a topic of research interest because many stakeholders require navigation assistance in various application scenarios. Navigational assistance for blind or wheelchair-bound people, building crisis management such as fire protection, augmented reality for gaming, tourism, or training of emergency assistance units are just some of the direct applications of indoor modelling and navigation. Navigational information is traditionally extracted from 2D drawings or layouts. The real state of indoor spaces, including the position and geometry of openings such as windows and doors, and the presence of obstacles, is commonly ignored. In this work, a real indoor-path-planning methodology based on 3D point clouds is developed. The value and originality of the approach consist in considering point clouds not only for reconstructing semantically rich 3D indoor models, but also for detecting potential obstacles in route planning and using these to readapt the routes according to the real state of the indoor environment depicted by the laser scanner.

  20. INDOOR NAVIGATION FROM POINT CLOUDS: 3D MODELLING AND OBSTACLE DETECTION

    Directory of Open Access Journals (Sweden)

    L. Díaz-Vilariño

    2016-06-01

    Full Text Available In recent years, indoor modelling and navigation has become a topic of research interest because many stakeholders require navigation assistance in various application scenarios. Navigational assistance for blind or wheelchair-bound people, building crisis management such as fire protection, augmented reality for gaming, tourism, or training of emergency assistance units are just some of the direct applications of indoor modelling and navigation. Navigational information is traditionally extracted from 2D drawings or layouts. The real state of indoor spaces, including the position and geometry of openings such as windows and doors, and the presence of obstacles, is commonly ignored. In this work, a real indoor-path-planning methodology based on 3D point clouds is developed. The value and originality of the approach consist in considering point clouds not only for reconstructing semantically rich 3D indoor models, but also for detecting potential obstacles in route planning and using these to readapt the routes according to the real state of the indoor environment depicted by the laser scanner.

  1. RESEARCH OF REGISTRATION APPROACHES OF THERMAL INFRARED IMAGES AND INTENSITY IMAGES OF POINT CLOUD

    Directory of Open Access Journals (Sweden)

    L. Liu

    2017-09-01

    Full Text Available In order to realize the analysis of the thermal energy of objects in 3D vision, the registration of thermal infrared images and TLS (Terrestrial Laser Scanner) point clouds was studied. The original data were pre-processed: to make the scale and brightness contrast of the two kinds of data suitable for basic matching, the intensity image of the point cloud was produced and projected to a spherical coordinate system, and histogram equalization was applied to the thermal infrared image. This paper focused on registration approaches for thermal infrared images and intensity images of point clouds based on the SIFT, EOH-SIFT and PIIFD operators, the last of which is usually used for matching medical images with different spectral characteristics. The comparison results of the experiments showed that the PIIFD operator obtained much more accurate feature point correspondences than the SIFT and EOH-SIFT operators. The thermal infrared image and the intensity image also overlap well after a quadratic polynomial transformation. Therefore, PIIFD can be used as the basic operator for the registration of thermal infrared images and intensity images, and the operator can be further improved by incorporating an iteration method.

  2. Numerical methods for polyline-to-point-cloud registration with applications to patient-specific stent reconstruction.

    Science.gov (United States)

    Lin, Claire Yilin; Veneziani, Alessandro; Ruthotto, Lars

    2018-03-01

    We present novel numerical methods for polyline-to-point-cloud registration and their application to patient-specific modeling of deployed coronary artery stents from image data. Patient-specific coronary stent reconstruction is an important challenge in computational hemodynamics and relevant to the design and improvement of these prostheses. It is an invaluable tool in large-scale clinical trials that computationally investigate the effect of new generations of stents on hemodynamics and eventually tissue remodeling. Given a point cloud of strut positions, which can be extracted from images, our stent reconstruction method aims at finding a geometrical transformation that aligns a model of the undeployed stent to the point cloud. Mathematically, we describe the undeployed stent as a polyline, which is a piecewise linear object defined by its vertices and edges. We formulate the nonlinear registration as an optimization problem whose objective function consists of a similarity measure, quantifying the distance between the polyline and the point cloud, and a regularization functional, penalizing undesired transformations. Using projections of points onto the polyline structure, we derive novel distance measures. Our formulation supports most commonly used transformation models, including very flexible nonlinear deformations. We also propose two regularization approaches ensuring the smoothness of the estimated nonlinear transformation. We demonstrate the potential of our methods using an academic 2D example and a real-life 3D bioabsorbable stent reconstruction problem. Our results show that the registration problem can be solved to sufficient accuracy within seconds using only a small number of Gauss-Newton iterations. Copyright © 2017 John Wiley & Sons, Ltd.
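The projection-based distance between a point cloud and a polyline that underlies the similarity measure can be sketched as follows (a minimal NumPy illustration of point-to-segment projection, not the authors' implementation):

```python
import numpy as np

def point_to_polyline_distances(points, vertices):
    """For each point, the distance to the nearest segment of a polyline.

    points:   (N, d) array of point-cloud coordinates (d = 2 or 3)
    vertices: (M, d) array of polyline vertices; consecutive pairs form edges
    """
    a = vertices[:-1]                              # segment starts, (M-1, d)
    b = vertices[1:]                               # segment ends,   (M-1, d)
    ab = b - a
    ab_len2 = np.einsum('ij,ij->i', ab, ab)        # squared segment lengths
    # Projection parameter t in [0, 1] for every point/segment pair
    ap = points[:, None, :] - a[None, :, :]        # (N, M-1, d)
    t = np.einsum('nmd,md->nm', ap, ab) / ab_len2  # (N, M-1)
    t = np.clip(t, 0.0, 1.0)
    proj = a[None, :, :] + t[..., None] * ab[None, :, :]
    d = np.linalg.norm(points[:, None, :] - proj, axis=-1)
    return d.min(axis=1)                           # nearest-segment distance
```

Summing (squares of) these distances over the cloud gives one simple instance of the kind of similarity measure the abstract describes.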

  3. GAMMA-CLOUD: a computer code for calculating gamma-exposure due to a radioactive cloud released from a point source

    Energy Technology Data Exchange (ETDEWEB)

    Sugimoto, O [Chugoku Electric Power Co. Inc., Hiroshima (Japan); Sawaguchi, Y; Kaneko, M

    1979-03-01

    A computer code, designated GAMMA-CLOUD, has been developed by specialists of electric power companies to meet requests from the companies for a unified means of calculating annual external doses from routine releases of radioactive gaseous effluents from nuclear power plants, based on the Japan Atomic Energy Commission's guides for environmental dose evaluation. GAMMA-CLOUD is written in FORTRAN and its required capacity is less than 100 kilobytes. The average gamma-exposure at an observation point can be calculated within a few minutes with precision comparable to other existing codes.

  4. Preconcentration of silver as silver xanthate on activated carbon

    International Nuclear Information System (INIS)

    Ramadevi, P.; Naidu, U.V.; Naidu, G.R.K.

    1988-01-01

    Silver from aqueous solution was preconcentrated by adsorption on activated carbon as silver xanthate. Factors influencing the adsorption of silver were studied. Optimum conditions for the preconcentration of silver were established. (author) 9 refs.; 3 tabs

  5. Datum Feature Extraction and Deformation Analysis Method Based on Normal Vector of Point Cloud

    Science.gov (United States)

    Sun, W.; Wang, J.; Jin, F.; Liang, Z.; Yang, Y.

    2018-04-01

    In order to solve the problem of lacking an applicable analysis method in the application of three-dimensional laser scanning technology to the field of deformation monitoring, an efficient method for extracting datum features and analysing deformation based on the normal vectors of a point cloud was proposed. Firstly, a kd-tree is used to establish the topological relation. Datum points are detected by tracking the normal vector of the point cloud, determined by the normal vector of the local plane. Then, cubic B-spline curve fitting is performed on the datum points. Finally, the datum elevation and the inclination angle of each radial point are calculated according to the fitted curve, and the deformation information is analyzed. The proposed approach was verified on a real large-scale tank data set captured with a terrestrial laser scanner in a chemical plant. The results show that the method can obtain the entire information of the monitored object quickly and comprehensively, and accurately reflect the deformation of the datum features.
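The per-point normal vectors that drive the datum-point detection are commonly estimated from the PCA of each point's local neighbourhood; a minimal sketch (function name and neighbourhood size are illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=16):
    """Estimate a unit normal per point from the PCA of its k nearest
    neighbours: the eigenvector of the smallest eigenvalue of the local
    covariance approximates the normal of the locally fitted plane."""
    tree = cKDTree(points)              # kd-tree for topological relations
    _, idx = tree.query(points, k=k)    # k nearest neighbours per point
    normals = np.empty_like(points)
    for i, nb in enumerate(idx):
        nbrs = points[nb] - points[nb].mean(axis=0)
        cov = nbrs.T @ nbrs             # local covariance (unnormalised)
        w, v = np.linalg.eigh(cov)      # eigenvalues ascending
        normals[i] = v[:, 0]            # smallest-eigenvalue eigenvector
    return normals
```

The sign of each normal is ambiguous; orienting them consistently (e.g. towards the scanner) is a separate step.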

  6. Hierarchical Threshold Adaptive for Point Cloud Filter Algorithm of Moving Surface Fitting

    Directory of Open Access Journals (Sweden)

    ZHU Xiaoxiao

    2018-02-01

    Full Text Available In order to improve the accuracy, efficiency and adaptability of point cloud filtering algorithms, a hierarchical threshold-adaptive point cloud filter algorithm based on moving surface fitting was proposed. Firstly, the noisy points are removed using a statistical histogram method. Secondly, a grid index is established by grid segmentation, and the surface equation is set up through the lowest points among the neighborhood grids. The real height and the fitted height are calculated, and the difference between the elevation and the threshold determines the classification. Finally, in order to improve the filtering accuracy, hierarchical filtering is used to change the grid size and automatically set the neighborhood size and threshold until the filtering result reaches the accuracy requirement. The test data provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) were used to verify the algorithm. The type I, type II and total errors are 7.33%, 10.64% and 6.34% respectively. The algorithm was compared with the eight classical filtering algorithms published by ISPRS. The experimental results show that the method is well-adapted and produces highly accurate filtering results.
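The core grid step, classifying points against the lowest point in their cell with an elevation threshold, can be sketched as follows (a strongly simplified stand-in for the full moving-surface-fitting filter; cell size and threshold values are illustrative):

```python
import numpy as np

def grid_ground_filter(points, cell=2.0, dz=0.3):
    """Label points as ground if they lie within dz metres of the lowest
    point in their planimetric grid cell. The real algorithm fits a moving
    surface through neighbourhood minima; here the cell minimum stands in
    for that fitted surface."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    keys = ij[:, 0] * 100000 + ij[:, 1]     # flatten 2-D cell index to a key
    ground = np.zeros(len(points), dtype=bool)
    for key in np.unique(keys):
        mask = keys == key
        zmin = points[mask, 2].min()        # provisional ground height
        ground[mask] = points[mask, 2] - zmin <= dz
    return ground
```

A hierarchical variant would rerun this with successively smaller `cell` and `dz`, as the abstract describes.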

  7. [Cloud Point extraction for determination of mercury in Chinese herbal medicine by hydride generation atomic fluorescence spectrometry with optimization using Box-Behnken design].

    Science.gov (United States)

    Wang, Mei; Li, Shan; Zhou, Jian-dong; Xu, Ying; Long, Jun-biao; Yang, Bing-yi

    2014-08-01

    Cloud point extraction (CPE) is proposed as a pre-concentration procedure for the determination of Hg in Chinese herbal medicine samples by hydride generation-atomic fluorescence spectrometry (HG-AFS). Hg2+ was reacted with dithizone to form a hydrophobic chelate under suitable pH conditions. Using Triton X-114 as the surfactant, the chelate was quantitatively extracted into a small volume of the surfactant-rich phase by heating the solution in a water bath for 15 min and centrifuging. Four variables, including pH, dithizone concentration, Triton X-114 concentration and equilibrium temperature (T), showed a significant effect on the extraction efficiency of total Hg as evaluated by single-factor experiments, and a Box-Behnken design and response surface methodology were adopted to further investigate the mutual interactions between these variables and to identify the optimal values that would generate maximum extraction efficiency. A quadratic polynomial was used to fit the response to the experimental levels of each variable. All linear and quadratic terms of the four variables, and the interactions between pH and Triton X-114 and between pH and dithizone, affected the response value (extraction efficiency) significantly at the 5% level. The optimum extraction conditions were as follows: pH 5.1, Triton X-114 concentration of 1.16 g x L(-1), dithizone concentration of 4.87 mol x L(-1), and T 58.2 degrees C; the predicted fluorescence value under the optimum conditions was 4528.74, and the experimental value differed from it by only 2.1%. Under these conditions, fluorescence was linear with mercury concentration in the range of 1-5 microg x L(-1). The limit of detection obtained was 0.01247 microg x L(-1), with a relative standard deviation (R.S.D.) of 1.30% for six replicate determinations. The proposed method was successfully applied to the determination of Hg in Morindae Radix, Andrographitis and dried tangerine samples with recoveries of 95.0%-100.0%.
Apparently Box-Behnken design combined with
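The response-surface step pairs a Box-Behnken design with a full second-order model; fitting such a model by least squares can be sketched as follows (generic RSM machinery, not the paper's data):

```python
import numpy as np
from itertools import combinations

def fit_quadratic_response(X, y):
    """Least-squares fit of the full second-order response-surface model
    y = b0 + sum(bi*xi) + sum(bii*xi^2) + sum(bij*xi*xj), the model class
    used with Box-Behnken designs."""
    n, k = X.shape
    cols = [np.ones(n)]                                   # intercept
    cols += [X[:, i] for i in range(k)]                   # linear terms
    cols += [X[:, i] ** 2 for i in range(k)]              # quadratic terms
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
    A = np.column_stack(cols)                             # design matrix
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta, A
```

Maximising the fitted surface (e.g. by grid search over the coded variable ranges) then yields predicted optimum conditions analogous to those reported in the abstract.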

  8. Feasibility of tetracycline, a common antibiotic, as chelating agent for spectrophotometric determination of UO22+ after cloud point extraction

    International Nuclear Information System (INIS)

    Esra Bagda; Ebru Yabas; Nihat Karakus

    2014-01-01

    A cloud point extraction (CPE) procedure is presented for the preconcentration of the UO22+ ion in different water samples. Tetracycline (TC), the second most widely used antibiotic in the world, is used as the chelating agent. To the best of our knowledge, this is the first work in which an antibiotic is used as a chelating agent for CPE of UO22+. Moreover, the use of TC as the complexing agent provides excellent chelating features: the TC molecule has a large number of functional groups (adjacent hydroxyl oxygen atoms, cyclohexanone oxygen atoms, and amide groups) which can form stable complexes with UO22+. After complexation with TC, UO22+ ions were quantitatively recovered in Triton X-100 after cooling in an ice bath. 3.0 mL of acetate buffer was added to the surfactant-rich phase prior to its analysis by UV-Vis spectrophotometer. The influence of analytical parameters including pH, buffer volume, TC and Triton X-100 concentrations, bath temperature, and incubation time was optimized. The effect of matrix ions on the recovery of UO22+ ions was investigated. The limit of detection was 0.0746 μg mL-1, along with an enrichment factor of 14.3 and an R.S.D. of 3.6%. The proposed procedure was applied to the analysis of various environmental water samples. In addition, the electronic distribution of the TC molecule was investigated via its frontier molecular orbital density distributions. (author)

  9. A ROBUST REGISTRATION ALGORITHM FOR POINT CLOUDS FROM UAV IMAGES FOR CHANGE DETECTION

    Directory of Open Access Journals (Sweden)

    A. Al-Rawabdeh

    2016-06-01

    Full Text Available Landslides are among the major threats to urban landscapes and manmade infrastructure. They often cause economic losses, property damage, and loss of lives. Temporal monitoring data of landslides from different epochs empowers the evaluation of landslide progression. Alignment of overlapping surfaces from two or more epochs is crucial for the proper analysis of landslide dynamics. The traditional methods for point-cloud-based landslide monitoring rely on using a variation of the Iterative Closest Point (ICP) registration procedure to align any reconstructed surfaces from different epochs to a common reference frame. However, sometimes the ICP-based registration can fail or may not provide sufficient accuracy. For example, point clouds from different epochs might fit to local minima due to a lack of geometrical variability within the data. Also, manual interaction is required to exclude any non-stable areas from the registration process. In this paper, a robust image-based registration method is introduced for the simultaneous evaluation of all registration parameters. This includes the Interior Orientation Parameters (IOPs) of the camera and the Exterior Orientation Parameters (EOPs) of the involved images from all available observation epochs via a bundle block adjustment with self-calibration. Next, a semi-global dense matching technique is implemented to generate dense 3D point clouds for each epoch using the images captured in a particular epoch separately. The normal distances between any two consecutive point clouds can then be readily computed, because the point clouds are already effectively co-registered. A low-cost DJI Phantom II Unmanned Aerial Vehicle (UAV) was customised and used in this research for temporal data collection over an active soil creep area in Lethbridge, Alberta, Canada. The customisation included adding a GPS logger and a Large-Field-Of-View (LFOV) action camera which facilitated capturing high-resolution geo-tagged images

  10. a Robust Registration Algorithm for Point Clouds from Uav Images for Change Detection

    Science.gov (United States)

    Al-Rawabdeh, A.; Al-Gurrani, H.; Al-Durgham, K.; Detchev, I.; He, F.; El-Sheimy, N.; Habib, A.

    2016-06-01

    Landslides are among the major threats to urban landscapes and manmade infrastructure. They often cause economic losses, property damage, and loss of lives. Temporal monitoring data of landslides from different epochs empowers the evaluation of landslide progression. Alignment of overlapping surfaces from two or more epochs is crucial for the proper analysis of landslide dynamics. The traditional methods for point-cloud-based landslide monitoring rely on using a variation of the Iterative Closest Point (ICP) registration procedure to align any reconstructed surfaces from different epochs to a common reference frame. However, sometimes the ICP-based registration can fail or may not provide sufficient accuracy. For example, point clouds from different epochs might fit to local minima due to a lack of geometrical variability within the data. Also, manual interaction is required to exclude any non-stable areas from the registration process. In this paper, a robust image-based registration method is introduced for the simultaneous evaluation of all registration parameters. This includes the Interior Orientation Parameters (IOPs) of the camera and the Exterior Orientation Parameters (EOPs) of the involved images from all available observation epochs via a bundle block adjustment with self-calibration. Next, a semi-global dense matching technique is implemented to generate dense 3D point clouds for each epoch using the images captured in a particular epoch separately. The normal distances between any two consecutive point clouds can then be readily computed, because the point clouds are already effectively co-registered. A low-cost DJI Phantom II Unmanned Aerial Vehicle (UAV) was customised and used in this research for temporal data collection over an active soil creep area in Lethbridge, Alberta, Canada. The customisation included adding a GPS logger and a Large-Field-Of-View (LFOV) action camera which facilitated capturing high-resolution geo-tagged images in two epochs.
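Once the epochs are effectively co-registered, the normal distances mentioned above reduce to point-to-plane distances between consecutive clouds; a minimal sketch (it assumes per-point normals for the reference epoch are available, e.g. from a separate PCA step):

```python
import numpy as np
from scipy.spatial import cKDTree

def normal_distances(ref_points, ref_normals, query_points):
    """Signed point-to-plane distance from each query point (epoch 2) to the
    locally planar surface of the reference cloud (epoch 1), assuming the
    two clouds are already co-registered."""
    tree = cKDTree(ref_points)
    _, idx = tree.query(query_points)          # nearest reference point
    diff = query_points - ref_points[idx]
    # Project the offset onto the reference normal at the matched point
    return np.einsum('ij,ij->i', diff, ref_normals[idx])
```

Positive and negative values then indicate surface uplift or subsidence between epochs, which is the quantity of interest for landslide progression.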

  11. The Iqmulus Urban Showcase: Automatic Tree Classification and Identification in Huge Mobile Mapping Point Clouds

    Science.gov (United States)

    Böhm, J.; Bredif, M.; Gierlinger, T.; Krämer, M.; Lindenberg, R.; Liu, K.; Michel, F.; Sirmacek, B.

    2016-06-01

    Current 3D data capturing, as implemented on for example airborne or mobile laser scanning systems, is able to efficiently sample the surface of a city by billions of unselective points during one working day. What is still difficult is to extract and visualize meaningful information hidden in these point clouds with the same efficiency. This is where the FP7 IQmulus project enters the scene. IQmulus is an interactive facility for processing and visualizing big spatial data. In this study the potential of IQmulus is demonstrated on a laser mobile mapping point cloud of 1 billion points sampling ~10 km of street environment in Toulouse, France. After the data is uploaded to the IQmulus Hadoop Distributed File System, a workflow is defined by the user, consisting of retiling the data followed by a PCA-driven local dimensionality analysis, which runs efficiently on the IQmulus cloud facility using a Spark implementation. Points scattering in 3 directions are clustered into the tree class, and are next separated into individual trees. Five hours of processing on the 12-node computing cluster results in the automatic identification of 4000+ urban trees. Visualization of the results in the IQmulus fat client helps users to appreciate the results, and developers to identify remaining flaws in the processing workflow.
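The PCA-driven local dimensionality analysis assigns each neighbourhood linearity, planarity and scattering scores from the eigenvalues of its covariance matrix; points scattering in all three directions (high scattering) become tree candidates. A minimal sketch of these standard features (formulas follow the common eigenvalue-ratio definitions, not necessarily IQmulus' exact implementation):

```python
import numpy as np

def dimensionality_features(neighbors):
    """Linearity / planarity / scattering of a local neighbourhood from the
    eigenvalues l1 >= l2 >= l3 of its covariance matrix. Scattering close
    to 1 means the points spread in all 3 directions (vegetation-like)."""
    c = neighbors - neighbors.mean(axis=0)
    l3, l2, l1 = np.linalg.eigvalsh(c.T @ c / len(c))  # ascending order
    linearity = (l1 - l2) / l1
    planarity = (l2 - l3) / l1
    scattering = l3 / l1
    return linearity, planarity, scattering
```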

  12. AUTOMATIC REGISTRATION OF TERRESTRIAL LASER SCANNER POINT CLOUDS USING NATURAL PLANAR SURFACES

    Directory of Open Access Journals (Sweden)

    P. W. Theiler

    2012-07-01

    Full Text Available Terrestrial laser scanners have become a standard piece of surveying equipment, used in diverse fields like geomatics, manufacturing and medicine. However, the processing of today's large point clouds is time-consuming, cumbersome and not automated enough. A basic step of post-processing is the registration of scans from different viewpoints. At present this is still done using artificial targets or tie points, mostly by manual clicking. The aim of this registration step is a coarse alignment, which can then be improved with the existing algorithm for fine registration. The focus of this paper is to provide such a coarse registration in a fully automatic fashion, and without placing any target objects in the scene. The basic idea is to use virtual tie points generated by intersecting planar surfaces in the scene. Such planes are detected in the data with RANSAC and optimally fitted using least squares estimation. Due to the huge amount of recorded points, planes can be determined very accurately, resulting in well-defined tie points. Given two sets of potential tie points recovered in two different scans, registration is performed by searching for the assignment which preserves the geometric configuration of the largest possible subset of all tie points. Since exhaustive search over all possible assignments is intractable even for moderate numbers of points, the search is guided by matching individual pairs of tie points with the help of a novel descriptor based on the properties of a point's parent planes. Experiments show that the proposed method is able to successfully coarse register TLS point clouds without the need for artificial targets.
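A virtual tie point of the kind described is the intersection of three detected planes; writing each plane as n·x = d, the point is the solution of a 3×3 linear system. A minimal sketch (the plane parameters in the example are illustrative):

```python
import numpy as np

def virtual_tie_point(planes):
    """Intersect three non-parallel planes, each given as (n, d) with
    n . x = d, to obtain a virtual tie point for coarse registration.
    The system is well-conditioned only if the three normals are
    linearly independent."""
    normals = np.array([n for n, _ in planes])   # (3, 3) stacked normals
    d = np.array([dist for _, dist in planes])   # (3,) plane offsets
    return np.linalg.solve(normals, d)
```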

  13. Automatic Registration of Terrestrial Laser Scanner Point Clouds Using Natural Planar Surfaces

    Science.gov (United States)

    Theiler, P. W.; Schindler, K.

    2012-07-01

    Terrestrial laser scanners have become a standard piece of surveying equipment, used in diverse fields like geomatics, manufacturing and medicine. However, the processing of today's large point clouds is time-consuming, cumbersome and not automated enough. A basic step of post-processing is the registration of scans from different viewpoints. At present this is still done using artificial targets or tie points, mostly by manual clicking. The aim of this registration step is a coarse alignment, which can then be improved with the existing algorithm for fine registration. The focus of this paper is to provide such a coarse registration in a fully automatic fashion, and without placing any target objects in the scene. The basic idea is to use virtual tie points generated by intersecting planar surfaces in the scene. Such planes are detected in the data with RANSAC and optimally fitted using least squares estimation. Due to the huge amount of recorded points, planes can be determined very accurately, resulting in well-defined tie points. Given two sets of potential tie points recovered in two different scans, registration is performed by searching for the assignment which preserves the geometric configuration of the largest possible subset of all tie points. Since exhaustive search over all possible assignments is intractable even for moderate numbers of points, the search is guided by matching individual pairs of tie points with the help of a novel descriptor based on the properties of a point's parent planes. Experiments show that the proposed method is able to successfully coarse register TLS point clouds without the need for artificial targets.

  14. ANALYSIS, THEMATIC MAPS AND DATA MINING FROM POINT CLOUD TO ONTOLOGY FOR SOFTWARE DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    R. Nespeca

    2016-06-01

    Full Text Available The primary purpose of the survey for the restoration of Cultural Heritage is the interpretation of the state of building preservation. For this, the advantages of the remote sensing systems that generate dense point clouds (range-based or image-based) are not limited only to the acquired data. The paper shows that it is possible to extrapolate very useful information for diagnostics using spatial annotation, with the use of algorithms already implemented in open-source software. Generally, the drawing of degradation maps is the result of manual work, and is therefore dependent on the subjectivity of the operator. This paper describes a method of extraction and visualization of information obtained by mathematical procedures that are quantitative, repeatable and verifiable. The case study is a part of the east facade of the Eglise collégiale Saint-Maurice, also called Notre Dame des Grâces, in Caromb, southern France. The work was conducted on the matrix of information contained in the point cloud in ASCII format. The first result is the extrapolation of new geometric descriptors. First, we create digital maps with the calculated quantities. Subsequently, we move to semi-quantitative analyses that transform the new data into useful information. We have written algorithms for accurate selection, for the segmentation of the point cloud, and for automatic calculation of the real surface and the volume. Furthermore, we have created graphs of the spatial distribution of the descriptors. This work shows that by working during the data processing stage we can transform the point cloud into an enriched database: its use, management and data mining are easy, fast and effective for everyone involved in the restoration process.

  15. Pairwise registration of TLS point clouds using covariance descriptors and a non-cooperative game

    Science.gov (United States)

    Zai, Dawei; Li, Jonathan; Guo, Yulan; Cheng, Ming; Huang, Pengdi; Cao, Xiaofei; Wang, Cheng

    2017-12-01

    It is challenging to automatically register TLS point clouds with noise, outliers and varying overlap. In this paper, we propose a new method for pairwise registration of TLS point clouds. We first generate covariance matrix descriptors with an adaptive neighborhood size from the point clouds to find candidate correspondences; we then construct a non-cooperative game to isolate mutually compatible correspondences, which are considered as true positives. The method was tested on three models acquired by two different TLS systems. Experimental results demonstrate that our proposed adaptive covariance (ACOV) descriptor is invariant to rigid transformation and robust to noise and varying resolutions. The average registration errors achieved on the three models are 0.46 cm, 0.32 cm and 1.73 cm, respectively. The computational times on these models are about 288 s, 184 s and 903 s, respectively. Moreover, our registration framework using ACOV descriptors and a game-theoretic method is superior to the state-of-the-art methods in terms of both registration error and computational time. The experiment on a large outdoor scene further demonstrates the feasibility and effectiveness of our proposed pairwise registration framework.
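A toy version of a covariance descriptor and an SPD-aware comparison metric can be sketched as follows (here the per-point features are just the coordinates and the log-Euclidean metric is one common choice for comparing covariance matrices; the actual ACOV descriptor uses richer, adaptively sized features):

```python
import numpy as np

def covariance_descriptor(neighbors):
    """Covariance matrix of simple per-point features (here just x, y, z),
    a toy stand-in for the richer feature set of the ACOV descriptor."""
    f = neighbors - neighbors.mean(axis=0)
    return f.T @ f / (len(f) - 1)

def _logm_spd(c):
    """Matrix logarithm of a symmetric positive-definite matrix via eigh."""
    w, v = np.linalg.eigh(c)
    return (v * np.log(w)) @ v.T

def log_euclidean_distance(c1, c2):
    """Frobenius distance between matrix logarithms -- a standard metric
    for comparing SPD covariance descriptors."""
    return np.linalg.norm(_logm_spd(c1) - _logm_spd(c2), 'fro')
```

Candidate correspondences are then point pairs whose descriptors are close under this metric; the non-cooperative game in the abstract prunes those candidates for mutual compatibility.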

  16. TH-AB-202-08: A Robust Real-Time Surface Reconstruction Method On Point Clouds Captured From a 3D Surface Photogrammetry System

    International Nuclear Information System (INIS)

    Liu, W; Sawant, A; Ruan, D

    2016-01-01

    Purpose: Surface photogrammetry (e.g. VisionRT, C-Rad) provides a noninvasive way to obtain high-frequency measurement for patient motion monitoring in radiotherapy. This work aims to develop a real-time surface reconstruction method on the acquired point clouds, whose acquisitions are subject to noise and missing measurements. In contrast to existing surface reconstruction methods that are usually computationally expensive, the proposed method reconstructs continuous surfaces with comparable accuracy in real-time. Methods: The key idea in our method is to solve and propagate a sparse linear relationship from the point cloud (measurement) manifold to the surface (reconstruction) manifold, taking advantage of the similarity in local geometric topology in both manifolds. With consistent point cloud acquisition, we propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, building the point correspondences by the iterative closest point (ICP) method. To accommodate changing noise levels and/or presence of inconsistent occlusions, we further propose a modified sparse regression (MSR) model to account for the large and sparse error built by ICP, with a Laplacian prior. We evaluated our method on both clinical acquired point clouds under consistent conditions and simulated point clouds with inconsistent occlusions. The reconstruction accuracy was evaluated w.r.t. root-mean-squared-error, by comparing the reconstructed surfaces against those from the variational reconstruction method. Results: On clinical point clouds, both the SR and MSR models achieved sub-millimeter accuracy, with mean reconstruction time reduced from 82.23 seconds to 0.52 seconds and 0.94 seconds, respectively. On simulated point cloud with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent performance despite the introduced occlusions. Conclusion: We have developed a real

  17. AN ACCURACY ASSESSMENT OF GEOREFERENCED POINT CLOUDS PRODUCED VIA MULTI-VIEW STEREO TECHNIQUES APPLIED TO IMAGERY ACQUIRED VIA UNMANNED AERIAL VEHICLE

    Directory of Open Access Journals (Sweden)

    S. Harwin

    2012-08-01

Full Text Available Low-cost Unmanned Aerial Vehicles (UAVs) are becoming viable environmental remote sensing tools. Sensor and battery technology is expanding the data capture opportunities. The UAV, as a close range remote sensing platform, can capture high resolution photography on-demand. This imagery can be used to produce dense point clouds using multi-view stereopsis (MVS) techniques combining computer vision and photogrammetry. This study examines point clouds produced using MVS techniques applied to UAV and terrestrial photography. A multi-rotor micro UAV acquired aerial imagery from an altitude of approximately 30–40 m. The point clouds produced are extremely dense (<1–3 cm point spacing) and provide a detailed record of the surface in the study area, a 70 m section of sheltered coastline in southeast Tasmania. Areas with little surface texture were not well captured; similarly, areas with complex geometry such as grass tussocks and woody scrub were not well mapped. The process fails to penetrate vegetation, but extracts very detailed terrain in unvegetated areas. Initially the point clouds are in an arbitrary coordinate system and need to be georeferenced. A Helmert transformation is applied based on matching ground control points (GCPs) identified in the point clouds to GCPs surveyed with differential GPS. These point clouds can be used, alongside laser scanning and more traditional techniques, to provide very detailed and precise representations of a range of landscapes at key moments. There are many potential applications for the UAV-MVS technique, including coastal erosion and accretion monitoring, mine surveying and other environmental monitoring applications. For the generated point clouds to be used in spatial applications they need to be converted to surface models that reduce dataset size without losing too much detail. Triangulated meshes are one option; another is Poisson Surface Reconstruction. This latter option makes use of point normals.
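The georeferencing step described here, a Helmert (seven-parameter similarity) transformation estimated from matched GCP pairs, has a closed-form least-squares solution via the SVD (Umeyama's method). This is a minimal sketch under that assumption, not the software used in the study:

```python
import numpy as np

def helmert_3d(src, dst):
    """Estimate s, R, t with dst ≈ s * R @ src + t from matched 3D point
    pairs (rows of src/dst), using Umeyama's SVD-based solution."""
    n = len(src)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    S, D = src - mu_s, dst - mu_d
    var_s = (S ** 2).sum() / n                 # total variance of source points
    H = D.T @ S / n                            # cross-covariance matrix
    U, sig, Vt = np.linalg.svd(H)
    E = np.eye(3)
    E[2, 2] = np.sign(np.linalg.det(U) * np.linalg.det(Vt))  # forbid reflection
    R = U @ E @ Vt
    s = np.trace(np.diag(sig) @ E) / var_s
    t = mu_d - s * R @ mu_s
    return s, R, t

# Recover a known transform from ten matched control points
rng = np.random.default_rng(0)
src = rng.normal(size=(10, 3))
th = 0.3
Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0, 0.0, 1.0]])
dst = 1.7 * src @ Rz.T + np.array([5.0, -2.0, 0.4])
s, R, t = helmert_3d(src, dst)
```

On noise-free correspondences the parameters are recovered exactly; with surveyed GCPs the same formula gives the least-squares fit.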

  18. Determination of Ultra-trace Rhodium in Water Samples by Graphite Furnace Atomic Absorption Spectrometry after Cloud Point Extraction Using 2-(5-Iodo-2-Pyridylazo)-5-Dimethylaminoaniline as a Chelating Agent.

    Science.gov (United States)

    Han, Quan; Huo, Yanyan; Wu, Jiangyan; He, Yaping; Yang, Xiaohui; Yang, Longhu

    2017-03-24

A highly sensitive method based on cloud point extraction (CPE) separation/preconcentration and graphite furnace atomic absorption spectrometry (GFAAS) detection has been developed for the determination of ultra-trace amounts of rhodium in water samples. A new reagent, 2-(5-iodo-2-pyridylazo)-5-dimethylaminoaniline (5-I-PADMA), was used as the chelating agent and the nonionic surfactant Triton X-114 was chosen as the extractant. In a HAc-NaAc buffer solution at pH 5.5, Rh(III) reacts with 5-I-PADMA to form a stable chelate by heating in a boiling water bath for 10 min. Subsequently, the chelate is extracted into the surfactant phase and separated from the bulk water. The factors affecting CPE were investigated. Under the optimized conditions, the calibration graph was linear in the range of 0.1-6.0 ng/mL, the detection limit was 0.023 ng/mL for rhodium and the relative standard deviation was 3.67% (c = 1.0 ng/mL, n = 11). The method has been applied to the determination of trace rhodium in water samples with satisfactory results.

19. Automatic registration of iPhone images to LASER point clouds of the urban structures using shape features

    Directory of Open Access Journals (Sweden)

    B. Sirmacek

    2013-10-01

Full Text Available Fusion of 3D airborne laser (LIDAR) data and terrestrial optical imagery can be applied in 3D urban modeling and model updating. The most challenging aspect of the fusion procedure is registering the terrestrial optical images on the LIDAR point clouds. In this article, we propose an approach for registering these two different types of data from different sensor sources. We use iPhone camera images, taken in front of the urban structure of interest by the application user, and high resolution LIDAR point clouds acquired by an airborne laser sensor. After finding the photo capturing position and orientation from the iPhone photograph metafile, we automatically select the area of interest in the point cloud and transform it into a range image which has only grayscale intensity levels according to the distance from the image acquisition position. We benefit from local features for registering the iPhone image to the generated range image. In this article, we have applied the registration process based on local feature extraction and graph matching. Finally, the registration result is used for facade texture mapping on the 3D building surface mesh which is generated from the LIDAR point cloud. Our experimental results indicate possible usage of the proposed algorithm framework for 3D urban map updating and enhancing purposes.

  20. PointCloudExplore 2: Visual exploration of 3D gene expression

    Energy Technology Data Exchange (ETDEWEB)

International Research Training Group Visualization of Large and Unstructured Data Sets, University of Kaiserslautern, Germany; Institute for Data Analysis and Visualization, University of California, Davis, CA; Computational Research Division, Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA; Genomics Division, LBNL; Computer Science Department, University of California, Irvine, CA; Computer Science Division, University of California, Berkeley, CA; Life Sciences Division, LBNL; Department of Molecular and Cellular Biology and the Center for Integrative Genomics, University of California, Berkeley, CA; Ruebel, Oliver; Weber, Gunther H.; Huang, Min-Yu; Bethel, E. Wes; Keranen, Soile V.E.; Fowlkes, Charless C.; Hendriks, Cris L. Luengo; DePace, Angela H.; Simirenko, L.; Eisen, Michael B.; Biggin, Mark D.; Hagen, Hans; Malik, Jitendra; Knowles, David W.; Hamann, Bernd

    2008-03-31

To better understand how developmental regulatory networks are defined in the genome sequence, the Berkeley Drosophila Transcription Network Project (BDTNP) has developed a suite of methods to describe 3D gene expression data, i.e., the output of the network at cellular resolution for multiple time points. To allow researchers to explore these novel data sets we have developed PointCloudXplore (PCX). In PCX we have linked physical and information visualization views via the concept of brushing (cell selection). For each view dedicated operations for performing selection of cells are available. In PCX, all cell selections are stored in a central management system. Cells selected in one view can in this way be highlighted in any view, allowing further cell subset properties to be determined. Complex cell queries can be defined by combining different cell selections using logical operations such as AND, OR, and NOT. Here we provide an overview of PointCloudXplore 2 (PCX2), the latest publicly available version of PCX. PCX2 has shown to be an effective tool for visual exploration of 3D gene expression data. We discuss (i) all views available in PCX2, (ii) different strategies to perform cell selection, (iii) the basic architecture of PCX2, and (iv) illustrate the usefulness of PCX2 using selected examples.

  1. Plane segmentation and decimation of point clouds for 3D environment reconstruction

    NARCIS (Netherlands)

    Ma, L.; Favier, R.J.J.; Do, Q.L.; Bondarev, E.; With, de P.H.N.

    2013-01-01

Three-dimensional (3D) models of environments are a promising technique for serious gaming and professional engineering applications. In this paper, we introduce a fast and memory-efficient system for the reconstruction of large-scale environments based on point clouds. Our main

  2. Supervised Outlier Detection in Large-Scale Mvs Point Clouds for 3d City Modeling Applications

    Science.gov (United States)

    Stucker, C.; Richard, A.; Wegner, J. D.; Schindler, K.

    2018-05-01

We propose to use a discriminative classifier for outlier detection in large-scale point clouds of cities generated via multi-view stereo (MVS) from densely acquired images. What makes outlier removal hard are varying distributions of inliers and outliers across a scene. Heuristic outlier removal using a specific feature that encodes point distribution often delivers unsatisfying results. Although most outliers can be identified correctly (high recall), many inliers are erroneously removed (low precision), too. This aggravates object 3D reconstruction due to missing data. We thus propose to discriminatively learn class-specific distributions directly from the data to achieve high precision. We apply a standard Random Forest classifier that infers a binary label (inlier or outlier) for each 3D point in the raw, unfiltered point cloud and test two approaches for training. In the first, non-semantic approach, features are extracted without considering the semantic interpretation of the 3D points. The trained model approximates the average distribution of inliers and outliers across all semantic classes. Second, semantic interpretation is incorporated into the learning process, i.e. we train separate inlier-outlier classifiers per semantic class (building facades, roof, ground, vegetation, fields, and water). Performance of learned filtering is evaluated on several large SfM point clouds of cities. We find that results confirm our underlying assumption that discriminatively learning inlier-outlier distributions does improve precision over global heuristics by up to ≈ 12 percentage points. Moreover, semantically informed filtering that models class-specific distributions further improves precision by up to ≈ 10 percentage points, being able to remove very isolated building, roof, and water points while preserving inliers on building facades and vegetation.
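A per-point inlier/outlier classifier in the spirit described above can be sketched with scikit-learn's RandomForestClassifier. The three geometric features and their class distributions below are invented for illustration only; the paper's actual feature set and data are not reproduced here:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Hypothetical per-point features: [local density, mean neighbour distance,
# planarity] -- MVS outliers tend to be sparse, isolated and non-planar.
inliers = rng.normal([5.0, 0.2, 0.8], 0.3, size=(500, 3))
outliers = rng.normal([1.0, 1.5, 0.2], 0.3, size=(500, 3))
X = np.vstack([inliers, outliers])
y = np.r_[np.zeros(500), np.ones(500)]      # 0 = inlier, 1 = outlier

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
keep = clf.predict(X) == 0                  # filtered cloud keeps the inliers
```

The semantic variant in the abstract would simply train one such classifier per class (facade, roof, ground, ...) and dispatch each point to the model of its semantic label.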

  3. Time Series UAV Image-Based Point Clouds for Landslide Progression Evaluation Applications.

    Science.gov (United States)

    Al-Rawabdeh, Abdulla; Moussa, Adel; Foroutan, Marzieh; El-Sheimy, Naser; Habib, Ayman

    2017-10-18

Landslides are major and constantly changing threats to urban landscapes and infrastructure. It is essential to detect and capture landslide changes regularly. Traditional methods for monitoring landslides are time-consuming, costly, dangerous, and the quality and quantity of the data are sometimes unable to meet the necessary requirements of geotechnical projects. This motivates the development of more automatic and efficient remote sensing approaches for landslide progression evaluation. Automatic change detection involving low-altitude unmanned aerial vehicle image-based point clouds, although proven, is relatively unexplored, and little research has been done in terms of accounting for volumetric changes. In this study, a methodology for automatically deriving change displacement rates, in a horizontal direction based on comparisons between extracted landslide scarps from multiple time periods, has been developed. Compared with the iterative closest projected point (ICPP) registration method, the developed method takes full advantage of automated geometric measuring, leading to fast processing. The proposed approach easily processes a large number of images from different epochs and enables the creation of registered image-based point clouds without the use of extensive ground control point information or further processing such as interpretation and image correlation. The produced results are promising for use in the field of landslide research.

  4. THE IQMULUS URBAN SHOWCASE: AUTOMATIC TREE CLASSIFICATION AND IDENTIFICATION IN HUGE MOBILE MAPPING POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    J. Böhm

    2016-06-01

Full Text Available Current 3D data capturing, as implemented on, for example, airborne or mobile laser scanning systems, is able to efficiently sample the surface of a city by billions of unselective points during one working day. What is still difficult is to extract and visualize meaningful information hidden in these point clouds with the same efficiency. This is where the FP7 IQmulus project enters the scene. IQmulus is an interactive facility for processing and visualizing big spatial data. In this study the potential of IQmulus is demonstrated on a laser mobile mapping point cloud of 1 billion points sampling ~10 km of street environment in Toulouse, France. After the data is uploaded to the IQmulus Hadoop Distributed File System, a workflow is defined by the user consisting of retiling the data followed by a PCA-driven local dimensionality analysis, which runs efficiently on the IQmulus cloud facility using a Spark implementation. Points scattering in 3 directions are clustered in the tree class, and are next separated into individual trees. Five hours of processing at the 12-node computing cluster results in the automatic identification of 4000+ urban trees. Visualization of the results in the IQmulus fat client helps users to appreciate the results, and developers to identify remaining flaws in the processing workflow.
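The "PCA-driven local dimensionality analysis" that lets points "scattering in 3 directions" be clustered into the tree class is commonly based on the sorted eigenvalues of each neighbourhood's covariance matrix. A generic sketch of those features (not the IQmulus/Spark implementation):

```python
import numpy as np

def dimensionality_features(neighbors):
    """Linearity, planarity and scattering of a local point neighbourhood,
    from the sorted eigenvalues λ1 ≥ λ2 ≥ λ3 of its covariance matrix.
    Points scattering in all 3 directions (high third value) are
    vegetation/tree candidates; wires are linear, facades planar."""
    lam = np.sort(np.linalg.eigvalsh(np.cov(neighbors.T)))[::-1]
    l1, l2, l3 = lam
    return (l1 - l2) / l1, (l2 - l3) / l1, l3 / l1
```

Applied to a line-like, a plane-like, and a volume-filling neighbourhood, the dominant feature switches from linearity to planarity to scattering, which is exactly the cue used to isolate trees.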

  5. Battery operated preconcentration-assisted lateral flow assay.

    Science.gov (United States)

    Kim, Cheonjung; Yoo, Yong Kyoung; Han, Sung Il; Lee, Junwoo; Lee, Dohwan; Lee, Kyungjae; Hwang, Kyo Seon; Lee, Kyu Hyoung; Chung, Seok; Lee, Jeong Hoon

    2017-07-11

    Paper-based analytical devices (e.g. lateral flow assays) are highly advantageous as portable diagnostic systems owing to their low costs and ease of use. Because of their low sensitivity and detection limits for biomolecules, these devices have several limitations in applications for real-field diagnosis. Here, we demonstrate a paper-based preconcentration enhanced lateral flow assay using a commercial β-hCG-based test. Utilizing a simple 9 V battery operation with a low power consumption of approximately 81 μW, we acquire a 25-fold preconcentration factor, demonstrating a clear sensitivity enhancement in the colorimetric lateral flow assay; consequently, clear colors are observed in a rapid kit test line, which cannot be monitored without preconcentration. This device can also facilitate a semi-quantitative platform using the saturation value and/or color intensity in both paper-based colorimetric assays and smartphone-based diagnostics.

  6. Impact of Surface Active Ionic Liquids on the Cloud Points of Nonionic Surfactants and the Formation of Aqueous Micellar Two-Phase Systems.

    Science.gov (United States)

    Vicente, Filipa A; Cardoso, Inês S; Sintra, Tânia E; Lemus, Jesus; Marques, Eduardo F; Ventura, Sónia P M; Coutinho, João A P

    2017-09-21

Aqueous micellar two-phase systems (AMTPS) hold a large potential for cloud point extraction of biomolecules but are yet poorly studied and characterized, with few phase diagrams reported for these systems, hence limiting their use in extraction processes. This work reports a systematic investigation of the effect of different surface-active ionic liquids (SAILs), covering a wide range of molecular properties, upon the clouding behavior of three nonionic Tergitol surfactants. Two different effects of the SAILs on the cloud points and mixed micelle size have been observed: ILs with a more hydrophilic character and lower critical packing parameter (CPP < 1) induce the formation of smaller micelles and concomitantly increase the cloud points; in contrast, ILs with a more hydrophobic character and higher CPP (CPP ≥ 1) induce significant micellar growth and a decrease in the cloud points. The latter effect is particularly interesting and unusual, for it was accepted that cloud point reduction is only induced by inorganic salts. The effects of nonionic surfactant concentration, SAIL concentration, pH, and micelle ζ potential are also studied and rationalized.

  7. From Point Clouds to Building Information Models: 3D Semi-Automatic Reconstruction of Indoors of Existing Buildings

    Directory of Open Access Journals (Sweden)

    Hélène Macher

    2017-10-01

Full Text Available The creation of as-built Building Information Models requires the acquisition of the as-is state of existing buildings. Laser scanners are widely used to achieve this goal since they permit the collection of information about object geometry in the form of point clouds, and provide a large amount of accurate data in a very fast way and with a high level of detail. Unfortunately, the scan-to-BIM (Building Information Model) process currently remains largely a manual process which is time consuming and error-prone. In this paper, a semi-automatic approach is presented for the 3D reconstruction of indoors of existing buildings from point clouds. Several segmentations are performed so that point clouds corresponding to grounds, ceilings and walls are extracted. Based on these point clouds, walls and slabs of buildings are reconstructed and described in the IFC format in order to be integrated into BIM software. The assessment of the approach is proposed using two datasets. The evaluation items are the degree of automation, the transferability of the approach and the geometric quality of results of the 3D reconstruction. Additionally, quality indexes are introduced to inspect the results in order to be able to detect potential errors of reconstruction.

  8. Quantitative evaluation for small surface damage based on iterative difference and triangulation of 3D point cloud

    Science.gov (United States)

    Zhang, Yuyan; Guo, Quanli; Wang, Zhenchun; Yang, Degong

    2018-03-01

    This paper proposes a non-contact, non-destructive evaluation method for the surface damage of high-speed sliding electrical contact rails. The proposed method establishes a model of damage identification and calculation. A laser scanning system is built to obtain the 3D point cloud data of the rail surface. In order to extract the damage region of the rail surface, the 3D point cloud data are processed using iterative difference, nearest neighbours search and a data registration algorithm. The curvature of the point cloud data in the damage region is mapped to RGB color information, which can directly reflect the change trend of the curvature of the point cloud data in the damage region. The extracted damage region is divided into three prism elements by a method of triangulation. The volume and mass of a single element are calculated by the method of geometric segmentation. Finally, the total volume and mass of the damage region are obtained by the principle of superposition. The proposed method is applied to several typical injuries and the results are discussed. The experimental results show that the algorithm can identify damage shapes and calculate damage mass with milligram precision, which are useful for evaluating the damage in a further research stage.

  9. A Lightweight Surface Reconstruction Method for Online 3D Scanning Point Cloud Data Oriented toward 3D Printing

    Directory of Open Access Journals (Sweden)

    Buyun Sheng

    2018-01-01

    Full Text Available The existing surface reconstruction algorithms currently reconstruct large amounts of mesh data. Consequently, many of these algorithms cannot meet the efficiency requirements of real-time data transmission in a web environment. This paper proposes a lightweight surface reconstruction method for online 3D scanned point cloud data oriented toward 3D printing. The proposed online lightweight surface reconstruction algorithm is composed of a point cloud update algorithm (PCU, a rapid iterative closest point algorithm (RICP, and an improved Poisson surface reconstruction algorithm (IPSR. The generated lightweight point cloud data are pretreated using an updating and rapid registration method. The Poisson surface reconstruction is also accomplished by a pretreatment to recompute the point cloud normal vectors; this approach is based on a least squares method, and the postprocessing of the PDE patch generation was based on biharmonic-like fourth-order PDEs, which effectively reduces the amount of reconstructed mesh data and improves the efficiency of the algorithm. This method was verified using an online personalized customization system that was developed with WebGL and oriented toward 3D printing. The experimental results indicate that this method can generate a lightweight 3D scanning mesh rapidly and efficiently in a web environment.

  10. Application of template matching for improving classification of urban railroad point clouds

    NARCIS (Netherlands)

    Arastounia, M.; Oude Elberink, S.J.

    2016-01-01

    This study develops an integrated data-driven and model-driven approach (template matching) that clusters the urban railroad point clouds into three classes of rail track, contact cable, and catenary cable. The employed dataset covers 630 m of the Dutch urban railroad corridors in which there are

  11. Extraction and Simplification of Building Façade Pieces from Mobile Laser Scanner Point Clouds for 3D Street View Services

    Directory of Open Access Journals (Sweden)

    Yan Li

    2016-12-01

Full Text Available Extraction and analysis of building façades are key processes in three-dimensional (3D) building reconstruction and realistic geometrical modeling of the urban environment, which includes many applications, such as smart city management, autonomous navigation through the urban environment, fly-through rendering, 3D street view, virtual tourism, urban mission planning, etc. This paper proposes a building facade piece extraction and simplification algorithm based on morphological filtering with point clouds obtained by a mobile laser scanner (MLS). First, this study presents a point cloud projection algorithm with high-accuracy orientation parameters from the position and orientation system (POS) of the MLS that can convert large volumes of point cloud data to a raster image. Second, this study proposes a feature extraction approach based on morphological filtering with point cloud projection that can obtain building facade features in an image space. Third, this study designs an inverse transformation of point cloud projection to convert building facade features from an image space to a 3D space. A building facade feature with restricted facade plane detection algorithm is implemented to reconstruct façade pieces for street view service. The results of building facade extraction experiments with large volumes of point clouds from MLS show that the proposed approach is suitable for various types of building facade extraction. The geometric accuracy of building façades is 0.66 m in the x direction, 0.64 m in the y direction and 0.55 m in the vertical direction, which is the same level as the spatial resolution (0.5 m) of the point cloud.

  12. A RECOGNITION METHOD FOR AIRPLANE TARGETS USING 3D POINT CLOUD DATA

    Directory of Open Access Journals (Sweden)

    M. Zhou

    2012-07-01

Full Text Available LiDAR is capable of obtaining three-dimensional coordinates of the terrain and targets directly and is widely applied in digital city, emergent disaster mitigation and environment monitoring. Especially because of its ability to penetrate low-density vegetation and canopy, the LiDAR technique has superior advantages in hidden and camouflaged target detection and recognition. Based on the multi-echo data of LiDAR, and combining the invariant moment theory, this paper presents a recognition method for classic airplanes (even hidden targets mainly under the cover of canopy) using KD-Tree segmented point cloud data. The proposed algorithm firstly uses a KD-tree to organize and manage the point cloud data, and makes use of a clustering method to segment objects, and then prior knowledge and invariant recognition moments are utilized to recognise airplanes. The outcomes of this test verified the practicality and feasibility of the method derived in this paper. These could be applied in target measuring and modelling of subsequent data processing.
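The KD-tree organization plus clustering-based object segmentation mentioned above can be illustrated with a simple Euclidean clustering pass over SciPy's cKDTree. This is a generic sketch of the technique, not the paper's code, and the radius is an assumed parameter:

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(points, radius=0.5):
    """Label connected components of a point cloud: points closer than
    `radius` are linked, flood-filled via KD-tree range queries."""
    tree = cKDTree(points)
    labels = np.full(len(points), -1, dtype=int)
    cluster = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue                         # already assigned to a cluster
        stack = [seed]
        labels[seed] = cluster
        while stack:
            i = stack.pop()
            for j in tree.query_ball_point(points[i], radius):
                if labels[j] == -1:
                    labels[j] = cluster
                    stack.append(j)
        cluster += 1
    return labels
```

Each resulting cluster (e.g. one airplane candidate) would then be passed to the invariant-moment recognition stage.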

  13. 3D change detection at street level using mobile laser scanning point clouds and terrestrial images

    Science.gov (United States)

    Qin, Rongjun; Gruen, Armin

    2014-04-01

Automatic change detection and geo-database updating in the urban environment are difficult tasks. There has been much research on detecting changes with satellite and aerial images, but studies have rarely been performed at the street level, which is complex in its 3D geometry. Contemporary geo-databases include 3D street-level objects, which demand frequent data updating. Terrestrial images provide rich texture information for change detection, but change detection with terrestrial images from different epochs sometimes faces problems with illumination changes, perspective distortions and unreliable 3D geometry caused by the lack of performance of automatic image matchers, while mobile laser scanning (MLS) data acquired from different epochs provide accurate 3D geometry for change detection, but are very expensive for periodical acquisition. This paper proposes a new method for change detection at street level by using a combination of MLS point clouds and terrestrial images: the accurate but expensive MLS data acquired from an early epoch serve as the reference, and terrestrial images or photogrammetric images captured from an image-based mobile mapping system (MMS) at a later epoch are used to detect the geometrical changes between different epochs. The method will automatically mark the possible changes in each view, which provides a cost-efficient method for frequent data updating. The methodology is divided into several steps. In the first step, the point clouds are recorded by the MLS system and processed, with data cleaned and classified by semi-automatic means. In the second step, terrestrial images or mobile mapping images at a later epoch are taken and registered to the point cloud, and then point clouds are projected on each image by a weighted window based z-buffering method for view-dependent 2D triangulation. In the next step, stereo pairs of the terrestrial images are rectified and re-projected between each other to check the geometrical

  14. CONTINUOUSLY DEFORMATION MONITORING OF SUBWAY TUNNEL BASED ON TERRESTRIAL POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    Z. Kang

    2012-07-01

Full Text Available The deformation monitoring of subway tunnels is of extraordinary necessity. Therefore, a method for deformation monitoring based on terrestrial point clouds is proposed in this paper. First, the traditional adjacent-station registration is replaced by section-controlled registration, so that the common control points can be used by each station and thus the error accumulation avoided within a section. Afterwards, the central axis of the subway tunnel is determined through the RANSAC (Random Sample Consensus) algorithm and curve fitting. Although with very high resolution, laser points are still discrete and thus the vertical section is computed via the quadric fitting of the vicinity of interest, instead of the fitting of the whole model of a subway tunnel, which is determined by the intersection line rotated about the central axis of the tunnel within a vertical plane. The extraction of the vertical section is then optimized using RANSAC for the purpose of filtering out noise. Based on the extracted vertical sections, the volume of tunnel deformation is estimated by the comparison between vertical sections extracted at the same position from different epochs of point clouds. Furthermore, the continuously extracted vertical sections are deployed to evaluate the convergent tendency of the tunnel. The proposed algorithms are verified using real datasets in terms of accuracy and computation efficiency. The experimental result of the fitting accuracy analysis shows the maximum deviation between interpolated point and real point is 1.5 mm, and the minimum one is 0.1 mm; the convergent tendency of the tunnel was detected by the comparison of adjacent fitting radii. The maximum error is 6 mm, while the minimum one is 1 mm. The computation cost of vertical section extraction is within 3 seconds/section, which proves high efficiency.
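The RANSAC-based fitting of a circular tunnel cross-section can be sketched in 2D: repeatedly sample three points, construct the circle through them, and keep the model supported by the most inliers. Tolerances and data below are illustrative, not the paper's configuration:

```python
import numpy as np

def circle_through(p1, p2, p3):
    """Center and radius of the circle through three 2D points (None if collinear)."""
    A = 2.0 * np.array([p2 - p1, p3 - p1])
    b = np.array([p2 @ p2 - p1 @ p1, p3 @ p3 - p1 @ p1])
    if abs(np.linalg.det(A)) < 1e-12:
        return None
    c = np.linalg.solve(A, b)
    return c, np.linalg.norm(p1 - c)

def ransac_circle(pts, tol=0.02, iters=300, seed=0):
    """Return the (center, radius) supported by the most inliers."""
    rng = np.random.default_rng(seed)
    best, best_count = None, 0
    for _ in range(iters):
        fit = circle_through(*pts[rng.choice(len(pts), 3, replace=False)])
        if fit is None:
            continue
        c, r = fit
        count = np.sum(np.abs(np.linalg.norm(pts - c, axis=1) - r) < tol)
        if count > best_count:
            best, best_count = (c, r), count
    return best

# Synthetic tunnel section: circle of radius 2.75 m plus 10% stray returns
rng = np.random.default_rng(1)
ang = rng.uniform(0.0, 2.0 * np.pi, 200)
section = np.c_[1.0 + 2.75 * np.cos(ang), 2.0 + 2.75 * np.sin(ang)]
section += rng.normal(0.0, 0.005, section.shape)
section = np.vstack([section, rng.uniform(-2.0, 6.0, size=(20, 2))])
c, r = ransac_circle(section)
```

The fitted radius per section, compared across epochs, is exactly the quantity the convergence analysis above tracks at millimetre level.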

  15. Simple computation of reaction–diffusion processes on point clouds

    KAUST Repository

    Macdonald, Colin B.

    2013-05-20

    The study of reaction-diffusion processes is much more complicated on general curved surfaces than on standard Cartesian coordinate spaces. Here we show how to formulate and solve systems of reaction-diffusion equations on surfaces in an extremely simple way, using only the standard Cartesian form of differential operators, and a discrete unorganized point set to represent the surface. Our method decouples surface geometry from the underlying differential operators. As a consequence, it becomes possible to formulate and solve rather general reaction-diffusion equations on general surfaces without having to consider the complexities of differential geometry or sophisticated numerical analysis. To illustrate the generality of the method, computations for surface diffusion, pattern formation, excitable media, and bulk-surface coupling are provided for a variety of complex point cloud surfaces.

  16. Simple computation of reaction–diffusion processes on point clouds

    KAUST Repository

    Macdonald, Colin B.; Merriman, Barry; Ruuth, Steven J.

    2013-01-01

    The study of reaction-diffusion processes is much more complicated on general curved surfaces than on standard Cartesian coordinate spaces. Here we show how to formulate and solve systems of reaction-diffusion equations on surfaces in an extremely simple way, using only the standard Cartesian form of differential operators, and a discrete unorganized point set to represent the surface. Our method decouples surface geometry from the underlying differential operators. As a consequence, it becomes possible to formulate and solve rather general reaction-diffusion equations on general surfaces without having to consider the complexities of differential geometry or sophisticated numerical analysis. To illustrate the generality of the method, computations for surface diffusion, pattern formation, excitable media, and bulk-surface coupling are provided for a variety of complex point cloud surfaces.

  17. Scan-To Output Validation: Towards a Standardized Geometric Quality Assessment of Building Information Models Based on Point Clouds

    Science.gov (United States)

    Bonduel, M.; Bassier, M.; Vergauwen, M.; Pauwels, P.; Klein, R.

    2017-11-01

    The use of Building Information Modeling (BIM) for existing buildings based on point clouds is increasing. Standardized geometric quality assessment of the BIMs is needed to make them more reliable and thus reusable for future users. First, available literature on the subject is studied. Next, an initial proposal for a standardized geometric quality assessment is presented. Finally, this method is tested and evaluated with a case study. The number of specifications on BIM relating to existing buildings is limited. The Levels of Accuracy (LOA) specification of the USIBD provides definitions and suggestions regarding geometric model accuracy, but lacks a standardized assessment method. A deviation analysis is found to be dependent on (1) the used mathematical model, (2) the density of the point clouds and (3) the order of comparison. Results of the analysis can be graphical and numerical. An analysis on macro (building) and micro (BIM object) scale is necessary. On macro scale, the complete model is compared to the original point cloud and vice versa to get an overview of the general model quality. The graphical results show occluded zones and non-modeled objects respectively. Colored point clouds are derived from this analysis and integrated in the BIM. On micro scale, the relevant surface parts are extracted per BIM object and compared to the complete point cloud. Occluded zones are extracted based on a maximum deviation. What remains is classified according to the LOA specification. The numerical results are integrated in the BIM with the use of object parameters.

  18. A continuous surface reconstruction method on point cloud captured from a 3D surface photogrammetry system

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Wenyang [Department of Bioengineering, University of California, Los Angeles, California 90095 (United States); Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J.; Sawant, Amit [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas 75390 (United States); Ruan, Dan, E-mail: druan@mednet.ucla.edu [Department of Bioengineering, University of California, Los Angeles, California 90095 and Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)

    2015-11-15

    Purpose: To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). Methods: The authors have developed a level-set based surface reconstruction method on point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. In contrast to explicit/discrete meshing-type schemes, their continuous representation is particularly advantageous for subsequent surface registration and motion tracking, as it eliminates the need to maintain explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiments, the authors generated a series of surfaces, each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched areas had different degrees of noise and missing data, since VisionRT has difficulty detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated the reconstructed surfaces by comparing them against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiments, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground truth, the authors qualitatively validated the reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. Results: On phantom point clouds, their method

  19. A continuous surface reconstruction method on point cloud captured from a 3D surface photogrammetry system.

    Science.gov (United States)

    Liu, Wenyang; Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J; Sawant, Amit; Ruan, Dan

    2015-11-01

    To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). The authors have developed a level-set based surface reconstruction method on point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. In contrast to explicit/discrete meshing-type schemes, their continuous representation is particularly advantageous for subsequent surface registration and motion tracking, as it eliminates the need to maintain explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiments, the authors generated a series of surfaces, each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched areas had different degrees of noise and missing data, since VisionRT has difficulty detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated the reconstructed surfaces by comparing them against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiments, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground truth, the authors qualitatively validated the reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. On phantom point clouds, their method achieved submillimeter

  20. Fast and Robust Segmentation and Classification for Change Detection in Urban Point Clouds

    Science.gov (United States)

    Roynard, X.; Deschaud, J.-E.; Goulette, F.

    2016-06-01

    Change detection is an important issue in city monitoring to analyse street furniture, road works, car parking, etc. For example, parking surveys are needed but are currently a laborious task involving sending operators into the streets to identify the changes in car locations. In this paper, we propose a method that performs a fast and robust segmentation and classification of urban point clouds and that can be used for change detection. We apply this method to detect cars, as a particular object class, in order to perform parking surveys automatically. A recently proposed method already addresses the need for fast segmentation and classification of urban point clouds, using elevation images. The advantage of working on images is that processing is much faster, and image-processing techniques are proven and robust. However, there may be a loss of information in complex 3D cases: for example, when objects are one above the other, typically a car under a tree or a pedestrian under a balcony. In this paper we propose a method that retains the three-dimensional information while preserving fast computation times and improving segmentation and classification accuracy. It is based on fast region growing using an octree for the segmentation, and on specific descriptors with a Random Forest for the classification. Experiments have been performed on large urban point clouds acquired by Mobile Laser Scanning. They show that the method is as fast as the state of the art and that it gives more robust results in the complex 3D cases.
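    A minimal sketch of the segmentation idea, assuming a flat O(n²) neighbor search in place of the paper's octree acceleration; the clustering radius and toy data are invented:

```python
import numpy as np

def euclidean_clusters(pts, radius=0.5):
    """Greedy single-linkage clustering: grow a cluster by repeatedly
    absorbing unvisited points within `radius` of the frontier. A flat
    stand-in for octree-accelerated region growing."""
    n = len(pts)
    labels = -np.ones(n, dtype=int)
    cur = 0
    for seed in range(n):
        if labels[seed] >= 0:
            continue
        frontier = [seed]
        labels[seed] = cur
        while frontier:
            i = frontier.pop()
            d = np.linalg.norm(pts - pts[i], axis=1)
            for j in np.flatnonzero((d < radius) & (labels < 0)):
                labels[j] = cur
                frontier.append(int(j))
        cur += 1
    return labels

# two well-separated object blobs (e.g. two parked cars)
rng = np.random.default_rng(3)
blob_a = rng.normal(0.0, 0.1, (40, 3))
blob_b = rng.normal(5.0, 0.1, (40, 3))
labels = euclidean_clusters(np.vstack([blob_a, blob_b]), radius=1.0)
```

In the real pipeline each resulting segment would then be described and fed to the Random Forest classifier.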

  1. Outcrop-scale fracture trace identification using surface roughness derived from a high-density point cloud

    Science.gov (United States)

    Okyay, U.; Glennie, C. L.; Khan, S.

    2017-12-01

    Owing to the advent of terrestrial laser scanners (TLS), high-density point cloud data have become increasingly available to the geoscience research community. Research groups have started producing their own point clouds for various applications, gradually shifting their emphasis from obtaining the data toward extracting more meaningful information from the point clouds. Extracting fracture properties from three-dimensional data in a (semi-)automated manner has been an active area of research in the geosciences. Several studies have developed processing algorithms for extracting only planar surfaces. In comparison, (semi-)automated identification of fracture traces at the outcrop scale, which could be used for mapping fracture distributions, has been investigated less frequently. Understanding the spatial distribution and configuration of natural fractures is of particular importance, as they directly influence fluid flow through the host rock. Surface roughness, typically defined as the deviation of a natural surface from a reference datum, has become an important metric in geoscience research, especially with the increasing density and accuracy of point clouds. In the study presented herein, a surface roughness model was employed to identify fracture traces and their distribution on an ophiolite outcrop in Oman. Surface roughness calculations were performed using orthogonal distance regression over various grid intervals. The results demonstrated that surface roughness can identify outcrop-scale fracture traces from which fracture distribution and density maps can be generated. However, given the outcrop conditions and properties and the purpose of the application, the definition of an adequate grid interval for the surface roughness model and the selection of threshold values for the distribution maps are not straightforward and require user intervention and interpretation.
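    Roughness relative to a best-fit orthogonal-distance-regression plane can be sketched per grid cell as the RMS orthogonal residual, i.e. the square root of the smallest eigenvalue of the patch covariance. The patch sizes and the 5 cm offset mimicking a fracture trace are invented:

```python
import numpy as np

def roughness(patch):
    """RMS orthogonal distance of a point patch to its best-fit
    (total least squares) plane: sqrt of the smallest eigenvalue of
    the centered covariance matrix."""
    c = patch - patch.mean(axis=0)
    cov = c.T @ c / len(patch)
    return float(np.sqrt(np.linalg.eigvalsh(cov)[0]))  # eigvalsh: ascending

rng = np.random.default_rng(2)
xy = rng.uniform(0, 1, (500, 2))
flat = np.c_[xy, 0.001 * rng.standard_normal(500)]     # smooth outcrop face
step = flat.copy()
step[:, 2] += 0.05 * (step[:, 0] > 0.5)                # trace-like offset
r_flat, r_step = roughness(flat), roughness(step)
```

Cells whose roughness exceeds a chosen threshold would be flagged as fracture-trace candidates; as the abstract notes, both the grid interval and that threshold need user interpretation.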

  2. An electrodynamic preconcentrator integrated thermoelectric biosensor chip for continuous monitoring of biochemical process

    International Nuclear Information System (INIS)

    Choi, Yong-Hwan; Kim, Min-gu; Kang, Dong-Hyun; Sim, Jaesam; Kim, Jongbaeg; Kim, Yong-Jun

    2012-01-01

    This paper proposes an integrated sensor chip for continuous monitoring of a biochemical process. It is composed of a preconcentrator and a thermoelectric biosensor. In the preconcentrator, the concentration of the injected biochemical sample is electrodynamically condensed. Then, in the downstream thermoelectric biosensor, the preconcentrated target molecules react with sequentially injected capture molecules and generate reaction heat. The reaction heat is detected through the thermoelectric effect, and an integrated split-flow microchannel improves sensor stability by providing the ability to self-compensate for thermal noise. These sequential preconcentration and detection processes are performed under completely label-free and continuous conditions and consequently enhance the sensor sensitivity. The performance of the integrated biosensor chip was evaluated at various flow rates and applied voltages. First, in order to verify the characteristics of the fabricated preconcentrator, 10 µm-diameter polystyrene (PS) particles were used. The particles were concentrated by applying an ac voltage from 0 to 16 Vpp at 3 MHz at various flow rates. Approximately 92.8% concentration efficiency was achieved at voltages over 16 Vpp and flow rates below 100 µl h−1. The downstream thermoelectric biosensor was characterized by measuring the reaction heat of the biotin–streptavidin interaction. The preconcentrated streptavidin-coated PS particles flow into the reaction chamber and react with titrated biotin. The measured output voltage was 288.2 µV at a flow rate of 100 µl h−1 without preconcentration. However, with the proposed preconcentrator and 16 Vpp preconcentration applied, an output voltage of 812.3 µV was achieved for the same sample and flow rate. According to these results, the proposed label-free biomolecular preconcentration and detection technique can be applied in continuous and high-throughput biochemical applications.

  3. Interactive Classification of Construction Materials: Feedback Driven Framework for Annotation and Analysis of 3d Point Clouds

    Science.gov (United States)

    Hess, M. R.; Petrovic, V.; Kuester, F.

    2017-08-01

    Digital documentation of cultural heritage structures is increasingly common through the application of different imaging techniques. Many works have focused on the application of laser scanning and photogrammetry techniques for the acquisition of three-dimensional (3D) geometry detailing cultural heritage sites and structures. With an abundance of these 3D data assets, there must be a digital environment where these data can be visualized and analyzed. Presented here is a feedback-driven visualization framework that seamlessly enables interactive exploration and manipulation of massive point cloud data. The focus of this work is on the classification of different building materials with the goal of building more accurate as-built information models of historical structures. User-defined functions have been tested within the interactive point cloud visualization framework to evaluate automated and semi-automated classification of 3D point data. These functions include decisions based on observed color, laser intensity, normal vector or local surface geometry. Multiple case studies are presented here to demonstrate the flexibility and utility of the presented point cloud visualization framework to achieve classification objectives.

  4. Development of a simple, sensitive and inexpensive ion-pairing cloud point extraction approach for the determination of trace inorganic arsenic species in spring water, beverage and rice samples by UV-Vis spectrophotometry.

    Science.gov (United States)

    Gürkan, Ramazan; Kır, Ufuk; Altunay, Nail

    2015-08-01

    The determination of inorganic arsenic species in water, beverages and foods has become crucial in recent years, because arsenic species are considered carcinogenic and can be found at high concentrations in these samples. This communication describes a new cloud-point extraction (CPE) method for the determination, by UV-Visible spectrophotometry (UV-Vis), of low quantities of arsenic species in samples purchased from the local market. The method is based on the selective ternary complex of As(V) with acridine orange (AOH(+)), a versatile cationic fluorescent dye, in the presence of tartaric acid and polyethylene glycol tert-octylphenyl ether (Triton X-114) at pH 5.0. Under the optimized conditions, a preconcentration factor of 65 and a detection limit (3S blank/m) of 1.14 μg L(-1) were obtained from the calibration curve constructed in the range of 4-450 μg L(-1), with a correlation coefficient of 0.9932 for As(V). The method was validated by the analysis of certified reference materials (CRMs). Copyright © 2015 Elsevier Ltd. All rights reserved.
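    The detection limit quoted above follows the common 3·s(blank)/m criterion: three times the standard deviation of replicate blank readings divided by the calibration slope. A sketch with invented blank absorbances and an invented slope (not the paper's data):

```python
import statistics

# hypothetical replicate blank absorbances and calibration slope
blanks = [0.0102, 0.0098, 0.0101, 0.0099, 0.0100, 0.0103, 0.0097]
slope = 0.00026                 # hypothetical, absorbance per µg L^-1

s_blank = statistics.stdev(blanks)   # sample standard deviation
lod = 3 * s_blank / slope            # detection limit in µg L^-1
```

With these invented numbers the limit of detection comes out around 2.5 µg L⁻¹; the paper's 1.14 µg L⁻¹ reflects its own blank noise and slope.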

  5. DERIVING 3D POINT CLOUDS FROM TERRESTRIAL PHOTOGRAPHS - COMPARISON OF DIFFERENT SENSORS AND SOFTWARE

    Directory of Open Access Journals (Sweden)

    R. Niederheiser

    2016-06-01

    Full Text Available Terrestrial photogrammetry nowadays offers a reasonably cheap, intuitive and effective approach to 3D modelling. However, the important choice of which sensor and which software to use is not straightforward and needs consideration, as the choice will affect the resulting 3D point cloud and its derivatives. We compare five different sensors as well as four different state-of-the-art software packages for a single application, the modelling of a vegetated rock face. The five sensors represent different resolutions, sensor sizes and price segments of cameras. The software packages used are: (1) Agisoft PhotoScan Pro (1.16), (2) Pix4D (2.0.89), (3) a combination of Visual SFM (V0.5.22) and SURE (1.2.0.286), and (4) MicMac (1.0). We took photos of a vegetated rock face from identical positions with all sensors. Then we compared the results of the different software packages regarding the ease of the workflow, visual appeal, similarity and quality of the point cloud. While PhotoScan and Pix4D offer the most user-friendly workflows, they are also “black-box” programmes giving little insight into their processing. Unsatisfying results may only be changed by modifying settings within a module. The combined workflow of Visual SFM, SURE and CloudCompare is just as simple but requires more user interaction. MicMac turned out to be the most challenging software, as it is less user-friendly. However, MicMac offers the most possibilities to influence the processing workflow. The resulting point clouds of PhotoScan and MicMac are the most appealing.

  6. Bilevel Optimization for Scene Segmentation of LiDAR Point Cloud

    Directory of Open Access Journals (Sweden)

    LI Minglei

    2018-02-01

    Full Text Available The segmentation of point clouds obtained by light detection and ranging (LiDAR) systems is a critical step for many tasks, such as data organization, reconstruction and information extraction. In this paper, we propose a bilevel progressive optimization algorithm based on local differentiability. First, we define the topological relation and distance metric of points in the framework of Riemannian geometry; at the point-based level, a k-means method generates over-segmentation results, e.g. supervoxels. These voxels are then formulated as nodes which constitute a minimal spanning tree. High-level features are extracted from the voxel structures, and a graph-based optimization method is designed to yield the final adaptive segmentation results. Experiments on real data demonstrate that our method is efficient and superior to state-of-the-art methods.
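    The point-level over-segmentation step can be sketched with plain Lloyd k-means on 3D coordinates; note the paper's Riemannian distance metric is replaced here by the Euclidean one, so this is only an illustration of the clustering stage:

```python
import numpy as np

def kmeans(pts, k, n_iter=20, rng=None):
    """Plain Lloyd k-means on 3D coordinates: assign each point to its
    nearest center, then move each center to the mean of its points."""
    if rng is None:
        rng = np.random.default_rng(0)
    centers = pts[rng.choice(len(pts), k, replace=False)].astype(float)
    for _ in range(n_iter):
        d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pts[labels == j].mean(axis=0)
    return labels, centers

# two compact, well-separated point groups -> two "supervoxels"
rng = np.random.default_rng(7)
pts = np.vstack([rng.normal(0.0, 0.1, (50, 3)),
                 rng.normal(5.0, 0.1, (50, 3))])
labels, centers = kmeans(pts, 2)
```

In the full pipeline each such cluster would become a node of the minimal spanning tree used by the graph-based optimization.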

  7. Automated estimation of leaf distribution for individual trees based on TLS point clouds

    Science.gov (United States)

    Koma, Zsófia; Rutzinger, Martin; Bremer, Magnus

    2017-04-01

    Light Detection and Ranging (LiDAR), especially ground-based LiDAR (Terrestrial Laser Scanning, TLS), is an operationally used and widely available measurement tool supporting forest inventory updating and research in forest ecology. High-resolution point clouds from TLS already represent single leaves, which can be used for more precise estimation of Leaf Area Index (LAI) and for more accurate biomass estimation. However, a methodology for extracting single leaves from the unclassified point clouds of individual trees is still missing. The aim of this study is to present a novel segmentation approach to extract single leaves and derive features related to leaf morphology (such as area, slope, length and width) of each single leaf from TLS point cloud data. For the study, two exemplary single trees were scanned in leaf-on condition on the university campus of Innsbruck during calm wind conditions. A northern red oak (Quercus rubra) was scanned by a discrete-return Optech ILRIS-3D TLS scanner and a tulip tree (Liriodendron tulipifera) with a Riegl VZ-6000 scanner. During the scanning campaign a reference dataset was measured in parallel to scanning: 230 leaves were randomly collected around the lower branches of the tree and photos were taken. The developed workflow steps were the following: in the first step, normal vectors and eigenvalues were calculated based on a user-specified neighborhood. Then, using the direction of the largest eigenvalue, outliers (i.e. ghost points) were removed. After that, region-growing segmentation based on curvature and angles between normal vectors was applied to the filtered point cloud. To each segment a RANSAC plane-fitting algorithm was applied in order to extract segment-based normal vectors. Using the related features of the calculated segments, the stem and branches were labeled as non-leaf and the other segments were classified as leaf. The validation of the different segmentation

  8. Automatic extraction of pavement markings on streets from point cloud data of mobile LiDAR

    International Nuclear Information System (INIS)

    Gao, Yang; Zhong, Ruofei; Liu, Xianlin; Tang, Tao; Wang, Liuzhao

    2017-01-01

    Pavement markings provide an important foundation for keeping road users safe. Accurate and comprehensive information about pavement markings assists road regulators and is useful in developing driverless technology. Mobile light detection and ranging (LiDAR) systems offer new opportunities to collect and process accurate pavement marking information. Mobile LiDAR systems can directly obtain the three-dimensional (3D) coordinates of an object, thus defining spatial data and the intensity of 3D objects in a fast and efficient way. The RGB attribute information of data points can be obtained from the panoramic camera in the system. In this paper, we present a novel method to automatically extract pavement markings using multiple attributes of the laser scanning point cloud from mobile LiDAR data. The method uses the differential grayscale of RGB color, the laser pulse reflection intensity, and the differential intensity to identify and extract pavement markings. We utilized point cloud density to remove noise and used morphological operations to eliminate errors. In the application, we tested our method on different sections of roads in Beijing, China, and Buffalo, NY, USA. The results indicated that both correctness (p) and completeness (r) were higher than 90%. The method can be applied to extract pavement markings from the huge point cloud data produced by mobile LiDAR.
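    A toy version of the intensity-based extraction, assuming a single known ground height: painted markings are far more retroreflective than asphalt, so a height gate plus an intensity threshold recovers them. All thresholds and intensity values below are invented, and the morphological clean-up of the real pipeline is omitted:

```python
import numpy as np

def extract_markings(points, intensity, ground_z=0.0, z_tol=0.05, i_thresh=0.6):
    """Keep near-ground points whose (normalized) reflectance exceeds a
    threshold: a height gate rejects bright off-road objects, and the
    intensity gate separates paint from asphalt."""
    on_ground = np.abs(points[:, 2] - ground_z) < z_tol
    bright = intensity > i_thresh
    return points[on_ground & bright]

rng = np.random.default_rng(4)
n = 300
road = np.c_[rng.uniform(0, 10, n), rng.uniform(0, 3, n),
             0.002 * rng.standard_normal(n)]
inten = np.full(n, 0.2)          # dull asphalt returns
inten[:40] = 0.9                 # painted lane-line returns
sign = np.array([[5.0, 1.5, 2.2]])   # bright but elevated: must be rejected
marks = extract_markings(np.vstack([road, sign]), np.append(inten, 0.95))
```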

  9. Automatic extraction of pavement markings on streets from point cloud data of mobile LiDAR

    Science.gov (United States)

    Gao, Yang; Zhong, Ruofei; Tang, Tao; Wang, Liuzhao; Liu, Xianlin

    2017-08-01

    Pavement markings provide an important foundation for keeping road users safe. Accurate and comprehensive information about pavement markings assists road regulators and is useful in developing driverless technology. Mobile light detection and ranging (LiDAR) systems offer new opportunities to collect and process accurate pavement marking information. Mobile LiDAR systems can directly obtain the three-dimensional (3D) coordinates of an object, thus defining spatial data and the intensity of 3D objects in a fast and efficient way. The RGB attribute information of data points can be obtained from the panoramic camera in the system. In this paper, we present a novel method to automatically extract pavement markings using multiple attributes of the laser scanning point cloud from mobile LiDAR data. The method uses the differential grayscale of RGB color, the laser pulse reflection intensity, and the differential intensity to identify and extract pavement markings. We utilized point cloud density to remove noise and used morphological operations to eliminate errors. In the application, we tested our method on different sections of roads in Beijing, China, and Buffalo, NY, USA. The results indicated that both correctness (p) and completeness (r) were higher than 90%. The method can be applied to extract pavement markings from the huge point cloud data produced by mobile LiDAR.

  10. Estimation of cylinder orientation in three-dimensional point cloud using angular distance-based optimization

    Science.gov (United States)

    Su, Yun-Ting; Hu, Shuowen; Bethel, James S.

    2017-05-01

    Light detection and ranging (LIDAR) has become a widely used tool in remote sensing for mapping, surveying, modeling, and a host of other applications. The motivation behind this work is the modeling of piping systems in industrial sites, where cylinders are the most common primitive or shape. We focus on cylinder parameter estimation in three-dimensional point clouds, proposing a mathematical formulation based on angular distance to determine the cylinder orientation. We demonstrate the accuracy and robustness of the technique on synthetically generated cylinder point clouds (where the true axis orientation is known) as well as on real LIDAR data of piping systems. The proposed algorithm is compared with a discrete space Hough transform-based approach as well as a continuous space inlier approach, which iteratively discards outlier points to refine the cylinder parameter estimates. Results show that the proposed method is more computationally efficient than the Hough transform approach and is more accurate than both the Hough transform approach and the inlier method.
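    One classical alternative to the paper's angular-distance formulation (shown only for intuition, not as the paper's algorithm) exploits the fact that every surface normal of a cylinder is perpendicular to its axis, so the axis is the smallest-eigenvalue eigenvector of the normals' scatter matrix:

```python
import numpy as np

def cylinder_axis(normals):
    """Estimate a cylinder axis from surface normals: the normals all
    lie in the plane perpendicular to the axis, so the axis is the
    eigenvector of N^T N with the smallest eigenvalue."""
    cov = normals.T @ normals
    w, v = np.linalg.eigh(cov)     # eigenvalues ascending, vectors in columns
    return v[:, 0]

# synthetic cylinder along z: normals lie in the xy-plane
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
n = np.c_[np.cos(t), np.sin(t), np.zeros_like(t)]
axis = cylinder_axis(n)
```

With noisy real LIDAR normals this estimate would typically seed a refinement such as the angular-distance optimization described in the abstract.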

  11. Displacement fields from point cloud data: Application of particle imaging velocimetry to landslide geodesy

    Science.gov (United States)

    Aryal, Arjun; Brooks, Benjamin A.; Reid, Mark E.; Bawden, Gerald W.; Pawlak, Geno

    2012-01-01

    Acquiring spatially continuous ground-surface displacement fields from Terrestrial Laser Scanners (TLS) will allow better understanding of the physical processes governing landslide motion at detailed spatial and temporal scales. Problems arise, however, when estimating continuous displacement fields from TLS point-clouds because reflecting points from sequential scans of moving ground are not defined uniquely, thus repeat TLS surveys typically do not track individual reflectors. Here, we implemented the cross-correlation-based Particle Image Velocimetry (PIV) method to derive a surface deformation field using TLS point-cloud data. We estimated associated errors using the shape of the cross-correlation function and tested the method's performance with synthetic displacements applied to a TLS point cloud. We applied the method to the toe of the episodically active Cleveland Corral Landslide in northern California using TLS data acquired in June 2005–January 2007 and January–May 2010. Estimated displacements ranged from decimeters to several meters and they agreed well with independent measurements at better than 9% root mean squared (RMS) error. For each of the time periods, the method provided a smooth, nearly continuous displacement field that coincides with independently mapped boundaries of the slide and permits further kinematic and mechanical inference. For the 2010 data set, for instance, the PIV-derived displacement field identified a diffuse zone of displacement that preceded by over a month the development of a new lateral shear zone. Additionally, the upslope and downslope displacement gradients delineated by the dense PIV field elucidated the non-rigid behavior of the slide.
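    The core PIV step, cross-correlating two windows rasterized from sequential scans, can be sketched with an FFT; real PIV adds interrogation-window overlap and sub-pixel (e.g. Gaussian) peak fitting on top of this integer-pixel estimate:

```python
import numpy as np

def shift_by_xcorr(a, b):
    """Integer-pixel displacement of image b relative to image a from
    the peak of their circular FFT cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a))).real
    iy, ix = np.unravel_index(int(np.argmax(corr)), corr.shape)
    ny, nx = corr.shape
    dy = iy - ny if iy > ny // 2 else iy   # wrap into signed range
    dx = ix - nx if ix > nx // 2 else ix
    return dy, dx

# a small blob, then the same blob displaced by (+3, -2) pixels
a = np.zeros((32, 32))
a[8:12, 10:14] = 1.0
b = np.roll(np.roll(a, 3, axis=0), -2, axis=1)
dy, dx = shift_by_xcorr(a, b)
```

The sharpness of the correlation peak is what the abstract uses to estimate the error of each displacement vector.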

  12. Automatic extraction of discontinuity orientation from rock mass surface 3D point cloud

    Science.gov (United States)

    Chen, Jianqin; Zhu, Hehua; Li, Xiaojun

    2016-10-01

    This paper presents a new method for automatically extracting discontinuity orientation from a rock mass surface 3D point cloud. The proposed method consists of four steps: (1) automatic grouping of discontinuity sets using an improved K-means clustering method, (2) discontinuity segmentation and optimization, (3) discontinuity plane fitting using the Random Sample Consensus (RANSAC) method, and (4) coordinate transformation of the discontinuity plane. The method is first validated on the point cloud of a small piece of a rock slope acquired by photogrammetry; the extracted discontinuity orientations are compared with ones measured in the field. It is then applied to publicly available LiDAR data of a road-cut rock slope at the Rockbench repository, and the extracted discontinuity orientations are compared with those of the method proposed by Riquelme et al. (2014). The results show that the presented method is reliable, of high accuracy, and can meet engineering needs.
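    Step (3), RANSAC plane fitting, can be sketched in a few lines: repeatedly build a plane from 3 random points, count inliers within a tolerance, and keep the best candidate. The tolerance and synthetic data below are invented:

```python
import numpy as np

def ransac_plane(pts, n_iter=200, tol=0.01, rng=None):
    """Minimal RANSAC plane fit: returns the inlier mask of the best
    candidate plane found over n_iter random 3-point samples."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_inl, best = None, -1
    for _ in range(n_iter):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        nn = np.linalg.norm(n)
        if nn < 1e-12:                  # degenerate (collinear) sample
            continue
        d = np.abs((pts - p0) @ (n / nn))
        inl = d < tol
        if int(inl.sum()) > best:
            best, best_inl = int(inl.sum()), inl
    return best_inl

# 150 points on a noisy plane z ~ 0 plus 50 off-plane outliers
rng = np.random.default_rng(5)
plane = np.c_[rng.uniform(0, 1, (150, 2)), 0.002 * rng.standard_normal(150)]
junk = np.c_[rng.uniform(0, 1, (50, 2)), rng.uniform(0.1, 1.0, 50)]
inliers = ransac_plane(np.vstack([plane, junk]), tol=0.01, rng=rng)
```

The plane normal of the winning sample (or a least-squares refit on its inliers) then gives the discontinuity orientation.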

  13. Augmented reality system using lidar point cloud data for displaying dimensional information of objects on mobile phones

    Science.gov (United States)

    Gupta, S.; Lohani, B.

    2014-05-01

    Mobile augmented reality is the next-generation technology for visualising the 3D real world intelligently. The technology is expanding at a fast pace, upgrading the status of a smart phone to an intelligent device. The research problem identified and presented in the current work is to view the actual dimensions of objects that are captured by a smart phone in real time. The proposed methodology first establishes correspondence between a LiDAR point cloud, stored on a server, and the image that is captured by a mobile. This correspondence is established using the exterior and interior orientation parameters of the mobile camera and the coordinates of the LiDAR data points which lie in the viewshed of the mobile camera. A pseudo-intensity image is generated using the LiDAR points and their intensity. The mobile image and the pseudo-intensity image are then registered using the SIFT image registration method, thereby creating a pipeline to locate the point in the point cloud corresponding to a point (pixel) on the mobile image. The second part of the method uses the point cloud data to compute dimensional information for pairs of points selected on the mobile image and displays the dimensions on top of the image. This paper describes all steps of the proposed method. The paper uses an experimental setup to mimic the mobile phone and server system and presents some initial but encouraging results.
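    The correspondence step, deciding where LiDAR points fall in the camera's view, reduces to pinhole projection with the exterior (R, t) and interior (f, cx, cy) orientation parameters; the dimension overlay for a picked pair of pixels is then just a 3D distance. All parameter values below are illustrative:

```python
import numpy as np

def project(points_world, R, t, f, cx, cy):
    """Pinhole projection: X_cam = R (X_world - t), then perspective
    division and application of the intrinsics (f, cx, cy)."""
    pc = (points_world - t) @ R.T
    u = f * pc[:, 0] / pc[:, 2] + cx
    v = f * pc[:, 1] / pc[:, 2] + cy
    return np.c_[u, v], pc[:, 2]         # pixel coords and depths

# illustrative camera at the origin looking down +z (identity rotation)
pts = np.array([[0.0, 0.0, 2.0], [0.5, -0.25, 2.0]])
uv, depth = project(pts, np.eye(3), np.zeros(3), f=1000.0, cx=320.0, cy=240.0)
# the "dimension" fetched for this pair is the 3D distance between them
dim = float(np.linalg.norm(pts[0] - pts[1]))
```

Points with negative depth, or with (u, v) outside the image, would be discarded as outside the viewshed.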

  14. Roadside Multiple Objects Extraction from Mobile Laser Scanning Point Cloud Based on DBN

    Directory of Open Access Journals (Sweden)

    LUO Haifeng

    2018-02-01

    Full Text Available This paper proposes a novel algorithm exploring deep belief network (DBN) architectures to extract and recognize roadside facilities (trees, cars and traffic poles) from mobile laser scanning (MLS) point clouds. The proposed method first partitions the raw MLS point cloud into blocks and then removes the ground and building points. In order to partition the off-ground objects into individual objects, the off-ground points are organized into an octree structure and clustered into candidate objects based on connected components. To improve segmentation performance on clusters containing overlapped objects, a refining step using a voxel-based normalized cut is then applied. In addition, a multi-view feature descriptor is generated for each independent roadside facility based on binary images. Finally, a deep belief network (DBN) is trained to extract tree, car and traffic-pole objects. Experiments were undertaken to evaluate the validity of the proposed method with two datasets acquired by a Lynx Mobile Mapper System. The precision of the tree, car and traffic-pole extraction results was 97.31%, 97.79% and 92.78%, respectively; the recall was 98.30%, 98.75% and 96.77%; the quality was 95.70%, 93.81% and 90.00%; and the F1 measure was 97.80%, 96.81% and 94.73%.

  15. Point Cloud Analysis for Conservation and Enhancement of Modernist Architecture

    Science.gov (United States)

    Balzani, M.; Maietti, F.; Mugayar Kühl, B.

    2017-02-01

    Documentation of cultural assets through improved acquisition processes for advanced 3D modelling is one of the main challenges to be faced in order to address, through digital representation, advanced analysis of the shape, appearance and conservation condition of cultural heritage. 3D modelling can open new avenues in the way tangible cultural heritage is studied, visualized, curated, displayed and monitored, improving key features such as the analysis and visualization of material degradation and state of conservation. Applied research focused on the analysis of surface specifications and material properties by means of a 3D laser scanner survey has been developed within the project of digital preservation of the FAUUSP building, Faculdade de Arquitetura e Urbanismo da Universidade de São Paulo, Brazil. The integrated 3D survey was performed by the DIAPReM Center of the Department of Architecture of the University of Ferrara in cooperation with the FAUUSP. The 3D survey allowed the realization of a point cloud model of the external surfaces, as the basis for investigating in detail the formal characteristics, geometric textures and surface features. The digital geometric model was also the basis for processing the intensity values acquired by the laser scanning instrument; this method of analysis was an essential complement to the macroscopic investigations in order to manage additional information related to surface characteristics displayable on the point cloud.

  16. FAST AND ROBUST SEGMENTATION AND CLASSIFICATION FOR CHANGE DETECTION IN URBAN POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    X. Roynard

    2016-06-01

    Full Text Available Change detection is an important issue in city monitoring to analyse street furniture, road works, car parking, etc. For example, parking surveys are needed but are currently a laborious task involving sending operators into the streets to identify the changes in car locations. In this paper, we propose a method that performs a fast and robust segmentation and classification of urban point clouds and that can be used for change detection. We apply this method to detect cars, as a particular object class, in order to perform parking surveys automatically. A recently proposed method already addresses the need for fast segmentation and classification of urban point clouds, using elevation images. The advantage of working on images is that processing is much faster, and image-processing techniques are proven and robust. However, there may be a loss of information in complex 3D cases: for example, when objects are one above the other, typically a car under a tree or a pedestrian under a balcony. In this paper we propose a method that retains the three-dimensional information while preserving fast computation times and improving segmentation and classification accuracy. It is based on fast region growing using an octree for the segmentation, and on specific descriptors with a Random Forest for the classification. Experiments have been performed on large urban point clouds acquired by Mobile Laser Scanning. They show that the method is as fast as the state of the art and that it gives more robust results in the complex 3D cases.

  17. NEW PERSPECTIVES OF POINT CLOUDS COLOR MANAGEMENT – THE DEVELOPMENT OF TOOL IN MATLAB FOR APPLICATIONS IN CULTURAL HERITAGE

    Directory of Open Access Journals (Sweden)

    M. Pepe

    2017-02-01

    Full Text Available The paper describes a method for point cloud color management and integration of data obtained from Terrestrial Laser Scanner (TLS) and Image Based (IB) survey techniques. Especially in the Cultural Heritage (CH) field, methods and techniques to improve the color quality of point clouds play a key role, because a homogeneous texture leads to a more accurate reconstruction of the investigated object and to a more pleasant perception of its color as well. A color management method for point clouds can be useful for a single dataset acquired by TLS or IB techniques, as well as in the case of chromatic heterogeneity resulting from merging different datasets. The latter condition can occur when the scans are acquired at different moments of the same day, or when scans of the same object are performed over a period of weeks or months, and consequently under different environment/lighting conditions. In this paper, a procedure to balance point cloud color in order to make different datasets uniform, to improve chromatic quality and to highlight further details is presented and discussed.
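
    A crude stand-in for such a balancing procedure (not the authors' MATLAB tool) is channel-wise mean and standard deviation transfer, shifting and scaling the RGB values of one scan so its color statistics match a reference scan:

```python
import numpy as np

def transfer_color(src_rgb, ref_rgb):
    """Shift/scale each RGB channel of src so its mean and std match ref.

    src_rgb, ref_rgb: (N, 3) float arrays with values in [0, 1].
    """
    src_mu, src_sd = src_rgb.mean(axis=0), src_rgb.std(axis=0) + 1e-12
    ref_mu, ref_sd = ref_rgb.mean(axis=0), ref_rgb.std(axis=0)
    out = (src_rgb - src_mu) / src_sd * ref_sd + ref_mu
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(1)
ref = rng.uniform(0.4, 0.6, size=(1000, 3))   # reference scan colors
src = rng.uniform(0.1, 0.9, size=(1000, 3))   # scan with different lighting
balanced = transfer_color(src, ref)
```

    Per-point histogram matching, or matching in a perceptual color space such as CIELAB, would be a closer approximation to a production workflow.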

  18. A MODELING METHOD OF FLUTTERING LEAVES BASED ON POINT CLOUD

    OpenAIRE

    J. Tang; Y. Wang; Y. Zhao; Y. Zhao; W. Hao; X. Ning; K. Lv; Z. Shi; M. Zhao

    2017-01-01

    Leaves falling gently or fluttering are common phenomena in natural scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes. The falling-leaves model has wide applications in the fields of animation and virtual reality. We propose a novel modeling method for fluttering leaves based on point clouds in this paper. According to the shape, the weight of leaves and the wind speed, three basic trajectories of leaves falling are defined, which ar...

  19. Comparison of 3D point clouds obtained by photogrammetric UAVs and TLS to determine the attitude of dolerite outcrops discontinuities.

    Science.gov (United States)

    Duarte, João; Gonçalves, Gil; Duarte, Diogo; Figueiredo, Fernando; Mira, Maria

    2015-04-01

    Photogrammetric Unmanned Aerial Vehicles (UAVs) and Terrestrial Laser Scanners (TLS) are two emerging technologies that allow the production of dense 3D point clouds of the sensed topographic surfaces. Although image-based stereo-photogrammetric point clouds cannot, in general, compete on geometric quality with TLS point clouds, fully automated mapping solutions based on ultra-light UAVs (or drones) have recently become commercially available at very reasonable accuracy and cost for engineering and geological applications. The purpose of this paper is to compare the point clouds generated by these two technologies, in order to automate the manual tasks commonly used to detect and represent the attitude of discontinuities (stereographic projection: Schmidt net, equal area). To avoid difficulties of access and to guarantee safe data survey conditions, this fundamental step in all geological/geotechnical studies, applied to the extractive industry and engineering works, has to be replaced by a more expeditious and reliable methodology. This methodology will allow, in a clearer and more direct way, answering the needs of rock mass evaluation by mapping the structures present, which will considerably reduce the associated risks (investment, structure dimensioning, security, etc.). A case study of a dolerite outcrop located in the center of Portugal (the outcrop is situated in the volcanic complex of Serra de Todo-o-Mundo, Casais Gaiola, intruded into Jurassic sandstones) is used to assess this methodology. The results obtained show that the 3D point cloud produced by the photogrammetric UAV platform has the appropriate geometric quality for extracting the parameters that define the discontinuities of the dolerite outcrops. Although comparable to the manually extracted parameters, their quality is inferior to that of parameters extracted from the TLS point cloud.
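
    The attitude extraction step can be illustrated generically: fit a plane to a patch of the point cloud via PCA and convert its normal into the dip direction and dip angle plotted on a Schmidt net. This is a sketch of the standard approach, not the exact pipeline used by the authors:

```python
import numpy as np

def plane_attitude(points):
    """Fit a plane to (N, 3) points (x=east, y=north, z=up) by PCA;
    return (dip_direction, dip) in degrees."""
    centered = points - points.mean(axis=0)
    # Plane normal = singular vector with the smallest singular value
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    n = vt[-1]
    if n[2] < 0:                      # orient the normal upwards
        n = -n
    dip = np.degrees(np.arccos(np.clip(n[2], -1.0, 1.0)))
    # The horizontal projection of the upward normal points down-dip,
    # so its azimuth (from north, clockwise) is the dip direction
    dip_dir = np.degrees(np.arctan2(n[0], n[1])) % 360.0
    return dip_dir, dip

# Noiseless patch dipping 30 degrees toward the east (azimuth 090)
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
z = -np.tan(np.radians(30.0)) * xy[:, 0]
dd, dip = plane_attitude(np.column_stack([xy, z]))
```

    On real outcrop data the patch would first be segmented into individual discontinuity faces before fitting.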

  20. An IFC schema extension and binary serialization format to efficiently integrate point cloud data into building models

    NARCIS (Netherlands)

    Krijnen, T.F.; Beetz, J.

    2017-01-01

    In this paper we suggest an extension to the Industry Foundation Classes (IFC) model to integrate point cloud datasets. The proposal includes a schema extension to the core model allowing the storage of points, either as Cartesian coordinates, points in parametric space of associated building
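
    A binary serialization of Cartesian points can be far more compact than an equivalent text (SPF) encoding. As a rough illustration of the idea only (not the schema proposed in the paper), the snippet packs a point list into a flat little-endian buffer with a length header and reads it back:

```python
import struct
import numpy as np

def pack_points(points):
    """Serialize an (N, 3) float array as a uint32 count + float64 triplets."""
    arr = np.asarray(points, dtype=float)
    n = len(arr)
    return struct.pack("<I", n) + struct.pack(f"<{3 * n}d", *arr.ravel())

def unpack_points(buf):
    """Inverse of pack_points."""
    (n,) = struct.unpack_from("<I", buf, 0)
    flat = struct.unpack_from(f"<{3 * n}d", buf, 4)
    return np.array(flat).reshape(n, 3)

pts = np.array([[0.0, 1.0, 2.0], [3.5, -1.25, 0.5]])
buf = pack_points(pts)
round_trip = unpack_points(buf)
```

    A production format would add quantization, per-block offsets and attribute channels (intensity, color) on top of this skeleton.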

  1. Sample extraction and injection with a microscale preconcentrator.

    Energy Technology Data Exchange (ETDEWEB)

    Robinson, Alex Lockwood (Advanced Sensor Technologies, Albuquerque, NM); Chan, Helena Kai Lun

    2007-09-01

    This report details the development of a microfabricated preconcentrator that functions as a fully integrated chemical extractor-injector for a microscale gas chromatograph (GC). The device enables parts-per-billion detection and quantitative analysis of volatile organic compounds (VOCs) in indoor air with size and power advantages over macro-scale systems. The 44 mm{sup 3} preconcentrator extracts VOCs using highly adsorptive, granular forms of graphitized carbon black and carbon molecular sieves. The micron-sized silicon cavities have integrated heating and temperature sensing allowing low power, yet rapid heating to thermally desorb the collected VOCs (GC injection). The keys to device construction are a new adsorbent-solvent filling technique and solvent-tolerant wafer-level silicon-gold eutectic bonding technology. The product is the first granular adsorbent preconcentrator integrated at the wafer level. Other advantages include exhaustive VOC extraction and injection peak widths an order of magnitude narrower than predecessor prototypes. A mass transfer model, the first for any microscale preconcentrator, is developed to describe both adsorption and desorption behaviors. The physically intuitive model uses implicit and explicit finite differences to numerically solve the required partial differential equations. The model is applied to the adsorption and desorption of decane at various concentrations to extract Langmuir adsorption isotherm parameters from effluent curve measurements where properties are unknown a priori.
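
    The Langmuir isotherm mentioned above relates the adsorbed amount q to concentration C via q = q_max·K·C / (1 + K·C). A minimal sketch of fitting those two parameters to (C, q) measurements with scipy, using synthetic illustrative data rather than the report's decane effluent curves:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, q_max, K):
    """Langmuir isotherm: adsorption capacity times fractional coverage."""
    return q_max * K * C / (1.0 + K * C)

# Synthetic "measurements" generated from known parameters q_max=8, K=0.3
C = np.linspace(0.1, 50.0, 25)     # concentration (arbitrary units)
q = langmuir(C, 8.0, 0.3)          # adsorbed amount
(q_max_fit, K_fit), _ = curve_fit(langmuir, C, q, p0=(1.0, 1.0))
```

    In practice the isotherm parameters would be estimated jointly with the finite-difference transport model, since effluent curves measure breakthrough rather than equilibrium loading directly.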

  2. An Automated Approach to the Generation of Structured Building Information Models from Unstructured 3d Point Cloud Scans

    DEFF Research Database (Denmark)

    Tamke, Martin; Evers, Henrik Leander; Wessel, Raoul

    2016-01-01

    In this paper we present and evaluate an approach for the automatic generation of building models in IFC BIM format from unstructured point cloud scans, as they result from 3D laser scans of buildings. While the actual measurement process is relatively fast, 85% of the overall time is spent...... on the interpretation and transformation of the resulting point cloud data into information which can be used in architectural and engineering design workflows. Our approach to tackling this problem, in contrast to existing ones which work at the level of points, is based on the detection of building elements...

  3. Analysis of Alphalactalbumin and Betalactoglobulin from the Rehydration of Bovine Colostrum Powder Using Cloud Point Extraction and Mass Spectrometry

    Directory of Open Access Journals (Sweden)

    Fan Zhang

    2012-01-01

    Full Text Available Alphalactalbumin (α-La) and betalactoglobulin (β-Lg) in rehydrated bovine colostrum powder were successfully separated by cloud point extraction using the nonionic surfactant Triton X-114. The effects of different factors, including the surfactant concentration, sample volume, electrolyte, and pH, were discussed. The optimized conditions for cloud point extraction of α-La and β-Lg were found to be 1% (w/v) Triton X-114, a 200 μL sample volume, 150 mmol/L NaCl, and 6% (w/v) sucrose. After cloud point extraction, capillary electrophoresis was used to check the efficiency of the extraction procedure. The results were effectively confirmed by characterization with matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS).

  4. Reconstruction of Consistent 3d CAD Models from Point Cloud Data Using a Priori CAD Models

    Science.gov (United States)

    Bey, A.; Chaine, R.; Marc, R.; Thibault, G.; Akkouche, S.

    2011-09-01

    We address the reconstruction of 3D CAD models from point cloud data acquired in industrial environments, using a pre-existing 3D model as an initial estimate of the scene to be processed. Indeed, this prior knowledge can be used to drive the reconstruction so as to generate an accurate 3D model matching the point cloud. We more particularly focus our work on the cylindrical parts of the 3D models. We propose to state the problem in a probabilistic framework: we have to search for the 3D model which maximizes some probability taking several constraints into account, such as the relevancy with respect to the point cloud and the a priori 3D model, and the consistency of the reconstructed model. The resulting optimization problem can then be handled using a stochastic exploration of the solution space, based on the random insertion of elements in the configuration under construction, coupled with a greedy management of the conflicts which efficiently improves the configuration at each step. We show that this approach provides reliable reconstructed 3D models by presenting some results on industrial data sets.

  5. Cloud point extraction: an alternative to traditional liquid-liquid extraction for lanthanides(III) separation.

    Science.gov (United States)

    Favre-Réguillon, Alain; Draye, Micheline; Lebuzit, Gérard; Thomas, Sylvie; Foos, Jacques; Cote, Gérard; Guy, Alain

    2004-06-17

    Cloud point extraction (CPE) was used to extract and separate lanthanum(III) and gadolinium(III) nitrate from an aqueous solution. The methodology used is based on the formation of lanthanide(III)-8-hydroxyquinoline (8-HQ) complexes soluble in a micellar phase of non-ionic surfactant. The lanthanide(III) complexes are then extracted into the surfactant-rich phase at a temperature above the cloud point temperature (CPT). The structure of the non-ionic surfactant, and the chelating agent-metal molar ratio are identified as factors determining the extraction efficiency and selectivity. In an aqueous solution containing equimolar concentrations of La(III) and Gd(III), extraction efficiency for Gd(III) can reach 96% with a Gd(III)/La(III) selectivity higher than 30 using Triton X-114. Under those conditions, a Gd(III) decontamination factor of 50 is obtained.
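
    The figures of merit quoted above follow from simple ratios. As a sketch using definitions common in separation chemistry (which may differ in detail from the paper's, and with illustrative extraction fractions rather than the paper's data):

```python
def extraction_metrics(e_target, e_contaminant):
    """Figures of merit from fractional extractions (0..1) of two metals.

    e_target: fraction of the target metal (e.g. Gd) extracted.
    e_contaminant: fraction of the co-extracted metal (e.g. La).
    """
    d_t = e_target / (1.0 - e_target)            # distribution ratio, target
    d_c = e_contaminant / (1.0 - e_contaminant)  # distribution ratio, other
    selectivity = d_t / d_c                      # separation factor
    # Decontamination factor: enrichment of target over contaminant in the
    # surfactant-rich phase relative to an equimolar feed
    df = e_target / e_contaminant
    return selectivity, df

# Hypothetical numbers: 96% of Gd and 1.92% of La carried into the micellar phase
sel, df = extraction_metrics(0.96, 0.0192)
```

    The exact values of selectivity and decontamination factor reported in the paper depend on its own operational definitions and measured concentrations.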

  6. The potential of cloud point system as a novel two-phase partitioning system for biotransformation.

    Science.gov (United States)

    Wang, Zhilong

    2007-05-01

    Although extractive biotransformation in two-phase partitioning systems (such as water-organic solvent two-phase systems, aqueous two-phase systems, reverse micelle systems, and room temperature ionic liquids) has been studied extensively, this has not yet resulted in widespread industrial application. Based on a discussion of the main obstacles, the exploitation of a cloud point system, which has already been applied in the separation field as cloud point extraction, as a novel two-phase partitioning system for biotransformation is reviewed through the analysis of some topical examples. At the end of the review, process control and downstream processing in the application of this novel two-phase partitioning system for biotransformation are also briefly discussed.

  7. Room acoustics modeling using a point-cloud representation of the room geometry

    DEFF Research Database (Denmark)

    Markovic, Milos; Olesen, Søren Krarup; Hammershøi, Dorte

    2013-01-01

    Room acoustics modeling is usually based on a room geometry that is parametrically described prior to the sound transmission calculation. This is a highly room-specific task and rather time consuming if a complex geometry is to be described. Here, a run-time generic method for arbitrary room...... geometry acquisition is presented. The method exploits the depth sensor of the Kinect device, which provides point-based information about a scanned room interior. After post-processing of the Kinect output data, a 3D point-cloud model of the room is obtained. Sound transmission between two selected points...... level of user immersion by a real-time acoustical simulation of dynamic scenes....

  8. An Automatic Building Extraction and Regularisation Technique Using LiDAR Point Cloud Data and Orthoimage

    Directory of Open Access Journals (Sweden)

    Syed Ali Naqi Gilani

    2016-03-01

    Full Text Available The development of robust and accurate methods for automatic building detection and regularisation using multisource data continues to be a challenge due to point cloud sparsity, high spectral variability, differences among urban objects, surrounding complexity, and data misalignment. To address these challenges, constraints on an object's size, height, area, and orientation are generally employed, which adversely affects detection performance. Buildings that are small, under shadow, or partly occluded are often ousted during the elimination of superfluous objects. To overcome these limitations, a methodology is developed to extract and regularise buildings using features from point clouds and orthoimagery. The building delineation process is carried out by identifying candidate building regions and segmenting them into grids. Vegetation elimination, building detection and extraction of their partially occluded parts are achieved by synthesising the point cloud and image data. Finally, the detected buildings are regularised by exploiting image lines in the building regularisation process. The detection and regularisation processes have been evaluated using the ISPRS benchmark and four Australian data sets which differ in point density (1 to 29 points/m2), building sizes, shadows, terrain, and vegetation. Results indicate 83% to 93% per-area completeness with a correctness above 95%, demonstrating the robustness of the approach. The absence of over- and many-to-many segmentation errors in the ISPRS data set indicates that the technique has higher per-object accuracy. Compared with six existing similar methods, the proposed detection and regularisation approach performs significantly better on the more complex (Australian) data sets than on the ISPRS benchmark, where it does better than or equal to its counterparts.
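
    Per-area completeness and correctness are standard detection metrics (recall and precision over rasterized footprints). A minimal sketch computing them from hypothetical boolean masks, not the ISPRS evaluation code:

```python
import numpy as np

def completeness_correctness(detected, reference):
    """Per-area completeness (recall) and correctness (precision).

    detected, reference: boolean arrays of equal shape (rasterized masks).
    """
    tp = np.logical_and(detected, reference).sum()
    completeness = tp / reference.sum()
    correctness = tp / detected.sum()
    return completeness, correctness

ref = np.zeros((10, 10), dtype=bool); ref[2:8, 2:8] = True   # 36 reference cells
det = np.zeros((10, 10), dtype=bool); det[3:8, 2:9] = True   # 35 detected cells
comp, corr = completeness_correctness(det, ref)
```

    Per-object versions of the same metrics count whole buildings (with an overlap threshold) instead of cells, which is what the over- and many-to-many segmentation error analysis refers to.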

  9. Analysis of relationship between registration performance of point cloud statistical model and generation method of corresponding points

    International Nuclear Information System (INIS)

    Yamaoka, Naoto; Watanabe, Wataru; Hontani, Hidekata

    2010-01-01

    When constructing a statistical point cloud model, we usually need to calculate corresponding points, and the constructed statistical model will differ depending on the method used to calculate them. This article examines the effect on statistical models of human organs of different methods of calculating corresponding points. We validated the performance of the statistical models by registering them to an organ surface in a 3D medical image. We compare two methods of calculating corresponding points. The first, 'Generalized Multi-Dimensional Scaling (GMDS)', determines the corresponding points from the shapes of two curved surfaces. The second, the 'entropy-based particle system', chooses corresponding points by statistically evaluating a number of curved surfaces. With these methods we construct the statistical models, and using these models we conduct registration with the medical image. For the estimation, we use non-parametric belief propagation, which estimates not only the position of the organ but also the probability density of the organ position. We evaluate how the two different methods of calculating corresponding points affect the statistical model through the change in the probability density of each point. (author)

  10. EXTRACTING TOPOLOGICAL RELATIONS BETWEEN INDOOR SPACES FROM POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    H. Tran

    2017-09-01

    Full Text Available 3D models of indoor environments are essential for many application domains such as navigation guidance, emergency management and a range of indoor location-based services. The principal components defined in different BIM standards contain not only building elements, such as floors, walls and doors, but also navigable spaces and their topological relations, which are essential for path planning and navigation. We present an approach to automatically reconstruct topological relations between navigable spaces from point clouds. Three types of topological relations, namely containment, adjacency and connectivity of the spaces are modelled. The results of initial experiments demonstrate the potential of the method in supporting indoor navigation.
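
    The three relations (containment, adjacency, connectivity) can be sketched over axis-aligned bounding boxes of the extracted spaces. This is a deliberate simplification: real indoor spaces need polyhedral tests, and connectivity additionally requires detecting openings such as doors.

```python
def contains(a, b, eps=1e-9):
    """True if box a contains box b; boxes are ((xmin,ymin,zmin),(xmax,ymax,zmax))."""
    return all(a[0][i] - eps <= b[0][i] and b[1][i] <= a[1][i] + eps
               for i in range(3))

def adjacent(a, b, tol=0.3):
    """True if the boxes nearly touch (gap <= tol, e.g. a shared wall)
    and neither contains the other."""
    gaps = [max(a[0][i] - b[1][i], b[0][i] - a[1][i]) for i in range(3)]
    return max(gaps) <= tol and not contains(a, b) and not contains(b, a)

room   = ((0, 0, 0), (5, 4, 3))
closet = ((1, 1, 0), (2, 2, 3))      # inside the room -> containment
hall   = ((5.1, 0, 0), (8, 4, 3))    # separated by a 10 cm wall -> adjacency
```

    A topology graph then has one node per space and one edge per detected relation, which is the structure path planning consumes.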

  11. AUTOMATED CALIBRATION OF FEM MODELS USING LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    B. Riveiro

    2018-05-01

    Full Text Available In the present work we aim to estimate the elastic parameters of beams through the combined use of precision geomatic techniques (laser scanning) and structural behaviour simulation tools. The study has two aims: on the one hand, to develop an algorithm able to automatically interpret point clouds acquired by laser scanning systems from beams subjected to different load situations in experimental tests; and on the other hand, to minimise the differences between the deformation values given by the simulation tools and those measured by laser scanning. In this way we proceed to identify the elastic parameters and boundary conditions of the structural element so that surface stresses can be estimated more easily.
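
    The calibration idea can be reduced to a toy case: for a cantilever with a tip load, the deflection is δ = P·L³ / (3·E·I), so scanned deflections at several loads determine E by least squares. The actual work couples a full FEM model, not this closed-form beam formula; the numbers below are assumed for illustration:

```python
import numpy as np

L = 2.0                # beam length (m), assumed
I = 8.0e-6             # second moment of area (m^4), assumed
E_true = 210e9         # steel-like modulus (Pa), to be "recovered"

P = np.array([1e3, 2e3, 3e3, 4e3])          # tip loads (N)
delta = P * L**3 / (3 * E_true * I)         # "laser-scanned" deflections

# delta = (L^3 / (3 I E)) * P is linear in P: fit the slope, invert for E
slope = np.linalg.lstsq(P[:, None], delta, rcond=None)[0][0]
E_est = L**3 / (3 * I * slope)
```

    With a FEM model in the loop, the same residual (simulated minus scanned displacement field) is minimized numerically over E and the boundary-condition parameters instead of inverted analytically.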

  12. H-Ransac a Hybrid Point Cloud Segmentation Combining 2d and 3d Data

    Science.gov (United States)

    Adam, A.; Chatzilari, E.; Nikolopoulos, S.; Kompatsiaris, I.

    2018-05-01

    In this paper, we present a novel 3D segmentation approach operating on point clouds generated from overlapping images. The aim of the proposed hybrid approach is to effectively segment co-planar objects by leveraging the structural information originating from the 3D point cloud and the visual information from the 2D images, without resorting to learning-based procedures. More specifically, the proposed hybrid approach, H-RANSAC, is an extension of the well-known RANSAC plane-fitting algorithm, incorporating an additional consistency criterion based on the results of 2D segmentation. Our expectation that the integration of 2D data into 3D segmentation will achieve more accurate results is validated experimentally in the domain of 3D city models. Results show that H-RANSAC can successfully delineate building components like main facades and windows, and provides more accurate segmentation results compared to the typical RANSAC plane-fitting algorithm.
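
    The plane-fitting core that H-RANSAC extends can be sketched in a few lines: repeatedly draw three points, form the plane through them, and keep the hypothesis with the most inliers. This is plain RANSAC without the paper's 2D consistency criterion:

```python
import numpy as np

def ransac_plane(points, n_iter=200, thresh=0.05, seed=None):
    """Return (normal, d, inlier_mask) for the best plane n.x + d = 0."""
    rng = np.random.default_rng(seed)
    best = (None, None, np.zeros(len(points), dtype=bool))
    for _ in range(n_iter):
        a, b, c = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(b - a, c - a)
        norm = np.linalg.norm(n)
        if norm < 1e-12:                 # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n.dot(a)
        inliers = np.abs(points @ n + d) < thresh
        if inliers.sum() > best[2].sum():
            best = (n, d, inliers)
    return best

rng = np.random.default_rng(0)
plane = np.column_stack([rng.uniform(-1, 1, (300, 2)), np.zeros(300)])  # z = 0
noise = rng.uniform(-1, 1, (60, 3))
n, d, inl = ransac_plane(np.vstack([plane, noise]), seed=0)
```

    H-RANSAC additionally rejects a candidate plane whose supporting points straddle different 2D segments, which suppresses planes that accidentally cut through several co-planar objects.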

  13. The Complex Point Cloud for the Knowledge of the Architectural Heritage. Some Experiences

    Science.gov (United States)

    Aveta, C.; Salvatori, M.; Vitelli, G. P.

    2017-05-01

    The present paper aims to present a series of experiences and experimentations that a group of PhD from the University of Naples Federico II conducted over the past decade. This work has concerned the survey and the graphic restitution of monuments and works of art, finalized to their conservation. The targeted query of complex point cloud acquired by 3D scanners, integrated with photo sensors and thermal imaging, has allowed to explore new possibilities of investigation. In particular, we will present the scientific results of the experiments carried out on some important historical artifacts with distinct morphological and typological characteristics. According to aims and needs that emerged during the connotative process, with the support of archival and iconographic historical research, the laser scanner technology has been used in many different ways. New forms of representation, obtained directly from the point cloud, have been tested for the elaboration of thematic studies for documenting the pathologies and the decay of materials, for correlating visible aspects with invisible aspects of the artifact.

  15. a Two-Step Classification Approach to Distinguishing Similar Objects in Mobile LIDAR Point Clouds

    Science.gov (United States)

    He, H.; Khoshelham, K.; Fraser, C.

    2017-09-01

    Nowadays, LiDAR is widely used in cultural heritage documentation, urban modeling, and driverless car technology for its fast and accurate 3D scanning ability. However, full exploitation of the potential of point cloud data for efficient and automatic object recognition remains elusive. Recently, feature-based methods have become very popular in object recognition on account of their good performance in capturing object details. Compared with global features describing the whole shape of an object, local features recording fractional details are more discriminative and are applicable to object classes with considerable similarity. In this paper, we propose a two-step classification approach based on point feature histograms and the bag-of-features method for automatic recognition of similar objects in mobile LiDAR point clouds. Lamp posts, street lights and traffic signs are grouped as one category in the first-step classification because of their mutual similarity compared with trees and vehicles. A finer classification of the lamp posts, street lights and traffic signs, based on the result of the first-step classification, is implemented in the second step. The proposed two-step classification approach is shown to yield a considerable improvement over the conventional one-step classification approach.
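
    The coarse-to-fine idea can be sketched with any classifier. Below, a nearest-centroid stand-in (the paper uses point feature histograms with bag-of-features and a stronger classifier) on made-up 2D feature vectors: step one separates pole-like objects from trees and cars, step two refines only the pole-like group.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Return (classes, centroids) for a nearest-centroid classifier."""
    classes = sorted(set(y))
    centroids = np.array([X[np.asarray(y) == c].mean(axis=0) for c in classes])
    return classes, centroids

def nearest_centroid_predict(model, X):
    classes, centroids = model
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return [classes[i] for i in d.argmin(axis=1)]

# Step 1: coarse classes; pole-like objects share a region of feature space
coarse_X = np.array([[0.10, 0.10], [0.20, 0.15], [0.15, 0.20],  # pole-like
                     [0.90, 0.90], [0.85, 0.95],                # tree
                     [0.90, 0.10], [0.95, 0.15]])               # vehicle
coarse_y = ["pole", "pole", "pole", "tree", "tree", "car", "car"]
coarse = nearest_centroid_fit(coarse_X, coarse_y)

# Step 2: a finer model trained only on the pole-like samples
fine_X = np.array([[0.10, 0.10], [0.20, 0.15], [0.15, 0.20]])
fine_y = ["lamp post", "street light", "traffic sign"]
fine = nearest_centroid_fit(fine_X, fine_y)

query = np.array([[0.19, 0.16]])
label = nearest_centroid_predict(coarse, query)[0]
if label == "pole":                   # refine only the ambiguous group
    label = nearest_centroid_predict(fine, query)[0]
```

    The benefit of the two-step scheme is that the second classifier only has to resolve the hard intra-group distinctions, on features tuned for them.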

  16. a Super Voxel-Based Riemannian Graph for Multi Scale Segmentation of LIDAR Point Clouds

    Science.gov (United States)

    Li, Minglei

    2018-04-01

    Automatically segmenting LiDAR points into independent partitions has become a topic of great importance in photogrammetry, remote sensing and computer vision. In this paper, we cast the problem of point cloud segmentation as a graph optimization problem by constructing a Riemannian graph. The scale space of the observed scene is explored by an octree-based over-segmentation with different depths. The over-segmentation produces many super voxels which capture the structure of the scene and are used as nodes of the graph. Kruskal coordinates are used to compute edge weights that are proportional to the geodesic distance between nodes. We then compute the edge-weight matrix, in which the elements reflect the sectional curvatures associated with the geodesic paths between super voxel nodes on the scene surface. The final segmentation results are generated by clustering similar super voxels and cutting off the weak edges in the graph. The performance of this method was evaluated on LiDAR point clouds of both indoor and outdoor scenes. Additionally, extensive comparisons show that our algorithm outperforms state-of-the-art techniques on many metrics.
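
    The "cut weak edges, cluster the rest" step can be illustrated with scipy: build a k-nearest-neighbour graph over super-voxel centroids, drop edges whose weight exceeds a threshold, and read off connected components. Plain Euclidean edge length stands in for the paper's geodesic/curvature weights; this is only the graph-cutting skeleton, not the Riemannian construction:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def cut_graph_segments(centroids, k=4, max_edge=1.0):
    """Cluster centroids by cutting k-NN edges longer than max_edge."""
    tree = cKDTree(centroids)
    dist, idx = tree.query(centroids, k=k + 1)   # first neighbour is self
    rows, cols = [], []
    keep = dist[:, 1:] <= max_edge               # "strong" edges only
    for i in range(len(centroids)):
        for j, ok in zip(idx[i, 1:], keep[i]):
            if ok:
                rows.append(i)
                cols.append(j)
    n = len(centroids)
    adj = coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
    n_comp, labels = connected_components(adj, directed=False)
    return n_comp, labels

# Two chains of centroids far apart: the long inter-chain edges get cut
a = np.column_stack([np.arange(30) * 0.1, np.zeros(30), np.zeros(30)])
b = a + np.array([10.0, 0.0, 0.0])
n_comp, labels = cut_graph_segments(np.vstack([a, b]), max_edge=1.0)
```

    Replacing the Euclidean weight with a curvature-aware one is what lets the full method split surfaces that are spatially close but geometrically distinct.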

  17. CONTOURS BASED APPROACH FOR THERMAL IMAGE AND TERRESTRIAL POINT CLOUD REGISTRATION

    Directory of Open Access Journals (Sweden)

    A. Bennis

    2013-07-01

    Full Text Available Building energy performance strongly depends on thermal insulation. However, the performance of insulation materials tends to decrease over time, which necessitates continuous monitoring of the building in order to detect and repair anomalous zones. In this paper, it is proposed to couple 2D infrared images representing the surface temperature of the building with 3D point clouds acquired with a Terrestrial Laser Scanner (TLS), resulting in a semi-automatic approach allowing the texturing of TLS data with infrared images of buildings. A contour-based algorithm is proposed whose main features are: (1) the extraction of high-level primitives is not required; (2) the use of a projective transform makes it possible to handle perspective effects; (3) a point matching refinement procedure copes with approximate control point selection. The procedure is applied to test modules aimed at investigating the thermal properties of materials.
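
    The projective transform in feature (2) maps 2D contour points between the thermal image and the scan-derived image via a 3x3 homography in homogeneous coordinates. A minimal sketch with an assumed example matrix (a pure translation), not a calibrated one:

```python
import numpy as np

def apply_homography(H, pts):
    """Apply a 3x3 projective transform to (N, 2) points."""
    ones = np.ones((len(pts), 1))
    homog = np.hstack([pts, ones]) @ H.T
    return homog[:, :2] / homog[:, 2:3]    # perspective divide

# Translation by (2, 3) expressed as a homography; a real H estimated from
# matched contour points would also encode rotation, scale and perspective
H = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])
corners = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
moved = apply_homography(H, corners)
```

    In the registration pipeline, H is estimated from four or more contour correspondences and then refined by the point matching procedure of feature (3).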

  18. EFFICIENT LIDAR POINT CLOUD DATA MANAGING AND PROCESSING IN A HADOOP-BASED DISTRIBUTED FRAMEWORK

    Directory of Open Access Journals (Sweden)

    C. Wang

    2017-10-01

    Full Text Available Light Detection and Ranging (LiDAR) is one of the most promising technologies in surveying and mapping, city management, forestry, object recognition, computer vision engineering and other fields. However, it is challenging to efficiently store, query and analyze high-resolution 3D LiDAR data due to its volume and complexity. In order to improve the productivity of LiDAR data processing, this study proposes a Hadoop-based framework to efficiently manage and process LiDAR data in a distributed and parallel manner, taking advantage of Hadoop's storage and computing ability. At the same time, the Point Cloud Library (PCL), an open-source project for 2D/3D image and point cloud processing, is integrated with HDFS and MapReduce to run the LiDAR data analysis algorithms provided by PCL in a parallel fashion. The experimental results show that the proposed framework can efficiently manage and process big LiDAR data.
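
    The MapReduce decomposition for LiDAR can be sketched without a cluster: the map phase keys each point by its spatial tile, the reduce phase aggregates per tile. A pure-Python stand-in for such a job (no HDFS or PCL, just the partitioning logic that makes the processing parallelizable):

```python
from collections import defaultdict

def map_phase(points, tile=100.0):
    """Map: emit (tile_key, 1) for each (x, y, z) point."""
    for x, y, z in points:
        yield (int(x // tile), int(y // tile)), 1

def reduce_phase(pairs):
    """Reduce: sum the counts per tile key."""
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

points = [(10.0, 20.0, 5.0), (150.0, 20.0, 7.0), (110.0, 30.0, 2.0)]
tile_counts = reduce_phase(map_phase(points))
```

    In the real framework each tile's point subset would be handed to a PCL algorithm (filtering, normal estimation, etc.) inside the map or reduce task instead of being merely counted.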

  19. Separation and recycling of nanoparticles using cloud point extraction with non-ionic surfactant mixtures.

    Science.gov (United States)

    Nazar, Muhammad Faizan; Shah, Syed Sakhawat; Eastoe, Julian; Khan, Asad Muhammad; Shah, Afzal

    2011-11-15

    A viable cost-effective approach employing mixtures of non-ionic surfactants Triton X-114/Triton X-100 (TX-114/TX-100), and subsequent cloud point extraction (CPE), has been utilized to concentrate and recycle inorganic nanoparticles (NPs) in aqueous media. Gold Au- and palladium Pd-NPs have been pre-synthesized in aqueous phases and stabilized by sodium 2-mercaptoethanesulfonate (MES) ligands, then dispersed in aqueous non-ionic surfactant mixtures. Heating the NP-micellar systems induced cloud point phase separations, resulting in concentration of the NPs in lower phases after the transition. For the Au-NPs UV/vis absorption has been used to quantify the recovery and recycle efficiency after five repeated CPE cycles. Transmission electron microscopy (TEM) was used to investigate NP size, shape, and stability. The results showed that NPs are preserved after the recovery processes, but highlight a potential limitation, in that further particle growth can occur in the condensed phases. Copyright © 2011 Elsevier Inc. All rights reserved.

  1. Geospatial Field Methods: An Undergraduate Course Built Around Point Cloud Construction and Analysis to Promote Spatial Learning and Use of Emerging Technology in Geoscience

    Science.gov (United States)

    Bunds, M. P.

    2017-12-01

    Point clouds are a powerful data source in the geosciences, and the emergence of structure-from-motion (SfM) photogrammetric techniques has allowed them to be generated quickly and inexpensively. Consequently, applications of them as well as methods to generate, manipulate, and analyze them warrant inclusion in undergraduate curriculum. In a new course called Geospatial Field Methods at Utah Valley University, students in small groups use SfM to generate a point cloud from imagery collected with a small unmanned aerial system (sUAS) and use it as a primary data source for a research project. Before creating their point clouds, students develop needed technical skills in laboratory and class activities. The students then apply the skills to construct the point clouds, and the research projects and point cloud construction serve as a central theme for the class. Intended student outcomes for the class include: technical skills related to acquiring, processing, and analyzing geospatial data; improved ability to carry out a research project; and increased knowledge related to their specific project. To construct the point clouds, students first plan their field work by outlining the field site, identifying locations for ground control points (GCPs), and loading them onto a handheld GPS for use in the field. They also estimate sUAS flight elevation, speed, and the flight path grid spacing required to produce a point cloud with the resolution required for their project goals. In the field, the students place the GCPs using handheld GPS, and survey the GCP locations using post-processed-kinematic (PPK) or real-time-kinematic (RTK) methods. The students pilot the sUAS and operate its camera according to the parameters that they estimated in planning their field work. Data processing includes obtaining accurate locations for the PPK/RTK base station and GCPs, and SfM processing with Agisoft Photoscan. The resulting point clouds are rasterized into digital surface models
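
    The flight-planning estimates the students make follow from the standard ground-sampling-distance relation GSD = flight height x pixel pitch / focal length, and line spacing from the photo footprint and the desired sidelap. A sketch with assumed camera parameters (a generic 1-inch-sensor camera, not the course's actual sUAS):

```python
def ground_sampling_distance(height_m, focal_mm, sensor_w_mm, image_w_px):
    """GSD in metres/pixel for a nadir photo at the given flight height."""
    pixel_pitch_mm = sensor_w_mm / image_w_px
    return height_m * pixel_pitch_mm / focal_mm

def flight_line_spacing(height_m, focal_mm, sensor_w_mm, sidelap=0.7):
    """Distance between flight lines for the requested sidelap fraction."""
    footprint = height_m * sensor_w_mm / focal_mm   # ground width of one photo
    return footprint * (1.0 - sidelap)

# Assumed camera: 13.2 mm sensor width, 8.8 mm focal length, 5472 px images
gsd = ground_sampling_distance(100.0, 8.8, 13.2, 5472)       # ~2.7 cm/px
spacing = flight_line_spacing(100.0, 8.8, 13.2, sidelap=0.7)  # 45 m
```

    Working backwards from a target point density to a maximum flight height uses the same two relations in reverse.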

  2. Automated Reconstruction of Building LoDs from Airborne LiDAR Point Clouds Using an Improved Morphological Scale Space

    Directory of Open Access Journals (Sweden)

    Bisheng Yang

    2016-12-01

    Full Text Available Reconstructing building models at different levels of detail (LoDs) from airborne laser scanning point clouds is urgently needed for wide application as this method can balance between the user’s requirements and economic costs. The previous methods reconstruct building LoDs from the finest 3D building models rather than from point clouds, resulting in heavy costs and inflexible adaptivity. The scale space is a sound theory for multi-scale representation of an object from a coarser level to a finer level. Therefore, this paper proposes a novel method to reconstruct buildings at different LoDs from airborne Light Detection and Ranging (LiDAR) point clouds based on an improved morphological scale space. The proposed method first extracts building candidate regions following the separation of ground and non-ground points. For each building candidate region, the proposed method generates a scale space by iteratively using the improved morphological reconstruction with the increase of scale, and constructs the corresponding topological relationship graphs (TRGs) across scales. Secondly, the proposed method robustly extracts building points by using features based on the TRG. Finally, the proposed method reconstructs each building at different LoDs according to the TRG. The experiments demonstrate that the proposed method robustly extracts the buildings with details (e.g., door eaves and roof furniture) and illustrate good performance in distinguishing buildings from vegetation or other objects, while automatically reconstructing building LoDs from the finest building points.
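
    As a rough illustration of a morphological scale space (a plain grayscale opening with a growing window, not the paper's improved morphological reconstruction), small roof structures disappear as the scale increases:

```python
import numpy as np
from scipy.ndimage import grey_opening

def morphological_scale_space(height_grid, scales):
    """Build a coarse-to-fine stack by opening the height grid with
    structuring elements of increasing size; structures smaller than
    the window vanish at coarser scales, which is the basis for
    tracking building parts across scales."""
    return [grey_opening(height_grid, size=(s, s)) for s in scales]

# A 1-cell "roof furniture" spike on top of a flat 10 m roof:
roof = np.full((9, 9), 10.0)
roof[4, 4] = 12.0
levels = morphological_scale_space(roof, scales=[1, 3, 5])
print(levels[0][4, 4], levels[1][4, 4])  # 12.0 10.0
```

    The spike survives at scale 1 but is removed at scale 3, so its disappearance scale tells the coarser levels apart from the finer ones.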

  3. Application of fibrous complexing sorbents for trace elements preconcentration and separation

    International Nuclear Information System (INIS)

    Zakhartchenko, E.A.; Myasoedova, G.V.

    2003-01-01

    This article demonstrates the application of the 'filled' fibrous sorbents for preconcentration and separation of platinum metals, as well as heavy metals and radionuclides. The POLYORGS complexing sorbents and ion-exchangers were used as fillers. Dynamic preconcentration conditions should be set for complete sorption of the elements: the diameter and mass of the sorbent disk or the column, as well as the flow rate of the solution. These conditions depend on the specific features of the materials to be analysed and the requirements of the experimental task or detection method. The wide scope for varying their properties, together with excellent kinetic properties and high selectivity, confirms the applicability of the 'filled' sorbents for trace element preconcentration and separation in technology and analytical chemistry. (authors)

  4. Edge Detection and Feature Line Tracing in 3D-Point Clouds by Analyzing Geometric Properties of Neighborhoods

    Directory of Open Access Journals (Sweden)

    Huan Ni

    2016-09-01

    Full Text Available This paper presents an automated and effective method for detecting 3D edges and tracing feature lines from 3D-point clouds. The method, named Analysis of Geometric Properties of Neighborhoods (AGPN), includes two main steps: edge detection and feature line tracing. In the edge detection step, AGPN analyzes the geometric properties of each query point's neighborhood, and then combines RANdom SAmple Consensus (RANSAC) and an angular gap metric to detect edges. In the feature line tracing step, feature lines are traced by a hybrid method based on region growing and model fitting in the detected edges. Our approach is experimentally validated on complex man-made objects and large-scale urban scenes with millions of points. Comparative studies with state-of-the-art methods demonstrate that our method achieves promising and reliable performance in detecting edges and tracing feature lines in 3D-point clouds. Moreover, AGPN is insensitive to the point density of the input data.
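
    An angular gap metric of the kind used in the edge detection step can be sketched as follows; the PCA-fitted tangent plane and the toy neighborhoods are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def max_angular_gap(query, neighbors):
    """Project the neighbors into their PCA-fitted local plane, measure
    the angles of the directions from the query point, and return the
    largest angular gap; a gap of about pi or more suggests the query
    point lies on an edge or boundary."""
    centered = neighbors - neighbors.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)  # vt[0], vt[1] span the plane
    d = neighbors - query
    angles = np.sort(np.arctan2(d @ vt[1], d @ vt[0]))
    gaps = np.diff(np.concatenate([angles, [angles[0] + 2 * np.pi]]))
    return gaps.max()

theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)
ring = np.c_[np.cos(theta), np.sin(theta), np.zeros(8)]
interior = max_angular_gap(np.zeros(3), ring)       # neighbors all around
boundary = max_angular_gap(np.zeros(3), ring[:4])   # neighbors on one side only
print(interior, boundary)  # pi/4 versus 5*pi/4
```

    The interior point is surrounded, so its largest gap stays small; the boundary point's neighbors cover only a half-plane, so the gap exceeds pi.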

  5. Curvature computation in volume-of-fluid method based on point-cloud sampling

    Science.gov (United States)

    Kassar, Bruno B. M.; Carneiro, João N. E.; Nieckele, Angela O.

    2018-01-01

    This work proposes a novel approach to compute interface curvature in multiphase flow simulation based on Volume of Fluid (VOF) method. It is well documented in the literature that curvature and normal vector computation in VOF may lack accuracy mainly due to abrupt changes in the volume fraction field across the interfaces. This may cause deterioration on the interface tension forces estimates, often resulting in inaccurate results for interface tension dominated flows. Many techniques have been presented over the last years in order to enhance accuracy in normal vectors and curvature estimates including height functions, parabolic fitting of the volume fraction, reconstructing distance functions, coupling Level Set method with VOF, convolving the volume fraction field with smoothing kernels among others. We propose a novel technique based on a representation of the interface by a cloud of points. The curvatures and the interface normal vectors are computed geometrically at each point of the cloud and projected onto the Eulerian grid in a Front-Tracking manner. Results are compared to benchmark data and significant reduction on spurious currents as well as improvement in the pressure jump are observed. The method was developed in the open source suite OpenFOAM® extending its standard VOF implementation, the interFoam solver.
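
    A minimal sketch of geometric curvature estimation from interface points, here via an algebraic (Kasa) circle fit in 2D rather than the paper's 3D projection scheme:

```python
import numpy as np

def curvature_from_points(interface_points):
    """Algebraic (Kasa) circle fit to 2D interface points: solve a
    linear least-squares system for the circle centre and radius,
    and return the curvature 1/R."""
    x, y = interface_points[:, 0], interface_points[:, 1]
    A = np.c_[2 * x, 2 * y, np.ones_like(x)]
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(c + cx ** 2 + cy ** 2)
    return 1.0 / radius

# Quarter arc sampled from a bubble of radius 0.25.
theta = np.linspace(0.0, np.pi / 2, 20)
arc = 0.25 * np.c_[np.cos(theta), np.sin(theta)]
print(curvature_from_points(arc))  # ~4.0, i.e. 1/0.25
```

    Because the fit uses many interface points at once, it is far less sensitive to the abrupt volume-fraction jumps that plague finite-difference curvature estimates.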

  6. Satellite remote sensing and cloud modeling of St. Anthony, Minnesota storm clouds and dew point depression

    Science.gov (United States)

    Hung, R. J.; Tsao, Y. D.

    1988-01-01

    Rawinsonde data and geosynchronous satellite imagery were used to investigate the life cycles of St. Anthony, Minnesota's severe convective storms. It is found that the fully developed storm clouds, with overshooting cloud tops penetrating above the tropopause, collapsed about three minutes before the touchdown of the tornadoes. Results indicate that the probability of producing an outbreak of tornadoes causing greater damage increases when there are higher values of potential energy storage per unit area for overshooting cloud tops penetrating the tropopause. It is also found that clouds with a lower moisture content are less likely to grow into storm clouds than clouds with a higher moisture content.

  7. Terrestrial laser scanning point clouds time series for the monitoring of slope movements: displacement measurement using image correlation and 3D feature tracking

    Science.gov (United States)

    Bornemann, Pierrick; Jean-Philippe, Malet; André, Stumpf; Anne, Puissant; Julien, Travelletti

    2016-04-01

    Dense multi-temporal point clouds acquired with terrestrial laser scanning (TLS) have proved useful for the study of structure and kinematics of slope movements. Most of the existing deformation analysis methods rely on the use of interpolated data. Approaches that use multiscale image correlation provide a precise and robust estimation of the observed movements; however, for non-rigid motion patterns, these methods tend to underestimate all the components of the movement. Further, for rugged surface topography, interpolated data introduce a bias and a loss of information in some local places where the point cloud information is not sufficiently dense. Those limits can be overcome by deformation analysis applied directly to the original 3D point clouds, under some hypotheses on the deformation (e.g. the classic ICP algorithm requires an initial guess by the user of the expected displacement patterns). The objective of this work is therefore to propose a deformation analysis method applied to a series of 20 3D point clouds covering the period October 2007 - October 2015 at the Super-Sauze landslide (South East French Alps). The dense point clouds have been acquired with a terrestrial long-range Optech ILRIS-3D laser scanning device from the same base station. The time series are analyzed using two approaches: 1) a method of correlation of gradient images, and 2) a method of feature tracking in the raw 3D point clouds. The estimated surface displacements are then compared with GNSS surveys on reference targets. Preliminary results tend to show that the image correlation method provides a good estimation of the displacement fields at first order, but shows limitations such as the inability to track some deformation patterns, and the use of a perspective projection that does not maintain original angles and distances in the correlated images. Results obtained with 3D point clouds comparison algorithms (C2C, ICP, M3C2) bring additional information on the
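
    The simplest of the point cloud comparison algorithms mentioned above, C2C, reduces to a nearest-neighbour distance query. A sketch with synthetic data (the 5 cm uplift is an assumed toy displacement, not from the landslide study):

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud(reference, compared):
    """C2C comparison in its simplest form: for every point of the
    compared epoch, the distance to its nearest neighbour in the
    reference epoch."""
    distances, _ = cKDTree(reference).query(compared)
    return distances

rng = np.random.default_rng(0)
ref = rng.random((1000, 3))                # reference epoch
moved = ref + np.array([0.0, 0.0, 0.05])   # assumed 5 cm uniform uplift
d = cloud_to_cloud(ref, moved)
print(d.mean())  # near 0.05 but biased low: the nearest neighbour is not the true correspondence
```

    This bias toward underestimating displacement is one reason M3C2, which measures along locally fitted normals, is usually preferred for change detection.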

  8. Object-Based Point Cloud Analysis of Full-Waveform Airborne Laser Scanning Data for Urban Vegetation Classification

    Directory of Open Access Journals (Sweden)

    Norbert Pfeifer

    2008-08-01

    Full Text Available Airborne laser scanning (ALS) is a remote sensing technique well-suited for 3D vegetation mapping and structure characterization because the emitted laser pulses are able to penetrate small gaps in the vegetation canopy. The backscattered echoes from the foliage, woody vegetation, the terrain, and other objects are detected, leading to a cloud of points. Higher echo densities (>20 echoes/m²) and additional classification variables from full-waveform (FWF) ALS data, namely echo amplitude, echo width and information on multiple echoes from one shot, offer new possibilities in classifying the ALS point cloud. Currently FWF sensor information is hardly used for classification purposes. This contribution presents an object-based point cloud analysis (OBPA) approach, combining segmentation and classification of the 3D FWF ALS points, designed to detect tall vegetation in urban environments. The definition of tall vegetation includes trees and shrubs, but excludes grassland and herbage. In the applied procedure FWF ALS echoes are segmented by a seeded region growing procedure. All echoes, sorted in descending order by their surface roughness, are used as seed points. Segments are grown based on echo width homogeneity. Next, segment statistics (mean, standard deviation, and coefficient of variation) are calculated by aggregating echo features such as amplitude and surface roughness. For classification a rule base is derived automatically from a training area using a statistical classification tree. To demonstrate our method we present data of three sites with around 500,000 echoes each. The accuracy of the classified vegetation segments is evaluated for two independent validation sites. In a point-wise error assessment, where the classification is compared with manually classified 3D points, completeness and correctness better than 90% are reached for the validation sites. In comparison to many other algorithms the proposed 3D point classification works on the original

  9. Assessing the performance of aerial image point cloud and spectral metrics in predicting boreal forest canopy cover

    Science.gov (United States)

    Melin, M.; Korhonen, L.; Kukkonen, M.; Packalen, P.

    2017-07-01

    Canopy cover (CC) is a variable used to describe the status of forests and forested habitats, but also the variable used primarily to define what counts as a forest. The estimation of CC has relied heavily on remote sensing, with past studies focusing on satellite imagery as well as Airborne Laser Scanning (ALS) using light detection and ranging (lidar). Of these, ALS has been proven highly accurate, because the fraction of pulses penetrating the canopy represents a direct measurement of canopy gap percentage. However, the methods of photogrammetry can be applied to produce point clouds fairly similar to airborne lidar data from aerial images. Currently there is little information about how well such point clouds measure canopy density and gaps. The aim of this study was to assess the suitability of aerial image point clouds for CC estimation and compare the results with those obtained using spectral data from aerial images and Landsat 5. First, we modeled CC for n = 1149 lidar plots using field-measured CCs and lidar data. Next, these data were split into five subsets in the north-south direction (y-coordinate). Finally, four CC models (AerialSpectral, AerialPointcloud, AerialCombi (spectral + pointcloud) and Landsat) were created and used to predict new CC values for the lidar plots, subset by subset, using five-fold cross validation. The Landsat and AerialSpectral models performed with RMSEs of 13.8% and 12.4%, respectively. The AerialPointcloud model reached an RMSE of 10.3%, which was further improved by the inclusion of spectral data; the RMSE of the AerialCombi model was 9.3%. We noticed that the aerial image point clouds managed to describe only the outermost layer of the canopy and missed the details in the lower canopy, which resulted in weak characterization of the total CC variation, especially in the tails of the data.
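
    The five-fold cross-validation used to compare the CC models can be sketched with a plain least-squares model on synthetic data; the predictors, coefficients, and noise level below are assumptions, not the study's values:

```python
import numpy as np

def cv_rmse(X, y, n_folds=5):
    """Five-fold cross-validated RMSE of an ordinary least-squares
    model; folds are contiguous blocks, standing in for the
    north-south subsets used in the study."""
    folds = np.array_split(np.arange(len(y)), n_folds)
    residuals = []
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([f for i, f in enumerate(folds) if i != k])
        Xtr = np.c_[np.ones(len(train)), X[train]]
        coef, *_ = np.linalg.lstsq(Xtr, y[train], rcond=None)
        residuals.append(y[test] - np.c_[np.ones(len(test)), X[test]] @ coef)
    e = np.concatenate(residuals)
    return float(np.sqrt(np.mean(e ** 2)))

rng = np.random.default_rng(1)
X = rng.random((200, 2))                                  # two predictor metrics
y = 40 * X[:, 0] + 10 * X[:, 1] + rng.normal(0, 5, 200)   # synthetic CC (%)
print(cv_rmse(X, y))  # close to the noise level of 5
```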

  10. Section-Based Tree Species Identification Using Airborne LIDAR Point Cloud

    Science.gov (United States)

    Yao, C.; Zhang, X.; Liu, H.

    2017-09-01

    The application of LiDAR data in forestry initially focused on mapping forest communities, primarily for large-scale forest management and planning. With smaller-footprint, higher-sampling-density LiDAR data now available, detecting individual overstory trees, estimating crown parameters, and identifying tree species have been shown to be practicable. This paper proposes a section-based protocol for tree species identification, taking the palm tree as an example. The section-based method detects objects in profiles taken along different directions, basically along the X-axis or Y-axis, which improves the utilization of spatial information and generates accurate results. Firstly, tree points are separated from man-made-object points by decision-tree-based rules, and a Canopy Height Model (CHM) is created by subtracting the Digital Terrain Model (DTM) from the Digital Surface Model (DSM). Then key points are calculated and extracted to locate individual trees, and specific tree parameters related to species information, such as crown height, crown radius, and cross point, are estimated. Finally, with these parameters certain tree species can be identified. Compared to species information measured on the ground, the proportion of correctly identified trees across all plots reached 90.65%. The identification results in this research demonstrate the ability to distinguish palm trees using LiDAR point clouds. Furthermore, with more prior knowledge, the section-based method enables the process to classify trees into different classes.
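
    The CHM construction described above (DSM minus DTM) and the key-point extraction used to locate individual trees can be sketched as follows; the grid values, window, and height threshold are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def canopy_height_model(dsm, dtm):
    """CHM = DSM - DTM, clipped at zero."""
    return np.clip(dsm - dtm, 0.0, None)

def tree_tops(chm, min_height=2.0, window=3):
    """Key points as local maxima of the CHM above a height threshold,
    a common way to locate individual trees before estimating crown
    parameters."""
    is_peak = (chm == maximum_filter(chm, size=window)) & (chm >= min_height)
    return np.argwhere(is_peak)

dtm = np.zeros((7, 7))   # flat terrain
dsm = np.zeros((7, 7))
dsm[2, 2] = 8.0          # palm crown apex
dsm[5, 5] = 1.0          # low shrub, under the threshold
chm = canopy_height_model(dsm, dtm)
print(tree_tops(chm))  # [[2 2]]
```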

  11. Investigating Freezing Point Depression and Cirrus Cloud Nucleation Mechanisms Using a Differential Scanning Calorimeter

    Science.gov (United States)

    Bodzewski, Kentaro Y.; Caylor, Ryan L.; Comstock, Ashley M.; Hadley, Austin T.; Imholt, Felisha M.; Kirwan, Kory D.; Oyama, Kira S.; Wise, Matthew E.

    2016-01-01

    A differential scanning calorimeter was used to study homogeneous nucleation of ice from micron-sized aqueous ammonium sulfate aerosol particles. It is important to understand the conditions at which these particles nucleate ice because of their connection to cirrus cloud formation. Additionally, the concept of freezing point depression, a topic…

  12. ASSESSING TEMPORAL BEHAVIOR IN LIDAR POINT CLOUDS OF URBAN ENVIRONMENTS

    Directory of Open Access Journals (Sweden)

    J. Schachtschneider

    2017-05-01

    Full Text Available Self-driving cars and robots that run autonomously over long periods of time need high-precision and up-to-date models of the changing environment. The main challenge for creating long term maps of dynamic environments is to identify changes and adapt the map continuously. Changes can occur abruptly, gradually, or even periodically. In this work, we investigate how dense mapping data of several epochs can be used to identify the temporal behavior of the environment. This approach anticipates possible future scenarios where a large fleet of vehicles is equipped with sensors which continuously capture the environment. This data is then being sent to a cloud based infrastructure, which aligns all datasets geometrically and subsequently runs scene analysis on it, among these being the analysis for temporal changes of the environment. Our experiments are based on a LiDAR mobile mapping dataset which consists of 150 scan strips (a total of about 1 billion points), which were obtained in multiple epochs. Parts of the scene are covered by up to 28 scan strips. The time difference between the first and last epoch is about one year. In order to process the data, the scan strips are aligned using an overall bundle adjustment, which estimates the surface (about one billion surface element unknowns) as well as 270,000 unknowns for the adjustment of the exterior orientation parameters. After this, the surface misalignment is usually below one centimeter. In the next step, we perform a segmentation of the point clouds using a region growing algorithm. The segmented objects and the aligned data are then used to compute an occupancy grid which is filled by tracing each individual LiDAR ray from the scan head to every point of a segment. As a result, we can assess the behavior of each segment in the scene and remove voxels from temporal objects from the global occupancy grid.
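
    The occupancy-grid step, tracing each LiDAR ray from the scan head to the measured point, can be sketched as follows. This is a simplified sampling-based walk (half-voxel steps), not the exact voxel traversal a production pipeline would use:

```python
import numpy as np

def voxel_of(p, size):
    """Integer voxel index of a 3D position."""
    return tuple(int(v) for v in np.floor(p / size))

def trace_ray(origin, hit, size, free, occupied):
    """Walk from the scan head to the measured point at half-voxel
    steps, marking crossed voxels as observed-free and the end voxel
    as occupied."""
    direction = hit - origin
    n_steps = int(np.linalg.norm(direction) / (0.5 * size)) + 1
    for t in np.linspace(0.0, 1.0, n_steps, endpoint=False):
        free.add(voxel_of(origin + t * direction, size))
    occupied.add(voxel_of(hit, size))
    free.difference_update(occupied)

free, occupied = set(), set()
trace_ray(np.zeros(3), np.array([4.0, 0.0, 0.0]), 1.0, free, occupied)
print(sorted(free), occupied)  # four free voxels along the ray, end voxel occupied
```

    Accumulating free/occupied observations per voxel over many epochs is what lets temporal objects be detected and removed from the global grid.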
  14. Exploring point-cloud features from partial body views for gender classification

    Science.gov (United States)

    Fouts, Aaron; McCoppin, Ryan; Rizki, Mateen; Tamburino, Louis; Mendoza-Schrock, Olga

    2012-06-01

    In this paper we extend a previous exploration of histogram features extracted from 3D point cloud images of human subjects for gender discrimination. Feature extraction used a collection of concentric cylinders to define volumes for counting 3D points. The histogram features are characterized by a rotational axis and a selected set of volumes derived from the concentric cylinders. The point cloud images are drawn from the CAESAR anthropometric database provided by the Air Force Research Laboratory (AFRL) Human Effectiveness Directorate and SAE International. This database contains approximately 4400 high resolution LIDAR whole body scans of carefully posed human subjects. Success from our previous investigation was based on extracting features from full body coverage which required integration of multiple camera images. With the full body coverage, the central vertical body axis and orientation are readily obtainable; however, this is not the case with a one camera view providing less than one half body coverage. Assuming that the subjects are upright, we need to determine or estimate the position of the vertical axis and the orientation of the body about this axis relative to the camera. In past experiments the vertical axis was located through the center of mass of torso points projected on the ground plane and the body orientation derived using principal component analysis. In a natural extension of our previous work to partial body views, the absence of rotational invariance about the cylindrical axis greatly increases the difficulty for gender classification. Even the problem of estimating the axis is no longer simple. We describe some simple feasibility experiments that use partial image histograms. Here, the cylindrical axis is assumed to be known. We also discuss experiments with full body images that explore the sensitivity of classification accuracy relative to displacements of the cylindrical axis. Our initial results provide the basis for further
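
    The concentric-cylinder histogram features can be sketched as follows, assuming a known vertical axis as in the feasibility experiments; the bin edges and the synthetic point blob are assumptions, not the CAESAR setup:

```python
import numpy as np

def cylinder_histogram(points, axis_xy, radii, z_bins):
    """Count 3D points inside concentric cylindrical shells around a
    vertical axis, per height slab; the concatenated counts form the
    histogram feature vector."""
    d = np.hypot(points[:, 0] - axis_xy[0], points[:, 1] - axis_xy[1])
    r_idx = np.digitize(d, radii)             # shell index per point
    z_idx = np.digitize(points[:, 2], z_bins) # height slab per point
    hist = np.zeros((len(z_bins) + 1, len(radii) + 1), dtype=int)
    np.add.at(hist, (z_idx, r_idx), 1)
    return hist.ravel()

rng = np.random.default_rng(2)
body = rng.normal(0, 0.15, (500, 3)) + [0.0, 0.0, 1.0]   # crude torso blob
feat = cylinder_histogram(body, axis_xy=(0.0, 0.0),
                          radii=[0.1, 0.2, 0.3], z_bins=[0.5, 1.0, 1.5])
print(feat.sum())  # 500: every point falls in exactly one bin
```

    Because the counts per shell are invariant to rotation about the axis, the feature works for full body coverage; a partial view breaks that invariance, which is exactly the difficulty the paper discusses.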

  15. Comparison of a UAV-derived point-cloud to Lidar data at Haig Glacier, Alberta, Canada

    Science.gov (United States)

    Bash, E. A.; Moorman, B.; Montaghi, A.; Menounos, B.; Marshall, S. J.

    2016-12-01

    The use of unmanned aerial vehicles (UAVs) is expanding rapidly in glaciological research as a result of technological improvements that make UAVs a cost-effective solution for collecting high resolution datasets with relative ease. The cost and difficult access traditionally associated with performing fieldwork in glacial environments makes UAVs a particularly attractive tool. In the small, but growing, body of literature using UAVs in glaciology the accuracy of UAV data is tested through the comparison of a UAV-derived DEM to measured control points. A field campaign combining simultaneous lidar and UAV flights over Haig Glacier in April 2015 provided the unique opportunity to directly compare UAV data to lidar. The UAV was a six-propeller Mikrokopter carrying a Panasonic Lumix DMC-GF1 camera with a 12 Megapixel Live MOS sensor and Lumix G 20 mm lens flown at a height of 90 m, resulting in sub-centimetre ground resolution per image pixel. Lidar data collection took place April 20, while UAV flights were conducted April 20-21. A set of 65 control points were laid out and surveyed on the glacier surface on April 19 and 21 using an RTK GPS with a vertical uncertainty of 5 cm. A direct comparison of lidar points to these control points revealed a 9 cm offset between the control points and the lidar points on average, but the difference changed distinctly between points collected on April 19 and those collected on April 21 (7 cm and 12 cm, respectively). Agisoft Photoscan was used to create a point-cloud from imagery collected with the UAV, and CloudCompare was used to calculate the difference between this and the lidar point cloud, revealing an average difference of less than 17 cm. This field campaign also highlighted some of the benefits and drawbacks of using a rotary UAV for glaciological research. The vertical takeoff and landing capabilities, combined with quick responsiveness and higher carrying capacity, make the rotary vehicle favourable for high-resolution photos when

  16. Automatic Rail Extraction and Clearance Check with a Point Cloud Captured by MLS in a Railway

    Science.gov (United States)

    Niina, Y.; Honma, R.; Honma, Y.; Kondo, K.; Tsuji, K.; Hiramatsu, T.; Oketani, E.

    2018-05-01

    Recently, MLS (Mobile Laser Scanning) has been successfully used in road maintenance. In this paper, we present the application of MLS for the inspection of clearance along railway tracks of West Japan Railway Company. Point clouds around the track are captured by MLS mounted on a bogie, and the rail position can be determined by matching the shape of an ideal rail head to the point cloud with the ICP algorithm. A clearance check is then executed automatically with a virtual clearance model laid along the extracted rail. As a result of the evaluation, the error of the extracted rail positions is less than 3 mm. With respect to the automatic clearance check, objects inside the clearance envelope, including ones related to the contact line, are successfully detected, as verified by visual confirmation.
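
    The automatic clearance check can be sketched as a point-in-gauge test in rail-centred coordinates; the box-shaped gauge below is an assumed stand-in for the real polygonal clearance model swept along the track:

```python
import numpy as np

def clearance_violations(points, rail_origin, half_width, height):
    """Flag points intruding into a simplified box-shaped clearance
    gauge erected above the extracted rail position (x: lateral,
    y: along-track, z: vertical)."""
    local = points - rail_origin
    inside = (np.abs(local[:, 0]) < half_width) & \
             (0.0 < local[:, 2]) & (local[:, 2] < height)
    return points[inside]

pts = np.array([[0.5, 10.0, 2.0],    # hanging branch inside the gauge
                [3.0, 10.0, 2.0],    # trackside pole, laterally clear
                [0.2, 11.0, 6.0]])   # structure above the gauge
bad = clearance_violations(pts, rail_origin=np.zeros(3),
                           half_width=1.7, height=5.0)
print(bad)  # only the hanging branch is flagged
```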

  17. Metal Recovery and Preconcentration by EDTA- and DTPA-Modified Silica Surfaces

    Directory of Open Access Journals (Sweden)

    Eveliina Repo

    2017-03-01

    Full Text Available This study focuses on the adsorption and preconcentration of various metals by silica gel surfaces modified with aminopolycarboxylic acids, namely ethylenediaminetetraacetic acid (EDTA) or diethylenetriaminepentaacetic acid (DTPA). The adsorption performance of the studied materials was determined in mixed metal solutions, and adsorption isotherm studies were conducted for cobalt, nickel, cadmium, and lead. The results were modeled using various theoretical isotherm equations, which suggested that two different adsorption sites were involved in metal removal, although lead showed clearly different adsorption behavior, attributed to its lowest hydration tendency. Efficient regeneration of the adsorbents and preconcentration of the metals were achieved with nitric acid. The results indicated that the metals under study could be analyzed rather accurately after preconcentration from pure, saline, and ground water samples.

  18. [Determination of biphenyl ether herbicides in water using HPLC with cloud-point extraction].

    Science.gov (United States)

    He, Cheng-Yan; Li, Yuan-Qian; Wang, Shen-Jiao; Ouyang, Hua-Xue; Zheng, Bo

    2010-01-01

    The aim was to determine residues of multiple biphenyl ether herbicides simultaneously in water using high performance liquid chromatography (HPLC) with cloud-point extraction. The residues of eight biphenyl ether herbicides (including bentazone, fomesafen, acifluorfen, aclonifen, bifenox, fluoroglycofen-ethyl, nitrofen and oxyfluorfen) in water samples were extracted by cloud-point extraction with Triton X-114. The analytes were separated and determined using reverse phase HPLC with an ultraviolet detector at 300 nm. Optimized conditions were applied for the pretreatment of the water samples and for the chromatographic separation. There was a good linear correlation between the concentration and the peak area of the analytes in the range of 0.05-2.00 mg/L (r = 0.9991-0.9998). Except for bentazone, the spiked recoveries of the biphenyl ether herbicides in the water samples ranged from 80.1% to 100.9%, with relative standard deviations ranging from 2.70% to 6.40%. The detection limit of the method ranged from 0.10 microg/L to 0.50 microg/L. The proposed method is simple, rapid and sensitive, and can meet the requirements of determination of multiple biphenyl ether herbicides simultaneously in natural waters.

  19. A TWO-STEP CLASSIFICATION APPROACH TO DISTINGUISHING SIMILAR OBJECTS IN MOBILE LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    H. He

    2017-09-01

    Full Text Available Nowadays, lidar is widely used in cultural heritage documentation, urban modeling, and driverless car technology for its fast and accurate 3D scanning ability. However, full exploitation of the potential of point cloud data for efficient and automatic object recognition remains elusive. Recently, feature-based methods have become very popular in object recognition on account of their good performance in capturing object details. Compared with global features describing the whole shape of the object, local features recording fractional details are more discriminative and are applicable to object classes with considerable similarity. In this paper, we propose a two-step classification approach based on point feature histograms and the bag-of-features method for automatic recognition of similar objects in mobile lidar point clouds. Lamp posts, street lights and traffic signs are grouped as one category in the first-step classification because of their mutual similarity compared with trees and vehicles. A finer classification of the lamp posts, street lights and traffic signs, based on the result of the first-step classification, is implemented in the second step. The proposed two-step classification approach is shown to yield a considerable improvement over the conventional one-step classification approach.
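
    The bag-of-features step can be sketched as nearest-codeword assignment followed by a word-count histogram; the tiny codebook and descriptors below are illustrative (in the paper the codebook would come from clustering point feature histograms over training data):

```python
import numpy as np

def bof_histogram(descriptors, codebook):
    """Assign each local descriptor (e.g. a point feature histogram)
    to its nearest codeword and return the normalised word-count
    vector used as the object-level feature."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)                 # nearest codeword per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

codebook = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
descs = np.array([[0.1, 0.0], [0.9, 1.1], [1.0, 0.9], [0.1, 0.1]])
print(bof_histogram(descs, codebook))  # word counts [2, 2, 0] normalised
```

    The resulting fixed-length vector is what a classifier then consumes, which is how variable-sized point segments become comparable.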

  20. Low cost digital photogrammetry: From the extraction of point clouds by SFM technique to 3D mathematical modeling

    Science.gov (United States)

    Michele, Mangiameli; Giuseppe, Mussumeci; Salvatore, Zito

    2017-07-01

    Structure From Motion (SFM) is a technique applied to a series of photographs of an object that returns a 3D reconstruction made up of points in space (a point cloud). This research aims at comparing the results of the SFM approach with the results of 3D laser scanning in terms of density and accuracy of the model. The survey was conducted by detecting several architectural elements (walls and portals of historical buildings) both with a 3D laser scanner of the latest generation and with an amateur photographic camera. The point clouds acquired by the laser scanner and those acquired by the photo camera have been systematically compared. In particular, we present the work carried out on the "Don Diego Pappalardo Palace" site in Pedara (Catania, Sicily).

  1. Online preconcentration ICP-MS analysis of rare earth elements in seawater

    Science.gov (United States)

    Hathorne, Ed C.; Haley, Brian; Stichel, Torben; Grasse, Patricia; Zieringer, Moritz; Frank, Martin

    2012-01-01

    The rare earth elements (REEs) with their systematically varying properties are powerful tracers of continental inputs, particle scavenging intensity and the oxidation state of seawater. However, their generally low (~pmol/kg) concentrations in seawater and fractionation potential during chemical treatment make them difficult to measure. Here we report a technique using an automated preconcentration system, which efficiently separates seawater matrix elements and elutes the preconcentrated sample directly into the spray chamber of an ICP-MS instrument. The commercially available "seaFAST" system (Elemental Scientific Inc.) makes use of a resin with ethylenediaminetriacetic acid and iminodiacetic acid functional groups to preconcentrate REEs and other metals while anions and alkali and alkaline earth cations are washed out. Repeated measurements of seawater from 2000 m water depth in the Southern Ocean allow the external precision (2σ) of the technique to be estimated. Results obtained for mine water reference materials diluted with a NaCl matrix agree with recommended values in the literature. This makes the online preconcentration ICP-MS technique advantageous for the minimal sample preparation required and the relatively small sample volume consumed (7 mL), thus enabling large data sets for the REEs in seawater to be rapidly acquired.

  2. Coupling aerosol-cloud-radiative processes in the WRF-Chem model: Investigating the radiative impact of elevated point sources

    Directory of Open Access Journals (Sweden)

    E. G. Chapman

    2009-02-01

    Full Text Available The local and regional influence of elevated point sources on summertime aerosol forcing and cloud-aerosol interactions in northeastern North America was investigated using the WRF-Chem community model. The direct effects of aerosols on incoming solar radiation were simulated using existing modules to relate aerosol sizes and chemical composition to aerosol optical properties. Indirect effects were simulated by adding a prognostic treatment of cloud droplet number and adding modules that activate aerosol particles to form cloud droplets, simulate aqueous-phase chemistry, and tie a two-moment treatment of cloud water (cloud water mass and cloud droplet number) to precipitation and an existing radiation scheme. Fully interactive feedbacks thus were created within the modified model, with aerosols affecting cloud droplet number and cloud radiative properties, and clouds altering aerosol size and composition via aqueous processes, wet scavenging, and gas-phase-related photolytic processes. Comparisons of a baseline simulation with observations show that the model captured the general temporal cycle of aerosol optical depths (AODs) and produced clouds of comparable thickness to observations at approximately the proper times and places. The model overpredicted SO2 mixing ratios and PM2.5 mass, but reproduced the range of observed SO2 to sulfate aerosol ratios, suggesting that atmospheric oxidation processes leading to aerosol sulfate formation are captured in the model. The baseline simulation was compared to a sensitivity simulation in which all emissions at model levels above the surface layer were set to zero, thus removing stack emissions. Instantaneous, site-specific differences for aerosol and cloud related properties between the two simulations could be quite large, as removing above-surface emission sources influenced when and where clouds formed within the modeling domain. When summed spatially over the finest

  3. Cloud point extraction for the determination of lead and cadmium in urine by graphite furnace atomic absorption spectrometry with multivariate optimization using Box-Behnken design

    International Nuclear Information System (INIS)

    Maranhao, Tatiane de A; Martendal, Edmar; Borges, Daniel L.G.; Carasek, Eduardo; Welz, Bernhard; Curtius, Adilson J.

    2007-01-01

    Cloud point extraction (CPE) is proposed as a pre-concentration procedure for the determination of Pb and Cd in undigested urine by graphite furnace atomic absorption spectrometry (GF AAS). Aliquots of 0.5 mL urine were acidified with HCl and the chelating agent ammonium O,O-diethyl dithiophosphate (DDTP) was added along with the non-ionic surfactant Triton X-114 at the optimized concentrations. Phase separation was achieved by heating the mixture to 50 deg. C for 15 min. The surfactant-rich phase was analyzed by GF AAS, employing the optimized pyrolysis temperatures of 900 deg. C for Pb and 800 deg. C for Cd, using a graphite tube with a platform treated with 500 μg Ru as permanent modifier. The reagent concentrations for CPE (HCl, DDTP and Triton X-114) were optimized using a Box-Behnken design. The response surfaces and the optimum values were very similar for aqueous solutions and for the urine samples, demonstrating that aqueous standards submitted to CPE could be used for calibration. Detection limits of 40 and 2 ng L-1 for Pb and Cd, respectively, were obtained, along with an enhancement factor of 16 for both analytes. Three control urine samples were analyzed using this approach, and good agreement was obtained at the 95% statistical confidence level between the certified and determined values. Five real samples were also analyzed before and after spiking with Pb and Cd, resulting in recoveries ranging from 97 to 118%.
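The Box-Behnken optimization above evaluates the three reagent factors at coded levels. As a generic illustration only (not the authors' software; the choice of three centre points is an assumption), a coded Box-Behnken design matrix can be generated as follows:

```python
from itertools import combinations, product

def box_behnken(n_factors, center_runs=3):
    """Coded Box-Behnken design: each pair of factors takes the four
    (+/-1, +/-1) combinations while all other factors are held at 0."""
    runs = []
    for i, j in combinations(range(n_factors), 2):
        for a, b in product((-1, 1), repeat=2):
            row = [0] * n_factors
            row[i], row[j] = a, b
            runs.append(row)
    runs.extend([[0] * n_factors for _ in range(center_runs)])
    return runs

# Three coded factors, e.g. HCl, DDTP and Triton X-114 concentrations
design = box_behnken(3)
print(len(design))  # 12 edge-midpoint runs + 3 centre points = 15
```

Each row is one experimental run; the coded levels -1/0/+1 are mapped back to actual concentrations before running the experiments.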

  4. Topobathymetric LiDAR point cloud processing and landform classification in a tidal environment

    Science.gov (United States)

    Skovgaard Andersen, Mikkel; Al-Hamdani, Zyad; Steinbacher, Frank; Rolighed Larsen, Laurids; Brandbyge Ernstsen, Verner

    2017-04-01

    Historically it has been difficult to create high resolution Digital Elevation Models (DEMs) in land-water transition zones due to shallow water depth and often challenging environmental conditions. This gap of information has been reflected as a "white ribbon" with no data in the land-water transition zone. In recent years, the technology of airborne topobathymetric Light Detection and Ranging (LiDAR) has proven capable of filling this gap by simultaneously capturing topographic and bathymetric elevation information, using only a single green laser. We collected green LiDAR point cloud data in the Knudedyb tidal inlet system in the Danish Wadden Sea in spring 2014. Creating a DEM from a point cloud requires the general processing steps of data filtering, water surface detection and refraction correction. However, there is no transparent and reproducible method for processing green LiDAR data into a DEM, specifically regarding the procedure of water surface detection and modelling. We developed a step-by-step procedure for creating a DEM from raw green LiDAR point cloud data, including a procedure for making a Digital Water Surface Model (DWSM) (see Andersen et al., 2017). Two different classification analyses were applied to the high resolution DEM: a geomorphometric and a morphological classification, respectively. The classification methods were originally developed for a small test area, but in this work we have applied them to the complete Knudedyb tidal inlet system. References Andersen MS, Gergely Á, Al-Hamdani Z, Steinbacher F, Larsen LR, Ernstsen VB (2017). Processing and performance of topobathymetric lidar data for geomorphometric and morphological classification in a high-energy tidal environment. Hydrol. Earth Syst. Sci., 21: 43-63, doi:10.5194/hess-21-43-2017. Acknowledgements This work was funded by the Danish Council for Independent Research | Natural Sciences through the project "Process-based understanding and
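The refraction-correction step mentioned in the processing chain is a standard part of green-LiDAR bathymetry: returns below the detected water surface travel slower and bend according to Snell's law. A minimal sketch (assuming a flat water surface and a refractive index of about 1.33; this is a textbook correction, not the authors' implementation):

```python
import math

def refraction_correct(slant_range, incidence_deg, n_water=1.33):
    """Convert an apparent in-water slant range to a vertical depth.

    slant_range: apparent range below the detected water surface (m),
                 as derived from the in-air speed of light
    incidence_deg: beam incidence angle from vertical, in air (degrees)
    """
    theta_air = math.radians(incidence_deg)
    # Snell's law: sin(theta_air) = n_water * sin(theta_water)
    theta_water = math.asin(math.sin(theta_air) / n_water)
    # Light is slower in water, so the true range is shorter by 1/n
    return (slant_range / n_water) * math.cos(theta_water)

# A 10 m apparent nadir return corresponds to ~7.52 m of true depth
print(round(refraction_correct(10.0, 0.0), 2))
```

For oblique shots the correction additionally shifts the bottom point horizontally; only the depth component is shown here.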

  5. Mobile Laser Scanning along Dieppe coastal cliffs: reliability of the acquired point clouds applied to rockfall assessments

    Science.gov (United States)

    Michoud, Clément; Carrea, Dario; Augereau, Emmanuel; Cancouët, Romain; Costa, Stéphane; Davidson, Robert; Delacourt, Christophe; Derron, Marc-Henri; Jaboyedoff, Michel; Letortu, Pauline; Maquaire, Olivier

    2013-04-01

    Dieppe coastal cliffs, in Normandy, France, are mainly formed by sub-horizontal deposits of chalk and flintstone. Largely destabilized by intense weathering and erosion by the Channel sea, small and large rockfalls are regularly observed and contribute to retrogressive cliff processes. During autumn 2012, cliff and intertidal topographies were acquired with a Terrestrial Laser Scanner (TLS) and a Mobile Laser Scanner (MLS), coupled with seafloor bathymetries acquired with a multibeam echosounder (MBES). MLS is a recent development of laser scanning based on the same theoretical principles as aerial LiDAR, but using smaller, cheaper and portable devices. The MLS system, which is composed of an accurate dynamic positioning and orientation (INS) device and a long-range LiDAR, is mounted on a marine vessel; it is then possible to quickly acquire georeferenced LiDAR point clouds in motion with a resolution of about 15 cm. For example, it takes about 1 h to scan a 2 km stretch of shoreline. MLS is becoming a promising technique supporting erosion and rockfall assessments along the shores of lakes, fjords or seas. In this study, the MLS system used to acquire the cliffs and intertidal areas of the Cap d'Ailly was composed of the INS Applanix POS-MV 320 V4 and the LiDAR Optech Ilris LR. On the same day, three MLS scans with large overlaps (J1, J2 and J3) were performed at ranges from 600 m at 4 knots (low tide) down to 200 m at 2.2 knots (rising tide) with a calm sea at 2.5 Beaufort (small wavelets). Mean scan resolutions range from 26 cm for the far scan (J1) to about 8.1 cm for the close scan (J3). Moreover, one TLS point cloud of this test site was acquired with a mean resolution of about 2.3 cm, using a Riegl LMS Z390i. In order to quantify the reliability of the methodology, comparisons between scans were made with the software Polyworks™, calculating shortest distances between points of one cloud and the interpolated surface of the reference point cloud. A Mat
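The scan-to-scan comparison above computes point-to-surface distances in Polyworks; a simplified point-to-point variant (nearest neighbour only, on synthetic data rather than the study's clouds) can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)
reference = rng.uniform(0, 10, size=(2000, 3))            # reference scan
repeat = reference[:500] + rng.normal(0, 0.05, (500, 3))  # repeat scan, 5 cm noise

# Shortest distance from every repeat-scan point to the reference cloud
d = np.linalg.norm(repeat[:, None, :] - reference[None, :, :], axis=2)
nearest = d.min(axis=1)
print(float(nearest.mean()) < 0.1)  # mean discrepancy well below 10 cm
```

For clouds of millions of points the brute-force distance matrix is replaced by a spatial index (k-d tree or octree), and point-to-surface rather than point-to-point distances remove the sampling bias between scans of different resolution.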

  6. Filterless preconcentration, flow injection analysis and detection by inductively-coupled plasma mass spectrometry

    DEFF Research Database (Denmark)

    Andersen, Jens Enevold Thaulov

    The influence of interferences in the analysis of elements by inductively-coupled-plasma mass-spectrometry (ICP-MS) may be significantly diminished by utilising a protocol of flow-injection analysis (FIA). The method is based on filterless preconcentration of metallic elements at the walls...... of a knotted reactor that was made of nylon tubings. In the load mode, the preconcentration was accomplished by precipitation of metallic species in alkaline-buffered carriers onto the inner walls of the hydrophilic tube. After a preconcentration period of 40-120 seconds using sample volumes of 4-10 m...... of 10-30 were obtained in the analysis of aluminium, of chromium and of iron, which resulted in detection limits (3σ) down to 20 μg/L at a sampling frequency of 50 per hour. The preconcentration protocol improves the selectivity thus allowing direct determination of the elements in saline media. Anionic...

  7. Global Registration of 3D LiDAR Point Clouds Based on Scene Features: Application to Structured Environments

    Directory of Open Access Journals (Sweden)

    Julia Sanchez

    2017-09-01

    Full Text Available Acquiring 3D data with LiDAR systems involves scanning multiple scenes from different points of view. In practical systems, the ICP algorithm (Iterative Closest Point) is commonly used to register the acquired point clouds into a single one. However, this method faces local-minima issues and often needs a coarse initial alignment to converge to the optimum. This paper develops a new registration method adapted to indoor environments and based on structural priors of such scenes. Our method works without odometric data or physical targets. The rotation and translation of the rigid transformation are computed separately, using, respectively, the Gaussian image of the point clouds and a correlation of histograms. To evaluate our algorithm on challenging registration cases, two datasets were acquired and are available online for comparison with other methods. The evaluation of our algorithm on four datasets against six existing methods shows that the proposed method is more robust against sampling and scene complexity. Moreover, its runtime performance enables a real-time implementation.
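For context, the baseline the paper improves on can be sketched in a few lines: a bare-bones ICP iteration alternating nearest-neighbour matching with the SVD-based (Kabsch) rigid alignment. This is a generic illustration on synthetic data, not the authors' algorithm:

```python
import numpy as np

def icp_step(src, dst):
    """One ICP iteration: match each source point to its nearest
    destination point, then apply the optimal (Kabsch/SVD) rigid fit."""
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]          # brute-force nearest neighbours
    mu_s, mu_m = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_m)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                        # proper rotation (det = +1)
    t = mu_m - R @ mu_s
    return src @ R.T + t

rng = np.random.default_rng(1)
cloud = rng.uniform(-1, 1, size=(200, 3))
angle = np.radians(5)                         # small initial misalignment
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
moved = cloud @ Rz.T + np.array([0.05, -0.02, 0.01])

est = cloud
for _ in range(20):
    est = icp_step(est, moved)
print(float(np.mean(np.linalg.norm(est - moved, axis=1))) < 0.01)
```

The small-rotation starting point is what makes plain ICP succeed here; with large misalignments it stalls in local minima, which is exactly the failure mode the decoupled rotation/translation approach targets.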

  8. Preconcentration of uranium in water samples using dispersive ...

    African Journals Online (AJOL)

    Preconcentration of uranium in water samples using dispersive liquid-liquid microextraction coupled with solid-phase extraction and determination with inductively coupled plasma-optical emission spectrometry.

  9. Utilization of the Discrete Differential Evolution for Optimization in Multidimensional Point Clouds.

    Science.gov (United States)

    Uher, Vojtěch; Gajdoš, Petr; Radecký, Michal; Snášel, Václav

    2016-01-01

    The Differential Evolution (DE) is a widely used bioinspired optimization algorithm developed by Storn and Price. It is popular for its simplicity and robustness. This algorithm was primarily designed for real-valued problems and continuous functions, but several modified versions optimizing both integer and discrete-valued problems have been developed. The discrete-coded DE has mostly been used for combinatorial problems in a set of enumerative variants. However, the DE has great potential in spatial data analysis and pattern recognition. This paper formulates the problem as a search for a combination of distinct vertices that meets specified conditions. It proposes a novel approach called the Multidimensional Discrete Differential Evolution (MDDE), applying the principle of the discrete-coded DE to discrete point clouds (PCs). The paper examines the local searching abilities of the MDDE and its convergence to the global optimum in the PCs. The multidimensional discrete vertices cannot simply be ordered to obtain a convenient ordering of the discrete data, which is crucial for good convergence of a population. A novel mutation operator utilizing linear ordering of spatial data based on space-filling curves is introduced. The algorithm is tested on several spatial datasets and optimization problems. The experiments show that the MDDE is an efficient and fast method for discrete optimization in multidimensional point clouds.
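For readers unfamiliar with the underlying algorithm, the canonical real-valued DE/rand/1/bin scheme of Storn and Price (which the MDDE discretizes) can be sketched as follows; the sphere objective and all parameter values are illustrative assumptions:

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.5, CR=0.9,
                           generations=150, seed=0):
    """Minimal real-valued DE/rand/1/bin (after Storn and Price)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # three distinct individuals, all different from i
            a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                 size=3, replace=False)
            mutant = pop[a] + F * (pop[b] - pop[c])   # mutation
            cross = rng.random(dim) < CR              # binomial crossover
            cross[rng.integers(dim)] = True           # keep >= 1 mutant gene
            trial = np.clip(np.where(cross, mutant, pop[i]), lo, hi)
            f_trial = f(trial)
            if f_trial <= fit[i]:                     # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = int(np.argmin(fit))
    return pop[best], float(fit[best])

# Toy continuous benchmark: minimise the 3-D sphere function
sphere = lambda x: float(np.sum(x ** 2))
best_x, best_f = differential_evolution(sphere, [(-5.0, 5.0)] * 3)
print(best_f < 1e-3)
```

A discrete variant replaces the vector arithmetic of the mutation step with index arithmetic over an ordering of the candidate vertices, which is where the paper's space-filling-curve ordering comes in.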

  10. AUTOMATED VOXEL MODEL FROM POINT CLOUDS FOR STRUCTURAL ANALYSIS OF CULTURAL HERITAGE

    Directory of Open Access Journals (Sweden)

    G. Bitelli

    2016-06-01

    Full Text Available In the context of cultural heritage, an accurate and comprehensive digital survey of a historical building is today essential in order to measure its geometry in detail for documentation or restoration purposes, for supporting special studies regarding materials and constructive characteristics, and finally for structural analysis. Some proven geomatic techniques, such as photogrammetry and terrestrial laser scanning, are increasingly used to survey buildings of different complexity and dimensions; one typical product is in the form of point clouds. We developed a semi-automatic procedure to convert point clouds, acquired by laser scanning or digital photogrammetry, to a filled volume model of the whole structure. The filled volume model, in a voxel format, can be useful for further analysis and also for the generation of a Finite Element Model (FEM) of the surveyed building. In this paper a new approach is presented with the aim of decreasing operator intervention in the workflow and obtaining a better description of the structure. To achieve this result, a voxel model with variable resolution is produced. Different parameters are compared and different steps of the procedure are tested and validated in the case study of the North tower of the San Felice sul Panaro Fortress, a monumental historical building located in San Felice sul Panaro (Modena, Italy) that was hit by an earthquake in 2012.
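The core of any point-cloud-to-voxel conversion is a binning step; a minimal occupancy-voxelization sketch (the paper's filled model additionally flood-fills the interior, which is not shown here):

```python
import numpy as np

def voxelize(points, voxel_size):
    """Return the set of occupied voxel indices for a point cloud.
    (A filled volume model would additionally flood-fill the interior.)"""
    idx = np.floor(points / voxel_size).astype(int)
    return np.unique(idx, axis=0)  # one row per occupied voxel

rng = np.random.default_rng(2)
# 10,000 points sampled on a flat 1 m x 1 m wall, gridded at 5 cm
pts = np.c_[rng.uniform(0, 1, 10000),
            rng.uniform(0, 1, 10000),
            np.zeros(10000)]
vox = voxelize(pts, 0.05)
print(len(vox))  # ~400 (a 20 x 20 sheet of voxels)
```

The voxel indices can then be written into a dense 3D array for FEM meshing; a variable-resolution model applies the same binning at different voxel sizes in different regions.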

  11. Use of dispersive liquid-liquid microextraction for simultaneous preconcentration of samarium, europium, gadolinium and dysprosium

    International Nuclear Information System (INIS)

    Mallah, M.H.; Atomic Energy Organization of Iran, Tehran; Shemirani, F.; Ghannadi Maragheh, M.

    2008-01-01

    A new preconcentration method based on dispersive liquid-liquid microextraction (DLLME) was developed for the simultaneous preconcentration of samarium, europium, gadolinium and dysprosium. The DLLME technique was successfully used as a sample preparation method. In this preconcentration method, an appropriate mixture of extraction and disperser solvents was injected rapidly into an aqueous solution containing Sm, Eu, Gd and Dy after complex formation with the chelating reagent 1-(2-pyridylazo)-2-naphthol (PAN). After phase separation, the enriched analytes in 0.5 mL of the settled phase were determined by inductively coupled plasma optical emission spectrometry (ICP-OES). The main factors affecting the preconcentration of Sm, Eu, Gd and Dy were the extraction and disperser solvent types and volumes, the extraction time, the volume of chelating agent (PAN), the centrifuge speed and the drying temperature of the samples. Under the best operating conditions, simultaneous preconcentration factors of 80, 100, 103 and 78 were obtained for Sm, Eu, Gd and Dy, respectively. (author)

  12. Preconcentration NAA for simultaneous multielemental determination in water sample

    International Nuclear Information System (INIS)

    Chatt, A.

    1999-01-01

    Full text: Environmental science is concerned with water, air and land, and their interrelationships with human beings, fauna and flora. One of the important environmental compartments is water. Elements present in water are subject to a wide range of physico-chemical conditions, which poses challenges in measuring both their total concentrations and their individual species. Preconcentration of the elements present in water samples is therefore a necessary prerequisite in water analysis. For multielement concentration measurements, Neutron Activation Analysis (NAA) is one of the preferred analytical techniques owing to its sensitivity and selectivity. In this talk, preconcentration NAA for multielemental determination in water samples will be discussed.

  13. Gold volatile species atomization and preconcentration in quartz devices for atomic absorption spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Arslan, Yasin [Institute of Analytical Chemistry of the ASCR, v. v. i., Veveří 97, 602 00 Brno (Czech Republic); Mehmet Akif Ersoy University, Faculty of Arts & Sciences, Chemistry Department, 15030 Burdur (Turkey); Musil, Stanislav; Matoušek, Tomáš; Kratzer, Jan [Institute of Analytical Chemistry of the ASCR, v. v. i., Veveří 97, 602 00 Brno (Czech Republic); Dědina, Jiří, E-mail: dedina@biomed.cas.cz [Institute of Analytical Chemistry of the ASCR, v. v. i., Veveří 97, 602 00 Brno (Czech Republic)

    2015-01-01

    The on-line atomization of gold volatile species was studied and the results were compared with thermodynamic calculations in several quartz atomizers, namely: diffusion flame, flame-in-gas-shield, flame-in-plain-tube, externally heated T-tube and externally heated flame-in-T-tube. An atomization mechanism in the explored devices is proposed, in which volatile species are converted to thermodynamically stable AuH at elevated temperatures over 500 °C and then atomized by interaction with a cloud of hydrogen radicals. Because of its inherent simplicity and robustness, the diffusion flame was employed as the reference atomizer. It yielded an atomization efficiency of 70 to 100% and very good long-term reproducibility of peak-area sensitivity: 1.6 to 1.8 s μg-1. Six and eleven times higher sensitivities, respectively, were provided by atomizers with longer light paths in the observation volume, i.e. the externally heated T-tube and the externally heated flame-in-T-tube. The latter, offering a limit of detection below 0.01 μg mL-1, appeared the most promising for on-line atomization. Insight into the mechanism of atomization of gold volatile species, into the fate of free atoms and into subsequent analyte transfer allowed the possibilities of in-atomizer preconcentration of gold volatile species to be assessed: it is unfeasible with quartz atomizers, but a sapphire tube atomizer could be useful in this respect. - Highlights: • On-line atomization of gold volatile species for AAS in quartz devices was studied. • An atomization mechanism was proposed and the atomization efficiency was estimated. • Possibilities of in-atomizer preconcentration of gold volatile species were assessed.

  14. Spatially explicit spectral analysis of point clouds and geospatial data

    Science.gov (United States)

    Buscombe, Daniel D.

    2015-01-01

    The increasing use of spatially explicit analyses of high-resolution spatially distributed data (imagery and point clouds) for the purposes of characterising spatial heterogeneity in geophysical phenomena necessitates the development of custom analytical and computational tools. In recent years, such analyses have become the basis of, for example, automated texture characterisation and segmentation, roughness and grain size calculation, and feature detection and classification, from a variety of data types. In this work, much use has been made of statistical descriptors of localised spatial variations in amplitude variance (roughness), however the horizontal scale (wavelength) and spacing of roughness elements is rarely considered. This is despite the fact that the ratio of characteristic vertical to horizontal scales is not constant and can yield important information about physical scaling relationships. Spectral analysis is a hitherto under-utilised but powerful means to acquire statistical information about relevant amplitude and wavelength scales, simultaneously and with computational efficiency. Further, quantifying spatially distributed data in the frequency domain lends itself to the development of stochastic models for probing the underlying mechanisms which govern the spatial distribution of geological and geophysical phenomena. The software package PySESA (Python program for Spatially Explicit Spectral Analysis) has been developed for generic analyses of spatially distributed data in both the spatial and frequency domains. Developed predominantly in Python, it accesses libraries written in Cython and C++ for efficiency. It is open source and modular, therefore readily incorporated into, and combined with, other data analysis tools and frameworks with particular utility for supporting research in the fields of geomorphology, geophysics, hydrography, photogrammetry and remote sensing. The analytical and computational structure of the toolbox is

  15. Spatially explicit spectral analysis of point clouds and geospatial data

    Science.gov (United States)

    Buscombe, Daniel

    2016-01-01

    The increasing use of spatially explicit analyses of high-resolution spatially distributed data (imagery and point clouds) for the purposes of characterising spatial heterogeneity in geophysical phenomena necessitates the development of custom analytical and computational tools. In recent years, such analyses have become the basis of, for example, automated texture characterisation and segmentation, roughness and grain size calculation, and feature detection and classification, from a variety of data types. In this work, much use has been made of statistical descriptors of localised spatial variations in amplitude variance (roughness), however the horizontal scale (wavelength) and spacing of roughness elements is rarely considered. This is despite the fact that the ratio of characteristic vertical to horizontal scales is not constant and can yield important information about physical scaling relationships. Spectral analysis is a hitherto under-utilised but powerful means to acquire statistical information about relevant amplitude and wavelength scales, simultaneously and with computational efficiency. Further, quantifying spatially distributed data in the frequency domain lends itself to the development of stochastic models for probing the underlying mechanisms which govern the spatial distribution of geological and geophysical phenomena. The software package PySESA (Python program for Spatially Explicit Spectral Analysis) has been developed for generic analyses of spatially distributed data in both the spatial and frequency domains. Developed predominantly in Python, it accesses libraries written in Cython and C++ for efficiency. It is open source and modular, therefore readily incorporated into, and combined with, other data analysis tools and frameworks with particular utility for supporting research in the fields of geomorphology, geophysics, hydrography, photogrammetry and remote sensing. 
The analytical and computational structure of the toolbox is described
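The central idea — recovering amplitude and wavelength statistics from spatial data in the frequency domain — can be illustrated in one dimension. This is a generic periodogram sketch on a synthetic profile, not PySESA's actual API:

```python
import numpy as np

dx = 0.01                                # sample spacing (m)
x = np.arange(0, 10, dx)                 # 10 m profile
z = 0.02 * np.sin(2 * np.pi * x / 0.25)  # 2 cm ripples, 0.25 m wavelength

freqs = np.fft.rfftfreq(len(x), d=dx)    # spatial frequency (cycles/m)
power = np.abs(np.fft.rfft(z)) ** 2      # periodogram

peak = freqs[np.argmax(power[1:]) + 1]   # skip the DC (mean) bin
print(round(1.0 / peak, 2))              # dominant wavelength: 0.25
```

The same logic extends to gridded 2D data with `np.fft.rfft2`, and windowed local estimates of the spectrum yield the spatially explicit roughness and wavelength maps described above.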

  16. EVALUATION MODEL FOR PAVEMENT SURFACE DISTRESS ON 3D POINT CLOUDS FROM MOBILE MAPPING SYSTEM

    Directory of Open Access Journals (Sweden)

    K. Aoki

    2012-07-01

    Full Text Available This paper proposes a methodology to evaluate pavement surface distress for maintenance planning of road pavement using 3D point clouds from a Mobile Mapping System (MMS). Maintenance planning of road pavement requires scheduled rehabilitation activities for damaged pavement sections to keep a high level of service. The importance of this performance-based infrastructure asset management founded on actual inspection data is globally recognized. As an inspection methodology for the road pavement surface, a semi-automatic measurement system utilizing inspection vehicles to measure surface deterioration indexes, such as cracking, rutting and IRI, has already been introduced and is capable of continuously archiving pavement performance data. However, scheduled inspection using an automatic measurement vehicle incurs substantial cost, depending on the instruments' specifications and the inspection interval. Implementation of road maintenance work, especially for local governments, is therefore difficult in terms of cost-effectiveness. Against this background, methodologies for a simplified evaluation of the pavement surface and assessment of damaged pavement sections are proposed, using 3D point cloud data acquired for urban 3D modelling. The simplified evaluation results for the road surface provide useful information for road administrators to identify pavement sections requiring detailed examination or immediate repair work. In particular, the regularity of the 3D point cloud sequence was evaluated using Chow-test and F-test models, extracting the sections where a structural change in coordinate values was most pronounced. Finally, the validity of the methodology was investigated through a case study using actual inspection data from local roads.
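The Chow test used above asks whether fitting two sub-sections separately explains the data significantly better than one pooled fit. A minimal sketch on a synthetic profile (the break location, noise level and decision threshold are illustrative assumptions, not the paper's values):

```python
import numpy as np

def chow_test(x, y, split, k=2):
    """Chow F-statistic for a structural break at index `split`
    in a straight-line fit (k = number of fitted parameters)."""
    def rss(xs, ys):
        coef = np.polyfit(xs, ys, 1)
        return float(np.sum((ys - np.polyval(coef, xs)) ** 2))
    rss_pooled = rss(x, y)
    rss_split = rss(x[:split], y[:split]) + rss(x[split:], y[split:])
    n = len(x)
    return ((rss_pooled - rss_split) / k) / (rss_split / (n - 2 * k))

rng = np.random.default_rng(3)
x = np.arange(100, dtype=float)
# Flat profile, then an abrupt slope change (e.g. onset of deformation)
y = np.where(x < 50, 0.0, 0.1 * (x - 50)) + rng.normal(0, 0.05, 100)
print(chow_test(x, y, 50) > 10)  # large F indicates a structural change
```

In practice the statistic is compared against an F-distribution critical value with (k, n - 2k) degrees of freedom rather than a fixed threshold.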

  17. Uranium preconcentration from seawater using adsorptive membranes

    International Nuclear Information System (INIS)

    Das, Sadananda; Pandey, A.K.; Manchanda, V.K.; Athawale, A.A.

    2009-01-01

    Uranium recovery from a bio-aggressive but lean feed like seawater is a challenging problem, as it requires in situ preconcentration of uranium in the presence of a huge excess of competing ions, with fast sorption kinetics. In our laboratory, the widely used amidoxime membrane (AO-membrane) was evaluated for uranium sorption under seawater conditions. This study indicated that the AO-membrane was inherently slow because of the complexation chemistry involved in the transfer of U(VI) from [UO2(CO3)3]4- to AO sites in the membrane. In the search for better options, several chemical compositions of membrane were screened for their efficacy in uranium preconcentration from seawater, and it was concluded that the EGMP-membrane offers several advantages over the AO-membrane. In this paper, the comparison of the EGMP-membrane with the AO-membrane for uranium sorption under seawater conditions is reviewed. (author)

  18. Methods and considerations to determine sphere center from terrestrial laser scanner point cloud data

    International Nuclear Information System (INIS)

    Rachakonda, Prem; Muralikrishnan, Bala; Lee, Vincent; Shilling, Meghan; Sawyer, Daniel; Cournoyer, Luc; Cheok, Geraldine

    2017-01-01

    The Dimensional Metrology Group at the National Institute of Standards and Technology is performing research to support the development of documentary standards within the ASTM E57 committee. This committee is addressing the point-to-point performance evaluation of a subclass of 3D imaging systems called terrestrial laser scanners (TLSs), which are laser-based and use a spherical coordinate system. This paper discusses the usage of sphere targets for this effort, and methods to minimize the errors due to the determination of their centers. The key contributions of this paper include methods to segment sphere data from a TLS point cloud, and the study of some of the factors that influence the determination of sphere centers. (paper)
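Sphere-center determination from segmented points is typically a least-squares fit; one common linear formulation (a generic sketch on simulated data, not NIST's procedure) is:

```python
import numpy as np

def fit_sphere(points):
    """Linear least-squares sphere fit via
    x^2 + y^2 + z^2 = 2ax + 2by + 2cz + d,  d = r^2 - a^2 - b^2 - c^2."""
    A = np.c_[2.0 * points, np.ones(len(points))]
    rhs = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    center = sol[:3]
    radius = float(np.sqrt(sol[3] + center @ center))
    return center, radius

rng = np.random.default_rng(4)
# Simulated scan of a 0.1 m-radius sphere target: only one hemisphere visible
theta = rng.uniform(0, np.pi / 2, 500)
phi = rng.uniform(0, 2 * np.pi, 500)
true_center = np.array([5.0, 2.0, 1.0])
pts = true_center + 0.1 * np.c_[np.sin(theta) * np.cos(phi),
                                np.sin(theta) * np.sin(phi),
                                np.cos(theta)]
pts += rng.normal(0, 0.001, pts.shape)  # 1 mm ranging noise
center, radius = fit_sphere(pts)
print(bool(np.allclose(center, true_center, atol=0.01)), round(radius, 2))
```

The partial (one-sided) coverage simulated here is exactly the situation a TLS faces, and it is one of the factors the paper studies: as the visible cap shrinks, the fitted center becomes increasingly sensitive to noise along the view direction.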

  19. Formation of massive, dense cores by cloud-cloud collisions

    Science.gov (United States)

    Takahira, Ken; Shima, Kazuhiro; Habe, Asao; Tasker, Elizabeth J.

    2018-05-01

    We performed sub-parsec (˜ 0.014 pc) scale simulations of cloud-cloud collisions of two idealized turbulent molecular clouds (MCs) with different masses in the range of (0.76-2.67) × 10^4 M_⊙ and with collision speeds of 5-30 km s-1. Those parameters are larger than in Takahira, Tasker, and Habe (2014, ApJ, 792, 63), in which the colliding system showed a partial gaseous arc morphology, supporting the NANTEN observations of objects identified as colliding MCs. Gas clumps with density greater than 10^-20 g cm^-3 were identified as pre-stellar cores and tracked through the simulation to investigate the effects of the mass of the colliding clouds and the collision speed on the resulting core population. Our results demonstrate that the properties of the smaller cloud are more important for the outcome of cloud-cloud collisions. The mass function of the formed cores can be approximated by a power-law relation with an index γ = -1.6 in slower cloud-cloud collisions (v ˜ 5 km s-1), in good agreement with observations of MCs. A faster relative speed increases the number of cores formed in the early stage of collisions and shortens the gas accretion phase of cores in the shocked region, leading to the suppression of core growth. A bending point appears in the high-mass part of the core mass function, and the bending-point mass decreases with increasing collision speed for the same combination of colliding clouds. The part of the core mass function above the bending-point mass can be approximated by a power law with γ = -2 to -3, similar to the power index of the massive part of the observed stellar initial mass function. We discuss the implications of our results for massive-star formation in our Galaxy.
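The quoted indices are slopes of dN/dM in log-log space; how such an index is recovered from a catalogue of core masses can be illustrated with synthetic power-law samples (the values here are illustrative, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(5)
gamma = -1.6                              # target dN/dM power-law index
g1 = gamma + 1.0
# Inverse-transform sampling of masses with dN/dM ∝ M^gamma on [1, 100]
u = rng.uniform(size=100_000)
masses = ((100.0 ** g1 - 1.0) * u + 1.0) ** (1.0 / g1)

# Histogram in constant-ratio (log-spaced) bins, convert counts to dN/dM
edges = np.logspace(0, 2, 25)
counts, _ = np.histogram(masses, bins=edges)
widths = np.diff(edges)
centers = np.sqrt(edges[:-1] * edges[1:])  # geometric bin centres
ok = counts > 0
slope, _ = np.polyfit(np.log10(centers[ok]),
                      np.log10(counts[ok] / widths[ok]), 1)
print(round(slope, 1))                     # recovers ~ -1.6
```

Dividing counts by linear bin width is what turns a histogram over log-spaced bins into an unbiased estimate of dN/dM; fitting log counts directly would shift the slope by one.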

  20. Stereovision-based integrated system for point cloud reconstruction and simulated brain shift validation.

    Science.gov (United States)

    Yang, Xiaochen; Clements, Logan W; Luo, Ma; Narasimhan, Saramati; Thompson, Reid C; Dawant, Benoit M; Miga, Michael I

    2017-07-01

    Intraoperative soft tissue deformation, referred to as brain shift, compromises the application of current image-guided surgery navigation systems in neurosurgery. A computational model driven by sparse data has been proposed as a cost-effective method to compensate for cortical surface and volumetric displacements. We present a mock environment developed to acquire stereo images from a tracked operating microscope and to reconstruct three-dimensional point clouds from these images. A reconstruction error of 1 mm is estimated by using a phantom with a known geometry and independently measured deformation extent. The microscope is tracked via an attached rigid body that facilitates recording of the microscope position via a commercial optical tracking system as it moves during the procedure. Point clouds reconstructed under different microscope positions are registered into the same space to compute the feature displacements. Using our mock craniotomy device, realistic cortical deformations are generated. When comparing our tracked-microscope stereo-pair measurements of mock vessel displacements to those determined independently with an optically tracked stylus, the displacement error was [Formula: see text] on average. These results demonstrate the practicality of using a tracked stereoscopic microscope as an alternative to laser range scanners for collecting sufficient intraoperative information for brain shift correction.

  1. Automated Detection of Geomorphic Features in LiDAR Point Clouds of Various Spatial Density

    Science.gov (United States)

    Dorninger, Peter; Székely, Balázs; Zámolyi, András.; Nothegger, Clemens

    2010-05-01

    LiDAR, also referred to as laser scanning, has proved to be an important tool for topographic data acquisition. Terrestrial laser scanning allows for accurate (several millimeter) and high resolution (several centimeter) data acquisition at distances of up to some hundred meters. By contrast, airborne laser scanning allows for acquiring homogeneous data for large areas, albeit with lower accuracy (decimeter) and resolution (some ten points per square meter) compared to terrestrial laser scanning. Hence, terrestrial laser scanning is preferably used for precise data acquisition of limited areas such as landslides or steep structures, while airborne laser scanning is well suited for the acquisition of topographic data over huge areas or even entire countries. Laser scanners acquire more or less homogeneously distributed point clouds. These points represent natural objects like terrain and vegetation and artificial objects like buildings, streets or power lines. Typical products derived from such data are geometric models such as digital surface models representing all natural and artificial objects and digital terrain models representing the geomorphic topography only. As the LiDAR technology evolves, the amount of data produced increases almost exponentially even in smaller projects. This poses a considerable challenge for the end user of the data: the experimenter has to have enough knowledge, experience and computer capacity in order to manage the acquired dataset and to derive geomorphologically relevant information from the raw or intermediate data products. Additionally, all this information might need to be integrated with other data like orthophotos. In all these cases, in general, interactive interpretation is necessary to determine geomorphic structures from such models to achieve effective data reduction. There is little support for the automatic determination of characteristic features and their statistical evaluation. From the lessons learnt from automated

  2. Digital Investigations of AN Archaeological Smart Point Cloud: a Real Time Web-Based Platform to Manage the Visualisation of Semantical Queries

    Science.gov (United States)

    Poux, F.; Neuville, R.; Hallot, P.; Van Wersch, L.; Luczfalvy Jancsó, A.; Billen, R.

    2017-05-01

    While virtual copies of the real world tend to be created faster than ever through point clouds and derivatives, their proficient use by all professionals demands adapted tools that facilitate knowledge dissemination. Digital investigations are changing the way cultural heritage researchers, archaeologists, and curators work and collaborate to progressively aggregate expertise through one common platform. In this paper, we present a web application in a WebGL framework accessible in any HTML5-compatible browser. It allows real-time point cloud exploration of the mosaics in the Oratory of Germigny-des-Prés, and emphasises ease of use as well as performance. Our reasoning engine is constructed over a semantically rich point cloud data structure, where metadata has been injected a priori. We developed a tool that directly allows semantic extraction and visualisation of pertinent information for the end users. It leads to efficient communication between actors by proposing optimal 3D viewpoints as a basis on which interactions can grow.

  3. Carbon Sequestration Estimation of Street Trees Based on Point Cloud from Vehicle-Borne Laser Scanning System

    Science.gov (United States)

    Zhao, Y.; Hu, Q.

    2017-09-01

    Continuous development of urban road traffic systems places higher demands on the road ecological environment, and the ecological benefits of street trees are receiving more attention. Carbon sequestration of street trees refers to their carbon stocks, which can serve as a measure of those ecological benefits. Estimating carbon sequestration in the traditional way is costly and inefficient. To solve these problems, a carbon sequestration estimation approach for street trees based on 3D point clouds from a vehicle-borne laser scanning system is proposed in this paper. The method measures the geometric parameters of a street tree, including tree height, crown width and diameter at breast height (DBH), by processing and analyzing the point cloud data of an individual tree. Four Chinese scholartree trees and four camphor trees were selected for the experiment. The root mean square error (RMSE) of tree height is 0.11 m for Chinese scholartree and 0.02 m for camphor. Crown widths in the X and Y directions, as well as the average crown width, are calculated; the RMSE of average crown width is 0.22 m for Chinese scholartree and 0.10 m for camphor. The last calculated parameter is DBH, with an RMSE of 0.5 cm for both species. Combining the measured geometric parameters with an appropriate carbon sequestration calculation model, an individual tree's carbon sequestration can be estimated. The proposed method can help broaden the application range of vehicle-borne laser point cloud data, improve the efficiency of carbon sequestration estimation, and support urban ecological construction and landscape management.
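As a hedged illustration of the kind of geometric measurement described (the record does not give the paper's own algorithm), the basic parameters of a single tree can be read off its point cloud roughly as follows; the function names, the 1.3 m breast height and the slab width are assumptions:

```python
import numpy as np

def tree_height(points):
    """Height as the vertical extent of an individual tree's point cloud (N x 3, metres)."""
    z = points[:, 2]
    return z.max() - z.min()

def crown_width(points):
    """Crown widths along X and Y as horizontal extents."""
    x, y = points[:, 0], points[:, 1]
    return x.max() - x.min(), y.max() - y.min()

def dbh(points, breast_height=1.3, slab=0.1):
    """Diameter at breast height: mean radial distance (times two) of the
    points in a thin horizontal slab 1.3 m above the lowest point."""
    z = points[:, 2]
    z0 = z.min() + breast_height
    slab_pts = points[np.abs(z - z0) < slab / 2, :2]
    center = slab_pts.mean(axis=0)
    return 2 * np.linalg.norm(slab_pts - center, axis=1).mean()
```

A real pipeline would first segment individual trees and fit a circle or cylinder to the trunk slab rather than averaging radii, but the quantities estimated are the same.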

  4. Speciation and Determination of Low Concentration of Iron in Beer Samples by Cloud Point Extraction

    Science.gov (United States)

    Khalafi, Lida; Doolittle, Pamela; Wright, John

    2018-01-01

    A laboratory experiment is described in which students determine the concentration and speciation of iron in beer samples using cloud point extraction and absorbance spectroscopy. The basis of determination is the complexation between iron and 2-(5-bromo-2- pyridylazo)-5-diethylaminophenol (5-Br-PADAP) as a colorimetric reagent in an aqueous…

  5. Simultaneous spectrophotometric determination of uranium and zirconium using cloud point extraction and multivariate methods

    International Nuclear Information System (INIS)

    Ghasemi, Jahan B.; Hashemi, Beshare; Shamsipur, Mojtaba

    2012-01-01

    A cloud point extraction (CPE) process using the nonionic surfactant Triton X-114 for the simultaneous extraction and spectrophotometric determination of uranium and zirconium from aqueous solution, combined with partial least squares (PLS) regression, is investigated. The method is based on the complexation reaction of these cations with Alizarin Red S (ARS) and subsequent micelle-mediated extraction of the products. The chemical parameters affecting the separation phase and detection process were studied and optimized. Under the optimum experimental conditions (i.e. pH 5.2, Triton X-114 0.20%, equilibration time 10 min and cloud point temperature 45 °C), calibration graphs were linear in the range of 0.01-3 mg L⁻¹ with detection limits of 2.0 and 0.80 μg L⁻¹ for U and Zr, respectively. The experimental calibration set was composed of 16 sample solutions using an orthogonal design for two-component mixtures. The root mean square errors of prediction (RMSEP) for U and Zr were 0.0907 and 0.1117, respectively. The interference effect of some anions and cations was also tested. The method was applied to the simultaneous determination of U and Zr in water samples.
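The multivariate calibration step can be sketched numerically. The snippet below is an illustration only: it uses ordinary least squares on synthetic Gaussian "spectra" as a stand-in for the paper's PLS model, and every band position, concentration level and noise figure is invented for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.linspace(400, 700, 50)

# Hypothetical pure-component spectra for the two analytes (Gaussian bands).
s_u = np.exp(-((wavelengths - 520) / 30) ** 2)
s_zr = np.exp(-((wavelengths - 580) / 25) ** 2)
S = np.stack([s_u, s_zr])                      # (2, 50) pure spectra

# 16-sample orthogonal-style calibration set: 4 levels per component,
# chosen within the reported linear range (mg L^-1).
levels = np.array([0.01, 0.1, 1.0, 3.0])
C = np.array([[u, z] for u in levels for z in levels])   # (16, 2)
A = C @ S + rng.normal(0, 1e-3, (16, 50))      # mixture spectra, Beer's law + noise

# Estimate pure spectra from the calibration set, then predict an unknown.
S_hat = np.linalg.lstsq(C, A, rcond=None)[0]
unknown = np.array([0.5, 1.5]) @ S             # noiseless "measured" mixture
c_pred = np.linalg.lstsq(S_hat.T, unknown, rcond=None)[0]
```

PLS would additionally project onto a small number of latent variables, which matters when the spectra are collinear or noisy; the two-step least squares above shows only the resolution of overlapping bands.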

  6. Sequential cloud point extraction for the speciation of mercury in seafood by inductively coupled plasma optical emission spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Li Yingjie [Department of Chemistry, Wuhan University, Wuhan 430072 (China); Hu Bin [Department of Chemistry, Wuhan University, Wuhan 430072 (China)], E-mail: binhu@whu.edu.cn

    2007-10-15

    A novel nonchromatographic speciation technique for mercury, based on sequential cloud point extraction (CPE) combined with inductively coupled plasma optical emission spectrometry (ICP-OES), was developed. In this method Hg{sup 2+} is complexed with I{sup -} to form HgI{sub 4}{sup 2-}, which reacts with the methyl green (MG) cation to form a hydrophobic ion-associated complex; this complex is then extracted into the surfactant-rich phase of the non-ionic surfactant octylphenoxypolyethoxyethanol (Triton X-114) and subsequently separated from methylmercury (MeHg{sup +}) in the initial solution by centrifugation. The surfactant-rich phase containing Hg(II) was diluted with 0.5 mol L{sup -1} HNO{sub 3} for ICP-OES determination. The supernatant was subjected to a similar CPE procedure for the preconcentration of MeHg{sup +} by the addition of a chelating agent, ammonium pyrrolidine dithiocarbamate (APDC), which forms a water-insoluble complex with MeHg{sup +}. The MeHg{sup +} in the micelles was then analyzed directly as described above. Under the optimized conditions, the extraction efficiency was 93.5% for Hg(II) and 51.5% for MeHg{sup +}, with enrichment factors of 18.7 for Hg(II) and 10.3 for MeHg{sup +}, respectively. The limits of detection (LODs) were 56.3 ng L{sup -1} for Hg(II) and 94.6 ng L{sup -1} for MeHg{sup +} (as Hg), with relative standard deviations (RSDs) of 3.6% for Hg(II) and 4.5% for MeHg{sup +} (C = 10 {mu}g L{sup -1}, n = 7), respectively. The developed technique was applied to the speciation of mercury in real seafood samples, and the recoveries for spiked samples were found to be in the range of 93.2-108.7%. For validation, a certified reference material, DORM-2 (dogfish muscle), was analyzed and the determined values were in good agreement with the certified values.

  7. Sequential cloud point extraction for the speciation of mercury in seafood by inductively coupled plasma optical emission spectrometry

    International Nuclear Information System (INIS)

    Li Yingjie; Hu Bin

    2007-01-01

    A novel nonchromatographic speciation technique for mercury, based on sequential cloud point extraction (CPE) combined with inductively coupled plasma optical emission spectrometry (ICP-OES), was developed. In this method Hg²⁺ is complexed with I⁻ to form HgI₄²⁻, which reacts with the methyl green (MG) cation to form a hydrophobic ion-associated complex; this complex is then extracted into the surfactant-rich phase of the non-ionic surfactant octylphenoxypolyethoxyethanol (Triton X-114) and subsequently separated from methylmercury (MeHg⁺) in the initial solution by centrifugation. The surfactant-rich phase containing Hg(II) was diluted with 0.5 mol L⁻¹ HNO₃ for ICP-OES determination. The supernatant was subjected to a similar CPE procedure for the preconcentration of MeHg⁺ by the addition of a chelating agent, ammonium pyrrolidine dithiocarbamate (APDC), which forms a water-insoluble complex with MeHg⁺. The MeHg⁺ in the micelles was then analyzed directly as described above. Under the optimized conditions, the extraction efficiency was 93.5% for Hg(II) and 51.5% for MeHg⁺, with enrichment factors of 18.7 for Hg(II) and 10.3 for MeHg⁺, respectively. The limits of detection (LODs) were 56.3 ng L⁻¹ for Hg(II) and 94.6 ng L⁻¹ for MeHg⁺ (as Hg), with relative standard deviations (RSDs) of 3.6% for Hg(II) and 4.5% for MeHg⁺ (C = 10 μg L⁻¹, n = 7), respectively. The developed technique was applied to the speciation of mercury in real seafood samples, and the recoveries for spiked samples were found to be in the range of 93.2-108.7%. For validation, a certified reference material, DORM-2 (dogfish muscle), was analyzed and the determined values were in good agreement with the certified values.

  8. Colour computer-generated holography for point clouds utilizing the Phong illumination model.

    Science.gov (United States)

    Symeonidou, Athanasia; Blinder, David; Schelkens, Peter

    2018-04-16

    A technique integrating the bidirectional reflectance distribution function (BRDF) is proposed to generate realistic high-quality colour computer-generated holograms (CGHs). We build on prior work, namely a fast computer-generated holography method for point clouds that handles occlusions. We extend the method by integrating the Phong illumination model so that the properties of the objects' surfaces are taken into account to achieve natural light phenomena such as reflections and shadows. Our experiments show that rendering holograms with the proposed algorithm provides realistic-looking objects without any noteworthy increase in the computational cost.
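For reference, the Phong model the authors integrate combines ambient, diffuse and specular terms; a minimal per-point sketch follows (the coefficients and vectors here are arbitrary illustrative values, not the paper's):

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong(normal, light_dir, view_dir, ka=0.1, kd=0.6, ks=0.3, shininess=16):
    """Scalar Phong intensity: ambient + diffuse + specular."""
    n, l, v = normalize(normal), normalize(light_dir), normalize(view_dir)
    diffuse = max(np.dot(n, l), 0.0)
    r = 2 * np.dot(n, l) * n - l          # reflection of the light direction about n
    specular = max(np.dot(r, v), 0.0) ** shininess if diffuse > 0 else 0.0
    return ka + kd * diffuse + ks * specular
```

In the CGH setting this intensity would modulate the amplitude of each point source before the hologram's wave field is accumulated; per-colour-channel coefficients give the colour rendering.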

  9. Two cloud-point phenomena in tetrabutylammonium perfluorooctanoate aqueous solutions: anomalous temperature-induced phase and structure transitions.

    Science.gov (United States)

    Yan, Peng; Huang, Jin; Lu, Run-Chao; Jin, Chen; Xiao, Jin-Xin; Chen, Yong-Ming

    2005-03-24

    This paper reports the phase behavior and aggregate structure of tetrabutylammonium perfluorooctanoate (TBPFO), determined by differential scanning calorimetry, electrical conductivity, static/dynamic light scattering, and rheology. We found that above a certain concentration the TBPFO solution showed anomalous temperature-dependent phase behavior and structure transitions; such an ionic surfactant solution exhibits two cloud points. When the temperature was increased, the solution turned from a homogeneous phase to a liquid-liquid two-phase system, then to another homogeneous phase, and finally to another liquid-liquid two-phase system. In the first homogeneous-phase region, the aggregates of TBPFO were rodlike micelles and the solution was a Newtonian fluid. In the second homogeneous-phase region, the aggregates were large wormlike micelles, and the solution behaved as a pseudoplastic fluid that also exhibited viscoelastic behavior. We propose that the first cloud point may be caused by the "bridge" effect of the tetrabutylammonium counterion between the micelles, and the second by the formation of a micellar network.

  10. Essentials of cloud computing

    CERN Document Server

    Chandrasekaran, K

    2014-01-01

    Foreword; Preface; Computing Paradigms; Learning Objectives; Preamble; High-Performance Computing; Parallel Computing; Distributed Computing; Cluster Computing; Grid Computing; Cloud Computing; Biocomputing; Mobile Computing; Quantum Computing; Optical Computing; Nanocomputing; Network Computing; Summary; Review Points; Review Questions; Further Reading; Cloud Computing Fundamentals; Learning Objectives; Preamble; Motivation for Cloud Computing; The Need for Cloud Computing; Defining Cloud Computing; NIST Definition of Cloud Computing; Cloud Computing Is a Service; Cloud Computing Is a Platform; 5-4-3 Principles of Cloud Computing; Five Essential Charact

  11. Effect of chemical structure on the cloud point of some new non-ionic surfactants based on bisphenol in relation to their surface active properties

    Directory of Open Access Journals (Sweden)

    A.M. Al-Sabagh

    2011-06-01

    Full Text Available A series of non-ionic surfactants was prepared from bisphenols derived from acetone (A), acetophenone (AC) and cyclohexanone (CH). The prepared bisphenols were ethoxylated at different degrees of ethylene oxide (27, 35, 43). The ethoxylated bisphenols were then esterified with fatty acids: decanoic, lauric, myristic, palmitic, stearic, oleic, linoleic and linolenic. Some surface active properties of these surfactants were measured and calculated, such as surface tension [γ], critical micelle concentration [CMC], minimum area per molecule [Amin], surface excess [Cmax], and the free energies of micellization and adsorption [ΔGmic] and [ΔGads]. The cloud point of each surfactant was also measured. From the obtained data it was found that the cloud point is very sensitive to increases in the alkyl chain length, the ethylene oxide content and the degree of unsaturation. The bisphenol core affected the cloud point sharply, and the surfactants are ranked with regard to bisphenol structure as BA > BCH > BAC. On inspection of the surface active properties of these surfactants, a good relation was obtained with their cloud points. The data are discussed in the light of the chemical structures.

  12. Assessing the consistency of UAV-derived point clouds and images acquired at different altitudes

    Science.gov (United States)

    Ozcan, O.

    2016-12-01

    Unmanned Aerial Vehicles (UAVs) offer several advantages in terms of cost and image resolution compared to terrestrial photogrammetry and satellite remote sensing systems. UAVs, which bridge the gap between satellite-scale and field-scale applications, are now being used in various application areas to acquire hyperspatial and high temporal resolution imagery, owing to their working capacity and short acquisition times compared with conventional photogrammetry. UAVs have been used in fields such as the creation of 3-D earth models, production of high resolution orthophotos, network planning, field monitoring and agriculture. Thus, the geometric accuracy of orthophotos and the volumetric accuracy of point clouds are of capital importance for land surveying applications. Correspondingly, Structure from Motion (SfM) photogrammetry, which is frequently used in conjunction with UAVs, has recently appeared in the environmental sciences as an impressive tool allowing for the creation of 3-D models from unstructured imagery. In this study, we aimed to reveal the spatial accuracy of the images acquired from the integrated digital camera and the volumetric accuracy of Digital Surface Models (DSMs) derived from UAV flight plans at different altitudes using the SfM methodology. Low-altitude multispectral overlapping aerial photography was collected at altitudes of 30 to 100 meters and georeferenced with RTK-GPS ground control points. These altitudes allow hyperspatial imagery with resolutions of 1-5 cm, depending upon the sensor being used. Preliminary results revealed that the vertical comparison of UAV-derived point clouds with respect to GPS measurements yielded an average distance at the cm level. Larger values are found in areas where instantaneous changes in the surface are present.
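The vertical comparison reported above amounts to differencing UAV-derived elevations against RTK-GPS checkpoint elevations and summarising the offsets; a trivial sketch (the arrays below are placeholders, not the study's data):

```python
import numpy as np

def vertical_accuracy(z_uav, z_gps):
    """Mean signed offset and RMSE between model elevations and checkpoints (metres)."""
    diff = np.asarray(z_uav) - np.asarray(z_gps)
    return diff.mean(), np.sqrt((diff ** 2).mean())

# Illustrative checkpoints: model slightly above the GPS ground truth.
mean_off, rmse = vertical_accuracy([10.02, 9.98, 10.04], [10.0, 10.0, 10.0])
```

The mean offset reveals a systematic bias (e.g. a datum or lens-model error), while the RMSE captures the overall cm-level agreement the abstract refers to.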

  13. Investigation on the Weighted RANSAC Approaches for Building Roof Plane Segmentation from LiDAR Point Clouds

    Directory of Open Access Journals (Sweden)

    Bo Xu

    2015-12-01

    Full Text Available RANdom SAmple Consensus (RANSAC) is a widely adopted method for LiDAR point cloud segmentation because of its robustness to noise and outliers. However, RANSAC has a tendency to generate false segments consisting of points from several nearly coplanar surfaces. To address this problem, we formulate a weighted RANSAC approach for point cloud segmentation. In our proposed solution, the hard threshold voting function, which considers both the point-plane distance and the normal vector consistency, is transformed into a soft threshold voting function based on two weight functions. To improve weighted RANSAC's ability to distinguish planes, we designed the weight functions according to the difference in the error distribution between proper and improper plane hypotheses, based on which an outlier suppression ratio was also defined. Using this ratio, a thorough comparison was conducted between the different weight functions to determine the best performing one. The selected weight function was then compared to the existing weighted RANSAC methods, the original RANSAC, and a representative region growing (RG) method. Experiments with two airborne LiDAR datasets of varying densities show that the various weighted methods improve the segmentation quality to different degrees, but the specifically designed weight functions significantly improve both the segmentation accuracy and the topological correctness. Moreover, their robustness is much better than that of the RG method.
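The soft-threshold voting idea can be sketched as follows. The Gaussian weight forms and the σ values below are illustrative assumptions, not the paper's actual weight functions; what matters is that each point's vote decays smoothly with point-plane distance and with normal-vector disagreement instead of being cut off at a hard threshold:

```python
import numpy as np

def soft_votes(points, normals, plane_n, plane_d, sigma_d=0.05, sigma_a=0.2):
    """Total soft vote for the plane hypothesis n·x + d = 0 over all points."""
    plane_n = plane_n / np.linalg.norm(plane_n)
    dist = np.abs(points @ plane_n + plane_d)     # point-plane distances
    cosang = np.abs(normals @ plane_n)            # normal consistency, |cos|
    w_dist = np.exp(-(dist / sigma_d) ** 2)       # decays with distance
    w_ang = np.exp(-((1 - cosang) / sigma_a) ** 2)  # decays with normal mismatch
    return (w_dist * w_ang).sum()
```

Inside a RANSAC loop, the hypothesis with the largest total vote replaces the usual "largest inlier count" winner, which is what suppresses false segments straddling nearly coplanar roof faces.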

  14. On-line preconcentration and determination of mercury in biological and environmental samples by cold vapor-atomic absorption spectrometry

    International Nuclear Information System (INIS)

    Ferrua, N.; Cerutti, S.; Salonia, J.A.; Olsina, R.A.; Martinez, L.D.

    2007-01-01

    An on-line procedure for the determination of traces of total mercury in environmental and biological samples is described. The methodology combines cold vapor generation coupled to atomic absorption spectrometry (CV-AAS) with preconcentration of the analyte on a minicolumn packed with activated carbon. The retained analyte was quantitatively eluted from the minicolumn with nitric acid. After that, the volatile mercury species was generated by merging the acidified sample and sodium tetrahydroborate(III) in a continuous flow system. The gaseous analyte was subsequently introduced via a stream of Ar carrier into the atomizer device. Optimization of both the preconcentration and the mercury vapor generation variables was carried out using a two-level full factorial design (2³) with 3 replicates of the central point. Considering a sample consumption of 25 mL, an enrichment factor of 13-fold was obtained. The detection limit (3σ) was 10 ng L⁻¹ and the precision (relative standard deviation) was 3.1% (n = 10) at the 5 μg L⁻¹ level. The calibration curve using the preconcentration system for mercury was linear, with a correlation coefficient of 0.9995, from levels near the detection limit up to at least 1000 μg L⁻¹. Satisfactory results were obtained for the analysis of mercury in tap water and hair samples.
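A 2³ full factorial design with replicated centre points, as used for the optimization, can be generated mechanically; the factor names and ranges below are purely illustrative, not the paper's:

```python
import itertools

# Hypothetical factors with (low, high) levels; the paper's actual
# variables and ranges are not given in this record.
factors = {"eluent_conc": (0.5, 2.0), "flow_rate": (1.0, 5.0), "NaBH4": (0.2, 1.0)}

# 2^3 = 8 corner runs: every combination of low/high levels.
runs = [dict(zip(factors, combo)) for combo in itertools.product(*factors.values())]

# Centre point (midpoint of every factor), replicated 3 times to
# estimate pure error and test for curvature.
center = {k: (lo + hi) / 2 for k, (lo, hi) in factors.items()}
design = runs + [center] * 3          # 11 experiments in total
```

Fitting a first-order model with interactions to the responses of these runs is what lets both sets of variables be screened in a single small experiment.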

  15. Reducing and filtering point clouds with enhanced vector quantization.

    Science.gov (United States)

    Ferrari, Stefano; Ferrigno, Giancarlo; Piuri, Vincenzo; Borghese, N Alberto

    2007-01-01

    Modern scanners are able to deliver huge quantities of three-dimensional (3-D) data points sampled on an object's surface in a short time. These data have to be filtered and their cardinality reduced to come up with a mesh manageable at interactive rates. We introduce here a novel procedure to accomplish these two tasks, which is based on an optimized version of soft vector quantization (VQ). The resulting technique has been termed enhanced vector quantization (EVQ) since it introduces several improvements with respect to the classical soft VQ approaches. Those approaches are based on computationally expensive iterative optimization; local computation is introduced here, by means of an adequate partitioning of the data space called a hyperbox (HB), to reduce the computational time so as to be linear in the number of data points N, saving more than 80% of the time in real applications. Moreover, the algorithm can be fully parallelized, thus leading to an implementation that is sublinear in N. The voxel side and the other parameters are automatically determined from the data distribution on the basis of Zador's criterion. This makes the algorithm completely automatic. Because the only parameter to be specified is the compression rate, the procedure is suitable even for nontrained users. Results obtained in reconstructing faces of both humans and puppets as well as artifacts, from point clouds publicly available on the web, are reported and discussed in comparison with other methods available in the literature. EVQ has been conceived as a general procedure, suited for VQ applications with large data sets whose data space has relatively low dimensionality.
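The core of vector quantization for point-cloud reduction can be sketched with plain hard-assignment Lloyd updates: a small codebook of prototypes is iteratively moved toward the data, and the prototypes become the reduced cloud. The EVQ of the paper adds soft assignments, the hyperbox partition and Zador-based parameter selection, none of which are reproduced in this sketch:

```python
import numpy as np

def vq_reduce(points, n_codes=8, iters=20, seed=0):
    """Reduce an (N, 3) point cloud to n_codes prototype points via hard VQ."""
    rng = np.random.default_rng(seed)
    codes = points[rng.choice(len(points), n_codes, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest prototype.
        d = np.linalg.norm(points[:, None] - codes[None], axis=2)
        labels = d.argmin(axis=1)
        # Move each prototype to the centroid of its assigned points.
        for k in range(n_codes):
            if np.any(labels == k):
                codes[k] = points[labels == k].mean(axis=0)
    return codes
```

The pairwise-distance step is the O(N · n_codes) cost that EVQ's hyperbox partition localizes, which is how the published method reaches linear (and, parallelized, sublinear) time in N.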

  16. Robust Segmentation of Planar and Linear Features of Terrestrial Laser Scanner Point Clouds Acquired from Construction Sites

    Science.gov (United States)

    Maalek, Reza; Lichti, Derek D; Ruwanpura, Janaka Y

    2018-01-01

    Automated segmentation of planar and linear features of point clouds acquired from construction sites is essential for the automatic extraction of building construction elements such as columns, beams and slabs. However, many planar and linear segmentation methods use scene-dependent similarity thresholds that may not provide generalizable solutions for all environments. In addition, outliers exist in construction site point clouds due to data artefacts caused by moving objects, occlusions and dust. To address these concerns, a novel method for robust classification and segmentation of planar and linear features is proposed. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a new robust clustering method, the robust complete linkage method. A robust method is also proposed to extract the points of flat-slab floors and/or ceilings independent of the aforementioned stages to improve computational efficiency. The applicability of the proposed method is evaluated in eight datasets acquired from a complex laboratory environment and two construction sites at the University of Calgary. The precision, recall, and accuracy of the segmentation at both construction sites were 96.8%, 97.7% and 95%, respectively. These results demonstrate the suitability of the proposed method for robust segmentation of planar and linear features of contaminated datasets, such as those collected from construction sites. PMID:29518062
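The coplanar/collinear distinction at the start of the pipeline rests on the eigenvalue pattern of a neighborhood's covariance matrix: a plane leaves one near-zero eigenvalue, a line leaves two. A non-robust sketch of that test follows (the threshold is illustrative; the paper's robust PCA additionally down-weights outliers before the eigen-decomposition):

```python
import numpy as np

def classify_neighborhood(pts, tol=1e-6):
    """Label an (N, 3) neighborhood as linear, planar, or volumetric
    from the sorted covariance eigenvalues l1 >= l2 >= l3."""
    l = np.sort(np.linalg.eigvalsh(np.cov(pts.T)))[::-1]
    if l[1] < tol * l[0]:
        return "linear"        # only one significant direction of spread
    if l[2] < tol * l[1]:
        return "planar"        # spread confined to a plane
    return "volumetric"
```

On contaminated construction-site data a fixed ratio threshold like this fails, which is precisely why the paper replaces the classical covariance with a robust estimate.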

  17. Automatic registration of iPhone images to laser point clouds of urban structures using shape features

    NARCIS (Netherlands)

    Sirmacek, B.; Lindenbergh, R.C.; Menenti, M.

    2013-01-01

    Fusion of 3D airborne laser (LIDAR) data and terrestrial optical imagery can be applied in 3D urban modeling and model updating. The most challenging aspect of the fusion procedure is registering the terrestrial optical images on the LIDAR point clouds. In this article, we propose an approach for

  18. Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method

    Science.gov (United States)

    Pereira, N. F.; Sitek, A.

    2010-09-01

    Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated.
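The maximum likelihood expectation maximization update used by all of the evaluated reconstructions has a compact multiplicative form; a generic sketch for Poisson data y ≈ Ax is given below, where the tiny system matrix is a stand-in for the tetrahedral or voxel image model:

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """MLEM for y ~ Poisson(A x): x <- x * A^T(y / Ax) / A^T 1."""
    x = np.ones(A.shape[1])              # strictly positive start
    sens = A.sum(axis=0)                 # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                     # forward projection
        ratio = np.where(proj > 0, y / proj, 0.0)
        x *= (A.T @ ratio) / sens        # multiplicative update keeps x >= 0
    return x
```

For the tetrahedral reconstructions, the columns of A correspond to mesh basis functions rather than voxels, but the update itself is unchanged; only the system matrix differs between the compared strategies.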

  19. Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method

    International Nuclear Information System (INIS)

    Pereira, N F; Sitek, A

    2010-01-01

    Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated.

  20. Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, N F; Sitek, A, E-mail: nfp4@bwh.harvard.ed, E-mail: asitek@bwh.harvard.ed [Department of Radiology, Brigham and Women' s Hospital-Harvard Medical School Boston, MA (United States)

    2010-09-21

    Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated.

  1. A Registration Method Based on Contour Point Cloud for 3D Whole-Body PET and CT Images

    Directory of Open Access Journals (Sweden)

    Zhiying Song

    2017-01-01

    Full Text Available The PET and CT fusion image, combining anatomical and functional information, has important clinical meaning. An effective registration of PET and CT images is the basis of image fusion. This paper presents a multithread registration method based on contour point clouds for 3D whole-body PET and CT images. Firstly, a geometric feature-based segmentation (GFS) method and a dynamic threshold denoising (DTD) method are creatively proposed to preprocess CT and PET images, respectively. Next, a new automated trunk slice extraction method is presented for extracting feature point clouds. Finally, the multithread Iterative Closest Point (ICP) algorithm is adopted to drive an affine transform. We compare our method with a multiresolution registration method based on Mattes Mutual Information on 13 pairs (246~286 slices per pair) of 3D whole-body PET and CT data. Experimental results demonstrate the registration effectiveness of our method, with lower negative normalization correlation (NC = −0.933) on feature images and less Euclidean distance error (ED = 2.826) on landmark points, outperforming the source data (NC = −0.496, ED = 25.847) and the compared method (NC = −0.614, ED = 16.085). Moreover, our method is about ten times faster than the compared one.
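At the heart of an Iterative Closest Point driver is the closed-form best transform between matched point sets. The rigid-case (Kabsch/Procrustes) solution is sketched below; note the paper actually drives an affine transform, so this is an illustration of the principle rather than the published method:

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Rotation R and translation t minimizing sum ||R p_i + t - q_i||^2
    for matched (N, 3) point sets P and Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Sign correction guarantees a proper rotation (det = +1, no reflection).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

Full ICP alternates this solve with re-estimating correspondences (each contour point matched to its closest counterpart), which is the loop the paper multithreads.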

  2. Extraction of Features from High-resolution 3D LiDaR Point-cloud Data

    Science.gov (United States)

    Keller, P.; Kreylos, O.; Hamann, B.; Kellogg, L. H.; Cowgill, E. S.; Yikilmaz, M. B.; Hering-Bertram, M.; Hagen, H.

    2008-12-01

    Airborne and tripod-based LiDaR scans are capable of producing new insight into geologic features by providing high-quality 3D measurements of the landscape. High-resolution LiDaR is a promising method for studying slip on faults, erosion, and other landscape-altering processes. LiDaR scans can produce up to several billion individual point returns associated with the reflection of a laser from natural and engineered surfaces; these point clouds are typically used to derive a high-resolution digital elevation model (DEM). Currently, there exist only few methods that can support the analysis of the data at full resolution and in the natural 3D perspective in which it was collected by working directly with the points. We are developing new algorithms for extracting features from LiDaR scans, and present method for determining the local curvature of a LiDaR data set, working directly with the individual point returns of a scan. Computing the curvature enables us to rapidly and automatically identify key features such as ridge-lines, stream beds, and edges of terraces. We fit polynomial surface patches via a moving least squares (MLS) approach to local point neighborhoods, determining curvature values for each point. The size of the local point neighborhood is defined by a user. Since both terrestrial and airborne LiDaR scans suffer from high noise, we apply additional pre- and post-processing smoothing steps to eliminate unwanted features. LiDaR data also captures objects like buildings and trees complicating greatly the task of extracting reliable curvature values. Hence, we use a stochastic approach to determine whether a point can be reliably used to estimate curvature or not. Additionally, we have developed a graph-based approach to establish connectivities among points that correspond to regions of high curvature. The result is an explicit description of ridge-lines, for example. 
We have applied our method to the raw point cloud data collected as part of the Geo
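The MLS curvature step described above can be illustrated with a small sketch (not the authors' implementation): fit a quadratic patch z = ax² + bxy + cy² + dx + ey + f to a point's neighborhood by least squares; for a near-horizontal patch, the mean curvature at the origin is approximately a + c. A stdlib-only Python version:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def curvature_at(points, center):
    """Least-squares fit of z = a*x^2 + b*x*y + c*y^2 + d*x + e*y + f to the
    neighborhood (coordinates taken relative to `center`); returns the mean
    curvature approximation (a + c), valid for near-horizontal patches."""
    basis = lambda x, y: [x * x, x * y, y * y, x, y, 1.0]
    A = [[0.0] * 6 for _ in range(6)]
    rhs = [0.0] * 6
    for px, py, pz in points:
        x, y, z = px - center[0], py - center[1], pz - center[2]
        phi = basis(x, y)
        for i in range(6):
            rhs[i] += phi[i] * z
            for j in range(6):
                A[i][j] += phi[i] * phi[j]
    a, b, c, d, e, f = solve(A, rhs)
    return a + c
```

On a paraboloid z = x² + y² this returns a curvature of 2; on a flat plane it returns 0.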

  3. A Modeling Method of Fluttering Leaves Based on Point Cloud

    Science.gov (United States)

    Tang, J.; Wang, Y.; Zhao, Y.; Hao, W.; Ning, X.; Lv, K.; Shi, Z.; Zhao, M.

    2017-09-01

    Leaves falling gently or fluttering are a common phenomenon in nature scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and the falling-leaves model has wide applications in animation and virtual reality. We propose a novel point-cloud-based modeling method for fluttering leaves in this paper. According to leaf shape, leaf weight and wind speed, three basic falling trajectories are defined: rotation falling, roll falling and screw-roll falling. In addition, a parallel algorithm based on OpenMP is implemented to satisfy real-time requirements in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.

  4. Clustering, randomness, and regularity in cloud fields: 2. Cumulus cloud fields

    Science.gov (United States)

    Zhu, T.; Lee, J.; Weger, R. C.; Welch, R. M.

    1992-12-01

    During the last decade a major controversy has been brewing concerning the proper characterization of cumulus convection. The prevailing view has been that cumulus clouds form in clusters, in which cloud spacing is closer than that found for the overall cloud field and which maintain their identity over many cloud lifetimes. This "mutual protection hypothesis" of Randall and Huffman (1980) has been challenged by the "inhibition hypothesis" of Ramirez and Bras (1990), which strongly suggests that the spatial distribution of cumuli must tend toward a regular distribution. A dilemma has resulted because observations have been reported to support both hypotheses. The present work reports a detailed analysis of cumulus cloud field spatial distributions based upon Landsat, Advanced Very High Resolution Radiometer, and Skylab data. Both nearest-neighbor and point-to-cloud cumulative distribution function statistics are investigated. The results show unequivocally that when both large and small clouds are included in the cloud field distribution, the cloud field always has a strong clustering signal. The strength of clustering is largest at cloud diameters of about 200-300 m, diminishing with increasing cloud diameter. In many cases, clusters of small clouds are found which are not closely associated with large clouds. As the small clouds are eliminated from consideration, the cloud field typically tends towards regularity. Thus it would appear that the "inhibition hypothesis" of Ramirez and Bras (1990) has been verified for the large clouds. However, these results are based upon the analysis of point processes. A more exact analysis also is made which takes into account the cloud size distributions. Since distinct clouds are by definition nonoverlapping, cloud size effects place a restriction upon the possible locations of clouds in the cloud field. The net effect of this analysis is that the large clouds appear to be randomly distributed, with only weak tendencies towards
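The nearest-neighbor statistics at the heart of this clustered/random/regular debate can be illustrated with the classical Clark-Evans index, a simple instance of the point-process analysis the abstract describes (edge corrections are omitted in this sketch):

```python
import math
import random

def clark_evans_index(points, area):
    """Clark-Evans nearest-neighbour index for a 2-D point pattern.
    R < 1 indicates clustering, R ~ 1 randomness, R > 1 regularity
    (edge effects are ignored in this sketch)."""
    n = len(points)
    nn = []
    for i, (xi, yi) in enumerate(points):
        d = min(math.hypot(xi - xj, yi - yj)
                for j, (xj, yj) in enumerate(points) if j != i)
        nn.append(d)
    observed = sum(nn) / n
    expected = 0.5 / math.sqrt(n / area)  # CSR expectation for density n/area
    return observed / expected

random.seed(0)
rand_pts = [(random.random(), random.random()) for _ in range(400)]
grid_pts = [((i + 0.5) / 20, (j + 0.5) / 20) for i in range(20) for j in range(20)]
```

A perfectly regular 20x20 grid in the unit square gives R = 2 (its maximum for this density), while the random pattern gives R near 1.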

  5. Determination of metallic elements in water by the combined preconcentration techniques of ion exchange and atomic absorption spectrophotometry

    International Nuclear Information System (INIS)

    Paula, M.H. de.

    1981-01-01

    With the aim of using the atomic absorption method with flame excitation, the detection limits in water of six metals (Ag, Co, Cr, Cu, Ni, Zn) were determined in synthetic samples by atomic absorption spectroscopy. Techniques to optimize the data are pointed out and their statistical treatment is presented. By means of the routine and standard-addition methods, three 'real' samples were also analysed in order to determine their Cu and Zn contents. Aiming at pre-concentration, and using 60Co obtained by activating a cobalt sample in the CDTN/NUCLEBRAS TRIGA MARK-I reactor, the retention of this cobalt on ion-exchange resin and the variation of the elution factor with different concentrations of HCl in water were determined. The detection limits are presented, as are the quantitative ones, with and without pre-concentration on an ion-exchange resin followed by elution. (Author) [pt

  6. An Automated Technique for Generating Georectified Mosaics from Ultra-High Resolution Unmanned Aerial Vehicle (UAV) Imagery, Based on Structure from Motion (SfM) Point Clouds

    Directory of Open Access Journals (Sweden)

    Christopher Watson

    2012-05-01

    Full Text Available Unmanned Aerial Vehicles (UAVs are an exciting new remote sensing tool capable of acquiring high resolution spatial data. Remote sensing with UAVs has the potential to provide imagery at an unprecedented spatial and temporal resolution. The small footprint of UAV imagery, however, makes it necessary to develop automated techniques to geometrically rectify and mosaic the imagery such that larger areas can be monitored. In this paper, we present a technique for geometric correction and mosaicking of UAV photography using feature matching and Structure from Motion (SfM photogrammetric techniques. Images are processed to create three dimensional point clouds, initially in an arbitrary model space. The point clouds are transformed into a real-world coordinate system using either a direct georeferencing technique that uses estimated camera positions or via a Ground Control Point (GCP technique that uses automatically identified GCPs within the point cloud. The point cloud is then used to generate a Digital Terrain Model (DTM required for rectification of the images. Subsequent georeferenced images are then joined together to form a mosaic of the study area. The absolute spatial accuracy of the direct technique was found to be 65–120 cm whilst the GCP technique achieves an accuracy of approximately 10–15 cm.
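The step that maps the arbitrary model space into a real-world coordinate system can be illustrated with a 2-D similarity (Helmert-style) transform estimated from control-point pairs by least squares. The real pipeline solves the 3-D analogue, so treat this as a toy sketch only:

```python
import math

def similarity_transform(src, dst):
    """Least-squares 2-D similarity transform (scale s, rotation theta,
    translation t) mapping src points onto dst points. Uses the complex-number
    formulation: dst ~ a*src + b with a = s*exp(i*theta)."""
    n = len(src)
    cs = complex(sum(p[0] for p in src) / n, sum(p[1] for p in src) / n)
    cd = complex(sum(p[0] for p in dst) / n, sum(p[1] for p in dst) / n)
    num = sum((complex(dx, dy) - cd) * (complex(sx, sy) - cs).conjugate()
              for (sx, sy), (dx, dy) in zip(src, dst))
    den = sum(abs(complex(sx, sy) - cs) ** 2 for sx, sy in src)
    a = num / den
    b = cd - a * cs
    def apply(p):
        q = a * complex(p[0], p[1]) + b
        return (q.real, q.imag)
    return apply, abs(a), math.degrees(math.atan2(a.imag, a.real))
```

Given exact correspondences the recovered scale, rotation angle and translation reproduce the generating transform; with noisy GCPs the fit is least-squares optimal.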

  7. Speeding up Coarse Point Cloud Registration by Threshold-Independent Baysac Match Selection

    Science.gov (United States)

    Kang, Z.; Lindenbergh, R.; Pu, S.

    2016-06-01

    This paper presents an algorithm for the automatic registration of terrestrial point clouds by match selection using an efficient conditional sampling method -- threshold-independent BaySAC (BAYes SAmpling Consensus) -- and employs the error metric of average point-to-surface residual to reduce the random measurement error and thereby approach the real registration error. BaySAC and other basic sampling algorithms usually need an artificially determined threshold by which inlier points are identified, which leads to a threshold-dependent verification process. Therefore, we applied the LMedS method to construct the cost function used to determine the optimum model, reducing the influence of human factors and improving the robustness of the model estimate. Point-to-point and point-to-surface error metrics are most commonly used. However, point-to-point error in general consists of at least two components: random measurement error and systematic error resulting from a remaining error in the found rigid body transformation. Thus we employ the average point-to-surface residual to evaluate the registration accuracy. The proposed approaches, together with a traditional RANSAC approach, are tested on four data sets acquired by three different scanners in terms of their computational efficiency and the quality of the final registration. The registration results show the standard deviation of the average point-to-surface residuals is reduced from 1.4 cm (plain RANSAC) to 0.5 cm (threshold-independent BaySAC). The results also show that, compared to the performance of RANSAC, our BaySAC strategies lead to fewer iterations and lower computational cost when the hypothesis set is contaminated with more outliers.
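The LMedS cost mentioned above, the median of the squared residuals, removes the need for an inlier threshold because the median is insensitive to up to roughly half of the data being outliers. A minimal, hedged sketch of threshold-free hypothesis scoring (not the paper's full BaySAC registration pipeline):

```python
import statistics

def lmeds_cost(residuals):
    """Least-median-of-squares cost: the median of the squared residuals.
    Robust to up to ~50% outliers and needs no inlier threshold."""
    return statistics.median(r * r for r in residuals)

def pick_best(hypotheses, residual_fn, data):
    """Return the hypothesis minimising the LMedS cost over the data."""
    return min(hypotheses,
               key=lambda h: lmeds_cost([residual_fn(h, d) for d in data]))

# toy use: recover the slope of y = 2x despite gross outliers
data = [(x, 2.0 * x) for x in range(1, 21)] + [(x, 50.0) for x in (3, 7, 11, 15, 19)]
candidates = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
best = pick_best(candidates, lambda m, d: d[1] - m * d[0], data)
```

Since 20 of the 25 points lie exactly on y = 2x, the median squared residual for the true slope is zero, so the correct hypothesis wins regardless of how large the outlier residuals are.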

  8. The African Research Cloud - A Friendly Entry Point to Research Computing

    OpenAIRE

    Walt, Anelda Van der; Pretorius, Boeta

    2016-01-01

    The slides were presented at the first African Research Cloud Workshop in Pretoria, South Africa on 27 - 28 October 2016 [1]. The slides formed part of an introduction for a session about using cloud infrastructure, and specifically the African Research Cloud, for training purposes. The session was co-facilitated with Dr Bradley Frank and Dr Michelle Cluver. [1] "African Research Cloud Workshop." The Institute for Data Intensive Astronomy (IDIA). IDIA, n.d. Web. 3 Nov. 2016.

  9. Cloud point extraction of copper, lead, cadmium, and iron using 2,6-diamino-4-phenyl-1,3,5-triazine and nonionic surfactant, and their flame atomic absorption spectrometric determination in water and canned food samples.

    Science.gov (United States)

    Citak, Demirhan; Tuzen, Mustafa

    2012-01-01

    A cloud point extraction procedure was optimized for the separation and preconcentration of lead(II), cadmium(II), copper(II), and iron(III) ions in various water and canned food samples. The metal ions formed complexes with 2,6-diamino-4-phenyl-1,3,5-triazine that were extracted by surfactant-rich phases in the nonionic surfactant Triton X-114. The surfactant-rich phase was diluted with 1 M HNO3 in methanol prior to its analysis by flame atomic absorption spectrometry. The parameters affecting the extraction efficiency of the proposed method, such as sample pH, complexing agent concentration, surfactant concentration, temperature, and incubation time, were optimized. LOD values based on three times the SD of the blank (3Sb) were 0.38, 0.48, 1.33, and 1.85 microg/L for cadmium(II), copper(II), lead(II), and iron(III) ions, respectively. The precision (RSD) of the method was in the 1.86-3.06% range (n=7). Validation of the procedure was carried out by analysis of National Institute of Standards and Technology Standard Reference Material (NIST-SRM) 1568a Rice Flour and GBW 07605 Tea. The method was applied to water and canned food samples for determination of metal ions.
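The "3Sb" detection limit used here is simply three standard deviations of the blank signal divided by the calibration slope, and the quoted precision is a relative standard deviation. A small sketch with made-up numbers (the values below are illustrative, not the paper's data):

```python
import statistics

def lod_3sb(blank_signals, slope):
    """LOD = 3 x SD(blank) / calibration slope (signal per unit concentration)."""
    return 3 * statistics.stdev(blank_signals) / slope

def rsd_percent(values):
    """Relative standard deviation in percent, as used for precision figures."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

# hypothetical blank absorbances and a slope of 0.005 absorbance per ug/L
blanks = [0.010, 0.012, 0.011, 0.009, 0.013]
```

With these illustrative blanks, lod_3sb(blanks, 0.005) gives roughly 0.95 ug/L.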

  10. A 3D clustering approach for point clouds to detect and quantify changes at a rock glacier front

    Science.gov (United States)

    Micheletti, Natan; Tonini, Marj; Lane, Stuart N.

    2016-04-01

    Terrestrial Laser Scanners (TLS) are extensively used in geomorphology to remotely sense landforms and surfaces of any type and to derive digital elevation models (DEMs). Modern devices are able to collect many millions of points, so that working on the resulting dataset is often troublesome in terms of computational effort. Indeed, it is not unusual that raw point clouds are filtered prior to DEM creation, so that only a subset of points is retained and the interpolation process becomes less of a burden. Whilst this procedure is in many cases necessary, it implies a considerable loss of valuable information. First, and even without eliminating points, the common interpolation of points to a regular grid causes a loss of potentially useful detail. Second, it inevitably causes the transition from 3D information to only 2.5D data, where each (x,y) pair must have a unique z-value. Vector-based DEMs (e.g. triangulated irregular networks) partially mitigate these issues, but still require a set of parameters to be set and impose a considerable burden in terms of calculation and storage. For these reasons, being able to perform geomorphological research directly on point clouds would be profitable. Here, we propose an approach to identify erosion and deposition patterns on a very active rock glacier front in the Swiss Alps to monitor sediment dynamics. The general aim is to set up a semiautomatic method to isolate mass movements using 3D-feature identification directly from LiDAR data. An ultra-long range LiDAR RIEGL VZ-6000 scanner was employed to acquire point clouds during three consecutive summers. In order to isolate single clusters of erosion and deposition we applied Density-Based Spatial Clustering of Applications with Noise (DBSCAN), previously employed successfully by Tonini and Abellan (2014) in a similar case for rockfall detection. DBSCAN requires two input parameters, strongly influencing the number, shape and size of the detected clusters: the minimum number of
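DBSCAN's two inputs, the neighborhood radius eps and the minimum point count, can be seen in a minimal brute-force version of the algorithm (a real TLS cloud needs a spatial index; this is a sketch only):

```python
import math
from collections import deque

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN for 3-D points: returns one label per point
    (-1 = noise). Points with at least `min_pts` neighbours within `eps`
    are core points and grow clusters; density-reachable points join them."""
    n = len(points)
    labels = [None] * n

    def neighbours(i):
        return [j for j in range(n) if math.dist(points[i], points[j]) <= eps]

    cluster = 0
    for i in range(n):
        if labels[i] is not None:
            continue
        nb = neighbours(i)
        if len(nb) < min_pts:
            labels[i] = -1          # provisionally noise (may become border)
            continue
        labels[i] = cluster
        queue = deque(nb)
        while queue:
            j = queue.popleft()
            if labels[j] == -1:
                labels[j] = cluster  # noise reclaimed as border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb_j = neighbours(j)
            if len(nb_j) >= min_pts:
                queue.extend(nb_j)   # expand only from core points
        cluster += 1
    return labels
```

Two tight blobs plus an isolated point, with eps = 0.5 and min_pts = 3, come out as two clusters and one noise label.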

  11. Advanced Visualization and Interactive Display Rapid Innovation and Discovery Evaluation Research (VISRIDER) Program Task 6: Point Cloud Visualization Techniques for Desktop and Web Platforms

    Science.gov (United States)

    2017-04-01

    backend point cloud server ... Measured frame rate from point cloud benchmark ... variant, OpenGL ES, that is used by all modern phones and tablets to provide hardware acceleration in the mobile space. WebGL is a JavaScript ... These mainly reside in complex buffer operations that aren't exposed in the mobile version of OpenGL ES. The biggest omission from WebGL though

  12. 3D Documentation of Archaeological Excavations Using Image-Based Point Cloud

    Directory of Open Access Journals (Sweden)

    Umut Ovalı

    2017-03-01

    Full Text Available Rapid progress in digital technology enables us to create three-dimensional models using digital images. The low cost, time efficiency and accurate results of this method raise the question of whether it can be an alternative to conventional documentation techniques, which generally are 2D orthogonal drawings. Accurate and detailed 3D models of archaeological features have potential for many other purposes besides geometric documentation. This study presents a recent image-based three-dimensional registration technique employed in 2013 at an ancient city in Turkey, using “Structure from Motion” (SfM) algorithms. A commercial software package is applied to investigate whether this method can be used as an alternative to other techniques. Mesh models of some sections of the excavation site were produced from point clouds generated from the digital photographs. Accuracy assessment of the produced model was carried out by comparing the directly measured coordinates of the ground control points with those derived from the model. The obtained results show that the accuracy is around 1.3 cm.
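The accuracy assessment described, comparing measured ground-control coordinates against the model, is typically summarized as a 3-D root-mean-square error. A sketch with illustrative values (not the survey's data):

```python
import math

def rmse_3d(measured, modeled):
    """RMSE of the 3-D distances between directly measured control points
    and the corresponding points read off the reconstructed model."""
    d2 = [math.dist(a, b) ** 2 for a, b in zip(measured, modeled)]
    return math.sqrt(sum(d2) / len(d2))
```

For example, three control points each displaced by 1 cm in the model give an RMSE of 0.01 m.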

  13. High-pressure cloud point data for the system glycerol + olive oil + n-butane + AOT

    OpenAIRE

    Bender,J. P.; Junges,A.; Franceschi,E.; Corazza,F. C.; Dariva,C.; Oliveira,J. Vladimir; Corazza,M. L.

    2008-01-01

    This work reports high-pressure cloud point data for the quaternary system glycerol + olive oil + n-butane + AOT surfactant. The static synthetic method, using a variable-volume view cell, was employed for obtaining the experimental data at pressures up to 27 MPa. The effects of glycerol/olive oil concentration and surfactant addition on the pressure transition values were evaluated in the temperature range from 303 K to 343 K. For the system investigated, vapor-liquid (VLE), liquid-liquid (L...

  14. A MODELING METHOD OF FLUTTERING LEAVES BASED ON POINT CLOUD

    Directory of Open Access Journals (Sweden)

    J. Tang

    2017-09-01

    Full Text Available Leaves falling gently or fluttering are a common phenomenon in nature scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and the falling-leaves model has wide applications in animation and virtual reality. We propose a novel point-cloud-based modeling method for fluttering leaves in this paper. According to leaf shape, leaf weight and wind speed, three basic falling trajectories are defined: rotation falling, roll falling and screw-roll falling. In addition, a parallel algorithm based on OpenMP is implemented to satisfy real-time requirements in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.

  15. Point source atom interferometry with a cloud of finite size

    Energy Technology Data Exchange (ETDEWEB)

    Hoth, Gregory W., E-mail: gregory.hoth@nist.gov; Pelle, Bruno; Riedl, Stefan; Kitching, John; Donley, Elizabeth A. [National Institute of Standards and Technology, Boulder, Colorado 80305 (United States)

    2016-08-15

    We demonstrate a two-axis gyroscope using light-pulse atom interferometry with an expanding cloud of atoms, in the regime where the cloud has expanded by 1.1–5 times its initial size during the interrogation. Rotations are measured by analyzing spatial fringe patterns in the atom population obtained by imaging the final cloud. The fringes arise from a correlation between an atom's initial velocity and its final position. This correlation is naturally created by the expansion of the cloud, but it also depends on the initial atomic distribution. We show that the frequency and contrast of these spatial fringes depend on the details of the initial distribution and develop an analytical model to explain this dependence. We also discuss several challenges that must be overcome to realize a high-performance gyroscope with this technique.

  16. Preconcentration of plutonium and americium using the Actinide-CUTM Resin for human tissue analysis

    International Nuclear Information System (INIS)

    Qu, H.; Stuit, D.; Glover, S.E.; Love, S.F.; Filby, R.H.; Washington State Univ., Pullman, WA

    1998-01-01

    A method for the preconcentration of Am and Pu from human tissue solutions (liver, lung, bone, etc.) using the Actinide-CU Resin (EIChroM Industries) has been developed for their alpha-spectrometric determination. Near-100% recoveries were obtained by preconcentration, and subsequent decomposition methods for the eluent were developed. Good agreement with the USTUR anion-exchange/solvent extraction method for Pu and Am determination was demonstrated using previously analyzed human tissue solutions and NIST SRMs. The advantages of the preconcentration method applied to human tissue analysis are simplicity of operation, shorter analysis time compared to anion-exchange/solvent extraction methods, and the capacity to analyze large tissue samples (up to 15 g bone ash per analysis and 500 g soft tissue). (author)

  17. Grafting 3-mercaptopropyl trimethoxysilane on multi-walled carbon nanotubes surface for improving on-line cadmium(II) preconcentration from water samples

    Energy Technology Data Exchange (ETDEWEB)

    Corazza, Marcela Zanetti; Somera, Bruna Fabrin; Segatelli, Mariana Gava [Departamento de Quimica, Universidade Estadual de Londrina, Rodovia Celso Garcia Cid, PR 445, Km 380, Campus Universitario, Londrina-PR, CEP 86051-990 (Brazil); Tarley, Cesar Ricardo Teixeira, E-mail: tarley@uel.br [Departamento de Quimica, Universidade Estadual de Londrina, Rodovia Celso Garcia Cid, PR 445, Km 380, Campus Universitario, Londrina-PR, CEP 86051-990 (Brazil); Instituto Nacional de Ciencia e Tecnologia (INCT) de Bioanalitica, Universidade Estadual de Campinas (UNICAMP), Instituto de Quimica, Departamento de Quimica Analitica, Cidade Universitaria Zeferino, Vaz, s/n, CEP 13083-970, Campinas-SP (Brazil)

    2012-12-15

    Highlights: • 3-Mercaptopropyl trimethoxysilane grafted on the MWCNT surface was prepared. • The material improved the performance of MWCNTs for Cd²⁺ adsorption. • The lifetime of the adsorbent was very high. • An improvement of 84% in sensitivity was achieved. - Abstract: In the present study, the performance of multi-walled carbon nanotubes (MWCNTs) grafted with 3-mercaptopropyltrimethoxysilane (3-MPTMS), used as a solid phase extractor for Cd²⁺ preconcentration in a flow injection system coupled to flame atomic absorption spectrometry (FAAS), was evaluated. The procedure involved the preconcentration of 20.0 mL of Cd²⁺ solution at pH 7.5 (0.1 mol L-1 phosphate buffer) through 70 mg of 3-MPTMS-grafted MWCNTs packed into a minicolumn at 6.0 mL min-1. The elution step was carried out with 1.0 mol L-1 HCl. Fourier transform infrared (FTIR) spectroscopy, scanning electron microscopy (SEM) and energy dispersive spectroscopy (EDS) were used to estimate the extent of the MWCNT chemical modification. The 3-MPTMS-grafted MWCNTs provided a 1.68-fold improvement in the sensitivity of the Cd²⁺ FAAS determination compared to the unsilanized oxidized MWCNTs. The following parameters were obtained: preconcentration factor of 31.5, consumptive index of 0.635 mL, sample throughput of 14 h-1, and concentration efficiency of 9.46 min-1. The analytical curve was constructed in the range of 1.0-60.0 μg L-1 (r = 0.9988), and the detection and quantification limits were found to be 0.15 μg L-1 and 0.62 μg L-1, respectively. Different types of water samples and a cigarette sample were successfully analyzed, and the results were compared using electrothermal atomic absorption spectrometry (ETAAS) as the reference technique. 
In addition, the accuracy of the proposed method was also checked by analysis of
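Two of the figures of merit quoted above are linked by simple definitions: the preconcentration factor (PF) is commonly taken as the ratio of calibration slopes with and without preconcentration, and the consumptive index (CI) is the sample volume consumed per unit of PF. With the abstract's 20.0 mL sample and PF = 31.5, CI = 20.0/31.5 ≈ 0.635 mL, matching the reported value. A sketch:

```python
def preconcentration_factor(slope_preconc, slope_direct):
    """PF as the ratio of calibration slopes with and without preconcentration."""
    return slope_preconc / slope_direct

def consumptive_index(sample_volume_ml, pf):
    """CI = sample volume consumed per unit of preconcentration factor (mL)."""
    return sample_volume_ml / pf
```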

  18. A Fully-Integrated MEMS Preconcentrator for Rapid Gas Sampling (Preprint)

    National Research Council Canada - National Science Library

    Bae, Byunghoon; Yeom, Junghoon; Radadia, Adarsh D; Masel, Richard I; Shannon, Mark A

    2006-01-01

    .... The unprecedented speed of this preconcentrator allows chemical warfare agents, toxic industrial compounds (TICs), and other volatile compounds to be delivered in seconds, rather than tens of minutes with conventional systems.

  19. Preconcentration and extraction of copper(II) on activated carbon ...

    African Journals Online (AJOL)

    Activated carbon modified method was used for the preconcentration and ... in real samples such as tap water, wastewater and a synthetic water sample by flame ... KEY WORDS: Copper(II), Solid phase extraction, Activated carbon, Flame ...

  20. An open, interoperable, transdisciplinary approach to a point cloud data service using OGC standards and open source software.

    Science.gov (United States)

    Steer, Adam; Trenham, Claire; Druken, Kelsey; Evans, Benjamin; Wyborn, Lesley

    2017-04-01

    High resolution point clouds and other topology-free point data sources are widely utilised for research, management and planning activities. A key goal for research and management users is making these data and common derivatives available in a way which is seamlessly interoperable with other observed and modelled data. The Australian National Computational Infrastructure (NCI) stores point data from a range of disciplines, including terrestrial and airborne LiDAR surveys, 3D photogrammetry, airborne and ground-based geophysical observations, bathymetric observations and 4D marine tracers. These data are stored alongside a significant store of Earth systems data including climate and weather, ecology, hydrology, geoscience and satellite observations, and available from NCI's National Environmental Research Data Interoperability Platform (NERDIP) [1]. Because of the NERDIP requirement for interoperability with gridded datasets, the data models required to store these data may not conform to the LAS/LAZ format - the widely accepted community standard for point data storage and transfer. The goal for NCI is making point data discoverable, accessible and useable in ways which allow seamless integration with earth observation datasets and model outputs - in turn assisting researchers and decision-makers in the often-convoluted process of handling and analyzing massive point datasets. With a use-case of providing a web data service and supporting a derived product workflow, NCI has implemented and tested a web-based point cloud service using the Open Geospatial Consortium (OGC) Web Processing Service [2] as a transaction handler between a web-based client and server-side computing tools based on a native Linux operating system. Using this model, the underlying toolset for driving a data service is flexible and can take advantage of NCI's highly scalable research cloud. Present work focusses on the Point Data Abstraction Library (PDAL) [3] as a logical choice for

  1. A new algorithm combining geostatistics with the surrogate data approach to increase the accuracy of comparisons of point radiation measurements with cloud measurements

    Science.gov (United States)

    Venema, V. K. C.; Lindau, R.; Varnai, T.; Simmer, C.

    2009-04-01

    Two main groups of statistical methods used in the Earth sciences are geostatistics and stochastic modelling. Geostatistical methods, such as various kriging algorithms, aim at estimating the mean value for every point as well as possible. In the case of sparse measurements, such fields have less variability at small scales and a narrower distribution than the true field. This can lead to biases if a nonlinear process is simulated on such a kriged field. Stochastic modelling aims at reproducing the structure of the data. One of the stochastic modelling methods, the so-called surrogate data approach, replicates the value distribution and power spectrum of a certain data set. However, while stochastic methods reproduce the statistical properties of the data, the location of the measurement is not considered. Because radiative transfer through clouds is a highly nonlinear process, it is essential to model the distribution (e.g. of optical depth, extinction, liquid water content or liquid water path) accurately as well as the correlations in the cloud field, because of horizontal photon transport. This explains the success of surrogate cloud fields for use in 3D radiative transfer studies. However, up to now we could only achieve good results for the radiative properties averaged over the field, but not for a radiation measurement located at a certain position. Therefore we have developed a new algorithm that combines the accuracy of stochastic (surrogate) modelling with the positioning capabilities of kriging. In this way, we can automatically profit from the large geostatistical literature and software. The algorithm is tested on cloud fields from large eddy simulations (LES). On these clouds a measurement is simulated. From the pseudo-measurement we estimated the distribution and power spectrum. Furthermore, the pseudo-measurement is kriged to a field the size of the final surrogate cloud. The distribution, spectrum and the kriged field are the inputs to the algorithm. 
This
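The surrogate data approach described above can be sketched with the standard IAAFT algorithm, which iteratively imposes a target amplitude spectrum and then rank-orders the values back to the original distribution. This is a generic 1-D illustration, not the authors' cloud-field surrogate code:

```python
import numpy as np

def iaaft_surrogate(x, n_iter=100, seed=0):
    """Iterative amplitude-adjusted Fourier transform surrogate: returns a
    series with (approximately) the same power spectrum and exactly the same
    value distribution as x."""
    rng = np.random.default_rng(seed)
    amp = np.abs(np.fft.rfft(x))          # target amplitude spectrum
    sorted_x = np.sort(x)                 # target value distribution
    s = rng.permutation(x)                # random starting shuffle
    for _ in range(n_iter):
        # impose the target spectrum, keeping the current phases
        S = np.fft.rfft(s)
        S = amp * np.exp(1j * np.angle(S))
        s = np.fft.irfft(S, n=len(x))
        # restore the exact value distribution by rank ordering
        s = sorted_x[np.argsort(np.argsort(s))]
    return s
```

By construction the surrogate's sorted values equal the original's exactly, while its amplitude spectrum converges toward the target.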

  2. 3D Modeling of Building Indoor Spaces and Closed Doors from Imagery and Point Clouds

    Directory of Open Access Journals (Sweden)

    Lucía Díaz-Vilariño

    2015-02-01

    Full Text Available 3D models of indoor environments are increasingly gaining importance due to the wide range of applications to which they can be subjected: from redesign and visualization to monitoring and simulation. These models usually exist only for newly constructed buildings; therefore, the development of automatic approaches for reconstructing 3D indoors from imagery and/or point clouds can make the process easier, faster and cheaper. Among the constructive elements defining a building interior, doors are very common elements and their detection can be very useful, either for knowing the environment structure, to perform efficient navigation or to plan appropriate evacuation routes. The fact that doors are topologically connected to walls by being coplanar, together with the unavoidable presence of clutter and occlusions indoors, increases the inherent complexity of automating the recognition process. In this work, we present a pipeline of techniques used for the reconstruction and interpretation of building interiors based on point clouds and images. The methodology analyses the visibility problem of indoor environments and goes into depth on door candidate detection. The presented approach is tested on real data sets, showing its potential with a high door detection rate and applicability for robust and efficient envelope reconstruction.

  3. Cloud computing strategies

    CERN Document Server

    Chorafas, Dimitris N

    2011-01-01

    A guide to managing cloud projects, Cloud Computing Strategies provides the understanding required to evaluate the technology and determine how it can be best applied to improve business and enhance your overall corporate strategy. Based on extensive research, it examines the opportunities and challenges that loom in the cloud. It explains exactly what cloud computing is, what it has to offer, and calls attention to the important issues management needs to consider before passing the point of no return regarding financial commitments.

  4. On-line preconcentration and determination of chromium in parenteral solutions by inductively coupled plasma optical emission spectrometry

    International Nuclear Information System (INIS)

    Gil, R.A.; Cerutti, S.; Gasquez, J.A.; Olsina, R.A.; Martinez, L.D.

    2005-01-01

    A method for the preconcentration and speciation of chromium was developed. On-line preconcentration and determination were performed using inductively coupled plasma optical emission spectrometry (ICP-OES) coupled with flow injection. To determine the chromium(III) present in parenteral solutions, chromium was retained on activated carbon at pH 5.0. On the other hand, a reduction step was necessary in order to determine the total chromium content. The Cr(VI) concentration was then determined as the difference between the total chromium concentration and that of Cr(III). A sensitivity enrichment factor of 70-fold was obtained with respect to the chromium determination by ICP-OES without preconcentration. The detection limit for the preconcentration of 25 ml of sample was 29 ng L-1. The precision for 10 replicate determinations at the 5 μg L-1 Cr level was 2.3% relative standard deviation, calculated with the peak heights. The calibration graph using the preconcentration method for chromium species was linear, with a correlation coefficient of 0.9995, from levels near the detection limit up to at least 60 μg L-1. The method can be applied to the determination and speciation of chromium in parenteral solutions

  5. Point Cloud Analysis for Uav-Borne Laser Scanning with Horizontally and Vertically Oriented Line Scanners - Concept and First Results

    Science.gov (United States)

    Weinmann, M.; Müller, M. S.; Hillemann, M.; Reydel, N.; Hinz, S.; Jutzi, B.

    2017-08-01

    In this paper, we focus on UAV-borne laser scanning with the objective of densely sampling object surfaces in the local surroundings of the UAV. In this regard, using a line scanner which scans along the vertical direction, perpendicular to the flight direction, results in a point cloud with low point density if the UAV moves fast. Using a line scanner which scans along the horizontal direction only delivers data corresponding to the altitude of the UAV and thus low scene coverage. For these reasons, we present a concept and a system for UAV-borne laser scanning using multiple line scanners. Our system consists of a quadcopter equipped with horizontally and vertically oriented line scanners. We demonstrate the capabilities of our system by presenting first results obtained for a flight within an outdoor scene. Thereby, we use a downsampling of the original point cloud and different neighborhood types to extract fundamental geometric features which in turn can be used for scene interpretation with respect to linear, planar or volumetric structures.
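The linear/planar/volumetric distinction mentioned at the end is usually made from the eigenvalues of a local neighborhood's covariance matrix. One common convention (among several in the literature) for the eigenvalue-based features:

```python
import numpy as np

def shape_features(points):
    """Eigenvalue features of a local neighbourhood: with covariance
    eigenvalues l1 >= l2 >= l3, one common convention is
    linearity = (l1-l2)/l1, planarity = (l2-l3)/l1, sphericity = l3/l1."""
    pts = np.asarray(points, dtype=float)
    cov = np.cov(pts.T)                    # 3x3 covariance of the neighbourhood
    l3, l2, l1 = np.linalg.eigvalsh(cov)   # eigvalsh returns ascending order
    return {"linearity": (l1 - l2) / l1,
            "planarity": (l2 - l3) / l1,
            "sphericity": l3 / l1}
```

Points along a line give linearity near 1; points on a plane give planarity near 1 with negligible linearity and sphericity.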

  6. Development of cloud point extraction - UV-visible spectrophotometric method for vanadium (V) determination in hydrogeochemical samples

    International Nuclear Information System (INIS)

    Durani, Smeer; Mathur, Neerja; Chowdary, G.S.

    2007-01-01

    The cloud point extraction (CPE) behavior of vanadium (V) using 5,7-dibromo-8-hydroxyquinoline (DBHQ) and Triton X-100 was investigated. Vanadium (V) was extracted with 4 ml of 0.5 mg/ml DBHQ and 6 ml of 8% (V/V) Triton X-100 at pH 3.7. A few hydrogeochemical samples were analysed for vanadium using the above method. (author)

  7. An Automated Approach to the Generation of Structured Building Information Models from Unstructured 3d Point Cloud Scans

    DEFF Research Database (Denmark)

    Tamke, Martin; Evers, Henrik Leander; Wessel, Raoul

    2016-01-01

    In this paper we present and evaluate an approach for the automatic generation of building models in IFC BIM format from unstructured point cloud scans, as they result from 3D laser scans of buildings. While the actual measurement process is relatively fast, 85% of the overall time is spent...

  8. High resolution depth reconstruction from monocular images and sparse point clouds using deep convolutional neural network

    Science.gov (United States)

    Dimitrievski, Martin; Goossens, Bart; Veelaert, Peter; Philips, Wilfried

    2017-09-01

    Understanding the 3D structure of the environment is advantageous for many tasks in the fields of robotics and autonomous vehicles. From the robot's point of view, 3D perception is often formulated as a depth image reconstruction problem. In the literature, dense depth images are often recovered deterministically from stereo image disparities. Other systems use an expensive LiDAR sensor to produce accurate, but semi-sparse depth images. With the advent of deep learning there have also been attempts to estimate depth using only monocular images. In this paper we combine the best of both worlds, focusing on the combination of monocular images and low-cost LiDAR point clouds. We explore the idea that very sparse depth information accurately captures the global scene structure, while variations in image patches can be used to reconstruct local depth at high resolution. The main contribution of this paper is a supervised depth reconstruction system based on a deep convolutional neural network. The network is trained on RGB image patches reinforced with sparse depth information, and the output is a depth estimate for each pixel. Using image and point cloud data from the KITTI vision dataset we are able to learn a correspondence between local RGB information and local depth, while at the same time preserving the global scene structure. Our results are evaluated on sequences from the KITTI dataset and on our own recordings using a low-cost camera and LiDAR setup.
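As a toy illustration of the problem setting (not the authors' network), a trivial baseline fills the unknown pixels from the sparse LiDAR measurements; the CNN in the paper instead learns local RGB-to-depth correspondences:

```python
import numpy as np

def mean_fill_baseline(sparse_depth, valid_mask):
    """Fill unknown pixels with the mean of the known (LiDAR) depths.
    Purely a baseline: it keeps the global depth scale but none of the
    local structure the paper recovers from image patches."""
    dense = sparse_depth.astype(float).copy()
    dense[~valid_mask] = sparse_depth[valid_mask].mean()
    return dense

depth = np.array([[2.0, 0.0], [0.0, 4.0]])
mask = depth > 0          # assume zeros mark missing measurements
print(mean_fill_baseline(depth, mask))
```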

  9. Recording Approach of Heritage Sites Based on Merging Point Clouds from High Resolution Photogrammetry and Terrestrial Laser Scanning

    Science.gov (United States)

    Grussenmeyer, P.; Alby, E.; Landes, T.; Koehl, M.; Guillemin, S.; Hullo, J. F.; Assali, P.; Smigiel, E.

    2012-07-01

    Different approaches and tools are required in Cultural Heritage Documentation to deal with the complexity of monuments and sites. The documentation process has changed strongly in the last few years, always driven by technology. Accurate documentation is closely tied to advances in technology (imaging sensors, high speed scanning, automation in recording and processing data) for the purposes of conservation works, management, appraisal, assessment of the structural condition, archiving, publication and research (Patias et al., 2008). In this paper we focus on the recording aspects of cultural heritage documentation, especially the generation of geometric and photorealistic 3D models for accurate reconstruction and visualization purposes. The selected approaches are based on the combination of photogrammetric dense matching and Terrestrial Laser Scanning (TLS) techniques. Both techniques have pros and cons, and recent advances have changed the recording approach. The choice of the best workflow depends on the site configuration, the performance of the sensors, and criteria such as geometry, accuracy, resolution, georeferencing, texture, and of course processing time. TLS techniques (time of flight or phase shift systems) are widely used for recording large and complex objects and sites. Point cloud generation from images by dense stereo or multi-view matching can be used as an alternative or complementary method to TLS. Compared to TLS, the photogrammetric solution is a low-cost one, as the acquisition system is limited to a high-performance digital camera and a few accessories only. Indeed, the stereo or multi-view matching process offers a cheap, flexible and accurate solution to get 3D point clouds. Moreover, the captured images might also be used for texturing the models. Several software packages are available, whether web-based, open source or commercial.
The main advantage of this photogrammetric or computer vision based technology is to get

  10. An Automated Approach to the Generation of Structured Building Information Models from Unstructured 3d Point Cloud Scans

    DEFF Research Database (Denmark)

    Tamke, Martin; Evers, Henrik Leander; Wessel, Raoul

    2016-01-01

    In this paper we present and evaluate an approach for the automatic generation of building models in IFC BIM format from unstructured point cloud scans, as they result from 3D laser scans of buildings. While the actual measurement process is relatively fast, 85% of the overall time is spent on th...

  11. ROBUST CYLINDER FITTING IN THREE-DIMENSIONAL POINT CLOUD DATA

    Directory of Open Access Journals (Sweden)

    A. Nurunnabi

    2017-05-01

    Full Text Available This paper investigates the problem of cylinder fitting in laser scanning three-dimensional Point Cloud Data (PCD). Most existing methods require full cylinder data, do not consider the presence of outliers, and are not statistically robust. Yet mobile laser scanning in particular often yields incomplete data, as street poles, for example, are only scanned from the road. Moreover, outliers are common; they may occur as random or systematic errors, and may be scattered and/or clustered. In this paper, we present a statistically robust cylinder fitting algorithm for PCD that combines Robust Principal Component Analysis (RPCA) with robust regression. Robust principal components obtained by RPCA allow cylinder directions to be estimated more accurately, and an existing efficient circle fitting algorithm following robust regression principles properly fits the cylinder. We demonstrate the performance of the proposed method on artificial and real PCD. Results show that the proposed method provides more accurate and robust results: (i) in the presence of noise and a high percentage of outliers, (ii) for incomplete as well as complete data, (iii) for small and large numbers of points, and (iv) for different radii. On 1000 simulated quarter cylinders of 1 m radius with 10% outliers, a PCA-based method fitted cylinders with an average radius of 3.63 m; the proposed method, on the other hand, fitted cylinders with an average radius of 1.02 m. The algorithm has potential in applications such as fitting cylindrical objects (e.g., light and traffic poles), diameter at breast height estimation for trees, and building and bridge information modelling.
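A minimal sketch of the direction-estimation step: the cylinder axis is the dominant principal component of the centered surface points. The paper's contribution is to make this step robust via RPCA; plain (non-robust) PCA on synthetic outlier-free data is shown here for illustration:

```python
import numpy as np

def cylinder_axis_pca(points):
    """Estimate the axis of a cylindrical point set as the first
    principal component. Outliers would distort this estimate,
    which is what motivates RPCA in the paper."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]                      # unit vector, dominant direction

# Synthetic cylinder of radius 1 along the z-axis
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 500)
pts = np.column_stack([np.cos(theta), np.sin(theta),
                       rng.uniform(0.0, 5.0, 500)])
axis = cylinder_axis_pca(pts)
print(abs(axis[2]))                   # close to 1: axis ~ z
```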

  12. Trends in preconcentration procedures for metal determination using atomic spectrometry techniques

    International Nuclear Information System (INIS)

    Godoi Pereira, M. de; Arruda, M.A.Z.

    2003-01-01

    Methods for metal preconcentration are often described in the literature. However, their purposes differ depending on whether the methods are applied in environmental, clinical or technological fields. The respective method needs to be efficient, give high sensitivity and, ideally, also be selective, which is useful when used in combination with atomic spectroscopy. This review presents current trends in metal preconcentration using techniques such as flame atomic absorption spectrometry (FAAS), electrothermal atomic absorption spectrometry (ETAAS), hydride generation atomic absorption spectrometry (HGAAS), inductively coupled plasma optical emission spectrometry (ICP OES) and inductively coupled plasma mass spectrometry (ICP-MS). Procedures based on electrochemical, coprecipitation/precipitation, liquid-liquid and solid-liquid extraction, and atom trapping mechanisms are presented. (author)

  13. Grammar-Supported 3d Indoor Reconstruction from Point Clouds for As-Built Bim

    Science.gov (United States)

    Becker, S.; Peter, M.; Fritsch, D.

    2015-03-01

    The paper presents a grammar-based approach for the robust automatic reconstruction of 3D interiors from raw point clouds. The core of the approach is a 3D indoor grammar which is an extension of our previously published grammar concept for the modeling of 2D floor plans. The grammar allows for the modeling of buildings whose horizontal, continuous floors are traversed by hallways providing access to the rooms, as is the case for most office buildings or public buildings like schools, hospitals or hotels. The grammar is designed in such a way that it can be embedded in an iterative automatic learning process providing a seamless transition from LOD3 to LOD4 building models. Starting from an initial low-level grammar, automatically derived from the window representations of an available LOD3 building model, hypotheses about indoor geometries can be generated. The hypothesized indoor geometries are checked against observation data - here 3D point clouds - collected in the interior of the building. The verified and accepted geometries form the basis for an automatic update of the initial grammar. In this way, the knowledge content of the initial grammar is enriched, leading to a grammar of higher quality. This higher-level grammar can then be applied to predict realistic geometries for building parts where only sparse observation data are available. Thus, our approach allows for the robust generation of complete 3D indoor models whose quality can be improved continuously as soon as new observation data are fed into the grammar-based reconstruction process. The feasibility of our approach is demonstrated based on a real-world example.

  14. Separation and Preconcentration of Trace Amounts of Nickel from Aqueous Samples

    Directory of Open Access Journals (Sweden)

    Reyhaneh Rahnama

    2018-05-01

    Full Text Available In this paper, a new method for the preconcentration and measurement of trace amounts of nickel in aqueous samples by magnetic solid phase extraction (MSPE) via magnetic carbon nanotubes (Mag-CNTs) was developed. In order to increase selectivity, α-furildioxime was used as the chelating agent. For the extraction, the optimum amount of ligand was added to the nickel sample and the pH was set at 9; then 7 mL of adsorbent was added and the mixture was stirred for 15 minutes. After that, the aqueous phase and the adsorbent were separated by a strong magnet. Finally, the analyte was eluted from the adsorbent with an appropriate solution and the absorbance was measured by flame atomic absorption spectrometry. Parameters affecting the extraction and preconcentration of nickel were investigated and optimized. Under optimum conditions, the calibration curve was linear in the concentration range from 2.5 to 375 µg L-1 and the detection limit was 0.8 µg L-1 of nickel. The method was applied to the determination of nickel in aqueous samples. The relative recovery values for nickel in aqueous samples ranged from 98.7% to 102.1%. The results indicated that Mag-CNTs can be used as an effective and inexpensive adsorbent for the preconcentration and extraction of nickel from real samples.
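The figures of merit quoted above follow standard definitions; a sketch with synthetic data, where the response values are invented and only the 2.5-375 µg/L range and the 3σ detection-limit convention are taken as given:

```python
import numpy as np

def detection_limit(blank_sd, slope, k=3.0):
    """LOD = k * s_blank / slope, with k = 3 by convention."""
    return k * blank_sd / slope

# Synthetic linear calibration over the reported 2.5-375 ug/L range
conc = np.array([2.5, 50.0, 150.0, 375.0])          # ug/L
signal = 0.004 * conc + 0.001                       # invented response
slope, intercept = np.polyfit(conc, signal, 1)
print(round(slope, 4), round(detection_limit(0.0012, slope), 2))
```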

  15. Building Change Detection from Bi-Temporal Dense-Matching Point Clouds and Aerial Images.

    Science.gov (United States)

    Pang, Shiyan; Hu, Xiangyun; Cai, Zhongliang; Gong, Jinqi; Zhang, Mi

    2018-03-24

    In this work, a novel building change detection method from bi-temporal dense-matching point clouds and aerial images is proposed to address two major problems, namely, the robust acquisition of the changed objects above ground and the automatic classification of changed objects into buildings or non-buildings. For the acquisition of changed objects above ground, the change detection problem is converted into a binary classification, in which the changed area above ground is regarded as the foreground and the other area as the background. For the gridded points of each period, the graph cuts algorithm is adopted to classify the points into foreground and background, followed by the region-growing algorithm to form candidate changed building objects. A novel structural feature that was extracted from aerial images is constructed to classify the candidate changed building objects into buildings and non-buildings. The changed building objects are further classified as "newly built", "taller", "demolished", and "lower" by combining the classification and the digital surface models of two periods. Finally, three typical areas from a large dataset are used to validate the proposed method. Numerous experiments demonstrate the effectiveness of the proposed algorithm.
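The foreground/background formulation above can be illustrated with plain DSM differencing; the graph-cuts step in the paper is a spatially regularized version of this thresholding, and the 2.5 m threshold here is an assumption:

```python
import numpy as np

def change_foreground(dsm_t1, dsm_t2, height_threshold=2.5):
    """Label cells whose height changed by more than the threshold as
    foreground (changed, above ground); the rest is background."""
    return np.abs(dsm_t2 - dsm_t1) > height_threshold

dsm_2010 = np.array([[0.0, 0.0], [0.0, 12.0]])
dsm_2016 = np.array([[0.0, 9.0], [0.0, 12.0]])   # one newly built cell
changed = change_foreground(dsm_2010, dsm_2016)
print(changed)
```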

  16. POINT CLOUD ANALYSIS FOR UAV-BORNE LASER SCANNING WITH HORIZONTALLY AND VERTICALLY ORIENTED LINE SCANNERS – CONCEPT AND FIRST RESULTS

    Directory of Open Access Journals (Sweden)

    M. Weinmann

    2017-08-01

    Full Text Available In this paper, we focus on UAV-borne laser scanning with the objective of densely sampling object surfaces in the local surroundings of the UAV. In this regard, using a line scanner which scans along the vertical direction and perpendicular to the flight direction results in a point cloud with low point density if the UAV moves fast. Using a line scanner which scans along the horizontal direction only delivers data corresponding to the altitude of the UAV and thus a low scene coverage. For these reasons, we present a concept and a system for UAV-borne laser scanning using multiple line scanners. Our system consists of a quadcopter equipped with horizontally and vertically oriented line scanners. We demonstrate the capabilities of our system by presenting first results obtained for a flight within an outdoor scene. In doing so, we use a downsampling of the original point cloud and different neighborhood types to extract fundamental geometric features which in turn can be used for scene interpretation with respect to linear, planar or volumetric structures.

  17. Force fields of charged particles in micro-nanofluidic preconcentration systems

    Science.gov (United States)

    Gong, Lingyan; Ouyang, Wei; Li, Zirui; Han, Jongyoon

    2017-12-01

    Electrokinetic concentration devices based on the ion concentration polarization (ICP) phenomenon have drawn much attention due to their simple setup, high enrichment factor, and easy integration with many subsequent processes, such as separation, reaction, and extraction. Despite significant progress in experimental research, fundamental understanding and detailed modeling of these preconcentration systems are still lacking. The mechanism of the electrokinetic trapping of charged particles has so far been limited to a force balance analysis between the electric force and the fluid drag force in an over-simplified one-dimensional (1D) model, which misses many signatures of the actual system. This letter studies the particle trapping phenomena that are not explainable in the 1D model through the calculation of two-dimensional (2D) force fields. The trapping of charged particles is shown to significantly distort the electric field and fluid flow pattern, which in turn leads to different trapping behaviors for particles of different sizes. The mechanisms behind the protrusions and instability of the focused band, which are important factors determining overall preconcentration efficiency, are revealed by analyzing the rotating fluxes of particles in the vicinity of the ion-selective membrane. The differences in the enrichment factors of differently sized particles are understood through the interplay between the electric force and convective fluid flow. These results provide insights into the electrokinetic concentration effect, which could facilitate the design and optimization of ICP-based preconcentration systems.
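The 1-D force-balance picture that the letter extends to 2-D can be written down directly; all parameter values below are arbitrary illustrations, not data from the letter:

```python
import math

def net_axial_force(q_eff, e_field, viscosity, radius, velocity):
    """Simplified 1-D balance: electric force q*E against Stokes drag
    6*pi*eta*a*v. A particle is trapped where the net force vanishes."""
    drag = 6.0 * math.pi * viscosity * radius * velocity
    return q_eff * e_field - drag

# Fluid velocity at which an (assumed) particle is trapped
q, E, eta, a = 2e-15, 1e4, 1e-3, 1e-6        # SI units, illustrative
v_trap = q * E / (6.0 * math.pi * eta * a)
print(abs(net_axial_force(q, E, eta, a, v_trap)) < 1e-18)
```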

  18. NAA using Cf-252 after preconcentration

    International Nuclear Information System (INIS)

    Panyo, O.; Moebius, S.; Keller, C.

    1988-01-01

    Neutron activation analysis (NAA) with thermal neutrons from Cf-252 sources was applied to the elemental analysis of water samples. A high-resolution Ge(Li) detector was employed for gamma-radiation detection. Both the suspended particulate matter and the liquid fraction were investigated after filtration. A preconcentration method by co-precipitation using iron(III) hydroxide and oxine was chosen. The elements that could be detected in the present study are Al, As, Cl, K, Mg, Mn, Na, Sr, Ti, U, V and Zn

  19. Performance and stability of low-cost dye-sensitized solar cell based crude and pre-concentrated anthocyanins: Combined experimental and DFT/TDDFT study

    Science.gov (United States)

    Chaiamornnugool, Phrompak; Tontapha, Sarawut; Phatchana, Ratchanee; Ratchapolthavisin, Nattawat; Kanokmedhakul, Somdej; Sang-aroon, Wichien; Amornkitbamrung, Vittaya

    2017-01-01

    Low-cost DSSCs utilizing crude and pre-concentrated anthocyanins extracted from six anthocyanin-rich samples, including mangosteen pericarp, roselle, red cabbage, Thai berry, black rice and blue pea, were fabricated, and their photo-to-current conversion efficiencies and stability were examined. Pre-concentrated extracts were obtained by solid phase extraction (SPE) using a C18 cartridge. The results clearly showed that all pre-concentrated extracts gave better photovoltaic performance in DSSCs than the crude extracts, except for mangosteen pericarp. The DSSCs sensitized by pre-concentrated anthocyanin from roselle and red cabbage showed a maximum efficiency of η = 0.71%, while the DSSC sensitized by crude anthocyanin from mangosteen pericarp reached a maximum efficiency of η = 0.97%. In addition, cells based on pre-concentrated extracts are more stable than those based on crude extracts. This indicates that pre-concentration of anthocyanins via SPE is very effective for DSSCs in terms of both photovoltaic performance and stability. DFT/TDDFT calculations of the electronic and photoelectrochemical properties of the major anthocyanins found in the samples are employed to support the experimental results.
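The quoted efficiencies follow the standard definition η = Jsc·Voc·FF/Pin; the cell parameters below are invented to land near a ~0.97% figure and are not values reported in the paper:

```python
def dssc_efficiency(jsc_mA_cm2, voc_V, fill_factor, pin_mW_cm2=100.0):
    """Photovoltaic conversion efficiency in percent, assuming the
    standard AM1.5 input power of 100 mW/cm^2."""
    return 100.0 * jsc_mA_cm2 * voc_V * fill_factor / pin_mW_cm2

print(round(dssc_efficiency(2.8, 0.52, 0.67), 2))   # ~0.98 %
```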

  20. A voting-based statistical cylinder detection framework applied to fallen tree mapping in terrestrial laser scanning point clouds

    Science.gov (United States)

    Polewski, Przemyslaw; Yao, Wei; Heurich, Marco; Krzystek, Peter; Stilla, Uwe

    2017-07-01

    This paper introduces a statistical framework for detecting cylindrical shapes in dense point clouds. We target the application of mapping fallen trees in datasets obtained through terrestrial laser scanning. This is a challenging task due to the presence of ground vegetation, standing trees, DTM artifacts, as well as the fragmentation of dead trees into non-collinear segments. Our method shares the concept of voting in parameter space with the generalized Hough transform, however two of its significant drawbacks are improved upon. First, the need to generate samples on the shape's surface is eliminated. Instead, pairs of nearby input points lying on the surface cast a vote for the cylinder's parameters based on the intrinsic geometric properties of cylindrical shapes. Second, no discretization of the parameter space is required: the voting is carried out in continuous space by means of constructing a kernel density estimator and obtaining its local maxima, using automatic, data-driven kernel bandwidth selection. Furthermore, we show how the detected cylindrical primitives can be efficiently merged to obtain object-level (entire tree) semantic information using graph-cut segmentation and a tailored dynamic algorithm for eliminating cylinder redundancy. Experiments were performed on 3 plots from the Bavarian Forest National Park, with ground truth obtained through visual inspection of the point clouds. It was found that relative to sample consensus (SAC) cylinder fitting, the proposed voting framework can improve the detection completeness by up to 10 percentage points while maintaining the correctness rate.
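The continuous-voting idea (a kernel density estimate over the votes, then its local maxima, instead of a discretized Hough accumulator) can be sketched for a single cylinder parameter such as the radius; the vote clusters below are synthetic, and the bandwidth rule is a standard stand-in for the paper's data-driven selection:

```python
import numpy as np

# Synthetic votes for a 1-D cylinder parameter: two underlying shapes
rng = np.random.default_rng(1)
votes = np.concatenate([rng.normal(0.5, 0.02, 300),
                        rng.normal(1.5, 0.02, 300)])

# Gaussian KDE with Scott's-rule bandwidth (data-driven, no binning)
bw = votes.std() * votes.size ** (-1.0 / 5.0)
grid = np.linspace(0.0, 2.0, 1000)
diff = (grid[:, None] - votes[None, :]) / bw
density = np.exp(-0.5 * diff ** 2).sum(axis=1) \
          / (votes.size * bw * np.sqrt(2.0 * np.pi))

# Local maxima of the estimated density = detected parameter values
interior = (density[1:-1] > density[:-2]) & (density[1:-1] > density[2:])
peaks = grid[1:-1][interior]
print(np.round(peaks, 2))
```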

  1. Pre-Concentration of Vanadium from Stone Coal by Gravity Using Fine Mineral Spiral

    Directory of Open Access Journals (Sweden)

    Xin Liu

    2016-08-01

    Full Text Available Due to the low grade of V2O5 in stone coal, existing vanadium extraction technologies face challenges in terms of large handling capacity, high acid consumption and production cost. Pre-concentration of vanadium from stone coal before the extraction process is an effective way to reduce cost. In this study, a detailed mineral characterization of stone coal was carried out, confirming that the vanadium mainly occurs in muscovite and illite. An effective pre-concentration process with simple operation for discarding quartz and other gangue minerals is therefore needed. Based on the mineralogical study, a new vanadium pre-concentration process using a fine mineral spiral was investigated. The experimental results showed that the separation process, comprising a rougher and a scavenger stage, could efficiently discard quartz, pyrite and apatite. A final concentrate with a V2O5 grade of 1.02% and a recovery of 89.6% could be obtained, with 26.9% of the raw ore being discarded as final tailings.
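Grade and recovery are linked by a simple mass balance; in the sketch below, the mass split and feed grade are assumed so that the arithmetic reproduces the reported 1.02% grade and 89.6% recovery:

```python
def recovery_percent(conc_mass, conc_grade, feed_mass, feed_grade):
    """Percentage of the contained V2O5 reporting to the concentrate:
    100 * (c*C) / (f*F) for concentrate/feed masses and grades."""
    return 100.0 * conc_mass * conc_grade / (feed_mass * feed_grade)

# 26.9% of the feed discarded as tailings -> 73.1% to concentrate;
# the 0.832% feed grade is back-calculated, not from the paper
print(round(recovery_percent(73.1, 1.02, 100.0, 0.832), 1))  # ~89.6
```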

  2. Automatic Sharp Feature Based Segmentation of Point Clouds

    Institute of Scientific and Technical Information of China (English)

    邹冬; 庞明勇

    2012-01-01

    Segmentation of point cloud models is one of the basic and key technologies in digital geometry processing. In this paper, we present a method for automatically segmenting point cloud models based on extracted sharp features. Our algorithm first calculates local differential surface properties and uses them to identify sharp feature points on the model. An improved polyline-growing technique is then employed to generate and refine feature polylines approximating these points, and, based on the polylines, the sharp feature points are approximated by cubic B-spline curves. Finally, guided by the extracted feature curves, a region-growing method is applied to segment the point cloud model into multiple patches, each with uniform geometric character and neat boundaries. Experiments show that the algorithm runs stably and can segment point cloud models accurately. It can be used in applications such as shape matching, texture mapping, CAD modeling and reverse engineering.

  3. Preconcentration and Separation of Mixed-Species Samples Near a Nano-Junction in a Convergent Microchannel

    Directory of Open Access Journals (Sweden)

    Ping-Hsien Chiu

    2015-12-01

    Full Text Available A fluidic microchip incorporating a convergent microchannel and a Nafion nanoporous membrane is proposed for the preconcentration and separation of multi-species samples on a single platform. In the device, sample preconcentration is achieved by means of the ion concentration polarization effect induced at the micro/nano interface under an external electric field, while species separation is achieved by exploiting the different electrophoretic mobilities of the sample components. The experimental results show that the device is capable of detecting C-reactive protein (CRP) at an initial concentration as low as 9.50 × 10−6 mg/L given a sufficient preconcentration time and driving voltage. In addition, it is shown that a mixed-species sample consisting of three negatively-charged components (bovine serum albumin (BSA), tetramethylrhodamine (TAMRA) isothiocyanate-dextran and fluorescent polymer beads) can be separated and preconcentrated within 20 min at a driving voltage of 100 V across a 1 cm-long microchannel. In general, the present results confirm the feasibility of the device for the immunoassay or detection of various low-concentration multi-species samples in the biochemical and biomedical fields. The novel device can therefore improve the detection limit of traditional medical facilities.

  4. Detection of Single Tree Stems in Forested Areas from High Density ALS Point Clouds Using 3d Shape Descriptors

    Science.gov (United States)

    Amiri, N.; Polewski, P.; Yao, W.; Krzystek, P.; Skidmore, A. K.

    2017-09-01

    Airborne Laser Scanning (ALS) is a widespread method for forest mapping and management purposes. While common ALS techniques provide valuable information about the forest canopy and intermediate layers, the point density near the ground may be poor due to dense overstory conditions. The current study highlights a new method for detecting stems of single trees in 3D point clouds obtained from high density ALS with a density of 300 points/m2. Compared to standard ALS data, this elevated point density, due to the lower flight height (150-200 m), leads to more laser reflections from tree stems. In this work, we propose a three-tiered method which works on the point, segment and object levels. First, for each point we calculate the likelihood that it belongs to a tree stem, derived from the radiometric and geometric features of its neighboring points. In the next step, we construct short stem segments based on high-probability stem points, and classify the segments by considering the distribution of points around them as well as their spatial orientation, which encodes the prior knowledge that trees are mainly vertically aligned due to gravity. Finally, we apply hierarchical clustering on the positively classified segments to obtain point sets corresponding to single stems, and perform ℓ1-based orthogonal distance regression to robustly fit lines through each stem point set. The ℓ1-based method is less sensitive to outliers than least squares approaches. From the fitted lines, the planimetric tree positions can then be derived. Experiments were performed on two plots from the Hochficht forest in the Oberösterreich region of Austria. We marked a total of 196 reference stems in the point clouds of both plots by visual interpretation. The evaluation of the automatically detected stems showed a classification precision of 0.86 and 0.85 for Plots 1 and 2, respectively, with recall values of 0.7 and 0.67.
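The reported precision and recall follow the usual detection-evaluation definitions; the counts below are hypothetical, chosen only to land near Plot 1's figures (0.86, 0.70):

```python
def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP); recall = TP/(TP+FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# Assumed counts: 70 correct detections, 11 false alarms, 30 misses
p, r = precision_recall(tp=70, fp=11, fn=30)
print(round(p, 2), round(r, 2))   # 0.86 0.7
```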

  5. Parameterized approximation of lacunarity functions derived from airborne laser scanning point clouds of forested areas

    Science.gov (United States)

    Székely, Balázs; Kania, Adam; Varga, Katalin; Heilmeier, Hermann

    2017-04-01

    Lacunarity, a measure of the spatial distribution of empty space, is found to be a useful descriptive quantity of forest structure. Its calculation, based on laser-scanned point clouds, results in a four-dimensional data set, and the evaluation of the results needs sophisticated tools and visualization techniques. To simplify the evaluation, it is straightforward to use approximation functions fitted to the results. The lacunarity function L(r), being a measure of scale-independent structural properties, has a power-law character. Previous studies showed that a log(log(L(r))) transformation is suitable for the analysis of spatial patterns. Accordingly, transformed lacunarity functions can be approximated by appropriate functions either in the original or in the transformed domain. As input data we have used a number of laser-scanned point clouds of various forests. The lacunarity distribution has been calculated along a regular horizontal grid at various (relative) elevations. The lacunarity data cube has then been logarithm-transformed, and the resulting values became the input of parameter estimation at each point (point of interest, POI). In this way, at each POI a parameter set is generated that is suitable for spatial analysis. The expectation is that the horizontal variation and vertical layering of the vegetation can be characterized by this procedure. The results show that the transformed L(r) functions can typically be approximated individually by exponentials, and the residual values remain low in most cases. However, (1) in most cases the residuals may vary considerably, and (2) neighbouring POIs often give rather differing estimates both in the horizontal and vertical directions, of which the vertical variation seems to be more characteristic. In the vertical sense, the distribution of estimates shows abrupt changes at places, presumably related to the vertical structure of the forest.
In low relief areas horizontal similarity is more typical, in higher relief areas
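The power-law character of L(r) means a log-log transform linearizes it (the abstract's log(log(...)) variant applies one more transform on top); a noise-free sketch with an assumed exponent and prefactor:

```python
import numpy as np

# Synthetic power-law lacunarity L(r) = c * r**(-a), with a = 0.7, c = 3
r = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
L = 3.0 * r ** (-0.7)

# After the log-log transform the model is linear: log L = log c - a log r
slope, intercept = np.polyfit(np.log(r), np.log(L), 1)
print(round(-slope, 2), round(float(np.exp(intercept)), 2))   # 0.7 3.0
```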

  6. 3DVEM Software Modules for Efficient Management of Point Clouds and Photorealistic 3d Models

    Science.gov (United States)

    Fabado, S.; Seguí, A. E.; Cabrelles, M.; Navarro, S.; García-De-San-Miguel, D.; Lerma, J. L.

    2013-07-01

    Cultural heritage managers in general and information users in particular are not usually accustomed to dealing with high-technology hardware and software. On the contrary, information providers of metric surveys are most of the time applying the latest developments to real-life conservation and restoration projects. This paper addresses the software issue of handling and managing either 3D point clouds or (photorealistic) 3D models to bridge the gap between information users and information providers as regards the management of information which users and providers share as a tool for decision-making, analysis, visualization and management. There are not many viewers specifically designed to handle, manage and easily create animations of architectural and/or archaeological 3D objects, monuments and sites, among others. 3DVEM - 3D Viewer, Editor & Meter software will be introduced to the scientific community, as well as 3DVEM - Live and 3DVEM - Register. The advantages of managing projects with both sets of data, 3D point clouds and photorealistic 3D models, will be introduced. Different visualizations of real documentation projects in the fields of architecture, archaeology and industry will be presented. Emphasis will be placed on highlighting the features of new user-friendly software to manage virtual projects. Furthermore, the ease of creating controlled interactive animations (both walk-through and fly-through) by the user, either on-the-fly or as a traditional movie file, will be demonstrated through 3DVEM - Live.

  7. 3D Sensor-Based Obstacle Detection Comparing Octrees and Point clouds Using CUDA

    Directory of Open Access Journals (Sweden)

    K.B. Kaldestad

    2012-10-01

    Full Text Available This paper presents adaptable methods for achieving fast collision detection using the GPU and Nvidia CUDA together with octrees. Earlier related work has focused on serial methods, while this paper presents a parallel solution which shows a large gain in speed when the number of operations is large. Two different models of the environment and the industrial robot are presented: the first uses octrees at different resolutions, the second a point cloud representation. The relative merits of the two world model representations are shown. In particular, the experimental results show the potential of adapting the resolution of the robot and environment models to the task at hand.
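A serial, hash-based stand-in for the occupancy test that the paper runs in parallel on the GPU at several octree resolutions (the voxel size and sample points here are assumed):

```python
def occupied_voxels(points, voxel_size):
    """Quantize 3-D points to integer voxel indices. Overlap between
    the robot's and the environment's occupied-voxel sets flags a
    potential collision; smaller voxel_size = finer resolution."""
    return {tuple(int(c // voxel_size) for c in p) for p in points}

env = occupied_voxels([(0.00, 0.00, 0.00), (1.0, 1.0, 1.0)], 0.5)
robot = occupied_voxels([(0.05, 0.05, 0.05), (3.0, 3.0, 3.0)], 0.5)
print(bool(env & robot))   # True: a shared voxel near the origin
```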

  8. Vapor generation – atomic spectrometric techniques. Expanding frontiers through specific-species preconcentration. A review

    International Nuclear Information System (INIS)

    Gil, Raúl A.; Pacheco, Pablo H.; Cerutti, Soledad; Martinez, Luis D.

    2015-01-01

    This article reviews 120 articles found in SCOPUS and in searches of specific journals corresponding to the terms ‘preconcentration’, ‘speciation’, ‘vapor generation techniques’ and ‘atomic spectrometry techniques’ over the last 5 years. - Highlights: • Recent advances in vapor generation and atomic spectrometry were reviewed. • Species-specific preconcentration strategies before and after VG were discussed. • New preconcentration and speciation analyses were evaluated within this framework. - Abstract: We review recent progress in preconcentration strategies associated with vapor generation techniques coupled to atomic spectrometry (VGT-AS) for the detection of specific chemical species. The discussion focuses on the central role of different preconcentration approaches, both before and after the VG process. The former are based on classical solid-phase and liquid–liquid extraction procedures which, aided by automation and miniaturization strategies, have strengthened the role of VGT-AS in several research fields, including environmental, clinical, and others. We then examine some of the newer vapor-trapping strategies (atom trapping, hydride trapping, cryotrapping) that not only improve selectivity through interference elimination but also allow ultra-low detection limits to be reached for a large number of chemical species generated in conventional VG systems, including complete separation of several species of the same element. This review covers more than 100 bibliographic references from 2009 to date, found in the SCOPUS database and in individual searches of specific journals. We conclude with an outlook on future directions of this field.

  9. Vapor generation – atomic spectrometric techniques. Expanding frontiers through specific-species preconcentration. A review

    Energy Technology Data Exchange (ETDEWEB)

    Gil, Raúl A.; Pacheco, Pablo H.; Cerutti, Soledad [Área de Química Analítica, Facultad de Química Bioquímica y Farmacia, Universidad Nacional de San Luis, Ciudad de San Luis 5700 (Argentina); Instituto de Química de San Luis, INQUISAL, Centro Científico-Tecnológico de San Luis (CCT-San Luis), Consejo Nacional de Investigaciones Científicas y Universidad Nacional de San Luis, Ciudad de San Luis 5700 (Argentina); Martinez, Luis D., E-mail: ldm@unsl.edu.ar [Área de Química Analítica, Facultad de Química Bioquímica y Farmacia, Universidad Nacional de San Luis, Ciudad de San Luis 5700 (Argentina); Instituto de Química de San Luis, INQUISAL, Centro Científico-Tecnológico de San Luis (CCT-San Luis), Consejo Nacional de Investigaciones Científicas y Universidad Nacional de San Luis, Ciudad de San Luis 5700 (Argentina)

    2015-05-22

    This article reviews 120 articles found in SCOPUS and in searches of specific journals corresponding to the terms ‘preconcentration’, ‘speciation’, ‘vapor generation techniques’ and ‘atomic spectrometry techniques’ over the last 5 years. - Highlights: • Recent advances in vapor generation and atomic spectrometry were reviewed. • Species-specific preconcentration strategies before and after VG were discussed. • New preconcentration and speciation analyses were evaluated within this framework. - Abstract: We review recent progress in preconcentration strategies associated with vapor generation techniques coupled to atomic spectrometry (VGT-AS) for the detection of specific chemical species. The discussion focuses on the central role of different preconcentration approaches, both before and after the VG process. The former are based on classical solid-phase and liquid–liquid extraction procedures which, aided by automation and miniaturization strategies, have strengthened the role of VGT-AS in several research fields, including environmental, clinical, and others. We then examine some of the newer vapor-trapping strategies (atom trapping, hydride trapping, cryotrapping) that not only improve selectivity through interference elimination but also allow ultra-low detection limits to be reached for a large number of chemical species generated in conventional VG systems, including complete separation of several species of the same element. This review covers more than 100 bibliographic references from 2009 to date, found in the SCOPUS database and in individual searches of specific journals. We conclude with an outlook on future directions of this field.

  10. Optimization of Palmitic Acid Composition in Crude Oleic Acid to Provide Specifications of Titer and Cloud Point of Distillate Oleic Acid using a Flash Distiller

    OpenAIRE

    Muhammad Yusuf Ritonga

    2010-01-01

    The titer and cloud point of Distilled Oleic Acid (DOA) are higher than the standard at a palmitic acid (C15H31COOH, "C16") feed composition of 11.2%. The C16 feed composition and the top temperature of the precut column and bottom temperature of the main distiller column were optimized to produce DOA. A 3 X 2 X 3 factorial design in three independent variables, with two replicates, was applied to observe the effects of the C16 feed composition on the quality parameters. At the optimum C16 feed composition of 5.20%, DOA was produced with a titer of 6.8 °C and a cloud point of 5.0 °C (inside it...
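
    The run structure of a full factorial design like the one described (three levels × two levels × three levels, two replicates) can be enumerated directly. The factor names and level values below are hypothetical placeholders, not the study's actual settings.

```python
from itertools import product

# Enumerate a 3 x 2 x 3 full-factorial design with two replicates.
c16_feed = [5.2, 8.0, 11.2]     # palmitic acid (C16) feed composition, % (hypothetical levels)
top_temp = [1, 2]               # precut-column top temperature, coded levels
bottom_temp = [1, 2, 3]         # main-distiller bottom temperature, coded levels

runs = [(c16, t_top, t_bot, rep)
        for c16, t_top, t_bot in product(c16_feed, top_temp, bottom_temp)
        for rep in (1, 2)]
print(len(runs))  # 3 * 2 * 3 levels x 2 replicates = 36 runs
```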

  11. Application of Micro-cloud point extraction for spectrophotometric determination of Malachite green, Crystal violet and Rhodamine B in aqueous samples

    Science.gov (United States)

    Ghasemi, Elham; Kaykhaii, Massoud

    2016-07-01

    A novel, green, simple and fast method was developed for the spectrophotometric determination of Malachite green, Crystal violet, and Rhodamine B in water samples based on micro-cloud point extraction (MCPE) at room temperature. This is the first report on the application of MCPE to dyes. In this method, to reach the cloud point at room temperature, the MCPE procedure was carried out in brine using Triton X-114 as a non-ionic surfactant. The factors influencing the extraction efficiency were investigated and optimized. Under the optimized conditions, calibration curves were found to be linear in the concentration ranges of 0.06-0.60 mg/L, 0.10-0.80 mg/L, and 0.03-0.30 mg/L, with enrichment factors of 29.26, 85.47 and 28.36, for Malachite green, Crystal violet, and Rhodamine B, respectively. Limits of detection were between 2.2 and 5.1 μg/L.
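
    The calibration and detection-limit figures quoted above come from a standard linear fit. The following sketch shows the usual workflow with invented numbers (the concentrations, absorbances and blank noise are not the paper's data): fit a line to the standards, then estimate the detection limit as 3·s_blank/slope.

```python
# Least-squares calibration line and 3-sigma limit of detection (LOD).
def fit_line(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

conc = [0.06, 0.15, 0.30, 0.45, 0.60]       # mg/L, within the reported linear range
absb = [0.021, 0.052, 0.104, 0.157, 0.209]  # hypothetical absorbances
slope, intercept = fit_line(conc, absb)

s_blank = 0.0006                 # hypothetical blank standard deviation
lod = 3 * s_blank / slope        # detection limit in mg/L
```

    With these invented inputs the LOD comes out near 5 μg/L, the same order as the values reported in the abstract.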

  12. Triton X-114 based cloud point extraction: a thermoreversible approach for separation/concentration and dispersion of nanomaterials in the aqueous phase.

    Science.gov (United States)

    Liu, Jing-fu; Liu, Rui; Yin, Yong-guang; Jiang, Gui-bin

    2009-03-28

    Capable of preserving the sizes and shapes of nanomaterials during phase transfer, Triton X-114 based cloud point extraction provides a general, simple, and cost-effective route for the reversible concentration/separation or dispersion of various nanomaterials in the aqueous phase.

  13. Preconcentration and Determination of Antimony in Drinking Water Bottled by Modified Nano-Alumina

    Directory of Open Access Journals (Sweden)

    M Mohammad Zakizade

    2016-01-01

    Full Text Available Abstract Introduction: Antimony trioxide (Sb2O3) has been utilized as a catalyst in polyethylene terephthalate (PET) production, and studies conducted on bottled water have demonstrated that antimony can leach from PET bottles into drinking water. Methods: In this study, a simple method based on preconcentration/solid-phase extraction was applied to determine trace amounts of antimony in bottled drinking water. Nano-alumina modified with a Schiff base ligand was used for Sb preconcentration. The experiments were performed in a continuous system, and HCl was used as the eluent for Sb ions. Several chemical and flow variables were optimized for quantitative preconcentration and determination of Sb. Atomic absorption spectroscopy was used to determine the Sb concentration. To study the effect of storage conditions on the leaching of Sb from PET plastic, drinking-water bottles were kept under different conditions (room temperature, sunlight and -18˚C). Results: The calibration graph was linear in the range of 0.5 to 15.0 ppm Sb with a detection limit of 0.055 ppm. The sample flow rate was optimized in the range of 1.0-9.0 mL min-1, and Sb ions could be quantitatively eluted at a Vsample:Veluent ratio of 90. Conclusion: The results revealed that the modified nano-alumina is an effective sorbent for Sb ions in water and that 1 M HCl can be used as an appropriate eluent. Maximum leaching of Sb was observed when the bottled drinking water was exposed to sunlight. Keywords: Antimony; Bottled drinking water; Modified alumina; Preconcentration

  14. POINT CLOUD MAPPING METHODS FOR DOCUMENTING CULTURAL LANDSCAPE FEATURES AT THE WORMSLOE STATE HISTORIC SITE, SAVANNAH, GEORGIA, USA

    Directory of Open Access Journals (Sweden)

    T. R. Jordana

    2016-06-01

    Full Text Available Documentation of the three-dimensional (3D) cultural landscape has traditionally been conducted during site visits using conventional photographs, standard ground surveys and manual measurements. In recent years, there have been rapid developments in technologies that produce highly accurate 3D point clouds, including aerial LiDAR, terrestrial laser scanning, and photogrammetric data reduction from unmanned aerial system (UAS) images and hand-held photographs using Structure from Motion (SfM) methods. These 3D point clouds can be precisely scaled and used to conduct measurements of features even after the site visit has ended. As a consequence, it is becoming increasingly possible to collect non-destructive data for a wide variety of cultural site features, including landscapes, buildings, vegetation, artefacts and gardens. As part of a project for the U.S. National Park Service, a variety of data sets have been collected for the Wormsloe State Historic Site, near Savannah, Georgia, USA. In an effort to demonstrate the utility and versatility of these methods at a range of scales, comparisons of the features mapped with different techniques will be discussed with regard to accuracy, data set completeness, cost and ease-of-use.

  15. Visualizing nD Point Clouds as Topological Landscape Profiles to Guide Local Data Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Oesterling, Patrick [Univ. of Leipzig (Germany). Computer Science Dept.; Heine, Christian [Univ. of Leipzig (Germany). Computer Science Dept.; Federal Inst. of Technology (ETH), Zurich (Switzerland). Dept. of Computer Science; Weber, Gunther H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division; Scheuermann, Gerik [Univ. of Leipzig (Germany). Computer Science Dept.

    2012-05-04

    Analyzing high-dimensional point clouds is a classical challenge in visual analytics. Traditional techniques, such as projections or axis-based techniques, suffer from projection artifacts, occlusion, and visual complexity. We propose to split data analysis into two parts to address these shortcomings. First, a structural overview phase abstracts data by its density distribution. This phase performs topological analysis to support accurate and non-overlapping presentation of the high-dimensional cluster structure as a topological landscape profile. Utilizing a landscape metaphor, it presents clusters and their nesting as hills whose height, width, and shape reflect cluster coherence, size, and stability, respectively. A second local analysis phase utilizes this global structural knowledge to select individual clusters or point sets for further, localized data analysis. Focusing on structural entities significantly reduces visual clutter in established geometric visualizations and permits a clearer, more thorough data analysis. In conclusion, this analysis complements the global topological perspective and enables the user to study subspaces or geometric properties, such as shape.
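
    The notion of cluster "stability" above is the persistence of peaks in the density function. A toy 1D analogue, which is not the authors' algorithm but illustrates the idea, sweeps superlevel sets from high to low density: each local maximum gives birth to a component, and when two components meet, the lower peak "dies".

```python
# Peak persistence of a 1D density array via a high-to-low sweep.
def peak_persistence(density):
    n = len(density)
    order = sorted(range(n), key=lambda i: -density[i])
    comp = [None] * n     # peak index representing each processed sample
    pers = {}             # peak index -> [birth density, death density]
    for i in order:
        left = comp[i - 1] if i > 0 else None
        right = comp[i + 1] if i < n - 1 else None
        nbrs = [c for c in (left, right) if c is not None]
        if not nbrs:
            comp[i] = i                     # a new peak (component) is born
            pers[i] = [density[i], None]
        elif len(set(nbrs)) == 1:
            comp[i] = nbrs[0]               # extends an existing component
        else:
            a, b = nbrs                     # two components merge here
            keep, die = (a, b) if density[a] >= density[b] else (b, a)
            pers[die][1] = density[i]       # the lower peak dies
            comp = [keep if c == die else c for c in comp]
            comp[i] = keep
    return {p: tuple(v) for p, v in pers.items()}
```

    For `peak_persistence([1, 3, 2, 5, 4])` the global peak at index 3 never dies, while the secondary peak at index 1 is born at density 3 and dies at 2; its small persistence of 1 would appear as a shallow hill in a landscape profile.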

  16. Application of polyurethane foam as a sorbent for trace metal pre-concentration — A review

    Science.gov (United States)

    Lemos, V. A.; Santos, M. S.; Santos, E. S.; Santos, M. J. S.; dos Santos, W. N. L.; Souza, A. S.; de Jesus, D. S.; das Virgens, C. F.; Carvalho, M. S.; Oleszczuk, N.; Vale, M. G. R.; Welz, B.; Ferreira, S. L. C.

    2007-01-01

    The first publication on the use of polyurethane foam (PUF) for sorption processes dates back to 1970, and soon after the material was applied to separation processes. The application of PUF as a sorbent for solid phase extraction of inorganic analytes for separation and pre-concentration purposes is reviewed. The physical and chemical characteristics of PUF (polyether and polyester type) are discussed and an introduction to the characterization of these sorption processes using different types of isotherms is given. Separation and pre-concentration methods using unloaded and loaded PUF in batch and on-line procedures with continuous flow and flow injection systems are presented. Methods for the direct solid sampling analysis of the PUF after pre-concentration are discussed as well as approaches for speciation analysis. Thermodynamic properties of some extraction processes are evaluated and the interpretation of the determined parameters, such as enthalpy, entropy and Gibbs free energy, in light of the physico-chemical processes is explained.
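
    The thermodynamic parameters mentioned at the end of the abstract are related by the standard expression ΔG = ΔH − TΔS. The sketch below shows the sign conventions with hypothetical ΔH and ΔS values chosen only for illustration, not taken from any reviewed study.

```python
import math

# Gibbs free energy and equilibrium constant for a hypothetical sorption process.
R = 8.314        # gas constant, J mol^-1 K^-1
T = 298.15       # temperature, K
dH = -25_000.0   # hypothetical sorption enthalpy, J/mol (exothermic)
dS = 40.0        # hypothetical entropy change, J mol^-1 K^-1

dG = dH - T * dS              # J/mol; negative => spontaneous sorption
K = math.exp(-dG / (R * T))   # corresponding equilibrium constant
```

    A negative ΔG (here about −36.9 kJ/mol) corresponds to K > 1, i.e. sorption is favored; repeating the calculation at several temperatures is how the enthalpy and entropy terms are separated in practice.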

  17. Combining structure-from-motion derived point clouds from satellites and unmanned aircraft systems images with ground-truth data to create high-resolution digital elevation models

    Science.gov (United States)

    Palaseanu, M.; Thatcher, C.; Danielson, J.; Gesch, D. B.; Poppenga, S.; Kottermair, M.; Jalandoni, A.; Carlson, E.

    2016-12-01

    Coastal topographic and bathymetric (topobathymetric) data with high spatial resolution (1-meter or better) and high vertical accuracy are needed to assess the vulnerability of Pacific Islands to climate change impacts, including sea level rise. According to the Intergovernmental Panel on Climate Change reports, low-lying atolls in the Pacific Ocean are extremely vulnerable to king tide events, storm surge, tsunamis, and sea-level rise. The lack of coastal topobathymetric data has been identified as a critical data gap for climate vulnerability and adaptation efforts in the Republic of the Marshall Islands (RMI). For Majuro Atoll, home to the largest city of RMI, the only elevation dataset currently available is the Shuttle Radar Topography Mission data which has a 30-meter spatial resolution and 16-meter vertical accuracy (expressed as linear error at 90%). To generate high-resolution digital elevation models (DEMs) in the RMI, elevation information and photographic imagery have been collected from field surveys using GNSS/total station and unmanned aerial vehicles for Structure-from-Motion (SfM) point cloud generation. Digital Globe WorldView II imagery was processed to create SfM point clouds to fill in gaps in the point cloud derived from the higher resolution UAS photos. The combined point cloud data is filtered and classified to bare-earth and georeferenced using the GNSS data acquired on roads and along survey transects perpendicular to the coast. A total station was used to collect elevation data under tree canopies where heavy vegetation cover blocked the view of GNSS satellites. A subset of the GPS / total station data was set aside for error assessment of the resulting DEM.
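
    The error assessment described in the last sentences, comparing DEM elevations against withheld GNSS/total-station checkpoints, typically reports RMSE and the linear error at 90% confidence (LE90 ≈ 1.6449 × RMSE under a normal, zero-bias assumption, the same statistic quoted for SRTM above). The residuals below are invented numbers, not the survey's data.

```python
import math

# Vertical-accuracy statistics from DEM-minus-checkpoint residuals.
residuals = [0.05, -0.12, 0.08, -0.03, 0.10, -0.07, 0.02, 0.09]  # meters

rmse = math.sqrt(sum(r * r for r in residuals) / len(residuals))
le90 = 1.6449 * rmse   # linear error at 90% confidence
```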

  18. Preconcentration of uranium ores by radio-metric sorting; Preconcentration des minerais d'uranium par triage radiometrique

    Energy Technology Data Exchange (ETDEWEB)

    Avril, R; Grenier, J [Commissariat a l' Energie Atomique, Fontenay-aux-Roses (France). Centre d' Etudes Nucleaires

    1964-07-01

    The uranium ore chemical treatment plant at Bessines-sur-Gartempe is supplied entirely by the La Crouzille Mining Division of the French Atomic Energy Commission, mainly from the mining districts of Fanay, Margnac and Le Brugeaud in the Limousin province, with the remainder coming from a certain amount of private production in the Massif Central. The feed mixture, which is very heterogeneous, is enriched before being treated chemically. The preconcentration operation is carried out in the division's ore preparation workshop. It consists of a stone-removal operation using continuous radiometric sorting on a belt; this makes it possible to eliminate 50 per cent of the only size fraction thus treated, that from 50 to 120 mm, which represents 15 to 20 per cent of the total tonnage supplied to the plant. (authors)
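
    The figures in the abstract imply a simple overall mass balance: rejecting half of a fraction that makes up 15-20% of the feed removes 7.5-10% of the total plant feed as barren rock before chemical treatment.

```python
# Overall rejection implied by the abstract's sorting figures.
reject_in_fraction = 0.50                # sorter rejects 50% of the 50-120 mm fraction
fraction_share = (0.15, 0.20)            # that fraction is 15-20% of total feed
overall_reject = [reject_in_fraction * f for f in fraction_share]
# overall_reject: 7.5% to 10% of the total plant feed
```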

  19. Determination and preconcentration of natural and radio-cesium from aqueous solution

    International Nuclear Information System (INIS)

    Gueclue, K.; Apak, R.; Tuetem, E.; Atun, G.

    2004-01-01

    A modified atomic emission spectrometric (AES) method to determine cesium(I), based on the measurement of emission intensity at 455.5 nm with a limit of quantitation (LOQ) of 5.5 mg/l and a linear range up to 100 mg/l, is reported. In order to increase the sensitivity and lower the detection limits, potential sorbents were investigated for preconcentrating Cs from natural waters. Among the various ion-exchange materials synthesized, potassium hexanitrocobaltate (PHNCo) yielded the highest capacity for 137Cs-tagged Cs+ solutions, as measured by gamma-spectrometry with a HPGe detector, showing its potential as a cesium preconcentration sorbent. As an alternative to AES determination, the PHNCo sorbent may be used for Cs+ collection from radiocesium-tagged solutions, and the retained activity in the dry solid exchanger determined by gamma-spectrometry. (author)

  20. Thermoresponsive Poly(2-Oxazoline) Molecular Brushes by Living Ionic Polymerization: Modulation of the Cloud Point by Random and Block Copolymer Pendant Chains

    KAUST Repository

    Zhang, Ning; Luxenhofer, Robert; Jordan, Rainer

    2012-01-01

    random and block copolymers. Their aqueous solutions displayed a distinct thermoresponsive behavior as a function of the side-chain composition and sequence. The cloud point (CP) of MBs with random copolymer side chains is a linear function