WorldWideScience

Sample records for sals image processing

  1. Synthetic Aperture Imaging Polarimeter: Postprint

    Science.gov (United States)

    2010-02-01

    The mechanical design of the SAIP prototype revolves around the concept of a modular array. The modular aspect allows for the array to be built in … imagery of the source. The top row images are of the actual fringe pattern incident on the SAIP prototype array. These pictures were taken through the … processed images associated with each of the inputs. The results demonstrated that the SAIP prototype array works in conjunction with the algorithm

  2. The C-2 derivatives of salvinorin A, ethoxymethyl ether Sal B and β-tetrahydropyran Sal B, have anti-cocaine properties with minimal side effects.

    Science.gov (United States)

    Ewald, Amy W M; Bosch, Peter J; Culverhouse, Aimee; Crowley, Rachel Saylor; Neuenswander, Benjamin; Prisinzano, Thomas E; Kivell, Bronwyn M

    2017-08-01

    Kappa-opioid receptor (KOPr) agonists have pre-clinical anti-cocaine and analgesic effects. However, side effects including sedation, dysphoria, aversion, anxiety and depression limit their therapeutic development. The unique structure of salvinorin A has been used to develop longer acting KOPr agonists. We evaluate two novel C-2 analogues of salvinorin A, ethoxymethyl ether Sal B (EOM Sal B) and β-tetrahydropyran Sal B (β-THP Sal B) alongside U50,488 for their ability to modulate cocaine-induced behaviours and side effects, pre-clinically. Anti-cocaine properties of EOM Sal B were evaluated using the reinstatement model of drug seeking in self-administering rats. EOM Sal B and β-THP Sal B were evaluated for effects on cocaine-induced hyperactivity, spontaneous locomotor activity and sucrose self-administration. EOM Sal B and β-THP Sal B were evaluated for aversive, anxiogenic and depressive-like effects using conditioned place aversion (CPA), elevated plus maze (EPM) and forced swim tests (FSTs), respectively. EOM Sal B (0.1, 0.3 mg/kg, intraperitoneally (i.p.)) dose dependently attenuated drug seeking, and EOM Sal B (0.1 mg/kg, i.p.) and β-THP Sal B (1 mg/kg, i.p.) attenuated cocaine-induced hyperactivity. No effects on locomotor activity, open arm times (EPM) or swimming behaviours (FST) were seen with EOM (0.1 or 0.3 mg/kg, i.p.) or β-THP Sal B (1 or 2 mg/kg, i.p.). However, β-THP Sal B decreased time spent in the drug-paired chamber. EOM Sal B is more potent than Sal A and β-THP Sal B in reducing drug-seeking behaviour with fewer side effects. EOM Sal B showed no effects on sucrose self-administration (0.1 mg/kg), locomotor, depressive-like, aversive-like or anxiolytic effects.

  3. Modeling and Analysis of Asynchronous Systems Using SAL and Hybrid SAL

    Science.gov (United States)

    Tiwari, Ashish; Dutertre, Bruno

    2013-01-01

    We present formal models and results of formal analysis of two different asynchronous systems. We first examine a mid-value select module that merges the signals coming from three different sensors that are each asynchronously sampling the same input signal. We then consider the phase locking protocol proposed by Daly, Hopkins, and McKenna. This protocol is designed to keep a set of non-faulty (asynchronous) clocks phase locked even in the presence of Byzantine-faulty clocks on the network. All models and verifications have been developed using the SAL model checking tools and the Hybrid SAL abstractor.
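The mid-value select logic that the first model formalizes is simple to state: given three asynchronously sampled readings of the same signal, take the middle one, so a single faulty sensor can never drag the output outside the range of the two good ones. A minimal sketch of just that selection rule (the SAL models themselves verify the timing and fault assumptions this toy ignores):

```python
def mid_value_select(a: float, b: float, c: float) -> float:
    """Return the middle of three sensor readings.

    The output always lies between the two non-faulty values, so one
    arbitrarily wrong (e.g. Byzantine) sensor cannot corrupt it.
    """
    return sorted((a, b, c))[1]

# One wildly faulty reading is masked by the other two
merged = mid_value_select(2.0, 2.1, 999.0)  # -> 2.1
```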

  4. Preparation and infrared spectra of the Schiff base solid complexes [UO2(sal-o-phdn)(H2O)] and [UO2(sal-o-phdn)(Et3N)] (sal-o-phdn = N,N'-o-phenylenebis(salicylideneiminato))

    International Nuclear Information System (INIS)

    Sadeek, S.A.; Teleb, S.M.; Al-Kority, A.M.

    1993-01-01

    In the present communication, we report the preparation of two new related complexes, [UO2(sal-o-phdn)(H2O)] and [UO2(sal-o-phdn)(Et3N)], where sal-o-phdn = N,N'-o-phenylenebis(salicylideneiminato); here U(VI) is seven-coordinate. The infrared spectra of these two complexes are recorded and assigned. (author). 10 refs., 1 tab

  5. Study of Wide Swath Synthetic Aperture Ladar Imaging Technology

    Directory of Open Access Journals (Sweden)

    Zhang Keshu

    2017-02-01

    Combining synthetic-aperture imaging with coherent-light detection, the weak-signal identification capability of Synthetic Aperture Ladar (SAL) reaches the photon level, and the image resolution exceeds the diffraction limit of the telescope, yielding high-resolution images irrespective of range. This paper introduces SAL, including its development path, its technical characteristics, and the restrictions on imaging swath. On this basis, we propose scanning-mode SAL technology to extend the swath. By analyzing the scanning operation mode and the signal model, the paper argues that the scanning mode will be the developmental trend of SAL technology. The paper also presents flight demonstrations of SAL and imaging results for remote targets, showing the potential of SAL in long-range, high-resolution, scanning-imaging applications. The technology and theory of scanning-mode SAL compensate for the swath and operating-efficiency limitations of current SAL, providing a scientific foundation for SAL systems applied to wide-swath, high-resolution earth observation and for ISAL systems applied to imaging space targets.
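As a rough illustration of why synthetic-aperture resolution is range-independent, the textbook cross-range resolution of a synthetic aperture is ρ ≈ λ/(2Δθ): it depends only on wavelength and the synthesized angular aperture, not on distance. The numbers below are hypothetical, not values from the paper:

```python
def sal_cross_range_resolution(wavelength_m: float, synthetic_angle_rad: float) -> float:
    """Textbook cross-range resolution of a synthetic aperture:
    rho = lambda / (2 * delta_theta).

    Range does not appear: a larger synthesized angle, not a closer
    target, is what sharpens the image.
    """
    return wavelength_m / (2.0 * synthetic_angle_rad)

# Hypothetical values: a 1.55 um ladar synthesizing a 1 mrad angular aperture
rho = sal_cross_range_resolution(1.55e-6, 1e-3)  # -> 7.75e-4 m, at any range
```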

  6. Synthesis, structural, thermal and optical studies of rare earth coordinated complex: Tb(Sal)3Phen

    Energy Technology Data Exchange (ETDEWEB)

    Kaur, Gagandeep; Dwivedi, Y. [Laser and Spectroscopy Laboratory, Department of Physics, Banaras Hindu University, Varanasi 221005 (India); Rai, S.B., E-mail: sbrai49@yahoo.co.in [Laser and Spectroscopy Laboratory, Department of Physics, Banaras Hindu University, Varanasi 221005 (India)

    2011-11-01

    Highlights: • A rare-earth coordinated complex, Tb(Sal)3Phen, was synthesized in crystalline phases. • Enhanced luminescence of Tb3+ was observed in the complex on 355 nm excitation. • The fluorescence enhancement is due to efficient energy transfer from Sal to Tb3+. • An observed increase in the lifetime of Tb3+ is due to encapsulation in the Sal/Phen network. • The present system is a deserving candidate for a luminescent solar collector when coupled with solar cells. - Abstract: Complexes of salicylic acid (Sal) and 1,10-phenanthroline (Phen) coordinated with the terbium ion (Tb3+) were synthesized in crystalline phases. The structural characterization of the lanthanide complex was made using FT-IR, NMR (1H and 13C) and XRD techniques. These measurements confirm the formation of the Tb(Sal)3Phen complex structure. The thermal aspects of the complex were examined using DTA and TGA techniques. An enhancement in the luminescence intensity of the Tb3+ ion bands was observed in the Tb(Sal)3Phen complex as compared to TbCl3 crystals on 355 nm laser excitation. The enhancement is attributed to an efficient energy-transfer process from Sal to Tb3+ ions. This is also confirmed by time-resolved photoluminescence spectroscopy, with an increase in the lifetime of Tb3+ ions due to encapsulation in the Sal/Phen network. The system in itself can be a deserving candidate for a luminescent solar collector material when coupled with solar cells.

  7. Salt in cheese: various interactions.

    Directory of Open Access Journals (Sweden)

    Juan Sabastián Ramírez-Navas

    2016-12-01

    The aim of this work was to analyze the effect of salt on some physical properties of cheese, its interaction with cheese components, and the effect of sodium content on consumer health. Salt is an important ingredient, since it largely determines product quality and consumer acceptance. The salting of cheese influences quality through its effects on composition, microbial growth and enzymatic activity. It exerts a significant influence on rheology and texture, as well as on ripening, mainly through its effects on water activity. Salt levels in cheese range from approximately 0.6% w/w to approximately 7% w/w. Because cheese consumption is increasing worldwide, importance should be given to reducing salt without affecting consumption. Among the strategies proposed to this end is the partial substitution of salt by other compounds. The drawback of substituting NaCl, however, is its effect on the sensory properties, chemical composition, proteolysis and texture of the cheese. Another interesting alternative for replacing NaCl is the use of membrane technology to obtain a salt-rich permeate from whey; the addition of these salts in cheesemaking produces low-sodium cheeses with good texture.

  8. “ÁGUA VIRA SAL LÁ NA SALINA” (“WATER TURNS TO SALT THERE IN THE SALTWORKS”): THE GLOSSARY OF SALT TERMS IN RIO GRANDE DO NORTE FROM A SOCIOTERMINOLOGICAL PERSPECTIVE

    Directory of Open Access Journals (Sweden)

    Moisés Batista da Silva

    2016-01-01

    This paper presents results of research on the terms used in the salt industry in three municipalities in the salt-producing region of Rio Grande do Norte. First, we discuss the different lexical sciences as studied by Boutin-Quesnel (1985), Barbosa (1994, 1995) and Barros (2004), with particular emphasis on the theoretical-methodological orientations of Socioterminology (FAULSTICH, 1995, 1996, 1998, 2006). We then describe the procedures of both the field methodology and the methodology used to organize the Glossário da Terminologia do Sal - GLOSSAL (SILVA, 2007). Next, we present a sample of the entries, the analyses made from this repertoire and, finally, concluding remarks on the linguistic facts observed in the glossary. The research is thus justified by making available a terminographic product intended not only for specialists and researchers in the lexical sciences, but also for the general public and for those interested in deepening their studies of the terminology of salt. Keywords: Salt. Salt industry. Terminology. Socioterminology. Glossary.

  9. Stakeholder perceptions of the decision-making process on marine biodiversity conservation on Sal Island (Cape Verde)

    Directory of Open Access Journals (Sweden)

    Jorge Ramos

    2011-01-01

    On Sal Island (Cape Verde) there is growing involvement, will and investment in the creation of tourism synergies. However, much of the economic potential of the island lies submerged in the sea: its intrinsic biodiversity. For this reason, and in order to balance environmental safety and human pressure, a strategy addressing both diving and fishing has been developed. That strategy includes the deployment of several artificial reefs (ARs) around the island. In order to allocate demand between diving and fishing, we developed a socio-economic research approach addressing biodiversity and reefs (both natural and artificial) and collected expectations from AR users by means of a questionnaire. A project is hypothesized in which management measures aimed at marine biodiversity conservation are proposed. Using the analytic hierarchy process (AHP), stakeholders' perceptions of best practice for marine biodiversity conservation on Sal Island were examined. The results showed that submerging obsolete structures in rocky or mixed areas has high potential but does not gather consensus. Overall, limiting activities appears to be the preferred management option to consider in the future.
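The analytic hierarchy process used in the study derives priority weights for the options from a matrix of pairwise comparisons. A minimal sketch using the common row geometric-mean approximation; the 3x3 matrix of judgments below is hypothetical, not the study's data:

```python
import numpy as np

def ahp_priorities(pairwise):
    """Approximate AHP priority weights via the row geometric-mean method."""
    pairwise = np.asarray(pairwise, dtype=float)
    gm = pairwise.prod(axis=1) ** (1.0 / pairwise.shape[0])
    return gm / gm.sum()

# Hypothetical judgments: option A moderately preferred to B, strongly to C;
# reciprocals below the diagonal, as AHP requires
M = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 2.0],
     [1/5, 1/2, 1.0]]
weights = ahp_priorities(M)  # weights sum to 1 and rank A > B > C
```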

  10. Memory, Art and Mourning: the Case of the 'Salón del Nunca Más' of Granada (Antioquia, Colombia)

    Directory of Open Access Journals (Sweden)

    Elkin Rubiano Pinilla

    2017-07-01

    This article examines the work on collective memory produced in the 'Salón del Nunca Más', located in Granada (Antioquia). In this rural town, the 'Salón' has articulated different practices that, along with the construction of memory, have allowed survivors of violence and family members of killed and disappeared individuals to symbolize loss by means of public rituals. The article also explores the visual settings that frame the event, considering not only the exposure of violent events but also the practices of the local community: what happens in the 'Salón', the journalistic coverage (written press), the documentary photography (Jesús Abad Colorado) and the artistic work (Erika Diettes). For this purpose, archival material, an interdisciplinary historical approach, psychoanalysis, image and communication theories, and interviews are drawn on throughout the article.

  11. Feasibility of salt reduction in processed foods in Argentina

    Directory of Open Access Journals (Sweden)

    Daniel Ferrante

    2011-02-01

    OBJECTIVE: To assess an intervention to reduce salt intake based on an agreement with the food industry. METHODS: Salt content was measured in bakery products through a national survey and biochemical analyses. Low-salt bread was evaluated by a panel of taste testers to determine whether a reduced-salt bread could remain undetected. French bread accounts for 25% of the total salt intake in Argentina; hence, reducing its salt concentration from 2% to 1.4% was proposed and tested. A crossover trial was conducted to evaluate the reduction in urinary sodium and blood pressure in participants during consumption of the low-salt bread compared with ordinary bread. RESULTS: Average salt content in bread was 2%. This study evaluated low-salt bread containing 1.4% salt. This reduction remained mostly undetected by the panels of taste testers. In the crossover trial, which included 58 participants, a reduction of 25 milliequivalents in 24-hour urinary sodium excretion, a reduction in systolic blood pressure of 1.66 mmHg, and a reduction in diastolic blood pressure of 0.76 mmHg were found during the low-salt bread intake. CONCLUSIONS: The study showed that dietary salt reduction through a reduction of the salt content in bread was feasible and well accepted in the population studied. Although the effects on urinary sodium and blood pressure were moderate, a countrywide intervention could have a greater public health impact.
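The expected size of the effect can be sanity-checked from the abstract's own figures: if bread supplies 25% of total salt intake and its salt concentration drops from 2% to 1.4%, total intake falls by 0.25 × (2 − 1.4)/2 = 7.5%. A quick sketch (the function name is ours; the numbers are from the abstract):

```python
def total_intake_reduction(bread_share, old_salt_pct, new_salt_pct):
    """Fraction of total salt intake removed by reformulating bread alone."""
    return bread_share * (old_salt_pct - new_salt_pct) / old_salt_pct

# Bread contributes 25% of intake; its salt content drops from 2% to 1.4%
reduction = total_intake_reduction(0.25, 2.0, 1.4)  # -> 0.075, i.e. 7.5%
```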

  12. Characterisation of SalRAB, a salicylic acid-inducible, positively regulated efflux system of Rhizobium leguminosarum bv. viciae 3841.

    Directory of Open Access Journals (Sweden)

    Adrian J Tett

    Salicylic acid is an important signalling molecule in plant-microbe defence and symbiosis. We analysed the transcriptional responses of the nitrogen-fixing plant symbiont Rhizobium leguminosarum bv. viciae 3841 to salicylic acid. Two MFS-type multicomponent efflux systems were induced in response to salicylic acid: rmrAB and the hitherto undescribed system salRAB. Based on sequence similarity, salA and salB encode a membrane fusion protein and an inner membrane protein, respectively. salAB are positively regulated by the LysR regulator SalR. Disruption of salA significantly increased the sensitivity of the mutant to salicylic acid, while disruption of rmrA did not. A salA/rmrA double mutant did not have increased sensitivity relative to the salA mutant. Pea plants nodulated by salA or rmrA mutant strains did not have altered nodule numbers or nitrogen fixation rates, consistent with weak expression of salA in the rhizosphere and in nodule bacteria. However, BLAST analysis revealed seventeen putative efflux systems in Rlv3841, and several of these were highly differentially expressed during rhizosphere colonisation, host infection and bacteroid differentiation. This suggests they have an integral role in symbiosis with host plants.

  13. The Minimum Wage Debate in Brazil

    Directory of Open Access Journals (Sweden)

    Sara Eloisa Vilmar da Silva Lemos

    2011-04-01

    In a context of economic stability, the minimum wage can once again play its original role of trying to guarantee workers a wage that is the minimum for their survival. For an analysis of the role of the minimum wage in the Brazilian labour market, a review of the existing literature on the subject seems a good starting point.

  14. Worldwide strategies for reducing salt/sodium in bread

    OpenAIRE

    Mónica Valverde Guillén; Jennifer Picado Pérez

    2013-01-01

    Objective: To provide information on worldwide actions to reduce salt/sodium in bread, generating data useful for implementing strategies that seek to lower salt/sodium consumption from baked products. Method: A search was conducted in the Binass, PubMed and SciELO databases and in governmental institutions. The keywords were: sodium content in bread, less sodium in bread, actions to reduce salt in bread, co...

  15. SalB inactivation modulates culture supernatant exoproteins and affects autolysis and viability in Enterococcus faecalis OG1RF.

    Science.gov (United States)

    Shankar, Jayendra; Walker, Rachel G; Wilkinson, Mark C; Ward, Deborah; Horsburgh, Malcolm J

    2012-07-01

    The culture supernatant fraction of an Enterococcus faecalis gelE mutant of strain OG1RF contained elevated levels of the secreted antigen SalB. Using difference gel electrophoresis (DIGE), the salB mutant was shown to possess a unique complement of exoproteins. Differentially abundant exoproteins were identified using matrix-assisted laser desorption ionization-time of flight (MALDI-TOF) mass spectrometry. Stress-related proteins, including DnaK, a Dps family protein, SOD, and NADH peroxidase, were present in greater quantity in the OG1RF salB mutant culture supernatant. Moreover, several proteins involved in cell wall synthesis and cell division, including D-Ala-D-Lac ligase and EzrA, were present in reduced quantity in OG1RF salB relative to the parent strain. The salB mutant displayed reduced viability and anomalous cell division, and these phenotypes were exacerbated in a gelE salB double mutant. An epistatic relationship between gelE and salB was not identified with respect to the increased autolysis and cell morphological changes observed in the salB mutant. SalB was purified as a six-histidine-tagged protein to investigate peptidoglycan hydrolytic activity; however, activity was not evident. High-pressure liquid chromatography (HPLC) analysis of reduced muropeptides from peptidoglycan digested with mutanolysin revealed that the salB mutant and OG1RF were indistinguishable.

  16. Molecular and neurochemical substrates of the audiogenic seizure strains: The GASH:Sal model.

    Science.gov (United States)

    Prieto-Martín, Ana I; Aroca-Aguilar, J Daniel; Sánchez-Sánchez, Francisco; Muñoz, Luis J; López, Dolores E; Escribano, Julio; de Cabo, Carlos

    2017-06-01

    Animal models of audiogenic epilepsy are useful tools to understand the mechanisms underlying human reflex epilepsies. There is accumulating evidence regarding the behavioral, anatomical, electrophysiological, and genetic substrates of audiogenic seizure strains, but aspects of their neurochemical basis remain to be elucidated. Previous studies have shown the involvement of γ-aminobutyric acid (GABA) in audiogenic seizures. The aim of our research was to clarify the role of the GABAergic system in the generation of epileptic seizures in the genetic audiogenic seizure-prone hamster (GASH:Sal) strain. We studied the K+/Cl- cotransporter KCC2 and the β2- and β3-GABAA-type receptor (GABAAR) subunit expression in the GASH:Sal, both at rest and after repeated sound-induced seizures, in different brain regions using the Western blot technique. We also sequenced the coding region of the KCC2 gene in both wild-type and GASH:Sal hamsters. Lower expression of KCC2 protein was found in the GASH:Sal compared with controls at rest in several brain areas: hippocampus, cortex, cerebellum, hypothalamus, pons-medulla, and mesencephalon. Repeated induction of seizures caused a decrease in KCC2 protein content in the inferior colliculus and hippocampus and an increase in the pons-medulla. Compared to controls, the basal β2-GABAAR subunit in the GASH:Sal was overexpressed in the inferior colliculus, the rest of the mesencephalon, and the cerebellum, whereas basal β3 subunit levels were lower in the inferior colliculus and the rest of the mesencephalon. Repeated seizures increased β2 both in the inferior colliculus and in the hypothalamus, and β3 in the hypothalamus. No differences in the KCC2 gene-coding region were found between GASH:Sal and wild-type hamsters. These data indicate that GABAergic system functioning is impaired in the GASH:Sal strain, and repeated seizures seem to aggravate this dysfunction. These results have potential clinical

  17. The evolutionary history of the SAL1 gene family in eutherian mammals

    Directory of Open Access Journals (Sweden)

    Callebaut Isabelle

    2011-05-01

    Background: SAL1 (salivary lipocalin) is a member of the OBP (Odorant Binding Protein) family and is involved in chemical sexual communication in the pig. SAL1 and its relatives may be involved in pheromone and olfactory receptor binding and in pre-mating behaviour. The evolutionary history of, and the selective pressures acting on, SAL1 and its orthologous genes have not yet been exhaustively described. The aim of the present work was to study the evolution of these genes, to elucidate the role of selective pressures in their evolution and the consequences for their functions. Results: Here, we present the evolutionary history of the SAL1 gene and its orthologous genes in mammals. We found that (1) SAL1 and its related genes arose in eutherian mammals, with lineage-specific duplications in rodents, horse and cow, and were lost in human, mouse lemur, bushbaby and orangutan; (2) the evolution of the duplicated genes of horse, rat, mouse and guinea pig is driven by concerted evolution, with extensive gene conversion events in mouse and guinea pig, and by positive selection acting mainly on paralogous genes in horse and guinea pig; (3) positive selection was detected for amino acids involved in pheromone binding and amino acids putatively involved in olfactory receptor binding; (4) positive selection was also found by lineage, indicating a species-specific strategy for amino acid selection. Conclusions: This work provides new insights into the evolutionary history of SAL1 and its orthologs. On the one hand, some genes are subject to concerted evolution and to an increase in dosage, suggesting the need for homogeneity of sequence and function in certain species. On the other hand, positive selection plays a role in the diversification of the functions of the family and in lineages, suggesting adaptive evolution, with possible consequences for speciation and for the reinforcement of prezygotic barriers.

  18. Simon van der Stel and Constantia will remain in my thoughts. The ...

    African Journals Online (AJOL)

    Simon van der Stel and Constantia will remain in my thoughts. The minor characters will not fade, and above all, for everyone who plays a role in the gripping story, the question of human destiny is posed, a question which the writer has Monica Dacosta answer: "Dutchmen never speak about destiny" [ ...

  19. Exploration of Shorea robusta (Sal) seeds, kernels and their oil

    Directory of Open Access Journals (Sweden)

    Shashi Kumar C.

    2016-12-01

    Physical, mechanical, and chemical properties of Shorea robusta seed with wing, seed without wing, and kernel were investigated in the present work. The physico-chemical composition of sal oil was also analyzed. The physico-mechanical properties and proximate composition of seed with wing, seed without wing, and kernel were studied at moisture contents of 9.50% (w.b.), 9.54% (w.b.), and 12.14% (w.b.), respectively. The results show that the moisture content of the kernel was the highest compared to seed with wing and seed without wing. The sphericity of the kernel was closer to that of a sphere than that of seed with wing and seed without wing. The hardness of the seed with wing (32.32 N/mm) and seed without wing (42.49 N/mm) was lower than that of the kernels (72.14 N/mm). The proximate composition, namely moisture, protein, carbohydrate, oil, crude fiber, and ash content, was also determined. The kernel (30.20% w/w) contains a higher oil percentage than seed with wing and seed without wing. The scientific data from this work are important for the design of equipment and processes for post-harvest value addition of sal seeds.
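Sphericity in seed-engineering studies is commonly computed as the geometric mean of the three principal axial dimensions divided by the longest axis; whether this paper uses that exact definition is an assumption, and the dimensions below are hypothetical, not its measurements:

```python
def sphericity(length_mm: float, width_mm: float, thickness_mm: float) -> float:
    """Common grain/seed sphericity: geometric mean of the three axial
    dimensions over the longest axis. A perfect sphere gives 1.0.
    (Assumed definition; illustrative only.)
    """
    geometric_mean = (length_mm * width_mm * thickness_mm) ** (1.0 / 3.0)
    return geometric_mean / max(length_mm, width_mm, thickness_mm)

# Hypothetical kernel dimensions in mm
phi = sphericity(14.0, 9.0, 8.0)  # closer to 1.0 means closer to a sphere
```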

  20. Erigeron mancus (Asteraceae) density as a baseline to detect future climate change in La Sal Mountain habitats

    Science.gov (United States)

    James F. Fowler; Barb Smith

    2010-01-01

    The La Sal Daisy, Erigeron mancus Rydb., is endemic to timberline and alpine habitats of the La Sal Mountains in Utah, an insular, laccolithic mountain range on the Colorado Plateau in southeastern Utah. It occurs in alpine herbaceous communities from timberline to the crestline of the La Sals. Our primary goal in this study was to measure basic population biology...

  1. Striving for Diversity, Accessibility and Quality: Evaluating SiSAL Journal

    Directory of Open Access Journals (Sweden)

    Jo Mynard

    2014-06-01

    After establishing a journal, it is important to evaluate its progress to ensure that the principles that underpin its existence continue to be a priority. In this article, the author reports on measures that were used to evaluate Studies in Self-Access Learning (SiSAL) Journal. The research was designed to investigate the three principles that the journal values: diversity, accessibility and quality. The results identified some successes, such as accessibility and favourable perceptions of SiSAL Journal's quality. However, the results also identified areas that could be improved to further increase diversity and to encourage submissions from more authors based in different locations.

  2. A comparison of EJB 3.0 and EJB 2.0

    OpenAIRE

    Karnigins, Vadims

    2013-01-01

    This bachelor's thesis, "A comparison of EJB 3.0 and EJB 2.0", examines the differences between the EJB 2.0 and EJB 3.0 technologies: the improvements in the third version compared with the second, the differences in the programming approaches dictated by EJB 3.0 compared with EJB 2.0, and the differences between the components of versions 2 and 3 of the EJB technology. The practical part of the thesis covers the migration of a specific Java EE application from the EJB 2.0 to the EJB 3.0 architecture. The application is a component of a bank card system...

  3. The Safeguards Analytical Laboratory (SAL) in the Agency's safeguards measurement system activity in 1990

    International Nuclear Information System (INIS)

    Bagliano, G.; Cappis, J.; Deron, S.; Parus, J.L.

    1991-05-01

    The IAEA applies safeguards at the request of a Member State to all or part of its nuclear materials. The verification of nuclear material accountability still constitutes the fundamental method of control, although sealing and surveillance procedures play an important, complementary and increasing role in safeguards. A small fraction of samples must still be analyzed at independent analytical laboratories using conventional Destructive Analysis (DA) methods of the highest accuracy, in order to verify that small potential biases in the declarations of the State are not masking protracted diversions of significant quantities of fissile materials. The Safeguards Analytical Laboratory (SAL) is operated by the Agency's Laboratories at Seibersdorf to provide such off-site analytical services to the Department of Safeguards and its inspectors, in collaboration with the Network of Analytical Laboratories (NWAL) of the Agency. In recent years, SAL and the Safeguards DA services have become more directly involved in the qualification and utilization of on-site analytical instrumentation such as K-edge X-ray absorptiometers and quadrupole mass spectrometers. The nature and origin of the samples analyzed, the measurements usually requested by the IAEA inspectors, and the methods and analytical techniques available at SAL and at the NWAL, with the performance achieved during the past years, are described and discussed in several documents. This report evaluates, in comparison with 1989, the volume and quality of the analyses reported in 1990 by SAL and by the NWAL in reply to requests from IAEA Safeguards inspectors. The report also summarizes on-site DA developments, support provided by SAL to the Division of Safeguards Operation, and special training courses for the IAEA Safeguards inspectors. 55 refs, 7 figs, 15 tabs

  4. Marx and the critique of the wage form

    Directory of Open Access Journals (Sweden)

    Carlos Prado

    2011-09-01

    The aim of this article is to describe, from a reading of "Capital", Marx's critique of the wage form. This category is fundamental to concealing and mystifying the relations between capital and labour. Through a fierce critique of political economy, Marx argues that the fetish of the wage form serves to hide the capitalist's appropriation of surplus value. It is therefore a form fundamental to the maintenance of capitalist relations of production.

  5. SalMar ASA: Strategic analysis and valuation

    OpenAIRE

    Augenstein, Daniel

    2017-01-01

    The objective of this thesis is to estimate the theoretical value of equity for SalMar ASA and thereby the value per share at 27.11.2017. Fundamental valuation through a two-stage discounted cash flow model is chosen as the main method, while a valuation using comparable firms is performed as a supplement. In the fundamental valuation I have estimated the enterprise value by discounting the expected future cash flows to present value. To find the value of equity, the net-intere...

  6. Preparation and validation of a large size dried spike: Batch SAL-9951

    International Nuclear Information System (INIS)

    Doubek, N.; Jammet, G.; Zoigner, A.

    1991-02-01

To determine uranium and plutonium concentrations using isotope dilution mass spectrometry, weighed aliquots of a synthetic mixture containing about 2 mg of Pu (with a Pu-239 abundance of about 98%) and 37 mg of U (with a U-235 enrichment of about 19%) have been prepared by the IAEA-SAL and verified by three analytical laboratories: NMCC-SAL, OEFZS and IAEA-SAL. They will be used to spike samples of concentrated spent fuel solutions with a high burnup and a low U-235 enrichment. Certified Reference Materials Pu-NBL-126, natural U-NBL-112A and 93% enriched U-NBL-116 were used to prepare a stock solution containing about 3.2 mg/ml of Pu and 64.3 mg/ml of 18.7% enriched U. Before shipment to the reprocessing plant, aliquots of the stock solution are dried to give Large Size Dried (LSD) Spikes which resist the shocks encountered during transportation, so that they can readily be recovered quantitatively at the plant. This paper describes the preparation and validation of a fifth batch of LSD spike, intended to be used as a common spike by the plant operator and the national and IAEA inspectorates. 7 refs, 6 tabs
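The verification method used here, isotope dilution mass spectrometry, recovers the amount of an element in a sample from the isotope ratio measured after blending in a spike of known, differently enriched composition. The sketch below is a simplified two-isotope illustration with invented abundances and amounts, not the actual SAL spike data or procedure:

```python
def idms_amount(n_spike, a_ref_sample, a_spk_sample, a_ref_spike, a_spk_spike, r_blend):
    """Moles of element in the sample, from the measured blend isotope ratio.

    a_ref_* / a_spk_*: abundances of the reference isotope (dominant in the
    sample) and the spike isotope (enriched in the spike); r_blend is the
    measured reference/spike isotope ratio of the spiked blend.
    All numbers in this sketch are illustrative.
    """
    return n_spike * (a_ref_spike - r_blend * a_spk_spike) / (
        r_blend * a_spk_sample - a_ref_sample)

# Forward simulation: 5 mol of sample element blended with 1 mol of spike
n_sample, n_spike = 5.0, 1.0
r_blend = (n_sample * 0.993 + n_spike * 0.2) / (n_sample * 0.007 + n_spike * 0.8)
# Inverting the measured ratio recovers the sample amount exactly
assert abs(idms_amount(n_spike, 0.993, 0.007, 0.2, 0.8, r_blend) - n_sample) < 1e-9
```

The inversion only requires that the spike and sample have sufficiently different isotopic compositions, which is why the spike is prepared from enriched reference materials.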

  7. EFFECTS OF LAND-USE CHANGE ON THE PROPERTIES OF TOP SOIL OF DECIDUOUS SAL FOREST IN BANGLADESH

    Directory of Open Access Journals (Sweden)

    M. A. Kashem

    2016-08-01

This study examined the effects of land-use change on the physico-chemical properties of top soil in the deciduous Sal forest of Bangladesh. Relatively less disturbed Sal (Shorea robusta Roxb. ex Gaertn.) forest stands and nearby stands that were converted into Acacia (Acacia auriculiformis Benth.) plantation and pineapple (Ananas comosus (L.) Merr.) cultivation were selected to examine the effects of land-use change on soil properties. For each land-use type, soil samples were collected from 4 locations, 50 m distant from each other, as replicates. Soil samples were collected at 0-5, 5-10, and 10-15 cm depths. Soil moisture content, conductivity, pH, organic C, total N and total P were determined as soil properties. Leaf litter of Sal, Acacia and pineapple was incubated for 90 and 180 days in independent, identical soils in order to examine the effects of the plant species, through leaf litter, on the soil chemical nutrient (N and P) status. Data showed that soil moisture content, conductivity and pH were significantly affected by land use but not by depth. However, soil organic C was affected by both land-use type (P < 0.02) and soil depth (P < 0.003), although no significant interactions appeared between these two factors. Soil total N and P did not differ between land-use types but differed with depth, and N and P contents decreased with increasing depth. Rates of nutrients (N and P) released from Sal, Acacia and pineapple did not differ significantly during incubation. Results of the present study reveal that properties of the top soil of the Madhupur Sal forest differ in their responses to the varying land uses. The findings of this study are thus relevant for the sustainable management of deciduous Sal forest ecosystems.

  8. Reducing salt intake to prevent hypertension and cardiovascular disease Reducción del consumo de sal para prevenir la hipertensión y las enfermedades cardiovasculares

    Directory of Open Access Journals (Sweden)

    Feng J. He

    2012-10-01

There is compelling evidence that dietary salt intake is the major cause of raised blood pressure (BP) and that a reduction in salt intake from the current level of ≈9-12 g/day in most countries to the recommended level of <5 g/day lowers BP. A further reduction to 3-4 g/day has a greater effect, and lower population targets for salt intake should continue to be considered. Cohort studies and clinical trials have shown that lower salt intake is associated with a reduced risk of cardiovascular disease, and salt reduction is one of the most cost-effective measures for improving public health worldwide. In the Region of the Americas, a salt intake of >9 g/day is highly prevalent. Sources of salt in the diet vary hugely among countries; in developed countries, 75% of salt comes from processed foods, whereas in developing countries such as parts of Brazil, 70% comes from salt added during cooking or at the table. To reduce population salt intake, the food industry needs to implement a gradual and sustained reduction in the amount of salt added to foods in developed countries. In developing countries, a public health campaign plays a more important role in encouraging consumers to use less salt, coupled with widespread replacement of salt with substitutes that are low in sodium and high in potassium. Numerous countries in the Americas have started salt reduction programs. The challenge now is to engage other countries. A reduction in population salt intake will result in a major improvement in public health along with major health-related cost savings.

  9. PC image processing

    International Nuclear Information System (INIS)

    Hwa, Mok Jin Il; Am, Ha Jeng Ung

    1995-04-01

This book begins with an overview of digital image processing and the personal computer, followed by the classification of PC image-processing systems, the development of personal computers and image processing, and image-processing systems. It then covers basic methods of image processing such as color image processing and video processing, software and interfaces, computer graphics, and video images, and closes with application cases such as satellite image processing, high-speed color transformation, and portrait work systems.

  10. Domates Salçalarının Mikroflorası ve Depolama Sürecinde Miktarlarındaki Değişiklikler

    Directory of Open Access Journals (Sweden)

    Fikri Başoğlu

    2015-02-01

- The sterility of commercially sterilized tomato paste decreases during storage; over a 12-month storage period, sterility can fall from 100% to 15% or 8%, or even less. - The temperature applied during pasteurization (89-93 °C) does not allow lactic acid bacteria, yeasts and molds to survive. - The residual microflora of tomato paste is generally represented by spore-forming bacteria (B. subtilis, B. mesentericus, B. cereus ...). - A dry matter content of 28-30% or 38-40% in the tomato paste does not affect the killing of B. subtilis and B. mesentericus. - Storing the paste at 10-15 °C prevents the growth of thermophilic microorganisms. - If the pH of the paste is raised, the spores develop quickly. - When hermetic sealing is defective, lactic acid bacteria and yeasts spoil the paste. - The thermal death times of some spore-forming and non-spore-forming microorganisms in the paste are affected by the pH of the medium, organic acids and spore concentration.

  11. A recombinant Sal k 1 isoform as an alternative to the polymorphic allergen from Salsola kali pollen for allergy diagnosis.

    Science.gov (United States)

    Mas, Salvador; Boissy, Patrice; Monsalve, Rafael I; Cuesta-Herranz, Javier; Díaz-Perales, Araceli; Fernández, Javier; Colás, Carlos; Rodríguez, Rosalía; Barderas, Rodrigo; Villalba, Mayte

    2015-01-01

The incidence of Amaranthaceae pollen allergy has increased due to the desertification occurring in many countries. In some regions of Spain, Salsola kali is the main cause of pollinosis, at almost the same level as olive and grass pollen. Sal k 1 - the sensitization marker of S. kali pollinosis - is used in clinical diagnosis, but is purified at a low yield from pollen. We aimed to produce a recombinant (r)Sal k 1 able to span the structural and immunological properties of the natural isoforms from pollen, and to validate its potential use for diagnosis. Specific cDNA was amplified by PCR, cloned into the pET41b vector and used to transform BL21 (DE3) Escherichia coli cells. Immunoblotting, ELISA, basophil activation and skin-prick tests were used to validate the recombinant protein against Sal k 1 isolated from pollen. Sera and blood cells from S. kali pollen-sensitized patients and specific monoclonal and polyclonal antisera were used. rSal k 1 was produced in bacteria with a yield of 7.5 mg/l of cell culture. The protein was purified to homogeneity and validated structurally and immunologically against the natural form. rSal k 1 exhibited a higher IgE cross-reactivity with plant-derived food extracts such as peanut, almond or tomato than with pollen sources such as Platanus acerifolia and Oleaceae members. rSal k 1 expressed in bacteria retains intact structural and immunological properties in comparison to the pollen-derived allergen. It spans the immunological properties of most of the isoforms found in pollen, and it might substitute natural Sal k 1 in clinical diagnosis. © 2015 S. Karger AG, Basel.

  12. Performance and emission characteristics of an agricultural diesel engine fueled with blends of Sal methyl esters and diesel

    International Nuclear Information System (INIS)

    Pali, Harveer S.; Kumar, N.; Alhassan, Y.

    2015-01-01

Highlights: • Sal seed oil is an unexplored biodiesel feedstock found abundantly in India. • Sal seed oil has good oxidation stability. • Performance and emission characteristics of blends of Sal methyl esters with diesel were evaluated. • At higher loads, CO, HC and smoke emissions of SME blends were lower than diesel. - Abstract: The present work deals with an underutilized vegetable oil, Sal seed oil (Shorea robusta), as a feedstock for biodiesel production. The production potential of Sal seed oil in India is very promising (1.5 million tons per year). The pressure-filtered Sal seed oil was transesterified into Sal Methyl Ester (SME). The kinematic viscosity (5.89 cSt), density (0.8764 g/cc) and calorific value (39.65 MJ/kg) of the SME were well within the ASTM/EN standard limits. Test fuels were prepared for the engine trials by blending 10%, 20%, 30% and 40% of SME into diesel on a volumetric basis, designated SME10, SME20, SME30 and SME40 respectively. The BTE, in general, was found to decrease with increased volume fraction of SME in the blends. At full load, BSEC for SME10, SME20, SME30 and SME40 was 13.6 MJ/kW h, 14.3 MJ/kW h, 14.7 MJ/kW h and 14.8 MJ/kW h respectively, as compared to 13.9 MJ/kW h for diesel. At higher load conditions, CO, UHC and smoke emissions were lower for all SME blends than for neat diesel, due to the oxygenated nature of the fuel. SME10, SME20, SME30 and SME40 showed 51 ppm, 44 ppm, 46 ppm and 48 ppm of UHC emissions respectively, as compared to 60 ppm for diesel. NOx emissions increased for SME-based fuels in comparison to neat diesel operation. At peak load, SME10, SME20, SME30 and SME40 had NOx emissions of 612 ppm, 644 ppm, 689 ppm and 816 ppm, as compared to 499 ppm for diesel. It may be concluded from the experimental investigations that Sal seed biodiesel is a potential alternative to diesel fuel for reducing dependence on crude petroleum derived fuels and

  13. Avaliação dos efeitos da adição de sal e da densidade no transporte de tambaqui

    Directory of Open Access Journals (Sweden)

    Gomes Levy de Carvalho

    2003-01-01

The objectives of this work were to test the efficiency of salt as a stress reducer and to determine the best density for transporting juvenile tambaqui (Colossoma macropomum) in adapted plastic boxes. In the first experiment, different concentrations of common salt (NaCl) in the water were tested; in the second, fish were transported for three hours in 200 L plastic boxes stocked at different densities, with 8 g of salt/L of water. Plasma cortisol increased significantly after transport in the treatments without salt and with 2 g of salt/L of water, returning to normal levels after 96 hours. Plasma glucose increased after transport at all salt concentrations tested except 8 g/L of water, returning to normal levels within 24 hours. In the fish transported in the second experiment, with 8 g of salt/L of water, no significant change in plasma cortisol was observed, but glucose increased significantly at all densities after transport, returning to normal levels within 24 hours. There was 11% mortality in one of the replicates at a density of 200 kg/m³ of water. For transport with 8 g of salt/L of water, the maximum density should be 150 kg/m³ of water. At this density the physico-chemical water-quality parameters remain adequate, stress responses are minimal, and there is no mortality.

  14. Procesos electrolíticos industriales. Electrolisis de la sal NaCl en disolución acuosa y como sal fundida.

    OpenAIRE

    Milla González, Miguel

    2014-01-01

This electrochemistry exercise explains the processes corresponding to the electrolysis of a sodium chloride solution to obtain chlorine, hydrogen and sodium hydroxide solution, and the electrolysis of the same salt at its melting temperature (molten-salt electrolysis) in the presence of calcium chloride. The latter is the basis for the industrial production of sodium metal and chlorine gas. Both electrolytic processes are explained by means of animations. Also justified are ...
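The two electrolytic processes described in this record can be summarized by their standard half-reactions (textbook chemistry, added here for clarity):

```latex
% Aqueous NaCl (brine) electrolysis
\text{anode:}\quad 2\,\mathrm{Cl^-} \longrightarrow \mathrm{Cl_2} + 2e^-
\qquad
\text{cathode:}\quad 2\,\mathrm{H_2O} + 2e^- \longrightarrow \mathrm{H_2} + 2\,\mathrm{OH^-}

% Molten NaCl (CaCl2 added to lower the melting point)
\text{anode:}\quad 2\,\mathrm{Cl^-} \longrightarrow \mathrm{Cl_2} + 2e^-
\qquad
\text{cathode:}\quad \mathrm{Na^+} + e^- \longrightarrow \mathrm{Na}
```

In the aqueous cell, the Na⁺ and OH⁻ remaining in solution constitute the sodium hydroxide product; in the molten-salt cell, water is absent, so sodium metal is deposited instead of hydrogen being evolved.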

  15. Markov Processes in Image Processing

    Science.gov (United States)

    Petrov, E. P.; Kharina, N. L.

    2018-05-01

Digital images are used as an information carrier in different sciences and technologies, and there is a clear aspiration to increase the number of bits per image pixel in order to obtain more information. In this paper, some methods of compression and contour detection based on two-dimensional Markov chains are offered. Increasing the number of bits per pixel allows fine object details to be resolved more precisely, but it significantly complicates image processing. The proposed methods do not concede efficiency to well-known analogues, yet surpass them in processing speed. An image is separated into binary images (bit planes), each processed in parallel, so that processing speed does not degrade as the number of bits per pixel increases. One more advantage of the methods is their low consumption of energy resources: only logical procedures are used and there are no arithmetic operations. The methods can be useful for processing images of any class and purpose in processing systems with limited time and energy resources.
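The bit-plane separation this abstract describes (one binary image per bit, each processable independently) can be sketched in NumPy. This illustrates only the decomposition step, not the authors' two-dimensional Markov-chain algorithms:

```python
import numpy as np

def to_bit_planes(img, nbits=8):
    """Split an integer image into binary images, one per bit plane."""
    return [((img >> b) & 1).astype(np.uint8) for b in range(nbits)]

def from_bit_planes(planes):
    """Reassemble the original image from its bit planes."""
    return sum(p.astype(np.uint16) << b for b, p in enumerate(planes))

img = np.arange(256, dtype=np.uint16).reshape(16, 16)   # 8-bit test ramp
planes = to_bit_planes(img)   # 8 binary images, each processable in parallel
```

Because each plane is binary, per-plane processing cost stays constant as bit depth grows; adding bits adds planes (parallel work) rather than making any single plane harder to process.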

  16. Tsunami vulnerability and damage assessment in the coastal area of Rabat and Salé, Morocco

    Directory of Open Access Journals (Sweden)

    A. Atillah

    2011-12-01

This study, a companion paper to Renou et al. (2011), focuses on the application of a GIS-based method to assess building vulnerability and damage in the event of a tsunami affecting the coastal area of Rabat and Salé, Morocco. This approach, designed within the framework of the European SCHEMA project (www.schemaproject.org), is based on combining hazard results from numerical modelling of the worst-case tsunami scenario (inundation depth based on the historical Lisbon earthquake of 1755 and the Portugal earthquake of 1969) with building vulnerability types derived from Earth observation data, field surveys and GIS data. The risk is then evaluated for this densely populated area, characterized by the implementation of a vast project of residential and touristic buildings within the flat area of the Bouregreg Valley separating the cities of Rabat and Salé. A GIS tool is used to derive building damage maps by crossing layers of inundation levels and building vulnerability. The inferred damage maps serve as a base for elaborating evacuation plans with appropriate rescue and relief processes, and for preparing appropriate measures to mitigate the tsunami risk.
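The core operation above, crossing an inundation-depth layer with a building-vulnerability layer to produce a damage map, amounts to a raster lookup. The class boundaries and the damage matrix below are invented for illustration; they are not the SCHEMA project's actual calibration:

```python
import numpy as np

# Illustrative damage levels (0 = none ... 3 = collapse), indexed by
# [vulnerability class, inundation-depth class]; the values are invented.
DAMAGE_MATRIX = np.array([
    [0, 1, 1, 2],   # e.g. reinforced concrete
    [0, 1, 2, 3],   # e.g. masonry
    [1, 2, 3, 3],   # e.g. light construction
])

def damage_map(depth_m, vuln_class):
    """Cross a depth raster with a vulnerability raster, cell by cell."""
    # Bin inundation depth (m) into 4 hazard classes: <0.5, 0.5-2, 2-4, >4
    depth_class = np.digitize(depth_m, [0.5, 2.0, 4.0])
    return DAMAGE_MATRIX[vuln_class, depth_class]

depth = np.array([[0.2, 1.0], [3.0, 5.0]])
vuln = np.array([[0, 1], [2, 2]])
print(damage_map(depth, vuln))
# → [[0 1]
#    [3 3]]
```

In a real GIS workflow the two input rasters would come from the inundation model and the building-typology layer, but the crossing step is exactly this elementwise lookup.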

  17. Litter decomposing fungi in sal (Shorea robusta forests of central India

    Directory of Open Access Journals (Sweden)

    RAM KEERTI VERMA

    2011-11-01

Soni KK, Pyasi A, Verma RK. 2011. Litter decomposing fungi in sal (Shorea robusta) forests of central India. Nusantara Bioscience 3: 136-144. The present study aimed at the isolation and identification of fungi associated with the decomposition of litter in a sal forest of central India. Season-wise successional changes in the litter mycoflora were determined for the four main seasons of the year, namely March-May, June-August, September-November and December-February. Fungi such as Aspergillus flavus, A. niger and Rhizopus stolonifer were associated with litter decomposition throughout the year, while Aspergillus fumigatus, Cladosporium cladosporioides, C. oxysporum, Curvularia indica and C. lunata were recorded in three seasons. Some fungi, including ectomycorrhiza-forming species, occurred only in the rainy season (June-August); these are Astraeus hygrometricus, Boletus fallax, Calvatia elata, Colletotrichum dematium, Corticium rolfsii, Mycena roseus, Periconia minutissima, Russula emetica, Scleroderma bovista, S. geaster, S. verrucosum, Scopulariopsis alba and four sterile fungi. Fungi such as Alternaria citri, Gleocladium virens, Helicosporium phragmitis and Pithomyces cortarum were rarely recorded, in only one season.

  18. Hepatoprotective Activity of Herbal Composition SAL, a Standardized Blend Comprised of Schisandra chinensis, Artemisia capillaris, and Aloe barbadensis

    Directory of Open Access Journals (Sweden)

    Mesfin Yimam

    2016-01-01

Some botanicals have been reported to possess antioxidative activities, acting as scavengers of free radicals, which supports their usage in herbal medicine. Here we describe the potential use of "SAL," a standardized blend comprised of three extracts from Schisandra chinensis, Artemisia capillaris, and Aloe barbadensis, in mitigating chemically induced acute liver toxicities. Acetaminophen- and carbon tetrachloride-induced acute liver toxicity models in mice were utilized. Hepatic function tests on serum collected at T24, together with hepatic glutathione and superoxide dismutase from liver homogenates, were evaluated. Histopathology analysis was performed, and the merit of blending the three standardized extracts was also confirmed. Statistically significant and dose-correlated inhibitions in serum ALT, ranging from 52.5% (p=0.004) to 34.6% (p=0.05) in the APAP model and 46.3% (p<0.001) to 29.9% (p=0.02) in the CCl4 model, were observed for SAL administered at doses of 400-250 mg/kg. Moreover, SAL resulted in up to 60.6% and 80.2% reductions in serum AST and bile acid, respectively. The composition replenished depleted hepatic glutathione in association with an increase in hepatic superoxide dismutase. Unexpected synergistic protection from liver damage was also observed. Therefore, the composition SAL could potentially be utilized as an effective hepatic-detoxification agent for protection from liver damage.

  19. Avances en la reducción del consumo de sal y sodio en Costa Rica Advances in reducing salt and sodium intake in Costa Rica

    Directory of Open Access Journals (Sweden)

    Adriana Blanco-Metzler

    2012-10-01

This article describes the progress achieved in Costa Rica, as well as the challenges and limitations, in reducing salt intake. The establishment of the National Plan to Reduce Public Consumption of Salt/Sodium in Costa Rica 2011-2021 was complemented with specific multisectoral programs and projects aimed at: (1) determining sodium intake and the salt/sodium content of the most widely consumed foods; identifying consumer knowledge, attitudes and behaviors regarding salt/sodium, its relationship to health, and nutritional labeling; and evaluating the cost-effectiveness of measures aimed at reducing the prevalence of hypertension; (2) implementing strategies to lower the salt/sodium content of processed foods and foods prepared at home; (3) promoting behavior change in the population to reduce dietary salt intake; and (4) monitoring and evaluating actions aimed at reducing salt/sodium consumption in the population. To reach the proposed goals, successful inter-institutional coordination with strategic actors must be achieved, commitments must be negotiated with the food industry and food services, and the regulation of critical nutrients in foods associated with chronic non-communicable diseases must be improved. It is expected that, building on the progress achieved during the execution of the National Plan, Costa Rica will reach the international goal for reducing salt intake.

  20. Impact of LbSapSal Vaccine in Canine Immunological and Parasitological Features before and after Leishmania chagasi-Challenge.

    Directory of Open Access Journals (Sweden)

    Lucilene Aparecida Resende

Dogs represent the most important domestic reservoir of L. chagasi (syn. L. infantum). A vaccine against canine visceral leishmaniasis (CVL) would be an important tool for decreasing the anxiety related to possible L. chagasi infection and for controlling human visceral leishmaniasis (VL). Because sand fly salivary proteins are potent immunogens obligatorily co-deposited during transmission of Leishmania parasites, their inclusion in an anti-Leishmania vaccine has been investigated in past decades. We investigated the immunogenicity of the "LbSapSal" vaccine (L. braziliensis antigens, saponin as adjuvant, and Lutzomyia longipalpis salivary gland extract) in dogs at baseline (T0), during the post-vaccination protocol (T3rd) and at early (T90) and late (T885) times following L. chagasi challenge. Our major data indicated that immunization with "LbSapSal" is able to induce biomarkers characterized by enhanced amounts of type I cytokines (tumor necrosis factor [TNF]-α, interleukin [IL]-12, interferon [IFN]-γ) and a reduction in type II cytokines (IL-4 and TGF-β), even after experimental challenge. The establishment of a prominent pro-inflammatory immune response after "LbSapSal" immunization supported increased levels of nitric oxide production, favoring a reduction in spleen parasitism (78.9%) and indicating long-lasting protection against L. chagasi infection. In conclusion, these results confirmed the hypothesis that "LbSapSal" vaccination is a potential tool to control Leishmania chagasi infection.

  1. Toxicidade aguda ao sal comum e larvicultura intensiva do jundiá Rhamdia quelen em água salobra

    Directory of Open Access Journals (Sweden)

    T.E.H.P. Fabregat

    2015-04-01

The tolerance of freshwater fish to salinity, and the appropriate levels of Artemia nauplii in feeding during larviculture, are extremely important for standardizing management in intensive rearing environments. The objective of this work was therefore to estimate the lethal salinity (SL50) for larvae of the jundiá Rhamdia quelen and to determine the effects of salinity and live-prey concentration in intensive larviculture. In the first assay, larvae at the end of the lecithotrophic period (1.1±0.8 mg) were exposed to salinities of 0, 2, 4, 6, 8, 10, 15 and 20 g of salt/L for a period of 96 h. In the second experiment, jundiá larvae at the start of exogenous feeding (1.2±0.3 mg) were subjected to three salinities (freshwater 0, 2 and 4 g of salt/L) and three live-prey concentrations (initially 300, 500 and 700 Artemia nauplii/larva/day, with this amount increased every five days). The experiment followed a completely randomized design in a 3x3 factorial scheme over a period of 15 days. In experiment 1, jundiá larvae exposed to salinities of 10, 15 and 20 g of salt/L died after 12, 2 and 1 hour of exposure, respectively. The 72 h and 96 h SL50 were estimated at 9.93 and 4.95 g of salt/L, respectively. At the end of the toxicity test there was no difference in survival among salinities of 0, 2 and 4 g of salt/L. In experiment 2, no interaction between salinity and prey concentration was observed for weight or length. The greater the quantity of prey, the greater the larval growth. Survival showed an interaction between the factors: increasing salinity reduced survival, regardless of prey concentration. It is concluded that the SL50 decreased with increasing exposure time to salinized water and that larviculture of this species can be carried out at salinities of up to 2 g of salt/L, with ...
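Median lethal values such as the SL50 reported in this record are typically estimated by probit analysis or the trimmed Spearman-Karber method. A minimal sketch of the underlying idea, interpolating mortality against log-concentration with invented data, looks like this:

```python
import numpy as np

def sl50(concentrations, mortality):
    """Median lethal concentration by interpolation on log10(concentration).

    Assumes mortality increases monotonically with concentration and
    brackets 0.5; real studies typically use probit analysis or the
    trimmed Spearman-Karber estimator instead.
    """
    logc = np.log10(concentrations)
    return 10 ** np.interp(0.5, mortality, logc)

# Invented example: salinity (g of salt/L) vs. mortality fraction at 96 h
conc = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
mort = np.array([0.0, 0.4, 0.7, 0.9, 1.0])
estimate = sl50(conc, mort)   # falls between the 4 and 6 g/L observations
```

Interpolating on the log scale reflects the approximately log-normal dose-response commonly assumed in acute toxicity tests.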

  2. Contenido de yodo en sal a nivel de puestos de venta provenientes de distintas localidades en tres regiones argentinas

    OpenAIRE

    López Linares, S; Heer I, Martín

    2014-01-01

Introduction: Iodine deficiency is the leading preventable cause of mental retardation and brain damage in the population. Salt iodization in Argentina is mandatory under Law 17259/67. The objective was to determine the iodine content of salt obtained from points of sale in various localities of the provinces making up the Northwest (NOA), Northeast (NEA) and Cuyo regions. Materials and methods: A descriptive, cross-sectional study. Eighty salts acquired by direct purchase were analyzed, market ...

  3. Image processing and recognition for biological images.

    Science.gov (United States)

    Uchida, Seiichi

    2013-05-01

This paper reviews image processing and pattern recognition techniques that are useful for analyzing bioimages. Although the paper does not provide technical details, it makes it possible to grasp the main tasks and the typical tools for handling them. Image processing is a large research area aimed at improving the visibility of an input image and acquiring valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition, the technique of classifying an input image into one of a set of predefined classes, is also a large research area; this paper overviews its two main modules, the feature extraction module and the classification module. Throughout the paper, it is emphasized that the bioimage is a very difficult target even for state-of-the-art image processing and pattern recognition techniques, due to noise, deformation, etc. This paper is expected to serve as a tutorial guide bridging biology and image processing researchers for further collaboration on such a difficult target. © 2013 The Author. Development, Growth & Differentiation © 2013 Japanese Society of Developmental Biologists.
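Of the tasks listed above, binarization is the easiest to make concrete. The sketch below implements Otsu's classic threshold selection in NumPy; it is a standard method given for illustration, not code from the reviewed paper:

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Gray-level threshold maximizing between-class variance (Otsu's method)."""
    hist, edges = np.histogram(img, bins=nbins)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                          # pixels at or below each level
    w1 = w0[-1] - w0                              # pixels above each level
    m = np.cumsum(hist * centers)
    mu0 = m / np.where(w0 == 0, 1, w0)            # mean of the dark class
    mu1 = (m[-1] - m) / np.where(w1 == 0, 1, w1)  # mean of the bright class
    return centers[np.argmax(w0 * w1 * (mu0 - mu1) ** 2)]

# Synthetic bimodal "image": dark and bright pixel populations
rng = np.random.default_rng(0)
img = np.r_[rng.normal(0.2, 0.05, 500), rng.normal(0.8, 0.05, 500)]
t = otsu_threshold(img)
binary = img > t   # binarization: foreground mask
```

Maximizing between-class variance is equivalent to minimizing within-class variance, which is why the chosen threshold lands in the valley between the two modes of the histogram.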

  4. Rutas de circulación e intercambio de sal en la provincia de Tunja, segunda mitad del siglo XVI

    Directory of Open Access Journals (Sweden)

    Blanca Ofelia Acuña Rodriguez

    2018-02-01

This text seeks to show how the salt circulation routes in the Province of Tunja helped shape a large economic space that integrated the provinces of Santa Fe, Pamplona and Tunja in the New Kingdom of Granada during the sixteenth century. It starts from a historiographical reflection on the circulation and trade of salt, the means of transport, and the routes used by indigenous people and Spaniards, on the basis of which an economic space mediated by the production and distribution of salt was consolidated, turning the province of Tunja into an articulating axis of relations between the salt-producing sites of the province of Santa Fe and the consumers of the provinces of Tunja and Pamplona. This regional articulation facilitated both the circulation of products from different altitude zones and the integration of a broad colonial territory. Because Tunja's location made it a place of passage and connection between the Provinces of Santafé, Pamplona and the Llanos de San Juan, the city and its environs became attractive for Hispanic settlement; at the same time, during the second half of the sixteenth century, this created the conditions for organizing, throughout the Province of Tunja, sites of lodging, sale and resale of goods imported from Spain and local products, for sustenance and the supply of basic needs, thereby profiting from local production such as salt, hayo, cotton, blankets and other products.

  5. Pré-Sal: Petróleo e políticas públicas no Brasil (2007-2016

    Directory of Open Access Journals (Sweden)

    Paulo Henrique Martinez

    2016-06-01

This article analyzes the relationship between social policies and the economy in Brazil from the beginning of Pré-sal (pre-salt) oil exploration to the present day. The announcement of oil reserves in the pre-salt layer brought rapid changes in the country's economic, social and legal outlook in the first two decades of the twenty-first century. The new scale of national oil production would be the basis for an agenda of economic and social development under the governments of presidents Lula and Dilma. Both promoted reformulations of the oil regulatory framework during their respective governments. Legislation was changed with regard to contract models, the distribution of royalties and the control of the reserves, assuring the Federal Government greater sovereignty over the extraction, refining and distribution of oil. The legislative and policy changes aimed to allocate part of the pre-salt profits to infrastructure investment and to sectoral human-development policies in Brazil.

  6. O salário na obra de Frederick Winslow Taylor Frederick Winslow Taylor's oeuvre: an analysis of wages

    Directory of Open Access Journals (Sweden)

    Victor Paulo Gomes da Silva

    2011-08-01

This paper analyses and explains Frederick Winslow Taylor's perspective on wages, as presented in his two major works: Shop management (1903) and Principles of scientific management (1911). The first part presents the main economic aspects that characterized his lifetime and how they influenced his works. The second part analyses the way wages are presented in F. W. Taylor's two books. The paper concludes with a commentary on the aforementioned works with regard to the Taylorist perspective on wages.

  7. Een onwrikbaar geloof in zijn gelijk : Sal Tas (1905-1976): journalist van de wereld

    NARCIS (Netherlands)

    de Vries, Tity

    Biography of Sal Tas (1905-1976), Dutch activist, political writer and reporter/foreign correspondent in Paris for the newspaper Het Parool and for The New Leader, the American non-communist left journal, during the 1950s. Tas was a controversial man with outspoken opinions. Radical left in his young years, later

  8. scikit-image: image processing in Python.

    Science.gov (United States)

    van der Walt, Stéfan; Schönberger, Johannes L; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.

  9. scikit-image: image processing in Python

    Directory of Open Access Journals (Sweden)

    Stéfan van der Walt

    2014-06-01

    Full Text Available scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.
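As a minimal illustration of the library's style, the sketch below loads a bundled sample image, smooths it, and produces a binary mask with Otsu's threshold; it uses only documented scikit-image calls, but it is our example, not one from the paper.

```python
# A small end-to-end scikit-image pipeline: load a bundled sample image,
# smooth it, and produce a binary mask with Otsu's threshold.
from skimage import data, filters

coins = data.coins()                        # built-in grayscale sample image
smooth = filters.gaussian(coins, sigma=2)   # suppress noise before thresholding
thresh = filters.threshold_otsu(smooth)     # global Otsu threshold
mask = smooth > thresh                      # binary foreground mask

print(mask.shape, mask.dtype)
```

The same pattern (`data` for samples, `filters` for processing, plain NumPy arrays throughout) carries across the library's other modules.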

  10. Methods in Astronomical Image Processing

    Science.gov (United States)

    Jörsäter, S.

    A Brief Introductory Note; History of Astronomical Imaging; Astronomical Image Data; Images in Various Formats; Digitized Image Data; Digital Image Data; Philosophy of Astronomical Image Processing; Properties of Digital Astronomical Images; Human Image Processing; Astronomical vs. Computer Science Image Processing; Basic Tools of Astronomical Image Processing; Display Applications; Calibration of Intensity Scales; Calibration of Length Scales; Image Re-shaping; Feature Enhancement; Noise Suppression; Noise and Error Analysis; Image Processing Packages: Design of AIPS and MIDAS; AIPS; MIDAS; Reduction of CCD Data; Bias Subtraction; Clipping; Preflash Subtraction; Dark Subtraction; Flat Fielding; Sky Subtraction; Extinction Correction; Deconvolution Methods; Rebinning/Combining; Summary and Prospects for the Future

  11. Estudo microbiológico do sal (cloreto de sódio) de origem marinha / Microbiological study of marine salt (sodium chloride)

    Directory of Open Access Journals (Sweden)

    Niber da Paz Moreira da Silva

    1976-01-01

    Full Text Available Nineteen samples of coarse marine salt (NaCl) from different salterns were analyzed to characterize the microbial flora and to check for bacteria of concern in food microbiology. The material was heavily contaminated with saprophytic microorganisms: aerobic and anaerobic bacteria, Gram-positive and Gram-negative, proteolytic, pigmented and spore-forming species, as well as yeasts and molds. The high incidence of "red" halophilic bacteria, responsible for the spoilage of salted meat, fish and other salted products, was studied. In 15 salt samples the frequency of heat-resistant spore-forming bacteria was estimated at 33%, with one sample harboring a thermophilic germ. For anaerobes the positivity reached 80%, with sporulation in 40% of the isolated cultures. The indices for yeasts and molds were 73% and 93%, respectively.

  12. Changements climatiques et infiltrations d'eau salée le long du ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    The eastern Mediterranean is highly exposed to saltwater intrusion into the coastal aquifers that hold its fresh water. Degradation of these aquifers could have serious socioeconomic consequences for the people who live there. The project will study ...

  13. Validación del método Potenciométrico por Ión Selectivo para la determinación de Flúor en sal, agua y orina

    Directory of Open Access Journals (Sweden)

    Patricia Aguilar R

    2001-01-01

    Full Text Available Objective: To validate the ion-selective potentiometric method for the determination of fluoride. Materials and methods: Three sample types (salt, water and urine) were analyzed, with three analysts for water and salt and two for urine. Validation was carried out over two days, with 10 assays per day. Precision (under repeatability and reproducibility conditions) and accuracy (in terms of recovery of analyte spiked into the sample) were calculated. Results: Relative standard deviations (RSD) of 2.68%, 3.29% and 2.52% were obtained in salt, water and urine, respectively, and 98.20%, 99.42% and 98.11% of the spiked analyte was recovered from the same salt, water and urine samples. Conclusion: The ion-selective potentiometric method, performed under optimal and appropriate conditions, can be applied to the determination of fluoride in salt, water and urine samples.
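The two validation figures reported above (RSD for precision, percent recovery for accuracy) are simple statistics; the sketch below shows how they are computed. The replicate readings are hypothetical, for illustration only.

```python
import statistics

def rsd_percent(values):
    """Relative standard deviation (coefficient of variation), in percent."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

def recovery_percent(found, added):
    """Recovery of analyte spiked into the sample, in percent."""
    return 100 * found / added

# Hypothetical replicate fluoride readings (mg/L), for illustration only
replicates = [1.02, 0.99, 1.01, 0.97, 1.00]
print(round(rsd_percent(replicates), 2))          # spread across replicates
print(round(recovery_percent(0.982, 1.000), 2))   # spiked 1.000 mg/L, found 0.982
```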

  14. Economic contribution of participatory agroforestry program to poverty alleviation: a case from Sal forests, Bangladesh

    NARCIS (Netherlands)

    Islam, K.K.; Hoogstra, M.A.; Ullah, M.O.; Sato, N.

    2012-01-01

    In the Forest Department of Bangladesh, a Participatory Agroforestry Program (PAP) was initiated at a denuded Sal forests area to protect the forest resources and to alleviate poverty amongst the local poor population. We explored whether the PAP reduced poverty and what factors might be responsible

  15. Mobility Analysis of the Population of Rabat-Salé-Zemmour-Zaer

    OpenAIRE

    F. Ghaiti

    2007-01-01

    In this paper, we present the origin-destination and price survey that we carried out in fall 2006 in the Moroccan region of Rabat-Salé-Zemmour-Zaer. The survey concerns people's characteristics, their travel behavior and the price they would be willing to pay for a tramway ticket. The main objective is to study a set of features relating to households, to their travel habits and to their choices between public and private transport modes. A ...

  16. An Experiment to Study Sporadic Atom Layers in the Earth's Mesosphere and Lower Thermosphere (SAL)

    Science.gov (United States)

    Kelley, Michael C.

    1999-01-01

    The Sudden Atom Layer (SAL) rocket was successfully launched in February 1998. All instruments worked well except those supplied by NASA Goddard Space Flight Center. (A dummy weight was launched in place of the neutral mass spectrometer, and the ion version died shortly after lift-off.) A paper has already been published in GRL concerning the dust layer detected by an on-board instrument and compared to ground-based observations made at the Arecibo Observatory by Cornell graduate student S. Collins (lidar) and Q. Zhou (radar). Collins presented a comparison of the sodium lidar data and on-board observations with a theoretical model by Plane and Cox at the Fall AGU Meeting. In addition, Gelinas and Kelley presented a review paper dealing with the entire SAL instrument complement at the same meeting. An unexpected new explanation for the outer scale of E-region plasma irregularities has come out of the data set. We anticipate that at least four papers will be published within a year of launch.

  17. Image processing with ImageJ

    CERN Document Server

    Pascau, Javier

    2013-01-01

    The book will help readers discover the various facilities of ImageJ through a tutorial-based approach. It is targeted at scientists, engineers, technicians, and managers, and anyone who wishes to master ImageJ for image viewing, processing, and analysis. If you are a developer, you will be able to code your own routines after you have finished reading this book. No prior knowledge of ImageJ is expected.

  18. Hyperspectral image processing methods

    Science.gov (United States)

    Hyperspectral image processing refers to the use of computer algorithms to extract, store and manipulate both spatial and spectral information contained in hyperspectral images across the visible and near-infrared portion of the electromagnetic spectrum. A typical hyperspectral image processing work...

  19. Novel acyloxy derivatives of branched mono- and polyol esters of sal fat: multiviscosity grade lubricant base stocks.

    Science.gov (United States)

    Kamalakar, Kotte; Sai Manoj, Gorantla N V T; Prasad, Rachapudi B N; Karuna, Mallampalli S L

    2014-12-10

    Sal fat, a nontraditional seed oil, was chemically modified to obtain base stocks with a wide range of specifications that can replace mineral oil base stocks. Sal fatty acids were enriched to 72.6% unsaturation using the urea adduct method and reacted with a branched mono-alcohol, 2-ethylhexanol (2-EtH), and with the polyols neopentyl glycol (NPG) and trimethylolpropane (TMP) to obtain the corresponding esters. The esters were hydroxylated and then acylated using propionic, butyric, and hexanoic anhydrides to obtain the corresponding acylated derivatives. The acylated TMP esters exhibited very high viscosities (427.35-471.93 cSt at 40 °C), in the range of the BS 150 mineral oil base stock and ISO VG 460, while the acylated NPG esters (268.81-318.84 cSt at 40 °C) and 2-EtH esters (20.94-24.44 cSt at 40 °C) exhibited viscosities in the ranges of ISO VG 320 and ISO VG 22, respectively, with good viscosity indices. Acylated NPG esters were found suitable for high-temperature applications and acylated 2-ethylhexyl esters for low-viscosity-grade industrial applications. The thermo-oxidative stabilities of all acylated products were found to be better than those of other vegetable-oil-based base stocks. Overall, the sal fat based lubricant base stocks are promising candidates with a wide range of properties that can replace most mineral oil base stocks with appropriate formulations.
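The ISO VG labels used above come from ISO 3448, in which each grade covers roughly ±10% of a nominal mid-point kinematic viscosity at 40 °C. A quick sketch of that classification (the grade list and tolerance follow the standard; the function name and test values are ours):

```python
# ISO 3448 viscosity grades: nominal mid-point viscosities (cSt at 40 C).
# Each grade's band is taken as +/-10% of the nominal value.
ISO_VG_GRADES = [2, 3, 5, 7, 10, 15, 22, 32, 46, 68,
                 100, 150, 220, 320, 460, 680, 1000, 1500]

def iso_vg_grade(viscosity_cst):
    """Return the ISO VG grade whose band contains the viscosity, else None."""
    for grade in ISO_VG_GRADES:
        if 0.9 * grade <= viscosity_cst <= 1.1 * grade:
            return grade
    return None

print(iso_vg_grade(450.0))   # mid-range of the acylated TMP esters above
```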

  20. Determinantes da satisfação e atributos da qualidade em serviços de salão de beleza

    Directory of Open Access Journals (Sweden)

    José Luis Duarte Ribeiro

    2013-09-01

    Full Text Available This article presents a model of the determinants of customer satisfaction with beauty salon services and of the quality attributes perceived by customers. Two surveys of beauty salon users were conducted in order to: (i) determine the relationships among the determinants of customer satisfaction; and (ii) identify and rank the perceived quality attributes according to their importance to customers of this service. Confirmation of expectations and perceived quality emerge as the main determinants of customer satisfaction. Technical competence, cleanliness of the premises and utensils, punctuality and convenient location emerge as the main perceived quality attributes. The results of this research can be used by beauty salon managers to improve service quality and customer satisfaction, establishing a competitive advantage for their business.

  1. Domates Pulpu ve Salçasında Viskozite (Konsistens) ve Renk Üzerine Proses Koşullarının Etkisi

    Directory of Open Access Journals (Sweden)

    Aziz Ekşi

    2015-02-01

    Full Text Available Consistency and color are the two important factors that determine quality in tomato pulp and paste and receive the most attention in trade. A close relationship is known to exist between the color and consistency of tomato paste and the condition of the raw material. However, both quality attributes, and consistency in particular, are affected by process conditions as much as by the raw material.

  2. Using fuzzy logic in image processing

    International Nuclear Information System (INIS)

    Ashabrawy, M.A.F.

    2002-01-01

    Due to the unavoidable merging of computing and mathematics, signal processing in general and image processing in particular have greatly improved and advanced. Signal processing deals with the processing of any signal data for use by a computer, while image processing deals with all kinds of images. Image processing involves the manipulation of image data for better appearance and viewing by people; consequently, it is a rapidly growing and exciting field to be involved in today. This work takes an applications-oriented approach to image processing. The applications are the maps and documents of the first Egyptian research reactor (ETRR-1), X-ray medical images, and fingerprint images. Since filters generally work on continuous ranges rather than discrete values, fuzzy logic techniques are more convenient. These techniques are powerful in image processing and can deal with one-dimensional (1-D) as well as two-dimensional (2-D) images.
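A classic fuzzy-logic image operation of the kind this abstract alludes to is the intensification (INT) contrast operator: map gray levels to a fuzzy membership in [0, 1], sharpen the membership, and map back. The sketch below is a generic textbook version, not the paper's implementation; names and parameters are ours.

```python
import numpy as np

def fuzzy_intensify(image, passes=1):
    """Fuzzy contrast enhancement via the intensification (INT) operator."""
    # Fuzzification: normalize gray levels to a membership value in [0, 1].
    mu = (image.astype(float) - image.min()) / (image.max() - image.min())
    for _ in range(passes):
        # INT operator: push memberships below 0.5 down, above 0.5 up.
        low = mu < 0.5
        mu = np.where(low, 2 * mu ** 2, 1 - 2 * (1 - mu) ** 2)
    # Defuzzification: back to 8-bit gray levels.
    return (mu * 255).astype(np.uint8)

img = np.array([[10, 60], [200, 250]], dtype=np.uint8)
print(fuzzy_intensify(img))
```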

  3. A formação dos salários nos setores público e privado

    OpenAIRE

    Marconi, Nelson

    2010-01-01

    This study aims to demonstrate the existence of segmentation between the public and private labor markets, evidenced by wage differentials and by the distinct wage-setting rules in each, and seeks to discuss in more detail what those rules are in the public sector.

  4. ELABORAÇÃO DE PÃO DE SAL UTILIZANDO FARINHA MISTA DE TRIGO E LINHAÇA

    Directory of Open Access Journals (Sweden)

    T. M. OLIVEIRA

    2008-11-01

    Full Text Available

    Flaxseed (Linum usitatissimum L.) is a seed that contains physiologically active compounds, being a source of fiber, omega-3 and lignans. Its consumption has been associated with the prevention of some diseases and with nutritional benefits. The objective of this work was to test the feasibility of using a mixed wheat and flaxseed flour in the production of salt bread ("pão de sal") by evaluating the physicochemical, sensory and technological characteristics of the bread. Preliminary farinography, extensography and experimental baking tests were carried out to determine the best formulation, which was then used to produce the bread in a commercial bakery. This bread was evaluated for physicochemical, sensory and texture characteristics. The addition of 10% flaxseed flour to salt bread proved technically feasible, with excellent consumer acceptance, pleasant flavor and physicochemical characteristics similar to traditional salt bread, representing a more nutritious and tasty option for the daily diet of many consumers.

  5. Methods of digital image processing

    International Nuclear Information System (INIS)

    Doeler, W.

    1985-01-01

    The increasing use of computerized methods for diagnostic imaging of radiological problems will open up a wide field of applications for digital image processing. The requirements set by routine diagnostics in medical radiology point to picture data storage, documentation and communication as the main points of interest for the application of digital image processing. As to purely radiological problems, the value of digital image processing lies in the improved interpretability of the image information in those cases where the expert's experience and image interpretation by human visual capacities do not suffice. There are many other domains of imaging in medical physics where digital image processing and evaluation are very useful. The paper reviews the various methods available for a variety of problem solutions, and explains the hardware available for the tasks discussed. (orig.) [de

  6. Apmierinātības ar dzīvi saistība ar sociālo salīdzināšanu.

    OpenAIRE

    Tamsone, Liliāna

    2011-01-01

    This bachelor's thesis posed three research questions: 1) What relationship exists between life satisfaction and the level of social comparison? 2) What relationship exists between life satisfaction and upward comparison? 3) What relationship exists between life satisfaction and downward comparison? The study involved 58 respondents: 12 men and 46 women aged 25 to 45. The respondents' mean age was M = 31.67, SD = 5.11. The study instruments included two m...

  7. Stable image acquisition for mobile image processing applications

    Science.gov (United States)

    Henning, Kai-Fabian; Fritze, Alexander; Gillich, Eugen; Mönks, Uwe; Lohweg, Volker

    2015-02-01

    Today, mobile devices (smartphones, tablets, etc.) are widespread and of high importance for their users. Their performance as well as versatility increases over time. This leads to the opportunity to use such devices for more specific tasks like image processing in an industrial context. For the analysis of images requirements like image quality (blur, illumination, etc.) as well as a defined relative position of the object to be inspected are crucial. Since mobile devices are handheld and used in constantly changing environments the challenge is to fulfill these requirements. We present an approach to overcome the obstacles and stabilize the image capturing process such that image analysis becomes significantly improved on mobile devices. Therefore, image processing methods are combined with sensor fusion concepts. The approach consists of three main parts. First, pose estimation methods are used to guide a user moving the device to a defined position. Second, the sensors data and the pose information are combined for relative motion estimation. Finally, the image capturing process is automated. It is triggered depending on the alignment of the device and the object as well as the image quality that can be achieved under consideration of motion and environmental effects.
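The automated capture trigger described above can be sketched as a simple rule combining an image-sharpness metric with the estimated pose error. The variance-of-Laplacian metric, the threshold values, and all names below are our illustrative choices, not the authors' implementation.

```python
import numpy as np

def laplacian_variance(gray):
    """Sharpness metric: variance of a 4-neighbour Laplacian response."""
    lap = (-4 * gray[1:-1, 1:-1] + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return lap.var()

def should_capture(gray, pose_error_deg,
                   blur_threshold=100.0, pose_tolerance_deg=2.0):
    """Trigger only when the frame is sharp AND the device is aligned."""
    return laplacian_variance(gray) > blur_threshold and \
           pose_error_deg < pose_tolerance_deg

rng = np.random.default_rng(0)
sharp = rng.integers(0, 256, (64, 64)).astype(float)  # noise: high Laplacian variance
flat = np.full((64, 64), 128.0)                       # uniform: zero variance (blur-like)
print(should_capture(sharp, pose_error_deg=1.0),
      should_capture(flat, pose_error_deg=1.0))
```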

  8. Fast processing of foreign fiber images by image blocking

    OpenAIRE

    Yutao Wu; Daoliang Li; Zhenbo Li; Wenzhu Yang

    2014-01-01

    In the textile industry, it is always the case that cotton products contain many types of foreign fibers which affect the overall quality of cotton products. As the foundation of automated foreign fiber inspection, image processing exerts a critical impact on the process of foreign fiber identification. This paper presents a new approach for the fast processing of foreign fiber images. This approach includes five main steps: image block, image pre-decision, image background extra...

  9. Biomedical Image Processing

    CERN Document Server

    Deserno, Thomas Martin

    2011-01-01

    In modern medicine, imaging is the most effective tool for diagnostics, treatment planning and therapy. Almost all modalities have moved to direct digital acquisition techniques, and processing of this image data has become an important option for health care in the future. This book is written by a team of internationally recognized experts from all over the world. It provides a brief but complete overview of medical image processing and analysis, highlighting recent advances that have been made in academics. Color figures are used extensively to illustrate the methods and help the reader to understand the complex topics.

  10. Image perception and image processing

    International Nuclear Information System (INIS)

    Wackenheim, A.

    1987-01-01

    The author develops theoretical and practical models of image perception and image processing, based on phenomenology and structuralism and leading to an original account of perception: fundamental for a positivistic approach to research work on the development of artificial intelligence that will be able, in an automated system, to 'read' X-ray pictures. (orig.) [de

  11. Image perception and image processing

    Energy Technology Data Exchange (ETDEWEB)

    Wackenheim, A.

    1987-01-01

    The author develops theoretical and practical models of image perception and image processing, based on phenomenology and structuralism and leading to an original account of perception: fundamental for a positivistic approach to research work on the development of artificial intelligence that will be able, in an automated system, to 'read' X-ray pictures.

  12. Optoelectronic imaging of speckle using image processing method

    Science.gov (United States)

    Wang, Jinjiang; Wang, Pengfei

    2018-01-01

    Detailed image processing of laser speckle interferometry is proposed as an example for a postgraduate course. Several image processing methods are used together in the optoelectronic imaging system: partial differential equations (PDEs) are used to reduce the effect of noise, thresholding segmentation is likewise based on the heat equation with PDEs, the central line is extracted from the image skeleton with branches removed automatically, the phase level is calculated by spline interpolation, and the fringe phase is then unwrapped. Finally, the image processing method was used to automatically measure a bubble in rubber under negative pressure, which could be used in tire inspection.
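The first step named above, PDE-based noise reduction via the heat equation, can be sketched with an explicit finite-difference scheme; the time step and iteration count below are illustrative choices, not values from the paper.

```python
import numpy as np

def diffuse(image, steps=10, dt=0.2):
    """Smooth an image with forward-Euler iterations of u_t = laplacian(u)."""
    u = image.astype(float).copy()
    for _ in range(steps):
        lap = np.zeros_like(u)
        # 4-neighbour Laplacian on the interior; borders are held fixed.
        lap[1:-1, 1:-1] = (u[:-2, 1:-1] + u[2:, 1:-1]
                           + u[1:-1, :-2] + u[1:-1, 2:]
                           - 4 * u[1:-1, 1:-1])
        u += dt * lap   # dt < 0.25 keeps the explicit scheme stable
    return u

noisy = np.random.default_rng(1).normal(100, 20, (32, 32))
print(noisy.std(), diffuse(noisy).std())
```

Each iteration spreads intensity toward the local mean, so the standard deviation of a noise field shrinks while large-scale fringe structure survives longer.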

  13. Introduction to digital image processing

    CERN Document Server

    Pratt, William K

    2013-01-01

    CONTINUOUS IMAGE CHARACTERIZATION: Continuous Image Mathematical Characterization; Image Representation; Two-Dimensional Systems; Two-Dimensional Fourier Transform; Image Stochastic Characterization; Psychophysical Vision Properties; Light Perception; Eye Physiology; Visual Phenomena; Monochrome Vision Model; Color Vision Model; Photometry and Colorimetry; Photometry; Color Matching; Colorimetry Concepts; Color Spaces. DIGITAL IMAGE CHARACTERIZATION: Image Sampling and Reconstruction; Image Sampling and Reconstruction Concepts; Monochrome Image Sampling Systems; Monochrome Image Reconstruction Systems; Color Image Sampling Systems; Image Quantization; Scalar Quantization; Processing Quantized Variables; Monochrome and Color Image Quantization. DISCRETE TWO-DIMENSIONAL LINEAR PROCESSING: Discrete Image Mathematical Characterization; Vector-Space Image Representation; Generalized Two-Dimensional Linear Operator; Image Statistical Characterization; Image Probability Density Models; Linear Operator Statistical Representation; Superposition and Convolution; Finite-Area Superp...

  14. Observaciones histopatologicas de juveniles penaeus vannamei sometidos a dietas artificiales con diferentes concentraciones de una sal de ácido ascórbico (vitamina c)

    OpenAIRE

    Vera Muñoz, L.

    1995-01-01

    Histopathological observations of juvenile Penaeus vannamei fed artificial diets with different concentrations of an ascorbic acid salt (vitamin C). Histological analyses, with histochemical confirmation of the presence of melanin, were performed on juvenile Penaeus vannamei subjected to five experimental diets with different concentrations of a magnesium L-ascorbate-2-phosphate (APM) salt used as the source of ascorbic acid (AA).

  15. Fundamentals of electronic image processing

    CERN Document Server

    Weeks, Arthur R

    1996-01-01

    This book is directed to practicing engineers and scientists who need to understand the fundamentals of image processing theory and algorithms to perform their technical tasks. It is intended to fill the gap between existing high-level texts dedicated to specialists in the field and the need for a more practical, fundamental text on image processing. A variety of example images are used to enhance reader understanding of how particular image processing algorithms work.

  16. Image processing technology for nuclear facilities

    International Nuclear Information System (INIS)

    Lee, Jong Min; Lee, Yong Beom; Kim, Woong Ki; Park, Soon Young

    1993-05-01

    Digital image processing techniques have been actively studied since microprocessors and semiconductor memory devices were developed in the 1960s. Now, image processing boards for personal computers as well as image processing systems for workstations have been developed and are widely applied to medical science, the military, remote inspection, and the nuclear industry. Image processing technology, which provides a computer system with vision capability, not only recognizes non-obvious information but also processes large amounts of information, and is therefore applied to various fields such as remote measurement, object recognition and decision-making in adverse environments, and the analysis of X-ray penetration images in nuclear facilities. In this report, various applications of image processing to nuclear facilities are examined, and image processing techniques are analysed with a view to proposing ideas for future applications. (Author)

  17. [Imaging center - optimization of the imaging process].

    Science.gov (United States)

    Busch, H-P

    2013-04-01

    Hospitals around the world are under increasing pressure to optimize the economic efficiency of treatment processes. Imaging is responsible for a great part of the success, but also of the costs, of treatment. In routine work an excessive supply of imaging methods leads to an "as well as" strategy up to the limit of capacity, without critical reflection. Exams that have no predictable influence on the clinical outcome are an unjustified burden for the patient. They are useless and threaten the financial situation and existence of the hospital. In recent years the focus of process optimization has been exclusively on the quality and efficiency of performed single examinations. In the future, critical discussion of the effectiveness of single exams in relation to the clinical outcome will be more important. Unnecessary exams can be avoided only if, in addition to the optimization of single exams (efficiency), there is an optimization strategy for the total imaging process (efficiency and effectiveness). This requires a new definition of processes (Imaging Pathway), new structures for organization (Imaging Center) and a new kind of thinking on the part of the medical staff. Motivation has to be changed from gratification of performed exams to gratification of process quality (medical quality, service quality, economics), including the avoidance of additional (unnecessary) exams. © Georg Thieme Verlag KG Stuttgart · New York.

  18. 76 FR 9403 - Finding That the Lebanese Canadian Bank SAL Is a Financial Institution of Primary Money...

    Science.gov (United States)

    2011-02-17

    ... Bank SAL Is a Financial Institution of Primary Money Laundering Concern AGENCY: Financial Crimes...'') is a financial institution of primary money laundering concern. DATES: The finding made in this... Law 107-56. Title III of the USA PATRIOT Act amended the anti- money laundering provisions of the Bank...

  19. Fast processing of foreign fiber images by image blocking

    Directory of Open Access Journals (Sweden)

    Yutao Wu

    2014-08-01

    Full Text Available In the textile industry, it is always the case that cotton products contain many types of foreign fibers which affect their overall quality. As the foundation of automated foreign fiber inspection, image processing exerts a critical impact on the process of foreign fiber identification. This paper presents a new approach for the fast processing of foreign fiber images. The approach includes five main steps: image blocking, image pre-decision, image background extraction, image enhancement and segmentation, and image connection. First, the captured color images are transformed into gray-scale images, whose gray levels are then inverted; the whole image is then divided into several blocks. Thereafter, image pre-decision judges which blocks may contain a target foreign fiber image. The candidate blocks are segmented via Otsu's method after background removal and image enhancement. Finally, the relevant segmented blocks are connected to obtain an intact and clear foreign fiber target image. The experimental results show that this segmentation method has the advantage of accuracy and speed over other segmentation methods, and that it also reconnects target images containing fractures, yielding an intact and clear foreign fiber target image.
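The block/pre-decision/Otsu part of the pipeline can be sketched as follows. The block size, the contrast-based pre-decision test, and the threshold implementation are our illustrative choices, not the paper's parameters.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's threshold for a uint8 image via between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                   # class-0 probability up to level t
    mu = np.cumsum(p * np.arange(256))     # first moment up to level t
    mu_t = mu[-1]                          # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.nanargmax(sigma_b))

def segment_blocks(gray, block=16, contrast=30):
    """Segment only blocks whose contrast passes the pre-decision test."""
    mask = np.zeros(gray.shape, dtype=bool)
    for i in range(0, gray.shape[0], block):
        for j in range(0, gray.shape[1], block):
            blk = gray[i:i + block, j:j + block]
            if np.ptp(blk) > contrast:     # pre-decision: enough contrast?
                mask[i:i + block, j:j + block] = blk > otsu_threshold(blk)
    return mask

img = np.full((32, 32), 50, dtype=np.uint8)
img[8:12, 8:24] = 200                      # a bright "fiber" spanning two blocks
print(segment_blocks(img).sum())
```

Skipping the low-contrast blocks is what makes the approach fast: Otsu runs only where a fiber is plausible.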

  20. Diabetes gestacional, hipotiroidismo y concentración urinaria de yodo en embarazadas. Yodurias en escolares en Paraguay: Exceso de yodo en la sal y riesgo de hiper e hipotiroidismo

    OpenAIRE

    Jara Yorg, Jorge Antonio; Pretell, Eduardo A; Ovelar, Elsi; Sánchez Bernal, S; Mendoza, L; Jara Mark, A; Jara Ruiz, Jessica M; Jara Ruiz, Elías; Ortellado, José; Acuña, Vicente; Brizuela, Félix; Rodriguez, Amada; Santos, Jorge; Peña, Giuliana; Arevalos, Cecilia

    2016-01-01

    The main indicator of the impact of iodizing salt for human consumption is the urinary iodine concentration, which is useful for monitoring the salt. In the survey conducted in Paraguay in 1988, goiter prevalence in the school-age population reached 48.6%, with an iodine deficit in the salt, but in 2000, in the Mobile Thyroid project study, it had fallen to 17% as measured by ultrasound. That same year the median urinary iodine level in schoolchildren aged 6-12 ...

  1. AUTOMATION OF IMAGE DATA PROCESSING

    Directory of Open Access Journals (Sweden)

    Preuss Ryszard

    2014-12-01

    Full Text Available This article discusses the current capabilities of automated processing of image data, using the PhotoScan software by Agisoft as an example. At present, image data obtained by various registration systems (metric and non-metric cameras) placed on airplanes, satellites, or more often on UAVs is used to create photogrammetric products. Multiple registrations of an object or land area (large groups of photos are captured) are usually performed in order to eliminate obscured areas as well as to raise the final accuracy of the photogrammetric product. Because of this, the geometry of the resulting image blocks is far from the typical configuration of images. For fast image georeferencing, automatic image matching algorithms are currently applied. They can create a model of a block in the local coordinate system or, using initial exterior orientation and measured control points, can provide image georeference in an external reference frame. In the case of non-metric images, it is also possible to carry out a self-calibration process at this stage. Image matching algorithms are also used in the generation of dense point clouds reconstructing the spatial shape of the object (area). In subsequent processing steps it is possible to obtain typical photogrammetric products such as an orthomosaic, DSM or DTM, and a photorealistic solid model of an object. All the aforementioned processing steps are implemented in a single program, in contrast to standard commercial software that divides the steps into dedicated modules. Image processing leading to final georeferenced products can be fully automated, including sequential implementation of the processing steps at predetermined control parameters. The paper presents the practical results of fully automatic generation of orthomosaics both for images obtained by a metric Vexcel camera and for a block of images acquired by a non-metric UAV system.

  2. SalHUD--A Graphical Interface to Public Health Data in Puerto Rico.

    Science.gov (United States)

    Ortiz-Zuazaga, Humberto G; Arce-Corretjer, Roberto; Solá-Sloan, Juan M; Conde, José G

    2015-12-22

    This paper describes SalHUD, a prototype web-based application for visualizing health data from Puerto Rico. Our initial focus was to provide interactive maps displaying years of potential life lost (YPLL). The public-use mortality file for year 2008 was downloaded from the Puerto Rico Institute of Statistics website. Data was processed with R, Python and EpiInfo to calculate years of potential life lost for the leading causes of death in each of the 78 municipalities on the island. Death records were classified according to ICD-10 codes. YPLL for each municipality was integrated into AtlasPR, a D3 Javascript map library. Additional Javascript, HTML and CSS programming was required to display maps as a web-based interface. YPLL for all municipalities are displayed on a map of Puerto Rico for each of the ten leading causes of death and for all causes combined, so users may dynamically explore the impact of premature mortality. This work is the first step in providing the general public in Puerto Rico with user-friendly, interactive, visual access to public health data that is usually published in numerical, text-based media.
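The YPLL measure computed above has a simple arithmetic core: each death before a reference age contributes the difference between that age and the age at death. A minimal sketch in Python, assuming a reference age of 75 and invented ICD-10 codes; the study's exact convention and data are not reproduced here:

```python
# Illustrative YPLL (years of potential life lost) calculation.
# The reference age of 75 is one common convention, not necessarily
# the one used by SalHUD; the death records are made up.

REFERENCE_AGE = 75

def ypll(records, reference_age=REFERENCE_AGE):
    """Sum of years lost before the reference age, per cause."""
    totals = {}
    for age, cause in records:
        lost = reference_age - age
        if lost > 0:                      # deaths past the reference age add nothing
            totals[cause] = totals.get(cause, 0) + lost
    return totals

deaths = [(62, "I21"), (80, "I21"), (45, "C34"), (70, "C34")]
by_cause = ypll(deaths)
# I21: 75-62 = 13; C34: (75-45) + (75-70) = 35; the death at 80 is ignored.
```

Aggregating these totals per municipality is then a matter of grouping the records before summation.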

  3. SalHUD—A Graphical Interface to Public Health Data in Puerto Rico

    Directory of Open Access Journals (Sweden)

    Humberto G. Ortiz-Zuazaga

    2015-12-01

    Full Text Available Purpose: This paper describes SalHUD, a prototype web-based application for visualizing health data from Puerto Rico. Our initial focus was to provide interactive maps displaying years of potential life lost (YPLL). Methods: The public-use mortality file for year 2008 was downloaded from the Puerto Rico Institute of Statistics website. Data was processed with R, Python and EpiInfo to calculate years of potential life lost for the leading causes of death in each of the 78 municipalities on the island. Death records were classified according to ICD-10 codes. YPLL for each municipality was integrated into AtlasPR, a D3 Javascript map library. Additional Javascript, HTML and CSS programming was required to display maps as a web-based interface. Results: YPLL for all municipalities are displayed on a map of Puerto Rico for each of the ten leading causes of death and for all causes combined, so users may dynamically explore the impact of premature mortality. Discussion: This work is the first step in providing the general public in Puerto Rico with user-friendly, interactive, visual access to public health data that is usually published in numerical, text-based media.

  4. mSalUV: a new mobile messaging system for diabetes control in Mexico

    OpenAIRE

    Néstor Iván Cabrera Mendoza; Pedro Pablo Castro Enriquez; Verónica Patricia Demeneghi Marini; Luis Fernández Luque; Jaime Morales Romero; Luis Sainz Vazquez; María Cristina Ortiz León

    2014-01-01

    OBJECTIVE: To design and develop a mobile messaging system called mSalUV that reminds patients with type 2 diabetes mellitus to take their medication and attend appointments and that promotes healthy lifestyles, as well as to explore their opinion regarding use of the system. METHODS: Three stages were considered: the first included the design and development of mSalUV. The second covered the design and construction of the text messages. The third explored the users' opinions...

  5. Motion-compensated processing of image signals

    NARCIS (Netherlands)

    2010-01-01

    In a motion-compensated processing of images, input images are down-scaled (sc1) to obtain down-scaled images, the down-scaled images are subjected to motion- compensated processing (ME UPC) to obtain motion-compensated images, the motion- compensated images are up-scaled (sc2) to obtain up-scaled

  6. Medical image processing

    CERN Document Server

    Dougherty, Geoff

    2011-01-01

    This book is designed for end users in the field of digital imaging, who wish to update their skills and understanding with the latest techniques in image analysis. This book emphasizes the conceptual framework of image analysis and the effective use of image processing tools. It uses applications in a variety of fields to demonstrate and consolidate both specific and general concepts, and to build intuition, insight and understanding. Although the chapters are essentially self-contained they reference other chapters to form an integrated whole. Each chapter employs a pedagogical approach to e

  7. An effective parameter optimization technique for vibration flow field characterization of PP melts via LS-SVM combined with SALS in an electromagnetism dynamic extruder

    Science.gov (United States)

    Xian, Guangming

    2018-03-01

    A method for predicting the optimal vibration field parameters by least squares support vector machine (LS-SVM) is presented in this paper. One convenient and commonly used technique for characterizing the vibration flow field of polymer melt films is small-angle light scattering (SALS) in a visualized slit die of the electromagnetism dynamic extruder. The optimal values of vibration frequency, vibration amplitude, and the maximum light-intensity projection area can be obtained by using LS-SVM for prediction. To illustrate this method and show its validity, polypropylene (PP) is used as the flowing material and fifteen samples are tested at a screw rotation speed of 36 rpm. This paper first describes the SALS apparatus used to perform the experiments, then gives the theoretical basis of the new method, and details the experimental results for parameter prediction of the vibration flow field. It is demonstrated that it is possible to use the SALS method and obtain detailed information on the optimal parameters of the vibration flow field of PP melts by LS-SVM.
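The LS-SVM prediction step named above reduces, in the standard Suykens formulation, to solving one linear system in the dual variables. A small sketch with an RBF kernel; the toy 1-D data, kernel width and regularization constant are placeholders of our own, not the paper's vibration-field samples:

```python
import numpy as np

# Minimal LS-SVM regression sketch (RBF kernel), standard Suykens
# formulation: solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
# All data and hyperparameters below are invented for illustration.

SIGMA = 0.2      # RBF kernel width (assumed)
GAMMA = 100.0    # regularization constant (assumed)

def rbf(a, b, sigma=SIGMA):
    """Pairwise Gaussian kernel matrix between row-vector sets a and b."""
    d = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-d / (2 * sigma**2))

def lssvm_fit(X, y, gamma=GAMMA):
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]            # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new):
    return rbf(X_new, X_train) @ alpha + b

# Toy data: learn a smooth 1-D response, then predict on the inputs.
X = np.linspace(0, 1, 15)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
b, alpha = lssvm_fit(X, y)
pred = lssvm_predict(X, b, alpha, X)
max_err = float(np.max(np.abs(pred - y)))
```

In the paper's setting, `X` would hold vibration frequency and amplitude and `y` the measured response such as the light-intensity projection area.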

  8. Image quality dependence on image processing software in ...

    African Journals Online (AJOL)

    Image quality dependence on image processing software in computed radiography. ... Agfa CR readers use MUSICA software, and an upgrade with significantly different image ...

  9. Iodine concentrations in urine and in household salt in women aged 12 to 49 years in Peru

    Directory of Open Access Journals (Sweden)

    Carolina Tarqui-Mamani

    Full Text Available Objectives. To determine iodine concentrations in urine and in household salt in women aged 12 to 49 years in Peru. Materials and methods. An observational, cross-sectional study was carried out. During 2012 and 2013, women aged 12 to 49 years residing in Peruvian households were included, selected through probabilistic, stratified, multistage sampling. Urinary iodine was determined by spectrophotometry based on the Sandell-Kolthoff reaction. Qualitative assessment of iodine in salt was performed with iodine test strips (yoditest) and quantitative assessment by volumetric titration. Data were processed as weighted complex samples. Medians, interquartile ranges, and percentiles were obtained. Results. The median urinary iodine among participants was 250.4 μg/L; the departments with elevated median urinary iodine were Moquegua (389.3 μg/L), Tacna (320.5 μg/L), Madre de Dios (319.8 μg/L), and Ucayali (306.0 μg/L), while Puno (192.9 μg/L), Piura (188.1 μg/L), and Tumbes (180.5 μg/L) had medians within the range recommended by the WHO. The median urinary iodine in pregnant women was 274.6 μg/L (IQR: 283 μg/L). 82.5% of the salt samples had iodine ≥30 ppm and 1.9% had values of 0 ppm. Conclusions. The median urinary iodine in Peruvian women is above the level recommended by the WHO, and most salt samples had adequate iodine concentrations according to the WHO.

  10. Image Processing and Features Extraction of Fingerprint Images ...

    African Journals Online (AJOL)

    To demonstrate the importance of the image processing of fingerprint images prior to image enrolment or comparison, the set of fingerprint images in databases (a) and (b) of the FVC (Fingerprint Verification Competition) 2000 database were analyzed using a features extraction algorithm. This paper presents the results of ...

  11. Biomedical signal and image processing

    CERN Document Server

    Najarian, Kayvan

    2012-01-01

    INTRODUCTION TO DIGITAL SIGNAL AND IMAGE PROCESSING: Signals and Biomedical Signal Processing; Introduction and Overview; What is a "Signal"?; Analog, Discrete, and Digital Signals; Processing and Transformation of Signals; Signal Processing for Feature Extraction; Some Characteristics of Digital Images; Summary; Problems. Fourier Transform: Introduction and Overview; One-Dimensional Continuous Fourier Transform; Sampling and Nyquist Rate; One-Dimensional Discrete Fourier Transform; Two-Dimensional Discrete Fourier Transform; Filter Design; Summary; Problems. Image Filtering, Enhancement, and Restoration: Introduction and Overview

  12. Industrial Applications of Image Processing

    Science.gov (United States)

    Ciora, Radu Adrian; Simion, Carmen Mihaela

    2014-11-01

    The recent advances in sensor quality and processing power provide us with excellent tools for designing more complex image processing and pattern recognition tasks. In this paper we review existing applications of image processing and pattern recognition in industrial engineering. First we define the role of vision in an industrial setting. Then an overview of some image processing techniques, feature extraction, object recognition, and industrial robot guidance is presented. Moreover, examples of implementations of such techniques in industry are presented, including automated visual inspection, process control, part identification, and robot control. Finally, we present some conclusions regarding the investigated topics and directions for future investigation.

  13. Image processing in radiology

    International Nuclear Information System (INIS)

    Dammann, F.

    2002-01-01

    Medical image processing and analysis methods have significantly improved during recent years and are now being increasingly used in clinical applications. Preprocessing algorithms are used to influence image contrast and noise. Three-dimensional visualization techniques including volume rendering and virtual endoscopy are increasingly available to evaluate sectional imaging data sets. Registration techniques have been developed to merge different examination modalities. Structures of interest can be extracted from the image data sets by various segmentation methods. Segmented structures are used for automated quantification analysis as well as for three-dimensional therapy planning, simulation and intervention guidance, including medical modelling, virtual reality environments, surgical robots and navigation systems. These newly developed methods require specialized skills for the production and postprocessing of radiological imaging data as well as new definitions of the roles of the traditional specialities. The aim of this article is to give an overview of the state of the art of medical image processing methods, practical implications for the radiologist's daily work, and future aspects. (orig.) [de

  14. Microprocessor based image processing system

    International Nuclear Information System (INIS)

    Mirza, M.I.; Siddiqui, M.N.; Rangoonwala, A.

    1987-01-01

    Rapid developments in the production of integrated circuits and the introduction of sophisticated 8-, 16- and now 32-bit microprocessor-based computers have set new trends in computer applications. Nowadays users, by investing much less money, can make optimal use of smaller systems custom-tailored to their requirements. During the past decade there have been great advancements in the field of computer graphics and, consequently, 'Image Processing' has emerged as a separate independent field. Image processing is being used in a number of disciplines. In the medical sciences, it is used to construct pseudo-color images from computer-aided tomography (CAT) or positron emission tomography (PET) scanners. Art, advertising and publishing people use pseudo-colors in pursuit of more effective graphics. Structural engineers use image processing to examine weld X-rays to search for imperfections. Photographers use image processing for various enhancements which are difficult to achieve in a conventional darkroom. (author)

  15. Image Processing: Some Challenging Problems

    Science.gov (United States)

    Huang, T. S.; Aizawa, K.

    1993-11-01

    Image processing can be broadly defined as the manipulation of signals which are inherently multidimensional. The most common such signals are photographs and video sequences. The goals of processing or manipulation can be (i) compression for storage or transmission; (ii) enhancement or restoration; (iii) analysis, recognition, and understanding; or (iv) visualization for human observers. The use of image processing techniques has become almost ubiquitous; they find applications in such diverse areas as astronomy, archaeology, medicine, video communication, and electronic games. Nonetheless, many important problems in image processing remain unsolved. It is the goal of this paper to discuss some of these challenging problems. In Section I, we mention a number of outstanding problems. Then, in the remainder of this paper, we concentrate on one of them: very-low-bit-rate video compression. This is chosen because it involves almost all aspects of image processing.

  16. An application of image processing techniques in computed tomography image analysis

    DEFF Research Database (Denmark)

    McEvoy, Fintan

    2007-01-01

    ... number of animals and image slices, automation of the process was desirable. The open-source and free image analysis program ImageJ was used. A macro procedure was created that provided the required functionality. The macro performs a number of basic image processing procedures. These include an initial process designed to remove the scanning table from the image and to center the animal in the image. This is followed by placement of a vertical line segment from the midpoint of the upper border of the image to the image center. Measurements are made between automatically detected outer and inner boundaries of subcutaneous adipose tissue along this line segment. This process was repeated as the image was rotated (with the line position remaining unchanged) so that measurements around the complete circumference were obtained. Additionally, an image was created showing all detected boundary points so ...
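The measurement between outer and inner boundaries along a line segment can be sketched as a threshold-crossing scan over a 1-D intensity profile. The profile and threshold below are invented, and plain Python stands in for the ImageJ macro language:

```python
# Sketch of the boundary-measurement idea: walk outward along a line
# of pixels and record where the intensity profile enters and leaves
# an above-threshold band (e.g. a layer of subcutaneous fat).
# The profile values and threshold are made up, not real CT data.

def band_boundaries(profile, threshold):
    """Return (outer, inner) indices of the first above-threshold run."""
    outer = inner = None
    for i, value in enumerate(profile):
        if outer is None and value >= threshold:
            outer = i                 # first pixel inside the band
        elif outer is not None and value < threshold:
            inner = i - 1             # last pixel of the band
            break
    return outer, inner

# Low background, a bright band, then lower values again.
profile = [0, 5, 8, 120, 130, 125, 118, 30, 20, 10]
outer, inner = band_boundaries(profile, threshold=100)
thickness = inner - outer + 1         # band width in pixels: 4
```

Repeating such a scan while rotating the sampling line around the image center yields the circumferential measurements the abstract describes.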

  17. TECHNOLOGIES OF BRAIN IMAGES PROCESSING

    Directory of Open Access Journals (Sweden)

    O.M. Klyuchko

    2017-12-01

    Full Text Available The purpose of the present research was to analyze modern methods of processing biological images implemented before storage in databases for biotechnological purposes. The databases were further incorporated into web-based digital systems. Examples of such information systems are described in this work for two levels of biological material organization: databases for storing data of histological analysis and of the whole brain. Methods of neuroimage processing for an electronic brain atlas were considered. It was shown that certain pathological features can be revealed by histological image processing. Several medical diagnostic techniques (for certain brain pathologies, etc.) as well as a few biotechnological methods are based on such effects. Algorithms of image processing were suggested. The electronic brain atlas is described in detail, in a form convenient for professionals in different fields. Approaches to brain atlas elaboration, a "composite" scheme for large deformations, and several methods of mathematical image processing were described as well.

  18. Image processing in medical ultrasound

    DEFF Research Database (Denmark)

    Hemmsen, Martin Christian

    This Ph.D. project addresses image processing in medical ultrasound and seeks to achieve two major scientific goals: first, to develop an understanding of the most significant factors influencing image quality in medical ultrasound, and secondly, to use this knowledge to develop image processing methods for enhancing the diagnostic value of medical ultrasound. The project is an industrial Ph.D. project co-sponsored by BK Medical ApS, with the commercial goal of improving the image quality of BK Medical's scanners. Currently BK Medical employs a simple conventional delay-and-sum beamformer to generate ... multiple imaging setups. This makes the system well suited for development of new processing methods and for clinical evaluations, where acquisition of the exact same scan location for multiple methods is important. The second project addressed the implementation, development and evaluation of SASB using ...

  19. A novel data processing technique for image reconstruction of penumbral imaging

    Science.gov (United States)

    Xie, Hongwei; Li, Hongyun; Xu, Zeping; Song, Guzhou; Zhang, Faqiang; Zhou, Lin

    2011-06-01

    A CT image reconstruction technique was applied to the data processing of penumbral imaging. Compared with other traditional processing techniques for penumbral coded-pinhole images, such as Wiener, Lucy-Richardson and blind deconvolution, this approach is brand new. In this method, the coded-aperture processing was, for the first time, performed independently of the point spread function of the imaging diagnostic system. In this way, the technical obstacle in traditional coded-pinhole image processing caused by the uncertainty of the point spread function of the imaging diagnostic system was overcome. Based on the theoretical study, a simulation of penumbral imaging and image reconstruction was carried out, providing fairly good results. In the visible-light experiment, a point source of light was used to irradiate a 5 mm × 5 mm object after diffuse scattering and volume scattering. The penumbral image was made with an aperture size of ~20 mm. Finally, the CT image reconstruction technique was used for image reconstruction, providing a fairly good reconstruction result.

  20. The internal market in Portuguese America: the metropolitan "exclusive" on colonial trade and the "descaminhos do sal" (salt smuggling) in the Captaincy of São Paulo in the first half of the 18th century

    Directory of Open Access Journals (Sweden)

    Artur José Renda Vitorino

    2012-12-01

    Full Text Available This article analyzes the mercantile activities of Francisco Pinheiro, a Portuguese wholesale merchant, and his commercial agents in the captaincy of São Paulo in the first half of the 18th century, in the trade of salt, a product monopolized by the Portuguese Crown. However, the salt contract brought his agency far more trouble than a secure means of amassing wealth, as it faced illegal competition from the "owners of salt" present in the region and interference by the municipal councils in its business.

  1. Image quality and dose differences caused by vendor-specific image processing of neonatal radiographs

    International Nuclear Information System (INIS)

    Sensakovic, William F.; O'Dell, M.C.; Letter, Haley; Kohler, Nathan; Rop, Baiywo; Cook, Jane; Logsdon, Gregory; Varich, Laura

    2016-01-01

    Image processing plays an important role in optimizing image quality and radiation dose in projection radiography. Unfortunately commercial algorithms are black boxes that are often left at or near vendor default settings rather than being optimized. We hypothesize that different commercial image-processing systems, when left at or near default settings, create significant differences in image quality. We further hypothesize that image-quality differences can be exploited to produce images of equivalent quality but lower radiation dose. We used a portable radiography system to acquire images on a neonatal chest phantom and recorded the entrance surface air kerma (ESAK). We applied two image-processing systems (Optima XR220amx, by GE Healthcare, Waukesha, WI; and MUSICA² by Agfa HealthCare, Mortsel, Belgium) to the images. Seven observers (attending pediatric radiologists and radiology residents) independently assessed image quality using two methods: rating and matching. Image-quality ratings were independently assessed by each observer on a 10-point scale. Matching consisted of each observer matching GE-processed images and Agfa-processed images with equivalent image quality. A total of 210 rating tasks and 42 matching tasks were performed and effective dose was estimated. Median Agfa-processed image-quality ratings were higher than GE-processed ratings. Non-diagnostic ratings were seen over a wider range of doses for GE-processed images than for Agfa-processed images. During matching tasks, observers matched image quality between GE-processed images and Agfa-processed images acquired at a lower effective dose (11 ± 9 μSv; P < 0.0001). Image-processing methods significantly impact perceived image quality. These image-quality differences can be exploited to alter protocols and produce images of equivalent image quality but lower doses. Those purchasing projection radiography systems or third-party image-processing software should be aware that image processing

  2. Use of a personal computer for image processing of a magnetic resonance image (MRI)

    International Nuclear Information System (INIS)

    Yamamoto, Tetsuo; Tanaka, Hitoshi

    1988-01-01

    Image processing of MR images was attempted using a popular 16-bit personal computer. The computer processed the images on 256 × 256 and 512 × 512 matrices. The software for image processing was written in Macro-Assembler under MS-DOS. The original images, acquired with a 0.5 T superconducting machine (VISTA MR 0.5 T, Picker International), were transferred to the computer on a flexible diskette. Image processes such as display of the image on a monitor, contrast enhancement, unsharp-mask contrast enhancement, various filter processes, edge detection, and the color histogram were obtained in 1.6 s to 67 s, indicating that a commercialized personal computer had the ability for routine clinical MRI processing. (author)
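Among the operations listed, unsharp-mask contrast enhancement is easy to illustrate on a 1-D signal: sharpened = original + k · (original − blurred). The kernel and gain below are arbitrary illustrative choices, not those of the original software:

```python
# 1-D unsharp-mask sketch: subtract a blurred copy from the signal and
# add the difference back, which exaggerates edges. The 3-tap box blur
# and gain k are assumptions made for this illustration.

def box_blur(signal):
    """3-tap moving average with edge clamping."""
    n = len(signal)
    out = []
    for i in range(n):
        window = signal[max(0, i - 1):min(n, i + 2)]
        out.append(sum(window) / len(window))
    return out

def unsharp_mask(signal, k=1.0):
    blurred = box_blur(signal)
    return [s + k * (s - b) for s, b in zip(signal, blurred)]

edge = [10, 10, 10, 50, 50, 50]
sharpened = unsharp_mask(edge)
# The step edge overshoots on both sides: values dip below 10 just
# before the edge and rise above 50 just after it.
```

The same idea applies per pixel in 2-D, with the box blur replaced by a 2-D smoothing kernel.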

  3. Introduction to computer image processing

    Science.gov (United States)

    Moik, J. G.

    1973-01-01

    Theoretical backgrounds and digital techniques for a class of image processing problems are presented. Image formation in the context of linear system theory, image evaluation, noise characteristics, mathematical operations on images, and their implementation are discussed. Various techniques for image restoration and image enhancement are presented. Methods for object extraction and the problem of pictorial pattern recognition and classification are discussed.

  4. Statistical image processing and multidimensional modeling

    CERN Document Server

    Fieguth, Paul

    2010-01-01

    Images are all around us! The proliferation of low-cost, high-quality imaging devices has led to an explosion in acquired images. When these images are acquired from a microscope, telescope, satellite, or medical imaging device, there is a statistical image processing task: the inference of something - an artery, a road, a DNA marker, an oil spill - from imagery, possibly noisy, blurry, or incomplete. A great many textbooks have been written on image processing. However this book does not so much focus on images, per se, but rather on spatial data sets, with one or more measurements taken over

  5. Volumetric image processing: A new technique for three-dimensional imaging

    International Nuclear Information System (INIS)

    Fishman, E.K.; Drebin, B.; Magid, D.; St Ville, J.A.; Zerhouni, E.A.; Siegelman, S.S.; Ney, D.R.

    1986-01-01

    Volumetric three-dimensional (3D) image processing was performed on CT scans of 25 normal hips, and image quality and potential diagnostic applications were assessed. In contrast to surface-detection 3D techniques, volumetric processing preserves every pixel of transaxial CT data, replacing the gray scale with transparent ''gels'' and shading. Anatomically accurate 3D images can be rotated and manipulated in real time, including simulated tissue-layer ''peeling'' and mock surgery or disarticulation. This pilot study suggests that volumetric rendering is a major advance in signal processing of medical image data, producing a high-quality, uniquely maneuverable image that is useful for fracture interpretation, soft-tissue analysis, surgical planning, and surgical rehearsal.
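The "transparent gels" idea can be illustrated with front-to-back alpha compositing along a single ray, the accumulation rule volume rendering typically uses. The sample intensities and opacities below are invented; real systems derive them from a transfer function over CT numbers:

```python
# Toy front-to-back compositing along one ray, showing how volumetric
# rendering keeps a contribution from every voxel rather than only a
# detected surface. Sample values are made up for illustration.

def composite(samples):
    """samples: list of (intensity, opacity) pairs, front to back."""
    color, transmitted = 0.0, 1.0
    for intensity, opacity in samples:
        color += transmitted * opacity * intensity   # weighted by light still passing
        transmitted *= (1.0 - opacity)               # attenuate remaining light
        if transmitted < 1e-3:                       # early ray termination
            break
    return color, transmitted

ray = [(0.2, 0.1), (0.9, 0.5), (0.4, 0.8)]
color, transmitted = composite(ray)
# color accumulates to 0.569 and 9% of the light passes through,
# so even the semi-transparent front sample tints the result.
```

Repeating this per ray over the image plane, with rotated sampling directions, is what lets such a volume be viewed from any angle without recomputing a surface.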

  6. REMOTE SENSING IMAGE QUALITY ASSESSMENT EXPERIMENT WITH POST-PROCESSING

    Directory of Open Access Journals (Sweden)

    W. Jiang

    2018-04-01

    Full Text Available This paper briefly describes a post-processing influence assessment experiment. The experiment includes three steps: physical simulation, image processing, and image quality assessment. The physical simulation models a sampled imaging system in the laboratory; the imaging system parameters are tested, and the digital images serving as image-processing input are produced by this imaging system with the same imaging system parameters. The gathered optically sampled images with the tested imaging parameters are processed by three digital image processes: calibration pre-processing, lossy compression with different compression ratios, and image post-processing with different cores. The image quality assessment method used is just-noticeable-difference (JND) subjective assessment based on ISO 20462; through subjective assessment of the gathered and processed images, the influence of different imaging parameters and post-processing on image quality can be found. The six JND subjective assessment experimental data sets can be validated against each other. Main conclusions include: image post-processing can improve image quality; image post-processing can improve image quality even with lossy compression, although image quality with a higher compression ratio improves less than with a lower ratio; and with our image post-processing method, image quality is better when the camera MTF is within a small range.

  7. Image processing based detection of lung cancer on CT scan images

    Science.gov (United States)

    Abdillah, Bariqi; Bustamam, Alhadi; Sarwinda, Devvi

    2017-10-01

    In this paper, we implement and analyze an image processing method for the detection of lung cancer. Image processing techniques are widely used in several medical problems for picture enhancement in the detection phase to support early medical treatment. In this research we propose a detection method for lung cancer based on image segmentation. Image segmentation is an intermediate level of image processing. Marker-controlled watershed and region-growing approaches are used to segment the CT scan images. The detection phases consist of image enhancement using a Gabor filter, image segmentation, and feature extraction. The experimental results show the effectiveness of our approach: the best approach for main feature detection is watershed with the masking method, which has high accuracy and is robust.
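Of the two segmentation approaches named, region growing is the simpler to sketch: starting from a seed pixel, absorb connected neighbours whose intensity stays within a tolerance of the seed. The toy array, seed and tolerance below are ours, not the paper's CT data:

```python
from collections import deque

# Minimal region-growing sketch on a toy 2-D array. Real lung-CT
# segmentation adds preprocessing (e.g. the Gabor filtering mentioned
# above) and more robust inclusion criteria than a fixed tolerance.

def region_grow(image, seed, tol):
    """Breadth-first growth over 4-connected neighbours."""
    h, w = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in region
                    and abs(image[nr][nc] - seed_val) <= tol):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

# A bright connected blob (values near 90) on a dark background.
image = [
    [10, 12, 90, 91],
    [11, 13, 92, 90],
    [55, 14, 15, 93],
]
blob = region_grow(image, seed=(0, 2), tol=5)
# The region covers exactly the five bright pixels.
```

Watershed segmentation, by contrast, floods from multiple markers at once and assigns disputed pixels to basin boundaries, which is why marker placement (the "masking method" above) matters so much there.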

  8. Image quality and dose differences caused by vendor-specific image processing of neonatal radiographs

    Energy Technology Data Exchange (ETDEWEB)

    Sensakovic, William F.; O'Dell, M.C.; Letter, Haley; Kohler, Nathan; Rop, Baiywo; Cook, Jane; Logsdon, Gregory; Varich, Laura [Florida Hospital, Imaging Administration, Orlando, FL (United States)

    2016-10-15

    Image processing plays an important role in optimizing image quality and radiation dose in projection radiography. Unfortunately commercial algorithms are black boxes that are often left at or near vendor default settings rather than being optimized. We hypothesize that different commercial image-processing systems, when left at or near default settings, create significant differences in image quality. We further hypothesize that image-quality differences can be exploited to produce images of equivalent quality but lower radiation dose. We used a portable radiography system to acquire images on a neonatal chest phantom and recorded the entrance surface air kerma (ESAK). We applied two image-processing systems (Optima XR220amx, by GE Healthcare, Waukesha, WI; and MUSICA² by Agfa HealthCare, Mortsel, Belgium) to the images. Seven observers (attending pediatric radiologists and radiology residents) independently assessed image quality using two methods: rating and matching. Image-quality ratings were independently assessed by each observer on a 10-point scale. Matching consisted of each observer matching GE-processed images and Agfa-processed images with equivalent image quality. A total of 210 rating tasks and 42 matching tasks were performed and effective dose was estimated. Median Agfa-processed image-quality ratings were higher than GE-processed ratings. Non-diagnostic ratings were seen over a wider range of doses for GE-processed images than for Agfa-processed images. During matching tasks, observers matched image quality between GE-processed images and Agfa-processed images acquired at a lower effective dose (11 ± 9 μSv; P < 0.0001). Image-processing methods significantly impact perceived image quality. These image-quality differences can be exploited to alter protocols and produce images of equivalent image quality but lower doses. Those purchasing projection radiography systems or third-party image-processing software should be aware that image

  9. Image quality and dose differences caused by vendor-specific image processing of neonatal radiographs.

    Science.gov (United States)

    Sensakovic, William F; O'Dell, M Cody; Letter, Haley; Kohler, Nathan; Rop, Baiywo; Cook, Jane; Logsdon, Gregory; Varich, Laura

    2016-10-01

    Image processing plays an important role in optimizing image quality and radiation dose in projection radiography. Unfortunately commercial algorithms are black boxes that are often left at or near vendor default settings rather than being optimized. We hypothesize that different commercial image-processing systems, when left at or near default settings, create significant differences in image quality. We further hypothesize that image-quality differences can be exploited to produce images of equivalent quality but lower radiation dose. We used a portable radiography system to acquire images on a neonatal chest phantom and recorded the entrance surface air kerma (ESAK). We applied two image-processing systems (Optima XR220amx, by GE Healthcare, Waukesha, WI; and MUSICA(2) by Agfa HealthCare, Mortsel, Belgium) to the images. Seven observers (attending pediatric radiologists and radiology residents) independently assessed image quality using two methods: rating and matching. Image-quality ratings were independently assessed by each observer on a 10-point scale. Matching consisted of each observer matching GE-processed images and Agfa-processed images with equivalent image quality. A total of 210 rating tasks and 42 matching tasks were performed and effective dose was estimated. Median Agfa-processed image-quality ratings were higher than GE-processed ratings. Non-diagnostic ratings were seen over a wider range of doses for GE-processed images than for Agfa-processed images. During matching tasks, observers matched image quality between GE-processed images and Agfa-processed images acquired at a lower effective dose (11 ± 9 μSv; P < 0.0001). Image-processing methods significantly impact perceived image quality. These image-quality differences can be exploited to alter protocols and produce images of equivalent image quality but lower doses. Those purchasing projection radiography systems or third-party image-processing software should be aware that image

  10. Scilab and SIP for Image Processing

    OpenAIRE

    Fabbri, Ricardo; Bruno, Odemir Martinez; Costa, Luciano da Fontoura

    2012-01-01

    This paper is an overview of Image Processing and Analysis using Scilab, a free prototyping environment for numerical calculations similar to Matlab. We demonstrate the capabilities of SIP -- the Scilab Image Processing Toolbox -- which extends Scilab with many functions to read and write images in over 100 major file formats, including PNG, JPEG, BMP, and TIFF. It also provides routines for image filtering, edge detection, blurring, segmentation, shape analysis, and image recognition. Basic ...
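
    The toolbox operations listed above (filtering, edge detection) can be illustrated with a small pure-Python sketch. This is an analogous Sobel gradient, not SIP code; the function names are hypothetical.

```python
# Hypothetical pure-Python analogue of the edge-detection step a toolbox like
# SIP provides; Sobel gradient magnitude on a 2-D list-of-lists image.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def convolve3x3(img, kernel):
    """Valid-mode 3x3 filtering (kernel applied as correlation, no flip)."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(1, h - 1):
        row = []
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky][kx] * img[y - 1 + ky][x - 1 + kx]
            row.append(acc)
        out.append(row)
    return out

def edge_magnitude(img):
    """L1 gradient magnitude |Gx| + |Gy| from the two Sobel responses."""
    gx = convolve3x3(img, SOBEL_X)
    gy = convolve3x3(img, SOBEL_Y)
    return [[abs(a) + abs(b) for a, b in zip(r1, r2)] for r1, r2 in zip(gx, gy)]

# A vertical step edge: the gradient response peaks at the boundary columns.
step = [[0, 0, 0, 9, 9, 9] for _ in range(5)]
mag = edge_magnitude(step)
```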

  11. Enhancement of image contrast in linacgram through image processing

    International Nuclear Information System (INIS)

    Suh, Hyun Suk; Shin, Hyun Kyo; Lee, Re Na

    2000-01-01

    Conventional radiation therapy portal images give low-contrast images. The purpose of this study was to enhance the image contrast of a linacgram by developing a low-cost image-processing method. A chest linacgram was obtained by irradiating a humanoid phantom and scanned using a Diagnostic-Pro scanner for image processing. Several scan methods were used, including optical density scan, histogram-equalized scan, linear histogram-based scan, linear histogram-independent scan, linear optical density scan, logarithmic scan, and power square-root scan. The histogram distributions of the scanned images were plotted and the ranges of the gray scale were compared among the various scan types. The scanned images were then transformed to the gray window by the palette-fitting method, and the contrast of the reprocessed portal images was evaluated for image improvement. Portal images of patients were also taken at various anatomic sites and the images were processed by the Gray Scale Expansion (GSE) method. The patient images were analyzed to examine the feasibility of using the GSE technique in the clinic. The histogram distribution showed that minimum and maximum gray-scale ranges of 3192 and 21940 were obtained when the image was scanned using the logarithmic and square-root methods, respectively. Out of the 256 gray-scale steps, only 7% to 30% were used. After expanding the gray scale to the full range, the contrast of the portal images was improved. Experiments performed with patient images showed that improved identification of organs was achieved by GSE in portal images of the knee joint, head and neck, lung, and pelvis. The phantom study demonstrated that the GSE technique improved the image contrast of a linacgram. This indicates that the decrease in image quality resulting from the dual exposure could be improved by expanding the gray scale. 
As a result, the improved technique will make it possible to compare the digitally reconstructed radiographs (DRR) and simulation image for
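
    The gray-scale expansion step described above amounts to a linear contrast stretch; a minimal sketch, assuming a 16-bit full scale and a hypothetical function name:

```python
# Sketch of the Gray Scale Expansion (GSE) idea: linearly remap the occupied
# gray range [lo, hi] onto the full scale. The 16-bit full range and function
# name are illustrative assumptions, not the paper's code.

def gray_scale_expand(pixels, full_scale=65535):
    """Stretch the occupied gray range of a flat pixel list to [0, full_scale]."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                      # flat image: nothing to stretch
        return [0 for _ in pixels]
    scale = full_scale / (hi - lo)
    return [round((p - lo) * scale) for p in pixels]

# A narrow-range strip (values occupy a tiny fraction of the 16-bit range)
narrow = [3192, 3292, 3392, 3492]
expanded = gray_scale_expand(narrow)
```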

  12. Digital Data Processing of Images

    African Journals Online (AJOL)

    Digital data processing was investigated to perform image processing. Image smoothing and restoration were explored and promising results obtained. The use of the computer, not only as a data management device, but as an important tool to render quantitative information, was illustrated by lung function determination.

  13. Image processing in diabetic related causes

    CERN Document Server

    Kumar, Amit

    2016-01-01

    This book is a collection of the experimental results and analyses carried out on medical images of diabetic-related causes. The experimental investigations range from very basic image-processing techniques, such as image enhancement, to sophisticated image-segmentation methods. This book is intended to create awareness of diabetes and its related causes, and of the image-processing methods used to detect and forecast them, in a very simple way. This book is useful to researchers, engineers, medical doctors and bioinformatics researchers.

  14. Microcrustáceos y Vibrio cholerae O1 viable no cultivable (VNC: resultados en la Cuenca del Río Salí, Tucumán, Argentina Microcrustaceans and viable but nonculturable (VNC Vibrio cholerae O1: results in the Salí River basin, Tucumán, Argentina

    Directory of Open Access Journals (Sweden)

    Cecilia Locascio de Mitrovich

    2010-01-01

    Full Text Available Vibrio cholerae habitually resides in marine and continental waters. Depending on whether environmental conditions and resources are "favourable" or "unfavourable", viable culturable (VC) or viable but nonculturable (VNC) states are generated, respectively, and the pathogen survives in the latter form. To address the cholera problem in the Salí River basin (Tucumán, Argentina), samplings were carried out during 2003-2005 covering physical, chemical, biological and sanitary aspects. To assess the probable reservoirs of the pathogen, the zooplankton of the Salí River (Canal Norte and Banda Río Salí) and the Lules River was analysed. Copepods showed the greatest taxonomic representation, especially Eucyclops neumani (Pesta, 1927), together with Acanthocyclops robustus (Sars, 1863), Metacyclops sp., Paracyclops chiltoni and Notodiaptomus incompositus (Brian, 1925), in addition to some rotifers and cladocerans such as Lecane sp., Brachionus sp., Moina sp. and Leydigia sp. The frequency of occurrence was low and did not exceed 25%. Canal Norte was the most favourable environment in terms of species richness, abundance and constancy of the community. The physical and chemical variables associated with the zooplankton matched the values known, from our records and previous reports, to favour development of the pathogen. In the summer period, the presence of the VNC form of V. cholerae O1 (immunofluorescence with anti-O1 antibodies) coincided with the development of the zooplankton. VNC forms were observed on appendages or structures of cyclopoid copepods and chydorid cladocerans, probably reflecting an affinity for chitinous substrates.

  15. Fuzzy image processing and applications with Matlab

    CERN Document Server

    Chaira, Tamalika

    2009-01-01

    In contrast to classical image analysis methods that employ "crisp" mathematics, fuzzy set techniques provide an elegant foundation and a set of rich methodologies for diverse image-processing tasks. However, a solid understanding of fuzzy processing requires a firm grasp of essential principles and background knowledge. Fuzzy Image Processing and Applications with MATLAB® presents the integral science and essential mathematics behind this exciting and dynamic branch of image processing, which is becoming increasingly important to applications in areas such as remote sensing, medical imaging,

  16. Processing of medical images

    International Nuclear Information System (INIS)

    Restrepo, A.

    1998-01-01

    Thanks to innovations in the technology for processing medical images, to the development of better and cheaper computers and, additionally, to advances in the systems for communication of medical images, the acquisition, storage and handling of digital images has acquired great importance in all branches of medicine. This article seeks to introduce some fundamental ideas of digital image processing, covering aspects such as representation, storage, enhancement, visualization and understanding.

  17. Spot restoration for GPR image post-processing

    Science.gov (United States)

    Paglieroni, David W; Beer, N. Reginald

    2014-05-20

    A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicate the presence of a subsurface object.

  18. Intelligent medical image processing by simulated annealing

    International Nuclear Information System (INIS)

    Ohyama, Nagaaki

    1992-01-01

    Image processing is being widely used in the medical field and has already become very important, especially when used for image reconstruction purposes. In this paper, it is shown that image processing can be classified into 4 categories: passive, active, intelligent and visual image processing. These 4 classes are first explained through the use of several examples. The results show that passive image processing does not give better results than the others. Intelligent image processing is then addressed, and the simulated annealing method is introduced. Due to the flexibility of simulated annealing, formulated intelligence is shown to be easily introduced into an image reconstruction problem. As a practical example, 3D blood vessel reconstruction from a small number of projections, which is insufficient for conventional methods to give good reconstructions, is proposed, and computer simulation clearly shows the effectiveness of the simulated annealing method. Prior to the conclusion, medical file systems such as IS&C (Image Save and Carry) are pointed out to have potential for formulating knowledge, which is indispensable for intelligent image processing. This paper concludes by summarizing the advantages of simulated annealing. (author)
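
    The simulated annealing method above can be sketched on a toy problem; the energy function, cooling schedule and parameters below are illustrative assumptions, not the paper's reconstruction setup.

```python
# Minimal simulated-annealing sketch: find a binary vector whose sum matches a
# "measurement". Energy, linear cooling and seed are illustrative choices.
import math
import random

def anneal(n, target_sum, steps=2000, t0=2.0, seed=1):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    energy = lambda v: (sum(v) - target_sum) ** 2
    e = energy(x)
    best, best_e = x[:], e
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9           # linear cooling schedule
        i = rng.randrange(n)
        x[i] ^= 1                                 # propose a single-bit flip
        e_new = energy(x)
        # Metropolis rule: accept downhill always, uphill with prob exp(-dE/T)
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / t):
            e = e_new
            if e < best_e:
                best, best_e = x[:], e
        else:
            x[i] ^= 1                             # reject: undo the flip
    return best, best_e

solution, final_energy = anneal(n=20, target_sum=7)
```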

  19. Invitation to medical image processing

    International Nuclear Information System (INIS)

    Kitasaka, Takayuki; Suenaga, Yasuhito; Mori, Kensaku

    2010-01-01

    This medical essay explains the present state of CT image-processing technology for recognition, acquisition and visualization in computer-assisted diagnosis (CAD) and surgery (CAS), together with a future view. Medical image processing has a history running from the discovery of X-rays through its application to diagnostic radiography, its combination with the computer for CT and multi-detector row CT, leading to 3D/4D images for CAD and CAS. CAD is performed based on the recognition of the normal anatomical structure of the human body, detection of possible abnormal lesions and visualization of their numerical figures into images. Actual instances of CAD images are presented here for the chest (lung cancer), abdomen (colorectal cancer) and a future body atlas (models of organs and diseases for imaging), a recent national project: computational anatomy. CAS involves surgical planning technology based on 3D images and navigation of the actual procedure and of endoscopy. As guidance for those beginning in image-processing technology, the national and international communities are described, such as the related academic societies, regularly held congresses, textbooks and workshops, as well as topics in the field like the computational anatomy of an individual patient for CAD and CAS, and its data security and standardization. In the authors' view, preventive medicine will in future be based on imaging technology, e.g., ultimately daily-life CAD of individuals, as exemplified by the present body thermometer and home sphygmomanometer, to monitor one's routine physical condition. (T.T.)

  20. Differential morphology and image processing.

    Science.gov (United States)

    Maragos, P

    1996-01-01

    Image processing via mathematical morphology has traditionally used geometry to intuitively understand morphological signal operators and set or lattice algebra to analyze them in the space domain. We provide a unified view and analytic tools for morphological image processing that is based on ideas from differential calculus and dynamical systems. This includes ideas on using partial differential or difference equations (PDEs) to model distance propagation or nonlinear multiscale processes in images. We briefly review some nonlinear difference equations that implement discrete distance transforms and relate them to numerical solutions of the eikonal equation of optics. We also review some nonlinear PDEs that model the evolution of multiscale morphological operators and use morphological derivatives. Among the new ideas presented, we develop some general 2-D max/min-sum difference equations that model the space dynamics of 2-D morphological systems (including the distance computations) and some nonlinear signal transforms, called slope transforms, that can analyze these systems in a transform domain in ways conceptually similar to the application of Fourier transforms to linear systems. Thus, distance transforms are shown to be bandpass slope filters. We view the analysis of the multiscale morphological PDEs and of the eikonal PDE solved via weighted distance transforms as a unified area in nonlinear image processing, which we call differential morphology, and briefly discuss its potential applications to image processing and computer vision.
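
    The min-sum difference equations that implement discrete distance transforms can be made concrete with the classic two-pass sweep; a minimal sketch using the city-block metric (the metric choice is illustrative):

```python
# Two-pass (forward/backward) distance transform: each pixel's distance is the
# min over causal neighbours of (neighbour distance + 1), a discrete min-sum
# recursion of the kind discussed above.
INF = 10**9

def distance_transform(binary):
    """City-block distance to the nearest 1-pixel in a 2-D 0/1 list."""
    h, w = len(binary), len(binary[0])
    d = [[0 if binary[y][x] else INF for x in range(w)] for y in range(h)]
    # forward pass: propagate distances from the top-left
    for y in range(h):
        for x in range(w):
            if y > 0:
                d[y][x] = min(d[y][x], d[y - 1][x] + 1)
            if x > 0:
                d[y][x] = min(d[y][x], d[y][x - 1] + 1)
    # backward pass: propagate distances from the bottom-right
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y][x] = min(d[y][x], d[y + 1][x] + 1)
            if x < w - 1:
                d[y][x] = min(d[y][x], d[y][x + 1] + 1)
    return d

seed_img = [[0, 0, 0],
            [0, 1, 0],
            [0, 0, 0]]
dist = distance_transform(seed_img)
```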

  1. The Digital Image Processing And Quantitative Analysis In Microscopic Image Characterization

    International Nuclear Information System (INIS)

    Ardisasmita, M. Syamsa

    2000-01-01

    Many electron microscopes, although they produce digital images, are not equipped with a supporting unit to process and analyse image data quantitatively. Generally the analysis of an image has to be made visually and measurements are made manually. The development of mathematical methods for geometric analysis and pattern recognition allows automatic microscopic image analysis by computer. An image-processing program can be used for periodic analysis of image texture and structure by application of the Fourier transform. Because of the development of composite materials, Fourier analysis in the frequency domain has become important for measuring crystallographic orientation. Periodic structure analysis and crystal orientation are the key to understanding many material properties, such as mechanical strength, stress, heat conductivity, resistance, capacitance and other electric and magnetic properties of materials. In this paper the application of digital image processing to microscopic image characterization and analysis is shown.
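
    The Fourier-based analysis of periodic structure described above can be sketched with a plain DFT; the intensity profile and helper names below are illustrative:

```python
# Pick out the dominant spatial frequency of a 1-D intensity profile taken
# across a periodic texture, using a direct DFT (no external libraries).
import cmath
import math

def dft_magnitudes(signal):
    n = len(signal)
    mags = []
    for k in range(n):
        s = sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
        mags.append(abs(s))
    return mags

def dominant_period(signal):
    """Period (in samples) of the strongest non-DC Fourier component."""
    mags = dft_magnitudes(signal)
    # ignore k=0 (the mean) and the mirrored upper half of the spectrum
    half = mags[1:len(signal) // 2 + 1]
    k = 1 + half.index(max(half))
    return len(signal) / k

# A profile repeating every 4 samples, like period-4 lattice fringes
profile = [2 + math.cos(2 * math.pi * t / 4) for t in range(16)]
period = dominant_period(profile)
```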

  2. Simulating SAL formation and aerosol size distribution during SAMUM-I

    KAUST Repository

    Khan, Basit Ali

    2015-04-01

    To understand the formation mechanisms of the Saharan Air Layer (SAL), we combine model simulations and dust observations collected during the first stage of the Saharan Mineral Dust Experiment (SAMUM-I), which sampled dust events that extended from Morocco to Portugal, and investigated the spatial distribution and the microphysical, optical, chemical, and radiative properties of Saharan mineral dust. We employed the Weather Research and Forecasting model coupled with the Chemistry/Aerosol module (WRF-Chem) to reproduce the meteorological environment and the spatial and size distributions of dust. The experimental domain covers northwest Africa, including the southern Sahara, Morocco and part of the Atlantic Ocean, with 5 km horizontal grid spacing and 51 vertical layers. The experiments were run from 20 May to 9 June 2006, covering the period of the most intensive dust outbreaks. Comparisons of model results with available airborne and ground-based observations show that WRF-Chem reproduces observed meteorological fields as well as aerosol spatial distribution across the entire region and along the airplane's tracks. We evaluated several aerosol uplift processes and found that orographic lifting, aerosol transport through the land/sea interface with steep gradients of meteorological characteristics, and interaction of sea breezes with the continental outflow are key mechanisms that form a surface-detached aerosol plume over the ocean. Comparisons of simulated dust size distributions with airplane and ground-based observations are generally good, but suggest that more detailed treatment of microphysics in the model is required to capture the full-scale effect of large aerosol particles.

  3. Selections from 2017: Image Processing with AstroImageJ

    Science.gov (United States)

    Kohler, Susanna

    2017-12-01

    Editor's note: In these last two weeks of 2017, we'll be looking at a few selections that we haven't yet discussed on AAS Nova from among the most-downloaded papers published in AAS journals this year. The usual posting schedule will resume in January. AstroImageJ: Image Processing and Photometric Extraction for Ultra-Precise Astronomical Light Curves. Published January 2017. The AIJ image display: a wide range of astronomy-specific image display options and image analysis tools are available from the menus, quick-access icons, and interactive histogram. [Collins et al. 2017] Main takeaway: AstroImageJ is a new integrated software package presented in a publication led by Karen Collins (Vanderbilt University, Fisk University, and University of Louisville). It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data. Why it's interesting: science doesn't just happen the moment a telescope captures a picture of a distant object. Instead, astronomical images must first be carefully processed to clean up the data, and this data must then be systematically analyzed to learn about the objects within it. AstroImageJ, as a GUI-driven, easily installed, public-domain tool, is uniquely accessible for this processing and analysis, allowing even non-specialist users to explore and visualize astronomical data. Some features of AstroImageJ (as reported by Astrobites): image calibration (generate master flat, dark, and bias frames); image arithmetic (combine images via subtraction, addition, division, multiplication, etc.); stack editing (easily perform operations on a series of images); image stabilization and image alignment features; precise coordinate converters (calculate Heliocentric and Barycentric Julian Dates); WCS coordinates (determine precisely where a telescope was pointed for an image by plate solving using Astrometry.net); macro and plugin support (write your own macros); multi-aperture photometry
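
    The calibration and image-arithmetic features listed above reduce to per-pixel operations; a generic sketch of standard CCD reduction, not AstroImageJ's actual code:

```python
# Standard CCD calibration arithmetic: median-combine frames into a master
# dark, then apply dark subtraction and flat-field division pixel by pixel.
# Frame contents and function names are illustrative.

def median_combine(frames):
    """Combine a stack of frames into a master frame by per-pixel median."""
    h, w = len(frames[0]), len(frames[0][0])
    med = lambda vals: sorted(vals)[len(vals) // 2]
    return [[med([f[y][x] for f in frames]) for x in range(w)] for y in range(h)]

def calibrate(raw, dark, flat):
    """Apply (raw - dark) / flat pixel by pixel."""
    h, w = len(raw), len(raw[0])
    return [[(raw[y][x] - dark[y][x]) / flat[y][x] for x in range(w)]
            for y in range(h)]

darks = [[[10, 10], [10, 10]], [[12, 12], [12, 12]], [[11, 11], [11, 11]]]
master_dark = median_combine(darks)
flat = [[1.0, 2.0], [2.0, 1.0]]
raw = [[111, 211], [211, 111]]
science = calibrate(raw, master_dark, flat)
```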

  4. Nuclear medicine imaging and data processing

    International Nuclear Information System (INIS)

    Bell, P.R.; Dillon, R.S.

    1978-01-01

    The Oak Ridge Imaging System (ORIS) is a software operating system, structured around the Digital Equipment Corporation's PDP-8 minicomputer, which provides a complete range of image manipulation procedures. Through its modular design it remains open-ended for easy expansion to meet future needs. Already included in the system are image access routines for use with the rectilinear scanner or gamma camera (both static and flow studies); display hardware design and corresponding software; archival storage provisions; and, most important, many image-processing techniques. The image-processing capabilities include image defect removal, smoothing, nonlinear bounding, preparation of functional images, and transaxial emission tomography reconstruction from a limited number of views.
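
    The smoothing capability mentioned above can be sketched as a 3×3 box filter; the original PDP-8 routines are not preserved, so this pure-Python version is only illustrative:

```python
# 3x3 box (mean) smoothing with border pixels left unchanged: a minimal
# stand-in for the kind of smoothing routine a system like ORIS provides.

def box_smooth(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]                 # copy; borders stay as-is
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            s = sum(img[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = s / 9                     # mean of the 3x3 neighbourhood
    return out

noisy = [[0, 0, 0],
         [0, 9, 0],
         [0, 0, 0]]
smooth = box_smooth(noisy)
```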

  5. Image exploitation and dissemination prototype of distributed image processing

    International Nuclear Information System (INIS)

    Batool, N.; Huqqani, A.A.; Mahmood, A.

    2003-05-01

    Image-processing application requirements can best be met by using a distributed environment. This report presents a system that draws inferences by utilizing existing LAN resources under a distributed computing environment, using Java and web technology for extensive processing so as to make it truly system independent. Although the environment has been tested using image-processing applications, its design and architecture are truly general and modular, so it can be used for other applications as well that require distributed processing. Images originating from the server are fed to the workers along with the desired operations to be performed on them. The server distributes the tasks among the workers, who carry out the required operations and send back the results. This application has been implemented using the Remote Method Invocation (RMI) feature of Java. Java RMI allows an object running in one Java Virtual Machine (JVM) to invoke methods on another JVM, thus providing remote communication between programs written in the Java programming language. RMI can therefore be used to develop distributed applications [1]. We undertook this project to gain a better understanding of distributed-systems concepts and their use for resource-hungry jobs. The image-processing application was developed under this environment.
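
    The server/worker split described above uses Java RMI; as a rough stdlib analogue in Python, this sketch farms image tiles out to a pool of workers and gathers the results (the tile operation and shapes are illustrative assumptions, not the report's code):

```python
# Server role: distribute tiles to a worker pool, collect results in order.
# A thread pool stands in for the RMI workers of the original design.
from concurrent.futures import ThreadPoolExecutor

def invert_tile(tile):
    """The 'operation' each worker performs on its tile: 8-bit inversion."""
    return [[255 - p for p in row] for row in tile]

def process_distributed(tiles, n_workers=4):
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(invert_tile, tiles))

tiles = [[[0, 128]], [[255, 64]]]
results = process_distributed(tiles)
```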

  6. Digital image processing

    National Research Council Canada - National Science Library

    Gonzalez, Rafael C; Woods, Richard E

    2008-01-01

    Completely self-contained-and heavily illustrated-this introduction to basic concepts and methodologies for digital image processing is written at a level that truly is suitable for seniors and first...

  7. Predictive images of postoperative levator resection outcome using image processing software

    Directory of Open Access Journals (Sweden)

    Mawatari Y

    2016-09-01

    Full Text Available Yuki Mawatari,1 Mikiko Fukushima2 1Igo Ophthalmic Clinic, Kagoshima, 2Department of Ophthalmology, Faculty of Life Science, Kumamoto University, Chuo-ku, Kumamoto, Japan Purpose: This study aims to evaluate the efficacy of processed images to predict postoperative appearance following levator resection. Methods: Analysis involved 109 eyes from 65 patients with blepharoptosis who underwent advancement of levator aponeurosis and Müller's muscle complex (levator resection). Predictive images were prepared from preoperative photographs using the image processing software (Adobe Photoshop®). Images of selected eyes were digitally enlarged in an appropriate manner and shown to patients prior to surgery. Results: Approximately 1 month postoperatively, we surveyed our patients using questionnaires. Fifty-six patients (89.2%) were satisfied with their postoperative appearances, and 55 patients (84.8%) positively responded to the usefulness of processed images to predict postoperative appearance. Conclusion: Showing processed images that predict postoperative appearance to patients prior to blepharoptosis surgery can be useful for those patients concerned with their postoperative appearance. This approach may serve as a useful tool to simulate blepharoptosis surgery. Keywords: levator resection, blepharoptosis, image processing, Adobe Photoshop®

  8. Predictive images of postoperative levator resection outcome using image processing software.

    Science.gov (United States)

    Mawatari, Yuki; Fukushima, Mikiko

    2016-01-01

    This study aims to evaluate the efficacy of processed images to predict postoperative appearance following levator resection. Analysis involved 109 eyes from 65 patients with blepharoptosis who underwent advancement of levator aponeurosis and Müller's muscle complex (levator resection). Predictive images were prepared from preoperative photographs using the image processing software (Adobe Photoshop®). Images of selected eyes were digitally enlarged in an appropriate manner and shown to patients prior to surgery. Approximately 1 month postoperatively, we surveyed our patients using questionnaires. Fifty-six patients (89.2%) were satisfied with their postoperative appearances, and 55 patients (84.8%) positively responded to the usefulness of processed images to predict postoperative appearance. Showing processed images that predict postoperative appearance to patients prior to blepharoptosis surgery can be useful for those patients concerned with their postoperative appearance. This approach may serve as a useful tool to simulate blepharoptosis surgery.

  9. Applied medical image processing a basic course

    CERN Document Server

    Birkfellner, Wolfgang

    2014-01-01

    A widely used, classroom-tested text, Applied Medical Image Processing: A Basic Course delivers an ideal introduction to image processing in medicine, emphasizing the clinical relevance and special requirements of the field. Avoiding excessive mathematical formalisms, the book presents key principles by implementing algorithms from scratch and using simple MATLAB®/Octave scripts with image data and illustrations on an accompanying CD-ROM or companion website. Organized as a complete textbook, it provides an overview of the physics of medical image processing and discusses image formats and data storage, intensity transforms, filtering of images and applications of the Fourier transform, three-dimensional spatial transforms, volume rendering, image registration, and tomographic reconstruction.

  10. Image processing for medical diagnosis using CNN

    International Nuclear Information System (INIS)

    Arena, Paolo; Basile, Adriano; Bucolo, Maide; Fortuna, Luigi

    2003-01-01

    Medical diagnosis is one of the most important areas in which image-processing procedures are usefully applied. Image processing is an important phase for improving the accuracy of both the diagnosis procedure and the surgical operation. One of these fields is tumor/cancer detection using microarray analysis. The research studies in the Cancer Genetics Branch are mainly involved in a range of experiments, including the identification of inherited mutations predisposing family members to malignant melanoma, prostate and breast cancer. In the biomedical field, real-time processing is very important, but image processing is often a quite time-consuming phase. Therefore, techniques able to speed up the elaboration play an important role. From this point of view, a novel approach to image processing has been developed in this work. The new idea is to use Cellular Neural Networks to investigate diagnostic images, such as magnetic resonance imaging, computed tomography, and fluorescent cDNA microarray images.
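
    The Cellular Neural Network approach mentioned above can be sketched with a discrete-time cell update; the templates (A, B, z) below form an illustrative edge-extraction rule, not one taken from the paper, and u uses +1 = black, -1 = white:

```python
# Toy CNN dynamics: x' = -x + A*y + B*u + z with the standard saturation output
# y = 0.5*(|x+1| - |x-1|), integrated by forward Euler. Templates are illustrative.

A_CENTER = 2.0                                   # self-feedback only
B = [[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]]    # input (control) template
Z = -1.0                                         # bias

def saturate(x):
    return max(-1.0, min(1.0, x))

def cnn_edges(u, steps=50, h=0.2):
    hgt, wid = len(u), len(u[0])
    pix = lambda y, x: u[y][x] if 0 <= y < hgt and 0 <= x < wid else -1  # white border
    x_state = [[0.0] * wid for _ in range(hgt)]
    for _ in range(steps):
        nxt = [[0.0] * wid for _ in range(hgt)]
        for y in range(hgt):
            for x in range(wid):
                drive = sum(B[dy + 1][dx + 1] * pix(y + dy, x + dx)
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
                dx_dt = (-x_state[y][x] + A_CENTER * saturate(x_state[y][x])
                         + drive + Z)
                nxt[y][x] = x_state[y][x] + h * dx_dt   # forward-Euler step
        x_state = nxt
    return [[1 if saturate(v) > 0 else -1 for v in row] for row in x_state]

# 3x3 black square on white background: the net keeps the outline, hollows the fill
img = [[-1] * 5 for _ in range(5)]
for yy in range(1, 4):
    for xx in range(1, 4):
        img[yy][xx] = 1
edges = cnn_edges(img)
```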

  11. Corner-point criterion for assessing nonlinear image processing imagers

    Science.gov (United States)

    Landeau, Stéphane; Pigois, Laurent; Foing, Jean-Paul; Deshors, Gilles; Swiathy, Greggory

    2017-10-01

    Range performance modeling of optronics imagers attempts to characterize the ability to resolve details in the image. Today, digital image processing is systematically used in conjunction with the optoelectronic system to correct its defects or to exploit tiny detection signals to increase performance. In order to characterize such processing, which has adaptive and non-linear properties, it becomes necessary to stimulate the imagers with test patterns whose properties are similar to those of actual scene images, in terms of dynamic range, contours, texture and singular points. This paper presents an approach based on a Corner-Point (CP) resolution criterion, derived from the Probability of Correct Resolution (PCR) of binary fractal patterns. The fundamental principle lies in the correct perception of the direction of a CP, i.e. of the one pixel holding the minority value among the majority value of a 2×2-pixel block. The evaluation procedure treats the actual image as its multi-resolution CP transformation, which takes the role of Ground Truth (GT). After spatial registration between the degraded image and the original one, the degradation is statistically measured by comparing the GT with the CP transformation of the degraded image, in terms of localized PCR at the region of interest. The paper defines this CP criterion and presents the evaluation techniques developed, such as measurement of the number of CPs resolved on the target, and the CP transformation and its inverse transform, which make it possible to reconstruct an image of the perceived CPs. This criterion is then compared with the standard Johnson criterion, in the case of a linear blur and noise degradation. The evaluation of an imaging system integrating an image display and visual perception is considered, by proposing an analysis scheme combining two methods: a CP measurement for the highly non-linear part (imaging), with a real-signature test target, and conventional methods for the more linear part (displaying). 
The application to
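
    The 2×2-block minority-pixel principle described above can be sketched directly; the coordinate conventions here are assumptions for illustration, not the authors' exact definition:

```python
# Find corner-points: in each 2x2 block, a CP is present when exactly one
# pixel holds the minority value; its position in the block gives the direction.

def corner_points(binary):
    """Return (y, x) of the minority pixel for each 2x2 block containing a CP."""
    cps = []
    for y in range(0, len(binary) - 1, 2):
        for x in range(0, len(binary[0]) - 1, 2):
            block = [(y + dy, x + dx) for dy in (0, 1) for dx in (0, 1)]
            ones = [(py, px) for (py, px) in block if binary[py][px] == 1]
            if len(ones) == 1:            # single 1 among three 0s
                cps.append(ones[0])
            elif len(ones) == 3:          # single 0 among three 1s
                zeros = [p for p in block if p not in ones]
                cps.append(zeros[0])
    return cps

pattern = [[1, 0, 0, 0],
           [0, 0, 0, 1]]
found = corner_points(pattern)
```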

  12. An alternative option for "resect and discard" strategy, using magnifying narrow-band imaging: a prospective "proof-of-principle" study.

    Science.gov (United States)

    Takeuchi, Yoji; Hanafusa, Masao; Kanzaki, Hiromitsu; Ohta, Takashi; Hanaoka, Noboru; Yamamoto, Sachiko; Higashino, Koji; Tomita, Yasuhiko; Uedo, Noriya; Ishihara, Ryu; Iishi, Hiroyasu

    2015-10-01

    The "resect and discard" strategy is beneficial for cost savings in screening and surveillance colonoscopy, but it carries the risk of discarding lesions with advanced histology or small invasive cancer (small advanced lesions; SALs). The aim of this study was to prove the principle of a new "resect and discard" strategy that takes SALs into consideration, using magnifying narrow-band imaging (M-NBI). Patients undergoing colonoscopy at a tertiary center were enrolled in this prospective trial. For each detected polyp <10 mm, an optical diagnosis (OD) and virtual management ("leave in situ", "discard" or "send for pathology") were independently made using non-magnifying NBI (N-NBI) and M-NBI, and the next surveillance interval was predicted. Histological and optical diagnosis results for all polyps were compared. While the management could be decided for 82% of polyps smaller than 10 mm, 24/31 (77%) SALs, including two small invasive cancers, were not discarded based on OD using M-NBI. The sensitivity [90% confidence interval (CI)] of M-NBI for SALs was 0.77 (0.61-0.89). The risk of discarding SALs using N-NBI was significantly higher than that using M-NBI (53 vs. 23%, p = 0.02). The diagnostic accuracy (95% CI) of M-NBI in distinguishing neoplastic from non-neoplastic lesions [0.88 (0.86-0.90)] was significantly better than that of N-NBI [0.84 (0.82-0.87)] (p = 0.005). The results of our study indicate that our "resect and discard" strategy using M-NBI could work to reduce the risk of discarding SALs, including small invasive cancer (UMIN-CTR, UMIN000003740).
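
    The sensitivity quoted above (24 of 31 SALs correctly kept) is simply true positives over all positives; a tiny helper makes the arithmetic explicit:

```python
# Sensitivity = TP / (TP + FN); here 24 SALs kept out of 31 total.

def sensitivity(true_positives, false_negatives):
    return true_positives / (true_positives + false_negatives)

# 31 SALs, of which 24 were correctly not discarded under M-NBI diagnosis
m_nbi_sensitivity = sensitivity(24, 31 - 24)
```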

  13. Luminescence properties of Sm, Tb(Sal)₃Phen complex in polyvinyl alcohol: an approach for white-light emission

    Energy Technology Data Exchange (ETDEWEB)

    Kaur, Gagandeep; Rai, S B, E-mail: sbrai49@yahoo.co.in [Laser and Spectroscopy Laboratory, Department of Physics Banaras Hindu University, Varanasi, 221005 (India)

    2011-10-26

    Polyvinyl alcohol polymer films doped with Sm,Tb(Sal)₃Phen complexes have been synthesized using the solution casting technique. An enhancement in absorption intensity is observed, revealing the encapsulation of rare-earth ions by the salicylic acid (Sal)/1,10-phenanthroline (Phen) complex. Photoluminescence spectra of the co-doped samples were examined by varying the concentration of Tb³⁺ while keeping the concentration of Sm³⁺ ions fixed, and vice versa. It is found that the polymer samples emit a combination of blue, green and orange-red wavelengths, tunable to white light, when excited with 355 nm radiation. The emission spectra also show a self-quenching effect at higher concentrations of Sm³⁺ ions. An efficient energy transfer was observed from Tb³⁺:⁵D₄ → Sm³⁺:⁴G₉/₂. The reason for the enhancement in the fluorescence intensities of Sm³⁺ in the co-doped polymer sample is intermolecular as well as intramolecular energy transfer.

  14. Processing computed tomography images by using personal computer

    International Nuclear Information System (INIS)

    Seto, Kazuhiko; Fujishiro, Kazuo; Seki, Hirofumi; Yamamoto, Tetsuo.

    1994-01-01

    Processing of CT images was attempted using a popular personal computer. The image-processing program was written in C. The original images, acquired with a CT scanner (TCT-60A, Toshiba), were transferred to the computer on an 8-inch flexible diskette. Many fundamental image-processing operations were implemented, such as displaying the image on the monitor, calculating CT values and drawing profile curves. The results showed that a popular personal computer is capable of processing CT images. The 8-inch flexible diskette still appeared to be a useful medium for transferring image data. (author)
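    The CT value and profile-curve operations mentioned above are easy to reproduce; a minimal numpy sketch on a hypothetical synthetic slice (not the original C program, and all values are illustrative):

```python
import numpy as np

# Hypothetical 2D "CT slice": air background at -1000 HU with a bright
# disc of 60 HU in the middle.
h = w = 64
y, x = np.mgrid[0:h, 0:w]
image = np.full((h, w), -1000.0)
image[(x - 32) ** 2 + (y - 32) ** 2 < 15 ** 2] = 60.0

# The "CT value" at a point is just the stored Hounsfield number.
ct_value = image[32, 32]

# A profile curve is the sequence of pixel values along a chosen line,
# here the horizontal line through the disc centre.
profile = image[32, :]

print(ct_value)                      # 60.0
print(profile.min(), profile.max())  # -1000.0 60.0
```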

  15. TV ADVERTISING AVOIDANCE: THE FACTORS INFLUENCING BEHAVIOURAL AND MECHANICAL AVOIDANCE

    Directory of Open Access Journals (Sweden)

    Ayşen AKYÜZ

    2012-11-01

    Full Text Available TV advertising avoidance presents a major issue for advertisers and marketers. Creating TV ads involves great effort, imaginative strategy formulation and high expenditures, so the expectations of companies and advertisers from the medium are intrinsically high. The current study examines the influence of both the general attitude towards advertising and belief factors (product information, good for the economy, hedonic/pleasure and materialism) on behavioral avoidance (e.g. making phone calls) and mechanical avoidance (e.g. zapping), based on surveys of university students. Structural Equation Modeling is employed to investigate the relationship between the dependent and independent variables. It is believed that the findings will make a significant contribution to the development of the theory and create a basis for further research on this topic. Keywords: Ad Avoidance, Behavioral Avoidance, Mechanical Avoidance, Beliefs, Attitude Toward Advertising.

  16. Process perspective on image quality evaluation

    Science.gov (United States)

    Leisti, Tuomas; Halonen, Raisa; Kokkonen, Anna; Weckman, Hanna; Mettänen, Marja; Lensu, Lasse; Ritala, Risto; Oittinen, Pirkko; Nyman, Göte

    2008-01-01

    The psychological complexity of multivariate image quality evaluation makes it difficult to develop general image quality metrics. Quality evaluation includes several mental processes, and ignoring these processes and using only a few test images can lead to biased results. Using a qualitative/quantitative (Interpretation Based Quality, IBQ) methodology, we examined the process of pair-wise comparison in a setting where the quality of images printed by a laser printer on different paper grades was evaluated. The test image consisted of a picture of a table covered with several objects. Three other images were also used: photographs of a woman, a cityscape and a countryside. In addition to the pair-wise comparisons, observers (N=10) were interviewed about the subjective quality attributes they used in making their quality decisions. An examination of the individual pair-wise comparisons revealed serious inconsistencies in observers' evaluations of the test image content, but not of the other contents. The qualitative analysis showed that this inconsistency was due to the observers' focus of attention. The lack of an easily recognizable context in the test image may have contributed to this inconsistency. To obtain reliable knowledge of the effect of image context or attention on subjective image quality, a qualitative methodology is needed.

  17. Digital processing of radiographic images

    Science.gov (United States)

    Bond, A. D.; Ramapriyan, H. K.

    1973-01-01

    Techniques and the accompanying software documentation are presented for the digital enhancement of radiographs. Both image handling and image processing operations are considered. The image handling operations dealt with are: (1) conversion of the data format from packed to unpacked and vice versa; (2) automatic extraction of image data arrays; (3) transposition and 90 deg rotations of large data arrays; (4) translation of data arrays for registration; and (5) reduction of the dimensions of data arrays by integral factors. Both the frequency and the spatial domain approaches are presented for the design and implementation of the image processing operations. It is shown that spatial domain recursive implementation of filters is much faster than nonrecursive implementations using fast Fourier transforms (FFT) for the cases of interest in this work. The recursive implementation of a class of matched filters for enhancing the image signal-to-noise ratio is described. Test patterns are used to illustrate the filtering operations. The application of the techniques to radiographic images of metallic structures is demonstrated through several examples.
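    The speed argument for recursive filters can be illustrated with a first-order low-pass: the recursive (IIR) form costs a constant number of operations per sample regardless of the effective kernel length, whereas the nonrecursive equivalent convolves with the full impulse response, here via FFT. A hedged one-dimensional sketch on synthetic data (not the report's matched filters):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.normal(size=256)

# Recursive (IIR) first-order low-pass: y[n] = a*x[n] + (1-a)*y[n-1].
# Cost is O(1) per sample no matter how long the effective kernel is.
a = 0.2
rec = np.empty_like(signal)
rec[0] = a * signal[0]          # zero initial condition
for n in range(1, len(signal)):
    rec[n] = a * signal[n] + (1 - a) * rec[n - 1]

# Nonrecursive equivalent: convolve with the impulse response
# h[k] = a*(1-a)**k.  The FFT product gives a circular convolution,
# which matches the recursion away from the wrap-around start-up region.
k = np.arange(len(signal))
h = a * (1 - a) ** k
fft_out = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(h)))

# Past the start-up transient the two implementations agree closely.
print(np.max(np.abs(rec[32:] - fft_out[32:])))   # small
```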

  18. FITS Liberator: Image processing software

    Science.gov (United States)

    Lindberg Christensen, Lars; Nielsen, Lars Holm; Nielsen, Kaspar K.; Johansen, Teis; Hurt, Robert; de Martin, David

    2012-06-01

    The ESA/ESO/NASA FITS Liberator makes it possible to process and edit astronomical science data in the FITS format to produce stunning images of the universe. Formerly a plugin for Adobe Photoshop, the current version of FITS Liberator is a stand-alone application and no longer requires Photoshop. This image processing software makes it possible to create color images using raw observations from a range of telescopes; the FITS Liberator continues to support the FITS and PDS formats, preferred by astronomers and planetary scientists respectively, which enables data to be processed from a wide range of telescopes and planetary probes, including ESO's Very Large Telescope, the NASA/ESA Hubble Space Telescope, NASA's Spitzer Space Telescope, ESA's XMM-Newton Telescope and Cassini-Huygens or Mars Reconnaissance Orbiter.

  19. Image processing. Volumetric analysis with a digital image processing system. [GAMMA]. Bildverarbeitung. Volumetrie mittels eines digitalen Bildverarbeitungssystems

    Energy Technology Data Exchange (ETDEWEB)

    Kindler, M; Radtke, F; Demel, G

    1986-01-01

    The book is arranged in seven sections, describing various applications of volumetric analysis using image processing systems and various methods for the diagnostic evaluation of images obtained by gamma scintigraphy, cardiac catheterisation and echocardiography. A dynamic ventricular phantom, developed for checking and calibration so that patients can be examined safely, is explained; the phantom allows extensive simulation of the volumetric and hemodynamic conditions of the human heart. One section discusses program development for image processing, referring to a number of different computer systems. The equipment described includes a small, inexpensive PC system, a standardized nuclear medicine diagnostic system, and a computer system especially suited to image processing.

  20. Organization of bubble chamber image processing

    International Nuclear Information System (INIS)

    Gritsaenko, I.A.; Petrovykh, L.P.; Petrovykh, Yu.L.; Fenyuk, A.B.

    1985-01-01

    A programme for bubble chamber image processing is described. The programme is written in FORTRAN, developed for the DEC-10 computer, and designed for the operation of the semi-automatic processing-measurement projects PUOS-2 and PUOS-4. Formalization of the image processing permits its use in different physical experiments.

  1. VIP: Vortex Image Processing Package for High-contrast Direct Imaging

    Science.gov (United States)

    Gomez Gonzalez, Carlos Alberto; Wertz, Olivier; Absil, Olivier; Christiaens, Valentin; Defrère, Denis; Mawet, Dimitri; Milli, Julien; Absil, Pierre-Antoine; Van Droogenbroeck, Marc; Cantalloube, Faustine; Hinz, Philip M.; Skemer, Andrew J.; Karlsson, Mikael; Surdej, Jean

    2017-07-01

    We present the Vortex Image Processing (VIP) library, a python package dedicated to astronomical high-contrast imaging. Our package relies on the extensive python stack of scientific libraries and aims to provide a flexible framework for high-contrast data and image processing. In this paper, we describe the capabilities of VIP related to processing image sequences acquired using the angular differential imaging (ADI) observing technique. VIP implements functionalities for building high-contrast data processing pipelines, encompassing pre- and post-processing algorithms, potential source position and flux estimation, and sensitivity curve generation. Among the reference point-spread function subtraction techniques for ADI post-processing, VIP includes several flavors of principal component analysis (PCA) based algorithms, such as annular PCA and incremental PCA algorithms capable of processing big datacubes (of several gigabytes) on a computer with limited memory. Also, we present a novel ADI algorithm based on non-negative matrix factorization, which comes from the same family of low-rank matrix approximations as PCA and provides fairly similar results. We showcase the ADI capabilities of the VIP library using a deep sequence on HR 8799 taken with the LBTI/LMIRCam and its recently commissioned L-band vortex coronagraph. Using VIP, we investigated the presence of additional companions around HR 8799 and did not find any significant additional point source beyond the four known planets. VIP is available at http://github.com/vortex-exoplanet/VIP and is accompanied with Jupyter notebook tutorials illustrating the main functionalities of the library.
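    The PCA-based speckle-subtraction idea can be sketched in a few lines of numpy on synthetic data (all values hypothetical; this is not VIP's implementation): frames are stacked as rows, a low-rank model built from the leading principal components captures the quasi-static stellar pattern, and subtracting it leaves the moving companion signal:

```python
import numpy as np

rng = np.random.default_rng(1)
nframes, npix = 40, 1024

# Synthetic stand-in for an ADI cube: a quasi-static speckle pattern whose
# brightness fluctuates frame to frame, Gaussian noise, and a faint
# companion that drifts by one pixel per frame (all values hypothetical).
speckles = rng.normal(size=npix)
amps = 50.0 + 5.0 * rng.normal(size=nframes)
cube = amps[:, None] * speckles[None, :] + rng.normal(size=(nframes, npix))
for i in range(nframes):
    cube[i, 10 + i] += 5.0

# PCA speckle subtraction: project each mean-subtracted frame onto the
# first k principal components (here via SVD) and subtract that model.
X = cube - cube.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 1
model = (U[:, :k] * s[:k]) @ Vt[:k]
residuals = X - model

# The speckle direction is strongly suppressed, while the companion,
# which is not coherent across frames, largely survives.
print(np.abs(X @ speckles).max(), np.abs(residuals @ speckles).max())
print(residuals.max())
```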

  2. An Applied Image Processing for Radiographic Testing

    International Nuclear Information System (INIS)

    Ratchason, Surasak; Tuammee, Sopida; Srisroal Anusara

    2005-10-01

    Applied image processing for radiographic testing (RT) is desirable because it reduces the time and cost of an inspection process that otherwise requires experienced workers, and it improves inspection quality. This paper presents a preliminary study of image processing for RT films, namely welding films, and proposes an approach to determine defects in weld images. The BMP image files are opened and processed by a computer program written in Borland C++. The software has five main methods: histogram, contrast enhancement, edge detection, image segmentation and image restoration, each with several selectable sub-methods. The results showed that the software can detect defects effectively and that different methods suit different radiographic images. Furthermore, images improve further when two methods are combined.
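    Edge detection, one of the five methods listed, can be sketched with Sobel gradients in numpy (synthetic weld image with hypothetical values; the original software was written in Borland C++):

```python
import numpy as np

# Toy "weld radiograph": a dark plate with a bright vertical seam.
img = np.full((16, 16), 30.0)
img[:, 7:9] = 200.0

# Sobel kernels approximate the horizontal and vertical intensity gradient.
kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
ky = kx.T

def conv2(a, k):
    # Plain 'valid' 2-D convolution (kernel flipped), enough for a demo.
    out = np.zeros((a.shape[0] - 2, a.shape[1] - 2))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(a[i:i + 3, j:j + 3] * k[::-1, ::-1])
    return out

magnitude = np.hypot(conv2(img, kx), conv2(img, ky))
edges = magnitude > 0.5 * magnitude.max()

# Edge pixels cluster on the two sides of the seam.
print(sorted(set(np.nonzero(edges)[1])))   # [5, 6, 7, 8]
```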

  3. Quantitative image processing in fluid mechanics

    Science.gov (United States)

    Hesselink, Lambertus; Helman, James; Ning, Paul

    1992-01-01

    The current status of digital image processing in fluid flow research is reviewed. In particular, attention is given to a comprehensive approach to the extraction of quantitative data from multivariate databases and examples of recent developments. The discussion covers numerical simulations and experiments, data processing, generation and dissemination of knowledge, traditional image processing, hybrid processing, fluid flow vector field topology, and isosurface analysis using Marching Cubes.

  4. Digital image processing

    National Research Council Canada - National Science Library

    Gonzalez, Rafael C; Woods, Richard E

    2008-01-01

    ...-year graduate students in almost any technical discipline. The leading textbook in its field for more than twenty years, it continues its cutting-edge focus on contemporary developments in all mainstream areas of image processing-e.g...

  5. Hyperspectral image processing

    CERN Document Server

    Wang, Liguo

    2016-01-01

    Based on the authors’ research, this book introduces the main processing techniques in hyperspectral imaging. In this context, SVM-based classification, distance comparison-based endmember extraction, SVM-based spectral unmixing, spatial attraction model-based sub-pixel mapping, and MAP/POCS-based super-resolution reconstruction are discussed in depth. Readers will gain a comprehensive understanding of these cutting-edge hyperspectral imaging techniques. Researchers and graduate students in fields such as remote sensing, surveying and mapping, geosciences and information systems will benefit from this valuable resource.

  6. A concise introduction to image processing using C++

    CERN Document Server

    Wang, Meiqing

    2008-01-01

    Image recognition has become an increasingly dynamic field with new and emerging civil and military applications in security, exploration, and robotics. Written by experts in fractal-based image and video compression, A Concise Introduction to Image Processing using C++ strengthens your knowledge of fundamental principles in image acquisition, conservation, processing, and manipulation, allowing you to easily apply these techniques in real-world problems. The book presents state-of-the-art image processing methodology, including current industrial practices for image compression and image de-noising.

  7. On some applications of diffusion processes for image processing

    International Nuclear Information System (INIS)

    Morfu, S.

    2009-01-01

    We propose a new algorithm inspired by the properties of diffusion processes for image filtering. We show that purely nonlinear diffusion processes ruled by the Fisher equation allow contrast enhancement and noise filtering, but blur the image. By contrast, anisotropic diffusion, described by the Perona and Malik algorithm, allows noise filtering while preserving the edges. We show that combining the properties of anisotropic diffusion with those of nonlinear diffusion provides a better processing tool which enables noise filtering, contrast enhancement and edge preservation.
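    A minimal numpy sketch of the classic Perona-Malik scheme referred to above (parameter values are illustrative, not the paper's): the diffusivity g decays with the local gradient, so noise in flat regions is smoothed while large intensity steps are preserved:

```python
import numpy as np

def g(d, kappa):
    # Perona-Malik diffusivity: ~1 in flat regions, ~0 across strong edges.
    return np.exp(-(d / kappa) ** 2)

def perona_malik(img, niter=20, kappa=30.0, dt=0.2):
    u = img.astype(float).copy()
    for _ in range(niter):
        # Differences to the four neighbours, with zero flux at the borders.
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        dn[0, :] = ds[-1, :] = de[:, -1] = dw[:, 0] = 0.0
        u += dt * (g(dn, kappa) * dn + g(ds, kappa) * ds +
                   g(de, kappa) * de + g(dw, kappa) * dw)
    return u

# Noisy step edge: the noise is smoothed away while the step survives.
rng = np.random.default_rng(0)
step = np.where(np.arange(64) < 32, 0.0, 100.0)
noisy = np.tile(step, (64, 1)) + 10.0 * rng.normal(size=(64, 64))
out = perona_malik(noisy)
print(noisy[:, :28].std(), out[:, :28].std())   # noise shrinks
print(out[:, 36:].mean() - out[:, :28].mean())  # step (~100) preserved
```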

  8. Digital image processing mathematical and computational methods

    CERN Document Server

    Blackledge, J M

    2005-01-01

    This authoritative text (the second part of a complete MSc course) provides mathematical methods required to describe images, image formation and different imaging systems, coupled with the principle techniques used for processing digital images. It is based on a course for postgraduates reading physics, electronic engineering, telecommunications engineering, information technology and computer science. This book relates the methods of processing and interpreting digital images to the 'physics' of imaging systems. Case studies reinforce the methods discussed, with examples of current research

  9. Trends in medical image processing

    International Nuclear Information System (INIS)

    Robilotta, C.C.

    1987-01-01

    The role of medical image processing is analysed, covering recent developments, the physical agents involved, and the main categories of processing, such as correction of distortions in image formation, increase of detectability, quantification of parameters, etc. (C.G.C.) [pt

  10. Image processing and analysis software development

    International Nuclear Information System (INIS)

    Shahnaz, R.

    1999-01-01

    The work presented in this project is aimed at developing a software package, 'IMAGE GALLERY', to investigate various image processing and analysis techniques. The work was divided into two parts, namely image processing techniques and pattern recognition, the latter comprising character and face recognition. Various image enhancement techniques, including negative imaging, contrast stretching, dynamic range compression, neon, diffuse, emboss etc., have been studied. Segmentation techniques including point detection, line detection and edge detection have been studied, and some of the smoothing and sharpening filters have been investigated. All these imaging techniques have been implemented in a window-based computer program written in Visual Basic. Neural network techniques based on the perceptron model have been applied for face and character recognition. (author)
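    Two of the enhancement techniques listed, negative imaging and contrast stretching, reduce to simple point operations; a numpy sketch with hypothetical 8-bit data (the original software was written in Visual Basic):

```python
import numpy as np

def negative(img):
    # Negative of an 8-bit image: s = 255 - r.
    return 255 - img

def contrast_stretch(img, lo=None, hi=None):
    # Linearly map [lo, hi] onto the full 8-bit range [0, 255].
    lo = img.min() if lo is None else lo
    hi = img.max() if hi is None else hi
    out = (img.astype(float) - lo) * 255.0 / (hi - lo)
    return np.clip(out, 0, 255).astype(np.uint8)

# A low-contrast ramp occupying only grey levels [100, 150].
img = np.linspace(100, 150, 256).astype(np.uint8)
stretched = contrast_stretch(img)
print(img.min(), img.max())              # 100 150
print(stretched.min(), stretched.max())  # 0 255
print(negative(img).max())               # 155
```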

  11. Application of Java technology in radiation image processing

    International Nuclear Information System (INIS)

    Cheng Weifeng; Li Zheng; Chen Zhiqiang; Zhang Li; Gao Wenhuan

    2002-01-01

    The acquisition and processing of radiation images plays an important role in modern applications of civil nuclear technology. The author analyzes the rationale of Java image processing technology, which includes Java AWT, Java 2D and JAI. To demonstrate the applicability of Java technology in the field of image processing, examples of the application of JAI technology to the processing of radiation images of large containers are given.

  12. Halftoning processing on a JPEG-compressed image

    Science.gov (United States)

    Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent

    2003-12-01

    Digital image processing algorithms are usually designed for the raw format, that is, an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; the result of the processing is then re-compressed for further transfer or storage. This change of data representation is resource-consuming in terms of computation, time and memory usage. In the wide-format printing industry, this becomes an important issue: e.g. a 1 m2 input color image scanned at 600 dpi exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation on the compressed format. This paper presents an innovative application of the halftoning operation by screening, applied to a JPEG-compressed image. This compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain. The algorithm is illustrated by examples for different halftone masks. A pre-sharpening operation, applied to a JPEG-compressed low-quality image, is also described; it de-noises the image and enhances its contours.
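    Screening itself is a simple threshold operation against a tiled mask; a spatial-domain numpy sketch with a Bayer mask (the paper's actual contribution, performing the threshold in the DCT domain on JPEG data, is not reproduced here):

```python
import numpy as np

# 4x4 Bayer threshold matrix, scaled to grey levels 0..255.
bayer4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]])
mask = (bayer4 + 0.5) * 256.0 / 16.0

# Tile the mask over the image and threshold: a dot is printed (1)
# wherever the grey value exceeds the local screen threshold.
grey = np.full((16, 16), 128.0)          # flat 50% grey patch
tiled = np.tile(mask, (4, 4))
halftone = (grey > tiled).astype(np.uint8)

# A 50% grey turns on exactly half of the dots.
print(halftone.mean())   # 0.5
```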

  13. Deep architecture neural network-based real-time image processing for image-guided radiotherapy.

    Science.gov (United States)

    Mori, Shinichiro

    2017-08-01

    To develop real-time image processing for image-guided radiotherapy, we evaluated several neural network models for use with different imaging modalities, including X-ray fluoroscopic image denoising. Setup images of prostate cancer patients were acquired with two oblique X-ray fluoroscopic units. Two types of residual network were designed: a convolutional autoencoder (rCAE) and a convolutional neural network (rCNN). We varied the convolutional kernel size and the number of convolutional layers for both networks, and the number of pooling and upsampling layers for the rCAE. Ground-truth images were generated with the contrast-limited adaptive histogram equalization (CLAHE) method of image processing. Network models were trained to produce, from an unprocessed input image, an output image close in quality to the ground-truth image. For the image denoising evaluation, noisy input images were used for training. More than 6 convolutional layers with convolutional kernels >5×5 improved image quality, but did not allow real-time imaging. After applying a pair of pooling and upsampling layers to both networks, rCAEs with >3 convolutions and rCNNs with >12 convolutions plus a pair of pooling and upsampling layers achieved real-time processing at 30 frames per second (fps) with acceptable image quality. The suggested networks achieved real-time image processing for contrast enhancement and image denoising on a conventional modern personal computer. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
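    CLAHE itself is not reimplemented here, but its non-adaptive precursor, global histogram equalization, illustrates the kind of contrast-enhancement mapping used to produce the ground truth; a numpy sketch on hypothetical low-contrast data:

```python
import numpy as np

def hist_equalize(img):
    # Global histogram equalization for an 8-bit image: map each grey
    # level through the normalized cumulative histogram.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    lut = np.round(255.0 * cdf).astype(np.uint8)
    return lut[img]

rng = np.random.default_rng(0)
low_contrast = rng.integers(100, 140, size=(64, 64)).astype(np.uint8)
eq = hist_equalize(low_contrast)
print(int(low_contrast.max()) - int(low_contrast.min()))  # at most 39
print(int(eq.max()) - int(eq.min()))                      # close to 255
```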

  14. Image processing in 60Co container inspection system

    International Nuclear Information System (INIS)

    Wu Zhifang; Zhou Liye; Wang Liqiang; Liu Ximing

    1999-01-01

    The authors analyze the features of 60Co container inspection images and describe the design of several special processing methods for container images, as well as some standard processing methods for two-dimensional digital images, including gray enhancement, pseudo-enhancement, spatial filtering, edge enhancement, geometric processing, etc. The paper explains how to carry out the above-mentioned processing under Windows 95 or Windows NT, and discusses ways to improve image-processing speed on a microcomputer; good results were obtained.
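    Edge enhancement of the kind listed above is commonly done by unsharp masking: subtract a blurred copy and add the high-pass residue back. A numpy sketch with a synthetic soft edge (not the authors' implementation; all values hypothetical):

```python
import numpy as np

def box_blur(img, r=1):
    # Simple (2r+1)x(2r+1) mean filter with edge replication.
    pad = np.pad(img, r, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += pad[r + dy: r + dy + img.shape[0],
                       r + dx: r + dx + img.shape[1]]
    return out / (2 * r + 1) ** 2

def unsharp(img, amount=1.0):
    # Edge enhancement: add back the high-pass residue (image minus blur).
    img = img.astype(float)
    return img + amount * (img - box_blur(img))

# A soft step edge becomes steeper after unsharp masking.
edge = np.tile(np.array([10.0] * 8 + [50.0, 150.0] + [190.0] * 8), (8, 1))
sharp = unsharp(edge)
print(edge[0, 7:11])
print(sharp[0, 7:11])   # overshoot appears on both sides of the step
```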

  15. Crack Detection by Digital Image Processing

    DEFF Research Database (Denmark)

    Lyngbye, Janus; Brincker, Rune

    It is described how digital image processing is used for measuring the length of fatigue cracks. The system is installed in a personal computer equipped with image processing hardware and performs automated measurements on plane metal specimens used in fatigue testing. Normally one cannot achieve a resolution better than that of the image processing equipment. To overcome this problem an extrapolation technique is used, resulting in a better resolution. The system was tested on a specimen subjected to different loads. The error σa was less than 0.031 mm, which is of the same order as human measuring...
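    The idea of locating a feature more finely than the pixel pitch can be illustrated by linearly interpolating an intensity profile to a subpixel threshold crossing (a hedged sketch with hypothetical values, not the system's actual extrapolation technique):

```python
import numpy as np

# Intensity profile along the expected crack line: bright specimen
# surface dropping to dark inside the crack (hypothetical values).
profile = np.array([200.0, 198.0, 196.0, 150.0, 60.0, 20.0, 18.0])
threshold = 110.0

# Find the first sample below the threshold, then linearly interpolate
# between it and the previous sample to place the crossing at subpixel
# precision -- finer than the pixel pitch of the imaging hardware.
i = int(np.argmax(profile < threshold))   # first index below threshold
frac = (profile[i - 1] - threshold) / (profile[i - 1] - profile[i])
tip = (i - 1) + frac

print(tip)   # ≈ 3.444, i.e. between samples 3 and 4
```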

  16. Eliminating "Hotspots" in Digital Image Processing

    Science.gov (United States)

    Salomon, P. M.

    1984-01-01

    Signals from defective picture elements are rejected. An image processing program for use with a charge-coupled device (CCD) or other mosaic imager is augmented with an algorithm that compensates for a common type of electronic defect. The algorithm prevents false interpretation of "hotspots". It is used for robotics, image enhancement, image analysis and digital television.
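    A common way to compensate for such defects is to compare each pixel with the median of its neighbours; a numpy sketch with a hypothetical flat frame and two stuck-high pixels (not the original algorithm):

```python
import numpy as np

def remove_hotspots(img, thresh=50.0):
    # Replace any pixel that exceeds the median of its 8 neighbours by
    # more than `thresh` with that median (simple defective-pixel rejection).
    pad = np.pad(img.astype(float), 1, mode='edge')
    h, w = img.shape
    neighbours = np.stack([pad[1 + dy: 1 + dy + h, 1 + dx: 1 + dx + w]
                           for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                           if (dy, dx) != (0, 0)])
    med = np.median(neighbours, axis=0)
    out = img.astype(float).copy()
    hot = out - med > thresh
    out[hot] = med[hot]
    return out

# Flat frame with two stuck-high pixels (hypothetical sensor defects).
frame = np.full((8, 8), 100.0)
frame[2, 3] = frame[5, 6] = 4095.0
clean = remove_hotspots(frame)
print(frame.max(), clean.max())   # 4095.0 100.0
```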

  17. How Digital Image Processing Became Really Easy

    Science.gov (United States)

    Cannon, Michael

    1988-02-01

    In the early and mid-1970s, digital image processing was the subject of intense university and corporate research. The research lay along two lines: (1) developing mathematical techniques for improving the appearance of or analyzing the contents of images represented in digital form, and (2) creating cost-effective hardware to carry out these techniques. The research has been very effective, as evidenced by the continued decline of image processing as a research topic, and the rapid increase of commercial companies to market digital image processing software and hardware.

  18. A report on digital image processing and analysis

    International Nuclear Information System (INIS)

    Singh, B.; Alex, J.; Haridasan, G.

    1989-01-01

    This report presents developments in software connected with digital image processing and analysis at the Centre. In image processing, one either alters grey-level values so as to enhance features in the image or resorts to transform-domain operations for restoration or filtering. Typical transform-domain operations like Karhunen-Loeve transforms are statistical in nature and are used for good registration of images or template matching. Image analysis procedures segment grey-level images into images contained within selectable windows, for the purpose of estimating geometrical features of the image, like area, perimeter, projections etc. In short, in image processing both the input and the output are images, whereas in image analysis the input is an image and the output is a set of numbers and graphs. (author). 19 refs
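    The area and perimeter estimation described above can be sketched by thresholding a window and counting object pixels and boundary pixels (synthetic image; a 4-neighbourhood boundary definition is assumed for illustration):

```python
import numpy as np

# Segment a grey-level window by thresholding, then estimate geometrical
# features: area = number of object pixels, perimeter = object pixels
# that touch the background in the 4-neighbourhood.
img = np.zeros((12, 12))
img[3:9, 2:10] = 180.0          # a 6x8 bright rectangle
binary = img > 128

pad = np.pad(binary, 1)         # False border around the mask
touches_bg = (~pad[:-2, 1:-1] | ~pad[2:, 1:-1] |
              ~pad[1:-1, :-2] | ~pad[1:-1, 2:])
perimeter_pixels = binary & touches_bg

print(int(binary.sum()))            # area: 48
print(int(perimeter_pixels.sum()))  # boundary pixels: 24
```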

  19. Will the 'evolution' from biology to life sciences prevent 'extinction' of the subject field?

    Directory of Open Access Journals (Sweden)

    Johanna G. Ferreira

    2012-03-01

    Full Text Available In this article the change of the school subject from 'biology' to 'life sciences' is discussed, together with the accompanying and expected implications of the change. The question is asked whether the intended change is in fact significant and whether it will make any difference to the declining interest in the life sciences at tertiary level. The changes in the new curricula increase the relevance of the subject content for learners and for the community, but certain issues receive no attention. Suggestions are made to amend the curriculum and to adapt teaching methods in order to establish the necessary skills in learners.

  20. Processed images in human perception: A case study in ultrasound breast imaging

    Energy Technology Data Exchange (ETDEWEB)

    Yap, Moi Hoon [Department of Computer Science, Loughborough University, FH09, Ergonomics and Safety Research Institute, Holywell Park (United Kingdom)], E-mail: M.H.Yap@lboro.ac.uk; Edirisinghe, Eran [Department of Computer Science, Loughborough University, FJ.05, Garendon Wing, Holywell Park, Loughborough LE11 3TU (United Kingdom); Bez, Helmut [Department of Computer Science, Loughborough University, Room N.2.26, Haslegrave Building, Loughborough University, Loughborough LE11 3TU (United Kingdom)

    2010-03-15

    Two main research efforts in the early detection of breast cancer are the development of software tools to assist radiologists in identifying abnormalities and the development of training tools to enhance their skills. Medical image analysis systems, widely known as Computer-Aided Diagnosis (CADx) systems, play an important role in this respect. It is often important to determine whether there is a benefit in including computer-processed images in the development of such software tools. In this paper, we investigate the effects of computer-processed images in improving human performance in ultrasound breast cancer detection (a perceptual task) and classification (a cognitive task). A survey was conducted on a group of expert radiologists and a group of non-radiologists. In our experiments, random test images from a large database of ultrasound images were presented to subjects. In order to gather appropriate formal feedback, questionnaires were prepared to comment on random selections of original images only, and on image pairs consisting of original images displayed alongside computer-processed images. We critically compare and contrast the performance of the two groups on the perceptual and cognitive tasks. From a Receiver Operating Characteristic (ROC) analysis, we conclude that the provision of computer-processed images alongside the original ultrasound images significantly improves the perceptual performance of non-radiologists, while only marginal improvements are seen in the perceptual and cognitive tasks of the group of expert radiologists.
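    The ROC analysis mentioned reduces, for the area under the curve, to a rank statistic: the AUC equals the probability that a randomly chosen positive case outscores a randomly chosen negative one. A numpy sketch with hypothetical reader ratings (not the study's data):

```python
import numpy as np

def auc(scores_pos, scores_neg):
    # Area under the ROC curve via the Mann-Whitney statistic: the
    # fraction of (positive, negative) pairs in which the positive case
    # is scored higher, counting ties as one half.
    pos = np.asarray(scores_pos, float)[:, None]
    neg = np.asarray(scores_neg, float)[None, :]
    return float(np.mean((pos > neg) + 0.5 * (pos == neg)))

# Hypothetical confidence ratings (1-5) for malignant vs benign cases.
malignant = [5, 4, 4, 3, 5]
benign = [1, 2, 3, 2, 1]
print(auc(malignant, benign))   # 0.98
```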

  2. Computational Intelligence in Image Processing

    CERN Document Server

    Siarry, Patrick

    2013-01-01

    Computational intelligence based techniques have firmly established themselves as viable, alternate, mathematical tools for more than a decade. They have been extensively employed in many systems and application domains, among these signal processing, automatic control, industrial and consumer electronics, robotics, finance, manufacturing systems, electric power systems, and power electronics. Image processing is also an extremely potent area which has attracted the attention of many researchers who are interested in the development of new computational intelligence-based techniques and their suitable applications, in both research problems and in real-world problems. Part I of the book discusses several image preprocessing algorithms; Part II broadly covers image compression algorithms; Part III demonstrates how computational intelligence-based techniques can be effectively utilized for image analysis purposes; and Part IV shows how pattern recognition, classification and clustering-based techniques can ...

  3. PCB Fault Detection Using Image Processing

    Science.gov (United States)

    Nayak, Jithendra P. R.; Anitha, K.; Parameshachari, B. D., Dr.; Banu, Reshma, Dr.; Rashmi, P.

    2017-08-01

    The importance of the printed circuit board (PCB) inspection process has been magnified by the requirements of the modern manufacturing environment, where delivery of 100% defect-free PCBs is the expectation. To meet such expectations, identifying the various defects and their types becomes the first step. In this PCB inspection system the inspection algorithm mainly focuses on defect detection using natural images. Many practical issues, such as tilt of the images, bad lighting conditions and the height at which images are taken, have to be considered to ensure image quality good enough for defect detection. PCB fabrication is a multidisciplinary process, and etching is the most critical part of the manufacturing process. The main objective of the etching process is to remove the exposed unwanted copper other than the required circuit pattern. In order to minimize scrap caused by wrongly etched PCB panels, inspection has to be done at an early stage. However, inspections are commonly done after the etching process, where any defective PCB found is no longer useful and is simply thrown away. Since the etching process accounts for a substantial share of the cost of the entire PCB fabrication, it is uneconomical to simply discard defective PCBs. In this paper a method to identify defects in natural PCB images, and the associated practical issues, are addressed using software tools; some of the major types of single-layer PCB defects are pattern cut, pin hole, pattern short, nick, etc. The defects should therefore be identified before the etching process so that the PCB can be reprocessed. The present approach is expected to improve the efficiency of the system in detecting defects even in low-quality images.

  4. An IBM PC/AT-Based Image Acquisition And Processing System For Quantitative Image Analysis

    Science.gov (United States)

    Kim, Yongmin; Alexander, Thomas

    1986-06-01

    In recent years, a large number of applications have been developed for image processing systems in the area of biological imaging. We have already finished the development of a dedicated microcomputer-based image processing and analysis system for quantitative microscopy. The system's primary function has been to facilitate and ultimately automate quantitative image analysis tasks such as the measurement of cellular DNA contents. We have recognized from this development experience, and interaction with system users, biologists and technicians, that the increasingly widespread use of image processing systems, and the development and application of new techniques for utilizing the capabilities of such systems, would generate a need for some kind of inexpensive general purpose image acquisition and processing system specially tailored for the needs of the medical community. We are currently engaged in the development and testing of hardware and software for a fairly high-performance image processing computer system based on a popular personal computer. In this paper, we describe the design and development of this system. Biological image processing computer systems have now reached a level of hardware and software refinement where they could become convenient image analysis tools for biologists. The development of a general purpose image processing system for quantitative image analysis that is inexpensive, flexible, and easy-to-use represents a significant step towards making the microscopic digital image processing techniques more widely applicable not only in a research environment as a biologist's workstation, but also in clinical environments as a diagnostic tool.

  5. Cellular automata in image processing and geometry

    CERN Document Server

    Adamatzky, Andrew; Sun, Xianfang

    2014-01-01

    The book presents findings, views and ideas on what exact problems of image processing, pattern recognition and generation can be efficiently solved by cellular automata architectures. This volume provides a convenient collection in this area, in which publications are otherwise widely scattered throughout the literature. The topics covered include image compression and resizing; skeletonization, erosion and dilation; convex hull computation, edge detection and segmentation; forgery detection and content based retrieval; and pattern generation. The book advances the theory of image processing, pattern recognition and generation as well as the design of efficient algorithms and hardware for parallel image processing and analysis. It is aimed at computer scientists, software programmers, electronic engineers, mathematicians and physicists, and at everyone who studies or develops cellular automaton algorithms and tools for image processing and analysis, or develops novel architectures and implementations of mass...

  6. Image processing unit with fall-back.

    NARCIS (Netherlands)

    2011-01-01

    An image processing unit ( 100,200,300 ) for computing a sequence of output images on basis of a sequence of input images, comprises: a motion estimation unit ( 102 ) for computing a motion vector field on basis of the input images; a quality measurement unit ( 104 ) for computing a value of a

  7. OPTIMIZATION OF THE OPERATIONS LOGISTICS OF A SALT GRINDING PLANT

    Directory of Open Access Journals (Sweden)

    Oscar Araya Pasten

    2001-09-01

    Full Text Available

    The objective of this work was to develop a simulation model, using the Awesim software, to optimize the salt production and transport system of a mining company in northern Chile, with the aim of increasing productivity by reducing the downtime of the production plant, matching the filling cycles of the storage hoppers with the transport cycles of the trucks.

  8. Tensors in image processing and computer vision

    CERN Document Server

    De Luis García, Rodrigo; Tao, Dacheng; Li, Xuelong

    2009-01-01

    Tensor signal processing is an emerging field with important applications to computer vision and image processing. This book presents the developments in this branch of signal processing, offering research and discussions by experts in the area. It is suitable for advanced students working in the area of computer vision and image processing.

  9. Automated synthesis of image processing procedures using AI planning techniques

    Science.gov (United States)

    Chien, Steve; Mortensen, Helen

    1994-01-01

    This paper describes the Multimission VICAR (Video Image Communication and Retrieval) Planner (MVP) (Chien 1994) system, which uses artificial intelligence planning techniques (Iwasaki & Friedland, 1985, Pemberthy & Weld, 1992, Stefik, 1981) to automatically construct executable complex image processing procedures (using models of the smaller constituent image processing subprograms) in response to image processing requests made to the JPL Multimission Image Processing Laboratory (MIPL). The MVP system allows the user to specify the image processing requirements in terms of the various types of correction required. Given this information, MVP derives unspecified required processing steps and determines appropriate image processing programs and parameters to achieve the specified image processing goals. This information is output as an executable image processing program which can then be executed to fill the processing request.
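MVP's operator models and planning machinery are far richer than can be shown here, but the core idea of deriving unspecified processing steps from a stated goal can be sketched generically. The operator names, preconditions and effects below are invented for illustration and are not taken from VICAR or MVP:

```python
# Toy goal-driven step derivation in the spirit of AI planning:
# each operator declares preconditions and effects, and the planner
# inserts missing prerequisite steps before the requested goal.
# All names here are hypothetical, not actual VICAR programs.

OPERATORS = [
    # (name, preconditions, effects)
    ("radiometric_correct", set(),              {"radiometric_ok"}),
    ("geometric_correct",   {"radiometric_ok"}, {"geometric_ok"}),
    ("mosaic",              {"geometric_ok"},   {"mosaicked"}),
]

def plan(goals):
    """Return operator names, in order, achieving all goal conditions.
    Assumes an acyclic operator graph with one achiever per condition."""
    achieved, steps = set(), []
    pending = list(goals)
    while pending:
        goal = pending.pop(0)
        if goal in achieved:
            continue
        op = next((o for o in OPERATORS if goal in o[2]), None)
        if op is None:
            raise ValueError(f"no operator achieves {goal}")
        missing = [p for p in op[1] if p not in achieved]
        if missing:                      # satisfy preconditions first
            pending = missing + [goal] + pending
            continue
        steps.append(op[0])
        achieved |= op[2]
    return steps
```

Requesting `plan(["mosaicked"])` yields the two correction steps followed by the mosaic step, mirroring how MVP derives required processing the user did not specify.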

  10. Musashi dynamic image processing system

    International Nuclear Information System (INIS)

    Murata, Yutaka; Mochiki, Koh-ichi; Taguchi, Akira

    1992-01-01

    In order to produce transmitted neutron dynamic images using neutron radiography, a real time system called Musashi dynamic image processing system (MDIPS) was developed to collect, process, display and record image data. The block diagram of the MDIPS is shown. The system consists of a highly sensitive, high resolution TV camera driven by a custom-made scanner, a TV camera deflection controller for optimal scanning, which adjusts to the luminous intensity and the moving speed of an object, a real-time corrector to perform the real time correction of dark current, shading distortion and field intensity fluctuation, a real time filter for increasing the image signal to noise ratio, a video recording unit and a pseudocolor monitor to realize recording in commercially available products and monitoring by means of the CRTs in standard TV scanning, respectively. The TV camera and the TV camera deflection controller utilized for producing still images can be applied to this case. The block diagram of the real-time corrector is shown. Its performance is explained. Linear filters and ranked order filters were developed. (K.I.)

  11. Three-dimensional image signals: processing methods

    Science.gov (United States)

    Schiopu, Paul; Manea, Adrian; Craciun, Anca-Ileana; Craciun, Alexandru

    2010-11-01

    Over the years, extensive studies have been carried out to apply coherent optics methods to real-time processing, communications and image transmission. This is especially true when a large amount of information needs to be processed, e.g., in high-resolution imaging. The recent progress in data-processing networks and communication systems has considerably increased the capacity of information exchange. We describe the results of a literature investigation into processing methods for three-dimensional image signals. All commercially available 3D technologies today are based on stereoscopic viewing. 3D technology was once the exclusive domain of skilled computer-graphics developers with high-end machines and software. The images captured by an advanced 3D digital camera can be displayed on the screen of a 3D digital viewer with or without special glasses; this requires considerable processing power and memory to create and render the complex mix of colors, textures, and virtual lighting and perspective necessary to make figures appear three-dimensional. Also, using a standard digital camera and a technique called phase-shift interferometry, we can capture "digital holograms" - holograms that can be stored on a computer and transmitted over conventional networks. We present some research methods to process digital holograms for Internet transmission, together with results.
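The phase-shift interferometry mentioned here typically relies on the standard four-step formula: given four interferograms shifted by π/2 each, the wrapped phase follows from an arctangent. A minimal sketch (the test values are illustrative, not the authors' setup):

```python
import math

def phase_from_four_steps(i1, i2, i3, i4):
    """Recover the wrapped phase from four intensity samples
    I_k = A + B*cos(phi + k*pi/2), k = 0..3 (four-step algorithm).
    Since I4 - I2 = 2B*sin(phi) and I1 - I3 = 2B*cos(phi):"""
    return math.atan2(i4 - i2, i1 - i3)
```

With synthetic interferograms generated from a known phase, the formula returns that phase exactly (up to wrapping into (-π, π]).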

  12. Analysis of Variance in Statistical Image Processing

    Science.gov (United States)

    Kurz, Ludwik; Hafed Benteftifa, M.

    1997-04-01

    A key problem in practical image processing is the detection of specific features in a noisy image. Analysis of variance (ANOVA) techniques can be very effective in such situations, and this book gives a detailed account of the use of ANOVA in statistical image processing. The book begins by describing the statistical representation of images in the various ANOVA models. The authors present a number of computationally efficient algorithms and techniques to deal with such problems as line, edge, and object detection, as well as image restoration and enhancement. By describing the basic principles of these techniques, and showing their use in specific situations, the book will facilitate the design of new algorithms for particular applications. It will be of great interest to graduate students and engineers in the field of image processing and pattern recognition.

  13. The Sephardic community of Salonika after the Balkan Wars (1912-1913)

    OpenAIRE

    Morcillo Rosillo, Matilde

    1993-01-01

    This paper does not aim to go beyond a first approach to the community of Spanish Jews of Salonika after the end of the Balkan Wars, a period significant in its own right, since it coincides with the collapse of the Ottoman Empire and the resurgence of the new Greece, which would strain coexistence between Jews and Greeks.

  14. Research: the history of salt in Mexico, the Cuyutlán salt flats and the case of the Colima salt workers' cooperative

    Directory of Open Access Journals (Sweden)

    Oriana Zaret Gaytán Gómez

    2015-11-01

    Full Text Available In Mexico, as in many parts of the world, there are artisanal producers who extract salt in ways almost as indigenous as those of our pre-Hispanic ancestors. This work presents the case of a cooperative located in the Cuyutlán region, in the state of Colima, which, although it has already introduced some innovations, still relies on intensive labor to carry out the extraction of salt. We review some of the history of the cooperative and the conflicts it faced during its formation and throughout its existence, and we also address the regional culture that has emerged around it. Before presenting this situation, a general overview is given of the history of salt in Mexico before and after the arrival of the Spaniards, during independent Mexico and the Revolution, up to the present day, in order to contextualize the research and acquaint the reader with the importance of this mineral in the history of our country. Finally, the past and present reality of the salt flats of the Colima coast is presented, in order to argue for the relevance of studying in greater depth the phenomenon of salt exploitation through this society, which on January 1, 2015 reached 90 years of existence.

  15. Effectiveness of sal deoiled seed cake as an inducer for protease production from Aeromonas sp. S1 for its application in kitchen wastewater treatment.

    Science.gov (United States)

    Saini, Vandana; Bhattacharya, Amrik; Gupta, Anshu

    2013-08-01

    The present study is an attempt to demonstrate the feasibility of sal (Shorea robusta) deoiled cake--a forest-based industrial by-product--as a cheaper media supplement for augmented protease production from Aeromonas sp. S1 and application of protease in the treatment of kitchen wastewater. Under optimized conditions, protease production could successfully be enhanced to 5.13-fold (527.5 U mL(-1)) on using sal deoiled seed cake extract (SDOCE), as medium additive, compared to an initial production of 102.7 U mL(-1) in its absence. The culture parameters for optimum production of protease were determined to be incubation time (48 h), pH (7.0), SDOCE concentration (3 % (v/v)), inoculum size (0.3-0.6 % (v/v)), and agitation rate (100 rpm). The enzyme was found to have an optimum pH and temperature of 8.0 and 60 °C, respectively. The protease preparation was tested for treatment of organic-laden kitchen wastewater. After 96 h of wastewater treatment under static condition, enzyme preparation was able to reduce 74 % biological oxygen demand, 37 % total suspended solids, and 41 % oil and grease. The higher and improved level of protease obtained using sal deoiled seed cake-based media hence offers a new approach for value addition to this underutilized biomass through industrial enzyme production. The protease produced using this biomass could also be used as pretreatment tool for remediation of organic-rich food wastewater.

  16. Image restoration and processing methods

    International Nuclear Information System (INIS)

    Daniell, G.J.

    1984-01-01

    This review will stress the importance of using image restoration techniques that deal with incomplete, inconsistent, and noisy data and do not introduce spurious features into the processed image. No single image is equally suitable for both the resolution of detail and the accurate measurement of intensities. A good general purpose technique is the maximum entropy method and the basis and use of this will be explained. (orig.)
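The maximum entropy method referred to in this review selects, among all images consistent with the measured data, the one maximizing the Shannon entropy of the normalized intensities, which avoids introducing spurious features. A toy sketch of that score (the data-consistency constraint, which makes MaxEnt a constrained optimization, is omitted here):

```python
import math

def image_entropy(img):
    """Shannon entropy of a non-negative image treated as a probability
    distribution after normalisation - the score maximised by MaxEnt."""
    total = sum(img)
    p = [v / total for v in img if v > 0]
    return -sum(q * math.log(q) for q in p)

flat  = [1.0] * 16            # maximally non-committal image
spiky = [15.0] + [1.0] * 15   # concentrated (possibly spurious) feature
```

The flat image scores ln(16) ≈ 2.77, the highest possible for 16 pixels, while the spiky one scores lower; MaxEnt restoration prefers the flattest image the data will allow.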

  17. Epidemiologic analysis of injuries that occurred during the 15th Brazilian Indoor Soccer (Futsal) Sub-20 Team Selection Championship

    Directory of Open Access Journals (Sweden)

    Rodrigo Nogueira Ribeiro

    2006-02-01

    Full Text Available INTRODUCTION: Several authors have investigated the incidence of injuries in soccer; however, few studies have analyzed injuries in indoor soccer (futsal). The objective of this study was to analyze the incidence, circumstances and characteristics of the injuries recorded during the 15th Brazilian Futsal Sub-20 Team Selection Championship. METHODS: Physiotherapists or physicians of all teams participating in the championship answered a questionnaire investigating the occurrence of injuries during the matches. The response rate was 100%. RESULTS: A total of 32 injuries were recorded during the 23 matches, an incidence of 1.39 injuries per match, or 208.6 injuries per 1,000 match-hours. Approximately 1 to 3 injuries per match resulted in players being withdrawn from matches or training. Contact injuries predominated, accounting for 65.62% (21 of the 32 injuries, and most of them did not result in the withdrawal of players. CONCLUSIONS: The present study found that the incidence of injuries during the championship was similar to that recorded in other futsal tournaments, but higher than that found in soccer tournaments, reflecting the specificity of the sport; the circumstances and characteristics of the injuries, however, are similar, owing to the similar demands of the two sports.

  18. Early skin tumor detection from microscopic images through image processing

    International Nuclear Information System (INIS)

    Siddiqi, A.A.; Narejo, G.B.; Khan, A.M.

    2017-01-01

    This research was done to provide an appropriate detection technique for skin tumors. The work was carried out using the image processing toolbox of MATLAB. Skin tumors are unwanted skin growths with different causes and varying extents of malignant cells; they are a syndrome in which skin cells lose the ability to divide and grow normally. Early detection of a tumor is the most important factor affecting the survival of a patient. Studying the pattern of the skin cells is a fundamental problem in medical image analysis, and the study of skin tumors has been of great interest to researchers. DIP (Digital Image Processing) allows the use of much more complex algorithms for image processing, and hence can offer both more sophisticated performance at simple tasks and the implementation of methods which would be impossible by analog means. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing. The literature shows that little work has been done at the cellular scale on images of skin. This research proposes several checks for the early detection of skin tumors using microscopic images, developed after testing and observing various algorithms. Analytical evaluation showed that the proposed checks are time-efficient techniques appropriate for tumor detection; the algorithms applied provide promising results in less time, with accuracy. The GUI (Graphical User Interface) generated for the algorithm makes the system user friendly. (author)

  19. Algorithms for image processing and computer vision

    CERN Document Server

    Parker, J R

    2010-01-01

    A cookbook of algorithms for common image processing applications Thanks to advances in computer hardware and software, algorithms have been developed that support sophisticated image processing without requiring an extensive background in mathematics. This bestselling book has been fully updated with the newest of these, including 2D vision methods in content-based searches and the use of graphics cards as image processing computational aids. It's an ideal reference for software engineers and developers, advanced programmers, graphics programmers, scientists, and other specialists wh

  20. Design for embedded image processing on FPGAs

    CERN Document Server

    Bailey, Donald G

    2011-01-01

    "Introductory material will consider the problem of embedded image processing, and how some of the issues may be solved using parallel hardware solutions. Field programmable gate arrays (FPGAs) are introduced as a technology that provides flexible, fine-grained hardware that can readily exploit parallelism within many image processing algorithms. A brief review of FPGA programming languages provides the link between a software mindset normally associated with image processing algorithms, and the hardware mindset required for efficient utilization of a parallel hardware design. The bulk of the book will focus on the design process, and in particular how designing an FPGA implementation differs from a conventional software implementation. Particular attention is given to the techniques for mapping an algorithm onto an FPGA implementation, considering timing, memory bandwidth and resource constraints, and efficient hardware computational techniques. Extensive coverage will be given of a range of image processing...

  1. Crack detection using image processing

    International Nuclear Information System (INIS)

    Moustafa, M.A.A

    2010-01-01

    This thesis contains five main subjects in eight chapters and two appendices. The first subject discusses the Wiener filter for filtering images. In the second subject, we examine different methods, such as the Steepest Descent Algorithm (SDA) and the wavelet transform, to detect and fill cracks, and their applications in areas such as nanotechnology and biotechnology. In the third subject, we attempt to construct 3-D images from 1-D or 2-D images using texture mapping with OpenGL under Visual C++. The fourth subject concerns the use of image warping methods for finding the depth of 2-D images using affine transformations, bilinear transformations, projective mapping, mosaic warping and similarity transformations; more details about this subject are discussed below. The fifth subject, Bezier curves and surfaces, is discussed in detail, including methods for creating Bezier curves and surfaces with unknown distribution using only control points. At the end of our discussion we obtain the solid form using the so-called NURBS (Non-Uniform Rational B-Spline), which depends on the degree of freedom, control points, knots, and an evaluation rule, and is defined as a mathematical representation of 3-D geometry that can accurately describe any shape, from a simple 2-D line, circle, arc, or curve to the most complex 3-D organic free-form surface or solid; this depends on finding the Bezier curve and creating a family of curves (a surface), then filling in between to obtain the solid form. Another part of this subject is concerned with building 3-D geometric models from physical objects using image-based techniques, whose advantage is that they require no expensive equipment; we use NURBS, subdivision surfaces and meshes for finding the depth of any image from one still view or 2-D image. The quality of filtering depends on the way the data is incorporated into the model.
The data should be treated with
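As a generic illustration of the thesis's first subject, the Wiener filter can be written in the frequency domain as G = H* / (|H|² + NSR). The sketch below uses a naive DFT and an assumed noise-to-signal ratio; it is not the author's implementation, and the blur kernel in the test is invented:

```python
import cmath

def dft(x):
    """Naive O(N^2) discrete Fourier transform (fine for a small demo)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT matching dft() above."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def wiener_deconvolve(y, h, nsr):
    """Frequency-domain Wiener filter G = H* / (|H|^2 + NSR).
    y: observed (blurred) signal; h: blur kernel zero-padded to len(y);
    nsr: assumed noise-to-signal power ratio (regularises divisions)."""
    Y, H = dft(y), dft(h)
    G = [Hk.conjugate() / (abs(Hk) ** 2 + nsr) for Hk in H]
    return [c.real for c in idft([g * yk for g, yk in zip(G, Y)])]
```

Deconvolving an impulse blurred by a short kernel with a small NSR recovers the impulse almost exactly; a larger NSR trades sharpness for noise suppression, which is the filter's whole point.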

  2. JIP: Java image processing on the Internet

    Science.gov (United States)

    Wang, Dongyan; Lin, Bo; Zhang, Jun

    1998-12-01

    In this paper, we present JIP - Java Image Processing on the Internet, a new Internet-based application for remote education and software presentation. JIP offers an integrated learning environment on the Internet where remote users not only can share static HTML documents and lecture notes, but also can run and reuse dynamic distributed software components, without having the source code or any extra work of software compilation, installation and configuration. By implementing a platform-independent distributed computational model, local computational resources are consumed instead of the resources on a central server. As an extended Java applet, JIP allows users to select local image files on their computers or specify any image on the Internet using a URL as input. Multimedia lectures such as streaming video/audio and digital images are integrated into JIP and intelligently associated with specific image processing functions. Watching demonstrations and practicing the functions with user-selected input data dramatically encourages learning interest, while promoting the understanding of image processing theory. The JIP framework can be easily applied to other subjects in education or software presentation, such as digital signal processing, business, mathematics, physics, or other areas such as employee training and charged software consumption.

  3. The Sephardic community of Salonika after the Balkan Wars (1912-1913)

    Directory of Open Access Journals (Sweden)

    Matilde Morcillo Rosillo

    1993-01-01

    Full Text Available This paper does not aim to go beyond a first approach to the community of Spanish Jews of Salonika after the end of the Balkan Wars, a period significant in its own right, since it coincides with the collapse of the Ottoman Empire and the resurgence of the new Greece, which would strain coexistence between Jews and Greeks.

  4. Multispectral image enhancement processing for microsat-borne imager

    Science.gov (United States)

    Sun, Jianying; Tan, Zheng; Lv, Qunbo; Pei, Linlin

    2017-10-01

    With the rapid development of remote sensing imaging technology, the microsatellite, one kind of tiny spacecraft, has appeared during the past few years, and a good many studies have contributed to miniaturizing satellites for imaging purposes. Generally speaking, microsatellites weigh less than 100 kilograms, even less than 50 kilograms, making them slightly larger or smaller than a common miniature refrigerator. However, the optical system design is hard to make perfect because of limits on the satellite's room and weight. In most cases, the unprocessed data captured by the imager on the microsatellite cannot meet the application need; spatial resolution is the key problem. As for remote sensing applications, the higher the spatial resolution of the images we gain, the wider the fields in which we can apply them. Consequently, how to utilize super resolution (SR) and image fusion to enhance the quality of imagery deserves study. Our team, the Key Laboratory of Computational Optical Imaging Technology, Academy Opto-Electronics, is devoted to designing high-performance microsat-borne imagers and high-efficiency image processing algorithms. This paper presents a multispectral image enhancement framework for space-borne imagery, combining pan-sharpening and super resolution techniques to deal with the spatial resolution shortcoming of microsatellites. We test the framework on remote sensing images acquired by the CX6-02 satellite and report the SR performance. The experiments illustrate that the proposed approach provides high-quality images.

  5. Advanced Secure Optical Image Processing for Communications

    Science.gov (United States)

    Al Falou, Ayman

    2018-04-01

    New image processing tools and data-processing network systems have considerably increased the volume of transmitted information, such as 2D and 3D images with high resolution. Thus, more complex networks and long processing times become necessary, and high image quality and transmission speeds are requested for an increasing number of applications. To satisfy these two requests, several solutions, either numerical or optical, have been offered separately. This book explores both alternatives and describes research works that are converging towards optical/numerical hybrid solutions for high-volume signal and image processing and transmission. Without being limited to hybrid approaches, the latter are particularly investigated in this book with the purpose of combining the advantages of both techniques. Additionally, purely numerical or optical solutions are also considered, since they emphasize the advantages of each of the two approaches separately.

  6. PARAGON-IPS: A Portable Imaging Software System For Multiple Generations Of Image Processing Hardware

    Science.gov (United States)

    Montelione, John

    1989-07-01

    Paragon-IPS is a comprehensive software system which is available on virtually all generations of image processing hardware. It is designed for an image processing department, or for scientists and engineers doing image processing full-time. It is being used by leading R&D labs in government agencies and Fortune 500 companies. Applications include reconnaissance, non-destructive testing, remote sensing, medical imaging, etc.

  7. Flame analysis using image processing techniques

    Science.gov (United States)

    Her Jie, Albert Chang; Zamli, Ahmad Faizal Ahmad; Zulazlan Shah Zulkifli, Ahmad; Yee, Joanne Lim Mun; Lim, Mooktzeng

    2018-04-01

    This paper presents image processing techniques that use fuzzy logic and a neural network approach to perform flame analysis. Flame diagnostics are important in industry for extracting relevant information from flame images. Experimental tests were carried out on a model industrial burner with different flow rates. Flame features such as luminous and spectral parameters are extracted using image processing and the Fast Fourier Transform (FFT); flame images are acquired using a FLIR infrared camera. Non-linearities such as thermo-acoustic oscillations and background noise affect the stability of the flame, and flame velocity is one of the important characteristics that determine stability. In this paper, an image processing method is proposed to determine flame velocity. A power spectral density (PSD) graph is a good tool for vibration analysis, from which flame stability can be approximated; however, a more intelligent diagnostic system is needed to determine flame stability automatically. In this paper, flame features at different flow rates are compared and analyzed, and the selected flame features are used as inputs to the proposed fuzzy inference system to determine flame stability. A neural network is used to test the performance of the fuzzy inference system.
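The PSD-based check described above can be illustrated with a plain periodogram: the strongest non-DC frequency bin is one crude indicator of an oscillation mode. The sampling rate and signal below are invented for the example and are unrelated to the authors' burner data:

```python
import cmath

def periodogram(x):
    """Power spectral density estimate |DFT(x)|^2 / N via a naive DFT."""
    N = len(x)
    X = [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
         for k in range(N)]
    return [abs(Xk) ** 2 / N for Xk in X]

def dominant_frequency(x, fs):
    """Frequency in Hz of the strongest non-DC component of x,
    sampled at fs Hz (only the first half of the spectrum is scanned)."""
    psd = periodogram(x)
    half = len(x) // 2
    k = max(range(1, half), key=lambda i: psd[i])
    return k * fs / len(x)
```

Feeding in a synthetic oscillation whose frequency falls exactly on a DFT bin, the function returns that frequency; real flame signals would need windowing and averaging (e.g. Welch's method) on top of this.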

  8. An integral design strategy combining optical system and image processing to obtain high resolution images

    Science.gov (United States)

    Wang, Jiaoyang; Wang, Lin; Yang, Ying; Gong, Rui; Shao, Xiaopeng; Liang, Chao; Xu, Jun

    2016-05-01

    In this paper, an integral design that combines the optical system with image processing is introduced to obtain high resolution images, and its performance is evaluated and demonstrated. Traditional imaging methods often separate the two technical procedures of optical system design and image processing, resulting in a failure to achieve efficient cooperation between the optical and digital elements. Therefore, an innovative approach is presented that combines the merit function used during optical design with the constraint conditions of the image processing algorithms. Specifically, an optical imaging system with low resolution is designed to collect the image signals which are indispensable for image processing, while the ultimate goal is to obtain high resolution images from the final system. In order to optimize the global performance, the optimization function of the ZEMAX software is utilized and the number of optimization cycles is controlled. A Wiener filter algorithm is then adopted to process the simulated images, and the mean squared error (MSE) is taken as the evaluation criterion. The results show that, although the optical figures of merit for the optical imaging system are not the best, it can provide image signals that are more suitable for image processing. In conclusion, the integral design of the optical system and image processing can find the overall optimal solution that is missed by the traditional design methods. Especially when designing a complex optical system, this integral design strategy has obvious advantages in simplifying structure and reducing cost, as well as in obtaining high resolution images, and it has a promising perspective of industrial application.

  9. Entropy-Based Block Processing for Satellite Image Registration

    Directory of Open Access Journals (Sweden)

    Ikhyun Lee

    2012-11-01

    Full Text Available Image registration is an important task in many computer vision applications such as fusion systems, 3D shape recovery and earth observation. Registering satellite images is particularly challenging and time-consuming due to limited resources and large image sizes. In such scenarios, state-of-the-art image registration methods such as the scale-invariant feature transform (SIFT) may not be suitable due to their high processing time. In this paper, we propose an algorithm based on block processing via entropy to register satellite images. The performance of the proposed method is evaluated using different real images. The comparative analysis shows that it not only reduces the processing time but also enhances the accuracy.
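The entropy criterion for selecting informative blocks can be sketched as follows; the histogram bin count and threshold are illustrative assumptions, not the paper's parameters:

```python
import math

def block_entropy(block, levels=8):
    """Shannon entropy (bits) of a block's intensity histogram;
    intensities are assumed to lie in [0, 1)."""
    hist = [0] * levels
    for v in block:
        hist[min(int(v * levels), levels - 1)] += 1
    n = len(block)
    return -sum(c / n * math.log2(c / n) for c in hist if c)

def informative_blocks(blocks, threshold):
    """Indices of blocks whose entropy exceeds the threshold - the idea
    behind entropy-based block selection: flat, featureless blocks are
    skipped so matching effort goes to textured regions."""
    return [i for i, b in enumerate(blocks) if block_entropy(b) > threshold]
```

A constant block scores zero entropy and is discarded, while a block whose intensities spread evenly over the histogram scores near the maximum log2(levels) bits and is kept for matching.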

  10. Marriage traditions in the Rabat-Salé-Zemmour-Zaer region of Morocco

    OpenAIRE

    Hami, H.; Soulaymani, A.; Mokhtari, A.

    2011-01-01

The practice of consanguineous marriage is very widespread in the Middle East, North Africa and Southwest Asia, where 20 to more than 50% of marriages are consanguineous. A sample of 270 married women, taken at random in the maternity ward of the Souissi Hospital in Rabat (2004-2005), was the subject of a prospective study aiming to determine the frequency of consanguineous marriages in the Rabat-Salé-Zemmour-Zaer region of Morocco. The results obtained show that 20...

  11. The Pan-STARRS PS1 Image Processing Pipeline

    Science.gov (United States)

    Magnier, E.

The Pan-STARRS PS1 Image Processing Pipeline (IPP) performs the image processing and data analysis tasks needed to enable the scientific use of the images obtained by the Pan-STARRS PS1 prototype telescope. The primary goals of the IPP are to process the science images from the Pan-STARRS telescopes and make the results available to other systems within Pan-STARRS. It is also responsible for combining all of the science images in a given filter into a single representation of the non-variable component of the night sky, defined as the "Static Sky". To achieve these goals, the IPP also performs other analysis functions to generate the calibrations needed in the science image processing, and to occasionally use the derived data to generate improved astrometric and photometric reference catalogs. It also provides the infrastructure needed to store the incoming data and the resulting data products. The IPP inherits lessons learned, and in some cases code and prototype code, from several other astronomy image analysis systems, including Imcat (Kaiser), the Sloan Digital Sky Survey (REF), the Elixir system (Magnier & Cuillandre), and Vista (Tonry). Imcat and Vista have a large number of robust image processing functions. SDSS has demonstrated a working analysis pipeline and large-scale database system for a dedicated project. The Elixir system has demonstrated an automatic image processing system and an object database system for operational usage. This talk will present an overview of the IPP architecture, functional flow, code development structure, and selected analysis algorithms. Also discussed is the highly parallel hardware configuration necessary to support PS1 operational requirements. Finally, results are presented of the processing of images collected during PS1 early commissioning tasks utilizing the Pan-STARRS Test Camera #3.
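The "Static Sky" idea above, combining all science images in a filter into a representation of the night sky's non-variable component, can be illustrated with a per-pixel median stack. This is a generic sketch of the principle, not the IPP's actual stacking algorithm; real pipelines must also register, resample, and weight the exposures.

```python
from statistics import median

def static_sky(exposures):
    """Per-pixel median across registered exposures; transients (cosmic rays,
    moving objects, variables near minimum/maximum) are rejected by the median."""
    rows, cols = len(exposures[0]), len(exposures[0][0])
    return [[median(img[r][c] for img in exposures) for c in range(cols)]
            for r in range(rows)]

# Demo: three 2x2 exposures, one contaminated by a transient hit.
bg = [[10, 10], [10, 10]]
hit = [[10, 10], [10, 500]]
sky = static_sky([bg, bg, hit])
```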

  12. A Document Imaging Technique for Implementing Electronic Loan Approval Process

    Directory of Open Access Journals (Sweden)

    J. Manikandan

    2015-04-01

Full Text Available Image processing is one of the leading technologies of computer applications. Image processing is a type of signal processing: the input to an image processor is an image or video frame, and the output is an image or a subset of an image [1]. Computer graphics and computer vision both use image processing techniques. Image processing systems are used in various environments such as medicine, computer-aided design (CAD), research, crime investigation and the military. In this paper, we propose a document image processing technique for establishing an electronic loan approval process (E-LAP) [2]. The loan approval process has traditionally been tedious; the E-LAP system attempts to reduce its complexity. Customers log in to fill in the loan application form online with all details and submit the form. The loan department then processes the submitted form and sends an acknowledgement mail via E-LAP to the requesting customer with the list of documents required for loan approval [3]. The customer can then upload scanned copies of all required documents. All this interaction between customer and bank takes place through the E-LAP system.

  13. Automatic tissue image segmentation based on image processing and deep learning

    Science.gov (United States)

    Kong, Zhenglun; Luo, Junyi; Xu, Shengpu; Li, Ting

    2018-02-01

Image segmentation plays an important role in multimodality imaging, especially in fusing structural images offered by CT and MRI with functional images collected by optical or other novel imaging technologies. Image segmentation also provides the detailed structural description required for quantitative visualization of treatment light distribution in the human body when incorporated with 3D light transport simulation methods. Here we used image enhancement, operators, and morphometry methods to extract accurate contours of different tissues, namely skull, cerebrospinal fluid (CSF), grey matter (GM) and white matter (WM), from 5 fMRI head image datasets. We then utilized a convolutional neural network to realize automatic segmentation of the images in a deep learning manner, and also introduced parallel computing. These approaches greatly reduced the processing time compared to manual and semi-automatic segmentation, and are of great importance in improving speed and accuracy as more and more samples are learned. Our results can be used as criteria when diagnosing diseases such as cerebral atrophy, which is caused by pathological changes in gray matter or white matter. We demonstrated the great potential of such combined image processing and deep learning for automatic tissue image segmentation in personalized medicine, especially in monitoring and treatment.
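The "image enhancement, operators, and morphometry" stage typically begins by separating tissue from background. As a stand-in for the paper's (unspecified) operators, the sketch below shows one classical automatic thresholding operator, Otsu's method, which picks the threshold maximizing between-class variance of the histogram.

```python
def otsu_threshold(pixels, levels=256):
    """Return the intensity threshold maximizing between-class variance (Otsu)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w_b, sum_b = 0, -1.0, 0, 0.0
    for t in range(levels):
        w_b += hist[t]                 # background weight
        if w_b == 0:
            continue
        w_f = total - w_b              # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b              # background mean
        m_f = (sum_all - sum_b) / w_f  # foreground mean
        var = w_b * w_f * (m_b - m_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Demo: a bimodal "image" of dark background and bright tissue.
pixels = [10] * 50 + [200] * 50
t = otsu_threshold(pixels)
```

A binary mask (`p > t`) from such a threshold is a common starting point before morphological cleanup or CNN-based refinement.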

  14. Digital image processing in neutron radiography

    International Nuclear Information System (INIS)

    Koerner, S.

    2000-11-01

    Neutron radiography is a method for the visualization of the macroscopic inner-structure and material distributions of various samples. The basic experimental arrangement consists of a neutron source, a collimator functioning as beam formatting assembly and of a plane position sensitive integrating detector. The object is placed between the collimator exit and the detector, which records a two dimensional image. This image contains information about the composition and structure of the sample-interior, as a result of the interaction of neutrons by penetrating matter. Due to rapid developments of detector and computer technology as well as deployments in the field of digital image processing, new technologies are nowadays available which have the potential to improve the performance of neutron radiographic investigations enormously. Therefore, the aim of this work was to develop a state-of-the art digital imaging device, suitable for the two neutron radiography stations located at the 250 kW TRIGA Mark II reactor at the Atominstitut der Oesterreichischen Universitaeten and furthermore, to identify and develop two and three dimensional digital image processing methods suitable for neutron radiographic and tomographic applications, and to implement and optimize them within data processing strategies. The first step was the development of a new imaging device fulfilling the requirements of a high reproducibility, easy handling, high spatial resolution, a large dynamic range, high efficiency and a good linearity. The detector output should be inherently digitized. The key components of the detector system selected on the basis of these requirements consist of a neutron sensitive scintillator screen, a CCD-camera and a mirror to reflect the light emitted by the scintillator to the CCD-camera. This detector design enables to place the camera out of the direct neutron beam. The whole assembly is placed in a light shielded aluminum box. The camera is controlled by a

  16. Image Harvest: an open-source platform for high-throughput plant image processing and analysis

    Science.gov (United States)

    Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal

    2016-01-01

High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917

  17. Image processing techniques for remote sensing data

    Digital Repository Service at National Institute of Oceanography (India)

    RameshKumar, M.R.

interpretation and for processing of scene data for autonomous machine perception. Digital image processing techniques are used for automatic character/pattern recognition, industrial robots for product assembly and inspection, military reconnaissance... and spatial co-ordinates into discrete components. The mathematical concepts involved are sampling and transform theory. Two-dimensional transforms are used for image enhancement, restoration, encoding and description too. The main objective of the image...

  18. Integrating digital topology in image-processing libraries.

    Science.gov (United States)

    Lamy, Julien

    2007-01-01

This paper describes a method to integrate digital topology information in image-processing libraries. This additional information allows a library user to write algorithms respecting topological constraints, for example a seed fill or a skeletonization algorithm. As digital topology is absent from most image-processing libraries, such constraints otherwise cannot be fulfilled. We describe and give code samples for all the structures necessary for this integration, and show a use case in the form of a homotopic thinning filter inside ITK. The obtained filter can be up to a hundred times as fast as ITK's thinning filter and works for any image dimension. This paper mainly deals with integration within ITK, but the method can be adapted with only minor modifications to other image-processing libraries.
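The paper motivates topology-aware algorithms with the seed fill as an example. For orientation, here is a plain 4-connected seed fill without the paper's topological constraints; the connectivity choice (4- vs 8-connectivity for foreground and background) is exactly the kind of digital-topology information the paper proposes to attach to the image type.

```python
from collections import deque

def seed_fill(grid, seed, new_label):
    """Breadth-first 4-connected flood fill, relabeling the region containing `seed`."""
    rows, cols = len(grid), len(grid[0])
    r0, c0 = seed
    old = grid[r0][c0]
    if old == new_label:
        return grid
    q = deque([seed])
    grid[r0][c0] = new_label
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == old:
                grid[nr][nc] = new_label
                q.append((nr, nc))
    return grid

# Demo: fill the 4-connected component of 1s touching the top-left corner.
grid = [[1, 1, 0],
        [0, 1, 0],
        [0, 0, 1]]
seed_fill(grid, (0, 0), 2)
```

Note that the isolated 1 in the bottom-right corner is untouched: under 4-connectivity it belongs to a different component, which is precisely the topological distinction the library integration makes explicit.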

  19. Preparation and characterization of Tb3+ and Tb(sal)3.nH2O doped PC:PMMA blend

    International Nuclear Information System (INIS)

    Dwivedi, Y.; Singh, A.K.; Prakash, Rajiv; Rai, S.B.

    2011-01-01

Tb doped polycarbonate:poly(methyl methacrylate) (Tb-PC:PMMA) blend was prepared with varying proportions of PC and PMMA. Thermal and spectroscopic properties of the doped polymer have been investigated employing Fourier Transform Infrared (FTIR) absorption and differential scanning calorimetry (DSC) techniques. The PC:PMMA blend (with 10 wt% PC and 90 wt% PMMA) shows better miscibility. Optical properties of the dopant Tb3+ ions have been investigated using UV-vis absorption and fluorescence excited by 355 nm radiation. It is seen that the luminescence intensity of the Tb3+ ion depends on the PC:PMMA ratio and on the Tb3+ ion concentration. Concentration quenching is seen for TbCl3·6H2O concentrations larger than 4 wt%. Addition of salicylic acid to the polymer blend increases the luminescence from Tb3+ ions. Luminescence decay curve analysis affirms the non-radiative energy transfer from salicylic acid to Tb3+ ions, which is identified as the reason behind this enhancement. - Highlights: → Blend formation is confirmed at PC/90PMMA, using FTIR and DSC techniques. → Absorption and bandgap studies of the blend and parent components were carried out. → Optical properties of Tb and the Tb(sal)3·nH2O complex have been studied in the PC/PMMA blend. → Luminescence decay curves confirm non-radiative energy transfer from Sal to Tb3+ ions.

  20. Advances in low-level color image processing

    CERN Document Server

    Smolka, Bogdan

    2014-01-01

    Color perception plays an important role in object recognition and scene understanding both for humans and intelligent vision systems. Recent advances in digital color imaging and computer hardware technology have led to an explosion in the use of color images in a variety of applications including medical imaging, content-based image retrieval, biometrics, watermarking, digital inpainting, remote sensing, visual quality inspection, among many others. As a result, automated processing and analysis of color images has become an active area of research, to which the large number of publications of the past two decades bears witness. The multivariate nature of color image data presents new challenges for researchers and practitioners as the numerous methods developed for single channel images are often not directly applicable to multichannel  ones. The goal of this volume is to summarize the state-of-the-art in the early stages of the color image processing pipeline.

  1. Mesh Processing in Medical Image Analysis

    DEFF Research Database (Denmark)

    The following topics are dealt with: mesh processing; medical image analysis; interactive freeform modeling; statistical shape analysis; clinical CT images; statistical surface recovery; automated segmentation; cerebral aneurysms; and real-time particle-based representation....

  2. Signal Processing in Medical Ultrasound B-mode Imaging

    International Nuclear Information System (INIS)

    Song, Tai Kyong

    2000-01-01

Ultrasonic imaging is the most widely used modality among modern imaging devices for medical diagnosis, and system performance has improved dramatically since the early 1990s due to rapid advances in DSP performance and VLSI technology that have made it possible to employ more sophisticated algorithms. This paper describes mainstream digital signal processing functions along with the associated implementation considerations in modern medical ultrasound imaging systems. Topics covered include signal processing methods for resolution improvement, ultrasound imaging system architectures, the roles and necessity of DSP and VLSI technology in the development of medical ultrasound imaging systems, and array signal processing techniques for ultrasound focusing.
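One of the mainstream B-mode processing steps is logarithmic compression of the detected echo envelope so that the huge dynamic range of echo amplitudes fits an 8-bit display. The sketch below is a generic textbook version; the 60 dB display range and 8-bit mapping are common conventions, not values taken from the paper.

```python
import math

def log_compress(envelope, dynamic_range_db=60.0):
    """Map echo envelope amplitudes to 8-bit display values over a fixed dB range."""
    peak = max(envelope)
    out = []
    for a in envelope:
        # 0 dB at the peak, negative dB below it; clamp tiny values.
        db = 20.0 * math.log10(max(a, 1e-12) / peak)
        v = 255.0 * (db + dynamic_range_db) / dynamic_range_db
        out.append(int(round(min(max(v, 0.0), 255.0))))
    return out

# Demo: peak echo, a mid-level echo, and an echo 60 dB down.
disp = log_compress([1.0, 0.25, 0.001])
```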

  3. The premonitory dream of Moisés Almosnino about Yosef Nasí in the Tratado de los sueños (Salonika, 1564)

    Directory of Open Access Journals (Sweden)

    Romeu Ferré, Pilar

    2004-06-01

Full Text Available Moshe Almosnino and Yosef Nasi did not only share a common political and social context, namely Salonika and Constantinople in the 16th century. They also devoted their time and effort to helping the Sephardic people of the Diaspora to which they both belonged. One can say that their work was rewarded, as the premonitory dream was fulfilled. Two famous figures, Moisés Almosnino and Yosef Nasí, coincided in a common geographical and temporal space (Salonika-Constantinople in the mid-16th century). From there, both worked tirelessly on behalf of the Sephardic people in the diaspora. The fulfillment of the premonitory dream would satisfy the expectations of both.

  4. Adaptive Algorithms for Automated Processing of Document Images

    Science.gov (United States)

    2011-01-01

Title of dissertation: Adaptive Algorithms for Automated Processing of Document Images, Mudit Agrawal, Doctor of Philosophy, 2011. Dissertation submitted to the Faculty of the Graduate School of the University

  5. Optimization of super-resolution processing using incomplete image sets in PET imaging.

    Science.gov (United States)

    Chang, Guoping; Pan, Tinsu; Clark, John W; Mawlawi, Osama R

    2008-12-01

Super-resolution (SR) techniques are used in PET imaging to generate a high-resolution image by combining multiple low-resolution images that have been acquired from different points of view (POVs). The number of low-resolution images used defines the processing time and memory storage necessary to generate the SR image. In this paper, the authors propose two optimized SR implementations (ISR-1 and ISR-2) that require only a subset of the low-resolution images (two sides and diagonal of the image matrix, respectively), thereby reducing the overall processing time and memory storage. In an N x N matrix of low-resolution images, ISR-1 would be generated using images from the two sides of the N x N matrix, while ISR-2 would be generated from images across the diagonal of the image matrix. The objective of this paper is to investigate whether the two proposed SR methods can achieve similar performance in contrast and signal-to-noise ratio (SNR) as the SR image generated from a complete set of low-resolution images (CSR) using simulation and experimental studies. A simulation, a point source, and a NEMA/IEC phantom study were conducted for this investigation. In each study, 4 (2 x 2) or 16 (4 x 4) low-resolution images were reconstructed from the same acquired data set while shifting the reconstruction grid to generate images from different POVs. SR processing was then applied in each study to combine all as well as two different subsets of the low-resolution images to generate the CSR, ISR-1, and ISR-2 images, respectively. For reference purposes, a native reconstruction (NR) image using the same matrix size as the three SR images was also generated. The resultant images (CSR, ISR-1, ISR-2, and NR) were then analyzed using visual inspection, line profiles, SNR plots, and background noise spectra. The simulation study showed that the contrast and the SNR difference between the two ISR images and the CSR image were on average 0.4% and 0.3%, respectively. Line profiles of
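The core idea, combining low-resolution images acquired from shifted points of view onto a finer grid, can be illustrated in an idealized 1-D case where the subpixel shifts are exact and noise-free. Real SR processing, including the ISR variants above, must also handle interpolation and noise; this sketch only shows the interleaving principle.

```python
def super_resolve_1d(lowres_views, shifts, factor):
    """Interleave shifted low-resolution samplings onto a high-resolution grid.
    Assumes each view samples the same scene at exact subpixel offsets of 1/factor."""
    n_hr = len(lowres_views[0]) * factor
    hr = [0.0] * n_hr
    for view, s in zip(lowres_views, shifts):
        for i, v in enumerate(view):
            hr[i * factor + s] = v
    return hr

# Demo: two half-resolution views of a signal, offset by 0 and 1 fine samples.
signal = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
views = [signal[0::2], signal[1::2]]
hr = super_resolve_1d(views, [0, 1], 2)
```

Using only a subset of views (as ISR-1 and ISR-2 do for the 2-D grid) trades some grid coverage for processing time and memory, which is exactly the compromise the paper evaluates.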

  6. Bio-inspired approach to multistage image processing

    Science.gov (United States)

    Timchenko, Leonid I.; Pavlov, Sergii V.; Kokryatskaya, Natalia I.; Poplavska, Anna A.; Kobylyanska, Iryna M.; Burdenyuk, Iryna I.; Wójcik, Waldemar; Uvaysova, Svetlana; Orazbekov, Zhassulan; Kashaganova, Gulzhan

    2017-08-01

Multistage integration of visual information in the brain allows people to respond quickly to the most significant stimuli while preserving the ability to recognize small details in the image. Implementation of this principle in technical systems can lead to more efficient processing procedures. The multistage approach to image processing described in this paper comprises the main types of cortical multistage convergence: one occurs within each visual pathway and the other between the pathways. This approach maps input images into a flexible hierarchy which reflects the complexity of the image data. The procedures of temporal image decomposition and hierarchy formation are described in mathematical terms. The multistage system highlights spatial regularities, which are passed through a number of transformational levels to generate a coded representation of the image that encapsulates, in a compact manner, structure on different hierarchical levels in the image. At each processing stage a single output result is computed to allow a very quick response from the system. The result is represented as an activity pattern, which can be compared with previously computed patterns on the basis of the closest match.

  7. Crack Length Detection by Digital Image Processing

    DEFF Research Database (Denmark)

    Lyngbye, Janus; Brincker, Rune

    1990-01-01

It is described how digital image processing is used for measuring the length of fatigue cracks. The system is installed in a Personal Computer equipped with image processing hardware and performs automated measurement on plane metal specimens used in fatigue testing. Normally one cannot achieve a resolution better than that of the image processing equipment. To overcome this problem an extrapolation technique is used, resulting in a better resolution. The system was tested on a specimen loaded with different loads. The error σa was less than 0.031 mm, which is of the same size as human measuring...

  8. Automated processing of X-ray images in medicine

    International Nuclear Information System (INIS)

    Babij, Ya.S.; B'yalyuk, Ya.O.; Yanovich, I.A.; Lysenko, A.V.

    1991-01-01

Theoretical and practical achievements in the application of computing technology to the processing of X-ray images in medicine are generalized. The scheme of the main directions and tasks of X-ray image processing is given and analyzed. The principal problems arising in automated processing of X-ray images are distinguished. It is shown that, for the interpretation of X-ray images, it is expedient to introduce the notion of a relative operating characteristic (ROC) of a roentgenologist. Every point on the ROC curve determines the individual criteria of the roentgenologist for making a positive diagnosis in a given situation.

  9. Employing image processing techniques for cancer detection using microarray images.

    Science.gov (United States)

    Dehghan Khalilabad, Nastaran; Hassanpour, Hamid

    2017-02-01

Microarray technology is a powerful genomic tool for simultaneously studying and analyzing the behavior of thousands of genes. The analysis of images obtained from this technology plays a critical role in the detection and treatment of diseases. The aim of the current study is to develop an automated system for analyzing data from microarray images in order to detect cancerous cases. The proposed system consists of three main phases, namely image processing, data mining, and detection of the disease. The image processing phase performs operations such as refining image rotation, gridding (locating genes) and extracting raw data from images; the data mining phase includes normalizing the extracted data and selecting the more effective genes. Finally, from the extracted data, cancerous cells are recognized. To evaluate the performance of the proposed system, a microarray database is employed which includes breast cancer, myeloid leukemia and lymphoma samples from the Stanford Microarray Database. The results indicate that the proposed system is able to identify the type of cancer from the data set with an accuracy of 95.45%, 94.11%, and 100%, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.
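Gridding (locating genes) is commonly done by projecting image intensity onto the axes and finding the bright bands. The toy sketch below locates spot columns from a column projection profile; it is an illustrative assumption, not the paper's exact gridding procedure, and the threshold is arbitrary.

```python
def column_profile(img):
    """Sum of intensities in each column (projection onto the x-axis)."""
    return [sum(col) for col in zip(*img)]

def spot_columns(img, thresh):
    """Group adjacent above-threshold columns into runs; return run centers."""
    prof = column_profile(img)
    centers, run = [], []
    for i, v in enumerate(prof):
        if v > thresh:
            run.append(i)
        elif run:
            centers.append(sum(run) // len(run))
            run = []
    if run:
        centers.append(sum(run) // len(run))
    return centers

# Demo: a tiny "microarray" strip with two bright spot bands.
img = [[0, 9, 9, 0, 0, 9, 9, 0]] * 4
centers = spot_columns(img, 10)
```

Repeating the same profile analysis on rows yields the grid intersections from which raw spot intensities are then extracted.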

  10. Twofold processing for denoising ultrasound medical images.

    Science.gov (United States)

    Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y

    2015-01-01

Ultrasound medical (US) imaging non-invasively pictures the inside of a human body for disease diagnostics. Speckle noise attacks ultrasound images, degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold uses block-based thresholding, both hard (BHT) and soft (BST), on pixels in the wavelet domain with 8, 16, 32 and 64 non-overlapping block sizes. This first fold is a good denoising method for reducing speckle, but it also induces blurring of the object of interest. The second fold then restores object boundaries and texture with adaptive wavelet fusion. The restoration of the degraded object in the block-thresholded US image is carried out through wavelet coefficient fusion of the object in the original US image and the block-thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with normalized differential mean (NDF) to introduce the highest level of contrast between the denoised pixels and the object pixels in the resultant image. The proposed twofold methods are thus named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate visual quality improvement to an interesting level with the proposed twofold processing, where the first fold removes noise and the second fold restores object properties. Peak signal to noise ratio (PSNR), normalized cross correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. Validation of the proposed method is done by comparison with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for enhancement of US images. The US images were provided by the AMMA hospital radiology labs at Vijayawada, India.
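The first fold, thresholding wavelet coefficients with hard or soft rules, can be sketched with a one-level Haar transform on a 1-D signal. The paper works on 2-D blocks in a full wavelet decomposition; this stdlib-only stand-in only shows the threshold-and-reconstruct mechanism, and the signal, noise pattern, and threshold are made up for the demo.

```python
def haar_denoise(x, thresh, mode="hard"):
    """One-level Haar analysis, detail-band thresholding, then synthesis.
    `x` must have even length."""
    approx = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]

    def shrink(d):
        if abs(d) <= thresh:
            return 0.0                      # kill small (noise-like) details
        if mode == "hard":
            return d                        # keep large details unchanged
        return (abs(d) - thresh) * (1 if d > 0 else -1)  # soft: shrink toward 0

    detail = [shrink(d) for d in detail]
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])          # inverse Haar step
    return out

# Demo: a piecewise-constant signal with small alternating "speckle".
clean = [1.0, 1.0, 1.0, 1.0, 5.0, 5.0, 5.0, 5.0]
noisy = [1.2, 0.8, 1.2, 0.8, 5.2, 4.8, 5.2, 4.8]
den = haar_denoise(noisy, 0.3, mode="hard")
```

The choice between hard and soft shrinkage is exactly the BHT/BST distinction above; soft thresholding also shrinks the retained coefficients, trading a little edge sharpness for smoother output.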

  11. Influence of salt intake and sodium-containing effervescent analgesics in patients with hypertension and vascular risk

    Directory of Open Access Journals (Sweden)

    Martínez Pérez SR

    2010-12-01

Full Text Available The sodium contained in food and in some medicines can raise an individual's blood pressure values. The World Health Organization recommends that healthy adults not exceed a daily intake of 2 g of sodium (5 g of common salt). For risk groups, stricter limits are established (0.5-1.5 g of sodium per day). In Spain, each person is estimated to consume 11 g of salt per day on average. Several studies, carried out in different populations, have demonstrated a direct correlation between dietary sodium intake and the prevalence of arterial hypertension. Other studies corroborate the effect of reducing dietary salt intake on lowering blood pressure values and cardiovascular morbidity and mortality. Many medicines contain a large amount of sodium because of effervescent excipients (1 g of effervescent paracetamol can provide more than 0.5 g of sodium), so that, if dosed every 6-8 hours, they can exceed the recommended daily sodium limits, even for a healthy adult. This article reviews the available evidence on the beneficial effect of a low-sodium diet for the control of hypertension, as well as considerations on the use of analgesics and NSAIDs in patients with cardiovascular disease, and stresses the warning to avoid, whenever possible, the use of effervescent medicines, especially in people over 50 years of age.

  12. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    Science.gov (United States)

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  13. Acquisition and Post-Processing of Immunohistochemical Images.

    Science.gov (United States)

    Sedgewick, Jerry

    2017-01-01

Augmentation of digital images is almost always a necessity in order to obtain a reproduction that matches the appearance of the original. However, that augmentation can mislead if it is done incorrectly and not within reasonable limits. When procedures are in place for ensuring that originals are archived and image manipulation steps reported, scientists not only follow good laboratory practices, but avoid ethical issues associated with post-processing and protect their labs from any future allegations of scientific misconduct. Also, when procedures are in place for correct acquisition of images, the extent of post-processing is minimized or eliminated. These procedures include white balancing (for brightfield images), keeping tonal values within the dynamic range of the detector, frame averaging to eliminate noise (typically in fluorescence imaging), use of the highest bit depth when a choice is available, flatfield correction, and archiving of the image in a non-lossy format (not JPEG). When post-processing is necessary, the commonly used applications for correction include Photoshop and ImageJ, but a free program (GIMP) can also be used. Corrections to images include scaling the bit depth to higher and lower ranges, removing color casts from brightfield images, setting brightness and contrast, reducing color noise, reducing "grainy" noise, conversion of pure colors to grayscale, conversion of grayscale to colors typically used in fluorescence imaging, correction of uneven illumination (flatfield correction), merging color images (fluorescence), and extending the depth of focus. These corrections are explained in step-by-step procedures in the chapter that follows.

  14. The Digital Microscope and Its Image Processing Utility

    Directory of Open Access Journals (Sweden)

    Tri Wahyu Supardi

    2011-12-01

    Full Text Available Many institutions, including high schools, own a large number of analog or ordinary microscopes. These microscopes are used to observe small objects. Unfortunately, object observation on an ordinary microscope requires precision and visual acuity of the user. This paper discusses the development of a high-resolution digital microscope from an analog microscope, including the image processing utility, which allows digital microscope users to capture, store and process digital images of the object being observed. The proposed microscope is constructed from hardware components that can be easily found in Indonesia. The image processing software is capable of performing brightness adjustment, contrast enhancement, histogram equalization, scaling and cropping. The proposed digital microscope has a maximum magnification of 1600x, and the image resolution can be varied from 320x240 pixels up to 2592x1944 pixels. The microscope was tested with various objects at a variety of magnifications, and image processing was carried out on the images of the objects. The results showed that the digital microscope and its image processing system were capable of enhancing the observed object and performing other operations according to the user's needs. The digital microscope has eliminated the need for direct observation by the human eye as with the traditional microscope.
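    The histogram equalization feature listed above can be illustrated with a minimal sketch. This is not the microscope's actual software; the function name and list-of-rows data layout are invented for illustration:

```python
def equalize_histogram(img, levels=256):
    """Classic histogram equalization for a grayscale image given as a
    list of rows of ints in [0, levels - 1]."""
    hist = [0] * levels
    for row in img:
        for p in row:
            hist[p] += 1
    total = sum(hist)
    # Cumulative distribution, then a look-up table that spreads the
    # occupied intensity levels over the full output range.
    cdf, running = [], 0
    for h in hist:
        running += h
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)
    if total == cdf_min:          # flat image: nothing to equalize
        return [row[:] for row in img]
    lut = [round((c - cdf_min) / (total - cdf_min) * (levels - 1)) for c in cdf]
    return [[lut[p] for p in row] for row in img]

# A low-contrast patch is stretched to span the whole 0-255 range.
stretched = equalize_histogram([[52, 52], [52, 154]])
```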

  15. Image processing on the image with pixel noise bits removed

    Science.gov (United States)

    Chuang, Keh-Shih; Wu, Christine

    1992-06-01

    Our previous studies used statistical methods to assess the noise level in digital images of various radiological modalities. We separated the pixel data into signal bits and noise bits and demonstrated visually that the removal of the noise bits does not affect the image quality. In this paper we apply image enhancement techniques to noise-bits-removed images and demonstrate that the removal of noise bits has no effect on the image properties. The image processing techniques used are gray-level look-up table transformation, the Sobel edge detector, and 3-D surface display. Preliminary results show no noticeable difference between the original image and the noise-bits-removed image using the look-up table operation and Sobel edge enhancement. There is a slight enhancement of the slicing artifact in the 3-D surface display of the noise-bits-removed image.
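    Separating pixel data into signal bits and noise bits amounts to masking off the low-order bits of each pixel. A minimal sketch of the removal step (illustrative, not the authors' code):

```python
def remove_noise_bits(img, noise_bits):
    """Zero the lowest `noise_bits` bits of every pixel, keeping only the
    high-order signal bits, as in the noise-bits-removal study."""
    mask = ~((1 << noise_bits) - 1)
    return [[p & mask for p in row] for row in img]

# With 3 noise bits, 0b10110111 (183) keeps only 0b10110000 (176);
# subsequent enhancement operates on these truncated values.
cleaned = remove_noise_bits([[183, 255], [7, 0]], 3)
```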

  16. Advances and applications of optimised algorithms in image processing

    CERN Document Server

    Oliva, Diego

    2017-01-01

    This book presents a study of the use of optimization algorithms in complex image processing problems. The problems selected explore areas ranging from the theory of image segmentation to the detection of complex objects in medical images. Furthermore, the concepts of machine learning and optimization are analyzed to provide an overview of the application of these tools in image processing. The material has been compiled from a teaching perspective. Accordingly, the book is primarily intended for undergraduate and postgraduate students of Science, Engineering, and Computational Mathematics, and can be used for courses on Artificial Intelligence, Advanced Image Processing, Computational Intelligence, etc. Likewise, the material can be useful for researchers from the evolutionary computation, artificial intelligence and image processing communities.

  17. Fast image acquisition and processing on a TV camera-based portal imaging system

    International Nuclear Information System (INIS)

    Baier, K.; Meyer, J.

    2005-01-01

    The present paper describes the fast acquisition and processing of portal images directly from a TV camera-based portal imaging device (Siemens Beamview Plus™). This approach employs not only the hardware and software included in the standard package installed by the manufacturer (in particular the frame grabber card and the Matrox™ Intellicam interpreter software), but also a software tool developed in-house for further processing and analysis of the images. The technical details are presented, including the source code for the Matrox™ interpreter script that enables the image capturing process. With this method it is possible to obtain raw images directly from the frame grabber card at an acquisition rate of 15 images per second. The original configuration by the manufacturer allows the acquisition of only a few images over the course of a treatment session. The approach has a wide range of applications, such as quality assurance (QA) of the radiation beam, real-time imaging, real-time verification of intensity-modulated radiation therapy (IMRT) fields, and generation of movies of the radiation field (fluoroscopy mode). (orig.)

  18. MXS-Chaining: A Highly Efficient Cloning Platform for Imaging and Flow Cytometry Approaches in Mammalian Systems.

    Directory of Open Access Journals (Sweden)

    Hanna L Sladitschek

    Full Text Available The continuous improvement of imaging technologies has driven the development of sophisticated reporters to monitor biological processes. Such constructs should ideally be assembled in a flexible enough way to allow for their optimization. Here we describe a highly reliable cloning method to efficiently assemble constructs for imaging or flow cytometry applications in mammalian cell culture systems. We bioinformatically identified a list of restriction enzymes whose sites are rarely found in human and mouse cDNA libraries. From the best candidates, we chose an enzyme combination (MluI, XhoI and SalI: MXS that enables iterative chaining of individual building blocks. The ligation scar resulting from the compatible XhoI- and SalI-sticky ends can be translated and hence enables easy in-frame cloning of coding sequences. The robustness of the MXS-chaining approach was validated by assembling constructs up to 20 kb long and comprising up to 34 individual building blocks. By assessing the success rate of 400 ligation reactions, we determined cloning efficiency to be 90% on average. Large polycistronic constructs for single-cell imaging or flow cytometry applications were generated to demonstrate the versatility of the MXS-chaining approach. We devised several constructs that fluorescently label subcellular structures, an adapted version of FUCCI (fluorescent, ubiquitination-based cell cycle indicator) optimized to visualize cell cycle progression in mouse embryonic stem cells and an array of artificial promoters enabling dosage of doxycyline-inducible transgene expression. We made publicly available through the Addgene repository a comprehensive set of MXS-building blocks comprising custom vectors, a set of fluorescent proteins, constitutive promoters, polyadenylation signals, selection cassettes and tools for inducible gene expression. Finally, detailed guidelines describe how to chain together prebuilt MXS-building blocks and how to generate new building blocks.

  19. Post-processing of digital images.

    Science.gov (United States)

    Perrone, Luca; Politi, Marco; Foschi, Raffaella; Masini, Valentina; Reale, Francesca; Costantini, Alessandro Maria; Marano, Pasquale

    2003-01-01

    Post-processing of bi- and three-dimensional images plays a major role for clinicians and surgeons in both diagnosis and therapy. The new spiral (single and multislice) CT and MRI machines have allowed better quality of images. With the associated development of hardware and software, post-processing has become indispensable in many radiologic applications in order to address precise clinical questions. In particular, in CT the acquisition technique is fundamental and should be targeted and optimized to obtain good image reconstruction. Multiplanar reconstructions ensure simple, immediate display of sections along different planes. Three-dimensional reconstructions include numerous procedures: multiplanar techniques such as maximum intensity projection (MIP); surface rendering techniques such as shaded surface display (SSD); volume techniques such as volume rendering; and techniques of virtual endoscopy. In surgery, computer-aided techniques such as the neuronavigator, which uses information provided by neuroimaging to help the neurosurgeon simulate and perform the operation, are extremely interesting.

  20. Digital image processing techniques in archaeology

    Digital Repository Service at National Institute of Oceanography (India)

    Santanam, K.; Vaithiyanathan, R.; Tripati, S.

    Digital image processing involves the manipulation and interpretation of digital images with the aid of a computer. This form of remote sensing actually began in the 1960's with a limited number of researchers analysing multispectral scanner data...

  1. Radiology image orientation processing for workstation display

    Science.gov (United States)

    Chang, Chung-Fu; Hu, Kermit; Wilson, Dennis L.

    1998-06-01

    Radiology images are acquired electronically using phosphor plates that are read in computed radiography (CR) readers. An automated radiology image orientation processor (RIOP) for determining the orientation of chest images and of abdomen images has been devised. In addition, the chest images are differentiated as front (AP or PA) or side (lateral). Using the processing scheme outlined, hospitals will improve the efficiency of quality assurance (QA) technicians who orient images and prepare the images for presentation to the radiologists.

  2. Image processing by use of the digital cross-correlator

    International Nuclear Information System (INIS)

    Katou, Yoshinori

    1982-01-01

    We manufactured for trial an instrument that achieves image processing using digital correlators. A digital correlator performs 64-bit parallel correlation at 20 MHz, and its output is a 7-bit word. An A-D converter is used to quantize the input to a precision of six bits, and the resulting 6-bit word is fed to six correlators wired in parallel. Image processing is achieved in 12 bits, and the digital outputs are converted to an analog signal by a D-A converter. This instrument is named the digital cross-correlator. The image processing system computes convolutions with the digital correlator, which implements various digital filters. In the experiment, video signals from a TV camera were used for the image processing. The digital image processing time was approximately 5 μs. The contrast was enhanced and the image smoothed. The digital cross-correlator performs 16 kinds of image processing and was produced inexpensively. (author)
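    The convolution that the correlator bank evaluates in hardware can be written out in a minimal software analogue (illustrative only; the instrument does this in parallel at 20 MHz):

```python
def convolve3x3(img, kernel, scale=1):
    """3x3 integer convolution, the operation the correlator computes to
    implement digital filters; border pixels are left unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky][kx] * img[y + ky - 1][x + kx - 1]
            out[y][x] = acc // scale
    return out

# Box-filter smoothing: all-ones kernel with a divide by 9 spreads an
# isolated bright pixel over its neighbourhood.
img = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
smooth = convolve3x3(img, [[1, 1, 1], [1, 1, 1], [1, 1, 1]], scale=9)
```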

  3. Measurement and Image Processing Techniques for Particle Image Velocimetry Using Solid-Phase Carbon Dioxide

    Science.gov (United States)

    2014-03-27

    stereoscopic PIV: the angular displacement configuration and the translation configuration. The angular displacement configuration is most commonly used today...images were processed using ImageJ, an open-source, Java-based image processing software package available from the National Institutes of Health (NIH).

  4. Digital Image Processing Overview For Helmet Mounted Displays

    Science.gov (United States)

    Parise, Michael J.

    1989-09-01

    Digital image processing provides a means to manipulate an image and presents a user with a variety of display formats that are not available in the analog image processing environment. When performed in real time and presented on a Helmet Mounted Display, system capability and flexibility are greatly enhanced. The information content of a display can be increased by the addition of real time insets and static windows from secondary sensor sources, near real time 3-D imaging from a single sensor can be achieved, graphical information can be added, and enhancement techniques can be employed. Such increased functionality is generating a considerable amount of interest in the military and commercial markets. This paper discusses some of these image processing techniques and their applications.

  5. ARTIP: Automated Radio Telescope Image Processing Pipeline

    Science.gov (United States)

    Sharma, Ravi; Gyanchandani, Dolly; Kulkarni, Sarang; Gupta, Neeraj; Pathak, Vineet; Pande, Arti; Joshi, Unmesh

    2018-02-01

    The Automated Radio Telescope Image Processing Pipeline (ARTIP) automates the entire process of flagging, calibrating, and imaging for radio-interferometric data. ARTIP starts with raw data, i.e., a measurement set, and goes through multiple stages, such as flux calibration, bandpass calibration, phase calibration, and imaging, to generate continuum and spectral line images. Each stage can also be run independently. The pipeline provides continuous feedback to the user through various messages, charts and logs. It is written using standard Python libraries and the CASA package. The pipeline can deal with datasets with multiple spectral windows and also multiple target sources which may have arbitrary combinations of flux/bandpass/phase calibrators.

  6. Developments in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2015-01-01

    This book presents novel and advanced topics in Medical Image Processing and Computational Vision in order to solidify knowledge in the related fields and define their key stakeholders. It contains extended versions of selected papers presented in VipIMAGE 2013 – IV International ECCOMAS Thematic Conference on Computational Vision and Medical Image, which took place in Funchal, Madeira, Portugal, 14-16 October 2013.  The twenty-two chapters were written by invited experts of international recognition and address important issues in medical image processing and computational vision, including: 3D vision, 3D visualization, colour quantisation, continuum mechanics, data fusion, data mining, face recognition, GPU parallelisation, image acquisition and reconstruction, image and video analysis, image clustering, image registration, image restoring, image segmentation, machine learning, modelling and simulation, object detection, object recognition, object tracking, optical flow, pattern recognition, pose estimat...

  7. Image processing of integrated video image obtained with a charged-particle imaging video monitor system

    International Nuclear Information System (INIS)

    Iida, Takao; Nakajima, Takehiro

    1988-01-01

    A new type of charged-particle imaging video monitor system was constructed for video imaging of the distributions of alpha-emitting and low-energy beta-emitting nuclides. The system can display not only the scintillation image due to radiation on the video monitor but also the integrated video image becoming gradually clearer on another video monitor. The distortion of the image is about 5% and the spatial resolution is about 2 line pairs (lp) mm⁻¹. The integrated image is transferred to a personal computer and image processing is performed qualitatively and quantitatively. (author)

  8. Processing Of Binary Images

    Science.gov (United States)

    Hou, H. S.

    1985-07-01

    An overview of the recent progress in the area of digital processing of binary images in the context of document processing is presented here. The topics covered include input scan, adaptive thresholding, halftoning, scaling and resolution conversion, data compression, character recognition, electronic mail, digital typography, and output scan. Emphasis has been placed on illustrating the basic principles rather than descriptions of a particular system. Recent technology advances and research in this field are also mentioned.

  9. Application of two-dimensional crystallography and image processing to atomic resolution Z-contrast images.

    Science.gov (United States)

    Morgan, David G; Ramasse, Quentin M; Browning, Nigel D

    2009-06-01

    Zone axis images recorded using high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM or Z-contrast imaging) reveal the atomic structure with a resolution that is defined by the probe size of the microscope. In most cases, the full images contain many sub-images of the crystal unit cell and/or interface structure. Thanks to the repetitive nature of these images, it is possible to apply standard image processing techniques that have been developed for the electron crystallography of biological macromolecules and have been used widely in other fields of electron microscopy for both organic and inorganic materials. These methods can be used to enhance the signal-to-noise present in the original images, to remove distortions in the images that arise from either the instrumentation or the specimen itself and to quantify properties of the material in ways that are difficult without such data processing. In this paper, we describe briefly the theory behind these image processing techniques and demonstrate them for aberration-corrected, high-resolution HAADF-STEM images of Si(46) clathrates developed for hydrogen storage.
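    The signal-to-noise enhancement described above exploits the repetition of the unit cell across the image: equivalent sub-images are averaged so random noise cancels. A simplified sketch of lattice averaging on a perfectly periodic image (illustrative only; real crystallographic processing works in Fourier space and corrects for drift and distortion):

```python
def average_unit_cells(img, cell_h, cell_w):
    """Average all cell_h x cell_w repeats of the unit cell tiling the
    image, returning one noise-reduced average cell."""
    h, w = len(img), len(img[0])
    ny, nx = h // cell_h, w // cell_w
    avg = [[0.0] * cell_w for _ in range(cell_h)]
    for j in range(ny):
        for i in range(nx):
            for y in range(cell_h):
                for x in range(cell_w):
                    avg[y][x] += img[j * cell_h + y][i * cell_w + x]
    n = ny * nx
    return [[v / n for v in row] for row in avg]

# Two noisy repeats of the same 2x2 cell average toward the true cell.
cell = average_unit_cells([[1, 2, 3, 2], [3, 4, 3, 4]], 2, 2)
```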

  10. Volumetric image interpretation in radiology: scroll behavior and cognitive processes.

    Science.gov (United States)

    den Boer, Larissa; van der Schaaf, Marieke F; Vincken, Koen L; Mol, Chris P; Stuijfzand, Bobby G; van der Gijp, Anouk

    2018-05-16

    The interpretation of medical images is a primary task for radiologists. Besides two-dimensional (2D) images, current imaging technologies allow for volumetric display of medical images. Whereas current radiology practice increasingly uses volumetric images, the majority of studies on medical image interpretation are conducted on 2D images. The current study aimed to gain deeper insight into the volumetric image interpretation process by examining this process in twenty radiology trainees who all completed four volumetric image cases. Two types of data were obtained concerning scroll behaviors and think-aloud data. Types of scroll behavior concerned oscillations, half runs, full runs, image manipulations, and interruptions. Think-aloud data were coded by a framework of knowledge and skills in radiology including three cognitive processes: perception, analysis, and synthesis. Relating scroll behavior to cognitive processes showed that oscillations and half runs coincided more often with analysis and synthesis than full runs, whereas full runs coincided more often with perception than oscillations and half runs. Interruptions were characterized by synthesis and image manipulations by perception. In addition, we investigated relations between cognitive processes and found an overall bottom-up way of reasoning with dynamic interactions between cognitive processes, especially between perception and analysis. In sum, our results highlight the dynamic interactions between these processes and the grounding of cognitive processes in scroll behavior. It suggests that the types of scroll behavior are relevant to describe how radiologists interact with and manipulate volumetric images.

  11. Image processing of early gastric cancer cases

    International Nuclear Information System (INIS)

    Inamoto, Kazuo; Umeda, Tokuo; Inamura, Kiyonari

    1992-01-01

    Computer image processing was used to enhance gastric lesions in order to improve the detection of stomach cancer. Digitization was performed in 25 cases of early gastric cancer that had been confirmed surgically and pathologically. The image processing consisted of grey scale transformation, edge enhancement (Sobel operator), and high-pass filtering (unsharp masking). Grey scale transformation improved image quality for the detection of gastric lesions. The Sobel operator enhanced linear and curved margins and, consequently, suppressed the rest. High-pass filtering with unsharp masking was superior for visualizing the texture pattern on the mucosa. Eight of 10 small lesions (less than 2.0 cm) were successfully demonstrated. However, the detection of two lesions in the antrum was difficult even with the aid of image enhancement. In the other 15 lesions (more than 2.0 cm), the tumor surface pattern and the margin between the tumor and non-pathological mucosa were clearly visualized. Image processing was considered to contribute to the detection of small early gastric cancer lesions by enhancing the pathological lesions. (author)
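    The Sobel edge enhancement used here can be sketched as follows (an illustrative implementation, not the authors' code; border pixels are simply zeroed, and the common |Gx| + |Gy| approximation of the gradient magnitude is used):

```python
def sobel_magnitude(img):
    """Approximate gradient magnitude |Gx| + |Gy| with the Sobel operator
    so that linear and curved margins stand out."""
    gx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    gy = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            sx = sy = 0
            for ky in range(3):
                for kx in range(3):
                    p = img[y + ky - 1][x + kx - 1]
                    sx += gx[ky][kx] * p
                    sy += gy[ky][kx] * p
            out[y][x] = abs(sx) + abs(sy)
    return out

# A vertical step edge between dark (0) and bright (10) columns produces
# a strong response along the boundary.
edges = sobel_magnitude([[0, 0, 10, 10], [0, 0, 10, 10], [0, 0, 10, 10]])
```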

  12. REVIEW OF MATHEMATICAL METHODS AND ALGORITHMS OF MEDICAL IMAGE PROCESSING ON THE EXAMPLE OF TECHNOLOGY OF MEDICAL IMAGE PROCESSING FROM WOLFRAM MATHEMATICA

    Directory of Open Access Journals (Sweden)

    О. E. Prokopchenko

    2015-09-01

    Full Text Available The article analyzes the basic methods and algorithms for the mathematical processing of medical images as objects of computer mathematics. The presented mathematical methods and computer algorithms are relevant and may find application in the field of medical imaging: automated processing of images; as tools for measuring and determining optical parameters; and for the identification and building of medical image databases. The methods and computer algorithms presented in the article, based on Wolfram Mathematica, are also relevant to modern medical education. Appropriate Wolfram Mathematica demonstrations include, for example, the recognition of special radiographs and morphological imaging. These methods are used to improve the diagnostic significance and value of medical (clinical) research and can serve as interactive educational demonstrations. Implementation of the individual methods and algorithms in Wolfram Mathematica contributes, in general, to optimizing the practical processing and presentation of medical images.

  13. Computer Vision and Image Processing: A Paper Review

    Directory of Open Access Journals (Sweden)

    victor - wiley

    2018-02-01

    Full Text Available Computer vision has been studied from many perspectives. It expands from raw data recording into techniques and ideas combining digital image processing, pattern recognition, machine learning and computer graphics. Its wide usage has attracted many scholars to integrate it with many disciplines and fields. This paper provides a survey of recent technologies and theoretical concepts explaining the development of computer vision, especially as related to image processing, across its different areas of application. Computer vision helps scholars analyze images and video to obtain necessary information, understand information about events or descriptions, and recognize scenic patterns. Its methods span multiple application domains and involve massive data analysis. This paper reviews recent developments in computer vision, image processing, and related studies. We categorize the computer vision mainstream into groups such as image processing, object recognition, and machine learning, and we also briefly explain up-to-date information about the techniques and their performance.

  14. Novel welding image processing method based on fractal theory

    Institute of Scientific and Technical Information of China (English)

    陈强; 孙振国; 肖勇; 路井荣

    2002-01-01

    Computer vision has come into use in the fields of welding process control and automation. In order to improve the precision and speed of welding image processing, a novel method based on fractal theory is put forward in this paper. Compared with traditional methods, the image is first processed coarsely in macroscopic regions and then thoroughly analyzed in microscopic regions. The image is divided into regions according to the different fractal characters of the image edges, and the fuzzy regions containing image edges are detected; image edges are then identified with the Sobel operator and curve-fitted by the least-squares method (LSM). Since the amount of data to be processed is decreased and image noise is reduced, it has been verified through experiments that the edges of the weld seam or weld pool can be recognized correctly and quickly.
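    The final least-squares step fits a curve to the detected edge pixels. A minimal sketch for the straight-line case (illustrative only; the paper also handles curved seams):

```python
def fit_line_lsm(points):
    """Least-squares fit of y = a*x + b to detected edge pixel
    coordinates, mirroring the LSM curve-fitting step."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Edge pixels lying on y = 2x + 1 recover slope 2 and intercept 1.
slope, intercept = fit_line_lsm([(0, 1), (1, 3), (2, 5)])
```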

  15. Automated measurement of pressure injury through image processing.

    Science.gov (United States)

    Li, Dan; Mathews, Carol

    2017-11-01

    To develop an image processing algorithm to automatically measure pressure injuries using electronic pressure injury images stored in nursing documentation. Photographing pressure injuries and storing the images in the electronic health record is standard practice in many hospitals. However, the manual measurement of pressure injury is time-consuming, challenging and subject to intra/inter-reader variability with complexities of the pressure injury and the clinical environment. A cross-sectional algorithm development study. A set of 32 pressure injury images were obtained from a western Pennsylvania hospital. First, we transformed the images from an RGB (i.e. red, green and blue) colour space to a YCbCr colour space to eliminate interferences from varying light conditions and skin colours. Second, a probability map, generated by a skin colour Gaussian model, guided the pressure injury segmentation process using the Support Vector Machine classifier. Third, after segmentation, the reference ruler - included in each of the images - enabled perspective transformation and determination of pressure injury size. Finally, two nurses independently measured those 32 pressure injury images, and the intraclass correlation coefficient was calculated. An image processing algorithm was developed to automatically measure the size of pressure injuries. Both inter- and intra-rater analyses achieved a good level of reliability. Validation of the size measurement of the pressure injury (1) demonstrates that our image processing algorithm is a reliable approach to monitoring pressure injury progress through clinical pressure injury images and (2) offers new insight into pressure injury evaluation and documentation. Once our algorithm is further developed, clinicians can be provided with an objective, reliable and efficient computational tool for segmentation and measurement of pressure injuries. With this, clinicians will be able to more effectively monitor the healing process of pressure injuries.
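    The RGB-to-YCbCr transformation used to suppress lighting and skin-tone variation can be sketched with the common ITU-R BT.601 full-range coefficients (an assumption for illustration; the paper does not state which variant it uses):

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB -> YCbCr. Luma (Y) carries the
    lighting variation; chroma (Cb, Cr) is comparatively stable across
    illumination and skin tones."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

# Pure white: maximum luma, neutral chroma near the 128 midpoint.
y, cb, cr = rgb_to_ycbcr(255, 255, 255)
```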

  16. High-speed image processing systems in non-destructive testing

    Science.gov (United States)

    Shashev, D. V.; Shidlovskiy, S. V.

    2017-08-01

    Digital imaging systems are used in most industrial and scientific fields. Such systems effectively solve a wide range of tasks in the field of non-destructive testing. For decades, digital image processing has faced problems associated with the speed of operation of such systems, which must be sufficient to efficiently process and analyze video streams in real time, ideally in small mobile devices. In this paper, we consider the use of parallel-pipeline computing architectures in image processing problems, using the example of an algorithm for calculating the area of an object in a binary image. The approach used allows us to achieve high-speed performance in digital image processing tasks.
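    The example algorithm, the area of an object in a binary image, reduces to a sum that parallelizes naturally: each row can be summed in its own pipeline stage and the partial results reduced. A minimal software sketch of the computation (illustrative; the paper implements it in parallel-pipeline hardware):

```python
def object_area(binary_img):
    """Pixel-count area of the foreground (value 1) in a binary image.
    The per-row sums are independent, so each could occupy one pipeline
    stage, with a final reduction combining the partial results."""
    row_sums = [sum(row) for row in binary_img]
    return sum(row_sums)

# A blob of four foreground pixels has area 4.
area = object_area([[0, 1, 1], [1, 1, 0]])
```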

  17. Effects of image processing on the detective quantum efficiency

    Science.gov (United States)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na

    2010-04-01

    Digital radiography has gained popularity in many areas of clinical practice. This transition brings interest in advancing the methodologies for image quality characterization. However, as the methodologies for such characterizations have not been standardized, the results of these studies cannot be directly compared. The primary objective of this study was to standardize methodologies for image quality characterization. The secondary objective was to evaluate factors affecting the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE) according to the image processing algorithm. Image performance parameters such as MTF, NPS, and DQE were evaluated using the International Electrotechnical Commission (IEC 62220-1)-defined RQA5 radiographic technique. Computed radiography (CR) images of the hand in the posterior-anterior (PA) view for measuring the signal-to-noise ratio (SNR), a slit image for measuring the MTF, and a white image for measuring the NPS were obtained, and various Multi-Scale Image Contrast Amplification (MUSICA) parameters were applied to each of the acquired images. In the results, all of the modifications considerably influenced the evaluated SNR, MTF, NPS, and DQE. Images modified by the post-processing had a higher DQE than the MUSICA=0 image. This suggests that MUSICA values, as a post-processing step, affect the image when it is evaluated for image quality. In conclusion, the control parameters of image processing should be accounted for in the same way when evaluating the characterization of image quality. The results of this study could serve as a baseline to evaluate imaging systems and their imaging characteristics by measuring MTF, NPS, and DQE.
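    The three quantities are related through the standard definition DQE(f) = MTF(f)² / (q · NNPS(f)), where q is the incident photon fluence and NNPS is the normalized noise power spectrum. A minimal per-frequency sketch (illustrative; IEC 62220-1 prescribes the full measurement procedure):

```python
def dqe(mtf, nnps, fluence):
    """Per-frequency detective quantum efficiency from the standard
    relation DQE(f) = MTF(f)^2 / (q * NNPS(f)); `mtf` and `nnps` are
    sampled at the same spatial frequencies."""
    return [m * m / (fluence * n) for m, n in zip(mtf, nnps)]

# An ideal detector (MTF = 1, quantum-limited noise) gives DQE = 1 at
# zero frequency; DQE falls as MTF rolls off.
curve = dqe([1.0, 0.5], [0.01, 0.01], 100)
```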

  18. Document Image Processing: Going beyond the Black-and-White Barrier. Progress, Issues and Options with Greyscale and Colour Image Processing.

    Science.gov (United States)

    Hendley, Tom

    1995-01-01

    Discussion of digital document image processing focuses on issues and options associated with greyscale and color image processing. Topics include speed; size of original document; scanning resolution; markets for different categories of scanners, including photographic libraries, publishing, and office applications; hybrid systems; data…

  19. Topics in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2013-01-01

      The sixteen chapters included in this book were written by invited experts of international recognition and address important issues in Medical Image Processing and Computational Vision, including: Object Recognition, Object Detection, Object Tracking, Pose Estimation, Facial Expression Recognition, Image Retrieval, Data Mining, Automatic Video Understanding and Management, Edges Detection, Image Segmentation, Modelling and Simulation, Medical thermography, Database Systems, Synthetic Aperture Radar and Satellite Imagery.   Different applications are addressed and described throughout the book, comprising: Object Recognition and Tracking, Facial Expression Recognition, Image Database, Plant Disease Classification, Video Understanding and Management, Image Processing, Image Segmentation, Bio-structure Modelling and Simulation, Medical Imaging, Image Classification, Medical Diagnosis, Urban Areas Classification, Land Map Generation.   The book brings together the current state-of-the-art in the various mul...

  20. MR imaging of abnormal synovial processes

    International Nuclear Information System (INIS)

    Quinn, S.F.; Sanchez, R.; Murray, W.T.; Silbiger, M.L.; Ogden, J.; Cochran, C.

    1987-01-01

    MR imaging can directly image abnormal synovium. The authors reviewed over 50 cases with abnormal synovial processes. The abnormalities include Baker cysts, semimembranous bursitis, chronic shoulder bursitis, peroneal tendon ganglion cyst, periarticular abscesses, thickened synovium from rheumatoid and septic arthritis, and synovial hypertrophy secondary to Legg-Calve-Perthes disease. MR imaging has proved invaluable in identifying abnormal synovium, defining the extent and, to a limited degree, characterizing its makeup

  1. Quaternion Fourier transforms for signal and image processing

    CERN Document Server

    Ell, Todd A; Sangwine, Stephen J

    2014-01-01

    Based on updates to signal and image processing technology made in the last two decades, this text examines the most recent research results pertaining to Quaternion Fourier Transforms. QFT is a central component of processing color images and complex valued signals. The book's attention to mathematical concepts, imaging applications, and Matlab compatibility render it an irreplaceable resource for students, scientists, researchers, and engineers.

  2. Energy-Driven Image Interpolation Using Gaussian Process Regression

    Directory of Open Access Journals (Sweden)

    Lingling Zi

    2012-01-01

    Full Text Available Image interpolation, as a method of obtaining a high-resolution image from the corresponding low-resolution image, is a classical problem in image processing. In this paper, we propose a novel energy-driven interpolation algorithm employing Gaussian process regression. In our algorithm, each interpolated pixel is predicted by a combination of two information sources: the first is a statistical model adopted to mine underlying information, and the second is an energy computation technique used to acquire information on pixel properties. We further demonstrate that our algorithm can not only achieve image interpolation, but also reduce noise in the original image. Our experiments show that the proposed algorithm can achieve encouraging performance in terms of image visualization and quantitative measures.
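
As a sketch of the underlying idea, Gaussian process regression can fill in missing pixels by conditioning on the known ones. The snippet below is a generic NumPy illustration on a single scanline, not the paper's energy-driven algorithm; the RBF kernel, length scale, and intensity values are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    # Squared-exponential covariance between two sets of 1-D pixel coordinates.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_interpolate(x_known, y_known, x_new, length_scale=1.0, noise=1e-6):
    # Posterior mean of a zero-mean GP conditioned on the known pixels.
    K = rbf_kernel(x_known, x_known, length_scale) + noise * np.eye(len(x_known))
    K_s = rbf_kernel(x_new, x_known, length_scale)
    return K_s @ np.linalg.solve(K, y_known)

# A low-resolution scanline: intensities known only at every second position.
x_lo = np.arange(0.0, 8.0, 2.0)            # known pixel positions
y_lo = np.array([10.0, 50.0, 40.0, 90.0])  # known intensities
x_hi = np.arange(0.0, 7.0, 1.0)            # denser grid to fill in
y_hi = gp_interpolate(x_lo, y_lo, x_hi, length_scale=2.0)
```

The posterior mean reproduces the known pixels and smoothly interpolates between them; the paper's method additionally weights predictions with an energy term.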

  3. REVIEW OF MATHEMATICAL METHODS AND ALGORITHMS OF MEDICAL IMAGE PROCESSING ON THE EXAMPLE OF TECHNOLOGY OF MEDICAL IMAGE PROCESSING FROM WOLFRAM MATHEMATICS

    Directory of Open Access Journals (Sweden)

    O. Ye. Prokopchenko

    2015-10-01

    Full Text Available The article analyzes the basic methods and algorithms for the mathematical processing of medical images as objects of computer mathematics. The presented methods and algorithms are relevant to the field of medical imaging and may find application in the automated processing of images, as tools for measuring and determining optical parameters, and in the identification and formation of medical image databases. The methods and algorithms presented in the article, based on Wolfram Mathematica, are also relevant to modern medical education: appropriate Wolfram Mathematica demonstrations, such as the recognition of special radiographs and morphological imaging, can serve as examples. These methods improve the diagnostic significance and value of medical (clinical) research and can serve as interactive educational demonstrations. Overall, implementing the individual methods and algorithms in Wolfram Mathematica helps optimize the practical processing and presentation of medical images.

  4. Fundamental concepts of digital image processing

    Energy Technology Data Exchange (ETDEWEB)

    Twogood, R.E.

    1983-03-01

    The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in volume of image data in these and other applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has requirements unique from the others, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made by the human processor without loss of concentration on the task at hand. An example of this is the obtaining of two-dimensional (2-D) computer-aided tomography (CAT) images. A medical decision might be made while the patient is still under observation rather than days later.

  5. Fundamental Concepts of Digital Image Processing

    Science.gov (United States)

    Twogood, R. E.

    1983-03-01

    The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in volume of image data in these and other applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has requirements unique from the others, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made by the human processor without loss of concentration on the task at hand. An example of this is the obtaining of two-dimensional (2-D) computer-aided tomography (CAT) images. A medical decision might be made while the patient is still under observation rather than days later.

  6. An Efficient Secret Key Homomorphic Encryption Used in Image Processing Service

    Directory of Open Access Journals (Sweden)

    Pan Yang

    2017-01-01

    Full Text Available Homomorphic encryption can protect a user's privacy when operating on the user's data in cloud computing. But it is not yet practical for wide use, as the data and service types in cloud computing are diverse. Among these data types, digital images are important personal data for users. There are also many image processing services in cloud computing. To protect user privacy in these services, this paper proposed a scheme using homomorphic encryption in image processing. Firstly, a secret key homomorphic encryption (IGHE) was constructed for encrypting images. IGHE can operate on encrypted floating-point numbers efficiently to suit the image processing service. Then, by translating the traditional image processing methods into operations on encrypted pixels, the encrypted image can be processed homomorphically. That is, the service can process the encrypted image directly, and the result after decryption is the same as processing the plain image. To illustrate our scheme, three common image processing instances were given in this paper. The experiments show that our scheme is secure, correct, and efficient enough to be used in practical image processing applications.
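
For intuition, the homomorphic property, that the service can operate on ciphertexts and the user recovers the processed image on decryption, can be shown with a deliberately toy additively homomorphic secret-key scheme (a per-pixel one-time mask modulo a prime). This is not the paper's IGHE construction, which handles floating-point values; every detail below is an illustrative assumption:

```python
import numpy as np

P = 65537  # plaintext/ciphertext modulus (prime, larger than any pixel value used)

def keygen(shape, rng):
    # One random mask value per pixel; like a one-time pad, each key is used once.
    return rng.integers(0, P, size=shape)

def encrypt(img, key):
    return (img + key) % P

def decrypt(ct, key):
    return (ct - key) % P

rng = np.random.default_rng(0)
img = np.array([[10, 200], [90, 255]], dtype=np.int64)
key = keygen(img.shape, rng)
ct = encrypt(img, key)

# The untrusted service brightens the image by 20 without seeing the pixels:
ct_bright = (ct + 20) % P

out = decrypt(ct_bright, key)  # equals the brightened plain image
```

Adding a constant to the ciphertext adds the same constant to the plaintext, so decryption yields the brightened image, which is the essence of processing encrypted images directly.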

  7. Parallel asynchronous systems and image processing algorithms

    Science.gov (United States)

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

    A new hardware approach to implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision system type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.

  8. Bayesian image processing in two and three dimensions

    International Nuclear Information System (INIS)

    Hart, H.; Liang, Z.

    1986-01-01

    Tomographic image processing customarily analyzes data acquired over a series of projective orientations. If, however, the point source function (the matrix R) of the system is strongly depth dependent, tomographic information is also obtainable from a series of parallel planar images corresponding to different ''focal'' depths. Bayesian image processing (BIP) was carried out for two and three dimensional spatially uncorrelated discrete amplitude a priori source distributions

  9. Morphology and probability in image processing

    International Nuclear Information System (INIS)

    Fabbri, A.G.

    1985-01-01

    The author presents an analysis of some concepts which relate morphological attributes of digital objects to statistically meaningful measures. Some elementary transformations of binary images are described and examples of applications are drawn from the geological and image analysis domains. Some of the morphological models applicable in astronomy are discussed. It is shown that the development of new spatially oriented computers leads to more extensive applications of image processing in the geosciences

  10. An image processing pipeline to detect and segment nuclei in muscle fiber microscopic images.

    Science.gov (United States)

    Guo, Yanen; Xu, Xiaoyin; Wang, Yuanyuan; Wang, Yaming; Xia, Shunren; Yang, Zhong

    2014-08-01

    Muscle fiber images play an important role in the medical diagnosis and treatment of many muscular diseases. The number of nuclei in skeletal muscle fiber images is a key bio-marker of the diagnosis of muscular dystrophy. In nuclei segmentation one primary challenge is to correctly separate the clustered nuclei. In this article, we developed an image processing pipeline to automatically detect, segment, and analyze nuclei in microscopic images of muscle fibers. The pipeline consists of image pre-processing, identification of isolated nuclei, identification and segmentation of clustered nuclei, and quantitative analysis. Nuclei are initially extracted from background by using local Otsu's threshold. Based on analysis of morphological features of the isolated nuclei, including their areas, compactness, and major axis lengths, a Bayesian network is trained and applied to identify isolated nuclei from clustered nuclei and artifacts in all the images. Then a two-step refined watershed algorithm is applied to segment clustered nuclei. After segmentation, the nuclei can be quantified for statistical analysis. Comparing the segmented results with those of manual analysis and an existing technique, we find that our proposed image processing pipeline achieves good performance with high accuracy and precision. The presented image processing pipeline can therefore help biologists increase their throughput and objectivity in analyzing large numbers of nuclei in muscle fiber images. © 2014 Wiley Periodicals, Inc.
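
The first stages of such a pipeline, thresholding followed by connected-component labelling, can be sketched as follows. This uses a simple global Otsu threshold and scipy.ndimage on a synthetic image; the paper's local Otsu, Bayesian-network classification, and two-step watershed refinement are omitted:

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(img):
    # Exhaustive Otsu: pick the threshold maximizing between-class variance.
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = hist[:t].sum(), hist[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (levels[:t] * hist[:t]).sum() / w0
        m1 = (levels[t:] * hist[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Synthetic "microscope field": dark background with two bright blobs (nuclei).
img = np.full((40, 40), 20, dtype=np.uint8)
img[5:12, 5:12] = 200
img[25:33, 22:30] = 180

t = otsu_threshold(img)
mask = img >= t
labels, n_nuclei = ndimage.label(mask)  # connected-component labelling
areas = ndimage.sum(mask, labels, index=list(range(1, n_nuclei + 1)))
```

The per-object areas computed here are exactly the kind of morphological feature the pipeline feeds into its isolated-versus-clustered classification.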

  11. Viewpoints on Medical Image Processing: From Science to Application

    Science.gov (United States)

    Deserno (né Lehmann), Thomas M.; Handels, Heinz; Maier-Hein (né Fritzsche), Klaus H.; Mersmann, Sven; Palm, Christoph; Tolxdorff, Thomas; Wagenknecht, Gudrun; Wittenberg, Thomas

    2013-01-01

    Medical image processing provides core innovation for medical imaging. This paper is focused on recent developments from science to applications analyzing the past fifteen years of history of the proceedings of the German annual meeting on medical image processing (BVM). Furthermore, some members of the program committee present their personal points of views: (i) multi-modality for imaging and diagnosis, (ii) analysis of diffusion-weighted imaging, (iii) model-based image analysis, (iv) registration of section images, (v) from images to information in digital endoscopy, and (vi) virtual reality and robotics. Medical imaging and medical image computing is seen as field of rapid development with clear trends to integrated applications in diagnostics, treatment planning and treatment. PMID:24078804

  12. Viewpoints on Medical Image Processing: From Science to Application.

    Science.gov (United States)

    Deserno Né Lehmann, Thomas M; Handels, Heinz; Maier-Hein Né Fritzsche, Klaus H; Mersmann, Sven; Palm, Christoph; Tolxdorff, Thomas; Wagenknecht, Gudrun; Wittenberg, Thomas

    2013-05-01

    Medical image processing provides core innovation for medical imaging. This paper is focused on recent developments from science to applications analyzing the past fifteen years of history of the proceedings of the German annual meeting on medical image processing (BVM). Furthermore, some members of the program committee present their personal points of views: (i) multi-modality for imaging and diagnosis, (ii) analysis of diffusion-weighted imaging, (iii) model-based image analysis, (iv) registration of section images, (v) from images to information in digital endoscopy, and (vi) virtual reality and robotics. Medical imaging and medical image computing is seen as field of rapid development with clear trends to integrated applications in diagnostics, treatment planning and treatment.

  13. Tumor image signatures and habitats: a processing pipeline of multimodality metabolic and physiological images.

    Science.gov (United States)

    You, Daekeun; Kim, Michelle M; Aryal, Madhava P; Parmar, Hemant; Piert, Morand; Lawrence, Theodore S; Cao, Yue

    2018-01-01

    To create tumor "habitats" from the "signatures" discovered from multimodality metabolic and physiological images, we developed a framework of a processing pipeline. The processing pipeline consists of six major steps: (1) creating superpixels as a spatial unit in a tumor volume; (2) forming a data matrix [Formula: see text] containing all multimodality image parameters at superpixels; (3) forming and clustering a covariance or correlation matrix [Formula: see text] of the image parameters to discover major image "signatures;" (4) clustering the superpixels and organizing the parameter order of the [Formula: see text] matrix according to the one found in step 3; (5) creating "habitats" in the image space from the superpixels associated with the "signatures;" and (6) pooling and clustering a matrix consisting of correlation coefficients of each pair of image parameters from all patients to discover subgroup patterns of the tumors. The pipeline was applied to a dataset of multimodality images in glioblastoma (GBM) first, which consisted of 10 image parameters. Three major image "signatures" were identified. The three major "habitats" plus their overlaps were created. To test generalizability of the processing pipeline, a second image dataset from GBM, acquired on the scanners different from the first one, was processed. Also, to demonstrate the clinical association of image-defined "signatures" and "habitats," the patterns of recurrence of the patients were analyzed together with image parameters acquired prechemoradiation therapy. An association of the recurrence patterns with image-defined "signatures" and "habitats" was revealed. These image-defined "signatures" and "habitats" can be used to guide stereotactic tissue biopsy for genetic and mutation status analysis and to analyze for prediction of treatment outcomes, e.g., patterns of failure.
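
Steps 2 to 4, forming a data matrix of image parameters, clustering its correlation matrix, and grouping parameters into "signatures", can be sketched with hierarchical clustering. The toy data matrix and the choice of average-linkage clustering are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)

# Toy data matrix D: rows = superpixels, columns = multimodality image parameters.
# Parameters 0-2 are driven by one latent factor, 3-4 by another ("signatures").
n = 200
f1, f2 = rng.normal(size=n), rng.normal(size=n)
D = np.column_stack([f1,
                     f1 + 0.1 * rng.normal(size=n),
                     f1 + 0.1 * rng.normal(size=n),
                     f2,
                     f2 + 0.1 * rng.normal(size=n)])

C = np.corrcoef(D, rowvar=False)   # parameter-by-parameter correlation matrix
dist = 1.0 - np.abs(C)             # turn correlation into a dissimilarity
# Upper triangle in row-major order is the condensed distance form linkage expects.
Z = linkage(dist[np.triu_indices_from(dist, k=1)], method='average')
sig = fcluster(Z, t=2, criterion='maxclust')  # assign each parameter a "signature"
```

Parameters sharing a latent factor end up in the same cluster, which is the correlation-matrix analogue of the "signature" discovery step; mapping superpixels back to image space then yields the "habitats".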

  14. Opportunities and applications of medical imaging and image processing techniques for nondestructive testing

    International Nuclear Information System (INIS)

    Song, Samuel Moon Ho; Cho, Jung Ho; Son, Sang Rock; Sung, Je Jonng; Ahn, Hyung Keun; Lee, Jeong Soon

    2002-01-01

    Nondestructive testing (NDT) of structures strives to extract all relevant data regarding the state of the structure without altering its form or properties. The success enjoyed by imaging and image processing technologies in the field of modern medicine forecasts similar success of image processing related techniques both in research and practice of NDT. In this paper, we focus on two particular instances of such applications: a modern vision technique for 3-D profile and shape measurement, and ultrasonic imaging with rendering for 3-D visualization. Ultrasonic imaging of 3-D structures for nondestructive evaluation purposes must provide readily recognizable 3-D images with enough detail to clearly show various faults that may or may not be present. As a step towards improving conspicuity and thus detection of faults, we propose a pulse-echo ultrasonic imaging technique to generate a 3-D image of the 3-D object under evaluation through strategic scanning and processing of the pulse-echo data. This three-dimensional processing and display improves conspicuity of faults and in addition, provides manipulation capabilities, such as pan and rotation of the 3-D structure. As a second application, we consider an image based three-dimensional shape determination system. The shape, and thus the three-dimensional coordinate information of the 3-D object, is determined solely from captured images of the 3-D object from a prescribed set of viewpoints. The approach is based on the shape from silhouette (SFS) technique and the efficacy of the SFS method is tested using a sample data set. This system may be used to visualize the 3-D object efficiently, or to quickly generate initial CAD data for reverse engineering purposes. The proposed system potentially may be used in three dimensional design applications such as 3-D animation and 3-D games.

  15. Apparatus and method for X-ray image processing

    International Nuclear Information System (INIS)

    1984-01-01

    The invention relates to a method for X-ray image processing. The radiation passed through the object is transformed into an electric image signal from which the logarithmic value is determined and displayed by a display device. Its main objective is to provide a method and apparatus that renders X-ray images or X-ray subtraction images with strong reduction of stray radiation. (Auth.)

  16. Rapid, low-cost, image analysis through video processing

    International Nuclear Information System (INIS)

    Levinson, R.A.; Marrs, R.W.; Grantham, D.G.

    1976-01-01

    Remote Sensing now provides the data necessary to solve many resource problems. However, many of the complex image processing and analysis functions used in analysis of remotely-sensed data are accomplished using sophisticated image analysis equipment. High cost of this equipment places many of these techniques beyond the means of most users. A new, more economical, video system capable of performing complex image analysis has now been developed. This report describes the functions, components, and operation of that system. Processing capability of the new video image analysis system includes many of the tasks previously accomplished with optical projectors and digital computers. Video capabilities include: color separation, color addition/subtraction, contrast stretch, dark level adjustment, density analysis, edge enhancement, scale matching, image mixing (addition and subtraction), image ratioing, and construction of false-color composite images. Rapid input of non-digital image data, instantaneous processing and display, relatively low initial cost, and low operating cost gives the video system a competitive advantage over digital equipment. Complex pre-processing, pattern recognition, and statistical analyses must still be handled through digital computer systems. The video system at the University of Wyoming has undergone extensive testing, comparison to other systems, and has been used successfully in practical applications ranging from analysis of x-rays and thin sections to production of color composite ratios of multispectral imagery. Potential applications are discussed including uranium exploration, petroleum exploration, tectonic studies, geologic mapping, hydrology, sedimentology and petrography, anthropology, and studies on vegetation and wildlife habitat
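
Several of the listed operations, image ratioing, contrast stretching, and false-color composites, reduce to simple array arithmetic. A minimal NumPy sketch, where the band values and the linear stretch formula are illustrative assumptions:

```python
import numpy as np

# Two hypothetical spectral bands of the same scene (e.g. near-IR and red).
nir = np.array([[80, 120], [200, 40]], dtype=np.float64)
red = np.array([[40, 100], [50,  40]], dtype=np.float64)

ratio = nir / np.maximum(red, 1e-9)  # image ratioing (guard against divide-by-zero)

def stretch(band):
    # Linear contrast stretch of one band to the 0..255 display range.
    lo, hi = band.min(), band.max()
    return ((band - lo) / (hi - lo) * 255).astype(np.uint8)

# False-color composite: assign stretched bands and the ratio to RGB channels.
rgb = np.dstack([stretch(nir), stretch(red), stretch(ratio)])
```

Band ratios such as NIR/red suppress illumination differences and highlight material properties, which is why ratio composites were a staple of the analog video systems described here.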

  17. Suitable post processing algorithms for X-ray imaging using oversampled displaced multiple images

    International Nuclear Information System (INIS)

    Thim, J; Reza, S; Nawaz, K; Norlin, B; O'Nils, M; Oelmann, B

    2011-01-01

    X-ray imaging systems such as photon counting pixel detectors have a limited spatial resolution of the pixels, based on the complexity and processing technology of the readout electronics. For X-ray imaging situations where the features of interest are smaller than the imaging system pixel size, and the pixel size cannot be made smaller in the hardware, alternative means of resolution enhancement need to be considered. Oversampling with the use of multiple displaced images, where the pixels of all images are mapped to a final resolution enhanced image, has proven a viable method of reaching a sub-pixel resolution exceeding the original resolution. As the number of images taken grows, the sub-pixel resolution increases, but the effectiveness of the oversampling method declines: relative to a real reduction of imaging pixel sizes yielding a full resolution image, the perceived resolution from the sub-pixel oversampled image is lower. This is because the oversampling method introduces blurring noise into the mapped final images, and the blurring relative to full resolution images increases with the oversampling factor. One way of increasing the performance of the oversampling method is by sharpening the images in post processing. This paper focuses on characterizing the performance increase of the oversampling method after the use of some suitable post processing filters, for digital X-ray images specifically. The results show that spatial domain filters and frequency domain filters of the same type yield indistinguishable results, which is to be expected. The results also show that the effectiveness of applying sharpening filters to oversampled multiple images increases with the number of images used (oversampling factor), leaving 60-80% of the original blurring noise after filtering a 6 x 6 mapped image (36 images taken), where the percentage depends on the type of filter. This means that the effectiveness of the oversampling itself increases by using sharpening
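
A typical spatial-domain sharpening post-filter of the kind evaluated here is an unsharp mask: add back the difference between the image and a blurred copy of it. The snippet below is a generic sketch, not the specific filters tested in the paper; the 3x3 box blur and unit gain are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def unsharp_mask(img, amount=1.0):
    # Sharpen by adding back the difference between the image and a blurred copy.
    blur = ndimage.uniform_filter(img.astype(np.float64), size=3)
    return img + amount * (img - blur)

# A blurred step edge, similar to what oversampled pixel mapping produces.
img = np.tile(np.array([0.0, 0.0, 64.0, 192.0, 255.0, 255.0]), (6, 1))
sharp = unsharp_mask(img, amount=1.0)
```

The step across the edge becomes steeper after filtering, which is the "reduced blurring noise" effect measured in the paper; in practice the output is also clipped back to the valid intensity range, since unsharp masking overshoots near edges.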

  18. SIP: A Web-Based Astronomical Image Processing Program

    Science.gov (United States)

    Simonetti, J. H.

    1999-12-01

    I have written an astronomical image processing and analysis program designed to run over the internet in a Java-compatible web browser. The program, Sky Image Processor (SIP), is accessible at the SIP webpage (http://www.phys.vt.edu/SIP). Since nothing is installed on the user's machine, there is no need to download upgrades; the latest version of the program is always instantly available. Furthermore, the Java programming language is designed to work on any computer platform (any machine and operating system). The program could be used with students in web-based instruction or in a computer laboratory setting; it may also be of use in some research or outreach applications. While SIP is similar to other image processing programs, it is unique in some important respects. For example, SIP can load images from the user's machine or from the Web. An instructor can put images on a web server for students to load and analyze on their own personal computer. Or, the instructor can inform the students of images to load from any other web server. Furthermore, since SIP was written with students in mind, the philosophy is to present the user with the most basic tools necessary to process and analyze astronomical images. Images can be combined (by addition, subtraction, multiplication, or division), multiplied by a constant, smoothed, cropped, flipped, rotated, and so on. Statistics can be gathered for pixels within a box drawn by the user. Basic tools are available for gathering data from an image which can be used for performing simple differential photometry, or astrometry. Therefore, students can learn how astronomical image processing works. Since SIP is not part of a commercial CCD camera package, the program is written to handle the most common denominator image file, the FITS format.
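
The basic operations SIP exposes, image combination, scaling by a constant, cropping, flipping, and statistics within a user-drawn box, map directly onto array operations. A NumPy sketch with made-up image data (SIP itself is a Java applet, so this is an analogy rather than its actual code):

```python
import numpy as np

rng = np.random.default_rng(2)
img_a = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
img_b = rng.integers(0, 256, size=(64, 64)).astype(np.float64)

diff = img_a - img_b           # combine images (subtraction, e.g. dark frame)
scaled = img_a * 1.5           # multiply by a constant
flipped = np.flipud(img_a)     # flip vertically
cropped = img_a[10:30, 20:40]  # crop to a region

# Statistics for pixels inside a user-drawn box (rows 10-29, cols 20-39):
box_mean, box_std = cropped.mean(), cropped.std()
```

Image subtraction and per-box statistics are the building blocks of the simple differential photometry the abstract mentions.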

  19. Digital image processing an algorithmic approach with Matlab

    CERN Document Server

    Qidwai, Uvais

    2009-01-01

    Introduction to Image Processing and the MATLAB Environment: Introduction; Digital Image Definitions: Theoretical Account; Image Properties; MATLAB; Algorithmic Account; MATLAB Code
    Image Acquisition, Types, and File I/O: Image Acquisition; Image Types and File I/O; Basics of Color Images; Other Color Spaces; Algorithmic Account; MATLAB Code
    Image Arithmetic: Introduction; Operator Basics; Theoretical Treatment; Algorithmic Treatment; Coding Examples
    Affine and Logical Operations, Distortions, and Noise in Images: Introduction; Affine Operations; Logical Operators; Noise in Images; Distortions in Images; Algorithmic Account

  20. Rotation Covariant Image Processing for Biomedical Applications

    Directory of Open Access Journals (Sweden)

    Henrik Skibbe

    2013-01-01

    Full Text Available With the advent of novel biomedical 3D image acquisition techniques, the efficient and reliable analysis of volumetric images has become more and more important. The amount of data is enormous and demands an automated processing. The applications are manifold, ranging from image enhancement, image reconstruction, and image description to object/feature detection and high-level contextual feature extraction. In most scenarios, it is expected that geometric transformations alter the output in a mathematically well-defined manner. In this paper we emphasize 3D translations and rotations. Many algorithms rely on intensity or low-order tensorial-like descriptions to fulfill this demand. This paper proposes a general mathematical framework based on mathematical concepts and theories transferred from mathematical physics and harmonic analysis into the domain of image analysis and pattern recognition. Based on two basic operations, spherical tensor differentiation and spherical tensor multiplication, we show how to design a variety of 3D image processing methods in an efficient way. The framework has already been applied to several biomedical applications ranging from feature and object detection tasks to image enhancement and image restoration techniques. In this paper, the proposed methods are applied on a variety of different 3D data modalities stemming from medical and biological sciences.

  1. Penn State astronomical image processing system

    International Nuclear Information System (INIS)

    Truax, R.J.; Nousek, J.A.; Feigelson, E.D.; Lonsdale, C.J.

    1987-01-01

    The needs of modern astronomy for image processing set demanding standards in simultaneously requiring fast computation speed, high-quality graphic display, large data storage, and interactive response. An innovative image processing system was designed, integrated, and used; it is based on a supermicro architecture which is tailored specifically for astronomy, which provides a highly cost-effective alternative to the traditional minicomputer installation. The paper describes the design rationale, equipment selection, and software developed to allow other astronomers with similar needs to benefit from the present experience. 9 references

  2. Software architecture for intelligent image processing using Prolog

    Science.gov (United States)

    Jones, Andrew C.; Batchelor, Bruce G.

    1994-10-01

    We describe a prototype system for interactive image processing using Prolog, implemented by the first author on an Apple Macintosh computer. This system is inspired by Prolog+, but differs from it in two particularly important respects. The first is that whereas Prolog+ assumes the availability of dedicated image processing hardware, with which the Prolog system communicates, our present system implements image processing functions in software using the C programming language. The second difference is that although our present system supports Prolog+ commands, these are implemented in terms of lower-level Prolog predicates which provide a more flexible approach to image manipulation. We discuss the impact of the Apple Macintosh operating system upon the implementation of the image-processing functions, and the interface between these functions and the Prolog system. We also explain how the Prolog+ commands have been implemented. The system described in this paper is a fairly early prototype, and we outline how we intend to develop the system, a task which is expedited by the extensible architecture we have implemented.

  3. An ImageJ plugin for ion beam imaging and data processing at AIFIRA facility

    Energy Technology Data Exchange (ETDEWEB)

    Devès, G.; Daudin, L. [Univ. Bordeaux, F-33170 Gradignan (France); CNRS, IN2P3, CENBG, UMR 5797, F-33170 Gradignan (France); Bessy, A.; Buga, F.; Ghanty, J.; Naar, A.; Sommar, V. [Univ. Bordeaux, F-33170 Gradignan (France); Michelet, C.; Seznec, H.; Barberet, P. [Univ. Bordeaux, F-33170 Gradignan (France); CNRS, IN2P3, CENBG, UMR 5797, F-33170 Gradignan (France)

    2015-04-01

    Quantification and imaging of chemical elements at the cellular level requires the use of a combination of techniques such as micro-PIXE, micro-RBS, STIM, and secondary electron imaging, associated with optical and fluorescence microscopy techniques employed prior to irradiation. Such a large set of methods generates a large amount of data per experiment. Typically, for each acquisition the following data have to be processed: a chemical map for each element present with a concentration above the detection limit, density and backscattered maps, and mean and local spectra corresponding to relevant regions of interest such as a whole cell, an intracellular compartment, or nanoparticles. These operations are time consuming, repetitive and as such could be a source of errors in data manipulation. In order to optimize data processing, we have developed a new tool for batch data processing and imaging. This tool has been developed as a plugin for ImageJ, a versatile software for image processing that is suitable for the treatment of basic IBA data operations. Because ImageJ is written in Java, the plugin can be used under Linux, Mac OS X and Windows in both 32-bit and 64-bit modes, which may interest developers working on open-access ion beam facilities like AIFIRA. The main features of this plugin are presented here: listfile processing, spectroscopic imaging, local information extraction, quantitative density maps and database management using OMERO.

  4. Gaussian Process Interpolation for Uncertainty Estimation in Image Registration

    Science.gov (United States)

    Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William

    2014-01-01

    Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127

  5. Architecture Of High Speed Image Processing System

    Science.gov (United States)

    Konishi, Toshio; Hayashi, Hiroshi; Ohki, Tohru

    1988-01-01

    An architecture for a high speed image processing system corresponding to a new algorithm for shape understanding is proposed, and a hardware system based on the architecture was developed. The main design considerations were that the processors used should match the processing sequence of the target image and that the developed system should be practical for industrial use. As a result, it was possible to perform each processing step at a speed of 80 nanoseconds per pixel.

  6. Study on Processing Method of Image Shadow

    Directory of Open Access Journals (Sweden)

    Wang Bo

    2014-07-01

    Full Text Available In order to effectively remove the disturbance of shadows and enhance the robustness of computer-vision image processing, this paper studies the detection and removal of image shadows. It examines shadow removal algorithms based on integration, on the illumination surface, and on texture, introduces their working principles and implementation methods, and shows through tests that they can process shadows effectively.

  7. Earth Observation Services (Image Processing Software)

    Science.gov (United States)

    1992-01-01

    San Diego State University and Environmental Systems Research Institute, with other agencies, have applied satellite imaging and image processing techniques to geographic information systems (GIS) updating. The resulting images display land use and are used by a regional planning agency for applications like mapping vegetation distribution and preserving wildlife habitats. The EOCAP program provides government co-funding to encourage private investment in, and to broaden the use of NASA-developed technology for analyzing information about Earth and ocean resources.

  8. Insights into an Optimization of Plasmodium vivax Sal-1 In Vitro Culture: The Aotus Primate Model

    Science.gov (United States)

    Obaldía, Nicanor; Nuñez, Marlon; Dutary, Sahir; Lim, Caeul; Barnes, Samantha; Kocken, Clemens H. M.; Duraisingh, Manoj T.; Adams, John H.; Pasini, Erica M.

    2016-01-01

    Malaria is one of the most significant tropical diseases, and of the Plasmodium species that cause human malaria, P. vivax is the most geographically widespread. However, P. vivax remains a relatively neglected human parasite since research is typically limited to laboratories with direct access to parasite isolates from endemic field settings or from non-human primate models. This restricted research capacity is in large part due to the lack of a continuous P. vivax in vitro culture system, which has hampered the experimental research needed to gain biological knowledge and develop new therapies. Consequently, efforts to establish a long-term P. vivax culture system are confounded by our poor knowledge of the preferred host cell and of the essential nutrients needed for in vitro propagation. Reliance on very heterogeneous P. vivax field isolates makes it difficult to benchmark parasite characteristics and further complicates development of a robust and reliable culture method. In an effort to eliminate parasite variability as a complication, we used a well-defined Aotus-adapted P. vivax Sal-1 strain to empirically evaluate different short-term in vitro culture conditions and compare them with previously reported attempts at P. vivax in vitro culture. Most importantly, we suggest that reticulocyte enrichment methods affect invasion efficiency, and we identify stabilized forms of nutrients that appear beneficial for parasite growth, indicating that P. vivax may be extremely sensitive to waste products. Leuko-depletion methods did not significantly affect parasite development. Formatting changes such as shaking versus static cultures did not seem to have a major impact; in contrast, the starting haematocrit affected both parasite invasion and growth. These results support the continued use of Aotus-adapted Sal-1 for development of P. vivax laboratory methods; however, further experiments are needed to optimize culture conditions to support long-term parasite

  9. Digital image processing for radiography in nuclear power plants

    International Nuclear Information System (INIS)

    Heidt, H.; Rose, P.; Raabe, P.; Daum, W.

    1985-01-01

    With the help of digital processing of radiographic images of reactor components it is possible to increase the reliability and objectivity of the evaluation. Several examples of image processing procedures (contrast enhancement, density profiles, shading correction, digital filtering, superposition of images, etc.) show the advantages for the visualization and evaluation of radiographs. Digital image processing can reduce some of the restrictions of radiography in nuclear power plants. In addition, a higher degree of automation can be cost-saving and increase the quality of radiographic evaluation. The aim of the work performed was to improve the readability of radiographs for the human observer. The main problems are lack of contrast and the presence of disturbing structures such as weld seams. Digital image processing of film radiographs starts with the digitization of the image. Conventional systems use TV cameras or scanners and provide a dynamic range of 1.5 to 3 density units, which is digitized to 256 grey levels. For the enhancement process it is necessary that the grey-level range covers the density range of the important regions of the presented film. On the other hand, the grey-level coverage should not be wider than necessary, to minimize the width of the digitization steps. Poor digitization makes flaws and cracks invisible and spoils all further image processing

  10. Graphical user interface for image acquisition and processing

    Science.gov (United States)

    Goldberg, Kenneth A.

    2002-01-01

    An event-driven, GUI-based image acquisition interface for the IDL programming environment, designed for CCD camera control and image acquisition directly into the IDL environment where image manipulation and data analysis can be performed, together with a toolbox of real-time analysis applications. Running the image acquisition hardware directly from IDL removes the necessity of first saving images in one program and then importing the data into IDL for analysis in a second step. Bringing the data directly into IDL creates an opportunity for the implementation of IDL image processing and display functions in real time. The program allows control over the available charge-coupled device (CCD) detector parameters, data acquisition, file saving and loading, and image manipulation and processing, all from within IDL. The program is built using IDL's widget libraries to control the on-screen display and user interface.

  11. IDAPS (Image Data Automated Processing System) System Description

    Science.gov (United States)

    1988-06-24

    This document describes the physical configuration and components used in the image processing system referred to as IDAPS (Image Data Automated Processing System). This system was developed by the Environmental Research Institute of Michigan (ERIM) for Eglin Air Force Base. The system is designed

  12. Defects quantization in industrial radiographs by image processing

    International Nuclear Information System (INIS)

    Briand, F.Y.; Brillault, B.; Philipp, S.

    1988-01-01

    This paper concerns the industrial application of image processing to Non-Destructive Testing by radiography. The various problems involved in the design of a numerical tool are described. This tool is intended to help radiographic experts quantify defects and follow up their evolution using numerical techniques. The sequences of processing steps that achieve defect segmentation and quantization are detailed; they are based on thorough knowledge of radiograph formation techniques. The process uses various methods of image analysis, including textural analysis and mathematical morphology. The interface between the final product and its users employs an explicit language, using the terms of radiographic expertise without exposing any processing details. The problem is thoroughly described: image formation, digitization, processing fitted to flaw morphology and, finally, the structure of the product in progress. 12 refs [fr]

  13. Quality Control in Automated Manufacturing Processes – Combined Features for Image Processing

    Directory of Open Access Journals (Sweden)

    B. Kuhlenkötter

    2006-01-01

    Full Text Available In production processes the use of image processing systems is widespread; hardware solutions and cameras are available for nearly every application. One important challenge for image processing systems is the development and selection of appropriate algorithms and software solutions in order to realise ambitious quality control for production processes. This article describes the development of innovative software that combines features for automatic defect classification on product surfaces. The artificial intelligence method Support Vector Machine (SVM) is used to perform the classification task according to the combined features. This software is one crucial element for the automation of a manually operated production process

  14. Pattern recognition and expert image analysis systems in biomedical image processing (Invited Paper)

    Science.gov (United States)

    Oosterlinck, A.; Suetens, P.; Wu, Q.; Baird, M.; F. M., C.

    1987-09-01

    This paper gives an overview of pattern recognition (P.R.) techniques used in biomedical image processing and of problems related to the different P.R. solutions. The use of knowledge-based systems to overcome P.R. difficulties is also described and illustrated by a common example of a biomedical image processing application.

  15. Polarization information processing and software system design for simultaneously imaging polarimetry

    Science.gov (United States)

    Wang, Yahui; Liu, Jing; Jin, Weiqi; Wen, Renjie

    2015-08-01

    Simultaneous imaging polarimetry can realize real-time polarization imaging of a dynamic scene, which has wide application prospects. This paper first briefly illustrates the design of a double-separate Wollaston prism simultaneous imaging polarimeter, and then emphasis is put on the polarization information processing methods and software system design for the designed polarimeter. The polarization information processing methods consist of adaptive image segmentation, high-accuracy image registration, and instrument matrix calibration. Morphological image processing was used for image segmentation by taking the dilation of an image; the accuracy of image registration can reach 0.1 pixel based on spatial- and frequency-domain cross-correlation; instrument matrix calibration adopted a four-point calibration method. The software system was implemented under the Windows environment in the C++ programming language, and realizes synchronous polarization image acquisition and preservation, image processing, and polarization information extraction and display. Polarization data obtained with the designed polarimeter show that the polarization information processing methods and the software system effectively realize real-time measurement of the four Stokes parameters of a scene and improve the polarization detection accuracy.
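
    For reference, the linear Stokes parameters such a polarimeter extracts can be computed from four intensity images taken behind ideal linear polarizers at 0°, 45°, 90° and 135°. A minimal sketch (assuming an ideal instrument matrix, which the four-point calibration would otherwise correct for):

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters and degree of linear polarization (DoLP)
    from four intensity images behind ideal linear polarizers."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs. vertical
    s2 = i45 - i135                      # +45 deg vs. -45 deg
    dolp = np.sqrt(s1 ** 2 + s2 ** 2) / np.where(s0 == 0, 1.0, s0)
    return s0, s1, s2, dolp

# Fully horizontally polarized light of unit intensity.
one = np.ones((2, 2))
s0, s1, s2, dolp = linear_stokes(one, 0.5 * one, 0.0 * one, 0.5 * one)
```

    Registration to sub-pixel accuracy matters here because the four images are subtracted pixel by pixel; any misalignment leaks intensity structure into s1 and s2.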

  16. Effects of image processing on the detective quantum efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na [Yonsei University, Wonju (Korea, Republic of)

    2010-02-15

    The evaluation of image quality is an important part of digital radiography. The modulation transfer function (MTF), the noise power spectrum (NPS), and the detective quantum efficiency (DQE) are widely accepted measurements of digital radiographic system performance. However, as the methodologies for such characterization have not been standardized, it is difficult to directly compare reported MTF, NPS, and DQE results. In this study, we evaluated the effect of an image processing algorithm on the estimated MTF, NPS, and DQE. The image performance parameters were evaluated using the International Electrotechnical Commission (IEC 62220-1)-defined RQA5 radiographic technique. Computed radiography (CR) posterior-anterior (PA) images of a hand for measuring the signal-to-noise ratio (SNR), slit images for measuring the MTF, and white images for measuring the NPS were obtained, and various multi-scale image contrast amplification (MUSICA) factors were applied to each of the acquired images. All of the modifications of the images obtained by image processing had a considerable influence on the evaluated image quality. In conclusion, the control parameters of image processing must be accounted for when characterizing image quality in the same way. The results of this study should serve as a baseline for evaluating imaging systems and their imaging characteristics by MTF, NPS, and DQE measurements.
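
    The NPS estimation step can be sketched numerically: the 2-D noise power spectrum of uniform-exposure ("white") images is the ensemble-averaged squared FFT of the mean-subtracted data. This is a deliberately simplified stand-in for the IEC 62220-1 procedure, which additionally detrends the data and averages many overlapping sub-regions:

```python
import numpy as np

def nps_2d(flat_images, pixel_pitch=1.0):
    """Ensemble-averaged 2-D noise power spectrum of uniform exposures.
    Each image is mean-subtracted; no detrending or ROI splitting here."""
    spectra = []
    for img in flat_images:
        fluct = img - img.mean()          # keep only the noise fluctuations
        ny, nx = img.shape
        f = np.fft.fft2(fluct)
        spectra.append(np.abs(f) ** 2 * pixel_pitch ** 2 / (nx * ny))
    return np.mean(spectra, axis=0)

nps = nps_2d([np.array([[1.0, 2.0], [3.0, 4.0]])])
```

    By Parseval's theorem, the spectrum summed over all frequencies equals the total fluctuation power, which makes the normalization easy to check.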

  17. Effects of image processing on the detective quantum efficiency

    International Nuclear Information System (INIS)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na

    2010-01-01

    The evaluation of image quality is an important part of digital radiography. The modulation transfer function (MTF), the noise power spectrum (NPS), and the detective quantum efficiency (DQE) are widely accepted measurements of digital radiographic system performance. However, as the methodologies for such characterization have not been standardized, it is difficult to directly compare reported MTF, NPS, and DQE results. In this study, we evaluated the effect of an image processing algorithm on the estimated MTF, NPS, and DQE. The image performance parameters were evaluated using the International Electrotechnical Commission (IEC 62220-1)-defined RQA5 radiographic technique. Computed radiography (CR) posterior-anterior (PA) images of a hand for measuring the signal-to-noise ratio (SNR), slit images for measuring the MTF, and white images for measuring the NPS were obtained, and various multi-scale image contrast amplification (MUSICA) factors were applied to each of the acquired images. All of the modifications of the images obtained by image processing had a considerable influence on the evaluated image quality. In conclusion, the control parameters of image processing must be accounted for when characterizing image quality in the same way. The results of this study should serve as a baseline for evaluating imaging systems and their imaging characteristics by MTF, NPS, and DQE measurements.

  18. Image analysis for ophthalmological diagnosis image processing of Corvis ST images using Matlab

    CERN Document Server

    Koprowski, Robert

    2016-01-01

    This monograph focuses on the use of analysis and processing methods for images from the Corvis® ST tonometer. The presented analysis is associated with the quantitative, repeatable and fully automatic evaluation of the response of the eye, eyeball and cornea to an air-puff. All the described algorithms were practically implemented in MATLAB®. The monograph also describes and provides the full source code designed to perform the discussed calculations. As a result, this monograph is intended for scientists, graduate students and students of computer science and bioengineering as well as doctors wishing to expand their knowledge of modern diagnostic methods assisted by various image analysis and processing methods.

  19. Image processing for medical diagnosis of human organs

    International Nuclear Information System (INIS)

    Tamura, Shin-ichi

    1989-01-01

    The report first describes expectations and needs for diagnostic imaging in the field of clinical medicine, radiation medicine in particular, viewed by the author as an image processing expert working at a medical institute. Then, medical image processing techniques are discussed in relation to advanced information processing techniques that are currently drawing much attention in the field of engineering. Finally, discussion is also made of practical applications of image processing techniques to diagnosis. In the field of clinical diagnosis, advanced equipment such as PACS (picture archiving and communication system) has come into wider use, and efforts have been made to shift from visual examination to more quantitative and objective diagnosis by means of such advanced systems. In clinical medicine, practical, robust systems are more useful than sophisticated ones. It is difficult, though important, to develop completely automatized diagnostic systems. The urgent, realistic goal, therefore, is to develop effective diagnosis support systems. In particular, operation support systems equipped with three-dimensional displays will be very useful. (N.K.)

  20. Evaluation of clinical image processing algorithms used in digital mammography.

    Science.gov (United States)

    Zanca, Federica; Jacobs, Jurgen; Van Ongeval, Chantal; Claus, Filip; Celis, Valerie; Geniets, Catherine; Provost, Veerle; Pauwels, Herman; Marchal, Guy; Bosmans, Hilde

    2009-03-01

    Screening is the only proven approach to reduce the mortality of breast cancer, but significant numbers of breast cancers remain undetected even when all quality assurance guidelines are implemented. With the increasing adoption of digital mammography systems, image processing may be a key factor in the imaging chain. Although to our knowledge statistically significant effects of manufacturer-recommended image processing algorithms have not been previously demonstrated, the subjective experience of our radiologists, that the apparent image quality can vary considerably between different algorithms, motivated this study. This article addresses the impact of five such algorithms on the detection of clusters of microcalcifications. A database of unprocessed (raw) images of 200 normal digital mammograms, acquired with the Siemens Novation DR, was collected retrospectively. Realistic simulated microcalcification clusters were inserted in half of the unprocessed images. All unprocessed images were subsequently processed with five manufacturer-recommended image processing algorithms (Agfa Musica 1, IMS Raffaello Mammo 1.2, Sectra Mamea AB Sigmoid, Siemens OPVIEW v2, and Siemens OPVIEW v1). Four breast imaging radiologists were asked to locate and score the clusters in each image on a five-point rating scale. The free-response data were analyzed by the jackknife free-response receiver operating characteristic (JAFROC) method and, for comparison, also with the receiver operating characteristic (ROC) method. JAFROC analysis revealed highly significant differences between the image processing algorithms (F = 8.51, p < 0.0001), suggesting that image processing strongly impacts the detectability of clusters. Siemens OPVIEW2 and Siemens OPVIEW1 yielded the highest and lowest performances, respectively. ROC analysis of the data also revealed significant differences between the processing algorithms, but at lower significance (F = 3.47, p = 0.0305) than JAFROC. Both statistical analysis methods revealed that the
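
    The figure of merit underlying the ROC analysis can be computed directly from observer ratings. A minimal sketch of the empirical ROC area in its Mann-Whitney form (the rating data below are hypothetical, not from the study):

```python
import numpy as np

def rating_auc(signal_ratings, noise_ratings):
    """Empirical ROC area: the probability that a lesion-present image
    receives a higher rating than a lesion-absent one, ties counting
    one half (Mann-Whitney form of the ROC area)."""
    s = np.asarray(signal_ratings, dtype=float)[:, None]
    n = np.asarray(noise_ratings, dtype=float)[None, :]
    wins = (s > n).sum() + 0.5 * (s == n).sum()
    return float(wins / (s.shape[0] * n.shape[1]))

auc = rating_auc([3, 4, 5], [1, 2, 3])
```

    JAFROC generalizes this idea to free-response data by comparing lesion-localization ratings against the highest-rated non-lesion marks.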

  1. Image processing system for flow pattern measurements

    International Nuclear Information System (INIS)

    Ushijima, Satoru; Miyanaga, Yoichi; Takeda, Hirofumi

    1989-01-01

    This paper describes the development and application of an image processing system for measurements of flow patterns occurring in natural-circulation water flows. In this method, the motions of particles scattered in the flow are visualized by a laser light slit and recorded on normal video tapes. These image data are converted to digital data with an image processor and then transferred to a large computer. The center points and pathlines of the particle images are numerically analyzed, and velocity vectors are obtained from these results. In this image processing system, velocity vectors in a vertical plane are measured simultaneously, so that the two-dimensional behavior of various eddies, with the low velocities and complicated flow patterns usually observed in natural circulation flows, can be determined almost quantitatively. The measured flow patterns, which were obtained from natural circulation flow experiments, agreed with photographs of the particle movements, and the validity of this measuring system was confirmed in this study. (author)

  2. Image processing for HTS SQUID probe microscope

    International Nuclear Information System (INIS)

    Hayashi, T.; Koetitz, R.; Itozaki, H.; Ishikawa, T.; Kawabe, U.

    2005-01-01

    An HTS SQUID probe microscope has been developed using a high-permeability needle to enable high spatial resolution measurement of samples in air even at room temperature. Image processing techniques have also been developed to improve the magnetic field images obtained from the microscope. Artifacts in the data occur due to electromagnetic interference from electric power lines, line drift and flux trapping. The electromagnetic interference could successfully be removed by eliminating the noise peaks from the power spectrum of fast Fourier transforms of line scans of the image. The drift between lines was removed by interpolating the mean field value of each scan line. Artifacts in line scans occurring due to flux trapping or unexpected noise were removed by the detection of a sharp drift and interpolation using the line data of neighboring lines. Highly detailed magnetic field images were obtained from the HTS SQUID probe microscope by the application of these image processing techniques
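
    The line-drift correction described above can be sketched as a per-scan-line mean adjustment (a simplified stand-in for the interpolation the authors describe):

```python
import numpy as np

def remove_line_drift(image):
    """Equalize the mean of every scan line, then restore the global mean,
    removing slow drift between successive lines of a scanned image."""
    line_means = image.mean(axis=1, keepdims=True)
    return image - line_means + image.mean()

# Rows 0..3 carry an artificial drift of 0, 1, 2, 3 units on top of a
# uniform field of 1.
drifted = np.tile(np.arange(4.0), (4, 1)).T + np.ones((4, 4))
flat = remove_line_drift(drifted)
```

    Any genuine field structure that varies along the slow-scan axis is flattened too, which is why interpolating between neighboring lines, as in the paper, is gentler than this blunt per-line subtraction.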

  3. The Dark Energy Survey Image Processing Pipeline

    Energy Technology Data Exchange (ETDEWEB)

    Morganson, E.; et al.

    2018-01-09

    The Dark Energy Survey (DES) is a five-year optical imaging campaign with the goal of understanding the origin of cosmic acceleration. DES performs a 5000 square degree survey of the southern sky in five optical bands (g,r,i,z,Y) to a depth of ~24th magnitude. Contemporaneously, DES performs a deep, time-domain survey in four optical bands (g,r,i,z) over 27 square degrees. DES exposures are processed nightly with an evolving data reduction pipeline and evaluated for image quality to determine if they need to be retaken. Difference imaging and transient source detection are also performed in the time domain component nightly. On a bi-annual basis, DES exposures are reprocessed with a refined pipeline and coadded to maximize imaging depth. Here we describe the DES image processing pipeline in support of DES science, as a reference for users of archival DES data, and as a guide for future astronomical surveys.
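
    The nightly difference-imaging step can be caricatured in a few lines: subtract a template from the new exposure and flag pixels that deviate by several robust sigmas. This is a toy sketch that assumes the images are already registered and PSF-matched, which the real pipeline handles with far more machinery:

```python
import numpy as np

def detect_transients(science, template, n_sigma=5.0):
    """Flag pixels in the difference image deviating by more than n_sigma
    robust standard deviations (MAD-based) from the median difference."""
    diff = science - template
    med = np.median(diff)
    sigma = 1.4826 * np.median(np.abs(diff - med))  # robust sigma via MAD
    sigma = max(sigma, 1e-12)                       # guard a flat difference
    return diff, np.abs(diff - med) > n_sigma * sigma

template = np.zeros((8, 8))
science = template.copy()
science[4, 4] = 100.0                               # a single new source
_, mask = detect_transients(science, template)
```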

  4. Current status on image processing in medical fields in Japan

    International Nuclear Information System (INIS)

    Atsumi, Kazuhiko

    1979-01-01

    Information on medical images is classified into two patterns: 1) off-line images on film (x-ray films, cell images, chromosome images, etc.) and 2) on-line images detected through sensors (RI images, ultrasonic images, thermograms, etc.). These images are divided into three characteristic types: two-dimensional, three-dimensional and dynamic images. Research on medical image processing has been reported at several meetings in Japan, and many kinds of images have been studied: RI, thermogram, x-ray film, x-ray TV image, cancer cell, blood cell, bacteria, chromosome, ultrasonics, and vascular images. Processing of RI images is useful and easy because of their digital form; software covers smoothing, restoration (iterative approximation), Fourier transformation, differentiation and subtraction. Images on stomach and chest x-ray films have been processed automatically using computer systems. Computed tomography apparatuses have already been developed in Japan, and automated screening instruments for cancer cells, and recently for blood cell classification, have also been developed. Acoustical holography imaging and moiré topography have also been studied in Japan. (author)

  5. Image Segmentation and Processing for Efficient Parking Space Analysis

    OpenAIRE

    Tutika, Chetan Sai; Vallapaneni, Charan; R, Karthik; KP, Bharath; Muthu, N Ruban Rajesh Kumar

    2018-01-01

    In this paper, we develop a method to detect vacant parking spaces in an environment with unclear segments and contours with the help of MATLAB image processing capabilities. Due to anomalies present in parking spaces, such as uneven illumination, distorted slot lines and overlapping cars, present-day conventional algorithms have difficulty processing the image for accurate results. The proposed algorithm uses a combination of image pre-processing and false contour detection ...

  6. Fingerprint image enhancement by differential hysteresis processing.

    Science.gov (United States)

    Blotta, Eduardo; Moler, Emilce

    2004-05-10

    A new method to enhance defective fingerprint images through digital image processing tools is presented in this work. When fingerprints have been taken without care, blurred and in some cases mostly illegible, as in the case presented here, their classification and comparison becomes nearly impossible. A combination of spatial-domain filters, including a technique called differential hysteresis processing (DHP), is applied to improve this kind of image. This set of filtering methods proved satisfactory in a wide range of cases by uncovering hidden details that helped to identify persons. Dactyloscopy experts from Policia Federal Argentina and the EAAF have validated these results.

  7. The operation technology of realtime image processing system (Datacube)

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jai Wan; Lee, Yong Bum; Lee, Nam Ho; Choi, Young Soo; Park, Soon Yong; Park, Jin Seok

    1997-02-01

    In this project, a Sparc VME-based MaxSparc system running the Solaris operating environment was selected as the dedicated image processing hardware for robot vision applications. In this report, the operation of the Datacube MaxSparc system, a high-performance real-time image processing platform, is systematized, and ImageFlow example programs for running the MaxSparc system are studied and analyzed. The state of the art of Datacube system utilization is also surveyed and analyzed. In the next phase, an advanced real-time image processing platform for robot vision applications is to be developed. (author). 19 refs., 71 figs., 11 tabs.

  8. Matching rendered and real world images by digital image processing

    Science.gov (United States)

    Mitjà, Carles; Bover, Toni; Bigas, Miquel; Escofet, Jaume

    2010-05-01

    Recent advances in computer-generated imagery (CGI) have been used in commercial and industrial photography, providing broad scope in product advertising. Mixing real-world images with images rendered by virtual-space software shows a more or less visible mismatch between their image quality. Rendered images are produced by software whose quality is limited only by the output resolution. Real-world images are taken with cameras subject to image degradation factors such as residual lens aberrations, diffraction, sensor low-pass anti-aliasing filters, color pattern demosaicing, etc. The effect of all these degradation factors can be characterized by the system Point Spread Function (PSF). Because the image is the convolution of the object with the system PSF, its characterization shows the amount of degradation added to any picture taken. This work explores the use of image processing to degrade the rendered images according to the measured system PSF, attempting to match the virtual and real-world image qualities. The system MTF is determined by the slanted-edge method both in laboratory conditions and in the real picture environment in order to compare the influence of working conditions on device performance; an approximation to the system PSF is derived from the two measurements. The rendered images are filtered with a Gaussian filter obtained from the taking system's PSF. Results with and without filtering are shown and compared by measuring the contrast achieved in different regions of the final image.
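
    The degradation step — filtering the rendered image with a Gaussian derived from the measured PSF — can be sketched as a separable convolution. Kernel size and names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def gaussian_psf(sigma, radius=None):
    """Normalized 1-D Gaussian kernel; applied separably along rows and
    columns it approximates an isotropic 2-D Gaussian PSF."""
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def degrade(image, sigma):
    """Blur a rendered image with a Gaussian approximation of the taking
    system's PSF, using two 1-D passes (rows, then columns)."""
    k = gaussian_psf(sigma)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"),
                               1, image)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"),
                               0, rows)

out = degrade(np.ones((20, 20)), sigma=1.5)
```

    A flat region should pass through unchanged away from the borders, since the kernel sums to one; only edges and fine detail are softened.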

  9. Processing of space images and geologic interpretation

    Energy Technology Data Exchange (ETDEWEB)

    Yudin, V S

    1981-01-01

    Using data for standard sections, a correlation was established between natural formations in geologic/geophysical dimensions and the form they take in the imagery. With computer processing, important data can be derived from the image. Use of the above correlations has allowed a number of preliminary classifications of tectonic structures to be made, and certain ongoing processes in a given section to be determined. The derived data may be used in the search for useful minerals.

  10. Advances in the Application of Image Processing Fruit Grading

    OpenAIRE

    Fang , Chengjun; Hua , Chunjian

    2013-01-01

    International audience; From the perspective of actual production, this paper presents advances in the application of image processing to fruit grading in several respects, such as the processing precision and processing speed of image processing technology. Furthermore, effectively combining the different algorithms for detecting size, shape, color and defects in order to reduce the complexity of each algorithm, and achieving a balance between processing precision and processing speed, are keys to au...

  11. Geometric correction of radiographic images using general purpose image processing program

    International Nuclear Information System (INIS)

    Kim, Eun Kyung; Cheong, Ji Seong; Lee, Sang Hoon

    1994-01-01

    The present study was undertaken to compare images geometrically corrected with general-purpose image processing programs for the Apple Macintosh II computer (NIH Image, Adobe Photoshop) with images standardized by an individually custom-fabricated alignment instrument. Two non-standardized periapical films with an XCP film holder alone were taken of the lower molar region of 19 volunteers. Two standardized periapical films with a customized XCP film holder carrying impression material on the bite-block were taken for each person. Geometric correction was performed with Adobe Photoshop and the NIH Image program; specifically, the arbitrary image rotation function of Adobe Photoshop and the subtraction-with-transparency function of NIH Image were utilized. The standard deviations of grey values of the subtracted images were used to measure image similarity. The average standard deviation of grey values of the subtracted images in the standardized group was slightly lower than that of the corrected group; however, the difference was found to be statistically insignificant (p>0.05). It is considered that the NIH Image and Adobe Photoshop programs can be used for the correction of non-standardized films taken with an XCP film holder in the lower molar region.
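
    The similarity measure used above is simply the standard deviation of the grey-value difference image; a minimal sketch:

```python
import numpy as np

def subtraction_std(film_a, film_b):
    """Standard deviation of the grey-level subtraction image; identical
    (or merely offset) radiographs give 0, misregistered ones give more."""
    diff = film_a.astype(float) - film_b.astype(float)
    return float(diff.std())

a = np.arange(16.0).reshape(4, 4)
same = subtraction_std(a, a + 10.0)                  # pure exposure offset
shifted = subtraction_std(a, np.roll(a, 1, axis=1))  # misregistration
```

    Note that a uniform exposure difference does not raise the measure, because the standard deviation ignores a constant offset; only geometric mismatch and structural change do.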

  12. Evaluation of processing methods for static radioisotope scan images

    International Nuclear Information System (INIS)

    Oakberg, J.A.

    1976-12-01

    Radioisotope scanning in the field of nuclear medicine provides a method for the mapping of a radioactive drug in the human body to produce maps (images) which prove useful in detecting abnormalities in vital organs. At best, radioisotope scanning methods produce images with poor counting statistics. One solution to improving the body scan images is using dedicated small computers with appropriate software to process the scan data. Eleven methods for processing image data are compared

  13. Digital image processing in NDT : Application to industrial radiography

    International Nuclear Information System (INIS)

    Aguirre, J.; Gonzales, C.; Pereira, D.

    1988-01-01

    Digital image processing techniques are applied to image enhancement, and to discontinuity detection and characterization, in radiographic testing. Processing is performed mainly by image histogram modification, edge enhancement, texture analysis and user-interactive segmentation. Implementation was achieved on a microcomputer with a video image capture system. Results are compared with those obtained with more specialized equipment: mainframe computers and high-precision mechanical scanning digitisers. The procedures are intended as a preliminary stage for automatic defect detection

  14. Quantum Computation-Based Image Representation, Processing Operations and Their Applications

    Directory of Open Access Journals (Sweden)

    Fei Yan

    2014-10-01

    Full Text Available A flexible representation of quantum images (FRQI) was proposed to facilitate the extension of classical (non-quantum) image processing applications to the quantum computing domain. The representation encodes a quantum image in the form of a normalized state, which captures information about colors and their corresponding positions in the image. Since its conception, a handful of processing transformations have been formulated, among which are the geometric transformations on quantum images (GTQI) and the color transformations on quantum images (CTQI), which focus on the color information of the images. In addition, extensions and applications of the FRQI representation have also been suggested, such as the multi-channel representation for quantum images (MCQI), quantum image data searching, watermarking strategies for quantum images, a framework to produce movies on quantum computers, and a blueprint for quantum video encryption and decryption. These proposals extend classical-like image and video processing applications to the quantum computing domain and offer a significant speed-up with low computational resources in comparison to performing the same tasks on traditional computing devices. Each of the algorithms and the mathematical foundations for their execution were simulated using classical computing resources, and their results were analyzed alongside other classical computing equivalents. The work presented in this review is intended to serve as a summary of advances made in FRQI quantum image processing over the past five years and to stimulate further interest geared towards the realization of secure and efficient image and video processing applications on quantum computers.
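
The FRQI encoding itself can be simulated classically. A hedged sketch (the 2 x 2 image and the helper name are invented): each pixel's grey value is mapped to an angle on a color qubit tied to a position register, and the whole image becomes one normalized state vector.

```python
import numpy as np

def frqi_state(img):
    """Encode a 2^n x 2^n grey-scale image (values 0-255) as an FRQI-style
    normalized state vector, with amplitudes cos(theta_i)/2^n and
    sin(theta_i)/2^n for the color qubit at each position |i>."""
    theta = (img.ravel() / 255.0) * (np.pi / 2)   # grey value -> angle
    n_pix = theta.size
    state = np.zeros(2 * n_pix)
    state[0::2] = np.cos(theta) / np.sqrt(n_pix)  # color qubit |0> part
    state[1::2] = np.sin(theta) / np.sqrt(n_pix)  # color qubit |1> part
    return state

img = np.array([[0, 128], [192, 255]], dtype=np.uint8)
state = frqi_state(img)
norm = float(np.linalg.norm(state))  # a valid quantum state has norm 1
```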

  15. An invertebrate embryologist's guide to routine processing of confocal images.

    Science.gov (United States)

    von Dassow, George

    2014-01-01

    It is almost impossible to use a confocal microscope without encountering the need to transform the raw data through image processing. Adherence to a set of straightforward guidelines will help ensure that image manipulations are both credible and repeatable. Meanwhile, attention to optimal data collection parameters will greatly simplify image processing, not only for convenience but for quality and credibility as well. Here I describe how to conduct routine confocal image processing tasks, including creating 3D animations or stereo images, false coloring or merging channels, background suppression, and compressing movie files for display.
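
Two of the routine tasks mentioned, merging channels into a false-color image and suppressing a constant background, can be sketched as follows (channel values, the background level, and the green/magenta pairing are illustrative assumptions):

```python
import numpy as np

def merge_channels(green_ch, magenta_ch, background=0):
    """Subtract a constant background estimate from each channel, then
    combine the two single-channel images into one RGB false-color
    overlay (magenta occupies the red and blue planes)."""
    g = np.clip(green_ch.astype(int) - background, 0, 255).astype(np.uint8)
    m = np.clip(magenta_ch.astype(int) - background, 0, 255).astype(np.uint8)
    return np.stack([m, g, m], axis=-1)

ch_a = np.full((8, 8), 50, dtype=np.uint8)   # e.g. one fluorophore
ch_b = np.full((8, 8), 30, dtype=np.uint8)   # e.g. a second fluorophore
rgb = merge_channels(ch_a, ch_b, background=10)
```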

  16. Development of X-ray radiography examination technology by image processing method

    Energy Technology Data Exchange (ETDEWEB)

    Min, Duck Kee; Koo, Dae Seo; Kim, Eun Ka [Korea Atomic Energy Research Institute, Taejon (Korea)

    1998-06-01

    Because the dimensions of nuclear fuel rods can be measured rapidly and accurately by X-ray radiography examination, an image processing system composed of a 979 CCD-L camera, an image processing card, and fluorescent lighting was set up and brought into operation. The examination technology of X-ray radiography, which enables dimension measurement of nuclear fuel rods, was developed by the image processing method. Dimension measurement of a standard fuel rod by the image processing method showed a 2% lower relative measuring error than X-ray radiography film, and was better by 100 ~ 200 μm in measuring accuracy. (author). 9 refs., 22 figs., 3 tabs.

  17. Roles of medical image processing in medical physics

    International Nuclear Information System (INIS)

    Arimura, Hidetaka

    2011-01-01

    Image processing techniques, including pattern recognition techniques, play important roles in high-precision diagnosis and radiation therapy. The author reviews a symposium on medical image information, which was held at the 100th Memorial Annual Meeting of the Japan Society of Medical Physics from September 23rd to 25th. This symposium had three invited speakers, Dr. Akinobu Shimizu, Dr. Hideaki Haneishi, and Dr. Hirohito Mekata, who are active engineering researchers in segmentation, image registration, and pattern recognition, respectively. In this paper, the author reviews the roles of medical image processing in the medical physics field and the talks of the three invited speakers. (author)

  18. Hyperspectral imaging in medicine: image pre-processing problems and solutions in Matlab.

    Science.gov (United States)

    Koprowski, Robert

    2015-11-01

    The paper presents problems and solutions related to hyperspectral image pre-processing. New methods of preliminary image analysis are proposed. The paper shows problems occurring in Matlab when trying to analyse this type of images, and discusses new methods together with Matlab source code that can be used in practice without any licensing restrictions. A proposed application and a sample result of hyperspectral image analysis are also presented. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
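
As a flavor of this kind of pre-processing, here is a minimal per-band normalization of a synthetic hyperspectral cube, written in NumPy rather than Matlab (the cube, shapes, and function name are this sketch's own assumptions, not the paper's code):

```python
import numpy as np

def normalize_bands(cube):
    """Scale every spectral band of a rows x cols x bands cube to the
    0-1 range independently, a common first step before analysis."""
    cube = cube.astype(float)
    mins = cube.min(axis=(0, 1), keepdims=True)
    maxs = cube.max(axis=(0, 1), keepdims=True)
    return (cube - mins) / np.maximum(maxs - mins, 1e-12)

cube = np.random.default_rng(2).uniform(100.0, 4000.0, (16, 16, 8))
norm_cube = normalize_bands(cube)
```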

  19. Limiting liability via high resolution image processing

    Energy Technology Data Exchange (ETDEWEB)

    Greenwade, L.E.; Overlin, T.K.

    1996-12-31

    The utilization of high-resolution image processing allows forensic analysts and visualization scientists to assist detectives by enhancing field photographs, and by providing the tools and training to increase the quality and usability of field photos. Through the use of digitized photographs and computerized enhancement software, field evidence can be obtained and processed as 'evidence ready', even in poor lighting and shadowed conditions or darkened rooms. Such images, which are most often unusable when taken with standard camera equipment, can be shot in the worst photographic conditions and still be processed into usable evidence. Visualization scientists have taken digital photographic image processing and moved the handling of crime scene photos into the technology age. The use of high-resolution technology will assist law enforcement in making better use of crime scene photography and in positive identification of prints. Valuable courtroom and investigation time can be saved by this accurate, performance-based process. Inconclusive evidence does not lead to convictions; enhancement of photographic capability addresses a major problem with crime scene photos, which, if taken with standard equipment and without the benefit of enhancement software, would be inconclusive and allow guilty parties to go free for lack of evidence.

  20. Performance Measure as Feedback Variable in Image Processing

    Directory of Open Access Journals (Sweden)

    Ristić Danijela

    2006-01-01

    Full Text Available This paper extends the view of an image processing performance measure, presenting the use of this measure as an actual value in a feedback structure. The idea is that a control loop built in this way drives the actual feedback value to a given set point. Since the performance measure depends explicitly on the application, the inclusion of feedback structures and the choice of appropriate feedback variables are presented using the example of optical character recognition in an industrial application. Metrics for quantification of performance at different image processing levels are discussed, and the issues that those metrics should address from both the image processing and the control points of view are considered. The performance measures of the individual processing algorithms that form a character recognition system are determined with respect to the overall system performance.
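
The feedback idea can be illustrated with a toy proportional loop: a processing parameter (here a binarization threshold) is driven until a performance measure (the foreground fraction) reaches its set point. The measure, gain, and all names are invented for this sketch, not taken from the paper:

```python
import numpy as np

def tune_threshold(img, set_point=0.5, steps=50, gain=20.0):
    """Proportional controller: the deviation of the measured foreground
    fraction from the set point adjusts the threshold each iteration."""
    t = 128.0
    for _ in range(steps):
        measured = float((img > t).mean())  # performance measure (actual value)
        error = set_point - measured        # set point minus actual value
        t = min(max(t - gain * error, 0.0), 255.0)
    return t

img = np.random.default_rng(3).integers(0, 256, (64, 64))
t = tune_threshold(img, set_point=0.5)
reached = float((img > t).mean())  # should sit close to the set point
```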

  1. Towards Portable Large-Scale Image Processing with High-Performance Computing.

    Science.gov (United States)

    Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A

    2018-05-03

    High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was native (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX that isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from the system level to the application level, (2) flexible and dynamic software

  2. Making the PACS workstation a browser of image processing software: a feasibility study using inter-process communication techniques.

    Science.gov (United States)

    Wang, Chunliang; Ritter, Felix; Smedby, Orjan

    2010-07-01

    To enhance the functional expandability of a picture archiving and communication system (PACS) workstation and to facilitate the integration of third-party image-processing modules, we propose a browser-server style method. In the proposed solution, the PACS workstation shows the front-end user interface defined in an XML file while the image processing software runs in the background as a server. Inter-process communication (IPC) techniques allow an efficient exchange of image data, parameters, and user input between the PACS workstation and stand-alone image-processing software. Using a predefined communication protocol, the PACS workstation developer or image processing software developer does not need detailed information about the other system, but will still be able to achieve seamless integration between the two systems, and the IPC procedure is totally transparent to the final user. A browser-server style solution was built between OsiriX (PACS workstation software) and MeVisLab (image processing software). Ten example image-processing modules were easily added to OsiriX by converting existing MeVisLab image processing networks. Image data transfer using shared memory, combined with communication based on IPC techniques, is an appealing method that allows PACS workstation developers and image processing software developers to cooperate while focusing on different interests.
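
The shared-memory exchange can be sketched with Python's standard library (sizes and names are illustrative, and this is only an analogy for the IPC pattern, not the systems above): one side publishes pixels into a named block, the other attaches by name and reads them without copying through a socket.

```python
import numpy as np
from multiprocessing import shared_memory

# "Server" side: allocate a named block and write an image into it.
img = np.arange(64, dtype=np.uint8).reshape(8, 8)
shm = shared_memory.SharedMemory(create=True, size=img.nbytes)
np.ndarray(img.shape, dtype=img.dtype, buffer=shm.buf)[:] = img

# "Client" side: attach to the same block by name (normally this would
# happen in the other process, with the name passed via the protocol).
client = shared_memory.SharedMemory(name=shm.name)
view = np.ndarray(img.shape, dtype=img.dtype, buffer=client.buf)
received = view.copy()

del view          # release the buffer export before closing
client.close()
shm.close()
shm.unlink()
```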

  3. Need for coordinated programs to improve global health by optimizing salt and iodine intake

    Directory of Open Access Journals (Sweden)

    Norm R. C. Campbell

    2012-10-01

    Full Text Available High dietary salt is a major cause of increased blood pressure, the leading risk for death worldwide. The World Health Organization (WHO) has recommended that salt intake be less than 5 g/day, a goal that only a small proportion of people achieve. Iodine deficiency can cause cognitive and motor impairment and, if severe, hypothyroidism with serious mental and growth retardation. More than 2 billion people worldwide are at risk of iodine deficiency. Preventing iodine deficiency by using salt fortified with iodine is a major global public health success. Programs to reduce dietary salt are technically compatible with programs to prevent iodine deficiency through salt fortification. However, for populations to fully benefit from optimum intake of salt and iodine, the programs must be integrated. This review summarizes the scientific basis for salt reduction and iodine fortification programs, the compatibility of the programs, and the steps that need to be taken by the WHO, national governments, and nongovernmental organizations to ensure that populations fully benefit from optimal intake of salt and iodine. Specifically, expert groups must be convened to help countries implement integrated programs, and context-specific case studies of successfully integrated programs and lessons learned need to be compiled and disseminated. Integrated surveillance programs will be more efficient and will enhance current efforts to optimize intake of iodine and salt. For populations to fully benefit, governments need to place a high priority on integrating these two important public health programs.

  4. A software package for biomedical image processing and analysis

    International Nuclear Information System (INIS)

    Goncalves, J.G.M.; Mealha, O.

    1988-01-01

    The decreasing cost of computing power and the introduction of low-cost imaging boards justify the increasing number of applications of digital image processing techniques in the area of biomedicine. There is, however, a large software gap to be filled between the application and the equipment. The requirements to bridge this gap are twofold: good knowledge of the hardware provided and its interface to the host computer, and expertise in digital image processing and analysis techniques. A software package incorporating these two requirements was developed using the C programming language, in order to create a user-friendly image processing programming environment. The software package can be considered in two different ways: as a data structure adapted to image processing and analysis, which acts as the backbone and the standard of communication for all the software; and as a set of routines implementing the basic algorithms used in image processing and analysis. Hardware dependency is restricted to a single module upon which all hardware calls are based. The data structure that was built has four main features: it is hierarchical, open, and object oriented, and it has object-dependent dimensions. Considering the vast amount of memory needed by imaging applications and the memory available in small imaging systems, an effective image memory management scheme was implemented. This software package has been in use for more than a year and a half by users with different applications. It proved to be an excellent tool for helping newcomers become adapted to the system, and for standardizing and exchanging software, while preserving the flexibility to allow users' specific implementations. The philosophy of the software package is discussed and the data structure that was built is described in detail

  5. A gamma camera image processing system

    International Nuclear Information System (INIS)

    Chen Weihua; Mei Jufang; Jiang Wenchuan; Guo Zhenxiang

    1987-01-01

    A microcomputer-based gamma camera image processing system is introduced. Compared with other systems, the feature of this system is that an inexpensive microcomputer has been combined with specially developed hardware, such as a data acquisition controller, a data processor and a dynamic display controller, etc. Thus picture processing has been sped up and the performance-to-cost ratio of the system raised

  6. Intensity-dependent point spread image processing

    International Nuclear Information System (INIS)

    Cornsweet, T.N.; Yellott, J.I.

    1984-01-01

    There is ample anatomical, physiological and psychophysical evidence that the mammalian retina contains networks that mediate interactions among neighboring receptors, resulting in interesting transformations between input images and their corresponding neural output patterns. The almost universally accepted view is that the principal form of interaction involves lateral inhibition, resulting in an output pattern that is the convolution of the input with a ''Mexican hat'' or difference-of-Gaussians spread function, having a positive center and a negative surround. A closely related process is widely applied in digital image processing, and in photography as ''unsharp masking''. The authors show that a simple and fundamentally different process, involving no inhibitory or subtractive terms, can also account for the physiological and psychophysical findings that have been attributed to lateral inhibition. This process also results in a number of fundamental effects that occur in mammalian vision and that would be of considerable significance in robotic vision, but which cannot be explained by lateral inhibitory interaction
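
The lateral-inhibition model that the authors argue against can be made concrete: convolution with a difference-of-Gaussians ("Mexican hat") kernel, shown here in 1-D (the kernel size and sigmas are illustrative choices):

```python
import numpy as np

def dog_kernel(size=21, sigma_c=1.0, sigma_s=3.0):
    """Narrow positive center Gaussian minus a wide surround Gaussian,
    each normalized to unit area so the kernel sums to (nearly) zero."""
    x = np.arange(size) - size // 2
    center = np.exp(-x**2 / (2 * sigma_c**2))
    surround = np.exp(-x**2 / (2 * sigma_s**2))
    return center / center.sum() - surround / surround.sum()

signal = np.concatenate([np.zeros(50), np.ones(50)])  # a step edge
k = dog_kernel()
response = np.convolve(signal, k, mode='same')
# Flat regions map to ~0; the edge produces an undershoot/overshoot pair,
# the classic edge enhancement also exploited by unsharp masking.
```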

  7. Image processing in radiology. Current applications

    International Nuclear Information System (INIS)

    Neri, E.; Caramella, D.; Bartolozzi, C.

    2008-01-01

    Few fields have witnessed such impressive advances as image processing in radiology. The progress achieved has revolutionized diagnosis and greatly facilitated treatment selection and accurate planning of procedures. This book, written by leading experts from many countries, provides a comprehensive and up-to-date description of how to use 2D and 3D processing tools in clinical radiology. The first section covers a wide range of technical aspects in an informative way. This is followed by the main section, in which the principal clinical applications are described and discussed in depth. To complete the picture, a third section focuses on various special topics. The book will be invaluable to radiologists of any subspecialty who work with CT and MRI and would like to exploit the advantages of image processing techniques. It also addresses the needs of radiographers who cooperate with clinical radiologists and should improve their ability to generate the appropriate 2D and 3D processing. (orig.)

  8. Proposal for a surveillance system for the production, distribution, and consumption of iodized salt in Cuba

    OpenAIRE

    Terry Berro, Blanca; Zulueta Torres, Daisy; Paz Luna, Maytell de la

    2006-01-01

    The implementation of surveillance components is an essential element of any food fortification program, to guarantee that the beneficiary population receives its benefits. The present work addresses a proposed design for the surveillance system for salt iodization, an element indispensable to achieving the sustainability of the program.

  9. Low level image processing techniques using the pipeline image processing engine in the flight telerobotic servicer

    Science.gov (United States)

    Nashman, Marilyn; Chaconas, Karen J.

    1988-01-01

    The sensory processing system for the NASA/NBS Standard Reference Model (NASREM) for telerobotic control is described. This control system architecture was adopted by NASA for the Flight Telerobotic Servicer. The control system is hierarchically designed and consists of three parallel systems: task decomposition, world modeling, and sensory processing. The sensory processing system is examined, and in particular the image processing hardware and software used to extract features at low levels of sensory processing for tasks representative of those envisioned for the Space Station, such as assembly and maintenance, are described.

  10. An Automated, Image Processing System for Concrete Evaluation

    International Nuclear Information System (INIS)

    Baumgart, C.W.; Cave, S.P.; Linder, K.E.

    1998-01-01

    Allied Signal Federal Manufacturing & Technologies (FM&T) was asked to perform a proof-of-concept study for the Missouri Highway and Transportation Department (MHTD), Research Division, in June 1997. The goal of this proof-of-concept study was to ascertain if automated scanning and imaging techniques might be applied effectively to the problem of concrete evaluation. In the current evaluation process, a concrete sample core is manually scanned under a microscope. Voids (or air spaces) within the concrete are then detected visually by a human operator by incrementing the sample under the cross-hairs of a microscope and by counting the number of ''pixels'' which fall within a void. Automation of the scanning and image analysis processes is desired to improve the speed of the scanning process, to improve evaluation consistency, and to reduce operator fatigue. An initial, proof-of-concept image analysis approach was successfully developed and demonstrated using acquired black and white imagery of concrete samples. In this paper, the automated scanning and image capture system currently under development will be described and the image processing approach developed for the proof-of-concept study will be demonstrated. A development update and plans for future enhancements are also presented
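
The core of the void-counting step can be sketched as a threshold plus a pixel count (the threshold value and the synthetic sample are illustrative, not the project's parameters):

```python
import numpy as np

def void_fraction(img, threshold=60):
    """Count pixels darker than the threshold (treated as air voids) and
    return both the void fraction and the raw pixel count."""
    void_mask = img < threshold
    return float(void_mask.mean()), int(void_mask.sum())

sample = np.full((100, 100), 200, dtype=np.uint8)  # bright concrete matrix
sample[10:20, 10:20] = 30                          # one dark 10 x 10 void
fraction, n_void = void_fraction(sample)
```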

  11. Digital image processing and analysis human and computer vision applications with CVIPtools

    CERN Document Server

    Umbaugh, Scott E

    2010-01-01

    Section I Introduction to Digital Image Processing and Analysis. Digital Image Processing and Analysis: Overview; Image Analysis and Computer Vision; Image Processing and Human Vision; Key Points; Exercises; References; Further Reading. Computer Imaging Systems: Imaging Systems Overview; Image Formation and Sensing; CVIPtools Software; Image Representation; Key Points; Exercises; Supplementary Exercises; References; Further Reading. Section II Digital Image Analysis and Computer Vision. Introduction to Digital Image Analysis: Introduction; Preprocessing; Binary Image Analysis; Key Points; Exercises; Supplementary Exercises; References; Further Reading

  12. Real-time progressive hyperspectral image processing endmember finding and anomaly detection

    CERN Document Server

    Chang, Chein-I

    2016-01-01

    The book covers the most crucial parts of real-time hyperspectral image processing: causality and real-time capability. Recently, two new concepts of real-time hyperspectral image processing have emerged: Progressive Hyperspectral Imaging (PHSI) and Recursive Hyperspectral Imaging (RHSI). Both of these can be used to design algorithms and also form an integral part of real-time hyperspectral image processing. This book focuses on the progressive nature of algorithms and their real-time and causal processing implementation in two major applications, endmember finding and anomaly detection, both of which are fundamental tasks in hyperspectral imaging but generally not encountered in multispectral imaging. This book is written particularly to address PHSI in real-time processing, while the book Recursive Hyperspectral Sample and Band Processing: Algorithm Architecture and Implementation (Springer 2016) can be considered its companion. Includes preliminary background which is essential to those who work in hyperspectral ima...

  13. Automatic DNA Diagnosis for 1D Gel Electrophoresis Images using Bio-image Processing Technique.

    Science.gov (United States)

    Intarapanich, Apichart; Kaewkamnerd, Saowaluck; Shaw, Philip J; Ukosakit, Kittipat; Tragoonrung, Somvong; Tongsima, Sissades

    2015-01-01

    DNA gel electrophoresis is a molecular biology technique for separating different sizes of DNA fragments. Applications of DNA gel electrophoresis include DNA fingerprinting (genetic diagnosis), size estimation of DNA, and DNA separation for Southern blotting. Accurate interpretation of DNA banding patterns from electrophoretic images can be laborious and error prone when a large number of bands are interrogated manually. Although many bio-imaging techniques have been proposed, none of them can fully automate the typing of DNA owing to the complexities of migration patterns typically obtained. We developed an image-processing tool that automatically calls genotypes from DNA gel electrophoresis images. The image processing workflow comprises three main steps: 1) lane segmentation, 2) extraction of DNA bands and 3) band genotyping classification. The tool was originally intended to facilitate large-scale genotyping analysis of sugarcane cultivars. We tested the proposed tool on 10 gel images (433 cultivars) obtained from polyacrylamide gel electrophoresis (PAGE) of PCR amplicons for detecting intron length polymorphisms (ILP) on one locus of the sugarcanes. These gel images demonstrated many challenges in automated lane/band segmentation in image processing including lane distortion, band deformity, high degree of noise in the background, and bands that are very close together (doublets). Using the proposed bio-imaging workflow, lanes and DNA bands contained within are properly segmented, even for adjacent bands with aberrant migration that cannot be separated by conventional techniques. The software, called GELect, automatically performs genotype calling on each lane by comparing with an all-banding reference, which was created by clustering the existing bands into the non-redundant set of reference bands. The automated genotype calling results were verified by independent manual typing by molecular biologists. 
This work presents an automated genotyping tool from DNA
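
The first workflow step, lane segmentation, can be sketched for an idealized gel: lanes appear as dark vertical stripes, so minima of the column-wise mean intensity locate lane centers. This simplified stand-in (synthetic image, invented function) only illustrates the idea; GELect itself must also cope with the distortion and noise described above.

```python
import numpy as np

def lane_centers(gel, n_lanes):
    """Return approximate column indices of the n_lanes darkest lane
    centers, chosen as well-separated minima of the column profile."""
    profile = gel.mean(axis=0)            # mean intensity per column
    order = np.argsort(profile)           # darkest columns first
    min_sep = gel.shape[1] // (2 * n_lanes)
    centers = []
    for col in order:
        if all(abs(int(col) - c) > min_sep for c in centers):
            centers.append(int(col))
        if len(centers) == n_lanes:
            break
    return sorted(centers)

gel = np.full((50, 90), 220, dtype=np.uint8)  # bright background
for c in (15, 45, 75):                        # three dark lanes, 7 px wide
    gel[:, c - 3:c + 4] = 40
found = lane_centers(gel, 3)
```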

  15. Effects of optimization and image processing in digital chest radiography

    International Nuclear Information System (INIS)

    Kheddache, S.; Maansson, L.G.; Angelhed, J.E.; Denbratt, L.; Gottfridsson, B.; Schlossman, D.

    1991-01-01

    A digital system for chest radiography based on a large image intensifier was compared to a conventional film-screen system. The digital system was optimized with regard to spatial and contrast resolution and dose, and the images were digitally processed for contrast and edge enhancement. A simulated pneumothorax and two simulated nodules each were positioned over the lungs and the mediastinum of an anthropomorphic phantom. Observer performance was evaluated with Receiver Operating Characteristic (ROC) analysis. Five observers assessed the processed digital images and the conventional full-size radiographs, and the time spent viewing each was recorded. For the simulated pneumothorax, the results showed perfect performance for the full-size radiographs, and detectability was high also for the processed digital images. No significant differences in the detectability of the simulated nodules were seen between the two imaging systems. The results for the digital images showed a significantly improved detectability for the nodules in the mediastinum as compared to a previous ROC study where no optimization and image processing were available. No significant difference in detectability was seen between the former and the present ROC study for small nodules in the lung, and no difference was seen in the time spent assessing the conventional full-size radiographs and the digital images. The study indicates that processed digital images produced by a large image intensifier are equal in image quality to conventional full-size radiographs for low-contrast objects such as nodules. (author). 38 refs.; 4 figs.; 1 tab
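
The ROC methodology used for observer performance can be illustrated with the Mann-Whitney form of the area under the curve (AUC); the confidence ratings below are invented, not the study's data:

```python
def roc_auc(present, absent):
    """AUC as the probability that a randomly chosen signal-present case
    receives a higher confidence rating than a randomly chosen
    signal-absent case, with ties counted as one half."""
    wins = 0.0
    for p in present:
        for a in absent:
            if p > a:
                wins += 1.0
            elif p == a:
                wins += 0.5
    return wins / (len(present) * len(absent))

# Ratings on a 1-5 confidence scale for nodule-present / nodule-absent images.
auc_good = roc_auc([4, 5, 5, 3, 4], [1, 2, 1, 3, 2])    # high detectability
auc_chance = roc_auc([1, 2, 3], [1, 2, 3])              # pure guessing
```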

  16. Processing Infrared Images For Fire Management Applications

    Science.gov (United States)

    Warren, John R.; Pratt, William K.

    1981-12-01

    The USDA Forest Service has used airborne infrared systems for forest fire detection and mapping for many years. The transfer of the images from plane to ground and the transposition of fire spots and perimeters to maps have been performed manually. A new system has been developed which uses digital image processing, transmission, and storage. Interactive graphics, high-resolution color display, calculations, and computer model compatibility are featured in the system. Images are acquired by an IR line scanner and converted to 1024 x 1024 x 8-bit frames for transmission to the ground at a 1.544 Mbit/s rate over a 14.7 GHz carrier. Individual frames are received and stored, then transferred to a solid-state memory to refresh the display at a conventional 30 frames per second rate. Line length and area calculations, false color assignment, X-Y scaling, and image enhancement are available. Fire spread can be calculated for display, and fire perimeters can be plotted on maps. The performance requirements, basic system, and image processing will be described.
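
A quick check of the figures quoted above explains the store-then-refresh design: a full frame takes several seconds to arrive at the link rate, far too slow for live display, so frames are buffered and the display is refreshed locally at 30 frames per second.

```python
# Assuming the quoted 1024 x 1024 x 8-bit frame and 1.544 Mbit/s link rate.
bits_per_frame = 1024 * 1024 * 8      # 8,388,608 bits per frame
link_rate_bps = 1.544e6               # T1-class channel, bits per second
seconds_per_frame = bits_per_frame / link_rate_bps  # roughly 5.4 s per frame
```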

  17. iMAGE cloud: medical image processing as a service for regional healthcare in a hybrid cloud environment.

    Science.gov (United States)

    Liu, Li; Chen, Weiping; Nie, Min; Zhang, Fengjuan; Wang, Yu; He, Ailing; Wang, Xiaonan; Yan, Gen

    2016-11-01

    To handle the emergence of the regional healthcare ecosystem, physicians and surgeons in various departments and healthcare institutions must process medical images securely, conveniently, and efficiently, and must integrate them with electronic medical records (EMRs). In this manuscript, we propose a software as a service (SaaS) cloud called the iMAGE cloud. A three-layer hybrid cloud was created to provide medical image processing services in the smart city of Wuxi, China, in April 2015. In the first step, medical images and EMR data were received and integrated via the hybrid regional healthcare network. Then, traditional and advanced image processing functions were proposed and computed in a unified manner in the high-performance cloud units. Finally, the image processing results were delivered to regional users using the virtual desktop infrastructure (VDI) technology. Security infrastructure was also taken into consideration. Integrated information query and many advanced medical image processing functions-such as coronary extraction, pulmonary reconstruction, vascular extraction, intelligent detection of pulmonary nodules, image fusion, and 3D printing-were available to local physicians and surgeons in various departments and healthcare institutions. Implementation results indicate that the iMAGE cloud can provide convenient, efficient, compatible, and secure medical image processing services in regional healthcare networks. The iMAGE cloud has been proven to be valuable in applications in the regional healthcare system, and it could have a promising future in the healthcare system worldwide.

  18. Development and application of efficient portal imaging solutions

    International Nuclear Information System (INIS)

    Boer, J.C.J. de

    2003-01-01

    This thesis describes the theoretical derivation and clinical application of methods to measure and improve patient setup in radiotherapy by means of electronic portal imaging devices (EPIDs). The focus is on methods that (1) are simple to implement and (2) add minimal workload. First, the relation between setup errors and treatment planning margins is quantified in a population-statistics approach. A major result is that systematic errors (recurring each treatment fraction) require about three times larger margins than random errors (fluctuating from fraction to fraction). Therefore, the emphasis is on reduction of systematic setup errors using off-line correction protocols. The new no action level (NAL) protocol, aimed at significant reduction of systematic errors using a small number of imaged fractions, is proposed and investigated in detail. It is demonstrated that the NAL protocol provides final distributions of residual systematic errors at least as good as the most widely applied comparable protocol, the shrinking action level (SAL) protocol, but uses only 3 imaged fractions per patient instead of the 8-10 required by SAL. The efficacy of NAL is demonstrated retrospectively on a database of measured setup errors involving 600 patients with weekly setup measurements, and prospectively in a group of 30 patients. The general properties of NAL are investigated using both analytical and Monte Carlo calculations. As an add-on to NAL, a correction verification (COVER) protocol has been developed using computer simulations combined with a risk analysis. With COVER, a single additional imaged fraction per patient is sufficient to reduce the detrimental effect of possible systematic mistakes in the execution of setup corrections to negligible levels. The high accuracy achieved with off-line setup corrections (yielding SDs of systematic errors ∼1 mm) is demonstrated in clinical studies involving 60 lung cancer patients and 31 head-and-neck patients. Furthermore
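
    The statistical effect that NAL exploits (correcting by the mean of a few imaged fractions shrinks the residual systematic error toward σ_random/√n) can be illustrated with a small Monte Carlo sketch. This is not the thesis' own code; the population SDs, patient count, and function name below are arbitrary illustrative choices.

```python
import random
import statistics

def residual_systematic_sd(n_patients=20000, sd_systematic=2.0,
                           sd_random=1.5, n_imaged=3, seed=1):
    """Simulate a NAL-style off-line protocol: measure each patient's setup
    for the first n_imaged fractions, correct by the mean of those
    measurements, and report the SD of the remaining systematic error."""
    rng = random.Random(seed)
    residuals = []
    for _ in range(n_patients):
        systematic = rng.gauss(0.0, sd_systematic)           # recurs every fraction
        measured = [systematic + rng.gauss(0.0, sd_random)   # fraction-to-fraction noise
                    for _ in range(n_imaged)]
        correction = statistics.mean(measured)
        residuals.append(systematic - correction)            # error left after correction
    return statistics.pstdev(residuals)
```

    The residual SD converges to sd_random / sqrt(n_imaged) regardless of how large the original systematic component was, which is why a fixed, small number of imaged fractions can suffice.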

  19. Zinc supplementation may recover taste for salt meals

    Directory of Open Access Journals (Sweden)

    Dioclécio Campos Jr

    2004-02-01

    Full Text Available OBJECTIVE: To evaluate the effect of zinc on the appetite for salt foods in children aged 8 months to 5 years. METHOD: Double-blind, placebo-controlled study. Two groups of 20 children refusing to eat salt foods were followed for six months. The children in the first group received zinc chelate 1 mg/kg daily for three months; the second group received a placebo solution over the same period. The two groups were similar in terms of age, sex, weight, duration of breastfeeding, age at weaning, and biochemical and hematological data. The children's response to treatment was reported in a questionnaire filled out regularly by their mothers. RESULTS: 17/20 (85%) of the children receiving zinc chelate and 10/20 (50%) of the children receiving placebo improved their appetite for salt foods. The difference was statistically significant (p < 0.05, chi-square test). CONCLUSION: Zinc supplementation may improve the acceptance of salt foods by children.

  20. Landscape as infrastructure. Case study: the Salí river in the metropolitan system of Tucumán (SIMET)

    Directory of Open Access Journals (Sweden)

    María Paula Llomparte

    2013-01-01

    Full Text Available The incorporation of the notion of landscape as infrastructure is presented as a strategic alternative for generating a participatory space of environmental quality and social inclusion, one that attends to cultural and natural resources within the framework of territorial planning. The dynamics and processes of expansion of the Metropolitan Area of Tucumán (AMeT) have affected the Salí river as a resource, and its landscapes as fundamental components for achieving a model closer to sustainability. From the mid-twentieth century to the present, numerous plans and proposals have been put forward that seek to reverse this situation. This paper reviews the different proposals for intervention on the banks of the Salí in order to assess whether they foster or attend to the landscape as a structuring element of the metropolitan space. It is argued that the proposed plans reproduce a purely extractivist logic, affecting the river as cultural and environmental heritage and neglecting the responsible use of territorial resources.

  1. High-performance method of morphological medical image processing

    Directory of Open Access Journals (Sweden)

    Ryabykh M. S.

    2016-07-01

    Full Text Available The article shows the implementation of the grayscale morphology vHGW algorithm for border detection in medical images. Image processing is executed using OpenMP and NVIDIA CUDA technology, for images of different resolutions and different sizes of the structuring element.
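
    The vHGW (van Herk/Gil-Werman) algorithm referenced above computes a windowed maximum or minimum in roughly three comparisons per pixel, independent of the structuring-element size. A minimal 1-D Python sketch of the idea (the record's own implementation uses OpenMP/CUDA; the function names here are mine):

```python
def sliding_max(a, k):
    """van Herk/Gil-Werman windowed maximum (1-D dilation by a flat
    structuring element of length k) in ~3 comparisons per sample."""
    n = len(a)
    s = [0] * n   # running max from the start of each k-aligned block
    r = [0] * n   # running max from the end of each k-aligned block
    for i in range(n):
        s[i] = a[i] if i % k == 0 else max(s[i - 1], a[i])
    for i in range(n - 1, -1, -1):
        r[i] = a[i] if i == n - 1 or (i + 1) % k == 0 else max(r[i + 1], a[i])
    # each window [j, j+k-1] spans at most two blocks: r covers its head,
    # s covers its tail
    return [max(r[j], s[j + k - 1]) for j in range(n - k + 1)]

def sliding_min(a, k):
    """Windowed minimum (1-D erosion), by negation of the max filter."""
    return [-v for v in sliding_max([-x for x in a], k)]

def gradient(a, k):
    """Morphological gradient: dilation minus erosion highlights borders."""
    return [hi - lo for hi, lo in zip(sliding_max(a, k), sliding_min(a, k))]
```

    The morphological gradient (dilation minus erosion) is a standard way to extract borders, which matches the use case in the abstract.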

  2. Spatially assisted down-track median filter for GPR image post-processing

    Science.gov (United States)

    Paglieroni, David W; Beer, N Reginald

    2014-10-07

    A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.
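
    The final step, identifying peaks in the energy levels of the post-processed frame, can be sketched as a thresholded local-maximum scan. This is an illustrative stand-in, not the patented detector; the function name and neighbourhood rule are assumptions.

```python
def detect_peaks(frame, threshold):
    """Flag local energy maxima: pixels at or above threshold that strictly
    exceed all 8 neighbours (interior pixels only, for simplicity)."""
    peaks = []
    for r in range(1, len(frame) - 1):
        for c in range(1, len(frame[0]) - 1):
            v = frame[r][c]
            if v < threshold:
                continue
            neighbours = (frame[r + dr][c + dc]
                          for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                          if (dr, dc) != (0, 0))
            if all(v > nb for nb in neighbours):
                peaks.append((r, c))  # candidate subsurface object location
    return peaks
```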

  3. Measurement of smaller colon polyp in CT colonography images using morphological image processing.

    Science.gov (United States)

    Manjunath, K N; Siddalingaswamy, P C; Prabhu, G K

    2017-11-01

    Automated measurement of the size and shape of colon polyps is one of the challenges in computed tomography colonography (CTC). The objective of this retrospective study was to improve the sensitivity and specificity of smaller polyp measurement in CTC using image processing techniques. A domain knowledge-based method has been implemented with a hybrid method of colon segmentation, morphological image processing operators for detecting the colonic structures, and a decision-making system for delineating the smaller polyps based on a priori knowledge. The method was applied on 45 CTC datasets. The key finding was that the smaller polyps were accurately measured. In addition to the 6-9 mm range, even smaller polyps could be measured with this processing. It takes [Formula: see text] min for measuring the smaller polyps in a dataset of 500 CTC images. With this method, [Formula: see text] and [Formula: see text] were achieved. The domain-based approach with morphological image processing has given good results. The smaller polyps were measured accurately, which helps in making the right clinical decisions. Qualitatively and quantitatively, the results were acceptable when compared to the ground truth at [Formula: see text].

  4. Enhancement of dental x-ray images by two channel image processing

    International Nuclear Information System (INIS)

    Mitra, S.; Yu, T.H.

    1991-01-01

    In this paper, the authors develop a new algorithm for the enhancement of low-contrast details of dental X-ray images using a two-channel structure. The algorithm first decomposes an input image in the frequency domain into two parts by filtering: one containing the low frequency components and the other containing the high frequency components. Then these parts are enhanced separately using a transform magnitude modifier. Finally, a contrast-enhanced image is formed by combining these two processed parts. The performance of the proposed algorithm is illustrated through enhancement of dental X-ray images. The algorithm can be easily implemented on a personal computer
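
    A simplified spatial-domain analogue of the two-channel idea: a mean filter supplies the low-frequency channel, the residual is the high-frequency channel, and the high channel is amplified before recombination. The paper itself filters in the frequency domain and uses a transform magnitude modifier, so the box filter and gain below are substitutions of mine, not the authors' method.

```python
def box_blur(img, radius=1):
    """Mean filter: an approximation of the low-frequency channel
    (borders handled by clamping the window to the image)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            acc, n = 0.0, 0
            for dr in range(-radius, radius + 1):
                for dc in range(-radius, radius + 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < h and 0 <= cc < w:
                        acc += img[rr][cc]
                        n += 1
            out[r][c] = acc / n
    return out

def enhance(img, gain=2.0):
    """Recombine: low channel plus amplified high channel, clipped to 8 bits."""
    low = box_blur(img)
    return [[min(255, max(0, round(low[r][c] + gain * (img[r][c] - low[r][c]))))
             for c in range(len(img[0]))] for r in range(len(img))]
```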

  5. Subband/Transform MATLAB Functions For Processing Images

    Science.gov (United States)

    Glover, D.

    1995-01-01

    SUBTRANS software is a package of routines implementing image-data-processing functions for use with MATLAB(TM) software. Provides capability to transform image data with block transforms and to produce spatial-frequency subbands of transformed data. Functions cascaded to provide further decomposition into more subbands. Also used in image-data-compression systems. For example, transforms used to prepare data for lossy compression. Written for use in MATLAB mathematical-analysis environment.
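
    A one-level two-band (Haar) decomposition illustrates the kind of subband split such routines perform. This is a sketch in Python rather than MATLAB, and not the SUBTRANS code itself; the function names are mine.

```python
def haar_split(x):
    """One level of a two-band (Haar) decomposition: pairwise averages form
    the low subband, pairwise half-differences the high subband.
    Input length must be even."""
    low = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    high = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return low, high

def haar_merge(low, high):
    """Perfect reconstruction of the signal from its two subbands."""
    out = []
    for l, h in zip(low, high):
        out.extend([l + h, l - h])
    return out
```

    Cascading haar_split on the low subband yields the further decomposition into more subbands that the abstract mentions.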

  6. Image processing tensor transform and discrete tomography with Matlab

    CERN Document Server

    Grigoryan, Artyom M

    2012-01-01

    Focusing on mathematical methods in computer tomography, Image Processing: Tensor Transform and Discrete Tomography with MATLAB(R) introduces novel approaches to help in solving the problem of image reconstruction on the Cartesian lattice. Specifically, it discusses methods of image processing along parallel rays to more quickly and accurately reconstruct images from a finite number of projections, thereby avoiding overradiation of the body during a computed tomography (CT) scan. The book presents several new ideas, concepts, and methods, many of which have not been published elsewhere. New co

  7. New real-time image processing system for IRFPA

    Institute of Scientific and Technical Information of China (English)

    WANG Bing-jian; LIU Shang-qian; CHENG Yu-bao

    2006-01-01

    Influenced by detector material, manufacturing technology, etc., every detector in an infrared focal plane array (IRFPA) will output a different voltage even if the input radiation flux is the same; this is called the non-uniformity of the IRFPA. At the same time, the high background temperature, the low temperature difference between targets and background, and the low responsivity of the IRFPA result in low contrast of infrared images. Non-uniformity correction and image enhancement are therefore important techniques for an IRFPA imaging system. This paper proposes a new real-time infrared image processing system based on a Field Programmable Gate Array (FPGA). The system implements non-uniformity correction, image enhancement, video synthesis, etc. By using a parallel architecture and pipeline techniques, the system processing speed is as high as 50M x 12 bits per second. It is well suited to large IRFPAs and high-frame-rate IRFPA imaging systems. The system is miniaturized in one FPGA.
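
    Non-uniformity correction is commonly done with a two-point (gain/offset) calibration derived from two uniform blackbody reference frames. A sketch of that standard technique follows; the abstract does not say which correction this system implements, so the per-detector model below is an assumption of mine.

```python
def two_point_nuc(raw, cold, hot, t_cold, t_hot):
    """Two-point non-uniformity correction: each detector's gain and offset
    are derived from its readings of two uniform scenes (cold/hot blackbody
    at known temperatures t_cold and t_hot), then applied to the raw frame."""
    out = []
    for x, lo, hi in zip(raw, cold, hot):
        gain = (t_hot - t_cold) / (hi - lo)   # assumes hi != lo per detector
        out.append(t_cold + (x - lo) * gain)  # map raw reading onto the
    return out                                # common temperature scale
```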

  8. Application of image processing technology in yarn hairiness detection

    Directory of Open Access Journals (Sweden)

    Guohong ZHANG

    2016-02-01

    Full Text Available Digital image processing technology is one of the new methods for yarn detection, which can realize the digital characterization and objective evaluation of yarn appearance. This paper overviews the current status of development and application of digital image processing technology used for yarn hairiness evaluation, and analyzes and compares the traditional detection methods and this newly developed method. Compared with the traditional methods, the image processing-based method is more objective, fast and accurate, and represents a vital development trend in yarn appearance evaluation.

  9. Wages and technology in a growth model with external constraints

    Directory of Open Access Journals (Sweden)

    Marcus Dutra

    2006-04-01

    Full Text Available The model formalizes a topic that the economic literature addresses with increasing frequency, namely that workers who have no access to adequate levels of education, health and motivation tend to learn more slowly, which in turn reduces the rate of innovation in products and processes in the firm. To the extent that international competitiveness increasingly relies on innovation and rapid imitation of technology, a low level of human development will result in lost opportunities for growth. Thus, the model assumes that, up to a certain critical level of the real wage, increases in real wages lead to a higher rate of growth consistent with balance-of-payments equilibrium, making economic growth compatible with income distribution even in contexts of external openness and intense international competition.

  10. Removal of residual inorganic salt produced in pharmaceutical fermentation by nanofiltration and reverse osmosis: experiment and mathematical model

    Directory of Open Access Journals (Sweden)

    Jesús Mora Molina

    2007-05-01

    Full Text Available The problem of wastewater with high salt content is a major concern of environmental authorities. Existing municipal and industrial wastewater treatment methods are unable to retain inorganic compounds efficiently. This work presents new results with reverse osmosis (RO) and nanofiltration (NF) membranes prepared by the companies Filmtec and Millipore. The main objectives of this research were: 1. To find a highly efficient system for separating salt from the wastewater generated in the pharmaceutical fermentation process and to reach the concentrations established by environmental legislation (salt concentration: 2500 mg/l; total solids: 1200 mg O2/l), so that the water can be sent to the biological treatment plant or discharged directly to the environment. 2. To determine salt retention, chemical oxygen demand and permeate flux with the NF and RO systems and, based on the experimental data, to describe similar solutions with the osmosis model applied to wastewater. The experimental temperature, pressure and recirculation flow were kept constant: for NF, 30 °C, 30 bar and 200 l/h; for RO, 30-40 °C, 40-50 bar and 300 l/h. In the NF and RO experiments the concentration factor was Cf = V_original water (m³) / V_retentate (m³) = 2.67. The permeate flux, the electrical conductivity of the permeate, the chemical oxygen demand and the total solids were measured and calculated. The RO system gave the following results: total solids content 2.06% in the original water versus 0.048% in the permeate; chemical oxygen demand 8750 mg O2/l in the original water versus 289 mg O2/l in the permeate. The results clearly showed that the NF membranes investigated were not sufficiently efficient in retaining salt from the water

  11. Medical image processing on the GPU - past, present and future.

    Science.gov (United States)

    Eklund, Anders; Dufort, Paul; Forsberg, Daniel; LaConte, Stephen M

    2013-12-01

    Graphics processing units (GPUs) are used today in a wide range of applications, mainly because they can dramatically accelerate parallel computing, are affordable and energy efficient. In the field of medical imaging, GPUs are in some cases crucial for enabling practical use of computationally demanding algorithms. This review presents the past and present work on GPU accelerated medical image processing, and is meant to serve as an overview and introduction to existing GPU implementations. The review covers GPU acceleration of basic image processing operations (filtering, interpolation, histogram estimation and distance transforms), the most commonly used algorithms in medical imaging (image registration, image segmentation and image denoising) and algorithms that are specific to individual modalities (CT, PET, SPECT, MRI, fMRI, DTI, ultrasound, optical imaging and microscopy). The review ends by highlighting some future possibilities and challenges. Copyright © 2013 Elsevier B.V. All rights reserved.

  12. PROCESSING, CATALOGUING AND DISTRIBUTION OF UAS IMAGES IN NEAR REAL TIME

    Directory of Open Access Journals (Sweden)

    I. Runkel

    2013-08-01

    Full Text Available Why are UAS such a hype? UAS make data capture flexible, fast and easy. For many applications this is more important than a perfect photogrammetric aerial image block. To ensure that the advantage of fast data capture remains valid up to the end of the processing chain, all intermediate steps like data processing and data dissemination to the customer need to be flexible and fast as well. GEOSYSTEMS has established the whole processing workflow as a server/client solution; this is the focus of the presentation. Depending on the image acquisition system, the image data can be downlinked during the flight to the data processing computer, or it is stored on a mobile device and hooked up to the data processing computer after the flight campaign. The image project manager reads the data from the device and georeferences the images according to the position data. The metadata is converted into an ISO-conformant format and subsequently all georeferenced images are catalogued in the raster data management system ERDAS APOLLO. APOLLO provides the data, respectively the images, as OGC-conformant services to the customer. Within seconds the UAV images are ready to use for GIS applications, image processing or direct interpretation via web applications - wherever you want. The whole processing chain is built in a generic manner and can be adapted to a multitude of applications. The UAV imagery can be processed and catalogued as single ortho images or as an image mosaic. Furthermore, image data from various cameras can be fused. By using WPS (web processing services), image enhancement and image analysis workflows like change detection layers can be calculated and provided to the image analysts. The processing of the WPS runs directly on the raster data management server; the image analyst has no data and no software on his local computer. This workflow is proven to be fast, stable and accurate. It is designed to support time-critical applications for security demands.

  14. Image processing applications: From particle physics to society

    International Nuclear Information System (INIS)

    Sotiropoulou, C.-L.; Citraro, S.; Dell'Orso, M.; Luciano, P.; Gkaitatzis, S.; Giannetti, P.

    2017-01-01

    We present an embedded system for extremely efficient real-time pattern recognition execution, enabling technological advancements with both scientific and social impact. It is a compact, fast, low-consumption processing unit (PU) based on a combination of Field Programmable Gate Arrays (FPGAs) and the full-custom associative memory chip. The PU has been developed for real-time tracking in particle physics experiments, but delivers flexible features for potential application in a wide range of fields. It has been proposed for use in accelerated pattern matching execution for Magnetic Resonance Fingerprinting (biomedical applications), in real-time detection of space debris trails in astronomical images (space applications), and in brain emulation for image processing (cognitive image processing). We illustrate the potential of the PU for these new applications.

  15. Practical image and video processing using MATLAB

    CERN Document Server

    Marques, Oge

    2011-01-01

    "The book provides a practical introduction to the most important topics in image and video processing using MATLAB (and its Image Processing Toolbox) as a tool to demonstrate the most important techniques and algorithms. The contents are presented in a clear, technically accurate, objective way, with just enough mathematical detail. Most of the chapters are supported by figures, examples, illustrative problems, MATLAB scripts, suggestions for further reading, bibliographical references, useful Web sites, and exercises and computer projects to extend the understanding of their contents"--

  16. The Study of Image Processing Method for AIDS PA Test

    International Nuclear Information System (INIS)

    Zhang, H J; Wang, Q G

    2006-01-01

    At present, the main AIDS test technique in China is the PA (particle agglutination) test. Because the judgment of PA test images still depends on the operator, the error rate is high. To resolve this problem, we present a new image processing technique: first, many samples are processed to obtain reference data, including the center coordinates and the ranges of the image classes; the image is then segmented using these data; finally, the result is exported after the data have been judged. This technique is simple and accurate, and it also turns out to be suitable for processing and analyzing the PA test images of other infectious diseases
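
    The segmentation-by-reference-data step described above amounts to nearest-centroid classification. A minimal sketch with invented class names and feature values (the abstract does not specify the feature space, so everything below is illustrative):

```python
import math

def classify(feature, centroids):
    """Assign a measured feature vector (e.g. center coordinates plus size
    range) to the class whose stored reference centroid is closest."""
    return min(centroids, key=lambda label: math.dist(feature, centroids[label]))
```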

  17. Diversification in an image retrieval system based on text and image processing

    Directory of Open Access Journals (Sweden)

    Adrian Iftene

    2014-11-01

    Full Text Available In this paper we present an image retrieval system created within the research project MUCKE (Multimedia and User Credibility Knowledge Extraction), a CHIST-ERA research project where UAIC ("Alexandru Ioan Cuza" University of Iasi) is one of the partners, together with the Technical University of Vienna, Austria, the CEA-LIST Institute from Paris, France, and Bilkent University from Ankara, Turkey. Our discussion in this work focuses mainly on components that are part of the image retrieval system proposed in MUCKE, and we present the work done by the UAIC group. MUCKE incorporates modules for processing multimedia content in different modes and languages (English, French, German and Romanian), and UAIC is responsible for the text processing tasks (for Romanian and English). One of the problems addressed by our work is search results diversification. In order to solve this problem, we first process the user queries in both languages and, secondly, we create clusters of similar images.

  18. Parallel Processing of Images in Mobile Devices using BOINC

    Science.gov (United States)

    Curiel, Mariela; Calle, David F.; Santamaría, Alfredo S.; Suarez, David F.; Flórez, Leonardo

    2018-04-01

    Medical image processing helps health professionals make decisions for the diagnosis and treatment of patients. Since some algorithms for processing images require substantial amounts of resources, one could take advantage of distributed or parallel computing. A mobile grid can be an adequate computing infrastructure for this problem. A mobile grid is a grid that includes mobile devices as resource providers. In a previous step of this research, we selected BOINC as the infrastructure to build our mobile grid. However, parallel processing of images in mobile devices poses at least two important challenges: the execution of standard libraries for processing images, and obtaining adequate performance when compared to desktop computer grids. By the time we started our research, the use of BOINC in mobile devices also involved two issues: a) the execution of programs in mobile devices required modifying the code to insert calls to the BOINC API, and b) the division of the image among the mobile devices, as well as its merging, required additional code in some BOINC components. This article presents answers to these four challenges.
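
    The division/merging issue (b) can be sketched as row-band tiling, one band per device. This is a minimal illustration of the idea, not the authors' BOINC component; the function names are mine.

```python
def split_rows(img, n_parts):
    """Split an image (a list of pixel rows) into n_parts contiguous row
    bands of near-equal size, one per worker device."""
    h = len(img)
    base, extra = divmod(h, n_parts)
    bands, start = [], 0
    for i in range(n_parts):
        size = base + (1 if i < extra else 0)  # spread the remainder
        bands.append(img[start:start + size])
        start += size
    return bands

def merge_rows(bands):
    """Reassemble the processed bands, in order, into one image."""
    return [row for band in bands for row in band]
```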

  20. Development of an image processing system at the Technology Applications Center, UNM: Landsat image processing in mineral exploration and related activities. Final report

    International Nuclear Information System (INIS)

    Budge, T.K.

    1980-09-01

    This project was a demonstration of the capabilities of Landsat satellite image processing applied to the monitoring of mining activity in New Mexico. Study areas included the Navajo coal surface mine, the Jackpile uranium surface mine, and the potash mining district near Carlsbad, New Mexico. Computer classifications of a number of land use categories in these mines were presented and discussed. A literature review of a number of case studies concerning the use of Landsat image processing in mineral exploration and related activities was prepared. Included in this review is a discussion of the Landsat satellite system and the basics of computer image processing. Topics such as destriping, contrast stretches, atmospheric corrections, ratioing, and classification techniques are addressed. Summaries of the STANSORT II and ELAS software packages and the Technology Application Center's Digital Image Processing System (TDIPS) are presented
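
    Ratioing, one of the techniques listed, divides two co-registered bands pixel by pixel to suppress illumination differences and highlight spectral contrasts useful in mineral exploration. A minimal sketch (band values, epsilon guard and function name are illustrative, not from the report):

```python
def band_ratio(band_a, band_b, eps=1e-9):
    """Pixel-wise ratio of two co-registered bands; eps avoids division
    by zero in dark pixels."""
    return [[a / (b + eps) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(band_a, band_b)]
```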

  1. Signal and image processing in medical applications

    CERN Document Server

    Kumar, Amit; Rahim, B Abdul; Kumar, D Sravan

    2016-01-01

    This book highlights recent findings on and analyses conducted on signals and images in the area of medicine. The experimental investigations involve a variety of signals and images and their methodologies range from very basic to sophisticated methods. The book explains how signal and image processing methods can be used to detect and forecast abnormalities in an easy-to-follow manner, offering a valuable resource for researchers, engineers, physicians and bioinformatics researchers alike.

  2. Cellular Neural Network for Real Time Image Processing

    International Nuclear Information System (INIS)

    Vagliasindi, G.; Arena, P.; Fortuna, L.; Mazzitelli, G.; Murari, A.

    2008-01-01

    Since their introduction in 1988, Cellular Nonlinear Networks (CNNs) have found a key role as image processing instruments. Thanks to their structure they are capable of processing individual pixels in parallel, providing fast image processing capabilities that have been applied to a wide range of fields, among them nuclear fusion. In recent years, indeed, visible and infrared video cameras have become more and more important in tokamak fusion experiments, with the twofold aim of understanding the physics and monitoring the safety of operation. Examining the output of these cameras in real time can provide significant information for plasma control and the safety of the machines. The potential of CNNs can be exploited to this aim. To demonstrate the feasibility of the approach, CNN image processing has been applied to several tasks both at the Frascati Tokamak Upgrade (FTU) and the Joint European Torus (JET).

  3. Mapping spatial patterns with morphological image processing

    Science.gov (United States)

    Peter Vogt; Kurt H. Riitters; Christine Estreguil; Jacek Kozak; Timothy G. Wade; James D. Wickham

    2006-01-01

    We use morphological image processing for classifying spatial patterns at the pixel level on binary land-cover maps. Land-cover pattern is classified as 'perforated,' 'edge,' 'patch,' and 'core' with higher spatial precision and thematic accuracy compared to a previous approach based on image convolution, while retaining the...
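
A much-simplified two-class version of this pixel-level classification (core vs. edge only; the paper's full method also distinguishes 'perforated' and 'patch' classes) could be sketched as:

```python
def classify(binmap):
    """Label each foreground pixel 'core' if its 4-neighbourhood is all
    foreground, otherwise 'edge'. Background pixels stay None.
    A simplified sketch of morphological pattern classification."""
    rows, cols = len(binmap), len(binmap[0])

    def fg(r, c):  # foreground test with out-of-bounds treated as background
        return 0 <= r < rows and 0 <= c < cols and binmap[r][c] == 1

    out = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if binmap[r][c] == 1:
                interior = all(fg(r + dr, c + dc)
                               for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)))
                out[r][c] = 'core' if interior else 'edge'
    return out

m = [[1, 1, 1],
     [1, 1, 1],
     [1, 1, 1]]
labels = classify(m)
print(labels[1][1], labels[0][0])  # core edge
```

The erosion-like interior test is what gives the method its pixel-level spatial precision, in contrast to the window-averaging behaviour of a convolution-based approach.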

  4. Image processing in digital chest radiography

    International Nuclear Information System (INIS)

    Manninen, H.; Partanen, K.; Lehtovirta, J.; Matsi, P.; Soimakallio, S.

    1992-01-01

    The usefulness of digital image processing of chest radiographs was evaluated in a clinical study. In 54 patients, chest radiographs in the posteroanterior projection were obtained by both 14-inch digital image intensifier equipment and the conventional screen-film technique. The digital radiographs (512x512 image format) viewed on a 625-line monitor were processed in 3 different ways: 1. standard display; 2. digital edge enhancement of the standard display; 3. inverse intensity display. The radiographs were interpreted independently by 3 radiologists. Diagnoses were confirmed by CT, follow-up radiographs and clinical records. Chest abnormalities in the films analyzed included 21 primary lung tumors, 44 pulmonary nodules, 16 cases with mediastinal disease, and 17 with pneumonia/atelectasis. Interstitial lung disease, pleural plaques, and pulmonary emphysema were found in 30, 18 and 19 cases, respectively. The sensitivity of conventional radiography, averaged over all findings, was better than that of the digital techniques (P<0.001). Differences in diagnostic accuracy, measured by sensitivity and specificity, between the 3 digital display modes were small. Standard image display showed better sensitivity for pulmonary nodules (0.74 vs 0.66; P<0.05) but poorer specificity for pulmonary emphysema (0.85 vs 0.93; P<0.05) compared with inverse intensity display. It is concluded that, when using a 512x512 image format, the routine use of digital edge enhancement and tone reversal for digital chest radiographs is not warranted. (author). 12 refs.; 4 figs.; 2 tabs

  5. Comparative performance evaluation of transform coding in image pre-processing

    Science.gov (United States)

    Menon, Vignesh V.; NB, Harikrishnan; Narayanan, Gayathri; CK, Niveditha

    2017-07-01

    We are in the midst of a communication transformation that drives both the development and the dissemination of pioneering communication systems with ever-increasing fidelity and resolution. Research in image processing techniques has been driven by a growing need for faster and easier encoding, storage and transmission of visual information. In this paper, the authors examine techniques that can be used at the transmitter end in order to ease the transmission and reconstruction of images. They investigate the performance of different image transform coding schemes used in pre-processing, comparing their effectiveness, necessary and sufficient conditions, properties and implementation complexity. Building on prior advances in image processing techniques, the authors compare the performance of several contemporary image pre-processing frameworks: compressed sensing, singular value decomposition, and the integer wavelet transform. The paper exposes the potential of the integer wavelet transform to be an efficient pre-processing scheme.
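
A minimal sketch of the transform the paper favours, here one lifting level of the integer Haar wavelet transform (an illustrative implementation, not the authors' code; even-length input assumed):

```python
def int_haar_forward(x):
    """One level of the integer Haar transform via the lifting scheme.
    Maps integers to integers and is exactly invertible, which is what
    makes it attractive for lossless pre-processing."""
    d = [x[2*i + 1] - x[2*i] for i in range(len(x) // 2)]   # detail coeffs
    s = [x[2*i] + (d[i] >> 1) for i in range(len(x) // 2)]  # approximation
    return s, d

def int_haar_inverse(s, d):
    """Undo the lifting steps in reverse order."""
    x = []
    for si, di in zip(s, d):
        even = si - (di >> 1)
        x += [even, even + di]
    return x

sig = [10, 12, 8, 2]
s, d = int_haar_forward(sig)
print(s, d)                            # [11, 5] [2, -6]
print(int_haar_inverse(s, d) == sig)   # True
```

The arithmetic-shift rounding (`>> 1`) is the detail that keeps every intermediate value an integer, so the transform introduces no floating-point error between encoder and decoder.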

  6. Image processing can cause some malignant soft-tissue lesions to be missed in digital mammography images.

    Science.gov (United States)

    Warren, L M; Halling-Brown, M D; Looney, P T; Dance, D R; Wallis, M G; Given-Wilson, R M; Wilkinson, L; McAvinchey, R; Young, K C

    2017-09-01

    To investigate the effect of image processing on cancer detection in mammography. An observer study was performed using 349 digital mammography images of women with normal breasts, calcification clusters, or soft-tissue lesions, including 191 subtle cancers. Images underwent two types of processing: FlavourA (standard) and FlavourB (added enhancement). Six observers located features in the breast they suspected to be cancerous (4,188 observations). Data were analysed using jackknife alternative free-response receiver operating characteristic (JAFROC) analysis. Characteristics of the cancers detected with each image processing type were investigated. For calcifications, the JAFROC figure of merit (FOM) was equal to 0.86 for both types of image processing. For soft-tissue lesions, the JAFROC FOM was better for FlavourA (0.81) than FlavourB (0.78); this difference was significant (p=0.001). Using FlavourA a greater number of cancers of all grades and sizes were detected than with FlavourB. FlavourA improved soft-tissue lesion detection in denser breasts (p=0.04 when volumetric density was over 7.5%). CONCLUSIONS: The detection of malignant soft-tissue lesions (which were primarily invasive) was significantly better with FlavourA than FlavourB image processing. This is despite FlavourB having a higher-contrast appearance often preferred by radiologists. It is important that the clinical choice of image processing is based on objective measures. Copyright © 2017 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  7. An image processing approach to analyze morphological features of microscopic images of muscle fibers.

    Science.gov (United States)

    Comin, Cesar Henrique; Xu, Xiaoyin; Wang, Yaming; Costa, Luciano da Fontoura; Yang, Zhong

    2014-12-01

    We present an image processing approach to automatically analyze duo-channel microscopic images of muscular fiber nuclei and cytoplasm. Nuclei and cytoplasm play a critical role in determining the health and functioning of muscular fibers, as changes of nuclei and cytoplasm manifest in many diseases such as muscular dystrophy and hypertrophy. Quantitative evaluation of muscle fiber nuclei and cytoplasm is thus of great importance to researchers in musculoskeletal studies. The proposed computational approach consists of image processing steps to segment and delineate cytoplasm and identify nuclei in two-channel images. Morphological operations such as skeletonization are applied to extract the length of the cytoplasm for quantification. We tested the approach on real images and found that it achieves high accuracy, objectivity, and robustness. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Image processing with a cellular nonlinear network

    International Nuclear Information System (INIS)

    Morfu, S.

    2005-01-01

    A cellular nonlinear network (CNN) based on uncoupled nonlinear oscillators is proposed for image processing purposes. It is shown theoretically and numerically that the contrast of an image loaded at the nodes of the CNN is strongly enhanced, even if it is initially weak. An image inversion can also be obtained without reconfiguring the network, whereas a grey-level extraction can be performed with an additional threshold filtering. Lastly, an electronic implementation of this CNN is presented.
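
The contrast enhancement by uncoupled per-pixel nodes can be illustrated, very loosely, by pushing each normalised pixel through a sigmoid nonlinearity. This is a sketch of the general idea only, not the paper's oscillator model; `gain` and `mid` are illustrative parameters:

```python
import math

def enhance(image, gain=10.0, mid=0.5):
    """Apply a sigmoid nonlinearity independently to each pixel,
    mimicking the steady state of one uncoupled nonlinear node per
    pixel: values near `mid` are pushed apart, boosting contrast."""
    def f(v):
        return 1.0 / (1.0 + math.exp(-gain * (v - mid)))
    return [[f(v) for v in row] for row in image]

weak = [[0.45, 0.55]]                       # weak-contrast pair
out = enhance(weak)
print(out[0][1] - out[0][0] > 0.55 - 0.45)  # contrast widened: True
```

Because each node acts on its own pixel, the operation is trivially parallel, which is the property the CNN hardware exploits.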

  9. Fast image processing on parallel hardware

    International Nuclear Information System (INIS)

    Bittner, U.

    1988-01-01

    Current digital imaging modalities in the medical field incorporate parallel hardware which is heavily used in the stage of image formation, as in CT/MR image reconstruction or in DSA real-time subtraction. In order to make image post-processing as efficient as image acquisition, new software approaches have to be found which take full advantage of the parallel hardware architecture. This paper describes the implementation of a two-dimensional median filter, which can serve as an example for the development of such algorithms. The algorithm is analyzed by viewing it as a complete parallel sort of the k pixel values in the chosen window, which leads to a generalization to rank-order operators and other closely related filters reported in the literature. A section on the theoretical basis of the algorithm gives hints on how to characterize operations suitable for implementation on pipeline processors and how to find the appropriate algorithms. Finally, some results concerning the computation time and usefulness of median filtering in radiographic imaging are given.
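
The window-sort view of the median filter can be sketched serially as follows; this is an illustrative reference implementation, while the paper's contribution, the parallel-hardware mapping of the sort, is not reproduced here:

```python
def median_filter(image, k=3):
    """k x k median filter with border replication. Each output pixel is
    the median of the k*k window values: conceptually, a complete sort
    of the window followed by picking the middle element."""
    rows, cols, r = len(image), len(image[0]), k // 2

    def px(i, j):  # replicate border pixels beyond the image edge
        return image[min(max(i, 0), rows - 1)][min(max(j, 0), cols - 1)]

    out = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            window = sorted(px(i + di, j + dj)
                            for di in range(-r, r + 1)
                            for dj in range(-r, r + 1))
            out[i][j] = window[len(window) // 2]
    return out

noisy = [[10, 10, 10],
         [10, 99, 10],   # single impulse-noise pixel
         [10, 10, 10]]
print(median_filter(noisy)[1][1])  # 10
```

Replacing the middle-element selection with any other rank yields the rank-order operators the paper generalises to.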

  10. Reducing the absorbed dose in analogue radiography of infant chest images by improving the image quality, using image processing techniques

    International Nuclear Information System (INIS)

    Karimian, A.; Yazdani, S.; Askari, M. A.

    2011-01-01

    Radiographic inspection is one of the most widely employed medical testing methods. Because of the poor contrast and high unsharpness of radiographic film images, converting radiographs to a digital format and applying digital image processing is the best method of enhancing the image quality and assisting the interpreter in their evaluation. In this research work, radiographic films of 70 infant chest images with different sizes of defects were selected. To digitise the chest images and process them, two classes of algorithms were used: (i) spatial-domain and (ii) frequency-domain techniques. The MATLAB environment was selected for processing in the digital format. Our results showed that by using these two techniques, defects with small dimensions are detectable. Therefore, the suggested techniques may help medical specialists to diagnose defects at an early stage and help to prevent repeat X-ray examinations of paediatric patients. (authors)
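
As an example of a spatial-domain enhancement of the kind the paper applies (the paper does not specify its exact algorithms, so this is a generic illustration), histogram equalisation can be sketched as:

```python
def equalize(image, levels=256):
    """Histogram equalisation: remap grey levels through the image's
    cumulative distribution so the output uses the full dynamic range."""
    flat = [v for row in image for v in row]
    hist = [0] * levels
    for v in flat:
        hist[v] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    n = len(flat)
    cdf_min = next(c for c in cdf if c > 0)   # first non-empty bin
    lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1))
           if n > cdf_min else 0
           for c in cdf]
    return [[lut[v] for v in row] for row in image]

low_contrast = [[100, 101], [102, 103]]
print(equalize(low_contrast))  # values spread across the full 0-255 range
```

Low-contrast film scans, where all grey levels cluster in a narrow band, are exactly the case where this remapping makes small defects visible.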

  11. The development of application technology for image processing in nuclear facilities

    International Nuclear Information System (INIS)

    Lee, Jong Min; Lee, Yong Bum; Kim, Woog Ki; Sohn, Surg Won; Kim, Seung Ho; Hwang, Suk Yeoung; Kim, Byung Soo

    1991-01-01

    The object of this project is to develop application technology for image processing in nuclear facilities, where image signals are used to enhance the reliability and safety of operation, reduce the radiation exposure of operators, and automate process operations. We have studied such image processing applications for nuclear facilities as non-tactile measurement, remote and automatic inspection, remote control, and enhanced analysis of visual information. On this basis, an automation system and a real-time image processing system have been developed. Nuclear power nowadays accounts for over 50% of the electric power supply of our country, so technological support for state-of-the-art technology in the nuclear industry and its related fields is required. In particular, image processing technology is indispensable for enhancing the reliability and safety of operation and for automating processes in places such as nuclear power plants and radioactive environments. It is important that image processing technology be linked to nuclear engineering so as to enhance the reliability and safety of nuclear operation, as well as to decrease the dose rate. (Author)

  12. Digital Data Processing of Images | Lotter | South African Medical ...

    African Journals Online (AJOL)

    Digital data processing was investigated to perform image processing. Image smoothing and restoration were explored and promising results obtained. The use of the computer, not only as a data management device, but as an important tool to render quantitative information, was illustrated by lung function determination.

  13. Analysis of the Growth Process of Neural Cells in Culture Environment Using Image Processing Techniques

    Science.gov (United States)

    Mirsafianf, Atefeh S.; Isfahani, Shirin N.; Kasaei, Shohreh; Mobasheri, Hamid

    Here we present an approach for processing images of neural cells to analyze their growth process in a culture environment. We have applied several image processing techniques for: 1) environmental noise reduction, 2) neural cell segmentation, 3) neural cell classification based on the growth conditions of their dendrites, and 4) extraction and measurement of neurons' features (e.g., cell body area, number of dendrites, axon length, and so on). Due to the large amount of noise in the images, we have used feed-forward artificial neural networks to detect edges more precisely.

  14. IDP: Image and data processing (software) in C++

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, S. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    IDP++ (Image and Data Processing in C++) is a compiled, multidimensional, multi-data-type signal processing environment written in C++. It is being developed within the Radar Ocean Imaging group and is intended as a partial replacement for View. IDP++ takes advantage of the latest object-oriented compiler technology to provide 'information hiding.' Users need only know C, not C++. Signals are treated like any other variable, with a defined set of operators and functions, in an intuitive manner. IDP++ is being designed for real-time environments, where interpreted signal processing packages are less efficient.

  15. Multimedia image and video processing

    CERN Document Server

    Guan, Ling

    2012-01-01

    As multimedia applications have become part of contemporary daily life, numerous paradigm-shifting technologies in multimedia processing have emerged over the last decade. Substantially updated with 21 new chapters, Multimedia Image and Video Processing, Second Edition explores the most recent advances in multimedia research and applications. This edition presents a comprehensive treatment of multimedia information mining, security, systems, coding, search, hardware, and communications as well as multimodal information fusion and interaction. Clearly divided into seven parts, the book begins w

  16. Grid Portal for Image and Video Processing

    International Nuclear Information System (INIS)

    Dinitrovski, I.; Kakasevski, G.; Buckovska, A.; Loskovska, S.

    2007-01-01

    Users are typically best served by Grid Portals. Grid Portals are web servers that allow the user to configure or run a class of applications. The server is then given the task of authenticating the user with the Grid and invoking the required grid services to launch the user's application. PHP is a widely used general-purpose scripting language that is especially suited for web development and can be embedded into HTML. PHP is a powerful and modern server-side scripting language producing HTML or XML output which can easily be accessed by everyone via a web interface (with the browser of your choice), and it can execute shell scripts on the server side. The aim of our work is the development of a Grid portal for image and video processing. The shell scripts contain gLite and Globus commands for obtaining a proxy certificate, job submission, data management, etc. Using this technique we can easily create a web interface to the Grid infrastructure. The image and video processing algorithms are implemented in C++ using various image processing libraries. (Author)

  17. Digital image processing in art conservation

    Czech Academy of Sciences Publication Activity Database

    Zitová, Barbara; Flusser, Jan

    č. 53 (2003), s. 44-45 ISSN 0926-4981 Institutional research plan: CEZ:AV0Z1075907 Keywords : art conservation * digital image processing * change detection Subject RIV: JD - Computer Applications, Robotics

  18. Imaging partons in exclusive scattering processes

    Energy Technology Data Exchange (ETDEWEB)

    Diehl, Markus

    2012-06-15

    The spatial distribution of partons in the proton can be probed in suitable exclusive scattering processes. I report on recent performance estimates for parton imaging at a proposed Electron-Ion Collider.

  19. Computational analysis of Pelton bucket tip erosion using digital image processing

    Science.gov (United States)

    Shrestha, Bim Prasad; Gautam, Bijaya; Bajracharya, Tri Ratna

    2008-03-01

    Erosion of hydro turbine components by sand-laden rivers is one of the biggest problems in the Himalayas. Even with sediment-trapping systems, complete removal of fine sediment from water is impossible and uneconomical; hence most turbine components in Himalayan rivers are exposed to sand-laden water and subject to erosion. Pelton buckets, which are widely used in hydropower plants, erode under the continuous presence of sand particles in the water. The resulting erosion causes an increase in splitter thickness, which should theoretically be zero. This increase in splitter thickness gives rise to back-hitting of water, followed by a decrease in turbine efficiency. This paper describes the process of measuring sharp edges such as the bucket tip using digital image processing. An image of each bucket is captured, and the bucket is then run for 72 hours; the sand concentration in the water hitting the bucket is closely controlled and monitored. Afterwards, an image of the test bucket is taken under the same conditions. The process is repeated 10 times. The digital image processing applied here encompasses image enhancement in both the spatial and frequency domains, together with processes that extract attributes from images, up to and including the measurement of the splitter's tip. Processing of the images was done on the MATLAB 6.5 platform. The results show that edge erosion of sharp edges can be accurately detected and quantitatively measured, and the erosion profile can be generated, using image processing techniques.

  20. Advanced Color Image Processing and Analysis

    CERN Document Server

    2013-01-01

    This volume does much more than survey modern advanced color processing. Starting with a historical perspective on ways we have classified color, it sets out the latest numerical techniques for analyzing and processing colors, the leading edge in our search to accurately record and print what we see. The human eye perceives only a fraction of available light wavelengths, yet we live in a multicolor world of myriad shining hues. Colors rich in metaphorical associations make us “purple with rage” or “green with envy” and cause us to “see red.” Defining colors has been the work of centuries, culminating in today’s complex mathematical coding that nonetheless remains a work in progress: only recently have we possessed the computing capacity to process the algebraic matrices that reproduce color more accurately. With chapters on dihedral color and image spectrometers, this book provides technicians and researchers with the knowledge they need to grasp the intricacies of today’s color imaging.

  1. Panorama Image Processing for Condition Monitoring with Thermography in Power Plant

    Energy Technology Data Exchange (ETDEWEB)

    Jeon, Byoung Joon; Kim, Tae Hwan; Kim, Soon Geol; Mo, Yoon Syub [UNETWARE, Seoul (Korea, Republic of); Kim, Won Tae [Kongju National University, Gongju (Korea, Republic of)

    2010-04-15

    In this paper, an image processing study using CCD images and thermographic images was performed in order to handle thermographic data easily, without risk to the personnel who conduct condition monitoring of the abnormal or failure states that can occur in industrial power plants. This image processing is also applicable to predictive maintenance. To achieve broad-area monitoring, a methodology was developed for producing a single image using the panorama technique, regardless of how many cameras are employed, including a fusion method for discrete configurations of the target. As a result, image fusion with quick real-time processing was obtained, and it was possible to save time in tracking the monitored location when matching the images from the CCTV and the thermography

  2. Panorama Image Processing for Condition Monitoring with Thermography in Power Plant

    International Nuclear Information System (INIS)

    Jeon, Byoung Joon; Kim, Tae Hwan; Kim, Soon Geol; Mo, Yoon Syub; Kim, Won Tae

    2010-01-01

    In this paper, an image processing study using CCD images and thermographic images was performed in order to handle thermographic data easily, without risk to the personnel who conduct condition monitoring of the abnormal or failure states that can occur in industrial power plants. This image processing is also applicable to predictive maintenance. To achieve broad-area monitoring, a methodology was developed for producing a single image using the panorama technique, regardless of how many cameras are employed, including a fusion method for discrete configurations of the target. As a result, image fusion with quick real-time processing was obtained, and it was possible to save time in tracking the monitored location when matching the images from the CCTV and the thermography

  3. Image recognition on raw and processed potato detection: a review

    Science.gov (United States)

    Qi, Yan-nan; Lü, Cheng-xu; Zhang, Jun-ning; Li, Ya-shuo; Zeng, Zhen; Mao, Wen-hua; Jiang, Han-lu; Yang, Bing-nan

    2018-02-01

    Objective: China's potato staple food strategy clearly points out the need to improve potato processing, but a bottleneck of this strategy is the technology and equipment for selecting appropriate raw and processed potatoes. The purpose of this paper is to summarize advanced detection methods for raw and processed potatoes. Method: Research literature in the field of image-recognition-based potato quality detection was reviewed, covering shape, weight, mechanical damage, germination, greening, black heart, scab, etc., and the development and directions of this field are summarized in this paper. Result: In order to obtain information on the whole potato surface, hardware has been built that synchronizes an image sensor with a conveyor belt to acquire multi-angle images of a single potato. Research on image recognition of potato shape is popular and mature, including qualitative discrimination between abnormal and sound potatoes, and even between round and oval potatoes, with a recognition accuracy of more than 83%. Weight is an important indicator for potato grading, and image classification accuracy exceeds 93%. Image recognition of potato mechanical damage focuses on qualitative identification, the main affecting factors being damage shape and damage time. Image recognition of potato germination usually uses the potato surface image and edge germination points. Both qualitative and quantitative detection of green potatoes have been researched; currently, scab and black-heart image recognition must be operated in a stable detection environment or with a specific device. Image recognition of processed potatoes mainly focuses on potato chips, slices, fries, etc. Conclusion: Image recognition as a rapid food detection tool has been widely researched for quality analyses of raw and processed potatoes, and its techniques and equipment have the potential for commercialization in the short term, to meet the demands of the strategy of developing the potato as

  4. Determinação do melhor nível de sal comum para codornas japonesas em postura Determination of the best level of salt for Japanese laying quails

    Directory of Open Access Journals (Sweden)

    Alice Eiko Murakami

    2006-12-01

    Full Text Available This study was carried out to determine the best level of common salt for Japanese laying quails (Coturnix coturnix japonica). Three hundred and thirty-six quails, 13 weeks of age, were housed in cages at 118 cm²/quail for 84 days (four cycles of 21 days). The experiment used a completely randomized design with seven treatments (0, 0.15, 0.20, 0.25, 0.30, 0.35 and 0.45% of common salt) and six replicates of eight birds per pen. Every 21 days, the performance parameters (egg production, feed intake and feed conversion) and egg quality (average egg weight, egg mass, eggshell percentage and thickness, and Haugh units) were evaluated. The data were submitted to analyses of variance and regression, and the means were compared by Dunnett's test at 5% significance. The fitted regression equation was not significant for the evaluated parameters as a function of dietary salt level. However, comparison of the means showed that birds in the treatments with added salt had better productive performance and external egg quality, and the level of 0.15% salt (equivalent to 0.10% Na and 0.12% Cl) was sufficient to obtain these results.

  5. Use of coccidiostat in mineral salt and study on ovine eimeriosis Uso de coccidiostático no sal mineral e estudo da eimeriose ovina

    Directory of Open Access Journals (Sweden)

    Alberto Luiz Freire de Andrade Júnior

    2012-03-01

    Full Text Available Coccidiosis is a serious obstacle to sheep production, and is becoming a limiting factor, especially with regard to lamb production. However, there are few studies on this parasite in the State of Rio Grande do Norte. The aim of this study was to evaluate the action of decoquinate, added to mineral salt, for controlling Eimeria infection in lambs, and to identify which species infect sheep in the eastern region of the state. The study was carried out from August 2009 to January 2010 and used 76 animals. These were divided into two treatment groups: one given common mineral salt, and the other mineral salt enriched with 6% micronized decoquinate. Fecal samples and body weight measurements were taken every 14 days for parasitological diagnosis, weight-gain follow-up and quantitative analysis. The study showed a significant difference in OPG only at the 7th collection, and no significant difference in weight gain. The Eimeria species found were E. ahsata, E. crandallis, E. granulosa, E. intricata, E. ovina, E. faurei, E. ovinoidalis, E. pallida and E. parva. It was concluded that the addition of decoquinate to mineral salt resulted in lower oocyst elimination, thus favoring eimeriosis control in sheep.

  6. Mathematical methods in time series analysis and digital image processing

    CERN Document Server

    Kurths, J; Maass, P; Timmer, J

    2008-01-01

    The aim of this volume is to bring together research directions in theoretical signal and image processing developed rather independently in electrical engineering, theoretical physics, mathematics and the computer sciences. In particular, mathematically justified algorithms and methods, the mathematical analysis of these algorithms and methods, as well as the investigation of connections between methods from time series analysis and image processing are reviewed. An interdisciplinary comparison of these methods, drawing upon common sets of test problems from medicine and the geophysical/environmental sciences, is also addressed. This volume coherently summarizes work carried out in the field of theoretical signal and image processing. It focuses on non-linear and non-parametric models for time series as well as on adaptive methods in image processing.

  7. Grid Computing Application for Brain Magnetic Resonance Image Processing

    International Nuclear Information System (INIS)

    Valdivia, F; Crépeault, B; Duchesne, S

    2012-01-01

    This work emphasizes the use of grid computing and web technology for automatic post-processing of brain magnetic resonance images (MRI) in the context of neuropsychiatric (Alzheimer's disease) research. Post-acquisition image processing is achieved through the interconnection of several individual processes into pipelines. Each process has input and output data ports, options and execution parameters, and performs single tasks such as: a) extracting individual image attributes (e.g. dimensions, orientation, center of mass), b) performing image transformations (e.g. scaling, rotation, skewing, intensity standardization, linear and non-linear registration), c) performing image statistical analyses, and d) producing the necessary quality control images and/or files for user review. The pipelines are built to perform specific sequences of tasks on the alphanumeric data and MRIs contained in our database. The web application is coded in PHP and allows the creation of scripts to create, store and execute pipelines and their instances either on our local cluster or on high-performance computing platforms. To run an instance on an external cluster, the web application opens a communication tunnel through which it copies the necessary files, submits the execution commands and collects the results. We present results of system tests for the processing of a set of 821 brain MRIs from the Alzheimer's Disease Neuroimaging Initiative study via a nonlinear registration pipeline composed of 10 processes. Our results show successful execution on both local and external clusters, and a 4-fold increase in performance when using the external cluster. However, the latter's performance does not scale linearly, as queue waiting times and execution overhead increase with the number of tasks to be executed.
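
The chaining of single-task processes into a pipeline can be sketched as follows; the process names and signatures are illustrative, not those of the actual system:

```python
# Sketch: each "process" is a function with one data input and one data
# output plus keyword options, and a pipeline feeds each output into the
# next input, mirroring the port-based design described in the abstract.

def scale(img, factor):
    """Example process: multiply every voxel intensity by `factor`."""
    return [[v * factor for v in row] for row in img]

def threshold(img, t):
    """Example process: binarise the image at intensity `t`."""
    return [[1 if v >= t else 0 for v in row] for row in img]

def run_pipeline(data, steps):
    """Execute (process, options) pairs in order, chaining outputs."""
    for func, kwargs in steps:
        data = func(data, **kwargs)
    return data

mri = [[1, 4], [6, 2]]
pipeline = [(scale, {"factor": 10}), (threshold, {"t": 30})]
print(run_pipeline(mri, pipeline))  # [[0, 1], [1, 0]]
```

Serialising the `(process, options)` list is what lets such a system store pipelines in a database and replay them on a local cluster or remote platform.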

  8. Parallel asynchronous hardware implementation of image processing algorithms

    Science.gov (United States)

    Coon, Darryl D.; Perera, A. G. U.

    1990-01-01

    Research is being carried out on hardware for a new approach to focal plane processing. The hardware involves silicon injection mode devices. These devices provide a natural basis for parallel asynchronous focal plane image preprocessing. The simplicity and novel properties of the devices would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture built from arrays of the devices would form a two-dimensional (2-D) array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuron-like asynchronous pulse-coded form through the laminar processor. No multiplexing, digitization, or serial processing would occur in the preprocessing stage. High performance is expected, based on pulse coding of input currents down to one picoampere with noise referred to input of about 10 femtoamperes. Linear pulse coding has been observed for input currents ranging up to seven orders of magnitude. Low power requirements suggest utility in space and in conjunction with very large arrays. Very low dark current and multispectral capability are possible because of hardware compatibility with the cryogenic environment of high performance detector arrays. The aforementioned hardware development effort is aimed at systems which would integrate image acquisition and image processing.
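
    The linear pulse coding mentioned above maps each pixel's photocurrent to a pulse rate proportional to the current. A toy numerical model of that mapping (the devices do this in analog hardware; the rate constant below is an assumed illustrative value):

```python
import numpy as np

# Toy model of linear pulse-rate coding: each pixel's photocurrent is
# represented by a pulse train whose rate scales linearly with the
# current. RATE_PER_AMP is an assumed constant, not a device parameter.
RATE_PER_AMP = 1e12  # pulses per second per ampere (illustrative)

def pulse_counts(currents, window_s):
    """Expected pulse counts per pixel over an integration window."""
    return currents * RATE_PER_AMP * window_s

currents = np.array([1e-12, 1e-10, 1e-8])  # 1 pA .. 10 nA
counts = pulse_counts(currents, window_s=1.0)
print(counts)  # counts scale linearly across four decades of current
```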

  9. Image processing using pulse-coupled neural networks applications in Python

    CERN Document Server

    Lindblad, Thomas

    2013-01-01

    Image processing algorithms based on the mammalian visual cortex are powerful tools for extracting information from images and manipulating them. This book reviews the neural theory and translates it into digital models. Applications are given in the areas of image recognition, foveation, image fusion and information extraction. The third edition reflects renewed international interest in pulse image processing, with updated sections presenting several newly developed applications. This edition also introduces a suite of Python scripts that help readers replicate the results presented in the text and develop their own applications.

  10. SlideJ: An ImageJ plugin for automated processing of whole slide images.

    Science.gov (United States)

    Della Mea, Vincenzo; Baroni, Giulia L; Pilutti, David; Di Loreto, Carla

    2017-01-01

    The digital slide, or Whole Slide Image, is a digital image, acquired with specific scanners, that represents a complete tissue sample or cytological specimen at the microscopic level. While Whole Slide Image analysis is recognized as one of the most interesting opportunities, the typical size of such images (up to gigapixels) can be very demanding in terms of memory requirements. Thus, while algorithms and tools for processing and analysis of single microscopic field images are available, the size of Whole Slide Images makes the direct use of such tools prohibitive or impossible. In this work a plugin for ImageJ, named SlideJ, is proposed with the objective of seamlessly extending the application of image analysis algorithms implemented in ImageJ from single microscopic field images to whole digital slide analysis. The plugin is complemented by example macros in the ImageJ scripting language that demonstrate its use in concrete situations.
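
    The core idea behind such a plugin is tiling: split the huge slide into fixed-size tiles so per-field algorithms can run on each tile in turn. A minimal sketch of that tiling step (tile size and edge handling are simplified assumptions, not SlideJ's actual implementation):

```python
import numpy as np

# Split a large image into tiles so that single-field analysis tools
# can be applied tile by tile. Edge tiles are simply smaller here;
# real whole-slide tools also handle overlap and stitching.
def iter_tiles(img, tile):
    h, w = img.shape[:2]
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            yield y, x, img[y:y + tile, x:x + tile]

slide = np.zeros((1000, 1500), dtype=np.uint8)  # stand-in for a slide
tiles = list(iter_tiles(slide, tile=512))
print(len(tiles))  # 2 rows x 3 cols = 6 tiles (edge tiles are smaller)
```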

  11. Scene matching based on non-linear pre-processing on reference image and sensed image

    Institute of Scientific and Technical Information of China (English)

    Zhong Sheng; Zhang Tianxu; Sang Nong

    2005-01-01

    To solve the heterogeneous image scene matching problem, a non-linear pre-processing method applied to the original images before intensity-based correlation is proposed. The results show that the probability of a correct match is raised greatly, and the effect is most remarkable for low S/N image pairs.
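
    The intensity-based correlation baseline that the pre-processing is meant to improve can be sketched as exhaustive template matching by normalized cross-correlation (a generic formulation, not the paper's exact algorithm):

```python
import numpy as np

# Normalized cross-correlation (NCC) between two equal-sized patches,
# and brute-force search for the best-matching position of a template.
def ncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def match(image, templ):
    th, tw = templ.shape
    best, pos = -2.0, None
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            score = ncc(image[y:y + th, x:x + tw], templ)
            if score > best:
                best, pos = score, (y, x)
    return pos, best

rng = np.random.default_rng(0)
image = rng.random((30, 30))
templ = image[10:18, 5:13].copy()  # an exact sub-window as template
pos, score = match(image, templ)
print(pos)  # (10, 5): the template is recovered at its true location
```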

  12. Image processing techniques for thermal, x-rays and nuclear radiations

    International Nuclear Information System (INIS)

    Chadda, V.K.

    1998-01-01

    The paper describes image acquisition techniques for the non-visible range of the electromagnetic spectrum, especially thermal, x-ray and nuclear radiations. Thermal imaging systems are valuable tools used for applications ranging from PCB inspection, hot-spot studies, fire identification and satellite imaging to defense applications. Penetrating radiations like x-rays and gamma rays are used in NDT, baggage inspection, CAT scans, cardiology, radiography, nuclear medicine, etc. Neutron radiography complements conventional x-ray and gamma radiography. For these applications, image processing and computed tomography are employed for 2-D and 3-D image interpretation respectively. The paper also covers the main features of image processing systems for quantitative evaluation of gray-level and binary images. (author)

  13. Digital-image processing improves man-machine communication at a nuclear reactor

    International Nuclear Information System (INIS)

    Cook, S.A.; Harrington, T.P.; Toffer, H.

    1982-01-01

    The application of digital image processing to improve man-machine communication in a nuclear reactor control room is illustrated. At the Hanford N Reactor, operated by UNC Nuclear Industries for the United States Department of Energy, in Richland, Washington, digital image processing is applied to flow, temperature, and tube power data. Color displays are used to present the data in a clear and concise fashion. Specific examples are used to demonstrate the capabilities and benefits of digital image processing of reactor data. N Reactor flow and power maps for routine reactor operations and for perturbed reactor conditions are displayed, and the advantages of difference mapping are demonstrated. Image processing techniques have also been applied to the results of analytical reactor models; two examples are shown. The potential of combining experimental and analytical information with digital image processing to produce predictive and adaptive reactor core models is discussed. The applications demonstrate that digital image processing can provide new, more effective ways for control room personnel to assess reactor status, locate problems and explore corrective actions. 10 figures

  14. On-board processing of video image sequences

    DEFF Research Database (Denmark)

    Andersen, Jakob Dahl; Chanrion, Olivier Arnaud; Forchhammer, Søren

    2008-01-01

    and evaluated. On-board there are six video cameras, each capturing images of 1024×1024 pixels at 12 bpp and a frame rate of 15 fps, totalling 1080 Mbit/s. In comparison, the average downlink data rate for these images is projected to be 50 kbit/s. This calls for efficient on-board processing to select...

  15. Brain's tumor image processing using shearlet transform

    Science.gov (United States)

    Cadena, Luis; Espinosa, Nikolai; Cadena, Franklin; Korneeva, Anna; Kruglyakov, Alexey; Legalov, Alexander; Romanenko, Alexey; Zotin, Alexander

    2017-09-01

    Brain tumor detection is a well known research area for medical and computer scientists. In recent decades much research has been done on tumor detection, segmentation, and classification. Medical imaging plays a central role in the diagnosis of brain tumors and nowadays relies on non-invasive, high-resolution techniques, especially magnetic resonance imaging and computed tomography scans. Edge detection is a fundamental tool in image processing, particularly in the areas of feature detection and feature extraction, which aim at identifying points in a digital image at which the image has discontinuities. Shearlets are among the most successful frameworks for the efficient representation of multidimensional data, capturing edges and other anisotropic features which frequently dominate multidimensional phenomena. The paper proposes an improved brain tumor detection method that automatically detects tumor location in MR images, with features extracted by a new shearlet transform.
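
    For intuition, a plain gradient-based edge detector (Sobel) is shown below as a simple stand-in for the shearlet-based detector; shearlets add the multi-scale directional sensitivity that a plain gradient lacks. This is a generic illustration, not the paper's method:

```python
import numpy as np

# Sobel edge magnitude: convolve with horizontal/vertical derivative
# kernels and combine the two gradient components.
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
KY = KX.T

def correlate2d(img, k):
    """Valid-mode 3x3 correlation, written out explicitly."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = (img[y:y + 3, x:x + 3] * k).sum()
    return out

def edge_magnitude(img):
    gx, gy = correlate2d(img, KX), correlate2d(img, KY)
    return np.hypot(gx, gy)

# A vertical step edge: the magnitude is nonzero only at the step.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
mag = edge_magnitude(img)
print(sorted(np.nonzero(mag[3])[0].tolist()))  # [2, 3]
```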

  16. Theoretical analysis of radiographic images by nonstationary Poisson processes

    International Nuclear Information System (INIS)

    Tanaka, Kazuo; Uchida, Suguru; Yamada, Isao.

    1980-01-01

    This paper deals with the noise analysis of radiographic images obtained in the usual fluorescent screen-film system. The theory of nonstationary Poisson processes is applied to the analysis of radiographic images containing the object information. The ensemble averages, the autocorrelation functions, and the Wiener spectrum densities of the light-energy distribution at the fluorescent screen and of the film optical-density distribution are obtained. The detection characteristics of the system are evaluated theoretically. Numerical examples for a one-dimensional image are shown, and the results are compared with those obtained under the assumption that the object image is related to the background noise by an additive process. (author)
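
    The defining property of the nonstationary (inhomogeneous) Poisson model is that the local photon count has a mean set by the spatially varying exposure, so the noise variance tracks the signal rather than being an additive constant. A small simulation illustrating this (illustrative numbers, not the paper's data):

```python
import numpy as np

# Simulate a nonstationary Poisson image model: each pixel's count is
# Poisson-distributed with a mean equal to the local exposure.
rng = np.random.default_rng(42)

exposure = np.linspace(100.0, 1000.0, 256)            # varying mean (1-D)
counts = rng.poisson(np.tile(exposure, (2000, 1)))     # 2000 realizations

# Across realizations, each pixel's variance equals its mean exposure,
# which is exactly what distinguishes this from additive noise.
ratio = counts.var(axis=0) / exposure
print(round(float(ratio.mean()), 2))  # close to 1.0
```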

  17. The vision guidance and image processing of AGV

    Science.gov (United States)

    Feng, Tongqing; Jiao, Bin

    2017-08-01

    Firstly, the principle of AGV vision guidance is introduced, and the deviation and deflection angle are measured in the image coordinate system. The vision guidance image processing platform is then described. Because the AGV guidance image contains considerable noise, it is first smoothed with a statistical sorting filter. Since the guidance images sampled by the AGV have different optimal threshold segmentation points, two-dimensional maximum entropy image segmentation is used to solve this problem. We extract the foreground image in the target band by calculating contour areas, and obtain the centre line with a least-squares fitting algorithm. With the mapping between image and physical coordinates, the guidance information is obtained.
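
    The final step above, fitting the centre line of the segmented guide path by least squares, can be sketched as follows (a generic formulation using `np.polyfit`; the mask construction is a synthetic stand-in for the paper's segmented images):

```python
import numpy as np

# Fit the centre line of a binary guide-path mask: take the mean
# foreground column in each row, then fit a line by least squares.
def fit_centre_line(mask):
    ys, xs = np.nonzero(mask)
    rows = np.unique(ys)
    centres = np.array([xs[ys == y].mean() for y in rows])
    slope, intercept = np.polyfit(rows.astype(float), centres, deg=1)
    return slope, intercept

# Synthetic guide line x = 3 + y, three pixels wide.
mask = np.zeros((20, 30), dtype=bool)
for y in range(20):
    c = 3 + y
    mask[y, c - 1:c + 2] = True

slope, intercept = fit_centre_line(mask)
print(round(slope, 2), round(intercept, 2))  # 1.0 3.0
```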

  18. Image Post-Processing and Analysis. Chapter 17

    Energy Technology Data Exchange (ETDEWEB)

    Yushkevich, P. A. [University of Pennsylvania, Philadelphia (United States)

    2014-09-15

    For decades, scientists have used computers to enhance and analyse medical images. At first, they developed simple computer algorithms to enhance the appearance of interesting features in images, helping humans read and interpret them better. Later, they created more advanced algorithms, where the computer would not only enhance images but also participate in facilitating understanding of their content. Segmentation algorithms were developed to detect and extract specific anatomical objects in images, such as malignant lesions in mammograms. Registration algorithms were developed to align images of different modalities and to find corresponding anatomical locations in images from different subjects. These algorithms have made computer aided detection and diagnosis, computer guided surgery and other highly complex medical technologies possible. Nowadays, the field of image processing and analysis is a complex branch of science that lies at the intersection of applied mathematics, computer science, physics, statistics and biomedical sciences. This chapter will give a general overview of the most common problems in this field and the algorithms that address them.

  19. SlideJ: An ImageJ plugin for automated processing of whole slide images.

    Directory of Open Access Journals (Sweden)

    Vincenzo Della Mea

    The digital slide, or Whole Slide Image, is a digital image, acquired with specific scanners, that represents a complete tissue sample or cytological specimen at the microscopic level. While Whole Slide Image analysis is recognized as one of the most interesting opportunities, the typical size of such images (up to gigapixels) can be very demanding in terms of memory requirements. Thus, while algorithms and tools for processing and analysis of single microscopic field images are available, the size of Whole Slide Images makes the direct use of such tools prohibitive or impossible. In this work a plugin for ImageJ, named SlideJ, is proposed with the objective of seamlessly extending the application of image analysis algorithms implemented in ImageJ from single microscopic field images to whole digital slide analysis. The plugin is complemented by example macros in the ImageJ scripting language that demonstrate its use in concrete situations.

  20. Enhancing the Teaching of Digital Processing of Remote Sensing Image Course through Geospatial Web Processing Services

    Science.gov (United States)

    di, L.; Deng, M.

    2010-12-01

    Remote sensing (RS) is an essential method of collecting data for Earth science research. Huge amounts of remote sensing data, most of them in image form, have been acquired. Almost all geography departments in the world offer courses in digital processing of remote sensing images. Such courses place emphasis on how to digitally process large amounts of multi-source images for solving real world problems. However, due to the diversity and complexity of RS images and the shortcomings of current data and processing infrastructure, obstacles to effectively teaching such courses remain. The major obstacles include 1) difficulties in finding, accessing, integrating and using massive RS images by students and educators, and 2) inadequate processing functions and computing facilities for students to freely explore the massive data. Recent development in geospatial Web processing service systems, which make massive data, computing power, and processing capabilities available to average Internet users anywhere in the world, promises the removal of these obstacles. The GeoBrain system developed by CSISS is an example of such systems. All functions available in the GRASS open source GIS have been implemented as Web services in GeoBrain. Petabytes of remote sensing images in NASA data centers, the USGS Landsat data archive, and NOAA CLASS are accessible transparently and processable through GeoBrain. The GeoBrain system is operated on a high performance cluster server with large disk storage and a fast Internet connection. All GeoBrain capabilities can be accessed by any Internet-connected Web browser. Dozens of universities have used GeoBrain as an ideal platform to support data-intensive remote sensing education. This presentation gives a specific example of using GeoBrain geoprocessing services to enhance the teaching of GGS 588, Digital Remote Sensing, taught at the Department of Geography and Geoinformation Science, George Mason University.
The course uses the textbook "Introductory

  1. Use of salt during the transportation of pirarucu juveniles (1 kg) (Arapaima gigas)

    Directory of Open Access Journals (Sweden)

    Franmir Rodrigues Brandão

    2008-12-01

    Pirarucu is a fish native to the Amazon basin, widely used in culture systems in some parts of Brazil. The objective of this work was to test table salt as a stress mitigator during the transportation of pirarucu juveniles (1 kg). Fish were transported in two different systems: boxes without added oxygen (open system) and sealed, oxygen-filled plastic bags (closed system). In both systems fish were transported under three different treatments: a control and two table salt concentrations (3 and 6 g L-1). After transportation, fish were stocked in ponds to monitor recovery. Parameters of energy metabolism (cortisol, glucose and lactate) and hematology (hematocrit) were analyzed. Table salt was not efficient in mitigating the stress response in either of the transport systems tested.

  2. Knowledge, attitudes and self-reported practices toward children oral health among mother's attending maternal and child's units, Salé, Morocco.

    Science.gov (United States)

    Chala, Sanaa; Houzmali, Soumia; Abouqal, Redouane; Abdallaoui, Faïza

    2018-05-11

    The occurrence of severe dental caries is particularly prevalent and harmful in children. A better understanding of parental factors that may be indicators of children's risk of developing dental caries is important for the development of preventive measures. This study was conducted to assess knowledge, attitudes, and practices (KAP) of mothers in Salé, Morocco regarding oral health and their predictors. A cross-sectional KAP study was conducted of Mother and Child units in Salé, Morocco. Mothers attending the selected units from November 2014 to 29 January 2015 were recruited. Data were collected using a semi-structured questionnaire, administered by face-to-face interviews, to record socio-demographic factors and KAPs. The main outcome measures included knowledge about oral health diseases and preventive measures, and attitudes and practices related to oral health prevention measures and dental care. KAPs scores were then recoded based on responses and scores were determined for each KAP domain. Linear regression analysis was conducted to assess predictors of KAP scores. Among 502 mothers included, 140 (27.8%) were illiterate and 285 (60.9%) were aware that fluoride has a beneficial effect in caries prevention. Mothers' own practices about dental care were statistically related to their children's use of dental care services (p knowledge score was associated with mother's age (β = 0.05; 95% CI; p oral health-related practices were mother's education level and children's health status. Limited KAP scores were observed among the studied population. A great emphasis on oral health education and some risk factor modifications are recommended.

  3. Auto-SEIA: simultaneous optimization of image processing and machine learning algorithms

    Science.gov (United States)

    Negro Maggio, Valentina; Iocchi, Luca

    2015-02-01

    Object classification from images is an important task for machine vision and a crucial ingredient for many computer vision applications, ranging from security and surveillance to marketing. Image-based object classification techniques properly integrate image processing and machine learning (i.e., classification) procedures. In this paper we present a system for automatic simultaneous optimization of algorithms and parameters for object classification from images. More specifically, the proposed system is able to process a dataset of labelled images and to return the best configuration of image processing and classification algorithms, and of their parameters, with respect to classification accuracy. Experiments with real public datasets are used to demonstrate the effectiveness of the developed system.

  4. Development of a Reference Image Collection Library for Histopathology Image Processing, Analysis and Decision Support Systems Research.

    Science.gov (United States)

    Kostopoulos, Spiros; Ravazoula, Panagiota; Asvestas, Pantelis; Kalatzis, Ioannis; Xenogiannopoulos, George; Cavouras, Dionisis; Glotsos, Dimitris

    2017-06-01

    Histopathology image processing, analysis and computer-aided diagnosis have been shown to be effective assisting tools towards reliable and intra-/inter-observer invariant decisions in traditional pathology. Especially for cancer patients, decisions need to be as accurate as possible in order to increase the probability of optimal treatment planning. In this study, we propose a new image collection library (HICL-Histology Image Collection Library) comprising 3831 histological images of three different diseases, for fostering research in histopathology image processing, analysis and computer-aided diagnosis. Raw data comprised 93, 116 and 55 cases of brain, breast and laryngeal cancer respectively, collected from the archives of the University Hospital of Patras, Greece. The 3831 images were generated from the most representative regions of the pathology, specified by an experienced histopathologist. The HICL Image Collection is free for access under an academic license at http://medisp.bme.teiath.gr/hicl/ . Potential exploitations of the proposed library may span a broad spectrum, such as in image processing to improve visualization, in segmentation for nuclei detection, in decision support systems for second opinion consultations, in statistical analysis for investigation of potential correlations between clinical annotations and imaging findings and, generally, in fostering research on histopathology image processing and analysis. To the best of our knowledge, the HICL constitutes the first attempt towards the creation of a reference image collection library in the field of traditional histopathology, publicly and freely available to the scientific community.

  5. Digital image processing of mandibular trabeculae on radiographs

    Energy Technology Data Exchange (ETDEWEB)

    Ogino, Toshi

    1987-06-01

    The present study aimed to reveal the texture patterns of radiographs of the mandibular trabeculae by digital image processing. The 32 cases of normal subjects and the 13 cases of patients with mandibular diseases of ameloblastoma, primordial cysts, squamous cell carcinoma and odontoma were analyzed from their intra-oral radiographs in the right premolar regions. The radiograms were digitized by a drum scanner densitometry method. The input radiographic images were processed by a histogram equalization method. The results are as follows: First, the histogram equalization method enhances the image contrast of the textures. Second, the output images of the textures for normal mandibular trabeculae radiograms show a network pattern. Third, the output images for the patients are characterized by non-network patterns, replaced by patterns of fabric texture, intertwined plants (karakusa pattern), scattered small masses and amorphous texture. These results indicate that the present digital image system is expected to be useful for revealing the texture patterns of radiographs and, in the future, for the texture analysis of clinical radiographs to obtain quantitative diagnostic findings.
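
    Histogram equalization, the enhancement step used above, remaps gray levels through the cumulative histogram so the output levels are spread over the full range. A minimal 8-bit implementation (a textbook formulation, not the study's exact software):

```python
import numpy as np

# Histogram equalization for an 8-bit image: build the cumulative
# histogram (CDF) and use it as a lookup table for the gray levels.
def equalize(img):
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(cdf)[0][0]]          # first occupied level
    lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * 255.0)
    return np.clip(lut, 0, 255).astype(np.uint8)[img]

# A low-contrast image (values 100..131) gets stretched to 0..255.
img = (np.arange(64 * 64, dtype=np.uint32) % 32 + 100).astype(np.uint8)
out = equalize(img)
print(out.min(), out.max())  # 0 255
```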

  6. Application of digital image processing to industrial radiography

    International Nuclear Information System (INIS)

    Bodson; Varcin; Crescenzo; Theulot

    1985-01-01

    Radiography is widely used for quality control in the fabrication of large reactor components. Image processing methods are applied to industrial radiographs to support decision making and to reduce the costs and delays of examination. Films, exposed under representative operating conditions, are used to test the results obtained with algorithms for image restoration and for the detection and characterization of indications, in order to assess the feasibility of automatic radiograph processing [fr]

  7. Digital image processing for real-time neutron radiography and its applications

    International Nuclear Information System (INIS)

    Fujine, Shigenori

    1989-01-01

    The present paper describes several digital image processing approaches for real-time neutron radiography (neutron television, NTV), such as image integration, adaptive smoothing and image enhancement, which have beneficial effects on image quality, and also describes how to use these techniques in applications. Details invisible in direct NTV images can be revealed by digital image processing techniques such as reversed images, gray level correction, gray scale transformation, contoured images, subtraction, pseudo-color display and so on. For real-time applications, a contouring operation and an averaging approach can also be utilized effectively. (author)
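
    Image integration, the first technique listed, amounts to averaging successive noisy frames: for uncorrelated noise, averaging N frames reduces the noise standard deviation by roughly a factor of sqrt(N). A small simulation of that effect (synthetic Gaussian noise as a stand-in for NTV camera noise):

```python
import numpy as np

# Frame integration: average N noisy frames of a static scene and
# compare the residual noise against a single frame.
rng = np.random.default_rng(7)
scene = np.full((64, 64), 50.0)

def integrate(n_frames, sigma=10.0):
    frames = scene + rng.normal(0.0, sigma, size=(n_frames,) + scene.shape)
    return frames.mean(axis=0)

noise_1 = (integrate(1) - scene).std()
noise_64 = (integrate(64) - scene).std()
print(round(float(noise_1 / noise_64), 1))  # about 8, i.e. sqrt(64)
```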

  8. [Digital thoracic radiology: devices, image processing, limits].

    Science.gov (United States)

    Frija, J; de Géry, S; Lallouet, F; Guermazi, A; Zagdanski, A M; De Kerviler, E

    2001-09-01

    In the first part, the different techniques of digital thoracic radiography are described. Since computed radiography with phosphor plates is the most commercialized, it receives the most emphasis, but other detectors are also described, such as the selenium-coated drum and direct digital radiography with selenium detectors. Indirect flat-panel detectors and a system with four high-resolution CCD cameras are also studied. In the second part the most important image processing methods are discussed: gradation curves, unsharp mask processing, the MUSICA system, dynamic range compression or reduction, and dual-energy subtraction. In the last part the advantages and drawbacks of computed thoracic radiography are emphasized, the most important being the almost consistently good quality of the images and the possibilities of image processing.
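
    Unsharp mask processing, one of the methods listed, subtracts a blurred copy of the image to estimate local detail, then adds the detail back amplified. A minimal version with a box blur (a generic formulation; commercial systems use more elaborate kernels and gain curves):

```python
import numpy as np

# Unsharp masking: sharpened = img + amount * (img - blurred).
def box_blur(img, k=3):
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def unsharp(img, amount=1.0):
    blurred = box_blur(img)
    return img + amount * (img - blurred)

# A step edge gets overshoot on both sides: enhanced local contrast.
img = np.zeros((5, 8))
img[:, 4:] = 100.0
out = unsharp(img)
print(round(out[2, 3], 1), round(out[2, 4], 1))  # -33.3 133.3
```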

  9. Color Processing using Max-trees : A Comparison on Image Compression

    NARCIS (Netherlands)

    Tushabe, Florence; Wilkinson, M.H.F.

    2012-01-01

    This paper proposes a new method of processing color images using mathematical morphology techniques. It adapts the Max-tree image representation to accommodate color and other vectorial images. The proposed method introduces three new ways of transforming the color image into a gray scale image.
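
    For context, mapping a color (vectorial) image to a scalar image is the prerequisite for gray-scale structures such as the Max-tree. Three standard color-to-gray projections are shown below for comparison; these are common textbook choices, not the paper's three specific transformations:

```python
import numpy as np

# Three standard scalar projections of an RGB image.
def luminance(rgb):
    """Weighted sum approximating perceived brightness (ITU-R BT.601)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def intensity(rgb):
    """Plain channel average."""
    return rgb.mean(axis=-1)

def value(rgb):
    """Max channel, as in the HSV 'value' component."""
    return rgb.max(axis=-1)

px = np.array([[[255.0, 0.0, 0.0]]])  # a single pure-red pixel
print(luminance(px)[0, 0], intensity(px)[0, 0], value(px)[0, 0])
```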

  10. Mathematical problems in image processing

    International Nuclear Information System (INIS)

    Chidume, C.E.

    2000-01-01

    This is the second volume of a new series of lecture notes of the Abdus Salam International Centre for Theoretical Physics. This volume contains the lecture notes given by A. Chambolle during the School on Mathematical Problems in Image Processing. The school consisted of two weeks of lecture courses and one week of conference

  11. Signal and image processing for monitoring and testing at EDF

    International Nuclear Information System (INIS)

    Georgel, B.; Garreau, D.

    1992-04-01

    The quality of monitoring and non destructive testing devices in plants and utilities today greatly depends on the efficient processing of signal and image data. In this context, signal or image processing techniques, such as adaptive filtering or detection or 3D reconstruction, are required whenever manufacturing nonconformances or faulty operation have to be recognized and identified. This paper reviews the issues of industrial image and signal processing, by briefly considering the relevant studies and projects under way at EDF. (authors). 1 fig., 11 refs

  12. Textural Analysis of Fatigue Crack Surfaces: Image Pre-processing

    Directory of Open Access Journals (Sweden)

    H. Lauschmann

    2000-01-01

    For fatigue crack history reconstitution, new methods of quantitative microfractography are being developed based on image processing and textural analysis. SEM magnifications between micro- and macrofractography are used. Two image pre-processing operations were suggested and proven to prepare the crack surface images for analytical treatment: 1. Normalization is used to transform the image to a stationary form. Compared to the generally used equalization, it conserves the shape of the brightness distribution and preserves the character of the texture. 2. Binarization is used to transform the grayscale image into a system of thick fibres. An objective criterion for the threshold brightness value was found, namely the value resulting in the maximum number of objects. Both methods were successfully applied together with the subsequent textural analysis.
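
    The binarization criterion above, choosing the threshold that maximizes the number of objects, can be sketched with a simple connected-component count (4-connectivity flood fill; the synthetic image is illustrative, not fractographic data):

```python
import numpy as np

# Count 4-connected foreground objects with an explicit flood fill.
def count_objects(binary):
    seen = np.zeros_like(binary, dtype=bool)
    h, w = binary.shape
    n = 0
    for y in range(h):
        for x in range(w):
            if binary[y, x] and not seen[y, x]:
                n += 1
                stack = [(y, x)]
                seen[y, x] = True
                while stack:
                    cy, cx = stack.pop()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            stack.append((ny, nx))
    return n

def best_threshold(img):
    """Threshold maximizing the object count, per the paper's criterion."""
    return max(range(int(img.min()) + 1, int(img.max()) + 1),
               key=lambda t: count_objects(img >= t))

# Two bright blobs on a dim background: the chosen threshold keeps
# the blobs separated as two objects.
img = np.full((10, 10), 50)
img[2:4, 2:4] = 200
img[6:8, 6:8] = 200
t = best_threshold(img)
print(count_objects(img >= t))  # 2
```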

  13. Functional imaging of the pancreas. Image processing techniques and clinical evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Nakanishi, Fumiko

    1984-02-01

    An image processing technique for functional imaging of the pancreas was developed and is reported here. In this paper, the clinical efficacy of the technique for detecting pancreatic abnormality is evaluated in comparison with conventional pancreatic scintigraphy and CT. For quantitative evaluation, the functional rate, i.e. the rate of normally functioning pancreatic area, was calculated from the functional image and the subtraction image. Two hundred and ninety-five cases were studied using this technique. The conventional image had a sensitivity of 65% and a specificity of 78%, while the use of functional imaging improved sensitivity to 88% and specificity to 88%. The mean functional rate in patients with pancreatic disease was significantly lower (33.3 ± 24.5 in patients with chronic pancreatitis, 28.1 ± 26.9 in patients with acute pancreatitis, 43.4 ± 22.3 in patients with diabetes mellitus, 20.4 ± 23.4 in patients with pancreatic cancer) than the mean functional rate in cases without pancreatic disease (86.4 ± 14.2). It is suggested that the functional image of the pancreas, reflecting pancreatic exocrine function, and the functional rate are useful indicators of pancreatic exocrine function.
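
    The functional rate is the percentage of the organ region that functions normally. A minimal version of that computation (the organ mask, threshold and synthetic data below are illustrative assumptions, not the study's actual processing chain):

```python
import numpy as np

# Functional rate: percentage of pixels inside the organ mask whose
# functional-image value meets a normality threshold.
def functional_rate(functional_img, organ_mask, threshold):
    region = functional_img[organ_mask]
    return 100.0 * (region >= threshold).sum() / region.size

func = np.zeros((10, 10))
mask = np.zeros((10, 10), dtype=bool)
mask[2:8, 2:8] = True        # 36-pixel organ region
func[2:8, 2:5] = 1.0         # half of the region functions normally
print(functional_rate(func, mask, threshold=0.5))  # 50.0
```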

  14. Some computer applications and digital image processing in nuclear medicine

    International Nuclear Information System (INIS)

    Lowinger, T.

    1981-01-01

    Methods of digital image processing are applied to problems in nuclear medicine imaging. The symmetry properties of central nervous system lesions are exploited in an attempt to determine the three-dimensional radioisotope density distribution within the lesions. An algorithm developed by astronomers at the end of the 19th century to determine the distribution of matter in globular clusters is applied to tumors. This algorithm permits the emission-computed-tomographic reconstruction of spherical lesions from a single view. The three-dimensional radioisotope distribution derived by the application of the algorithm can be used to characterize the lesions. The applicability of ten edge detection methods in general use in digital image processing to nuclear medicine images was evaluated. A general model of image formation by scintillation cameras is developed. The model assumes that objects to be imaged are composed of a finite set of points. The validity of the model has been verified by its ability to duplicate experimental results. Practical applications of this work involve quantitative assessment of the distribution of radiopharmaceuticals under clinical situations and the study of image processing algorithms

  15. Quantitative imaging biomarkers: the application of advanced image processing and analysis to clinical and preclinical decision making.

    Science.gov (United States)

    Prescott, Jeffrey William

    2013-02-01

    The importance of medical imaging for clinical decision making has been steadily increasing over the last four decades. Recently, there has also been an emphasis on medical imaging for preclinical decision making, i.e., for use in pharmaceutical and medical device development. There is also a drive towards quantification of imaging findings by using quantitative imaging biomarkers, which can improve the sensitivity, specificity, accuracy and reproducibility of imaged characteristics used for diagnostic and therapeutic decisions. An important component of the discovery, characterization, validation and application of quantitative imaging biomarkers is the extraction of information and meaning from images through image processing and subsequent analysis. However, many advanced image processing and analysis methods are not applied directly to questions of clinical interest, i.e., for diagnostic and therapeutic decision making, which is a consideration that should be closely linked to the development of such algorithms. This article is meant to address these concerns. First, quantitative imaging biomarkers are introduced by providing definitions and concepts. Then, potential applications of advanced image processing and analysis to areas of quantitative imaging biomarker research are described; specifically, research into osteoarthritis (OA), Alzheimer's disease (AD) and cancer is presented. Then, challenges in quantitative imaging biomarker research are discussed. Finally, a conceptual framework for integrating clinical and preclinical considerations into the development of quantitative imaging biomarkers and their computer-assisted methods of extraction is presented.

  16. Mathematical foundations of image processing and analysis

    CERN Document Server

    Pinoli, Jean-Charles

    2014-01-01

    Mathematical Imaging is currently a rapidly growing field in applied mathematics, with an increasing need for theoretical mathematics. This book, the second of two volumes, emphasizes the role of mathematics as a rigorous basis for imaging sciences. It provides a comprehensive and convenient overview of the key mathematical concepts, notions, tools and frameworks involved in the various fields of gray-tone and binary image processing and analysis, by proposing a large but coherent set of symbols and notations, a complete list of subjects and a detailed bibliography. It establishes a bridge…

  17. Document Examination: Applications of Image Processing Systems.

    Science.gov (United States)

    Kopainsky, B

    1989-12-01

    Dealing with images is a familiar business for an expert in questioned documents: microscopic, photographic, infrared, and other optical techniques generate images containing the information he or she is looking for. A recent method for extracting most of this information is digital image processing, ranging from simple contrast and contour enhancement to the advanced restoration of blurred texts. When combined with a sophisticated physical imaging system, an image processing system has proven to be a powerful and fast tool for routine non-destructive scanning of suspect documents. This article reviews frequent applications, comprising techniques to increase legibility, two-dimensional spectroscopy (ink discrimination, alterations, erased entries, etc.), comparison techniques (stamps, typescript letters, photo substitution), and densitometry. Computerized comparison of handwriting is not included. Copyright © 1989 Central Police University.

  18. Methods for the processing and analysis of functional and anatomical brain images: computerized tomography, emission tomography and nuclear magnetic resonance imaging

    International Nuclear Information System (INIS)

    Mazoyer, B.M.

    1988-01-01

    The various methods for brain image processing and analysis are presented and compared. The following topics are developed: the physical basis of brain image comparison (nature and formation of signals, intrinsic performance of the methods, image characteristics); mathematical methods for image processing and analysis (filtering, functional parameter extraction, morphological analysis, robotics and artificial intelligence); methods for anatomical localization (neuro-anatomy atlas, proportional stereotaxic atlas, digitized atlas); methodology of cerebral image superposition (normalization, registration); image networks [fr

  19. Some applications of nonlinear diffusion to processing of dynamic evolution images

    International Nuclear Information System (INIS)

    Goltsov, Alexey N.; Nikishov, Sergey A.

    1997-01-01

    A model nonlinear diffusion equation with the simplest Landau-Ginzburg free energy functional was applied to locate boundaries between meaningful regions of low-level images. The method is oriented to processing images of objects that are the result of dynamic evolution: images of different organs and tissues obtained by radiography and NMR methods, electron microscope images of morphogenesis fields, etc. In the method developed by the authors, the parameters of the nonlinear diffusion model are chosen on the basis of a preliminary treatment of the images: the parameters of the Landau-Ginzburg free energy functional are extracted from the structure factor of the images. Owing to this choice of model parameters, the image to be processed lies in the vicinity of a steady state of the diffusion equation. The suggested method allows one to separate distinct structures having specific spatial characteristics from the whole image. The method was applied to processing X-ray images of the lung.
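    The abstract does not give the full model, so as an illustrative sketch, here is one explicit step of a generic edge-preserving nonlinear diffusion (a Perona-Malik-style conductance rather than the authors' Landau-Ginzburg functional): boundaries between regions survive smoothing because the conductance g vanishes across strong gradients.

```python
import numpy as np

def diffusion_step(u, kappa=0.3, dt=0.2):
    """One explicit step of nonlinear diffusion du/dt = div(g(|grad u|) grad u)."""
    # Differences to the four neighbors (periodic boundary via roll)
    dn = np.roll(u, -1, axis=0) - u
    ds = np.roll(u, 1, axis=0) - u
    de = np.roll(u, -1, axis=1) - u
    dw = np.roll(u, 1, axis=1) - u
    # Edge-stopping conductance: near 1 for small differences, near 0 across edges
    g = lambda d: np.exp(-(d / kappa) ** 2)
    return u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)

rng = np.random.default_rng(0)
noisy = rng.normal(0.0, 0.1, (64, 64))
noisy[:, 32:] += 1.0      # a step edge that should survive smoothing
smoothed = noisy.copy()
for _ in range(20):
    smoothed = diffusion_step(smoothed)
# Noise inside each region is smoothed while the step edge stays sharp
```

    The kappa parameter plays the role the abstract assigns to the structure factor: it separates gradient magnitudes to be smoothed (noise) from those to be preserved (region boundaries).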

  20. Prevalence of endemic goiter by the ultrasound method, with determination of urinary iodine and iodine in salt, in schoolchildren of Paraguay (PREVALENCIA DE BOCIO ENDEMICO POR EL METODO ECOGRAFICO, DETERMINACION DE YODURIAS Y YODO EN SAL EN ESCOLARES DEL PARAGUAY).

    OpenAIRE

    Jara Y, Jorge A; Pretell, Eduardo A; Zaracho de Irazusta, Juana; Goetting, Sonia; Riveros, Claudia

    2004-01-01

    Paraguay, a landlocked country located in the heart of South America, with an area of 406,542 km² and a population of 5.8 million inhabitants, imports all the salt it consumes from nearby countries such as Argentina, Brazil and Chile. The present observational, descriptive study used the ultrasound method to determine the size and characteristics of the thyroid gland; 1,034 schoolchildren of both sexes from 13 districts of the country were examined, and the study was carried out over three months…

  1. A Conflict Between the Efficient Market Hypothesis and Behavioral Finance (Etkin Piyasalar Hipotezi ve Davranışsal Finans Çatışması)

    Directory of Open Access Journals (Sweden)

    İhsan Kulali

    2016-03-01

    Full Text Available The efficient market hypothesis (EMH) is widely accepted by most financial economists. Proponents of the model believe that capital markets are efficient because stock prices reflect all relevant available information. When new information emerges, the news spreads instantly and is reflected in stock prices without delay. According to this view, there is no need for technical or fundamental analysis to predict future stock prices, because no investor can beat the market and earn excess returns. The model assumes that all investors are rational individuals seeking to maximize their utility. In the early twentieth century, behavioral finance came onto the agenda. Many financial economists believe that stock prices are, at least to some extent, predictable, and many market anomalies are explained by psychological and behavioral factors. Behavioral finance regards investors as normal rather than rational; accordingly, market bubbles and crises originate from cognitive biases.

  2. Tracker: Image-Processing and Object-Tracking System Developed

    Science.gov (United States)

    Klimek, Robert B.; Wright, Theodore W.

    1999-01-01

    Tracker is an object-tracking and image-processing program designed and developed at the NASA Lewis Research Center to help with the analysis of images generated by microgravity combustion and fluid physics experiments. Experiments are often recorded on film or videotape for later analysis. Tracker automates the process of examining each frame of the recorded experiment, performing image-processing operations to bring out the desired detail, and recording the positions of the objects of interest. It can load sequences of images from disk files or acquire images (via a frame grabber) from film transports, videotape, laser disks, or a live camera. Tracker controls the image source to automatically advance to the next frame. It can employ a large array of image-processing operations to enhance the detail of the acquired images and can analyze an arbitrarily large number of objects simultaneously. Several different tracking algorithms are available, including conventional threshold and correlation-based techniques, and more esoteric procedures such as "snake" tracking and automated recognition of character data in the image. The Tracker software was written to be operated by researchers, so every attempt was made to make it as user friendly and self-explanatory as possible. Tracker is used by most of the microgravity combustion and fluid physics experiments performed by Lewis, and by visiting researchers. This includes experiments performed on the space shuttles, Mir, sounding rockets, zero-g research airplanes, drop towers, and ground-based laboratories. The software automates the analysis of the flame's or liquid's physical parameters such as position, velocity, acceleration, size, shape, intensity characteristics, color, and centroid, as well as a number of other measurements. It can perform these operations on multiple objects simultaneously. Another key feature of Tracker is that it performs optical character recognition (OCR). This feature is useful in…

  3. Improved cancer diagnostics by different image processing techniques on OCT images

    Science.gov (United States)

    Kanawade, Rajesh; Lengenfelder, Benjamin; Marini Menezes, Tassiana; Hohmann, Martin; Kopfinger, Stefan; Hohmann, Tim; Grabiec, Urszula; Klämpfl, Florian; Gonzales Menezes, Jean; Waldner, Maximilian; Schmidt, Michael

    2015-07-01

    Optical coherence tomography (OCT) is a promising non-invasive, high-resolution imaging modality which can be used for cancer diagnosis and therapeutic assessment. However, speckle noise makes the detection of cancer boundaries and image segmentation problematic and unreliable. Therefore, to improve image analysis for precise detection of cancer borders, the performance of different image processing algorithms such as the mean, median, and hybrid median filters and the rotational kernel transformation (RKT) is investigated. This is done on OCT images acquired ex vivo from human cancerous mucosa and in vitro from cultivated tumour tissue applied to organotypic hippocampal slice cultures. The preliminary results confirm that the border between healthy tissue and cancer lesions can be identified precisely; the obtained results are verified with fluorescence microscopy. This research can improve cancer diagnosis and the detection of borders between healthy and cancerous tissue. Thus, it could also reduce the number of biopsies required during screening endoscopy by providing better guidance to the physician.
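    Among the filters compared, the median filter is the standard speckle-suppression baseline. A minimal pure-NumPy 3×3 version (an illustrative sketch, not the authors' implementation) shows why it removes isolated speckle while leaving flat regions untouched:

```python
import numpy as np

def median3x3(img):
    """3x3 median filter with edge-replicated borders, for speckle suppression."""
    padded = np.pad(img, 1, mode='edge')
    # Stack the 9 shifted views that make up each pixel's neighborhood
    stack = np.stack([padded[r:r + img.shape[0], c:c + img.shape[1]]
                      for r in range(3) for c in range(3)])
    return np.median(stack, axis=0)

img = np.full((32, 32), 10.0)
img[16, 16] = 200.0        # an isolated speckle
clean = median3x3(img)
# The speckle is the only outlier in its 3x3 neighborhood, so the median drops it
```

    A hybrid median filter, as mentioned in the abstract, refines this by taking medians over cross- and diagonal-shaped subneighborhoods first, which better preserves thin lines and corners.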

  4. Smartphones as image processing systems for prosthetic vision.

    Science.gov (United States)

    Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Suaning, Gregg J

    2013-01-01

    The feasibility of implants for prosthetic vision has been demonstrated by research and commercial organizations. In most devices, an essential forerunner to the internal stimulation circuit is an external electronics solution for capturing, processing and relaying image information, as well as extracting useful features from the scene surrounding the patient. The capabilities and multitude of image processing algorithms that can be performed by the device in real time play a major part in the final quality of the prosthetic vision. It is therefore optimal to use powerful hardware while avoiding bulky, straining solutions. Recent publications have reported portable single-board computers fast enough for computationally intensive image processing. Following the rapid evolution of commercial, ultra-portable ARM (Advanced RISC Machine) mobile devices, the authors investigated the feasibility of modern smartphones running complex face detection as external processing devices for vision implants. The role of dedicated graphics processors in speeding up computation was evaluated while performing a demanding noise reduction algorithm (image denoising). The time required for face detection was found to decrease by 95% from 2.5-year-old devices to recent ones. In denoising, graphics acceleration played a major role, speeding up denoising by a factor of 18. These results demonstrate that the technology has matured sufficiently to be considered a valid external electronics platform for visual prosthetic research.

  5. Interventions to reduce salt consumption through labeling (Intervenciones para reducir el consumo de sal a través del etiquetado)

    Directory of Open Access Journals (Sweden)

    Javier Sanz-Valero

    2012-04-01

    Full Text Available OBJECTIVE: To determine the extent to which the labeling of food products informs consumers about salt content. METHODS: A critical and systematic analysis was conducted of 9 studies, selected from a total of 133 collected in a review of the scientific literature on interventions in human populations aimed at reducing salt consumption through label messaging. All information was obtained by direct consultation and via the Internet from the scientific literature collected in several databases. RESULTS: Of the 133 articles retrieved, after the inclusion and exclusion criteria were applied, 9 studies were selected for review; all of them assessed the study population's ability to interpret the salt content information on food labels. CONCLUSIONS: Food consumers understand and value logos more than the nutritional composition listed on the label. This would justify the use of alternative, standardized logos to convey this information, a conclusion reinforced by the finding that easily understood symbols help consumers make correct choices.

  6. A Robust Photogrammetric Processing Method of Low-Altitude UAV Images

    Directory of Open Access Journals (Sweden)

    Mingyao Ai

    2015-02-01

    Full Text Available Low-altitude Unmanned Aerial Vehicle (UAV) images, which involve distortion, illumination variance, and large rotation angles, pose multiple challenges for image orientation and image processing. In this paper, a robust and convenient photogrammetric approach is proposed for processing low-altitude UAV images, involving a strip management method to automatically build a standardized regional aerial triangulation (AT) network, a parallel inner orientation algorithm, a ground control point (GCP) prediction method, and an improved Scale Invariant Feature Transform (SIFT) method to produce a large number of evenly distributed, reliable tie points for bundle adjustment (BA). A multi-view matching approach is improved to produce Digital Surface Models (DSMs) and Digital Orthophoto Maps (DOMs) for 3D visualization. Experimental results show that the proposed approach is robust and feasible for the photogrammetric processing of low-altitude UAV images and the 3D visualization of products.

  7. Linear Algebra and Image Processing

    Science.gov (United States)

    Allali, Mohamed

    2010-01-01

    We use the computing technology of digital image processing (DIP) to enhance the teaching of linear algebra so as to make the course more visual and interesting. This visual approach of using technology to link linear algebra to DIP is interesting and unexpected to students as well as faculty. (Contains 2 tables and 11 figures.)
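    A typical exercise linking the two subjects (an illustrative example, not taken from the article): treating a grayscale image as a matrix, geometric flips become multiplication by the anti-diagonal permutation matrix J.

```python
import numpy as np

A = np.arange(16).reshape(4, 4).astype(float)  # a toy 4x4 "image"

# The anti-diagonal permutation matrix J reverses row (or column) order
J = np.fliplr(np.eye(4))

vertical_flip = J @ A      # left-multiplication permutes rows
horizontal_flip = A @ J    # right-multiplication permutes columns
rotate_180 = J @ A @ J     # both together rotate the image 180 degrees

assert np.array_equal(vertical_flip, A[::-1, :])
assert np.array_equal(horizontal_flip, A[:, ::-1])
assert np.array_equal(rotate_180, A[::-1, ::-1])
```

    Students can verify each identity visually on an actual image, which makes the abstract row/column action of matrix multiplication concrete.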

  8. SPARX, a new environment for Cryo-EM image processing.

    Science.gov (United States)

    Hohn, Michael; Tang, Grant; Goodyear, Grant; Baldwin, P R; Huang, Zhong; Penczek, Pawel A; Yang, Chao; Glaeser, Robert M; Adams, Paul D; Ludtke, Steven J

    2007-01-01

    SPARX (single particle analysis for resolution extension) is a new image processing environment with a particular emphasis on transmission electron microscopy (TEM) structure determination. It includes a graphical user interface that provides a complete graphical programming environment with a novel data/process-flow infrastructure, an extensive library of Python scripts that perform specific TEM-related computational tasks, and a core library of fundamental C++ image processing functions. In addition, SPARX relies on the EMAN2 library and cctbx, the open-source computational crystallography library from PHENIX. The design of the system is such that future inclusion of other image processing libraries is a straightforward task. The SPARX infrastructure intelligently handles retention of intermediate values, even those inside programming structures such as loops and function calls. SPARX and all dependencies are free for academic use and available with complete source.

  9. Dehydration process of fish analyzed by neutron beam imaging

    International Nuclear Information System (INIS)

    Tanoi, K.; Hamada, Y.; Seyama, S.; Saito, T.; Iikura, H.; Nakanishi, T.M.

    2009-01-01

    Since regulation of the water content of dried fish is an important factor for its quality, the water-loss process during the drying of squid and Japanese horse mackerel was analyzed by neutron beam imaging. The neutron images showed that around the shoulder of the mackerel there was a region where the water content tended to remain high during drying. To analyze the water-loss process in more detail, spatial images were produced. From these images, it was clearly indicated that the decrease in water content was slowest around the shoulder part. It was suggested that preventing deterioration around the shoulder part of the dried fish is important for keeping the quality of the dried fish during storage.

  10. Image processing and pattern recognition with CVIPtools MATLAB toolbox: automatic creation of masks for veterinary thermographic images

    Science.gov (United States)

    Mishra, Deependra K.; Umbaugh, Scott E.; Lama, Norsang; Dahal, Rohini; Marino, Dominic J.; Sackman, Joseph

    2016-09-01

    CVIPtools is a software package for the exploration of computer vision and image processing, developed in the Computer Vision and Image Processing Laboratory at Southern Illinois University Edwardsville. CVIPtools is available in three variants - a) the CVIPtools graphical user interface, b) the CVIPtools C library and c) the CVIPtools MATLAB toolbox - which makes it accessible to a variety of different users. It offers students, faculty, researchers and any other user a free and easy way to explore computer vision and image processing techniques. Many functions have been implemented and are updated on a regular basis, and the library has reached a level of sophistication that makes it suitable for both educational and research purposes. In this paper, a detailed list of the functions available in the CVIPtools MATLAB toolbox is presented, along with how these functions can be used in image analysis and computer vision applications. The CVIPtools MATLAB toolbox allows the user to gain practical experience to better understand underlying theoretical problems in image processing and pattern recognition. As an example application, the algorithm for the automatic creation of masks for veterinary thermographic images is presented.

  11. Surface regions of illusory images are detected with a slower processing speed than those of luminance-defined images.

    Science.gov (United States)

    Mihaylova, Milena; Manahilov, Velitchko

    2010-11-24

    Research has shown that the processing time for discriminating illusory contours is longer than for real contours. Little is known, however, about whether the visual processes associated with detecting regions of illusory surfaces are also slower than those responsible for detecting luminance-defined images. Using a speed-accuracy trade-off (SAT) procedure, we measured accuracy as a function of processing time for detecting illusory Kanizsa-type and luminance-defined squares embedded in 2D static luminance noise. The data revealed that the illusory images were detected at a slower processing speed than the real images, while the points in time when accuracy departed from chance were not significantly different for the two stimuli. The classification images for detecting illusory and real squares showed that observers employed similar detection strategies using surface regions of the real and illusory squares. The lack of significant differences between the x-intercepts of the SAT functions for illusory and luminance-modulated stimuli suggests that the detection of surface regions of both images could be based on activation of a single mechanism (the dorsal magnocellular visual pathway). The slower speed for detecting illusory images as compared to luminance-defined images could be attributed to slower processes of filling-in of regions of illusory images within the dorsal pathway.

  12. A language for image processing HILLS and its supporting system SDIP

    International Nuclear Information System (INIS)

    Suzuki, H.; Toriwaki, J.

    1984-01-01

    This paper presents a language, HILLS, and its supporting system, SDIP, for image processing. HILLS is a keyword-type language for describing image processing procedures using the subroutine packages SLIP and SPIDER. SDIP, written in FORTRAN for portability, supports programming in HILLS in interactive mode, including functions such as editing, translating HILLS into FORTRAN, error detection, and providing manual information. Results of preliminary experiments suggest that HILLS and SDIP are very useful tools for beginners and for researchers in application fields of image processing who wish to develop image analysis procedures.

  13. Optimized image processing with modified preprocessing of image data sets of a transparent imaging plate by way of the lateral view of the cervical spine

    International Nuclear Information System (INIS)

    Reissberg, S.; Hoeschen, C.; Redlich, U.; Scherlach, C.; Preuss, H.; Kaestner, A.; Doehring, W.; Woischneck, D.; Schuetze, M.; Reichardt, K.; Firsching, R.

    2002-01-01

    Purpose: To improve the diagnostic quality of lateral radiographs of the cervical spine by pre-processing the image data sets produced by a transparent imaging plate with both-side reading, and to evaluate any possible impact on minimizing the number of additional radiographs and supplementary investigations. Material and Methods: One hundred lateral digital radiographs of the cervical spine were processed with two different methods: processing of each data set using the system-immanent parameters, and using the manual mode. The difference between the two types of processing is the level of the latitude value. Hard copies of the processed images were judged by five radiologists and three neurosurgeons. The evaluation applied the image criteria score (ICS) without conventional reference images. Results: In 99% of the lateral radiographs of the cervical spine, all vertebral bodies could be completely delineated using the manual mode, but only 76% of the images processed with the system-immanent parameters showed all vertebral bodies. Thus, the manual mode enabled the evaluation of up to two additional, more caudal vertebral bodies. The manual mode processing was significantly better concerning object size and processing artifacts. This optimized image processing and the resultant minimization of supplementary investigations were calculated to correspond to a theoretical dose reduction of about 50%. (orig.) [de

  14. Image processing with ImageJ

    NARCIS (Netherlands)

    Abramoff, M.D.; Magalhães, Paulo J.; Ram, Sunanda J.

    2004-01-01

    Wayne Rasband of NIH has created ImageJ, an open-source, Java-written program that is now at version 1.31 and is used for many imaging applications, including those that span the gamut from skin analysis to neuroscience. ImageJ is in the public domain and runs on any operating system (OS).

  15. Digital image processing software system using an array processor

    International Nuclear Information System (INIS)

    Sherwood, R.J.; Portnoff, M.R.; Journeay, C.H.; Twogood, R.E.

    1981-01-01

    A versatile array processor-based system for general-purpose image processing was developed. At the heart of this system is an extensive, flexible software package that incorporates the array processor for effective interactive image processing. The software system is described in detail, and its application to a diverse set of applications at LLNL is briefly discussed. 4 figures, 1 table

  16. Fully automated rodent brain MR image processing pipeline on a Midas server: from acquired images to region-based statistics.

    Science.gov (United States)

    Budin, Francois; Hoogstoel, Marion; Reynolds, Patrick; Grauer, Michael; O'Leary-Moore, Shonagh K; Oguz, Ipek

    2013-01-01

    Magnetic resonance imaging (MRI) of rodent brains enables study of the development and integrity of the brain under certain conditions (alcohol, drugs, etc.). However, these images are difficult to analyze for biomedical researchers with limited image processing experience. In this paper we present an image processing pipeline running on a Midas server, a web-based data storage system. It is composed of the following steps: rigid registration, skull-stripping, average computation, average parcellation, parcellation propagation to individual subjects, and computation of region-based statistics on each image. The pipeline is easy to configure and requires very little image processing knowledge. We present results obtained by processing a data set using this pipeline and demonstrate how it can be used to find differences between populations.
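    The final pipeline step, region-based statistics, reduces to aggregating voxel intensities by parcellation label. A minimal sketch (hypothetical toy data, not the pipeline's actual code) using np.bincount:

```python
import numpy as np

def region_means(intensity, labels):
    """Mean intensity per parcellation label (index i holds the mean of region i)."""
    sums = np.bincount(labels.ravel(), weights=intensity.ravel())
    counts = np.bincount(labels.ravel())
    with np.errstate(invalid='ignore'):      # empty regions yield NaN, not an error
        return sums / counts

# Toy parcellation: region 1 is the top half, region 2 the bottom half
labels = np.zeros((4, 4), dtype=int)
labels[:2, :] = 1
labels[2:, :] = 2
intensity = np.arange(16, dtype=float).reshape(4, 4)
means = region_means(intensity, labels)
# means[1] is the mean of values 0..7, means[2] the mean of 8..15
```

    The same aggregation extends directly to per-region variance or volume, which is what population comparisons in such pipelines are typically built on.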

  17. Anniversary Paper: Image processing and manipulation through the pages of Medical Physics

    International Nuclear Information System (INIS)

    Armato, Samuel G. III; Ginneken, Bram van

    2008-01-01

    The language of radiology has gradually evolved from "the film" (the foundation of radiology since Wilhelm Roentgen's 1895 discovery of x-rays) to "the image," an electronic manifestation of a radiologic examination that exists within the bits and bytes of a computer. Rather than simply storing and displaying radiologic images in a static manner, the computational power of the computer may be used to enhance a radiologist's ability to visually extract information from the image through image processing and image manipulation algorithms. Image processing tools provide a broad spectrum of opportunities for image enhancement. Gray-level manipulations such as histogram equalization, spatial alterations such as geometric distortion correction, preprocessing operations such as edge enhancement, and enhanced radiography techniques such as temporal subtraction provide powerful methods to improve the diagnostic quality of an image or to enhance structures of interest within an image. Furthermore, these image processing algorithms provide the building blocks of more advanced computer vision methods. The prominent role of medical physicists and the AAPM in the advancement of medical image processing methods, and in the establishment of the "image" as the fundamental entity in radiology and radiation oncology, has been captured in 35 volumes of Medical Physics.
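    One of the gray-level manipulations named above, histogram equalization, can be sketched in a few lines: each gray level is mapped to its cumulative distribution value, which stretches a crowded histogram across the full output range (illustrative NumPy code, not from the article):

```python
import numpy as np

def equalize(img, levels=256):
    """Histogram equalization of an 8-bit image via the CDF mapping."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / img.size        # cumulative distribution of gray levels
    return np.round(cdf[img] * (levels - 1)).astype(np.uint8)

# A low-contrast image: values crowded into [100, 120]
rng = np.random.default_rng(1)
dark = rng.integers(100, 121, size=(64, 64)).astype(np.uint8)
bright = equalize(dark)
# The output now spreads over nearly the full [0, 255] range
```

    The mapping is monotone, so the ordering of gray levels (and hence image structure) is preserved while contrast increases.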

  18. Surface Distresses Detection of Pavement Based on Digital Image Processing

    OpenAIRE

    Ouyang , Aiguo; Luo , Chagen; Zhou , Chao

    2010-01-01

    International audience; Pavement cracking is the main form of early pavement distress. The use of digital photography to record pavement images, with subsequent crack detection and classification, has undergone continuous improvement over the past decade. Digital image processing has been applied to detect pavement cracks for its advantages of handling a large amount of information and of automatic detection. The applications of digital image processing in pavement crack detection, distress classificati…

  19. Bubble feature extracting based on image processing of coal flotation froth

    Energy Technology Data Exchange (ETDEWEB)

    Wang, F.; Wang, Y.; Lu, M.; Liu, W. [China University of Mining and Technology, Beijing (China). Dept of Chemical Engineering and Environment

    2001-11-01

    Using image processing, the contrast between the bubbles on the surface of flotation froth and the image background was enhanced, and the edges of the bubbles were extracted. Thus a model of the relation between the statistical features of the bubbles in the image and the cleaned coal can be established. It is feasible to extract the bubbles by processing the froth image of coal flotation on the basis of analysing bubble shape. By processing 51 groups of images sampled from a laboratory column, it was found that histogram equalization of the image gray levels and median filtering can obviously improve the dynamic contrast range and the brightness of the bubbles. Finally, threshold cutting and bubble edge detection for extracting the bubbles are also discussed, in order to describe bubble features such as size and shape in the froth image and to distinguish froth images of coal flotation. 6 refs., 3 figs.
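    The threshold-cut and edge-detection step described above can be sketched on a synthetic froth image (illustrative only; the image and threshold are invented): a bubble mask comes from a gray-level cut, and edge pixels are mask pixels touching the background.

```python
import numpy as np

def bubble_mask_and_edges(img, thresh):
    """Threshold-cut to isolate bright bubble tops, then mark the mask's edges."""
    mask = img > thresh
    # An interior pixel is a mask pixel whose four neighbors are all in the mask
    interior = (mask
                & np.roll(mask, 1, 0) & np.roll(mask, -1, 0)
                & np.roll(mask, 1, 1) & np.roll(mask, -1, 1))
    return mask, mask & ~interior            # edges = mask minus interior

# Synthetic froth image: one bright circular "bubble" on a dark background
yy, xx = np.mgrid[:32, :32]
img = np.where((yy - 16) ** 2 + (xx - 16) ** 2 <= 64, 180.0, 40.0)
mask, edges = bubble_mask_and_edges(img, thresh=100)
area = mask.sum()         # bubble size in pixels
perimeter = edges.sum()   # rough shape descriptor
```

    Statistics such as the area and perimeter per bubble are exactly the size and shape features the abstract proposes to correlate with the cleaned coal.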

  20. The study of image processing of parallel digital signal processor

    International Nuclear Information System (INIS)

    Liu Jie

    2000-01-01

    The author analyzes the basic characteristics of the parallel DSP (digital signal processor) TMS320C80 and proposes optimized image algorithms and a parallel processing method based on this parallel DSP. Real-time performance for many image processing tasks can be achieved in this way.

  1. Consumer attitudes, knowledge, and behavior related to salt consumption in sentinel countries of the Americas (Actitudes, conocimientos y comportamiento de los consumidores en relación con el consumo de sal en países centinelas de la Región de las Américas)

    Directory of Open Access Journals (Sweden)

    Rafael Moreira Claro

    2012-10-01

    Full Text Available OBJECTIVE: To describe individual attitudes, knowledge, and behavior regarding salt intake, its dietary sources, and current food-labeling practices related to salt and sodium in five sentinel countries of the Americas. METHODS: A convenience sample of 1 992 adults (≥ 18 years old) from Argentina, Canada, Chile, Costa Rica, and Ecuador (approximately 400 from each country) was obtained between September 2010 and February 2011. Data collection was conducted in shopping malls or major commercial areas using a questionnaire containing 33 questions. Descriptive estimates are presented for the total sample and stratified by country and sociodemographic characteristics of the studied population. RESULTS: Almost 90% of participants associated excess intake of salt with the occurrence of adverse health conditions, more than 60% indicated they were trying to reduce their current intake of salt, and more than 30% believed reducing dietary salt to be of high importance. Only 26% of participants claimed to know of the existence of a recommended maximum value of salt or sodium intake, and 47% of them stated they knew the salt content of food items. More than 80% of participants said that they would like food labeling to indicate high, medium, and low levels of salt or sodium and would like to see a clear warning label on packages of foods high in salt. CONCLUSIONS: Additional effort is required to increase consumers' knowledge about the existence of a maximum limit for intake and to improve their capacity to accurately monitor and reduce their personal salt consumption.

  2. Recent Advances in Techniques for Hyperspectral Image Processing

    Science.gov (United States)

    Plaza, Antonio; Benediktsson, Jon Atli; Boardman, Joseph W.; Brazile, Jason; Bruzzone, Lorenzo; Camps-Valls, Gustavo; Chanussot, Jocelyn; Fauvel, Mathieu; Gamba, Paolo; Gualtieri, Anthony; et al.

    2009-01-01

    Imaging spectroscopy, also known as hyperspectral imaging, has been transformed in less than 30 years from being a sparse research tool into a commodity product available to a broad user community. Currently, there is a need for standardized data processing techniques able to take into account the special properties of hyperspectral data. In this paper, we provide a seminal view on recent advances in techniques for hyperspectral image processing. Our main focus is on the design of techniques able to deal with the high-dimensional nature of the data, and to integrate the spatial and spectral information. Performance of the discussed techniques is evaluated in different analysis scenarios. To satisfy time-critical constraints in specific applications, we also develop efficient parallel implementations of some of the discussed algorithms. Combined, these parts provide an excellent snapshot of the state of the art in those areas, and offer a thoughtful perspective on future potentials and emerging challenges in the design of robust hyperspectral imaging algorithms.

  3. Processing Visual Images

    International Nuclear Information System (INIS)

    Litke, Alan

    2006-01-01

    The back of the eye is lined by an extraordinary biological pixel detector, the retina. This neural network is able to extract vital information about the external visual world, and transmit this information in a timely manner to the brain. In this talk, Professor Litke will describe a system that has been implemented to study how the retina processes and encodes dynamic visual images. Based on techniques and expertise acquired in the development of silicon microstrip detectors for high energy physics experiments, this system can simultaneously record the extracellular electrical activity of hundreds of retinal output neurons. After presenting first results obtained with this system, Professor Litke will describe additional applications of this incredible technology.

  4. Conceptualization, Cognitive Process between Image and Word

    Directory of Open Access Journals (Sweden)

    Aurel Ion Clinciu

    2009-12-01

    Full Text Available The study explores the process of constituting and organizing the system of concepts. After a comparative analysis of image and concept, conceptualization is reconsidered by raising for discussion the relations of the concept with the image in general, and with the self-image mirrored in the body schema in particular. Taking into consideration the notion of mental space, an articulated perspective on conceptualization is developed, with the images of mental space at one pole and the categories of language and operations of thinking at the other. The study also explores the explanatory possibilities of Tversky's notion of diagrammatic space as an element necessary to understand the genesis of graphic behaviour and to define a new construct, graphic intelligence.

  5. Concrete Crack Identification Using a UAV Incorporating Hybrid Image Processing.

    Science.gov (United States)

    Kim, Hyunjun; Lee, Junhwa; Ahn, Eunjong; Cho, Soojin; Shin, Myoungsu; Sim, Sung-Han

    2017-09-07

    Crack assessment is an essential process in the maintenance of concrete structures. In general, concrete cracks are inspected by manual visual observation of the surface, which is intrinsically subjective as it depends on the experience of inspectors. Further, it is time-consuming, expensive, and often unsafe when inaccessible structural members are to be assessed. Unmanned aerial vehicle (UAV) technologies combined with digital image processing have recently been applied to crack assessment to overcome the drawbacks of manual visual inspection. However, identification of crack information in terms of width and length has not been fully explored in UAV-based applications, because of the absence of distance measurement and tailored image processing. This paper presents a crack identification strategy that combines hybrid image processing with UAV technology. Equipped with a camera, an ultrasonic displacement sensor, and a WiFi module, the system provides the image of cracks and the associated working distance from a target structure on demand. The obtained information is subsequently processed by hybrid image binarization to estimate the crack width accurately while minimizing the loss of crack length information. The proposed system has been shown to successfully measure cracks thicker than 0.1 mm with a maximum length estimation error of 7.3%.
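    The working distance reported by the ultrasonic sensor is what allows pixel counts to be converted into physical crack widths. As a hedged illustration of that step only (a plain pinhole-camera model with hypothetical parameter values, not the paper's calibration procedure), the conversion can be sketched as:

    ```python
    def pixel_to_mm(n_pixels, distance_mm, focal_mm, pixel_pitch_mm):
        """Convert a pixel count to millimetres on the target surface.

        Pinhole model (assumption): one pixel spans
        (working distance * pixel pitch / focal length) on the target plane.
        """
        return n_pixels * distance_mm * pixel_pitch_mm / focal_mm

    # Hypothetical values: a 5 mm lens with 2 um pixels at 1 m working
    # distance gives 0.4 mm per pixel, so a 3-pixel-wide crack is 1.2 mm.
    width_mm = pixel_to_mm(3, 1000.0, 5.0, 0.002)
    ```

    The key design point the abstract makes is exactly this dependency: without the measured working distance, a crack width in pixels cannot be expressed in engineering units.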

  6. Image processing of angiograms: A pilot study

    Science.gov (United States)

    Larsen, L. E.; Evans, R. A.; Roehm, J. O., Jr.

    1974-01-01

    The technology transfer application this report describes is the result of a pilot study of image-processing methods applied to the image enhancement, coding, and analysis of arteriograms. Angiography is a subspecialty of radiology that employs the introduction of media with high X-ray absorption into arteries in order to study vessel pathology as well as to infer disease of the organs supplied by the vessel in question.

  7. Fast processing of microscopic images using object-based extended depth of field.

    Science.gov (United States)

    Intarapanich, Apichart; Kaewkamnerd, Saowaluck; Pannarut, Montri; Shaw, Philip J; Tongsima, Sissades

    2016-12-22

    Microscopic analysis requires that foreground objects of interest, e.g. cells, are in focus. In a typical microscopic specimen, the foreground objects may lie on different depths of field, necessitating capture of multiple images taken at different focal planes. The extended depth of field (EDoF) technique is a computational method for merging images from different depths of field into a composite image with all foreground objects in focus. Composite images generated by EDoF can be applied in automated image processing and pattern recognition systems. However, current algorithms for EDoF are computationally intensive and impractical, especially for applications such as medical diagnosis where rapid sample turnaround is important. Since foreground objects typically constitute a minor part of an image, the EDoF technique could be made to work much faster if only foreground regions are processed to make the composite image. We propose a novel algorithm called object-based extended depth of field (OEDoF) to address this issue. The OEDoF algorithm consists of four major modules: 1) color conversion, 2) object region identification, 3) good contrast pixel identification and 4) detail merging. First, the algorithm employs color conversion to enhance contrast, followed by identification of foreground pixels. A composite image is constructed using only these foreground pixels, which dramatically reduces the computational time. We used 250 images obtained from 45 specimens of confirmed malaria infections to test our proposed algorithm. The resulting composite images with all in-focus objects were produced using the proposed OEDoF algorithm. We measured the performance of OEDoF in terms of image clarity (quality) and processing time. The features of interest selected by the OEDoF algorithm are comparable in quality with equivalent regions in images processed by the state-of-the-art complex wavelet EDoF algorithm; however, OEDoF required four times less processing time.
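    The core merge idea described above — for each pixel, take the focal slice with the best local contrast, but restrict the work to foreground pixels — can be sketched in NumPy. This is a generic, hedged illustration of the EDoF principle, not the published OEDoF implementation; the variance-based focus measure and the single foreground mask are simplifying assumptions:

    ```python
    import numpy as np

    def local_contrast(img, k=3):
        # Variance of intensity in a k x k neighborhood as a simple focus measure
        pad = k // 2
        p = np.pad(img.astype(float), pad, mode="edge")
        win = np.lib.stride_tricks.sliding_window_view(p, (k, k))
        return win.var(axis=(-2, -1))

    def oedof_merge(stack, fg_mask):
        """Merge a focal stack; only foreground pixels get the full treatment."""
        stack = np.asarray(stack, dtype=float)
        contrast = np.stack([local_contrast(im) for im in stack])
        best = contrast.argmax(axis=0)                 # sharpest slice per pixel
        merged = np.take_along_axis(stack, best[None], axis=0)[0]
        merged[~fg_mask] = stack[0][~fg_mask]          # background copied cheaply
        return merged
    ```

    The speed-up claimed in the abstract comes from the last two lines: background pixels skip the per-slice focus comparison entirely and are simply copied from one reference slice.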

  8. High Throughput Multispectral Image Processing with Applications in Food Science.

    Directory of Open Access Journals (Sweden)

    Panagiotis Tsakanikas

    Full Text Available Recently, machine vision is gaining attention in food science as well as in food industry concerning food quality assessment and monitoring. Into the framework of implementation of Process Analytical Technology (PAT) in the food industry, image processing can be used not only in estimation and even prediction of food quality but also in detection of adulteration. Towards these applications on food science, we present here a novel methodology for automated image analysis of several kinds of food products e.g. meat, vanilla crème and table olives, so as to increase objectivity, data reproducibility, low cost information extraction and faster quality assessment, without human intervention. Image processing's outcome will be propagated to the downstream analysis. The developed multispectral image processing method is based on unsupervised machine learning approach (Gaussian Mixture Models) and a novel unsupervised scheme of spectral band selection for segmentation process optimization. Through the evaluation we prove its efficiency and robustness against the currently available semi-manual software, showing that the developed method is a high throughput approach appropriate for massive data extraction from food samples.

  9. High Throughput Multispectral Image Processing with Applications in Food Science.

    Science.gov (United States)

    Tsakanikas, Panagiotis; Pavlidis, Dimitris; Nychas, George-John

    2015-01-01

    Recently, machine vision is gaining attention in food science as well as in food industry concerning food quality assessment and monitoring. Into the framework of implementation of Process Analytical Technology (PAT) in the food industry, image processing can be used not only in estimation and even prediction of food quality but also in detection of adulteration. Towards these applications on food science, we present here a novel methodology for automated image analysis of several kinds of food products e.g. meat, vanilla crème and table olives, so as to increase objectivity, data reproducibility, low cost information extraction and faster quality assessment, without human intervention. Image processing's outcome will be propagated to the downstream analysis. The developed multispectral image processing method is based on unsupervised machine learning approach (Gaussian Mixture Models) and a novel unsupervised scheme of spectral band selection for segmentation process optimization. Through the evaluation we prove its efficiency and robustness against the currently available semi-manual software, showing that the developed method is a high throughput approach appropriate for massive data extraction from food samples.
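    The unsupervised segmentation in this record rests on fitting a Gaussian mixture to pixel values. As a hedged sketch of that idea only — a minimal two-component, one-dimensional EM fit written from scratch, whereas the paper's model and its spectral band selection scheme are considerably richer:

    ```python
    import numpy as np

    def gmm2_em(x, iters=50):
        """Fit a 2-component 1-D Gaussian mixture to pixel intensities via EM."""
        x = np.asarray(x, dtype=float)
        mu = np.array([x.min(), x.max()])            # spread-out initial means
        var = np.array([x.var(), x.var()]) + 1e-6
        pi = np.array([0.5, 0.5])
        for _ in range(iters):
            # E-step: responsibility of each component for each pixel
            d = -(x[:, None] - mu) ** 2 / (2 * var)
            p = pi * np.exp(d) / np.sqrt(2 * np.pi * var)
            r = p / p.sum(axis=1, keepdims=True)
            # M-step: re-estimate weights, means and variances
            n = r.sum(axis=0)
            pi, mu = n / len(x), (r * x[:, None]).sum(axis=0) / n
            var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n + 1e-6
        return mu, var, pi

    def segment(img):
        mu, var, pi = gmm2_em(img.ravel())
        # Label each pixel with the nearer component mean (a simplification;
        # a full GMM would use posterior probabilities rather than distance)
        return np.abs(img[..., None] - mu).argmin(axis=-1)
    ```

    Being unsupervised, the fit needs no labeled training pixels, which is what makes the approach attractive for heterogeneous food samples.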

  10. Quantitative analysis of geomorphic processes using satellite image data at different scales

    Science.gov (United States)

    Williams, R. S., Jr.

    1985-01-01

    When aerial and satellite photographs and images are used in the quantitative analysis of geomorphic processes, either through direct observation of active processes or by analysis of landforms resulting from inferred active or dormant processes, a number of limitations in the use of such data must be considered. Active geomorphic processes work at different scales and rates. Therefore, the capability of imaging an active or dormant process depends primarily on the scale of the process and the spatial-resolution characteristic of the imaging system. Scale is an important factor in recording continuous and discontinuous active geomorphic processes, because what is not recorded will not be considered or even suspected in the analysis of orbital images. If the geomorphic process, or the landform change caused by the process, is less than 200 m in the x or y dimension, then it will not be recorded. Although the scale factor is critical in the recording of discontinuous active geomorphic processes, the repeat interval of orbital-image acquisition of a planetary surface also is a consideration, in order to capture a recurring short-lived geomorphic process or to record changes caused by either a continuous or a discontinuous geomorphic process.

  11. Preparation and provisional validation of a large size dried spike: Batch SAL-9934

    International Nuclear Information System (INIS)

    Jammet, G.; Zoigner, A.; Doubek, N.; Aigner, H.; Deron, S.; Bagliano, G.

    1990-05-01

    To determine uranium and plutonium concentration using isotope dilution mass spectrometry, weighed aliquands of a synthetic mixture containing about 2 mg of Pu (with a ²³⁹Pu abundance of about 98%) and 40 mg of U (with a ²³⁵U enrichment of about 19%) have been prepared and verified by SAL, to be used to spike samples of concentrated spent fuel solutions with a high burn-up and a low ²³⁵U enrichment. Certified Reference Materials Pu-NBL-126, natural U-NBS-960 and 93%-enriched U-NBL-116 were used to prepare a stock solution containing 3.2 mg/ml of Pu and 64.3 mg/ml of 18.8%-enriched U. Before shipment to the reprocessing plant, aliquands of the stock solution are dried to give Large Size Dried (LSD) Spikes which resist the shocks encountered during transportation, so that they can readily be recovered quantitatively at the plant. This paper describes the preparation and validation of a third batch of LSD-Spike, which is intended to be used as a common spike by the plant operator and the national and IAEA inspectorates. 6 refs, 6 tabs

  12. A Visual Environment for Real-Time Image Processing in Hardware (VERTIPH

    Directory of Open Access Journals (Sweden)

    Johnston CT

    2006-01-01

    Full Text Available Real-time video processing is an image-processing application that is ideally suited to implementation on FPGAs. We discuss the strengths and weaknesses of a number of existing languages and hardware compilers that have been developed for specifying image processing algorithms on FPGAs. We propose VERTIPH, a new multiple-view visual language that avoids the weaknesses we identify. A VERTIPH design incorporates three different views, each tailored to a different aspect of the image processing system under development; an overall architectural view, a computational view, and a resource and scheduling view.

  13. Monitoring of pellet coating process with image analysis—a feasibility study

    DEFF Research Database (Denmark)

    Kucheryavskiy, Sergey; Esbensen, Kim; Bogomolov, Andrey

    2010-01-01

    of the process samples' appearance, besides measurable distances, that may be connected to the information of interest. In the present paper, the methods of image analysis were applied to at-line monitoring of a fluid bed pellet coating process. The quantitative description of images of pellet samples, taken from different process stages, has been obtained using two different approaches: wavelet decomposition and the angle measure technique (AMT). Both methods revealed a strong correlation between image features and process parameters. However, the AMT results turned out to be more accurate and stable. It has been shown...

  14. Comparison of Pu isotopic composition between gamma and mass spectrometry: Experience from IAEA-SAL

    International Nuclear Information System (INIS)

    Parus, J.L.; Raab, W.

    1998-01-01

    About 2000 Pu-containing samples have been analysed during the last 8 years at SAL using gamma spectrometry (GS) in parallel with mass spectrometry (MS). Four different detectors have been used for the measurement of gamma-ray spectra, and several versions of the MGA program have been used for spectra evaluation. The results of Pu isotopic composition obtained by both methods have been systematically compared. Attempts to improve the agreement between GS and MS are described. This was done by adjustment of the emission probabilities for some gamma energies and the development of a new correlation equation for ²⁴²Pu. These improvements have been applied to the evaluation of two sets containing 320 and 404 samples, analysed in 1991 and in 1992-93, respectively. The mean differences between MS and GS and their standard deviations were calculated, showing mean relative differences for the ²³⁸–²⁴¹Pu isotopes in the range from 0.1 to 0.5%, with standard deviations within ±0.4 to ±1%. For ²⁴²Pu these values are about 0.5% and ±5%, respectively. (author)

  15. Image processing for drift compensation in fluorescence microscopy

    DEFF Research Database (Denmark)

    Petersen, Steffen; Thiagarajan, Viruthachalam; Coutinho, Isabel

    2013-01-01

    Fluorescence microscopy is characterized by low background noise; thus a fluorescent object appears as an area of high signal/noise. Thermal gradients may result in apparent motion of the object, leading to a blurred image. Here, we have developed an image processing methodology that may remove/reduce blur significantly for any type of microscopy. A total of ~100 images were acquired with a pixel size of 30 nm. The acquisition time for each image was approximately 1 second. We can quantify the drift in X and Y using the sub-pixel-accuracy computed centroid location of an image object in each frame. We can measure drifts down to approximately 10 nm in size, and a drift-compensated image can therefore be reconstructed on a grid of the same size using the “Shift and Add” approach, leading to an image of identical size as the individual images. We have also reconstructed the image using a 3-fold larger...
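    The drift quantification and “Shift and Add” reconstruction described above can be illustrated with a minimal sketch: compute the intensity-weighted centroid of the object in each frame, shift every frame so the centroids coincide, and average. This is a generic nearest-pixel version under stated assumptions (single bright object, `np.roll` wrap-around ignored), whereas the paper works at sub-pixel accuracy:

    ```python
    import numpy as np

    def centroid(frame):
        """Intensity-weighted centroid (sub-pixel object position)."""
        ys, xs = np.indices(frame.shape)
        m = frame.sum()
        return np.array([(ys * frame).sum() / m, (xs * frame).sum() / m])

    def shift_and_add(frames):
        """Align each frame to the first by its centroid drift, then average."""
        ref = centroid(frames[0])
        out = np.zeros_like(frames[0], dtype=float)
        for f in frames:
            dy, dx = np.round(ref - centroid(f)).astype(int)  # nearest pixel
            out += np.roll(np.roll(f, dy, axis=0), dx, axis=1)
        return out / len(frames)
    ```

    Reconstructing on a finer grid, as the abstract mentions, would replace the integer `np.round` shift with a sub-pixel placement of each frame onto an upsampled grid.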

  16. Multiresolution approach to processing images for different applications interaction of lower processing with higher vision

    CERN Document Server

    Vujović, Igor

    2015-01-01

    This book presents theoretical and practical aspects of the interaction between low and high level image processing. Multiresolution analysis owes its popularity mostly to wavelets and is widely used in a variety of applications. Low level image processing is important for the performance of many high level applications. The book includes examples from different research fields, i.e. video surveillance; biomedical applications (EMG and X-ray); improved communication, namely teleoperation, telemedicine, animation, augmented/virtual reality and robot vision; monitoring of the condition of ship systems and image quality control.

  17. Image processing with personal computer

    International Nuclear Information System (INIS)

    Hara, Hiroshi; Handa, Madoka; Watanabe, Yoshihiko

    1990-01-01

    The method of automating judgement work on photographs in radiation nondestructive inspection, using a simple commercially available image processor, was examined. Software for defect extraction and binarization and software for automatic judgement were made on a trial basis, and their accuracy and problem points were tested using various photographs on which judgements had already been made. Depending on the state of the objects photographed and the inspection conditions, judgement accuracies from 100% down to 45% were obtained. The criteria for judgement conformed to the collection of reference photographs made by the Japan Cast Steel Association. In non-destructive inspection by radiography, the number and size of defect images in photographs are judged visually, the results are collated with the standard, and the quality is decided. Recently, the technology of image processing with personal computers has advanced; by utilizing this technology, the automation of photograph judgement was attempted in order to improve accuracy, increase inspection efficiency and realize labor saving. (K.I.)

  18. Applications of evolutionary computation in image processing and pattern recognition

    CERN Document Server

    Cuevas, Erik; Perez-Cisneros, Marco

    2016-01-01

    This book presents the use of efficient Evolutionary Computation (EC) algorithms for solving diverse real-world image processing and pattern recognition problems. It provides an overview of the different aspects of evolutionary methods in order to enable the reader to reach a global understanding of the field and to conduct studies on specific evolutionary techniques that are related to applications in image processing and pattern recognition. It explains the basic ideas of the proposed applications in a way that can also be understood by readers outside of the field. Image processing and pattern recognition practitioners who are not evolutionary computation researchers will appreciate the discussed techniques beyond simple theoretical tools, since they have been adapted to solve significant problems that commonly arise in such areas. On the other hand, members of the evolutionary computation community can learn the way in which image processing and pattern recognition problems can be translated into an...

  19. PROCESSING OF DIGITAL IMAGES OF INDUSTRIAL OBJECT SURFACES DURING NON-DESTRUCTIVE TESTING

    Directory of Open Access Journals (Sweden)

    A. A. Hundzin

    2016-01-01

    Full Text Available The paper presents modern approaches to the processing of images obtained with the help of industrial equipment. Usage of pixel modification in small neighborhoods, application of uniform image processing while changing the brightness level, possibilities for combining several images, and threshold image processing are described in the paper. While processing a series of images of a metal structure containing micro-cracks and under strain, the difference between two such images has been determined. The result is a contour specifying the difference between the images; an analysis of this contour makes it possible to determine the initial direction of crack propagation in the metal. A threshold binarization value has been determined while processing an image containing fields of medium intensity, which disappear under simple binarization and merge with the background due to the rather small drop between the edges. In this regard, an algorithm of balanced threshold histogram clipping has been selected, based on the following approach: the two fractions of the histogram are “weighed”, and if one of the fractions “outweighs” the other, the last column of that histogram fraction is removed and the procedure is repeated. When the threshold value is rather high, a contour break (disappearance of informative pixels) may occur, and when the threshold value is low, noise (non-informative pixels) may appear. The paper shows the implementation of an algorithm for locating contact pads in an image of a semiconductor crystal. Algorithms for morphological processing of production prototype images have been obtained, and these algorithms permit the detection of defects on the surface of semiconductors and support filtration and threshold binarization based on the algorithm of balanced threshold histogram clipping. The developed approaches can be used to highlight contours on the surface
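    The balanced threshold histogram clipping described above can be written out as a short routine: the two halves of the histogram are “weighed”, the end bin of the heavier half is removed, and the midpoint at which the two sides balance becomes the threshold. A hedged sketch of that classic balanced-thresholding scheme (not the authors' exact implementation):

    ```python
    def balanced_hist_threshold(hist):
        """Balanced histogram clipping: repeatedly drop the outermost bin of
        the heavier half; the final midpoint is the binarization threshold."""
        i_s, i_e = 0, len(hist) - 1
        i_m = (i_s + i_e) // 2
        w_l = sum(hist[i_s:i_m + 1])        # weight of the left half
        w_r = sum(hist[i_m + 1:i_e + 1])    # weight of the right half
        while i_s <= i_e:
            if w_r > w_l:                   # right side outweighs: clip its end
                w_r -= hist[i_e]
                i_e -= 1
                if (i_s + i_e) // 2 < i_m:  # midpoint moved left: rebalance
                    w_r += hist[i_m]
                    w_l -= hist[i_m]
                    i_m -= 1
            else:                           # left side outweighs (or tie)
                w_l -= hist[i_s]
                i_s += 1
                if (i_s + i_e) // 2 > i_m:  # midpoint moved right: rebalance
                    w_l += hist[i_m + 1]
                    w_r -= hist[i_m + 1]
                    i_m += 1
        return i_m
    ```

    Because the clipping tracks relative mass rather than an absolute level, mid-intensity regions that a fixed global threshold would merge into the background can still end up on the foreground side of the resulting threshold.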

  20. Mirion--a software package for automatic processing of mass spectrometric images.

    Science.gov (United States)

    Paschke, C; Leisner, A; Hester, A; Maass, K; Guenther, S; Bouschen, W; Spengler, B

    2013-08-01

    Mass spectrometric imaging (MSI) techniques are of growing interest for the Life Sciences. In recent years, the development of new instruments employing ion sources that are tailored for spatial scanning allowed the acquisition of large data sets. A subsequent data processing, however, is still a bottleneck in the analytical process, as a manual data interpretation is impossible within a reasonable time frame. The transformation of mass spectrometric data into spatial distribution images of detected compounds turned out to be the most appropriate method to visualize the results of such scans, as humans are able to interpret images faster and easier than plain numbers. Image generation, thus, is a time-consuming and complex yet very efficient task. The free software package "Mirion," presented in this paper, allows the handling and analysis of data sets acquired by mass spectrometry imaging. Mirion can be used for image processing of MSI data obtained from many different sources, as it uses the HUPO-PSI-based standard data format imzML, which is implemented in the proprietary software of most of the mass spectrometer companies. Different graphical representations of the recorded data are available. Furthermore, automatic calculation and overlay of mass spectrometric images promotes direct comparison of different analytes for data evaluation. The program also includes tools for image processing and image analysis.

  1. Anniversary Paper: Image processing and manipulation through the pages of Medical Physics

    Energy Technology Data Exchange (ETDEWEB)

    Armato, Samuel G. III; Ginneken, Bram van [Department of Radiology, University of Chicago, 5841 South Maryland Avenue, Chicago, Illinois 60637 (United States); Image Sciences Institute, University Medical Center Utrecht, Heidelberglaan 100, Room Q0S.459, 3584 CX Utrecht (Netherlands)

    2008-10-15

    The language of radiology has gradually evolved from "the film" (the foundation of radiology since Wilhelm Roentgen's 1895 discovery of x-rays) to "the image," an electronic manifestation of a radiologic examination that exists within the bits and bytes of a computer. Rather than simply storing and displaying radiologic images in a static manner, the computational power of the computer may be used to enhance a radiologist's ability to visually extract information from the image through image processing and image manipulation algorithms. Image processing tools provide a broad spectrum of opportunities for image enhancement. Gray-level manipulations such as histogram equalization, spatial alterations such as geometric distortion correction, preprocessing operations such as edge enhancement, and enhanced radiography techniques such as temporal subtraction provide powerful methods to improve the diagnostic quality of an image or to enhance structures of interest within an image. Furthermore, these image processing algorithms provide the building blocks of more advanced computer vision methods. The prominent role of medical physicists and the AAPM in the advancement of medical image processing methods, and in the establishment of the "image" as the fundamental entity in radiology and radiation oncology, has been captured in 35 volumes of Medical Physics.
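    Of the gray-level manipulations mentioned, histogram equalization is the canonical example: each gray level is mapped through the normalized cumulative histogram so that the output levels are spread more uniformly. A minimal sketch of that textbook operation, assuming integer gray levels (this is a generic illustration, not any particular algorithm from the journal's pages):

    ```python
    import numpy as np

    def equalize(img, levels=256):
        """Histogram equalization: map gray levels through the normalized CDF."""
        hist = np.bincount(img.ravel(), minlength=levels)
        cdf = hist.cumsum() / img.size                  # normalized CDF in [0, 1]
        lut = np.round(cdf * (levels - 1)).astype(img.dtype)
        return lut[img]                                 # apply as a lookup table
    ```

    The same lookup-table pattern underlies most of the point-wise gray-level manipulations the paragraph lists; only the mapping function changes.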

  2. Image processing system design for microcantilever-based optical readout infrared arrays

    Science.gov (United States)

    Tong, Qiang; Dong, Liquan; Zhao, Yuejin; Gong, Cheng; Liu, Xiaohua; Yu, Xiaomei; Yang, Lei; Liu, Weiyu

    2012-12-01

    Compared with traditional infrared imaging technology, the new optical-readout uncooled infrared imaging technology based on MEMS has many advantages, such as low cost, small size and simple fabrication. In addition, theory predicts that the technology offers high thermal detection sensitivity. It therefore has very broad application prospects in the field of high-performance infrared detection. The paper mainly focuses on an image capturing and processing system for this new optical-readout uncooled infrared imaging technology. The image capturing and processing system consists of software and hardware. We build the core image processing hardware platform around TI's high-performance TMS320DM642 DSP, design the image capturing board around the MT9P031, Micron's high-frame-rate, low-power CMOS image sensor, and design the network output board around Intel's LXT971A network transceiver. The software system is built on the real-time operating system DSP/BIOS. We design the video capture driver program based on TI's class/mini-driver model, and the network output program based on the NDK (Network Developer's Kit), for image capturing, processing and transmission. Experiments show that the system achieves high capture resolution and fast processing speed, with network transmission speeds of up to 100 Mbps.

  3. SENTINEL-2 LEVEL 1 PRODUCTS AND IMAGE PROCESSING PERFORMANCES

    Directory of Open Access Journals (Sweden)

    S. J. Baillarin

    2012-07-01

    Full Text Available In partnership with the European Commission and in the frame of the Global Monitoring for Environment and Security (GMES) program, the European Space Agency (ESA) is developing the Sentinel-2 optical imaging mission devoted to the operational monitoring of land and coastal areas. The Sentinel-2 mission is based on a satellites constellation deployed in polar sun-synchronous orbit. While ensuring data continuity of former SPOT and LANDSAT multi-spectral missions, Sentinel-2 will also offer wide improvements such as a unique combination of global coverage with a wide field of view (290 km), a high revisit (5 days with two satellites), a high resolution (10 m, 20 m and 60 m) and multi-spectral imagery (13 spectral bands in visible and shortwave infra-red domains). In this context, the Centre National d'Etudes Spatiales (CNES) supports ESA to define the system image products and to prototype the relevant image processing techniques. This paper offers, first, an overview of the Sentinel-2 system and then introduces the image products delivered by the ground processing: the Level-0 and Level-1A are system products which correspond to respectively raw compressed and uncompressed data (limited to internal calibration purposes); the Level-1B is the first public product: it comprises radiometric corrections (dark signal, pixels response non-uniformity, crosstalk, defective pixels, restoration, and binning for 60 m bands) and an enhanced physical geometric model appended to the product but not applied; the Level-1C provides ortho-rectified top-of-atmosphere reflectance with a sub-pixel multi-spectral and multi-date registration; a cloud and land/water mask is associated to the product. Note that the cloud mask also provides an indication about cirrus. The ground sampling distance of the Level-1C product will be 10 m, 20 m or 60 m according to the band. The final Level-1C product is tiled following a pre-defined grid of 100x100 km2, based on the UTM/WGS84 reference frame.

  4. SENTINEL-2 Level 1 Products and Image Processing Performances

    Science.gov (United States)

    Baillarin, S. J.; Meygret, A.; Dechoz, C.; Petrucci, B.; Lacherade, S.; Tremas, T.; Isola, C.; Martimort, P.; Spoto, F.

    2012-07-01

    In partnership with the European Commission and in the frame of the Global Monitoring for Environment and Security (GMES) program, the European Space Agency (ESA) is developing the Sentinel-2 optical imaging mission devoted to the operational monitoring of land and coastal areas. The Sentinel-2 mission is based on a satellites constellation deployed in polar sun-synchronous orbit. While ensuring data continuity of former SPOT and LANDSAT multi-spectral missions, Sentinel-2 will also offer wide improvements such as a unique combination of global coverage with a wide field of view (290 km), a high revisit (5 days with two satellites), a high resolution (10 m, 20 m and 60 m) and multi-spectral imagery (13 spectral bands in visible and shortwave infra-red domains). In this context, the Centre National d'Etudes Spatiales (CNES) supports ESA to define the system image products and to prototype the relevant image processing techniques. This paper offers, first, an overview of the Sentinel-2 system and then introduces the image products delivered by the ground processing: the Level-0 and Level-1A are system products which correspond to respectively raw compressed and uncompressed data (limited to internal calibration purposes); the Level-1B is the first public product: it comprises radiometric corrections (dark signal, pixels response non-uniformity, crosstalk, defective pixels, restoration, and binning for 60 m bands) and an enhanced physical geometric model appended to the product but not applied; the Level-1C provides ortho-rectified top-of-atmosphere reflectance with a sub-pixel multi-spectral and multi-date registration; a cloud and land/water mask is associated to the product. Note that the cloud mask also provides an indication about cirrus. The ground sampling distance of the Level-1C product will be 10 m, 20 m or 60 m according to the band. The final Level-1C product is tiled following a pre-defined grid of 100x100 km2, based on the UTM/WGS84 reference frame.

  5. Image processing pipeline for segmentation and material classification based on multispectral high dynamic range polarimetric images.

    Science.gov (United States)

    Martínez-Domingo, Miguel Ángel; Valero, Eva M; Hernández-Andrés, Javier; Tominaga, Shoji; Horiuchi, Takahiko; Hirai, Keita

    2017-11-27

    We propose a method for the capture of high dynamic range (HDR), multispectral (MS), polarimetric (Pol) images of indoor scenes using a liquid crystal tunable filter (LCTF). We have included the adaptive exposure estimation (AEE) method to fully automate the capturing process. We also propose a pre-processing method which can be applied for the registration of HDR images after they are already built as the result of combining different low dynamic range (LDR) images. This method is applied to ensure a correct alignment of the different polarization HDR images for each spectral band. We have focused our efforts on two main applications: object segmentation and classification into metal and dielectric classes. We have simplified the segmentation using mean shift combined with cluster averaging and region merging techniques. We compare the performance of our segmentation with that of Ncut and Watershed methods. For the classification task, we propose to use information not only in the highlight regions but also in their surrounding area, extracted from the degree of linear polarization (DoLP) maps. We present experimental results which prove that the proposed image processing pipeline outperforms previous techniques developed specifically for MSHDRPol image cubes.
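
    Degree of linear polarization maps of the kind used for the metal/dielectric classification can be derived from the Stokes parameters of polarizer-angle captures. A minimal sketch under the common four-angle (0/45/90/135 degree) acquisition assumption; the function name and angles are not from the abstract:

```python
import numpy as np

def dolp_map(i0, i45, i90, i135, eps=1e-8):
    """Degree of linear polarization from four polarizer-angle images.
    i0..i135: float intensity arrays at 0/45/90/135 degrees."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity (Stokes S0)
    s1 = i0 - i90                       # Stokes S1
    s2 = i45 - i135                     # Stokes S2
    return np.sqrt(s1 ** 2 + s2 ** 2) / (s0 + eps)

# Fully linearly polarized pixel vs. an unpolarized one:
d_pol = dolp_map(np.full((2, 2), 2.0), np.full((2, 2), 1.0),
                 np.zeros((2, 2)), np.full((2, 2), 1.0))
d_unpol = dolp_map(np.ones((2, 2)), np.ones((2, 2)),
                   np.ones((2, 2)), np.ones((2, 2)))
```

    Thresholding or clustering such a DoLP map around specular highlights is one way the polarization cue can separate dielectric from metallic surfaces.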

  6. Atlantic Tropical Cyclogenetic Processes During SOP-3 NAMMA in the GEOS-5 Global Data Assimilation and Forecast System

    Science.gov (United States)

    Reale, Oreste; Lau, William K.; Kim, Kyu-Myong; Brin, Eugenia

    2009-01-01

    This article investigates the role of the Saharan air layer (SAL) in tropical cyclogenetic processes associated with a nondeveloping and a developing African easterly wave observed during the Special Observation Period (SOP-3) phase of the 2006 NASA African Monsoon Multidisciplinary Analyses (NAMMA). The two waves are chosen because they both interact heavily with Saharan air. A global data assimilation and forecast system, the NASA Goddard Earth Observing System, version 5 (GEOS-5), is run to produce a set of high-quality global analyses, inclusive of all observations used operationally but with additional satellite information. In particular, following previous works by the same authors, the quality-controlled data from the Atmospheric Infrared Sounder (AIRS) used to produce these analyses have better coverage than that adopted by operational centers. From these improved analyses, two sets of 31 five-day high-resolution forecasts, at horizontal resolutions of both half and quarter degrees, are produced. Results indicate that very steep moisture gradients are associated with the SAL in forecasts and analyses, even at great distances from their source over the Sahara. In addition, a thermal dipole in the vertical (warm above, cool below) is present in the nondeveloping case. The Moderate Resolution Imaging Spectroradiometer (MODIS) aboard NASA's Terra and Aqua satellites shows that aerosol optical thickness, indicative of more dust as opposed to other factors, is higher in the nondeveloping case. Altogether, results suggest that the radiative effect of dust may play some role in producing a thermal structure less favorable to cyclogenesis. Results also indicate that only global horizontal resolutions on the order of 20-30 km can capture the large-scale transport and the fine thermal structure of the SAL, inclusive of the sharp moisture gradients, reproducing the effect of tropical cyclone suppression that has been hypothesized by previous authors.

  7. Image processings of radiographs in the gastric cancer cases

    International Nuclear Information System (INIS)

    Inamoto, Kazuo; Yamashita, Kazuya; Morikawa, Kaoru; Takigawa, Atsushi

    1987-01-01

    To improve the detectability of gastric lesions in X-ray examinations, computer image processing methods were studied in radiographs of a stomach phantom and gastric cancer lesions using A/D conversion. After several kinds of basic processing methods were examined on artificially made lesions in the stomach phantom and true gastric cancer lesions in 26 X-ray pictures of 8 gastric cancer cases, we concluded that pathological changes on the edge or mucosal folds of the stomach were emphasized by the image processing methods of negative-to-positive conversion, density gradient control, edge enhancement (Sobel operation) and subtraction of the Sobel image from the original image. These methods contributed to the interpretation of gastric cancer by enhancing the contour and the mucosal pattern inside the lesion. The results were applied to follow-up studies of gastric cancer. Tumor expansion could be clarified, but it was still difficult to catch a precancerous lesion in retrospective studies. However, these methods are expected to find future application in mass survey examinations for gastric cancer detection. (author)

  8. Design considerations for a neuroradiologic picture archival and image processing workstation

    International Nuclear Information System (INIS)

    Fishbein, D.S.

    1986-01-01

    The design and implementation of a small scale image archival and processing workstation for use in the study of digitized neuroradiologic images is described. The system is designed to be easily interfaced to existing equipment (presently PET, NMR and CT), function independent of a central file server, and provide for a versatile image processing environment. (Auth.)

  9. Digital Signal Processing for Medical Imaging Using Matlab

    CERN Document Server

    Gopi, E S

    2013-01-01

    This book describes medical imaging systems, such as X-ray, computed tomography, MRI, etc. from the point of view of digital signal processing. Readers will see techniques applied to medical imaging such as the Radon transformation, image reconstruction, image rendering, image enhancement and restoration, and more. This book also outlines the physics behind medical imaging required to understand the techniques being described. The presentation is designed to be accessible to beginners who are doing research in DSP for medical imaging. Matlab programs and illustrations are used wherever possible to reinforce the concepts being discussed. The book acts as a “starter kit” for beginners doing research in DSP for medical imaging; uses Matlab programs and illustrations throughout to make content accessible, particularly with techniques such as Radon transformation and image rendering; and includes discussion of the basic principles behind the various medical imaging tec...

  10. A midas plugin to enable construction of reproducible web-based image processing pipelines.

    Science.gov (United States)

    Grauer, Michael; Reynolds, Patrick; Hoogstoel, Marion; Budin, Francois; Styner, Martin A; Oguz, Ipek

    2013-01-01

    Image processing is an important quantitative technique for neuroscience researchers, but difficult for those who lack experience in the field. In this paper we present a web-based platform that allows an expert to create a brain image processing pipeline, enabling execution of that pipeline even by those biomedical researchers with limited image processing knowledge. These tools are implemented as a plugin for Midas, an open-source toolkit for creating web based scientific data storage and processing platforms. Using this plugin, an image processing expert can construct a pipeline, create a web-based User Interface, manage jobs, and visualize intermediate results. Pipelines are executed on a grid computing platform using BatchMake and HTCondor. This represents a new capability for biomedical researchers and offers an innovative platform for scientific collaboration. Current tools work well, but can be inaccessible for those lacking image processing expertise. Using this plugin, researchers in collaboration with image processing experts can create workflows with reasonable default settings and streamlined user interfaces, and data can be processed easily from a lab environment without the need for a powerful desktop computer. This platform allows simplified troubleshooting, centralized maintenance, and easy data sharing with collaborators. These capabilities enable reproducible science by sharing datasets and processing pipelines between collaborators. In this paper, we present a description of this innovative Midas plugin, along with results obtained from building and executing several ITK based image processing workflows for diffusion weighted MRI (DW MRI) of rodent brain images, as well as recommendations for building automated image processing pipelines. Although the particular image processing pipelines developed were focused on rodent brain MRI, the presented plugin can be used to support any executable or script-based pipeline.

  11. A Midas Plugin to Enable Construction of Reproducible Web-based Image Processing Pipelines

    Directory of Open Access Journals (Sweden)

    Michael eGrauer

    2013-12-01

    Full Text Available Image processing is an important quantitative technique for neuroscience researchers, but difficult for those who lack experience in the field. In this paper we present a web-based platform that allows an expert to create a brain image processing pipeline, enabling execution of that pipeline even by those biomedical researchers with limited image processing knowledge. These tools are implemented as a plugin for Midas, an open-source toolkit for creating web based scientific data storage and processing platforms. Using this plugin, an image processing expert can construct a pipeline, create a web-based UI, manage jobs, and visualize intermediate results. Pipelines are executed on a grid computing platform using BatchMake and HTCondor. This represents a new capability for biomedical researchers and offers an innovative platform for scientific collaboration. Current tools work well, but can be inaccessible for those lacking image processing expertise. Using this plugin, researchers in collaboration with image processing experts can create workflows with reasonable default settings and streamlined user interfaces, and data can be processed easily from a lab environment without the need for a powerful desktop computer. This platform allows simplified troubleshooting, centralized maintenance, and easy data sharing with collaborators. These capabilities enable reproducible science by sharing datasets and processing pipelines between collaborators. In this paper, we present a description of this innovative Midas plugin, along with results obtained from building and executing several ITK based image processing workflows for diffusion weighted MRI (DW MRI) of rodent brain images, as well as recommendations for building automated image processing pipelines. Although the particular image processing pipelines developed were focused on rodent brain MRI, the presented plugin can be used to support any executable or script-based pipeline.

  12. Image-guided radiotherapy quality control: Statistical process control using image similarity metrics.

    Science.gov (United States)

    Shiraishi, Satomi; Grams, Michael P; Fong de Los Santos, Luis E

    2018-05-01

    The purpose of this study was to demonstrate an objective quality control framework for the image review process. A total of 927 cone-beam computed tomography (CBCT) registrations were retrospectively analyzed for 33 bilateral head and neck cancer patients who received definitive radiotherapy. Two registration tracking volumes (RTVs) - cervical spine (C-spine) and mandible - were defined, within which a similarity metric was calculated and used as a registration quality tracking metric over the course of treatment. First, sensitivity to large misregistrations was analyzed for normalized cross-correlation (NCC) and mutual information (MI) in the context of statistical analysis. The distribution of metrics was obtained for displacements that varied according to a normal distribution with standard deviation of σ = 2 mm, and the detectability of displacements greater than 5 mm was investigated. Then, similarity metric control charts were created using a statistical process control (SPC) framework to objectively monitor the image registration and review process. Patient-specific control charts were created using NCC values from the first five fractions to set a patient-specific process capability limit. Population control charts were created using the average of the first five NCC values for all patients in the study. For each patient, the similarity metrics were calculated as a function of unidirectional translation, referred to as the effective displacement. Patient-specific action limits corresponding to 5 mm effective displacements were defined. Furthermore, effective displacements of the ten registrations with the lowest similarity metrics were compared with a three-degree-of-freedom (3DoF) couch displacement required to align the anatomical landmarks. Normalized cross-correlation identified suboptimal registrations more effectively than MI within the framework of SPC. Deviations greater than 5 mm were detected at 2.8σ and 2.1σ from the mean for NCC and MI
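
    As a rough illustration of the ideas above, a registration tracking metric (NCC over an RTV) and an SPC-style lower limit derived from the first few fractions might look as follows; the function names are hypothetical and the study's exact process-capability formulation may differ:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two same-shape image regions (RTVs)."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def lower_control_limit(baseline_nccs, k=3.0):
    """SPC-style lower limit from baseline fractions: mean - k * sigma."""
    m, s = np.mean(baseline_nccs), np.std(baseline_nccs, ddof=1)
    return float(m - k * s)

# Hypothetical example: perfect self-similarity, then a patient-specific
# limit from NCC values of the first five fractions.
a = np.array([[1.0, 2.0], [3.0, 4.0]])
baseline = [0.98, 0.97, 0.99, 0.98, 0.98]
limit = lower_control_limit(baseline)
```

    A fraction whose NCC drops below the limit would then be flagged for physician review rather than passed silently.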

  13. FlexISP: a flexible camera image processing framework

    KAUST Repository

    Heide, Felix

    2014-11-19

    Conventional pipelines for capturing, displaying, and storing images are usually defined as a series of cascaded modules, each responsible for addressing a particular problem. While this divide-and-conquer approach offers many benefits, it also introduces a cumulative error, as each step in the pipeline only considers the output of the previous step, not the original sensor data. We propose an end-to-end system that is aware of the camera and image model, enforces natural-image priors, while jointly accounting for common image processing steps like demosaicking, denoising, deconvolution, and so forth, all directly in a given output representation (e.g., YUV, DCT). Our system is flexible and we demonstrate it on regular Bayer images as well as images from custom sensors. In all cases, we achieve large improvements in image quality and signal reconstruction compared to state-of-the-art techniques. Finally, we show that our approach is capable of very efficiently handling high-resolution images, making even mobile implementations feasible.

  14. FlexISP: a flexible camera image processing framework

    KAUST Repository

    Heide, Felix; Egiazarian, Karen; Kautz, Jan; Pulli, Kari; Steinberger, Markus; Tsai, Yun-Ta; Rouf, Mushfiqur; Pająk, Dawid; Reddy, Dikpal; Gallo, Orazio; Liu, Jing; Heidrich, Wolfgang

    2014-01-01

    Conventional pipelines for capturing, displaying, and storing images are usually defined as a series of cascaded modules, each responsible for addressing a particular problem. While this divide-and-conquer approach offers many benefits, it also introduces a cumulative error, as each step in the pipeline only considers the output of the previous step, not the original sensor data. We propose an end-to-end system that is aware of the camera and image model, enforces natural-image priors, while jointly accounting for common image processing steps like demosaicking, denoising, deconvolution, and so forth, all directly in a given output representation (e.g., YUV, DCT). Our system is flexible and we demonstrate it on regular Bayer images as well as images from custom sensors. In all cases, we achieve large improvements in image quality and signal reconstruction compared to state-of-the-art techniques. Finally, we show that our approach is capable of very efficiently handling high-resolution images, making even mobile implementations feasible.

  15. Design of light-small high-speed image data processing system

    Science.gov (United States)

    Yang, Jinbao; Feng, Xue; Li, Fei

    2015-10-01

    A light-small high-speed image data processing system was designed to meet the requirements of image data processing in aerospace. The system was constructed from an FPGA, a DSP and an MCU (micro-controller), implementing video compression of 3-million-pixel images at 15 frames per second and real-time return of the compressed images to the upper system. The programmability of the FPGA, a high-performance image compression IC and a configurable MCU were fully exploited to improve integration. Besides, a hard-soft board design was introduced and the PCB layout was optimized. As a result, the system achieved miniaturization, light weight and fast heat dissipation. Experiments show that the system's functions were designed correctly and work stably. In conclusion, the system can be widely used in the area of light-small imaging.

  16. Referential processing: reciprocity and correlates of naming and imaging.

    Science.gov (United States)

    Paivio, A; Clark, J M; Digdon, N; Bons, T

    1989-03-01

    To shed light on the referential processes that underlie mental translation between representations of objects and words, we studied the reciprocity and determinants of naming and imaging reaction times (RT). Ninety-six subjects pressed a key when they had covertly named 248 pictures or imaged to their names. Mean naming and imagery RTs for each item were correlated with one another, and with properties of names, images, and their interconnections suggested by prior research and dual coding theory. Imagery RTs correlated .56 (df = 246) with manual naming RTs and .58 with voicekey naming RTs from prior studies. A factor analysis of the RTs and of 31 item characteristics revealed 7 dimensions. Imagery and naming RTs loaded on a common referential factor that included variables related to both directions of processing (e.g., missing names and missing images). Naming RTs also loaded on a nonverbal-to-verbal factor that included such variables as number of different names, whereas imagery RTs loaded on a verbal-to-nonverbal factor that included such variables as rated consistency of imagery. The other factors were verbal familiarity, verbal complexity, nonverbal familiarity, and nonverbal complexity. The findings confirm the reciprocity of imaging and naming, and their relation to constructs associated with distinct phases of referential processing.

  17. Interpretation of medical imaging data with a mobile application: a mobile digital imaging processing environment

    Directory of Open Access Journals (Sweden)

    Meng Kuan eLin

    2013-07-01

    Full Text Available Digital Imaging Processing (DIP) requires data extraction and output from a visualization tool to be consistent. Data handling and transmission between the server and a user is a systematic process in service interpretation. The use of integrated medical services for management and viewing of imaging data, in combination with a mobile visualization tool, can be greatly facilitated by data analysis and interpretation. This paper presents an integrated mobile application and digital imaging processing service, called M-DIP. The objective of the system is to (1) automate the direct data tiling, conversion and pre-tiling of brain images from Medical Imaging NetCDF (MINC) and Neuroimaging Informatics Technology Initiative (NIFTI) to RAW formats; (2) speed up querying of imaging measurements; and (3) display three-dimensional images at a high level of detail in real-world coordinates. In addition, M-DIP provides the ability to work on a mobile or tablet device without any software installation, using web-based protocols. M-DIP implements three levels of architecture: a relational middle-layer database, a stand-alone DIP server, and a mobile application logic middle level realizing user interpretation for direct querying and communication. This imaging software has the ability to display biological imaging data at multiple zoom levels and to increase its quality to meet users' expectations. Interpretation of bioimaging data is facilitated by an interface analogous to online mapping services using real-world coordinate browsing. This allows mobile devices to display multiple datasets simultaneously from a remote site. M-DIP can be used as a measurement repository that can be accessed from any network environment, such as a portable mobile or tablet device. In addition, this system, in combination with mobile applications, establishes a virtualization tool in the neuroinformatics field to speed up interpretation services.

  18. 1st International Conference on Computer Vision and Image Processing

    CERN Document Server

    Kumar, Sanjeev; Roy, Partha; Sen, Debashis

    2017-01-01

    This edited volume contains technical contributions in the field of computer vision and image processing presented at the First International Conference on Computer Vision and Image Processing (CVIP 2016). The contributions are thematically divided based on their relation to operations at the lower, middle and higher levels of vision systems, and their applications. The technical contributions in the areas of sensors, acquisition, visualization and enhancement are classified as related to low-level operations. They discuss various modern topics – reconfigurable image system architecture, Scheimpflug camera calibration, real-time autofocusing, climate visualization, tone mapping, super-resolution and image resizing. The technical contributions in the areas of segmentation and retrieval are classified as related to mid-level operations. They discuss some state-of-the-art techniques – non-rigid image registration, iterative image partitioning, egocentric object detection and video shot boundary detection. Th...

  19. Super-resolution processing for pulsed neutron imaging system using a high-speed camera

    International Nuclear Information System (INIS)

    Ishizuka, Ken; Kai, Tetsuya; Shinohara, Takenao; Segawa, Mariko; Mochiki, Koichi

    2015-01-01

    Super-resolution and center-of-gravity processing improve the resolution of neutron-transmitted images. These processing methods calculate the center-of-gravity pixel or sub-pixel of the neutron point converted into light by a scintillator. The conventional neutron-transmitted image is acquired using a high-speed camera by integrating many frames when a transmitted image with one frame is not provided. It succeeds in acquiring the transmitted image and calculating a spectrum by integrating frames of the same energy. However, because a high frame rate is required for neutron resonance absorption imaging, the number of pixels of the transmitted image decreases, and the resolution decreases to the limit of the camera performance. Therefore, we attempt to improve the resolution by integrating the frames after applying super-resolution or center-of-gravity processing. The processed results indicate that center-of-gravity processing can be effective in pulsed-neutron imaging with a high-speed camera. In addition, the results show that super-resolution processing is effective indirectly. A project to develop a real-time image data processing system has begun, and this system will be used at J-PARC in JAEA. (author)
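
    The center-of-gravity step described above reduces each scintillation light spot in a camera frame to a sub-pixel coordinate before frames are integrated. A minimal sketch, assuming one event per frame; the function name and the simple thresholding are illustrative assumptions:

```python
import numpy as np

def centroid(frame, threshold=0.0):
    """Sub-pixel center of gravity of one scintillation light spot.
    frame: 2-D intensity array; pixels at or below `threshold` are ignored.
    Returns (row, col) in pixel units, or None if the frame is empty."""
    f = np.where(frame > threshold, frame.astype(float), 0.0)
    total = f.sum()
    if total == 0.0:
        return None
    rows, cols = np.mgrid[0:f.shape[0], 0:f.shape[1]]
    return (float((rows * f).sum() / total),
            float((cols * f).sum() / total))

# A spot spread over two equally bright pixels lands between them:
frame = np.zeros((4, 4))
frame[1, 1] = 1.0
frame[1, 2] = 1.0
row, col = centroid(frame)
```

    Accumulating these sub-pixel coordinates into a finer grid, rather than summing raw frames, is what recovers resolution beyond the camera's pixel pitch.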

  20. New domain for image analysis: VLSI circuits testing, with Romuald, specialized in parallel image processing

    Energy Technology Data Exchange (ETDEWEB)

    Rubat Du Merac, C; Jutier, P; Laurent, J; Courtois, B

    1983-07-01

    This paper describes some aspects of specifying, designing and evaluating a specialized machine, Romuald, for the capture, coding and processing of video and scanning electron microscope (SEM) pictures. First the authors present the functional organization of the processing unit of Romuald and its hardware, giving details of its behaviour. Then they study the capture and display unit which, thanks to its flexibility, enables SEM image coding. Finally, they describe an application which is now being developed in their laboratory: testing VLSI circuits with new methods: SEM voltage contrast and image processing. 15 references.

  1. Traffic analysis and control using image processing

    Science.gov (United States)

    Senthilkumar, K.; Ellappan, Vijayan; Arun, A. R.

    2017-11-01

    This paper shows the work on traffic analysis and control to date. It presents an approach to regulating traffic with the use of image processing and MATLAB. The concept compares captured street images with reference images in order to determine the traffic level percentage and set the timing of the traffic signal accordingly, reducing stoppage at traffic lights. The concept proposes to solve real-life scenarios in the streets, thus enriching the traffic lights by adding image receivers like HD cameras and image processors. The input is then imported into MATLAB and used as a method for calculating the traffic on the roads. The results are computed in order to adjust the traffic-light timings on a particular street; compared with other similar proposals, this work adds the value of solving a real, large instance.
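
    One common way to realize the comparison described above is frame differencing against an empty-road reference image, mapping the changed-pixel percentage to a green-signal duration. A hedged sketch; the thresholds, timing constants and function names are invented for illustration, not taken from the paper:

```python
import numpy as np

def traffic_level(reference, current, diff_thresh=30):
    """Percent of road pixels that differ from the empty-road reference
    by more than `diff_thresh` gray levels."""
    diff = np.abs(current.astype(int) - reference.astype(int))
    return 100.0 * float((diff > diff_thresh).mean())

def green_time(level, base=10, per_percent=0.5, max_s=60):
    """Map traffic level (%) to a green-signal duration in seconds."""
    return min(max_s, base + per_percent * level)

# Hypothetical 10x10 road image with vehicles covering the top half:
reference = np.zeros((10, 10), dtype=np.uint8)
current = reference.copy()
current[:5, :] = 255
level = traffic_level(reference, current)
```

    In practice the reference would need periodic updating for lighting changes, which is one reason such systems use adaptive background models rather than a fixed empty-road image.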

  2. Image processing and Quality Control for the first 10,000 brain imaging datasets from UK Biobank.

    Science.gov (United States)

    Alfaro-Almagro, Fidel; Jenkinson, Mark; Bangerter, Neal K; Andersson, Jesper L R; Griffanti, Ludovica; Douaud, Gwenaëlle; Sotiropoulos, Stamatios N; Jbabdi, Saad; Hernandez-Fernandez, Moises; Vallee, Emmanuel; Vidaurre, Diego; Webster, Matthew; McCarthy, Paul; Rorden, Christopher; Daducci, Alessandro; Alexander, Daniel C; Zhang, Hui; Dragonu, Iulius; Matthews, Paul M; Miller, Karla L; Smith, Stephen M

    2018-02-01

    UK Biobank is a large-scale prospective epidemiological study with all data accessible to researchers worldwide. It is currently in the process of bringing back 100,000 of the original participants for brain, heart and body MRI, carotid ultrasound and low-dose bone/fat x-ray. The brain imaging component covers 6 modalities (T1, T2 FLAIR, susceptibility weighted MRI, Resting fMRI, Task fMRI and Diffusion MRI). Raw and processed data from the first 10,000 imaged subjects has recently been released for general research access. To help convert this data into useful summary information we have developed an automated processing and QC (Quality Control) pipeline that is available for use by other researchers. In this paper we describe the pipeline in detail, following a brief overview of UK Biobank brain imaging and the acquisition protocol. We also describe several quantitative investigations carried out as part of the development of both the imaging protocol and the processing pipeline. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  3. Quality assessment of the digitalization process of analog x-ray images

    International Nuclear Information System (INIS)

    Georgieva, D.

    2014-01-01

    Computer-assisted diagnosis gives doctors a second point of view on test results. This improves the early detection of diseases and significantly reduces the chance of errors. These methods nicely complement the possibilities of digital medical imaging apparatus, but for analog images their applicability and results depend entirely on the quality of the digitalisation of the analog images. Today many standards and good-practice recommendations discuss the image quality of digital apparatus, but the digitalisation process of analog medical images is not part of them. Medical imaging apparatus have become digital, but an entirely digital medical environment must still be able to incorporate the old analog medical image carriers. The life of patients doesn't start with the beginning of the digital era, and for the aim of tracking diseases it is necessary to use the new digital images as well as the older analog ones. For the generation now aged 40-50 years, a large archive of images has piled up, which should be accounted for in the diagnosis process. This article is the author's study of the digitalized image quality problem. It offers a new approach to x-ray image digitalisation: obtaining an HDR image with an optical sensor. After HDR-image generation, digital signal processing is applied to improve the quality of the final 16-bit grayscale medical image. A new method for medical image enhancement is proposed: it improves the image contrast, it increases or preserves the dynamic range, and it does not lead to the loss of small low-contrast structures in the image. Key words: Quality of Digital X-Ray Images

  4. Real-time Image Processing for Microscopy-based Label-free Imaging Flow Cytometry in a Microfluidic Chip.

    Science.gov (United States)

    Heo, Young Jin; Lee, Donghyeon; Kang, Junsu; Lee, Keondo; Chung, Wan Kyun

    2017-09-14

    Imaging flow cytometry (IFC) is an emerging technology that acquires single-cell images at high throughput for analysis of a cell population. Rich information that comes from the high sensitivity and spatial resolution of a single-cell microscopic image is beneficial for single-cell analysis in various biological applications. In this paper, we present a fast image-processing pipeline (R-MOD: Real-time Moving Object Detector) based on deep learning for high-throughput microscopy-based label-free IFC in a microfluidic chip. The R-MOD pipeline acquires all single-cell images of cells in flow, and identifies the acquired images as a real-time process with minimal hardware that consists of a microscope and a high-speed camera. Experiments show that R-MOD is fast and accurate (500 fps and 93.3% mAP), and it is expected to serve as a powerful tool for biomedical and clinical applications.

  5. Information theoretic methods for image processing algorithm optimization

    Science.gov (United States)

    Prokushkin, Sergey F.; Galil, Erez

    2015-01-01

    Modern image processing pipelines (e.g., those used in digital cameras) are full of advanced, highly adaptive filters that often have a large number of tunable parameters (sometimes > 100). This makes the calibration procedure for these filters very complex, and the optimal results barely achievable in the manual calibration; thus an automated approach is a must. We will discuss an information theory based metric for evaluation of algorithm adaptive characteristics ("adaptivity criterion") using noise reduction algorithms as an example. The method allows finding an "orthogonal decomposition" of the filter parameter space into the "filter adaptivity" and "filter strength" directions. This metric can be used as a cost function in automatic filter optimization. Since it is a measure of a physical "information restoration" rather than perceived image quality, it helps to reduce the set of the filter parameters to a smaller subset that is easier for a human operator to tune and achieve a better subjective image quality. With appropriate adjustments, the criterion can be used for assessment of the whole imaging system (sensor plus post-processing).
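
    Information-theoretic metrics of the kind described typically build on estimates of image information content. As a simple related building block (not the authors' actual adaptivity criterion, which the abstract does not specify), the Shannon entropy of an intensity histogram can be computed as:

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (bits) of an image's intensity histogram -- a basic
    information-content estimate of the kind such metrics build on."""
    hist, _ = np.histogram(img, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())

# A flat image carries no information; a two-tone image carries one bit
# per pixel:
flat = np.zeros((8, 8))
two_tone = np.zeros((8, 8))
two_tone[:, :4] = 1
```

    Comparing such information measures before and after filtering, rather than a perceptual quality score, is what allows an optimizer to separate "information restoration" from mere cosmetic change.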

  6. Establishing an international reference image database for research and development in medical image processing

    NARCIS (Netherlands)

    Horsch, A.D.; Prinz, M.; Schneider, S.; Sipilä, O; Spinnler, K.; Vallée, J-P; Verdonck-de Leeuw, I; Vogl, R.; Wittenberg, T.; Zahlmann, G.

    2004-01-01

    INTRODUCTION: The lack of comparability of evaluation results is one of the major obstacles of research and development in Medical Image Processing (MIP). The main reason for that is the usage of different image datasets with different quality, size and Gold standard. OBJECTIVES: Therefore, one of

  7. Computer vision applications for coronagraphic optical alignment and image processing.

    Science.gov (United States)

    Savransky, Dmitry; Thomas, Sandrine J; Poyneer, Lisa A; Macintosh, Bruce A

    2013-05-10

    Modern coronagraphic systems require very precise alignment between optical components and can benefit greatly from automated image processing. We discuss three techniques commonly employed in the fields of computer vision and image analysis as applied to the Gemini Planet Imager, a new facility instrument for the Gemini South Observatory. We describe how feature extraction and clustering methods can be used to aid in automated system alignment tasks, and also present a search algorithm for finding regular features in science images used for calibration and data processing. Along with discussions of each technique, we present our specific implementation and show results of each one in operation.

  8. Remote Sensing Image Enhancement Based on Non-subsampled Shearlet Transform and Parameterized Logarithmic Image Processing Model

    Directory of Open Access Journals (Sweden)

    TAO Feixiang

    2015-08-01

    Full Text Available Aiming at parts of remote sensing images with dark brightness and low contrast, a remote sensing image enhancement method based on the non-subsampled Shearlet transform and the parameterized logarithmic image processing model is proposed in this paper to improve the visual effect and interpretability of remote sensing images. Firstly, a remote sensing image is decomposed into a low-frequency component and high-frequency components by the non-subsampled Shearlet transform. Then the low-frequency component is enhanced according to the PLIP (parameterized logarithmic image processing) model, which improves the contrast of the image, while an improved fuzzy enhancement method is used to enhance the high-frequency components in order to highlight edge and detail information. Extensive experimental results show that, compared with five image enhancement methods such as bidirectional histogram equalization, a method based on the stationary wavelet transform and a method based on the non-subsampled contourlet transform, the proposed method has advantages in both subjective visual effect and objective quantitative evaluation indexes such as contrast and definition, and can more effectively improve the contrast of remote sensing images and enhance edges and texture details with better visual effect.
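
    The PLIP model replaces ordinary arithmetic on intensities with operations on gray tones g = M - f. As a hedged sketch (M = 256, the gray-tone transform and the exponent c below are illustrative assumptions, not the paper's parameterization), brightening a dark low-frequency band via PLIP scalar multiplication looks like:

```python
import numpy as np

M = 256.0  # gray-tone range parameter (assumed)

def plip_add(g1, g2):
    """PLIP addition of two gray tones: g1 (+) g2 = g1 + g2 - g1*g2/M."""
    return g1 + g2 - g1 * g2 / M

def plip_scalar(c, g):
    """PLIP multiplication of a gray tone by a real scalar c."""
    return M - M * (1.0 - g / M) ** c

def enhance_low_freq(f, c=0.6):
    """Intensity -> gray tone g = M - f, PLIP scalar multiplication
    with c < 1 (which brightens), and back to intensity."""
    return M - plip_scalar(c, M - f)

dark = np.full((4, 4), 40.0)                 # a uniformly dark patch
print(enhance_low_freq(dark)[0, 0] > 40.0)   # True: brightened
```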

  9. New Processing of Spaceborne Imaging Radar-C (SIR-C) Data

    Science.gov (United States)

    Meyer, F. J.; Gracheva, V.; Arko, S. A.; Labelle-Hamer, A. L.

    2017-12-01

    The Spaceborne Imaging Radar-C (SIR-C) was a radar system which successfully operated on two separate shuttle missions in April and October 1994. During these two missions, a total of 143 hours of radar data were recorded. SIR-C was the first multifrequency and polarimetric spaceborne radar system, operating at dual frequency (L- and C-band) and with quad-polarization. SIR-C had a variety of operating modes, which are innovative even from today's point of view. Depending on the mode, it was possible to acquire data with different polarization and carrier frequency combinations. Additionally, different swaths and bandwidths could be used during data collection, and it was possible to receive data with two antennas in the along-track direction. The United States Geological Survey (USGS) distributes the synthetic aperture radar (SAR) images as single-look complex (SLC) and multi-look complex (MLC) products. Unfortunately, since June 2005 the SIR-C processor has been inoperable and is not repairable. All acquired SLC and MLC images were processed at a coarse resolution of 100 m with the goal of generating a quick look; these images are, however, not well suited for scientific analysis. Only a small percentage of the acquired data has been processed as full-resolution SAR images, and the unprocessed high-resolution data cannot currently be processed at all. At the Alaska Satellite Facility (ASF) a new processor was developed to process binary SIR-C data to full-resolution SAR images. ASF is planning to process the entire recoverable SIR-C archive to full-resolution SLCs, MLCs and high-resolution geocoded image products, and will make these products available to the science community through its existing data archiving and distribution system. The final paper will describe the new processor and analyze the challenges of reprocessing the SIR-C data.

  10. Survey: interpolation methods for whole slide image processing.

    Science.gov (United States)

    Roszkowiak, L; Korzynska, A; Zak, J; Pijanowska, D; Swiderska-Chadaj, Z; Markiewicz, T

    2017-02-01

    Evaluating whole slide images of histological and cytological samples is used in pathology for diagnostics, grading and prognosis. It is often necessary to rescale whole slide images of a very large size. Image resizing is one of the most common applications of interpolation. We collect the advantages and drawbacks of nine interpolation methods, and as a result of our analysis, we try to select one interpolation method as the preferred solution. To compare the performance of interpolation methods, test images were scaled and then rescaled to the original size using the same algorithm. The modified image was compared to the original image in various aspects. The time needed for calculations and results of quantification performance on modified images were also compared. For evaluation purposes, we used four general test images and 12 specialized biological immunohistochemically stained tissue sample images. The purpose of this survey is to determine which method of interpolation is the best to resize whole slide images, so they can be further processed using quantification methods. As a result, the interpolation method has to be selected depending on the task involving whole slide images. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
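
    The survey's test protocol (scale an image down, rescale it back with the same algorithm, compare with the original) is easy to reproduce. A self-contained sketch with hand-rolled nearest-neighbour and bilinear resamplers; the survey's nine methods and its test images are not reproduced here:

```python
import numpy as np

def resize(img, shape, method="bilinear"):
    """Resize a 2D array by mapping output pixel centres to input coords."""
    h, w = img.shape
    H, W = shape
    ys = np.clip((np.arange(H) + 0.5) * h / H - 0.5, 0, h - 1)
    xs = np.clip((np.arange(W) + 0.5) * w / W - 0.5, 0, w - 1)
    if method == "nearest":
        return img[np.round(ys).astype(int)[:, None],
                   np.round(xs).astype(int)[None, :]]
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = (ys - y0)[:, None], (xs - x0)[None, :]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

# Scale down, rescale back with the same algorithm, measure the error.
rng = np.random.default_rng(1)
img = rng.random((64, 64))
for m in ("nearest", "bilinear"):
    back = resize(resize(img, (32, 32), m), (64, 64), m)
    print(m, float(((back - img) ** 2).mean()))
```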

  11. Photogrammetric Processing of Planetary Linear Pushbroom Images Based on Approximate Orthophotos

    Science.gov (United States)

    Geng, X.; Xu, Q.; Xing, S.; Hou, Y. F.; Lan, C. Z.; Zhang, J. J.

    2018-04-01

    It is still a challenging task to efficiently produce planetary mapping products from orbital remote sensing images. Photogrammetric processing of planetary stereo images suffers from many disadvantages, such as the lack of ground control information and of informative features; among these, image matching is the most difficult job in planetary photogrammetry. This paper designs a photogrammetric processing framework for planetary remote sensing images based on approximate orthophotos. Both tie-point extraction for bundle adjustment and dense image matching for generating a digital terrain model (DTM) are performed on approximate orthophotos. Since most planetary remote sensing images are acquired by linear scanner cameras, we mainly deal with linear pushbroom images. In order to improve the computational efficiency of orthophoto generation and coordinate transformation, a fast back-projection algorithm for linear pushbroom images is introduced. Moreover, an iteratively refined DTM-and-orthophoto scheme is adopted in the DTM generation process, which helps to reduce the search space of image matching and improve the matching accuracy of conjugate points. With the advantages of approximate orthophotos, the matching results for planetary remote sensing images can be greatly improved. We tested the proposed approach with Mars Express (MEX) High Resolution Stereo Camera (HRSC) and Lunar Reconnaissance Orbiter (LRO) Narrow Angle Camera (NAC) images. The preliminary experimental results demonstrate the feasibility of the proposed approach.

  12. An Image Retrieval and Processing Expert System for the World Wide Web

    Science.gov (United States)

    Rodriguez, Ricardo; Rondon, Angelica; Bruno, Maria I.; Vasquez, Ramon

    1998-01-01

    This paper presents a system that is being developed in the Laboratory of Applied Remote Sensing and Image Processing at the University of P.R. at Mayaguez. It describes the components that constitute its architecture; the main elements are a Data Warehouse, an Image Processing Engine, and an Expert System. Together, they provide a complete solution for researchers from different fields who make use of images in their investigations. Also, since the system is available on the World Wide Web, it provides remote access to and processing of images.

  13. Gaussian process regression based optimal design of combustion systems using flame images

    International Nuclear Information System (INIS)

    Chen, Junghui; Chan, Lester Lik Teck; Cheng, Yi-Cheng

    2013-01-01

    Highlights: • Digital color images of flames are applied to combustion design. • A stochastic model of the combustion process is developed using a GP. • A GP-based uncertainty design is made and evaluated on a real combustion system. - Abstract: With advanced methods of digital image processing and optical sensing, it is possible to carry out continuous imaging on-line in combustion processes. In this paper, a method that extracts characteristics from flame images is presented to immediately predict the outlet content of the flue gas. First, from the large number of flame image data, principal component analysis is used to discover the principal components, or combinational variables, which describe the important trends and variations in the operating data. Then stochastic modeling of the combustion process is done by a Gaussian process, with the aim of capturing the stochastic nature of the flame associated with the oxygen content. The designed oxygen combustion content takes into account the uncertainty present in the combustion. A reference image can be designed for the actual combustion process to provide easy and straightforward maintenance of the combustion process
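
    The two-stage pipeline (PCA on flattened flame images, then GP regression from the scores to the oxygen content) can be sketched in plain NumPy. The data below are synthetic stand-ins, and the RBF kernel with unit length-scale is an assumption, not the paper's choice:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins: 100 flattened "flame images" and the measured
# oxygen content of the flue gas for each frame.
X = rng.random((100, 256))
y = X[:, :8].mean(axis=1) + 0.05 * rng.normal(size=100)

# Step 1: PCA via SVD -- project each image onto the leading components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:3].T                     # first three principal components

# Step 2: GP regression (RBF kernel) from PCA scores to oxygen content.
def rbf(A, B, ell=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

train, held = scores[:80], scores[80:]
K = rbf(train, train) + 1e-4 * np.eye(80)  # jitter plays the noise role
alpha = np.linalg.solve(K, y[:80])
mean = rbf(held, train) @ alpha            # GP posterior mean
cov = rbf(held, held) - rbf(held, train) @ np.linalg.solve(K, rbf(train, held))
print(mean.shape, cov.shape)               # (20,) (20, 20)
```

    The posterior covariance is what makes the design "uncertainty-aware": candidate operating points can be ranked by predicted oxygen content and its variance together.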

  14. Fission gas bubble identification using MATLAB's image processing toolbox

    Energy Technology Data Exchange (ETDEWEB)

    Collette, R. [Colorado School of Mines, Nuclear Science and Engineering Program, 1500 Illinois St, Golden, CO 80401 (United States); King, J., E-mail: kingjc@mines.edu [Colorado School of Mines, Nuclear Science and Engineering Program, 1500 Illinois St, Golden, CO 80401 (United States); Keiser, D.; Miller, B.; Madden, J.; Schulthess, J. [Nuclear Fuels and Materials Division, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-6188 (United States)

    2016-08-15

    Automated image processing routines have the potential to aid in the fuel performance evaluation process by eliminating bias in human judgment that may vary from person-to-person or sample-to-sample. This study presents several MATLAB based image analysis routines designed for fission gas void identification in post-irradiation examination of uranium molybdenum (U–Mo) monolithic-type plate fuels. Frequency domain filtration, employed as a pre-processing technique, can eliminate artifacts from the image without compromising the critical features of interest. This process is coupled with a bilateral filter, an edge-preserving noise removal technique aimed at preparing the image for optimal segmentation. Adaptive thresholding proved to be the most consistent gray-level feature segmentation technique for U–Mo fuel microstructures. The Sauvola adaptive threshold technique segments the image based on histogram weighting factors in stable contrast regions and local statistics in variable contrast regions. Once all processing is complete, the algorithm outputs the total fission gas void count, the mean void size, and the average porosity. The final results demonstrate an ability to extract fission gas void morphological data faster, more consistently, and at least as accurately as manual segmentation methods. - Highlights: •Automated image processing can aid in the fuel qualification process. •Routines are developed to characterize fission gas bubbles in irradiated U–Mo fuel. •Frequency domain filtration effectively eliminates FIB curtaining artifacts. •Adaptive thresholding proved to be the most accurate segmentation method. •The techniques established are ready to be applied to large scale data extraction testing.
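
    The Sauvola rule sets a per-pixel threshold T = m(1 + k(s/R - 1)) from the local mean m and standard deviation s. A NumPy sketch using integral images for the windowed statistics; the window size, k and R below are common defaults, not values taken from the study:

```python
import numpy as np

def sauvola_threshold(img, win=15, k=0.2, R=128.0):
    """Per-pixel Sauvola threshold T = m * (1 + k*((s/R) - 1)), with the
    local mean m and std s computed from box sums via integral images."""
    pad = win // 2
    p = np.pad(img.astype(float), pad, mode="reflect")
    ii = np.pad(p, ((1, 0), (1, 0))).cumsum(0).cumsum(1)        # sum image
    ii2 = np.pad(p * p, ((1, 0), (1, 0))).cumsum(0).cumsum(1)   # sum of squares
    h, w = img.shape
    def boxsum(I):
        return (I[win:win + h, win:win + w] - I[:h, win:win + w]
                - I[win:win + h, :w] + I[:h, :w])
    n = win * win
    m = boxsum(ii) / n
    s = np.sqrt(np.maximum(boxsum(ii2) / n - m * m, 0))
    return m * (1 + k * (s / R - 1))

rng = np.random.default_rng(3)
img = rng.integers(0, 256, (64, 64)).astype(float)
mask = img > sauvola_threshold(img)     # segmented "voids"
print(mask.mean())                      # fraction of pixels above threshold
```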

  15. Image processing applied to automatic detection of defects during ultrasonic examination

    International Nuclear Information System (INIS)

    Moysan, J.

    1992-10-01

    This work is a study of image processing applied to ultrasonic B-scan images obtained in non-destructive testing of welds. The goal is to define what image processing techniques can contribute to improve the exploitation of the collected data and, more precisely, what image processing can do to extract the meaningful echoes that make it possible to characterize and size the defects. The report presents non-destructive testing by ultrasound in the nuclear field and indicates the specificities of the propagation of ultrasonic waves in austenitic welds. It reviews the state of the art of data processing applied to ultrasonic images in non-destructive evaluation. A new image analysis is then developed, based on a powerful tool, the co-occurrence matrix. This matrix makes it possible to represent, in a single representation, the relations between the amplitudes of pairs of pixels. From the matrix analysis, a complete and automatic method has been established to define a threshold that separates echoes from noise. An automatic interpretation of the ultrasonic echoes is then possible. Complete validation has been done with standard pieces
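
    A co-occurrence matrix tabulates how often pairs of amplitude levels occur at neighbouring pixels. The sketch below builds such a matrix for horizontally adjacent pairs and computes an energy feature on toy data; the echo/noise example and the use of energy as the discriminating statistic are illustrative assumptions, not the thesis's actual thresholding procedure:

```python
import numpy as np

def cooccurrence(img, levels=16):
    """Normalized co-occurrence matrix of horizontally adjacent pixels."""
    q = np.clip((img / img.max() * levels).astype(int), 0, levels - 1)
    C = np.zeros((levels, levels))
    np.add.at(C, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    return C / C.sum()

rng = np.random.default_rng(4)
noise = rng.random((64, 64)) * 0.3     # pure background noise
echo = noise.copy()
echo[30:34, :] += 0.7                  # a bright echo band
for name, im in (("noise", noise), ("echo", echo)):
    C = cooccurrence(im)
    # The echo image concentrates pair mass in fewer matrix cells.
    print(name, "energy:", round(float((C ** 2).sum()), 4))
```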

  16. Mikhail Bulgakov's Woland and Marie Corelli's Lucio (a comparative analysis)

    OpenAIRE

    Suhackis, Aleksandrs

    2009-01-01

    The thesis is devoted to a comparative analysis of the satan figures in the novels The Sorrows of Satan by Marie Corelli and The Master and Margarita by Mikhail Bulgakov. The aim of the study is to compare the figures of Lucio Rimânez and Woland from the standpoint of the symbolist and demonological traditions. Particular attention is also paid to the traditions of symbolism, decadence and romanticism, with the aim of analysing the literary works The Sorrows of Satan and The Master and Margarita in this respect. The thesis is divided into two parts. The first describes the theoretical ...

  17. ARMA processing for NDE ultrasonic imaging

    International Nuclear Information System (INIS)

    Pao, Y.H.; El-Sherbini, A.

    1984-01-01

    This chapter describes a new method of acoustic image reconstruction for an active multiple-sensor system operating in reflection mode in the Fresnel region. The method is based on the use of an ARMA model for the reconstruction process. Algorithms for estimating the model parameters are presented and computer simulation results are shown. The AR coefficients are obtained independently of the MA coefficients. It is shown that when the ARMA reconstruction method is augmented with the multifrequency approach, it can provide a three-dimensional reconstructed image with high lateral and range resolutions, high signal-to-noise ratio and reduced sidelobe levels. The proposed ARMA reconstruction method yields higher-quality images and better performance than conventional methods. The advantages of the method are very high lateral resolution with a limited number of sensors, reduced sidelobe levels, and high signal-to-noise ratio
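
    The abstract notes that the AR coefficients are obtained independently of the MA coefficients; the classical route to that is the Yule-Walker method. The sketch below recovers the AR part of a synthetic 1-D process (the chapter's actual multi-sensor estimator is not specified in the abstract):

```python
import numpy as np

def yule_walker(x, order):
    """Estimate AR coefficients from sample autocorrelations by solving
    the Yule-Walker equations (independently of any MA component)."""
    x = x - x.mean()
    n = len(x)
    r = np.array([(x[:n - k] * x[k:]).mean() for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])

# Synthesize an AR(2) process and recover its coefficients.
rng = np.random.default_rng(5)
n, a1, a2 = 20000, 0.6, -0.3
x = np.zeros(n)
e = rng.normal(size=n)
for t in range(2, n):
    x[t] = a1 * x[t - 1] + a2 * x[t - 2] + e[t]
print(yule_walker(x, 2))   # close to [0.6, -0.3]
```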

  18. MIDAS - ESO's new image processing system

    Science.gov (United States)

    Banse, K.; Crane, P.; Grosbol, P.; Middleburg, F.; Ounnas, C.; Ponz, D.; Waldthausen, H.

    1983-03-01

    The Munich Image Data Analysis System (MIDAS) is an image processing system whose heart is a pair of VAX 11/780 computers linked together via DECnet. One of these computers, VAX-A, is equipped with 3.5 Mbytes of memory, 1.2 Gbytes of disk storage, and two tape drives with 800/1600 bpi density. The other computer, VAX-B, has 4.0 Mbytes of memory, 688 Mbytes of disk storage, and one tape drive with 1600/6250 bpi density. MIDAS is a command-driven system geared toward the interactive user; the type and number of parameters in a command depend on the particular command invoked. MIDAS is a highly modular system that provides building blocks for undertaking more sophisticated applications. Presently, 175 commands are available; these include interactive modification of the color lookup table to enhance various image features, and interactive extraction of subimages.

  19. A fuzzy art neural network based color image processing and ...

    African Journals Online (AJOL)

    To improve the learning process from the input data, a new learning rule was suggested. In this paper, a new method is proposed to deal with the RGB color image pixels, which enables a Fuzzy ART neural network to process the RGB color images. The application of the algorithm was implemented and tested on a set of ...

  20. Illuminating magma shearing processes via synchrotron imaging

    Science.gov (United States)

    Lavallée, Yan; Cai, Biao; Coats, Rebecca; Kendrick, Jackie E.; von Aulock, Felix W.; Wallace, Paul A.; Le Gall, Nolwenn; Godinho, Jose; Dobson, Katherine; Atwood, Robert; Holness, Marian; Lee, Peter D.

    2017-04-01

    Our understanding of geomaterial behaviour and processes has long fallen short because of our inability to see inside a material as "something" happens. In volcanology, research strategies have increasingly sought to illuminate the subsurface of materials at all scales, from the use of muon tomography to image the inside of volcanoes, to the use of seismic tomography to image magmatic bodies in the crust, and most recently, synchrotron-based X-ray tomography to image the inside of material as we test it under controlled conditions. Here, we will explore some of the novel findings made on the evolution of magma during shearing. These will include observations and discussions of magma flow and failure as well as petrological reaction kinetics.

  1. An image-processing methodology for extracting bloodstain pattern features.

    Science.gov (United States)

    Arthur, Ravishka M; Humburg, Philomena J; Hoogenboom, Jerry; Baiker, Martin; Taylor, Michael C; de Bruin, Karla G

    2017-08-01

    There is a growing trend in forensic science to develop methods to make forensic pattern comparison tasks more objective. This has generally involved the application of suitable image-processing methods to provide numerical data for identification or comparison. This paper outlines a unique image-processing methodology that can be utilised by analysts to generate reliable pattern data that will assist them in forming objective conclusions about a pattern. A range of features were defined and extracted from a laboratory-generated impact spatter pattern. These features were based in part on bloodstain properties commonly used in the analysis of spatter bloodstain patterns. The values of these features were consistent with properties reported qualitatively for such patterns. The image-processing method developed shows considerable promise as a way to establish measurable discriminating pattern criteria that are lacking in current bloodstain pattern taxonomies. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Digital processing methodology applied to exploring of radiological images

    International Nuclear Information System (INIS)

    Oliveira, Cristiane de Queiroz

    2004-01-01

    In this work, digital image processing is applied as an automatic computational method for exploring radiological images. An automatic routine was developed, based on segmentation and post-processing techniques, for radiological images acquired from an arrangement consisting of an X-ray tube, a molybdenum target and filter of 0.4 mm and 0.03 mm, respectively, and a CCD detector. The efficiency of the developed methodology is demonstrated through a case study in which internal injuries in mangoes are automatically detected and monitored. This methodology is a possible tool to be introduced into the post-harvest process in packing houses. A dichotomous test was applied to evaluate the efficiency of the method. The results show 87.7% correct diagnoses and 12.3% failures, with a sensitivity of 93% and a specificity of 80%. (author)
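
    The reported sensitivity and specificity follow the standard definitions for a dichotomous test. A minimal helper; the counts below are hypothetical and chosen only to show the arithmetic:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity and accuracy from a dichotomous test."""
    sensitivity = tp / (tp + fn)          # true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Hypothetical confusion counts, chosen only to illustrate the formulas.
print(diagnostic_metrics(tp=93, fn=7, tn=80, fp=20))  # (0.93, 0.8, 0.865)
```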

  3. Contour extraction of echocardiographic images based on pre-processing

    Energy Technology Data Exchange (ETDEWEB)

    Hussein, Zinah Rajab; Rahmat, Rahmita Wirza; Abdullah, Lili Nurliyana [Department of Multimedia, Faculty of Computer Science and Information Technology, Department of Computer and Communication Systems Engineering, Faculty of Engineering University Putra Malaysia 43400 Serdang, Selangor (Malaysia); Zamrin, D M [Department of Surgery, Faculty of Medicine, National University of Malaysia, 56000 Cheras, Kuala Lumpur (Malaysia); Saripan, M Iqbal

    2011-02-15

    In this work we present a technique to extract heart contours from noisy echocardiograph images. Our technique is based on improving the image before applying contour detection, to reduce heavy noise and obtain better image quality. To achieve this, we combine several pre-processing techniques (filtering, morphological operations, and contrast adjustment) to avoid unclear edges and enhance the low contrast of echocardiograph images; after applying these techniques we obtain clear detection of heart boundaries and valve movement with traditional edge detection methods.
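
    A hedged sketch of the filtering/morphology/contrast chain described above, implemented with plain NumPy (the specific filters and parameters are assumptions; the abstract does not list them):

```python
import numpy as np

def shifts(img, r=1):
    """All (2r+1)^2 shifted copies of img, edge-padded, stacked on axis 0."""
    p = np.pad(img, r, mode="edge")
    h, w = img.shape
    return np.stack([p[i:i + h, j:j + w]
                     for i in range(2 * r + 1) for j in range(2 * r + 1)])

def preprocess(img):
    """Median filter, grey opening (erosion then dilation), and a
    percentile contrast stretch -- a sketch of the chain described above."""
    x = np.median(shifts(img), axis=0)      # 3x3 median: noise reduction
    x = shifts(x).min(axis=0)               # erosion
    x = shifts(x).max(axis=0)               # dilation -> grey opening
    lo, hi = np.percentile(x, (2, 98))      # contrast adjustment
    return np.clip((x - lo) / (hi - lo + 1e-9), 0, 1)

def sobel_edges(img):
    """Gradient magnitude with 3x3 Sobel kernels, via the same shifts."""
    s = shifts(img)   # index k = 3*dy + dx for offsets (dy-1, dx-1)
    gx = (s[2] + 2 * s[5] + s[8]) - (s[0] + 2 * s[3] + s[6])
    gy = (s[6] + 2 * s[7] + s[8]) - (s[0] + 2 * s[1] + s[2])
    return np.hypot(gx, gy)

rng = np.random.default_rng(6)
frame = rng.random((64, 64)) * 0.2
frame[20:40, 20:40] += 0.6                  # a bright "chamber" region
e = sobel_edges(preprocess(frame))
print(e.shape, float(e.max()) > 0)          # (64, 64) True
```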

  4. Contour extraction of echocardiographic images based on pre-processing

    International Nuclear Information System (INIS)

    Hussein, Zinah Rajab; Rahmat, Rahmita Wirza; Abdullah, Lili Nurliyana; Zamrin, D M; Saripan, M Iqbal

    2011-01-01

    In this work we present a technique to extract heart contours from noisy echocardiograph images. Our technique is based on improving the image before applying contour detection, to reduce heavy noise and obtain better image quality. To achieve this, we combine several pre-processing techniques (filtering, morphological operations, and contrast adjustment) to avoid unclear edges and enhance the low contrast of echocardiograph images; after applying these techniques we obtain clear detection of heart boundaries and valve movement with traditional edge detection methods.

  5. The formal-informal wage differential in Brazil: segmentation or selection bias?

    Directory of Open Access Journals (Sweden)

    Naércio Aquino Menezes Filho

    2004-06-01

    Full Text Available This article investigates the determinants of the wage differential between the formal and informal labor markets in Brazil. An econometric method of repeated cross-sections (pseudo-panels) is used, in which grouping the data by cohort, time and schooling allows the phenomenon under study to be controlled for both observable and unobservable characteristics of the individuals. There is strong evidence of self-selection bias, indicating that the higher wages in the formal sector derive from the better unobservable individual attributes of the workers employed in that sector, and not from characteristics intrinsic to the sector, as the segmentation hypothesis would predict.

  6. Applications of image processing and visualization in the evaluation of murder and assault

    Science.gov (United States)

    Oliver, William R.; Rosenman, Julian G.; Boxwala, Aziz; Stotts, David; Smith, John; Soltys, Mitchell; Symon, James; Cullip, Tim; Wagner, Glenn

    1994-09-01

    Recent advances in image processing and visualization are of increasing use in the investigation of violent crime. The Digital Image Processing Laboratory at the Armed Forces Institute of Pathology in collaboration with groups at the University of North Carolina at Chapel Hill are actively exploring visualization applications including image processing of trauma images, 3D visualization, forensic database management and telemedicine. Examples of recent applications are presented. Future directions of effort include interactive consultation and image manipulation tools for forensic data exploration.

  7. The Holistic Processing Account of Visual Expertise in Medical Image Perception: A Review.

    Science.gov (United States)

    Sheridan, Heather; Reingold, Eyal M

    2017-01-01

    In the field of medical image perception, the holistic processing perspective contends that experts can rapidly extract global information about the image, which can be used to guide their subsequent search of the image (Swensson, 1980; Nodine and Kundel, 1987; Kundel et al., 2007). In this review, we discuss the empirical evidence supporting three different predictions that can be derived from the holistic processing perspective: Expertise in medical image perception is domain-specific, experts use parafoveal and/or peripheral vision to process large regions of the image in parallel, and experts benefit from a rapid initial glimpse of an image. In addition, we discuss a pivotal recent study (Litchfield and Donovan, 2016) that seems to contradict the assumption that experts benefit from a rapid initial glimpse of the image. To reconcile this finding with the existing literature, we suggest that global processing may serve multiple functions that extend beyond the initial glimpse of the image. Finally, we discuss future research directions, and we highlight the connections between the holistic processing account and similar theoretical perspectives and findings from other domains of visual expertise.

  8. IDP++: signal and image processing algorithms in C++ version 4.1

    International Nuclear Information System (INIS)

    Lehman, S.K.

    1996-11-01

    IDP++ (Image and Data Processing in C++) is a collection of signal and image processing algorithms written in C++. It is a compiled signal processing environment which supports four data types of up to four dimensions. It is developed within Lawrence Livermore National Laboratory's Image and Data Processing group as a partial replacement for View. IDP++ takes advantage of the latest, implemented and actually working, object-oriented compiler technology to provide 'information hiding.' Users need only know C, not C++. Signals are treated like any other variable, with a defined set of operators and functions, in an intuitive manner. IDP++ is designed for real-time environments where interpreted processing packages are less efficient. IDP++ exists for both SUNs and Silicon Graphics using their most current compilers.

  9. Laser Doppler Blood Flow Imaging Using a CMOS Imaging Sensor with On-Chip Signal Processing

    Directory of Open Access Journals (Sweden)

    Cally Gill

    2013-09-01

    Full Text Available The first fully integrated 2D CMOS imaging sensor with on-chip signal processing for applications in laser Doppler blood flow (LDBF) imaging has been designed and tested. Obtaining a space-efficient design over 64 × 64 pixels means that the standard processing electronics used off-chip cannot be implemented. Therefore the analog signal processing at each pixel is a design tailored for LDBF signals, with balanced optimization for signal-to-noise ratio and silicon area. This custom-made sensor offers key advantages over conventional sensors: the analog signal processing at the pixel level carries out signal normalization; the AC amplification in combination with an anti-aliasing filter allows analog-to-digital conversion with a low number of bits; a low-resource implementation of the digital processor enables on-chip processing; and the data bottleneck that exists between the detector and processing electronics has been overcome. The sensor demonstrates good agreement with simulation at each design stage. The measured optical performance of the sensor is demonstrated using modulated light signals and in vivo blood flow experiments. Images showing blood flow changes with arterial occlusion and an inflammatory response to a histamine skin-prick demonstrate that the sensor array is capable of detecting blood flow signals from tissue.

  10. Laser doppler blood flow imaging using a CMOS imaging sensor with on-chip signal processing.

    Science.gov (United States)

    He, Diwei; Nguyen, Hoang C; Hayes-Gill, Barrie R; Zhu, Yiqun; Crowe, John A; Gill, Cally; Clough, Geraldine F; Morgan, Stephen P

    2013-09-18

    The first fully integrated 2D CMOS imaging sensor with on-chip signal processing for applications in laser Doppler blood flow (LDBF) imaging has been designed and tested. Obtaining a space-efficient design over 64 × 64 pixels means that the standard processing electronics used off-chip cannot be implemented. Therefore the analog signal processing at each pixel is a design tailored for LDBF signals, with balanced optimization for signal-to-noise ratio and silicon area. This custom-made sensor offers key advantages over conventional sensors: the analog signal processing at the pixel level carries out signal normalization; the AC amplification in combination with an anti-aliasing filter allows analog-to-digital conversion with a low number of bits; a low-resource implementation of the digital processor enables on-chip processing; and the data bottleneck that exists between the detector and processing electronics has been overcome. The sensor demonstrates good agreement with simulation at each design stage. The measured optical performance of the sensor is demonstrated using modulated light signals and in vivo blood flow experiments. Images showing blood flow changes with arterial occlusion and an inflammatory response to a histamine skin-prick demonstrate that the sensor array is capable of detecting blood flow signals from tissue.
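
    The conventional perfusion estimate in laser Doppler flowmetry is the first moment of the AC power spectrum normalized by the DC level squared; whether the chip computes exactly this is not stated in the abstract, so the sketch below is a generic off-chip reference implementation:

```python
import numpy as np

def ldbf_flux(signal, fs):
    """Laser Doppler flux: first moment of the AC power spectrum of the
    photocurrent, normalized by DC^2 (the conventional perfusion index)."""
    dc = signal.mean()
    P = np.abs(np.fft.rfft(signal - dc)) ** 2
    f = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return (f * P).sum() / dc ** 2

# A photocurrent modulated at a higher Doppler frequency (faster flow)
# yields a larger flux value than one modulated at a lower frequency.
fs = 10_000
t = np.arange(4096) / fs
slow = 1.0 + 0.05 * np.sin(2 * np.pi * 200 * t)
fast = 1.0 + 0.05 * np.sin(2 * np.pi * 2000 * t)
print(ldbf_flux(slow, fs) < ldbf_flux(fast, fs))  # True
```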

  11. A high performance image processing platform based on a CPU-GPU heterogeneous cluster with parallel image reconstruction for micro-CT

    International Nuclear Information System (INIS)

    Ding Yu; Qi Yujin; Zhang Xuezhu; Zhao Cuilan

    2011-01-01

    In this paper, we report the development of a high-performance image processing platform based on a CPU-GPU heterogeneous cluster. Currently, it consists of Dell Precision T7500 and HP XW8600 workstations with a parallel programming and runtime environment using the message-passing interface (MPI) and CUDA (Compute Unified Device Architecture). We succeeded in developing parallel image processing techniques for 3D image reconstruction in X-ray micro-CT imaging. The results show that a GPU provides computation about 194 times faster than a single CPU, and that the CPU-GPU cluster is about 46 times faster than the CPU cluster. These results meet the requirements of rapid 3D image reconstruction and real-time image display. In conclusion, the use of a CPU-GPU heterogeneous cluster is an effective way to build a high-performance image processing platform. (authors)

  12. Design of a family of integrated parallel co-processors for images processing

    International Nuclear Information System (INIS)

    Court, Thierry

    1991-01-01

    The design of parallel image processing systems combining sophisticated microprocessors and specialized operators in a single architecture is a difficult task because of the many problems to be taken into account. The present study identifies a way of realizing such dedicated operators and interfacing them to a microprocessor-type central unit. The two guiding lines of this work are the search for polyvalent, specialized, reconfigurable operators and their connection to a system bus rather than to specialized video buses. This research proposes an architecture of circuits dedicated to image processing and two proposals for their realization, one of which was carried out in this study using silicon compiler tools. This work belongs to a larger project whose aim is the development of a high-performance, modular industrial image processing system based on the parallelization, in MIMD structures, of an elementary autonomous image processing unit integrating a microprocessor equipped with a parallel coprocessor suited to image processing. (author) [fr

  13. Comparison of sampling designs by means of simulation experiments

    OpenAIRE

    Rikačova, Tatjana

    2012-01-01

    The thesis examines three types of sampling designs: simple random sampling, systematic random sampling and stratified random sampling, and also reviews the theory of GREG estimators. The aim of the study was to compare these sampling designs by means of simulation experiments, in order to determine the design that gives the most precise estimates of the parameters of the population in question. To carry out the simulation experiments, data were generated and programs were written and run ...
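The generated data and programs are not reproduced in the record; a small simulation in the same spirit (a hypothetical two-stratum population and proportional allocation, both assumptions for illustration) shows why stratified sampling can beat simple random sampling in precision.

```python
import random
import statistics

def srs_mean(population, n, rng):
    # Simple random sample estimator of the population mean.
    return statistics.mean(rng.sample(population, n))

def stratified_mean(strata, n, rng):
    # Stratified estimator with proportional allocation: each stratum
    # contributes samples in proportion to its share of the population.
    total = sum(len(s) for s in strata)
    estimate = 0.0
    for stratum in strata:
        k = max(1, round(n * len(stratum) / total))
        estimate += (len(stratum) / total) * statistics.mean(rng.sample(stratum, k))
    return estimate

rng = random.Random(1)
# Hypothetical population: two equal-sized strata with very different means.
stratum_a = [rng.gauss(10, 1) for _ in range(500)]
stratum_b = [rng.gauss(50, 1) for _ in range(500)]
population = stratum_a + stratum_b

srs_estimates = [srs_mean(population, 20, rng) for _ in range(200)]
strat_estimates = [stratified_mean([stratum_a, stratum_b], 20, rng)
                   for _ in range(200)]
# The stratified estimates scatter far less around the true mean (about 30),
# because within-stratum variance is small compared with the gap between strata.
```

Repeating each estimator 200 times makes the precision difference directly visible as the spread of the two estimate lists.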

  14. Sal-like 4 (SALL4) suppresses CDH1 expression and maintains cell dispersion in basal-like breast cancer.

    Science.gov (United States)

    Itou, Junji; Matsumoto, Yoshiaki; Yoshikawa, Kiyotsugu; Toi, Masakazu

    2013-09-17

    In cell cultures, the dispersed phenotype is indicative of the migratory ability. Here we characterized Sal-like 4 (SALL4) as a dispersion factor in basal-like breast cancer. Our shRNA-mediated SALL4 knockdown system and SALL4 overexpression system revealed that SALL4 suppresses the expression of adhesion gene CDH1, and positively regulates the CDH1 suppressor ZEB1. Cell behavior analyses showed that SALL4 suppresses intercellular adhesion and maintains cell motility after cell-cell interaction and cell division, which results in the dispersed phenotype. Our findings indicate that SALL4 functions to suppress CDH1 expression and to maintain cell dispersion in basal-like breast cancer. Copyright © 2013 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.

  15. Comparison of clinical findings between the FemtoLASIK and LASIK methods

    OpenAIRE

    Bistere, Aivija

    2010-01-01

    The master's thesis is written in Latvian on 60 pages and contains 36 figures, 1 table and 4 appendices; 57 literature sources are used. Aim: to evaluate and compare the clinical results between the method in which the corneal flap is created during laser surgery with a VisuMax femtosecond laser (FemtoLASIK method) and with a mechanical microkeratome (LASIK method). Methods: the study analysed 84 myopic patients (154 eyes): 59 patients in the LASIK group and 25 in the FemtoLASIK group. Mean age 29±8 years. Before surgery, the subjec...

  16. Real time polarization sensor image processing on an embedded FPGA/multi-core DSP system

    Science.gov (United States)

    Bednara, Marcus; Chuchacz-Kowalczyk, Katarzyna

    2015-05-01

    Most embedded image processing SoCs available on the market are highly optimized for typical consumer applications such as video encoding/decoding, motion estimation or the image enhancement pipelines used in DSLR and digital video cameras. For non-consumer applications, on the other hand, optimized embedded hardware is rarely available, so PC-based image processing systems are often used. We show how a real-time-capable image processing system for a non-consumer application, namely polarization image data processing, can be efficiently implemented on an FPGA and multi-core DSP based embedded hardware platform.
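The record does not describe the polarization computation itself; a common per-pixel formulation (assumed here, not taken from the paper) derives the linear Stokes parameters, degree of linear polarization (DoLP) and angle of linear polarization (AoLP) from intensities measured behind four analyser orientations (0°, 45°, 90°, 135°):

```python
import math

def stokes_linear(i0, i45, i90, i135):
    # Linear Stokes parameters from four analyser-angle intensities.
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # 0/90 degree preference
    s2 = i45 - i135                      # 45/135 degree preference
    return s0, s1, s2

def dolp(s0, s1, s2):
    # Degree of linear polarization, in [0, 1].
    return math.hypot(s1, s2) / s0 if s0 > 0 else 0.0

def aolp(s1, s2):
    # Angle of linear polarization, in radians.
    return 0.5 * math.atan2(s2, s1)
```

For example, fully polarized light at 0° (i0=1, i90=0, i45=i135=0.5) gives DoLP 1, while equal intensities at all four angles give DoLP 0. On the embedded platform these per-pixel maps would be computed for every frame.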

  17. Numerical methods in image processing for applications in jewellery industry

    OpenAIRE

    Petrla, Martin

    2016-01-01

    The presented thesis deals with an image processing problem arising in the repeated scanning of jewellery stones. The aim is to develop a method for preprocessing and subsequent mathematical registration of images in order to increase the effectiveness and reliability of the output quality control. For these purposes the thesis summarizes the mathematical definition of a digital image as well as the theoretical basis of image registration. It proposes a method adjusting every single image ...
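The registration method itself is only summarized in the record; a minimal brute-force translation registration (an exhaustive sum-of-squared-differences search, an assumption standing in for the thesis's actual method) illustrates the idea of aligning repeated scans before comparison:

```python
def estimate_shift(ref, img, max_shift=3):
    # Exhaustive search for the translation (dy, dx) that best aligns
    # img to ref, scored by mean squared difference over the overlap.
    h, w = len(ref), len(ref[0])
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err, count = 0.0, 0
            for y in range(h):
                for x in range(w):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        err += (ref[y][x] - img[yy][xx]) ** 2
                        count += 1
            err /= count
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best
```

Real systems replace the exhaustive search with correlation in the Fourier domain, but the scoring idea is the same: the shift with the smallest residual wins.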

  18. The role of iodide salts on beef and chicken patties lipid oxidation

    Directory of Open Access Journals (Sweden)

    E.A.F.S. TORRES

    1998-04-01

    Full Text Available In Brazil, the addition of iodine to salt, with the aim of eradicating goitre, is mandatory by law. The addition of salt to meat products is a problem for their quality: it has been associated with lipid oxidation and with meat discoloration, with metals acting as catalysts. Patties prepared from beef forequarter and pork fat, or from chicken trimmings, with ice, salt, onion, garlic and fresh pepper added, were used in the experiment. After shaping, the patties were frozen, packaged and stored. The experiment was conducted in triplicate and samples were taken in duplicate from 0 to 90 days. The TBA distillation method was used to monitor lipid oxidation, and the proximate composition and chloride contents were determined. The lipid content of the mixed (beef and pork) patties was 73.31% higher than that of the chicken patties, while lipid oxidation was 54.40% more active in the chicken samples. The presence of iodised salt (less than 14.41%) appears not to have affected lipid oxidation in either meat, contrary to the expectation that a halogen salt should act as an oxidant.

  19. Visual processing in rapid-chase systems: Image processing, attention, and awareness

    Directory of Open Access Journals (Sweden)

    Thomas eSchmidt

    2011-07-01

    Full Text Available Visual stimuli can be classified so rapidly that their analysis may be based on a single sweep of feedforward processing through the visuomotor system. Behavioral criteria for feedforward processing can be evaluated in response priming tasks where speeded pointing or keypress responses are performed towards target stimuli which are preceded by prime stimuli. We apply this method to several classes of complex stimuli. (1) When participants classify natural images into animals or non-animals, the time course of their pointing responses indicates that prime and target signals remain strictly sequential throughout all processing stages, meeting stringent behavioral criteria for feedforward processing (rapid-chase criteria). (2) Such priming effects are boosted by selective visual attention for positions, shapes, and colors, in a way consistent with bottom-up enhancement of visuomotor processing, even when primes cannot be consciously identified. (3) Speeded processing of phobic images is observed in participants specifically fearful of spiders or snakes, suggesting enhancement of feedforward processing by long-term perceptual learning. (4) When the perceived brightness of primes in complex displays is altered by means of illumination or transparency illusions, priming effects in speeded keypress responses can systematically contradict subjective brightness judgments, such that one prime appears brighter than the other but activates motor responses as if it were darker. We propose that response priming captures the output of the first feedforward pass of visual signals through the visuomotor system, and that this output lacks some characteristic features of more elaborate, recurrent processing. In this way, visuomotor measures may become dissociated from several aspects of conscious vision. We argue that "fast" visuomotor measures predominantly driven by feedforward processing should supplement "slow" psychophysical measures predominantly based on visual ...

  20. Visual grading analysis of digital neonatal chest phantom X-ray images: Impact of detector type, dose and image processing on image quality.

    Science.gov (United States)

    Smet, M H; Breysem, L; Mussen, E; Bosmans, H; Marshall, N W; Cockmartin, L

    2018-07-01

    To evaluate the impact of digital detector, dose level and post-processing on neonatal chest phantom X-ray image quality (IQ). A neonatal phantom was imaged using four different detectors: a CR powder phosphor (PIP), a CR needle phosphor (NIP) and two wireless CsI DR detectors (DXD and DRX). Five dose levels were studied for each detector and two post-processing algorithms were evaluated for each vendor. Three paediatric radiologists scored the images using European quality criteria plus additional questions on vascular lines, noise and disease simulation. Visual grading characteristics (VGC) and ordinal regression statistics were used to evaluate the effect of detector type, post-processing and dose on the visual grading analysis score (VGAS). No significant differences were found between the NIP, DXD and DRX detectors (p>0.05), whereas the PIP detector had a significantly lower VGAS (p<0.05). Post-processing did not influence VGAS (p=0.819). Increasing dose resulted in a significantly higher VGAS (p<0.05). VGAS thus detected differences between detectors and dose levels, but not image post-processing changes. The VGA study showed a detector air kerma (DAK)/image value above which perceived IQ did not improve, potentially useful for commissioning. • A VGA study detects IQ differences between detectors and dose levels. • The NIP detector matched the VGAS of the CsI DR detectors. • VGA data are useful in setting the initial detector air kerma level. • Differences in NNPS were consistent with changes in VGAS.
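Visual grading characteristics (VGC) analysis is only named in the record; its underlying construction, plotting cumulative proportions of ordinal scores for one detector against another, ROC-style, and taking the area under the curve, can be sketched as follows (the 1-5 score levels are an assumption, as is the whole toy example):

```python
def vgc_curve(scores_a, scores_b, levels=range(1, 6)):
    # One operating point per rating threshold t:
    # (P(score_b >= t), P(score_a >= t)), analogous to an ROC point.
    def frac_ge(scores, t):
        return sum(s >= t for s in scores) / len(scores)
    points = {(frac_ge(scores_b, t), frac_ge(scores_a, t)) for t in levels}
    points |= {(0.0, 0.0), (1.0, 1.0)}      # fixed curve endpoints
    return sorted(points)

def vgc_auc(points):
    # Trapezoidal area under the VGC curve; 0.5 means no IQ difference.
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(points, points[1:]))
```

With identical score distributions the curve runs along the diagonal and the AUC is 0.5; a detector that is scored consistently higher pushes the AUC towards 1, which is how detector and dose-level differences show up in a VGC analysis.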