WorldWideScience

Sample records for treatment requiring accurate

  1. Front Loaded Accurate Requirements Engineering (FLARE): A Requirements Analysis Concept for the 21st Century

    National Research Council Canada - National Science Library

    Leonard, Anthony

    1997-01-01

    This thesis focuses on ways to apply requirements engineering techniques and methods during the development and evolution of DoD software systems in an effort to reduce changes to system requirements...

  2. 9 CFR 442.3 - Scale requirements for accurate weights, repairs, adjustments, and replacements after inspection.

    Science.gov (United States)

    2010-01-01

    ... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Scale requirements for accurate weights, repairs, adjustments, and replacements after inspection. 442.3 Section 442.3 Animals and Animal... shall meet the applicable requirements contained in National Institute of Standards and Technology (NIST...

  3. Sample Size Requirements for Accurate Estimation of Squared Semi-Partial Correlation Coefficients.

    Science.gov (United States)

    Algina, James; Moulder, Bradley C.; Moser, Barry K.

    2002-01-01

    Studied the sample size requirements for accurate estimation of squared semi-partial correlation coefficients through simulation studies. Results show that the sample size necessary for adequate accuracy depends on: (1) the population squared multiple correlation coefficient (ρ²); (2) the population increase in ρ²; and (3) the…
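    The quantity being estimated can be made concrete with a short simulation. The sketch below is not from the paper; the model coefficients and the sample size are illustrative assumptions. It computes a squared semi-partial correlation as the drop in R² when one predictor is removed:

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    tot = y - y.mean()
    return 1.0 - (resid @ resid) / (tot @ tot)

def squared_semipartial(X, y, j):
    """sr_j^2 = R^2(full model) - R^2(model with predictor j removed),
    i.e. predictor j's unique contribution to explained variance."""
    return r_squared(X, y) - r_squared(np.delete(X, j, axis=1), y)

# illustrative data: 3 predictors, only the first two matter
rng = np.random.default_rng(1)
n = 500                               # sample size under study
X = rng.standard_normal((n, 3))
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + rng.standard_normal(n)
sr2 = squared_semipartial(X, y, 0)
print(round(sr2, 3))                  # roughly 0.5^2 / Var(y), i.e. about 0.19
```

    Repeating such simulations over many draws at several values of n yields the accuracy-versus-sample-size behaviour the study describes.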

  4. Translation research: from accurate diagnosis to appropriate treatment

    Directory of Open Access Journals (Sweden)

    Pass Harvey I

    2004-10-01

    This review article focuses on the various aspects of translational research, where research on human subjects can ultimately enhance the diagnosis and treatment of future patients. While we will use specific examples relating to the asbestos-related cancer mesothelioma, it should be stressed that the general approach outlined throughout this review is readily applicable to other diseases with an underlying molecular basis. Through the integration of molecular-based technologies, systematic tissue procurement and medical informatics, we now have the ability to identify clinically applicable "genotype"-"phenotype" associations across cohorts of patients that can rapidly be translated into useful diagnostic and treatment strategies. This review will touch on the various steps in the translational pipeline, and highlight some of the most essential elements as well as possible roadblocks that can impact the success of the program. Critical issues with regard to Institutional Review Board (IRB) and Health Insurance Portability and Accountability Act (HIPAA) compliance, data standardization, sample procurement, quality control (QC), quality assurance (QA), data analysis, preclinical models and clinical trials are addressed. The various facets of the translational pipeline have been incorporated into a fully integrated computational system, appropriately named Dx2Tx. This system readily allows for the identification of new diagnostic tests, the discovery of biomarkers and druggable targets, and prediction of optimal treatments based upon the underlying molecular basis of the disease.

  5. Requirements for accurately diagnosing chronic partial upper urinary tract obstruction in children with hydronephrosis

    International Nuclear Information System (INIS)

    Koff, Stephen A.

    2008-01-01

    Successful management of hydronephrosis in the newborn requires early, accurate diagnosis to identify or exclude ureteropelvic junction obstruction. However, the presence of hydronephrosis does not in itself define obstruction, and hydronephrosis displays unique behavior in the newborn. The hydronephrotic kidney usually has nearly normal differential renal function at birth, has not been subjected to progressive dilation and, except for pelvocaliectasis, does not often show signs of high-grade obstruction. Furthermore, severe hydronephrosis resolves spontaneously in more than 65% of newborns, with differential renal function stable or improving. The diagnosis of obstruction in newborn hydronephrosis is challenging because the currently available diagnostic tests, ultrasonography and diuretic renography, have demonstrated inaccuracy in diagnosing obstruction and in predicting which hydronephrotic kidney will deteriorate if untreated. Accurate diagnosis of obstruction is possible, but it requires an understanding of the uniqueness of both the pathophysiology of obstruction and the biology of the kidney and renal collecting system in this age group. We examine here the requirements for making an accurate diagnosis of obstruction in the young child with hydronephrosis. (orig.)

  6. Drift correction for accurate PRF shift MR thermometry during mild hyperthermia treatments with MR-HIFU

    Science.gov (United States)

    Bing, Chenchen; Staruch, Robert; Tillander, Matti; Köhler, Max O.; Mougenot, Charles; Ylihautala, Mika; Laetsch, Theodore W.; Chopra, Rajiv

    2016-01-01

    There is growing interest in performing hyperthermia treatments with clinical MRI-guided high-intensity focused ultrasound (MR-HIFU) therapy systems designed for tissue ablation. During hyperthermia treatment, however, the narrow therapeutic window (41–45°C) requires careful evaluation of the accuracy of PRF shift MR thermometry for these types of exposures. Purpose: The purpose of this study was to evaluate the accuracy of MR thermometry using a clinical MR-HIFU system equipped with a hyperthermia treatment algorithm. Methods: Mild heating was performed in a tissue-mimicking phantom with implanted temperature sensors using the clinical MR-HIFU system. The influence of image-acquisition settings and post-acquisition correction algorithms on the accuracy of temperature measurements was investigated. The ability to achieve uniform heating for up to 40 minutes was evaluated in rabbit experiments. Results: Automatic center-frequency adjustment prior to image acquisition corrected image shifts on the order of 0.1 mm/min. Zero- and first-order phase variations were observed over time, supporting the use of a combined drift correction algorithm. The temperature accuracy achieved using both center-frequency adjustment and the combined drift correction algorithm was 0.57 ± 0.58°C in the heated region and 0.54 ± 0.42°C in the unheated region. Conclusion: Accurate temperature monitoring of hyperthermia exposures using PRF shift MR thermometry is possible through careful implementation of image-acquisition settings and drift correction algorithms. For the evaluated clinical MR-HIFU system, center-frequency adjustment eliminated image shifts, and a combined drift correction algorithm achieved temperature measurements with acceptable accuracy for monitoring and controlling hyperthermia exposures. PMID:27210733
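    The combined zero- and first-order drift correction can be illustrated with a small sketch. This is not the authors' implementation; the field strength, echo time and image geometry below are assumed values. A plane is fitted to the phase-difference image over an unheated reference region and subtracted before converting phase to temperature via the PRF relation ΔT = Δφ / (2π · γ · α · B0 · TE):

```python
import numpy as np

GAMMA = 42.577e6        # proton gyromagnetic ratio [Hz/T]
ALPHA = -0.01e-6        # PRF thermal coefficient (approx. -0.01 ppm/degC)
B0, TE = 3.0, 0.02      # assumed field strength [T] and echo time [s]

def phase_to_temp(dphi):
    """Temperature change from a phase-difference image (PRF shift)."""
    return dphi / (2 * np.pi * GAMMA * ALPHA * B0 * TE)

def drift_correct(dphi, mask):
    """Fit a zero- plus first-order (planar) phase drift over the unheated
    reference region `mask` and subtract it from the whole image."""
    ny, nx = dphi.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    A = np.column_stack([np.ones(mask.sum()), xx[mask], yy[mask]])
    coef, *_ = np.linalg.lstsq(A, dphi[mask], rcond=None)
    plane = coef[0] + coef[1] * xx + coef[2] * yy
    return dphi - plane

# synthetic example: planar drift plus a ~4 degC hot spot in the centre
ny = nx = 64
yy, xx = np.mgrid[0:ny, 0:nx]
drift = 0.05 + 1e-3 * xx - 5e-4 * yy
hot = np.zeros((ny, nx))
hot[28:36, 28:36] = 2 * np.pi * GAMMA * ALPHA * B0 * TE * 4.0
mask = np.ones((ny, nx), bool)
mask[20:44, 20:44] = False          # exclude the heated region from the fit
corrected = drift_correct(drift + hot, mask)
print(round(phase_to_temp(corrected[32, 32]), 2))   # recovers ~4 degC
```

    With a purely planar drift the fit removes it exactly; in practice the reference region must stay unheated for the correction to remain unbiased.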

  7. The accurate assessment and physiotherapeutic treatment of rotator cuff myofascial pain syndrome: a case report

    Directory of Open Access Journals (Sweden)

    B. B. Barker

    2011-01-01

    Management of patients with rotator cuff myofascial pain syndrome varies, and successful intervention is dependent on accurate assessment. The aim of this case report is to show the importance of accurate assessment and clinical reasoning in the physiotherapeutic management of a patient suffering from ante-cubital and anterior shoulder pain. The patient was referred for physiotherapy after proving refractory to treatment with non-steroidal anti-inflammatory medication. The physiotherapist diagnosed a rotator cuff myofascial pain syndrome and treatment proceeded on that basis. Treatment consisted of twitch-obtaining dry needling, myofascial release and exercise therapy. The result was a change in the Harryman Rotator Cuff Functional Assessment Scale score from 22/52 to 43/52 over eight treatments. Strength was regained, and the subjective pain report on the visual rating scale improved to 1/10. The case study highlights the importance of accurate assessment and consideration of alternative myofascial sources of pain, even in circumstances which initially seem trauma related. Precise diagnosis of the cause (in this case, rotator cuff myofascial pain syndrome) will result in effective treatment being administered.

  8. The significance of accurate dielectric tissue data for hyperthermia treatment planning

    NARCIS (Netherlands)

    van de Kamer, JB; van Wieringen, N; de Leeuw, AAC; Lagendijk, JJW

    2001-01-01

    For hyperthermia treatment planning, dielectric properties of several tissue types are required. Since it is difficult to perform patient-specific dielectric imaging, default values based on literature data are used. However, these show a large spread (approximately 50%). Consequently, it is

  9. The significance of accurate dielectric tissue data for hyperthermia treatment planning

    NARCIS (Netherlands)

    van de Kamer, J. B.; van Wieringen, N.; de Leeuw, A. A.; Lagendijk, J. J.

    2001-01-01

    For hyperthermia treatment planning, dielectric properties of several tissue types are required. Since it is difficult to perform patient specific dielectric imaging, default values based on literature data are used. However, these show a large spread (approximately 50%). Consequently, it is

  10. Purification of components required for accurate transcription of ribosomal RNA from Acanthamoeba castellanii.

    Science.gov (United States)

    Iida, C T; Paule, M R

    1992-01-01

    The components required for specific transcription of ribosomal RNA were isolated from logarithmically growing Acanthamoeba castellanii. The transcription initiation factor fraction, TIF, and RNA polymerase I were extracted from whole cells at 0.35 M KCl. The extract was fractionated with polyethylenimine, then chromatographed on phosphocellulose (P11), which resulted in the separation of TIF from RNA polymerase I. The fractions containing TIF were further chromatographed on DEAE-cellulose (DE52), Heparin Affigel, and Matrex green agarose, followed by sedimentation through glycerol gradients. TIF was purified approximately 17,000-fold, shown to have a native molecular weight of 289 kD, and shown to bind specifically to rRNA promoter sequences by DNase I footprinting. The addition of homogeneous RNA polymerase I to this complex permitted the initiation of specific transcription in vitro. The phosphocellulose fractions containing RNA polymerase I were chromatographed on DEAE-cellulose, Heparin-Sepharose, and DEAE-Sephadex, and sedimented through sucrose gradients. Polymerase I was purified to apparent homogeneity with a yield of 8.1% and a specific activity of 315. It contained one fewer subunit than previously reported. DNase I protection experiments demonstrated that, in both partially purified and homogeneous fractions, RNA polymerase I was capable of stable binding to the TIF-rDNA complex and of correctly initiating transcription on rDNA templates. PMID:1620619

  11. Asthma control cost-utility randomized trial evaluation (ACCURATE): the goals of asthma treatment

    Directory of Open Access Journals (Sweden)

    Honkoop Persijn J

    2011-11-01

    Background: Despite the availability of effective therapies, asthma remains a source of significant morbidity and use of health care resources. The central research question of the ACCURATE trial is whether maximal doses of (combination) therapy should be used for long periods in an attempt to achieve complete control of all features of asthma. An additional question is whether patients and society value the potential incremental benefit, if any, sufficiently to concur with such a treatment approach. We assessed patient preferences and cost-effectiveness of three treatment strategies aimed at achieving different levels of clinical control: (1) sufficiently controlled asthma; (2) strictly controlled asthma; (3) strictly controlled asthma based on exhaled nitric oxide as an additional disease marker. Design: 720 patients with mild to moderate persistent asthma from general practices with a practice nurse, age 18-50 yr, on daily treatment with inhaled corticosteroids (more than 3 months' use of inhaled corticosteroids in the previous year), will be identified via patient registries of general practices in the Leiden, Nijmegen, and Amsterdam areas in The Netherlands. The design is a 12-month cluster-randomised parallel trial with 40 general practices in each of the three arms. The patients will visit the general practice at baseline, 3, 6, 9, and 12 months. At each planned and unplanned visit to the general practice, treatment will be adjusted with support of an internet-based asthma monitoring system supervised by a central coordinating specialist nurse. Patient preferences and utilities will be assessed by questionnaire and interview. Data on asthma control, treatment step, adherence to treatment, utilities and costs will be obtained every 3 months and at each unplanned visit. Differences in societal costs (medication, other health care and productivity) will be compared to differences in the number of limited activity days and in quality adjusted

  12. Accurate, stable and efficient Navier-Stokes solvers based on explicit treatment of the pressure term

    International Nuclear Information System (INIS)

    Johnston, Hans; Liu Jianguo

    2004-01-01

    We present numerical schemes for the incompressible Navier-Stokes equations based on a primitive variable formulation in which the incompressibility constraint has been replaced by a pressure Poisson equation. The pressure is treated explicitly in time, completely decoupling the computation of the momentum and kinematic equations. The result is a class of extremely efficient Navier-Stokes solvers. Full time accuracy is achieved for all flow variables. The key to the schemes is a Neumann boundary condition for the pressure Poisson equation which enforces the incompressibility condition for the velocity field. Irrespective of explicit or implicit time discretization of the viscous term in the momentum equation, the explicit time discretization of the pressure term does not affect the time step constraint. Indeed, we prove unconditional stability of the new formulation for the Stokes equation with explicit treatment of the pressure term and first or second order implicit treatment of the viscous term. Systematic numerical experiments for the full Navier-Stokes equations indicate that a second order implicit time discretization of the viscous term, with the pressure and convective terms treated explicitly, is stable under the standard CFL condition. Additionally, various numerical examples are presented, including both implicit and explicit time discretizations, using spectral and finite difference spatial discretizations, demonstrating the accuracy, flexibility and efficiency of this class of schemes. In particular, a Galerkin formulation is presented requiring only C0 elements to implement.
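    The decoupled update described above (solve a pressure Poisson equation with the pressure treated explicitly, then advance the momentum equation) can be sketched on a doubly periodic domain, where the Neumann boundary-condition subtleties drop out. This is an illustrative spectral forward-Euler version, not the authors' scheme; the Taylor-Green vortex serves as an exact reference:

```python
import numpy as np

N, nu, dt, steps = 32, 0.1, 1e-3, 200
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing='ij')
k = np.fft.fftfreq(N, 1.0 / N)
KX, KY = np.meshgrid(k, k, indexing='ij')
K2 = KX**2 + KY**2
K2s = K2.copy(); K2s[0, 0] = 1.0      # guard the mean mode in the Poisson solve

def deriv(f, kk):                     # spectral derivative
    return np.real(np.fft.ifft2(1j * kk * np.fft.fft2(f)))

def lap(f):                           # spectral Laplacian
    return np.real(np.fft.ifft2(-K2 * np.fft.fft2(f)))

u, v = np.cos(X) * np.sin(Y), -np.sin(X) * np.cos(Y)   # Taylor-Green vortex
for _ in range(steps):
    Nu = u * deriv(u, KX) + v * deriv(u, KY)           # convective terms
    Nv = u * deriv(v, KX) + v * deriv(v, KY)
    rhs = -(deriv(Nu, KX) + deriv(Nv, KY))
    p = np.real(np.fft.ifft2(np.fft.fft2(rhs) / (-K2s)))  # pressure Poisson solve
    # explicit pressure: the momentum update is fully decoupled
    u = u + dt * (-Nu - deriv(p, KX) + nu * lap(u))
    v = v + dt * (-Nv - deriv(p, KY) + nu * lap(v))

exact = np.cos(X) * np.sin(Y) * np.exp(-2 * nu * dt * steps)
err = np.max(np.abs(u - exact))
print(err)                            # small first-order-in-time error
```

    For this exact solution the convective term is a pure gradient and is absorbed by the pressure, so the velocity decays viscously; the paper's point is that the same explicit-pressure structure remains stable for general flows under the standard CFL condition.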

  13. Accurate treatment of nanoelectronics through improved description of van der Waals Interactions

    DEFF Research Database (Denmark)

    Kelkkanen, Kari André

    -dimensional extent of the surface and favors adsorption sites close to the surface, while the Pauli repulsion keeps the adsorbate away. Impurities, like an adatom or an adsorbed pyramid, push the adsorbate away from the surface, giving a reduction of the attraction due to vdW forces. In this way the vdW force varies… functionals. DFT calculations are performed for the water dimer and hexamer, and for liquid water. Calculations on four low-energy isomers of the water hexamer show that the vdW-DF accurately determines the energetic trend in these small clusters. However, the dissociation-energy values with the vd…

  14. Accurate Treatment of Collisions and Water-Delivery in Models of Terrestrial Planet Formation

    Science.gov (United States)

    Haghighipour, Nader; Maindl, Thomas; Schaefer, Christoph

    2017-10-01

    It is widely accepted that collisions among solid bodies, initiated by their interactions with planetary embryos, are the key process in the formation of terrestrial planets and in the transport of volatiles and chemical compounds to their accretion zones. Unfortunately, due to computational complexities, these collisions are often treated in a rudimentary way. Impacts are considered to be perfectly inelastic and volatiles are considered to be fully transferred from one object to the other. This perfect-merging assumption has profound effects on the mass and composition of the final planetary bodies, as it grossly overestimates the masses of these objects and the amounts of volatiles and chemical elements transferred to them. It also entirely neglects collisional loss of volatiles (e.g., water) and draws an unrealistic connection between these properties and the chemical structure of the protoplanetary disk (i.e., the location of their original carriers). We have developed a new and comprehensive methodology to simulate the growth of embryos into planetary bodies, in which we use a combination of SPH and N-body codes to accurately model collisions as well as the transport/transfer of chemical compounds. Our methodology accounts for the loss of volatiles (e.g., ice sublimation) during the orbital evolution of their carriers and accurately tracks their transfer from one body to another. Results of our simulations show that traditional N-body modeling of terrestrial planet formation overestimates the mass and water content of the final planets by over 60%, implying not only that the amount of water it suggests is far from realistic, but also that small planets such as Mars can form in these simulations when collisions are treated properly. We will present details of our methodology and discuss its implications for terrestrial planet formation and water delivery to Earth.
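    The difference between perfect merging and a collision-resolved treatment reduces to simple bookkeeping. In this sketch the retention efficiencies are made-up illustrative numbers, not the SPH results; perfect merging is the special case where both efficiencies equal one:

```python
def accrete(m_t, f_t, m_i, f_i, eps_m=1.0, eps_w=1.0):
    """Target of mass m_t and water mass-fraction f_t accretes an impactor
    (mass m_i, water fraction f_i). eps_m / eps_w are the retained fractions
    of the impactor's mass and water; eps_m = eps_w = 1 is perfect merging."""
    m_new = m_t + eps_m * m_i
    water_new = f_t * m_t + eps_w * f_i * m_i
    return m_new, water_new / m_new

# a target hit by a wet embryo of a tenth its mass (illustrative values)
m_perf, f_perf = accrete(1.0, 0.001, 0.1, 0.05)
m_real, f_real = accrete(1.0, 0.001, 0.1, 0.05, eps_m=0.4, eps_w=0.2)
print(m_perf, f_perf)   # perfect merging: heavier, wetter planet
print(m_real, f_real)   # imperfect accretion: less mass and water retained
```

    Accumulated over the tens of giant impacts in a formation simulation, the gap between the two columns is what produces the >60% overestimate quoted above.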

  15. Impact of appendicitis during pregnancy: no delay in accurate diagnosis and treatment.

    Science.gov (United States)

    Aggenbach, L; Zeeman, G G; Cantineau, A E P; Gordijn, S J; Hofker, H S

    2015-03-01

    Acute appendicitis during pregnancy may be associated with serious maternal and/or fetal complications. To date, the optimal clinical approach to the management of pregnant women suspected of having acute appendicitis is subject to debate. The purpose of this retrospective study was to provide recommendations for prospective clinical management of pregnant patients with suspected appendicitis. Case records of all pregnant patients suspected of having appendicitis who underwent appendectomy at our hospital between 1990 and 2010 were reviewed. Appendicitis was histologically verified in fifteen of twenty-one pregnant women, of whom six were diagnosed with perforated appendicitis. Maternal morbidity was seen in two cases. Premature delivery occurred in two of the six cases with perforated appendicitis and in two of the six cases following a negative appendectomy. Perinatal mortality did not occur. Both (perforated) appendicitis and negative appendectomy during pregnancy are associated with a high risk of premature delivery. Clinical presentation and imaging remain vital in deciding whether surgical intervention is indicated. We recommend cautiously weighing the risks of delay until correct diagnosis, with its associated increased risk of appendiceal perforation, against the risk of unnecessary surgical intervention. Based upon current literature, we recommend clinicians consider an MRI following an inconclusive or negative abdominal ultrasound, aiming to improve diagnostic accuracy and reduce the rate of negative appendectomies. Accurate and prompt diagnosis of acute appendicitis should be pursued to avoid unnecessary exploration and to allow timely surgical intervention in pregnant women suspected of having appendicitis. Copyright © 2015. Published by Elsevier Ltd.

  16. 7 CFR 305.15 - Treatment requirements.

    Science.gov (United States)

    2010-01-01

    ... recording devices used during treatment must be password-protected and tamperproof. The devices must be able... cold-treated within the area over which the U.S. Department of Homeland Security is assigned the... consignments of fruit must be cold treated within the area over which the U.S. Department of Homeland Security...

  17. Nutrients requirements in biological industrial wastewater treatment ...

    African Journals Online (AJOL)

    Wastewaters from olive mills and pulp and paper mill industries in Jordan have been characterized and treated using laboratory scale anaerobic and aerobic sequencing batch reactors, respectively. Nutrient requirements for these two industrial wastewaters were found to be less than what is usually reported in the literature ...

  18. Cytology-based treatment decision in primary lung cancer: is it accurate enough?

    Science.gov (United States)

    Sakr, Lama; Roll, Patrice; Payan, Marie-José; Liprandi, Agnès; Dutau, Hervé; Astoul, Philippe; Robaglia-Schlupp, Andrée; Loundou, Anderson; Barlesi, Fabrice

    2012-03-01

    Accurate distinction of lung cancer types has become increasingly important as recent trials have shown differential response to chemotherapy among non-small cell lung carcinoma (NSCLC) subtypes. Cytological procedures are frequently used but their diagnostic accuracy has been previously questioned. However, new endoscopic and cytological techniques might have improved cytological accuracy in comparison with prior findings. The aim of this study was to reassess cytological accuracy for diagnosis of lung cancer subtypes. A retrospective chart review of subjects who underwent fiberoptic bronchoscopy (FOB) for suspicion of lung cancer in 2007-2008, was undertaken. Reports of bronchoscopically derived cytological specimens were compared to those of histological material. Endoscopic findings and specific investigational techniques were taken into account. A total of 467 FOB with both cytological and histological diagnostic techniques were performed in 449 subjects. Patients consisted of 345 men and 104 women (median age, 65 yrs). Cytology proved malignancy in 157 patients. Cytologically diagnosed carcinomas were classified into squamous cell carcinoma (SqCC) in 56, adenocarcinoma (ADC) in 6, small cell lung carcinoma (SCLC) in 12, non-small cell lung carcinoma not otherwise specified (NSCLC-NOS) in 71, and unclassified carcinoma in 12. Cytology correlated fairly with biopsy specimens, as agreement was observed in 83% of SCLC, 100% of ADC, 74% of SqCC and 8% of NSCLC-NOS. Interestingly, 61% of cytologically identified NSCLC-NOS were classified as ADC by histology. Cytological accuracy improved in case of an endobronchial lesion, mainly for SqCC. These results indicate that cytological accuracy remains fair with regard to diagnosis of squamous and non-squamous lung cancer subtypes. Improvement of cytological accuracy is expected however with novel diagnostic strategies. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  19. Unified treatment for accurate and fast evaluation of the Fermi–Dirac functions

    International Nuclear Information System (INIS)

    Guseinov, I. I.; Mamedov, B. A.

    2010-01-01

    A new analytical approach to the computation of the Fermi-Dirac (FD) functions is presented, which was suggested by previous experience with various algorithms. Using the binomial expansion theorem, these functions are expressed through the binomial coefficients and familiar incomplete Gamma functions. This simplification and the use of the memory of the computer for the calculation of binomial coefficients may extend the limits to large arguments for users and result in speedier calculation, should such limits be required in practice. Some numerical results are presented for significant mapping examples and they are briefly discussed. (general)
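    As a point of comparison for any fast algorithm, the FD function F_k(η) = ∫₀^∞ t^k / (1 + e^(t−η)) dt can be checked against a simple reference. The sketch below uses direct quadrature and, for η ≤ 0, the alternating series F_k(η) = Γ(k+1) Σ_{n≥1} (−1)^(n+1) e^(nη) / n^(k+1); it illustrates the function being computed, not the paper's binomial-expansion algorithm:

```python
import numpy as np
from math import gamma, exp

def fd_quad(k, eta, tmax=60.0, n=200_000):
    """F_k(eta) by trapezoidal quadrature on a truncated interval."""
    t = np.linspace(0.0, tmax, n)
    y = t**k / (1.0 + np.exp(t - eta))
    return float(np.sum((y[1:] + y[:-1]) * 0.5) * (t[1] - t[0]))

def fd_series(k, eta, terms=60):
    """Alternating series for F_k(eta), valid for eta <= 0."""
    s = sum((-1) ** (n + 1) * exp(n * eta) / n ** (k + 1)
            for n in range(1, terms + 1))
    return gamma(k + 1) * s

print(fd_quad(0.5, -1.0), fd_series(0.5, -1.0))   # the two agree closely
```

    The series converges quickly only for sufficiently negative η, which is exactly the regime limitation that analytic approaches such as the one in this record aim to remove.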

  20. Accurate diagnosis and treatment of Vibrio vulnificus infection: a retrospective study of 12 cases

    Directory of Open Access Journals (Sweden)

    Yoshinori Matsuoka

    2013-02-01

    BACKGROUND AND AIMS: Vibrio vulnificus causes an infectious disease that has extremely poor convalescence and leads to necrotizing fasciitis. In this study, we sought to define the characteristic epidemiology of V. vulnificus infection and clarify its diagnosis at the global level. METHODS: Over a period of 10 years, we investigated the appearance of symptoms, underlying conditions, treatment, and mortality in 12 patients (eight men, four women; >50 years old; average age, 66 years) infected with V. vulnificus. RESULTS: The development of symptoms occurred primarily between June and September, a period during which seawater temperature rises and the prevalence of V. vulnificus increases. All patients had underlying diseases, and seven patients reported a history of consuming fresh fish and uncooked shellfish. The patients developed sepsis and fever with sharp pain in the limbs. Limb abnormalities were observed on visual examination. All patients underwent debridement; however, in the survival group, the involved limb was amputated early in 80% of patients. The mortality rate was 58.3%. CONCLUSION: Recognition of the characteristic epidemiology and clinical features of this disease is important, and positive debridement should be performed on suspicion. When the illness reaches an advanced stage, however, amputation should be the immediate treatment of choice.

  2. Accuracy requirements in radiotherapy treatment planning

    International Nuclear Information System (INIS)

    Buzdar, S. A.; Afzal, M.; Nazir, A.; Gadhi, M. A.

    2013-01-01

    Radiation therapy attempts to deliver ionizing radiation to the tumour and can improve the survival chances and/or quality of life of patients. There are chances of errors and uncertainties in the entire process of radiotherapy that may affect the accuracy and precision of treatment management and decrease the degree of conformity. All expected inaccuracies, such as radiation dose determination, volume calculation, complete evaluation of the full extent of the tumour, biological behaviour of specific tumour types, organ motion during radiotherapy, imaging, biological/molecular uncertainties, sub-clinical disease, microscopic spread of the disease, uncertainty in normal tissue responses and radiation morbidity, need sound appreciation. Conformity can be increased by reducing such inaccuracies. With the yearly increase in computing speed and advancement in other technologies, the future will provide the opportunity to optimize a greater number of variables and reduce the errors in the treatment planning process. In the multi-disciplinary task of radiotherapy, efforts are needed to overcome the errors and uncertainty not only by physicists but also by radiologists, pathologists and oncologists, to reduce molecular and biological uncertainties. Radiation therapy physics is advancing towards an optimal goal: to improve accuracy where necessary and to reduce uncertainty where possible. (author)

  3. Accurate treatment of total photoabsorption cross sections by an ab initio time-dependent method

    Science.gov (United States)

    Daud, Mohammad Noh

    2014-09-01

    A detailed discussion of the parallel and perpendicular transitions required for the photoabsorption of a molecule is presented within a time-dependent view. Total photoabsorption cross sections for the first two ultraviolet absorption bands of the N2O molecule, corresponding to transitions from the X¹A′ state to the 2¹A′ and 1¹A″ states, are calculated to test the reliability of the method. By fully considering the electric-field polarization vector of the incident light, the method treats the coupling of angular momentum and parity differently for the two kinds of transitions, depending on whether the vector is: (a) parallel to the molecular plane, for an electronic transition between states with the same symmetry; or (b) perpendicular to the molecular plane, for an electronic transition between states with different symmetry. Through this, for those transitions, we are able to offer an insightful picture of the dynamics involved and to characterize some new aspects of the photoabsorption process of N2O. Our calculations predicted that the parallel transition to the 2¹A′ state is the major dissociation pathway, which is in qualitative agreement with the experimental observations. Most importantly, a significant improvement in the absolute value of the total cross section over previous theoretical results [R. Schinke, J. Chem. Phys. 134, 064313 (2011); M.N. Daud, G.G. Balint-Kurti, A. Brown, J. Chem. Phys. 122, 054305 (2005); S. Nanbu, M.S. Johnson, J. Phys. Chem. A 108, 8905 (2004)] was obtained.
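    The time-dependent route to a cross section can be illustrated with a toy autocorrelation function. In the half-Fourier-transform expression σ(E) ∝ E · Re ∫₀^∞ ⟨ψ(0)|ψ(t)⟩ e^{iEt} dt (ħ = 1), a rapidly decaying C(t) yields a broad, structureless band, as seen for direct dissociation. The model parameters below (vertical excitation energy E0 and decay time τ) are invented for illustration, not taken from the N2O calculation:

```python
import numpy as np

E0, tau = 6.0, 1.5                   # assumed vertical energy and decay time
t = np.linspace(0.0, 40.0, 4096)
dt = t[1] - t[0]
C = np.exp(-(t / tau) ** 2) * np.exp(-1j * E0 * t)   # model <psi(0)|psi(t)>

E = np.linspace(0.1, 12.0, 600)
# sigma(E) ~ E * Re int_0^inf C(t) e^{iEt} dt  (Riemann sum; C decays to 0)
sigma = E * np.array([np.sum(np.real(C * np.exp(1j * e * t))) * dt for e in E])
peak = E[np.argmax(sigma)]
print(peak)                          # band maximum slightly above E0
```

    The Gaussian decay of C(t) transforms into a Gaussian band centred near E0; the leading factor of E skews the maximum slightly upward.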

  4. 40 CFR 125.60 - Primary or equivalent treatment requirements.

    Science.gov (United States)

    2010-07-01

    ... Modifying the Secondary Treatment Requirements Under Section 301(h) of the Clean Water Act § 125.60 Primary... requirements. 125.60 Section 125.60 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER... results of the monitoring, that the effluent it discharges has received primary or equivalent treatment...

  5. 40 CFR 141.83 - Source water treatment requirements.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Source water treatment requirements. 141.83 Section 141.83 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Control of Lead and Copper § 141.83 Source water treatment requirements. Systems shall...

  6. Research and application of a fast 3D-reconstruction method in accurate radiotherapy treatment planning system

    International Nuclear Information System (INIS)

    Li Jia; Long Pengcheng; Huang Shanqing; Li Gui; Song Gang; Luo Yuetong; Yan Feng; Wu Yican; Fds Team

    2010-01-01

    In radiotherapy treatment planning, in order to deliver a high dose to the tumor accurately while maintaining an acceptably low dose to the normal tissues, particularly those adjacent to the target, it is necessary to reconstruct the three-dimensional anatomical structure from planar contour information. The existing methods could not satisfy the clinical demand in terms of speed and accuracy. By improving the isosurface extraction algorithm, we designed a fast 3D-reconstruction algorithm pipeline implemented with the Visualization Toolkit (VTK). A series of test results from real patient image datasets show that this method can reconstruct the surface smoothly and effectively avoid the 'ladder effect'. The numbers of points and triangles were greatly reduced. The rendering time decreased from 8 seconds to less than 3 seconds compared with the standard isosurface-extraction algorithm. On the premise of preserving the original anatomical structure, this method improves the reconstruction effect and accelerates rendering, and it can be applied not only to accurate radiotherapy treatment planning systems but also to other fields that need 3D reconstruction. (authors)
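    The point- and triangle-reduction step can be illustrated independently of VTK. This pure-NumPy sketch is a generic vertex-clustering decimation, not the authors' VTK pipeline: vertices are snapped to a coarse grid, merged, and collapsed triangles are dropped:

```python
import numpy as np

def decimate(verts, tris, cell=0.25):
    """Vertex-clustering decimation: average all vertices falling in the
    same grid cell into one, remap the triangles, drop degenerate ones."""
    keys = np.floor(verts / cell).astype(np.int64)
    uniq, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    counts = np.bincount(inverse).astype(float)
    new_verts = np.stack(
        [np.bincount(inverse, weights=verts[:, d]) / counts for d in range(3)],
        axis=1)
    new_tris = inverse[tris]
    keep = ((new_tris[:, 0] != new_tris[:, 1]) &
            (new_tris[:, 1] != new_tris[:, 2]) &
            (new_tris[:, 0] != new_tris[:, 2]))
    return new_verts, new_tris[keep]

# dense triangulated height field standing in for a reconstructed surface
n = 50
g = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(g, g, indexing='ij')
verts = np.column_stack([X.ravel(), Y.ravel(), np.exp(-(X**2 + Y**2)).ravel()])
idx = np.arange(n * n).reshape(n, n)
a, b, c, d = (idx[:-1, :-1].ravel(), idx[1:, :-1].ravel(),
              idx[:-1, 1:].ravel(), idx[1:, 1:].ravel())
tris = np.concatenate([np.column_stack([a, b, c]), np.column_stack([b, d, c])])
v2, t2 = decimate(verts, tris)
print(len(verts), len(tris), '->', len(v2), len(t2))  # far fewer points/triangles
```

    Production pipelines (including VTK's decimation filters) use more careful error-bounded criteria, but the same point/triangle reduction is what shortens rendering time in the record above.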

  7. Heparin removal by ecteola-cellulose pre-treatment enables the use of plasma samples for accurate measurement of anti-Yellow fever virus neutralizing antibodies.

    Science.gov (United States)

    Campi-Azevedo, Ana Carolina; Peruhype-Magalhães, Vanessa; Coelho-Dos-Reis, Jordana Grazziela; Costa-Pereira, Christiane; Yamamura, Anna Yoshida; Lima, Sheila Maria Barbosa de; Simões, Marisol; Campos, Fernanda Magalhães Freire; de Castro Zacche Tonini, Aline; Lemos, Elenice Moreira; Brum, Ricardo Cristiano; de Noronha, Tatiana Guimarães; Freire, Marcos Silva; Maia, Maria de Lourdes Sousa; Camacho, Luiz Antônio Bastos; Rios, Maria; Chancey, Caren; Romano, Alessandro; Domingues, Carla Magda; Teixeira-Carvalho, Andréa; Martins-Filho, Olindo Assis

    2017-09-01

    Technological innovations in vaccinology have recently contributed to bring about novel insights for the vaccine-induced immune response. While the current protocols that use peripheral blood samples may provide abundant data, a range of distinct components of whole blood samples are required and the different anticoagulant systems employed may impair some properties of the biological sample and interfere with functional assays. Although the interference of heparin in functional assays for viral neutralizing antibodies such as the functional plaque-reduction neutralization test (PRNT), considered the gold-standard method to assess and monitor the protective immunity induced by the Yellow fever virus (YFV) vaccine, has been well characterized, the development of pre-analytical treatments is still required for the establishment of optimized protocols. The present study intended to optimize and evaluate the performance of pre-analytical treatment of heparin-collected blood samples with ecteola-cellulose (ECT) to provide accurate measurement of anti-YFV neutralizing antibodies, by PRNT. The study was designed in three steps, including: I. Problem statement; II. Pre-analytical steps; III. Analytical steps. Data confirmed the interference of heparin on PRNT reactivity in a dose-responsive fashion. Distinct sets of conditions for ECT pre-treatment were tested to optimize the heparin removal. The optimized protocol was pre-validated to determine the effectiveness of heparin plasma:ECT treatment to restore the PRNT titers as compared to serum samples. The validation and comparative performance was carried out by using a large range of serum vs heparin plasma:ECT 1:2 paired samples obtained from unvaccinated and 17DD-YFV primary vaccinated subjects. Altogether, the findings support the use of heparin plasma:ECT samples for accurate measurement of anti-YFV neutralizing antibodies. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. High Specificity in Circulating Tumor Cell Identification Is Required for Accurate Evaluation of Programmed Death-Ligand 1.

    Directory of Open Access Journals (Sweden)

    Jennifer L Schehr

    Full Text Available Expression of programmed death-ligand 1 (PD-L1) in non-small cell lung cancer (NSCLC) is typically evaluated through invasive biopsies; however, recent advances in the identification of circulating tumor cells (CTCs) may be a less invasive method to assay tumor cells for these purposes. These liquid biopsies rely on accurate identification of CTCs from the diverse populations in the blood, where some tumor cells share characteristics with normal blood cells. While many blood cells can be excluded by their high expression of CD45, neutrophils and other immature myeloid subsets have low to absent expression of CD45 and also express PD-L1. Furthermore, cytokeratin is typically used to identify CTCs, but neutrophils may stain non-specifically for intracellular antibodies, including cytokeratin, thus preventing accurate evaluation of PD-L1 expression on tumor cells. This holds even greater significance when evaluating PD-L1 in epithelial cell adhesion molecule (EpCAM)-positive and EpCAM-negative CTCs (as in epithelial-mesenchymal transition (EMT)). To evaluate the impact of CTC misidentification on PD-L1 evaluation, we utilized CD11b to identify myeloid cells. CTCs were isolated from patients with metastatic NSCLC using EpCAM, MUC1 or Vimentin capture antibodies and exclusion-based sample preparation (ESP) technology. Large populations of CD11b+CD45lo cells were identified in buffy coats and stained non-specifically for intracellular antibodies including cytokeratin. The proportion of CD11b+ cells misidentified as CTCs varied among patients, accounting for 33-100% of traditionally identified CTCs. Cells captured with vimentin had a higher frequency of CD11b+ cells, at 41%, compared to 20% and 18% with MUC1 or EpCAM, respectively. Cells misidentified as CTCs ultimately skewed PD-L1 expression to varying degrees across patient samples. Interfering myeloid populations can be differentiated from true CTCs with additional staining criteria, thus improving the

  9. Uptake and partitioning of nutrients in blackberry and raspberry and evaluating plant nutrient status for accurate assessment of fertilizer requirements

    Science.gov (United States)

    Raspberry and blackberry plantings have a relatively low nutrient requirement compared to many other perennial fruit crops. Annual total N accumulation ranged from 62-110 lb/a in red raspberry and 33-39 lb/a in blackberry. Primocanes rely primarily on fertilizer N for growth, whereas floricane growt...

  10. An Iml3-Chl4 Heterodimer Links the Core Centromere to Factors Required for Accurate Chromosome Segregation

    Directory of Open Access Journals (Sweden)

    Stephen M. Hinshaw

    2013-10-01

    Full Text Available Accurate segregation of genetic material in eukaryotes relies on the kinetochore, a multiprotein complex that connects centromeric DNA with microtubules. In yeast and humans, two proteins—Mif2/CENP-C and Chl4/CENP-N—interact with specialized centromeric nucleosomes and establish distinct but cross-connecting axes of chromatin-microtubule linkage. Proteins recruited by Chl4/CENP-N include a subset that regulates chromosome transmission fidelity. We show that Chl4 and a conserved member of this subset, Iml3, both from Saccharomyces cerevisiae, form a stable protein complex that interacts with Mif2 and Sgo1. We have determined the structures of an Iml3 homodimer and an Iml3-Chl4 heterodimer, which suggest a mechanism for regulating the assembly of this functional axis of the kinetochore. We propose that at the core centromere, the Chl4-Iml3 complex participates in recruiting factors, such as Sgo1, that influence sister chromatid cohesion and encourage sister kinetochore biorientation.

  11. The Alzheimer's β-secretase enzyme BACE1 is required for accurate axon guidance of olfactory sensory neurons and normal glomerulus formation in the olfactory bulb

    Directory of Open Access Journals (Sweden)

    Rajapaksha Tharinda W

    2011-12-01

    Full Text Available Abstract Background The β-secretase, β-site amyloid precursor protein cleaving enzyme 1 (BACE1), is a prime therapeutic target for lowering cerebral β-amyloid (Aβ) levels in Alzheimer's disease (AD). Clinical development of BACE1 inhibitors is being intensely pursued. However, little is known about the physiological functions of BACE1, and the possibility exists that BACE1 inhibition may cause mechanism-based side effects. Indeed, BACE1-/- mice exhibit a complex neurological phenotype. Interestingly, BACE1 co-localizes with presynaptic neuronal markers, indicating a role in axons and/or terminals. Moreover, recent studies suggest axon guidance molecules are potential BACE1 substrates. Here, we used a genetic approach to investigate the function of BACE1 in axon guidance of olfactory sensory neurons (OSNs), a well-studied model of axon targeting in vivo. Results We bred BACE1-/- mice with gene-targeted mice in which GFP is expressed from the loci of two odorant receptors (ORs), MOR23 and M72, and olfactory marker protein (OMP), to produce offspring that were heterozygous for MOR23-GFP, M72-GFP, or OMP-GFP and were either BACE1+/+ or BACE1-/-. BACE1-/- mice had olfactory bulbs (OBs) that were smaller and weighed less than OBs of BACE1+/+ mice. In wild-type mice, BACE1 was present in OSN axon terminals in OB glomeruli. In whole-mount preparations and tissue sections, many OB glomeruli from OMP-GFP; BACE1-/- mice were malformed compared to wild-type glomeruli. MOR23-GFP; BACE1-/- mice had an irregular MOR23 glomerulus that was innervated by randomly oriented, poorly fasciculated OSN axons compared to BACE1+/+ mice. Most importantly, M72-GFP; BACE1-/- mice exhibited M72 OSN axons that were mis-targeted to ectopic glomeruli, indicating impaired axon guidance in BACE1-/- mice. Conclusions Our results demonstrate that BACE1 is required for the accurate targeting of OSN axons and the proper formation of glomeruli in the OB, suggesting a role for BACE1 in

  12. Is intraoperative surgeon's opinion an accurate tool to assess the outcome of endoscopic treatment for vesicoureteral reflux?

    Science.gov (United States)

    Parente, Alberto; Tardáguila, Ana-Rosa; Romero, Rosa; Burgos, Laura; Rivas, Susana; Angulo, José-María

    2013-12-01

    Our experience in the endoscopic treatment of vesicoureteral reflux (VUR) has significantly increased during the last decade. To help develop diagnostic tests to check the success of this procedure, we evaluated the accuracy of surgeons' intraoperative observations as a predictor of treatment results. We performed a prospective study of patients with VUR who were endoscopically treated during 1 year (106 renal units). Patients' age and gender, laterality, material used, grade of reflux, presence of ureteral duplication or associated pathology, and morphology of ureteral orifice were recorded as predictive factors related to the success rate. Surgeon and assistant indicated at the end of the endoscopic procedure whether the VUR was cured or not for each renal unit. These estimations were compared with postoperative voiding cystourethrogram results. Overall cure rate was 75.5%. Positive predictive value (PPV) for surgeon's opinion was 0.79 and negative predictive value (NPV) was 0.40. Statistical analysis demonstrated that the association between the surgeon's opinion and the cure rate was low with a Kappa value of 0.171 (p = 0.30). PPV of assistant's opinion was 0.80 and NPV was 0.40, with a Kappa value of 0.2 (p = 0.13). Concordance of surgeon and assistant's opinion resulted in PPV of 0.79 and NPV of 0.53 (Kappa = 0.261). Kappa value did not improve when surgeon's opinion was related to other factors such as the material employed, grade of reflux, presence of ureteral duplication or associated pathology and morphology of the ureteral orifice. In our experience, surgeon's opinion is not an accurate tool to predict the outcome of endoscopic treatment of VUR. Copyright © 2013 Journal of Pediatric Urology Company. Published by Elsevier Ltd. All rights reserved.
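The predictive values and agreement statistic reported above follow mechanically from a 2x2 table of intraoperative opinion versus cystourethrogram result. A minimal sketch, using hypothetical counts chosen only to illustrate the arithmetic (they are not the study's data):

```python
# Sketch: positive/negative predictive values and Cohen's kappa from a
# 2x2 agreement table (predicted cure vs. confirmed cure).
# The counts below are hypothetical, not the study's data.

def predictive_values(tp, fp, fn, tn):
    ppv = tp / (tp + fp)   # predicted cured and actually cured
    npv = tn / (tn + fn)   # predicted not cured and actually not cured
    return ppv, npv

def cohens_kappa(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    observed = (tp + tn) / n
    # chance agreement expected from the marginal totals
    p_yes = ((tp + fp) / n) * ((tp + fn) / n)
    p_no = ((fn + tn) / n) * ((fp + tn) / n)
    expected = p_yes + p_no
    return (observed - expected) / (1 - expected)

tp, fp, fn, tn = 76, 20, 6, 4   # hypothetical counts
ppv, npv = predictive_values(tp, fp, fn, tn)
kappa = cohens_kappa(tp, fp, fn, tn)
print(round(ppv, 2), round(npv, 2), round(kappa, 3))
```

A high PPV with a low kappa, as in the study, typically reflects a high base rate of cure: the surgeon's "cured" call is usually right simply because most renal units are cured, while true agreement beyond chance remains weak.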

  13. Motivation for Treatment as a Requirement for Success in the ...

    African Journals Online (AJOL)

    The case report is to demonstrate to clinicians that motivation is required at all stages of the treatment of patients with mental and behavioural disorder due to use of pentazocine in dependence. Two patients presented with pentazocine dependence. While the first patient presented following some persuasion from her ...

  14. Cystatin C more accurately detects mildly impaired renal function than creatinine in children receiving treatment for malignancy.

    Science.gov (United States)

    Blufpand, Hester N; Tromp, Jorien; Abbink, Floor C H; Stoffel-Wagner, Birgit; Bouman, Anneke A; Schouten-van Meeteren, Antoinette Y N; van Wijk, Joanna A E; Kaspers, Gertjan J L; Bökenkamp, Arend

    2011-08-01

    Monitoring of renal function is crucial in pediatric oncology. The use of creatinine to estimate glomerular filtration rate (GFR) is hampered by its dependency on muscle mass. Muscle wasting is common in children with cancer, leading to overestimation of GFR. Data on cystatin C are sparse in pediatric oncology, although this marker could be particularly useful in this population. Inulin clearance, estimated GFR using serum cystatin C according to Filler (eGFRcys) and serum creatinine according to Schwartz (eGFRcrea) were measured in 68 children with malignancy and 121 controls. We analyzed the difference between measured and estimated GFR and performance, bias and accuracy. Multiple linear regression analysis showed overestimation of GFR by eGFRcrea in females (B = -21.18; P = 0.001), and in patients with malignancy (B = -21.77; P = 0.014). eGFRcys overestimated GFR in females (B = -10.47; P = 0.001), but was independent of treatment for malignancy. Agreement with gold standard in detecting GFR below 90 mL/min/1.73 m² is better for eGFRcys (AUC 0.854) than for eGFRcrea (AUC 0.675) in the group with cancer. They performed comparably in the control group. Bland-Altman analysis showed considerable bias for eGFRcrea compared to eGFRcys (-14.3 mL/min/1.73 m² vs. -7.3 mL/min/1.73 m²). The proportion of estimates within 30% of true GFR for eGFRcrea (72.1%) was lower than for eGFRcys (82.4%) in the group with cancer. In the control group eGFRcrea (84.3%) outperformed eGFRcys (76.0%). When using the 50% limits of agreement, eGFRcys outperformed eGFRcrea in both groups. Cystatin C more accurately detects mildly impaired renal function than creatinine in children receiving treatment for malignancy. Copyright © 2011 Wiley-Liss, Inc.
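The accuracy measures used in this abstract (Bland-Altman bias with 95% limits of agreement, and the fraction of estimates within 30% of measured GFR, often called P30) are straightforward to compute. A minimal sketch on hypothetical paired values, not the study's data:

```python
# Sketch: Bland-Altman bias and 95% limits of agreement for an estimated GFR
# against a measured (gold-standard) GFR, plus the fraction of estimates
# within 30% of the measurement ("P30"). All paired values are hypothetical.

from statistics import mean, stdev

def bland_altman(estimated, measured):
    diffs = [e - m for e, m in zip(estimated, measured)]
    bias = mean(diffs)                     # mean difference (systematic error)
    sd = stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def p30(estimated, measured):
    within = sum(1 for e, m in zip(estimated, measured)
                 if abs(e - m) <= 0.30 * m)
    return within / len(measured)

measured  = [95.0, 110.0, 80.0, 60.0, 120.0]   # hypothetical, mL/min/1.73 m^2
estimated = [88.0, 98.0, 75.0, 70.0, 105.0]

bias, limits = bland_altman(estimated, measured)
print(round(bias, 1), round(p30(estimated, measured), 2))
```

A negative bias, as for both estimators in the study, indicates systematic underestimation of the measured GFR; the narrower the limits of agreement, the more consistent the estimator.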

  15. Legal requirements for optimal haemophilia treatment in Germany.

    Science.gov (United States)

    Brackmann, Hans-Hermann

    2014-11-01

    The clinical benefits of early prophylaxis in the treatment of haemophilia have been unquestioned since publication of the results of the first randomized study. The question of whether or not prophylaxis is cost-effective remains to be proven. For European physicians treating haemophilia patients, and for German clinicians in particular, the law largely supports the use of prophylaxis in haemophilia, but many doctors are unaware of this. The aim of this review was therefore to describe the German legal framework and outline how it can be used to support appropriate clinical decision-making in the treatment of haemophilia and justify the use of prophylaxis to health insurers and third-party payers. The German Disability Equalisation Law and German Social Law Books V and IX outline legal requirements to prevent or ameliorate disability, and support the argument that all haemophilia patients, including adults, have the right to receive appropriate, adequate, and cost-effective treatment. "Appropriate" treatment means that it must be in accordance with state-of-the-art medical knowledge taking into account medical progress. "Adequate" treatment must be conducive to the goals of haemophilia management, which are to prevent bleeds, treat bleeding episodes, maintain and/or restore joint function, and integrate patients into a normal social life. This can only be achieved when long-term treatment is adequately dosed and regularly administered for as long as it is required. Thankfully, with the availability of virus-safe factor concentrates, the introduction of home treatment programmes, and the law on our side, we are in a very strong position to achieve these goals. © 2013.

  16. Disease severity and treatment requirements in familial inflammatory bowel disease.

    Science.gov (United States)

    Ballester, María Pilar; Martí, David; Tosca, Joan; Bosca-Watts, Marta Maia; Sanahuja, Ana; Navarro, Pablo; Pascual, Isabel; Antón, Rosario; Mora, Francisco; Mínguez, Miguel

    2017-08-01

    Several studies demonstrate an increased prevalence and concordance of inflammatory bowel disease among the relatives of patients. Other studies suggest that genetic influence is over-estimated. The aims of this study are to evaluate the phenotypic expression and the treatment requirements in familial inflammatory bowel disease, to study the relationship between the number of relatives and degree of kinship with disease severity, and to quantify the impact of family aggregation compared to other environmental factors. Observational analytical study of 1211 patients followed in our unit. We analyzed, according to the existence of familial association, number and degree of consanguinity, the phenotypic expression, complications, extraintestinal manifestations, treatment requirements, and mortality. A multivariable analysis considering smoking habits and non-steroidal anti-inflammatory drugs was performed. 14.2% of patients had relatives affected. Median age at diagnosis tended to be lower in the familial group, 32 vs 29, p = 0.07. In familial ulcerative colitis, there was a higher proportion of extraintestinal manifestations: peripheral arthropathy (OR = 2.3, p = 0.015) and erythema nodosum (OR = 7.6, p = 0.001). In familial Crohn's disease, there were higher treatment requirements: immunomodulators (OR = 1.8, p = 0.029); biologics (OR = 1.9, p = 0.011); and surgery (OR = 1.7, p = 0.044). Abdominal abscesses increased with the number of relatives affected: 5.1% (sporadic), 7.0% (one), and 14.3% (two or more), p = 0.039. These associations were maintained in the multivariate analysis. Familial aggregation is a risk factor for more aggressive disease and higher treatment requirements, with a tendency toward earlier onset, more abdominal abscesses, and extraintestinal manifestations; it remains a risk factor after accounting for the influence of environmental factors.

  17. Treatment of corneal astigmatism with the new small-incision lenticule extraction (SMILE) laser technique: Is treatment of high degree astigmatism equally accurate, stable and safe as treatment of low degree astigmatism?

    DEFF Research Database (Denmark)

    Hansen, Rasmus Søgaard; Grauslund, Jakob; Lyhne, Niels

    Field: Ophthalmology Introduction: SMILE has proven effective in treatment of myopia and low degrees of astigmatism (less than 2 dioptres (D)), but there are no studies on treatment of high degrees of astigmatism (2 or more D). The aim of this study was to compare results after SMILE treatment...... for low or high degrees of astigmatism concerning accuracy, stability, and safety. Methods: Retrospective study of 1017 eyes treated with SMILE for myopia with low astigmatism or myopia with high astigmatism from 2011-2013 at the Department of Ophthalmology, Odense University Hospital, Denmark. Inclusion.......6%) treated for low astigmatism and four eyes (3.2%) treated for high astigmatism (P=0.02) had lost two or more lines of BSCVA after three months. Conclusion: This study is the first of its kind, and our results indicate that SMILE treatment of high degree astigmatism is equally accurate and stable...

  18. METHODS FOR DETERMINING AGITATOR MIXING REQUIREMENTS FOR A MIXING & SAMPLING FACILITY TO FEED WTP (WASTE TREATMENT PLANT)

    Energy Technology Data Exchange (ETDEWEB)

    GRIFFIN PW

    2009-08-27

    The following report is a summary of work conducted to evaluate the ability of existing correlative techniques and alternative methods to accurately estimate impeller speed and power requirements for mechanical mixers proposed for use in a mixing and sampling facility (MSF). The proposed facility would accept high level waste sludges from Hanford double-shell tanks and feed uniformly mixed high level waste to the Waste Treatment Plant. Numerous methods are evaluated and discussed, and resulting recommendations provided.
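For context on the kind of estimate involved, a first-order mixer power draw in the turbulent regime can be sketched with the standard power-number correlation P = Np·ρ·N³·D⁵, checked against the impeller Reynolds number. This is a textbook relation, not necessarily among the correlative techniques the report evaluates, and every input value below is hypothetical:

```python
# Sketch: the standard impeller power correlation for the turbulent regime,
#   P = Np * rho * N^3 * D^5,
# with the impeller Reynolds number Re = rho * N * D^2 / mu used to check
# that the turbulent assumption (roughly Re > 1e4) holds. Textbook relation;
# all values are hypothetical, not the report's data or methods.

def impeller_reynolds(rho, n, d, mu):
    """rho [kg/m^3], n [rev/s], d [m], mu [Pa*s]."""
    return rho * n * d**2 / mu

def impeller_power(np_, rho, n, d):
    """Power draw [W] in the fully turbulent regime (Np ~ constant)."""
    return np_ * rho * n**3 * d**5

rho, mu = 1200.0, 0.01        # hypothetical slurry density and viscosity
n, d = 2.0, 1.5               # 2 rev/s, 1.5 m impeller diameter
np_ = 5.0                     # hypothetical power number (turbine-type)

re = impeller_reynolds(rho, n, d, mu)
p = impeller_power(np_, rho, n, d)
print(int(re), round(p / 1000, 1))   # Reynolds number, power in kW
```

The D⁵ and N³ dependencies explain why impeller sizing dominates the power estimate, and why an accurate correlation matters for a full-scale mixing and sampling facility.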

  19. METHODS FOR DETERMINING AGITATOR MIXING REQUIREMENTS FOR A MIXING and SAMPLING FACILITY TO FEED WTP (WASTE TREATMENT PLANT)

    International Nuclear Information System (INIS)

    Griffin, P.W.

    2009-01-01

    The following report is a summary of work conducted to evaluate the ability of existing correlative techniques and alternative methods to accurately estimate impeller speed and power requirements for mechanical mixers proposed for use in a mixing and sampling facility (MSF). The proposed facility would accept high level waste sludges from Hanford double-shell tanks and feed uniformly mixed high level waste to the Waste Treatment Plant. Numerous methods are evaluated and discussed, and resulting recommendations provided.

  20. Vega library for processing DICOM data required in Monte Carlo verification of radiotherapy treatment plans

    International Nuclear Information System (INIS)

    Locke, C.; Zavgorodni, S.; British Columbia Cancer Agency, Vancouver Island Center, Victoria BC

    2008-01-01

    Monte Carlo (MC) methods provide the most accurate to-date dose calculations in heterogeneous media and complex geometries, and this spawns increasing interest in incorporating MC calculations into treatment planning quality assurance process. This involves MC dose calculations for clinically produced treatment plans. To perform these calculations, a number of treatment plan parameters specifying radiation beam

  1. SU-E-T-273: Do Task Group External Beam QA Recommendations Guarantee Accurate Treatment Plan Dose Delivery?

    International Nuclear Information System (INIS)

    Templeton, A; Liao, Y; Redler, G; Zhen, H

    2015-01-01

    Purpose: AAPM task groups 40/142 have provided an invaluable set of goals for physicists designing QA programs, attempting to standardize what would otherwise likely be a highly variable phenomenon across institutions. However, with the complexity of modalities such as VMAT, we hypothesize that following these guidelines to the letter might still allow unacceptable dose discrepancies. To explore this hypothesis we simulated machines bordering on QA acceptability, and calculated the effect on patient plans. Methods: Two errant machines were simulated in Aria/Eclipse, each just within task group criteria for output, percent depth dose, beam profile, gantry and collimator rotations, and jaw and MLC positions. One machine minimized dose to the PTV (machine A) and the other maximized dose to the OARs (machine B). Clinical treatment plans (3-phase prostate, n=3; hypofractionated lung, n=1) were calculated on these machines and the dose distributions compared. A prostate case was examined for contribution of error sources and evaluated using delivery QA data. Results: The prostate plans showed mean decreases in target D95 of 9.9% of prescription dose on machine A. On machine B, the rectal and bladder V70Gy each increased by 7.1 percentage points, while their V45Gy increased by 16.2% and 15.0%, respectively. In the lung plan, the target D95 decreased by 12.8% and the bronchial tree Dmax increased by 21% of prescription dose on machines A and B, respectively. One prostate plan showed target dose errors of 3.8% from MLC changes, 2% from output, ∼3% from energy and ∼0.5% from other factors. This plan achieved an 88.4% gamma passing rate at 3%/3 mm measured with ArcCHECK. Conclusion: In the unlikely event that a machine exhibits all of the maximum errors allowed by TG 40/142, unacceptably large changes in delivered dose are possible, especially in highly modulated VMAT plans, despite the machine passing routine QA.
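The 3%/3 mm gamma criterion quoted in the results can be illustrated in one dimension. Clinical tools such as ArcCHECK evaluate in 2-D/3-D; this simplified, globally normalized sketch on hypothetical dose profiles only shows the pass/fail logic:

```python
# Sketch: a 1-D gamma index at 3%/3 mm (global normalization). For each
# measured point, gamma is the minimum combined dose-difference /
# distance-to-agreement metric over the reference profile; gamma <= 1 passes.
# Profiles are hypothetical; clinical evaluation is done in 2-D/3-D.

import math

def gamma_1d(ref, meas, spacing_mm, dose_crit=0.03, dist_crit_mm=3.0):
    """Per-point gamma for a measured profile vs. a reference profile.
    Dose differences are normalized to the reference maximum."""
    d_max = max(ref)
    gammas = []
    for i, dm in enumerate(meas):
        best = math.inf
        for j, dr in enumerate(ref):
            dist = abs(i - j) * spacing_mm
            ddose = (dm - dr) / d_max
            g2 = (dist / dist_crit_mm) ** 2 + (ddose / dose_crit) ** 2
            best = min(best, g2)
        gammas.append(math.sqrt(best))
    return gammas

ref  = [0.2, 0.5, 1.0, 0.5, 0.2]          # hypothetical profiles on a 2 mm grid
meas = [0.2, 0.52, 0.98, 0.5, 0.2]
g = gamma_1d(ref, meas, spacing_mm=2.0)
passing = sum(1 for x in g if x <= 1.0) / len(g)
print(round(passing, 2))
```

Because gamma blends dose and spatial tolerance, a plan can pass at a high rate (here, the study's 88.4% at 3%/3 mm) while still hiding clinically relevant DVH changes, which is exactly the concern this abstract raises.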

  2. Surgeon-performed touch preparation of breast core needle biopsies may provide accurate same-day diagnosis and expedite treatment planning.

    Science.gov (United States)

    Gadgil, Pranjali V; Korourian, Soheila; Malak, Sharp; Ochoa, Daniela; Lipschitz, Riley; Henry-Tillman, Ronda; Suzanne Klimberg, V

    2014-04-01

    We aimed to determine the accuracy of surgeon-performed touch-preparation cytology (TPC) of breast core-needle biopsies (CNB) and the ability to use TPC results to initiate treatment planning at the same patient visit. A single-institution retrospective review of TPC results of ultrasound-guided breast CNB was performed. All TPC slides were prepared by surgeons performing the biopsy and interpreted by the pathologist. TPC results were reported as positive/suspicious, atypical, negative/benign, or deferred; these were compared with final pathology of cores to calculate accuracy. Treatment planning was noted as having taken place if the patient had requisition of advanced imaging, referrals, or surgical planning undertaken during the same visit. Four hundred forty-seven CNB specimens with corresponding TPC were evaluated from 434 patient visits, and 203 samples (45.4 %) were malignant on final pathology. When the deferred, atypical, and benign results were considered negative and positive/suspicious results were considered positive, sensitivity and specificity were 83.7 % (77.9-88.5 %) and 98.4 % (95.9-99.6 %), respectively; positive and negative predictive values were 97.7 % (94.2-99.4 %) and 87.9 % (83.4-91.5 %), respectively. In practice, patients with atypical or deferred results were asked to await final pathology. An accurate same-day diagnosis (TPC positive/suspicious) was hence feasible in 83.7 % (170 of 203) of malignant and 79.5 % (194 of 244) of benign cases (TPC negative). Of patients who had a same-day diagnosis of a new malignancy, 77.3 % had treatment planning initiated at the same visit. Surgeon-performed TPC of breast CNB is an accurate method of same-day diagnosis that allows treatment planning to be initiated at the same visit and may serve to expedite patient care.

  3. Requirements Elicitation in a Telemedicine Pain-treatment Trial

    NARCIS (Netherlands)

    Widya, I.A.; Bults, Richard G.A.; van Beijnum, Bernhard J.F.; Sandsjö, L.; Schaake, L.; Huis in 't Veld, M.H.A.; Jones, Valerie M.; Hermens, Hermanus J.; Ryan, K.; Robinson, W.

    2009-01-01

    This paper presents the early phase requirements elicitation for a work-related neck-shoulder pain teletreatment trial and the assessment of those requirements in respect of their importance to the trial and the feasibility of the needed software adaptations of the telemedicine system within the

  4. When does treatment plan optimization require inverse planning?

    International Nuclear Information System (INIS)

    Sherouse, George W.

    1995-01-01

    Increasing maturity of image-based computer-aided design of three-dimensional conformal radiotherapy has recently sparked a great deal of work in the area of treatment plan optimization. Optimization of a conformal photon beam treatment plan is that exercise through which a set of intensity-modulated static beams or arcs is specified such that, when the plan is executed, 1) a region of homogeneous dose is produced in the patient with a shape which geometrically conforms (within a specified tolerance) to the three-dimensional shape of a designated target volume and 2) acceptably low incidental dose is delivered to non-target tissues. Interest in conformal radiotherapy arises from a fundamental assumption that there is significant value to be gained from aggressive customization of the treatment for each individual patient. In our efforts to design optimal treatments, however, it is important to remember that, given the biological and economic realities of clinical radiotherapy, mathematical optimization of dose distribution metrics with respect to some minimal constraint set is not a necessary or even sufficient condition for design of a clinically optimal treatment. There is wide variation in the complexity of the clinical situations encountered in practice, and there are a number of non-physical criteria to be considered in planning. There is also a complementary variety of computational and engineering means for achieving optimization. To date, the scientific dialogue regarding these techniques has concentrated on development of solutions to worst-case scenarios, largely in the absence of consideration of appropriate matching of solution complexity to problem complexity. It is the aim of this presentation to propose a provisional stratification of treatment planning problems, stratified by relative complexity, and to identify a corresponding stratification of necessary treatment planning techniques.
It is asserted that the subset of clinical radiotherapy cases for

  5. Heat treatment of firewood : meeting the phytosanitary requirements

    Science.gov (United States)

    Xiping Wang; Richard Bergman; Brian K. Brashaw; Scott Myers; Marc Joyal

    2011-01-01

    The movement of firewood within emerald ash borer- (EAB-) infested states and into adjoining states has been a major contributor to the spread of EAB throughout the United States and Canada. In an effort to stop the further spread of EAB from infested areas and to facilitate interstate commerce, USDA Animal and Plant Health Inspection Service (APHIS) has required and...

  6. 46 CFR 54.25-7 - Requirement for postweld heat treatment (modifies UCS-56).

    Science.gov (United States)

    2010-10-01

    ... ENGINEERING PRESSURE VESSELS Construction With Carbon, Alloy, and Heat Treated Steels § 54.25-7 Requirement for postweld heat treatment (modifies UCS-56). (a) Postweld heat treatment is required for all carbon...) for applicable requirements.) (b) Cargo tanks which are fabricated of carbon or low alloy steel as...

  7. Complex rectal polyps: other treatment modalities required when offering a transanal endoscopic microsurgery service.

    LENUS (Irish Health Repository)

    Joyce, Myles R

    2011-09-01

    Complex rectal polyps may present a clinical challenge. The study aim was to assess different treatment modalities required in the management of patients referred for transanal endoscopic microsurgery.

  8. A Requirements-Driven Optimization Method for Acoustic Treatment Design

    Science.gov (United States)

    Berton, Jeffrey J.

    2016-01-01

    Acoustic treatment designers have long been able to target specific noise sources inside turbofan engines. Facesheet porosity and cavity depth are key design variables of perforate-over-honeycomb liners that determine levels of noise suppression as well as the frequencies at which suppression occurs. Layers of these structures can be combined to create a robust attenuation spectrum that covers a wide range of frequencies. Looking to the future, rapidly-emerging additive manufacturing technologies are enabling new liners with multiple degrees of freedom, and new adaptive liners with variable impedance are showing promise. More than ever, there is greater flexibility and freedom in liner design. Subject to practical considerations, liner design variables may be manipulated to achieve a target attenuation spectrum. But characteristics of the ideal attenuation spectrum can be difficult to know. Many multidisciplinary system effects govern how engine noise sources contribute to community noise. Given a hardwall fan noise source to be suppressed, and using an analytical certification noise model to compute a community noise measure of merit, the optimal attenuation spectrum can be derived using multidisciplinary systems analysis methods. The subject of this paper is an analytical method that derives the ideal target attenuation spectrum that minimizes noise perceived by observers on the ground.
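For intuition about how cavity depth sets the frequencies at which suppression occurs, the quarter-wavelength estimate f ≈ c/(4L) is a common first-order relation for perforate-over-honeycomb cavities. The sketch below is that textbook approximation only, not the multidisciplinary optimization described in the paper, and the 2 kHz target is hypothetical:

```python
# Sketch: quarter-wavelength estimate for the tuning frequency of a
# perforate-over-honeycomb liner cavity, f ~ c / (4 * L). A first-order
# textbook relation for picking cavity depth; real liner design also
# accounts for facesheet impedance, grazing flow, and multiple layers.

def cavity_depth_for_frequency(f_hz, c=343.0):
    """Cavity depth [m] tuned to f_hz, with speed of sound c [m/s]."""
    return c / (4.0 * f_hz)

def tuning_frequency(depth_m, c=343.0):
    """Inverse relation: tuning frequency [Hz] for a given depth [m]."""
    return c / (4.0 * depth_m)

# Tune a layer to a hypothetical 2 kHz target in the fan-noise range
depth = cavity_depth_for_frequency(2000.0)
print(round(depth * 1000, 1))   # cavity depth in mm
```

Stacking layers of different depths broadens the attenuation spectrum, which is the degree of freedom the requirements-driven optimization in this paper exploits to match the ideal target spectrum.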

  9. Combination of blood lactate level with assessment of blood consumption (ABC) scoring system: A more accurate predictor of massive transfusion requirement.

    Science.gov (United States)

    Chaochankit, Wongsakorn; Akaraborworn, Osaree; Sangthong, Burapat; Thongkhao, Komet

    2018-03-03

    Exsanguination is the most common leading cause of death in trauma patients. The massive transfusion (MT) protocol may influence therapeutic strategies and help provide blood components in timely manner. The assessment of blood consumption (ABC) score is a popular MT protocol but has low predictability. The lactate level is a good parameter to reflect poor tissue perfusion or shock states that can guide the management. This study aimed to modify the ABC scoring system by adding the lactate level for better prediction of MT. The data were retrospectively collected from 165 trauma patients following the trauma activated criteria at Songklanagarind Hospital from January 2014 to December 2014. The ABC scoring system was applied in all patients. The patients who had an ABC score ≥2 as the cut point for MT were defined as the ABC group. All patients who had a score ≥2 with a lactate level >4 mmol/dL were defined as the ABC plus lactate level (ABC + L) group. The prediction for the requirement of massive blood transfusion was compared between the ABC and ABC + L groups. The ability of ABC and ABC + L groups to predict MT was estimated by the area under the receiver operating characteristic curve. Among 165 patients, 15 patients (9%) required massive blood transfusion. There were no significant differences in age, gender, mechanism of injury or initial vital signs between the MT group and the non-MT group. The group that required MT had a higher Injury Severity Score and mortality. The sensitivity and specificity of the ABC scoring system in our institution were low (81%, 34%, AUC 0.573). The sensitivity and specificity were significantly better in the ABC + L group (92%, 42%, AUC = 0.745). The ABC scoring system plus lactate increased the sensitivity and specificity compared with the ABC scoring system alone. Copyright © 2018 Daping Hospital and the Research Institute of Surgery of the Third Military Medical University. Production and hosting by Elsevier B
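The decision rule evaluated above can be sketched directly. The four standard ABC components (penetrating mechanism, SBP ≤ 90 mmHg, HR ≥ 120 bpm, positive FAST) score one point each, and the ABC + L rule predicts massive transfusion when the score is ≥ 2 and lactate exceeds 4. Component definitions follow the published ABC score, the lactate threshold follows this abstract, and all patient values below are hypothetical:

```python
# Sketch of the ABC + L rule described in the abstract: four ABC components
# score one point each, and massive transfusion is predicted when the score
# is >= 2 AND lactate > 4 (threshold as given in the abstract).
# Patient values are hypothetical.

def abc_score(penetrating, sbp, hr, fast_positive):
    """Assessment of Blood Consumption score, 0-4."""
    return (int(penetrating) + int(sbp <= 90) + int(hr >= 120)
            + int(fast_positive))

def predicts_massive_transfusion(penetrating, sbp, hr, fast_positive, lactate):
    """ABC + L: ABC score >= 2 combined with an elevated lactate."""
    return abc_score(penetrating, sbp, hr, fast_positive) >= 2 and lactate > 4.0

print(predicts_massive_transfusion(True, 85, 130, False, 5.2))   # True
print(predicts_massive_transfusion(True, 85, 130, False, 3.1))   # False
```

Requiring both criteria is what raises specificity relative to the ABC score alone: hypotensive, tachycardic patients without a perfusion deficit (normal lactate) no longer trigger the protocol.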

  10. 42 CFR 54.13 - Educational requirements for personnel in drug treatment programs.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Educational requirements for personnel in drug... TREATMENT BLOCK GRANTS AND/OR PROJECTS FOR ASSISTANCE IN TRANSITION FROM HOMELESSNESS GRANTS § 54.13 Educational requirements for personnel in drug treatment programs. In determining whether personnel of a...

  11. Analysis of waste treatment requirements for DOE mixed wastes: Technical basis

    International Nuclear Information System (INIS)

    1995-02-01

    The risks and costs of managing DOE wastes are a direct function of the total quantities of wastes that are handled at each step of the management process. As part of the analysis of the management of DOE low-level mixed wastes (LLMW), a reference scheme has been developed for the treatment of these wastes to meet EPA criteria. The treatment analysis, in a limited form, was also applied to one option for treatment of transuranic wastes. The treatment requirements in all cases analyzed are based on a reference flowsheet which provides high-level treatment trains for all LLMW. This report explains the background and basis for that treatment scheme. Reference waste stream chemical compositions and physical properties, including densities, were established for each stream in the database. These compositions are used to define the expected behavior of wastes as they pass through the treatment train. Each EPA RCRA waste code was reviewed, and the properties, chemical compositions, or characteristics of importance to waste behavior in treatment were designated. Properties that dictate treatment requirements were then used to develop the treatment trains and identify the unit operations that would be included in these trains. A table was prepared showing the correlation of the waste physical matrix and the waste treatment requirements as a guide to the treatment analysis. The analysis of waste treatment loads is done by assigning wastes to treatment steps which would achieve RCRA-compliant treatment. These correlations allow one to examine the treatment requirements in a condensed manner and to verify that all wastes and contaminant sets are fully considered.

  12. Ground Water Monitoring Requirements for Hazardous Waste Treatment, Storage and Disposal Facilities

    Science.gov (United States)

    The groundwater monitoring requirements for hazardous waste treatment, storage and disposal facilities (TSDFs) are just one aspect of the Resource Conservation and Recovery Act (RCRA) hazardous waste management strategy for protecting human health and the

  13. 12 CFR 204.136 - Treatment of trust overdrafts for reserve requirement reporting purposes.

    Science.gov (United States)

    2010-01-01

    ... transfer, the trust department's normal posting procedures may not reflect receipt of the cash collateral... 12 Banks and Banking 2 2010-01-01 2010-01-01 false Treatment of trust overdrafts for reserve...) Interpretations § 204.136 Treatment of trust overdrafts for reserve requirement reporting purposes. (a) Authority...

  14. SU-E-T-619: Planning 131I Thyroid Treatments for Patients Requiring Hemodialysis

    International Nuclear Information System (INIS)

    Stroud, D

    2015-01-01

    Purpose: Treatment of 131I thyroid cancer patients who also require regular hemodialysis (HD) requires consideration of both the administered activity and the HD schedule. In this work the red bone marrow is considered the dose-limiting organ, and the treatment plan optimized the HD schedule together with the amount of radioactivity administered. Methods: The ‘safe’ dose was considered to be 2 Gy (200 rad) to the red bone marrow.1 131Iodine doses of 50 mCi to 100 mCi were modeled and found to require a range of HD schedules. In order to achieve the safe dose to the red marrow, more aggressive HD schedules are required: 100 mCi required aggressive HD treatment every 24 hours for at least one week to achieve the ‘safe’ dose and an exposure appropriate for release from the hospital. A more normal schedule of HD beginning at 18 hours and then every 48 hours allowed an administered dose of up to 60 mCi while still achieving a safe dose and an expected release in less than one week.2 In addition, the room was equipped with video cameras for monitoring the patient and their vital signs from an adjacent room during HD. In this way the dialysis nurses were able to monitor the patient closely from an adjoining room. Results: Two HD patients were administered adjusted doses of about 50 mCi. The medical and nursing staff were exposed to no more than 4 mR for the entire treatment. The residual iodine in the patient appeared to be normal after 4 to 6 days, when the patient was released. Conclusion: With careful treatment planning, 131Iodine treatments can be performed safely for patients needing HD, and the treatments appear to be as effective as those for patients with normal renal function

  15. SU-E-T-619: Planning 131I Thyroid Treatments for Patients Requiring Hemodialysis

    Energy Technology Data Exchange (ETDEWEB)

    Stroud, D [Kaiser Permanente, Los Angeles Ca, CA (United States)

    2015-06-15

    Purpose: Treatment of 131I thyroid cancer patients who also require regular hemodialysis (HD) requires consideration of both the administered activity and the HD schedule. In this work the red bone marrow is considered the dose-limiting organ, and the treatment plan optimized the HD schedule together with the amount of radioactivity administered. Methods: The ‘safe’ dose was considered to be 2 Gy (200 rad) to the red bone marrow.1 131Iodine doses of 50 mCi to 100 mCi were modeled and found to require a range of HD schedules. In order to achieve the safe dose to the red marrow, more aggressive HD schedules are required: 100 mCi required aggressive HD treatment every 24 hours for at least one week to achieve the ‘safe’ dose and an exposure appropriate for release from the hospital. A more normal schedule of HD beginning at 18 hours and then every 48 hours allowed an administered dose of up to 60 mCi while still achieving a safe dose and an expected release in less than one week.2 In addition, the room was equipped with video cameras for monitoring the patient and their vital signs from an adjacent room during HD. In this way the dialysis nurses were able to monitor the patient closely from an adjoining room. Results: Two HD patients were administered adjusted doses of about 50 mCi. The medical and nursing staff were exposed to no more than 4 mR for the entire treatment. The residual iodine in the patient appeared to be normal after 4 to 6 days, when the patient was released. Conclusion: With careful treatment planning, 131Iodine treatments can be performed safely for patients needing HD, and the treatments appear to be as effective as those for patients with normal renal function.
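
    The interplay between physical decay and intermittent dialysis clearance described in this record can be sketched with a simplified one-compartment model. This is an illustration only, not the authors' dosimetry method: the per-session removal fraction (~50%) and the assumption that only physical decay acts between sessions are stated assumptions.

```python
import math

T_PHYS_DAYS = 8.02                      # physical half-life of 131I, days
LAMBDA_PHYS = math.log(2) / T_PHYS_DAYS

def residual_activity(a0_mci, days, hd_times, hd_removal=0.5):
    """Residual activity (mCi) after physical decay plus intermittent HD.

    Simplified one-compartment sketch for an anuric patient: between
    sessions only physical decay acts; each HD session (hd_times, in days
    post-administration) removes a fixed fraction of circulating activity.
    """
    a = a0_mci
    t = 0.0
    for t_hd in sorted(hd_times):
        a *= math.exp(-LAMBDA_PHYS * (t_hd - t))  # decay up to the session
        a *= (1.0 - hd_removal)                   # dialysis clearance
        t = t_hd
    a *= math.exp(-LAMBDA_PHYS * (days - t))      # decay after last session
    return a
```

    Comparing, say, a daily schedule against HD at 18 hours then every 48 hours shows why the aggressive schedule permits a larger administered activity for the same marrow dose constraint.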

  16. Benefit requirements for substance use disorder treatment in state health insurance exchanges.

    Science.gov (United States)

    Tran Smith, Bikki; Seaton, Kathleen; Andrews, Christina; Grogan, Colleen M; Abraham, Amanda; Pollack, Harold; Friedmann, Peter; Humphreys, Keith

    2017-12-20

    Established in 2014, state health insurance exchanges have greatly expanded substance use disorder (SUD) treatment coverage in the United States, as qualified health plans (QHPs) within the exchanges are required to conform to the parity provisions laid out by the Affordable Care Act (ACA) and the Mental Health Parity and Addiction Equity Act (MHPAEA). Coverage improvements, however, have not been uniform, as states have wide discretion over how they meet these regulations. How states regulate SUD treatment benefits offered by QHPs has implications for the accessibility and quality of care. In this study, we assessed the extent to which state insurance departments regulate the types of SUD services and medications plans must provide, as well as their use of utilization controls. Data were collected as part of the National Drug Abuse Treatment System Survey, a nationally representative, longitudinal study of substance use disorder treatment. Data were obtained from state Departments of Insurance via a 15-minute internet-based survey. States varied widely in their regulation of QHPs' administration of SUD treatment benefits. Some states required plans to cover all 11 SUD treatment services and medications we assessed in the study, whereas others did not require plans to cover anything at all. Nearly all states allowed the plans to employ utilization controls, but provided little guidance regarding how they should be used. Although some states have taken full advantage of the health insurance exchanges to increase access to SUD treatment, others seem to have done the bare minimum required by the ACA. By not requiring coverage for the entire SUD continuum of care, states are hindering client access to the appropriate types of care necessary for recovery.

  17. [Guideline-adherent inpatient psychiatric psychotherapeutic treatment of borderline personality disorder : Normative definition of personnel requirements].

    Science.gov (United States)

    Bohus, M; Schmahl, C; Herpertz, S C; Lieb, K; Berger, M; Roepke, S; Heinz, A; Gallinat, J; Lyssenko, L

    2016-07-01

    Borderline personality disorder (BPD) is a severe mental disease which places high pressure on the psychiatric healthcare system. Nowadays, well-tested, disorder-specific treatment concepts are available for inpatient treatment in Germany as well. These show very good and long-lasting improvements in the psychopathology as well as in posttreatment social participation; however, prerequisites for the implementation of these evidence-based inpatient psychotherapy programs are well-trained treatment teams and appropriate financing of the resource expenditure. The aim was to formulate a definition of normative needs for treatment duration and intensity for a guideline-conform, empirically proven and effective inpatient treatment of borderline personality disorder, as well as the derived personnel requirements in comparison to the currently available resources within the framework of the Psychiatry Personnel Act (Psych-PV). The resource requirements were established based on evaluated hospital ward models, the recommendations of the S2 guidelines and the criteria of specialist societies, and compared with the personnel stipulations according to the Psych-PV. The results for a normatively established treatment program showed a pronounced deficit in the financing of the evaluated resource requirements, even when the stipulations laid down in the Psych-PV were implemented to 100 %. Disorder-specific inpatient treatment programs for borderline personality disorder have been scientifically proven to be highly effective; however, resource analyses show that the personnel requirements necessary for effective implementation of these programs are much higher than those allocated by the funding according to the Psych-PV. The current underfunding leads to inadequate treatment outcomes with high readmission rates and, as a result, high direct and indirect costs of illness.

  18. Accurate and fast treatment of large molecular systems: Assessment of CEPA and pCCSD within the local pair natural orbital approximation.

    Science.gov (United States)

    Schwabe, Tobias

    2012-10-05

    The local pair natural orbital approach, which has been combined with two post-Hartree-Fock methods, CEPA-1 and pCCSD-1a, recently, is assessed for its applicability to large real-world problems without abundant computing resources. Test cases are selected based on being representative for computational chemistry problems and availability of reliable reference data. Both methods show a good performance and can be applied easily to systems of up to 100 atoms when very accurate energies are sought after. A considerable demand for basis sets of good quality has been identified and practical guidelines to satisfy this are mapped out. Copyright © 2012 Wiley Periodicals, Inc.

  19. Minimum Performance Requirements for Microbial Fuel Cells to Achieve Energy-Neutral Wastewater Treatment

    Directory of Open Access Journals (Sweden)

    Zachary A. Stoll

    2018-02-01

    Full Text Available Microbial fuel cells (MFCs have recently achieved energy-positive wastewater treatment at pilot scale. Despite these achievements, there is still a limited understanding as to whether all wastewaters contain sufficient amounts of energy and, if so, whether MFCs can capture a sufficient amount of energy to offset electrical energy requirements in the wastewater treatment process. Currently, there are no tools or methods available that can determine whether an MFC can be energy-neutral a priori. To address this, we derived a simple relationship by setting the electrical energy requirements of a wastewater treatment facility equal to the net energy output of the MFC, such that the resulting expression describes the minimum chemical oxygen demand (COD removal needed to achieve energy-neutral treatment. The resulting equation is simply a function of electrical energy requirements, Coulombic Efficiency, and cell voltage. This work provides the first ever quantitative method for determining if the MFCs are feasible to achieve energy-neutral treatment for a given wastewater and what level of performance is needed.
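
    The energy-balance relationship described in this record can be sketched by equating the electrical energy requirement with the MFC's electrical output per liter treated. The sketch below is an illustration under stated assumptions (the paper's exact expression and the example numbers are not taken from the source): energy recovered per liter is taken as ΔCOD × (F/8) × CE × V, since each gram of COD corresponds to 1/8 mol of electron equivalents.

```python
F = 96485.0    # Faraday constant, C per mol of electrons
COD_EQ = 8.0   # grams of COD per mol of electron equivalents

def min_cod_removal(e_req_kwh_per_m3, coulombic_eff, cell_voltage):
    """Minimum COD removal (g/L) for energy-neutral MFC treatment.

    Set energy recovered per liter, dCOD * (F/8) * CE * V  [J/L],
    equal to the plant's electrical requirement and solve for dCOD.
    """
    e_req_j_per_l = e_req_kwh_per_m3 * 3.6e6 / 1000.0  # kWh/m3 -> J/L
    return e_req_j_per_l * COD_EQ / (F * coulombic_eff * cell_voltage)
```

    For example, with a hypothetical requirement of 0.6 kWh/m³, a Coulombic Efficiency of 30% and a cell voltage of 0.3 V, the expression yields roughly 2 g-COD/L of removal needed, illustrating why dilute wastewaters are the challenging case.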

  20. Accurate fixation of plates and screws for the treatment of acetabular fractures using 3D-printed guiding templates: An experimental study.

    Science.gov (United States)

    Chen, Xu; Chen, Xuanhuang; Zhang, Guodong; Lin, Haibin; Yu, Zhengxi; Wu, Changfu; Li, Xing; Lin, Yijun; Huang, Wenhua

    2017-06-01

    To investigate the feasibility of using 3D-printed guiding templates for accurate placement of plates and screws for internal fixation of acetabular fractures. 3D models of the pelvises of 14 adult cadavers were reconstructed using computed tomography (CT). Twenty-eight acetabular fractures were simulated, and placement positions for plates and screw trajectories were designed. A bending module was obtained by 3D cutting, the guiding template was manufactured using 3D printing, and the plate was pre-bent according to the bending module. Plates and screws were placed in the cadaveric pelvises using the guiding templates, and a 3D model was reconstructed using CT. The designed and real trajectories were matched using 3D registration, including the coordinates of the entry and exit points of the designed trajectory. The number of qualified points at different accuracy levels was compared using the Chi-squared test. Sixty-four plates and 339 screws were placed with no cortical breach. The absolute differences of the X, Y, and Z coordinates between the designed and real entry points were 0.52±0.45, 0.43±0.36, and 0.53±0.44 mm, respectively. The corresponding values for the exit points were 0.83±0.67, 1.22±0.87, and 1.26±0.83 mm, respectively. With an accuracy level ≥1.9 mm for the entry points and ≥3.8 mm for the exit points, there was no significant difference between the designed and the real trajectories. The 3D-printed guiding template helped achieve accurate placement of plates and screws in the pelvises of adult cadavers. Copyright © 2017 Elsevier Ltd. All rights reserved.
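
    The accuracy comparison in this record amounts to per-axis absolute differences between designed and registered real trajectory points, plus counting points that qualify at a given tolerance. A minimal sketch of that bookkeeping (function names and data layout are assumptions for illustration):

```python
def axis_deviations(designed, real):
    """Mean per-axis absolute difference (mm) between designed and real
    screw trajectory points after registration.

    designed, real: lists of (x, y, z) tuples in mm, same order.
    """
    n = len(designed)
    return [
        sum(abs(d[i] - r[i]) for d, r in zip(designed, real)) / n
        for i in range(3)
    ]

def qualified(designed, real, tol_mm):
    """Count points whose error is within tol_mm on every axis."""
    return sum(
        all(abs(d[i] - r[i]) <= tol_mm for i in range(3))
        for d, r in zip(designed, real)
    )
```

    Tabulating `qualified` at several tolerances for entry and exit points gives the counts fed into the Chi-squared comparison.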

  1. 21 CFR 111.90 - What requirements apply to treatments, in-process adjustments, and reprocessing when there is a...

    Science.gov (United States)

    2010-04-01

    ... Requirement to Establish a Production and Process Control System § 111.90 What requirements apply to... reprocessing, treatment or in-process adjustment is permitted by § 111.77; (c) Any batch of dietary supplement... 21 Food and Drugs 2 2010-04-01 2010-04-01 false What requirements apply to treatments, in-process...

  2. MO-A-BRD-10: A Fast and Accurate GPU-Based Proton Transport Monte Carlo Simulation for Validating Proton Therapy Treatment Plans

    Energy Technology Data Exchange (ETDEWEB)

    Wan Chan Tseung, H; Ma, J; Beltran, C [Mayo Clinic, Rochester, MN (United States)

    2014-06-15

    Purpose: To build a GPU-based Monte Carlo (MC) simulation of proton transport with detailed modeling of elastic and non-elastic (NE) proton-nucleus interactions, for use in a very fast and cost-effective proton therapy treatment plan verification system. Methods: Using the CUDA framework, we implemented kernels for the following tasks: (1) Simulation of beam spots from our possible scanning nozzle configurations, (2) Proton propagation through CT geometry, taking into account nuclear elastic and multiple scattering, as well as energy straggling, (3) Bertini-style modeling of the intranuclear cascade stage of NE interactions, and (4) Simulation of nuclear evaporation. To validate our MC, we performed: (1) Secondary particle yield calculations in NE collisions with therapeutically-relevant nuclei, (2) Pencil-beam dose calculations in homogeneous phantoms, (3) A large number of treatment plan dose recalculations, and compared with Geant4.9.6p2/TOPAS. A workflow was devised for calculating plans from a commercially available treatment planning system, with scripts for reading DICOM files and generating inputs for our MC. Results: Yields, energy and angular distributions of secondaries from NE collisions on various nuclei are in good agreement with the Geant4.9.6p2 Bertini and Binary cascade models. The 3D-gamma pass rate at 2%/2 mm for 70–230 MeV pencil-beam dose distributions in water, soft tissue, bone and Ti phantoms is 100%. The pass rate at 2%/2 mm for treatment plan calculations is typically above 98%. The net computational time on a NVIDIA GTX680 card, including all CPU-GPU data transfers, is around 20 s for 1×10⁷ proton histories. Conclusion: Our GPU-based proton transport MC is the first of its kind to include a detailed nuclear model to handle NE interactions on any nucleus. Dosimetric calculations demonstrate very good agreement with Geant4.9.6p2/TOPAS. Our MC is being integrated into a framework to perform fast routine clinical QA of pencil
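
    The 2%/2 mm figure of merit quoted here is the gamma index. As a rough illustration of what that comparison computes, the sketch below implements a brute-force 1D global gamma analysis (the record's analysis is 3D and certainly more sophisticated; this reduced form is an assumption for clarity):

```python
import math

def gamma_pass_rate(positions, ref_dose, eval_dose, dose_tol=0.02, dist_tol=2.0):
    """1D global-gamma pass rate, e.g. 2%/2 mm.

    positions: sample positions in mm; ref_dose/eval_dose: dose samples.
    dose_tol is a fraction of the reference maximum (global normalization);
    a point passes if its minimum gamma over all reference points is <= 1.
    """
    d_max = max(ref_dose)
    passed = 0
    for xe, de in zip(positions, eval_dose):
        gamma = min(
            math.sqrt(((de - dr) / (dose_tol * d_max)) ** 2
                      + ((xe - xr) / dist_tol) ** 2)
            for xr, dr in zip(positions, ref_dose)
        )
        if gamma <= 1.0:
            passed += 1
    return passed / len(positions)
```

    In practice the search is restricted to a neighborhood of each evaluation point and interpolation is used between samples; the brute-force form above only conveys the metric itself.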

  3. Canadian uranium mines and mills evolution of regulatory expectations and requirements for effluent treatment

    International Nuclear Information System (INIS)

    LeClair, J.; Ashley, F.

    2006-01-01

    The regulation of uranium mining in Canada has changed over time as our understanding and concern for impacts on both human and non-human biota has evolved. Since the mid-1970s and early 1980s, new uranium mine and mill developments have been the subject of environmental assessments to assess and determine the significance of environmental effects throughout the project life cycle including the post-decommissioning phase. Water treatment systems have subsequently been improved to limit potential effects by reducing the concentration of radiological and non-radiological contaminants in the effluent discharge and the total loadings to the environment. This paper examines current regulatory requirements and expectations and how these impact uranium mining/milling practices. It also reviews current water management and effluent treatment practices and performance. Finally, it examines the issues and challenges for existing effluent treatment systems and identifies factors to be considered in optimizing current facilities and future facility designs. (author)

  4. Ozonation for source treatment of pharmaceuticals in hospital wastewater - ozone lifetime and required ozone dose

    DEFF Research Database (Denmark)

    Hansen, Kamilla Marie Speht; Spiliotopoulou, Aikaterini; Chhetri, Ravi Kumar

    2016-01-01

    Ozonation aimed at removing pharmaceuticals was studied in an effluent from an experimental pilot system using staged moving bed biofilm reactor (MBBR) tanks for the optimal biological treatment of wastewater from a medical care unit of Aarhus University Hospital. Dissolved organic carbon (DOC) and pH in samples varied considerably, and the effect of these two parameters on ozone lifetime and the efficiency of ozone in removing pharmaceuticals were determined. The pH in the effluent varied from 5.0 to 9.0, resulting in approximately a doubling of the required ozone dose at the highest pH for each pharmaceutical. DOC varied from 6 to 20 mg-DOC/L. The ozone required for removing each pharmaceutical varied linearly with DOC and thus, ozone doses normalized to DOC (specific ozone dose) agreed between water samples (typically within 15%). At neutral pH the specific ozone dose required
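
    The linear DOC scaling reported here implies a simple normalization: dividing the absolute ozone dose by DOC yields a specific dose that transfers between samples, so the dose for a new sample can be estimated from its DOC. A minimal sketch (names and example values are illustrative assumptions, not from the study):

```python
def specific_ozone_dose(o3_dose_mg_l, doc_mg_l):
    """Ozone dose normalized to DOC, in mg-O3 per mg-DOC."""
    return o3_dose_mg_l / doc_mg_l

def dose_for_new_sample(spec_dose, doc_mg_l):
    """Absolute ozone dose (mg/L) predicted for a sample from its DOC,
    using the linear DOC scaling reported above."""
    return spec_dose * doc_mg_l
```

    A pH correction would be layered on top of this, since the record reports roughly a doubling of the required dose at the high end of the pH range.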

  5. DENTAL HEALTH STATUS, PERFORMED TREATMENT INDEX AND REQUIRED TREATMENT INDEX OF PRIMARY SCHOOL CHILDREN IN THE DISTRICTS OF CIANJUR, KARAWANG AND SERANG

    Directory of Open Access Journals (Sweden)

    Magdarina D. Agtini

    2012-10-01

    Full Text Available Community dental health status for permanent teeth is determined by the DMF-T (Decayed, Missing and Filled Teeth) index. The DMF-T in a number of industrialized nations shows a tendency to drop, while the trend in developing nations is increasing. In Indonesia dental caries is still the foremost problem among oral and dental diseases; the prevalence of dental caries is around 85%-99%. The DMF-T varies, reflecting geographic and age variations. Over three decades the intensity of dental caries increased with each decade, from a DMF-T of 0.70 in 1970 to 2.30 in 1980 and 2.70 in 1990. The goal of dental caries control for 2010 is a DMF-T ≤ 1 for the 12-year age group. A baseline study is therefore necessary to ascertain the DMF-T, PTI (Performed Treatment Index) and RTI (Required Treatment Index). The study was implemented in the districts of Cianjur, Karawang and Serang in West Java for two years. The study design was a cohort. The respondents were 1200 pupils aged 8 years, randomly selected with a significance level of p < 0.05 and power of 0.20. The study results showed that the average DMF-T at the beginning was 1.52 ± 1.21, and after two years the DMF-T was 2.45 ± 1.51, the increase comprising only the decayed component; the difference was significant (p = 0.000). The DMF-T was low by WHO criteria; nevertheless the PTI (1.2% was very low and the RTI (98% was very high. The DMF-T average shows a tendency to increase, with D as the largest component and F (Filling) the smallest one. The control of dental caries faces several problems, such as the limited number of manpower and facilities, and the limited supply of water and electricity in certain areas. Therefore treatment of dental caries with GIC fillings using the ART method should receive the necessary attention.   Key words: Dental health status, Caries, Performed Treatment Index (PTI), Required Treatment Index (RTI)
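
    The three indices in this record are simple ratios of the DMF-T components. The sketch below uses the commonly cited definitions (PTI as the filled share, RTI as the decayed share of total caries experience); treat these formulas as an assumption, since the record does not spell them out.

```python
def dmft_indices(decayed, missing, filled):
    """DMF-T plus the two derived treatment indices, as percentages:

    PTI = F / DMF-T * 100  (share of caries experience already filled)
    RTI = D / DMF-T * 100  (share still requiring treatment)
    """
    dmft = decayed + missing + filled
    if dmft == 0:
        return 0, 0.0, 0.0
    return dmft, 100.0 * filled / dmft, 100.0 * decayed / dmft
```

    A population dominated by untreated decay, as in the study, gives a very low PTI and a very high RTI even when the DMF-T itself is modest.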

  6. Argonne-West facility requirements for a radioactive waste treatment demonstration

    International Nuclear Information System (INIS)

    Dwight, C.C.; Felicione, F.S.; Black, D.B.; Kelso, R.B.; McClellan, G.C.

    1995-01-01

    At Argonne National Laboratory-West (ANL-W), near Idaho Falls, Idaho, facilities that were originally constructed to support the development of liquid-metal reactor technology are being used and/or modified to meet the environmental and waste management research needs of DOE. One example is the use of an Argonne-West facility to conduct a radioactive waste treatment demonstration through a cooperative project with Science Applications International Corporation (SAIC) and Lockheed Idaho Technologies Company. The Plasma Hearth Process (PHP) project will utilize commercially-adapted plasma arc technology to demonstrate treatment of actual mixed waste. The demonstration on radioactive waste will be conducted at Argonne's Transient Reactor Test Facility (TREAT). Utilization of an existing facility for a new and different application presents a unique set of issues in meeting applicable federal, state, and local requirements as well as the additional constraints imposed by DOE Orders and ANL-W site requirements. This paper briefly describes the PHP radioactive demonstration as relevant to the interfaces with the TREAT facility. Safety, environmental design, and operational considerations pertinent to the PHP radioactive demonstration are specifically addressed herein. The personnel, equipment, and facility interfaces associated with a radioactive waste treatment demonstration are an important aspect of the demonstration effort. Areas requiring significant effort in preparation for the PHP Project being conducted at the TREAT facility include confinement design, waste handling features, and sampling and analysis considerations. Information about the facility in which a radioactive demonstration will be conducted, specifically Argonne's TREAT facility in the case of PHP, may be of interest to other organizations involved in developing and demonstrating technologies for mixed waste treatment

  7. The Role of Rotavirus Infection Requiring Treatment in a Hospital in Children with Diarrheal Syndrome

    Directory of Open Access Journals (Sweden)

    Yu.Yu. Stepanova

    2013-11-01

    Full Text Available The aim of this research was to investigate the prognostic and diagnostic value of some clinical and biological factors in the development of acute intestinal rotavirus infection in children. We observed 161 children aged 1 month to 6 years; 81 children were diagnosed with rotavirus infection, and 80 with secretory diarrhea of unknown origin. Treatment of the children was carried out in a hospital. Analysis of clinical and anamnestic data enabled the selection of the most prognostically and diagnostically important predictors out of 117 studied risk factors. The authors investigated the association of clinical and biological factors with the severity of rotavirus infection, which makes it possible to highlight the groups of children at high risk of rotavirus infection requiring treatment in a hospital. The practical application of the results of this work should be the optimization of early diagnostics of rotavirus infections and the timeliness of preventive measures.

  8. Brown seaweed processing: enzymatic saccharification of Laminaria digitata requires no pre-treatment

    DEFF Research Database (Denmark)

    Manns, Dirk; Andersen, Stinus K.; Saake, Bodo

    2016-01-01

    This study assesses the effect of different milling pre-treatments on enzymatic glucose release from the brown seaweed Laminaria digitata having high glucan (laminarin) content. Wet refiner milling, using rotating disc distances of 0.1–2 mm, generated populations of differently sized pieces of lamina having decreasing average surface area (100–0.1 mm2) with increased milling severity. Higher milling severity (lower rotating disc distance) also induced higher spontaneous carbohydrate solubilization from the material. Due to the seaweed material consisting of flat blades, the milling did not increase the overall surface area of the seaweed material, and size diminution of the laminas by milling did not improve the enzymatic glucose release. Milling was thus not required for enzymatic saccharification because all available glucose was released even from unmilled material. Treatment

  9. The role of unfinished root canal treatment in odontogenic maxillofacial infections requiring hospital care.

    Science.gov (United States)

    Grönholm, L; Lemberg, K K; Tjäderhane, L; Lauhio, A; Lindqvist, C; Rautemaa-Richardson, R

    2013-01-01

    The aim of this study was to evaluate clinical and radiological findings and the role of periapical infection and antecedent dental treatment of infected focus teeth in odontogenic maxillofacial abscesses requiring hospital care. In this retrospective cohort study, we evaluated the medical records and panoramic radiographs taken during the hospital stay of patients (n = 60) admitted due to odontogenic maxillofacial infection originating from periapical periodontitis. Twenty-three (38 %) patients had received endodontic treatment and ten (17 %) other acute dental treatment. Twenty-seven (45 %) had not visited the dentist in the recent past. The median age of the patients was 45 (range 20-88) years and 60 % were males. Unfinished root canal treatment (RCT) was the major risk factor for hospitalisation in 16 (27 %) of the 60 cases (p = .0065). Completed RCT was the source in only 7 (12 %) of the 60 cases. Two of these RCTs were adequate and five inadequate. The initiation of inadequate or incomplete primary RCT of acute periapical periodontitis appears to open a risk window for locally invasive spread of infection with local abscess formation and systemic symptoms. Thereafter, the quality of the completed RCT appears to have minor impact. However, a considerable proportion of the patients had not received any dental treatment, confirming the importance of good dental health. Thus, thorough canal debridement during the first session is essential for minimising the risk for spread of infection, in addition to incision and drainage of the abscess. If this cannot be achieved, tooth extraction should be considered. Incomplete or inadequate canal debridement and drainage of the abscess may increase the risk for spread of endodontic infection.

  10. Sample size requirements for studies of treatment effects on beta-cell function in newly diagnosed type 1 diabetes.

    Directory of Open Access Journals (Sweden)

    John M Lachin

    Full Text Available Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log(x), log(x+1) and square-root (√x) transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analyses of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects that allow sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8-12 years of age, adolescents (13-17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13-17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for the presentation of analyses of log(x+1) and √x transformed values in terms of the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo. These results provide the information needed to
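
    The dependence of sample size on residual variation and detectable difference described here can be sketched with the standard normal-approximation formula for a two-sample comparison of means. This is a generic sketch, not TrialNet's exact calculation, and the default z-values (two-sided α = 0.05, power = 0.80) are conventional assumptions.

```python
import math

def n_per_group(sd, delta, z_alpha=1.959964, z_beta=0.841621):
    """Sample size per arm for detecting a mean difference delta.

    sd:    residual SD of the (possibly transformed) C-peptide AUC
    delta: treatment-control difference to detect, same units as sd

    n = 2 * (z_alpha + z_beta)^2 * (sd / delta)^2, rounded up.
    """
    return math.ceil(2.0 * (z_alpha + z_beta) ** 2 * (sd / delta) ** 2)
```

    The formula makes the record's qualitative findings concrete: a larger residual SD (as among adolescents) or a smaller detectable difference (as at 24 months) inflates n quadratically.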

  11. Studies with group treatments required special power calculations, allocation methods, and statistical analyses.

    Science.gov (United States)

    Faes, Miriam C; Reelick, Miriam F; Perry, Marieke; Olde Rikkert, Marcel G M; Borm, George F

    2012-02-01

    In some trials, the intervention is delivered to individuals in groups, for example, groups that exercise together. The group structure of such trials has to be taken into consideration in the analysis and has an impact on the power of the trial. Our aim was to provide optimal methods for the design and analysis of such trials. We described various treatment allocation methods and presented a new allocation algorithm: optimal batchwise minimization (OBM). We carried out a simulation study to evaluate the performance of unrestricted randomization, stratification, permuted block randomization, deterministic minimization, and OBM. Furthermore, we described appropriate analysis methods and derived a formula to calculate the study size. Stratification, deterministic minimization, and OBM had considerably less risk of imbalance than unrestricted randomization and permuted block randomization. Furthermore, OBM led to unpredictable treatment allocation. The sample size calculation and the analysis of the study must be based on a multilevel model that takes the group structure of the trial into account. Trials evaluating interventions that are carried out in subsequent groups require adapted treatment allocation, power calculation, and analysis methods. From the perspective of obtaining overall balance, we conclude that minimization is the method of choice. When the number of prognostic factors is low, stratification is an excellent alternative. OBM leads to better balance within the batches, but it is more complicated. It is probably most worthwhile in trials with many prognostic factors. From the perspective of predictability, a treatment allocation method, such as OBM, that allocates several subjects at the same time, is superior to other methods because it leads to the lowest possible predictability. Copyright © 2012 Elsevier Inc. All rights reserved.
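
    As an illustration of the minimization idea this record builds on, the sketch below allocates one incoming unit via plain deterministic minimization: pick the arm that minimizes total marginal imbalance across prognostic factors. The data layout and tie-breaking rule are assumptions; the paper's optimal batchwise minimization (OBM) extends the idea to allocating whole batches at once.

```python
def minimization_assign(new_factors, assigned):
    """Deterministic minimization for a two-arm trial (sketch).

    new_factors: dict factor -> level for the incoming unit
    assigned:    list of (arm, factors_dict) for units already allocated
    Returns 'A' or 'B', whichever minimizes total marginal imbalance.
    """
    def imbalance(arm):
        total = 0
        for factor, level in new_factors.items():
            n_a = sum(1 for a, f in assigned if a == 'A' and f[factor] == level)
            n_b = sum(1 for a, f in assigned if a == 'B' and f[factor] == level)
            if arm == 'A':
                n_a += 1
            else:
                n_b += 1
            total += abs(n_a - n_b)
        return total

    # Ties go to 'A' here (purely deterministic); trials often randomize ties
    # to reduce predictability, which is the concern the paper addresses.
    return 'A' if imbalance('A') <= imbalance('B') else 'B'
```

    The predictability drawback noted in the abstract is visible in this sketch: given the allocation history, the next assignment is fully determined, which is why OBM's batchwise allocation is argued to lower predictability.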

  12. EVALUATION OF SUPPLEMENTAL PRE-TREATMENT DEVELOPMENT REQUIREMENTS TO MEET TRL 6 ROTARY MICROFILTRATION

    Energy Technology Data Exchange (ETDEWEB)

    HUBER HJ

    2011-10-03

In spring 2011, the Technology Maturation Plan (TMP) for the Supplemental Treatment Project (RPP-PLAN-49827, Rev. 0), Technology Maturation Plan for the Treatment Project (T4S01), was developed. This plan contains all identified actions required to reach technical maturity for a field-deployable waste feed pretreatment system. The supplemental pretreatment system has a filtration component and a Cs-removal component. Subsequent to issuance of the TMP, rotary microfiltration (RMF) was identified as the prime filtration technology for this application. The prime Cs-removal technology is small column ion exchange (ScIX) using spherical resorcinol formaldehyde (sRF) as the exchange resin. During fiscal year 2011 (FY2011), some of the tasks identified in the TMP were completed. As of September 2011, the conceptual design package had been submitted to DOE as part of the critical decision (CD-1) process. This document describes the remaining tasks identified in the TMP to reach technical maturity and evaluates the validity of the proposed tests to fill the gaps previously identified in the TMP. The potential vulnerabilities are presented, and the completed list of criteria for the different technology readiness levels of DOE guide DOE G 413.3-4 is added in an attachment. This evaluation has been conducted from a technology development perspective; all programmatic and manufacturing aspects were excluded from this exercise. Compliance with the DOE G 413.3-4 programmatic and manufacturing requirements will be addressed directly by the Treatment Project during the course of engineering design. The results of this evaluation show that completion of the proposed development tasks in the TMP is sufficient to reach TRL 6 from a technological point of view. The tasks involve actual waste tests using the current baseline configuration (2nd generation disks, 40 psi differential pressure, 30 C feed temperature) and three different simulants - the PEP, an AP-Farm and an S

  13. Requirements to MRI and MRS data to be applicable for radiation treatment

    International Nuclear Information System (INIS)

    Torresin, A.; Brambilla, M.; Colombo, P.; Minella, M.; Monti, A.; Moscato, A.

    2012-01-01

Magnetic Resonance Imaging (MRI) use for radiotherapy planning began in the 1980s and is still developing today. MRI is now an important morphological and functional imaging modality that opens new diagnostic and planning scenarios. As new technologies become more complex, their optimal clinical application often becomes difficult. The intent of this paper is to highlight the methods and requirements necessary to apply MRI in radiation treatment. The concept of the image is still evolving, following the evolution of knowledge of the human body. Many different methods of acquiring diagnostic information on a tumour are available, and MRI images will become increasingly important in the near future. The combination of information from complementary imaging modalities is expected to be of great benefit in cancer treatment. This is particularly relevant for target definition, which remains one of the most important sources of error in radiotherapy. Anatomical imaging with CT and MRI produces different gross tumour volumes, and functional imaging with modalities such as MRI and PET will generally reveal yet another volume. Thus a decision has to be taken on how to combine such information in clinical applications. The solutions to these new problems are 'in progress', and much research into clinical applications is under discussion

  14. Risk Factors for the Requirement of Antenatal Insulin Treatment in Gestational Diabetes Mellitus

    Directory of Open Access Journals (Sweden)

    Mayu Watanabe

    2016-01-01

Full Text Available Poor maternal glycemic control increases maternal and fetal risk for adverse outcomes, and strict management of gestational diabetes mellitus (GDM) is recommended to prevent neonatal and maternal complications. However, risk factors for the requirement of antenatal insulin treatment (AIT) are not well investigated in pregnant women with GDM. We enrolled 37 pregnant women with GDM and investigated the risk for AIT by comparing patients who required AIT (AIT group; n=10) with those managed without insulin therapy (Diet group; n=27). The 1-h and 2-h plasma glucose levels and the number of abnormal values in the 75 g OGTT were significantly higher in the AIT group than in the Diet group. By logistic regression analysis, the 1-h plasma glucose level was a significant predictor of AIT; the odds ratios were 1.115 (1.004–1.239) using the forward selection method and 1.192 (1.006–1.413) using the backward elimination method. There were no significant differences in obstetrical outcomes or neonatal complications. The 1-h plasma glucose level in the 75 g OGTT is a useful parameter for predicting the requirement for AIT in GDM. Both maternal and neonatal complications are comparable in GDM patients with and without insulin therapy.
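As an aside on how such a per-unit odds ratio is read, a minimal sketch (the intercept and the 10 mg/dL step are illustrative assumptions, not values from the study):

```python
import math

# Illustrative only: beta is a hypothetical logistic regression
# coefficient for 1-h plasma glucose (per mg/dL); an odds ratio of
# 1.115 per unit corresponds to beta = ln(1.115).
beta = math.log(1.115)

def predicted_probability(intercept, beta, glucose):
    """P(AIT) from a single-predictor logistic model."""
    logit = intercept + beta * glucose
    return 1.0 / (1.0 + math.exp(-logit))

# the odds ratio per 10 mg/dL increase compounds the per-unit OR
or_per_10 = math.exp(beta * 10)
print(round(or_per_10, 2))  # → 2.97
```

So a per-unit odds ratio that looks modest (1.115) compounds to nearly a tripling of the odds over a 10 mg/dL difference, which is why the 1-h value carries predictive weight.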

  15. Neurodevelopmental long-term outcome in children with hydrocephalus requiring neonatal surgical treatment.

    Science.gov (United States)

    Melot, A; Labarre, A; Vanhulle, C; Rondeau, S; Brasseur, M; Gilard, V; Castel, H; Marret, S; Proust, F

    2016-04-01

To assess long-term neurodevelopmental outcome in children with hydrocephalus requiring neurosurgical treatment during the neonatal period. This prospective longitudinal population-based study included 43 children with neonatal shunted hydrocephalus. The 43 children were prospectively reviewed in the presence of their parents at the outpatient clinic. Cognitive and motor outcomes were assessed using different Wechsler scales according to age and the Gross Motor Function Classification System (GMFCS), respectively. Postoperative MRI was routinely performed. The mean gestational age at birth of the 43 consecutive children with neonatal hydrocephalus (sex ratio M/F: 1.39) was 34.5±5.4 weeks of gestation. At a mean follow-up of 10.4±4 years, mean total IQ was 73±27.7, with equivalent results in mean verbal and mean performance IQ. Of the 33 children with IQ evaluation, 18 presented an IQ≥85 (41.9%). Walking without a mobility device (GMFCS≤2) was achieved in 37 children (86%). Only the severity of postoperative ventricular dilation was significantly associated with unfavorable outcome (Evans index>0.37; odds ratio: 0.16, P=0.03). This information could be provided to the families concerned, who often experience anxiety when multidisciplinary management of neonatal hydrocephalus is required. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  16. [Surgical treatment of Marfan syndrome; analysis of patients who required multiple surgical interventions].

    Science.gov (United States)

    Yamazaki, F; Shimamoto, M; Fujita, S; Nakai, M; Aoyama, A; Chen, F; Nakata, T; Yamada, T

    2002-07-01

Without treatment, the life expectancy of patients with Marfan syndrome is reduced by the associated cardiovascular abnormalities. In this study, we reviewed our experience with patients with Marfan syndrome who required multiple surgical interventions, to identify the optimal treatment for these patients. Between January 1986 and December 2000, 44 patients with Marfan syndrome were operated on at Shizuoka City Hospital (SCH). Among them, 10 patients (22.7%) underwent multiple surgical interventions. There were 5 male and 5 female patients, with a mean age of 40.6 +/- 16.1 years at the initial surgery. Only one patient was operated on at another hospital for his first, second, and third operations; his fourth operation was carried out at SCH. The remaining 9 patients underwent a total of 14 additional surgical procedures at SCH. Computed tomography (CT) scans were taken every 6 months postoperatively, and an aortic diameter greater than 60 mm was considered an indication for additional surgery. There was no early death and one late death. The causes of additional surgery were enlargement of a true aneurysm in 6, enlargement of residual dissection in 4, new dissection in 4, and false aneurysm at the coronary anastomosis of a Bentall procedure in 1. In 9 patients, both the ascending and descending aorta were replaced. Among these 9 patients, only 3 underwent total arch replacement; the remaining 6 had the arch left in place, with or without dissection. Our current strategy for the treatment of Marfan patients with acute type A dissection is total arch replacement with an elephant trunk at the initial emergent surgery.

  17. Dynamic Computational Model of Symptomatic Bacteremia to Inform Bacterial Separation Treatment Requirements.

    Directory of Open Access Journals (Sweden)

    Sinead E Miller

    Full Text Available The rise of multi-drug resistance has decreased the effectiveness of antibiotics, which has led to increased mortality rates associated with symptomatic bacteremia, or bacterial sepsis. To combat decreasing antibiotic effectiveness, extracorporeal bacterial separation approaches have been proposed to capture and separate bacteria from blood. However, bacteremia is dynamic and involves host-pathogen interactions across various anatomical sites. We developed a mathematical model that quantitatively describes the kinetics of pathogenesis and progression of symptomatic bacteremia under various conditions, including bacterial separation therapy, to better understand disease mechanisms and quantitatively assess the biological impact of bacterial separation therapy. Model validity was tested against experimental data from published studies. This is the first multi-compartment model of symptomatic bacteremia in mammals that includes extracorporeal bacterial separation and antibiotic treatment, separately and in combination. The addition of an extracorporeal bacterial separation circuit reduced the predicted time of total bacteria clearance from the blood of an immunocompromised rodent by 49%, compared to antibiotic treatment alone. Implementation of bacterial separation therapy resulted in predicted multi-drug resistant bacterial clearance from the blood of a human in 97% less time than antibiotic treatment alone. The model also proposes a quantitative correlation between time-dependent bacterial load among tissues and bacteremia severity, analogous to the well-known 'area under the curve' for characterization of drug efficacy. The engineering-based mathematical model developed may be useful for informing the design of extracorporeal bacterial separation devices. This work enables the quantitative identification of the characteristics required of an extracorporeal bacteria separation device to provide biological benefit. 
These devices will potentially
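A drastically simplified, hypothetical version of such a kinetic model can illustrate the effect of adding an extracorporeal clearance term (one compartment only; all rate constants invented for illustration, whereas the published model is multi-compartment):

```python
# A minimal one-compartment sketch of blood bacterial load B(t):
#   dB/dt = r*B - (k_ab + k_dev)*B
# r: net bacterial growth rate, k_ab: antibiotic kill rate,
# k_dev: extracorporeal separation rate (all per hour, hypothetical).

def simulate(r, k_ab, k_dev, b0=1e6, t_end=24.0, dt=0.01):
    """Forward-Euler integration; returns the load after t_end hours."""
    b = b0
    steps = int(t_end / dt)
    for _ in range(steps):
        b += dt * (r - k_ab - k_dev) * b
    return b

antibiotic_only = simulate(r=0.5, k_ab=0.7, k_dev=0.0)
combined = simulate(r=0.5, k_ab=0.7, k_dev=0.5)
print(combined < antibiotic_only)  # → True
```

Adding the device term simply deepens the net clearance exponent, which is the mechanism behind the predicted reduction in time to total bacterial clearance when separation therapy is combined with antibiotics.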

  18. Groundwater recharge: Accurately representing evapotranspiration

    CSIR Research Space (South Africa)

    Bugan, Richard DH

    2011-09-01

    Full Text Available Groundwater recharge is the basis for accurate estimation of groundwater resources, for determining the modes of water allocation and groundwater resource susceptibility to climate change. Accurate estimations of groundwater recharge with models...

  19. 20 CFR 416.1725 - Effect of your failure to comply with treatment requirements for your drug addiction or alcoholism.

    Science.gov (United States)

    2010-04-01

    ... treatment requirements for your drug addiction or alcoholism. 416.1725 Section 416.1725 Employees' Benefits... Persons Eligible for Supplemental Security Income to Other Agencies Referral for Treatment of Alcoholism... drug addiction or alcoholism. (a) Suspension of benefits. Your eligibility for benefits will be...

  20. PHARMACOECONOMIC ASPECTS OF NICOTINE ADDICTION TREATMENT IN PATIENTS WITH ANGINA REQUIRING CARDIAC SURGERY

    Directory of Open Access Journals (Sweden)

    A. V. Rudakova

    2012-01-01

Full Text Available Smoking is a major risk factor in patients with angina pectoris, and interventions that facilitate smoking cessation are an important part of treatment. Aim. To analyze the cost-effectiveness of the partial agonist of nicotinic receptors, varenicline, in patients with angina who require cardiac interventions. Material and methods. The estimation was conducted using a Markov model based on the results of clinical trials and epidemiological studies. The costs of treating complications were calculated on the basis of compulsory medical insurance rates for St. Petersburg in 2011. Results. Varenicline therapy in 70-year-old patients before cardiac surgery reduces hospital mortality at extremely high cost-effectiveness (the cost of preventing one death: 148.8 thousand rubles). The cost/effectiveness ratio in the analysis over the period of patient survival in this situation was 31.3 thousand rubles per additional year of life; life expectancy increased by an average of 0.147 years. Analysis over the survival period of 50-year-old patients showed that the cost-effectiveness of varenicline after cardiac surgery is also extremely high (in the analysis from the perspective of the health care system, the cost/effectiveness ratio was 36.0 thousand rubles per additional year of life; in the analysis taking the social perspective into account, 17.9 thousand rubles per additional year of life). The increase in life expectancy of 50-year-old patients was 0.291 years on average. Conclusion. Varenicline therapy in patients with angina pectoris is cost-effective both before and after cardiac surgery, and this applies not only to younger but also to older patients. The desirability of including varenicline in federal and regional programs to reduce cardiovascular morbidity and mortality is shown.
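The cost/effectiveness ratios quoted above are incremental cost-effectiveness ratios (ICERs); a minimal sketch of the arithmetic (all figures hypothetical, not taken from the study's model):

```python
# Illustrative ICER calculation: extra cost per extra life-year gained
# for a smoking-cessation therapy versus no therapy. Numbers invented.

def icer(cost_new, cost_old, ly_new, ly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra life-year."""
    return (cost_new - cost_old) / (ly_new - ly_old)

# e.g. therapy adds 0.147 life-years at 4,600 rubles of extra cost
ratio = icer(cost_new=104_600, cost_old=100_000, ly_new=10.147, ly_old=10.0)
print(round(ratio))
```

In a full Markov cost-effectiveness model, both the incremental cost and the incremental life-years come from summing discounted state costs and survival over the cohort's transitions, but the final ratio is exactly this quotient.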

  1. Characteristics and management of patients requiring hospitalization for treatment of odontogenic infections.

    Science.gov (United States)

    Gonçalves, Lélia; Lauriti, Leandro; Yamamoto, Marcos Kazuo; Luz, João Gualberto C

    2013-01-01

Odontogenic infections usually respond well to outpatient care; however, they can become complicated and demand hospitalization. The aim of this study was to retrospectively assess the characteristics and medical management of patients needing hospitalization for the treatment of odontogenic infections. The personal data, symptoms presented, and therapeutic procedures adopted were analyzed. The predominant age group was 0 to 10 years (30%), and a sex ratio of 1:1 was found, with no significant difference (P = 0.337). The most frequent diagnosis was dentoalveolar abscess (86.3%). Pain (47.1%) was the prevailing reason for hospitalization, with pulpal necrosis (67.5%) as the main cause. There was a predominance of involvement of the lower permanent teeth (41.4%) and lower deciduous teeth (23%). The prevalent clinical sign was submandibular or facial swelling (61.4%). The most administered antibiotic was penicillin G associated with metronidazole (25.3%). Most cases (58.7%) showed regression with antibiotic therapy, and in some cases surgical drainage was necessary (18.7%). One case of Ludwig angina resulted in death. The mean length of hospital stay was 4.4 days, being longer in cases of Ludwig angina. It was concluded that most cases of odontogenic infections requiring hospitalization were dentoalveolar abscesses occurring in young people of both sexes, associated with the lower permanent molar teeth, presenting with swelling, with regression of symptoms after antibiotic therapy and hospitalization for some days, and with some cases requiring drainage.

  2. Multi-objective optimization of inverse planning for accurate radiotherapy

    International Nuclear Information System (INIS)

    Cao Ruifen; Pei Xi; Cheng Mengyun; Li Gui; Hu Liqin; Wu Yican; Jing Jia; Li Guoli

    2011-01-01

Multi-objective optimization of inverse planning based on the Pareto solution set, reflecting the inherently multi-objective character of inverse planning in accurate radiotherapy, was studied in this paper. First, the clinical requirements of a treatment plan were transformed into a multi-objective optimization problem with multiple constraints. Then, the fast and elitist multi-objective Non-dominated Sorting Genetic Algorithm (NSGA-II) was introduced to optimize the problem. A clinical example was tested using this method. The results show that the obtained set of non-dominated solutions was uniformly distributed, and that the corresponding dose distribution of each solution not only approached the expected dose distribution but also met the dose-volume constraints. This indicates that the clinical requirements were better satisfied using this method and that the planner can select the optimal treatment plan from the non-dominated solution set. (authors)
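The non-dominated (Pareto) relation at the heart of NSGA-II can be sketched as follows (objective values invented for illustration):

```python
# Minimal Pareto filter for a minimization problem: keep solutions not
# dominated by any other. Solution a dominates b if a is <= b in every
# objective and strictly < in at least one. NSGA-II sorts its population
# into successive fronts using exactly this relation.

def dominates(a, b):
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions)]

# hypothetical objectives: (target underdose, organ-at-risk dose),
# both to be minimized
candidates = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
print(pareto_front(candidates))  # → [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0)]
```

Each member of the resulting front is a plan trading off the objectives differently, which is why the planner, rather than the algorithm, makes the final selection.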

  3. Accurate overlaying for mobile augmented reality

    NARCIS (Netherlands)

    Pasman, W; van der Schaaf, A; Lagendijk, RL; Jansen, F.W.

    1999-01-01

Mobile augmented reality requires accurate alignment of virtual information with objects visible in the real world. We describe a system for mobile communications to be developed to meet these strict alignment criteria using a combination of computer vision, inertial tracking and low-latency

  4. Are levothyroxine requirements lower in thyroidectomized diabetic patients on metformin treatment?

    Science.gov (United States)

    Casteràs, Anna; Zafon, Carles; Ciudin, Andreea; Mesa, Jordi

    2013-12-01

Recent studies have suggested that metformin (MF) may lower thyrotropin concentration, suggesting a possible need for a dose reduction of levothyroxine in hypothyroid patients taking MF. However, contradictory results from heterogeneous study populations indicate that the underlying causes have not been completely elucidated. Patients with postoperative hypothyroidism (a condition not influenced by endogenous thyroid hormone production) had not previously been studied to assess the impact of MF. To determine the impact of MF in totally thyroidectomized patients receiving levothyroxine replacement. One hundred ninety-two patients underwent total thyroidectomy over three years and were receiving levothyroxine substitution. Patients were divided into two groups depending on MF use: the non-MF group included 159 patients, of whom 134 were women [mean (SD) age, 52 (15.7) years; mean (SD) body weight, 70.2 (13.5) kg; 56 with differentiated thyroid cancer]; the MF group comprised 33 patients, of whom 24 were women [mean (SD) age, 63 (9.8) years; mean (SD) body weight, 79.3 (13.9) kg; 9 with differentiated thyroid cancer]. Levothyroxine requirements were compared between the groups, and the differentiated thyroid cancer cases were also analyzed separately. Thyrotropin levels did not differ significantly between the MF and non-MF groups. No differences in total levothyroxine dosage were found: 114 (100-150) [median (Q1-Q3)] μg in the non-MF group versus 125 (100-142) μg in the MF group (p=0.9). When the weight-adjusted levothyroxine dose was calculated, significant differences were evident: 1.66 (1.38-2.08) μg/kg in the non-MF group versus 1.53 (1.26-1.70) μg/kg in the MF group (p=0.010). However, in a multivariate regression model with thyrotropin levels, age, body mass index, sex, and type of thyroid disease, MF treatment lost its significance. Thyroidectomized patients receiving MF treatment need a lower thyroxine dose than patients who do not receive the drug

  5. Expectation requires treatment to boost pain relief: an fMRI study.

    Science.gov (United States)

    Schenk, Lieven A; Sprenger, Christian; Geuter, Stephan; Büchel, Christian

    2014-01-01

    We investigated the effect of a possible interaction between topical analgesic treatment and treatment expectation on pain at the behavioral and neuronal level by combining topical lidocaine/prilocaine treatment with an expectancy manipulation in a 2 by 2 within-subject design (open treatment, hidden treatment, placebo, control). Thirty-two healthy subjects received heat pain stimuli on capsaicin-pretreated skin and rated their experienced pain during functional magnetic resonance imaging. This allowed us to separate drug- and expectancy-related effects at the behavioral and neuronal levels and to test whether they interact during the processing of painful stimuli. Pain ratings were reduced during active treatment and were associated with reduced activity in the anterior insular cortex. Pain ratings were lower in open treatment compared with hidden treatment and were related to reduced activity in the anterior insular cortex, the anterior cingulate cortex, the secondary somatosensory cortex, and the thalamus. Testing for an interaction revealed that the expectation effect was significantly larger in the active treatment conditions compared with the no-treatment conditions and was associated with signal changes in the anterior insular cortex, the anterior cingulate cortex, and the ventral striatum. In conclusion, this study shows that even in the case of a topical analgesic, expectation interacts with treatment at the level of pain ratings and neuronal responses in placebo-related brain regions. Our results are highly relevant in the clinical context as they show (i) that expectation can boost treatment and (ii) that expectation and treatment are not necessarily additive as assumed in placebo-controlled clinical trials. Copyright © 2013 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.

  6. [Normative definition of staff requirement for a guideline-adherent inpatient qualified detoxification treatment in alcohol dependence].

    Science.gov (United States)

    Kiefer, F; Koopmann, A; Godemann, F; Wolff, J; Batra, A; Mann, K

    2016-03-01

The central element of the "qualified withdrawal treatment" of alcohol dependence is, in addition to physical withdrawal treatment, psychotherapy. Treatment of the underlying addictive disorder, which manifests as intoxication, harmful behaviour and withdrawal symptoms, is only possible with a combination of somatic and psychotherapeutic treatment elements. The successfully established multimodal therapy of "qualified alcohol withdrawal treatment", postulated in the current S3 treatment guidelines, requires a multi-disciplinary treatment team with psychotherapeutic competence. The aim of the present work is to calculate the normative staff requirement of a guideline-based 21-day qualified withdrawal treatment and to compare the result with the staffing regulations of the German Institute for Hospital Reimbursement. The present data support the hypothesis that even with full implementation of these staffing regulations, guideline-adherent therapy of alcohol-related disorders is not feasible. This has to be considered when further developing the financial compensation system based on the superseded staffing elements of the German Institute for Hospital Reimbursement.

  7. 49 CFR 40.289 - Are employers required to provide SAP and treatment services to employees?

    Science.gov (United States)

    2010-10-01

    ... Transportation PROCEDURES FOR TRANSPORTATION WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Substance Abuse... subsequent recommended education or treatment for an employee who has violated a DOT drug and alcohol...

  8. 7 CFR 319.37-6 - Specific treatment and other requirements.

    Science.gov (United States)

    2010-01-01

    ..., Hong Kong, India, Indonesia, Ivory Coast, Japan, Kampuchea, Korea, Madagascar, Malaysia, Mauritius... owner and the plant protection service of the exporting country, in which the treatment facility owner... plant protection service of the exporting country access to the treatment facility as necessary to...

  9. 40 CFR 141.82 - Description of corrosion control treatment requirements.

    Science.gov (United States)

    2010-07-01

    ... sufficient to maintain an effective residual concentration in all test tap samples. (2) The water system... adversely affected other water treatment processes when used by another water system with comparable water... ineffective or adversely affects other water quality treatment processes. (5) The water system shall evaluate...

  10. 20 CFR 416.936 - Treatment required for individuals whose drug addiction or alcoholism is a contributing factor...

    Science.gov (United States)

    2010-04-01

    ... addiction or alcoholism is a contributing factor material to the determination of disability. 416.936... AGED, BLIND, AND DISABLED Determining Disability and Blindness Drug Addiction and Alcoholism § 416.936 Treatment required for individuals whose drug addiction or alcoholism is a contributing factor material to...

  11. 20 CFR 404.1536 - Treatment required for individuals whose drug addiction or alcoholism is a contributing factor...

    Science.gov (United States)

    2010-04-01

    ... addiction or alcoholism is a contributing factor material to the determination of disability. 404.1536... Treatment required for individuals whose drug addiction or alcoholism is a contributing factor material to... alcoholism is a contributing factor material to the determination of disability (as described in § 404.1535...

  12. Adjunctive treatment of decompression illness with a non-steroidal anti-inflammatory drug (tenoxicam) reduces compression requirement.

    Science.gov (United States)

    Bennett, M; Mitchell, S; Dominguez, A

    2003-01-01

We report a randomized trial examining adjunctive administration of the NSAID tenoxicam to divers suffering from DCI. 180 subjects were graded for severity on admission and randomized according to a stratified random number schedule. Subjects were recompressed, and treatment continued daily until symptom stabilization or complete resolution. Tenoxicam 20 mg or a placebo preparation was administered at the first air break during the initial recompression and continued daily for seven days. Subjects were assessed using a recovery status score at the completion of treatment and at 4-6 weeks. The proportion of patients with mild residual symptoms at discharge and at final follow-up was not significantly different (discharge: placebo 30% versus tenoxicam 37%, P=0.41; six weeks: placebo 20% versus tenoxicam 17%, P=0.58). There was a significant reduction in the number of treatments required to achieve discharge (median treatments: placebo 3, tenoxicam 2, P=0.01). 61% of patients in the tenoxicam group required fewer than 3 compressions, versus 40% in the placebo group (P=0.01, RRR 33% [95%CI 9%-56%], NNT=5 [95%CI 3-18]). There was no evidence of increased complications of treatment in the tenoxicam group. When given this NSAID, patients with DCI require fewer hyperbaric oxygen (HBO2) sessions to achieve a standard clinical end-point, and there is likely to be an associated cost saving.
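The reported NNT follows from the compression-count proportions; a sketch reproducing it approximately from the rounded percentages (the trial's exact counts give RRR 33%, slightly different from the rounded figures used here):

```python
# RRR and NNT from event rates. The "event" here is needing >= 3
# recompressions: placebo 60%, tenoxicam 39% (complements of the
# "< 3 compressions" proportions of 40% and 61%).

def rrr_and_nnt(p_control, p_treat):
    arr = p_control - p_treat   # absolute risk reduction
    rrr = arr / p_control       # relative risk reduction
    nnt = 1.0 / arr             # number needed to treat
    return rrr, nnt

rrr, nnt = rrr_and_nnt(p_control=0.60, p_treat=0.39)
print(round(rrr, 2), round(nnt))  # → 0.35 5
```

NNT = 5 means that, on these figures, treating five divers with tenoxicam spares one of them a third recompression.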

  13. Accurate Evaluation of Quantum Integrals

    Science.gov (United States)

    Galant, D. C.; Goorvitch, D.; Witteborn, Fred C. (Technical Monitor)

    1995-01-01

Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving the Schrödinger equation. Important results are that error estimates are provided and that one can extrapolate expectation values, rather than the wavefunctions, to obtain highly accurate expectation values. We discuss the eigenvalues and the error growth in repeated Richardson's extrapolation, and show that expectation values calculated on a crude mesh can be extrapolated to high accuracy.
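The extrapolation idea can be sketched on a simpler problem, a central-difference second derivative, where combining step sizes h and h/2 cancels the leading error term (illustrative only, not the paper's Schrödinger solver):

```python
import math

# Central-difference second derivative has error ~ C*h^2. Evaluating at
# h and h/2 and forming (4*D(h/2) - D(h)) / 3 cancels the h^2 term,
# leaving an O(h^4) error: one step of Richardson extrapolation.

def second_diff(f, x, h):
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

def richardson(f, x, h):
    d_h = second_diff(f, x, h)
    d_h2 = second_diff(f, x, h / 2)
    return (4 * d_h2 - d_h) / 3

x, h = 0.5, 0.1
exact = -math.sin(x)                    # (sin x)'' = -sin x
crude = second_diff(math.sin, x, h)
refined = richardson(math.sin, x, h)
print(abs(refined - exact) < abs(crude - exact))  # → True
```

Repeating the step with further halvings removes successive error terms, which is the "repeated Richardson's extrapolation" the abstract refers to; the same combination applied to expectation values computed on coarse and fine meshes yields the high-accuracy values described.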

  14. Quantitative and qualitative characteristics of grey water for reuse requirements and treatment alternatives: the case of Jordan.

    Science.gov (United States)

    Ghunmi, Lina Abu; Zeeman, Grietje; van Lier, Jules; Fayyed, Manar

    2008-01-01

The objective of this work is to assess the potentials and requirements for grey water reuse in Jordan. The results revealed that urban, rural and dormitory grey water production rates and concentrations of TS, BOD(5), COD and pathogens varied between 18-66 L cap(-1)d(-1), 848-1,919, 200-1,056, and 560-2,568 mg L(-1), and 6.9E2-2.7E5 CFU mL(-1), respectively. Grey water comprises 64 to 85% of the total water flow in the rural and urban areas. Storing grey water is inevitable to meet reuse requirements in terms of volume and timing. All the studied grey waters need treatment, in terms of solids, BOD(5), COD and pathogens, before storage and reuse. Storage combined with physical treatment as a pretreatment step should be avoided, since it produces unstable effluents and non-stabilized sludge. However, extensive biological treatment can combine storage and physical treatment. Furthermore, a batch-fed biological treatment system combining anaerobic and aerobic processes copes with the fluctuations in the hydrographs and pollutographs as well as the nutrients present. The inorganic content of grey water in Jordan is of approximately drinking-water quality and does not need treatment. Moreover, the grey water SAR values were 3-7, revealing that the concentrations of monovalent and divalent cations comply with agricultural demand in Jordan. The observed patterns in the hydrographs and pollutographs showed that the hydraulic load can be used for the design of both physical and biological treatment units for dormitories and hotels. For family houses, the hydraulic load was identified as the key design parameter for physical treatment units, and the organic load as the key design parameter for biological treatment units. Copyright IWA Publishing 2008.
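The SAR values quoted follow the standard sodium adsorption ratio definition for irrigation water; a minimal sketch with a hypothetical sample (concentrations in meq/L):

```python
import math

# Sodium adsorption ratio as commonly computed for irrigation water:
#   SAR = Na / sqrt((Ca + Mg) / 2)
# with all cation concentrations expressed in meq/L.

def sar(na_meq, ca_meq, mg_meq):
    return na_meq / math.sqrt((ca_meq + mg_meq) / 2.0)

# hypothetical grey water sample, chosen to fall in the reported 3-7 range
value = sar(na_meq=8.0, ca_meq=4.0, mg_meq=4.0)
print(round(value, 1))  # → 4.0
```

A higher SAR indicates a larger proportion of monovalent sodium relative to divalent calcium and magnesium, which degrades soil structure; values in the 3-7 range are generally acceptable for irrigation, consistent with the reuse conclusion above.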

  15. Successful treatment of Batrachochytrium salamandrivorans infections in salamanders requires synergy between voriconazole, polymyxin E and temperature.

    Science.gov (United States)

    Blooi, M; Pasmans, F; Rouffaer, L; Haesebrouck, F; Vercammen, F; Martel, A

    2015-06-30

Chytridiomycosis caused by the chytrid fungus Batrachochytrium salamandrivorans (Bsal) poses a serious threat to urodelan diversity worldwide. Antimycotic treatment of this disease using protocols developed for the related fungus Batrachochytrium dendrobatidis (Bd) results in therapeutic failure. Here, we reveal that this therapeutic failure is partly due to different minimum inhibitory concentrations (MICs) of antimycotics against Bsal and Bd. In vitro growth inhibition of Bsal occurs after exposure to voriconazole, polymyxin E, itraconazole and terbinafine, but not to florfenicol. Synergistic effects between polymyxin E and voriconazole or itraconazole significantly decreased the combined MICs necessary to inhibit Bsal growth. Topical treatment of infected fire salamanders (Salamandra salamandra) with voriconazole or itraconazole alone (12.5 μg/ml and 0.6 μg/ml, respectively) or in combination with polymyxin E (2000 IU/ml) at an ambient temperature of 15 °C for 10 days decreased fungal loads but did not clear Bsal infections. However, topical treatment of Bsal-infected animals with a combination of polymyxin E (2000 IU/ml) and voriconazole (12.5 μg/ml) at an ambient temperature of 20 °C resulted in clearance of Bsal infections. This treatment protocol was validated in 12 fire salamanders infected with Bsal during a field outbreak and resulted in clearance of infection in all animals.

  16. Restrictive Behaviour Management Procedures with People with Intellectual Disabilities Who Require Dental Treatment

    Science.gov (United States)

    Newton, J. T.

    2009-01-01

    Background: Dental disease is more common among people with intellectual disabilities than in the general population. Improvements in oral health require individuals to engage in daily oral hygiene and regular visits to a dental practitioner; both may be challenging for the individual with intellectual impairment. Materials and Methods: A review…

  17. Towards accurate emergency response behavior

    International Nuclear Information System (INIS)

    Sargent, T.O.

    1981-01-01

Nuclear reactor operator emergency response behavior has persisted as a training problem through a lack of information. The industry needs an accurate definition of operator behavior under adverse stress conditions, and training methods which will produce the desired behavior. Newly assembled information from fifty years of research into human behavior under both high and low stress provides a more accurate definition of appropriate operator response, and supports training methods which will produce the needed control room behavior. The research indicates that operator response in emergencies is divided into two modes, conditioned behavior and knowledge-based behavior. Methods which assure accurate conditioned behavior, and provide for the recovery of knowledge-based behavior, are described in detail.

  18. Analysis of indirect treatment comparisons in national health technology assessments and requirements for industry submissions.

    Science.gov (United States)

    Es-Skali, Ischa J; Spoors, John

    2018-03-28

    To determine the preferred methodologies of health technology assessment (HTA) agencies across Europe, Canada and Australia to ascertain acceptance of indirect treatment comparisons (ITC) as a source of comparative evidence. A review of official submission guidelines and analysis of comments in HTA submissions that have used different ITC methodologies. ITC is generally accepted as a technique that allows demonstration of noninferiority to a comparator provided the chosen methodology and underlying assumptions are clear and justified. However, HTA agencies are more likely to closely scrutinize submitted data and evaluate statistical significance of results when superiority is claimed. In addition, the HTA agencies in scope tended to be cautious and only accept ITC data as support for similarity of treatments.

  19. Malocclusion and early orthodontic treatment requirements in the mixed dentitions of a population of Nigerian children.

    Science.gov (United States)

    daCosta, Oluranti Olatokunbo; Aikins, Elfleda Angelina; Isiekwe, Gerald Ikenna; Adediran, Virginia Efunyemi

    2016-01-01

    The aims of this study were to establish the prevalence of dental features that indicate a need for early intervention and to ascertain the prevalence of different methods of early treatment among a population of Nigerian children in the mixed dentition. Occlusal relationships were evaluated in 101 children in the mixed dentition, aged between 6 and 12 years, who presented at the Orthodontic Unit, Department of Child Dental Health, Lagos University Teaching Hospital over a 2-year period. The need for different modes of early orthodontic treatment was also recorded. Anterior tooth rotations (61.4%) and increased overjet (44.6%) were the most prevalent occlusal anomalies. Others included deep bite (31.7%), reverse overjet (13.9%), and anterior open bite (14.8%). Severe maxillary spacing and crowding were exhibited in 12.0% and 5.0%, respectively. About a third (35.7%) of the subjects presented with crossbite, while lip incompetence was observed in 43.6% of the subjects. About 44% of the subjects also presented with various oral habits, with digit sucking (15.8%) and lip sucking (9.9%) being the most prevalent. Subjects were recommended for treatment with 2 by 4 fixed orthodontic appliances (22.3%), habit breakers (20.7%), removable orthodontic appliances (16.5%), and extractions (15.7%). Increased overjet and anterior tooth rotations accounted for the majority of the occlusal anomalies seen; these are not only esthetically displeasing but may also increase susceptibility to trauma to the affected teeth. Treatment options varied from extractions only to the use of appliance therapy.

  20. Nerve growth factor beta polypeptide (NGFB) genetic variability: association with the methadone dose required for effective maintenance treatment

    OpenAIRE

    Levran, Orna; Peles, Einat; Hamon, Sara; Randesi, Matthew; Zhao, Connie; Zhang, Bin; Adelson, Miriam; Kreek, Mary Jeanne

    2011-01-01

    Opioid addiction is a chronic disease with high genetic contribution and a large inter-individual variability in therapeutic response. The goal of this study was to identify pharmacodynamic factors that modulate methadone dose requirement. The neurotrophin family is involved in neural plasticity, learning memory and behavior and deregulated neural plasticity may underlie the pathophysiology of drug addiction. BDNF was shown to affect the response to methadone maintenance treatment. This study...

  1. Accurate determination of antenna directivity

    DEFF Research Database (Denmark)

    Dich, Mikael

    1997-01-01

    The derivation of a formula for accurate estimation of the total radiated power from a transmitting antenna for which the radiated power density is known in a finite number of points on the far-field sphere is presented. The main application of the formula is determination of directivity from power...
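The abstract is truncated before the formula itself; under the standard definitions, directivity is D = 4π·U_max/P_rad, with the total radiated power P_rad estimated by quadrature of the sampled power density over the far-field sphere. A minimal numerical sketch of that estimate (the cosine test pattern is hypothetical, not from the paper):

```python
import math

def directivity(pattern, n_theta=400, n_phi=400):
    """Estimate antenna directivity D = 4*pi*U_max/P_rad from a radiation
    intensity pattern U(theta, phi) sampled on the far-field sphere, with
    P_rad computed by midpoint-rule quadrature of U*sin(theta)."""
    d_theta = math.pi / n_theta
    d_phi = 2.0 * math.pi / n_phi
    p_rad, u_max = 0.0, 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * d_theta          # midpoint in theta
        sin_t = math.sin(theta)
        for j in range(n_phi):
            phi = (j + 0.5) * d_phi          # midpoint in phi
            u = pattern(theta, phi)
            u_max = max(u_max, u)
            p_rad += u * sin_t * d_theta * d_phi
    return 4.0 * math.pi * u_max / p_rad

# Hypothetical test pattern: U = cos(theta) on the upper hemisphere,
# for which the exact directivity is 4 (about 6.02 dBi).
def cosine_pattern(theta, phi):
    return max(math.cos(theta), 0.0)
```

Because the cosine pattern has a closed-form directivity of exactly 4, it provides a convenient check on the quadrature before applying the estimator to measured power data.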

  2. Upgrade of Al-Aziziah Wastewater Treatment (Wasit) to Meet Nutrient Removal Requirements

    Directory of Open Access Journals (Sweden)

    Mohammed Siwan Shamkhi

    2016-03-01

    Full Text Available The aim of this paper is to verify suggestions for upgrading the existing wastewater treatment process to achieve nutrient removal (phosphorus and nitrogen) from the treated wastewater. The results show that adding a cyclic anaerobic, anoxic and aerobic condition improved biological nutrient removal efficiencies. The effluent phosphorus and nitrogen contaminant concentrations were below the maximum permissible concentrations under various conditions of flow and temperature, except for a considerable release of phosphorus during summer (July and August) because of the sensitivity of phosphate-accumulating organisms (PAOs) to temperature.

  3. Demonstration of New Technologies Required for the Treatment of Mixed Waste Contaminated with ≥260 ppm Mercury

    Energy Technology Data Exchange (ETDEWEB)

    Morris, M.I.

    2002-02-06

    The Resource Conservation and Recovery Act (RCRA) defines several categories of mercury wastes, each of which has a defined technology- or concentration-based treatment standard, or universal treatment standard (UTS). RCRA defines mercury hazardous wastes as any waste that has a TCLP value for mercury of 0.2 mg/L or greater. Three of these categories, all nonwastewaters, fall within the scope of this report on new technologies to treat mercury-contaminated wastes: wastes as elemental mercury; hazardous wastes with less than 260 mg/kg [parts per million (ppm)] mercury; and hazardous wastes with 260 ppm or more of mercury. While this report deals specifically with the last category, hazardous wastes with 260 ppm or more of mercury, the other two categories will be discussed briefly so that the full range of mercury treatment challenges can be understood. The treatment methods for these three categories are as follows: Waste as elemental mercury: RCRA identifies amalgamation (AMLGM) as the treatment standard for radioactive elemental mercury. However, radioactive mercury condensates from retorting (RMERC) processes also require amalgamation. In addition, incineration (IMERC) and RMERC processes that produce residues with >260 ppm of radioactive mercury contamination and that fail the RCRA toxicity characteristic leaching procedure (TCLP) limit for mercury (0.20 mg/L) require RMERC, followed by AMLGM of the condensate. Waste with <260 ppm mercury: No specific treatment method is specified for hazardous wastes containing <260 ppm. However, RCRA regulations require that such wastes (other than RMERC residues) that exceed a TCLP mercury concentration of 0.20 mg/L be treated by a suitable method to meet the TCLP limit for mercury of 0.025 mg/L. RMERC residues must meet the TCLP value of ≤0.20 mg/L, or be stabilized and meet the ≤0.025 mg/L limit. Waste with ≥260 ppm mercury: For hazardous wastes with mercury contaminant concentrations ≥260 ppm and RCRA
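The three nonwastewater categories and their TCLP thresholds can be summarized as a toy decision rule. This is an illustrative paraphrase of the categories described in the abstract, not regulatory guidance; the function name and return labels are invented:

```python
def mercury_treatment_category(elemental, hg_ppm, tclp_mg_per_l):
    """Toy classifier paraphrasing the RCRA nonwastewater mercury
    categories sketched above. Illustrative only, not regulatory guidance."""
    if elemental:
        return "AMLGM"  # amalgamation of radioactive elemental mercury
    if tclp_mg_per_l < 0.2:
        # below the TCLP characteristic level, not mercury-hazardous
        return "not RCRA-hazardous for mercury"
    if hg_ppm >= 260:
        return "RMERC"  # retort, then amalgamate the condensate
    # <260 ppm: treat by a suitable method to meet the 0.025 mg/L limit
    return "treat to TCLP <= 0.025 mg/L"
```

For example, a non-elemental waste at 300 ppm mercury failing TCLP maps to retorting (RMERC), while the same waste at 100 ppm maps to treatment against the 0.025 mg/L limit.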

  4. Who Are the Subjects with Gambling-Related Problems Requiring Treatment? A Study in Northern Italy

    Science.gov (United States)

    Fioritti, Angelo; Marani, Silvia; Gambini, Daniele; Turino, Elsa; Piazza, Antonella

    2018-01-01

    Background: This study analyzes data related to Hospital (HOS), Public Treatment Service Dedicated to Drug Addicts (SERD), or Community Mental Health Center (CMHC) clients with a first diagnosis of Pathological Gambling (PG) in the period 2000/2016 in Northern Italy. The aims were to describe trends and characteristics of pathological gamblers (PGs) and to estimate the prevalence of other diagnoses before or after the diagnosis of PG. Methods: Participants aged over 17 years with an ICD-9 or ICD-10 PG diagnosis were selected. Results: 680 PGs were identified, mean age 47.4 years, 20% female, 13% non-natives, 30% had other mental disorders diagnoses, 9% had alcohol dependence syndrome, and 11% had drug dependence. Most participants with comorbid disorders were diagnosed before PG, with a more elevated prevalence regarding mental disorders. Almost seven years had elapsed on average between the first admission and the diagnosis of PG. Conclusions: The results of this study highlight a growing demand for PG treatment addressed not only to SERD, but also to psychiatric and hospital services, based on the increase in SERD attendance from 2013. Many of them had already been treated for mental health problems before, but their percentage remained constant over time. PMID:29652821

  5. Who Are the Subjects with Gambling-Related Problems Requiring Treatment? A Study in Northern Italy

    Directory of Open Access Journals (Sweden)

    Raimondo Maria Pavarin

    2018-04-01

    Full Text Available Background: This study analyzes data related to Hospital (HOS), Public Treatment Service Dedicated to Drug Addicts (SERD), or Community Mental Health Center (CMHC) clients with a first diagnosis of Pathological Gambling (PG) in the period 2000/2016 in Northern Italy. The aims were to describe trends and characteristics of pathological gamblers (PGs) and to estimate the prevalence of other diagnoses before or after the diagnosis of PG. Methods: Participants aged over 17 years with an ICD-9 or ICD-10 PG diagnosis were selected. Results: 680 PGs were identified, mean age 47.4 years, 20% female, 13% non-natives, 30% had other mental disorders diagnoses, 9% had alcohol dependence syndrome, and 11% had drug dependence. Most participants with comorbid disorders were diagnosed before PG, with a more elevated prevalence regarding mental disorders. Almost seven years had elapsed on average between the first admission and the diagnosis of PG. Conclusions: The results of this study highlight a growing demand for PG treatment addressed not only to SERD, but also to psychiatric and hospital services, based on the increase in SERD attendance from 2013. Many of them had already been treated for mental health problems before, but their percentage remained constant over time.

  6. Effective and accurate screening for diabetic retinopathy using a 60 ...

    African Journals Online (AJOL)

    The endocrinologist was very accurate in determining cases requiring referral; there was 97% agreement with the reference standard, viz. the combined highest score of two experienced ophthalmologists (gold standard). Correlation on the determination of any retinopathy was less accurate (80% agreement), mostly owing ...

  7. Software to support planning for future waste treatment, storage, transport, and disposal requirements

    International Nuclear Information System (INIS)

    Holter, G.M.; Shay, M.R.; Stiles, D.L.

    1990-04-01

    Planning for adequate and appropriate treatment, storage, transport and disposal of wastes to be generated or received in the future is a complex but critical task that can be significantly enhanced by the development and use of appropriate software. This paper describes a software system that has been developed at Pacific Northwest Laboratory to aid in such planning. The basic needs for such a system are outlined, and the approach adopted in developing the software is described. The individual components of the system, and their integration into a unified system, are discussed. Typical analytical applications of this type of software are summarized. Conclusions concerning the development of such software systems and the necessary supporting data are then presented. 2 figs

  8. Vitamin D and Calcium Are Required during Denosumab Treatment in Osteoporosis with Rheumatoid Arthritis

    Directory of Open Access Journals (Sweden)

    Yukio Nakamura

    2017-04-01

    Full Text Available The aim of this 12-month retrospective study was to evaluate differences in the outcomes of denosumab alone or denosumab combined with vitamin D and calcium supplementation in patients having osteoporosis (OP) with rheumatoid arthritis (RA). Patients were divided into the denosumab monotherapy group (denosumab group, 22 cases) or the denosumab plus vitamin D supplementation group (combination group, 21 cases). We measured serum bone alkaline phosphatase (BAP), N-terminal propeptide of type 1 collagen (P1NP), tartrate-resistant acid phosphatase (TRACP-5b), and urinary N-terminal telopeptide of type-I collagen (NTX) at baseline, 1 week, and 1, 2, 4, 6, 8, and 12 months. We also assessed bone mineral density (BMD) of the lumbar 1-4 vertebrae (L-BMD) and bilateral total hips (H-BMD) at baseline and 4, 8, and 12 months. Matrix metalloproteinase-3 (MMP-3), Disease Activity Score-28 C-reactive protein (DAS28-CRP), Simplified Disease Activity Index (SDAI), and Health Assessment Questionnaire-Disability Index (HAQ-DI) were assessed before treatment and at 12 months to evaluate RA conditions. The study results showed that BAP, TRACP-5b, and NTX were significantly decreased, but tended to return to pre-treatment levels around 6 and 12 months in both groups. While L-BMD and H-BMD substantially increased in both groups, H-BMD had become significantly higher in the combination group at 12 months (p < 0.01) as compared with the denosumab group. There were no significant differences between the groups regarding MMP-3, DAS28-CRP, SDAI, or HAQ-DI. Compared with denosumab monotherapy, combination therapy of denosumab with vitamin D and calcium significantly increased H-BMD in patients having OP with RA.

  9. When Is Network Lasso Accurate?

    Directory of Open Access Journals (Sweden)

    Alexander Jung

    2018-01-01

    Full Text Available The “least absolute shrinkage and selection operator” (Lasso) method has been adapted recently for network-structured datasets. In particular, this network Lasso method makes it possible to learn graph signals from a small number of noisy signal samples by using the total variation of a graph signal for regularization. While efficient and scalable implementations of the network Lasso are available, little is known about the conditions on the underlying network structure which ensure that the network Lasso is accurate. By leveraging concepts of compressed sensing, we address this gap and derive precise conditions on the underlying network topology and sampling set which guarantee the network Lasso for a particular loss function to deliver an accurate estimate of the entire underlying graph signal. We also quantify the error incurred by network Lasso in terms of two constants which reflect the connectivity of the sampled nodes.
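The network Lasso objective described here (a quadratic fit on the sampled nodes plus total-variation regularization over the edges) can be sketched with a toy subgradient solver. The chain graph, λ, and iteration schedule below are invented for illustration; practical implementations use scalable ADMM solvers instead:

```python
def network_lasso(edges, samples, n, lam=0.1, iters=3000):
    """Recover a graph signal x by (approximately) minimizing
        sum_{i in samples} (x_i - y_i)^2 + lam * sum_{(i,j) in edges} |x_i - x_j|
    with plain subgradient descent and a 1/sqrt(k) step size.
    A toy sketch of the objective, not the paper's solver."""
    x = [samples.get(i, 0.0) for i in range(n)]
    for k in range(1, iters + 1):
        g = [0.0] * n
        for i, y in samples.items():          # data-fit term on sampled nodes
            g[i] += 2.0 * (x[i] - y)
        for i, j in edges:                    # total-variation subgradient
            s = (x[i] > x[j]) - (x[i] < x[j])
            g[i] += lam * s
            g[j] -= lam * s
        step = 0.5 / k ** 0.5
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# Invented example: a 6-node chain with two constant clusters (1,1,1,5,5,5),
# sampled only at nodes {0, 2, 3, 5}; TV regularization fills in nodes 1 and 4.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
samples = {0: 1.0, 2: 1.0, 3: 5.0, 5: 5.0}
```

On this example the unsampled nodes are pulled toward their cluster values, illustrating how total variation propagates the sampled values across well-connected regions of the graph.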

  10. Accurate x-ray spectroscopy

    International Nuclear Information System (INIS)

    Deslattes, R.D.

    1987-01-01

    Heavy ion accelerators are the most flexible and readily accessible sources of highly charged ions. Those having only one or two remaining electrons have spectra whose accurate measurement is of considerable theoretical significance. Certain features of ion production by accelerators tend to limit the accuracy which can be realized in measurement of these spectra. This report aims to provide background about spectroscopic limitations and discuss how accelerator operations may be selected to permit attaining intrinsically limited data.

  11. The Accurate Particle Tracer Code

    OpenAIRE

    Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi

    2016-01-01

    The Accurate Particle Tracer (APT) code is designed for large-scale particle simulations on dynamical systems. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and non-linear problems. Under the well-designed integrated and modularized framework, APT serves as a universal platform for researchers from different fields, such as plasma physics, accelerator physics, space science, fusio...

  12. Epidemiological survey on third molar agenesis and facial pattern among adolescents requiring orthodontic treatment.

    Science.gov (United States)

    Gómez de Diego, Rafael; Montero, Javier; López-Valverde, Nansi; Ignacio de Nieves, José; Prados-Frutos, Juan-Carlos; López-Valverde, Antonio

    2017-09-01

    The aim of this study was to determine the association between facial pattern according to Ricketts cephalometric analysis, and prevalence of third molar agenesis, taking subject age and gender as control variables. An epidemiological survey was conducted based on a sample of 224 candidates for orthodontic treatment aged 12 to 24 (n=224). Third molar agenesis was recorded using Ricketts cephalometric analyses of lateral teleradiographs and panoramic radiographs. The risk for agenesis was predicted considering the 5 Vert Index parameters (facial axis, facial depth, mandibular plane angle, lower facial height and mandibular arch), facial type (brachyfacial, mesofacial, dolichofacial) and sociodemographic variables (age and sex), using odds ratio (OR) calculated by logistic regression. Third molar agenesis was observed in 25% of the sample. Risk for agenesis is significantly determined by sociodemographic factors (age, OR: 1.2), cephalic patterns (mesofacial vs dolichofacial, OR:4.3; and brachyfacial vs dolichofacial OR: 3.2) and cephalometric patterns (facial axis, OR: 0.8; lower facial height, OR: 0.8; and mandibular plane angle, OR:0.9). Facial parameters (facial axis, lower facial height, and mandibular plane angle) proved to be strong predictors of the risk for third molar agenesis, the prevalence of agenesis being significantly lower in dolichofacial individuals. Key words: Facial Pattern, Ricketts Analysis, Third Molar Agenesis.
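The odds ratios reported in this study come from exponentiating logistic-regression coefficients. A self-contained Newton's-method sketch for a single binary predictor follows; the 2×2 counts in the example are invented, not the study's data, and real analyses would use a statistics package:

```python
import math

def fit_logistic(xs, ys, iters=20):
    """Fit y ~ sigmoid(b0 + b1*x) by Newton's method (IRLS) and return
    (b0, b1); exp(b1) is the odds ratio for the binary predictor x."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            w = p * (1.0 - p)                 # IRLS weight
            g0 += y - p                       # score vector
            g1 += (y - p) * x
            h00 += w                          # observed information matrix
            h01 += w * x
            h11 += w * x * x
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det     # 2x2 Newton step
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

# Invented counts: exposed 30/100 with outcome, unexposed 10/100.
xs = [1] * 100 + [0] * 100
ys = [1] * 30 + [0] * 70 + [1] * 10 + [0] * 90
```

With a single binary predictor, exp(b1) reproduces the classical 2×2 cross-product odds ratio (here (30·90)/(70·10) = 27/7), which is a useful sanity check on the fit.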

  13. Antiretroviral treatment in HIV-infected children who require a rifamycin-containing regimen for tuberculosis.

    Science.gov (United States)

    Rabie, Helena; Decloedt, Eric H; Garcia-Prats, Anthony J; Cotton, Mark F; Frigati, Lisa; Lallemant, Marc; Hesseling, Anneke; Schaaf, H Simon

    2017-04-01

    In high prevalence settings, tuberculosis and HIV dual infection and co-treatment is frequent. Rifamycins, especially rifampicin, in combination with isoniazid, ethambutol and pyrazinamide are key components of short-course antituberculosis therapy. Areas covered: We reviewed available data, for which articles were identified by a Pubmed search, on rifamycin-antiretroviral interactions in HIV-infected children. Rifamycins have potent inducing effects on phase I and II drug metabolising enzymes and transporters. Antiretroviral medications are often metabolised by the enzymes induced by rifamycins or may suppress specific enzyme activity, leading to drug-drug interactions with rifamycins. These may cause significant alterations in their pharmacokinetic and pharmacodynamic properties, and sometimes in those of the rifamycin. Recommended strategies to adapt to these interactions include avoidance and dose adjustment. Expert opinion: Despite the importance and frequency of tuberculosis as an opportunistic disease in HIV-infected children, current data on the management of co-treated children is based on few studies. We need new strategies to rapidly assess the use of rifamycins, new anti-tuberculosis drugs and antiretroviral drugs together as information on safety and dosing of individual drugs becomes available.

  14. Approximal sealings on lesions in neighbouring teeth requiring operative treatment: an in vitro study.

    Science.gov (United States)

    Cartagena, Alvaro; Bakhshandeh, Azam; Ekstrand, Kim Rud

    2018-02-07

    With this in vitro study we aimed to assess the possibility of precise application of sealant on accessible artificial white spot lesions (WSL) on approximal surfaces next to a tooth surface under operative treatment. A secondary aim was to evaluate whether the use of magnifying glasses improved the application precision. Fifty-six extracted premolars were selected; approximal WSLs were created with 15% HCl gel and standardized photographs were taken. The premolars were mounted in plaster models in contact with a neighbouring molar with a Class II/I-II restoration (Sample 1) or an approximal, cavitated dentin lesion (Sample 2). The restorations or the lesion were removed, and Clinpro Sealant was placed over the WSL. Magnifying glasses were used when sealing half the study material. The sealed premolar was removed from the plaster model and photographed. Adobe Photoshop was used to measure the size of the WSL and the sealed area, and the degree of match between the areas was determined in Photoshop. Interclass agreement for WSL, sealed, and matched areas was excellent (κ = 0.98-0.99). The sealant covered 48-100% of the WSL area (median = 93%) in Sample 1 and 68-100% of the WSL area (median = 95%) in Sample 2. No statistical differences were observed concerning uncovered proportions of the WSL area between groups with and without magnifying glasses (p values ≥ .19). However, overextended sealed areas were more pronounced when magnification was used (p = .01). The precision did not differ between the samples (p = .31). It was possible to seal accessible approximal lesions with high precision. Use of magnifying glasses did not improve the precision.

  15. Incidence of Postoperative Hematomas Requiring Surgical Treatment in Neurosurgery: A Retrospective Observational Study.

    Science.gov (United States)

    Lillemäe, Kadri; Järviö, Johanna Annika; Silvasti-Lundell, Marja Kaarina; Antinheimo, Jussi Juha-Pekka; Hernesniemi, Juha Antero; Niemi, Tomi Tapio

    2017-12-01

    We aimed to characterize the occurrence of postoperative hematoma (POH) after neurosurgery overall and according to procedure type, and describe the prevalence of possible confounders. Patient data between 2010 and 2012 at the Department of Neurosurgery in Helsinki University Hospital were retrospectively analyzed. A data search was performed according to the type of surgery, including craniotomies, shunt procedures, spine surgery, and spinal cord stimulator implantation. We analyzed basic preoperative characteristics, as well as data about the initial intervention, perioperative period, revision operation and neurologic recovery (after craniotomy only). The overall incidence of POH requiring reoperation was 0.6% (n = 56/8783): 0.6% (n = 26/4726) after craniotomy, 0% (n = 0/928) after a shunting procedure, 1.1% (n = 30/2870) after spine surgery, and 0% (n = 0/259) after implantation of a spinal cord stimulator. Craniotomy types with higher POH incidence were decompressive craniectomy (7.9%, n = 7/89), cranioplasty (3.6%, n = 4/112), bypass surgery (1.7%, n = 1/60), and epidural hematoma evacuation (1.6%, n = 1/64). After spinal surgery, POH was observed in 1.1% of cervical and 2.1% of thoracolumbar operations, whereas 46.7% were multilevel procedures. 64.3% of patients with POH and 84.6% of patients undergoing craniotomy had postoperative hypertension (systolic blood pressure >160 mm Hg or lower if indicated). Poor outcome (Glasgow Outcome Scale score 1-3) and death at 6 months after craniotomy were detected in 40.9% and 21.7%, respectively, of patients with POH who underwent craniotomy. POH after neurosurgery was rare in this series but was associated with poor outcome. Identification of risk factors of bleeding, and avoiding them, if possible, might decrease the incidence of POH. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Requirements Identification Towards a Design of Adaptive ICTs for Supporting Bipolar Disorder Treatment in Different Healthcare Contexts

    Directory of Open Access Journals (Sweden)

    Emanuele Torri

    2015-10-01

    Full Text Available This paper presents patient and caregiver perspectives on ICTs supporting bipolar disorder management in multinational healthcare provisioning contexts. The envisioned mHealth solutions should adopt general requirements that could be instantiated in different clinical settings. Engagement of users in designing new technologies for mental health is crucial to ensure empowerment and patient-centeredness of services. We performed focus groups to understand user needs, attitudes and experiences towards the supportive ICTs in two target regions where the expected solutions will operate. The survey offered valuable inputs for the construction of the clinical requirements used to produce a trans-national call for tender on mobile health solutions aimed at supporting bipolar disorder treatment among public purchasers in different European countries. The study was part of the NYMHPA-MD (Next Generation Mobile Platform for Health in Mental Disorders) project, co-funded by the European Commission.

  17. Early chest tube removal following cardiac surgery is associated with pleural and/or pericardial effusions requiring invasive treatment.

    Science.gov (United States)

    Andreasen, Jan J; Sørensen, Gustav V B; Abrahamsen, Emil R; Hansen-Nord, Erika; Bundgaard, Kristian; Bendtsen, Mette D; Troelsen, Pernille

    2016-01-01

    Different opinions exist as to when chest tube removal should be performed following cardiac surgery. The aim of this study was to compare early chest tube removal with removal of the tubes in the morning of day 1 postoperatively. The primary combined end point was the risk of postoperative accumulation of fluid in the pericardial and/or pleural cavities requiring invasive treatment. A retrospective observational cohort study was performed among patients undergoing coronary artery bypass grafting (CABG) and/or conventional valve surgery between July 2010 and June 2013. Patients in whom chest tube output was below a defined threshold were eligible for early removal; Group 1 had their tubes removed around midnight on the day of surgery, whereas Group 2 kept their tubes until the next morning. Using Poisson regression, we estimated crude and adjusted relative risks (RRs) for developing postoperative pleural and/or pericardial effusion within 14 days requiring interventional treatment. A total of 1232 patients underwent CABG, conventional valve or combined surgery during the study period. Of these, 782 patients fulfilled the criteria for early chest tube removal, which was performed in 385 of the patients. A total of 76 patients in Group 1 (20%) and 51 patients in Group 2 (13%) developed postoperative pleural and/or pericardial effusions requiring invasive treatment (P = 0.011). A positive association between early chest tube removal and the development of pleural and/or pericardial effusions was seen [crude RR: 1.54 (95% CI: 1.11-2.13); adjusted RR: 1.70 (95% CI: 1.24-2.33)]. The association became stronger when investigating pleural effusions alone (adjusted RR = 1.77; 95% CI: 1.27-2.46), whereas the association with pericardial effusions was less clear. Removal of all chest tubes around midnight on the day of surgery is associated with an increased risk of postoperative pleural and/or pericardial effusions requiring invasive treatment, even if chest tube output during the last 4 h is low, compared with keeping the tubes until the next morning. © The Author 2015. Published by Oxford University
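The crude relative risk quoted above can be reproduced from the abstract's counts with a Poisson regression on a single binary exposure (Group 2's denominator, 397, is inferred here from 782 - 385, since only the percentages are reported directly). A minimal Newton's-method sketch, assuming a log-link Poisson model:

```python
import math

def fit_poisson_rr(xs, ys, iters=25):
    """Poisson regression y ~ Poisson(exp(b0 + b1*x)) with a single binary
    exposure x, fitted by Newton's method; exp(b1) is the crude relative
    risk. A minimal sketch, not the study's adjusted model."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for x, y in zip(xs, ys):
            mu = math.exp(b0 + b1 * x)
            g0 += y - mu                      # score vector
            g1 += (y - mu) * x
            h00 += mu                         # expected information matrix
            h01 += mu * x
            h11 += mu * x * x
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det     # 2x2 Newton step
        b1 += (h00 * g1 - h01 * g0) / det
    return math.exp(b1)

# Counts from the abstract: 76/385 effusions in Group 1 (early removal),
# 51/397 in Group 2 (inferred denominator).
xs = [1] * 385 + [0] * 397
ys = [1] * 76 + [0] * 309 + [1] * 51 + [0] * 346
```

With a single binary exposure, exp(b1) equals the simple risk ratio (76/385)/(51/397) ≈ 1.54, matching the crude RR reported in the abstract.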

  18. Emergence of multiresistant methicillin-resistant Staphylococcus aureus in two patients with atopic dermatitis requiring linezolid treatment.

    Science.gov (United States)

    Rosa, Jaime S; Ross, Lawrence A; Ong, Peck Y

    2014-01-01

    We report two patients with atopic dermatitis who developed methicillin-resistant Staphylococcus aureus (MRSA) skin infections resistant to clindamycin and trimethoprim-sulfamethoxazole requiring repeated linezolid treatment. For one patient and family members who received an aggressive regimen of staphylococcal decolonization, including intranasal mupirocin, dilute bleach baths, and bleach cleansing of household items and surfaces, subsequent culture results demonstrated methicillin-susceptible S. aureus colonization and infection. These findings underscore the challenges presented by multiresistant MRSA infections in children with atopic dermatitis. © 2012 Wiley Periodicals, Inc.

  19. SEISMIC DESIGN REQUIREMENTS SELECTION METHODOLOGY FOR THE SLUDGE TREATMENT AND M-91 SOLID WASTE PROCESSING FACILITIES PROJECTS

    International Nuclear Information System (INIS)

    RYAN GW

    2008-01-01

    In complying with direction from the U.S. Department of Energy (DOE), Richland Operations Office (RL) (07-KBC-0055, 'Direction Associated with Implementation of DOE-STD-1189 for the Sludge Treatment Project,' and 08-SED-0063, 'RL Action on the Safety Design Strategy (SDS) for Obtaining Additional Solid Waste Processing Capabilities (M-91 Project) and Use of Draft DOE-STD-1189-YR'), it has been determined that the seismic design requirements currently in the Project Hanford Management Contract (PHMC) will be modified by DOE-STD-1189, Integration of Safety into the Design Process (March 2007 draft), for these two key PHMC projects. Seismic design requirements for other PHMC facilities and projects will remain unchanged. Considering the current early Critical Decision (CD) phases of both the Sludge Treatment Project (STP) and the Solid Waste Processing Facilities (M-91) Project and a strong intent to avoid potentially costly re-work of both engineering and nuclear safety analyses, this document describes how Fluor Hanford, Inc. (FH) will maintain compliance with the PHMC by considering both the current seismic standards referenced by DOE O 420.1B, Facility Safety, and draft DOE-STD-1189 (i.e., ASCE/SEI 43-05, Seismic Design Criteria for Structures, Systems, and Components in Nuclear Facilities, and ANSI ANS 2.26-2004, Categorization of Nuclear Facility Structures, Systems and Components for Seismic Design, as modified by draft DOE-STD-1189) to choose the criteria that will result in the most conservative seismic design categorization and engineering design. Following the process described in this document will result in a conservative seismic design categorization and design products. This approach is expected to resolve discrepancies between the existing and new requirements and reduce the risk that project designs and analyses will require revision when the draft DOE-STD-1189 is finalized.

  20. Accurate thickness measurement of graphene

    International Nuclear Information System (INIS)

    Shearer, Cameron J; Slattery, Ashley D; Stapleton, Andrew J; Shapter, Joseph G; Gibson, Christopher T

    2016-01-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1–1.3 nm to 0.1–0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials. (paper)

  1. Spanish-language community-based mental health treatment programs, policy-required language-assistance programming, and mental health treatment access among Spanish-speaking clients.

    Science.gov (United States)

    Snowden, Lonnie R; McClellan, Sean R

    2013-09-01

    We investigated the extent to which implementing language assistance programming through contracting with community-based organizations improved the accessibility of mental health care under Medi-Cal (California's Medicaid program) for Spanish-speaking persons with limited English proficiency, and whether it reduced language-based treatment access disparities. Using a time series nonequivalent control group design, we studied county-level penetration of language assistance programming over 10 years (1997-2006) for Spanish-speaking persons with limited English proficiency covered under Medi-Cal. We used linear regression with county fixed effects to control for ongoing trends and other influences. When county mental health plans contracted with community-based organizations, those implementing language assistance programming increased penetration rates of Spanish-language mental health services under Medi-Cal more than other plans (0.28 percentage points, a 25% increase on average), but the increase did not eliminate language-related disparities. Mental health treatment programs operated by community-based organizations may have moderately improved access after implementing required language assistance programming, but the programming did not reduce entrenched disparities in the accessibility of mental health services.

  2. Spanish-Language Community-Based Mental Health Treatment Programs, Policy-Required Language-Assistance Programming, and Mental Health Treatment Access Among Spanish-Speaking Clients

    Science.gov (United States)

    McClellan, Sean R.

    2013-01-01

    Objectives. We investigated the extent to which implementing language assistance programming through contracting with community-based organizations improved the accessibility of mental health care under Medi-Cal (California's Medicaid program) for Spanish-speaking persons with limited English proficiency, and whether it reduced language-based treatment access disparities. Methods. Using a time series nonequivalent control group design, we studied county-level penetration of language assistance programming over 10 years (1997–2006) for Spanish-speaking persons with limited English proficiency covered under Medi-Cal. We used linear regression with county fixed effects to control for ongoing trends and other influences. Results. When county mental health plans contracted with community-based organizations, those implementing language assistance programming increased penetration rates of Spanish-language mental health services under Medi-Cal more than other plans (0.28 percentage points, a 25% increase on average), but the increase did not eliminate language-related disparities. Conclusions. Mental health treatment programs operated by community-based organizations may have moderately improved access after implementing required language assistance programming, but the programming did not reduce entrenched disparities in the accessibility of mental health services. PMID:23865663
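
    The county fixed-effects regression described above can be mimicked with the standard within transformation: demean the outcome and predictor inside each county so stable county-level differences drop out, then fit the slope on the demeaned data. A hedged toy sketch on synthetic data (variable names are illustrative, not the study's actual variables):

```python
import numpy as np

def fixed_effects_slope(y, x, groups):
    """Within-group (fixed effects) estimator: demean y and x within
    each group, then regress demeaned y on demeaned x.
    Toy sketch of the technique, not the study's specification."""
    y = np.asarray(y, dtype=float).copy()
    x = np.asarray(x, dtype=float).copy()
    for g in np.unique(groups):
        m = groups == g
        y[m] -= y[m].mean()
        x[m] -= x[m].mean()
    return (x @ y) / (x @ x)  # OLS slope on demeaned data
```

    Because demeaning removes each county's intercept, the estimator recovers the true slope even when county effects are correlated with the predictor.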

  3. The accurate particle tracer code

    Science.gov (United States)

    Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi; Yao, Yicun

    2017-11-01

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms to particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the Lua and HDF5 libraries are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully deployed on the world's fastest computer, the Sunway TaihuLight supercomputer, supporting the master-slave architecture of the Sunway many-core processors. Based on large-scale simulations of a runaway beam under ITER tokamak parameters, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly while improving the confinement of the energetic runaway beam.
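
    The abstract does not list APT's specific pushers, but a classic example of the structure-preserving particle pushers such codes build on is the Boris scheme, which conserves kinetic energy exactly in a pure magnetic field. A minimal sketch (function and argument names are illustrative):

```python
import numpy as np

def boris_push(x, v, E, B, dt, qm=1.0):
    """One step of the Boris scheme for dv/dt = qm*(E + v x B).
    The magnetic rotation is applied between two electric half-kicks,
    which preserves |v| exactly when E = 0."""
    v_minus = v + 0.5 * dt * qm * E          # first electric half-kick
    t = 0.5 * dt * qm * B                    # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)  # exact-norm rotation
    v_new = v_plus + 0.5 * dt * qm * E       # second electric half-kick
    return x + dt * v_new, v_new
```

    Pushing a particle through many gyrations in a uniform magnetic field leaves the speed unchanged to machine precision, which is the long-term stability property the abstract emphasizes.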

  4. Oronasal Masks Require a Higher Pressure than Nasal and Nasal Pillow Masks for the Treatment of Obstructive Sleep Apnea.

    Science.gov (United States)

    Deshpande, Sheetal; Joosten, Simon; Turton, Anthony; Edwards, Bradley A; Landry, Shane; Mansfield, Darren R; Hamilton, Garun S

    2016-09-15

    Oronasal masks are frequently used for continuous positive airway pressure (CPAP) treatment in patients with obstructive sleep apnea (OSA). The aim of this study was to (1) determine whether CPAP requirements are higher for oronasal masks than for nasal mask interfaces and (2) assess whether polysomnography and patient characteristics differed among mask preference groups. Retrospective analysis of all CPAP implementation polysomnograms between July 2013 and June 2014. Prescribed CPAP level, polysomnography results and patient data were compared according to mask type (n = 358). Oronasal masks were used in 46%, nasal masks in 35% and nasal pillow masks in 19% of patients. There was no difference according to mask type in baseline apnea-hypopnea index (AHI), body mass index (BMI), or waist or neck circumference. CPAP level was higher for oronasal masks, 12 (10-15.5) cm H2O, than for nasal pillow masks, 11 (8-12.5) cm H2O, and nasal masks, 10 (8-12) cm H2O. With CPAP ≥ 15 cm H2O, the odds ratio for having an oronasal rather than a nasal or nasal pillow mask was 4.5 (95% CI 2.5-8.0). Residual median AHI was higher for oronasal masks (11.3 events/h) than for nasal masks (6.4 events/h) and nasal pillows (6.7 events/h). Compared with nasal mask types, oronasal masks are associated with higher CPAP pressures (particularly pressures ≥ 15 cm H2O) and a higher residual AHI. Further evaluation with a randomized controlled trial is required to definitively establish the effect of mask type on pressure requirements. A commentary on this article appears in this issue on page 1209. © 2016 American Academy of Sleep Medicine.

  5. Accurate Numerical Approximation to the Gauss-Lorentz Lineshape

    Science.gov (United States)

    Grivet

    1997-03-01

    The Gauss-Lorentz lineshape often observed in EPR or NMR is shown to be simply related to the complex error function. Using numerical algorithms developed for the evaluation of this function, experimental lineshapes can be accurately and rapidly simulated. Formulas are presented for the derivatives of the line profile with respect to the parameters and for the approximate computation of the overall linewidth. It is observed that accurate integrals require use of a wide integration interval.
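
    The Gauss-Lorentz lineshape described here is the Voigt profile, and the link to the complex error function is exactly what SciPy's `wofz` (the Faddeeva function) provides. A brief sketch, assuming SciPy is available:

```python
import numpy as np
from scipy.special import wofz

def voigt(x, sigma, gamma):
    """Voigt (Gauss-Lorentz) lineshape via the complex error function:
    the real part of w(z) with z = (x + i*gamma) / (sigma*sqrt(2)),
    normalized so the profile integrates to 1."""
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))
```

    Consistent with the paper's closing remark, checking the normalization numerically requires a wide integration interval, because the Lorentzian wings decay only as 1/x².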

  6. Effective and Accurate Colormap Selection

    Science.gov (United States)

    Thyng, K. M.; Greene, C. A.; Hetland, R. D.; Zimmerle, H.; DiMarco, S. F.

    2016-12-01

    Science is often communicated through plots, and design choices can elucidate or obscure the presented data. The colormap used can honestly and clearly display data in a visually-appealing way, or can falsely exaggerate data gradients and confuse viewers. Fortunately, there is a large resource of literature in color science on how color is perceived which we can use to inform our own choices. Following this literature, colormaps can be designed to be perceptually uniform; that is, so an equally-sized jump in the colormap at any location is perceived by the viewer as the same size. This ensures that gradients in the data are accurately perceived. The same colormap is often used to represent many different fields in the same paper or presentation. However, this can cause difficulty in quick interpretation of multiple plots. For example, in one plot the viewer may have trained their eye to recognize that red represents high salinity, and therefore higher density, while in the subsequent temperature plot they need to adjust their interpretation so that red represents high temperature and therefore lower density. In the same way that a single Greek letter is typically chosen to represent a field for a paper, we propose to choose a single colormap to represent a field in a paper, and use multiple colormaps for multiple fields. We have created a set of colormaps that are perceptually uniform, and follow several other design guidelines. There are 18 colormaps to give options to choose from for intuitive representation. For example, a colormap of greens may be used to represent chlorophyll concentration, or browns for turbidity. With careful consideration of human perception and design principles, colormaps may be chosen which faithfully represent the data while also engaging viewers.
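
    Perceptual uniformity can be illustrated in miniature: if a colormap's lightness rises linearly with the data value, equal data jumps produce equal lightness jumps, whereas a gamma-distorted ramp exaggerates some gradients and flattens others. A toy NumPy check (lightness ramp only; real colormap design also controls hue and chroma):

```python
import numpy as np

def lightness_steps(values, ramp):
    """Lightness increments between consecutive colormap entries.
    `ramp` maps data values in [0, 1] to a lightness scale (toy model)."""
    return np.diff(ramp(values))

vals = np.linspace(0.0, 1.0, 11)
uniform = lightness_steps(vals, lambda v: 100.0 * v)          # linear lightness
distorted = lightness_steps(vals, lambda v: 100.0 * v ** 2.2)  # gamma-warped ramp
```

    The uniform ramp yields identical steps everywhere; the gamma-warped ramp compresses low values and exaggerates high ones, which is the kind of false gradient the abstract warns against.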

  7. 40 CFR Appendix J to Part 122 - NPDES Permit Testing Requirements for Publicly Owned Treatment Works (§ 122.21(j))

    Science.gov (United States)

    2010-07-01

    ... Publicly Owned Treatment Works (§ 122.21(j)) J Appendix J to Part 122 Protection of Environment... POLLUTANT DISCHARGE ELIMINATION SYSTEM Pt. 122, App. J Appendix J to Part 122—NPDES Permit Testing Requirements for Publicly Owned Treatment Works (§ 122.21(j)) Table 1A—Effluent Parameters for All POTWS...

  8. Accurate dispersion calculations: AUSTAL2000

    International Nuclear Information System (INIS)

    Janicke, U.

    2005-01-01

    Until the 2002 amendment of the Clean Air Technical Code of 1968, Annex C of this regulation required standard pollutant emission forecasts to be based on the Gaussian plume model. It was clear even at the time of the Code's initial promulgation that this model is valid only in a very narrow application range, and in particular not for sources close to ground level, low ground surface roughness, or complex dispersion situations. For this reason, German licensing procedures have made increasing use of more complex models over the past 10 years, the most frequently used of which today is the Lagrangian dispersion model. This model type was standardised in VDI (Association of German Engineers) Guideline 3945 Sheet 3 in the year 2000. In the course of amending the Clean Air Technical Code in accordance with the new EU Framework Directive, the decision was taken at the Environmental Protection Office to replace the Gaussian model with the Lagrangian model as described in VDI 3945 Sheet 3. Using the LASAT dispersion model as a basis, the AUSTAL2000 program system has now been developed, providing an example of how the algorithms of Annex 3 of the Clean Air Technical Code can be used in practice. AUSTAL2000 has been available on the Internet since 2002, along with source text, documentation and example calculations.

  9. Reducing dose calculation time for accurate iterative IMRT planning

    International Nuclear Information System (INIS)

    Siebers, Jeffrey V.; Lauterbach, Marc; Tong, Shidong; Wu Qiuwen; Mohan, Radhe

    2002-01-01

    A time-consuming component of IMRT optimization is the dose computation required in each iteration for the evaluation of the objective function. Accurate superposition/convolution (SC) and Monte Carlo (MC) dose calculations are currently considered too time-consuming for iterative IMRT dose calculation. Thus, fast but less accurate algorithms such as pencil beam (PB) algorithms are typically used in most current IMRT systems. This paper describes two hybrid methods that utilize the speed of fast PB algorithms yet achieve the accuracy of optimizing based upon SC algorithms via the application of dose correction matrices. In one method, the ratio method, an infrequently computed voxel-by-voxel dose ratio matrix (R = D_SC / D_PB) is applied for each beam to the dose distributions calculated with the PB method during the optimization. That is, D_PB × R is used for the dose calculation during the optimization. The optimization proceeds until both the IMRT beam intensities and the dose correction ratio matrix converge. In the second method, the correction method, a periodically computed voxel-by-voxel correction matrix for each beam, defined to be the difference between the SC and PB dose computations, is used to correct PB dose distributions. To validate the methods, IMRT treatment plans developed with the hybrid methods are compared with those obtained when the SC algorithm is used for all optimization iterations and with those obtained when PB-based optimization is followed by SC-based optimization. In the 12 patient cases studied, no clinically significant differences exist in the final treatment plans developed with each of the dose computation methodologies. However, the number of time-consuming SC iterations is reduced from 6-32 for pure SC optimization to four or less for the ratio matrix method and five or less for the correction method. Because the PB algorithm is faster at computing dose, this reduces the inverse planning optimization time for our implementation.
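
    The ratio method amounts to an elementwise rescaling of each fast pencil-beam dose grid by an infrequently recomputed correction matrix. A minimal NumPy sketch (array names are assumptions, with a small epsilon guarding zero-dose voxels):

```python
import numpy as np

def ratio_correct(d_pb, d_sc_ref, d_pb_ref, eps=1e-9):
    """Apply the ratio-method correction R = D_SC / D_PB, computed from
    a reference SC/PB pair, to a pencil-beam dose grid: D_PB * R per
    voxel. Sketch of the idea described in the abstract, not the
    authors' implementation."""
    r = d_sc_ref / np.maximum(d_pb_ref, eps)  # voxel-by-voxel ratio matrix
    return d_pb * r
```

    By construction, applying the correction to the very grid the ratio was computed from reproduces the superposition/convolution dose exactly; during optimization the ratio is held fixed across many cheap PB evaluations and refreshed only occasionally.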

  10. The Clinical Features, Risk Factors, and Surgical Treatment of Cervicogenic Headache in Patients With Cervical Spine Disorders Requiring Surgery.

    Science.gov (United States)

    Shimohata, Keiko; Hasegawa, Kazuhiro; Onodera, Osamu; Nishizawa, Masatoyo; Shimohata, Takayoshi

    2017-07-01

    To clarify the clinical features and risk factors of cervicogenic headache (CEH; as diagnosed according to the International Classification of Headache Disorders-Third Edition beta) in patients with cervical spine disorders requiring surgery. CEH is caused by cervical spine disorders. The pathogenic mechanism of CEH is hypothesized to involve a convergence of the upper cervical afferents from the C1, C2, and C3 spinal nerves and the trigeminal afferents in the trigeminocervical nucleus of the upper cervical cord. According to this hypothesis, functional convergence of the upper cervical and trigeminal sensory pathways allows the bidirectional (afferent and efferent) referral of pain to the occipital, frontal, temporal, and/or orbital regions. Previous prospective studies have reported an 86-88% prevalence of headache in patients with cervical myelopathy or radiculopathy requiring anterior cervical surgery; however, these studies did not diagnose headache according to the International Classification of Headache Disorders criteria. Therefore, a better understanding of the prevalence rate, clinical features, risk factors, and treatment responsiveness of CEH in patients with cervical spine disorders requiring surgery is necessary. We performed a single hospital-based prospective cross-sectional study and enrolled 70 consecutive patients with cervical spine disorders such as cervical spondylotic myelopathy, ossification of the posterior longitudinal ligament, cervical spondylotic radiculopathy, and cervical spondylotic myeloradiculopathy who had been scheduled to undergo anterior cervical fusion or dorsal cervical laminoplasty between June 2014 and December 2015. Headache was diagnosed preoperatively according to the International Classification of Headache Disorders-Third Edition beta. The Japanese Orthopaedic Association Cervical Myelopathy Evaluation Questionnaire, Neck Disability Index, and a 0-100 mm visual analog scale (VAS) were used to evaluate clinical

  11. An update to the HIV-TRePS system: the development of new computational models that do not require a genotype to predict HIV treatment outcomes.

    Science.gov (United States)

    Revell, Andrew D; Wang, Dechao; Wood, Robin; Morrow, Carl; Tempelman, Hugo; Hamers, Raph; Alvarez-Uria, Gerardo; Streinu-Cercel, Adrian; Ene, Luminita; Wensing, Annemarie; Reiss, Peter; van Sighem, Ard I; Nelson, Mark; Emery, Sean; Montaner, Julio S G; Lane, H Clifford; Larder, Brendan A

    2014-04-01

    The optimal individualized selection of antiretroviral drugs in resource-limited settings is challenging because of the limited availability of drugs and genotyping. Here we describe the development of the latest computational models to predict the response to combination antiretroviral therapy without a genotype, for potential use in such settings. Random forest models were trained to predict the probability of a virological response to therapy (HIV RNA/mL) following virological failure using the following data from 22,567 treatment-change episodes including 1090 from southern Africa: baseline viral load and CD4 cell count, treatment history, drugs in the new regimen, time to follow-up and follow-up viral load. The models were assessed during cross-validation and with an independent global test set of 1000 cases including 100 from southern Africa. The models' accuracy [area under the receiver-operating characteristic curve (AUC)] was evaluated and compared with genotyping using rules-based interpretation systems for those cases with genotypes available. The models achieved AUCs of 0.79-0.84 (mean 0.82) during cross-validation, 0.80 with the global test set and 0.78 with the southern African subset. The AUCs were significantly lower (0.56-0.57) for genotyping. The models predicted virological response to HIV therapy without a genotype as accurately as previous models that included a genotype. They were accurate for cases from southern Africa and significantly more accurate than genotyping. These models will be accessible via the online treatment support tool HIV-TRePS and have the potential to help optimize antiretroviral therapy in resource-limited settings where genotyping is not generally available.
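
    The modelling recipe, a random forest classifier scored by ROC AUC on held-out cases, can be sketched with scikit-learn on synthetic data. The features below are stand-ins, not the actual HIV-TRePS inputs, and the AUC obtained says nothing about the real system:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for baseline viral load, CD4 count, history features, ...
rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 6))
signal = 1.5 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2]
y = (signal + rng.normal(size=n) > 0).astype(int)  # virological response yes/no

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])  # discrimination on held-out cases
```

    The AUC on an independent test set, as used in the study, measures how well the predicted response probabilities rank responders above non-responders.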

  12. A fast and accurate numerical method for solving simulated moving bed (SMB) chromatographic separation problems

    DEFF Research Database (Denmark)

    Lim, Young-il; Jørgensen, Sten Bay

    2003-01-01

    Solving simulated moving bed (SMB) chromatography models requires fast and accurate numerical techniques, since the system size computed is large due to multiple columns and components, and the axial solution profiles contain steep moving fronts. The space-time conservation element/solution element (CE/SE) method addressed in this study enforces both local and global flux conservation in space and time, and uses a simple stencil structure (two points at the previous time level and one point at the present time level) on staggered space-time grids. Thus, accurate and computationally efficient numerical solutions are obtained. Stable solutions are guaranteed if the Courant-Friedrichs-Lewy (CFL) condition is satisfied. The boundary condition and recycle flow treatments are much simpler than for the time integrator in the framework of the method of lines. Applying the CE/SE method for SMB…
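
    The CFL condition mentioned above ties the admissible time step to the grid spacing and the fastest wave speed. A one-line helper makes the constraint concrete (generic advection form, not the SMB-specific criterion):

```python
def max_stable_dt(wave_speed, dx, cfl_number=1.0):
    """Largest time step satisfying the CFL condition
    |u| * dt / dx <= cfl_number for an advection-dominated scheme."""
    return cfl_number * dx / abs(wave_speed)
```

    Halving the grid spacing, or doubling the front propagation speed, halves the largest stable time step.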

  13. Rapid and accurate determination of the lignin content of lignocellulosic biomass by solid-state NMR.

    Science.gov (United States)

    Fu, Li; McCallum, Scott A; Miao, Jianjun; Hart, Courtney; Tudryn, Gregory J; Zhang, Fuming; Linhardt, Robert J

    2015-02-01

    Biofuels and biomaterials, produced from lignocellulosic feedstock, require facile access to cellulose and hemicellulose to be competitive with petroleum processing and sugar-based fermentation. Physical-chemical barriers resulting from lignin complicate the hydrolysis of biomass into fermentable sugars. Thus, the amount of lignin within a substrate is critical in determining biomass processing. The application of 13C cross-polarization magic-angle-spinning solid-state nuclear magnetic resonance for the direct quantification of lignin content in biomass is examined. Using a standard curve constructed from pristine lignin and cellulose, the lignin content of a biomass sample is accurately determined through direct measurement without chemical or enzymatic pre-treatment.
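
    Quantification against a standard curve is ordinary linear calibration: fit signal versus known lignin fraction for the standards, then invert the line for an unknown sample. A generic sketch (the NMR signal processing itself is not shown; function names are illustrative):

```python
import numpy as np

def fit_standard_curve(known_fractions, signals):
    """Least-squares line through the calibration standards."""
    slope, intercept = np.polyfit(known_fractions, signals, 1)
    return slope, intercept

def fraction_from_signal(signal, slope, intercept):
    """Invert the calibration line for an unknown sample."""
    return (signal - intercept) / slope
```

    With standards spanning the expected range, an unknown sample's lignin fraction is read directly off the fitted line.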

  14. The SUMO protease SENP1 is required for cohesion maintenance and mitotic arrest following spindle poison treatment

    Energy Technology Data Exchange (ETDEWEB)

    Era, Saho [Fondazione IFOM, Istituto FIRC di Oncologia Molecolare, IFOM-IEO campus, Via Adamello 16, 20139 Milan (Italy); Radiation Genetics, Graduate School of Medicine, Kyoto University, Yoshida-Konoe, Sakyo-ku, Kyoto 606-8501 (Japan); Abe, Takuya; Arakawa, Hiroshi [Fondazione IFOM, Istituto FIRC di Oncologia Molecolare, IFOM-IEO campus, Via Adamello 16, 20139 Milan (Italy); Kobayashi, Shunsuke [Radiation Genetics, Graduate School of Medicine, Kyoto University, Yoshida-Konoe, Sakyo-ku, Kyoto 606-8501 (Japan); Szakal, Barnabas [Fondazione IFOM, Istituto FIRC di Oncologia Molecolare, IFOM-IEO campus, Via Adamello 16, 20139 Milan (Italy); Yoshikawa, Yusuke; Motegi, Akira; Takeda, Shunichi [Radiation Genetics, Graduate School of Medicine, Kyoto University, Yoshida-Konoe, Sakyo-ku, Kyoto 606-8501 (Japan); Branzei, Dana, E-mail: dana.branzei@ifom.eu [Fondazione IFOM, Istituto FIRC di Oncologia Molecolare, IFOM-IEO campus, Via Adamello 16, 20139 Milan (Italy)

    2012-09-28

    Highlights: • SENP1 knockout chicken DT40 cells are hypersensitive to spindle poisons. • Spindle poison treatment of SENP1−/− cells leads to increased mitotic slippage. • Mitotic slippage in SENP1−/− cells associates with apoptosis and endoreplication. • SENP1 counteracts sister chromatid separation during mitotic arrest. • Plk1-mediated cohesion down-regulation is involved in colcemid cytotoxicity. -- Abstract: SUMO conjugation is a reversible posttranslational modification that regulates protein function. SENP1 is one of the six SUMO-specific proteases present in vertebrate cells and its altered expression is observed in several carcinomas. To characterize the role of SENP1 in genome integrity, we generated Senp1 knockout chicken DT40 cells. SENP1−/− cells show normal proliferation, but are sensitive to spindle poisons. This hypersensitivity correlates with increased sister chromatid separation, mitotic slippage, and apoptosis. To test whether the cohesion defect had a causal relationship with the observed mitotic events, we restored the cohesive status of sister chromatids by introducing the TOP2α+/− mutation, which leads to increased catenation, or by inhibiting the Plk1 and Aurora B kinases that promote cohesin release from chromosomes during prolonged mitotic arrest. Although TOP2α is SUMOylated during mitosis, the TOP2α+/− mutation had no obvious effect. By contrast, inhibition of Plk1 or Aurora B rescued the hypersensitivity of SENP1−/− cells to colcemid. In conclusion, we identify SENP1 as a novel factor required for mitotic arrest and cohesion maintenance during prolonged mitotic arrest induced by spindle poisons.

  15. Laser interference lithography with highly accurate interferometric alignment

    NARCIS (Netherlands)

    van Soest, Frank J.; van Wolferen, Hendricus A.G.M.; Hoekstra, Hugo; Worhoff, Kerstin; Lambeck, Paul; de Ridder, R.M.; de Ridder, R.M; Altena, G.; Altena, G; Geuzebroek, D.H.; Dekker, R.; Dekker, R

    2003-01-01

    Three-dimensional photonic crystals, e.g. for obtaining the so-called woodpile structure, can be fabricated by, among other methods, vertical stacking of multiple gratings. One of the requirements for obtaining a full photonic bandgap in such a photonic crystal is accurate angular and lateral alignment.

  16. Novel multi-beam radiometers for accurate ocean surveillance

    DEFF Research Database (Denmark)

    Cappellin, C.; Pontoppidan, K.; Nielsen, P. H.

    2014-01-01

    Novel antenna architectures for real aperture multi-beam radiometers providing high resolution and high sensitivity for accurate sea surface temperature (SST) and ocean vector wind (OVW) measurements are investigated. On the basis of the radiometer requirements set for future SST/OVW missions, co...

  17. Accurate adiabatic correction in the hydrogen molecule

    Energy Technology Data Exchange (ETDEWEB)

    Pachucki, Krzysztof, E-mail: krp@fuw.edu.pl [Faculty of Physics, University of Warsaw, Pasteura 5, 02-093 Warsaw (Poland); Komasa, Jacek, E-mail: komasa@man.poznan.pl [Faculty of Chemistry, Adam Mickiewicz University, Umultowska 89b, 61-614 Poznań (Poland)

    2014-12-14

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10^-12 at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10^-7 cm^-1, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present-day theoretical predictions for the rovibrational levels.

  18. A stiffly accurate integrator for elastodynamic problems

    KAUST Repository

    Michels, Dominik L.

    2017-07-21

    We present a new integration algorithm for the accurate and efficient solution of stiff elastodynamic problems governed by the second-order ordinary differential equations of structural mechanics. Current methods have the shortcoming that their performance is highly dependent on the numerical stiffness of the underlying system that often leads to unrealistic behavior or a significant loss of efficiency. To overcome these limitations, we present a new integration method which is based on a mathematical reformulation of the underlying differential equations, an exponential treatment of the full nonlinear forcing operator as opposed to more standard partially implicit or exponential approaches, and the utilization of the concept of stiff accuracy which ensures that the efficiency of the simulations is significantly less sensitive to increased stiffness. As a consequence, we are able to tremendously accelerate the simulation of stiff systems compared to established integrators and significantly increase the overall accuracy. The advantageous behavior of this approach is demonstrated on a broad spectrum of complex examples like deformable bodies, textiles, bristles, and human hair. Our easily parallelizable integrator enables more complex and realistic models to be explored in visual computing without compromising efficiency.
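
    The exponential treatment of the forcing operator can be illustrated with the standard exponential Euler scheme for y' = Ay + g(y): the stiff linear part A is propagated by a matrix exponential, so stability does not degrade as A grows stiffer. A minimal sketch of that textbook scheme (not the authors' integrator), assuming SciPy is available:

```python
import numpy as np
from scipy.linalg import expm, solve

def exponential_euler(A, g, y0, h, steps):
    """Exponential Euler: y_{n+1} = e^{hA} y_n + h * phi1(hA) g(y_n),
    with phi1(z) = (e^z - 1)/z evaluated on the matrix hA.
    Sketch of the exponential-integrator idea; requires hA nonsingular."""
    n = len(y0)
    E = expm(h * A)
    phi1 = solve(h * A, E - np.eye(n))  # phi1(hA) = (hA)^{-1} (e^{hA} - I)
    y = np.array(y0, dtype=float)
    for _ in range(steps):
        y = E @ y + h * (phi1 @ g(y))
    return y
```

    For a purely linear system (g ≡ 0) each step is exact regardless of h, so extremely stiff decay rates cause no loss of stability, which is the behavior stiffly accurate methods aim for.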

  19. Spectrally accurate initial data in numerical relativity

    Science.gov (United States)

    Battista, Nicholas A.

    Einstein's theory of general relativity has radically altered the way in which we perceive the universe. His breakthrough was to realize that the fabric of space is deformable in the presence of mass, and that space and time are linked into a continuum. Much evidence has been gathered in support of general relativity over the decades. Some of the indirect evidence for GR includes the phenomenon of gravitational lensing, the anomalous perihelion precession of Mercury, and the gravitational redshift. One of the most striking predictions of GR, which has not yet been confirmed, is the existence of gravitational waves. The primary source of gravitational waves in the universe is thought to be the merger of binary black hole systems, or binary neutron stars. The starting point for computer simulations of black hole mergers requires highly accurate initial data for the space-time metric and for the curvature. The equations describing the initial space-time around the black hole(s) are non-linear, elliptic partial differential equations (PDE). We will discuss how to use a pseudo-spectral (collocation) method to calculate the initial puncture data corresponding to single black hole and binary black hole systems.
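
    Pseudo-spectral (collocation) solvers of this kind are easiest to see on a model elliptic problem: differentiate on Chebyshev points with the standard differentiation matrix and solve a boundary-value problem to spectral accuracy. A sketch of the classic construction (following Trefethen's well-known recipe, not the relativity code itself):

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and points x (Trefethen's recipe)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))  # diagonal via negative row sums
    return D, x

# Model elliptic BVP: u'' = exp(4x) on [-1, 1] with u(+/-1) = 0.
D, x = cheb(16)
D2 = (D @ D)[1:-1, 1:-1]          # strip boundary rows/cols: Dirichlet BCs
u = np.zeros_like(x)
u[1:-1] = np.linalg.solve(D2, np.exp(4 * x[1:-1]))
exact = (np.exp(4 * x) - x * np.sinh(4.0) - np.cosh(4.0)) / 16.0
```

    Even with only 17 grid points the collocation solution matches the closed-form answer far beyond what a low-order finite-difference stencil could achieve, which is why spectral methods are attractive for smooth initial-data problems.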

  20. Accurate torque-speed performance prediction for brushless dc motors

    Science.gov (United States)

    Gipper, Patrick D.

    Desirable characteristics of the brushless dc motor (BLDCM) have resulted in their application for electrohydrostatic (EH) and electromechanical (EM) actuation systems. But to effectively apply the BLDCM requires accurate prediction of performance. The minimum necessary performance characteristics are motor torque versus speed, peak and average supply current and efficiency. BLDCM nonlinear simulation software specifically adapted for torque-speed prediction is presented. The capability of the software to quickly and accurately predict performance has been verified on fractional to integral HP motor sizes, and is presented. Additionally, the capability of torque-speed prediction with commutation angle advance is demonstrated.
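
    At the level of a first-order DC-machine model, the torque-speed line follows from the back-EMF and winding resistance: T = Kt (V − Ke ω) / R. A hedged sketch of that idealized relationship (the linear model ignores the nonlinear effects the simulation software described here is built to capture):

```python
def torque(omega, V, Kt, Ke, R):
    """Steady-state torque of an idealized brushless DC motor:
    back-EMF Ke*omega subtracts from the supply voltage V, the
    remaining voltage drives current through resistance R, and
    torque is Kt times that current. Toy model, not the paper's
    nonlinear simulation."""
    return Kt * (V - Ke * omega) / R

def no_load_speed(V, Ke):
    """Speed at which back-EMF cancels the supply and torque vanishes."""
    return V / Ke

def stall_torque(V, Kt, R):
    """Torque at zero speed, when the full supply drives current."""
    return Kt * V / R
```

    The torque-speed curve of this model is a straight line from the stall torque at ω = 0 down to zero torque at the no-load speed.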

  1. MINIMAL REQUIREMENTS FOR THE DIAGNOSIS, CLASSIFICATION, AND EVALUATION OF THE TREATMENT OF CHILDHOOD ACUTE LYMPHOBLASTIC-LEUKEMIA (ALL) IN THE BFM FAMILY COOPERATIVE GROUP

    NARCIS (Netherlands)

    VANDERDOESVANDENBERG, A; BARTRAM, CR; BASSO, G; BENOIT, YCM; BIONDI, A; DEBATIN, KM; HAAS, OA; HARBOTT, J; KAMPS, WA; KOLLER, U; LAMPERT, F; LUDWIG, WD; NIEMEYER, CM; VANWERING, ER

    1992-01-01

    Minimal requirements and their rationale for the diagnosis and the response to treatment in childhood acute lymphoblastic leukemia (ALL) were defined in the recently instituted "BFM-Family" Group, in which the German, Austrian, Dutch, Italian, Belgian, French and Hungarian childhood leukemia study groups participate.

  2. Towards Accurate Application Characterization for Exascale (APEX)

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Simon David [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines, and many large capability resources including ASCI Red and RedStorm, were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia's production users/developers.

  3. How flatbed scanners upset accurate film dosimetry

    International Nuclear Information System (INIS)

    Van Battum, L J; Verdaasdonk, R M; Heukelom, S; Huizenga, H

    2016-01-01

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. To that end, the LSE of two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) with Gafchromic film (EBT, EBT2, EBT3) was investigated, focusing on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2–2.0). Linear polarizer sheets were used to investigate the effect of light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range of 0.25 to 1.1. Measurements were performed in the scanner's transmission mode, with red–green–blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% effect for pixels at the extreme lateral position. Light polarization due to the film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channels. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE, and therefore determination of the LSE per color channel and per dose delivered to the film. (paper)

  4. Nonexposure Accurate Location K-Anonymity Algorithm in LBS

    Directory of Open Access Journals (Sweden)

    Jinying Jia

    2014-01-01

    Full Text Available This paper tackles location privacy protection in current location-based services (LBS), where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. Location cloaking has been proposed and well studied to protect user privacy. It blurs the user's accurate coordinate and replaces it with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR), nearly all existing cloaking algorithms require knowing the accurate locations of all users. Therefore, location cloaking without exposing the user's accurate location to any party is urgently needed. In this paper, we present two such nonexposure accurate location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs) of the grid areas reported by all the users, instead of directly on their accurate coordinates. Experimental results show that our algorithms are more secure than the existing cloaking algorithms, do not require all users to report their locations at all times, and can generate a smaller ASR.

  5. Nonexposure accurate location K-anonymity algorithm in LBS.

    Science.gov (United States)

    Jia, Jinying; Zhang, Fengli

    2014-01-01

    This paper tackles location privacy protection in current location-based services (LBS), where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. Location cloaking has been proposed and well studied to protect user privacy. It blurs the user's accurate coordinate and replaces it with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR), nearly all existing cloaking algorithms require knowing the accurate locations of all users. Therefore, location cloaking without exposing the user's accurate location to any party is urgently needed. In this paper, we present two such nonexposure accurate location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs) of the grid areas reported by all the users, instead of directly on their accurate coordinates. Experimental results show that our algorithms are more secure than the existing cloaking algorithms, do not require all users to report their locations at all times, and can generate a smaller ASR.
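
    The grid-ID cloaking idea in this record can be sketched in a few lines. The square-block expansion strategy, the (col, row) cell indexing, and all names below are illustrative assumptions for a minimal K-anonymity sketch, not the authors' algorithm:

    ```python
    from collections import Counter

    def cloak(reported_cells, my_cell, k):
        """Grow a square block of grid cells around my_cell until it covers
        at least k reported users, then return the block as the anonymous
        spatial region (ASR).  Users report only grid-cell IDs (col, row),
        never their accurate coordinates."""
        counts = Counter(reported_cells)        # cell ID -> number of users
        cx, cy = my_cell
        radius = 0
        while True:
            block = [(x, y)
                     for x in range(cx - radius, cx + radius + 1)
                     for y in range(cy - radius, cy + radius + 1)]
            if sum(counts[c] for c in block) >= k:
                return block
            radius += 1
    ```

    Because only cell IDs are exchanged, no party ever learns an exact coordinate; the finer the grid, the closer the resulting ASR can shrink toward the minimum region that still covers k users.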

  6. Clinical and laboratory parameters predicting a requirement for the reevaluation of growth hormone status during growth hormone treatment: Retesting early in the course of GH treatment.

    Science.gov (United States)

    Vuralli, Dogus; Gonc, E Nazli; Ozon, Z Alev; Alikasifoglu, Ayfer; Kandemir, Nurgun

    2017-06-01

    We aimed to define the predictive criteria, in the form of specific clinical, hormonal and radiological parameters, for children with growth hormone deficiency (GHD) who may benefit from the reevaluation of GH status early in the course of growth hormone (GH) treatment. Two hundred sixty-five children with growth hormone deficiency were retested by GH stimulation at the end of the first year of GH treatment. The initial clinical and laboratory characteristics of those with a normal (GH ≥10 ng/ml) response and those with a subnormal (GH <10 ng/ml) response were compared to identify predictors of GH status during reassessment. Sixty-nine patients (40.6%) out of the 170 patients with isolated growth hormone deficiency (IGHD) had a peak GH of ≥10 ng/ml during the retest. None of the patients with multiple pituitary hormone deficiency (MPHD) had a peak GH of ≥10 ng/ml. Puberty and sex steroid priming in peripubertal cases increased the probability of a normal GH response. Only one patient with IGHD who had an ectopic posterior pituitary without stalk interruption on MRI analysis showed a normal GH response during the retest. Patients with a peak GH between 5 and 10 ng/ml, an age at diagnosis of ≥9 years or a height gain below 0.61 SDS during the first year of treatment had an increased probability of having a normal GH response at the retest. Early reassessment of GH status during GH treatment is unnecessary in patients who have MPHD with at least 3 hormone deficiencies. Retesting at the end of the first year of therapy is recommended for patients with IGHD who have a height gain of <0.61 SDS in the first year of treatment, especially those with a normal or 'hypoplastic' pituitary on imaging. Priming can increase the likelihood of a normal response in patients in the pubertal age group who do not show overt signs of pubertal development. Copyright © 2017. Published by Elsevier Ltd.

  7. Accurate 3D Mapping Algorithm for Flexible Antennas

    Directory of Open Access Journals (Sweden)

    Saed Asaly

    2018-01-01

    Full Text Available This work addresses the problem of performing an accurate 3D mapping of a flexible antenna surface. Consider a high-gain satellite flexible antenna; even a submillimeter change in the antenna surface may lead to a considerable loss in the antenna gain. Using a robotic subreflector, such changes can be compensated for. Yet, in order to perform such tuning, an accurate 3D mapping of the main antenna is required. This paper presents a general method for performing an accurate 3D mapping of marked surfaces such as satellite dish antennas. Motivated by the novel technology for nanosatellites with flexible high-gain antennas, we propose a new accurate mapping framework which requires a small-sized monocamera and known patterns on the antenna surface. Experimental results show that the presented mapping method can detect changes down to 0.1 mm accuracy with the camera located 1 meter away from the dish, allowing RF antenna optimization for Ka and Ku frequencies. Such an optimization process can improve the gain of flexible antennas and allow adaptive beam shaping. The presented method is currently being implemented on a nanosatellite scheduled to be launched at the end of 2018.

  8. Efforts to enrich evidence for accurate diagnoses

    Directory of Open Access Journals (Sweden)

    Wei Wang

    2016-10-01

    Full Text Available It is always difficult to reach an accurate diagnosis for complicated disease conditions, especially for disorders with similar characteristics. With advanced technology, clinicians are able to detect tiny changes during ongoing disease processes. The research papers in the current issue help readers understand features of several such disorders, and thus provide references and hints toward accurate diagnoses and precision therapies.

  9. A New Accurate Finite-Difference Scheme Based on the Optimally Accurate Operators and Boundary-Condition Consistent Material Parameterization

    Science.gov (United States)

    Kristek, J.; Moczo, P.; Galis, M.

    2005-12-01

    Geller and Takeuchi (1995) developed optimally accurate finite-difference (FD) operators. The operators minimize the error of the numerical solution of the discretized equation of motion. The criterion for obtaining the optimally accurate operators requires that the leading term of the truncation error of the discretized homogeneous (without body-force term) equation of motion be zero, that is, when the operand is an eigenfunction and the frequency equals an eigenfrequency. Consequently, the optimally accurate operators satisfy (up to the leading term of the truncation error) the homogeneous equation of motion. The grid dispersion of an optimally accurate FD scheme is significantly smaller than that of a standard FD scheme. A heterogeneous FD scheme cannot be anything other than an FD approximation to the heterogeneous formulation of the equation of motion (the same form of the equation for a point away from a material discontinuity and a point at the material discontinuity). If an optimally accurate FD scheme for heterogeneous media is to be obtained, the optimally accurate operators have to be applied to the heterogeneous formulation of the equation of motion. Moczo et al. (2002) found such a heterogeneous formulation and developed an FD scheme based on standard staggered-grid 4th-order operators. The scheme is capable of sensing both smooth material heterogeneity and a material discontinuity at any position in the spatial grid. We present a new FD scheme that combines the optimally accurate operators of Geller and Takeuchi (1995) with the material parameterization of Moczo et al. (2002). Models of a single material discontinuity, an interior constant-velocity layer, and an interior layer with a velocity gradient were calculated with the new scheme, with a conventional-operator scheme, and analytically. Numerical results clearly isolate and demonstrate the effects of the boundary and of grid dispersion, and demonstrate significant accuracy improvement compared to previous FD schemes.

  10. Cabergoline and cardiac valve disease in prolactinoma patients: additional studies during long-term treatment are required

    NARCIS (Netherlands)

    Kars, M.; Pereira, A. M.; Bax, J. J.; Romijn, J. A.

    2008-01-01

    The increased risk of cardiac valve disease in patients treated for Parkinson's disease with cabergoline has raised concerns about the safety of treatment with ergot-derived dopamine agonists in patients with endocrine diseases, especially prolactinoma. Six cross-sectional studies have been

  11. Physicochemical properties of germinated dehulled rice flour and energy requirement in germination as affected by ultrasound treatment.

    Science.gov (United States)

    Ding, Junzhou; Hou, Gary G; Dong, Mengyi; Xiong, Shanbai; Zhao, Siming; Feng, Hao

    2018-03-01

    Limited data are published regarding changes in the physicochemical properties of rice flours from germinated dehulled rice treated by ultrasound. This work was undertaken to evaluate the effect of ultrasound treatment (25 kHz, 16 W/L, 5 min) on starch hydrolysis and the functional properties of rice flours produced from ultrasound-treated red rice and brown rice germinated for up to 36 h. Environmental scanning electron microscopy (ESEM) microimages showed that the ultrasound treatment altered the surface microstructure of the rice, which helped to improve moisture transfer during steam cooking. The flours from sonicated germinated dehulled rice exhibited significant (p < 0.05) changes in their properties, and germinating red rice and brown rice displayed different sensitivity to the ultrasonic treatment. The ultrasonic pre-treatment resulted in a significant reduction in energy use during germination, with a potential to further reduce energy use in the cooking of germinated rice. The present study indicated that ultrasound could be a low-power-consumption method to modify the rheological behavior of germinated rice flour, as well as an efficient approach to improve the texture, flavor, and nutrient properties of steam-cooked germinated rice. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Quantitative and qualitative characteristics of grey water for reuse requirements and treatment alternatives: the case of Jordan

    NARCIS (Netherlands)

    Abu-Ghunmi, L.N.A.H.; Zeeman, G.; Lier, van J.B.; Fayyed, M.

    2008-01-01

    The objective of this work is to assess the potentials and requirements for grey water reuse in Jordan. The results revealed that urban, rural and dormitory grey water production rate and concentration of TS, BOD5, COD and pathogens varied between 18-66 L cap(-1) d(-1), 848-1,919, 200-1,056, and

  13. Accurate transfer maps for realistic beam-line elements: Straight elements

    Directory of Open Access Journals (Sweden)

    Chad E. Mitchell

    2010-06-01

    Full Text Available The behavior of orbits in charged-particle beam transport systems, including both linear and circular accelerators as well as final focus sections and spectrometers, can depend sensitively on nonlinear fringe-field and high-order-multipole effects in the various beam-line elements. The inclusion of these effects requires a detailed and realistic model of the interior and fringe fields, including their high spatial derivatives. A collection of surface fitting methods has been developed for extracting this information accurately from three-dimensional field data on a grid, as provided by various three-dimensional finite-element field codes. Based on these realistic field models, Lie or other methods may be used to compute accurate design orbits and accurate transfer maps about these orbits. Part I of this work presents a treatment of straight-axis magnetic elements, while part II will treat bending dipoles with large sagitta. An exactly soluble but numerically challenging model field is used to provide a rigorous collection of performance benchmarks.

  14. Evaluation of new reference genes in papaya for accurate transcript normalization under different experimental conditions.

    Directory of Open Access Journals (Sweden)

    Xiaoyang Zhu

    Full Text Available Real-time reverse transcription PCR (RT-qPCR) is a preferred method for rapid and accurate quantification in gene expression studies. Appropriate application of RT-qPCR requires accurate normalization through the use of reference genes. As no single reference gene is universally suitable for all experiments, validation of reference gene(s) under different experimental conditions is crucial for RT-qPCR analysis. To date, only a few studies on reference genes have been done in other plants, and none in papaya. In the present work, we selected 21 candidate reference genes and evaluated their expression stability in 246 papaya fruit samples using three algorithms, geNorm, NormFinder and RefFinder. The samples consisted of 13 sets collected under different experimental conditions, including various tissues, different storage temperatures, different cultivars, developmental stages, postharvest ripening, modified atmosphere packaging, 1-methylcyclopropene (1-MCP) treatment, hot water treatment, biotic stress and hormone treatment. Our results demonstrated that expression stability varied greatly between reference genes and that suitable reference gene(s) or combinations of reference genes for normalization should be validated according to the experimental conditions. In general, the internal reference genes EIF (eukaryotic initiation factor 4A), TBP1 (TATA binding protein 1) and TBP2 (TATA binding protein 2) had a good performance under most experimental conditions, whereas the most widely used reference genes, ACTIN (Actin 2), 18S rRNA (18S ribosomal RNA) and GAPDH (glyceraldehyde-3-phosphate dehydrogenase), were not suitable in many experimental conditions. In addition, two commonly used programs, geNorm and NormFinder, proved sufficient for the validation. This work provides the first systematic analysis for the selection of superior reference genes for accurate transcript normalization in papaya under different experimental

  15. Extracting Time-Accurate Acceleration Vectors From Nontrivial Accelerometer Arrangements.

    Science.gov (United States)

    Franck, Jennifer A; Blume, Janet; Crisco, Joseph J; Franck, Christian

    2015-09-01

    Sports-related concussions are of significant concern in many impact sports, and their detection relies on accurate measurements of the head kinematics during impact. Among the most prevalent recording technologies are videography and, more recently, single-axis accelerometers mounted in a helmet, such as the HIT system. Successful extraction of the linear and angular impact accelerations depends on an accurate analysis methodology governed by the equations of motion. Current algorithms are able to estimate the magnitude of acceleration and hit location, but make assumptions about the hit orientation and are often limited in the position and/or orientation of the accelerometers. The newly formulated algorithm presented in this manuscript accurately extracts the full linear and rotational acceleration vectors from a broad arrangement of six single-axis accelerometers directly from the governing set of kinematic equations. The new formulation linearizes the nonlinear centripetal acceleration term with a finite-difference approximation and provides a fast and accurate solution for all six components of acceleration over long time periods (>250 ms). The approximation of the nonlinear centripetal acceleration term provides an accurate computation of the rotational velocity as a function of time and allows for reconstruction of a multiple-impact signal. Furthermore, the algorithm determines the impact location and orientation and can distinguish between glancing, high-rotational-velocity impacts and direct impacts through the center of mass. Results are shown for ten simulated impact locations on a headform geometry, computed with three different accelerometer configurations under varying degrees of signal noise. 
Since the algorithm does not require simplifications of the actual impacted geometry, the impact vector, or a specific arrangement of accelerometer orientations, it can be easily applied to many impact investigations in which accurate kinematics need to
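
    The measurement model described in this record reduces to a linear solve once the centripetal term is evaluated with the previous step's angular velocity. The sensor layout, names, and least-squares solver below are illustrative assumptions; only the per-sensor measurement equation and the finite-difference linearization come from the abstract:

    ```python
    import numpy as np

    def solve_accelerations(readings, positions, directions, omega_prev):
        """Recover the linear acceleration a and angular acceleration alpha
        of a rigid body from six single-axis accelerometers.  A sensor at
        position r with unit sensing direction n measures
            n . (a + alpha x r + omega x (omega x r)).
        Evaluating the nonlinear centripetal term with the previous time
        step's angular velocity makes the system linear in (a, alpha)."""
        A = np.zeros((len(readings), 6))
        b = np.array(readings, dtype=float)
        for i, (r, n) in enumerate(zip(positions, directions)):
            r, n = np.asarray(r, float), np.asarray(n, float)
            A[i, :3] = n                   # n . a
            A[i, 3:] = np.cross(r, n)      # n . (alpha x r) = alpha . (r x n)
            b[i] -= n @ np.cross(omega_prev, np.cross(omega_prev, r))
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        return x[:3], x[3:]                # linear, angular acceleration
    ```

    With six well-placed sensors the system is full rank and the least-squares solve is exact; additional sensors simply overdetermine it and average out noise.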

  16. The required number of treatment imaging days for an effective off-line correction of systematic errors in conformal radiotherapy of prostate cancer -- a radiobiological analysis

    International Nuclear Information System (INIS)

    Amer, Ali M.; Mackay, Ranald I.; Roberts, Stephen A.; Hendry, Jolyon H.; Williams, Peter C.

    2001-01-01

    Background and purpose: To use radiobiological modelling to estimate the number of initial days of treatment imaging required to gain most of the benefit from off-line correction of systematic errors in the conformal radiation therapy of prostate cancer. Materials and methods: Treatment plans based on the anatomical information of a representative patient were generated assuming that the patient is treated with a multileaf collimator (MLC) four-field technique and a total isocentre dose of 72 Gy delivered in 36 daily fractions. Target position variations between fractions were simulated from standard deviations of measured data found in the literature. Off-line correction of systematic errors was assumed to be performed only once, based on the errors measured during the initial days of treatment. The tumour control probability (TCP) was calculated using the Webb and Nahum model. Results: Simulation of daily variations in the target position predicted a marked reduction in TCP if the planning target volume (PTV) margin was smaller than 4 mm (TCP decreased by 3.4% for a 2 mm margin). The systematic components of target position variations had a greater effect on the TCP than the random components. Off-line correction of estimated systematic errors reduced the decrease in TCP due to daily target displacements; nevertheless, the resulting TCP levels for small margins were still less than the TCP level obtained with the use of an adequate PTV margin of ∼10 mm. The magnitude of the gain in TCP expected from the correction depended on the number of treatment imaging days used for the correction and the PTV margin applied. Gains of 2.5% in TCP were estimated from correction of systematic errors performed after 6 initial days of treatment imaging for a 2 mm PTV margin. The effect of various possible magnitudes of the systematic and random components on the gain in TCP expected from correction, and on the number of imaging days required, was also investigated. Conclusions: Daily
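
    The trade-off between the number of initial imaging days and the residual systematic error can be illustrated with a small Monte Carlo sketch. The Gaussian error model and all names below are assumptions for illustration; the record's TCP modelling is not reproduced here:

    ```python
    import random
    import statistics

    def residual_systematic_sd(sys_sd, rand_sd, n_days, trials=20000, seed=1):
        """Spread of the setup error remaining after an off-line correction
        based on n_days of imaging.  Each simulated patient has a fixed
        systematic offset ~N(0, sys_sd); each imaged day adds an independent
        random error ~N(0, rand_sd).  The correction subtracts the mean of
        the n_days measurements, so the leftover systematic error is the
        negated mean of the random components, shrinking as rand_sd/sqrt(n)."""
        rng = random.Random(seed)
        residuals = []
        for _ in range(trials):
            sys_err = rng.gauss(0.0, sys_sd)
            estimate = statistics.fmean(
                sys_err + rng.gauss(0.0, rand_sd) for _ in range(n_days))
            residuals.append(sys_err - estimate)
        return statistics.pstdev(residuals)
    ```

    With daily systematic and random standard deviations of 2 mm, six imaging days leave a residual systematic spread of roughly 2/√6 ≈ 0.8 mm, the kind of reduction that drives the TCP gains reported above.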

  17. Accurate and Simple Calibration of DLP Projector Systems

    DEFF Research Database (Denmark)

    Wilm, Jakob; Olesen, Oline Vinter; Larsen, Rasmus

    2014-01-01

    Much work has been devoted to the calibration of optical cameras, and accurate and simple methods are now available which require only a small number of calibration targets. The problem of obtaining these parameters for light projectors has not been studied as extensively, and most current methods require a camera and involve feature extraction from a known projected pattern. In this work we present a novel calibration technique for DLP projector systems based on phase shifting profilometry projection onto a printed calibration target. In contrast to most current methods, the one presented here…

  18. More accurate determination of coal stockpile inventories

    Energy Technology Data Exchange (ETDEWEB)

    Wright, P.L.; Hernadi, N.

    1978-08-01

    The coal density within a 12 m high stockpile can range from 836 kg/m³ to 948 kg/m³. Thus the use of a constant density to calculate the stockpile inventory could lead to a variation of 12,500 tonnes on a nominal 100,000 tonne stockpile. At present-day metallurgical coal prices, this represents a range in inventory value of over 600,000 dollars. A more accurate evaluation of the stockpile inventory can easily be achieved by the simple expedient of including the density variations in the calculations. Laboratory compression tests at different moisture contents have been used to simulate the coal compaction and to derive practical densities for coal at various heights within the stockpile. The resulting graphs of density variation with stockpile height at different moisture contents can be combined with accurate cross sections from volumetric surveys to produce an accurate quantification of the stockpile for inventory purposes.
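
    The layer-by-layer calculation reads directly as code. The linear density profile below is an illustrative assumption spanning the quoted 836-948 kg/m³ range; in practice the profile would come from the compression tests described above and would depend on moisture content:

    ```python
    def stockpile_tonnes(layer_volumes_m3, density_at_depth):
        """Inventory mass summed layer by layer with a depth-dependent
        density, instead of multiplying the whole volume by one constant.
        layer_volumes_m3[i] is the surveyed volume of the i-th 1 m thick
        layer, counted downward from the top of the pile."""
        total_kg = 0.0
        for depth, volume in enumerate(layer_volumes_m3):
            total_kg += volume * density_at_depth(depth + 0.5)  # mid-layer
        return total_kg / 1000.0  # tonnes

    def linear_compaction(depth_m, pile_height_m=12.0):
        """Illustrative (assumed) linear profile spanning the 836-948 kg/m3
        range quoted for a 12 m pile; real curves come from laboratory
        compression tests at the relevant moisture content."""
        return 836.0 + (948.0 - 836.0) * depth_m / pile_height_m
    ```

    For twelve 1000 m³ layers the depth-dependent profile gives 10 704 t, versus 10 032 t assuming a constant 836 kg/m³ and 11 376 t at a constant 948 kg/m³, illustrating the spread the record warns about.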

  19. Compression-based distance (CBD): a simple, rapid, and accurate method for microbiota composition comparison.

    Science.gov (United States)

    Yang, Fang; Chia, Nicholas; White, Bryan A; Schook, Lawrence B

    2013-04-23

    Perturbations in intestinal microbiota composition have been associated with a variety of gastrointestinal tract-related diseases. The alleviation of symptoms has been achieved using treatments that alter the gastrointestinal tract microbiota toward that of healthy individuals. Identifying differences in microbiota composition through the use of 16S rRNA gene hypervariable tag sequencing has profound health implications. Current computational methods for comparing microbial communities are usually based on multiple alignments and phylogenetic inference, making them time consuming and requiring exceptional expertise and computational resources. As sequencing data rapidly grows in size, simpler analysis methods are needed to meet the growing computational burdens of microbiota comparisons. Thus, we have developed a simple, rapid, and accurate method, independent of multiple alignments and phylogenetic inference, to support microbiota comparisons. We create a metric, called compression-based distance (CBD) for quantifying the degree of similarity between microbial communities. CBD uses the repetitive nature of hypervariable tag datasets and well-established compression algorithms to approximate the total information shared between two datasets. Three published microbiota datasets were used as test cases for CBD as an applicable tool. Our study revealed that CBD recaptured 100% of the statistically significant conclusions reported in the previous studies, while achieving a decrease in computational time required when compared to similar tools without expert user intervention. CBD provides a simple, rapid, and accurate method for assessing distances between gastrointestinal tract microbiota 16S hypervariable tag datasets.
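
    The compression trick can be sketched with a general-purpose compressor. The normalization below follows the common normalized-compression-distance form and uses zlib; the CBD authors' exact formula and compressor choice may differ, so treat this as an assumption-laden illustration rather than their implementation:

    ```python
    import zlib

    def compression_distance(x: bytes, y: bytes) -> float:
        """Normalized compression distance in the spirit of CBD: if two tag
        datasets share repetitive structure, their concatenation compresses
        almost as well as the larger one alone and the score approaches 0;
        unrelated datasets push it toward 1."""
        cx = len(zlib.compress(x, 9))
        cy = len(zlib.compress(y, 9))
        cxy = len(zlib.compress(x + y, 9))
        return (cxy - min(cx, cy)) / max(cx, cy)
    ```

    No multiple alignment or phylogenetic inference is involved: the compressor itself discovers the repeated hypervariable tags shared between the two datasets.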

  20. A technique for accurate planning of stereotactic brain implants prior to head ring fixation

    International Nuclear Information System (INIS)

    Ulin, Kenneth; Bornstein, Linda E.; Ling, Marilyn N.; Saris, Stephen; Wu, Julian K.; Curran, Bruce H.; Wazer, David E.

    1997-01-01

    Purpose: A two-step procedure is described for accurate planning of stereotactic brain implants prior to head-ring fixation. Methods and Materials: Approximately 2 weeks prior to the implant, a CT scan without the head ring is performed for treatment-planning purposes. An entry point and a reference point, both marked with barium and later tattooed, facilitate planning and permit correlation of the images with a later CT scan. A plan is generated using a conventional treatment-planning system to determine the number and activity of I-125 seeds required and the position of each catheter. I-125 seed anisotropy is taken into account by means of a modification to the treatment-planning program. On the day of the implant, a second CT scan is performed with the head ring affixed to the skull and with the same points marked as in the previous scan. The planned catheter coordinates are then mapped into the coordinate system of the second CT scan by means of a manual translational correction and a computer-calculated rotational correction derived from the reference point coordinates in the two scans. Results: The rotational correction algorithm was verified experimentally in a Rando phantom before it was used clinically. For analysis of the results with individual patients, a third CT scan is performed 1 day following the implant and is used for calculating the final dosimetry. Conclusion: The technique that is described has two important advantages: 1) the number and activity of seeds required can be accurately determined in advance; and 2) sufficient time is allowed to derive the best possible plan.

  1. Correlation of adjusted blood requirement index with treatment intervention and outcome in patients presenting with acute variceal bleeding

    International Nuclear Information System (INIS)

    Zaberi, B.F.; Riaz, M.F.; Sultan, B.A.; Gobindram, P.

    2007-01-01

    To determine the correlation of the ABRI with treatment intervention and with outcome (discharged or expired) in patients with acute variceal bleeding. The records of all patients admitted to Medical Unit-IV, Civil Hospital Karachi with acute variceal bleeding during January 2004 to October 2006 were retrieved. Use of vasoactive agents (terlipressin/octreotide), endoscopic band ligation (EBL) and outcome (discharged/expired) were noted. The ABRI was calculated by the following formula: ABRI = Blood Units Transfused / ((Final Hematocrit - Initial Hematocrit) + 0.01). Mean ABRI scores were compared by Student's t-test according to vasoactive therapy, EBL and outcome. Correlation of the ABRI with the same variables was also studied by plotting receiver operating characteristic (ROC) curves. Seventy-six patients fulfilling the inclusion criteria were selected. No statistically significant difference was observed in the mean ABRI scores when compared according to vasoactive drug administration, EBL or outcome. A significant correlation with mortality was seen on the ROC plot, with a significantly larger area under the curve. (author)
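
    The ABRI formula above is simple enough to compute directly. The function name and the fractional-hematocrit convention are illustrative assumptions:

    ```python
    def abri(units_transfused, final_hct, initial_hct):
        """Adjusted blood requirement index as defined in the record:
        units transfused divided by the hematocrit change, with 0.01 added
        so the index stays finite when the hematocrit does not rise.
        Hematocrits are taken as fractions here (0.30, not 30); using
        percentages rescales the index, so one convention must be fixed
        before comparing scores."""
        return units_transfused / ((final_hct - initial_hct) + 0.01)
    ```

    For example, abri(2, 0.30, 0.25) ≈ 33.3: two units that raised the hematocrit by only five points yield a high index, flagging ongoing blood loss.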

  2. Accurate Radiometric Calibration using Mechanically-Shuttered CCD Systems

    Science.gov (United States)

    Hall, D.; Liang, D.

    Acquiring accurate radiometric measurements is an essential part of characterizing non-resolvable satellites. For instance, temporal photometric signatures provide information on characteristic size, reflectance, stability, spin rate, etc., and, with more detailed analysis, shape and attitude. Multi-color photometric measurements provide information on material composition and the effects of space weathering. Thermal infrared radiometry provides gray-body temperatures and emissivity properties. Many of these methods rely on accurate radiometric calibration. For CCD systems, the calibration process generally entails removing bias and dark signals from the raw frames, dividing by a flat-field frame to account for non-uniformities, and applying a sensitivity factor to convert the remaining signal into photon-flux or energy-flux units. However, when using mechanically-shuttered camera systems, another effect must be accounted for to obtain accurately calibrated data: the finite time required for the mechanical shutter to open and close. Measurements for both two-bladed and iris mechanical shutters indicate that neglecting this effect can lead to calibration errors of 10% or more in short-duration exposures. We present methods for measuring this effect, either in a laboratory setting or with the instrument mounted on a telescope, and the additional steps required to calibrate CCD data.
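
    The calibration chain described here, with the shutter-time correction folded in, can be sketched as follows. The scalar shutter_offset and all names are illustrative assumptions; for an iris shutter the offset varies across the frame and would become a per-pixel array:

    ```python
    import numpy as np

    def calibrate(raw, bias, dark_rate, flat, t_nominal, shutter_offset):
        """Standard CCD reduction with a shutter-travel correction.
        t_nominal is the commanded exposure time; shutter_offset is the
        extra (or missing) effective exposure caused by the finite
        opening/closing time of the mechanical shutter, measured for the
        system.  Both the dark subtraction and the conversion to flux use
        the corrected exposure time."""
        t_eff = t_nominal + shutter_offset
        return (raw - bias - dark_rate * t_eff) / flat / t_eff  # counts/s
    ```

    For a 0.1 s exposure with a 0.01 s shutter offset, ignoring the correction biases every derived flux by 10%, consistent with the short-exposure errors quoted above.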

  3. The FLUKA code: An accurate simulation tool for particle therapy

    CERN Document Server

    Battistoni, Giuseppe; Böhlen, Till T; Cerutti, Francesco; Chin, Mary Pik Wai; Dos Santos Augusto, Ricardo M; Ferrari, Alfredo; Garcia Ortega, Pablo; Kozlowska, Wioletta S; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically-based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in-vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with bot...

  4. FIXED-WING MICRO AERIAL VEHICLE FOR ACCURATE CORRIDOR MAPPING

    Directory of Open Access Journals (Sweden)

    M. Rehak

    2015-08-01

    Full Text Available In this study we present a Micro Aerial Vehicle (MAV) equipped with precise position and attitude sensors that, together with a pre-calibrated camera, enables accurate corridor mapping. The design of the platform is based on widely available model components into which we integrate an open-source autopilot, a customized mass-market camera, and navigation sensors. We adapt the concepts of system calibration from larger mapping platforms to the MAV and evaluate them practically for their achievable accuracy. We present case studies for accurate mapping without ground control points: first for a block configuration, later for a narrow corridor. We evaluate the mapping accuracy with respect to checkpoints and a digital terrain model. We show that while it is possible to achieve pixel-level (3-5 cm) mapping accuracy in both cases, precise aerial position control is sufficient for the block configuration, whereas precise position and attitude control is required for corridor mapping.

  5. Fixed-Wing Micro Aerial Vehicle for Accurate Corridor Mapping

    Science.gov (United States)

    Rehak, M.; Skaloud, J.

    2015-08-01

    In this study we present a Micro Aerial Vehicle (MAV) equipped with precise position and attitude sensors that, together with a pre-calibrated camera, enables accurate corridor mapping. The design of the platform is based on widely available model components into which we integrate an open-source autopilot, a customized mass-market camera, and navigation sensors. We adapt the concepts of system calibration from larger mapping platforms to the MAV and evaluate them practically for their achievable accuracy. We present case studies for accurate mapping without ground control points: first for a block configuration, later for a narrow corridor. We evaluate the mapping accuracy with respect to checkpoints and a digital terrain model. We show that while it is possible to achieve pixel-level (3-5 cm) mapping accuracy in both cases, precise aerial position control is sufficient for the block configuration, whereas precise position and attitude control is required for corridor mapping.

  6. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    Science.gov (United States)

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  7. Infections reported in newborns with very low birth weight who required surgical treatment. Data from the Polish Neonatology Surveillance Network.

    Science.gov (United States)

    Wójkowska-Mach, Jadwiga; Helwich, Ewa; Borszewska-Kornacka, Maria; Gadzinowski, Janusz; Gulczyńska, Ewa; Kordek, Agnieszka; Pawlik, Dorota; Szczapa, Jerzy; Domańska, Joanna; Klamka, Jerzy; Heczko, Piotr B

    2013-01-01

    To determine the risk of various forms of infections appearing in very low birth weight (VLBW) newborns during the period of 30 days after surgical treatment in hospitals which have perinatal care departments with neonatal intensive care units that form the Polish Neonatology Surveillance Network (PNSN). Continuous prospective monitoring of infections was carried out from January 1st to December 31st 2009 in six neonatal intensive care units of the PNSN. 910 newborns with very low birth weight were included in the study programme; 91 (10%) of this group underwent 118 surgical interventions, and 12 newborns needed two or more surgeries. The most common procedures were closure of persistent ductus arteriosus (PDA) and photocoagulation of vascular damage in the eye fundus. In the period of 30 days after surgery the following were diagnosed: necrotizing enterocolitis (NEC) in 3 newborns, sepsis (BSI) in 22 newborns, and pneumonia (PNEU) in 54 newborns. Symptoms of BSI and PNEU were on average observed on the 10th day after surgical intervention, while in the case of NEC on the 17th day. The highest incidence of infection (148.4%) was observed after PDA closure and in connection with introducing a drain into the pleural cavity through the intercostal space. The incidence of PNEU (37.3%) was twice as high as the incidence of BSI (18.6%). Surgical procedure was a factor significantly increasing the risk of infection and morbidity (RR 2.1, P<0.001). In our investigations there was no case of local infection of a surgical site. 11 newborns died (mortality was 12.1%). The most common bacterial strains found in our investigation were coagulase-negative Staphylococcus and Escherichia coli. Taking into consideration the fact that surgical procedure in VLBW newborns significantly increases the risk of pneumonia and, to a minor degree, the risk of NEC and BSI, further detailed investigation in the field of perisurgical...

  8. THE EFFECT OF ACCESS TO HEALTH SERVICES AND THE PERFORMED TREATMENT INDEX (PTI) / REQUIREMENT TREATMENT INDEX (RTI) ON ORAL HYGIENE BEHAVIOR

    Directory of Open Access Journals (Sweden)

    Niniek L. Pratiwi

    2012-11-01

    ...affordability cross-subsidies are required to increase purchasing power for fluoride-containing toothpaste and toothbrushes so that they can reach people, especially the poor. Key words: PTI, RTI, oral hygiene behavior, socioeconomic status, access to health services

  9. Accurate modeling of parallel scientific computations

    Science.gov (United States)

    Nicol, David M.; Townsend, James C.

    1988-01-01

    Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.
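    The partitioning problem described above, dividing a grid among processors so that workload is balanced, can be illustrated for the one-dimensional case. This is our own toy sketch of the load-balancing idea (greedy contiguous blocking by cell weight), not the performance model or code from the paper.

```python
def partition_1d(weights, nproc):
    """Greedy contiguous partition of a 1-D grid among nproc processors:
    sweep cells left to right, closing a block once its load reaches the
    ideal per-processor share (while leaving cells for later blocks)."""
    target = sum(weights) / nproc
    blocks, current, load = [], [], 0.0
    for i, w in enumerate(weights):
        current.append(i)
        load += w
        remaining_cells = len(weights) - i - 1
        remaining_blocks = nproc - len(blocks) - 1
        if load >= target and remaining_blocks > 0 and remaining_cells >= remaining_blocks:
            blocks.append(current)
            current, load = [], 0.0
    blocks.append(current)  # last block takes whatever is left
    return blocks
```

    For irregular grids the cell weights would come from measured or modeled per-cell cost, which is exactly where an accurate performance model matters.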

  10. Accurate estimation of indoor travel times

    DEFF Research Database (Denmark)

    Prentow, Thor Siiger; Blunck, Henrik; Stisen, Allan

    2014-01-01

    The ability to accurately estimate indoor travel times is crucial for enabling improvements within application areas such as indoor navigation, logistics for mobile workers, and facility management. In this paper, we study the challenges inherent in indoor travel time estimation, and we propose the InTraTime method for accurately estimating indoor travel times via mining of historical and real-time indoor position traces. The method learns during operation both travel routes, travel times and their respective likelihood---both for routes traveled as well as for sub-routes thereof. InTraTime allows specifying temporal and other query parameters, such as time-of-day, day-of-week or the identity of the traveling individual. As input the method is designed to take generic position traces and is thus interoperable with a variety of indoor positioning systems. The method's advantages include...
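    A minimal sketch of the core idea, estimating a route's travel time from historical traces, assuming traces are reduced to simple (origin, destination, seconds) records. The real InTraTime method additionally learns sub-routes, likelihoods, and temporal query parameters; none of that is modeled here.

```python
from statistics import median

def estimate_travel_time(history, origin, dest):
    """Median historical travel time (s) for an origin/destination pair,
    or None when no trace for that route has been observed."""
    times = [t for (o, d, t) in history if (o, d) == (origin, dest)]
    return median(times) if times else None
```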

  11. [Treatment-refractory-dental-extraction-associated pyothorax involving infection by 2 species of oral originated bacteria requires surgical debridement by video assisted thoracoscopic surgery (VATS)].

    Science.gov (United States)

    Rai, Kammei; Matsuo, Kiyoshi; Yonei, Toshiro; Sato, Toshio

    2008-09-01

    Cases of septic pulmonary embolism (SPE) diagnosed clinically by CT after dental extraction rarely include verification of bacteria from the local infection site. We report the case of a 70-year-old man without background disease suffering severe pyothorax after dental extraction. We detected two species of oral bacteria in his pleural effusion. Treatment was so difficult that it required surgical debridement by video assisted thoracoscopic surgery (VATS), even after the appropriate administration of antibiotics. The American Heart Association (AHA) prophylaxis guidelines for preventing infective endocarditis indicate that it is uncommon to prescribe antibiotics after dental extraction to patients without background disease. No appropriate Japanese guidelines exist concerning the prevention of SPE causing severe pyothorax as in our case. The hematogenous spread of bacteria after tooth extraction, as in SPE caused by sepsis, thus requires more careful consideration in clinical practice if patients are to be properly protected against potentially serious complications.

  12. Highly Accurate Prediction of Jobs Runtime Classes

    OpenAIRE

    Reiner-Benaim, Anat; Grabarnick, Anna; Shmueli, Edi

    2016-01-01

    Separating the short jobs from the long is a known technique to improve scheduling performance. In this paper we describe a method we developed for accurately predicting the runtimes classes of the jobs to enable this separation. Our method uses the fact that the runtimes can be represented as a mixture of overlapping Gaussian distributions, in order to train a CART classifier to provide the prediction. The threshold that separates the short jobs from the long jobs is determined during the ev...
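    The short/long cutoff in such a scheme can be illustrated by finding where two overlapping Gaussian runtime-class densities intersect. This sketch covers that one step only, under an equal-class-priors assumption of ours; it is not the paper's CART-based method.

```python
from math import log, sqrt

def runtime_threshold(mu1, s1, mu2, s2):
    """x where the normal densities N(mu1, s1) and N(mu2, s2) are equal
    (equal class priors); usable as a short/long job cutoff."""
    if s1 == s2:                      # equal spreads: midpoint of the means
        return (mu1 + mu2) / 2.0
    # Equate the two log-densities and solve the resulting quadratic in x.
    a = 1.0 / (2.0 * s1**2) - 1.0 / (2.0 * s2**2)
    b = mu2 / s2**2 - mu1 / s1**2
    c = mu1**2 / (2.0 * s1**2) - mu2**2 / (2.0 * s2**2) + log(s1 / s2)
    disc = sqrt(b * b - 4.0 * a * c)
    roots = ((-b + disc) / (2.0 * a), (-b - disc) / (2.0 * a))
    # The decision boundary is the root lying between the two means.
    return next(r for r in roots if min(mu1, mu2) <= r <= max(mu1, mu2))
```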

  13. Accurate maser positions for MALT-45

    Science.gov (United States)

    Jordan, Christopher; Bains, Indra; Voronkov, Maxim; Lo, Nadia; Jones, Paul; Muller, Erik; Cunningham, Maria; Burton, Michael; Brooks, Kate; Green, James; Fuller, Gary; Barnes, Peter; Ellingsen, Simon; Urquhart, James; Morgan, Larry; Rowell, Gavin; Walsh, Andrew; Loenen, Edo; Baan, Willem; Hill, Tracey; Purcell, Cormac; Breen, Shari; Peretto, Nicolas; Jackson, James; Lowe, Vicki; Longmore, Steven

    2013-10-01

    MALT-45 is an untargeted survey, mapping the Galactic plane in CS (1-0), Class I methanol masers, SiO masers and thermal emission, and high frequency continuum emission. After obtaining images from the survey, a number of masers were detected, but without accurate positions. This project seeks to resolve each maser and its environment, with the ultimate goal of placing the Class I methanol maser into a timeline of high mass star formation.

  14. A simple technique for an accurate shielding of the lungs during total body irradiation

    Directory of Open Access Journals (Sweden)

    Hana Mekdash

    2017-09-01

    Conclusion: This new technique succeeded in reducing the length of the overall treatment session of the conventional TBI procedure and hence reduced patient discomfort while ensuring accurate shielding of the lungs.

  15. Efficient and Accurate Computational Framework for Injector Design and Analysis, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — CFD codes used to simulate upper stage expander cycle engines are not adequately mature to support design efforts. Rapid and accurate simulations require more...

  16. Treatment

    Directory of Open Access Journals (Sweden)

    Safaa M. Raghab

    2013-08-01

    The main goal of this study is to utilize a natural low-cost material as an accelerator additive to enhance the chemical treatment process using Alum coagulant; the accelerator substances were Perlite and Bentonite. The performance of the chemical treatment was enhanced using the accelerator substances with 90 mg/l Alum as a constant dose. Perlite gave a better-performing effluent than Bentonite. The removal ratios for conductivity, turbidity, BOD and COD were 86.7%, 87.4%, 89.9% and 92.8% respectively for Perlite, and 83.5%, 85.0%, 86.5% and 85.0% respectively for Bentonite, at the same concentration of 40 mg/l for each.
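    The removal ratios quoted above follow the standard definition: the fractional drop from influent to effluent concentration. A small sketch with hypothetical influent/effluent values of ours (the abstract reports only the final percentages):

```python
def removal_ratio(influent, effluent):
    """Percent removal of a pollutant parameter (e.g. turbidity, BOD, COD)."""
    return 100.0 * (influent - effluent) / influent

# Hypothetical illustration: a BOD of 200 mg/l reduced to 20.2 mg/l
# corresponds to the ~89.9% removal reported for Perlite.
print(round(removal_ratio(200.0, 20.2), 1))  # 89.9
```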

  17. ACE-I Angioedema: Accurate Clinical Diagnosis May Prevent Epinephrine-Induced Harm

    Directory of Open Access Journals (Sweden)

    R. Mason Curtis

    2016-06-01

    Full Text Available Introduction: Upper airway angioedema is a life-threatening emergency department (ED) presentation with increasing incidence. Angiotensin-converting enzyme inhibitor induced angioedema (AAE) is a non-mast cell mediated etiology of angioedema. Accurate diagnosis by clinical examination can optimize patient management and reduce morbidity from inappropriate treatment with epinephrine. The aim of this study is to describe the incidence of angioedema subtypes and the management of AAE. We evaluate the appropriateness of treatments and highlight preventable iatrogenic morbidity. Methods: We conducted a retrospective chart review of consecutive angioedema patients presenting to two tertiary care EDs between July 2007 and March 2012. Results: Of 1,702 medical records screened, 527 were included. The cause of angioedema was identified in 48.8% (n=257) of cases. The most common identifiable etiology was AAE (33.1%, n=85), with a 60.0% male predominance. The most common AAE management strategies included diphenhydramine (63.5%, n=54), corticosteroids (50.6%, n=43) and ranitidine (31.8%, n=27). Epinephrine was administered in 21.2% (n=18) of AAE patients, five of whom received repeated doses. Four AAE patients required admission (4.7%) and one required endotracheal intubation. Epinephrine induced morbidity in two patients, causing myocardial ischemia or dysrhythmia shortly after administration. Conclusion: AAE is the most common identifiable etiology of angioedema and can be accurately diagnosed by physical examination. It is easily confused with anaphylaxis and mismanaged with antihistamines, corticosteroids and epinephrine. There is little physiologic rationale for epinephrine use in AAE and much risk. Improved clinical differentiation of mast cell and non-mast cell mediated angioedema can optimize patient management.

  18. Can Measured Synergy Excitations Accurately Construct Unmeasured Muscle Excitations?

    Science.gov (United States)

    Bianco, Nicholas A; Patten, Carolynn; Fregly, Benjamin J

    2018-01-01

    Accurate prediction of muscle and joint contact forces during human movement could improve treatment planning for disorders such as osteoarthritis, stroke, Parkinson's disease, and cerebral palsy. Recent studies suggest that muscle synergies, a low-dimensional representation of a large set of muscle electromyographic (EMG) signals (henceforth called "muscle excitations"), may reduce the redundancy of muscle excitation solutions predicted by optimization methods. This study explores the feasibility of using muscle synergy information extracted from eight muscle EMG signals (henceforth called "included" muscle excitations) to accurately construct muscle excitations from up to 16 additional EMG signals (henceforth called "excluded" muscle excitations). Using treadmill walking data collected at multiple speeds from two subjects (one healthy, one poststroke), we performed muscle synergy analysis on all possible subsets of eight included muscle excitations and evaluated how well the calculated time-varying synergy excitations could construct the remaining excluded muscle excitations (henceforth called "synergy extrapolation"). We found that some, but not all, eight-muscle subsets yielded synergy excitations that achieved >90% extrapolation variance accounted for (VAF). Using the top 10% of subsets, we developed muscle selection heuristics to identify included muscle combinations whose synergy excitations achieved high extrapolation accuracy. For 3, 4, and 5 synergies, these heuristics yielded extrapolation VAF values approximately 5% lower than corresponding reconstruction VAF values for each associated eight-muscle subset. These results suggest that synergy excitations obtained from experimentally measured muscle excitations can accurately construct unmeasured muscle excitations, which could help limit muscle excitations predicted by muscle force optimizations.
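    The variance accounted for (VAF) metric used above to score extrapolation accuracy is commonly defined as one minus the ratio of squared error to squared signal. Definitions vary slightly across studies, so the sketch below is one common form, not necessarily the exact formulation used in this paper.

```python
import numpy as np

def vaf(measured, constructed):
    """Variance accounted for (%) between a measured excitation signal
    and its synergy-based construction: VAF = 1 - SS_error / SS_measured."""
    measured = np.asarray(measured, dtype=float)
    constructed = np.asarray(constructed, dtype=float)
    ss_err = np.sum((measured - constructed) ** 2)
    ss_tot = np.sum(measured ** 2)
    return 100.0 * (1.0 - ss_err / ss_tot)
```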

  19. Accurate analytical representation of Pluto modern ephemeris

    Science.gov (United States)

    Kudryavtsev, Sergey M.; Kudryavtseva, Natalia S.

    2009-12-01

    An accurate development of the latest JPL’s numerical ephemeris of Pluto, DE421, to compact analytical series is done. Rectangular barycentric ICRF coordinates of Pluto from DE421 are approximated by compact Fourier series with a maximum error of 1.3 km over 1900-2050 (the entire time interval covered by the ephemeris). To calculate Pluto positions relative to the Sun, a development of rectangular heliocentric ICRF coordinates of the Solar System barycenter to Poisson series is additionally made. As a result, DE421 Pluto heliocentric positions by the new analytical series are represented to an accuracy of better than 5 km over 1900-2050.
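    A compact Fourier series representation of an ephemeris coordinate is evaluated as a constant plus a sum of cosine/sine terms. The sketch below shows only the evaluation form; the coefficient triples are illustrative placeholders of ours, not the actual DE421 development.

```python
from math import cos, sin

def fourier_eval(t, a0, terms):
    """Evaluate x(t) = a0 + sum_k [a_k*cos(w_k*t) + b_k*sin(w_k*t)].

    terms: iterable of (a_k, b_k, w_k) triples. Amplitudes and
    frequencies here are purely illustrative.
    """
    return a0 + sum(a * cos(w * t) + b * sin(w * t) for a, b, w in terms)
```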

  20. Accurate Charge Densities from Powder Diffraction

    DEFF Research Database (Denmark)

    Bindzus, Niels; Wahlberg, Nanna; Becker, Jacob

    Synchrotron powder X-ray diffraction has in recent years advanced to a level where it has become realistic to probe extremely subtle electronic features. Compared to single-crystal diffraction, it may be superior for simple, high-symmetry crystals owing to negligible extinction effects and minimal peak overlap. Additionally, it offers the opportunity for collecting data on a single scale. For charge density studies, the critical task is to recover accurate and bias-free structure factors from the diffraction pattern. This is the focal point of the present study, scrutinizing the performance...

  1. Hyper-accurate ribosomes inhibit growth.

    OpenAIRE

    Ruusala, T; Andersson, D; Ehrenberg, M; Kurland, C G

    1984-01-01

    We have compared both in vivo and in vitro translation by ribosomes from wild-type bacteria with those from streptomycin-resistant (SmR), streptomycin-dependent (SmD) and streptomycin-pseudo-dependent (SmP) mutants. The three mutant bacteria translate more accurately and more slowly in the absence of streptomycin (Sm) than do wild-type bacteria. In particular, the SmP bacteria grow at roughly half the rate of the wild-type in the absence of Sm. The antibiotic stimulates both the growth rate a...

  2. Hyper-accurate ribosomes inhibit growth.

    Science.gov (United States)

    Ruusala, T; Andersson, D; Ehrenberg, M; Kurland, C G

    1984-11-01

    We have compared both in vivo and in vitro translation by ribosomes from wild-type bacteria with those from streptomycin-resistant (SmR), streptomycin-dependent (SmD) and streptomycin-pseudo-dependent (SmP) mutants. The three mutant bacteria translate more accurately and more slowly in the absence of streptomycin (Sm) than do wild-type bacteria. In particular, the SmP bacteria grow at roughly half the rate of the wild-type in the absence of Sm. The antibiotic stimulates both the growth rate and the translation rate of SmP bacteria by approximately 2-fold, but it simultaneously increases the nonsense suppression rate quite dramatically. Kinetic experiments in vitro show that the greater accuracy and slower translation rates of mutant ribosomes compared with wild-type ribosomes are associated with much more rigorous proofreading activities of SmR, SmD and SmP ribosomes. Sm reduces the proofreading flows of the mutant ribosomes and stimulates their elongation rates. The data suggest that these excessively accurate ribosomes are kinetically less efficient than wild-type ribosomes, and that this inhibits mutant growth rates. The stimulation of the growth of the mutants by Sm results from the enhanced translational efficiency due to the loss of proofreading, which more than offsets the loss of accuracy caused by the antibiotic.

  3. Accurate basis set truncation for wavefunction embedding

    Science.gov (United States)

    Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.

    2013-07-01

    Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012)], 10.1021/ct300544e to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.

  4. Development of anatomically and dielectrically accurate breast phantoms for microwave imaging applications

    Science.gov (United States)

    O'Halloran, M.; Lohfeld, S.; Ruvio, G.; Browne, J.; Krewer, F.; Ribeiro, C. O.; Inacio Pita, V. C.; Conceicao, R. C.; Jones, E.; Glavin, M.

    2014-05-01

    Breast cancer is one of the most common cancers in women. In the United States alone, it accounts for 31% of new cancer cases, and is second only to lung cancer as the leading cause of deaths in American women. More than 184,000 new cases of breast cancer are diagnosed each year, resulting in approximately 41,000 deaths. Early detection and intervention are among the most significant factors in improving the survival rates and quality of life experienced by breast cancer sufferers, since this is the time when treatment is most effective. One of the most promising breast imaging modalities is microwave imaging. The physical basis of active microwave imaging is the dielectric contrast between normal and malignant breast tissue that exists at microwave frequencies. The dielectric contrast is mainly due to the increased water content present in the cancerous tissue. Microwave imaging is non-ionizing, does not require breast compression, is less invasive than X-ray mammography, and is potentially low cost. While several prototype microwave breast imaging systems are currently in various stages of development, the design and fabrication of anatomically and dielectrically representative breast phantoms to evaluate these systems is often problematic. While some existing phantoms are composed of dielectrically representative materials, they rarely accurately represent the shape and size of a typical breast. Conversely, several phantoms have been developed to accurately model the shape of the human breast, but have inappropriate dielectric properties. This study will briefly review existing phantoms before describing the development of a more accurate and practical breast phantom for the evaluation of microwave breast imaging systems.

  5. Is Cancer Information Exchanged on Social Media Scientifically Accurate?

    Science.gov (United States)

    Gage-Bouchard, Elizabeth A; LaValley, Susan; Warunek, Molli; Beaupin, Lynda Kwon; Mollica, Michelle

    2017-07-19

    Cancer patients and their caregivers are increasingly using social media as a platform to share cancer experiences, connect with support, and exchange cancer-related information. Yet, little is known about the nature and scientific accuracy of cancer-related information exchanged on social media. We conducted a content analysis of 12 months of data from 18 publicly available Facebook Pages hosted by parents of children with acute lymphoblastic leukemia (N = 15,852 posts) and extracted all exchanges of medically-oriented cancer information. We systematically coded for themes in the nature of cancer-related information exchanged on personal Facebook Pages, and two oncology experts independently evaluated the scientific accuracy of each post. Of the 15,852 total posts, 171 posts contained medically-oriented cancer information. The most frequent type of cancer information exchanged was information related to treatment protocols and health services use (35%), followed by information related to side effects and late effects (26%), medication (16%), medical caregiving strategies (13%), alternative and complementary therapies (8%), and other (2%). Overall, 67% of all cancer information exchanged was deemed medically/scientifically accurate, 19% was not medically/scientifically accurate, and 14% described unproven treatment modalities. These findings highlight the potential utility of social media as a cancer-related resource, but also indicate that providers should focus on recommending reliable, evidence-based sources to patients and caregivers.

  6. Sample size requirements for separating out the effects of combination treatments: Randomised controlled trials of combination therapy vs. standard treatment compared to factorial designs for patients with tuberculous meningitis

    Directory of Open Access Journals (Sweden)

    Farrar Jeremy

    2011-02-01

    Full Text Available Abstract. Background: In certain diseases clinical experts may judge that the intervention with the best prospects is the addition of two treatments to the standard of care. This can either be tested with a simple randomized trial of combination versus standard treatment or with a 2 × 2 factorial design. Methods: We compared the two approaches using the design of a new trial in tuberculous meningitis as an example. In that trial the combination of 2 drugs added to standard treatment is assumed to reduce the hazard of death by 30%, and the sample size of the combination trial to achieve 80% power is 750 patients. We calculated the power of corresponding factorial designs with one- to sixteen-fold the sample size of the combination trial, depending on the contribution of each individual drug to the combination treatment effect and the strength of an interaction between the two. Results: In the absence of an interaction, an eight-fold increase in sample size for the factorial design as compared to the combination trial is required to get 80% power to jointly detect effects of both drugs if the contribution of the less potent treatment to the total effect is at least 35%. An eight-fold sample size increase also provides a power of 76% to detect a qualitative interaction at the one-sided 10% significance level if the individual effects of both drugs are equal. Factorial designs with a lower sample size have a high chance to be underpowered, to show significance of only one drug even if both are equally effective, and to miss important interactions. Conclusions: Pragmatic combination trials of multiple interventions versus standard therapy are valuable in diseases with a limited patient pool if all interventions test the same treatment concept, it is considered likely that either both or none of the individual interventions are effective, and only moderate drug interactions are suspected. An adequately powered 2 × 2 factorial design to detect effects of...
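    The assumed 30% hazard reduction and 80% power imply a required number of events that can be approximated with Schoenfeld's formula for a two-arm log-rank comparison. This is a hedged sketch of that standard calculation, not the trial's actual design computation; converting events to the quoted 750 patients additionally depends on event rates and follow-up, which this omits.

```python
from math import ceil, log
from statistics import NormalDist

def schoenfeld_events(hazard_ratio, alpha=0.05, power=0.80, alloc=0.5):
    """Approximate number of events needed for a two-arm log-rank test
    (Schoenfeld's formula); alloc is the fraction randomised to one arm."""
    z = NormalDist().inv_cdf
    num = (z(1 - alpha / 2) + z(power)) ** 2
    return ceil(num / (alloc * (1 - alloc) * log(hazard_ratio) ** 2))

print(schoenfeld_events(0.7))  # 247 events for a 30% hazard reduction
```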

  7. Apparatus for accurately measuring high temperatures

    Science.gov (United States)

    Smith, D.D.

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800 to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.
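    The radiance-to-temperature step behind such a device can be illustrated by inverting Planck's law for an ideal blackbody. The patent describes a broadband multicolor sensor; this single-wavelength, unit-emissivity version is a simplification of ours, not the patented measurement scheme.

```python
from math import exp, log

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(T, wavelength):
    """Blackbody spectral radiance (W * sr^-1 * m^-3) at temperature T (K)."""
    c1 = 2.0 * H * C**2
    c2 = H * C / KB
    return c1 / (wavelength**5 * (exp(c2 / (wavelength * T)) - 1.0))

def brightness_temperature(radiance, wavelength):
    """Invert Planck's law: temperature (K) of the blackbody that emits
    the measured spectral radiance at the given wavelength (m)."""
    c1 = 2.0 * H * C**2
    c2 = H * C / KB
    return c2 / (wavelength * log(1.0 + c1 / (wavelength**5 * radiance)))
```

    The inversion is exact for a blackbody, which is why a blackbody sight tube (kept free of contaminants) is central to the design.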

  8. Accurate renormalization group analyses in neutrino sector

    Energy Technology Data Exchange (ETDEWEB)

    Haba, Naoyuki [Graduate School of Science and Engineering, Shimane University, Matsue 690-8504 (Japan); Kaneta, Kunio [Kavli IPMU (WPI), The University of Tokyo, Kashiwa, Chiba 277-8568 (Japan); Takahashi, Ryo [Graduate School of Science and Engineering, Shimane University, Matsue 690-8504 (Japan); Yamaguchi, Yuya [Department of Physics, Faculty of Science, Hokkaido University, Sapporo 060-0810 (Japan)

    2014-08-15

    We investigate accurate renormalization group analyses in neutrino sector between ν-oscillation and seesaw energy scales. We consider decoupling effects of top quark and Higgs boson on the renormalization group equations of light neutrino mass matrix. Since the decoupling effects are given in the standard model scale and independent of high energy physics, our method can basically apply to any models beyond the standard model. We find that the decoupling effects of Higgs boson are negligible, while those of top quark are not. Particularly, the decoupling effects of top quark affect neutrino mass eigenvalues, which are important for analyzing predictions such as mass squared differences and neutrinoless double beta decay in an underlying theory existing at high energy scale.

  9. Accurate metacognition for visual sensory memory representations.

    Science.gov (United States)

    Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F

    2014-04-01

    The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception.

  10. Accurate predictions for the LHC made easy

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    The data recorded by the LHC experiments is of a very high quality. To get the most out of the data, precise theory predictions, including uncertainty estimates, are needed to reduce as much as possible theoretical bias in the experimental analyses. Recently, significant progress has been made in computing Next-to-Leading Order (NLO) computations, including matching to the parton shower, that allow for these accurate, hadron-level predictions. I shall discuss one of these efforts, the MadGraph5_aMC@NLO program, that aims at the complete automation of predictions at the NLO accuracy within the SM as well as New Physics theories. I’ll illustrate some of the theoretical ideas behind this program, show some selected applications to LHC physics, as well as describe the future plans.

  11. Effects of Music Therapy on Anesthesia Requirements and Anxiety in Women Undergoing Ambulatory Breast Surgery for Cancer Diagnosis and Treatment: A Randomized Controlled Trial.

    Science.gov (United States)

    Palmer, Jaclyn Bradley; Lane, Deforia; Mayo, Diane; Schluchter, Mark; Leeming, Rosemary

    2015-10-01

To investigate the effect of live and recorded perioperative music therapy on anesthesia requirements, anxiety levels, recovery time, and patient satisfaction in women undergoing surgery for diagnosis or treatment of breast cancer. Between 2012 and 2014, 207 female patients undergoing surgery for potential or known breast cancer were randomly assigned to receive either patient-selected live music (LM) preoperatively with therapist-selected recorded music intraoperatively (n=69), patient-selected recorded music (RM) preoperatively with therapist-selected recorded music intraoperatively (n=70), or usual care (UC) preoperatively with noise-blocking earmuffs intraoperatively (n=68). The LM and the RM groups did not differ significantly from the UC group in the amount of propofol required to reach moderate sedation. Compared with the UC group, both the LM and the RM groups had significantly greater reductions in anxiety. Music therapy as a complementary modality with cancer surgery may help manage preoperative anxiety in a way that is safe, effective, time-efficient, and enjoyable. © 2015 by American Society of Clinical Oncology.

  12. Statistical methods for accurately determining criticality code bias

    International Nuclear Information System (INIS)

    Trumble, E.F.; Kimball, K.D.

    1997-01-01

    A system of statistically treating validation calculations for the purpose of determining computer code bias is provided in this paper. The following statistical treatments are described: weighted regression analysis, lower tolerance limit, lower tolerance band, and lower confidence band. These methods meet the criticality code validation requirements of ANS 8.1. 8 refs., 5 figs., 4 tabs
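Of the treatments listed, the lower tolerance limit is the simplest to illustrate. The sketch below uses the standard normal-theory one-sided tolerance factor (Natrella's approximation); the 95%/95% coverage/confidence values are typical choices for this kind of validation work, and this is a generic illustration rather than the paper's specific weighted-regression procedure.

```python
from statistics import NormalDist, mean, stdev
import math

def lower_tolerance_limit(x, p=0.95, conf=0.95):
    """One-sided lower tolerance limit for normally distributed data:
    with confidence `conf`, at least a proportion `p` of the population
    lies above the returned value (Natrella's approximation for k)."""
    n = len(x)
    zp = NormalDist().inv_cdf(p)     # quantile for the covered proportion
    zc = NormalDist().inv_cdf(conf)  # quantile for the confidence level
    a = 1 - zc**2 / (2 * (n - 1))
    b = zp**2 - zc**2 / n
    k = (zp + math.sqrt(zp**2 - a * b)) / a
    return mean(x) - k * stdev(x)
```

For 50 samples at 95%/95%, k is about 2.06, noticeably larger than the naive 1.645; that margin over a simple confidence bound is the point of a tolerance limit.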

  13. Accurate estimation of dose distributions inside an eye irradiated with 106Ru plaques

    International Nuclear Information System (INIS)

    Brualla, L.; Sauerwein, W.; Sempau, J.; Zaragoza, F.J.; Wittig, A.

    2013-01-01

Background: Irradiation of intraocular tumors requires dedicated techniques, such as brachytherapy with 106Ru plaques. The currently available treatment planning system relies on the assumption that the eye is a homogeneous water sphere and on simplified radiation transport physics. However, accurate dose distributions and their assessment demand better models for both the eye and the physics. Methods: The Monte Carlo code PENELOPE, conveniently adapted to simulate the beta decay of 106Ru over 106Rh into 106Pd, was used to simulate radiation transport based on a computerized tomography scan of a patient's eye. A detailed geometrical description of two plaques (models CCA and CCB) from the manufacturer BEBIG was embedded in the computerized tomography scan. Results: The simulations were firstly validated by comparison with experimental results in a water phantom. Dose maps were computed for three plaque locations on the eyeball. From these maps, isodose curves and cumulative dose-volume histograms in the eye and for the structures at risk were assessed. For example, it was observed that a 4-mm anterior displacement with respect to a posterior placement of a CCA plaque for treating a posterior tumor would reduce from 40 to 0% the volume of the optic disc receiving more than 80 Gy. Such a small difference in anatomical position leads to a change in the dose that is crucial for side effects, especially with respect to visual acuity. The radiation oncologist has to bring these large changes in absorbed dose in the structures at risk to the attention of the surgeon, especially when the plaque has to be positioned close to relevant tissues. Conclusion: The detailed geometry of an eye plaque in computerized and segmented tomography of a realistic patient phantom was simulated accurately. Dose-volume histograms for relevant anatomical structures of the eye and the orbit were obtained with unprecedented accuracy. This represents an important step toward an optimized

  14. Fast and accurate read alignment for resequencing.

    Science.gov (United States)

    Mu, John C; Jiang, Hui; Kiani, Amirhossein; Mohiyuddin, Marghoob; Bani Asadi, Narges; Wong, Wing H

    2012-09-15

Next-generation sequence analysis has become an important task both in laboratory and clinical settings. A key stage in the majority of sequence analysis workflows, such as resequencing, is the alignment of genomic reads to a reference genome. The accurate alignment of reads with large indels is a computationally challenging task for researchers. We introduce SeqAlto as a new algorithm for read alignment. For reads longer than or equal to 100 bp, SeqAlto is up to 10 × faster than existing algorithms, while retaining high accuracy and the ability to align reads with large (up to 50 bp) indels. This improvement in efficiency is particularly important in the analysis of future sequencing data where the number of reads approaches many billions. Furthermore, SeqAlto uses less than 8 GB of memory to align against the human genome. SeqAlto is benchmarked against several existing tools with both real and simulated data. Linux and Mac OS X binaries free for academic use are available at http://www.stanford.edu/group/wonglab/seqalto whwong@stanford.edu.

  15. Accurate equilibrium structures for piperidine and cyclohexane.

    Science.gov (United States)

    Demaison, Jean; Craig, Norman C; Groner, Peter; Écija, Patricia; Cocinero, Emilio J; Lesarri, Alberto; Rudolph, Heinz Dieter

    2015-03-05

    Extended and improved microwave (MW) measurements are reported for the isotopologues of piperidine. New ground state (GS) rotational constants are fitted to MW transitions with quartic centrifugal distortion constants taken from ab initio calculations. Predicate values for the geometric parameters of piperidine and cyclohexane are found from a high level of ab initio theory including adjustments for basis set dependence and for correlation of the core electrons. Equilibrium rotational constants are obtained from GS rotational constants corrected for vibration-rotation interactions and electronic contributions. Equilibrium structures for piperidine and cyclohexane are fitted by the mixed estimation method. In this method, structural parameters are fitted concurrently to predicate parameters (with appropriate uncertainties) and moments of inertia (with uncertainties). The new structures are regarded as being accurate to 0.001 Å and 0.2°. Comparisons are made between bond parameters in equatorial piperidine and cyclohexane. Another interesting result of this study is that a structure determination is an effective way to check the accuracy of the ground state experimental rotational constants.

  16. A novel and accurate diagnostic test for human African trypanosomiasis.

    Science.gov (United States)

    Papadopoulos, Marios C; Abel, Paulo M; Agranoff, Dan; Stich, August; Tarelli, Edward; Bell, B Anthony; Planche, Timothy; Loosemore, Alison; Saadoun, Samira; Wilkins, Peter; Krishna, Sanjeev

    2004-04-24

    Human African trypanosomiasis (sleeping sickness) affects up to half a million people every year in sub-Saharan Africa. Because current diagnostic tests for the disease have low accuracy, we sought to develop a novel test that can diagnose human African trypanosomiasis with high sensitivity and specificity. We applied serum samples from 85 patients with African trypanosomiasis and 146 control patients who had other parasitic and non-parasitic infections to a weak cation exchange chip, and analysed with surface-enhanced laser desorption-ionisation time-of-flight mass spectrometry. Mass spectra were then assessed with three powerful data-mining tools: a tree classifier, a neural network, and a genetic algorithm. Spectra (2-100 kDa) were grouped into training (n=122) and testing (n=109) sets. The training set enabled data-mining software to identify distinct serum proteomic signatures characteristic of human African trypanosomiasis among 206 protein clusters. Sensitivity and specificity, determined with the testing set, were 100% and 98.6%, respectively, when the majority opinion of the three algorithms was considered. This novel approach is much more accurate than any other diagnostic test. Our report of the accurate diagnosis of an infection by use of proteomic signature analysis could form the basis for diagnostic tests for the disease, monitoring of response to treatment, and for improving the accuracy of patient recruitment in large-scale epidemiological studies.

  17. Accurate Multisteps Traffic Flow Prediction Based on SVM

    Directory of Open Access Journals (Sweden)

    Zhang Mingheng

    2013-01-01

Accurate traffic flow prediction is a prerequisite for realizing intelligent traffic control and guidance, and it is also an objective requirement of intelligent traffic management. Owing to the strongly nonlinear, stochastic, time-varying characteristics of urban transport systems, artificial intelligence methods such as the support vector machine (SVM) are now receiving more and more attention in this research field. Compared with the traditional single-step prediction method, multistep prediction can forecast traffic state trends over a certain period in the future; from the perspective of dynamic decision making, this is far more important than knowledge of the current traffic condition alone. Thus, in this paper, an accurate multistep traffic flow prediction model based on SVM is proposed. The input vectors comprised actual traffic volumes, and four different types of input vectors were compared to verify their prediction performance against each other. Finally, the model was verified with actual data in the empirical analysis phase, and the test results showed that the proposed SVM model had a good ability for traffic flow prediction, with the SVM-HPT model outperforming the other three models.
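A recursive multistep forecaster of the kind described can be sketched with scikit-learn's SVR: fit one regressor on lagged volumes, then feed each prediction back in as input to roll forward several intervals. The lag count, kernel parameters, and synthetic series here are illustrative assumptions, not the paper's SVM-HPT configuration.

```python
import numpy as np
from sklearn.svm import SVR

def multistep_svr_forecast(series, lags=4, steps=3):
    """Fit one SVR on lagged traffic volumes, then roll it forward,
    feeding each prediction back in as input (recursive multistep)."""
    X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
    y = np.array(series[lags:])
    model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)
    window = list(series[-lags:])
    preds = []
    for _ in range(steps):
        nxt = float(model.predict(np.array([window]))[0])
        preds.append(nxt)
        window = window[1:] + [nxt]  # slide the lag window forward
    return preds
```

Recursive rollout is the simplest multistep scheme; direct strategies (one model per horizon) trade more training for less error accumulation.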

  18. Accurate photometric redshift probability density estimation - method comparison and application

    Science.gov (United States)

    Rau, Markus Michael; Seitz, Stella; Brimioulle, Fabrice; Frank, Eibe; Friedrich, Oliver; Gruen, Daniel; Hoyle, Ben

    2015-10-01

    We introduce an ordinal classification algorithm for photometric redshift estimation, which significantly improves the reconstruction of photometric redshift probability density functions (PDFs) for individual galaxies and galaxy samples. As a use case we apply our method to CFHTLS galaxies. The ordinal classification algorithm treats distinct redshift bins as ordered values, which improves the quality of photometric redshift PDFs, compared with non-ordinal classification architectures. We also propose a new single value point estimate of the galaxy redshift, which can be used to estimate the full redshift PDF of a galaxy sample. This method is competitive in terms of accuracy with contemporary algorithms, which stack the full redshift PDFs of all galaxies in the sample, but requires orders of magnitude less storage space. The methods described in this paper greatly improve the log-likelihood of individual object redshift PDFs, when compared with a popular neural network code (ANNZ). In our use case, this improvement reaches 50 per cent for high-redshift objects (z ≥ 0.75). We show that using these more accurate photometric redshift PDFs will lead to a reduction in the systematic biases by up to a factor of 4, when compared with less accurate PDFs obtained from commonly used methods. The cosmological analyses we examine and find improvement upon are the following: gravitational lensing cluster mass estimates, modelling of angular correlation functions and modelling of cosmic shear correlation functions.

  19. Accurate modeling and evaluation of microstructures in complex materials

    Science.gov (United States)

    Tahmasebi, Pejman

    2018-02-01

Accurate characterization of heterogeneous materials is of great importance for different fields of science and engineering. Such a goal can be achieved through imaging. Acquiring three- or two-dimensional images under different conditions is not, however, always feasible. On the other hand, accurate characterization of complex and multiphase materials requires various digital images (I) under different conditions. An ensemble method is presented that can take one single (or a set of) I(s) and stochastically produce several similar models of the given disordered material. The method is based on successively calculating a conditional probability by which the initial stochastic models are produced. Then, a graph formulation is utilized for removing unrealistic structures. A distance transform function for the Is with highly connected microstructure and long-range features is considered, which results in a new I that is more informative. Reproduction of the I is also considered through a histogram matching approach in an iterative framework. Such an iterative algorithm avoids reproduction of unrealistic structures. Furthermore, a multiscale approach, based on pyramid representation of the large Is, is presented that can produce materials with millions of pixels in a matter of seconds. Finally, the nonstationary systems—those for which the distribution of data varies spatially—are studied using two different methods. The method is tested on several complex and large examples of microstructures. The produced results are all in excellent agreement with the utilized Is and the similarities are quantified using various correlation functions.
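The histogram-matching step mentioned above can be illustrated as plain quantile mapping: rank the source values and replace them with values drawn from the reference distribution at the same ranks. This generic sketch is not the author's iterative multiscale algorithm.

```python
import numpy as np

def match_histogram(source, reference):
    """Rank-map source values onto the reference's empirical distribution
    (quantile mapping), so the output histogram matches the reference's."""
    order = np.argsort(source.ravel())
    ref_sorted = np.sort(reference.ravel())
    # sample the reference quantiles at ranks spread over the source size
    idx = np.linspace(0, reference.size - 1, source.size).astype(int)
    out = np.empty(source.size)
    out[order] = ref_sorted[idx]
    return out.reshape(source.shape)
```

When source and reference have the same number of pixels, the output is exactly a rank-preserving permutation of the reference values.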

  20. Machine learning of parameters for accurate semiempirical quantum chemical calculations

    International Nuclear Information System (INIS)

    Dral, Pavlo O.; Lilienfeld, O. Anatole von; Thiel, Walter

    2015-01-01

We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules
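The general idea of using an ML model to close the gap between a cheap method and accurate reference data can be sketched with a toy delta-learning example: ridge-regress the residual on a molecular descriptor, then add the learned correction back onto the cheap prediction. All data below are synthetic stand-ins; this is related in spirit but is not the OM2 parameter-tuning scheme itself.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-ins: a 3-component molecular descriptor, reference
# ("accurate ab initio") energies, and a systematically biased cheap method.
X = rng.normal(size=(200, 3))
e_ref = X @ np.array([1.0, -2.0, 0.5])
e_cheap = 0.8 * e_ref + 0.3 * X[:, 0] + 1.0

# Delta learning: regress the residual on the descriptor (ridge-regularized),
# then correct the cheap prediction with the learned model.
A = np.hstack([X, np.ones((len(X), 1))])
w = np.linalg.solve(A.T @ A + 1e-6 * np.eye(A.shape[1]), A.T @ (e_ref - e_cheap))
e_corrected = e_cheap + A @ w
```

Because the synthetic bias is linear in the descriptor, the correction here is recovered almost exactly; real SQC residuals need nonlinear models and far richer descriptors.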

  1. Comparison of thermistor linearization techniques for accurate temperature measurement in phase change materials

    OpenAIRE

    Stankovic, S. B.; Kyriacou, P. A.

    2011-01-01

Alternate energy technologies have been developing rapidly in recent years. A significant part of this trend is the development of different phase change materials (PCMs). Proper utilization of PCMs requires accurate thermal characterization, and several methodologies are used in this field. This paper stresses the importance of accurate temperature measurements during the implementation of the T-history method. Since the temperature sensor size is also important, thermistors have been selected as...

  2. Accurate measurement of the electron beam polarization in JLab Hall A using Compton polarimetry

    International Nuclear Information System (INIS)

    Escoffier, S.; Bertin, P.Y.; Brossard, M.; Burtin, E.; Cavata, C.; Colombel, N.; Jager, C.W. de; Delbart, A.; Lhuillier, D.; Marie, F.; Mitchell, J.; Neyret, D.; Pussieux, T.

    2005-01-01

A major advance in accurate electron beam polarization measurement has been achieved at JLab Hall A with a Compton polarimeter based on a Fabry-Perot cavity photon beam amplifier. At an electron energy of 4.6 GeV and a beam current of 40 μA, a total relative uncertainty of 1.5% is typically achieved within 40 min of data taking. Under the same conditions, monitoring of the polarization is accurate at a level of 1%. These unprecedented results make Compton polarimetry an essential tool for modern parity-violation experiments, which require very accurate electron beam polarization measurements.

  3. Accurate calibration of TXRF using microdroplet samples.

    Science.gov (United States)

    Fabry, L; Pahlke, S; Kotz, L

    1996-01-01

TXRF has been applied in combination with VPD to the analysis of trace impurities in the native oxide layer of Si wafer surfaces down to the range of 10^8 atoms/cm^2. Proper quantification of VPD/TXRF data requires calibration with microdroplet standard reference wafers. The precision of the calibration function has been evaluated and found to allow quantification at a high level of 3-sigma confidence with microdroplet standard references.

  4. Important Nearby Galaxies without Accurate Distances

    Science.gov (United States)

    McQuinn, Kristen

    2014-10-01

The Spitzer Infrared Nearby Galaxies Survey (SINGS) and its offspring programs (e.g., THINGS, HERACLES, KINGFISH) have resulted in a fundamental change in our view of star formation and the ISM in galaxies, and together they represent the most complete multi-wavelength data set yet assembled for a large sample of nearby galaxies. These great investments of observing time have been dedicated to the goal of understanding the interstellar medium, the star formation process, and, more generally, galactic evolution at the present epoch. Nearby galaxies provide the basis for which we interpret the distant universe, and the SINGS sample represents the best studied nearby galaxies. Accurate distances are fundamental to interpreting observations of galaxies. Surprisingly, many of the SINGS spiral galaxies have numerous distance estimates, resulting in confusion. We can rectify this situation for 8 of the SINGS spiral galaxies within 10 Mpc at a very low cost through measurements of the tip of the red giant branch. The proposed observations will provide an accuracy of better than 0.1 in distance modulus. Our sample includes such well known galaxies as M51 (the Whirlpool), M63 (the Sunflower), M104 (the Sombrero), and M74 (the archetypal grand design spiral). We are also proposing coordinated parallel WFC3 UV observations of the central regions of the galaxies, rich with high-mass UV-bright stars. As a secondary science goal we will compare the resolved UV stellar populations with integrated UV emission measurements used in calibrating star formation rates. Our observations will complement the growing HST UV atlas of high resolution images of nearby galaxies.

  5. Accurate hydrocarbon estimates attained with radioactive isotope

    International Nuclear Information System (INIS)

    Hubbard, G.

    1983-01-01

    To make accurate economic evaluations of new discoveries, an oil company needs to know how much gas and oil a reservoir contains. The porous rocks of these reservoirs are not completely filled with gas or oil, but contain a mixture of gas, oil and water. It is extremely important to know what volume percentage of this water--called connate water--is contained in the reservoir rock. The percentage of connate water can be calculated from electrical resistivity measurements made downhole. The accuracy of this method can be improved if a pure sample of connate water can be analyzed or if the chemistry of the water can be determined by conventional logging methods. Because of the similarity of the mud filtrate--the water in a water-based drilling fluid--and the connate water, this is not always possible. If the oil company cannot distinguish between connate water and mud filtrate, its oil-in-place calculations could be incorrect by ten percent or more. It is clear that unless an oil company can be sure that a sample of connate water is pure, or at the very least knows exactly how much mud filtrate it contains, its assessment of the reservoir's water content--and consequently its oil or gas content--will be distorted. The oil companies have opted for the Repeat Formation Tester (RFT) method. Label the drilling fluid with small doses of tritium--a radioactive isotope of hydrogen--and it will be easy to detect and quantify in the sample

  6. Accurate modeling of defects in graphene transport calculations

    Science.gov (United States)

    Linhart, Lukas; Burgdörfer, Joachim; Libisch, Florian

    2018-01-01

    We present an approach for embedding defect structures modeled by density functional theory into large-scale tight-binding simulations. We extract local tight-binding parameters for the vicinity of the defect site using Wannier functions. In the transition region between the bulk lattice and the defect the tight-binding parameters are continuously adjusted to approach the bulk limit far away from the defect. This embedding approach allows for an accurate high-level treatment of the defect orbitals using as many as ten nearest neighbors while keeping a small number of nearest neighbors in the bulk to render the overall computational cost reasonable. As an example of our approach, we consider an extended graphene lattice decorated with Stone-Wales defects, flower defects, double vacancies, or silicon substitutes. We predict distinct scattering patterns mirroring the defect symmetries and magnitude that should be experimentally accessible.

  7. Vessel calibration for accurate material accountancy at RRP

    International Nuclear Information System (INIS)

    Yanagisawa, Yuu; Ono, Sawako; Iwamoto, Tomonori

    2004-01-01

RRP has an annual reprocessing capacity of 800 t·Upr and handles a large amount of nuclear material in solution. A large-scale plant like RRP requires an accurate materials accountancy system, so high-precision initial vessel calibration before operation is very important. To obtain the calibration curve, each incremental volume must be accurately known as a function of liquid height. We therefore performed at least two or three calibration runs with water for each vessel, and the calibration data required careful evaluation. We calibrated 210 vessels overall; for 81 vessels, including IAT and OAT, the calibration was performed in the presence of JSGO and IAEA inspectors, given its importance for materials accountancy. This paper describes the outline of the initial vessel calibration and the calibration results, which are based on back-pressure measurements with dip tubes. (author)
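The calibration-curve idea can be sketched as a least-squares fit of cumulative volume against liquid height. The heights and volumes below are made-up illustrative numbers; real plant calibration is evaluated segmentwise from dip-tube back-pressure data with uncertainty analysis.

```python
import numpy as np

# Hypothetical run data: liquid height (mm) vs. cumulative volume (L)
# recorded during repeated water additions to one vessel.
height = np.array([0.0, 50.0, 100.0, 150.0, 200.0, 250.0])
volume = np.array([0.0, 24.8, 50.1, 75.0, 99.7, 125.2])

# Least-squares calibration curve (linear here; real vessels may need
# piecewise or polynomial fits where the cross-section changes).
slope, intercept = np.polyfit(height, volume, 1)

def volume_at(h):
    """Interpolate the accountancy volume for a measured liquid height."""
    return slope * h + intercept
```

Repeating the water runs and comparing the fitted curves is what exposes the measurement repeatability that the accountancy uncertainty budget depends on.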

  8. Compact and Accurate Turbocharger Modelling for Engine Control

    DEFF Research Database (Denmark)

    Sorenson, Spencer C; Hendricks, Elbert; Magnússon, Sigurjón

    2005-01-01

With the current trend towards engine downsizing, the use of turbochargers to obtain extra engine power has become common. A great difficulty in the use of turbochargers is in the modelling of the compressor map. In general this is done by inserting the compressor map directly into the engine ECU (Engine Control Unit) as a table. This method uses a great deal of memory space and often requires on-line interpolation and thus a large amount of CPU time. In this paper a more compact, accurate and rapid method of dealing with the compressor modelling problem is presented; it is applicable to all turbochargers with radial compressors for either Spark Ignition (SI) or diesel engines.

  9. An Integrative Approach to Accurate Vehicle Logo Detection

    Directory of Open Access Journals (Sweden)

    Hao Pan

    2013-01-01

Vehicle logo detection is required for many applications in intelligent transportation systems and automatic surveillance. The task is challenging considering the small target of logos and the wide range of variability in shape, color, and illumination. A fast and reliable vehicle logo detection approach is proposed, following the visual attention mechanism of human vision. Two pre-logo detection steps, that is, vehicle region detection and small RoI segmentation, rapidly focalize a small logo target. An enhanced Adaboost algorithm, together with two types of features, Haar and HOG, is proposed to detect vehicles. An RoI that covers logos is segmented based on prior knowledge about the logos' position relative to license plates, which can be accurately localized from frontal vehicle images. A two-stage cascade classifier processes the segmented RoI, using a hybrid of Gentle Adaboost and Support Vector Machine (SVM), resulting in precise logo positioning. Extensive experiments were conducted to verify the efficiency of the proposed scheme.

  10. Accurate characterisation of hole geometries by fringe projection profilometry

    Science.gov (United States)

    Wu, Yuxiang; Dantanarayana, Harshana G.; Yue, Huimin; Huntley, Jonathan M.

    2017-06-01

    Accurate localisation and characterisation of holes is often required in the field of automated assembly and quality control. Compared to time consuming coordinate measuring machines (CMM), fringe-projection-based 3D scanners offer an attractive alternative as a fast, non-contact measurement technique that provides a dense 3D point cloud of a large sample in a few seconds. However, as we show in this paper, measurement artifacts occur at such hole edges, which can introduce errors in the estimated hole diameter by well over 0.25 mm, even though the estimated hole centre locations are largely unaffected. A compensation technique to suppress these measurement artifacts has been developed, by modelling the artifact using data extrapolated from neighboring pixels. By further incorporating a sub-pixel edge detection technique, we have been able to reduce the root mean square (RMS) diameter errors by up to 9.3 times using the proposed combined method.
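One common form of sub-pixel edge detection, of the kind the paper incorporates, fits a parabola to the intensity gradient around its discrete peak and takes the vertex as the edge location. The exact refinement used by the authors may differ, and the 1-D profile below is illustrative.

```python
def subpixel_edge(profile):
    """Locate an edge in a 1-D intensity profile to sub-pixel precision:
    compute the discrete gradient, then fit a parabola through the three
    gradient samples around the maximum and take its vertex."""
    grad = [abs(profile[i + 1] - profile[i]) for i in range(len(profile) - 1)]
    i = max(range(1, len(grad) - 1), key=lambda k: grad[k])
    g0, g1, g2 = grad[i - 1], grad[i], grad[i + 1]
    denom = g0 - 2 * g1 + g2
    offset = 0.0 if denom == 0 else 0.5 * (g0 - g2) / denom
    return i + 0.5 + offset  # +0.5: gradient sample i sits between pixels i and i+1
```

Applied around a hole rim pixel by pixel, this kind of refinement is what pulls the estimated diameter error below the one-pixel quantization floor.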

  11. Assessment of stroke and concomitant cerebrovascular disease in patients with heart disease requiring invasive treatment: analysis of 249 consecutive patients with heart disease.

    Science.gov (United States)

    Kim, Myeong Jin; Song, Hyun; Oh, Se-Yang; Choi, Jai Ho; Kim, Bum-Soo; Kang, Joonkyu; Shin, Yong Sam

    2014-06-01

The aim of this study was to analyze the relationships among cerebrovascular disease (CVD), heart problems, and stroke in patients who required an invasive cardiac procedure. We enrolled 249 consecutive patients who required or underwent invasive cardiac treatment and divided them into a non-CVD group (n = 116) and a CVD group (n = 133). The latter group was divided into a coronary artery disease (CAD) group (n = 118) and a non-CAD group (e.g., cardiac structural lesions; n = 15). No significant relationship with significant cerebrovascular stenosis was observed in either the CAD or the non-CAD group. The incidence of past stroke was significantly higher in the CVD group than in the non-CVD group (12.8 vs. 3.4%; p = 0.017). A previous stroke event increased the odds of having significant cerebrovascular stenosis (odds ratio, 3.919; p = 0.006). Among patients with both cardiac disease and CVD, there was only one case of perioperative stroke (0.9%). The main source of stroke was cardiogenic in the immediate results and cerebrovascular lesions in the delayed results (1-12 months). The risk of perioperative stroke was very low in combined cardiac disease and CVD. However, to prevent ischemic stroke due to predetected cerebrovascular lesions, precautionary efforts may be needed for patients undergoing an invasive cardiac procedure, and concomitant cerebrovascular lesions should be considered a main source of delayed ischemic stroke. Georg Thieme Verlag KG Stuttgart · New York.

  12. An accurate method for measuring triploidy of larval fish spawns

    Science.gov (United States)

    Jenkins, Jill A.; Draugelis-Dale, Rassa O.; Glennon, Robert; Kelly, Anita; Brown, Bonnie L.; Morrison, John

    2017-01-01

    A standard flow cytometric protocol was developed for estimating triploid induction in batches of larval fish. Polyploid induction treatments are not guaranteed to be 100% efficient, thus the ability to quantify the proportion of triploid larvae generated by a particular treatment helps managers to stock high-percentage spawns and researchers to select treatments for efficient triploid induction. At 3 d posthatch, individual Grass Carp Ctenopharyngodon idella were mechanically dissociated into single-cell suspensions; nuclear DNA was stained with propidium iodide then analyzed by flow cytometry. Following ploidy identification of individuals, aliquots of diploid and triploid cell suspensions were mixed to generate 15 levels (0–100%) of known triploidy (n = 10). Using either 20 or 50 larvae per level, the observed triploid percentages were lower than the known, actual values. Using nonlinear regression analyses, quadratic equations solved for triploid proportions in mixed samples and corresponding estimation reference plots allowed for predicting triploidy. Thus, an accurate prediction of the proportion of triploids in a spawn can be made by following a standard larval processing and analysis protocol with either 20 or 50 larvae from a single spawn, coupled with applying the quadratic equations or reference plots to observed flow cytometry results. Due to the universality of triploid DNA content being 1.5 times the diploid level and because triploid fish consist of fewer cells than diploids, this method should be applicable to other produced triploid fish species, and it may be adapted for use with bivalves or other species where batch analysis is appropriate.
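The correction scheme can be sketched as follows: fit a quadratic mapping the observed (biased-low) triploid fractions onto the known fractions, then apply it to new observations. The calibration numbers below are hypothetical stand-ins for the paper's 15-level mixture data.

```python
import numpy as np

# Hypothetical calibration: known triploid fractions vs. the biased-low
# fractions observed by flow cytometry (stand-ins for the paper's data).
known    = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
observed = np.array([0.0, 0.07, 0.15, 0.23, 0.31, 0.40, 0.50, 0.60, 0.71, 0.84, 1.0])

# Quadratic correction curve: known = a*obs^2 + b*obs + c
a, b, c = np.polyfit(observed, known, 2)

def corrected_triploidy(obs):
    """Predict the true triploid proportion from an observed fraction."""
    return a * obs**2 + b * obs + c
```

In practice one curve would be fitted per batch size (20 or 50 larvae), since the bias depends on how many larvae are pooled per analysis.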

  13. Accurate and efficient calculation of response times for groundwater flow

    Science.gov (United States)

    Carr, Elliot J.; Simpson, Matthew J.

    2018-03-01

We study measures of the amount of time required for transient flow in heterogeneous porous media to effectively reach steady state, also known as the response time. Here, we develop a new approach that extends the concept of mean action time. Previous applications of the theory of mean action time to estimate the response time use the first two central moments of the probability density function associated with the transition from the initial condition, at t = 0, to the steady state condition that arises in the long time limit, as t → ∞. This previous approach leads to a computationally convenient estimation of the response time, but the accuracy can be poor. Here, we outline a powerful extension using the first k raw moments, showing how to produce an extremely accurate estimate by making use of asymptotic properties of the cumulative distribution function. Results are validated using an existing laboratory-scale data set describing flow in a homogeneous porous medium. In addition, we demonstrate how the results also apply to flow in heterogeneous porous media. Overall, the new method is: (i) extremely accurate; and (ii) computationally inexpensive. In fact, the computational cost of the new method is orders of magnitude less than the computational effort required to study the response time by solving the transient flow equation. Furthermore, the approach provides a rigorous mathematical connection with the heuristic argument that the response time for flow in a homogeneous porous medium is proportional to L^2/D, where L is a relevant length scale, and D is the aquifer diffusivity. Here, we extend such heuristic arguments by providing a clear mathematical definition of the proportionality constant.
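The closing L^2/D scaling is easy to check numerically under simple assumptions: solve the 1-D diffusion analogue of transient flow with an explicit finite-difference scheme and time how long the solution takes to come within 1% of steady state. Doubling L should quadruple the response time. This brute-force timing is exactly the expensive approach that the moment-based estimate avoids.

```python
import numpy as np

def response_time(L, D, n=101, tol=0.01):
    """Time for transient 1-D flow h_t = D*h_xx (h(0,t)=0, h(L,t)=1,
    h(x,0)=0 inside) to come within `tol` of its steady state x/L."""
    x = np.linspace(0.0, L, n)
    dx = x[1] - x[0]
    dt = 0.25 * dx**2 / D        # safely inside explicit-scheme stability
    h = np.zeros(n)
    h[-1] = 1.0                  # fixed-head boundary conditions
    h_ss = x / L                 # steady-state solution
    t = 0.0
    while np.max(np.abs(h - h_ss)) > tol:
        h[1:-1] += dt * D * (h[2:] - 2.0 * h[1:-1] + h[:-2]) / dx**2
        t += dt
    return t
```

The slowest-decaying mode falls off like exp(-pi^2 D t / L^2), which is where the L^2/D proportionality comes from.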

  14. The importance of accurate meteorological input fields and accurate planetary boundary layer parameterizations, tested against ETEX-1

    International Nuclear Information System (INIS)

    Brandt, J.; Ebel, A.; Elbern, H.; Jakobs, H.; Memmesheimer, M.; Mikkelsen, T.; Thykier-Nielsen, S.; Zlatev, Z.

    1997-01-01

    Atmospheric transport of air pollutants is, in principle, a well understood process. If information about the state of the atmosphere is given in all details (infinitely accurate information about wind speed, etc.) and infinitely fast computers are available, then the advection equation could in principle be solved exactly. This is, however, not the case: discretization of the equations and input data introduces some uncertainties and errors in the results. Therefore, many different issues have to be carefully studied in order to diminish these uncertainties and to develop an accurate transport model. Some of these are, e.g., the numerical treatment of the transport equation, the accuracy of the mean meteorological input fields, and parameterizations of sub-grid scale phenomena (e.g., parameterizations of the 2nd- and higher-order turbulence terms in order to reach closure in the perturbation equation). A tracer model for studying transport and dispersion of air pollution caused by a single but strong source is under development. The model simulations from the first ETEX release illustrate the differences caused by using various analyzed fields directly in the tracer model or using a meteorological driver. Also, different parameterizations of the mixing height and the vertical exchange are compared. (author)

  15. An automated method for accurate vessel segmentation

    Science.gov (United States)

    Yang, Xin; Liu, Chaoyue; Le Minh, Hung; Wang, Zhiwei; Chien, Aichi; Tim Cheng, Kwang-Ting

    2017-05-01

    Vessel segmentation is a critical task for various medical applications, such as diagnosis assistance of diabetic retinopathy, quantification of cerebral aneurysm’s growth, and guiding surgery in neurosurgical procedures. Despite technology advances in image segmentation, existing methods still suffer from low accuracy for vessel segmentation in two challenging yet common scenarios in clinical usage: (1) regions with a low signal-to-noise-ratio (SNR), and (2) at vessel boundaries disturbed by adjacent non-vessel pixels. In this paper, we present an automated system which can achieve highly accurate vessel segmentation for both 2D and 3D images even under these challenging scenarios. Three key contributions achieved by our system are: (1) a progressive contrast enhancement method to adaptively enhance contrast of challenging pixels that were otherwise indistinguishable, (2) a boundary refinement method to effectively improve segmentation accuracy at vessel borders based on Canny edge detection, and (3) a content-aware region-of-interests (ROI) adjustment method to automatically determine the locations and sizes of ROIs which contain ambiguous pixels and demand further verification. Extensive evaluation of our method is conducted on both 2D and 3D datasets. On a public 2D retinal dataset (named DRIVE (Staal 2004 IEEE Trans. Med. Imaging 23 501-9)) and our 2D clinical cerebral dataset, our approach achieves superior performance to the state-of-the-art methods including a vesselness based method (Frangi 1998 Int. Conf. on Medical Image Computing and Computer-Assisted Intervention) and an optimally oriented flux (OOF) based method (Law and Chung 2008 European Conf. on Computer Vision). An evaluation on 11 clinical 3D CTA cerebral datasets shows that our method can achieve 94% average accuracy with respect to the manual segmentation reference, which is 23% to 33% better than the five baseline methods (Yushkevich 2006 Neuroimage 31 1116-28; Law and Chung 2008

  16. Seamless Requirements

    OpenAIRE

    Naumchev, Alexandr; Meyer, Bertrand

    2017-01-01

    Popular notations for functional requirements specifications frequently ignore developers' needs, target specific development models, or require translation of requirements into tests for verification; the result can be out-of-sync or downright incompatible artifacts. Seamless Requirements, a new approach to specifying functional requirements, contributes to developers' understanding of requirements and to software quality regardless of the process, while the process itself becomes lighter...

  17. A Machine Learned Classifier That Uses Gene Expression Data to Accurately Predict Estrogen Receptor Status

    Science.gov (United States)

    Bastani, Meysam; Vos, Larissa; Asgarian, Nasimeh; Deschenes, Jean; Graham, Kathryn; Mackey, John; Greiner, Russell

    2013-01-01

    Background Selecting the appropriate treatment for breast cancer requires accurately determining the estrogen receptor (ER) status of the tumor. However, the standard for determining this status, immunohistochemical analysis of formalin-fixed paraffin embedded samples, suffers from numerous technical and reproducibility issues. Assessment of ER-status based on RNA expression can provide more objective, quantitative and reproducible test results. Methods To learn a parsimonious RNA-based classifier of hormone receptor status, we applied a machine learning tool to a training dataset of gene expression microarray data obtained from 176 frozen breast tumors, whose ER-status was determined by applying ASCO-CAP guidelines to standardized immunohistochemical testing of formalin fixed tumor. Results This produced a three-gene classifier that can predict the ER-status of a novel tumor, with a cross-validation accuracy of 93.17±2.44%. When applied to an independent validation set and to four other public databases, some on different platforms, this classifier obtained over 90% accuracy in each. In addition, we found that this prediction rule separated the patients' recurrence-free survival curves with a hazard ratio lower than the one based on the IHC analysis of ER-status. Conclusions Our efficient and parsimonious classifier lends itself to high throughput, highly accurate and low-cost RNA-based assessments of ER-status, suitable for routine high-throughput clinical use. This analytic method provides a proof-of-principle that may be applicable to developing effective RNA-based tests for other biomarkers and conditions. PMID:24312637

  18. A machine learned classifier that uses gene expression data to accurately predict estrogen receptor status.

    Directory of Open Access Journals (Sweden)

    Meysam Bastani

    Full Text Available BACKGROUND: Selecting the appropriate treatment for breast cancer requires accurately determining the estrogen receptor (ER) status of the tumor. However, the standard for determining this status, immunohistochemical analysis of formalin-fixed paraffin embedded samples, suffers from numerous technical and reproducibility issues. Assessment of ER-status based on RNA expression can provide more objective, quantitative and reproducible test results. METHODS: To learn a parsimonious RNA-based classifier of hormone receptor status, we applied a machine learning tool to a training dataset of gene expression microarray data obtained from 176 frozen breast tumors, whose ER-status was determined by applying ASCO-CAP guidelines to standardized immunohistochemical testing of formalin fixed tumor. RESULTS: This produced a three-gene classifier that can predict the ER-status of a novel tumor, with a cross-validation accuracy of 93.17±2.44%. When applied to an independent validation set and to four other public databases, some on different platforms, this classifier obtained over 90% accuracy in each. In addition, we found that this prediction rule separated the patients' recurrence-free survival curves with a hazard ratio lower than the one based on the IHC analysis of ER-status. CONCLUSIONS: Our efficient and parsimonious classifier lends itself to high throughput, highly accurate and low-cost RNA-based assessments of ER-status, suitable for routine high-throughput clinical use. This analytic method provides a proof-of-principle that may be applicable to developing effective RNA-based tests for other biomarkers and conditions.
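The cross-validation protocol, if not the actual three-gene classifier, can be sketched on synthetic expression data. The nearest-centroid rule, the simulated class shift, and the helper name below are assumptions standing in for the machine-learned classifier and the real microarray data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the microarray data: 40 tumors, 3 genes, with
# ER-positive tumors (label 1) shifted in mean expression (all assumptions).
n = 40
labels = np.repeat([0, 1], n // 2)
X = rng.normal(0.0, 1.0, (n, 3))
X[labels == 1] += 2.0

def nearest_centroid_loo(X, y):
    """Leave-one-out cross-validation accuracy of a nearest-centroid rule,
    a minimal stand-in for the learned three-gene classifier."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i                 # hold out sample i
        c0 = X[mask & (y == 0)].mean(axis=0)
        c1 = X[mask & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
        correct += int(pred == y[i])
    return correct / len(y)

acc = nearest_centroid_loo(X, labels)                 # high, but data-dependent
```

Holding out each sample before computing the centroids is what makes the accuracy estimate honest; fitting on all samples and then scoring them would overstate performance.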

  19. Treatment of municipal wastewater in full-scale on-site sand filter reduces BOD efficiently but does not reach requirements for nitrogen and phosphorus removal.

    Science.gov (United States)

    Laaksonen, Petteri; Sinkkonen, Aki; Zaitsev, Gennadi; Mäkinen, Esa; Grönroos, Timo; Romantschuk, Martin

    2017-04-01

    A traditional sand filter for treatment of household wastewater was constructed in the fall of 2012 at Biolinja 12, Turku, Finland. Construction work was led and monitored by an authorized wastewater treatment consultant. The filter was placed on a field bordered by open ditches from all sides in order to collect excess rain and snowmelt waters. The filter was constructed and insulated from the environment so that all outflowing water was accounted for. Untreated, mainly municipal, wastewater from Varissuo suburb was pumped from a sewer separately via three septic tanks (volume = 1 m³ each) into the filters. Normally, wastewater was distributed to ground filters automatically according to a pre-programmed schedule. Initially, the daily flow was 1200 L day⁻¹ to reflect the average organic load of a household of five persons (load: ca 237 g day⁻¹ BOD; 73 g day⁻¹ total N; and 10.4 g day⁻¹ total P). Later in the test, the flow rate was decreased first to 900 and then to 600 L day⁻¹ to better reflect the average volume produced by five persons. Volumes of inlet wastewater as well as treated water were monitored by magnetic flow meters. Samples were withdrawn from the inlet water, from the water entering the filters after the third septic tank, and from the outflowing water. After an initial adaptation time, the reductions in BOD and chemical oxygen demand were constantly between 92 and 98%, showing that the biological degradation process in the filters functioned optimally and clearly complied with the national and EU standards. The reduction in total nitrogen and total phosphorus, however, reached required levels only during the first months of testing, apparently when buildup of microbial biomass was still ongoing. After this initial period of 3 months showing satisfactory reduction levels, the reduction of total nitrogen varied between 5 and 25% and total phosphorus mostly between 50 and 65%. Nitrification was efficient in the filter, but as indicated

  20. Optimising conventional treatment of domestic waste water: quality, required surface area, solid waste minimisation and biogas production for medium and small-scale applications

    CSIR Research Space (South Africa)

    Szewczuk, S

    2010-09-01

    Full Text Available Municipal waste water, or sewage, is a combination of domestic and industrial effluent. The increasing volume of sewage due to urbanisation and economic growth places pressure on the treatment performance of existing waste treatment systems...

  1. Accurate diode behavioral model with reverse recovery

    Science.gov (United States)

    Banáš, Stanislav; Divín, Jan; Dobeš, Josef; Paňko, Václav

    2018-01-01

    This paper deals with a comprehensive behavioral model of the p-n junction diode containing the reverse recovery effect, applicable to all standard SPICE simulators supporting the Verilog-A language. The model has been successfully used in several production designs, which require its full complexity, robustness and a set of tuning parameters comparable with the standard compact SPICE diode model. Like a standard compact model, it is scalable with area and temperature and can be used as a stand-alone diode or as part of a more complex device macro-model, e.g. LDMOS, JFET, bipolar transistor. The paper briefly presents the state of the art, followed by a chapter describing the model development and the solutions achieved. During precise model verification, some of these solutions were found to be non-robust or poorly converging and were replaced by more robust ones, as demonstrated in the paper. As model validation, measurement results from different technologies and different devices are compared with simulations using the new behavioral model. The comparison of model validation in the time and frequency domains demonstrates that the implemented reverse recovery effect, with correctly extracted parameters, improves the model simulation results not only in switching from the ON to the OFF state, which is often published, but also in its impedance/admittance frequency dependence in the GHz range. Finally, the model parameter extraction and a comparison with SPICE compact models containing the reverse recovery effect are presented.
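A much-reduced sketch of the reverse recovery effect (a single charge-storage ODE, not the paper's full Verilog-A model) shows why stored charge delays diode turn-off; all component values below are assumptions:

```python
import math

# Charge-storage sketch of reverse recovery (not the paper's Verilog-A model):
# the stored charge q obeys dq/dt = i - q/tau. All values are assumptions.
tau = 50e-9             # assumed effective carrier lifetime, 50 ns
I_F, I_R = 1.0, 0.5     # forward current before switching; reverse pull current

dt = tau / 5000.0
q = tau * I_F           # steady-state stored charge under forward bias
t = 0.0
while q > 0.0:          # storage phase: charge swept out at constant -I_R
    q += dt * (-I_R - q / tau)
    t += dt

# Analytic storage time for this simple model: t_s = tau * ln(1 + I_F/I_R).
# Until t_s the diode still conducts in reverse, which is the recovery delay
# that the behavioral model must capture in both time and frequency domains.
t_s_exact = tau * math.log(1.0 + I_F / I_R)
```

The simulated turn-off time tracks the analytic storage time; a production model adds the softness of the current tail, temperature and area scaling, and parameter extraction on top of this basic mechanism.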

  2. Accurate, fully-automated NMR spectral profiling for metabolomics.

    Directory of Open Access Journals (Sweden)

    Siamak Ravanbakhsh

    Full Text Available Many diseases cause significant changes to the concentrations of small molecules (a.k.a. metabolites) that appear in a person's biofluids, which means such diseases can often be readily detected from a person's "metabolic profile"-i.e., the list of concentrations of those metabolites. This information can be extracted from a biofluid's Nuclear Magnetic Resonance (NMR) spectrum. However, due to its complexity, NMR spectral profiling has remained manual, resulting in slow, expensive and error-prone procedures that have hindered clinical and industrial adoption of metabolomics via NMR. This paper presents a system, BAYESIL, which can quickly, accurately, and autonomously produce a person's metabolic profile. Given a 1D 1H NMR spectrum of a complex biofluid (specifically serum or cerebrospinal fluid), BAYESIL can automatically determine the metabolic profile. This requires first performing several spectral processing steps, then matching the resulting spectrum against a reference compound library, which contains the "signatures" of each relevant metabolite. BAYESIL views spectral matching as an inference problem within a probabilistic graphical model that rapidly approximates the most probable metabolic profile. Our extensive studies on a diverse set of complex mixtures, including real biological samples (serum and CSF), defined mixtures and realistic computer-generated spectra, involving > 50 compounds, show that BAYESIL can autonomously find the concentration of NMR-detectable metabolites accurately (~90% correct identification and ~10% quantification error), in less than 5 minutes on a single CPU. These results demonstrate that BAYESIL is the first fully-automatic publicly-accessible system that provides quantitative NMR spectral profiling effectively, with an accuracy on these biofluids that meets or exceeds the performance of trained experts.
We anticipate this tool will usher in high-throughput metabolomics and enable a wealth of new applications of

  3. Accurate fluid force measurement based on control surface integration

    Science.gov (United States)

    Lentink, David

    2018-01-01

    Nonintrusive 3D fluid force measurements are still challenging to conduct accurately for freely moving animals, vehicles, and deforming objects. Two techniques, 3D particle image velocimetry (PIV) and a new technique, the aerodynamic force platform (AFP), address this. Both rely on the control volume integral for momentum; whereas PIV requires numerical integration of flow fields, the AFP performs the integration mechanically based on rigid walls that form the control surface. The accuracy of both PIV and AFP measurements based on the control surface integration is thought to hinge on determining the unsteady body force associated with the acceleration of the volume of displaced fluid. Here, I introduce a set of non-dimensional error ratios to show which fluid and body parameters make the error negligible. The unsteady body force is insignificant in all conditions where the average density of the body is much greater than the density of the fluid, e.g., in gas. Whenever a strongly deforming body experiences significant buoyancy and acceleration, the error is significant. Remarkably, this error can be entirely corrected for with an exact factor provided that the body has a sufficiently homogenous density or acceleration distribution, which is common in liquids. The correction factor for omitting the unsteady body force, 1 − ρf/(ρb + ρf), depends only on the fluid density, ρf, and the body density, ρb. Whereas these straightforward solutions work even at the liquid-gas interface in a significant number of cases, they do not work for generalized bodies undergoing buoyancy in combination with appreciable body density inhomogeneity, volume change (PIV), or volume rate-of-change (PIV and AFP). In these less common cases, the 3D body shape needs to be measured and resolved in time and space to estimate the unsteady body force. The analysis shows that accounting for the unsteady body force is straightforward to non
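Assuming the correction factor takes the form 1 − ρf/(ρb + ρf) = ρb/(ρb + ρf) (a reconstruction; the formula in this record is damaged), a short sketch shows why the error is negligible in gas, where ρb ≫ ρf, but substantial for near-neutrally buoyant bodies in liquid. The densities below are illustrative assumptions, not values from the paper:

```python
# Sketch of the reconstructed correction factor (assumed form):
# factor = 1 - rho_f/(rho_b + rho_f) = rho_b/(rho_b + rho_f).
def correction_factor(rho_b, rho_f):
    return 1.0 - rho_f / (rho_b + rho_f)

# Illustrative densities in kg/m^3 (assumptions, not values from the paper):
bird_in_air = correction_factor(rho_b=1000.0, rho_f=1.2)       # close to 1
fish_in_water = correction_factor(rho_b=1050.0, rho_f=1000.0)  # far from 1
```

For the body in air the factor is within a fraction of a percent of unity, so omitting the unsteady body force is harmless; for the body in water it is near 0.5, so the omission roughly halves the apparent force unless corrected.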

  4. Computed tomography image guidance for more accurate repair of anterior table frontal sinus fractures.

    Science.gov (United States)

    Lee, Justine C; Andrews, Brian T; Abdollahi, Hamid; Lambi, Alex G; Pereira, Clifford T; Bradley, James P

    2015-01-01

    Anterior table frontal sinus fractures accompanied by nasofrontal duct injury require surgical correction. Extracranial approaches for anterior table osteotomies have traditionally used plain radiograph templates or a "cut-as-you-go" technique. We compared these methods with a newer technique utilizing computed tomography (CT)-guided imaging. Data of patients with acute, traumatic anterior table frontal sinus fractures and nasofrontal duct injury between 2009 and 2013 were reviewed (n = 29). Treatment groups compared were as follows: (1) CT image guidance, (2) plain radiograph template, and (3) cut-as-you-go. Frontal sinus obliteration was performed in all cases. Demographics, operative times, length of stay, complications, and osteotomy accuracy were recorded. Similar demographics, concomitant injuries, operative times, and length of stay among groups were noted. No patients in the CT-guided group had perioperative complications including intraoperative injury of the dura, cerebrum, or orbital structures. In the plain radiograph template group, 25% of patients had inadvertent dural exposure, and 12.5% required take-back to the operating room for cranial bone graft donor site hematoma. In the cut-as-you-go group, 11% required hardware removal for exposure. There were no cases of cerebrospinal fluid leak, meningitis, or mucocele in any group (follow-up, 29.2 months). The CT image guidance group had the most accuracy of the osteotomies (95%) compared with plain radiograph template (85%) and the cut-as-you-go group (72.5%). A new technique using CT image guidance for traumatic frontal sinus fractures repair offers more accurate osteotomy and elevation of the anterior table without increased operative times or untoward sequelae.

  5. Minimum required number of specimen records to develop accurate species distribution models

    NARCIS (Netherlands)

    Proosdij, van A.S.J.; Sosef, M.S.M.; Wieringa, J.J.; Raes, N.

    2016-01-01

    Species distribution models (SDMs) are widely used to predict the occurrence of species. Because SDMs generally use presence-only data, validation of the predicted distribution and assessing model accuracy is challenging. Model performance depends on both sample size and species’ prevalence, being

  6. Minimum required number of specimen records to develop accurate species distribution models

    NARCIS (Netherlands)

    Proosdij, van A.S.J.; Sosef, M.S.M.; Wieringa, Jan; Raes, N.

    2015-01-01

    Species Distribution Models (SDMs) are widely used to predict the occurrence of species. Because SDMs generally use presence-only data, validation of the predicted distribution and assessing model accuracy is challenging. Model performance depends on both sample size and species’ prevalence, being

  7. Is there an accurate method to measure metabolic requirement of institutionalized children with spastic cerebral palsy?

    Science.gov (United States)

    Lee, Siu Pik Peggy; Cheung, Ka Ming; Ko, Chun Hung; Chiu, Heung Chin

    2011-07-01

    This study hypothesized that there is no difference between energy expenditure measured by indirect calorimetry (IC) and that estimated by predicted formulas compared with the actual intake of children with spastic cerebral palsy (CP). Fifteen children aged 3 to 18 years with spastic CP and associated complications were recruited. IC was used to measure mean energy expenditure (MEE) compared with 3 predicted equations for energy expenditure (PEE), including body surface area (BSA), the recommended daily allowance (RDA), and an equation designed specifically for patients with CP. Friedman and paired t tests were used to examine the variance between PEE and MEE. Intraclass correlation coefficient (ICC) was used to explore the correlation between MEE and PEE. The pretest and posttest core temperatures were compared using the Wilcoxon signed rank test. Mean ± standard deviation MEE was 800.5 ± 295.7 kcal/d; BSA was 1,213.4 ± 171.2 kcal/d; RDA was 1,928.1 ± 341.0 kcal/d; and CP was 1,603.1 ± 215.8 kcal/d. The actual diet intake provided 935.3 ± 222.9 kcal/d. Post hoc analysis revealed a significant difference between mean MEE and PEE (P children with spastic CP.

  8. How many atoms are required to characterize accurately trajectory fluctuations of a protein?

    Science.gov (United States)

    Cukier, Robert I

    2010-06-28

    Large molecules, whose thermal fluctuations sample a complex energy landscape, exhibit motions on an extended range of space and time scales. Principal component analysis (PCA) is often used to extract dominant motions that in proteins are typically domain motions. These motions are captured in the large eigenvalue (leading) principal components. There is also information in the small eigenvalues, arising from approximate linear dependencies among the coordinates. These linear dependencies suggest that instead of using all the atom coordinates to represent a trajectory, it should be possible to use a reduced set of coordinates with little loss in the information captured by the large eigenvalue principal components. In this work, methods that can monitor the correlation (overlap) between a reduced set of atoms and any number of retained principal components are introduced. For application to trajectory data generated by simulations, where the overall translational and rotational motion needs to be eliminated before PCA is carried out, some difficulties with the overlap measures arise and methods are developed to overcome them. The overlap measures are evaluated for a trajectory generated by molecular dynamics for the protein adenylate kinase, which consists of a stable, core domain, and two more mobile domains, referred to as the LID domain and the AMP-binding domain. The use of reduced sets corresponding, for the smallest set, to one-eighth of the alpha carbon (CA) atoms relative to using all the CA atoms is shown to predict the dominant motions of adenylate kinase. The overlap between using all the CA atoms and all the backbone atoms is essentially unity for a sum over PCA modes that effectively capture the exact trajectory. A reduction to a few atoms (three in the LID and three in the AMP-binding domain) shows that at least the first principal component, characterizing a large part of the LID-binding and AMP-binding motion, is well described. 
Based on these results, the overlap criterion should be applicable as a guide to postulating and validating coarse-grained descriptions of generic biomolecular assemblies.
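The overlap idea can be sketched with a toy trajectory: generate frames dominated by one collective mode, compute the leading principal component from all coordinates and from a reduced subset, and measure their agreement. The synthetic data, the every-4th-coordinate subset, and the simple inner-product overlap below are assumptions, not the paper's exact measures:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "trajectory": 500 frames of 60 coordinates dominated by one
# collective mode along a fixed direction v, plus small uncorrelated noise.
frames, ncoord = 500, 60
v = rng.normal(size=ncoord)
v /= np.linalg.norm(v)
amp = rng.normal(0.0, 3.0, frames)               # large-amplitude mode
traj = np.outer(amp, v) + rng.normal(0.0, 0.1, (frames, ncoord))

def leading_pc(data):
    """First principal component of a (frames x coords) data matrix."""
    centered = data - data.mean(axis=0)
    cov = centered.T @ centered / (len(data) - 1)
    w, V = np.linalg.eigh(cov)                   # eigenvalues ascending
    return V[:, -1]                              # eigenvector of largest one

pc_full = leading_pc(traj)
subset = np.arange(0, ncoord, 4)                 # keep every 4th coordinate
pc_sub = leading_pc(traj[:, subset])

# Overlap between the subset PC and the full PC restricted to the subset;
# a value near 1 means the reduced set reproduces the dominant motion.
restricted = pc_full[subset] / np.linalg.norm(pc_full[subset])
overlap = abs(restricted @ pc_sub)
```

Because the dominant mode is spread over all coordinates, even a quarter of them recovers it almost perfectly here, which is the intuition behind predicting adenylate kinase's domain motions from one-eighth of the CA atoms.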

  9. Integrated multidimensional analysis is required for accurate prognostic biomarkers in colorectal cancer.

    Directory of Open Access Journals (Sweden)

    Marisa Mariani

    Full Text Available Colorectal cancer (CRC) is one of the deadliest diseases in Western countries. In order to develop prognostic biomarkers for CRC aggressiveness, we analyzed retrospectively 267 CRC patients via a novel, multidimensional biomarker platform. Using nanofluidic technology for qPCR analysis and quantitative fluorescent immunohistochemistry for protein analysis, we assessed 33 microRNAs, 124 mRNAs and 9 protein antigens. Analysis was conducted in each single dimension (microRNA, gene or protein) using both the multivariate Cox model and the Kaplan-Meier method. Thereafter, we simplified the censored survival data into binary response data (aggressive vs. non-aggressive cancer). Subsequently, we integrated the data into a diagnostic score using sliced inverse regression for sufficient dimension reduction. Accuracy was assessed using the area under the receiver operating characteristic curve (AUC). Single-dimension analysis led to the discovery of individual factors that were significant predictors of outcome. These included seven specific microRNAs, four genes, and one protein. When these factors were quantified individually as predictors of aggressive disease, the highest demonstrable area under the curve (AUC) was 0.68. By contrast, when all results from single dimensions were combined into integrated biomarkers, AUCs were dramatically increased, with values approaching and even exceeding 0.9. Single-dimension analysis generates statistically significant predictors, but their predictive strengths are suboptimal for clinical utility. A novel, multidimensional integrated approach overcomes these deficiencies. Newly derived integrated biomarkers have the potential to meaningfully guide the selection of therapeutic strategies for individual patients while elucidating molecular mechanisms driving disease progression.
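The gain from integration can be illustrated with a toy version of the pipeline: several weakly informative marker dimensions are combined into a single score and the AUCs compared. The mean-of-standardized-markers combination below is a simple stand-in for the sliced inverse regression used in the paper, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic:
    the fraction of (positive, negative) pairs ranked correctly."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()   # ties count one half
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Synthetic cohort: 5 weakly informative marker dimensions (assumptions).
n = 200
labels = rng.integers(0, 2, n)
markers = np.array([labels * 0.5 + rng.normal(0.0, 1.0, n)
                    for _ in range(5)]).T

# Best single-dimension AUC vs. a naively integrated score (the paper
# uses sliced inverse regression instead of this simple mean).
auc_single = max(auc(markers[:, j], labels) for j in range(5))
z = (markers - markers.mean(axis=0)) / markers.std(axis=0)
auc_combined = auc(z.mean(axis=1), labels)
```

Averaging the standardized dimensions shrinks the independent noise while preserving the shared signal, so the integrated AUC exceeds any single dimension, which is the qualitative effect the paper reports at larger scale.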

  10. Variational mode decomposition based approach for accurate classification of color fundus images with hemorrhages

    Science.gov (United States)

    Lahmiri, Salim; Shmuel, Amir

    2017-11-01

    Diabetic retinopathy is a disease that can cause a loss of vision. An early and accurate diagnosis helps to improve treatment of the disease and prognosis. One of the earliest characteristics of diabetic retinopathy is the appearance of retinal hemorrhages. The purpose of this study is to design a fully automated system for the detection of hemorrhages in a retinal image. In the first stage of our proposed system, a retinal image is processed with variational mode decomposition (VMD) to obtain the first variational mode, which captures the high frequency components of the original image. In the second stage, four texture descriptors are extracted from the first variational mode. Finally, a classifier trained with all computed texture descriptors is used to distinguish between images of healthy and unhealthy retinas with hemorrhages. Experimental results showed evidence of the effectiveness of the proposed system for detection of hemorrhages in the retina, since a perfect detection rate was achieved. Our proposed system for detecting diabetic retinopathy is simple and easy to implement. It requires only short processing time, and it yields higher accuracy in comparison with previously proposed methods for detecting diabetic retinopathy.

  11. Can student health professionals accurately estimate alcohol content in commonly occurring drinks?

    Science.gov (United States)

    Sinclair, Julia; Searle, Emma

    2016-08-01

    Correct identification of alcohol as a contributor to, or comorbidity of, many psychiatric diseases requires health professionals to be competent and confident to take an accurate alcohol history. Being able to estimate (or calculate) the alcohol content in commonly consumed drinks is a prerequisite for quantifying levels of alcohol consumption. The aim of this study was to assess this ability in medical and nursing students. A cross-sectional survey of 891 medical and nursing students across different years of training was conducted. Students were asked the alcohol content of 10 different alcoholic drinks by seeing a slide of the drink (with picture, volume and percentage of alcohol by volume) for 30 s. Overall, the mean number of correctly estimated drinks (out of the 10 tested) was 2.4, increasing to just over 3 if a 10% margin of error was used. Wine and premium strength beers were underestimated by over 50% of students. Those who drank alcohol themselves, or who were further on in their clinical training, did better on the task, but overall the levels remained low. Knowledge of, or the ability to work out, the alcohol content of commonly consumed drinks is poor, and further research is needed to understand the reasons for this and the impact this may have on the likelihood to undertake screening or initiate treatment.
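The calculation the students were being tested on is simple in principle. In UK terms (the study's setting), one unit is 10 ml (8 g) of pure ethanol, so units = volume (ml) × ABV (%) / 1000; the serving sizes below are illustrative assumptions:

```python
# UK alcohol units: one unit is 10 ml (8 g) of pure ethanol, so
# units = volume (ml) * ABV (%) / 1000. Serving sizes are illustrative.
def alcohol_units(volume_ml, abv_percent):
    return volume_ml * abv_percent / 1000.0

large_wine = alcohol_units(250, 13.0)   # large glass of 13% wine -> 3.25 units
pint_beer = alcohol_units(568, 5.2)     # pint of 5.2% premium beer
```

A large glass of wine or a pint of premium beer each contain about three units, which is consistent with the finding that students underestimated exactly these drinks.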

  12. Can student health professionals accurately estimate alcohol content in commonly occurring drinks?

    Science.gov (United States)

    Sinclair, Julia; Searle, Emma

    2016-01-01

    Objectives: Correct identification of alcohol as a contributor to, or comorbidity of, many psychiatric diseases requires health professionals to be competent and confident to take an accurate alcohol history. Being able to estimate (or calculate) the alcohol content in commonly consumed drinks is a prerequisite for quantifying levels of alcohol consumption. The aim of this study was to assess this ability in medical and nursing students. Methods: A cross-sectional survey of 891 medical and nursing students across different years of training was conducted. Students were asked the alcohol content of 10 different alcoholic drinks by seeing a slide of the drink (with picture, volume and percentage of alcohol by volume) for 30 s. Results: Overall, the mean number of correctly estimated drinks (out of the 10 tested) was 2.4, increasing to just over 3 if a 10% margin of error was used. Wine and premium strength beers were underestimated by over 50% of students. Those who drank alcohol themselves, or who were further on in their clinical training, did better on the task, but overall the levels remained low. Conclusions: Knowledge of, or the ability to work out, the alcohol content of commonly consumed drinks is poor, and further research is needed to understand the reasons for this and the impact this may have on the likelihood to undertake screening or initiate treatment. PMID:27536344

  13. Accurate mass error correction in liquid chromatography time-of-flight mass spectrometry based metabolomics

    NARCIS (Netherlands)

    Mihaleva, V.V.; Vorst, O.F.J.; Maliepaard, C.A.; Verhoeven, H.A.; Vos, de C.H.; Hall, R.D.; Ham, van R.C.H.J.

    2008-01-01

    Compound identification and annotation in (untargeted) metabolomics experiments based on accurate mass require the highest possible accuracy of the mass determination. Experimental LC/TOF-MS platforms equipped with a time-to-digital converter (TDC) give the best mass estimate for those mass signals

  14. Using an FPGA for Fast Bit Accurate SoC Simulation

    NARCIS (Netherlands)

    Wolkotte, P.T.; Holzenspies, P.K.F.; Smit, Gerardus Johannes Maria

    In this paper we describe a sequential simulation method to simulate large parallel homo- and heterogeneous systems on a single FPGA. The method is applicable to parallel systems where lengthy cycle- and bit-accurate simulations are required. It is particularly designed for systems that do not fit

  15. Accurate continuous geographic assignment from low- to high-density SNP data

    DEFF Research Database (Denmark)

    Guillot, Gilles; Jónsson, Hákon; Hinge, Antoine

    2016-01-01

    hotspot areas can be located. Such approaches, however, require fast and accurate geographical assignment methods. Results : We introduce a novel statistical method for geopositioning individuals of unknown origin from genotypes. Our method is based on a geostatistical model trained with a dataset...

  16. High performance liquid chromatography method for rapid and accurate determination of homocysteine in plasma and serum

    DEFF Research Database (Denmark)

    Vester, Birte; Rasmussen, K

    1991-01-01

    Determination of homocysteine in plasma or serum for evaluation of cobalamin and folate deficiency is becoming an important diagnostic procedure. Accurate, rapid and low cost methods for measuring homocysteine are therefore required. We have improved an HPLC method and made it suitable for clinical...

  17. Accurate stereochemistry for two related 22,26-epiminocholestene derivatives

    Energy Technology Data Exchange (ETDEWEB)

    Vega-Baez, José Luis; Sandoval-Ramírez, Jesús; Meza-Reyes, Socorro; Montiel-Smith, Sara; Gómez-Calvario, Victor [Facultad de Ciencias Químicas, Benemérita Universidad Autónoma de Puebla, Ciudad Universitaria, San Manuel, 72000 Puebla, Pue. (Mexico); Bernès, Sylvain, E-mail: sylvain-bernes@hotmail.com [DEP Facultad de Ciencias Químicas, UANL, Guerrero y Progreso S/N, Col. Treviño, 64570 Monterrey, NL (Mexico); Facultad de Ciencias Químicas, Benemérita Universidad Autónoma de Puebla, Ciudad Universitaria, San Manuel, 72000 Puebla, Pue. (Mexico)

    2008-04-01

Regioselective opening of ring E of solasodine under various conditions afforded (25R)-22,26-epiminocholesta-5,22(N)-diene-3β,16β-diyl diacetate (previously known as 3,16-diacetyl pseudosolasodine B), C31H47NO4, or (22S,25R)-16β-hydroxy-22,26-epiminocholesta-5-en-3β-yl acetate (a derivative of the naturally occurring alkaloid oblonginine), C29H47NO3. In both cases, the reactions are carried out with retention of chirality at the C16, C20 and C25 stereogenic centers, which are found to be S, S and R, respectively. Although pseudosolasodine was synthesized 50 years ago, these accurate assignments clarify some controversial points about the actual stereochemistry for these alkaloids. This is of particular importance in the case of oblonginine, since this compound is currently under consideration for the treatment of aphasia arising from apoplexy; the present study defines a diastereoisomerically pure compound for pharmacological studies.

  18. Accurate stereochemistry for two related 22,26-epiminocholestene derivatives

    International Nuclear Information System (INIS)

    Vega-Baez, José Luis; Sandoval-Ramírez, Jesús; Meza-Reyes, Socorro; Montiel-Smith, Sara; Gómez-Calvario, Victor; Bernès, Sylvain

    2008-01-01

Regioselective opening of ring E of solasodine under various conditions afforded (25R)-22,26-epiminocholesta-5,22(N)-diene-3β,16β-diyl diacetate (previously known as 3,16-diacetyl pseudosolasodine B), C31H47NO4, or (22S,25R)-16β-hydroxy-22,26-epiminocholesta-5-en-3β-yl acetate (a derivative of the naturally occurring alkaloid oblonginine), C29H47NO3. In both cases, the reactions are carried out with retention of chirality at the C16, C20 and C25 stereogenic centers, which are found to be S, S and R, respectively. Although pseudosolasodine was synthesized 50 years ago, these accurate assignments clarify some controversial points about the actual stereochemistry for these alkaloids. This is of particular importance in the case of oblonginine, since this compound is currently under consideration for the treatment of aphasia arising from apoplexy; the present study defines a diastereoisomerically pure compound for pharmacological studies.

  19. Issues related to accurate classification of buttocks wounds.

    Science.gov (United States)

    Mahoney, Mary; Rozenboom, Barbara; Doughty, Dorothy; Smith, Hayden

    2011-01-01

    This study was designed to determine the level of agreement among wound care nurses when asked to classify the etiology of 9 wounds located on the buttocks and within the intergluteal cleft. Study subjects were 100 wound care nurses who responded to an invitation placed on the WOCN Society's wound care forum and to an e-mail sent to members of the WOCN Iowa Affiliate. Respondents were asked to view 9 unique wound photos and to determine whether the primary etiologic factor was pressure, moisture, incontinence-associated dermatitis, or skin tear. Subjects were given no background information regarding the patients but were allowed to add comments. The overall κ analysis of the 9 photos combined was 0.1708 (99% confidence interval, 0.1630-0.1786). The testing of the overall κ for the 9 photos equaling "0" or mere chance produced a P wound care nurses' classifications of photo subgroups or for all 9 photos analyzed together. Accurate wound classification impacts not only treatment decisions but also reimbursement, risk of litigation, and accuracy of data regarding prevalence and incidence of pressure ulcers. It is, therefore, critical for professional societies such as the WOCN to begin development of consensus definitions and guidelines to ensure consistency and accuracy in wound classification.

  20. Highly accurate symplectic element based on two variational principles

    Science.gov (United States)

    Qing, Guanghui; Tian, Jia

    2018-02-01

For the stability requirement of numerical resultants, the mathematical theory of classical mixed methods is relatively complex. However, generalized mixed methods are automatically stable, and their building process is simple and straightforward. In this paper, based on the seminal idea of the generalized mixed methods, a simple, stable, and highly accurate 8-node noncompatible symplectic element (NCSE8) was developed by combining the modified Hellinger-Reissner mixed variational principle with the minimum energy principle. To ensure the accuracy of in-plane stress results, a simultaneous equation approach was also suggested. Numerical experimentation shows that the accuracy of the stress results of NCSE8 is nearly the same as that of displacement methods, and they are in good agreement with the exact solutions when the mesh is relatively fine. NCSE8 has the advantages of a clear concept, easy implementation in a finite element computer program, higher accuracy, and wide applicability to various linear elasticity problems with compressible and nearly incompressible materials. NCSE8 may prove even more advantageous for fracture problems due to its better accuracy of stresses.

  1. An Accurate Model of Mercury's Spin-Orbit Motion

    Science.gov (United States)

    Rambaux, Nicolas; Bois, Eric

    2005-01-01

Our work deals with the physical and dynamical causes that induce librations around the equilibrium state defined by the 3:2 spin-orbit resonance of Mercury. In order to integrate the spin-orbit motion of Mercury we have used our gravitational model of the Solar System including the Moon's spin-orbit motion. This model, called SONYR (acronym of Spin-Orbit N-bodY Relativistic), was previously built by Bois, Journet and Vokrouhlicky in accordance with the requirements of the Lunar Laser Ranging observational accuracy. Using the model we have identified and evaluated the main perturbations acting on the spin-orbit motion of Mercury, such as the planetary interactions and the dynamical figure of the planet. Moreover, the complete rotation of Mercury exhibits two proper frequencies, namely 15.825 and 1089 years, and one secular variation of 271043 years. Besides, we have computed the impact of a variation of the greatest principal moment of inertia C/MR2 on the obliquity and on the libration in longitude (1.4 and 0.4 milliarcseconds, respectively, for an increase of 1% in the C/MR2 value). We think that these accurate relations are also significant and useful in the context of the two upcoming missions BepiColombo and MESSENGER.

  2. Highly accurate symplectic element based on two variational principles

    Science.gov (United States)

    Qing, Guanghui; Tian, Jia

    2017-11-01

For the stability requirement of numerical resultants, the mathematical theory of classical mixed methods is relatively complex. However, generalized mixed methods are automatically stable, and their building process is simple and straightforward. In this paper, based on the seminal idea of the generalized mixed methods, a simple, stable, and highly accurate 8-node noncompatible symplectic element (NCSE8) was developed by combining the modified Hellinger-Reissner mixed variational principle with the minimum energy principle. To ensure the accuracy of in-plane stress results, a simultaneous equation approach was also suggested. Numerical experimentation shows that the accuracy of the stress results of NCSE8 is nearly the same as that of displacement methods, and they are in good agreement with the exact solutions when the mesh is relatively fine. NCSE8 has the advantages of a clear concept, easy implementation in a finite element computer program, higher accuracy, and wide applicability to various linear elasticity problems with compressible and nearly incompressible materials. NCSE8 may prove even more advantageous for fracture problems due to its better accuracy of stresses.

  3. Accurate, Automated Detection of Atrial Fibrillation in Ambulatory Recordings

    Science.gov (United States)

    Linker, David T.

    2016-01-01

    Purpose A highly accurate, automated algorithm would facilitate cost-effective screening for asymptomatic atrial fibrillation. This study analyzed a new algorithm and compared to existing techniques. Methods The incremental benefit of each step in refinement of the algorithm was measured, and the algorithm was compared to other methods using the Physionet atrial fibrillation and normal sinus rhythm databases. Results When analyzing segments of 21 RR intervals or less, the algorithm had a significantly higher area under the receiver operating characteristic curve (AUC) than the other algorithms tested. At analysis segment sizes of up to 101 RR intervals, the algorithm continued to have a higher AUC than any of the other methods tested, although the difference from the second best other algorithm was no longer significant, with an AUC of 0.9992 with a 95% confidence interval (CI) of 0.9986–0.9998, versus 0.9986 (CI 0.9978–0.9994). With identical per-subject sensitivity, per-subject specificity of the current algorithm was superior to the other tested algorithms even at 101 RR intervals, with no false positives (CI 0.0%–0.8%) versus 5.3% false positives for the second best algorithm (CI 3.4–7.9%). Conclusions The described algorithm shows great promise for automated screening for atrial fibrillation by reducing false positives requiring manual review, while maintaining high sensitivity. PMID:26850411

  4. Anatomical brain images alone can accurately diagnose chronic neuropsychiatric illnesses.

    Directory of Open Access Journals (Sweden)

    Ravi Bansal

    Full Text Available OBJECTIVE: Diagnoses using imaging-based measures alone offer the hope of improving the accuracy of clinical diagnosis, thereby reducing the costs associated with incorrect treatments. Previous attempts to use brain imaging for diagnosis, however, have had only limited success in diagnosing patients who are independent of the samples used to derive the diagnostic algorithms. We aimed to develop a classification algorithm that can accurately diagnose chronic, well-characterized neuropsychiatric illness in single individuals, given the availability of sufficiently precise delineations of brain regions across several neural systems in anatomical MR images of the brain. METHODS: We have developed an automated method to diagnose individuals as having one of various neuropsychiatric illnesses using only anatomical MRI scans. The method employs a semi-supervised learning algorithm that discovers natural groupings of brains based on the spatial patterns of variation in the morphology of the cerebral cortex and other brain regions. We used split-half and leave-one-out cross-validation analyses in large MRI datasets to assess the reproducibility and diagnostic accuracy of those groupings. RESULTS: In MRI datasets from persons with Attention-Deficit/Hyperactivity Disorder, Schizophrenia, Tourette Syndrome, Bipolar Disorder, or persons at high or low familial risk for Major Depressive Disorder, our method discriminated with high specificity and nearly perfect sensitivity the brains of persons who had one specific neuropsychiatric disorder from the brains of healthy participants and the brains of persons who had a different neuropsychiatric disorder. 
CONCLUSIONS: Although the classification algorithm presupposes the availability of precisely delineated brain regions, our findings suggest that patterns of morphological variation across brain surfaces, extracted from MRI scans alone, can successfully diagnose the presence of chronic neuropsychiatric disorders

  5. Size matters: how accurate is clinical estimation of traumatic wound size?

    Science.gov (United States)

    Peterson, N; Stevenson, H; Sahni, V

    2014-01-01

    The presentation of traumatic wounds is commonplace in the accident & emergency department. Often, these wounds need referral to specialist care, e.g. trauma & orthopaedic, plastic or maxillofacial surgeons. Documentation and communication of the size of the wound can influence management, e.g. Gustilo & Anderson classification of open fractures. Several papers acknowledge the variability in measurement of chronic wounds, but there is no data regarding accuracy of traumatic wound assessment. The authors hypothesised that the estimation of wound size and subsequent communication or documentation was often inaccurate, with high inter-observer variability. A study was designed to assess this hypothesis. A total of 7 scaled images of wounds related to trauma were obtained from an Internet search engine. The questionnaire asked 3 questions regarding mechanism of injury, relevant anatomy and proposed treatment, to simulate real patient assessment. One further question addressed the estimation of wound size. 50 doctors of varying experience across several specialities were surveyed. The images were analysed after data collection had finished to provide appropriate measurements, and compared to the questionnaire results by a researcher blinded to the demographics of the individual. Our results show that there is a high inter-observer variability and inaccuracy in the estimation of wound size. This inaccuracy was directional and affected by gender. Male doctors were more likely to overestimate the size of wounds, whilst their female colleagues were more likely to underestimate size. The estimation of wound size is a common requirement of clinical practice, and inaccurate interpretation of size may influence surgical management. Assessment using estimation was inaccurate, with high inter-observer variability. Assessment of traumatic wounds that require surgical management should be accurately measured, possibly using photography and ruler measurement. Copyright © 2012

  6. Environmental Requirements Management

    Energy Technology Data Exchange (ETDEWEB)

    Cusack, Laura J.; Bramson, Jeffrey E.; Archuleta, Jose A.; Frey, Jeffrey A.

    2015-01-08

CH2M HILL Plateau Remediation Company (CH2M HILL) is the U.S. Department of Energy (DOE) prime contractor responsible for the environmental cleanup of the Hanford Site Central Plateau. As part of this responsibility, CH2M HILL is faced with the task of complying with thousands of environmental requirements which originate from over 200 federal, state, and local laws and regulations, DOE Orders, waste management and effluent discharge permits, Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) response and Resource Conservation and Recovery Act (RCRA) corrective action documents, and official regulatory agency correspondence. The challenge is to manage this vast number of requirements to ensure they are appropriately and effectively integrated into CH2M HILL operations. Ensuring compliance with a large number of environmental requirements relies on an organization’s ability to identify, evaluate, communicate, and verify those requirements. To ensure that compliance is maintained, all changes need to be tracked. CH2M HILL identified that the existing system used to manage environmental requirements was difficult to maintain and that improvements should be made to increase functionality. CH2M HILL established an environmental requirements management procedure and tools to assure that all environmental requirements are effectively and efficiently managed. Having a complete and accurate set of environmental requirements applicable to CH2M HILL operations will promote a more efficient approach to: • Communicating requirements • Planning work • Maintaining work controls • Maintaining compliance

  7. ASElux: An Ultra-Fast and Accurate Allelic Reads Counter.

    Science.gov (United States)

    Miao, Zong; Alvarez, Marcus; Pajukanta, Päivi; Ko, Arthur

    2017-11-23

Mapping bias causes preferential alignment to the reference allele, forming a major obstacle in allele-specific expression (ASE) analysis. The existing methods, such as simulation and SNP-aware alignment, are either inaccurate or relatively slow. To count allelic reads quickly and accurately for ASE analysis, we developed a novel approach, ASElux, which utilizes personal SNP information and counts allelic reads directly from unmapped RNA-sequencing (RNA-seq) data. ASElux significantly reduces runtime by disregarding reads outside single nucleotide polymorphisms (SNPs) during the alignment. When compared to other tools on simulated and experimental data, ASElux achieves a higher accuracy on ASE estimation than non-SNP-aware aligners and requires a much shorter time than the benchmark SNP-aware aligner, GSNAP, with just a slight loss in performance. ASElux can process 40 million read-pairs from an RNA-seq sample and count allelic reads within 10 minutes, which is comparable to directly counting the allelic reads from alignments based on other tools. Furthermore, processing an RNA-seq sample using ASElux in conjunction with a general aligner, such as STAR, is more accurate and still ∼4X faster than STAR+WASP, and ∼33X faster than the lead SNP-aware aligner, GSNAP, making ASElux ideal for ASE analysis of large-scale transcriptomic studies. We applied ASElux to 273 lung RNA-seq samples from GTEx and identified a splice-QTL, rs11078928, in lung, which explains the mechanism underlying an asthma GWAS SNP, rs11078927. Thus, our analysis demonstrated ASE as a highly powerful complementary tool to cis-expression quantitative trait locus (eQTL) analysis. The software can be downloaded from https://drive.google.com/open?id=0B7E7HSjQ-SumQmlPc1Z0aUR5Sk0. a5ko@ucla.edu (Arthur Ko), zmiao@ucla.edu (Zong Miao). Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions

  8. Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics

    Directory of Open Access Journals (Sweden)

    Cecilia Noecker

    2015-03-01

Full Text Available Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV including vaccines and antiretroviral prophylaxis target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have rarely been compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the “standard” mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode of virus production. Second, this mathematical model was not able to accurately describe the change in experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral
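The modified model described above — target cells, an eclipse-phase infected population transitioning into production, and free virus — can be sketched as a small ODE system integrated with Euler steps. All parameter values below are hypothetical placeholders chosen so the basic reproduction number exceeds one; they are not the paper's fitted values:

```python
def simulate_siv(days=15.0, dt=1e-3):
    # State: target cells T, eclipse-phase cells I1, producing cells I2, free virus V
    T, I1, I2, V = 1e6, 0.0, 0.0, 1e-3
    # Hypothetical rates: infection, eclipse exit, producer death,
    # virion production (budding mode), virion clearance
    beta, k, delta, p, c = 2e-7, 1.0, 1.0, 1000.0, 10.0
    for _ in range(int(days / dt)):
        dT  = -beta * T * V
        dI1 = beta * T * V - k * I1     # newly infected, not yet producing
        dI2 = k * I1 - delta * I2       # transition into virus production
        dV  = p * I2 - c * V
        T  += dT * dt; I1 += dI1 * dt; I2 += dI2 * dt; V += dV * dt
    return T, V

# With R0 = beta*T0*p/(c*delta) = 20 > 1, the infection takes off
T_end, V_end = simulate_siv()
```

Splitting infected cells into I1 and I2 is what distinguishes this sketch from the "standard" single-compartment model the paper starts from.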

  9. Fast and accurate calculation of the properties of water and steam for simulation

    International Nuclear Information System (INIS)

    Szegi, Zs.; Gacs, A.

    1990-01-01

A basic-principle simulator was developed at the CRIP, Budapest, for real-time simulation of the transients of WWER-440 type nuclear power plants. An integral part of it is the fast and accurate calculation of the thermodynamic properties of water and steam. To eliminate successive approximations, the model system of the secondary coolant circuit requires binary forms known as inverse functions, which are continuous when crossing the saturation line, and accurate and coherent for all argument combinations. A solution which reduces the computer memory and execution time demand is reported. (author) 36 refs.; 5 figs.; 3 tabs

  10. New simple method for fast and accurate measurement of volumes

    International Nuclear Information System (INIS)

    Frattolillo, Antonio

    2006-01-01

A new simple method is presented, which allows us to measure in just a few minutes, but with reasonable accuracy (error less than 1%), the volume confined inside a generic enclosure, regardless of the complexity of its shape. The technique proposed also allows us to measure the volume of any portion of a complex manifold, including, for instance, pipes and pipe fittings, valves, gauge heads, and so on, without disassembling the manifold at all. For this purpose an airtight variable volume is used, whose volume adjustment can be precisely measured; it has an overall capacity larger than that of the unknown volume. This variable volume is initially filled with a suitable test gas (for instance, air) at a known pressure, carefully measured by means of a high-precision capacitive gauge. By opening a valve, the test gas is allowed to expand into the previously evacuated unknown volume. A feedback control loop reacts to the resulting finite pressure drop, contracting the variable volume until the pressure exactly recovers its initial value. The overall reduction of the variable volume achieved at the end of this process gives a direct measurement of the unknown volume, and definitively gets rid of the problem of dead spaces. The method proposed does not actually require the test gas to be rigorously held at a constant temperature, resulting in a huge simplification compared to the complex arrangements commonly used in metrology (gas expansion method), which can give extremely accurate measurements but require rather expensive equipment and time-consuming procedures, and are therefore impractical in most applications. A simple theoretical analysis of the thermodynamic cycle and the results of experimental tests are described, which demonstrate that, in spite of its simplicity, the method provides a measurement accuracy within 0.5%. The system requires just a few minutes to complete a single measurement, and is ready immediately at the end of the process.
The
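The principle behind the method is Boyle's law under isothermal conditions: after the gas expands into the evacuated unknown volume, restoring the initial pressure by contracting the variable volume means the total volume is back to its starting value, so the contraction exactly equals the unknown volume. A numerical sketch of that feedback loop (all values hypothetical; bisection stands in for the control loop):

```python
def measure_volume(v_var0: float, p0: float, v_unknown: float, tol: float = 1e-9) -> float:
    """Return the contraction of the variable volume needed to restore p0.

    Isothermal ideal gas: p * V_total is conserved (n*R*T fixed).
    """
    nRT = p0 * v_var0                 # gas initially fills only the variable volume
    lo, hi = 0.0, v_var0              # bracket for the contracted variable volume
    while hi - lo > tol:
        v = 0.5 * (lo + hi)
        p = nRT / (v + v_unknown)     # pressure after expansion and contraction to v
        if p < p0:
            hi = v                    # pressure still too low: contract further
        else:
            lo = v
    return v_var0 - 0.5 * (lo + hi)   # contraction = measured unknown volume

# e.g. a 2.0 L variable volume at 1000 mbar measuring a 0.75 L enclosure
measured = measure_volume(2.0, 1000.0, 0.75)  # ~0.75
```

Because the final pressure equals the initial one, the result is independent of dead spaces, as the abstract notes.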

  11. Accurate Measurement of Indoor Radon Concentration using a Low-Effective Volume Radon Monitor.

    Science.gov (United States)

    Tanaka, Aya; Minami, Nodoka; Yasuoka, Yumi; Iimoto, Takeshi; Omori, Yasutaka; Nagahama, Hiroyuki; Muto, Jun; Mukai, Takahiro

    2017-12-01

AlphaGUARD is a low-effective-volume detector and one of the most popular portable radon monitors currently available. This study investigated whether AlphaGUARD can accurately measure variable indoor radon levels. The consistency of the radon-concentration data obtained by AlphaGUARD was evaluated against simultaneous measurements by two other monitors (each ~10 times more sensitive than AlphaGUARD). We found that accurate measurement of radon concentration with AlphaGUARD requires at least 500 net counts to keep the relative percent difference below 25%. AlphaGUARD can provide accurate measurements of radon concentration for the world average level (~50 Bq m-3) and the workplace reference level (1000 Bq m-3) using data integrated over at least 3 h and 10 min, respectively. © The Author 2017. Published by Oxford University Press.
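The integration times quoted (about 3 h at the ~50 Bq m-3 world average, about 10 min at 1000 Bq m-3) are consistent with a simple counting argument: time = required net counts / count rate. The sensitivity used below is inferred by back-calculation from the abstract's own figures (500 counts in ~3 h at ~50 Bq m-3), not a manufacturer specification:

```python
REQUIRED_COUNTS = 500                 # minimum net counts found in the study
SENSITIVITY = 500 / (180.0 * 50.0)    # counts/min per (Bq/m^3); inferred, ~0.056

def integration_minutes(radon_bq_m3: float) -> float:
    """Minutes needed to accumulate the required net counts at a given level."""
    return REQUIRED_COUNTS / (SENSITIVITY * radon_bq_m3)

world_average = integration_minutes(50)    # ~180 min, i.e. ~3 h
workplace_ref = integration_minutes(1000)  # ~9 min, consistent with "10 min"
```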

  12. Bioaccessibility tests accurately estimate bioavailability of lead to quail

    Science.gov (United States)

    Beyer, W. Nelson; Basta, Nicholas T; Chaney, Rufus L.; Henry, Paula F.; Mosby, David; Rattner, Barnett A.; Scheckel, Kirk G.; Sprague, Dan; Weber, John

    2016-01-01

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33%-63%, with a mean of about 50%. Treatment of two of the soils with phosphorus significantly reduced the bioavailability of Pb. Bioaccessibility of Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Additional Pb was associated with P (chloropyromorphite, hydroxypyromorphite and tertiary Pb phosphate), and with Pb carbonates, leadhillite (a lead sulfate carbonate hydroxide), and Pb sulfide. The formation of chloropyromorphite reduced the bioavailability of Pb and the amendment of Pb-contaminated soils with P may be a thermodynamically favored means to sequester Pb.
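The validation step above — regressing in vivo bioavailability on in vitro bioaccessibility and judging each assay by its slope and coefficient of determination — can be sketched with ordinary least squares. The paired values below are purely illustrative, not the study's measurements:

```python
def ols(x, y):
    """Ordinary least squares y = a + b*x; returns (intercept, slope, r_squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

# Hypothetical pairs: % Pb bioaccessible (in vitro test) vs % relative
# bioavailability in quail (in vivo); a positive slope is what the study reports.
bioaccessible = [20.0, 35.0, 50.0, 62.0, 70.0]
bioavailable  = [21.0, 33.0, 45.0, 54.6, 61.0]
intercept, slope, r2 = ols(bioaccessible, bioavailable)
```

A slope near 1 with high r² is the criterion by which the RBALP pH 2.5 and OSU IVG tests "performed very well" in the abstract's terms.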

  13. Cephalometric analysis for accurately determining the vertical dimension (case report

    Directory of Open Access Journals (Sweden)

    Wahipa Wiro

    2017-04-01

Full Text Available Objective: The vertical dimension of occlusion (DVO) tends to change throughout human life. The vertical dimension is determined by the interocclusal contact point of the upper and lower teeth, so this reference is of limited use when the natural teeth are missing. As a result, many functional and aesthetic changes occur in the whole orofacial region and stomatognathic system. Determining the DVO is one of the difficult stages in prosthodontic treatment. Most techniques for determining the DVO in edentulous patients are based on soft tissue references, which can yield different measurements. Cephalometric analysis allows the evaluation of bone growth changes and can be used as a diagnostic tool in prosthodontics to evaluate the results of prosthodontic rehabilitation. Methods: The purpose of this case report was to present the results of measuring the vertical dimension of occlusion in the maxillomandibular relation by using cephalometric images in a patient who lost her teeth long ago and had never worn a denture. Results: A 50-year-old female patient was partially edentulous in the upper and lower jaws, with remaining teeth 12 (residual root), 11, 21, 23, 33 and 43. The remaining teeth were endodontically treated prior to the complete denture procedure. A cephalometric image was taken after fabrication of the bite rims; the upper and lower bite rims were fitted with metal markers, the image was traced, and the distance between the markers was measured to obtain the vertical dimension of occlusion. Conclusion: Measuring the vertical dimension of occlusion on cephalometric images when making a complete denture was more accurate, helping to improve and restore masticatory, aesthetic and phonetic function.

  14. SPEX: a highly accurate spectropolarimeter for atmospheric aerosol characterization

    Science.gov (United States)

    Rietjens, J. H. H.; Smit, J. M.; di Noia, A.; Hasekamp, O. P.; van Harten, G.; Snik, F.; Keller, C. U.

    2017-11-01

Global characterization of atmospheric aerosol in terms of the microphysical properties of the particles is essential for understanding the role of aerosols in Earth's climate [1]. For more accurate predictions of future climate, the uncertainties in the net radiative forcing of aerosols in the Earth's atmosphere must be reduced [2]. Essential parameters that are needed as input to climate models are not only the aerosol optical thickness (AOT), but also particle-specific properties such as the aerosol mean size, the single scattering albedo (SSA) and the complex refractive index. The latter can be used to discriminate between absorbing and non-absorbing aerosol types, and between natural and anthropogenic aerosol. Classification of aerosol types is also very important for air-quality and health-related issues [3]. Remote sensing from an orbiting satellite platform is the only way to globally characterize atmospheric aerosol on a relevant timescale of 1 day [4]. One of the few methods that can be employed for measuring the microphysical properties of aerosols is to observe both the radiance and the degree of linear polarization of sunlight scattered in the Earth's atmosphere under different viewing directions [5][6][7]. The requirement on the absolute accuracy of the degree of linear polarization PL is very stringent: the absolute error in PL must be smaller than 0.001 + 0.005·PL in order to retrieve aerosol parameters with sufficient accuracy to advance climate modelling and to enable discrimination of aerosol types based on their refractive index for air-quality studies [6][7]. In this paper we present the SPEX instrument, a multi-angle spectropolarimeter that can comply with the polarimetric accuracy needed for characterizing aerosols in the Earth's atmosphere. We describe the implementation of spectral polarization modulation in a prototype instrument of SPEX and show results of ground-based measurements from which aerosol microphysical properties are retrieved.
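The polarimetric requirement quoted above is a linear absolute-plus-relative error budget on the measured degree of linear polarization; a trivial sketch of the corresponding acceptance check:

```python
def pl_error_budget(pl_true: float) -> float:
    """Maximum allowed absolute error in the degree of linear polarization P_L."""
    return 0.001 + 0.005 * pl_true

def within_spec(pl_measured: float, pl_true: float) -> bool:
    """True if a measurement meets the stated accuracy requirement."""
    return abs(pl_measured - pl_true) <= pl_error_budget(pl_true)
```

For example, at P_L = 0.2 the absolute error must stay below 0.002, i.e. 1% relative.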

  15. Accurate mobile malware detection and classification in the cloud.

    Science.gov (United States)

    Wang, Xiaolei; Yang, Yuexiang; Zeng, Yingzhi

    2015-01-01

As the dominant player in the smartphone operating system market, Android has attracted the attention of malware authors and researchers alike. The number of Android malware variants is increasing rapidly despite the considerable number of proposed malware analysis systems. In this paper, by taking advantage of the low false-positive rate of misuse detection and the ability of anomaly detection to detect zero-day malware, we propose a novel hybrid detection system based on a new open-source framework, CuckooDroid, which enables the use of Cuckoo Sandbox's features to analyze Android malware through dynamic and static analysis. Our proposed system mainly consists of two parts: an anomaly detection engine performing abnormal app detection through dynamic analysis, and a signature detection engine performing known malware detection and classification with a combination of static and dynamic analysis. We evaluate our system using 5560 malware samples and 6000 benign samples. Experiments show that our anomaly detection engine with dynamic analysis is capable of detecting zero-day malware with a low false negative rate (1.16%) and an acceptable false positive rate (1.30%); it is worth noting that our signature detection engine with hybrid analysis can accurately classify malware samples with an average true positive rate of 98.94%. Considering the intensive computing resources required by the static and dynamic analysis, our proposed detection system should be deployed off-device, such as in the cloud. The app store markets and ordinary users can access our detection system for malware detection through a cloud service.

  16. Improved management of radiotherapy departments through accurate cost data

    International Nuclear Information System (INIS)

    Kesteloot, K.; Lievens, Y.; Schueren, E. van der

    2000-01-01

    Escalating health care expenses urge governments towards cost containment. More accurate data on the precise costs of health care interventions are needed. We performed an aggregate cost calculation of radiation therapy departments and treatments and discuss the different cost components. The costs of a radiotherapy department were estimated based on the accreditation norms for radiotherapy departments set forth in Belgian legislation. The major cost components of radiotherapy are the costs of buildings and facilities, equipment, medical and non-medical staff, materials and overhead. They represent around 3, 30, 50, 4 and 13% of the total costs, respectively, irrespective of department size. The average cost per patient decreases with increasing department size and optimal utilization of resources. Radiotherapy treatment costs vary in a stepwise fashion: minor variations in patient load do not affect the cost picture significantly, owing to the small impact of variable costs. With larger increases in patient load, however, additional equipment and/or staff become necessary, resulting in additional semi-fixed costs and an important increase in costs. A sensitivity analysis of these two major cost inputs shows that a decrease in total costs of 12-13% can be obtained by assuming 20% less than full-time availability of personnel; that, due to evolving seniority levels, the annual increase in wage costs is estimated to be more than 1%; and that changing the clinical lifetime of buildings and equipment, with the interest rate unchanged, yields a 5% reduction in total costs and cost per patient. More sophisticated equipment will not have a very large impact on the cost (±4000 BEF/patient), provided that the additional equipment is adapted to the size of the department. That the recommendations we used, based on Belgian legislation, are not outrageous is shown by replacing them with the USA Blue Book recommendations. Depending on the department size, costs in
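
    The aggregate cost structure described above lends itself to a simple calculation; a toy sketch using the approximate component shares from the abstract (any absolute cost figures fed to it would be hypothetical):

```python
# Approximate split of a radiotherapy department's total annual cost into the
# five components quoted above (3/30/50/4/13%).
SHARES = {"buildings": 0.03, "equipment": 0.30, "staff": 0.50,
          "materials": 0.04, "overhead": 0.13}

def cost_breakdown(total_cost: float) -> dict:
    """Split a total annual cost according to the approximate shares."""
    return {item: total_cost * share for item, share in SHARES.items()}

def cost_per_patient(total_cost: float, patients_per_year: int) -> float:
    """Average cost per patient; it decreases as patient load rises because
    buildings, equipment and staff are fixed or semi-fixed costs."""
    return total_cost / patients_per_year
```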

  17. Accurate position estimation methods based on electrical impedance tomography measurements

    Science.gov (United States)

    Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.

    2017-08-01

    than 0.05% of the tomograph radius value. These results demonstrate that the proposed approaches can estimate an object’s position accurately based on EIT measurements if enough process information is available for training or modelling. Since they do not require complex calculations it is possible to use them in real-time applications without requiring high-performance computers.

  18. Accurate position estimation methods based on electrical impedance tomography measurements

    International Nuclear Information System (INIS)

    Vergara, Samuel; Sbarbaro, Daniel; Johansen, T A

    2017-01-01

    than 0.05% of the tomograph radius value. These results demonstrate that the proposed approaches can estimate an object’s position accurately based on EIT measurements if enough process information is available for training or modelling. Since they do not require complex calculations it is possible to use them in real-time applications without requiring high-performance computers. (paper)

  19. Requirements Engineering

    CERN Document Server

    Hull, Elizabeth; Dick, Jeremy

    2011-01-01

    Written for those who want to develop their knowledge of the requirements engineering process, whether practitioners or students. Using the latest research and driven by practical experience from industry, Requirements Engineering gives useful hints to practitioners on how to write and structure requirements. It explains the importance of Systems Engineering and the creation of effective solutions to problems. It describes the underlying representations used in system modeling and introduces UML2, and considers the relationship between requirements and modeling. Covering a generic multi-layer r

  20. Software requirements

    CERN Document Server

    Wiegers, Karl E

    2003-01-01

    Without formal, verifiable software requirements, and an effective system for managing them, the programs that developers think they've agreed to build often will not be the same products their customers are expecting. In SOFTWARE REQUIREMENTS, Second Edition, requirements engineering authority Karl Wiegers amplifies the best practices presented in his original award-winning text, now a mainstay for anyone participating in the software development process. In this book, you'll discover effective techniques for managing the requirements engineering process all the way through the development cy

  1. Methodology of assessment of the impact on the risk of changes of monitoring requirements of safety equipment integrating treatment and analysis of uncertainty of model and parameter

    International Nuclear Information System (INIS)

    Martorell, S.; Villamizar, M.; Marton, I.; Villanueva, J. f.; Carlos, S.; Sanchez, A. I.

    2013-01-01

    The feasibility of the proposed methodology has been demonstrated by the results obtained in a case study analysing changes in the requirements on the maximum allowed unavailability time of the plant batteries, using a Level 1 PSA. The proposed methodology is consistent with the approach proposed in the US Regulatory Guide RG 1.174, and is applicable to Spanish LWR designs despite differences in their level of technical development.

  2. Retinal screening acceptance, laser treatment uptake and follow-up response in diabetics requiring laser therapy in an urban diabetes care centre

    International Nuclear Information System (INIS)

    Memon, S.

    2015-01-01

    To determine the acceptance of retinal screening, Laser uptake and subsequent follow-up in diabetic patients attending the Diabetes Centre of the Diabetic Association of Pakistan (DAP), Karachi. Study Design: Observational case series. Place and Duration of Study: Diabetic Centre of the Diabetic Association of Pakistan (DAP), Karachi, from January 2011 to December 2012. Methodology: All diabetic patients were screened for Diabetic Retinopathy (DR) with a non-mydriatic Fundus Camera (NMFC). Patients with DR were examined by an ophthalmologist using a fundus lens and slit lamp. DR was graded for severity on the basis of the modified Airlie House Classification. Patients with Sight Threatening Diabetic Retinopathy (STDR) were advised Laser treatment. Each patient was followed up for at least 6 months. The records of patients recommended for Laser were retrieved, and the patients were called for re-examination. Results: Retinal screening was accepted by all of the 8368 registered diabetics attending the DAP Centre. On fundus photography, 21.2% (1777) individuals were found to have DR. Seven hundred and five (39.5%) patients were found to have STDR. Laser was advised to 96.4% (680) of STDR patients, amongst whom 70.5% (480) accepted Laser treatment. Of the 480 patients who had Laser treatment, 21.2% (107) returned for follow-up after 6 months. Conclusion: Acceptance of retinal screening and Laser application was good, but follow-up was suboptimal. (author)

  3. Speed-of-sound compensated photoacoustic tomography for accurate imaging

    NARCIS (Netherlands)

    Jose, Jithin; Willemink, Rene G. H.; Steenbergen, Wiendelt; Slump, C. H.; van Leeuwen, Ton G.; Manohar, Srirang

    2012-01-01

    Purpose: In most photoacoustic (PA) tomographic reconstructions, variations in speed-of-sound (SOS) of the subject are neglected under the assumption of acoustic homogeneity. Biological tissue with spatially heterogeneous SOS cannot be accurately reconstructed under this assumption. The authors

  4. Accurate vehicle classification including motorcycles using piezoelectric sensors.

    Science.gov (United States)

    2013-03-01

    State and federal departments of transportation are charged with classifying vehicles and monitoring mileage traveled. Accurate data reporting enables suitable roadway design for safety and capacity. Vehicle classifiers currently employ inductive loo...

  5. Controlling Hay Fever Symptoms with Accurate Pollen Counts

    Science.gov (United States)

    Seasonal allergic rhinitis, known as hay fever, is caused by pollen carried in the air ...

  6. Accurate determination of light elements by charged particle activation analysis

    International Nuclear Information System (INIS)

    Shikano, K.; Shigematsu, T.

    1989-01-01

    To develop accurate determination of light elements by CPAA, accurate and practical standardization methods and uniform chemical etching are studied, based on the determination of carbon in gallium arsenide using the 12C(d,n)13N reaction, and the following results are obtained: (1) The average stopping power method with thick-target yield is useful as an accurate and practical standardization method. (2) The front surface of the sample has to be etched for an accurate estimate of the incident energy. (3) CPAA can be utilized for calibration of light-element analysis by physical methods. (4) The calibration factor for carbon analysis in gallium arsenide using the IR method is determined to be (9.2±0.3) × 10^15 cm^-1. (author)

  7. Highly Accurate Sensor for High-Purity Oxygen Determination Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In this STTR effort, Los Gatos Research (LGR) and the University of Wisconsin (UW) propose to develop a highly-accurate sensor for high-purity oxygen determination....

  8. A new, accurate predictive model for incident hypertension

    DEFF Research Database (Denmark)

    Völzke, Henry; Fung, Glenn; Ittermann, Till

    2013-01-01

    Data mining represents an alternative approach to identify new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures....

  9. Stable and high order accurate difference methods for the elastic wave equation in discontinuous media

    KAUST Repository

    Duru, Kenneth

    2014-12-01

    © 2014 Elsevier Inc. In this paper, we develop a stable and systematic procedure for numerical treatment of elastic waves in discontinuous and layered media. We consider both planar and curved interfaces where media parameters are allowed to be discontinuous. The key feature is the highly accurate and provably stable treatment of interfaces where media discontinuities arise. We discretize in space using high order accurate finite difference schemes that satisfy the summation by parts rule. Conditions at layer interfaces are imposed weakly using penalties. By deriving lower bounds of the penalty strength and constructing discrete energy estimates we prove time stability. We present numerical experiments in two space dimensions to illustrate the usefulness of the proposed method for simulations involving typical interface phenomena in elastic materials. The numerical experiments verify high order accuracy and time stability.
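
    The summation-by-parts (SBP) rule mentioned above requires the difference operator D = H^(-1)Q to satisfy Q + Q^T = B = diag(-1, 0, ..., 0, 1), which mimics integration by parts and is what makes the discrete energy estimates possible. A minimal sketch of the classical second-order SBP first-derivative operator (a standard textbook construction, not the paper's high-order scheme):

```python
import numpy as np

def sbp_d1(n: int, h: float):
    """Second-order-accurate SBP first-derivative operator on n grid points
    with spacing h. Returns (D, H) with D = H^{-1} Q and Q + Q^T = B."""
    H = np.eye(n) * h
    H[0, 0] = H[-1, -1] = h / 2          # halved boundary quadrature weights
    Q = np.zeros((n, n))
    for i in range(n - 1):               # skew-symmetric interior stencil
        Q[i, i + 1] = 0.5
        Q[i + 1, i] = -0.5
    Q[0, 0], Q[-1, -1] = -0.5, 0.5       # boundary closures giving Q + Q^T = B
    D = np.linalg.solve(H, Q)            # interior rows: centered differences
    return D, H
```

    Differentiating a linear function with this operator is exact, and the SBP identity can be verified numerically, which is the discrete analogue of the energy argument used in the paper's stability proof.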

  10. Assessing reference genes for accurate transcript normalization using quantitative real-time PCR in pearl millet [Pennisetum glaucum (L.) R. Br.].

    Directory of Open Access Journals (Sweden)

    Prasenjit Saha

    Full Text Available Pearl millet [Pennisetum glaucum (L.) R. Br.], a close relative of Panicoideae food crops and bioenergy grasses, offers an ideal system for functional genomics studies related to C4 photosynthesis and abiotic stress tolerance. Quantitative real-time reverse transcription polymerase chain reaction (qRT-PCR) provides a sensitive platform to conduct such gene expression analyses. However, the lack of suitable internal control reference genes for accurate transcript normalization during qRT-PCR analysis in pearl millet is the major limitation. Here, we conducted a comprehensive assessment of 18 reference genes on 234 samples, which included an array of different developmental tissues, hormone treatments and abiotic stress conditions from three genotypes, to determine appropriate reference genes for accurate normalization of qRT-PCR data. Analyses of Ct values using the Stability Index, BestKeeper, ΔCt, NormFinder, geNorm and RefFinder programs ranked PP2A, TIP41, UBC2, UBQ5 and ACT as the most reliable reference genes for accurate transcript normalization under different experimental conditions. Furthermore, we validated the specificity of these genes for precise quantification of relative gene expression and provided evidence that a combination of the best reference genes is required to obtain optimal expression patterns for both endogenous genes and transgenes in pearl millet.
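
    The stability programs named above differ in detail, but the simplest criterion they share resembles ranking candidate genes by the spread of their Ct values across samples. A toy sketch with hypothetical Ct data (not the study's measurements):

```python
from statistics import stdev

def rank_by_stability(ct_values: dict) -> list:
    """Rank candidate reference genes by the sample standard deviation of
    their Ct values across conditions; lower spread = more stable expression."""
    return sorted(ct_values, key=lambda gene: stdev(ct_values[gene]))

# Hypothetical Ct measurements across four conditions:
ct = {"PP2A": [21.1, 21.3, 21.2, 21.0],
      "ACT":  [18.0, 18.9, 18.4, 18.2],
      "18S":  [9.5, 11.2, 10.1, 12.0]}
ranking = rank_by_stability(ct)   # most stable gene first
```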

  11. Combined model-based segmentation and elastic registration for accurate quantification of the aortic arch.

    Science.gov (United States)

    Biesdorf, Andreas; Rohr, Karl; von Tengg-Kobligk, Hendrik; Wörz, Stefan

    2010-01-01

    Accurate quantification of the morphology of vessels is important for diagnosis and treatment of cardiovascular diseases. We introduce a new approach for the quantification of the aortic arch morphology that combines 3D model-based segmentation with elastic image registration. The performance of the approach has been evaluated using 3D synthetic images and clinically relevant 3D CTA images including pathologies. We also performed a comparison with a previous approach.

  12. Mycobacterium bovis BCG: the importance of an accurate identification in the diagnostic routine

    Directory of Open Access Journals (Sweden)

    Antonella Grottola

    2010-09-01

    Full Text Available M. bovis BCG is used clinically in the immunotherapy treatment of superficial bladder cancer to prevent progression to invasive disease, leading in some cases to severe localized inflammation or disseminated infections. For this reason, accurate and early identification of this particular microorganism is clinically relevant. We describe a case report of bladder cancer with a urine culture positive for mycobacteria, initially diagnosed as an MTB complex infection and later identified as BCG disease by molecular methods.

  13. Accurate simulation of ionisation chamber response with the Monte Carlo code PENELOPE

    International Nuclear Information System (INIS)

    Sempau, Josep; Andreo, Pedro

    2011-01-01

    Ionisation chambers (IC) are routinely used in hospitals for the dosimetry of the photon and electron beams used for radiotherapy treatments. The determination of absorbed dose to water from the absorbed dose to the air filling the cavity requires the introduction of stopping power ratios and perturbation factors, which account for the disturbance caused by the presence of the chamber. Although this may seem a problem readily amenable to Monte Carlo simulation, the fact is that the accurate determination of IC response has been, for various decades, one of the most important challenges of the simulation of electromagnetic showers. The main difficulty stems from the use of condensed history techniques for electron and positron transport. This approach, which involves grouping a large number of interactions into a single artificial event, is known to produce the so-called interface effects when particles travel across surfaces separating different media. These effects can be sizeable when the electron step length is not negligible compared to the size of the region being crossed, as is the case with the cavity of an IC. The artefact, which becomes apparent when the chamber response shows a marked dependence on the adopted step size, can be palliated with the use of sophisticated electron transport algorithms. These topics are discussed in the context of the transport model implemented in the PENELOPE code. The degree of violation of the Fano theorem for a simple planar geometry is used as a measure of the stability of the algorithm with respect to variations of the electron step length, thus assessing the 'quality' of its condensed history scheme. It is shown that, with a suitable choice of transport parameters, PENELOPE simulates IC response with an accuracy of the order of 0.1%.

  14. Can student health professionals accurately estimate alcohol content in commonly occurring drinks?

    OpenAIRE

    Sinclair, Julia; Searle, Emma

    2016-01-01

    Objectives: Correct identification of alcohol as a contributor to, or comorbidity of, many psychiatric diseases requires health professionals to be competent and confident in taking an accurate alcohol history. Being able to estimate (or calculate) the alcohol content of commonly consumed drinks is a prerequisite for quantifying levels of alcohol consumption. The aim of this study was to assess this ability in medical and nursing students. Methods: A cross-sectional survey of 891 medical and nur...

  15. Radio Astronomers Set New Standard for Accurate Cosmic Distance Measurement

    Science.gov (United States)

    1999-06-01

    estimate of the age of the universe. In order to do this, you need an unambiguous, absolute distance to another galaxy. We are pleased that the NSF's VLBA has for the first time determined such a distance, and thus provided the calibration standard astronomers have always sought in their quest for accurate distances beyond the Milky Way," said Morris Aizenman, Executive Officer of the National Science Foundation's (NSF) Division of Astronomical Sciences. "For astronomers, this measurement is the golden meter stick in the glass case," Aizenman added. The international team of astronomers used the VLBA to measure directly the motion of gas orbiting what is generally agreed to be a supermassive black hole at the heart of NGC 4258. The orbiting gas forms a warped disk, nearly two light-years in diameter, surrounding the black hole. The gas in the disk includes water vapor, which, in parts of the disk, acts as a natural amplifier of microwave radio emission. The regions that amplify radio emission are called masers, and work in a manner similar to the way a laser amplifies light emission. Determining the distance to NGC 4258 required measuring motions of extremely small shifts in position of these masers as they rotate around the black hole. This is equivalent to measuring an angle one ten-thousandth the width of a human hair held at arm's length. "The VLBA is the only instrument in the world that could do this," said Moran. "This work is the culmination of a 20-year effort at the Harvard Smithsonian Center for Astrophysics to measure distances to cosmic masers," said Irwin Shapiro, Director of that institution. Collection of the data for the NGC 4258 project was begun in 1994 and was part of Herrnstein's Ph.D dissertation at Harvard University. Previous observations with the VLBA allowed the scientists to measure the speed at which the gas is orbiting the black hole, some 39 million times more massive than the Sun. 
They did this by observing the amount of change in the

  16. Energy requirements of adult dogs: a meta-analysis.

    Directory of Open Access Journals (Sweden)

    Emma N Bermingham

    Full Text Available A meta-analysis was conducted to determine the maintenance energy requirements of adult dogs. Suitable publications were first identified, and then used to generate relationships amongst energy requirements, husbandry, activity level, methodology, sex, neuter status, dog size, and age in healthy adult dogs. Allometric equations for maintenance energy requirements were determined using log-log linear regression. So that the resulting equations could readily be compared with equations reported by the National Research Council, maintenance energy requirements in the current study were determined in kcal/kg^0.75 body weight (BW). Ultimately, the data of 70 treatment groups from 29 publications were used, and mean (± standard deviation) maintenance energy requirements were 142.8±55.3 kcal·kgBW^-0.75·day^-1. The corresponding allometric equation was 81.5 kcal·kgBW^-0.93·day^-1 (adjusted R2 = 0.64; 70 treatment groups). Type of husbandry had a significant effect on maintenance energy requirements (P<0.001): requirements were greatest in racing dogs, followed by working dogs and hunting dogs, whilst the energy requirements of pet dogs and kennel dogs were least. Maintenance energy requirements were less in neutered compared with sexually intact dogs (P<0.001), but there was no effect of sex. Further, reported activity level tended to affect the maintenance energy requirement of the dog (P = 0.09). This review suggests that estimating maintenance energy requirements based on BW alone may not be accurate, but that predictions that factor in husbandry, neuter status and, possibly, activity level might be superior. Additionally, more information on the nutrient requirements of older dogs, and those at the extremes of body size (i.e. giant and toy breeds) is needed.

  17. Energy Requirements of Adult Dogs: A Meta-Analysis

    Science.gov (United States)

    Bermingham, Emma N.; Thomas, David G.; Cave, Nicholas J.; Morris, Penelope J.; Butterwick, Richard F.; German, Alexander J.

    2014-01-01

    A meta-analysis was conducted to determine the maintenance energy requirements of adult dogs. Suitable publications were first identified, and then used to generate relationships amongst energy requirements, husbandry, activity level, methodology, sex, neuter status, dog size, and age in healthy adult dogs. Allometric equations for maintenance energy requirements were determined using log-log linear regression. So that the resulting equations could readily be compared with equations reported by the National Research Council, maintenance energy requirements in the current study were determined in kcal/kg^0.75 body weight (BW). Ultimately, the data of 70 treatment groups from 29 publications were used, and mean (± standard deviation) maintenance energy requirements were 142.8±55.3 kcal·kgBW^-0.75·day^-1. The corresponding allometric equation was 81.5 kcal·kgBW^-0.93·day^-1 (adjusted R2 = 0.64; 70 treatment groups). Type of husbandry had a significant effect on maintenance energy requirements (P<0.001): requirements were greatest in racing dogs, followed by working dogs and hunting dogs, whilst the energy requirements of pet dogs and kennel dogs were least. Maintenance energy requirements were less in neutered compared with sexually intact dogs (P<0.001), but there was no effect of sex. Further, reported activity level tended to affect the maintenance energy requirement of the dog (P = 0.09). This review suggests that estimating maintenance energy requirements based on BW alone may not be accurate, but that predictions that factor in husbandry, neuter status and, possibly, activity level might be superior. Additionally, more information on the nutrient requirements of older dogs, and those at the extremes of body size (i.e. giant and toy breeds) is needed. PMID:25313818
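
    Reading the coefficient's units (kcal per kg^0.93 of body weight per day), the allometric equation gives MER in kcal/day as 81.5 × BW^0.93 with BW in kg; a minimal sketch of that calculation:

```python
def maintenance_energy_kcal_per_day(body_weight_kg: float) -> float:
    """Maintenance energy requirement from the meta-analysis' allometric
    equation: MER = 81.5 * BW^0.93 kcal/day, with BW in kg."""
    return 81.5 * body_weight_kg ** 0.93

# A 20 kg dog works out to roughly 1300 kcal/day; the wide standard
# deviation reported above means individual dogs vary substantially.
mer_20kg = maintenance_energy_kcal_per_day(20.0)
```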

  18. Validation of Reference Genes for Accurate Normalization of Gene Expression in Lilium davidii var. unicolor for Real Time Quantitative PCR.

    Directory of Open Access Journals (Sweden)

    XueYan Li

    Full Text Available Lilium is an important commercial market flower bulb. qRT-PCR is an extremely important technique to track gene expression levels. The requirement of suitable reference genes for normalization has become increasingly significant and exigent. The expression of internal control genes in living organisms varies considerably under different experimental conditions. For the economically important Lilium, only a limited number of reference genes applied in qRT-PCR have been reported to date. In this study, the expression stability of 12 candidate genes including α-TUB, β-TUB, ACT, eIF, GAPDH, UBQ, UBC, 18S, 60S, AP4, FP, and RH2, in a diverse set of 29 samples representing different developmental processes, three stress treatments (cold, heat, and salt) and different organs, has been evaluated. For different organs, the combination of ACT, GAPDH, and UBQ is appropriate, whereas ACT together with AP4, or ACT along with GAPDH, is suitable for normalization of leaves and scales at different developmental stages, respectively. In leaves, scales and roots under stress treatments, FP, ACT and AP4, respectively, showed the most stable expression. This study provides a guide for the selection of a reference gene under different experimental conditions, and will benefit future research on more accurate gene expression studies in a wide variety of Lilium genotypes.

  19. Validation of Reference Genes for Accurate Normalization of Gene Expression in Lilium davidii var. unicolor for Real Time Quantitative PCR.

    Science.gov (United States)

    Li, XueYan; Cheng, JinYun; Zhang, Jing; Teixeira da Silva, Jaime A; Wang, ChunXia; Sun, HongMei

    2015-01-01

    Lilium is an important commercial market flower bulb. qRT-PCR is an extremely important technique to track gene expression levels. The requirement of suitable reference genes for normalization has become increasingly significant and exigent. The expression of internal control genes in living organisms varies considerably under different experimental conditions. For economically important Lilium, only a limited number of reference genes applied in qRT-PCR have been reported to date. In this study, the expression stability of 12 candidate genes including α-TUB, β-TUB, ACT, eIF, GAPDH, UBQ, UBC, 18S, 60S, AP4, FP, and RH2, in a diverse set of 29 samples representing different developmental processes, three stress treatments (cold, heat, and salt) and different organs, has been evaluated. For different organs, the combination of ACT, GAPDH, and UBQ is appropriate whereas ACT together with AP4, or ACT along with GAPDH is suitable for normalization of leaves and scales at different developmental stages, respectively. In leaves, scales and roots under stress treatments, FP, ACT and AP4, respectively showed the most stable expression. This study provides a guide for the selection of a reference gene under different experimental conditions, and will benefit future research on more accurate gene expression studies in a wide variety of Lilium genotypes.

  20. Prevalence study of oral mucosal lesions, mucosal variants, and treatment required for patients reporting to a dental school in North India: In accordance with WHO guidelines

    Directory of Open Access Journals (Sweden)

    Puneet Bhatnagar

    2013-01-01

    Full Text Available The aim of the study was to evaluate the prevalence of oral mucosal lesions (OML) in adult patients reporting to the dental outpatient department at the Institute of Dental Studies and Technologies, Modinagar, Uttar Pradesh, India. The purpose was to determine the priorities in oral health education and preventive measures, and to identify the group in urgent need of treatment. Materials and Methods: The study was conducted over a period of 6 months in 2010, when 8866 subjects were offered structured interviews and standardized extraoral and intraoral examinations according to the World Health Organization (WHO) guidelines. Results: Overall prevalence of OML was 1736 (16.8%), the most prevalent being smoker's palate (10.44%) followed by leukoplakia (2.83%), oral submucous fibrosis (1.97%), oral candidiasis (1.61%), recurrent aphthous stomatitis (1.53%), oral lichen planus (0.8%) and others (0.78%). The highest prevalence of the tobacco habit in both forms was recorded in the group aged 40-44 years and in those aged between 60 and 64 years who wore dentures. Lesions were most prevalent in those aged 40-44 years, with a significant male predominance of 3:1 (M = 12.6% and F = 4.3%). Patients who consumed tobacco in any form or wore dentures had a significantly higher prevalence of OML (P < 0.001). The highest number of lesions were on the palate (59.7%) followed by the buccal mucosa (19.9%). Various normal mucosal variants were recorded: Fordyce's granules (0.13%), fissured tongue (3.3%), leukoedema (1.47%), and lingual varices (2.73%). The tongue showed the highest number of variants (64.4%). Patients were grouped according to the treatment needed under the WHO criteria. One hundred and ninety-seven patients were given oral hygiene instructions only, whereas 1422 patients were advised on change of habit and a follow-up, and 674 patients needed definitive treatment. Conclusion: This study thus highlights diagnostic criteria, multifactorial risk factors to

  1. DNACLUST: accurate and efficient clustering of phylogenetic marker genes

    Directory of Open Access Journals (Sweden)

    Liu Bo

    2011-06-01

    Full Text Available Abstract Background Clustering is a fundamental operation in the analysis of biological sequence data. New DNA sequencing technologies have dramatically increased the rate at which we can generate data, resulting in datasets that cannot be efficiently analyzed by traditional clustering methods. This is particularly true in the context of taxonomic profiling of microbial communities through direct sequencing of phylogenetic markers (e.g., 16S rRNA), the domain that motivated the work described in this paper. Many analysis approaches rely on an initial clustering step aimed at identifying sequences that belong to the same operational taxonomic unit (OTU). When defining OTUs (which have no universally accepted definition), scientists must balance a trade-off between computational efficiency and biological accuracy, as accurately estimating an environment's phylogenetic composition requires computationally intensive analyses. We propose that efficient and mathematically well-defined clustering methods can benefit existing taxonomic profiling approaches in two ways: (i) the resulting clusters can be substituted for OTUs in certain applications; and (ii) the clustering effectively reduces the size of the datasets that need to be analyzed by complex phylogenetic pipelines (e.g., only one sequence per cluster needs to be provided to downstream analyses). Results To address the challenges outlined above, we developed DNACLUST, a fast clustering tool specifically designed for clustering highly similar DNA sequences. Given a set of sequences and a sequence similarity threshold, DNACLUST creates clusters whose radius is guaranteed not to exceed the specified threshold. Underlying DNACLUST is a greedy clustering strategy that owes its performance to novel sequence alignment and k-mer based filtering algorithms. DNACLUST can also produce multiple sequence alignments for every cluster, allowing users to manually inspect clustering results, and enabling more
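
    The greedy strategy described above (pick an unclustered sequence as a center, recruit everything within the similarity threshold, repeat) can be sketched with a crude position-match similarity; the real tool uses banded alignment and k-mer filtering rather than this toy metric:

```python
def similarity(a: str, b: str) -> float:
    """Toy similarity: fraction of matching positions. A stand-in for
    DNACLUST's alignment-based distance, only sensible for toy examples."""
    matches = sum(x == y for x, y in zip(a, b))
    return matches / max(len(a), len(b))

def greedy_cluster(seqs, threshold=0.9):
    """Greedy clustering: every member of a cluster is guaranteed to be
    within the similarity threshold of that cluster's center sequence."""
    clusters = []
    remaining = list(seqs)
    while remaining:
        center = remaining.pop(0)
        members = [s for s in remaining if similarity(center, s) >= threshold]
        for s in members:
            remaining.remove(s)
        clusters.append([center] + members)
    return clusters

# Two near-identical sequences cluster together; the third stays separate.
clusters = greedy_cluster(["ACGTACGTAC", "ACGTACGTAA", "TTTTTTTTTT"], 0.9)
```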

  2. Memory conformity affects inaccurate memories more than accurate memories.

    Science.gov (United States)

    Wright, Daniel B; Villalba, Daniella K

    2012-01-01

    After controlling for initial confidence, inaccurate memories were shown to be more easily distorted than accurate memories. In two experiments groups of participants viewed 50 stimuli and were then presented with these stimuli plus 50 fillers. During this test phase participants reported their confidence that each stimulus was originally shown. This was followed by computer-generated responses from a bogus participant. After being exposed to this response participants again rated the confidence of their memory. The computer-generated responses systematically distorted participants' responses. Memory distortion depended on initial memory confidence, with uncertain memories being more malleable than confident memories. This effect was moderated by whether the participant's memory was initially accurate or inaccurate. Inaccurate memories were more malleable than accurate memories. The data were consistent with a model describing two types of memory (i.e., recollective and non-recollective memories), which differ in how susceptible these memories are to memory distortion.

  3. Fast and accurate computation of projected two-point functions

    Science.gov (United States)

    Grasshorn Gebhardt, Henry S.; Jeong, Donghui

    2018-01-01

    We present the two-point function from the fast and accurate spherical Bessel transformation (2-FAST) algorithm1Our code is available at https://github.com/hsgg/twoFAST. for a fast and accurate computation of integrals involving one or two spherical Bessel functions. These types of integrals occur when projecting the galaxy power spectrum P (k ) onto the configuration space, ξℓν(r ), or spherical harmonic space, Cℓ(χ ,χ'). First, we employ the FFTLog transformation of the power spectrum to divide the calculation into P (k )-dependent coefficients and P (k )-independent integrations of basis functions multiplied by spherical Bessel functions. We find analytical expressions for the latter integrals in terms of special functions, for which recursion provides a fast and accurate evaluation. The algorithm, therefore, circumvents direct integration of highly oscillating spherical Bessel functions.

  4. Quality of life in type 2 diabetes mellitus patients requiring insulin treatment in Buenos Aires, Argentina: a cross-sectional study

    Science.gov (United States)

    Pichon-Riviere, Andres; Irazola, Vilma; Beratarrechea, Andrea; Alcaraz, Andrea; Carrara, Carolina

    2015-01-01

    Background: Decision-makers have begun to recognize Health-Related Quality of Life (HRQoL) as an important and measurable outcome of healthcare interventions; and HRQoL data is increasingly being used by policy-makers to prioritize health resources. Our objective was to measure HRQoL in a group of Type 2 Diabetes Mellitus (T2DM) patients receiving insulin treatment in Buenos Aires, Argentina. Methods: We conducted a cross-sectional study of patients with T2DM over 21 years of age, treated with either Neutral Protamine Hagedorn (NPH) insulin or Insulin Glargine (IG), who had not changed their baseline schedule in the last 6 months. The recruitment was during 2006–7 in nine private diabetes specialists’ offices in Buenos Aires, Argentina. A standardized diabetes-specific HRQoL questionnaire, the Audit of Diabetes Dependent Quality of Life (ADDQoL), was used. Results: A total of 183 patients were included (93 receiving NPH and 90 receiving IG). The mean QoL score was: 0.98 (SD: 0.89) and the diabetes specific QoL was: -1.49 (SD: 0.90). T2DM had a negative impact on HRQoL with a mean Average Weighted Impact (AWI) score on QoL of -1.77 (SD: 1.58). The greatest negative impact was observed for domains: ‘worries about the future’, ‘freedom to eat’, ‘living conditions’, ‘sex life’, and ‘family life’. The mean AWI score was -1.71 (SD: 1.48) in patients treated with IG and -1.85 (SD: 1.68) in patients receiving NPH, this difference was not statistically significant. Conclusion: The ADDQoL questionnaire is a tool that can be used in Argentina to measure the QoL of patients with diabetes when evaluating diabetes care programs. The scores of QoL in our selected population did not differ from those reported in high-income countries. We expect that the results of this study will increase healthcare providers’ awareness of patients’ perceived QoL and help to overcome the barriers that delay insulin treatment; mainly clinical inertia and patient
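
    The Average Weighted Impact (AWI) score reported above is, in essence, a mean of per-domain impact ratings weighted by importance. A minimal sketch of that arithmetic follows; the ratings in the example are made up, not study data, and the range comment reflects the usual ADDQoL convention rather than anything stated in this record.

    ```python
    def awi(impacts, importances):
        # Average Weighted Impact: mean over applicable domains of
        # (impact rating x importance rating). In the usual ADDQoL convention,
        # impact runs from -3 (maximum negative) to +1 and importance from
        # 0 to 3, so each weighted term lies in [-9, +3].
        weighted = [i * w for i, w in zip(impacts, importances)]
        return sum(weighted) / len(weighted)
    ```

    A strongly negative, highly important domain (impact -3, importance 3) contributes -9 to the average, which is how a handful of domains such as 'freedom to eat' can dominate the overall score.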

  5. Quality of Life in Type 2 Diabetes Mellitus Patients Requiring Insulin Treatment in Buenos Aires, Argentina: A Cross-Sectional Study

    Directory of Open Access Journals (Sweden)

    Andres Pichon-Riviere

    2015-07-01

    Full Text Available Background Decision-makers have begun to recognize Health-Related Quality of Life (HRQoL as an important and measurable outcome of healthcare interventions; and HRQoL data is increasingly being used by policy-makers to prioritize health resources. Our objective was to measure HRQoL in a group of Type 2 Diabetes Mellitus (T2DM patients receiving insulin treatment in Buenos Aires, Argentina. Methods We conducted a cross-sectional study of patients with T2DM over 21 years of age, treated with either Neutral Protamine Hagedorn (NPH insulin or Insulin Glargine (IG, who had not changed their baseline schedule in the last 6 months. The recruitment was during 2006–7 in nine private diabetes specialists’ offices in Buenos Aires, Argentina. A standardized diabetes-specific HRQoL questionnaire, the Audit of Diabetes Dependent Quality of Life (ADDQoL, was used. Results A total of 183 patients were included (93 receiving NPH and 90 receiving IG. The mean QoL score was: 0.98 (SD: 0.89 and the diabetes specific QoL was: -1.49 (SD: 0.90. T2DM had a negative impact on HRQoL with a mean Average Weighted Impact (AWI score on QoL of -1.77 (SD: 1.58. The greatest negative impact was observed for domains: ‘worries about the future’, ‘freedom to eat’, ‘living conditions’, ‘sex life’, and ‘family life’. The mean AWI score was -1.71 (SD: 1.48 in patients treated with IG and -1.85 (SD: 1.68 in patients receiving NPH, this difference was not statistically significant. Conclusion The ADDQoL questionnaire is a tool that can be used in Argentina to measure the QoL of patients with diabetes when evaluating diabetes care programs. The scores of QoL in our selected population did not differ from those reported in high-income countries. We expect that the results of this study will increase healthcare providers’ awareness of patients’ perceived QoL and help to overcome the barriers that delay insulin treatment; mainly clinical inertia and patient

  6. Short-term antidiabetic treatment with insulin or metformin has a similar impact on the components of metabolic syndrome in women with gestational diabetes mellitus requiring antidiabetic agents: results of a prospective, randomised study.

    Science.gov (United States)

    Zawiejska, A; Wender-Ozegowska, E; Grewling-Szmit, K; Brazert, M; Brazert, J

    2016-04-01

    Gestational diabetes mellitus (GDM) is associated with an increased prevalence of fetal and maternal complications primarily caused by maternal hyperglycemia, which results in abnormal fetal growth. Diet modification is a common first step in the treatment of GDM, followed by antidiabetic pharmacotherapy if this approach fails. Insulin therapy is generally accepted; however, oral hypoglycemic agents have been used in this population. In this prospective, randomised study, we compared maternal metabolic status after treatment with insulin or metformin. Pregnant women (gestational age: ≥ 20 weeks) with GDM requiring medical hypoglycemic treatment were randomly allocated to the Metformin (n = 35) or Insulin (n = 43) Groups. Maternal metabolic status - assessed by glycated hemoglobin (HbA1c) level, glycemic profile, insulin concentration, Homeostatic Model Assessment - Insulin Resistance index, and lipids - was recorded at booking and throughout pregnancy. The characteristics of the study group were: maternal age 33.5 ± 5.9 years, gestational age at baseline 28.5 ± 3.5 weeks, prepregnancy body mass index (BMI) 32.2 ± 3.5 kg/m^2, HbA1c at baseline 5.6 ± 0.6%, and average daily glycemia 5.9 ± 0.6 mmol/l. Fasting glycemia at term was significantly lower in the Insulin Group, but there were no significant differences in mean daily glycemia, HbA1c and BMI at term between the groups. Longitudinally, there was a small but significant increase in BMI and a significant increase in high-density lipoprotein-cholesterol in the Insulin Group, and a significant increase in the atherogenic index of plasma (AIP) and a trend towards higher triglycerides in the Metformin Group. Both fasting and average daily glycemia were significantly reduced following treatment in both groups. No such change was evident for HbA1c. In a relative risk analysis, metformin treatment was associated with a nonsignificant increase in the risk of HbA1c, triglycerides and lipid indices falling within the
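
    The relative-risk analysis mentioned at the end of this record reduces to a ratio of two event proportions. A minimal sketch of that calculation (the counts in the test are illustrative, not study data):

    ```python
    def relative_risk(treated_events, treated_n, control_events, control_n):
        # Risk ratio: event proportion in the treated arm divided by the
        # event proportion in the control arm; RR > 1 means the outcome
        # is more frequent under the treatment.
        return (treated_events / treated_n) / (control_events / control_n)
    ```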

  7. Disappearing or residual tiny (≤5 mm) colorectal liver metastases after chemotherapy on gadoxetic acid-enhanced liver MRI and diffusion-weighted imaging: Is local treatment required?

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seung Soo [Sungkyunkwan University School of Medicine, Department of Radiology and Center for Imaging Science, Samsung Medical Center, Seoul (Korea, Republic of); Cheonan Hospital, Department of Radiology, Soonchunhyang University College of Medicine, Cheonan-si, Chungcheongnam-do (Korea, Republic of); Song, Kyoung Doo; Kim, Young Kon [Sungkyunkwan University School of Medicine, Department of Radiology and Center for Imaging Science, Samsung Medical Center, Seoul (Korea, Republic of); Kim, Hee Cheol; Huh, Jung Wook [Sungkyunkwan University School of Medicine, Department of Surgery, Samsung Medical Center, Seoul (Korea, Republic of); Park, Young Suk; Park, Joon Oh; Kim, Seung Tae [Sungkyunkwan University School of Medicine, Division of Hematology-Oncology, Department of Medicine, Samsung Medical Center, Seoul (Korea, Republic of)

    2017-07-15

    To evaluate the clinical course of disappearing colorectal liver metastases (DLM) or residual tiny (≤5 mm) colorectal liver metastases (RTCLM) on gadoxetic acid-enhanced magnetic resonance imaging (MRI) and diffusion-weighted imaging (DWI) in patients who had colorectal liver metastases (CLM) and received chemotherapy. Among 137 patients who received chemotherapy for CLM and underwent gadoxetic acid-enhanced MRI and DWI between 2010 and 2012, 43 patients with 168 DLMs and 48 RTCLMs were included. The cumulative in situ recurrence rate of DLM and progression rate of RTCLM and their predictive factors were evaluated. A total of 150 DLMs and 26 RTCLMs were followed up without additional treatment. At 1 and 2 years, respectively, the cumulative in situ recurrence rates for DLM were 10.9 % and 15.7 % and the cumulative progression rates for RTCLM were 27.2 % and 33.2 %. The in situ recurrence rate at 2 years was 4.9 % for the DLM group that did not show reticular hypointensity of liver parenchyma on hepatobiliary phase. DLM on gadoxetic acid-enhanced liver MRI and DWI indicates a high possibility of clinical complete response, especially in patients without chemotherapy-induced sinusoidal obstruction syndrome. Thirty-three percent of RTCLMs showed progression at 2 years. (orig.)

  8. Entericidin is required for a probiotic treatment (Enterobacter sp. strain C6-6) to protect trout from cold-water disease challenge.

    Science.gov (United States)

    Schubiger, Carla B; Orfe, Lisa H; Sudheesh, Ponnerassery S; Cain, Kenneth D; Shah, Devendra H; Call, Douglas R

    2015-01-01

    Flavobacterium psychrophilum causes bacterial cold-water disease in multiple fish species, including salmonids. An autochthonous Enterobacter strain (C6-6) inhibits the in vitro growth of F. psychrophilum, and when ingested as a putative probiotic, it provides protection against injection challenge with F. psychrophilum in rainbow trout. In this study, low-molecular-mass (≤3 kDa) fractions from both Enterobacter C6-6 and Escherichia coli K-12 culture supernatants inhibited the growth of F. psychrophilum. The ≤3-kDa fraction from Enterobacter C6-6 was analyzed by SDS-PAGE, and subsequent tandem mass spectroscopy identified EcnB, which is a small membrane lipoprotein that is a putative pore-forming toxin. Agar plate diffusion assays demonstrated that ecnAB knockout strains of both Enterobacter C6-6 and E. coli K-12 no longer inhibited F. psychrophilum (P ) and the wild-type strain (C6-6) were added to the fish diet every day for 38 days. On day 11, the fish were challenged by injection with a virulent strain of F. psychrophilum (CSF 259-93). Fish that were fed C6-6 had significantly longer survival than fish fed the ecnAB knockout strain (P Enterobacter C6-6, and it may present new opportunities for therapeutic and prophylactic treatments against similarly susceptible pathogens. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  9. Accurate upwind-monotone (nonoscillatory) methods for conservation laws

    Science.gov (United States)

    Huynh, Hung T.

    1992-01-01

    The well known MUSCL scheme of Van Leer is constructed using a piecewise linear approximation. The MUSCL scheme is second order accurate at the smooth part of the solution except at extrema where the accuracy degenerates to first order due to the monotonicity constraint. To construct accurate schemes which are free from oscillations, the author introduces the concept of upwind monotonicity. Several classes of schemes, which are upwind monotone and of uniform second or third order accuracy are then presented. Results for advection with constant speed are shown. It is also shown that the new scheme compares favorably with state of the art methods.
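
    The behaviour described above - second-order accuracy from piecewise-linear reconstruction, degenerating to first order at extrema where the monotonicity constraint zeroes the slope - can be seen in a generic textbook MUSCL-type update with the minmod limiter. This is a sketch of the classical scheme for constant-speed advection, not of Huynh's upwind-monotone schemes.

    ```python
    def minmod(a, b):
        # Limited slope: the smaller-magnitude one-sided slope when both agree
        # in sign, zero otherwise. Zeroing the slope at extrema is exactly what
        # drops the scheme to first order there.
        if a * b <= 0:
            return 0.0
        return a if abs(a) < abs(b) else b

    def muscl_step(u, c):
        # One update of linear advection u_t + u_x = 0 on a periodic grid with
        # CFL number c in (0, 1], using piecewise-linear reconstruction and
        # upwind (left-biased) interface states.
        n = len(u)
        s = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
        # Left interface state at i+1/2, advanced a half step in time
        ul = [u[i] + 0.5 * (1 - c) * s[i] for i in range(n)]
        return [u[i] - c * (ul[i] - ul[i - 1]) for i in range(n)]
    ```

    Because the update is written in flux-difference form, the scheme is conservative (the sum of u over a periodic grid is unchanged), and a constant state is preserved exactly.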

  10. Novel technique to accurately measure femoral diameter using a Thomas splint.

    Science.gov (United States)

    Liew, Ignatius; Qureshi, Mobeen; Joseph, Jibu; Bailey, Oliver

    2017-11-01

    During surgical management of femoral shaft fractures, difficulties arise when treating patients with narrow femoral diaphyseal canals, such as young patients and those with dysplastic femurs secondary to underlying pathology. Accurate pre-operative assessment of the femoral diaphyseal canal diameter would allow the surgeon to plan surgical technique and ensure appropriate equipment was available, such as narrow, unreamed or paediatric sized nails. When secured to the patient, both longitudinal rods of the main Thomas splint component lie parallel with the femoral shaft and horizontal to the radiographic x-ray plate. The diameter of these rods is 13 mm (adult and paediatric). Using the calibration tool, we calibrate the diameter of the Thomas splint rod to 13 mm, which allows accurate measurement of any further detail on that radiograph, such as the diaphyseal canal diameter. Accurate pre-operative knowledge of radiographic measurements is highly valuable to the operating surgeon. This technique can accurately measure femoral canal diameter using the Thomas splint, negates the requirement for a calibration marker, is reproducible, easy to perform, and is indispensable when faced with a patient with a narrow femoral canal and a diaphyseal femoral fracture. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
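
    The calibration described in this record is a simple proportional scaling: the splint rod of known 13 mm diameter serves as an in-image scale for everything else on the radiograph. A minimal sketch (pixel values in the test are hypothetical, not from the paper):

    ```python
    def calibrated_diameter(canal_px, rod_px, rod_mm=13.0):
        # True size = measured pixels x (known rod mm / rod pixels).
        # The rod's known 13 mm diameter fixes the mm-per-pixel scale.
        return canal_px * rod_mm / rod_px
    ```

    For example, if the rod spans 65 pixels on the image, each pixel represents 0.2 mm, so a canal spanning 50 pixels measures 10 mm.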

  11. Fast thermal simulations and temperature optimization for hyperthermia treatment planning, including realistic 3D vessel networks

    NARCIS (Netherlands)

    Kok, H. P.; van den Berg, C. A. T.; Bel, A.; Crezee, J.

    2013-01-01

    Accurate thermal simulations in hyperthermia treatment planning require discrete modeling of large blood vessels. The very long computation time of the finite difference based DIscrete VAsculature model (DIVA) developed for this purpose is impractical for clinical applications. In this work, a fast

  12. Harmine treatment enhances short-term memory in old rats: Dissociation of cognition and the ability to perform the procedural requirements of maze testing.

    Science.gov (United States)

    Mennenga, Sarah E; Gerson, Julia E; Dunckley, Travis; Bimonte-Nelson, Heather A

    2015-01-01

    Harmine is a naturally occurring monoamine oxidase inhibitor that has recently been shown to selectively inhibit the dual-specificity tyrosine-(Y)-phosphorylation-regulated kinase 1A (DYRK1A). We investigated the cognitive effects of 1mg (low) Harmine and 5mg (high) Harmine using the delayed-match-to-sample (DMS) asymmetrical 3-choice water maze task to evaluate spatial working and recent memory, and the Morris water maze task (MM) to test spatial reference memory. Animals were also tested on the visible platform task, a water-escape task with the same motor, motivational, and reinforcement components as the other tasks used to evaluate cognition, but differing in its greater simplicity and that the platform was visible above the surface of the water. A subset of the Harmine-high treated animals showed clear motor impairments on all behavioral tasks, and the visible platform task confirmed a lack of competence to perform the procedural components of water maze testing. After excluding animals from the high dose group that could not perform the procedural components of a swim task, it was revealed that both high- and low-dose treatment with Harmine enhanced performance on the latter portion of DMS testing, but had no effect on MM performance. Thus, this study demonstrates the importance of confirming motor and visual competence when studying animal cognition, and verifies the one-day visible platform task as a reliable measure of ability to perform the procedural components necessary for completion of a swim task. Copyright © 2014. Published by Elsevier Inc.

  13. Multimodality guidance for accurate bronchoscopic insertion of fiducial markers.

    Science.gov (United States)

    Steinfort, Daniel P; Siva, Shankar; Kron, Tomas; Chee, Raphael R; Ruben, Jeremy D; Ball, David L; Irving, Louis B

    2015-02-01

    Fiducial markers act as visible surrogates of tumor position during image-guided radiotherapy. Marker placement has been attempted percutaneously but is associated with high rates of pneumothorax and chest drain placement. Patients undergoing radical radiation treatment for non-small-cell lung cancer underwent bronchoscopic implantation of gold fiducials using radial probe endobronchial ultrasound (EBUS) with virtual bronchoscopy and fluoroscopic guidance to achieve tumor localization and placement within/adjacent to peripheral lung tumors. For tumors not localized using radial EBUS, fiducial placement was achieved by electromagnetic navigation to the vicinity of the tumor. Eighteen fiducials were placed to mark 16 lesions in 15 patients. In nine patients (60%), fiducials were implanted at the time of diagnostic bronchoscopy. No procedural complications occurred. EBUS localization allowed marker implantation within the target lesion in 12 cases. In four lesions, electromagnetic navigation bronchoscopy-guided implantation achieved a median fiducial-lesion distance of 6 mm (mean 12 mm). No marker migration occurred after the implantation of two-band markers; however, early migration was observed in two of eight (25%) of the smaller linear fiducials. No migration during the course of radiation therapy was observed. Fiducial marker placement is easily and safely performed bronchoscopically, including at the time of diagnostic bronchoscopy. Marker geometry appears important in stability of bronchoscopically inserted fiducials. Future studies are required to confirm the optimal marker size, geometry, and spatial relationship with the target lesion.

  14. Closure requirements

    International Nuclear Information System (INIS)

    Hutchinson, I.P.G.; Ellison, R.D.

    1992-01-01

    Closure of a waste management unit can be either permanent or temporary. Permanent closure may be due to: economic factors which make it uneconomical to mine the remaining minerals; depletion of mineral resources; physical site constraints that preclude further mining and beneficiation; environmental, regulatory or other requirements that make it uneconomical to continue to develop the resources. Temporary closure can occur for a period of several months to several years, and may be caused by factors such as: periods of high rainfall or snowfall which prevent mining and waste disposal; economic circumstances which temporarily make it uneconomical to mine the target mineral; labor problems requiring a cessation of operations for a period of time; construction activities that are required to upgrade project components such as the process facilities and waste management units; and mine or process plant failures that require extensive repairs. Permanent closure of a mine waste management unit involves the provision of durable surface containment features to protect the waters of the State in the long-term. Temporary closure may involve activities that range from ongoing maintenance of the existing facilities to the installation of several permanent closure features in order to reduce ongoing maintenance. This paper deals with the permanent closure features

  15. Accurate quasi static capacitance for abrupt homojunction under ...

    Indian Academy of Sciences (India)

    Accurate quasi static capacitance for abrupt homojunction under forward and reverse polarization. D BOUKREDIMI. ∗ and H ALLOUCHE. Laboratoire de Physique des Couches Minces et Matériaux pour l'Electronique, Département de Physique,. Faculté des Sciences, Université d'Oran, Es-sénia 31100, Oran, Algérie.

  16. Toward more accurate loss tangent measurements in reentrant cavities

    Energy Technology Data Exchange (ETDEWEB)

    Moyer, R. D.

    1980-05-01

    Karpova has described an absolute method for measurement of dielectric properties of a solid in a coaxial reentrant cavity. His cavity resonance equation yields very accurate results for dielectric constants. However, he presented only approximate expressions for the loss tangent. This report presents more exact expressions for that quantity and summarizes some experimental results.

  17. Towards accurate de novo assembly for genomes with repeats

    NARCIS (Netherlands)

    Bucur, Doina

    2017-01-01

    De novo genome assemblers designed for short k-mer length or using short raw reads are unlikely to recover complex features of the underlying genome, such as repeats hundreds of bases long. We implement a stochastic machine-learning method which obtains accurate assemblies with repeats and

  18. Is Expressive Language Disorder an Accurate Diagnostic Category?

    Science.gov (United States)

    Leonard, Laurence B.

    2009-01-01

    Purpose: To propose that the diagnostic category of "expressive language disorder" as distinct from a disorder of both expressive and receptive language might not be accurate. Method: Evidence that casts doubt on a pure form of this disorder is reviewed from several sources, including the literature on genetic findings, theories of language…

  19. Accurate analysis of planar metamaterials using the RLC theory

    DEFF Research Database (Denmark)

    Malureanu, Radu; Lavrinenko, Andrei

    2008-01-01

    In this work we will present an accurate description of metallic pads response using RLC theory. In order to calculate such response we take into account several factors including the mutual inductances, precise formula for determining the capacitance and also the pads’ resistance considering the...

  20. Adaptive through-thickness integration for accurate springback prediction

    NARCIS (Netherlands)

    Burchitz, I.A.; Meinders, Vincent T.

    2007-01-01

    Accurate numerical prediction of springback in sheet metal forming is essential for the automotive industry. Numerous factors influence the accuracy of prediction of this complex phenomenon by using the finite element method. One of them is the numerical integration through the thickness of shell

  1. A Simple and Accurate Method for Measuring Enzyme Activity.

    Science.gov (United States)

    Yip, Din-Yan

    1997-01-01

    Presents methods commonly used for investigating enzyme activity using catalase and presents a new method for measuring catalase activity that is more reliable and accurate. Provides results that are readily reproduced and quantified. Can also be used for investigations of enzyme properties such as the effects of temperature, pH, inhibitors,…

  2. Accurate determination of calcium and other elements in cereal grains

    African Journals Online (AJOL)

    Accurate determination of calcium and other elements in cereal grains. AP Udoh. Abstract. No Abstract. Nigerian Journal of Chemical Research Vol 5 2000: 57-60. http://dx.doi.org/10.4314/njcr.v5i1.35607.

  3. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method

    Science.gov (United States)

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey

    2013-01-01

    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…

  4. Towards accurate target delineation for head and neck cancer

    NARCIS (Netherlands)

    Jager, E.A.

    2017-01-01

    To benefit from these high precision techniques, an accurate delineation of the target is essential. In this thesis, imaging modalities for Gross Tumor Volume (GTV) delineation for laryngeal and hypopharyngeal tumors are validated by using histopathology as a gold standard. The need for this

  5. SDGs: The Need for Vital Registration and Accurate Record Keeping

    African Journals Online (AJOL)

    AJRH Managing Editor

    The importance of having solid data and the need for accurate record keeping and reporting has been emphasized. Mobile phones and “smart cards” that store a woman’s basic information and health record greatly facilitate data collection and analysis across facilities. A maternal audit system, the Confidential Enquiry into ...

  6. Accurately Detecting Students' Lies regarding Relational Aggression by Correctional Instructions

    Science.gov (United States)

    Dickhauser, Oliver; Reinhard, Marc-Andre; Marksteiner, Tamara

    2012-01-01

    This study investigates the effect of correctional instructions when detecting lies about relational aggression. Based on models from the field of social psychology, we predict that correctional instruction will lead to a less pronounced lie bias and to more accurate lie detection. Seventy-five teachers received videotapes of students' true denial…

  7. Accurate and fast urinalysis in febrile patients by flow cytometry

    NARCIS (Netherlands)

    de Boer, Foppie J.; Gieteling, Elske; van Egmond-Kreileman, Heidi; Moshaver, Bijan; van der Leur, Sjef J. C. M.; Stegeman, Coen A.; Groeneveld, Paul H. P.

    2017-01-01

    Background: The urine culture is worldwide accepted as the gold standard in diagnosing urinary tract infections, but is time consuming and costly, other methods are fast but moderately reliable. We investigated whether counting the number of bacteria by flow cytometry could be a fast and accurate

  8. Accurate method of the magnetic field measurement of quadrupole magnets

    International Nuclear Information System (INIS)

    Kumada, M.; Sakai, I.; Someya, H.; Sasaki, H.

    1983-01-01

    We present an accurate method of the magnetic field measurement of the quadrupole magnet. The method of obtaining the information of the field gradient and the effective focussing length is given. A new scheme to obtain the information of the skew field components is also proposed. The relative accuracy of the measurement was 1 × 10^-4 or less. (author)

  9. Laser guided automated calibrating system for accurate bracket ...

    African Journals Online (AJOL)

    It is widely recognized that accurate bracket placement is of critical importance in the efficient application of biomechanics and in realizing the full potential of a preadjusted edgewise appliance. Aim: The purpose of ... placement. Keywords: Hough transforms, Indirect bonding technique, Laser, Orthodontic bracket placement ...

  10. Fishing site mapping using local knowledge provides accurate and ...

    African Journals Online (AJOL)

    Accurate fishing ground maps are necessary for fisheries monitoring. In Velondriake locally managed marine area (LMMA) we observed that the nomenclature of shared fishing sites (FS) is villages dependent. Additionally, the level of illiteracy makes data collection more complicated, leading to data collectors improvising ...

  11. accurate solutions of colebrook- white's friction factor formulae

    African Journals Online (AJOL)

    HOD

    The importance of Ff is well known in the selection of pipe size, determination of flows in a pipe, fluid transportation and in the design of potable water supply schemes. There is a lot of research and many publications on Ff estimation in pipes, but documentation of explicit formulae for computing accurate Ff is rare in the literature.

  12. Weak Weak Lensing : How Accurately Can Small Shears be Measured?

    NARCIS (Netherlands)

    Kuijken, K.

    2006-01-01

    Abstract: Now that weak lensing signals on the order of a percent are actively being searched for (cosmic shear, galaxy-galaxy lensing, large radii in clusters...) it is important to investigate how accurately weak shears can be determined. Many systematic effects are present, and need to be

  13. Accurate eye center location through invariant isocentric patterns

    NARCIS (Netherlands)

    Valenti, R.; Gevers, T.

    2012-01-01

    Locating the center of the eyes allows for valuable information to be captured and used in a wide range of applications. Accurate eye center location can be determined using commercial eye-gaze trackers, but additional constraints and expensive hardware make these existing solutions unattractive and

  14. BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE BIOAVAILABILITY OF LEAD TO QUAIL

    Science.gov (United States)

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contami...

  15. Device accurately measures and records low gas-flow rates

    Science.gov (United States)

    Branum, L. W.

    1966-01-01

    Free-floating piston in a vertical column accurately measures and records low gas-flow rates. The system may be calibrated, using an adjustable flow-rate gas supply, a low pressure gage, and a sequence recorder. From the calibration rates, a nomograph may be made for easy reduction. Temperature correction may be added for further accuracy.

  16. Centi-pixel accurate real-time inverse distortion correction

    CSIR Research Space (South Africa)

    De Villiers, Johan P

    2008-11-01

    Full Text Available , memory usage or processing time. This paper shows that it is possible to have real-time, low memory, accurate inverse distortion correction. A novel method based on the re-use of left-over distortion characterization data is combined with modern numerical...

  17. Accurate segmentation of dense nanoparticles by partially discrete electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Roelandts, T., E-mail: tom.roelandts@ua.ac.be [IBBT-Vision Lab University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium); Batenburg, K.J. [IBBT-Vision Lab University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, 1098 XG Amsterdam (Netherlands); Biermans, E. [EMAT, University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Kuebel, C. [Institute of Nanotechnology, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Bals, S. [EMAT, University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Sijbers, J. [IBBT-Vision Lab University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium)

    2012-03-15

    Accurate segmentation of nanoparticles within various matrix materials is a difficult problem in electron tomography. Due to artifacts related to image series acquisition and reconstruction, global thresholding of reconstructions computed by established algorithms, such as weighted backprojection or SIRT, may result in unreliable and subjective segmentations. In this paper, we introduce the Partially Discrete Algebraic Reconstruction Technique (PDART) for computing accurate segmentations of dense nanoparticles of constant composition. The particles are segmented directly by the reconstruction algorithm, while the surrounding regions are reconstructed using continuously varying gray levels. As no properties are assumed for the other compositions of the sample, the technique can be applied to any sample where dense nanoparticles must be segmented, regardless of the surrounding compositions. For both experimental and simulated data, it is shown that PDART yields significantly more accurate segmentations than those obtained by optimal global thresholding of the SIRT reconstruction. -- Highlights: ► We present a novel reconstruction method for partially discrete electron tomography. ► It accurately segments dense nanoparticles directly during reconstruction. ► The gray level to use for the nanoparticles is determined objectively. ► The method expands the set of samples for which discrete tomography can be applied.

  18. Exploring the relationship between sequence similarity and accurate phylogenetic trees.

    Science.gov (United States)

    Cantarel, Brandi L; Morrison, Hilary G; Pearson, William

    2006-11-01

    We have characterized the relationship between accurate phylogenetic reconstruction and sequence similarity, testing whether high levels of sequence similarity can consistently produce accurate evolutionary trees. We generated protein families with known phylogenies using a modified version of the PAML/EVOLVER program that produces insertions and deletions as well as substitutions. Protein families were evolved over a range of 100-400 point accepted mutations; at these distances 63% of the families shared significant sequence similarity. Protein families were evolved using balanced and unbalanced trees, with ancient or recent radiations. In families sharing statistically significant similarity, about 60% of multiple sequence alignments were 95% identical to true alignments. To compare recovered topologies with true topologies, we used a score that reflects the fraction of clades that were correctly clustered. As expected, the accuracy of the phylogenies was greatest in the least divergent families. About 88% of phylogenies clustered over 80% of clades in families that shared significant sequence similarity, using Bayesian, parsimony, distance, and maximum likelihood methods. However, for protein families with short ancient branches (ancient radiation), only 30% of the most divergent (but statistically significant) families produced accurate phylogenies, and only about 70% of the second most highly conserved families, with median expectation values better than 10^-60, produced accurate trees. These values represent upper bounds on expected tree accuracy for sequences with a simple divergence history; proteins from 700 Giardia families, with a similar range of sequence similarities but considerably more gaps, produced much less accurate trees. For our simulated insertions and deletions, correct multiple sequence alignments did not perform much better than those produced by T-COFFEE, and including sequences with expressed sequence tag-like sequencing errors did not
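
The clade-recovery score described above, the fraction of true clades that the inferred tree also contains, can be sketched by comparing clade sets. Representing clades as frozensets of taxon names is an illustrative simplification of whatever scoring the authors implemented:

```python
def clade_score(true_clades, inferred_clades):
    """Fraction of clades in the true tree that appear in the inferred tree.

    Clades are given as sets of taxon names (the leaf set below each
    internal node of a rooted tree). A minimal stand-in for the
    clade-clustering score used in the study.
    """
    true_set = {frozenset(c) for c in true_clades}
    inferred_set = {frozenset(c) for c in inferred_clades}
    return len(true_set & inferred_set) / len(true_set)

# True tree ((A,B),(C,D)) versus a fully wrong inference ((A,C),(B,D))
true_clades = [{"A", "B"}, {"C", "D"}]
print(clade_score(true_clades, [{"A", "C"}, {"B", "D"}]))      # 0.0
# A partially correct inference recovers one of the two true clades
print(clade_score(true_clades, [{"A", "B"}, {"B", "C", "D"}]))  # 0.5
```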

  19. Development of a setup to enable stable and accurate flow conditions for membrane biofouling studies

    KAUST Repository

    Bucs, Szilard

    2015-07-10

    Systematic laboratory studies on membrane biofouling require experimental conditions that are well defined and representative for practice. Hydrodynamics and flow rate variations affect biofilm formation, morphology, and detachment, and impact membrane performance parameters such as feed channel pressure drop. There is a suite of available monitors to study biofouling, but systems to operate monitors have not been well designed to achieve an accurate, constant water flow required for a reliable determination of biomass accumulation and feed channel pressure drop increase. Studies were done with membrane fouling simulators operated in parallel with manual and automated flow control, with and without dosage of a biodegradable substrate to the feedwater to enhance the biofouling rate. High flow rate variations were observed for the manual water flow system (up to ≈9%) compared to the automatic flow control system (<1%). The flow rate variation in the manual system was strongly increased by biofilm accumulation, while the automatic system maintained an accurate and constant water flow in the monitor. The flow rate influences the biofilm accumulation and the impact of accumulated biofilm on membrane performance. The effect of the same amount of accumulated biomass on the pressure drop increase was related to the linear flow velocity. Stable and accurate feedwater flow rates are essential for biofouling studies under well-defined conditions in membrane systems. © 2015 Balaban Desalination Publications. All rights reserved.
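
The difference between manual and automatic flow control can be illustrated with a toy feedback loop: biofilm accumulation raises the hydraulic resistance of the monitor, and a controller adjusts pump pressure to hold the flow at its setpoint. The PI control law, the fouling rate, and all numbers below are invented for the sketch; the paper does not specify the control algorithm of its automated system.

```python
def simulate(controlled, n_steps=500, dt=1.0):
    """Toy fouling-monitor model: flow = pressure / resistance, and
    biofilm growth slowly raises the resistance. With automatic control,
    a PI loop (assumed gains) adjusts pump pressure to hold the setpoint."""
    setpoint = 1.0                  # target flow (arbitrary units)
    base_pressure = 1.0
    pressure = base_pressure
    resistance = 1.0
    kp, ki = 0.5, 0.1               # assumed PI gains
    integral = 0.0
    flows = []
    for _ in range(n_steps):
        flow = pressure / resistance
        flows.append(flow)
        if controlled:
            error = setpoint - flow
            integral += error * dt
            pressure = base_pressure + kp * error + ki * integral
        resistance += 0.002 * dt    # fouling: resistance doubles over the run
    return flows

manual = simulate(False)
auto = simulate(True)
# Manual flow decays as the biofilm grows; the controlled flow stays near 1.0
print(round(manual[-1], 3), round(auto[-1], 3))
```

This mirrors the reported behavior qualitatively: the uncontrolled flow drifts as fouling progresses, while the feedback loop keeps the flow rate, and hence the hydrodynamic conditions, essentially constant.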

  20. The New Aptima HBV Quant Real-Time TMA Assay Accurately Quantifies Hepatitis B Virus DNA from Genotypes A to F.

    Science.gov (United States)

    Chevaliez, Stéphane; Dauvillier, Claude; Dubernet, Fabienne; Poveda, Jean-Dominique; Laperche, Syria; Hézode, Christophe; Pawlotsky, Jean-Michel

    2017-04-01

    Sensitive and accurate hepatitis B virus (HBV) DNA detection and quantification are essential to diagnose HBV infection, establish the prognosis of HBV-related liver disease, and guide the decision to treat and monitor the virological response to antiviral treatment and the emergence of resistance. Currently available HBV DNA platforms and assays are generally designed for batching multiple specimens within an individual run and require at least one full day of work to complete the analyses. The aim of this study was to evaluate the ability of the newly developed, fully automated, one-step Aptima HBV Quant assay to accurately detect and quantify HBV DNA in a large series of patients infected with different HBV genotypes. The limit of detection of the assay was estimated to be 4.5 IU/ml. The specificity of the assay was 100%. Intra-assay and interassay coefficients of variation ranged from 0.29% to 5.07% and 4.90% to 6.85%, respectively. HBV DNA levels from patients infected with HBV genotypes A to F measured with the Aptima HBV Quant assay strongly correlated with those measured by two commercial real-time PCR comparators (Cobas AmpliPrep/Cobas TaqMan HBV test, version 2.0, and Abbott RealTime HBV test). In conclusion, the Aptima HBV Quant assay is sensitive, specific, and reproducible and accurately quantifies HBV DNA in plasma samples from patients with chronic HBV infections of all genotypes, including patients on antiviral treatment with nucleoside or nucleotide analogues. The Aptima HBV Quant assay can thus confidently be used to detect and quantify HBV DNA in both clinical trials with new anti-HBV drugs and clinical practice. Copyright © 2017 American Society for Microbiology.

  1. Extension of the Accurate Voltage-Sag Fault Location Method in Electrical Power Distribution Systems

    Directory of Open Access Journals (Sweden)

    Youssef Menchafou

    2016-03-01

    Full Text Available Accurate fault location in an Electric Power Distribution System (EPDS) is important in maintaining system reliability. Several methods have been proposed in the past. However, these methods either prove inefficient or depend on the fault type (fault classification), because they require an appropriate algorithm for each fault type. In contrast to traditional approaches, an accurate impedance-based Fault Location (FL) method is presented in this paper. It is based on the voltage-sag calculation between two measurement points chosen carefully from the available strategic measurement points of the line, the network topology, and current measurements at the substation. The effectiveness and accuracy of the proposed technique are demonstrated for different fault types using a radial power flow system. The test results are obtained from numerical simulation using the data of a distribution line reported in the literature.

  2. Application of an accurate thermal hydraulics solver in VTT's reactor dynamics codes

    International Nuclear Information System (INIS)

    Rajamaeki, M.; Raety, H.; Kyrki-Rajamaeki, R.; Eskola, M.

    1998-01-01

    VTT's reactor dynamics codes are being developed further, and new, more detailed models are created for tasks related to increased safety requirements. For thermal hydraulics calculations, an accurate general flow model based on a new solution method, PLIM, has been developed. It has been applied in VTT's one-dimensional TRAB and three-dimensional HEXTRAN codes. Results of a demanding international boron dilution benchmark defined by VTT are given and compared against results of other codes with original or improved boron tracking. The new PLIM method not only allows the accurate modelling of a propagating boron dilution front, but also the tracking of a temperature front, which is missed by the special boron tracking models. (orig.)

  3. New models for energy beam machining enable accurate generation of free forms.

    Science.gov (United States)

    Axinte, Dragos; Billingham, John; Bilbao Guillerna, Aitor

    2017-09-01

    We demonstrate that, despite differences in their nature, many energy beam controlled-depth machining processes (for example, waterjet, pulsed laser, focused ion beam) can be modeled using the same mathematical framework: a partial differential evolution equation that requires only simple calibrations to capture the physics of each process. The inverse problem can be solved efficiently through the numerical solution of the adjoint problem and leads to beam paths that generate prescribed three-dimensional features with minimal error. The viability of this modeling approach has been demonstrated by generating accurate free-form surfaces using three processes that operate at very different length scales and with different physical principles for material removal: waterjet, pulsed laser, and focused ion beam machining. Our approach can be used to accurately machine materials that are hard to process by other means for scalable applications in a wide variety of industries.
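
The forward model behind such controlled-depth machining can be sketched as a surface that recedes under a dwell-weighted beam footprint. The linear Gaussian-footprint model below is a deliberate simplification of the paper's evolution equation, and all parameters (footprint width, etch rate, path) are invented for the illustration.

```python
import numpy as np

def etch_profile(x, path_positions, dwell, sigma=0.1, etch_rate=1.0):
    """Forward-model sketch: surface recession under a moving Gaussian beam.

    dh/dt = -etch_rate * B(x - x_b(t)), with B a Gaussian footprint.
    Summing dwell-weighted footprints over the beam path gives the depth.
    (Illustrative linear model; the paper's evolution equation is richer.)
    """
    h = np.zeros_like(x)
    for xb, t in zip(path_positions, dwell):
        h -= etch_rate * t * np.exp(-((x - xb) ** 2) / (2 * sigma ** 2))
    return h

x = np.linspace(-1.0, 1.0, 201)
# Two beam passes: longer dwell at x = -0.3 cuts a deeper trench than at x = 0.4
depth = etch_profile(x, path_positions=[-0.3, 0.4], dwell=[2.0, 1.0])
print(depth.min())  # deepest point, near x = -0.3
```

The inverse problem the paper solves is the reverse of this map: given a prescribed depth profile, find dwell times along the path, which is where the adjoint formulation comes in.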

  4. Accurate lattice energies of organic molecular crystals from periodic turbomole calculations.

    Science.gov (United States)

    Buchholz, Hannes Konrad; Stein, Matthias

    2018-03-05

    Accurate lattice energies of organic crystals are important, e.g., for the pharmaceutical industry. Periodic DFT calculations with atom-centered Gaussian basis functions with the Turbomole program are used to calculate lattice energies for several non-covalently bound organic molecular crystals. The accuracy and convergence of results with basis set size and k-space sampling from periodic calculations are evaluated for the two reference molecules benzoic acid and naphthalene. For the X23 benchmark set of small molecular crystals, accurate lattice energies are obtained using the PBE-D3 functional. In particular for hydrogen-bonded systems, a sufficiently large basis set is required. The calculated lattice energy differences between enantiopure and racemic crystal forms for a prototype set of chiral molecules are in good agreement with experimental results and allow the rationalization and computer-aided design of chiral separation processes. © 2018 Wiley Periodicals, Inc.

  5. The Scientific and Societal Need for Accurate Global Remote Sensing of Marine Suspended Sediments

    Science.gov (United States)

    Acker, James G.

    2006-01-01

    Population pressure, commercial development, and climate change are expected to cause continuing alteration of the vital oceanic coastal zone environment. These pressures will influence both the geology and biology of the littoral, nearshore, and continental shelf regions. A pressing need for global observation of coastal change processes is an accurate remotely-sensed data product for marine suspended sediments. The concentration, delivery, transport, and deposition of sediments are strongly relevant to coastal primary production, inland and coastal hydrology, coastal erosion, and loss of fragile wetland and island habitats. Sediment transport and deposition is also related to anthropogenic activities including agriculture, fisheries, aquaculture, harbor and port commerce, and military operations. Because accurate estimation of marine suspended sediment concentrations requires advanced ocean optical analysis, a focused collaborative program of algorithm development and assessment is recommended, following the successful experience of data refinement for remotely-sensed global ocean chlorophyll concentrations.

  6. Accurate anisotropic material modelling using only tensile tests for hot and cold forming

    Science.gov (United States)

    Abspoel, M.; Scholting, M. E.; Lansbergen, M.; Neelis, B. M.

    2017-09-01

    Accurate material data for simulations require a lot of effort. Advanced yield loci require many different kinds of tests, and a Forming Limit Curve (FLC) needs a large amount of samples. Many people use simple material models to reduce the effort of testing; however, some models are either not accurate enough (e.g. Hill’48) or do not describe new types of materials (e.g. Keeler). Advanced yield loci describe anisotropic material behaviour accurately, but are not widely adopted because of the specialized tests, and data post-processing is a hurdle for many. To overcome these issues, correlations between the advanced yield locus points (biaxial, plane strain and shear) and mechanical properties have been investigated. This resulted in accurate prediction of the advanced stress points using only Rm, Ag and r-values in three directions, from which a Vegter yield locus can be constructed with low effort. FLCs can be predicted with the equations of Abspoel & Scholting depending on total elongation A80, r-value and thickness. Both predictive methods were initially developed for steel, aluminium and stainless steel (BCC and FCC materials). The validity of the predicted Vegter yield locus is investigated with simulation and measurements on both hot and cold formed parts and compared with Hill’48. An adapted specimen geometry, to ensure a homogeneous temperature distribution in the Gleeble hot tensile test, was used to measure the mechanical properties needed to predict a hot Vegter yield locus. Since for hot material testing of stress states other than uniaxial is really challenging, the prediction for the yield locus adds a lot of value. For the hot FLC, an A80 sample with a homogeneous temperature distribution is needed, which, due to size limitations, is not possible in the Gleeble tensile tester. Heating the sample in an industrial type furnace and tensile testing it in a dedicated device is a good alternative to determine the necessary parameters for the FLC.

  7. DNA barcode data accurately assign higher spider taxa

    Directory of Open Access Journals (Sweden)

    Jonathan A. Coddington

    2016-07-01

    Full Text Available The use of unique DNA sequences as a method for taxonomic identification is no longer fundamentally controversial, even though debate continues on the best markers, methods, and technology to use. Although both existing databanks such as GenBank and BOLD, as well as reference taxonomies, are imperfect, in best case scenarios “barcodes” (whether single or multiple, organelle or nuclear loci) clearly are an increasingly fast and inexpensive method of identification, especially as compared to manual identification of unknowns by increasingly rare expert taxonomists. Because most species on Earth are undescribed, a complete reference database at the species level is impractical in the near term. The question therefore arises whether unidentified species can, using DNA barcodes, be accurately assigned to more inclusive groups such as genera and families—taxonomic ranks of putatively monophyletic groups for which the global inventory is more complete and stable. We used a carefully chosen test library of CO1 sequences from 49 families, 313 genera, and 816 species of spiders to assess the accuracy of genus and family-level assignment. We used BLAST queries of each sequence against the entire library and got the top ten hits. The percent sequence identity was reported from these hits (PIdent, range 75–100%). Accurate assignment of higher taxa (PIdent above which errors totaled less than 5%) occurred for genera at PIdent values >95 and families at PIdent values ≥ 91, suggesting these as heuristic thresholds for accurate generic and familial identifications in spiders. Accuracy of identification increases with numbers of species/genus and genera/family in the library; above five genera per family and fifteen species per genus all higher taxon assignments were correct. We propose that using percent sequence identity between conventional barcode sequences may be a feasible and reasonably accurate method to identify animals to family/genus. However
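
The heuristic thresholds reported above translate directly into a small assignment rule on the best-hit percent identity. This is a sketch of the decision logic only, not the authors' pipeline (which runs BLAST and inspects ten hits, not one):

```python
def assign_higher_taxa(pident, genus_threshold=95.0, family_threshold=91.0):
    """Assign an unknown spider barcode to a higher taxon from its best-hit
    percent identity (PIdent), using the heuristic thresholds reported
    above: genus for PIdent > 95, family for PIdent >= 91."""
    if pident > genus_threshold:
        return "genus"
    if pident >= family_threshold:
        return "family"
    return "unassigned"

print(assign_higher_taxa(97.2))  # genus
print(assign_higher_taxa(93.0))  # family
print(assign_higher_taxa(85.0))  # unassigned
```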

  8. Robust design requirements specification: a quantitative method for requirements development using quality loss functions

    DEFF Research Database (Denmark)

    Pedersen, Søren Nygaard; Christensen, Martin Ebro; Howard, Thomas J.

    2016-01-01

    Product requirements serve many purposes in the product development process. Most importantly, they are meant to capture and facilitate product goals and acceptance criteria, as defined by stakeholders. Accurately communicating stakeholder goals and acceptance criteria can be challenging and more...

  9. A Highly Accurate Approach for Aeroelastic System with Hysteresis Nonlinearity

    Directory of Open Access Journals (Sweden)

    C. C. Cui

    2017-01-01

    Full Text Available We propose an accurate approach, based on the precise integration method, to solve the aeroelastic system of an airfoil with a pitch hysteresis. A major procedure for achieving high precision is to design a predictor-corrector algorithm. This algorithm enables accurate determination of the switching points resulting from the hysteresis. Numerical examples show that the results obtained by the presented method are in excellent agreement with exact solutions. In addition, the high accuracy can be maintained as the time step increases within a reasonable range. It is also found that the Runge-Kutta method may sometimes provide quite different and even erroneous results, even though its step length is much smaller than that adopted in the presented method. With such high computational accuracy, the presented method could be applicable to dynamical systems with hysteresis nonlinearities.
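
The key numerical ingredient described above, accurate location of the switching points introduced by the hysteresis, can be sketched with an event-location loop: integrate until the switching function changes sign, then bisect on the step size. The sketch below uses classical RK4 and a hypothetical switch at zero pitch displacement, whereas the paper builds its predictor-corrector on the precise integration method.

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def locate_switch(f, g, t, y, h, tol=1e-10):
    """Bisect on the step size until the switching function g changes sign:
    a simple corrector for the switching points a hysteresis introduces.
    Assumes g changes sign somewhere inside the step of length h."""
    lo, hi = 0.0, h
    g0 = g(y)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if g(rk4_step(f, t, y, mid)) * g0 > 0:
            lo = mid          # not yet crossed: move the left end
        else:
            hi = mid          # crossed: shrink from the right
        if hi - lo < tol:
            break
    return t + hi, rk4_step(f, t, y, hi)

# Undamped pitch oscillation x'' = -x started at x = 1: the trajectory is
# cos(t), so a (hypothetical) switch at x = 0 fires at t = pi/2.
f = lambda t, y: [y[1], -y[0]]
g = lambda y: y[0]            # switching function: sign change at x = 0
t, y = 0.0, [1.0, 0.0]
while g(y) > 0:
    y_next = rk4_step(f, t, y, 0.1)
    if g(y_next) <= 0:
        t, y = locate_switch(f, g, t, y, 0.1)
        break
    t, y = t + 0.1, y_next
print(t)  # close to pi/2 ~ 1.5708
```

Without the event location, a fixed-step integrator steps over the switch and commits to the wrong branch of the hysteresis, which is one way the plain Runge-Kutta results criticized above can go wrong.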

  10. Accurate phylogenetic tree reconstruction from quartets: a heuristic approach.

    Science.gov (United States)

    Reaz, Rezwana; Bayzid, Md Shamsuzzoha; Rahman, M Sohel

    2014-01-01

    Supertree methods construct trees on a set of taxa (species) by combining many smaller trees on overlapping subsets of the entire set of taxa. A 'quartet' is an unrooted tree over 4 taxa, hence quartet-based supertree methods combine many 4-taxon unrooted trees into a single and coherent tree over the complete set of taxa. Quartet-based phylogeny reconstruction methods have been receiving considerable attention in recent years. An accurate and efficient quartet-based method might be competitive with the current best phylogenetic tree reconstruction methods (such as maximum likelihood or Bayesian MCMC analyses), without being as computationally intensive. In this paper, we present a novel and highly accurate quartet-based phylogenetic tree reconstruction method. We performed an extensive experimental study to evaluate the accuracy and scalability of our approach on both simulated and biological datasets.

  11. Accurate van der Waals coefficients from density functional theory

    Science.gov (United States)

    Tao, Jianmin; Perdew, John P.; Ruzsinszky, Adrienn

    2012-01-01

    The van der Waals interaction is a weak, long-range correlation, arising from quantum electronic charge fluctuations. This interaction affects many properties of materials. A simple and yet accurate estimate of this effect will facilitate computer simulation of complex molecular materials and drug design. Here we develop a fast approach for accurate evaluation of dynamic multipole polarizabilities and van der Waals (vdW) coefficients of all orders from the electron density and static multipole polarizabilities of each atom or other spherical object, without empirical fitting. Our dynamic polarizabilities (dipole, quadrupole, octupole, etc.) are exact in the zero- and high-frequency limits, and exact at all frequencies for a metallic sphere of uniform density. Our theory predicts dynamic multipole polarizabilities in excellent agreement with more expensive many-body methods, and yields therefrom vdW coefficients C6, C8, C10 for atom pairs with a mean absolute relative error of only 3%. PMID:22205765
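
The leading vdW coefficient follows from the Casimir-Polder integral over imaginary frequency, C6 = (3/π) ∫₀^∞ αA(iω) αB(iω) dω. The sketch below evaluates it with a single-oscillator (London) model of α(iω), for which the integral is also analytic; the polarizability values are hypothetical, and in the paper α(iω) is derived from the electron density rather than assumed.

```python
import math

def alpha_iw(alpha0, omega0, w):
    """Single-oscillator (London) model of the dynamic dipole polarizability
    at imaginary frequency: alpha(iw) = alpha0 / (1 + (w/omega0)^2).
    (A toy model standing in for the paper's density-based alpha(iw).)"""
    return alpha0 / (1.0 + (w / omega0) ** 2)

def c6(alpha0_a, omega_a, alpha0_b, omega_b, n=20000, wmax=200.0):
    """Casimir-Polder formula C6 = (3/pi) * int_0^inf aA(iw) aB(iw) dw,
    evaluated with the composite trapezoidal rule on [0, wmax]."""
    dw = wmax / n
    total = 0.0
    for k in range(n + 1):
        w = k * dw
        f = alpha_iw(alpha0_a, omega_a, w) * alpha_iw(alpha0_b, omega_b, w)
        total += f * (0.5 if k in (0, n) else 1.0)
    return 3.0 / math.pi * total * dw

# For the one-oscillator model the integral is analytic (London's formula):
# C6 = 1.5 * a0A * a0B * wA * wB / (wA + wB)
a0, w0 = 1.38, 1.0   # hypothetical values in atomic units
exact = 1.5 * a0 * a0 * w0 * w0 / (w0 + w0)
print(c6(a0, w0, a0, w0), exact)
```

The quadrature result matches the closed form to better than 0.1%, which is the kind of consistency check one would run before trusting the same integral with numerically tabulated polarizabilities.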

  12. Truth in radiation: A matter of accurate measurement

    International Nuclear Information System (INIS)

    Gill, D.; Schutz, D.F.

    1990-01-01

    Radiation protection begins with accurate measurement. Without respect for radiation based on solid scientific measurement, free rein is given to those who would create harm through overexposures resulting from carelessness and apathy, as well as to those who would deny the benefits of nuclear technology by seeking to eliminate all exposure to nuclear radiation. Teledyne Isotopes has been a leader in accurate radiation measurement since 1955, providing measurements of all significant radionuclides in environmental media from the vicinity of nuclear power plants, nuclear material production facilities and research facilities. TI also provides thermoluminescent dosimetry (TLD) for environmental radiation and human exposure monitoring. Instrumentation ranges from manual readers for small facilities and research applications to automatic readers for large-scale badge service operations

  13. Tray-Grid Guide for Accurate Mini-implant Insertion

    Directory of Open Access Journals (Sweden)

    Madhukar Reddy Rachala

    2012-01-01

    Full Text Available The use of orthodontic mini-implant anchorage is rapidly growing. With an improved understanding of the biomechanics, an array of tooth movements is possible with mini-implants. Precise positioning of miniscrews is critical to their success. Surgical stents, guides and templates can transfer a radiographically planned, three-dimensional implant position to the surgical site more accurately. A new technique using thermoplastic sheets and a grid made of 0.012" stainless steel ligature wire (Tray-Grid Guide; TGG) was devised that provides reliable guidance in terms of both location and angulation with minimal complications. It was found to be effective and efficient in obtaining precise and accurate placement of mini-implants. It is particularly valuable when the mini-implant is prescribed and inserted by different clinicians or when the orthodontist is inexperienced in implant techniques.

  14. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    Directory of Open Access Journals (Sweden)

    Zhiwei Zhao

    2015-02-01

    Full Text Available Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation.
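
A minimal notion of link correlation, the conditional reception probability between two receivers overhearing the same broadcasts, can be estimated directly from 0/1 reception traces, as below. This is only the base quantity; LACE additionally fuses long-term and short-term link behavior, which this sketch does not attempt.

```python
def link_correlation(recv_a, recv_b):
    """Empirical link correlation between two receivers A and B that
    overhear the same broadcasts: P(B receives | A receives).
    recv_a / recv_b are 0/1 reception traces over the same packet sequence.
    (A minimal estimator, not the LACE algorithm itself.)"""
    both = sum(a and b for a, b in zip(recv_a, recv_b))
    a_received = sum(recv_a)
    return both / a_received if a_received else 0.0

# Correlated links: B usually receives exactly when A does
a = [1, 1, 0, 1, 1, 0, 1, 1]
b = [1, 1, 0, 1, 0, 0, 1, 1]
print(link_correlation(a, b))  # 5/6 ~ 0.833
```

Protocols such as correlation-aware flooding use this quantity to decide which retransmissions are redundant: if B almost always receives when A does, acknowledging A largely covers B as well.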

  15. The highly accurate anteriolateral portal for injecting the knee

    Directory of Open Access Journals (Sweden)

    Chavez-Chiang Colbert E

    2011-03-01

    Full Text Available Abstract Background The extended knee lateral midpatellar portal for intraarticular injection of the knee is accurate but is not practical for all patients. We hypothesized that a modified anteriolateral portal where the synovial membrane of the medial femoral condyle is the target would be highly accurate and effective for intraarticular injection of the knee. Methods 83 subjects with non-effusive osteoarthritis of the knee were randomized to intraarticular injection using the modified anteriolateral bent knee versus the standard lateral midpatellar portal. After hydrodissection of the synovial membrane with lidocaine using a mechanical syringe (reciprocating procedure device), 80 mg of triamcinolone acetonide were injected into the knee with a 2.0-in (5.1-cm) 21-gauge needle. Baseline pain, procedural pain, and pain at outcome (2 weeks and 6 months) were determined with the 10 cm Visual Analogue Pain Score (VAS). The accuracy of needle placement was determined by sonographic imaging. Results The lateral midpatellar and anteriolateral portals resulted in equivalent clinical outcomes including procedural pain (VAS midpatellar: 4.6 ± 3.1 cm; anteriolateral: 4.8 ± 3.2 cm; p = 0.77), pain at outcome (VAS midpatellar: 2.6 ± 2.8 cm; anteriolateral: 1.7 ± 2.3 cm; p = 0.11), responders (midpatellar: 45%; anteriolateral: 56%; p = 0.33), duration of therapeutic effect (midpatellar: 3.9 ± 2.4 months; anteriolateral: 4.1 ± 2.2 months; p = 0.69), and time to next procedure (midpatellar: 7.3 ± 3.3 months; anteriolateral: 7.7 ± 3.7 months; p = 0.71). The anteriolateral portal was 97% accurate by real-time ultrasound imaging. Conclusion The modified anteriolateral bent knee portal is an effective, accurate, and equivalent alternative to the standard lateral midpatellar portal for intraarticular injection of the knee. Trial Registration ClinicalTrials.gov: NCT00651625

  16. Accurate quasi static capacitance for abrupt homojunction under ...

    Indian Academy of Sciences (India)

    Bulletin of Materials Science, Volume 36, Issue 2. D Boukredimi and H Allouche, Laboratoire de Physique des Couches Minces et Matériaux pour l'Electronique, Département de Physique, Faculté des Sciences, Université d'Oran, Es-sénia 31100, Oran, Algérie.

  17. Accurate interpolation of 3D fields in charged particle optics.

    Science.gov (United States)

    Horák, Michal; Badin, Viktor; Zlámal, Jakub

    2018-03-29

    Standard 3D interpolation polynomials often suffer from numerical errors of the calculated field and lack of node points in the 3D solution. We introduce a novel method for accurate and smooth interpolation of arbitrary electromagnetic fields in the vicinity of the optical axis, valid up to 90% of the bore radius. Our method combines Fourier analysis and Gaussian wavelet interpolation and provides the axial multipole field functions and their derivatives analytically. The results are accurate and noiseless, usually up to the 5th derivative. This is very advantageous for further applications, such as accurate particle tracing and evaluation of aberration coefficients and other optical properties. The proposed method also enables studying the strength and orientation of all multipole field components. To illustrate the capabilities of the proposed algorithm, we present three examples: a magnetic lens with a hole in the polepiece, a saturated magnetic lens with an elliptic polepiece, and an electrostatic 8-electrode multipole. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. Is bioelectrical impedance accurate for use in large epidemiological studies?

    Directory of Open Access Journals (Sweden)

    Merchant Anwar T

    2008-09-01

    Full Text Available Abstract Percentage of body fat is strongly associated with the risk of several chronic diseases but its accurate measurement is difficult. Bioelectrical impedance analysis (BIA) is a relatively simple, quick and non-invasive technique to measure body composition. It measures body fat accurately in controlled clinical conditions but its performance in the field is inconsistent. In large epidemiologic studies simpler surrogate techniques such as body mass index (BMI), waist circumference, and waist-hip ratio are frequently used instead of BIA to measure body fatness. We reviewed the rationale, theory, and technique of recently developed systems such as foot (or hand)-to-foot BIA measurement, and the elements that could influence its results in large epidemiologic studies. BIA results are influenced by factors such as the environment, ethnicity, phase of menstrual cycle, and underlying medical conditions. We concluded that BIA measurements validated for specific ethnic groups, populations and conditions can accurately measure body fat in those populations, but not others, and suggest that for large epidemiological studies with diverse populations BIA may not be the appropriate choice for body composition measurement unless specific calibration equations are developed for different groups participating in the study.

  19. Accurate and approximate thermal rate constants for polyatomic chemical reactions

    International Nuclear Information System (INIS)

    Nyman, Gunnar

    2007-01-01

    In favourable cases it is possible to calculate thermal rate constants for polyatomic reactions to high accuracy from first principles. Here, we discuss the use of flux correlation functions combined with the multi-configurational time-dependent Hartree (MCTDH) approach to efficiently calculate cumulative reaction probabilities and thermal rate constants for polyatomic chemical reactions. Three isotopic variants of the H2 + CH3 → CH4 + H reaction are used to illustrate the theory. There is good agreement with experimental results, although the experimental rates generally are larger than the calculated ones, which are believed to be at least as accurate as the experimental rates. Approximations allowing evaluation of the thermal rate constant above 400 K are treated. It is also noted that for the treated reactions, transition state theory (TST) gives accurate rate constants above 500 K. TST also gives accurate results for kinetic isotope effects in cases where the mass of the transferred atom is unchanged. Due to neglect of tunnelling, TST however fails below 400 K if the mass of the transferred atom changes between the isotopic reactions.
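
The conventional TST rate constant the abstract compares against can be sketched in Eyring form, k(T) = (kB·T/h) · (Q‡/Q_reactants) · exp(-E0/RT), with the partition-function ratio left as an input (set to 1 here for illustration) and no tunnelling correction, which is precisely the term whose neglect makes TST fail below about 400 K. The 50 kJ/mol barrier is an invented example, not a value from the paper.

```python
import math

KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
R = 8.314462618      # gas constant, J/(mol*K)

def tst_rate(T, barrier_kj_mol, q_ratio=1.0):
    """Conventional transition state theory rate constant,
    k(T) = (kB*T/h) * q_ratio * exp(-E0/(R*T)), in units of 1/s
    for q_ratio = 1. No tunnelling correction is included."""
    return KB * T / H * q_ratio * math.exp(-barrier_kj_mol * 1000.0 / (R * T))

# A 50 kJ/mol barrier: the rate grows steeply with temperature,
# so the low-temperature regime is where tunnelling corrections matter most
print(tst_rate(300.0, 50.0) / tst_rate(500.0, 50.0))  # ratio << 1
```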

  20. Is bioelectrical impedance accurate for use in large epidemiological studies?

    Science.gov (United States)

    Dehghan, Mahshid; Merchant, Anwar T

    2008-01-01

    Percentage of body fat is strongly associated with the risk of several chronic diseases but its accurate measurement is difficult. Bioelectrical impedance analysis (BIA) is a relatively simple, quick and non-invasive technique to measure body composition. It measures body fat accurately in controlled clinical conditions but its performance in the field is inconsistent. In large epidemiologic studies simpler surrogate techniques such as body mass index (BMI), waist circumference, and waist-hip ratio are frequently used instead of BIA to measure body fatness. We reviewed the rationale, theory, and technique of recently developed systems such as foot (or hand)-to-foot BIA measurement, and the elements that could influence its results in large epidemiologic studies. BIA results are influenced by factors such as the environment, ethnicity, phase of menstrual cycle, and underlying medical conditions. We concluded that BIA measurements validated for specific ethnic groups, populations and conditions can accurately measure body fat in those populations, but not others, and suggest that for large epidemiological studies with diverse populations BIA may not be the appropriate choice for body composition measurement unless specific calibration equations are developed for different groups participating in the study. PMID:18778488

  1. Accurate and occlusion-robust multi-view stereo

    Science.gov (United States)

    Zhu, Zhaokun; Stamatopoulos, Christos; Fraser, Clive S.

    2015-11-01

    This paper proposes an accurate multi-view stereo method for image-based 3D reconstruction that features robustness in the presence of occlusions. The new method offers improvements in dealing with two fundamental image matching problems. The first concerns the selection of the support window model, while the second centers upon accurate visibility estimation for each pixel. The support window model is based on an approximate 3D support plane described by a depth and two per-pixel depth offsets. For the visibility estimation, the multi-view constraint is initially relaxed by generating separate support plane maps for each support image using a modified PatchMatch algorithm. Then the most likely visible support image, which represents the minimum visibility of each pixel, is extracted via a discrete Markov Random Field model and it is further augmented by parameter clustering. Once the visibility is estimated, multi-view optimization taking into account all redundant observations is conducted to achieve optimal accuracy in the 3D surface generation for both depth and surface normal estimates. Finally, multi-view consistency is utilized to eliminate any remaining observational outliers. The proposed method is experimentally evaluated using well-known Middlebury datasets, and results obtained demonstrate that it is amongst the most accurate of the methods thus far reported via the Middlebury MVS website. Moreover, the new method exhibits a high completeness rate.

  2. Osteoporosis treatment

    DEFF Research Database (Denmark)

    Pazianas, Michael; Abrahamsen, Bo

    2016-01-01

    The findings of the Women's Health Initiative study in 2002 marginalized the use of hormone replacement therapy and established bisphosphonates as the first line of treatment for osteoporosis. Denosumab could be used in selected patients. Although bisphosphonates only maintain the structure of bone...... to their benefits/harm ratio. Treatment of osteoporosis is a long process, and many patients will require treatment with more than one type of drug over their lifetime....

  3. Accurate Classification of Chronic Migraine via Brain Magnetic Resonance Imaging

    Science.gov (United States)

    Schwedt, Todd J.; Chong, Catherine D.; Wu, Teresa; Gaw, Nathan; Fu, Yinlin; Li, Jing

    2015-01-01

    Background The International Classification of Headache Disorders provides criteria for the diagnosis and subclassification of migraine. Since there is no objective gold standard by which to test these diagnostic criteria, the criteria are based on the consensus opinion of content experts. Accurate migraine classifiers consisting of brain structural measures could serve as an objective gold standard by which to test and revise diagnostic criteria. The objectives of this study were to utilize magnetic resonance imaging measures of brain structure for constructing classifiers: 1) that accurately identify individuals as having chronic vs. episodic migraine vs. being a healthy control; and 2) that test the currently used threshold of 15 headache days/month for differentiating chronic migraine from episodic migraine. Methods Study participants underwent magnetic resonance imaging for determination of regional cortical thickness, cortical surface area, and volume. Principal components analysis combined structural measurements into principal components accounting for 85% of variability in brain structure. Models consisting of these principal components were developed to achieve the classification objectives. Ten-fold cross-validation assessed classification accuracy within each of the ten runs, with data from 90% of participants randomly selected for classifier development and data from the remaining 10% of participants used to test classification performance. Headache frequency thresholds ranging from 5–15 headache days/month were evaluated to determine the threshold allowing for the most accurate subclassification of individuals into lower and higher frequency subgroups. Results Participants were 66 migraineurs and 54 healthy controls, 75.8% female, with an average age of 36 ± 11 years. Average classifier accuracies were: a) 68% for migraine (episodic + chronic) vs. healthy controls; b) 67.2% for episodic migraine vs. healthy controls; c) 86.3% for chronic
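    The dimensionality-reduction step described above, retaining principal components that explain 85% of the variance, can be sketched with a plain SVD (hypothetical data; the study's actual pipeline and classifier are not reproduced here).

```python
import numpy as np

def pca_components(X, var_target=0.85):
    """Project feature matrix X (subjects x measurements) onto the
    smallest number of principal components explaining at least
    `var_target` of the total variance (85% in the study)."""
    Xc = X - X.mean(axis=0)                    # center features
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s ** 2 / (s ** 2).sum()              # variance per component
    k = int(np.searchsorted(np.cumsum(var), var_target) + 1)
    return Xc @ Vt[:k].T, float(var[:k].sum())
```

    The returned scores (one row per subject) would then feed the classification models, and the whole procedure would be repeated inside each fold of the ten-fold cross-validation to avoid information leakage.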

  4. Medical Treatment in Lieu of Evacuation: Techniques for Combat Casualty Care Physicians

    Science.gov (United States)

    2012-06-08

    begin treatment in a medical facility, but later die due to those same injuries.4 Unfortunately, DOW is most accurately measured at the hospital, or...intensive, or nursing services. However, the addition of a patient hold section, X-ray diagnostics, and minimal laboratory equipment allow Role 2...fracture.28 Currently, the standard of medicine requires an X-ray to confirm a fracture before it is treated. An additional X-ray is required after

  5. Accurate monitoring developed by EDF for FA-3-EPR™ and UK-EPR™: chemistry-radiochemistry design and procedures

    International Nuclear Information System (INIS)

    Tigeras, Arancha; Bouhrizi, Sofia; Pierre, Marine; L'Orphelin, Jean-Matthieu

    2012-09-01

    The monitoring of chemistry and radiochemistry parameters is a fundamental need in nuclear power plants in order to ensure: - The reactivity control in real time, - The barrier integrity surveillance by means of fuel cladding failure detection and primary pressure-boundary component control, - The water quality, to limit radiation build-up and material corrosion and thereby prepare the maintenance, radioprotection and waste operations, - The efficiency of treatment systems and hence the minimization of chemical and radiochemical discharges. The relevant chemistry and radiochemistry parameters to be monitored are selected depending on the chemistry conditioning of systems, the source term evaluations, the corrosion mechanisms and the radioactivity consequences. In spite of the difficulties of obtaining representative samples under all circumstances, the EPR™ design provides the appropriate provisions and analytical procedures for ensuring reliable and accurate monitoring of parameters in compliance with the specification requirements. The design solutions adopted for Flamanville 3-EPR™ and UK-EPR™, concerning the sampling conditions and locations, the on-line and analytical equipment, the procedures and the transmission of results to the control room and chemistry laboratory, are supported by ALARP considerations, international experience and research on nuclide behavior (corrosion product and actinide solubility, fission product degassing, and impurity and additive reactions). This paper details the means developed by EDF for making successful and meaningful sampling and measurements to achieve the essential objectives associated with the monitoring. (authors)

  6. FULLY AUTOMATED GENERATION OF ACCURATE DIGITAL SURFACE MODELS WITH SUB-METER RESOLUTION FROM SATELLITE IMAGERY

    Directory of Open Access Journals (Sweden)

    J. Wohlfeil

    2012-07-01

    Full Text Available Modern pixel-wise image matching algorithms like Semi-Global Matching (SGM are able to compute high resolution digital surface models from airborne and spaceborne stereo imagery. Although image matching itself can be performed automatically, there are prerequisites, like high geometric accuracy, which are essential for ensuring the high quality of resulting surface models. Especially for line cameras, these prerequisites currently require laborious manual interaction using standard tools, which is a growing problem due to continually increasing demand for such surface models. The tedious work includes partly or fully manual selection of tie- and/or ground control points for ensuring the required accuracy of the relative orientation of images for stereo matching. It also includes masking of large water areas that seriously reduce the quality of the results. Furthermore, a good estimate of the depth range is required, since accurate estimates can seriously reduce the processing time for stereo matching. In this paper an approach is presented that allows performing all these steps fully automated. It includes very robust and precise tie point selection, enabling the accurate calculation of the images’ relative orientation via bundle adjustment. It is also shown how water masking and elevation range estimation can be performed automatically on the basis of freely available SRTM data. Extensive tests with a large number of different satellite images from QuickBird and WorldView are presented as proof of the robustness and reliability of the proposed method.

  7. The NLC Software Requirements Methodology

    Energy Technology Data Exchange (ETDEWEB)

    Shoaee, Hamid

    2002-08-20

    We describe the software requirements and development methodology developed for the NLC control system. Given the longevity of that project, and the likely geographical distribution of the collaborating engineers, the planned requirements management process is somewhat more formal than the norm in high energy physics projects. The short term goals of the requirements process are to accurately estimate costs, to decompose the problem, and to determine likely technologies. The long term goal is to enable a smooth transition from high level functional requirements to specific subsystem and component requirements for individual programmers, and to support distributed development. The methodology covers both ends of that life cycle. It covers both the analytical and documentary tools for software engineering, and project management support. This paper introduces the methodology, which is fully described in [1].

  8. Impact of appendicitis during pregnancy: No delay in accurate diagnosis and treatment

    NARCIS (Netherlands)

    Aggenbach, L.; Zeeman, G. G.; Cantineau, A. E. P.; Gordijn, S. J.; Hofker, H. S.

    Background: Acute appendicitis during pregnancy may be associated with serious maternal and/or fetal complications. To date, the optimal clinical approach to the management of pregnant women suspected of having acute appendicitis is subject to debate. The purpose of this retrospective study was to

  9. Down Syndrome and Dementia: Is Depression a Confounder for Accurate Diagnosis and Treatment?

    Science.gov (United States)

    Wark, Stuart; Hussain, Rafat; Parmenter, Trevor

    2014-01-01

    The past century has seen a dramatic improvement in the life expectancy of people with Down syndrome. However, research has shown that individuals with Down syndrome now have an increased likelihood of early onset dementia. They are more likely than their mainstream peers to experience other significant co-morbidities including mental health…

  10. Nutrients requirements in biological industrial wastewater treatment ...

    African Journals Online (AJOL)


  11. An efficient discontinuous Galerkin finite element method for highly accurate solution of maxwell equations

    KAUST Repository

    Liu, Meilin

    2012-08-01

    A discontinuous Galerkin finite element method (DG-FEM) with a highly accurate time integration scheme for solving Maxwell equations is presented. The new time integration scheme is in the form of traditional predictor-corrector algorithms, PE(CE)^m, but it uses coefficients that are obtained using a numerical scheme with fully controllable accuracy. Numerical results demonstrate that the proposed DG-FEM uses larger time steps than DG-FEM with classical PE(CE)^m schemes when high accuracy, which could be obtained using high-order spatial discretization, is required. © 1963-2012 IEEE.
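    A PE(CE)^m scheme Predicts with an explicit formula, then applies m Evaluate-Correct passes of an implicit corrector. The sketch below shows the classical pattern on a scalar ODE, using a 2-step Adams-Bashforth predictor and a trapezoidal (Adams-Moulton) corrector; the paper's contribution is replacing the classical coefficients, which is not reproduced here.

```python
def pece_m(f, y0, t0, t1, n, m=2):
    """Integrate dy/dt = f(t, y) with a classical PE(CE)^m scheme:
    Predict with 2-step Adams-Bashforth, then apply m Evaluate-Correct
    passes of the trapezoidal corrector. The first step is
    bootstrapped with Heun's method to obtain the needed history."""
    h = (t1 - t0) / n
    t, y = t0, y0
    f_prev = f(t, y)
    # Bootstrap one step with Heun (an order-2 predictor-corrector).
    y_pred = y + h * f_prev
    y = y + 0.5 * h * (f_prev + f(t + h, y_pred))
    t += h
    for _ in range(n - 1):
        f_curr = f(t, y)
        y_new = y + 0.5 * h * (3.0 * f_curr - f_prev)   # P: AB2
        for _ in range(m):                              # (CE)^m passes
            y_new = y + 0.5 * h * (f_curr + f(t + h, y_new))
        f_prev = f_curr
        t += h
        y = y_new
    return y
```

    On dy/dt = -y with y(0) = 1 this reproduces exp(-t) to the corrector's second-order accuracy; in the DG-FEM setting f would be the spatially discretized Maxwell right-hand side.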

  12. Calculation of accurate small angle X-ray scattering curves from coarse-grained protein models

    DEFF Research Database (Denmark)

    Stovgaard, Kasper; Andreetta, Christian; Ferkinghoff-Borg, Jesper

    2010-01-01

    scattering bodies per amino acid led to significantly better results than a single scattering body. Conclusion: We show that the obtained point estimates allow the calculation of accurate SAXS curves from coarse-grained protein models. The resulting curves are on par with the current state-of-the-art program...... CRYSOL, which requires full atomic detail. Our method was also comparable to CRYSOL in recognizing native structures among native-like decoys. As a proof-of-concept, we combined the coarse-grained Debye calculation with a previously described probabilistic model of protein structure, Torus...

  13. Accurately fitting advanced training. Flexible simulator training by modular training course concepts

    International Nuclear Information System (INIS)

    Sickora, Katrin; Cremer, Hans-Peter

    2010-01-01

    Every employee of a power plant contributes with his individual expertise to the success of the enterprise. Just as power plants differ from one another, so do the personal skills of their employees. With respect to effective simulator training this means that no two simulator training courses can be identical. To meet the requirements of our customers exactly, KWS has developed modules for simulator training courses. Each module either represents a technical subject or addresses a topic in the field of soft skills. An accurately fitting combination of several of these modules to the needs of our customers allows for highly efficient simulator training courses. (orig.)

  14. Two reactions method for accurate analysis by irradiation with charged particles

    International Nuclear Information System (INIS)

    Ishii, K.; Sastri, C.S.; Valladon, M.; Borderie, B.; Debrun, J.L.

    1978-01-01

    In the average stopping power method the formula error itself was negligible but systematic errors could be introduced by the stopping power data used in this formula. A method directly derived from the average stopping power method, but based on the use of two nuclear reactions, is described here. This method has a negligible formula error and does not require the use of any stopping power or range data: accurate and 'self-consistent' analysis by irradiation with charged particles is then possible. (Auth.)

  15. Chemical techniques to extract organic fractions from fossil bones for accurate 14C dating

    International Nuclear Information System (INIS)

    Minami, Masayo; Muto, Hiroo; Nakamura, Toshio

    2004-01-01

    We examined different concentrations of HCl (0.4, 0.6, 0.8, 1.0 and 1.2 M) for the decalcification of fossil bones and different durations of 0.1 M NaOH treatment on collagens to determine the best conditions for purifying collagen by extracting humic contaminants, and compared the alkali treatment method with the XAD-2 treatment method for several types of fossils. On decalcification, the yield of acid-insoluble bone fractions did not change over the range from 0.4 to 1.0 M HCl and decreased suddenly at 1.2 M HCl, and the 14C ages of the gelatins extracted from the five decalcified fractions were unchanged, giving the same 14C ages as those of the XAD-purified hydrolysates. The NaOH treatment time should be less than several hours to avoid a loss of collagen. The fossil bones used here are relatively well preserved; for poorly preserved bones the alkali treatment could bring about a large loss of organic bone proteins. The XAD-2 treatment method is effective for accurate radiocarbon dating of fossil bones, provided the XAD-2 resin is completely pre-cleaned

  16. A flexible and accurate digital volume correlation method applicable to high-resolution volumetric images

    Science.gov (United States)

    Pan, Bing; Wang, Bo

    2017-10-01

    Digital volume correlation (DVC) is a powerful technique for quantifying interior deformation within solid opaque materials and biological tissues. In the last two decades, great efforts have been made to improve the accuracy and efficiency of the DVC algorithm. However, there is still a lack of a flexible, robust and accurate version that can be efficiently implemented in personal computers with limited RAM. This paper proposes an advanced DVC method that can realize accurate full-field internal deformation measurement applicable to high-resolution volume images with up to billions of voxels. Specifically, a novel layer-wise reliability-guided displacement tracking strategy combined with dynamic data management is presented to guide the DVC computation from slice to slice. The displacements at specified calculation points in each layer are computed using the advanced 3D inverse-compositional Gauss-Newton algorithm with the complete initial guess of the deformation vector accurately predicted from the computed calculation points. Since only limited slices of interest in the reference and deformed volume images rather than the whole volume images are required, the DVC calculation can thus be efficiently implemented on personal computers. The flexibility, accuracy and efficiency of the presented DVC approach are demonstrated by analyzing computer-simulated and experimentally obtained high-resolution volume images.

  17. Methods for Efficiently and Accurately Computing Quantum Mechanical Free Energies for Enzyme Catalysis.

    Science.gov (United States)

    Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L

    2016-01-01

    Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge as it requires both the accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling, are essentially nonexistent. In this chapter, we will give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We will also describe usage of these methods to calculate free energies associated with (1) relative properties and (2) along reaction paths, using simple test cases with relevance to enzymes. © 2016 Elsevier Inc. All rights reserved.
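    One of the two approaches mentioned, the nonequilibrium work method, rests on the Jarzynski equality ΔF = −kT ln⟨exp(−W/kT)⟩ over repeated work measurements. A minimal estimator sketch is given below, assuming the work samples W are already available (the QM/MM machinery that produces them is far beyond this snippet); for Gaussian-distributed work the analytic answer is ΔF = μ − σ²/(2kT), which the test uses.

```python
import numpy as np

def jarzynski_free_energy(work, kT=1.0):
    """Estimate a free-energy difference from nonequilibrium work
    values W via the Jarzynski equality:
        dF = -kT * ln < exp(-W / kT) >
    A log-sum-exp formulation (logaddexp.reduce) is used so that
    strongly negative work values do not overflow the exponential."""
    w = np.asarray(work, dtype=float) / kT
    return -kT * (np.logaddexp.reduce(-w) - np.log(w.size))
```

    In practice the estimator is biased for small sample sizes when the work distribution is broad relative to kT, which is exactly the sampling difficulty the chapter's methods aim to mitigate.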

  18. Fast sweeping algorithm for accurate solution of the TTI eikonal equation using factorization

    KAUST Repository

    bin Waheed, Umair

    2017-06-10

    Traveltime computation is essential for many seismic data processing applications and velocity analysis tools. High-resolution seismic imaging requires eikonal solvers to account for anisotropy whenever it significantly affects the seismic wave kinematics. Moreover, computation of auxiliary quantities, such as amplitude and take-off angle, rely on highly accurate traveltime solutions. However, the finite-difference based eikonal solution for a point-source initial condition has an upwind source-singularity at the source position, since the wavefront curvature is large near the source point. Therefore, all finite-difference solvers, even the high-order ones, show inaccuracies since the errors due to source-singularity spread from the source point to the whole computational domain. We address the source-singularity problem for tilted transversely isotropic (TTI) eikonal solvers using factorization. We solve a sequence of factored tilted elliptically anisotropic (TEA) eikonal equations iteratively, each time by updating the right hand side function. At each iteration, we factor the unknown TEA traveltime into two factors. One of the factors is specified analytically, such that the other factor is smooth in the source neighborhood. Therefore, through the iterative procedure we obtain accurate solution to the TTI eikonal equation. Numerical tests show significant improvement in accuracy due to factorization. The idea can be easily extended to compute accurate traveltimes for models with lower anisotropic symmetries, such as orthorhombic, monoclinic or even triclinic media.

  19. Eddy covariance observations of methane and nitrous oxide emissions. Towards more accurate estimates from ecosystems

    International Nuclear Information System (INIS)

    Kroon, P.S.

    2010-09-01

    About 30% of the increased greenhouse gas (GHG) emissions of carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O) are related to land use changes and agricultural activities. In order to select effective measures, knowledge is required about GHG emissions from these ecosystems and how these emissions are influenced by management and meteorological conditions. Accurate emission values are therefore needed for all three GHGs to compile the full GHG balance. However, the current annual estimates of CH4 and N2O emissions from ecosystems have significant uncertainties, even larger than 50%. The present study showed that an advanced technique, the micrometeorological eddy covariance flux technique, can obtain more accurate estimates, with uncertainties even smaller than 10%. The current regional and global trace gas flux estimates of CH4 and N2O are possibly seriously underestimated due to incorrect measurement procedures. Accurate measurements of both gases are important since together they can contribute more than two-thirds of the total GHG emission. For example, the total GHG emission of a dairy farm site was estimated at 16×10³ kg ha⁻¹ yr⁻¹ in CO2-equivalents, of which 25% and 45% were contributed by CH4 and N2O, respectively. About 60% of the CH4 emission was emitted by ditches and their bordering edges. These emissions are not yet included in the national inventory reports. We recommend including these emissions in coming reports.
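    The eddy covariance technique itself reduces to the covariance of fast vertical-wind and concentration time series, F = mean(w′c′), with primes denoting deviations from the averaging-period mean (Reynolds decomposition). A minimal sketch with synthetic series:

```python
import numpy as np

def eddy_flux(w, c):
    """Eddy-covariance flux over an averaging period:
    F = mean(w'c'), the covariance of vertical wind speed w and
    scalar concentration c, with primes denoting deviations from
    the period means."""
    wp = w - w.mean()
    cp = c - c.mean()
    return float(np.mean(wp * cp))
```

    Real processing adds coordinate rotation, density (WPL) corrections and spectral corrections on top of this core covariance, which is where much of the quoted uncertainty budget lives.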

  20. Toward Accurate On-Ground Attitude Determination for the Gaia Spacecraft

    Science.gov (United States)

    Samaan, Malak A.

    2010-03-01

    The work presented in this paper concerns the accurate On-Ground Attitude (OGA) reconstruction for the astrometry spacecraft Gaia in the presence of disturbance and control torques acting on the spacecraft. The reconstruction of the expected environmental torques which influence the spacecraft dynamics will also be investigated. The telemetry data from the spacecraft will include the on-board real-time attitude, whose accuracy is of the order of several arcsec. This raw attitude is the starting point for the further attitude reconstruction. The OGA will use as inputs the field coordinates of known stars (attitude stars) and also the field coordinate differences of objects on the Sky Mapper (SM) and Astrometric Field (AF) payload instruments to improve this raw attitude. The on-board attitude determination uses a Kalman Filter (KF) to minimize the attitude errors and produce a more accurate attitude estimate than the pure star tracker measurement. Therefore the first approach for the OGA will be an adapted version of the KF. Furthermore, we will design a batch least squares algorithm to investigate how to obtain a more accurate OGA estimate. Finally, a comparison between these different attitude determination techniques in terms of accuracy, robustness, speed and memory required will be carried out in order to choose the best attitude algorithm for the OGA. The expected resulting accuracy for the OGA determination will be on the order of milli-arcsec.
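    The KF mentioned above can be illustrated with a minimal scalar filter that smooths noisy attitude-angle measurements under a near-constant-angle (random-walk) process model. This is a toy sketch, not Gaia's actual multi-axis filter; q and r are assumed process and measurement noise variances.

```python
import numpy as np

def kalman_attitude(z, q=1e-6, r=1e-2, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter: smooth noisy attitude-angle
    measurements z (e.g. star-tracker readings) assuming the angle
    follows a slow random walk. Returns the filtered estimates."""
    x, p = x0, p0
    out = []
    for zk in z:
        p = p + q                 # predict: random-walk model
        k = p / (p + r)           # Kalman gain
        x = x + k * (zk - x)      # update with measurement
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)
```

    The batch least-squares alternative mentioned in the abstract would instead fit all measurements at once, trading memory and latency for accuracy, which is exactly the trade-off the paper evaluates.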

  1. Three-dimensional photoacoustic imaging and inversion for accurate quantification of chromophore distributions

    Science.gov (United States)

    Fonseca, Martina; Malone, Emma; Lucka, Felix; Ellwood, Rob; An, Lu; Arridge, Simon; Beard, Paul; Cox, Ben

    2017-03-01

    Photoacoustic tomography can, in principle, provide quantitatively accurate, high-resolution, images of chromophore distributions in 3D in vivo. However, achieving this goal requires not only dealing with the optical fluence-related spatial and spectral distortion but also having access to high quality, calibrated, measurements and using image reconstruction algorithms free from inaccurate assumptions. Furthermore, accurate knowledge of experimental parameters, such as the positions of the ultrasound detectors and the illumination pattern, is necessary for the reconstruction step. A meticulous and rigorous experimental phantom study was conducted to show that highly-resolved 3D estimation of chromophore distributions can be achieved: a crucial step towards in vivo implementation. The phantom consisted of four 580 μm diameter tubes with different ratios of copper sulphate and nickel sulphate as hemoglobin analogues, submersed in a background medium of intralipid and india ink. The optical absorption, scattering, photostability, and Grüneisen parameter were characterised for all components independently. A V-shaped imaging scanner enabled 3D imaging with the high resolution, high sensitivity, and wide bandwidth characteristic of Fabry-Pérot ultrasound sensors, but without the limited-view disadvantage of single-plane scanners. The optical beam profile and position were determined experimentally. Nine wavelengths between 750 and 1110 nm were used. The images of the chromophore concentrations were obtained using a model-based, two-step, procedure, that did not require image segmentation. First, the acoustic reconstruction was solved with an iterative time-reversal algorithm to obtain images of the initial acoustic pressure at each of the nine wavelengths for an 18×17×13 mm³ volume with 50 μm voxels. Then, 3D high resolution estimates of the chromophore concentrations were obtained by using a diffusion model of light transport in an iterative nonlinear optimisation
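    The final spectral step, recovering chromophore concentrations from multiwavelength absorption estimates, is in its simplest form a linear least-squares unmixing, assuming the wavelength-dependent fluence has already been corrected (which is the hard part the paper's nonlinear optimisation handles). A sketch under that simplifying assumption:

```python
import numpy as np

def unmix_concentrations(mu_a, epsilon):
    """Recover chromophore concentrations from absorption spectra.

    mu_a    : absorption coefficients estimated at N wavelengths
    epsilon : (N x K) matrix of molar absorption spectra of the K
              chromophores at those wavelengths

    Solves mu_a = epsilon @ c in the least-squares sense; with more
    wavelengths than chromophores (9 vs. a few here) the system is
    overdetermined, which suppresses noise.
    """
    c, *_ = np.linalg.lstsq(epsilon, mu_a, rcond=None)
    return c
```

    Doing this per voxel over the reconstructed volume yields the 3D concentration maps described above.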

  2. Accurate characterization of OPVs: Device masking and different solar simulators

    DEFF Research Database (Denmark)

    Gevorgyan, Suren; Carlé, Jon Eggert; Søndergaard, Roar R.

    2013-01-01

    One of the prime objects of organic solar cell research has been to improve the power conversion efficiency. Unfortunately, the accurate determination of this property is not straight forward and has led to the recommendation that record devices be tested and certified at a few accredited...... laboratories following rigorous ASTM and IEC standards. This work tries to address some of the issues confronting the standard laboratory in this regard. Solar simulator lamps are investigated for their light field homogeneity and direct versus diffuse components, as well as the correct device area...

  3. An Accurate Transmitting Power Control Method in Wireless Communication Transceivers

    Science.gov (United States)

    Zhang, Naikang; Wen, Zhiping; Hou, Xunping; Bi, Bo

    2018-01-01

    Power control circuits are widely used in transceivers aiming at stabilizing the transmitted signal power to a specified value, thereby reducing power consumption and interference to other frequency bands. In order to overcome the shortcomings of traditional modes of power control, this paper proposes an accurate signal power detection method by multiplexing the receiver and realizes transmitting power control in the digital domain. The simulation results show that this novel digital power control approach has advantages of small delay, high precision and simplified design procedure. The proposed method is applicable to transceivers working at large frequency dynamic range, and has good engineering practicability.

  4. Acoustic Effects Accurately Predict an Extreme Case of Biological Morphology

    Science.gov (United States)

    Zhang, Zhiwei; Truong, Son Nguyen; Müller, Rolf

    2009-07-01

    The biosonar system of bats utilizes physical baffle shapes around the sites of ultrasound emission for diffraction-based beam forming. Among these shapes, some extreme cases have evolved that include a long noseleaf protrusion (sella) in a species of horseshoe bat. We have evaluated the acoustic cost function associated with sella length with a computational physics approach and found that the extreme length can be predicted accurately from a fiducial point on this function. This suggests that some extreme cases of biological morphology can be explained from their physical function alone.

  5. Accurate ocean bottom seismometer positioning method inspired by multilateration technique

    Science.gov (United States)

    Benazzouz, Omar; Pinheiro, Luis M.; Matias, Luis M. A.; Afilhado, Alexandra; Herold, Daniel; Haines, Seth

    2018-01-01

    The positioning of ocean bottom seismometers (OBS) is a key step in the processing flow of OBS data, especially in the case of self pop-up types of OBS instruments. The use of first arrivals from airgun shots, rather than relying on the acoustic transponders mounted in the OBS, is becoming a trend and generally leads to more accurate positioning due to the statistics from a large number of shots. In this paper, a linearization of the OBS positioning problem via the multilateration technique is discussed. The discussed linear solution solves jointly for the average water layer velocity and the OBS position using only shot locations and first arrival times as input data.
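    The multilateration linearization can be made concrete: squaring t_i·v = |p − x_i| and differencing against a reference shot cancels the |p|² term, leaving a system linear in the horizontal position and v². The sketch below assumes surface shots and a homogeneous water column; the paper's actual formulation may differ in detail.

```python
import numpy as np

def locate_obs(shots, times):
    """Linearized multilateration: solve jointly for OBS position p
    and average water velocity v from shot coordinates (at the sea
    surface, z = 0) and first-arrival times.

    Squaring t_i*v = |p - x_i| and subtracting the first shot's
    equation cancels |p|^2, leaving a system linear in (px, py, v^2):
        2 (x_i - x_0) . p + (t_i^2 - t_0^2) v^2 = |x_i|^2 - |x_0|^2
    Depth is then recovered from one full travel-time equation.
    """
    x = np.asarray(shots, float)       # (N, 3) shot positions, z = 0
    t = np.asarray(times, float)       # (N,) first-arrival times
    A = np.hstack([2.0 * (x[1:, :2] - x[0, :2]),
                   (t[1:] ** 2 - t[0] ** 2)[:, None]])
    b = (x[1:, :2] ** 2).sum(axis=1) - (x[0, :2] ** 2).sum()
    (px, py, u), *_ = np.linalg.lstsq(A, b, rcond=None)
    v = np.sqrt(u)
    # Depth from the reference shot's full travel-time equation.
    horiz2 = (px - x[0, 0]) ** 2 + (py - x[0, 1]) ** 2
    pz = np.sqrt((t[0] * v) ** 2 - horiz2)
    return np.array([px, py, pz]), v
```

    With many shots the overdetermined least-squares solve averages out picking noise, which is the statistical advantage over transponder-based positioning noted in the abstract.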

  6. Calibration Techniques for Accurate Measurements by Underwater Camera Systems.

    Science.gov (United States)

    Shortis, Mark

    2015-12-07

    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems.
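    The dominant refractive effect that such calibrations must model, implicitly or explicitly, is Snell's law at the housing port; a minimal flat-port sketch (nominal refractive indices assumed):

```python
import math

def refracted_angle(theta_air_deg, n_air=1.0, n_water=1.334):
    """Snell's law at a flat port: a ray leaving the camera at angle
    theta_air from the port normal bends toward the normal in water
    (n_air * sin(theta_air) = n_water * sin(theta_water)), which is
    why an uncorrected in-air calibration misstates the effective
    field of view and focal length underwater."""
    s = n_air * math.sin(math.radians(theta_air_deg)) / n_water
    return math.degrees(math.asin(s))
```

    Dome ports reduce (but do not eliminate) this effect by keeping rays near the local surface normal, which is one reason calibration stability differs between housing types.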

  7. Calibration Techniques for Accurate Measurements by Underwater Camera Systems

    Directory of Open Access Journals (Sweden)

    Mark Shortis

    2015-12-01

    Full Text Available Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems.

  8. Fast, accurate control rod calibration using a programmable desk calculator

    International Nuclear Information System (INIS)

    Naugle, N.W.; Randall, John D.

    1972-01-01

    In an attempt to develop a simple least squares program for the rapid calibration of control rods, it was necessary to verify that all rods, with the exception of the transient rod, could be accurately defined by a single analytical expression. Since the vertical flux distribution in the core region follows a cosine function, a cosine squared variation was tested. The solution, which involves the inversion of a 3 × 3 matrix, is performed using a Hewlett Packard Model 9100B programmable desk calculator. The least squares program was applied to a number of control rod calibrations that had previously been analyzed by hand. The agreement was excellent in all cases
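    The cosine-squared model is linear in three coefficients once expanded, which is why a 3 × 3 matrix inversion suffices. A sketch of that normal-equation fit (synthetic data; function and parameter names are illustrative, not from the original program):

```python
import numpy as np

def fit_rod_worth(z, w, H):
    """Fit differential rod worth to a shifted cosine-squared shape.

    cos^2(pi*(z - z0)/H) expands to
        a0 + a1*cos(2*pi*z/H) + a2*sin(2*pi*z/H),
    which is linear in (a0, a1, a2), so the least-squares fit reduces
    to solving a 3x3 normal-equation system, as in the
    desk-calculator program."""
    phase = 2.0 * np.pi * np.asarray(z) / H
    A = np.column_stack([np.ones_like(phase),
                         np.cos(phase), np.sin(phase)])
    # Normal equations (A^T A) coeffs = A^T w: a 3x3 solve.
    coeffs = np.linalg.solve(A.T @ A, A.T @ np.asarray(w))
    return coeffs
```

    The fitted phase and amplitude then give the rod-worth curve directly, replacing the hand analysis mentioned in the abstract.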

  9. Accurate strand-specific quantification of viral RNA.

    Directory of Open Access Journals (Sweden)

    Nicole E Plaskon

    Full Text Available The presence of full-length complements of viral genomic RNA is a hallmark of RNA virus replication within an infected cell. As such, methods for detecting and measuring specific strands of viral RNA in infected cells and tissues are important in the study of RNA viruses. Strand-specific quantitative real-time PCR (ssqPCR) assays are increasingly being used for this purpose, but the accuracy of these assays depends on the assumption that the amount of cDNA measured during the quantitative PCR (qPCR) step accurately reflects the amount of a specific viral RNA strand present in the RT reaction. To specifically test this assumption, we developed multiple ssqPCR assays for the positive-strand RNA virus o'nyong-nyong (ONNV) that were based upon the most prevalent ssqPCR assay design types in the literature. We then compared various parameters of the ONNV-specific assays. We found that an assay employing standard unmodified virus-specific primers failed to discern the difference between cDNAs generated from virus-specific primers and those generated through false priming. Further, we were unable to accurately measure levels of ONNV (−)-strand RNA with this assay when higher levels of cDNA generated from the (+)-strand were present. Taken together, these results suggest that assays of this type do not accurately quantify levels of the anti-genomic strand present during RNA virus infectious cycles. However, an assay permitting the use of a tag-specific primer was able to distinguish cDNAs transcribed from ONNV (−)-strand RNA from other cDNAs present, thus allowing accurate quantification of the anti-genomic strand. We also report the sensitivities of two different detection strategies and chemistries, SYBR® Green and DNA hydrolysis probes, used with our tagged ONNV-specific ssqPCR assays. Finally, we describe the development, design and validation of ssqPCR assays for chikungunya virus (CHIKV), the recent cause of large outbreaks of disease in the Indian Ocean

  10. Fast and accurate methods of independent component analysis: A survey

    Czech Academy of Sciences Publication Activity Database

    Tichavský, Petr; Koldovský, Zbyněk

    2011-01-01

    Roč. 47, č. 3 (2011), s. 426-438 ISSN 0023-5954 R&D Projects: GA MŠk 1M0572; GA ČR GA102/09/1278 Institutional research plan: CEZ:AV0Z10750506 Keywords : Blind source separation * artifact removal * electroencephalogram * audio signal processing Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.454, year: 2011 http://library.utia.cas.cz/separaty/2011/SI/tichavsky-fast and accurate methods of independent component analysis a survey.pdf

  11. Quantum-Accurate Molecular Dynamics Potential for Tungsten

    Energy Technology Data Exchange (ETDEWEB)

    Wood, Mitchell; Thompson, Aidan P.

    2017-03-01

    The purpose of this short contribution is to report on the development of a Spectral Neighbor Analysis Potential (SNAP) for tungsten. We have focused on the characterization of elastic and defect properties of the pure material in order to support molecular dynamics simulations of plasma-facing materials in fusion reactors. A parallel genetic algorithm approach was used to efficiently search for fitting parameters optimized against a large number of objective functions. In addition, we have shown that this many-body tungsten potential can be used in conjunction with a simple helium pair potential to produce accurate defect formation energies for the W-He binary system.

  12. A machine learning method for fast and accurate characterization of depth-of-interaction gamma cameras

    Science.gov (United States)

    Pedemonte, Stefano; Pierce, Larry; Van Leemput, Koen

    2017-11-01

    Measuring the depth-of-interaction (DOI) of gamma photons enables increasing the resolution of emission imaging systems. Several design variants of DOI-sensitive detectors have been recently introduced to improve the performance of scanners for positron emission tomography (PET). However, the accurate characterization of the response of DOI detectors, necessary to accurately measure the DOI, remains an unsolved problem. Numerical simulations are, at the state of the art, imprecise, while measuring the characteristics of DOI detectors directly is hindered by the impossibility of imposing the depth-of-interaction in an experimental set-up. In this article we introduce a machine learning approach for extracting accurate forward models of gamma imaging devices from simple pencil-beam measurements, using a nonlinear dimensionality reduction technique in combination with a finite mixture model. The method is purely data-driven, not requiring simulations, and is applicable to a wide range of detector types. The proposed method was evaluated both in a simulation study and with data acquired using a monolithic gamma camera designed for PET (the cMiCE detector), demonstrating the accurate recovery of the DOI characteristics. The combination of the proposed calibration technique with maximum a posteriori estimation of the coordinates of interaction provided a depth resolution of ≈1.14 mm for the simulated PET detector and ≈1.74 mm for the cMiCE detector. The software and experimental data are made available at http://occiput.mgh.harvard.edu/depthembedding/.

  13. A simple method for the accurate determination of free [Ca] in Ca-EGTA solutions.

    Science.gov (United States)

    Bers, D M

    1982-05-01

    A simple method for the accurate determination of free [Ca] in ethyleneglycol-bis(beta-aminoethylether)-N,N'-tetraacetic acid (EGTA)-buffered Ca solutions is described. This method is useful for calibration of Ca macro- and microelectrodes to low free [Ca] and should improve the reliability of calculated free [Ca] in more complex solutions. Briefly, free [Ca] in Ca-EGTA solutions is measured with a Ca electrode, bound Ca is calculated, and Scatchard and double-reciprocal plots are resolved for the total [EGTA] and the apparent Ca-EGTA association constant (K'Ca) in the solutions used. The free [Ca] is then recalculated using the determined parameters, giving a more accurate knowledge of the free [Ca] in these solutions and providing an accurate calibration curve for the Ca electrode. These solutions can then be used to calibrate other Ca electrodes (e.g., Ca microelectrodes) or the calibrated Ca electrode can be used to measure free [Ca] in solutions containing multiple metal ligands. This method allows determination of free [Ca], K'Ca, and total [EGTA] in the actual solutions used regardless of pH, temperature, or ionic strength. It does not require accurate knowledge of K'Ca or EGTA purity and circumvents many potential errors due to assumption of binding parameters. K'Ca was found to be 2.45 ± 0.04 × 10⁶ M⁻¹ in 100 mM KCl, 10 mM N-2-hydroxyethylpiperazine-N'-2-ethanesulfonic acid, and 1 mM EGTA at pH 7.00 and 23 degrees C. Total [EGTA] varied with supplier but was always less than quoted.
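    Once K'Ca and total [EGTA] have been determined, recalculating free [Ca] reduces to simple 1:1 mass-action arithmetic. A minimal sketch (hypothetical code, not the paper's; the function name and example concentrations are illustrative):

```python
import math

def free_ca(ca_total, egta_total, k_assoc):
    """Free [Ca] in a Ca-EGTA buffer assuming 1:1 binding.

    Solves K' = [CaEGTA] / ([Ca]free * [EGTA]free) as a quadratic in the
    bound concentration, then returns ca_total - bound.  Units: M.
    """
    k = k_assoc
    # k*b^2 - (k*(Cat + Et) + 1)*b + k*Cat*Et = 0, take the physical root.
    a = k
    b = -(k * (ca_total + egta_total) + 1.0)
    c = k * ca_total * egta_total
    bound = (-b - math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return ca_total - bound

# Example: the abstract's K'Ca (2.45e6 M^-1), 1 mM EGTA half-loaded with Ca.
print(f"free [Ca] = {free_ca(0.5e-3, 1.0e-3, 2.45e6):.3e} M")
```

    With the buffer half-loaded, the free [Ca] comes out close to 1/K'Ca, i.e. in the sub-micromolar range where the electrode calibration is needed.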

  14. SU-E-J-134: An Augmented-Reality Optical Imaging System for Accurate Breast Positioning During Radiotherapy

    International Nuclear Information System (INIS)

    Nazareth, D; Malhotra, H; French, S; Hoffmann, K; Merrow, C

    2014-01-01

    Purpose: Breast radiotherapy, particularly electronic compensation, may involve large dose gradients and difficult patient positioning problems. We have developed a simple self-calibrating augmented-reality system, which assists in accurately and reproducibly positioning the patient, by displaying her live image from a single camera superimposed on the correct perspective projection of her 3D CT data. Our method requires only a standard digital camera capable of live-view mode, installed in the treatment suite at an approximately-known orientation and position (rotation R; translation T). Methods: A 10-sphere calibration jig was constructed and CT imaged to provide a 3D model. The (R,T) relating the camera to the CT coordinate system were determined by acquiring a photograph of the jig and optimizing an objective function, which compares the true image points to points calculated with a given candidate R and T geometry. Using this geometric information, 3D CT patient data, viewed from the camera's perspective, is plotted using a Matlab routine. This image data is superimposed onto the real-time patient image, acquired by the camera, and displayed using standard live-view software. This enables the therapists to view both the patient's current and desired positions, and guide the patient into assuming the correct position. The method was evaluated using an in-house developed bolus-like breast phantom, mounted on a supporting platform, which could be tilted at various angles to simulate treatment-like geometries. Results: Our system allowed breast phantom alignment, with an accuracy of about 0.5 cm and 1 ± 0.5 degree. Better resolution could be possible using a camera with higher-zoom capabilities. Conclusion: We have developed an augmented-reality system, which combines a perspective projection of a CT image with a patient's real-time optical image. This system has the potential to improve patient setup accuracy during breast radiotherapy, and could possibly be
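    The (R,T) calibration described above amounts to minimizing reprojection error under a pinhole camera model. A simplified sketch, assuming an ideal camera with known focal length and no lens distortion (function names are illustrative, not the authors' Matlab routine):

```python
import numpy as np

def rot(rx, ry, rz):
    """Rotation matrix from x-, y-, z-axis angles (radians), applied in that order."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(points_3d, R, T, f):
    """Pinhole projection of CT-space points into the camera image plane."""
    cam = (R @ points_3d.T).T + T          # CT coordinates -> camera coordinates
    return f * cam[:, :2] / cam[:, 2:3]    # perspective divide

def objective(params, points_3d, image_pts, f):
    """Sum of squared distances between measured and predicted image points."""
    R = rot(*params[:3])
    T = np.asarray(params[3:])
    return float(np.sum((project(points_3d, R, T, f) - image_pts) ** 2))
```

    In practice this objective would be handed to a nonlinear optimizer over the six pose parameters, using the photographed sphere centers of the calibration jig as the measured image points.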

  15. Orthodontic Treatment of a Mandibular Incisor Extraction Case with Invisalign

    Directory of Open Access Journals (Sweden)

    Khalid H. Zawawi

    2014-01-01

    Full Text Available Mandibular incisor extraction for orthodontic treatment is considered an unusual treatment option because of the limited number of patients that meet the criteria for such treatment. Accurate diagnosis and treatment planning is essential to achieve the desired results. Adult orthodontic patients are increasingly motivated by esthetic considerations and reject the idea of conventional fixed appliances. In recent years, Invisalign appliances have gained tremendous attention for orthodontic treatment of adult patients to meet their esthetic demands. In this case report, a case of Class I malocclusion was treated with mandibular incisor extraction using the Invisalign appliance system. Successful tooth alignment of both arches was achieved. The use of Invisalign appliance is an effective treatment option in adult patients with Class I malocclusion that requires incisor extraction due to moderate to severe mandibular anterior crowding.

  16. Orthodontic treatment of a mandibular incisor extraction case with invisalign.

    Science.gov (United States)

    Zawawi, Khalid H

    2014-01-01

    Mandibular incisor extraction for orthodontic treatment is considered an unusual treatment option because of the limited number of patients that meet the criteria for such treatment. Accurate diagnosis and treatment planning is essential to achieve the desired results. Adult orthodontic patients are increasingly motivated by esthetic considerations and reject the idea of conventional fixed appliances. In recent years, Invisalign appliances have gained tremendous attention for orthodontic treatment of adult patients to meet their esthetic demands. In this case report, a case of Class I malocclusion was treated with mandibular incisor extraction using the Invisalign appliance system. Successful tooth alignment of both arches was achieved. The use of Invisalign appliance is an effective treatment option in adult patients with Class I malocclusion that requires incisor extraction due to moderate to severe mandibular anterior crowding.

  17. MO-G-BRE-04: Automatic Verification of Daily Treatment Deliveries and Generation of Daily Treatment Reports for a MR Image-Guided Treatment Machine

    International Nuclear Information System (INIS)

    Yang, D; Li, X; Li, H; Wooten, H; Green, O; Rodriguez, V; Mutic, S

    2014-01-01

    Purpose: Two aims of this work were to develop a method to automatically verify treatment delivery accuracy immediately after patient treatment and to develop a comprehensive daily treatment report to provide all required information for daily MR-IGRT review. Methods: After systematically analyzing the requirements for treatment delivery verification and understanding the available information from a novel MR-IGRT treatment machine, we designed a method to use 1) treatment plan files, 2) delivery log files, and 3) dosimetric calibration information to verify the accuracy and completeness of daily treatment deliveries. The method verifies the correctness of delivered treatment plans and beams, beam segments, and for each segment, the beam-on time and MLC leaf positions. Composite primary fluence maps are calculated from the MLC leaf positions and the beam-on time. Error statistics are calculated on the fluence difference maps between the plan and the delivery. We also designed the daily treatment delivery report by including all required information for MR-IGRT and physics weekly review - the plan and treatment fraction information, dose verification information, daily patient setup screen captures, and the treatment delivery verification results. Results: The parameters in the log files (e.g. MLC positions) were independently verified and deemed accurate and trustworthy. A computer program was developed to implement the automatic delivery verification and daily report generation. The program was tested and clinically commissioned with sufficient IMRT and 3D treatment delivery data. The final version has been integrated into a commercial MR-IGRT treatment delivery system. Conclusion: A method was developed to automatically verify MR-IGRT treatment deliveries and generate daily treatment reports. Already in clinical use since December 2013, the system is able to facilitate delivery error detection, and expedite physician daily IGRT review and physicist weekly chart
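    The fluence verification step can be illustrated with a toy model in which each segment contributes its beam-on time to the pixels inside its MLC aperture, and error statistics are taken on the plan-minus-delivery difference map. A hypothetical sketch (the segment representation and statistic names are assumptions, not the commissioned system):

```python
import numpy as np

def fluence_map(segments, width, height):
    """Accumulate a composite primary fluence map from MLC-shaped segments.

    Each segment is (beam_on_time, left, right), where left/right give the
    per-row leaf positions (column indices) bounding the open aperture.
    """
    fmap = np.zeros((height, width))
    for beam_on, left, right in segments:
        for row in range(height):
            fmap[row, left[row]:right[row]] += beam_on
    return fmap

def delivery_error_stats(planned, delivered):
    """Error statistics on the plan-vs-delivery fluence difference map."""
    diff = delivered - planned
    return {
        "max_abs": float(np.max(np.abs(diff))),
        "mean": float(np.mean(diff)),
        "rms": float(np.sqrt(np.mean(diff ** 2))),
    }
```

    A single mispositioned leaf then shows up directly in the difference-map statistics, which is the kind of per-segment delivery error the verification method is designed to flag.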

  18. Improved fingercode alignment for accurate and compact fingerprint recognition

    CSIR Research Space (South Africa)

    Brown, Dane

    2016-05-01

    Full Text Available requirements. The low storage requirements for a low resolution texture-based fingerprint recognition method known as FingerCode enables the combined use of fingerprints with the additional security of other devices such as smartcards. The low recognition...

  19. A fast and accurate decoder for underwater acoustic telemetry.

    Science.gov (United States)

    Ingraham, J M; Deng, Z D; Li, X; Fu, T; McMichael, G A; Trumbo, B A

    2014-07-01

    The Juvenile Salmon Acoustic Telemetry System, developed by the U.S. Army Corps of Engineers, Portland District, has been used to monitor the survival of juvenile salmonids passing through hydroelectric facilities in the Federal Columbia River Power System. Cabled hydrophone arrays deployed at dams receive coded transmissions sent from acoustic transmitters implanted in fish. The signals' time of arrival on different hydrophones is used to track fish in 3D. In this article, a new algorithm that decodes the received transmissions is described and the results are compared to results for the previous decoding algorithm. In a laboratory environment, the new decoder was able to decode signals with lower signal strength than the previous decoder, effectively increasing decoding efficiency and range. In field testing, the new algorithm decoded significantly more signals than the previous decoder and three-dimensional tracking experiments showed that the new decoder's time-of-arrival estimates were accurate. At multiple distances from hydrophones, the new algorithm tracked more points more accurately than the previous decoder. The new algorithm was also more than 10 times faster, which is critical for real-time applications on an embedded system.

  20. Fast and Accurate Circuit Design Automation through Hierarchical Model Switching.

    Science.gov (United States)

    Huynh, Linh; Tagkopoulos, Ilias

    2015-08-21

    In computer-aided biological design, the trifecta of characterized part libraries, accurate models and optimal design parameters is crucial for producing reliable designs. As the number of parts and model complexity increase, however, it becomes exponentially more difficult for any optimization method to search the solution space, hence creating a trade-off that hampers efficient design. To address this issue, we present a hierarchical computer-aided design architecture that uses a two-step approach for biological design. First, a simple model of low computational complexity is used to predict circuit behavior and assess candidate circuit branches through branch-and-bound methods. Then, a complex, nonlinear circuit model is used for a fine-grained search of the reduced solution space, thus achieving more accurate results. Evaluation with a benchmark of 11 circuits and a library of 102 experimental designs with known characterization parameters demonstrates a speed-up of 3 orders of magnitude when compared to other design methods that provide optimality guarantees.

  1. Accurate phylogenetic classification of DNA fragments based on sequence composition

    Energy Technology Data Exchange (ETDEWEB)

    McHardy, Alice C.; Garcia Martin, Hector; Tsirigos, Aristotelis; Hugenholtz, Philip; Rigoutsos, Isidore

    2006-05-01

    Metagenome studies have retrieved vast amounts of sequence out of a variety of environments, leading to novel discoveries and great insights into the uncultured microbial world. Except for very simple communities, diversity makes sequence assembly and analysis a very challenging problem. To understand the structure and function of microbial communities, a taxonomic characterization of the obtained sequence fragments is highly desirable, yet currently limited mostly to those sequences that contain phylogenetic marker genes. We show that for clades at the rank of domain down to genus, sequence composition allows the very accurate phylogenetic characterization of genomic sequence. We developed a composition-based classifier, PhyloPythia, for de novo phylogenetic sequence characterization and have trained it on a data set of 340 genomes. By extensive evaluation experiments we show that the method is accurate across all taxonomic ranks considered, even for sequences that originate from novel organisms and are as short as 1 kb. Application to two metagenome data sets obtained from samples of phosphorus-removing sludge showed that the method allows the accurate classification at genus level of most sequence fragments from the dominant populations, while at the same time correctly characterizing even larger parts of the samples at higher taxonomic levels.
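    Composition-based classification of this kind starts from fixed-length oligonucleotide frequency vectors computed over each fragment. A minimal sketch of such a feature extractor (illustrative only; PhyloPythia's actual feature set and classifier training are more involved):

```python
from itertools import product

def kmer_composition(seq, k=4):
    """Normalized k-mer frequency vector for a DNA sequence.

    Produces a fixed-length vector (4**k entries, lexicographic order over
    A/C/G/T) suitable as input to a composition-based classifier.
    """
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    index = {kmer: i for i, kmer in enumerate(kmers)}
    counts = [0] * len(kmers)
    total = 0
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in index:          # skip windows containing ambiguous bases
            counts[index[kmer]] += 1
            total += 1
    return [c / total for c in counts] if total else counts
```

    Fragments from related genomes tend to have similar such vectors, which is the signal a composition-based classifier exploits even on 1 kb fragments lacking marker genes.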

  2. Accurate prediction of defect properties in density functional supercell calculations

    International Nuclear Information System (INIS)

    Lany, Stephan; Zunger, Alex

    2009-01-01

    The theoretical description of defects and impurities in semiconductors is largely based on density functional theory (DFT) employing supercell models. The literature discussion of uncertainties that limit the predictivity of this approach has focused mostly on two issues: (1) finite-size effects, in particular for charged defects; (2) the band-gap problem in local or semi-local DFT approximations. We here describe how finite-size effects (1) in the formation energy of charged defects can be accurately corrected in a simple way, i.e. by potential alignment in conjunction with a scaling of the Madelung-like screened first order correction term. The factor involved with this scaling depends only on the dielectric constant and the shape of the supercell, and quite accurately accounts for the full third order correction according to Makov and Payne. We further discuss in some detail the background and justification for this correction method, and also address the effect of the ionic screening on the magnitude of the image charge energy. In regard to (2) the band-gap problem, we discuss the merits of non-local external potentials that are added to the DFT Hamiltonian and allow for an empirical band-gap correction without significantly increasing the computational demand over that of standard DFT calculations. In combination with LDA + U, these potentials are further instrumental for the prediction of polaronic defects with localized holes in anion-p orbitals, such as the metal-site acceptors in wide-gap oxide semiconductors.

  3. Individual Differences in Accurately Judging Personality From Text.

    Science.gov (United States)

    Hall, Judith A; Goh, Jin X; Mast, Marianne Schmid; Hagedorn, Christian

    2016-08-01

    This research examines correlates of accuracy in judging Big Five traits from first-person text excerpts. Participants in six studies were recruited from psychology courses or online. In each study, participants performed a task of judging personality from text and performed other ability tasks and/or filled out questionnaires. Participants who were more accurate in judging personality from text were more likely to be female; had personalities that were more agreeable, conscientious, and feminine, and less neurotic and dominant (all controlling for participant gender); scored higher on empathic concern; self-reported more interest in, and attentiveness to, people's personalities in their daily lives; and reported reading more for pleasure, especially fiction. Accuracy was not associated with SAT scores but had a significant relation to vocabulary knowledge. Accuracy did not correlate with tests of judging personality and emotion based on audiovisual cues. This research is the first to address individual differences in accurate judgment of personality from text, thus adding to the literature on correlates of the good judge of personality. © 2015 Wiley Periodicals, Inc.

  4. Accurate Evaluation Method of Molecular Binding Affinity from Fluctuation Frequency

    Science.gov (United States)

    Hoshino, Tyuji; Iwamoto, Koji; Ode, Hirotaka; Ohdomari, Iwao

    2008-05-01

    Exact estimation of the molecular binding affinity is significantly important for drug discovery. The energy calculation is a direct method to compute the strength of the interaction between two molecules. This energetic approach is, however, not accurate enough to evaluate a slight difference in binding affinity when distinguishing a prospective substance from dozens of candidates for medicine. Hence more accurate estimation of drug efficacy in a computer is currently demanded. Previously we proposed a concept of estimating molecular binding affinity, focusing on the fluctuation at an interface between two molecules. The aim of this paper is to demonstrate the compatibility between the proposed computational technique and experimental measurements, through several examples for computer simulations of an association of human immunodeficiency virus type-1 (HIV-1) protease and its inhibitor (an example for a drug-enzyme binding), a complexation of an antigen and its antibody (an example for a protein-protein binding), and a combination of estrogen receptor and its ligand chemicals (an example for a ligand-receptor binding). The proposed affinity estimation has proven to be a promising technique in the advanced stage of the discovery and the design of drugs.

  5. Cerebral fat embolism: Use of MR spectroscopy for accurate diagnosis

    Directory of Open Access Journals (Sweden)

    Laxmi Kokatnur

    2015-01-01

    Full Text Available Cerebral fat embolism (CFE) is an uncommon but serious complication following orthopedic procedures. It usually presents with altered mental status, and can be a part of fat embolism syndrome (FES) if associated with cutaneous and respiratory manifestations. Because of the presence of other common factors affecting the mental status, particularly in the postoperative period, the diagnosis of CFE can be challenging. Magnetic resonance imaging (MRI) of the brain typically shows multiple lesions distributed predominantly in the subcortical region, which appear as hyperintense lesions on T2 and diffusion-weighted images. Although the location offers a clue, the MRI findings are not specific for CFE. Watershed infarcts, hypoxic encephalopathy, disseminated infections, demyelinating disorders and diffuse axonal injury can also show similar changes on MRI of the brain. The presence of fat in these hyperintense lesions, identified by MR spectroscopy as raised lipid peaks, will help in accurate diagnosis of CFE. Normal brain tissue or conditions producing similar MRI changes will not show any lipid peak on MR spectroscopy. We present a case of CFE initially misdiagnosed as brain stem stroke based on clinical presentation and cranial computed tomography (CT) scan, and later, MR spectroscopy elucidated the accurate diagnosis.

  6. How accurately can 21cm tomography constrain cosmology?

    Science.gov (United States)

    Mao, Yi; Tegmark, Max; McQuinn, Matthew; Zaldarriaga, Matias; Zahn, Oliver

    2008-07-01

    There is growing interest in using 3-dimensional neutral hydrogen mapping with the redshifted 21 cm line as a cosmological probe. However, its utility depends on many assumptions. To aid experimental planning and design, we quantify how the precision with which cosmological parameters can be measured depends on a broad range of assumptions about the 21 cm signal from redshifts above 6: sensitivity to noise, to uncertainties in the reionization history, and to the level of contamination from astrophysical foregrounds. We derive simple analytic estimates for how various assumptions affect an experiment's sensitivity, and we find that the modeling of reionization is the most important, followed by the array layout. We present an accurate yet robust method for measuring cosmological parameters that exploits the fact that the ionization power spectra are rather smooth functions that can be accurately fit by 7 phenomenological parameters. We find that for future experiments, marginalizing over these nuisance parameters may provide constraints almost as tight on the cosmology as if 21 cm tomography measured the matter power spectrum directly. A future square kilometer array optimized for 21 cm tomography could improve the sensitivity to spatial curvature and neutrino masses by up to 2 orders of magnitude, to ΔΩk ≈ 0.0002 and Δmν ≈ 0.007 eV, and give a 4σ detection of the spectral index running predicted by the simplest inflation models.

  7. Canadian consumer issues in accurate and fair electricity metering

    International Nuclear Information System (INIS)

    2000-07-01

    The Public Interest Advocacy Centre (PIAC), located in Ottawa, participates in regulatory proceedings concerning electricity and natural gas to support public and consumer interest. PIAC provides legal representation, research and policy support and public advocacy. A study aimed toward the determination of the issues at stake for residential electricity consumers in the provision of fair and accurate electricity metering, was commissioned by Measurement Canada in consultation with Industry Canada's Consumer Affairs. The metering of electricity must be carried out in a fair and efficient manner for all residential consumers. The Electricity, Gas and Inspection Act was developed to ensure compliance with standards for measuring instrumentation. The accurate metering of electricity through the distribution systems for electricity in Canada represents the main focus of this study and report. The role played by Measurement Canada and the increased efficiencies of service delivery by Measurement Canada or the changing of electricity market conditions are of special interest. The role of Measurement Canada was explained, as were the concerns of residential consumers. A comparison was then made between the interests of residential consumers and those of commercial and industrial electricity consumers in electricity metering. Selected American and Commonwealth jurisdictions were reviewed in light of their electricity metering practices. A section on compliance and conflict resolution was included, in addition to a section on the use of voluntary codes for compliance and conflict resolution

  8. Optimal selection of mother wavelet for accurate infant cry classification.

    Science.gov (United States)

    Saraswathy, J; Hariharan, M; Nadarajaw, Thiyagar; Khairunizam, Wan; Yaacob, Sazali

    2014-06-01

    Wavelet theory is emerging as one of the prevalent tools in signal and image processing applications. However, the most suitable mother wavelet for these applications is still a relative question mark amongst researchers. Selection of the best mother wavelet through parameterization leads to better findings for the analysis in comparison to random selection. The objective of this article is to compare the performance of the existing members of mother wavelets and to select the most suitable mother wavelet for accurate infant cry classification. The optimal wavelet is found using three different criteria, namely the degree of similarity of mother wavelets, the regularity of mother wavelets, and the accuracy of correct recognition during classification processes. Recorded normal and pathological infant cry signals are decomposed into five levels using the wavelet packet transform. Energy and entropy features are extracted at different sub-bands of the cry signals and their effectiveness is tested with four supervised neural network architectures. The findings of this study expound that the finite-impulse-response-based approximation of the Meyer wavelet is the best candidate for accurate infant cry classification analysis.
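    The per-sub-band energy and entropy features can be sketched with the simplest mother wavelet, the Haar wavelet, standing in for the candidates compared in the article (which ultimately selects an FIR approximation of Meyer). A hypothetical illustration, not the authors' code:

```python
import numpy as np

def haar_step(x):
    """One orthonormal Haar analysis step: (approximation, detail) at half length."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def wavelet_packet(x, levels):
    """Full wavelet packet tree: the list of leaf sub-bands at the given depth."""
    nodes = [np.asarray(x, dtype=float)]
    for _ in range(levels):
        nodes = [band for node in nodes for band in haar_step(node)]
    return nodes

def energy_entropy_features(x, levels=2):
    """Energy and Shannon entropy per sub-band, as feature pairs."""
    feats = []
    for band in wavelet_packet(x, levels):
        energy = float(np.sum(band ** 2))
        p = band ** 2 / energy if energy > 0 else np.zeros_like(band)
        entropy = float(-np.sum(p[p > 0] * np.log2(p[p > 0])))
        feats.append((energy, entropy))
    return feats
```

    Because the Haar steps are orthonormal, the sub-band energies sum to the signal energy, so the feature vector partitions the cry signal's energy across frequency bands.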

  9. Funnel metadynamics as accurate binding free-energy method

    Science.gov (United States)

    Limongelli, Vittorio; Bonomi, Massimiliano; Parrinello, Michele

    2013-01-01

    A detailed description of the events ruling ligand/protein interaction and an accurate estimation of the drug affinity to its target is of great help in speeding up drug discovery strategies. We have developed a metadynamics-based approach, named funnel metadynamics, that allows the ligand to enhance the sampling of the target binding sites and its solvated states. This method leads to an efficient characterization of the binding free-energy surface and an accurate calculation of the absolute protein–ligand binding free energy. We illustrate our protocol in two systems, benzamidine/trypsin and SC-558/cyclooxygenase 2. In both cases, the X-ray conformation has been found as the lowest free-energy pose, and the computed protein–ligand binding free energy is in good agreement with experiments. Furthermore, funnel metadynamics unveils important information about the binding process, such as the presence of alternative binding modes and the role of waters. The results achieved at an affordable computational cost make funnel metadynamics a valuable method for drug discovery and for dealing with a variety of problems in chemistry, physics, and material science. PMID:23553839

  10. Accurate Sample Time Reconstruction of Inertial FIFO Data.

    Science.gov (United States)

    Stieber, Sebastian; Dorsch, Rainer; Haubelt, Christian

    2017-12-13

    In the context of modern cyber-physical systems, the accuracy of the underlying sensor data plays an increasingly important role in sensor data fusion and feature extraction. The raw events of multiple sensors have to be aligned in time to enable high-quality sensor fusion results. However, the growing number of simultaneously connected sensor devices makes energy-saving data acquisition and processing more and more difficult. Hence, most modern sensors offer a first-in-first-out (FIFO) interface to store multiple data samples and to relax timing constraints when handling multiple sensor devices. However, using the FIFO interface increases the negative influence of individual clock drifts, introduced by fabrication inaccuracies, temperature changes and wear-out effects, on the reconstruction of the sampling data. Furthermore, additional timing offset errors due to communication and software latencies increase with a growing number of sensor devices. In this article, we present an approach for accurate sample time reconstruction independent of the actual clock drift with the help of an internal sensor timer. Such timers are already available in modern sensors manufactured in micro-electromechanical systems (MEMS) technology. The presented approach focuses on calculating accurate time stamps using the sensor FIFO interface in a forward-only processing manner as a robust and energy-saving solution. The proposed algorithm is able to lower the overall standard deviation of reconstructed sampling periods below 40 μs, while run-time savings of up to 42% are achieved, compared to single-sample acquisition.
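    A minimal forward-only sketch of the idea, assuming the internal sensor timer is latched at each FIFO read (the function name and batch representation are illustrative assumptions, not the article's algorithm):

```python
def reconstruct_timestamps(read_timer, read_counts):
    """Reconstruct per-sample timestamps from FIFO read events.

    read_timer[i]  -- internal sensor timer value when batch i was read
    read_counts[i] -- number of samples drained from the FIFO in batch i
    The sampling period is estimated from the sensor timer (robust against
    host-side latency and host/sensor clock drift); samples within each
    batch are then spaced backwards from the batch's read time.
    """
    total_samples = sum(read_counts)
    # Period estimate from first to last read, forward-only (no smoothing).
    period = (read_timer[-1] - read_timer[0]) / (total_samples - read_counts[0])
    stamps = []
    for timer, n in zip(read_timer, read_counts):
        # The last sample of a batch coincides with the read; earlier
        # samples sit one estimated period apart before it.
        stamps.extend(timer - period * (n - 1 - j) for j in range(n))
    return stamps, period
```

    Using the sensor's own timer rather than host arrival times is what makes the reconstruction insensitive to communication and software latencies.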

  11. The economic value of accurate wind power forecasting to utilities

    Energy Technology Data Exchange (ETDEWEB)

    Watson, S.J. [Rutherford Appleton Lab., Oxfordshire (United Kingdom); Giebel, G.; Joensen, A. [Risoe National Lab., Dept. of Wind Energy and Atmospheric Physics, Roskilde (Denmark)

    1999-03-01

    With increasing penetrations of wind power, the need for accurate forecasting is becoming ever more important. Wind power is by its very nature intermittent. For utility schedulers this presents its own problems, particularly when the penetration of wind power capacity in a grid reaches a significant level (>20%). However, using accurate forecasts of wind power at wind farm sites, schedulers are able to plan the operation of conventional power capacity to accommodate the fluctuating demands of consumers and wind farm output. The results of a study to assess the value of forecasting at several potential wind farm sites in the UK and in the US state of Iowa using the Reading University/Rutherford Appleton Laboratory National Grid Model (NGM) are presented. The results are assessed for different types of wind power forecasting, namely: persistence, optimised numerical weather prediction or perfect forecasting. In particular, it will be shown how the NGM has been used to assess the value of numerical weather prediction forecasts from the Danish Meteorological Institute model, HIRLAM, and the US Nested Grid Model, which have been 'site tailored' by the use of the linearized flow model WAsP and by various Model Output Statistics (MOS) and autoregressive techniques. (au)

  12. Quality metric for accurate overlay control in <20nm nodes

    Science.gov (United States)

    Klein, Dana; Amit, Eran; Cohen, Guy; Amir, Nuriel; Har-Zvi, Michael; Huang, Chin-Chou Kevin; Karur-Shanmugam, Ramkumar; Pierson, Bill; Kato, Cindy; Kurita, Hiroyuki

    2013-04-01

    The semiconductor industry is moving toward 20nm nodes and below. As the Overlay (OVL) budget is getting tighter at these advanced nodes, accuracy in each nanometer of OVL error becomes critical. When process owners select OVL targets and methods for their process, they must do it wisely; otherwise the reported OVL could be inaccurate, resulting in yield loss. The same problem can occur when the target sampling map is chosen incorrectly, consisting of asymmetric targets that will cause biased correctable terms and a corrupted wafer. Total measurement uncertainty (TMU) is the main parameter that process owners use when choosing an OVL target per layer. Going towards the 20nm nodes and below, TMU will not be enough for accurate OVL control. KLA-Tencor has introduced a quality score named 'Qmerit' for its imaging-based OVL (IBO) targets, which is obtained on the fly for each OVL measurement point in X and Y. This Qmerit score will enable process owners to select compatible targets which provide accurate OVL values for their process and thereby improve their yield. Together with K-T Analyzer's ability to detect symmetric targets across the wafer and within the field, the Archer tools will continue to provide an independent, reliable measurement of OVL error into the next advanced nodes, enabling fabs to manufacture devices that meet their tight OVL error budgets.

  13. Atomic spectroscopy and highly accurate measurement: determination of fundamental constants

    International Nuclear Information System (INIS)

    Schwob, C.

    2006-12-01

    This document reviews the theoretical and experimental achievements of the author concerning highly accurate atomic spectroscopy applied to the determination of fundamental constants. A purely optical frequency measurement of the 2S-12D two-photon transitions in atomic hydrogen and deuterium has been performed. The experimental setup is described as well as the data analysis. Optimized values for the Rydberg constant and Lamb shifts have been deduced (R = 109737.31568516(84) cm^-1). An experiment devoted to the determination of the fine structure constant with an aimed relative uncertainty of 10^-9 began in 1999. This experiment is based on the fact that Bloch oscillations in a frequency-chirped optical lattice are a powerful tool to transfer coherently many photon momenta to the atoms. We have used this method to measure accurately the ratio h/m(Rb). The measured value of the fine structure constant is α^-1 = 137.03599884(91) with a relative uncertainty of 6.7×10^-9. The future and perspectives of this experiment are presented. This document, presented before an academic board, will allow its author to manage research work and particularly to tutor thesis students. (A.C.)
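
For context, the measured ratio h/m(Rb) determines the fine structure constant through a standard relation in photon-recoil experiments (stated here from general knowledge of such measurements, not quoted from this record):

```latex
\alpha^{2} \;=\; \frac{2 R_\infty}{c}\,\frac{m_{\mathrm{Rb}}}{m_{e}}\,\frac{h}{m_{\mathrm{Rb}}}
```

Since the Rydberg constant, the speed of light, and the mass ratio m_Rb/m_e are all known to high accuracy, the uncertainty of α is dominated by the h/m(Rb) measurement itself.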

  14. KFM: a homemade yet accurate and dependable fallout meter

    International Nuclear Information System (INIS)

    Kearny, C.H.; Barnes, P.R.; Chester, C.V.; Cortner, M.W.

    1978-01-01

    The KFM is a homemade fallout meter that can be made using only materials, tools, and skills found in millions of American homes. It is an accurate and dependable electroscope-capacitor. The KFM, in conjunction with its attached table and a watch, is designed for use as a rate meter. Its attached table relates observed differences in the separations of its two leaves (before and after exposures at the listed time intervals) to the dose rates during exposures of these time intervals. In this manner dose rates from 30 mR/hr up to 43 R/hr can be determined with an accuracy of ±25%. A KFM can be charged with any one of the three expedient electrostatic charging devices described. Due to the use of anhydrite (made by heating gypsum from wallboard) inside a KFM and the expedient ''dry-bucket'' in which it can be charged when the air is very humid, this instrument can always be charged and used to obtain accurate measurements of gamma radiation no matter how high the relative humidity. The step-by-step illustrated instructions for making and using a KFM are presented. These instructions have been improved after each successive field test. The majority of the untrained test families, adequately motivated by cash bonuses offered for success and guided only by these written instructions, have succeeded in making and using a KFM.
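
The rate-meter arithmetic that the KFM's attached table encodes can be sketched as follows (the calibration constant below is invented for illustration; the real calibration is the table in the report):

```python
# A drop in leaf separation over a timed exposure maps to an absorbed
# dose via a calibration; dividing by the interval gives the dose rate.

R_PER_MM = 0.5   # hypothetical calibration: roentgens per mm of leaf drop

def dose_rate_r_per_hr(separation_drop_mm, interval_minutes):
    """Dose rate in R/hr from a timed leaf-separation reading."""
    dose_r = separation_drop_mm * R_PER_MM
    return dose_r * 60.0 / interval_minutes

# A 4 mm drop over a 15-minute exposure: 2 R in 0.25 hr
rate = dose_rate_r_per_hr(4.0, 15.0)
```

The published table simply tabulates this computation for the listed exposure intervals, so no arithmetic is needed in the field.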

  15. Ultra-accurate collaborative information filtering via directed user similarity

    Science.gov (United States)

    Guo, Q.; Song, W.-J.; Liu, J.-G.

    2014-07-01

    A key challenge of collaborative filtering (CF) information filtering is how to obtain reliable and accurate results with the help of peers' recommendations. Since the similarities from small-degree users to large-degree users are larger than the ones in the opposite direction, the large-degree users' selections are recommended extensively by the traditional second-order CF algorithms. By considering the users' similarity direction and the second-order correlations to suppress the influence of mainstream preferences, we present the directed second-order CF (HDCF) algorithm specifically to address the challenge of accuracy and diversity of the CF algorithm. The numerical results for two benchmark data sets, MovieLens and Netflix, show that the accuracy of the new algorithm outperforms the state-of-the-art CF algorithms. Compared with the CF algorithm based on random walks proposed by Liu et al. (Int. J. Mod. Phys. C, 20 (2009) 285), the average ranking score could reach 0.0767 and 0.0402, enhanced by 27.3% and 19.1% for MovieLens and Netflix, respectively. In addition, the diversity, precision and recall are also enhanced greatly. Without relying on any context-specific information, tuning the similarity direction of CF algorithms can yield accurate and diverse recommendations. This work suggests that the user similarity direction is an important factor in improving personalized recommendation performance.
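
The degree asymmetry the abstract describes can be made concrete with a simplified directed similarity (a stand-in for the HDCF construction, not its exact formula):

```python
# Normalizing the item overlap by the *source* user's degree makes the
# similarity directed: s(small-degree -> large-degree) comes out larger
# than the reverse, exactly the asymmetry described above.

def directed_similarity(items_u, items_v):
    """Fraction of u's selected items that v has also selected."""
    if not items_u:
        return 0.0
    return len(items_u & items_v) / len(items_u)

u = {"a", "b"}                     # small-degree user
v = {"a", "b", "c", "d"}           # large-degree user
s_uv = directed_similarity(u, v)   # 2/2 = 1.0
s_vu = directed_similarity(v, u)   # 2/4 = 0.5
```

A traditional symmetric measure would report a single value for the pair, which is how large-degree (mainstream) users come to dominate recommendations.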

  16. An Accurate and Dynamic Computer Graphics Muscle Model

    Science.gov (United States)

    Levine, David Asher

    1997-01-01

    A computer-based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model into the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant-volume limitations to the muscle and constant-geometry limitations to the tendons.

  17. Accurate Sample Time Reconstruction of Inertial FIFO Data

    Directory of Open Access Journals (Sweden)

    Sebastian Stieber

    2017-12-01

    Full Text Available In the context of modern cyber-physical systems, the accuracy of underlying sensor data plays an increasingly important role in sensor data fusion and feature extraction. The raw events of multiple sensors have to be aligned in time to enable high-quality sensor fusion results. However, the growing number of simultaneously connected sensor devices makes energy-saving data acquisition and processing more and more difficult. Hence, most modern sensors offer a first-in-first-out (FIFO) interface to store multiple data samples and to relax timing constraints when handling multiple sensor devices. However, using the FIFO interface increases the negative influence of individual clock drifts (introduced by fabrication inaccuracies, temperature changes and wear-out effects) on the sampling data reconstruction. Furthermore, additional timing offset errors due to communication and software latencies increase with a growing number of sensor devices. In this article, we present an approach for accurate sample time reconstruction, independent of the actual clock drift, with the help of an internal sensor timer. Such timers are already available in modern sensors manufactured in micro-electromechanical systems (MEMS) technology. The presented approach focuses on calculating accurate time stamps using the sensor FIFO interface in a forward-only processing manner as a robust and energy-saving solution. The proposed algorithm is able to lower the overall standard deviation of reconstructed sampling periods below 40 μs, while run-time savings of up to 42% are achieved compared to single-sample acquisition.

  18. Heterogeneous treatment in the variational nodal method

    International Nuclear Information System (INIS)

    Fanning, T.H.

    1995-01-01

    The variational nodal transport method is reduced to its diffusion form and generalized for the treatment of heterogeneous nodes while maintaining nodal balances. Adapting variational methods to heterogeneous nodes requires the ability to integrate over a node with discontinuous cross sections. In this work, integrals are evaluated using composite Gaussian quadrature rules, which permit accurate integration while minimizing computing time. Allowing structure within a nodal solution scheme avoids some of the need for cross section homogenization and more accurately defines the intra-nodal flux shape. Ideally, any desired heterogeneity can be constructed within the node; in practice, however, the finite set of basis functions limits the resolution to which fine detail can be defined within the node. Preliminary comparison tests show that the heterogeneous variational nodal method provides satisfactory results, even if some improvements are needed for very difficult configurations.

  19. The New Aptima HCV Quant Dx Real-time TMA Assay Accurately Quantifies Hepatitis C Virus Genotype 1-6 RNA.

    Science.gov (United States)

    Chevaliez, Stéphane; Dubernet, Fabienne; Dauvillier, Claude; Hézode, Christophe; Pawlotsky, Jean-Michel

    2017-06-01

    Sensitive and accurate hepatitis C virus (HCV) RNA detection and quantification is essential for the management of chronic hepatitis C therapy. Currently available platforms and assays are usually batched and require at least 5 hours of work to complete the analyses. The aim of this study was to evaluate the ability of the newly developed Aptima HCV Quant Dx assay, which eliminates the need for batch processing and automates all aspects of nucleic acid testing in a single step, to accurately detect and quantify HCV RNA in a large series of patients infected with different HCV genotypes. The limit of detection was estimated to be 2.3 IU/mL. The specificity of the assay was 98.6% (95% confidence interval: 96.1%-99.5%). Intra-assay and inter-assay coefficients of variation ranged from 0.09% to 5.61%, and 1.05% to 3.65%, respectively. The study of serum specimens from patients infected with HCV genotypes 1 to 6 showed a satisfactory relationship between HCV RNA levels measured by the Aptima HCV Quant Dx assay and both real-time PCR comparators (the Abbott RealTime HCV and Cobas AmpliPrep/Cobas TaqMan HCV Test, version 2.0, assays). The new Aptima HCV Quant Dx assay is rapid, sensitive, reasonably specific and reproducible and accurately quantifies HCV RNA in serum samples from patients with chronic HCV infection, including patients on antiviral treatment. The Aptima HCV Quant Dx assay can thus be confidently used to detect and quantify HCV RNA in both clinical trials with new anti-HCV drugs and clinical practice in Europe and the US. Copyright © 2017 Elsevier B.V. All rights reserved.
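
The reported precision figures use the standard coefficient-of-variation definition, which can be computed as below (the replicate values are made up for illustration; they are not the study's data):

```python
# Coefficient of variation of replicate measurements, expressed in %.
import statistics

def cv_percent(replicates):
    """CV = 100 * sample standard deviation / mean."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

# e.g. four replicate quantifications of one panel member, in log10 IU/mL
cv = cv_percent([5.02, 5.05, 4.98, 5.01])
```

Intra-assay CV uses replicates from a single run; inter-assay CV uses results of the same specimen across independent runs.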

  20. Accurate Detection of Carcinoma Cells by Use of a Cell Microarray Chip

    Science.gov (United States)

    Yamamura, Shohei; Yatsushiro, Shouki; Yamaguchi, Yuka; Abe, Kaori; Shinohara, Yasuo; Tamiya, Eiichi; Baba, Yoshinobu; Kataoka, Masatoshi

    2012-01-01

    Background: Accurate detection and analysis of circulating tumor cells play an important role in the diagnosis and treatment of metastatic cancer. Methods and Findings: A cell microarray chip was used to detect spiked carcinoma cells among leukocytes. The chip, with 20,944 microchambers (105 µm width and 50 µm depth), was made from polystyrene, and the formation of monolayers of leukocytes in the microchambers was observed. Cultured human T lymphoblastoid leukemia (CCRF-CEM) cells were used to examine the potential of the cell microarray chip for the detection of spiked carcinoma cells. A T lymphoblastoid leukemia suspension was dispersed on the chip surface, followed by 15 min of standing to allow the leukocytes to settle down into the microchambers. Approximately 29 leukocytes were found in each microchamber when about 600,000 leukocytes in total were dispersed onto a cell microarray chip. Similarly, when leukocytes isolated from human whole blood were used, approximately 89 leukocytes entered each microchamber when about 1,800,000 leukocytes in total were placed onto the cell microarray chip. After washing the chip surface, PE-labeled anti-cytokeratin monoclonal antibody and APC-labeled anti-CD326 (EpCAM) monoclonal antibody solution were dispersed onto the chip surface and allowed to react for 15 min; a microarray scanner was then employed to detect any fluorescence-positive cells within 20 min. In the experiments using spiked carcinoma cells (NCI-H1650, 0.01 to 0.0001%), accurate detection of carcinoma cells was achieved with PE-labeled anti-cytokeratin monoclonal antibody. Furthermore, verification of carcinoma cells in the microchambers was performed by double staining with the above monoclonal antibodies. Conclusion: The potential application of the cell microarray chip for the detection of CTCs was shown, thus demonstrating accurate detection by double staining for cytokeratin and EpCAM at the single carcinoma cell level. PMID:22396762

  1. Accurate detection of carcinoma cells by use of a cell microarray chip.

    Directory of Open Access Journals (Sweden)

    Shohei Yamamura

    Full Text Available BACKGROUND: Accurate detection and analysis of circulating tumor cells play an important role in the diagnosis and treatment of metastatic cancer. METHODS AND FINDINGS: A cell microarray chip was used to detect spiked carcinoma cells among leukocytes. The chip, with 20,944 microchambers (105 µm width and 50 µm depth), was made from polystyrene, and the formation of monolayers of leukocytes in the microchambers was observed. Cultured human T lymphoblastoid leukemia (CCRF-CEM) cells were used to examine the potential of the cell microarray chip for the detection of spiked carcinoma cells. A T lymphoblastoid leukemia suspension was dispersed on the chip surface, followed by 15 min of standing to allow the leukocytes to settle down into the microchambers. Approximately 29 leukocytes were found in each microchamber when about 600,000 leukocytes in total were dispersed onto a cell microarray chip. Similarly, when leukocytes isolated from human whole blood were used, approximately 89 leukocytes entered each microchamber when about 1,800,000 leukocytes in total were placed onto the cell microarray chip. After washing the chip surface, PE-labeled anti-cytokeratin monoclonal antibody and APC-labeled anti-CD326 (EpCAM) monoclonal antibody solution were dispersed onto the chip surface and allowed to react for 15 min; a microarray scanner was then employed to detect any fluorescence-positive cells within 20 min. In the experiments using spiked carcinoma cells (NCI-H1650, 0.01 to 0.0001%), accurate detection of carcinoma cells was achieved with PE-labeled anti-cytokeratin monoclonal antibody. Furthermore, verification of carcinoma cells in the microchambers was performed by double staining with the above monoclonal antibodies. CONCLUSION: The potential application of the cell microarray chip for the detection of CTCs was shown, thus demonstrating accurate detection by double staining for cytokeratin and EpCAM at the single carcinoma cell level.

  2. Accurate representation of geostrophic and hydrostatic balance in unstructured mesh finite element ocean modelling

    Science.gov (United States)

    Maddison, J. R.; Marshall, D. P.; Pain, C. C.; Piggott, M. D.

    Accurate representation of geostrophic and hydrostatic balance is an essential requirement for numerical modelling of geophysical flows. Potentially, unstructured mesh numerical methods offer significant benefits over conventional structured meshes, including the ability to conform to arbitrary bounding topography in a natural manner and the ability to apply dynamic mesh adaptivity. However, there is a need to develop robust schemes with accurate representation of physical balance on arbitrary unstructured meshes. We discuss the origin of physical balance errors in a finite element discretisation of the Navier-Stokes equations using the fractional timestep pressure projection method. By considering the Helmholtz decomposition of forcing terms in the momentum equation, it is shown that the components of the buoyancy and Coriolis accelerations that project onto the non-divergent velocity tendency are the small residuals between two terms of comparable magnitude. Hence there is a potential for significant injection of imbalance by a numerical method that does not compute these residuals accurately. This observation is used to motivate a balanced pressure decomposition method whereby an additional "balanced pressure" field, associated with buoyancy and Coriolis accelerations, is solved for at increased accuracy and used to precondition the solution for the dynamical pressure. The utility of this approach is quantified in a fully non-linear system in exact geostrophic balance. The approach is further tested via quantitative comparison of unstructured mesh simulations of the thermally driven rotating annulus against laboratory data. Using a piecewise linear discretisation for velocity and pressure (a stabilised P1P1 discretisation), it is demonstrated that the balanced pressure decomposition method is required for a physically realistic representation of the system.
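
The cancellation argument above can be written compactly (the notation here is ours, for illustration). Decompose the sum of the buoyancy and Coriolis accelerations via a Helmholtz decomposition:

```latex
\mathbf{F} \;=\; \nabla \phi \;+\; \nabla \times \boldsymbol{\Psi}
```

Only the rotational part drives the non-divergent velocity tendency, yet near geostrophic and hydrostatic balance F ≈ ∇φ, so ∇×Ψ = F − ∇φ is the small residual of two comparably large terms. Solving for the "balanced pressure" φ at higher accuracy is what prevents the discretisation from injecting spurious imbalance through this cancellation.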

  3. An algorithm to extract more accurate stream longitudinal profiles from unfilled DEMs

    Science.gov (United States)

    Byun, Jongmin; Seong, Yeong Bae

    2015-08-01

    Morphometric features observed from a stream longitudinal profile (SLP) reflect channel responses to lithological variation and changes in uplift or climate; therefore, they constitute essential indicators in studies of the dynamics between tectonics, climate, and surface processes. The widespread availability of digital elevation models (DEMs) and their processing enable semi-automatic extraction of SLPs as well as additional stream profile parameters, reducing the time spent extracting them and simultaneously allowing regional-scale studies of SLPs. However, careful consideration is required to extract SLPs directly from a DEM, because the DEM must be altered by a depression-filling process to ensure the continuity of flows across it. Such alteration inevitably introduces distortions to the SLP, such as stair steps, biased elevation values, and inaccurate stream paths. This paper proposes a new algorithm, called the maximum depth tracing algorithm (MDTA), to extract more accurate SLPs from depression-unfilled DEMs. The MDTA supposes that depressions in DEMs are not necessarily artifacts to be removed, and that elevation values within them are useful for representing the real landscape more accurately. To ensure the continuity of flows even across the unfilled DEM, the MDTA first determines the outlet of each depression and then reverses the flow directions of the cells on the line of maximum depth within each depression, beginning from the outlet and proceeding toward the sink. It also calculates flow accumulation without disruption across the unfilled DEM. Comparative analysis with the profiles extracted by the hydrologic functions implemented in ArcGIS™ was performed to illustrate the benefits of the MDTA. It shows that the MDTA provides more accurate stream paths in depression areas, and consequently reduces distortions of the SLPs derived from those paths, such as exaggerated elevation values and negatively biased slopes, that are commonly observed in the SLPs.
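
Why depression filling distorts a profile can be shown with a toy 1-D analogue (our illustration, not the MDTA itself): filling raises every cell in a depression to the highest downstream barrier, turning real relief into a flat stair step.

```python
def fill_1d(z):
    """1-D analogue of depression filling along a downstream profile;
    the last element is the outlet. Each cell is raised to the highest
    barrier between it and the outlet so the profile drains monotonically."""
    f = list(z)
    for i in range(len(z) - 2, -1, -1):   # sweep upstream from the outlet
        f[i] = max(z[i], f[i + 1])        # raise cells blocked downstream
    return f

z = [10, 9, 5, 4, 6, 3, 2]    # a real depression (5, 4) behind a 6 m sill
filled = fill_1d(z)           # the 5 m and 4 m cells are raised to 6 m
```

The filled profile replaces the depression's interior elevations with the sill height, which is exactly the stair-step and elevation-bias distortion the MDTA avoids by keeping the unfilled values.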

  4. Efficient statistically accurate algorithms for the Fokker-Planck equation in large dimensions

    Science.gov (United States)

    Chen, Nan; Majda, Andrew J.

    2018-02-01

    Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat-tailed, highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. Particularly, the parametric method provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace and is therefore computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method only requires an order of O(100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6
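
The core idea, recovering a full non-Gaussian PDF as a mixture of conditional Gaussians, can be sketched in 1-D (a generic mixture evaluation for illustration; not the paper's closed-form filter formulae):

```python
# The full PDF is an equal-weight mixture of conditional Gaussians, one
# per ensemble member; because each component covers a broad region,
# far fewer samples are needed than for a pure particle estimate.
import numpy as np

def mixture_pdf(x, means, variances):
    """Equal-weight Gaussian mixture density evaluated at points x."""
    x = np.asarray(x, dtype=float)[:, None]
    m = np.asarray(means, dtype=float)[None, :]
    v = np.asarray(variances, dtype=float)[None, :]
    comps = np.exp(-(x - m) ** 2 / (2.0 * v)) / np.sqrt(2.0 * np.pi * v)
    return comps.mean(axis=1)   # average over the mixture components

# Three conditional Gaussians standing in for three ensemble members
xs = np.linspace(-10.0, 10.0, 2001)
pdf = mixture_pdf(xs, means=[-1.0, 0.0, 1.0], variances=[0.5, 0.5, 0.5])
```

In the paper's setting the means and covariances of the high-dimensional components come from closed analytical formulae, while the remaining low-dimensional directions use kernel density estimation.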

  5. Monitoring of organic loads at waste water treatment plant with due consideration of factual necessity, technical feasibility and statutory requirements; Erfassung der organischen Belastung bei Abwasserreinigungsanlagen unter Beachtung der fachlichen Notwendigkeit, der technischen Moeglichkeiten und der gesetzlichen Auflagen

    Energy Technology Data Exchange (ETDEWEB)

    Baumann, P. [Stuttgart Univ. (Germany). Inst. fuer Siedlungswasserbau, Wasserguete- und Abfallwirtschaft

    1999-07-01

    Between the statutory requirements and the factual necessity for monitoring organic loads in the influent and effluent of municipal and industrial waste water treatment plants there are substantial discrepancies. The paper points out the different approaches and gives recommendations on how to proceed in the future. At plants with stable nitrification, self-monitoring and external monitoring for BOD{sub 5} can be distinctly reduced without fear of impaired process transparency or water quality. Online monitoring of organic loads makes little technical sense, especially for effluent from municipal sewage treatment plants; in the industrial sector, however, a wide range of applications exists where carbon elimination is the treatment goal. (orig.) [German] Zwischen den gesetzlichen Anforderungen und der fachlichen Notwendigkeit bei der Erfassung der organischen Belastung im Zu- und Ablauf von kommunalen und industriellen Klaeranlagen bestehen erhebliche Diskrepanzen. In diesem Beitrag werden die unterschiedlichen Ansaetze aufgezeigt und Empfehlungen fuer die zukuenftige Vorgehensweise gegeben. Bei Anlagen mit stabiler Nitrifikation ist die Selbst- wie Fremdueberwachung bezueglich BSB{sub 5} deutlich zu reduzieren, ohne dass Einbussen fuer Prozesstransparenz und Gewaesserqualitaet zu erwarten sind. Die online-Ueberwachung der organischen Belastung ist insbesondere bei Ablaeufen in kommunalen Klaeranlagen fachlich wenig sinnvoll, im Bereich der industriellen Abwasserreinigung ergeben sich dafuer bei dem Reinigungsziel der Kohlenstoffelimination dagegen verschiedenste Anwendungsmoeglichkeiten. (orig.)

  6. A study of the current knowledge base in treating snake bite amongst doctors in the high-risk countries of India and Pakistan: does snake bite treatment training reflect local requirements?

    Science.gov (United States)

    Simpson, Ian D

    2008-11-01

    The call for greater production of better-quality anti-snake venom (ASV) is a major thrust in the effort to reduce snake bite mortality. However, snake bite mortality has many causes, and these should also be addressed. A key feature of efficient ASV usage is ensuring that doctors are trained to administer ASV only when it is required and in amounts that are necessary to neutralize venom. The need for better snake bite management training has been referred to, but little attention has been paid to how effectively medical education actually prepares doctors to treat snake bite. The objective of this study is to evaluate the current level of knowledge amongst doctors in India and Pakistan, two countries with the highest snake bite mortality in absolute terms. Results show that current textbooks and medical education do not adequately prepare doctors to treat snake bite, particularly in the areas of ASV use, dealing with adverse reactions to ASV, and specific measures to deal with neurotoxic bites. The central conclusion of the paper is that local protocols and training are required to adequately prepare doctors to improve treatment and reduce mortality.

  7. Towards the accurate electronic structure descriptions of typical high-constant dielectrics

    Science.gov (United States)

    Jiang, Ting-Ting; Sun, Qing-Qing; Li, Ye; Guo, Jiao-Jiao; Zhou, Peng; Ding, Shi-Jin; Zhang, David Wei

    2011-05-01

    High-constant dielectrics have gained considerable attention due to their wide applications in advanced devices, such as gate oxides in metal-oxide-semiconductor devices and insulators in high-density metal-insulator-metal capacitors. However, theoretical investigations of these materials have not kept pace with experimental development, especially regarding the accurate description of band structures. We performed first-principles calculations based on hybrid density functional theory to investigate several typical high-k dielectrics such as Al2O3, HfO2, ZrSiO4, HfSiO4, La2O3 and ZrO2. The band structures of these materials are well described within the framework of hybrid density functional theory. The band gaps of Al2O3, HfO2, ZrSiO4, HfSiO4, La2O3 and ZrO2 are calculated to be 8.0 eV, 5.6 eV, 6.2 eV, 7.1 eV, 5.3 eV and 5.0 eV, respectively, which are very close to the experimental values and far more accurate than those obtained by the traditional generalized gradient approximation method.

  8. Simple, fast and accurate two-diode model for photovoltaic modules

    Energy Technology Data Exchange (ETDEWEB)

    Ishaque, Kashif; Salam, Zainal; Taheri, Hamed [Faculty of Electrical Engineering, Universiti Teknologi Malaysia, UTM 81310, Skudai, Johor Bahru (Malaysia)

    2011-02-15

    This paper proposes an improved modeling approach for the two-diode model of the photovoltaic (PV) module. The main contribution of this work is the simplification of the current equation, in which only four parameters are required, compared to six or more in previously developed two-diode models. Furthermore, the values of the series and parallel resistances are computed using a simple and fast iterative method. To validate the accuracy of the proposed model, six PV modules of different types (multi-crystalline, mono-crystalline and thin-film) from various manufacturers are tested. The performance of the model is evaluated against the popular single-diode models. It is found that the proposed model is superior when subjected to irradiance and temperature variations. In particular, the model matches very accurately all important points of the I-V curve, i.e. the peak power, short-circuit current and open-circuit voltage. The modeling method is useful for PV power converter designers and circuit simulator developers who require a simple, fast yet accurate model for the PV module. (author)
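
For reference, the generic textbook form of the two-diode equation can be evaluated as below (this is the standard formulation, not the paper's specific four-parameter simplification; the parameter values are hypothetical):

```python
# Two-diode PV model:
#   I = Ipv - I01*(exp(Vd/(a1*Vt)) - 1) - I02*(exp(Vd/(a2*Vt)) - 1) - Vd/Rp
# with junction voltage Vd = V + I*Rs; I appears on both sides, so we
# solve for it at a given terminal voltage V by fixed-point iteration.
import math

def two_diode_current(V, Ipv, I01, I02, Rs, Rp, a1, a2, Vt, iters=200):
    """Terminal current of a PV module at terminal voltage V."""
    I = Ipv   # initial guess: the photocurrent
    for _ in range(iters):
        Vd = V + I * Rs   # internal junction voltage
        I = (Ipv
             - I01 * (math.exp(Vd / (a1 * Vt)) - 1.0)
             - I02 * (math.exp(Vd / (a2 * Vt)) - 1.0)
             - Vd / Rp)
    return I

# Short-circuit current of a hypothetical 36-cell module (Vt = 36 * 26 mV)
I_sc = two_diode_current(0.0, Ipv=5.0, I01=1e-10, I02=1e-10,
                         Rs=0.01, Rp=100.0, a1=1.0, a2=2.0, Vt=0.936)
```

Sweeping V from 0 to the open-circuit voltage traces the full I-V curve; the paper's contribution is reducing the number of parameters that must be extracted from the datasheet before such a sweep can be run.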

  9. Forest understory trees can be segmented accurately within sufficiently dense airborne laser scanning point clouds.

    Science.gov (United States)

    Hamraz, Hamid; Contreras, Marco A; Zhang, Jun

    2017-07-28

    Airborne laser scanning (LiDAR) point clouds over large forested areas can be processed to segment individual trees and subsequently extract tree-level information. Existing segmentation procedures typically detect more than 90% of overstory trees, yet they barely detect 60% of understory trees because of the occlusion effect of higher canopy layers. Although understory trees provide limited financial value, they are an essential component of ecosystem functioning by offering habitat for numerous wildlife species and influencing stand development. Here we model the occlusion effect in terms of point density. We estimate the fractions of points representing different canopy layers (one overstory and multiple understory) and also pinpoint the required density for reasonable tree segmentation (where accuracy plateaus). We show that at a density of ~170 pt/m² understory trees can likely be segmented as accurately as overstory trees. Given the advancements of LiDAR sensor technology, point clouds will affordably reach this required density. Using modern computational approaches for big data, the denser point clouds can efficiently be processed to ultimately allow accurate remote quantification of forest resources. The methodology can also be adopted for other similar remote sensing or advanced imaging applications such as geological subsurface modelling or biomedical tissue analysis.

  10. LocARNA-P: Accurate boundary prediction and improved detection of structural RNAs

    DEFF Research Database (Denmark)

    Will, Sebastian; Joshi, Tejal; Hofacker, Ivo L.

    2012-01-01

    Current genomic screens for noncoding RNAs (ncRNAs) predict a large number of genomic regions containing potential structural ncRNAs. The analysis of these data requires highly accurate prediction of ncRNA boundaries and discrimination of promising candidate ncRNAs from weak predictions. Existing methods struggle with these goals because they rely on sequence-based multiple sequence alignments, which regularly misalign RNA structure and therefore do not support identification of structural similarities. To overcome this limitation, we compute columnwise and global reliabilities of alignments based... LocARNA/LocARNA-P and the software package, including documentation and a pipeline for refining screens for structural ncRNA, are available at http://www.bioinf.uni-freiburg.de/Supplements/LocARNA-P/.

  11. The KFM, A Homemade Yet Accurate and Dependable Fallout Meter

    Energy Technology Data Exchange (ETDEWEB)

    Kearny, C.H.

    2001-11-20

    The KFM is a homemade fallout meter that can be made using only materials, tools, and skills found in millions of American homes. It is an accurate and dependable electroscope-capacitor. The KFM, in conjunction with its attached table and a watch, is designed for use as a rate meter. Its attached table relates observed differences in the separations of its two leaves (before and after exposures at the listed time intervals) to the dose rates during exposures of these time intervals. In this manner dose rates from 30 mR/hr up to 43 R/hr can be determined with an accuracy of ±25%. A KFM can be charged with any one of the three expedient electrostatic charging devices described. Due to the use of anhydrite (made by heating gypsum from wallboard) inside a KFM and the expedient "dry-bucket" in which it can be charged when the air is very humid, this instrument can always be charged and used to obtain accurate measurements of gamma radiation no matter how high the relative humidity. The heart of this report is the step-by-step illustrated instructions for making and using a KFM. These instructions have been improved after each successive field test. The majority of the untrained test families, adequately motivated by cash bonuses offered for success and guided only by these written instructions, have succeeded in making and using a KFM. NOTE: "The KFM, A Homemade Yet Accurate and Dependable Fallout Meter" was published as an Oak Ridge National Laboratory report in 1979. Some of the materials originally suggested for suspending the leaves of the Kearny Fallout Meter (KFM) are no longer available. Because of changes in the manufacturing process, other materials (e.g., sewing thread, unwaxed dental floss) may not have the insulating capability to work properly. Oak Ridge National Laboratory has not tested any of the suggestions provided in the preface of the report, but they have been used by other groups.

  12. Anatomically accurate, finite model eye for optical modeling.

    Science.gov (United States)

    Liou, H L; Brennan, N A

    1997-08-01

    There is a need for a schematic eye that models vision accurately under various conditions such as refractive surgical procedures, contact lens and spectacle wear, and near vision. Here we propose a new model eye close to anatomical, biometric, and optical realities. This is a finite model with four aspheric refracting surfaces and a gradient-index lens. It has an equivalent power of 60.35 D and an axial length of 23.95 mm. The new model eye provides spherical aberration values within the limits of empirical results and predicts chromatic aberration for wavelengths between 380 and 750 nm. It provides a model for calculating optical transfer functions and predicting optical performance of the eye.

  13. Accurate volume measurement system for plutonium nitrate solution

    Energy Technology Data Exchange (ETDEWEB)

    Hosoma, Takashi; Maruishi, Yoshihiro (Power Reactor and Nuclear Fuel Development Corp., Tokai, Ibaraki (Japan). Tokai Works); Aritomi, Masanori; Kawa, Tunemichi

    1993-05-01

    An accurate volume measurement system for a large amount of plutonium nitrate solution stored in a reprocessing or a conversion plant has been developed at the Plutonium Conversion Development Facility (PCDF) in the Power Reactor and Nuclear Fuel Development Corp. (PNC) Tokai Works. A pair of differential digital quartz pressure transducers is utilized in the volume measurement system. To obtain high accuracy, it is important that the non-linearity of the transducer is minimized within the measurement range, the zero point is stabilized, and the damping property of the pneumatic line is designed to minimize pressure oscillation. The accuracy of the pressure measurement can always be within 2 Pa with re-calibration once a year. In the PCDF, the overall uncertainty of the volume measurement has been evaluated to be within 0.2%. This system has been successfully applied to the Japanese government's and the IAEA's routine inspections since 1984. (author).
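
    For context, the hydrostatic (dip-tube) principle behind such differential-pressure tank gauging can be sketched as follows. All numbers and the calibration curve are illustrative assumptions, not PCDF values:

```python
import numpy as np

# Hypothetical dip-tube gauging sketch: one differential pressure across a
# known tube separation gives the solution density; a second gives the liquid
# level; a tank calibration curve maps level to volume.
g = 9.80665                 # m/s^2
tube_separation = 0.50      # m, vertical distance between the two dip tubes
dP_density = 6864.655       # Pa, differential pressure across the tube pair
dP_level = 13729.31         # Pa, lower-tube pressure above the cover gas

rho = dP_density / (g * tube_separation)   # solution density, kg/m^3
level = dP_level / (rho * g)               # liquid level above lower tube, m

# Assumed (linear) tank calibration curve: level in m -> volume in litres.
calib_levels = np.array([0.0, 0.5, 1.0, 1.5])
calib_volumes = np.array([0.0, 250.0, 500.0, 750.0])
volume = float(np.interp(level, calib_levels, calib_volumes))
```

    The quartz transducers' 2 Pa accuracy quoted above is what makes the density and level terms, and hence the volume, resolvable to the stated 0.2% overall uncertainty.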

  14. Chip breaking for an automated accurate turning system

    Energy Technology Data Exchange (ETDEWEB)

    Burnham, M.W. (BDM Corp., Albuquerque, NM (USA)); Abbatiello, L.A. (Oak Ridge Y-12 Plant, TN (USA))

    1988-01-01

    Based upon a survey of chip breakup information, the various methods have been evaluated for application to automated accurate turning systems. Many chip breaking methods work well on shafts or cylinders but fail to break chips for an entire inside or outside contouring cut. Many metals produce straight or snarled chip forms at small depths of cut, low feed rates, or moderate surface speeds. These chip forms can be a cause of workpiece and tool damage. Such forms also interfere with on-machine gaging, part transfer, and tool change. Often the chip wraps around the tool holder and is difficult to remove even in manual operation. Computer analysis now makes it possible to get the most out of each type of chip breaking system. Reliable chip breaking is urgently needed for automated systems, especially those operating in an unmanned mode. 83 refs., 10 figs., 2 tabs.

  15. Phase rainbow refractometry for accurate droplet variation characterization.

    Science.gov (United States)

    Wu, Yingchun; Promvongsa, Jantarat; Saengkaew, Sawitree; Wu, Xuecheng; Chen, Jia; Gréhan, Gérard

    2016-10-15

    We developed a one-dimensional phase rainbow refractometer for the accurate trans-dimensional measurements of droplet size on the micrometer scale as well as the tiny droplet diameter variations at the nanoscale. The dependence of the phase shift of the rainbow ripple structures on the droplet variations is revealed. The phase-shifting rainbow image is recorded by a telecentric one-dimensional rainbow imaging system. Experiments on the evaporating monodispersed droplet stream show that the phase rainbow refractometer can measure the tiny droplet diameter changes down to tens of nanometers. This one-dimensional phase rainbow refractometer is capable of measuring the droplet refractive index and diameter, as well as variations.

  16. Fast and accurate Voronoi density gridding from Lagrangian hydrodynamics data

    Science.gov (United States)

    Petkova, Maya A.; Laibe, Guillaume; Bonnell, Ian A.

    2018-01-01

    Voronoi grids have been successfully used to represent density structures of gas in astronomical hydrodynamics simulations. While some codes are explicitly built around using a Voronoi grid, others, such as Smoothed Particle Hydrodynamics (SPH), use particle-based representations and can benefit from constructing a Voronoi grid for post-processing their output. So far, calculating the density of each Voronoi cell from SPH data has been done numerically, which is both slow and potentially inaccurate. This paper proposes an alternative analytic method, which is fast and accurate. We derive an expression for the integral of a cubic spline kernel over the volume of a Voronoi cell and link it to the density of the cell. Mass conservation is ensured rigorously by the procedure. The method can be applied more broadly to integrate a spherically symmetric polynomial function over the volume of a random polyhedron.
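
    The cubic spline kernel whose Voronoi-cell integral the paper evaluates analytically is the standard SPH M4 kernel. A minimal sketch of that kernel, with a numerical check of the normalization property that the analytic cell integrals must collectively satisfy:

```python
import numpy as np

def w_cubic(r, h):
    """Standard 3D cubic spline (M4) SPH kernel; integrates to 1 over all space."""
    q = np.asarray(r, dtype=float) / h
    sigma = 1.0 / (np.pi * h**3)                 # 3D normalization constant
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

# Normalization check: integrate W(r) * 4*pi*r^2 over the compact support [0, 2h].
h = 1.3
r = np.linspace(0.0, 2.0 * h, 20001)
f = w_cubic(r, h) * 4.0 * np.pi * r**2
integral = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r)))
```

    Because the kernel integrates to exactly 1, summing the analytic per-cell integrals over all Voronoi cells conserves each particle's mass, which is the rigorous mass-conservation property the abstract refers to.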

  17. Accurate derivative evaluation for any Grad–Shafranov solver

    Energy Technology Data Exchange (ETDEWEB)

    Ricketson, L.F. [Courant Institute of Mathematical Sciences, New York University, New York, NY 10012 (United States); Cerfon, A.J., E-mail: cerfon@cims.nyu.edu [Courant Institute of Mathematical Sciences, New York University, New York, NY 10012 (United States); Rachh, M. [Courant Institute of Mathematical Sciences, New York University, New York, NY 10012 (United States); Freidberg, J.P. [Plasma Science and Fusion Center, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)

    2016-01-15

    We present a numerical scheme that can be combined with any fixed boundary finite element based Poisson or Grad–Shafranov solver to compute the first and second partial derivatives of the solution to these equations with the same order of convergence as the solution itself. At the heart of our scheme is an efficient and accurate computation of the Dirichlet to Neumann map through the evaluation of a singular volume integral and the solution to a Fredholm integral equation of the second kind. Our numerical method is particularly useful for magnetic confinement fusion simulations, since it allows the evaluation of quantities such as the magnetic field, the parallel current density and the magnetic curvature with much higher accuracy than has been previously feasible on the affordable coarse grids that are usually implemented.

  18. Accurate micro Hall effect measurements on scribe line pads

    DEFF Research Database (Denmark)

    Østerberg, Frederik Westergaard; Petersen, Dirch Hjorth; Wang, Fei

    2009-01-01

    Hall mobility and sheet carrier density are important parameters to monitor in advanced semiconductor production. If micro Hall effect measurements are done on small pads in scribe lines, these parameters may be measured without using valuable test wafers. We report how Hall mobility can be extracted from micro four-point measurements performed on a rectangular pad. The dimension of the investigated pad is 400 × 430 µm², and the probe pitches range from 20 µm to 50 µm. The Monte Carlo method is used to find the optimal way to perform the Hall measurement and extract Hall mobility most accurately in less than a minute. Measurements are performed on shallow trench isolation patterned silicon wafers to verify the results from the Monte Carlo method.

  19. Accurate performance analysis of opportunistic decode-and-forward relaying

    KAUST Repository

    Tourki, Kamel

    2011-07-01

    In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path may be considered unusable, and the destination may use a selection combining technique. We first derive the exact statistics of each hop, in terms of probability density function (PDF). Then, the PDFs are used to determine accurate closed form expressions for end-to-end outage probability for a transmission rate R. Furthermore, we evaluate the asymptotical performance analysis and the diversity order is deduced. Finally, we validate our analysis by showing that performance simulation results coincide with our analytical results over different network architectures. © 2011 IEEE.
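
    The paper's final step, validating closed-form outage expressions against simulation, can be illustrated on the simplest possible case: a direct Rayleigh-fading link with no relay selection. This is not the paper's opportunistic decode-and-forward analysis, just a sketch of the simulation-vs-analysis check, with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(7)
avg_snr = 10.0                    # mean SNR (linear scale)
rate = 1.0                        # target transmission rate R, bits/s/Hz
thr = 2.0**rate - 1.0             # outage when log2(1 + SNR) < R

# Rayleigh fading -> exponentially distributed instantaneous SNR.
snr = rng.exponential(avg_snr, 1_000_000)
p_sim = float(np.mean(snr < thr))
p_exact = 1.0 - np.exp(-thr / avg_snr)   # closed form for this channel
```

    The same pattern, Monte Carlo over fading realizations versus a PDF-derived closed form, extends to the relay-selection and combining scenarios treated in the paper.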

  20. Accurate and efficient spin integration for particle accelerators

    International Nuclear Information System (INIS)

    Abell, Dan T.; Meiser, Dominic; Ranjbar, Vahid H.; Barber, Desmond P.

    2015-01-01

    Accurate spin tracking is a valuable tool for understanding spin dynamics in particle accelerators and can help improve the performance of an accelerator. In this paper, we present a detailed discussion of the integrators in the spin tracking code GPUSPINTRACK. We have implemented orbital integrators based on drift-kick, bend-kick, and matrix-kick splits. On top of the orbital integrators, we have implemented various integrators for the spin motion. These integrators use quaternions and Romberg quadratures to accelerate both the computation and the convergence of spin rotations. We evaluate their performance and accuracy in quantitative detail for individual elements as well as for the entire RHIC lattice. We exploit the inherently data-parallel nature of spin tracking to accelerate our algorithms on graphics processing units.
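
    The quaternion representation of spin rotations used by such integrators can be sketched as below: per-element rotations are composed into a single quaternion, which is then applied once to the spin vector. The axes and angles here are illustrative, not RHIC lattice elements:

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rot_quat(axis, angle):
    """Unit quaternion for a rotation by `angle` about `axis`."""
    axis = np.asarray(axis, dtype=float) / np.linalg.norm(axis)
    return np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])

def rotate(q, v):
    """Rotate vector v by quaternion q via v' = q v q*."""
    qv = np.concatenate([[0.0], v])
    qc = q * np.array([1.0, -1.0, -1.0, -1.0])   # conjugate
    return quat_mul(quat_mul(q, qv), qc)[1:]

# Compose two successive element rotations into one quaternion, apply once.
q1 = rot_quat([0, 0, 1], np.pi / 2)   # 90 deg about z
q2 = rot_quat([1, 0, 0], np.pi / 2)   # then 90 deg about x
q = quat_mul(q2, q1)                  # composition: q1 acts first
spin = rotate(q, np.array([1.0, 0.0, 0.0]))
```

    Composing many element rotations as quaternion products (4 numbers, cheap renormalization) rather than 3×3 matrix products is what keeps the spin propagation both fast and numerically well conditioned on GPUs.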

  1. Accurate positioning of long, flexible ARM's (Articulated Robotic Manipulator)

    Science.gov (United States)

    Malachowski, Michael J.

    1988-01-01

    An articulated robotic manipulator (ARM) system is being designed for space applications. Work being done on a concept utilizing an infinitely stiff laser beam for position reference is summarized. The laser beam is projected along the segments of the ARM, and the position is sensed by the beam rider modules (BRM) mounted on the distal ends of the segments. The BRM concept is the heart of the system. It utilizes a combination of lateral displacements and rotational and distance measurement sensors. These determine the relative position of the two ends of the segments with respect to each other in six degrees of freedom. The BRM measurement devices contain microprocessor controlled data acquisition and active positioning components. An indirect adaptive controller is used to accurately control the position of the ARM.

  2. Accurate Modeling of Buck Converters with Magnetic-Core Inductors

    DEFF Research Database (Denmark)

    Astorino, Antonio; Antonini, Giulio; Swaminathan, Madhavan

    2015-01-01

    In this paper, a modeling approach for buck converters with magnetic-core inductors is presented. Due to the high nonlinearity of magnetic materials, the frequency domain analysis of such circuits is not suitable for an accurate description of their behaviour. Hence, in this work, a timedomain...... model of buck converters with magnetic-core inductors in a SimulinkR environment is proposed. As an example, the presented approach is used to simulate an eight-phase buck converter. The simulation results show that an unexpected system behaviour in terms of current ripple amplitude needs the inductor core...

  3. Accurate Holdup Calculations with Predictive Modeling & Data Integration

    Energy Technology Data Exchange (ETDEWEB)

    Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States). Dept. of Nuclear Engineering; Cacuci, Dan [Univ. of South Carolina, Columbia, SC (United States). Dept. of Mechanical Engineering

    2017-04-03

    In facilities that process special nuclear material (SNM) it is important to account accurately for the fissile material that enters and leaves the plant. Although there are many stages and processes through which materials must be traced and measured, the focus of this project is material that is “held-up” in equipment, pipes, and ducts during normal operation and that can accumulate over time into significant quantities. Accurately estimating the holdup is essential for proper SNM accounting (vis-à-vis nuclear non-proliferation), criticality and radiation safety, waste management, and efficient plant operation. Usually it is not possible to directly measure the holdup quantity and location, so these must be inferred from measured radiation fields, primarily gamma and less frequently neutrons. Current methods to quantify holdup, i.e. Generalized Geometry Holdup (GGH), primarily rely on simple source configurations and crude radiation transport models aided by ad hoc correction factors. This project seeks an alternate method of performing measurement-based holdup calculations using a predictive model that employs state-of-the-art radiation transport codes capable of accurately simulating such situations. Inverse and data assimilation methods use the forward transport model to search for a source configuration that best matches the measured data and simultaneously provide an estimate of the level of confidence in the correctness of such configuration. In this work the holdup problem is re-interpreted as an inverse problem that is under-determined, hence may permit multiple solutions. A probabilistic approach is applied to solving the resulting inverse problem. This approach rates possible solutions according to their plausibility given the measurements and initial information. This is accomplished through the use of Bayes’ Theorem that resolves the issue of multiple solutions by giving an estimate of the probability of observing each possible solution. To use
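
    The Bayesian rating of candidate source configurations can be sketched as follows. The forward-model predictions, measurement, and uncertainty are hypothetical stand-ins for the transport-code outputs and detector data:

```python
import numpy as np

# Three candidate holdup configurations, each with a forward-model prediction
# of the detector count rate (illustrative numbers standing in for transport
# calculations). Bayes' theorem assigns each candidate a posterior probability.
predicted = np.array([120.0, 200.0, 310.0])   # predicted count rates, counts/s
measured, sigma = 210.0, 20.0                 # measured rate and 1-sigma noise

prior = np.full(predicted.size, 1.0 / predicted.size)    # no initial preference
likelihood = np.exp(-0.5 * ((measured - predicted) / sigma) ** 2)
posterior = prior * likelihood
posterior /= posterior.sum()                  # normalize (Bayes' theorem)
```

    Rather than forcing a single answer to the under-determined inverse problem, the posterior quantifies how plausible each configuration is given the measurement, which is the resolution of multiplicity described above.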

  4. Zadoff-Chu coded ultrasonic signal for accurate range estimation

    KAUST Repository

    AlSharif, Mohammed H.

    2017-11-02

    This paper presents a new adaptation of Zadoff-Chu sequences for the purpose of range estimation and movement tracking. The proposed method uses Zadoff-Chu sequences utilizing a wideband ultrasonic signal to estimate the range between two devices with very high accuracy and high update rate. This range estimation method is based on time of flight (TOF) estimation using cyclic cross correlation. The system was experimentally evaluated under different noise levels and multi-user interference scenarios. For a single user, the results show less than 7 mm error for 90% of range estimates in a typical indoor environment. Under the interference from three other users, the 90% error was less than 25 mm. The system provides high estimation update rate allowing accurate tracking of objects moving with high speed.
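
    The core of such a TOF estimator (Zadoff-Chu reference, cyclic cross-correlation computed with FFTs, peak search) can be sketched as below. Sequence length, root, and noise level are illustrative, and the delay is in samples rather than physical range:

```python
import numpy as np

def zadoff_chu(u, N):
    """Odd-length Zadoff-Chu sequence with root u coprime to N."""
    n = np.arange(N)
    return np.exp(-1j * np.pi * u * n * (n + 1) / N)

def estimate_delay(rx, ref):
    """Delay (in samples) via cyclic cross-correlation computed with FFTs."""
    corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(ref)))
    return int(np.argmax(np.abs(corr)))

N, u, true_delay = 353, 7, 50
zc = zadoff_chu(u, N)
rng = np.random.default_rng(1)
noise = 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
rx = np.roll(zc, true_delay) + noise   # idealized channel: pure delay + noise
est = estimate_delay(rx, zc)
```

    Zadoff-Chu sequences have an ideal (delta-like) cyclic autocorrelation, so the correlation peak sits exactly at the delay; assigning different roots to different users is what keeps multi-user interference low, as reported above.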

  5. Accurate bond dissociation energies (D 0) for FHF- isotopologues

    Science.gov (United States)

    Stein, Christopher; Oswald, Rainer; Sebald, Peter; Botschwina, Peter; Stoll, Hermann; Peterson, Kirk A.

    2013-09-01

    Accurate bond dissociation energies (D0) are determined for three isotopologues of the bifluoride ion (FHF-). While the zero-point vibrational contributions are taken from our previous work (P. Sebald, A. Bargholz, R. Oswald, C. Stein, P. Botschwina, J. Phys. Chem. A, DOI: 10.1021/jp3123677), the equilibrium dissociation energy (De) of the reaction FHF- → HF + F- was obtained by a composite method including frozen-core (fc) CCSD(T) calculations with basis sets up to cardinal number n = 7, followed by extrapolation to the complete basis set limit. Smaller terms beyond fc-CCSD(T) cancel each other almost completely. The D0 values of FHF-, FDF-, and FTF- are predicted to be 15,176, 15,191, and 15,198 cm⁻¹, respectively, with an uncertainty of ca. 15 cm⁻¹.
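
    The complete-basis-set step in such composite schemes is commonly done with the standard two-point 1/n³ extrapolation of correlation energies. A sketch with synthetic energies (not the paper's values) constructed to follow the model form exactly:

```python
def cbs_two_point(e_small, e_large, n_small, n_large):
    """Two-point 1/n^3 extrapolation: E(n) = E_CBS + A / n^3."""
    a, b = float(n_small) ** 3, float(n_large) ** 3
    return (b * e_large - a * e_small) / (b - a)

# Synthetic check: energies built as E(n) = E_CBS + A/n^3, so the two-point
# formula must recover E_CBS exactly. All values are illustrative (hartree).
E_CBS, A = -100.0, 0.5
e6 = E_CBS + A / 6**3    # cardinal number n = 6
e7 = E_CBS + A / 7**3    # cardinal number n = 7
e_extrap = cbs_two_point(e6, e7, 6, 7)
```

    With basis sets up to n = 7, the residual extrapolation error becomes small compared to the ~15 cm⁻¹ uncertainty quoted above.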

  6. Accurate and efficient spin integration for particle accelerators

    Directory of Open Access Journals (Sweden)

    Dan T. Abell

    2015-02-01

    Full Text Available Accurate spin tracking is a valuable tool for understanding spin dynamics in particle accelerators and can help improve the performance of an accelerator. In this paper, we present a detailed discussion of the integrators in the spin tracking code gpuSpinTrack. We have implemented orbital integrators based on drift-kick, bend-kick, and matrix-kick splits. On top of the orbital integrators, we have implemented various integrators for the spin motion. These integrators use quaternions and Romberg quadratures to accelerate both the computation and the convergence of spin rotations. We evaluate their performance and accuracy in quantitative detail for individual elements as well as for the entire RHIC lattice. We exploit the inherently data-parallel nature of spin tracking to accelerate our algorithms on graphics processing units.

  7. Accurate and efficient spin integration for particle accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Abell, Dan T.; Meiser, Dominic [Tech-X Corporation, Boulder, CO (United States); Ranjbar, Vahid H. [Brookhaven National Laboratory, Upton, NY (United States); Barber, Desmond P. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2015-01-15

    Accurate spin tracking is a valuable tool for understanding spin dynamics in particle accelerators and can help improve the performance of an accelerator. In this paper, we present a detailed discussion of the integrators in the spin tracking code GPUSPINTRACK. We have implemented orbital integrators based on drift-kick, bend-kick, and matrix-kick splits. On top of the orbital integrators, we have implemented various integrators for the spin motion. These integrators use quaternions and Romberg quadratures to accelerate both the computation and the convergence of spin rotations. We evaluate their performance and accuracy in quantitative detail for individual elements as well as for the entire RHIC lattice. We exploit the inherently data-parallel nature of spin tracking to accelerate our algorithms on graphics processing units.

  8. Accurate evaluation and analysis of functional genomics data and methods

    Science.gov (United States)

    Greene, Casey S.; Troyanskaya, Olga G.

    2016-01-01

    The development of technology capable of inexpensively performing large-scale measurements of biological systems has generated a wealth of data. Integrative analysis of these data holds the promise of uncovering gene function, regulation, and, in the longer run, understanding complex disease. However, their analysis has proved very challenging, as it is difficult to quickly and effectively assess the relevance and accuracy of these data for individual biological questions. Here, we identify biases that present challenges for the assessment of functional genomics data and methods. We then discuss evaluation methods that, taken together, begin to address these issues. We also argue that the funding of systematic data-driven experiments and of high-quality curation efforts will further improve evaluation metrics so that they more accurately assess functional genomics data and methods. Such metrics will allow researchers in the field of functional genomics to continue to answer important biological questions in a data-driven manner. PMID:22268703

  9. Stereotypes of Age Differences in Personality Traits: Universal and Accurate?

    Science.gov (United States)

    Chan, Wayne; McCrae, Robert R.; De Fruyt, Filip; Jussim, Lee; Löckenhoff, Corinna E.; De Bolle, Marleen; Costa, Paul T.; Sutin, Angelina R.; Realo, Anu; Allik, Jüri; Nakazato, Katsuharu; Shimonaka, Yoshiko; Hřebíčková, Martina; Kourilova, Sylvie; Yik, Michelle; Ficková, Emília; Brunner-Sciarra, Marina; de Figueroa, Nora Leibovich; Schmidt, Vanina; Ahn, Chang-kyu; Ahn, Hyun-nie; Aguilar-Vafaie, Maria E.; Siuta, Jerzy; Szmigielska, Barbara; Cain, Thomas R.; Crawford, Jarret T.; Mastor, Khairul Anwar; Rolland, Jean-Pierre; Nansubuga, Florence; Miramontez, Daniel R.; Benet-Martínez, Veronica; Rossier, Jérôme; Bratko, Denis; Halberstadt, Jamin; Yamaguchi, Mami; Knežević, Goran; Martin, Thomas A.; Gheorghiu, Mirona; Smith, Peter B.; Barbaranelli, Claudio; Wang, Lei; Shakespeare-Finch, Jane; Lima, Margarida P.; Klinkosz, Waldemar; Sekowski, Andrzej; Alcalay, Lidia; Simonetti, Franco; Avdeyeva, Tatyana V.; Pramila, V. S.; Terracciano, Antonio

    2012-01-01

    Age trajectories for personality traits are known to be similar across cultures. To address whether stereotypes of age groups reflect these age-related changes in personality, we asked participants in 26 countries (N = 3,323) to rate typical adolescents, adults, and old persons in their own country. Raters across nations tended to share similar beliefs about different age groups; adolescents were seen as impulsive, rebellious, undisciplined, preferring excitement and novelty, whereas old people were consistently considered lower on impulsivity, activity, antagonism, and Openness. These consensual age group stereotypes correlated strongly with published age differences on the five major dimensions of personality and most of 30 specific traits, using as criteria of accuracy both self-reports and observer ratings, different survey methodologies, and data from up to 50 nations. However, personal stereotypes were considerably less accurate, and consensual stereotypes tended to exaggerate differences across age groups. PMID:23088227

  10. Establishing Accurate and Sustainable Geospatial Reference Layers in Developing Countries

    Science.gov (United States)

    Seaman, V. Y.

    2017-12-01

    Accurate geospatial reference layers (settlement names & locations, administrative boundaries, and population) are not readily available for most developing countries. This critical information gap makes it challenging for governments to efficiently plan, allocate resources, and provide basic services. It also hampers international agencies' response to natural disasters, humanitarian crises, and other emergencies. The current work involves a recent successful effort, led by the Bill & Melinda Gates Foundation and the Government of Nigeria, to obtain such data. The data collection began in 2013, with local teams collecting names, coordinates, and administrative attributes for over 100,000 settlements using ODK-enabled smartphones. A settlement feature layer extracted from satellite imagery was used to ensure all settlements were included. Administrative boundaries (Ward, LGA) were created using the settlement attributes. These "new" boundary layers were much more accurate than existing shapefiles used by the government and international organizations. The resulting data sets helped Nigeria eradicate polio from all areas except in the extreme northeast, where security issues limited access and vaccination activities. In addition to the settlement and boundary layers, a GIS-based population model was developed (in partnership with Oak Ridge National Laboratory and Flowminder) that used the extracted settlement areas and characteristics, along with targeted microcensus data. This model provides population and demographics estimates independent of census or other administrative data, at a resolution of 90 meters. These robust geospatial data layers found many other uses, including establishing catchment area settlements and populations for health facilities, validating denominators for population-based surveys, and applications across a variety of government sectors. Based on the success of the Nigeria effort, a partnership between DfID and the Bill & Melinda Gates Foundation...

  11. Accurate thermodynamic characterization of a synthetic coal mine methane mixture

    International Nuclear Information System (INIS)

    Hernández-Gómez, R.; Tuma, D.; Villamañán, M.A.; Mondéjar, M.E.; Chamorro, C.R.

    2014-01-01

    Highlights: • Accurate density data of a 10-component synthetic coal mine methane mixture are presented. • Experimental data are compared with the densities calculated from the GERG-2008 equation of state. • Relative deviations in density were within a 0.2% band at temperatures above 275 K. • Densities at 250 K as well as at 275 K and pressures above 10 MPa showed higher deviations. -- Abstract: In the last few years, coal mine methane (CMM) has gained significance as a potential non-conventional gas fuel. The progressive depletion of common fossil fuel reserves and, on the other hand, the positive estimates of CMM resources as a by-product of mining promote this gas as a promising alternative fuel. The increasing importance of its exploitation makes it necessary to check the capability of present-day models and equations of state for natural gas to predict the thermophysical properties of gases with a considerably different composition, like CMM. In this work, accurate density measurements of a synthetic CMM mixture are reported in the temperature range from (250 to 400) K and pressures up to 15 MPa, as part of the research project EMRP ENG01 of the European Metrology Research Program for the characterization of non-conventional energy gases. Experimental data were compared with the densities calculated with the GERG-2008 equation of state. Relative deviations between experimental and estimated densities were within a 0.2% band at temperatures above 275 K, while data at 250 K as well as at 275 K and pressures above 10 MPa showed higher deviations.

  12. Accurate atom-mapping computation for biochemical reactions.

    Science.gov (United States)

    Latendresse, Mario; Malerich, Jeremiah P; Travers, Mike; Karp, Peter D

    2012-11-26

    The complete atom mapping of a chemical reaction is a bijection of the reactant atoms to the product atoms that specifies the terminus of each reactant atom. Atom mapping of biochemical reactions is useful for many applications of systems biology, in particular for metabolic engineering, where synthesizing new biochemical pathways has to take into account the number of carbon atoms from a source compound that are conserved in the synthesis of a target compound. Rapid, accurate computation of the atom mapping(s) of a biochemical reaction remains elusive despite significant work on this topic. In particular, past researchers did not validate the accuracy of mapping algorithms. We introduce a new method for computing atom mappings called the minimum weighted edit-distance (MWED) metric. The metric is based on bond propensity to react and computes biochemically valid atom mappings for a large percentage of biochemical reactions. MWED models can be formulated efficiently as Mixed-Integer Linear Programs (MILPs). We have demonstrated this approach on 7501 reactions of the MetaCyc database, for which 87% of the models could be solved in less than 10 s. For 2.1% of the reactions, we found multiple optimal atom mappings. We show that the error rate is 0.9% (22 reactions) by comparing these atom mappings to 2446 atom mappings of the manually curated Kyoto Encyclopedia of Genes and Genomes (KEGG) RPAIR database. To our knowledge, our computational atom-mapping approach is the most accurate and among the fastest published to date. The atom-mapping data will be available in the MetaCyc database later in 2012; the atom-mapping software will be available within the Pathway Tools software later in 2012.

  13. Accurate molecular classification of cancer using simple rules

    Directory of Open Access Journals (Sweden)

    Gotoh Osamu

    2009-10-01

    Full Text Available Abstract Background One intractable problem with using microarray data analysis for cancer classification is how to reduce the extremely high dimensionality of the gene feature data to remove the effects of noise. Feature selection is often used to address this problem by selecting informative genes from among thousands or tens of thousands of genes. However, most of the existing methods of microarray-based cancer classification utilize too many genes to achieve accurate classification, which often hampers the interpretability of the models. For a better understanding of the classification results, it is desirable to develop simpler rule-based models with as few marker genes as possible. Methods We screened a small number of informative single genes and gene pairs on the basis of their depended degrees proposed in rough sets. Applying the decision rules induced by the selected genes or gene pairs, we constructed cancer classifiers. We tested the efficacy of the classifiers by leave-one-out cross-validation (LOOCV) of training sets and classification of independent test sets. Results We applied our methods to five cancerous gene expression datasets: leukemia (acute lymphoblastic leukemia [ALL] vs. acute myeloid leukemia [AML]), lung cancer, prostate cancer, breast cancer, and leukemia (ALL vs. mixed-lineage leukemia [MLL] vs. AML). Accurate classification outcomes were obtained by utilizing just one or two genes. Some genes that correlated closely with the pathogenesis of relevant cancers were identified. In terms of both classification performance and algorithm simplicity, our approach outperformed or at least matched existing methods. Conclusion In cancerous gene expression datasets, a small number of genes, even one or two if selected correctly, is capable of achieving an ideal cancer classification effect. This finding also means that very simple rules may perform well for cancerous class prediction.
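
    A minimal sketch of a single-gene threshold rule of this kind. Note this is not the paper's rough-sets "depended degree" criterion: gene selection here is by simple midpoint-threshold accuracy on synthetic data, purely to illustrate how one well-chosen gene can classify perfectly:

```python
import numpy as np

def best_single_gene_rule(X, y):
    """Pick the gene and midpoint threshold that best separate two classes.

    Returns (gene, threshold, polarity); the induced rule predicts class 1
    when polarity * expression > polarity * threshold.
    """
    best = (0, 0.0, 1, -1.0)   # (gene, threshold, polarity, accuracy)
    for g in range(X.shape[1]):
        thr = (X[y == 0, g].mean() + X[y == 1, g].mean()) / 2.0
        for pol in (1, -1):
            acc = ((pol * X[:, g] > pol * thr).astype(int) == y).mean()
            if acc > best[3]:
                best = (g, thr, pol, acc)
    return best[:3]

# Synthetic expression matrix: gene 2 is a clean marker (hypothetical data).
rng = np.random.default_rng(3)
X = rng.normal(0.0, 1.0, (40, 5))
y = np.array([0] * 20 + [1] * 20)
X[y == 1, 2] += 10.0            # strong up-regulation in class 1
gene, thr, pol = best_single_gene_rule(X, y)
```

    The resulting rule ("class 1 if gene 2's expression exceeds the threshold") is exactly the kind of interpretable one-gene classifier the abstract argues for.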

  14. A New Multiscale Technique for Time-Accurate Geophysics Simulations

    Science.gov (United States)

    Omelchenko, Y. A.; Karimabadi, H.

    2006-12-01

    Large-scale geophysics systems are frequently described by multiscale reactive flow models (e.g., wildfire and climate models, multiphase flows in porous rocks, etc.). Accurate and robust simulations of such systems by traditional time-stepping techniques face a formidable computational challenge. Explicit time integration suffers from global (CFL and accuracy) timestep restrictions due to inhomogeneous convective and diffusion processes, as well as closely coupled physical and chemical reactions. Application of adaptive mesh refinement (AMR) to such systems may not be always sufficient since its success critically depends on a careful choice of domain refinement strategy. On the other hand, implicit and timestep-splitting integrations may result in a considerable loss of accuracy when fast transients in the solution become important. To address this issue, we developed an alternative explicit approach to time-accurate integration of such systems: Discrete-Event Simulation (DES). DES enables asynchronous computation by automatically adjusting the CPU resources in accordance with local timescales. This is done by encapsulating flux- conservative updates of numerical variables in the form of events, whose execution and synchronization is explicitly controlled by imposing accuracy and causality constraints. As a result, at each time step DES self- adaptively updates only a fraction of the global system state, which eliminates unnecessary computation of inactive elements. DES can be naturally combined with various mesh generation techniques. The event-driven paradigm results in robust and fast simulation codes, which can be efficiently parallelized via a new preemptive event processing (PEP) technique. We discuss applications of this novel technology to time-dependent diffusion-advection-reaction and CFD models representative of various geophysics applications.
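
    A toy sketch of the event-driven idea: each element advances on its own local timestep, and a priority queue guarantees causal, time-ordered execution, so only the element that is due gets updated. The two-cell setup and timesteps are illustrative assumptions:

```python
import heapq

def run_des(t_end):
    """Advance two cells with different local timesteps via an event queue."""
    dt = {"fast": 0.25, "slow": 1.0}     # local timescales per cell
    updates = {"fast": 0, "slow": 0}
    events = [(step, cell) for cell, step in dt.items()]
    heapq.heapify(events)                # min-heap keyed on event time
    log = []
    while events and events[0][0] <= t_end:
        t, cell = heapq.heappop(events)  # only the cell that is due updates
        updates[cell] += 1
        log.append(t)
        heapq.heappush(events, (t + dt[cell], cell))
    return updates, log

updates, log = run_des(2.0)
```

    Over two time units the fast cell is updated 8 times and the slow cell only twice; in a globally time-stepped scheme both would be forced onto the fast cell's timestep, which is the wasted work DES eliminates.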

  15. An Accurate liver segmentation method using parallel computing algorithm

    International Nuclear Information System (INIS)

    Elbasher, Eiman Mohammed Khalied

    2014-12-01

    Computed Tomography (CT or CAT scan) is a noninvasive diagnostic imaging procedure that uses a combination of X-rays and computer technology to produce horizontal, or axial, images (often called slices) of the body. A CT scan shows detailed images of any part of the body, including the bones, muscles, fat and organs, and is more detailed than standard X-rays. CT scans may be done with or without contrast. Contrast refers to a substance taken by mouth and/or injected into an intravenous (IV) line that causes the particular organ or tissue under study to be seen more clearly. CT scans of the liver and biliary tract are used in the diagnosis of many diseases of the abdominal structures, particularly when another type of examination, such as X-rays, physical examination, or ultrasound, is not conclusive. Unfortunately, the presence of noise and artifacts in the edges and fine details of CT images limits the contrast resolution and makes the diagnostic procedure more difficult. This experimental study was conducted at the College of Medical Radiological Science, Sudan University of Science and Technology and Fidel Specialist Hospital. The study sample included 50 patients. The main objective of this research was to study an accurate liver segmentation method using a parallel computing algorithm, and to segment the liver and adjacent organs using image processing techniques. The main segmentation technique used in this study was the watershed transform. The scope of image processing and analysis applied to medical applications is to improve the quality of the acquired image and extract quantitative information from medical image data in an efficient and accurate way. The results of this technique agreed with the results of Jarritt et al. (2010), Kratchwil et al. (2010), Jover et al. (2011), Yomamoto et al. (1996), Cai et al. (1999), and Saudha and Jayashree (2010), who used different segmentation filters based on methods of enhancing computed tomography images.
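The watershed transform named in this abstract can be illustrated with a toy marker-based priority-flood sketch: pixels are flooded from labelled markers in order of increasing intensity, so each pixel joins the basin that reaches it at the lowest "water level". The grid, markers, and function below are invented for illustration; they are not the study's parallel implementation.

```python
import heapq

def watershed(image, markers):
    """Minimal marker-based watershed segmentation by priority flooding.

    `image` is a 2-D list of intensities; `markers` maps (row, col) -> label.
    """
    rows, cols = len(image), len(image[0])
    labels = [[0] * cols for _ in range(rows)]
    heap = []
    for (r, c), lab in markers.items():
        labels[r][c] = lab
        heapq.heappush(heap, (image[r][c], r, c))
    while heap:
        _, r, c = heapq.heappop(heap)      # flood from the lowest intensity first
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and labels[nr][nc] == 0:
                labels[nr][nc] = labels[r][c]   # inherit the flooding basin's label
                heapq.heappush(heap, (image[nr][nc], nr, nc))
    return labels
```

On a 1-D "valley-ridge-valley" profile, the two basins grow from their markers and meet at the intensity ridge, which is exactly how organ boundaries are delimited in watershed-based CT segmentation.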

  16. WGS accurately predicts antimicrobial resistance in Escherichia coli.

    Science.gov (United States)

    Tyson, Gregory H; McDermott, Patrick F; Li, Cong; Chen, Yuansha; Tadesse, Daniel A; Mukherjee, Sampa; Bodeis-Jones, Sonya; Kabera, Claudine; Gaines, Stuart A; Loneragan, Guy H; Edrington, Tom S; Torrence, Mary; Harhay, Dayna M; Zhao, Shaohua

    2015-10-01

    The objective of this study was to determine the effectiveness of WGS in identifying resistance genotypes of MDR Escherichia coli and whether these correlate with observed phenotypes. Seventy-six E. coli strains were isolated from farm cattle and measured for phenotypic resistance to 15 antimicrobials with the Sensititre(®) system. Isolates with resistance to at least four antimicrobials in three classes were selected for WGS using an Illumina MiSeq. Genotypic analysis was conducted with in-house Perl scripts using BLAST analysis to identify known genes and mutations associated with clinical resistance. Over 30 resistance genes and a number of resistance mutations were identified among the E. coli isolates. Resistance genotypes correlated with 97.8% specificity and 99.6% sensitivity to the identified phenotypes. The majority of discordant results were attributable to the aminoglycoside streptomycin, whereas there was a perfect genotype-phenotype correlation for most antibiotic classes such as tetracyclines, quinolones and phenicols. WGS also revealed information about rare resistance mechanisms, such as structural mutations in chromosomal copies of ampC conferring third-generation cephalosporin resistance. WGS can provide comprehensive resistance genotypes and is capable of accurately predicting resistance phenotypes, making it a valuable tool for surveillance. Moreover, the data presented here showing the ability to accurately predict resistance suggest that WGS may be used as a screening tool in selecting anti-infective therapy, especially as costs drop and methods improve. Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy 2015. This work is written by (a) US Government employee(s) and is in the public domain in the US.
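The genotype-phenotype correlation reported above (97.8% specificity, 99.6% sensitivity) reduces to a simple confusion-matrix calculation over per-drug calls. The sketch below is an illustrative reconstruction of that comparison, not the study's Perl/BLAST pipeline; the toy data are invented.

```python
def concordance(calls):
    """Sensitivity and specificity of genotype-predicted resistance vs phenotype.

    `calls` is a list of (genotype_resistant, phenotype_resistant) booleans,
    one per isolate-drug combination.
    """
    tp = sum(1 for g, p in calls if g and p)          # predicted and observed resistant
    fn = sum(1 for g, p in calls if not g and p)      # resistance missed by genotype
    tn = sum(1 for g, p in calls if not g and not p)  # predicted and observed susceptible
    fp = sum(1 for g, p in calls if g and not p)      # gene present, no phenotype
    return tp / (tp + fn), tn / (tn + fp)
```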

  17. Anatomically accurate hard priors for transrectal electrical impedance tomography (TREIT) of the prostate

    International Nuclear Information System (INIS)

    Syed, H; Borsic, A; Hartov, A; Halter, R J

    2012-01-01

    Current prostate biopsy procedures entail sampling tissues at template-based locations that are not patient specific. Ultrasound (US)-coupled transrectal electrical impedance tomography (TREIT), featuring an endorectal US probe retrofitted with electrodes, has been developed for prostate imaging. This multi-modal imaging system aims to identify suspicious tumor regions based on their electrical properties and ultimately provide additional patient-specific locations where to take biopsy samples. Unfortunately, the open-domain geometry associated with TREIT results in a severely ill-posed problem due to the small number of measurements and unbounded imaging domain. Furthermore, reconstructing contrasts within the prostate volume is challenging because the conductivity differences between the prostate and surrounding tissues are much larger than the conductivity differences between benign and malignant tissues within the prostate. To help overcome these problems, anatomically accurate hard priors can be employed to limit estimation of the electrical property distribution to within the prostate volume; however, this requires the availability of structural information. Here, a method that extracts the prostate surface from US images and incorporates this surface into the image reconstruction algorithm has been developed to enable estimation of electrical parameters within the prostate volume. In this paper, the performance of this algorithm is evaluated against a more traditional EIT algorithm that does not use anatomically accurate structural information, in the context of numerical simulations and phantom experiments. The developed anatomically accurate hard-prior algorithm demonstrably identifies contrasts within the prostate volume while an algorithm that does not rely on anatomically accurate structural information is unable to localize these contrasts. While inclusions are identified in the correct locations, they are found to be smaller in size than the actual

  18. Accurate Interpretation of the 12-Lead ECG Electrode Placement: A Systematic Review

    Science.gov (United States)

    Khunti, Kirti

    2014-01-01

    Background: Coronary heart disease (CHD) patients require monitoring through ECGs; the 12-lead electrocardiogram (ECG) is considered to be the non-invasive gold standard. Examples of incorrect treatment because of inaccurate or poor ECG monitoring techniques have been reported in the literature. The findings that only 50% of nurses and less than…

  19. An Investigation into the Required Equipment and Procedures for the Accurate Measurement of Pressure in Hydraulic Fluid Power Systems

    Science.gov (United States)

    1976-05-27


  20. PKU-beta/TLK1 regulates myosin II activities, and is required for accurate equaled chromosome segregation.

    Science.gov (United States)

    Hashimoto, Mitsumasa; Matsui, Tadashi; Iwabuchi, Kuniyoshi; Date, Takayasu

    2008-11-17

    Tousled-like kinase 1 (or protein kinase ubiquitous, PKU-beta/TLK1) is a serine/threonine protein kinase that is implicated in chromatin remodeling, DNA replication and mitosis. RNAi-mediated PKU-beta/TLK1-depleted human cells showed aneuploidy, and immunofluorescence analysis of these cells revealed the unequal segregation of daughter chromosomes. Immunoblots indicated a substantial reduction in the phosphorylation level of Ser19/Thr18 on the myosin II regulatory light chain (MRLC) in PKU-beta/TLK1-depleted cells, with no change in total MRLC protein. To confirm the relationship between mitotic aberration and MRLC dysfunction, we expressed wild type MRLC or DD-MRLC (mimics diphosphorylation; substitution of both Thr18 and Ser19 with aspartate) in PKU-beta/TLK1-depleted cells. DD-MRLC expression dramatically reduced the unequal segregation of chromosomes. Our data suggest that human PKU-beta/TLK1 plays an important role in chromosome integrity via the regulation of myosin II dynamics by phosphorylating MRLC during mitosis.

  1. Rapid and accurate pyrosequencing of angiosperm plastid genomes

    Directory of Open Access Journals (Sweden)

    Farmerie William G

    2006-08-01

    Full Text Available Abstract Background Plastid genome sequence information is vital to several disciplines in plant biology, including phylogenetics and molecular biology. The past five years have witnessed a dramatic increase in the number of completely sequenced plastid genomes, fuelled largely by advances in conventional Sanger sequencing technology. Here we report a further significant reduction in time and cost for plastid genome sequencing through the successful use of a newly available pyrosequencing platform, the Genome Sequencer 20 (GS 20) System (454 Life Sciences Corporation), to rapidly and accurately sequence the whole plastid genomes of the basal eudicot angiosperms Nandina domestica (Berberidaceae) and Platanus occidentalis (Platanaceae). Results More than 99.75% of each plastid genome was simultaneously obtained during two GS 20 sequence runs, to an average depth of coverage of 24.6× in Nandina and 17.3× in Platanus. The Nandina and Platanus plastid genomes shared essentially identical gene complements and possessed the typical angiosperm plastid structure and gene arrangement. To assess the accuracy of the GS 20 sequence, over 45 kilobases of sequence were generated for each genome using conventional sequencing. Overall error rates of 0.043% and 0.031% were observed in GS 20 sequence for Nandina and Platanus, respectively. More than 97% of all observed errors were associated with homopolymer runs, with ~60% of all errors associated with homopolymer runs of 5 or more nucleotides and ~50% of all errors associated with regions of extensive homopolymer runs. No substitution errors were present in either genome. Error rates were generally higher in the single-copy and noncoding regions of both plastid genomes relative to the inverted repeat and coding regions. Conclusion Highly accurate and essentially complete sequence information was obtained for the Nandina and Platanus plastid genomes using the GS 20 System. More importantly, the high accuracy

  2. A spectroscopic transfer standard for accurate atmospheric CO measurements

    Science.gov (United States)

    Nwaboh, Javis A.; Li, Gang; Serdyukov, Anton; Werhahn, Olav; Ebert, Volker

    2016-04-01

    Atmospheric carbon monoxide (CO) is a precursor of essential climate variables and has an indirect enhancing effect on global warming. Accurate and reliable measurements of atmospheric CO concentration are becoming indispensable. WMO-GAW reports state a compatibility goal of ±2 ppb for atmospheric CO concentration measurements. Therefore, the EMRP-HIGHGAS (European metrology research program - high-impact greenhouse gases) project aims at developing spectroscopic transfer standards for CO concentration measurements to meet this goal. A spectroscopic transfer standard would provide results that are directly traceable to the SI, can be very useful for calibration of devices operating in the field, and could complement classical gas standards in the field, where calibration gas mixtures in bottles often are not accurate, available or stable enough [1][2]. Here, we present our new direct tunable diode laser absorption spectroscopy (dTDLAS) sensor capable of performing absolute ("calibration free") CO concentration measurements and being operated as a spectroscopic transfer standard. To achieve the compatibility goal stated by WMO for CO concentration measurements and ensure the traceability of the final concentration results, traceable spectral line data, especially line intensities with appropriate uncertainties, are needed. Therefore, we utilize our new high-resolution Fourier-transform infrared (FTIR) spectroscopy CO line data for the 2-0 band, with significantly reduced uncertainties, for the dTDLAS data evaluation. Further, we demonstrate the capability of our sensor for atmospheric CO measurements, discuss uncertainty calculation following the guide to the expression of uncertainty in measurement (GUM) principles, and show that CO concentrations derived using the sensor, based on the TILSAM (traceable infrared laser spectroscopic amount fraction measurement) method, are in excellent agreement with gravimetric values. Acknowledgement Parts of this work have been
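The "calibration-free" evaluation behind dTDLAS ultimately rests on inverting an absorption law with traceable line data. As a greatly simplified sketch, a single-wavelength Beer-Lambert inversion is shown below; a real dTDLAS/TILSAM evaluation integrates the full line shape and uses the line intensity S(T), so the function, numbers, and units here are illustrative only.

```python
import math

def number_density_from_absorbance(i0, i, sigma_cm2, path_cm):
    """Invert the monochromatic Beer-Lambert law I = I0 * exp(-sigma * N * L).

    Returns the absorber number density N in cm^-3 from measured incident
    and transmitted intensities, the absorption cross-section (cm^2), and
    the optical path length (cm).
    """
    return math.log(i0 / i) / (sigma_cm2 * path_cm)
```

Because every quantity on the right-hand side is either measured or traceable to a standard, the recovered concentration needs no gas-mixture calibration, which is the point of a spectroscopic transfer standard.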

  3. Improvement of sustainability of irrigation in olive by the accurate management of regulated deficit irrigation

    Science.gov (United States)

    Memmi, Houssem; Moreno, Marta M.; Gijón, M. Carmen; Pérez-López, David

    2015-04-01

    Regulated Deficit Irrigation (RDI) is a useful tool to balance the improvement of productivity and water saving. This methodology is based on maintaining the maximum yield under deficit irrigation. The key consists in imposing the water deficit during a non-sensitive phenological period. In olive, this phenological period is pit hardening, although the precise delimitation of the end of this period is still under investigation. Another interesting point in this methodology is how severe the water stress can be during the non-sensitive period. In this trial, three treatments were used in 2012 and 2013: a control treatment (T0), irrigated following the FAO methodology without water stress during the whole season, and two RDI treatments in which water stress was avoided only during stages I and III of fruit growth. During stage II, widely considered as pit hardening, irrigation was ceased until trees reached the stated water stress threshold. Water status was monitored by means of stem water potential (ψs) measurements. When the ψs value reached -2 MPa in the T1 treatment, trees were irrigated, but with a low amount of water, with the aim of keeping this water status for the whole of stage II. The same methodology was used for the T2 treatment, but with a threshold of -3 MPa. Water status was also monitored by leaf conductance measurements. Fruit size and yield were determined at the end of each season. The statistical design was a randomized complete block design with four replications. The irrigation amounts in T1 and T2 were 50% and 65% less than T0 at the end of the study. There were no significant differences among treatments in terms of yield in 2012 (an 'off' year) or 2013 (an 'on' year).
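The RDI decision rule described in this abstract (full irrigation in the sensitive stages, withholding during pit hardening until a ψs threshold, then a small maintenance dose) can be sketched as a simple scheduling function. The -2/-3 MPa thresholds come from the abstract; the 20% maintenance fraction is an illustrative guess, not a figure from the study.

```python
def rdi_dose(stage, psi_stem_mpa, threshold_mpa, full_dose_mm):
    """Irrigation dose (mm) under regulated deficit irrigation (RDI).

    stage: fruit-growth stage (1, 2, or 3); stage 2 is pit hardening.
    psi_stem_mpa: measured stem water potential; threshold_mpa is -2.0 for
    the T1 treatment or -3.0 for T2.
    """
    if stage != 2:
        return full_dose_mm          # stages I and III: no water stress allowed
    if psi_stem_mpa > threshold_mpa:
        return 0.0                   # still above the stress threshold: withhold
    return 0.2 * full_dose_mm        # ASSUMED maintenance fraction to hold the stress level
```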

  4. Nutritional status in sick children and adolescents is not accurately reflected by BMI-SDS.

    Science.gov (United States)

    Fusch, Gerhard; Raja, Preeya; Dung, Nguyen Quang; Karaolis-Danckert, Nadina; Barr, Ronald; Fusch, Christoph

    2013-01-01

    Nutritional status provides helpful information about disease severity and treatment effectiveness. Body mass index standard deviation scores (BMI-SDS) provide an approximation of body composition and thus are frequently used to classify the nutritional status of sick children and adolescents. However, the accuracy of estimating body composition in this population using BMI-SDS has not been assessed. Thus, this study aims to evaluate the accuracy of nutritional status classification in sick infants and adolescents using BMI-SDS, upon comparison to classification using percentage body fat (%BF) reference charts. BMI-SDS was calculated from anthropometric measurements and %BF was measured using dual-energy x-ray absorptiometry (DXA) for 393 sick children and adolescents (5 months-18 years). Subjects were classified by nutritional status (underweight, normal weight, overweight, and obese) using 2 methods: (1) BMI-SDS, based on age- and gender-specific percentiles, and (2) %BF reference charts (standard). Linear regression and a correlation analysis were conducted to compare agreement between both methods of nutritional status classification. %BF reference value comparisons were also made between 3 independent sources based on German, Canadian, and American study populations. Correlation between nutritional status classification by BMI-SDS and %BF was moderate (r² = 0.75 and 0.76 in boys and girls, respectively). The misclassification of nutritional status in sick children and adolescents using BMI-SDS was 27% when using German %BF references; similar rates were observed when using Canadian and American %BF references (24% and 23%, respectively). Using BMI-SDS to determine nutritional status in a sick population is not considered an appropriate clinical tool for identifying individual underweight or overweight children or adolescents. However, BMI-SDS may be appropriate for longitudinal measurements or for screening purposes in large field studies.

  5. Combining MFD and PIE for accurate single-pair Förster resonance energy transfer measurements.

    Science.gov (United States)

    Kudryavtsev, Volodymyr; Sikor, Martin; Kalinin, Stanislav; Mokranjac, Dejana; Seidel, Claus A M; Lamb, Don C

    2012-03-01

    Single-pair Förster resonance energy transfer (spFRET) experiments using single-molecule burst analysis on a confocal microscope are an ideal tool to measure inter- and intramolecular distances and dynamics on the nanoscale. Different techniques have been developed to maximize the amount of information available in spFRET burst analysis experiments. Multiparameter fluorescence detection (MFD) is used to monitor a variety of fluorescence parameters simultaneously and pulsed interleaved excitation (PIE) employs direct excitation of the acceptor to probe its presence and photoactivity. To calculate accurate FRET efficiencies from spFRET experiments with MFD or PIE, several calibration measurements are usually required. Herein, we demonstrate that by combining MFD with PIE information regarding all calibration factors as well as an accurate determination of spFRET histograms can be performed in a single measurement. In addition, the quality of overlap of the different detection volumes as well as the detection of acceptor photophysics can be investigated with MFD-PIE. Bursts containing acceptor photobleaching can be identified and excluded from further investigation while bursts that contain FRET dynamics are unaffected by this analysis. We have employed MFD-PIE to accurately analyze the effects of nucleotides and substrate on the interdomain separation in DnaK, the major bacterial heat shock protein 70 (Hsp70). The interdomain distance increases from 47 Å in the ATP-bound state to 84 Å in the ADP-bound state and slightly contracts to 77 Å when a substrate is bound. This is in contrast to what was observed for the mitochondrial member of the Hsp70s, Ssc1, supporting the notion of evolutionary specialization of Hsp70s for different cellular functions in different organisms and cell organelles. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
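Converting the accurate spFRET efficiencies discussed above into distances uses the Förster relation E = 1/(1 + (r/R0)^6). The abstract does not give the Förster radius for the DnaK dye pair, so the R0 = 60 Å used in the example below is purely illustrative.

```python
def fret_efficiency(r, r0):
    """FRET efficiency for donor-acceptor distance r and Forster radius r0
    (same length units for both): E = 1 / (1 + (r/r0)**6)."""
    return 1.0 / (1.0 + (r / r0) ** 6)

def distance_from_efficiency(e, r0):
    """Invert the Forster relation to recover the distance from a measured E."""
    return r0 * (1.0 / e - 1.0) ** (1.0 / 6.0)
```

The sixth-power dependence is why spFRET is so sensitive near R0, and why accurate, well-calibrated efficiencies (the point of combining MFD with PIE) matter: a small error in E translates into a distance error that grows quickly away from R0.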

  6. The ACGME Case Log System May Not Accurately Represent Operative Experience Among General Surgery Interns.

    Science.gov (United States)

    Naik, Nimesh D; Abbott, Eduardo F; Aho, Johnathon M; Pandian, T K; Thiels, Cornelius A; Heller, Stephanie F; Farley, David R

    To assess if the Accreditation Council for Graduate Medical Education (ACGME) case log system accurately captures the operative experience of our postgraduate year 1 (PGY-1) residents. ACGME case log information was retrospectively obtained for 5 cohorts of PGY-1 residents (2011-2015) and compared to the number of operative cases captured by an institutional automated operative case report system, the Surgical Access Utility System (SAUS). SAUS automatically captures all surgical team members who are listed in the operative dictation for a given case, including interns. A paired t-test analysis was used to compare the number of cases coded between the 2 systems. Academic, tertiary care referral center with a large general surgery training program. PGY-1 general surgery trainees (interns) from the years 2011-2015. Forty-nine PGY-1 general surgery residents were identified over a 5-year period. Mean operative case volume per intern, per year, captured by the automated SAUS was 176.5 ± 28.1 (SD) compared to 126.3 ± 58.0 ACGME cases logged (mean difference = 50.2 cases, p < …). ACGME case log data may not accurately reflect the actual operative experience of our PGY-1 residents. If such data hold true for other general surgery training programs, the true impact of duty hour regulations on operative volume may be unclear when using ACGME case log data. This current standard approach of using ACGME case logs as a representation of operative experience requires further scrutiny and potential revision to more accurately determine operative experience for accreditation purposes. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
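The paired t-test named in this abstract compares, per intern, the automated SAUS count against the self-logged ACGME count. A minimal from-scratch sketch of that statistic is below; the toy numbers in the usage are invented, not the study's data.

```python
import math

def paired_t(x, y):
    """Paired t statistic for matched samples: t = mean(d) / (sd(d) / sqrt(n)),
    where d are the per-subject differences (e.g. SAUS count - ACGME count)."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)   # unbiased sample variance
    return mean / math.sqrt(var / n)
```

The resulting t is compared against a t distribution with n - 1 degrees of freedom; in practice one would use `scipy.stats.ttest_rel`, which also returns the p-value.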

  7. Accurate path integration in continuous attractor network models of grid cells.

    Directory of Open Access Journals (Sweden)

    Yoram Burak

    2009-02-01

    Full Text Available Grid cells in the rat entorhinal cortex display strikingly regular firing responses to the animal's position in 2-D space and have been hypothesized to form the neural substrate for dead-reckoning. However, errors accumulate rapidly when velocity inputs are integrated in existing models of grid cell activity. To produce grid-cell-like responses, these models would require frequent resets triggered by external sensory cues. Such inadequacies, shared by various models, cast doubt on the dead-reckoning potential of the grid cell system. Here we focus on the question of accurate path integration, specifically in continuous attractor models of grid cell activity. We show, in contrast to previous models, that continuous attractor models can generate regular triangular grid responses, based on inputs that encode only the rat's velocity and heading direction. We consider the role of the network boundary in the integration performance of the network and show that both periodic and aperiodic networks are capable of accurate path integration, despite important differences in their attractor manifolds. We quantify the rate at which errors in the velocity integration accumulate as a function of network size and intrinsic noise within the network. With a plausible range of parameters and the inclusion of spike variability, our model networks can accurately integrate velocity inputs over a maximum of approximately 10-100 meters and approximately 1-10 minutes. These findings form a proof-of-concept that continuous attractor dynamics may underlie velocity integration in the dorsolateral medial entorhinal cortex. The simulations also generate pertinent upper bounds on the accuracy of integration that may be achieved by continuous attractor dynamics in the grid cell network. We suggest experiments to test the continuous attractor model and differentiate it from models in which single cells establish their responses independently of each other.
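The error accumulation that motivates this paper is easy to demonstrate: integrating a noisy velocity signal makes the position error perform a random walk, growing like sigma * sqrt(n_steps). The 1-D toy simulation below is a deliberately naive integrator, not the continuous attractor network model.

```python
import math
import random

def dead_reckoning_rmse(n_steps, sigma, trials=400, seed=0):
    """RMS position error after integrating noisy 1-D velocity for n_steps.

    Each step adds zero-mean Gaussian noise to the integrated position, so
    the expected RMS error is sigma * sqrt(n_steps).
    """
    rng = random.Random(seed)
    sq_err = 0.0
    for _ in range(trials):
        err = 0.0
        for _ in range(n_steps):
            err += rng.gauss(0.0, sigma)   # integrated velocity noise
        sq_err += err * err
    return math.sqrt(sq_err / trials)
```

Quadrupling the integration time roughly doubles the drift, which is why uncorrected path integration is only reliable over the limited distances and times (~10-100 m, ~1-10 min) that the paper quantifies for its attractor networks.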

  8. Calculation of accurate small angle X-ray scattering curves from coarse-grained protein models

    Directory of Open Access Journals (Sweden)

    Stovgaard Kasper

    2010-08-01

    Full Text Available Abstract Background Genome sequencing projects have expanded the gap between the amount of known protein sequences and structures. The limitations of current high resolution structure determination methods make it unlikely that this gap will disappear in the near future. Small angle X-ray scattering (SAXS) is an established low resolution method for routinely determining the structure of proteins in solution. The purpose of this study is to develop a method for the efficient calculation of accurate SAXS curves from coarse-grained protein models. Such a method can for example be used to construct a likelihood function, which is paramount for structure determination based on statistical inference. Results We present a method for the efficient calculation of accurate SAXS curves based on the Debye formula and a set of scattering form factors for dummy atom representations of amino acids. Such a method avoids the computationally costly iteration over all atoms. We estimated the form factors using generated data from a set of high quality protein structures. No ad hoc scaling or correction factors are applied in the calculation of the curves. Two coarse-grained representations of protein structure were investigated; two scattering bodies per amino acid led to significantly better results than a single scattering body. Conclusion We show that the obtained point estimates allow the calculation of accurate SAXS curves from coarse-grained protein models. The resulting curves are on par with the current state-of-the-art program CRYSOL, which requires full atomic detail. Our method was also comparable to CRYSOL in recognizing native structures among native-like decoys. As a proof-of-concept, we combined the coarse-grained Debye calculation with a previously described probabilistic model of protein structure, TorusDBN. This resulted in a significant improvement in the decoy recognition performance.
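The Debye formula at the heart of this method, I(q) = Σᵢ Σⱼ fᵢ fⱼ sin(q·rᵢⱼ)/(q·rᵢⱼ), can be evaluated directly over coarse-grained scattering bodies. The sketch below shows the plain O(n²) evaluation with constant form factors as a stand-in for the paper's fitted, q-dependent dummy-body form factors.

```python
import math

def debye_intensity(q, coords, form_factors):
    """Scattering intensity at momentum transfer q via the Debye formula.

    `coords` are 3-D positions of the scattering bodies (e.g. two dummy
    bodies per residue in the coarse-grained scheme); `form_factors` their
    f_i. The i == j terms use sin(x)/x -> 1 as x -> 0.
    """
    n = len(coords)
    total = 0.0
    for i in range(n):
        for j in range(n):
            x = q * math.dist(coords[i], coords[j])
            sinc = math.sin(x) / x if x > 1e-12 else 1.0
            total += form_factors[i] * form_factors[j] * sinc
    return total
```

In the forward-scattering limit q -> 0 the intensity approaches (Σ fᵢ)², a standard sanity check; the saving in the paper comes from summing over ~2 bodies per residue instead of every atom.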

  9. Accurate determination of rates from non-uniformly sampled relaxation data

    International Nuclear Information System (INIS)

    Stetz, Matthew A.; Wand, A. Joshua

    2016-01-01

    The application of non-uniform sampling (NUS) to relaxation experiments traditionally used to characterize the fast internal motion of proteins is quantitatively examined. Experimentally acquired Poisson-gap sampled data reconstructed with iterative soft thresholding are compared to regular sequentially sampled (RSS) data. Using ubiquitin as a model system, it is shown that 25 % sampling is sufficient for the determination of quantitatively accurate relaxation rates. When the sampling density is fixed at 25 %, the accuracy of rates is shown to increase sharply with the total number of sampled points until eventually converging near the inherent reproducibility of the experiment. Perhaps contrary to some expectations, it is found that accurate peak height reconstruction is not required for the determination of accurate rates. Instead, inaccuracies in rates arise from inconsistencies in reconstruction across the relaxation series that primarily manifest as a non-linearity in the recovered peak height. This indicates that the performance of an NUS relaxation experiment cannot be predicted from comparison of peak heights using a single RSS reference spectrum. The generality of these findings was assessed using three alternative reconstruction algorithms, eight different relaxation measurements, and three additional proteins that exhibit varying degrees of spectral complexity. From these data, it is revealed that non-linearity in peak height reconstruction across the relaxation series is strongly correlated with errors in NUS-derived relaxation rates. Importantly, it is shown that this correlation can be exploited to reliably predict the performance of an NUS-relaxation experiment by using three or more RSS reference planes from the relaxation series. The RSS reference time points can also serve to provide estimates of the uncertainty of the sampled intensity, which for a typical relaxation times series incurs no penalty in total acquisition time.
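The abstract's key observation, that rates tolerate inaccurate but *consistent* peak heights, follows from how rates are fit: for I(t) = I0·exp(-R·t), a multiplicative reconstruction error common to all planes only shifts I0 and leaves R untouched. A minimal log-linear fit illustrating this (not the authors' analysis code) is:

```python
import math

def fit_rate(times, heights):
    """Least-squares estimate of R from I(t) = I0 * exp(-R * t).

    Fits a straight line to (t, ln I) and returns -slope. Any constant
    multiplicative error in the heights changes only the intercept (ln I0),
    not the recovered rate R.
    """
    logs = [math.log(h) for h in heights]
    n = len(times)
    t_mean = sum(times) / n
    l_mean = sum(logs) / n
    slope = (sum((t - t_mean) * (l - l_mean) for t, l in zip(times, logs))
             / sum((t - t_mean) ** 2 for t in times))
    return -slope
```

Conversely, a *non-linear* (plane-dependent) distortion of the recovered heights bends the log-linear relationship and biases R, which is exactly the NUS failure mode the paper identifies.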

  10. Accurate determination of rates from non-uniformly sampled relaxation data

    Energy Technology Data Exchange (ETDEWEB)

    Stetz, Matthew A.; Wand, A. Joshua, E-mail: wand@upenn.edu [University of Pennsylvania Perelman School of Medicine, Johnson Research Foundation and Department of Biochemistry and Biophysics (United States)

    2016-08-15

    The application of non-uniform sampling (NUS) to relaxation experiments traditionally used to characterize the fast internal motion of proteins is quantitatively examined. Experimentally acquired Poisson-gap sampled data reconstructed with iterative soft thresholding are compared to regular sequentially sampled (RSS) data. Using ubiquitin as a model system, it is shown that 25 % sampling is sufficient for the determination of quantitatively accurate relaxation rates. When the sampling density is fixed at 25 %, the accuracy of rates is shown to increase sharply with the total number of sampled points until eventually converging near the inherent reproducibility of the experiment. Perhaps contrary to some expectations, it is found that accurate peak height reconstruction is not required for the determination of accurate rates. Instead, inaccuracies in rates arise from inconsistencies in reconstruction across the relaxation series that primarily manifest as a non-linearity in the recovered peak height. This indicates that the performance of an NUS relaxation experiment cannot be predicted from comparison of peak heights using a single RSS reference spectrum. The generality of these findings was assessed using three alternative reconstruction algorithms, eight different relaxation measurements, and three additional proteins that exhibit varying degrees of spectral complexity. From these data, it is revealed that non-linearity in peak height reconstruction across the relaxation series is strongly correlated with errors in NUS-derived relaxation rates. Importantly, it is shown that this correlation can be exploited to reliably predict the performance of an NUS-relaxation experiment by using three or more RSS reference planes from the relaxation series. The RSS reference time points can also serve to provide estimates of the uncertainty of the sampled intensity, which for a typical relaxation times series incurs no penalty in total acquisition time.

  11. Accurate determination of rates from non-uniformly sampled relaxation data

    Science.gov (United States)

    Stetz, Matthew A.; Wand, A. Joshua

    2016-01-01

    The application of non-uniform sampling (NUS) to relaxation experiments traditionally used to characterize the fast internal motion of proteins is quantitatively examined. Experimentally acquired Poisson-gap sampled data reconstructed with iterative soft thresholding (IST) are compared to regular sequentially sampled (RSS) data. Using ubiquitin as a model system, it is shown that 25% sampling is sufficient for the determination of quantitatively accurate relaxation rates. When the sampling density is fixed at 25%, the accuracy of rates is shown to increase sharply with the total number of sampled points until eventually converging near the inherent reproducibility of the experiment. Perhaps contrary to some expectations, it is found that accurate peak height reconstruction is not required for the determination of accurate rates. Instead, inaccuracies in rates arise from inconsistencies in reconstruction across the relaxation series that primarily manifest as a non-linearity in the recovered peak height. This indicates that the performance of an NUS relaxation experiment cannot be predicted from comparison of peak heights using a single RSS reference spectrum. The generality of these findings was assessed using three alternative reconstruction algorithms, eight different relaxation measurements, and three additional proteins that exhibit varying degrees of spectral complexity. From these data, it is revealed that non-linearity in peak height reconstruction across the relaxation series is strongly correlated with errors in NUS-derived relaxation rates. Importantly, it is shown that this correlation can be exploited to reliably predict the performance of an NUS-relaxation experiment by using three or more RSS reference planes from the relaxation series. The RSS reference time points can also serve to provide estimates of the uncertainty of the sampled intensity, which for a typical relaxation times series incurs no penalty in total acquisition time. PMID:27393626

  12. Shear-wave elastography contributes to accurate tumour size estimation when assessing small breast cancers.

    Science.gov (United States)

    Mullen, R; Thompson, J M; Moussa, O; Vinnicombe, S; Evans, A

    2014-12-01

To assess whether the size of peritumoural stiffness (PTS) on shear-wave elastography (SWE) for small primary breast cancers (≤15 mm) was associated with size discrepancies between grey-scale ultrasound (GSUS) and final histological size, and whether the addition of PTS size to GSUS size might result in more accurate tumour size estimation when compared to final histological size. A retrospective analysis of 86 consecutive patients between August 2011 and February 2013 who underwent breast-conserving surgery for tumours of size ≤15 mm at ultrasound was carried out. The size of PTS stiffness was compared to mean GSUS size, mean histological size, and the extent of size discrepancy between GSUS and histology. PTS size and GSUS size were combined and compared to the final histological size. PTS of >3 mm was associated with a larger mean final histological size (16 versus 11.3 mm). A PTS size of >3 mm was also associated with a higher frequency of underestimation of final histological size by GSUS of >5 mm (63% versus 18%). Taking the size of PTS into account led to accurate estimation of the final histological size (p = 0.03). The size of PTS was not associated with margin involvement (p = 0.27). PTS extending beyond 3 mm from the grey-scale abnormality is significantly associated with underestimation of tumour size of >5 mm for small invasive breast cancers. Taking into account the size of PTS also led to accurate estimation of the final histological size. Further studies are required to assess the relationship between the extent of SWE stiffness and margin status. Copyright © 2014 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  13. Technical Note: Accurate, Explicit Pipe Sizing Formula For Turbulent ...

    African Journals Online (AJOL)

    This paper develops an explicit formula for computing the diameter of pipes, which is applicable to all turbulent flows. The formula not only avoids iteration but still estimates pipe diameters over the entire range of turbulent flows with an error of less than 4% in the worst cases. This is superior to (without requiring a higher ...

  14. Survey of a community-based infusion program for Australian patients with rheumatoid arthritis requiring treatment with tocilizumab: patient characteristics and drivers of patient satisfaction and patient-perceived benefits and concerns

    Directory of Open Access Journals (Sweden)

    Voight L

    2012-04-01

Full Text Available Louisa Voight, Coast Joint Care, Maroochydore, Queensland, Australia. Background: Tocilizumab is an effective therapy for patients with moderate to severe rheumatoid arthritis that is administered by infusion over one hour every 4 weeks. The community-based infusion (ACTiv) program was introduced to Australia in August 2010 to provide accessible and convenient treatment for patients with rheumatoid arthritis who require tocilizumab. The primary objectives of this study were to determine the characteristics of patients in the ACTiv program, patient satisfaction, patient-perceived benefits and concerns with the ACTiv program, and the drivers of patient satisfaction and of patient-perceived benefits and concerns. Methods: A voluntary self-administered survey was given to all 608 patients in the ACTiv program between January 27, 2011 and March 31, 2011. Results: A total of 351 surveys were returned completed, giving a response rate of 58% (351/608). Most patients in the ACTiv program were women aged 40-64 years, with a mean disease duration of 13.7 years and moderate disability, who had been in the ACTiv program for ≥5 months. Most patients (88%, 302/342) were either very satisfied or satisfied with the ACTiv program and believed that they were very unlikely or somewhat unlikely to switch from the ACTiv program (64%, 214/335). The most important benefit was the reassurance of receiving treatment from a trained nurse in a professional medical environment (33%, 102/309). The most important concern was the fear of side effects (48%, 134/280). The main drivers of patient satisfaction and of patient-perceived benefits and concerns were health profile, previous medication experience, and length of treatment time in the program. Conclusion: The ACTiv program is used by patients of various ages, family life situations, and locations. Patient satisfaction with the program is high, which enables patients to benefit from long-term use of tocilizumab

  15. Patients with Treatment-Requiring Chronic Graft versus Host Disease after Allogeneic Stem Cell Transplantation Have Altered Metabolic Profiles due to the Disease and Immunosuppressive Therapy: Potential Implication for Biomarkers

    Directory of Open Access Journals (Sweden)

    Håkon Reikvam

    2018-01-01

Full Text Available Chronic graft versus host disease (cGVHD) is a common long-term complication after allogeneic hematopoietic stem cell transplantation. The objective of our study was to compare the metabolic profiles of allotransplant recipients and thereby identify metabolic characteristics of patients with treatment-requiring cGVHD. The study included 51 consecutive patients (29 men and 22 women; median age: 44 years, range: 15-66 years) transplanted with peripheral blood stem cells derived from human leukocyte antigen-matched family donors. All serum samples investigated by global metabolomic profiling were collected approximately 1 year posttransplant (median 358 days). Thirty-one of the 51 patients (61%) had cGVHD 1 year posttransplant. The affected organs were (number of patients): liver/bile duct (23), eyes (15), gastrointestinal tract (14), skin (13), mouth (10), lungs (3), and urogenital tract (1). We compared the metabolic profiles of patients with and without cGVHD, and a Random Forest classification analysis then resulted in 75% accuracy in differentiating the two groups. The 30 top-ranked metabolites from this comparison included increased levels of bile acids, several metabolites from the cytokine-responsive kynurenine pathway of tryptophan degradation, pro-inflammatory lipid metabolites, phenylalanine and tyrosine metabolites derived from the gut microbial flora, and metabolites reflecting increased oxidative stress. However, nine of these 30 top-ranked metabolites were probably altered by cyclosporine or steroid treatment, and we therefore did a hierarchical clustering analysis including all 51 patients but based only on the other 21 cGVHD-specific metabolites. This analysis identified three patient subsets: one cluster included mainly patients without cGVHD and had generally low metabolite levels; another cluster included mainly patients with cGVHD (most patients with at least three affected organs) and high metabolite levels; and the last

  16. Accurate measurement of RF exposure from emerging wireless communication systems

    International Nuclear Information System (INIS)

    Letertre, Thierry; Toffano, Zeno; Monebhurrun, Vikass

    2013-01-01

Isotropic broadband probes or spectrum analyzers (SAs) may be used for the measurement of rapidly varying electromagnetic fields generated by emerging wireless communication systems. In this paper, this problem is investigated by comparing the responses measured by two different isotropic broadband probes typically used to perform electric field (E-field) evaluations. The broadband probes are subjected to signals with variable duty cycles (DC) and crest factors (CF), either with or without Orthogonal Frequency Division Multiplexing (OFDM) modulation but with the same root-mean-square (RMS) power. The two probes do not provide sufficiently accurate results for deterministic signals such as Worldwide Interoperability for Microwave Access (WiMAX) or Long Term Evolution (LTE), or for non-deterministic signals such as Wireless Fidelity (WiFi). The legacy measurement protocols should be adapted to cope with the emerging wireless communication technologies based on the OFDM modulation scheme. This is not easily achieved except when the statistics of the RF emission are well known. In that case the measurement errors are shown to be systematic, and a correction factor or calibration can be applied to obtain a good approximation of the total RMS power.
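The duty-cycle dependence at the heart of the problem can be shown with a toy computation (a generic sketch; the on/off signal model and all numbers are invented, not taken from the paper): a detector that latches the peak envelope overestimates the true time-averaged (RMS) power of a bursty signal by exactly the inverse duty cycle, which is why a correction factor only works when the signal statistics are known.

```python
import numpy as np

def burst_signal(n, duty, peak_amplitude):
    """Constant-envelope burst: 'on' for duty*n samples, 'off' otherwise."""
    s = np.zeros(n)
    s[: int(duty * n)] = peak_amplitude
    return s

duty = 0.25
s = burst_signal(1000, duty, peak_amplitude=2.0)
p_avg = np.mean(s ** 2)      # true time-averaged (RMS) power
p_peak = np.max(s ** 2)      # what a peak-latching detector would report
# for a known on/off statistic, the correction factor is the duty cycle:
# p_avg == duty * p_peak
```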

  17. Study of accurate volume measurement system for plutonium nitrate solution

    Energy Technology Data Exchange (ETDEWEB)

    Hosoma, T. [Power Reactor and Nuclear Fuel Development Corp., Tokai, Ibaraki (Japan). Tokai Works

    1998-12-01

It is important for effective safeguarding of nuclear materials to establish a technique for accurate volume measurement of plutonium nitrate solution in an accountancy tank. The volume of the solution can be estimated from two differential pressures between three dip-tubes, through which air is purged by a compressor. One of the differential pressures corresponds to the density of the solution, and the other corresponds to the surface level of the solution in the tank. The measurement of the differential pressure contains many sources of uncertainty, such as the precision of the pressure transducer, fluctuation of the back-pressure, generation of bubbles at the tips of the dip-tubes, non-uniformity of temperature and density of the solution, pressure drop in the dip-tube, and so on. The various excess pressures affecting the volume measurement are discussed and corrected by a reasonable method. A high-precision differential pressure measurement system is developed with a quartz oscillation type transducer which converts a differential pressure to a digital signal. The developed system is used for inspection by the government and the IAEA. (M. Suetake)
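The two differential pressures map to density and level through simple hydrostatics. A minimal sketch of that mapping follows (an idealized illustration only: uniform cylindrical tank, no bubble, temperature, or pressure-drop corrections, and all numerical values invented):

```python
# Hydrostatics of the dip-tube (bubbler) method.
# dp_density: differential pressure between two tubes separated vertically by h_sep
# dp_level:   differential pressure between the lowest tube and the headspace
G = 9.80665  # standard gravity, m/s^2

def solution_density(dp_density, h_sep):
    return dp_density / (G * h_sep)       # rho = dP / (g * h)

def liquid_level(dp_level, rho):
    return dp_level / (rho * G)           # height of liquid above the lowest tube

def volume_cylindrical(level, tank_area):
    # assumes a calibrated tank of uniform cross-section (a simplification)
    return tank_area * level

rho = solution_density(dp_density=2500.0, h_sep=0.20)   # Pa, m  -> kg/m^3
lvl = liquid_level(dp_level=12000.0, rho=rho)           # -> 0.96 m
vol = volume_cylindrical(lvl, tank_area=0.50)           # -> 0.48 m^3
```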

  18. Accurate Long-Term Analytic Representation of Mercury Motion

    Science.gov (United States)

    Kudryavtsev, Sergey M.

A new analytical representation of Mercury's motion, accurate over at least six thousand years, is developed. For this purpose the author's original technique for spectral analysis of a function tabulated over a long period of time is employed. Using the latest JPL numerical ephemerides DE-406, the orbital elements of Mercury were first calculated for every day within 3000 BC-3000 AD. The orbital elements are then expanded into Poisson series, in which both the amplitudes and arguments of the series' terms are high-degree time polynomials (as opposed to classical Fourier analysis, where the terms' amplitudes are constants and the arguments are always linear functions of time). Over long-term intervals this approach leads to an essential reduction of the series' length and an improvement in their accuracy. As a result, the maximum error in calculating the mean orbital longitude of Mercury over six thousand years with the new expansion is less than 0.1 arcsec, and the relevant number of Poisson terms is only 142. For comparison, the VSOP87 planetary theory provides an accuracy of 1 arcsec in calculating the same parameter and uses 810 trigonometric terms and 451 mixed ones to do so. The work is supported in part by grant 02-02-16887 from the Russian Foundation for Basic Research
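The distinction drawn above, polynomial amplitudes and polynomial arguments rather than constant amplitudes and linear arguments, can be made concrete with a small evaluator (a generic sketch; the coefficients below are invented, not taken from the Mercury expansion):

```python
import numpy as np

def poisson_series(t, terms):
    """Evaluate sum_k A_k(t) * cos(phi_k(t)), where each amplitude A_k and
    argument phi_k is a polynomial in t (coefficients in ascending order)."""
    total = 0.0
    for amp_coeffs, arg_coeffs in terms:
        amp = np.polynomial.polynomial.polyval(t, amp_coeffs)
        arg = np.polynomial.polynomial.polyval(t, arg_coeffs)
        total += amp * np.cos(arg)
    return total

# a classical Fourier term is the special case of a constant amplitude
# and a linear argument; the second term is a genuine Poisson term
terms = [((1.5,), (0.0, 2.0)),              # 1.5 * cos(2t)
         ((0.1, 0.01), (0.3, 1.0, 1e-4))]   # (0.1 + 0.01t) * cos(0.3 + t + 1e-4 t^2)
```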

  19. Machine learning of accurate energy-conserving molecular force fields

    Science.gov (United States)

    Chmiela, Stefan; Tkatchenko, Alexandre; Sauceda, Huziel E.; Poltavsky, Igor; Schütt, Kristof T.; Müller, Klaus-Robert

    2017-01-01

Using conservation of energy—a fundamental property of closed classical and quantum mechanical systems—we develop an efficient gradient-domain machine learning (GDML) approach to construct accurate molecular force fields using a restricted number of samples from ab initio molecular dynamics (AIMD) trajectories. The GDML implementation is able to reproduce global potential energy surfaces of intermediate-sized molecules with an accuracy of 0.3 kcal mol^-1 for energies and 1 kcal mol^-1 Å^-1 for atomic forces using only 1000 conformational geometries for training. We demonstrate this accuracy for AIMD trajectories of molecules, including benzene, toluene, naphthalene, ethanol, uracil, and aspirin. The challenge of constructing conservative force fields is accomplished in our work by learning in a Hilbert space of vector-valued functions that obey the law of energy conservation. The GDML approach enables quantitative molecular dynamics simulations for molecules at a fraction of cost of explicit AIMD calculations, thereby allowing the construction of efficient force fields with the accuracy and transferability of high-level ab initio methods. PMID:28508076
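The key design idea, that forces taken as the exact analytic gradient of a learned energy conserve energy by construction, can be sketched with a one-dimensional kernel-regression toy (emphatically not the GDML method itself, which learns in the gradient domain; the potential, kernel, length scale, and regularization below are all invented for illustration):

```python
import numpy as np

x_train = np.linspace(-2.0, 2.0, 15)
e_train = x_train ** 4 - 2.0 * x_train ** 2      # toy double-well "potential"

ell = 0.4                                        # RBF length scale (assumed)
def kmat(a, b):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * ell ** 2))

# ridge-regularized kernel fit of the energies
K = kmat(x_train, x_train) + 1e-6 * np.eye(x_train.size)
alpha = np.linalg.solve(K, e_train)

def energy(x):
    return float(kmat(np.atleast_1d(x), x_train) @ alpha)

def force(x):
    # F = -dE/dx, differentiated analytically, so the field is conservative
    xa = np.atleast_1d(x)
    dk = kmat(xa, x_train) * (-(xa[:, None] - x_train[None, :]) / ell ** 2)
    return float(-(dk @ alpha))
```

Because the force is the exact derivative of the model energy, a numerical gradient of `energy` reproduces `force` to finite-difference accuracy.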

  20. Accurate Complex Systems Design: Integrating Serious Games with Petri Nets

    Directory of Open Access Journals (Sweden)

    Kirsten Sinclair

    2016-03-01

Full Text Available Difficulty understanding the large number of interactions involved in complex systems makes their successful engineering a problem. Petri Nets are one graphical modelling technique used to describe and check proposed designs of complex systems thoroughly. While the automatic analysis capabilities of Petri Nets are useful, their visual form is less so, particularly for communicating the design they represent. In engineering projects, this can lead to a gap in communications between people with different areas of expertise, negatively impacting the achievement of accurate designs. In contrast, although capable of representing a variety of real and imaginary objects effectively, the behaviour of serious games can only be analysed manually through interactive simulation. This paper examines combining the complementary strengths of Petri Nets and serious games. The novel contribution of this work is a serious game prototype of a complex system design that has been checked thoroughly. Underpinned by Petri Net analysis, the serious game can be used as a high-level interface to communicate and refine the design. Improvement of a complex system design is demonstrated by applying the integration to a proof-of-concept case study.

  1. EFFICIENT AND ACCURATE INDOOR LOCALIZATION USING LANDMARK GRAPHS

    Directory of Open Access Journals (Sweden)

    F. Gu

    2016-06-01

Full Text Available Indoor localization is important for a variety of applications such as location-based services, mobile social networks, and emergency response. Fusing spatial information is an effective way to achieve accurate indoor localization with little or no need for extra hardware. However, existing indoor localization methods that make use of spatial information are either too computationally expensive or too sensitive to the completeness of landmark detection. In this paper, we solve this problem by using the proposed landmark graph. The landmark graph is a directed graph where nodes are landmarks (e.g., doors, staircases, and turns) and edges are accessible paths with heading information. We compared the proposed method with two common Dead Reckoning (DR)-based methods (namely, Compass + Accelerometer + Landmarks and Gyroscope + Accelerometer + Landmarks) in a series of experiments. Experimental results show that the proposed method can achieve 73% accuracy with a positioning error less than 2.5 meters, which outperforms the other two DR-based methods.

  2. An accurate projection algorithm for array processor based SPECT systems

    International Nuclear Information System (INIS)

    King, M.A.; Schwinger, R.B.; Cool, S.L.

    1985-01-01

    A data re-projection algorithm has been developed for use in single photon emission computed tomography (SPECT) on an array processor based computer system. The algorithm makes use of an accurate representation of pixel activity (uniform square pixel model of intensity distribution), and is rapidly performed due to the efficient handling of an array based algorithm and the Fast Fourier Transform (FFT) on parallel processing hardware. The algorithm consists of using a pixel driven nearest neighbour projection operation to an array of subdivided projection bins. This result is then convolved with the projected uniform square pixel distribution before being compressed to original bin size. This distribution varies with projection angle and is explicitly calculated. The FFT combined with a frequency space multiplication is used instead of a spatial convolution for more rapid execution. The new algorithm was tested against other commonly used projection algorithms by comparing the accuracy of projections of a simulated transverse section of the abdomen against analytically determined projections of that transverse section. The new algorithm was found to yield comparable or better standard error and yet result in easier and more efficient implementation on parallel hardware. Applications of the algorithm include iterative reconstruction and attenuation correction schemes and evaluation of regions of interest in dynamic and gated SPECT
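The substitution the algorithm makes, frequency-space multiplication in place of a spatial convolution, rests on the circular convolution theorem. A minimal numpy sketch of the equivalence (generic, not the array-processor implementation):

```python
import numpy as np

def circular_convolve_fft(signal, kernel):
    """Convolve via FFT: multiply spectra, then transform back."""
    return np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel)))

def circular_convolve_direct(signal, kernel):
    """Reference spatial (circular) convolution, O(n^2)."""
    n = len(signal)
    return np.array([sum(signal[(i - j) % n] * kernel[j] for j in range(n))
                     for i in range(n)])
```

For projection bins of length n, the FFT route costs O(n log n) per angle instead of O(n^2), which is why it suits the parallel hardware described above.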

  3. Accurate calculation of field and carrier distributions in doped semiconductors

    Directory of Open Access Journals (Sweden)

    Wenji Yang

    2012-06-01

Full Text Available We use the numerical squeezing algorithm (NSA) combined with the shooting method to accurately calculate the built-in fields and carrier distributions in doped silicon films (SFs) in the micron and sub-micron thickness range, and results are presented in graphical form for a variety of doping profiles under different boundary conditions. As a complementary approach, we also present the methods and results of the inverse problem (IVP): finding the doping profile in the SFs for a given field distribution. The solution of the IVP provides an approach to arbitrarily design the field distribution in SFs, which is very important for low-dimensional (LD) systems and device design. Furthermore, the solution of the IVP is both direct and straightforward for all one-, two-, and three-dimensional semiconductor systems. With current efforts focused on LD physics, knowledge of the details of the field and carrier distributions in LD systems will facilitate research on other aspects, and hence the current work provides a platform for such research.
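The shooting method named above converts a two-point boundary-value problem into repeated initial-value integrations. A minimal sketch (not the paper's NSA code; the test equation y'' = 6x with y(0) = 0, y(1) = 1, whose exact solution is y = x^3, is invented for the example):

```python
def integrate(slope0, f, y0=0.0, n=2000):
    """March y'' = f(x) across [0, 1] with a midpoint (RK2) scheme,
    starting from y(0) = y0 and y'(0) = slope0; return y(1)."""
    h = 1.0 / n
    y, v = y0, slope0
    for i in range(n):
        x = i * h
        vm = v + 0.5 * h * f(x)        # slope at the half step
        y += h * vm
        v += h * f(x + 0.5 * h)
    return y

def shoot(f, target, lo=-10.0, hi=10.0, iters=60):
    """Bisect on the unknown initial slope until y(1) matches the target
    (valid here because y(1) is monotone in the initial slope)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if integrate(mid, f) > target:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For y = x^3 the required initial slope is y'(0) = 0, which the bisection recovers to the accuracy of the integrator.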

  4. Accurate calculation of high harmonics generated by relativistic Thomson scattering

    International Nuclear Information System (INIS)

    Popa, Alexandru

    2008-01-01

The recent emergence of the field of ultraintense laser pulses, corresponding to beam intensities higher than 10^18 W cm^-2, brings about the problem of high harmonic generation (HHG) by the relativistic Thomson scattering of electromagnetic radiation by free electrons. Starting from the equations of the relativistic motion of the electron in the electromagnetic field, we give an exact solution of this problem. Taking into account the Lienard-Wiechert equations, we obtain a periodic scattered electromagnetic field. Without loss of generality, the solution is strongly simplified by observing that the electromagnetic field is always normal to the electron-detector direction. The Fourier series expansion of this field leads to accurate expressions for the high harmonics generated by the Thomson scattering. Our calculations lead to a discrete HHG spectrum, whose shape and angular distribution are in agreement with the experimental data from the literature. Since no approximations were made, our approach is also valid in the ultrarelativistic regime, corresponding to intensities higher than 10^23 W cm^-2, where it predicts a strong increase in the HHG intensities and in the order of the harmonics. In this domain, nonlinear Thomson scattering could be an efficient source of hard X-rays

  5. Highly Accurate Calculations of the Phase Diagram of Cold Lithium

    Science.gov (United States)

    Shulenburger, Luke; Baczewski, Andrew

    The phase diagram of lithium is particularly complicated, exhibiting many different solid phases under the modest application of pressure. Experimental efforts to identify these phases using diamond anvil cells have been complemented by ab initio theory, primarily using density functional theory (DFT). Due to the multiplicity of crystal structures whose enthalpy is nearly degenerate and the uncertainty introduced by density functional approximations, we apply the highly accurate many-body diffusion Monte Carlo (DMC) method to the study of the solid phases at low temperature. These calculations span many different phases, including several with low symmetry, demonstrating the viability of DMC as a method for calculating phase diagrams for complex solids. Our results can be used as a benchmark to test the accuracy of various density functionals. This can strengthen confidence in DFT based predictions of more complex phenomena such as the anomalous melting behavior predicted for lithium at high pressures. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  6. Accurate heat capacity data at phase transitions from relaxation calorimetry

    Science.gov (United States)

    Suzuki, Hal; Inaba, Akira; Meingast, Christoph

    2010-10-01

Extracting accurate heat capacities by conventional relaxation calorimetry at first-order or very sharp second-order phase transitions is extremely difficult. The so-called "scanning method" provides a key to overcoming this challenge. Here, we introduce new corrections in the data analysis of this method. Critical examinations of the improvements are made experimentally by investigating the well-studied first-order ferroelectric phase transitions of KH2PO4 and BaTiO3 using a commercial relaxation calorimeter, the Physical Property Measurement System (PPMS) supplied by Quantum Design. The results for KH2PO4 are shown to be excellent; a very sharp peak in heat capacity is obtained and the absolute values agree well with previous results obtained by adiabatic calorimetry on much larger samples. The critical behavior of the heat capacity in the vicinity of the transition temperature, as well as thermodynamic quantities such as the transition enthalpy and entropy, also agrees very well with previous results. For BaTiO3, clear hysteretic behavior of the transition is observed between heating and cooling curves.
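The basic relation behind relaxation calorimetry can be sketched as follows (an idealized illustration, not the paper's corrected scanning-method analysis; the thermal-link conductance, bath temperature, and heat capacity values are invented): with the heater off, the sample obeys C·dT/dt = -K·(T - T_bath), so the heat capacity follows from the measured decay curve.

```python
import numpy as np

K_link = 2.0e-7      # thermal link conductance to the bath, W/K (assumed)
T_bath = 300.0       # bath temperature, K (assumed)
C_true = 1.0e-6      # heat capacity, J/K (assumed constant here)

# free-relaxation curve: exponential decay with time constant C/K
t = np.linspace(0.0, 20.0, 4001)
T = T_bath + 1.0 * np.exp(-K_link * t / C_true)

# invert the relaxation relation point by point
dTdt = np.gradient(T, t)
C_est = -K_link * (T - T_bath) / dTdt
```

In a real sharp first-order transition C varies strongly with T along the decay, which is exactly why the scanning method and the corrections discussed above are needed.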

  7. Accurate determination of segmented X-ray detector geometry

    Science.gov (United States)

    Yefanov, Oleksandr; Mariani, Valerio; Gati, Cornelius; White, Thomas A.; Chapman, Henry N.; Barty, Anton

    2015-01-01

    Recent advances in X-ray detector technology have resulted in the introduction of segmented detectors composed of many small detector modules tiled together to cover a large detection area. Due to mechanical tolerances and the desire to be able to change the module layout to suit the needs of different experiments, the pixels on each module might not align perfectly on a regular grid. Several detectors are designed to permit detector sub-regions (or modules) to be moved relative to each other for different experiments. Accurate determination of the location of detector elements relative to the beam-sample interaction point is critical for many types of experiment, including X-ray crystallography, coherent diffractive imaging (CDI), small angle X-ray scattering (SAXS) and spectroscopy. For detectors with moveable modules, the relative positions of pixels are no longer fixed, necessitating the development of a simple procedure to calibrate detector geometry after reconfiguration. We describe a simple and robust method for determining the geometry of segmented X-ray detectors using measurements obtained by serial crystallography. By comparing the location of observed Bragg peaks to the spot locations predicted from the crystal indexing procedure, the position, rotation and distance of each module relative to the interaction region can be refined. We show that the refined detector geometry greatly improves the results of experiments. PMID:26561117

  8. Iterative feature refinement for accurate undersampled MR image reconstruction

    Science.gov (United States)

    Wang, Shanshan; Liu, Jianbo; Liu, Qiegen; Ying, Leslie; Liu, Xin; Zheng, Hairong; Liang, Dong

    2016-05-01

    Accelerating MR scan is of great significance for clinical, research and advanced applications, and one main effort to achieve this is the utilization of compressed sensing (CS) theory. Nevertheless, the existing CSMRI approaches still have limitations such as fine structure loss or high computational complexity. This paper proposes a novel iterative feature refinement (IFR) module for accurate MR image reconstruction from undersampled K-space data. Integrating IFR with CSMRI which is equipped with fixed transforms, we develop an IFR-CS method to restore meaningful structures and details that are originally discarded without introducing too much additional complexity. Specifically, the proposed IFR-CS is realized with three iterative steps, namely sparsity-promoting denoising, feature refinement and Tikhonov regularization. Experimental results on both simulated and in vivo MR datasets have shown that the proposed module has a strong capability to capture image details, and that IFR-CS is comparable and even superior to other state-of-the-art reconstruction approaches.

  9. Interferometry imaging technique for accurate deep-space probe positioning

    Science.gov (United States)

    Zheng, Weimin; Tong, Fengxian; Zhang, Juan; Liu, Lei; Shu, Fengchun

    2017-12-01

Very long baseline interferometry (VLBI) is a radio astronomy tool with very high spatial resolution. It uses two or more radio telescopes to track a faraway object and obtain its visibility. The intensity distribution image of a radio source can be obtained by the inverse Fourier transformation of the visibilities sampled on the UV plane perpendicular to the line of sight. The Chinese VLBI Network (CVN) consists of 5 radio telescopes, and its highest spatial resolution is equivalent to that of a single-dish antenna of ∼3000 km diameter. This paper introduces the interferometry imaging principle and the imaging results of the Chang'E lunar and Mars Express probes. The measured Chang'E-3 (CE-3) rover relative position accuracy is about 1 m with this method. The 1 m accuracy is verified by comparisons with the rover null position and the onboard stereo vision measurement results. The successful imaging of spacecraft indicates that the interferometry imaging technology can be used for accurate spacecraft positioning in the future.

  10. Concurrent and Accurate Short Read Mapping on Multicore Processors.

    Science.gov (United States)

    Martínez, Héctor; Tárraga, Joaquín; Medina, Ignacio; Barrachina, Sergio; Castillo, Maribel; Dopazo, Joaquín; Quintana-Ortí, Enrique S

    2015-01-01

We introduce a parallel aligner with a work-flow organization for fast and accurate mapping of RNA sequences on servers equipped with multicore processors. Our software, HPG Aligner SA (an open-source application, available at http://www.opencb.org), exploits a suffix array to rapidly map a large fraction of the RNA fragments (reads), and leverages the accuracy of the Smith-Waterman algorithm to deal with conflictive reads. The aligner is enhanced with a careful strategy to detect splice junctions based on an adaptive division of RNA reads into small segments (or seeds), which are then mapped onto a number of candidate alignment locations, providing crucial information for the successful alignment of the complete reads. The experimental results on a platform with Intel multicore technology report the parallel performance of HPG Aligner SA on RNA reads of 100-400 nucleotides, which excels in execution time and sensitivity compared with state-of-the-art aligners such as TopHat 2+Bowtie 2, MapSplice, and STAR.
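The Smith-Waterman recurrence that the aligner falls back on for conflictive reads can be sketched in a few lines (scoring only, without traceback; the scoring parameters are illustrative, not HPG Aligner SA's defaults):

```python
def smith_waterman(a, b, match=1, mismatch=-1, gap=-1):
    """Best local alignment score via the Smith-Waterman recurrence:
    H[i][j] = max(0, diagonal + match/mismatch, up + gap, left + gap)."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best
```

The clamp at zero is what makes the alignment local: a region of poor matching resets the score rather than dragging down a good local match elsewhere.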

  11. Iterative feature refinement for accurate undersampled MR image reconstruction

    International Nuclear Information System (INIS)

    Wang, Shanshan; Liu, Jianbo; Liu, Xin; Zheng, Hairong; Liang, Dong; Liu, Qiegen; Ying, Leslie

    2016-01-01

    Accelerating MR scan is of great significance for clinical, research and advanced applications, and one main effort to achieve this is the utilization of compressed sensing (CS) theory. Nevertheless, the existing CSMRI approaches still have limitations such as fine structure loss or high computational complexity. This paper proposes a novel iterative feature refinement (IFR) module for accurate MR image reconstruction from undersampled K-space data. Integrating IFR with CSMRI which is equipped with fixed transforms, we develop an IFR-CS method to restore meaningful structures and details that are originally discarded without introducing too much additional complexity. Specifically, the proposed IFR-CS is realized with three iterative steps, namely sparsity-promoting denoising, feature refinement and Tikhonov regularization. Experimental results on both simulated and in vivo MR datasets have shown that the proposed module has a strong capability to capture image details, and that IFR-CS is comparable and even superior to other state-of-the-art reconstruction approaches. (paper)

  12. Wavelength-Dispersive X-Ray Fluorescence Accuracy

    Directory of Open Access Journals (Sweden)

    H Widyatmoko

    2010-10-01

Full Text Available X-ray fluorescence spectrometry is a method that is increasingly applied in geochemical analysis. It is classified under two categories: WDXRF (wavelength-dispersive X-ray fluorescence spectrometry) and EDXRF (energy-dispersive X-ray fluorescence spectrometry). WDXRF can be configured as a sequential spectrometer, a simultaneous spectrometer, or a hybrid instrument, which combines the advantages of the simultaneous and sequential spectrometers in one instrument. Each instrument differs in some characteristics, and each has applications for which it is specifically suited. In this investigation the sequential spectrometer PW 1450 was used to analyze the major, minor, and trace elements in the samples. The standards used in calibrating the PW 1450 for the analysis of all samples are materials of known composition (30 international standards and 66 standards from the Institut für Mineralogie der Uni. Köln, Germany). Interelement and matrix effects are treated by matrix matching of samples and standards, dilution, preconcentration of the element of interest, and mathematical corrections during data analysis. The examination of two samples and the statistical description using the standard deviation and coefficient of variation show that X-ray fluorescence analysis is accurate enough for many elements, especially the major elements, but for Mg, Ca, K, Na, P, S, Co, Rb, Zn, Ni, Ba, and Pb it is less sensitive in comparison with atomic absorption spectrometry (AAS), flame emission spectrometry (FES), inductively coupled plasma (ICP), and photometry. It is possible to evaluate the errors by using the coefficient of variation and standard deviation.

  13. Fast and accurate quantum Monte Carlo for molecular crystals.

    Science.gov (United States)

    Zen, Andrea; Brandenburg, Jan Gerit; Klimeš, Jiří; Tkatchenko, Alexandre; Alfè, Dario; Michaelides, Angelos

    2018-02-20

    Computer simulation plays a central role in modern-day materials science. The utility of a given computational approach depends largely on the balance it provides between accuracy and computational cost. Molecular crystals are a class of materials of great technological importance which are challenging for even the most sophisticated ab initio electronic structure theories to accurately describe. This is partly because they are held together by a balance of weak intermolecular forces but also because the primitive cells of molecular crystals are often substantially larger than those of atomic solids. Here, we demonstrate that diffusion quantum Monte Carlo (DMC) delivers subchemical accuracy for a diverse set of molecular crystals at a surprisingly moderate computational cost. As such, we anticipate that DMC can play an important role in understanding and predicting the properties of a large number of molecular crystals, including those built from relatively large molecules which are far beyond reach of other high-accuracy methods. Copyright © 2018 the Author(s). Published by PNAS.

  14. Fast and accurate reconstruction of HARDI data using compressed sensing

    Science.gov (United States)

    Michailovich, Oleg; Rathi, Yogesh

    2011-01-01

    A spectrum of brain-related disorders are nowadays known to manifest themselves in degradation of the integrity and connectivity of neural tracts in the white matter of the brain. Such damage tends to affect the pattern of water diffusion in the white matter, information that can be quantified by means of diffusion MRI (dMRI). Unfortunately, practical implementation of dMRI still poses a number of challenges which hamper its widespread integration into regular clinical practice. Chief among these is the problem of long scanning times. In particular, in the case of High Angular Resolution Diffusion Imaging (HARDI), the scanning times are known to increase linearly with the number of diffusion-encoding gradients. In this research, we use the theory of compressive sampling (also known as compressed sensing) to substantially reduce the number of diffusion gradients without compromising the informational content of HARDI signals. The experimental part of our study compares the proposed method with a number of alternative approaches, and shows that the former results in more accurate estimation of HARDI data in terms of the mean squared error. PMID:20879281
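
    Compressed sensing recovers a signal from fewer measurements than unknowns by exploiting sparsity. The sketch below is a generic iterative shrinkage-thresholding (ISTA) solver on synthetic data, not the authors' HARDI reconstruction; the sensing matrix, sparsity level, and regularization weight are illustrative assumptions:

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=2000):
    """Iterative shrinkage-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - y))      # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, m, k = 100, 40, 4                            # unknowns, measurements, nonzeros
A = rng.standard_normal((m, n)) / np.sqrt(m)    # random sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                                   # noiseless undersampled data
x_hat = ista(A, y)
mse = float(np.mean((x_hat - x_true) ** 2))
```

    With 40 measurements for 100 unknowns, the 4-sparse signal is still recovered with a small mean squared error, which is the same figure of merit the study uses to compare reconstructions.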

  15. Acetaminophen glucuronidation accurately reflects gluconeogenesis in fasted dogs.

    Science.gov (United States)

    Schwenk, W F; Kahl, J C

    1996-09-01

    To assess whether acetaminophen glucuronide accurately reflects uridyl diphosphate-glucose (UDP-glucose) derived from gluconeogenesis during fasting, three mongrel dogs received infusions of [U-¹⁴C]lactate, [1-¹³C]galactose, and [6-³H]glucose (after fasting overnight or for 2.5 days). After initiation of the isotopes (3 h), acetaminophen was given, and the urinary acetaminophen glucuronide was isolated. The mean plasma [¹⁴C]glucose specific activity (SA) was similar to the mean urinary acetaminophen glucuronide SA both after fasting overnight [299 ± 19 vs. 296 ± 14 disintegrations·min⁻¹ (dpm)·µmol⁻¹, respectively] and after 2.5 days of fasting (511 ± 8 vs. 562 ± 32 dpm/µmol, respectively). Mean plasma glucose flux calculated using [6-³H]glucose decreased (P dogs, plasma glucose and UDP-glucose, as sampled by acetaminophen, equally reflect gluconeogenesis and appear to come from the same pool of glucose 6-phosphate. In addition, cycling of glucose moieties through UDP-glucose and glycogen decreases with an increased period of fasting.

  16. Machine learning of accurate energy-conserving molecular force fields.

    Science.gov (United States)

    Chmiela, Stefan; Tkatchenko, Alexandre; Sauceda, Huziel E; Poltavsky, Igor; Schütt, Kristof T; Müller, Klaus-Robert

    2017-05-01

    Using conservation of energy, a fundamental property of closed classical and quantum mechanical systems, we develop an efficient gradient-domain machine learning (GDML) approach to construct accurate molecular force fields using a restricted number of samples from ab initio molecular dynamics (AIMD) trajectories. The GDML implementation is able to reproduce global potential energy surfaces of intermediate-sized molecules with an accuracy of 0.3 kcal mol⁻¹ for energies and 1 kcal mol⁻¹ Å⁻¹ for atomic forces using only 1000 conformational geometries for training. We demonstrate this accuracy for AIMD trajectories of molecules, including benzene, toluene, naphthalene, ethanol, uracil, and aspirin. The challenge of constructing conservative force fields is accomplished in our work by learning in a Hilbert space of vector-valued functions that obey the law of energy conservation. The GDML approach enables quantitative molecular dynamics simulations for molecules at a fraction of the cost of explicit AIMD calculations, thereby allowing the construction of efficient force fields with the accuracy and transferability of high-level ab initio methods.
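
    The defining property of a conservative force field is F = -dE/dx, which can be verified numerically by comparing predicted forces against a finite difference of the energy. A minimal consistency check on a toy one-dimensional potential standing in for a learned surface (this is not the GDML kernel itself):

```python
import numpy as np

def energy(x):
    """Toy 1-D double-well potential standing in for a learned energy surface."""
    return x**4 - 2.0 * x**2

def force(x):
    """Force an energy-conserving model must predict: F = -dE/dx."""
    return -(4.0 * x**3 - 4.0 * x)

def conservation_error(x, h=1e-5):
    """Discrepancy between the model force and -dE/dx via central differences."""
    f_numeric = -(energy(x + h) - energy(x - h)) / (2.0 * h)
    return abs(f_numeric - force(x))

# Check the force/energy consistency over a grid of configurations.
max_err = max(conservation_error(x) for x in np.linspace(-1.5, 1.5, 7))
```

    A model that learns energies and forces independently would generally fail this check; GDML enforces it by construction by learning in the gradient domain.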

  17. A comparison of accurate automatic hippocampal segmentation methods.

    Science.gov (United States)

    Zandifar, Azar; Fonov, Vladimir; Coupé, Pierrick; Pruessner, Jens; Collins, D Louis

    2017-07-15

    The hippocampus is one of the first brain structures affected by Alzheimer's disease (AD). While many automatic methods for hippocampal segmentation exist, few studies have compared them on the same data. In this study, we compare four fully automated hippocampal segmentation methods in terms of their conformity with manual segmentation and their ability to be used as an AD biomarker in clinical settings. We also apply error correction to the four automatic segmentation methods, and complete a comprehensive validation to investigate differences between the methods. The effect size and classification performance are measured for AD versus normal control (NC) groups and for stable mild cognitive impairment (sMCI) versus progressive mild cognitive impairment (pMCI) groups. Our study shows that the nonlinear patch-based segmentation method with error correction is the most accurate automatic segmentation method and yields the most conformity with manual segmentation (κ=0.894). The largest effect size between AD versus NC and sMCI versus pMCI is produced by FreeSurfer with error correction. We further show that, using only hippocampal volume, age, and sex as features, the area under the receiver operating characteristic curve reaches up to 0.8813 for AD versus NC and 0.6451 for sMCI versus pMCI. However, the automatic segmentation methods are not significantly different in their performance. Copyright © 2017. Published by Elsevier Inc.
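
    Conformity with manual segmentation, reported above via the κ statistic, is also commonly summarized by an overlap score such as the Dice coefficient. A minimal sketch on hypothetical binary masks (Dice is a standard companion metric, not the statistic reported in this record):

```python
def dice(mask_a, mask_b):
    """Dice overlap of two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    return 2.0 * intersection / (sum(mask_a) + sum(mask_b))

# Hypothetical voxel masks (1 = hippocampus), flattened to lists.
auto_mask   = [0, 1, 1, 1, 0, 0, 1, 0]
manual_mask = [0, 1, 1, 0, 0, 0, 1, 1]
overlap = dice(auto_mask, manual_mask)
```

    A Dice value of 1.0 means perfect overlap; values above roughly 0.8 are usually considered good agreement for hippocampal segmentation.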

  18. Accurate ab initio vibrational energies of methyl chloride

    International Nuclear Information System (INIS)

    Owens, Alec; Yurchenko, Sergei N.; Yachmenev, Andrey; Tennyson, Jonathan; Thiel, Walter

    2015-01-01

    Two new nine-dimensional potential energy surfaces (PESs) have been generated using high-level ab initio theory for the two main isotopologues of methyl chloride, CH₃³⁵Cl and CH₃³⁷Cl. The respective PESs, CBS-35^HL and CBS-37^HL, are based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set (CBS) limit, and incorporate a range of higher-level (HL) additive energy corrections to account for core-valence electron correlation, higher-order coupled cluster terms, scalar relativistic effects, and diagonal Born-Oppenheimer corrections. Variational calculations of the vibrational energy levels were performed using the computer program TROVE, whose functionality has been extended to handle molecules of the form XY₃Z. Fully converged energies were obtained by means of a complete vibrational basis set extrapolation. The CBS-35^HL and CBS-37^HL PESs reproduce the fundamental term values with root-mean-square errors of 0.75 and 1.00 cm⁻¹, respectively. An analysis of the combined effect of the HL corrections and CBS extrapolation on the vibrational wavenumbers indicates that both are needed to compute accurate theoretical results for methyl chloride. We believe that it would be extremely challenging to go beyond the accuracy currently achieved for CH₃Cl without empirical refinement of the respective PESs.

  19. Accurate ab initio vibrational energies of methyl chloride

    Energy Technology Data Exchange (ETDEWEB)

    Owens, Alec, E-mail: owens@mpi-muelheim.mpg.de [Max-Planck-Institut für Kohlenforschung, Kaiser-Wilhelm-Platz 1, 45470 Mülheim an der Ruhr (Germany); Department of Physics and Astronomy, University College London, Gower Street, WC1E 6BT London (United Kingdom); Yurchenko, Sergei N.; Yachmenev, Andrey; Tennyson, Jonathan [Department of Physics and Astronomy, University College London, Gower Street, WC1E 6BT London (United Kingdom); Thiel, Walter [Max-Planck-Institut für Kohlenforschung, Kaiser-Wilhelm-Platz 1, 45470 Mülheim an der Ruhr (Germany)

    2015-06-28

    Two new nine-dimensional potential energy surfaces (PESs) have been generated using high-level ab initio theory for the two main isotopologues of methyl chloride, CH₃³⁵Cl and CH₃³⁷Cl. The respective PESs, CBS-35^HL and CBS-37^HL, are based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set (CBS) limit, and incorporate a range of higher-level (HL) additive energy corrections to account for core-valence electron correlation, higher-order coupled cluster terms, scalar relativistic effects, and diagonal Born-Oppenheimer corrections. Variational calculations of the vibrational energy levels were performed using the computer program TROVE, whose functionality has been extended to handle molecules of the form XY₃Z. Fully converged energies were obtained by means of a complete vibrational basis set extrapolation. The CBS-35^HL and CBS-37^HL PESs reproduce the fundamental term values with root-mean-square errors of 0.75 and 1.00 cm⁻¹, respectively. An analysis of the combined effect of the HL corrections and CBS extrapolation on the vibrational wavenumbers indicates that both are needed to compute accurate theoretical results for methyl chloride. We believe that it would be extremely challenging to go beyond the accuracy currently achieved for CH₃Cl without empirical refinement of the respective PESs.

  20. Measurement and Accurate Interpretation of the Solubility of Pharmaceutical Salts.

    Science.gov (United States)

    He, Yan; Ho, Chris; Yang, Donglai; Chen, Jeane; Orton, Edward

    2017-05-01

    Salt formation is one of the primary approaches to improve the developability of ionizable poorly water-soluble compounds. Solubility determination of the salt candidates in aqueous media or biorelevant fluids is a critical step in salt screening. Salt solubility measurements can be complicated due to dynamic changes in both solution and solid phases. Because salt screening is implemented early in research, solubility measurements often are performed using minimal amounts of material. Some salts show transiently high solubility on dissolution. Recognition of these transients can be critical in developing these salts into drug products. This minireview focuses on challenges in salt solubility measurements due to the changes in solution caused by self-buffering effects of dissolved species and the changes in solid phase due to solid-state phase transformations. Solubility measurements and their accurate interpretation are assessed in the context of dissolution monitoring and solid-phase analysis technologies. A harmonized method for reporting salt solubility measurements is recommended to reduce errors and to align with U.S. Pharmacopeial policy and Food and Drug Administration recommendations for drug products containing pharmaceutical salts. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  1. Podiatry Ankle Duplex Scan: Readily Learned and Accurate in Diabetes.

    Science.gov (United States)

    Normahani, Pasha; Powezka, Katarzyna; Aslam, Mohammed; Standfield, Nigel J; Jaffer, Usman

    2018-03-01

    We aimed to train podiatrists to perform a focused duplex ultrasound scan (DUS) of the tibial vessels at the ankle in diabetic patients: the podiatry ankle (PodAnk) duplex scan. Thirteen podiatrists underwent an intensive 3-hour simulation training session. Participants were then assessed performing bilateral PodAnk duplex scans of 3 diabetic patients with peripheral arterial disease. Participants were assessed using the duplex ultrasound objective structured assessment of technical skills (DUOSATS) tool and an "Imaging Score". A total of 156 vessel assessments were performed. All patients had abnormal waveforms with a loss of triphasic flow. Loss of triphasic flow was accurately detected in 145 (92.9%) vessels; the correct waveform was identified in 139 (89.1%) cases. Participants achieved excellent DUOSATS scores (median 24 [interquartile range: 23-25], maximum attainable score of 26) as well as "Imaging Scores" (8 [8-8], maximum attainable score of 8), indicating proficiency in technical skills. The mean time taken for each bilateral ankle assessment was 20.4 minutes (standard deviation ±6.7). We have demonstrated that a focused DUS for the purpose of vascular assessment of the diabetic foot is readily learned using intensive simulation training.

  2. Downhole temperature tool accurately measures well bore profile

    International Nuclear Information System (INIS)

    Cloud, W.B.

    1992-01-01

    This paper reports that an inexpensive temperature tool provides accurate temperature measurements during drilling operations for better design of cement jobs, workovers, well stimulation, and well bore hydraulics. Valid temperature data during specific well bore operations can improve initial job design, fluid testing, and slurry placement, ultimately enhancing well bore performance. This improvement applies to cement slurries, breaker activation for stimulation and profile control, and fluid rheological properties for all downhole operations. The temperature tool has been run standalone, mounted inside drill pipe, on slick wire line and braided cable, and as a free-fall tool. It has also been run piggyback on both directional surveys (slick line and free-fall) and standard logging runs. This temperature measuring system has been used extensively in field well bores to depths of 20,000 ft. The temperature tool is completely reusable in the field, much like the standard directional survey tools used on many drilling rigs. The system includes a small, rugged, programmable temperature sensor, a standard body housing, various adapters for specific applications, and a personal computer (PC) interface

  3. Determination of accurate metal silicide layer thickness by RBS

    International Nuclear Information System (INIS)

    Kirchhoff, J.F.; Baumann, S.M.; Evans, C.; Ward, I.; Coveney, P.

    1995-01-01

    Rutherford Backscattering Spectrometry (RBS) is a proven analytical tool for determining compositional information for a wide variety of materials. One of the most widely utilized applications of RBS is the study of the composition of metal silicides (MSiₓ), also referred to as polycides. A key quantity obtained from an analysis of a metal silicide is the ratio of silicon to metal (Si/M). Although compositional information is very reliable in these applications, determination of metal silicide layer thickness by RBS techniques can differ from true layer thicknesses by more than 40%. The cause of these differences lies in how the densities used in the RBS analysis are calculated. The standard RBS analysis software packages calculate a layer density by weighting each element's bulk density by its fractional atomic presence. This calculation causes large discrepancies in metal silicide thicknesses because most films form crystal structures with distinct densities. Assuming a constant layer density across the full range of Si/M values for metal silicide samples improves layer thickness determination but ignores the underlying physics of the films. We will present results of RBS determination of the thickness of various metal silicide films with a range of Si/M values using a physically accurate model for the calculation of layer densities. The thicknesses are compared to scanning electron microscopy (SEM) cross-section micrographs. We have also developed supporting software that incorporates these calculations into routine analyses. (orig.)
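
    The thickness discrepancy described above follows directly from how RBS converts a measured areal density (atoms/cm²) into a thickness: it divides by an assumed atomic volume density. The sketch below contrasts a fraction-weighted average of bulk elemental densities with the density of the actual silicide phase, using TiSi2 with round illustrative numbers; the areal density value is hypothetical:

```python
AVOGADRO = 6.022e23

def thickness_nm(areal_density, atomic_density):
    """RBS thickness: areal density (atoms/cm^2) over volume density (atoms/cm^3)."""
    return areal_density / atomic_density * 1e7   # cm -> nm

# Hypothetical TiSi2 layer measured by RBS at 4.0e17 atoms/cm^2.
areal = 4.0e17

# Naive density: atomic-fraction-weighted bulk elemental atomic densities.
n_ti, n_si = 5.67e22, 4.99e22                     # atoms/cm^3 for bulk Ti and Si
naive_density = (1.0 / 3.0) * n_ti + (2.0 / 3.0) * n_si

# Density of the actual TiSi2 phase: ~4.0 g/cm^3, 104.05 g/mol, 3 atoms per unit.
actual_density = 4.0 / 104.05 * AVOGADRO * 3

t_naive = thickness_nm(areal, naive_density)      # what standard software reports
t_actual = thickness_nm(areal, actual_density)    # physically correct thickness
error_pct = 100.0 * (t_naive - t_actual) / t_actual
```

    Because the silicide phase is denser than the weighted average of its elements, the naive conversion overestimates the thickness here by roughly a third, the same order as the discrepancies quoted in the abstract.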

  4. Accurate quantification of supercoiled DNA by digital PCR

    Science.gov (United States)

    Dong, Lianhua; Yoo, Hee-Bong; Wang, Jing; Park, Sang-Ryoul

    2016-01-01

    Digital PCR (dPCR), an enumeration-based quantification method, is capable of quantifying the DNA copy number without the help of standards. However, it can generate false results when the PCR conditions are not optimized. A recent international comparison (CCQM P154) showed that most laboratories significantly underestimated the concentration of supercoiled plasmid DNA by dPCR. Mostly, supercoiled DNAs are linearized before dPCR to avoid such underestimation. The present study was conducted to overcome this problem. In the bilateral comparison, the National Institute of Metrology, China (NIM) optimized and applied dPCR for supercoiled DNA determination, whereas the Korea Research Institute of Standards and Science (KRISS) prepared the unknown samples and quantified them by flow cytometry. In this study, several factors, such as the selection of the PCR master mix, the fluorescent label, and the position of the primers, were evaluated for quantifying supercoiled DNA by dPCR. This work confirmed that a 16S PCR master mix avoided poor amplification of the supercoiled DNA, whereas HEX labels on the dPCR probe resulted in robust amplification curves. Optimizing the dPCR assay based on these two observations resulted in accurate quantification of supercoiled DNA without preanalytical linearization. This result was in close agreement (101-113%) with the result from flow cytometry. PMID:27063649
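
    Digital PCR is enumeration-based because it derives the copy number from the fraction of positive partitions via Poisson statistics: the mean number of copies per partition is λ = -ln(1 - p). A minimal sketch with hypothetical run numbers (the partition volume and counts are illustrative, not from this study):

```python
import math

def copies_per_partition(n_positive, n_total):
    """Poisson estimate of mean target copies per partition: -ln(1 - p)."""
    return -math.log(1.0 - n_positive / n_total)

def concentration_per_ul(n_positive, n_total, partition_volume_ul):
    """Target concentration (copies/uL) from a digital PCR run."""
    return copies_per_partition(n_positive, n_total) / partition_volume_ul

# Hypothetical run: 12,000 of 20,000 partitions positive, 0.85 nL partitions.
lam = copies_per_partition(12000, 20000)
conc = concentration_per_ul(12000, 20000, 0.85e-3)
```

    The Poisson correction matters because a partition with two or more copies still reads as a single "positive"; this is also why poor amplification of supercoiled templates (false negatives) biases the estimate downward.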

  5. Accurate line intensities of methane from first-principles calculations

    Science.gov (United States)

    Nikitin, Andrei V.; Rey, Michael; Tyuterev, Vladimir G.

    2017-10-01

    In this work, we report first-principle theoretical predictions of methane spectral line intensities that are competitive with (and complementary to) the best laboratory measurements. A detailed comparison with the most accurate data shows that discrepancies in integrated polyad intensities are in the range of 0.4%-2.3%. This corresponds to estimations of the best available accuracy in laboratory Fourier Transform spectra measurements for this quantity. For relatively isolated strong lines the individual intensity deviations are in the same range. A comparison with the most precise laser measurements of the multiplet intensities in the 2ν₃ band gives an agreement within the experimental error margins (about 1%). This is achieved for the first time for five-atomic molecules. In the Supplementary Material we provide the lists of theoretical intensities at 269 K for over 5000 strongest transitions in the range below 6166 cm⁻¹. The advantage of the described method is that it offers a possibility to generate fully assigned exhaustive line lists at various temperature conditions. Extensive calculations up to 12,000 cm⁻¹ including high-T predictions will be made freely available through the TheoReTS information system (http://theorets.univ-reims.fr, http://theorets.tsu.ru), which contains ab initio-born line lists and provides a user-friendly graphical interface for fast simulation of absorption cross-sections and radiance.

  6. HIPPI: highly accurate protein family classification with ensembles of HMMs

    Directory of Open Access Journals (Sweden)

    Nam-phuong Nguyen

    2016-11-01

    Full Text Available Abstract Background Given a new biological sequence, detecting membership in a known family is a basic step in many bioinformatics analyses, with applications to protein structure and function prediction and metagenomic taxon identification and abundance profiling, among others. Yet family identification of sequences that are distantly related to sequences in public databases or that are fragmentary remains one of the more difficult analytical problems in bioinformatics. Results We present a new technique for family identification called HIPPI (Hierarchical Profile Hidden Markov Models for Protein family Identification). HIPPI uses a novel technique to represent a multiple sequence alignment for a given protein family or superfamily by an ensemble of profile hidden Markov models computed using HMMER. An evaluation of HIPPI on the Pfam database shows that HIPPI has better overall precision and recall than blastp, HMMER, and pipelines based on HHsearch, and maintains good accuracy even for fragmentary query sequences and for protein families with low average pairwise sequence identity, both conditions where other methods degrade in accuracy. Conclusion HIPPI provides accurate protein family identification and is robust to difficult model conditions. Our results, combined with observations from previous studies, show that ensembles of profile hidden Markov models can better represent multiple sequence alignments than a single profile hidden Markov model, and thus can improve downstream analyses for various bioinformatic tasks. Further research is needed to determine the best practices for building the ensemble of profile hidden Markov models. HIPPI is available on GitHub at https://github.com/smirarab/sepp.

  7. Chewing simulation with a physically accurate deformable model.

    Science.gov (United States)

    Pascale, Andra Maria; Ruge, Sebastian; Hauth, Steffen; Kordaß, Bernd; Linsen, Lars

    2015-01-01

    Nowadays, CAD/CAM software is being used to compute the optimal shape and position of a new tooth model meant for a patient. With this possible future application in mind, we present in this article an independent and stand-alone interactive application that simulates the human chewing process and the deformation it produces in the food substrate. Chewing motion sensors are used to produce an accurate representation of the jaw movement. The substrate is represented by a deformable elastic model based on the linear finite element method, which preserves physical accuracy. Collision detection based on spatial partitioning is used to calculate the forces that are acting on the deformable model. Based on the calculated information, geometry elements are added to the scene to enhance the information available for the user. The goal of the simulation is to present a complete scene to the dentist, highlighting the points where the teeth came into contact with the substrate and giving information about how much force acted at these points, which therefore makes it possible to indicate whether the tooth is being used incorrectly in the mastication process. Real-time interactivity is desired and achieved within limits, depending on the complexity of the employed geometric models. The presented simulation is a first step towards the overall project goal of interactively optimizing tooth position and shape under the investigation of a virtual chewing process using real patient data (Fig 1).
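
    Collision detection based on spatial partitioning, as used above, typically hashes objects into a uniform grid so that only objects in the same or neighboring cells need a detailed intersection test. A minimal broad-phase sketch with point objects and hypothetical coordinates (the application's actual mesh-versus-mesh tests would follow this culling step):

```python
from collections import defaultdict
from itertools import product

def build_grid(points, cell):
    """Broad phase: hash each point into a uniform 3-D grid cell."""
    grid = defaultdict(list)
    for i, (x, y, z) in enumerate(points):
        grid[(int(x // cell), int(y // cell), int(z // cell))].append(i)
    return grid

def candidate_pairs(points, cell):
    """Only points in the same or adjacent cells can collide; test just those."""
    grid = build_grid(points, cell)
    pairs = set()
    for (cx, cy, cz), indices in grid.items():
        for dx, dy, dz in product((-1, 0, 1), repeat=3):
            for j in grid.get((cx + dx, cy + dy, cz + dz), []):
                for i in indices:
                    if i < j:
                        pairs.add((i, j))
    return pairs

# Hypothetical vertex positions: two nearby points and one far away.
pts = [(0.1, 0.1, 0.1), (0.2, 0.2, 0.2), (5.0, 5.0, 5.0)]
near = candidate_pairs(pts, cell=1.0)
```

    The grid reduces the pair tests from O(n²) to roughly O(n) for well-distributed geometry, which is what makes real-time interactivity feasible for deformable models.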

  8. Estimation of Uterine Size: How Accurate Are We?

    Science.gov (United States)

    Hoke, Tanya P; Vakili, Babak

    To evaluate the accuracy of gynecologic surgeons at estimating uterine dimensions and weight. Six model uteri of various sizes were created to simulate the size and consistency of a uterus and displayed at 3 stations. The visual station (VS) comprised 2 specimens placed on an unmarked table. The laparoscopic station (LS) consisted of 2 model uteri, each placed in a separate simulated abdomen with a 0 degree laparoscope and 2 operative trocars with standard instruments. The blind weight station (BWS) consisted of blind palpation of 2 separately weighted models (heavy model [HM] and light model [LM]). Participants visually estimated the dimensions of each VS and LS models and blindly palpated the BWS models to estimate weight. Participants included 15 residents, 27 attendings, and 6 medical students. There was no difference in estimation accuracy regarding gender and age. For the VS and LS groups, participants underestimated all dimensions (VS variance = -15.0%; P estimation was less accurate than direct vision (P estimating laparoscopic dimensions (-25.8% vs -41.1%; P = 0.0001). All groups overestimated model weights (HM variance, 92.5%; P estimating dimensions and weights. With surgical decisions often predicated on estimates, education is needed to improve estimation methods.

  9. Accurate Recovery of H I Velocity Dispersion from Radio Interferometers

    Energy Technology Data Exchange (ETDEWEB)

    Ianjamasimanana, R. [Max-Planck Institut für Astronomie, Königstuhl 17, D-69117, Heidelberg (Germany); Blok, W. J. G. de [Netherlands Institute for Radio Astronomy (ASTRON), Postbus 2, 7990 AA Dwingeloo (Netherlands); Heald, George H., E-mail: roger@mpia.de, E-mail: blok@astron.nl, E-mail: George.Heald@csiro.au [Kapteyn Astronomical Institute, University of Groningen, P.O. Box 800, 9700 AV, Groningen (Netherlands)

    2017-05-01

    Gas velocity dispersion measures the amount of disordered motion of a rotating disk. Accurate estimates of this parameter are of the utmost importance because the parameter is directly linked to disk stability and star formation. A global measure of the gas velocity dispersion can be inferred from the width of the atomic hydrogen (H I) 21 cm line. We explore how several systematic effects involved in the production of H I cubes affect the estimate of H I velocity dispersion. We do so by comparing the H I velocity dispersion derived from different types of data cubes provided by The H I Nearby Galaxy Survey. We find that residual-scaled cubes best recover the H I velocity dispersion, independent of the weighting scheme used and for a large range of signal-to-noise ratio. For H I observations, where the dirty beam is substantially different from a Gaussian, the velocity dispersion values are overestimated unless the cubes are cleaned close to (e.g., ∼1.5 times) the noise level.
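
    A line-width-based velocity dispersion of the kind discussed above can be estimated as the intensity-weighted second moment of the line profile. A minimal sketch on a synthetic Gaussian line (the survey's actual moment maps involve masking and noise handling not shown here):

```python
import math

def velocity_dispersion(velocities, intensities):
    """Intensity-weighted second moment of a line profile (a moment-2 estimate)."""
    total = sum(intensities)
    mean_v = sum(v * w for v, w in zip(velocities, intensities)) / total
    var = sum(w * (v - mean_v) ** 2 for v, w in zip(velocities, intensities)) / total
    return math.sqrt(var)

# Synthetic Gaussian 21 cm line profile on a 1 km/s velocity grid.
sigma_true = 8.0
vels = [float(v) for v in range(-50, 51)]
profile = [math.exp(-v * v / (2.0 * sigma_true**2)) for v in vels]
sigma_est = velocity_dispersion(vels, profile)
```

    On a clean Gaussian the estimator recovers the true width almost exactly; the systematic effects studied in the record above (dirty-beam residuals, cleaning depth) show up as extra power in the wings that inflates this second moment.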

  10. SU-E-J-208: Fast and Accurate Auto-Segmentation of Abdominal Organs at Risk for Online Adaptive Radiotherapy

    International Nuclear Information System (INIS)

    Gupta, V; Wang, Y; Romero, A; Heijmen, B; Hoogeman, M; Myronenko, A; Jordan, P

    2014-01-01

    Purpose: Various studies have demonstrated that online adaptive radiotherapy by real-time re-optimization of the treatment plan can improve organs-at-risk (OARs) sparing in the abdominal region. Its clinical implementation, however, requires fast and accurate auto-segmentation of OARs in CT scans acquired just before each treatment fraction. Autosegmentation is particularly challenging in the abdominal region due to the frequently observed large deformations. We present a clinical validation of a new auto-segmentation method that uses fully automated non-rigid registration for propagating abdominal OAR contours from planning to daily treatment CT scans. Methods: OARs were manually contoured by an expert panel to obtain ground truth contours for repeat CT scans (3 per patient) of 10 patients. For the non-rigid alignment, we used a new non-rigid registration method that estimates the deformation field by optimizing local normalized correlation coefficient with smoothness regularization. This field was used to propagate planning contours to repeat CTs. To quantify the performance of the auto-segmentation, we compared the propagated and ground truth contours using two widely used metrics: Dice coefficient (Dc) and Hausdorff distance (Hd). The proposed method was benchmarked against translation and rigid alignment based auto-segmentation. Results: For all organs, the auto-segmentation performed better than the baseline (translation) with an average processing time of 15 s per fraction CT. The overall improvements ranged from 2% (heart) to 32% (pancreas) in Dc, and 27% (heart) to 62% (spinal cord) in Hd. For liver, kidneys, gall bladder, stomach, spinal cord and heart, Dc above 0.85 was achieved. Duodenum and pancreas were the most challenging organs with both showing relatively larger spreads and medians of 0.79 and 2.1 mm for Dc and Hd, respectively. Conclusion: Based on the achieved accuracy and computational time we conclude that the investigated auto-segmentation method is suitable for online adaptive radiotherapy.
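
    The Hausdorff distance used above measures the worst-case disagreement between two contours: the largest distance from a point on one contour to the nearest point on the other, symmetrized over both directions. A minimal sketch on hypothetical point sets:

```python
def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two 2-D point sets (contours)."""
    def directed(p_set, q_set):
        # Worst case over p of the distance to the nearest point in q.
        return max(min(((px - qx)**2 + (py - qy)**2) ** 0.5
                       for qx, qy in q_set) for px, py in p_set)
    return max(directed(points_a, points_b), directed(points_b, points_a))

# Hypothetical contour samples (mm) from an automatic and a manual delineation.
auto_contour   = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
manual_contour = [(0.0, 0.0), (1.0, 0.0), (1.0, 2.0)]
hd = hausdorff(auto_contour, manual_contour)
```

    Unlike the Dice coefficient, which averages over the whole volume, the Hausdorff distance is sensitive to a single badly placed contour point, which is why both metrics are reported together.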

  11. SU-E-J-208: Fast and Accurate Auto-Segmentation of Abdominal Organs at Risk for Online Adaptive Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Gupta, V; Wang, Y; Romero, A; Heijmen, B; Hoogeman, M [Erasmus MC Cancer Institute, Rotterdam (Netherlands); Myronenko, A; Jordan, P [Accuray Incorporated, Sunnyvale, United States. (United States)

    2014-06-01

    Purpose: Various studies have demonstrated that online adaptive radiotherapy by real-time re-optimization of the treatment plan can improve organs-at-risk (OARs) sparing in the abdominal region. Its clinical implementation, however, requires fast and accurate auto-segmentation of OARs in CT scans acquired just before each treatment fraction. Autosegmentation is particularly challenging in the abdominal region due to the frequently observed large deformations. We present a clinical validation of a new auto-segmentation method that uses fully automated non-rigid registration for propagating abdominal OAR contours from planning to daily treatment CT scans. Methods: OARs were manually contoured by an expert panel to obtain ground truth contours for repeat CT scans (3 per patient) of 10 patients. For the non-rigid alignment, we used a new non-rigid registration method that estimates the deformation field by optimizing local normalized correlation coefficient with smoothness regularization. This field was used to propagate planning contours to repeat CTs. To quantify the performance of the auto-segmentation, we compared the propagated and ground truth contours using two widely used metrics: Dice coefficient (Dc) and Hausdorff distance (Hd). The proposed method was benchmarked against translation and rigid alignment based auto-segmentation. Results: For all organs, the auto-segmentation performed better than the baseline (translation) with an average processing time of 15 s per fraction CT. The overall improvements ranged from 2% (heart) to 32% (pancreas) in Dc, and 27% (heart) to 62% (spinal cord) in Hd. For liver, kidneys, gall bladder, stomach, spinal cord and heart, Dc above 0.85 was achieved. Duodenum and pancreas were the most challenging organs with both showing relatively larger spreads and medians of 0.79 and 2.1 mm for Dc and Hd, respectively. Conclusion: Based on the achieved accuracy and computational time we conclude that the investigated auto-segmentation method is suitable for online adaptive radiotherapy.

  12. Gene expression signatures of radiation response are specific, durable and accurate in mice and humans.

    Directory of Open Access Journals (Sweden)

    Sarah K Meadows

    2008-04-01

    Full Text Available Previous work has demonstrated the potential for peripheral blood (PB) gene expression profiling for the detection of disease or environmental exposures. We have sought to determine the impact of several variables on the PB gene expression profile of an environmental exposure, ionizing radiation, and to determine the specificity of the PB signature of radiation versus other genotoxic stresses. Neither genotype differences nor the time of PB sampling reduced the accuracy of PB signatures in predicting radiation exposure, but sex differences did influence the accuracy of the prediction of radiation exposure at the lowest level (50 cGy). A PB signature of sepsis was also generated, and both the PB signature of radiation and the PB signature of sepsis were found to be 100% specific at distinguishing irradiated from septic animals. We also identified human PB signatures of radiation exposure and chemotherapy treatment which distinguished irradiated patients and chemotherapy-treated individuals within a heterogeneous population with accuracies of 90% and 81%, respectively. We conclude that PB gene expression profiles can be identified in mice and humans that are accurate in predicting medical conditions, are specific to each condition, and remain highly accurate over time.

  13. Professional information about urinary incontinence on the World Wide Web: is it timely? Is it accurate?

    Science.gov (United States)

    Diering, C L; Palmer, M H

    2001-01-01

    Access to timely and accurate information about urinary continence and incontinence is important to the assessment and treatment of adults with urinary incontinence. The aims of this study were to identify current Web sites containing information about urinary continence that are easily accessible to health care providers and to determine the timeliness and accuracy of the information included on these Web sites. The World Wide Web was searched for sites devoted to health care provider information about urinary continence and incontinence. Two external content reviewers evaluated content in terms of timeliness and accuracy. A table outlining each site's credibility, content, and function was constructed. Two hundred sixty-five sites were located, but only 15 met the inclusion criteria. Readability levels ranged from 6.2 to 14.5 years. All sites provided links, and 53% had internal search engines. Most information located was accurate; however, some sites contained dated information. Forty percent of the sites were not dated, and thus determining the currency of the information was impossible. The World Wide Web is a valuable tool containing state-of-the-art knowledge about urinary continence that WOC nurses can use to educate themselves and others. However, using critical skills to evaluate the information posted on these and any other sites is essential.

  14. A random protein-creatinine ratio accurately predicts baseline proteinuria in early pregnancy.

    Science.gov (United States)

    Hirshberg, Adi; Draper, Jennifer; Curley, Cara; Sammel, Mary D; Schwartz, Nadav

    2014-12-01

    Data surrounding the use of a random urine protein:creatinine ratio (PCR) in the diagnosis of preeclampsia are conflicting. We sought to determine whether PCR in early pregnancy can replace the 24-hour urine collection as the primary screening test in patients at risk for baseline proteinuria. Women requiring a baseline evaluation for proteinuria supplied a urine sample the morning after their 24-hour collection. The PCR was analyzed as a predictor of significant proteinuria (≥150 mg). A regression equation to estimate the 24-hour protein value from the PCR was then developed. Sixty of 135 subjects enrolled completed the study. The median 24-hour urine protein and PCR were 90 mg (IQR: 50-145) and 0.063 (IQR: 0.039-0.083), respectively. Fifteen patients (25%) had significant proteinuria. PCR was strongly correlated with the 24-hour protein value (r = 0.99, p < 0.001). The regression equation [24-hour protein = 46.5 + 904.2 × PCR] accurately estimates the actual 24-hour protein (95% CI: ±88 mg). A random urine PCR accurately estimates the 24-hour protein excretion in the first half of pregnancy and can be used as the primary screening test for baseline proteinuria in at-risk patients.
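As an illustration, the reported regression can be applied directly. A minimal sketch in Python: the coefficients 46.5 and 904.2 and the 150 mg threshold come from the abstract, while the function name and sample PCR value are invented for this example.

```python
def estimate_24h_protein(pcr: float) -> float:
    """Estimate 24-hour urine protein (mg) from a random protein:creatinine
    ratio, using the regression coefficients reported in the abstract."""
    return 46.5 + 904.2 * pcr

# The study cohort's median PCR was 0.063; 150 mg was the threshold
# for significant proteinuria.
estimate = estimate_24h_protein(0.063)
print(f"{estimate:.1f} mg/24 h, significant: {estimate >= 150}")
```

Note that the abstract's ±88 mg confidence interval would still apply to any such point estimate, so values near the 150 mg threshold remain ambiguous.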

  15. Fast and accurate spectral estimation for online detection of partial broken bar in induction motors

    Science.gov (United States)

    Samanta, Anik Kumar; Naha, Arunava; Routray, Aurobinda; Deb, Alok Kanti

    2018-01-01

    In this paper, an online, real-time system is presented for detecting a partially broken rotor bar (BRB) in inverter-fed squirrel cage induction motors under light-load conditions. With minor modifications, this system can detect any fault that affects the stator current. A fast and accurate spectral estimator based on the theory of the Rayleigh quotient is proposed for detecting the spectral signature of BRB. The proposed estimator can precisely determine the relative amplitude of the fault sidebands and has low complexity compared with available high-resolution subspace-based spectral estimators. Detection of low-amplitude fault components is improved by removing the high-amplitude fundamental frequency with an extended Kalman filter-based signal conditioner. Slip is estimated from the stator current spectrum for accurate localization of the fault component. Sensor complexity and cost are minimal because only a single-phase stator current is required. The hardware implementation was carried out on an Intel i7-based embedded target ported through Simulink Real-Time. Threshold evaluation and fault detectability under different load conditions and fault severities are assessed using the empirical cumulative distribution function.
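For context, the BRB signature such an estimator searches for sits at sidebands offset from the supply frequency by twice the slip frequency. This generic (1 ± 2ks)·f relation is the standard broken-bar signature rather than a formula quoted from the paper, and the 50 Hz supply and 2% slip below are illustrative assumptions:

```python
def brb_sideband_freqs(supply_hz: float, slip: float, k_max: int = 2):
    """Broken-rotor-bar sideband frequencies at (1 +/- 2*k*s) * f_supply."""
    freqs = []
    for k in range(1, k_max + 1):
        freqs.append((1 - 2 * k * slip) * supply_hz)  # lower sideband
        freqs.append((1 + 2 * k * slip) * supply_hz)  # upper sideband
    return sorted(freqs)

# 50 Hz supply at 2% slip: first-order sidebands land near 48 Hz and 52 Hz,
# only 2 Hz from the fundamental -- which is why the paper suppresses the
# fundamental before looking for the low-amplitude fault components.
print(brb_sideband_freqs(50.0, 0.02, k_max=1))
```

The small slip at light load pushes the sidebands close to the dominant supply line, motivating both the high-resolution spectral estimator and the slip estimate used to localize the fault component.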

  16. Accurate Behavioral Simulator of All-Digital Time-Domain Smart Temperature Sensors by Using SIMULINK

    Directory of Open Access Journals (Sweden)

    Chun-Chi Chen

    2016-08-01

    Full Text Available This study proposes a new behavioral simulator that uses SIMULINK for all-digital CMOS time-domain smart temperature sensors (TDSTSs) for performing rapid and accurate simulations. Inverter-based TDSTSs offer the benefits of low cost and simple structure for temperature-to-digital conversion and have been developed. Typically, electronic design automation tools, such as HSPICE, are used to simulate TDSTSs for performance evaluations. However, such tools require extremely long simulation time and complex procedures to analyze the results and generate figures. In this paper, we organize simple but accurate equations into a temperature-dependent model (TDM) by which the TDSTSs evaluate temperature behavior. Furthermore, temperature-sensing models of a single CMOS NOT gate were devised using HSPICE simulations. Using the TDM and these temperature-sensing models, a novel simulator in the SIMULINK environment was developed to substantially accelerate the simulation and simplify the evaluation procedures. Experiments demonstrated that the simulation results of the proposed simulator have favorable agreement with those obtained from HSPICE simulations, showing that the proposed simulator functions successfully. This is the first behavioral simulator addressing the rapid simulation of TDSTSs.

  17. Recommended volumetric capacity definitions and protocols for accurate, standardized and unambiguous metrics for hydrogen storage materials

    Science.gov (United States)

    Parilla, Philip A.; Gross, Karl; Hurst, Katherine; Gennett, Thomas

    2016-03-01

    The ultimate goal of the hydrogen economy is the development of hydrogen storage systems that meet or exceed the US DOE's goals for onboard storage in hydrogen-powered vehicles. In order to develop new materials to meet these goals, it is extremely critical to accurately, uniformly and precisely measure materials' properties relevant to the specific goals. Without this assurance, such measurements are not reliable and, therefore, do not provide a benefit toward the work at hand. In particular, capacity measurements for hydrogen storage materials must be based on valid and accurate results to ensure proper identification of promising materials for further development. Volumetric capacity determinations are becoming increasingly important for identifying promising materials, yet there exists controversy over how such determinations are made and whether they are valid, owing to differing methodologies for counting the hydrogen content. These issues are discussed herein, and we show mathematically that capacity determinations can be made rigorously and unambiguously if the constituent volumes are well defined and measurable in practice. It is widely accepted that this holds for excess capacity determinations, and we show here that it can also hold for the total capacity determination. Because the adsorption volume is undefined, the absolute capacity determination remains imprecise. Furthermore, we show that there is a direct relationship between determining the respective capacities and the calibration constants used for the manometric and gravimetric techniques. Several suggested volumetric capacity figures of merit are defined and discussed, and reporting requirements are recommended. Finally, an example is provided to illustrate these protocols and concepts.
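The excess-versus-total distinction can be sketched with the standard Gibbs-excess bookkeeping. The symbols below are generic illustrations of that convention, not the paper's notation: the total amount follows from the measured excess once the void volume is well defined, whereas the "absolute" amount would additionally require the (undefined) adsorption volume.

```latex
% n_ex  : measured excess amount of hydrogen
% n_tot : total amount in the sample cell's void volume V_void
% rho_g : bulk gas density at pressure p and temperature T
\begin{equation}
  n_{\mathrm{tot}} = n_{\mathrm{ex}} + \rho_{g}(p,T)\, V_{\mathrm{void}}
\end{equation}
```

Because every term on the right is either measured or computable from an equation of state once $V_{\mathrm{void}}$ is fixed, the total capacity inherits the same rigor as the excess capacity, which is the essence of the argument summarized in the abstract.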

  18. Prometheus: Scalable and Accurate Emulation of Task-Based Applications on Many-Core Systems.

    Energy Technology Data Exchange (ETDEWEB)

    Kestor, Gokcen; Gioiosa, Roberto; Chavarría-Miranda, Daniel

    2015-03-01

    Modeling the performance of non-deterministic parallel applications on future many-core systems requires the development of novel simulation and emulation techniques and tools. We present Prometheus, a fast, accurate, and modular emulation framework for task-based applications. By raising the level of abstraction and focusing on runtime synchronization, Prometheus can accurately predict applications' performance on very large many-core systems. We validate our emulation framework against two real platforms (AMD Interlagos and Intel MIC) and report error rates generally below 4%. We then evaluate Prometheus' performance and scalability: our results show that Prometheus can emulate a task-based application on a system with 512K cores in 11.5 hours. We present two test cases that show how Prometheus can be used to study the performance and behavior of systems exhibiting some of the characteristics expected of exascale supercomputer nodes, such as active power management and processors with a high core count but reduced cache per core.

  19. Accurate Behavioral Simulator of All-Digital Time-Domain Smart Temperature Sensors by Using SIMULINK.

    Science.gov (United States)

    Chen, Chun-Chi; Chen, Chao-Lieh; Lin, You-Ting

    2016-08-08

    This study proposes a new behavioral simulator that uses SIMULINK for all-digital CMOS time-domain smart temperature sensors (TDSTSs) for performing rapid and accurate simulations. Inverter-based TDSTSs offer the benefits of low cost and simple structure for temperature-to-digital conversion and have been developed. Typically, electronic design automation tools, such as HSPICE, are used to simulate TDSTSs for performance evaluations. However, such tools require extremely long simulation time and complex procedures to analyze the results and generate figures. In this paper, we organize simple but accurate equations into a temperature-dependent model (TDM) by which the TDSTSs evaluate temperature behavior. Furthermore, temperature-sensing models of a single CMOS NOT gate were devised using HSPICE simulations. Using the TDM and these temperature-sensing models, a novel simulator in the SIMULINK environment was developed to substantially accelerate the simulation and simplify the evaluation procedures. Experiments demonstrated that the simulation results of the proposed simulator have favorable agreement with those obtained from HSPICE simulations, showing that the proposed simulator functions successfully. This is the first behavioral simulator addressing the rapid simulation of TDSTSs.

  20. Comparison of thermistor linearization techniques for accurate temperature measurement in phase change materials

    Energy Technology Data Exchange (ETDEWEB)

    Stankovic, S B; Kyriacou, P A, E-mail: p.kyriacou@city.ac.uk [School of Engineering and Mathematical Sciences, City University London, Northampton Square, London EC1V 0HB (United Kingdom)

    2011-08-17

    Alternative energy technologies have been developing rapidly in recent years. A significant part of this trend is the development of different phase change materials (PCMs). Proper utilization of PCMs requires accurate thermal characterization, for which several methodologies are used. This paper stresses the importance of accurate temperature measurements during the implementation of the T-history method. Because the temperature sensor size is also important, thermistors were selected as the sensing modality. Two thermistor linearization techniques, one based on a Wheatstone bridge and the other on a simple serial-parallel resistor connection, are compared in terms of achievable temperature accuracy, considering both nonlinearity and self-heating errors. Proper calibration was performed before the T-history measurement of the RT21 (RUBITHERM® GmbH) PCM. Measurement results suggest that the serial-parallel resistor connection gives better accuracy (less than ±0.1 °C) than the Wheatstone bridge-based configuration (up to ±1.5 °C).
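The benefit of a parallel linearizing resistor can be illustrated with a generic beta-model NTC. In this sketch the thermistor parameters (10 kΩ at 25 °C, β = 3950 K) and the 11-31 °C window around RT21's nominal 21 °C phase-change point are assumptions for illustration, not values from the paper; it compares the nonlinearity of the bare R-T curve with that of the thermistor-plus-parallel-resistor combination.

```python
import math

R25, BETA = 10_000.0, 3950.0   # assumed NTC parameters (not from the paper)

def r_ntc(t_c: float) -> float:
    """Beta-model NTC resistance at t_c degrees Celsius."""
    t_k = t_c + 273.15
    return R25 * math.exp(BETA * (1.0 / t_k - 1.0 / 298.15))

def r_lin(t_c: float, r_par: float) -> float:
    """Thermistor in parallel with a fixed linearizing resistor."""
    r = r_ntc(t_c)
    return r * r_par / (r + r_par)

def max_dev_from_line(f, lo: float, hi: float, n: int = 101) -> float:
    """Largest deviation, as a fraction of span, from the endpoint chord."""
    span = f(hi) - f(lo)
    dev = 0.0
    for i in range(n):
        t = lo + (hi - lo) * i / (n - 1)
        chord = f(lo) + span * (t - lo) / (hi - lo)
        dev = max(dev, abs(f(t) - chord) / abs(span))
    return dev

# Classic sizing rule: choose the parallel resistor so that the inflection
# of the combined R-T curve lands at the mid-temperature (21 degC here).
t_mid_k = 21.0 + 273.15
r_par = r_ntc(21.0) * (BETA - 2 * t_mid_k) / (BETA + 2 * t_mid_k)

bare = max_dev_from_line(r_ntc, 11.0, 31.0)
lin = max_dev_from_line(lambda t: r_lin(t, r_par), 11.0, 31.0)
print(f"nonlinearity: bare={bare:.3f}, linearized={lin:.3f}")
```

Placing the inflection point at the middle of the measurement window flattens the curvature on both sides of it, which is why a well-sized fixed resistor alone can shrink the nonlinearity error by more than an order of magnitude over a narrow PCM-melting range; self-heating, the other error source the paper weighs, is not modeled here.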