WorldWideScience

Sample records for facilitate improved estimations

  1. Improved Radiation Dosimetry/Risk Estimates to Facilitate Environmental Management of Plutonium-Contaminated Sites

    Energy Technology Data Exchange (ETDEWEB)

    Scott, Bobby R.; Tokarskaya, Zoya B.; Zhuntova, Galina V.; Osovets, Sergey V.; Syrchikov, Victor A.; Belyaeva, Zinaida D.

    2007-12-14

    This report summarizes 4 years of research achievements in this Office of Science (BER), U.S. Department of Energy (DOE) project. The research described was conducted by scientists and supporting staff at Lovelace Respiratory Research Institute (LRRI)/Lovelace Biomedical and Environmental Research Institute (LBERI) and the Southern Urals Biophysics Institute (SUBI). All project objectives and goals were achieved. A major focus was on obtaining improved cancer risk estimates for exposure via inhalation to plutonium (Pu) isotopes in the workplace (DOE radiation workers) and environment (public exposures to Pu-contaminated soil). A major finding was that low doses and dose rates of gamma rays can significantly suppress cancer induction by alpha radiation from inhaled Pu isotopes. The suppression relates to stimulation of the body's natural defenses, including immunity against cancer cells and selective apoptosis which removes precancerous and other aberrant cells.

  2. Improved Radiation Dosimetry/Risk Estimates to Facilitate Environmental Management of Plutonium-Contaminated Sites

    International Nuclear Information System (INIS)

    Scott, Bobby R.; Tokarskaya, Zoya B.; Zhuntova, Galina V.; Osovets, Sergey V.; Syrchikov, Victor A.; Belyaeva, Zinaida D.

    2007-01-01

    This report summarizes 4 years of research achievements in this Office of Science (BER), U.S. Department of Energy (DOE) project. The research described was conducted by scientists and supporting staff at Lovelace Respiratory Research Institute (LRRI)/Lovelace Biomedical and Environmental Research Institute (LBERI) and the Southern Urals Biophysics Institute (SUBI). All project objectives and goals were achieved. A major focus was on obtaining improved cancer risk estimates for exposure via inhalation to plutonium (Pu) isotopes in the workplace (DOE radiation workers) and environment (public exposures to Pu-contaminated soil). A major finding was that low doses and dose rates of gamma rays can significantly suppress cancer induction by alpha radiation from inhaled Pu isotopes. The suppression relates to stimulation of the body's natural defenses, including immunity against cancer cells and selective apoptosis which removes precancerous and other aberrant cells.

  3. Improved Radiation Dosimetry/Risk Estimates to Facilitate Environmental Management of Plutonium Contaminated Sites

    International Nuclear Information System (INIS)

    Scott, Bobby R.

    1999-01-01

    The objective of this research is to evaluate distributions of possible alpha radiation doses to the lung, bone, and liver, and associated health-risk distributions for plutonium (Pu) inhalation exposure scenarios relevant to environmental management of PuO2-contaminated sites. Currently available dosimetry/risk models do not apply to exposure scenarios where relatively small numbers of highly radioactive PuO2 particles are presented for inhalation (stochastic exposure [SE] paradigm). For the SE paradigm, distributions of possible risks are more relevant than point estimates of risk. The main goal of the project is to deliver a computer program that will allow evaluation of the indicated risk distributions for the SE paradigm. However, some of our work also relates to the deterministic exposure [DE] paradigm where large numbers of airborne particles (resuspended dust containing PuO2) are presented for inhalation to members of the public residing or working at a remediated Department of Energy (DOE) site.

  4. Improved Radiation Dosimetry/Risk Estimates to Facilitate Environmental Management Of Plutonium Contaminated Sites

    International Nuclear Information System (INIS)

    Scott, B.R.

    2001-01-01

    Currently available radiation dosimetry/health-risk models for inhalation exposure to radionuclides are based on deterministic radiation intake and deterministic radiation doses (local and global). These models are not adequate for brief plutonium (Pu) exposure scenarios related to Department of Energy (DOE) decontamination/decommissioning (D and D) operations because such exposures involve the stochastic-intake (StI) paradigm. For this paradigm, small or moderate numbers of airborne, pure, highly radioactive PuO2 particles could be inhaled and deposited in the respiratory tract in unpredictable numbers (stochastic) during D and D incidents. Probabilistic relationships govern intake via the respiratory tract for the StI paradigm. An StI-paradigm incident occurred on March 16, 2000, at Los Alamos National Laboratory. It involved eight workers who inhaled high-specific-activity, alpha-emitting (HSA-aE) 238PuO2-contaminated room air (glovebox-failure incident). Health-risk estimation is not trivial for the StI-exposure paradigm, especially for HSA-aE 238PuO2, as different individuals can have very different and uncertain radioactivity intakes for the same exposure duration and same incident. Indeed, this occurred in the Los Alamos incident. Rather than inappropriate point estimates of intake, dose, and risk, more appropriate probability distributions are needed. A main objective of this project has been to develop a stochastic dosimetry/risk computer model for evaluating radioactivity intake (by inhalation) distributions, organ dose distributions, and health risk distributions for DOE workers who may inhale airborne, alpha-emitting, pure PuO2 at DOE sites such as Rocky Flats. Another objective of this project has been to address the deterministic intake (DI) paradigm where members of the public could inhale, over years, millions or more resuspended, air-transported, PuO2-contaminated dust particles while residing (e.g., farmer) or working (e.g., office worker) at a remediated DOE site.

  5. Improved radiation dosimetry/risk estimates to facilitate environmental management of plutonium contaminated sites. 1998 annual progress report

    International Nuclear Information System (INIS)

    Scott, B.R.

    1998-01-01

    'The objective of this research is to evaluate distributions of possible alpha radiation doses to the lung, bone, and liver and associated health-risk distributions for plutonium (Pu) inhalation-exposure scenarios relevant to environmental management of PuO2-contaminated sites. Currently available dosimetry/risk models do not apply to exposure scenarios where, at most, a small number of highly radioactive PuO2 particles are inhaled (stochastic exposure [SE] paradigm). For the SE paradigm, risk distributions are more relevant than point estimates of risk. The focus of the research is on the SE paradigm and on high specific activity, alpha-emitting (HSA-aE) particles such as 238PuO2. The scientific goal is to develop a stochastic respiratory tract dosimetry/risk computer model for evaluating the desired absorbed dose distributions and associated health-risk distributions, for Department of Energy (DOE) workers and members of the public. This report summarizes results after 1 year of a 2-year project.'

  6. Improved Estimates of Thermodynamic Parameters

    Science.gov (United States)

    Lawson, D. D.

    1982-01-01

    Techniques refined for estimating heat of vaporization and other parameters from molecular structure. Using parabolic equation with three adjustable parameters, heat of vaporization can be used to estimate boiling point, and vice versa. Boiling points and vapor pressures for some nonpolar liquids were estimated by improved method and compared with previously reported values. Technique for estimating thermodynamic parameters should make it easier for engineers to choose among candidate heat-exchange fluids for thermochemical cycles.
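
    The parabolic relation described above lends itself to a short numerical sketch. The data below are approximate literature values for n-alkanes, not the paper's calibration set, and the fitted coefficients are purely illustrative; the point is how a three-parameter parabola lets heat of vaporization be estimated from boiling point and vice versa.

```python
import numpy as np

# Approximate literature values for n-alkanes (pentane .. nonane),
# used only to illustrate the fit; not the paper's data set.
Tb = np.array([309.2, 341.9, 371.6, 398.8, 424.0])  # boiling point, K
Hv = np.array([25.8, 28.9, 31.8, 34.4, 36.9])       # heat of vap., kJ/mol

# Three-parameter parabolic relation: Hv = a + b*Tb + c*Tb**2
c, b, a = np.polyfit(Tb, Hv, 2)  # polyfit returns the highest power first

def hv_from_tb(tb):
    """Estimate heat of vaporization (kJ/mol) from boiling point (K)."""
    return a + b * tb + c * tb**2

def tb_from_hv(hv):
    """Invert the parabola to estimate boiling point (K) from Hv."""
    roots = np.roots([c, b, a - hv])
    real = roots[np.isreal(roots)].real
    return real[np.argmin(np.abs(real - Tb.mean()))]  # physical branch

print(hv_from_tb(353.9))  # rough Hv estimate at benzene's boiling point
print(tb_from_hv(30.7))   # rough Tb estimate from benzene's Hv
```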

  7. Practice Facilitators' and Leaders' Perspectives on a Facilitated Quality Improvement Program.

    Science.gov (United States)

    McHugh, Megan; Brown, Tiffany; Liss, David T; Walunas, Theresa L; Persell, Stephen D

    2018-04-01

    Practice facilitation is a promising approach to helping practices implement quality improvements. Our purpose was to describe practice facilitators' and practice leaders' perspectives on implementation of a practice facilitator-supported quality improvement program and describe where their perspectives aligned and diverged. We conducted interviews with practice leaders and practice facilitators who participated in a program that included 35 improvement strategies aimed at the ABCS of heart health (aspirin use in high-risk individuals, blood pressure control, cholesterol management, and smoking cessation). Rapid qualitative analysis was used to collect, organize, and analyze the data. We interviewed 17 of the 33 eligible practice leaders, and the 10 practice facilitators assigned to those practices. Practice leaders and practice facilitators both reported value in the program's ability to bring needed, high-quality resources to practices. Practice leaders appreciated being able to set the schedule for facilitation and select among the 35 interventions. According to practice facilitators, however, relying on practice leaders to set the pace of the intervention resulted in a lower level of program intensity than intended. Practice leaders preferred targeted assistance, particularly electronic health record documentation guidance and linkages to state smoking cessation programs. Practice facilitators reported that the easiest interventions were those that did not alter care practices. The dual perspectives of practice leaders and practice facilitators provide a more holistic picture of enablers and barriers to program implementation. There may be greater opportunities to assist small practices through simple, targeted practice facilitator-supported efforts rather than larger, comprehensive quality improvement projects. © 2018 Annals of Family Medicine, Inc.

  8. Facilitation: A Novel Way to Improve Students' Well-being

    DEFF Research Database (Denmark)

    Adriansen, Hanne Kirstine Olesen; Madsen, Lene Møller

    2013-01-01

    In this article we analyze a project that used facilitation techniques, which are known from training in industry, to improve the study environment at a public research university in Denmark. In 2009, the project was initiated in one graduate program, and it has subsequently been modified and institutionalized. The project did not change the teaching format, but introduced facilitated study-groups using peer learning. It was successful in increasing students’ well-being. While peer learning and study groups are well-known in higher education, facilitation is a different and novel tool. We argue that facilitation makes study groups more inclusive, and they provide the potential for deep learning by structuring the learning situation...

  9. An Improved Cluster Richness Estimator

    Energy Technology Data Exchange (ETDEWEB)

    Rozo, Eduardo; /Ohio State U.; Rykoff, Eli S.; /UC, Santa Barbara; Koester, Benjamin P.; /Chicago U. /KICP, Chicago; McKay, Timothy; /Michigan U.; Hao, Jiangang; /Michigan U.; Evrard, August; /Michigan U.; Wechsler, Risa H.; /SLAC; Hansen, Sarah; /Chicago U. /KICP, Chicago; Sheldon, Erin; /New York U.; Johnston, David; /Houston U.; Becker, Matthew R.; /Chicago U. /KICP, Chicago; Annis, James T.; /Fermilab; Bleem, Lindsey; /Chicago U.; Scranton, Ryan; /Pittsburgh U.

    2009-08-03

    Minimizing the scatter between cluster mass and accessible observables is an important goal for cluster cosmology. In this work, we introduce a new matched filter richness estimator, and test its performance using the maxBCG cluster catalog. Our new estimator significantly reduces the variance in the L_X-richness relation, from σ²_ln L_X = (0.86 ± 0.02)² to σ²_ln L_X = (0.69 ± 0.02)². Relative to the maxBCG richness estimate, it also removes the strong redshift dependence of the richness scaling relations, and is significantly more robust to photometric and redshift errors. These improvements are largely due to our more sophisticated treatment of galaxy color data. We also demonstrate that the scatter in the L_X-richness relation depends on the aperture used to estimate cluster richness, and introduce a novel approach for optimizing said aperture which can be easily generalized to other mass tracers.
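
    To make the quantity being improved here concrete, the sketch below builds a synthetic cluster catalog obeying a power-law L_X-richness relation with lognormal scatter and recovers σ_ln L_X from the r.m.s. of the fit residuals. All catalog parameters are invented; this is not the maxBCG matched-filter estimator itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic catalog: richness N and X-ray luminosity L_X obeying a
# power-law scaling relation with lognormal scatter (values invented).
n_clusters = 2000
richness = rng.uniform(10, 100, n_clusters)
true_slope, true_norm, true_scatter = 1.6, -2.0, 0.7
ln_lx = (true_norm + true_slope * np.log(richness)
         + rng.normal(0.0, true_scatter, n_clusters))

# Fit ln L_X = A + B * ln N, then estimate the scatter at fixed richness
# from the residuals -- the quantity the improved richness estimator
# reduces from (0.86)^2 to (0.69)^2 in the abstract.
B, A = np.polyfit(np.log(richness), ln_lx, 1)
residuals = ln_lx - (A + B * np.log(richness))
sigma_ln_lx = residuals.std(ddof=2)  # ~0.7 for this synthetic sample
print(f"scatter in ln L_X at fixed richness: {sigma_ln_lx:.2f}")
```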

  10. Improving evapotranspiration estimates in Mediterranean drylands

    DEFF Research Database (Denmark)

    Morillas, Laura; Leuning, Ray; Villagarcia, Luis

    2013-01-01

    An adaptation of a simple model for evapotranspiration (E) estimations in drylands based on remotely sensed leaf area index and the Penman-Monteith equation (PML model) (Leuning et al., 2008) is presented. Three methods for improving the consideration of soil evaporation influence in total evapo-...

  11. A facilitated process towards finding options for improved livestock ...

    African Journals Online (AJOL)

    A participatory multi-stakeholder process of finding options for improving livestock production in the severely degraded communal grazing area of Sterkspruit in South Africa was conducted. Interviews were conducted with individual livestock keepers from two sites to gather data on their demographic characteristics, ...

  12. Improved dose estimates for nuclear criticality accidents

    International Nuclear Information System (INIS)

    Wilkinson, A.D.; Basoglu, B.; Bentley, C.L.; Dunn, M.E.; Plaster, M.J.; Dodds, H.L.; Yamamoto, T.

    1995-01-01

    Slide rules are improved for estimating doses and dose rates resulting from nuclear criticality accidents. The original slide rules were created for highly enriched uranium solutions and metals using hand calculations along with the decades-old Way-Wigner radioactive decay relationship and the inverse square law. This work uses state-of-the-art methods and better data to improve the original slide rules and also to extend the slide rule concept to three additional systems, i.e., highly enriched (93.2 wt%) uranium damp (H/235U = 10) powder (U3O8) and low-enriched (5 wt%) uranium mixtures (UO2F2) with H/235U ratios of 200 and 500. Although the improved slide rules differ only slightly from the original slide rules, the improved slide rules and also the new slide rules can be used with greater confidence since they are based on more rigorous methods and better nuclear data.
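
    The two ingredients named above are simple enough to sketch: the Way-Wigner relationship says fission-product dose rate decays roughly as t^-1.2, and the inverse square law scales it with distance. The reference value r1 below is a placeholder, not a number taken from the slide rules.

```python
import numpy as np

def dose_rate(t_min, d_m, r1=1.0e3):
    """Dose rate at t_min minutes after the excursion and distance d_m (m).
    Way-Wigner decay (~ t**-1.2) combined with the inverse square law;
    r1 is a placeholder dose rate at 1 minute and 1 metre."""
    return r1 * t_min**-1.2 / d_m**2

def integrated_dose(t1_min, t2_min, d_m, r1=1.0e3):
    """Dose accumulated between t1 and t2 (analytic integral of t**-1.2:
    the antiderivative is -5 * t**-0.2)."""
    return 5.0 * r1 * (t1_min**-0.2 - t2_min**-0.2) / d_m**2

# Example: dose accrued between 1 and 60 minutes at 10 m from the source.
print(integrated_dose(1.0, 60.0, 10.0))
```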

  13. Recurrent population improvement of rice breeding facilitated with male sterility

    International Nuclear Information System (INIS)

    Fujimaki, Hiroshi

    1982-01-01

    A new rice breeding system has been developed, making use of genic male sterility to utilize diverse breeding materials and to promote genetic recombination. In this system, recurrent selection technique and introgressive hybridization were used to increase the frequencies of producing desired genotypes and to improve the population in succession. To promote genetic recombination by the recurrent selection technique, intermating within the population is necessary, and to introduce useful germ plasms by the introgressive hybridization, back crossing with new genetic material is necessary. These can be done efficiently by using the recessive alleles for male sterility, and the representative models for this type of breeding were presented. (Kaihara, S.)

  14. Interaction force and motion estimators facilitating impedance control of the upper limb rehabilitation robot.

    Science.gov (United States)

    Mancisidor, Aitziber; Zubizarreta, Asier; Cabanes, Itziar; Bengoa, Pablo; Jung, Je Hyung

    2017-07-01

    In order to enhance the performance of rehabilitation robots, it is imperative to know both the force and motion caused by the interaction between user and robot. However, the common direct measurement of both signals through force and motion sensors not only increases the complexity of the system but also impedes its affordability. As an alternative to direct measurement, in this work we present new force and motion estimators for the proper control of the upper-limb rehabilitation Universal Haptic Pantograph (UHP) robot. The estimators are based on the kinematic and dynamic model of the UHP and the use of signals measured by means of common low-cost sensors. In order to demonstrate the effectiveness of the estimators, several experimental tests were carried out. The force and impedance control of the UHP was implemented first by directly measuring the interaction force using accurate extra sensors, and the robot performance was compared to the case where the proposed estimators replace the directly measured values. The experimental results reveal that the controller based on the estimators has similar performance to that using direct measurement (less than 1 N difference in root mean square error between the two cases), indicating that the proposed force and motion estimators can facilitate implementation of interactive controllers for the UHP in robot-mediated rehabilitation trainings.
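
    The underlying idea, attributing to the user whatever torque the rigid-body model cannot explain from low-cost motion signals, can be sketched for a single degree of freedom. The parameters m, b, k and the lever arm L below are placeholders; the actual UHP kinematic and dynamic model is considerably richer.

```python
# Minimal 1-DOF sketch of model-based interaction force estimation.
m, b, k, L = 2.0, 0.5, 10.0, 0.3  # inertia, damping, stiffness, arm (placeholders)

def estimate_interaction_force(q, dq, ddq, tau_motor):
    """Inverse-dynamics residual: the torque the motor applies that the
    rigid-body model cannot account for is attributed to the user, then
    mapped to an end-point force through the lever arm."""
    tau_model = m * ddq + b * dq + k * q
    return (tau_motor - tau_model) / L

# q, dq can come from cheap encoders; ddq from filtered differentiation.
print(estimate_interaction_force(q=0.1, dq=0.2, ddq=0.5, tau_motor=2.5))
```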

  15. Experiences of practice facilitators working on the Improved Delivery of Cardiovascular Care project: Retrospective case study.

    Science.gov (United States)

    Liddy, Clare; Rowan, Margo; Valiquette-Tessier, Sophie-Claire; Drosinis, Paul; Crowe, Lois; Hogg, William

    2018-01-01

    To examine the barriers to and facilitators of practice facilitation experienced by participants in the Improved Delivery of Cardiovascular Care (IDOCC) project. Case studies of practice facilitators' narrative reports. Eastern Ontario. Primary care practices that participated in the IDOCC project. Cases were identified by calculating sum scores in order to determine practices' performance relative to their peers. Two case exemplars were selected that scored within ± 1 SD of the total mean score, and a qualitative analysis of practice facilitators' narrative reports was conducted using a 5-factor implementation framework to identify barriers and facilitators. Narratives were divided into 3 phases: planning, implementation, and sustainability. Barriers and facilitators fluctuated over the intervention's 3 phases. Site A reported more barriers (n = 47) than facilitators (n = 38), while site B reported a roughly equal number of barriers (n = 144) and facilitators (n = 136). In both sites, the most common barriers involved organizational and provider factors and the most common facilitators were associated with innovation and structural factors. Both practices encountered various barriers and facilitators throughout the IDOCC's 3 phases. The case studies reveal the complex interactions of these factors over time, and provide insight into the implementation of practice facilitation programs. Copyright© the College of Family Physicians of Canada.

  16. Improved moment scaling estimation for multifractal signals

    Directory of Open Access Journals (Sweden)

    D. Veneziano

    2009-11-01

    A fundamental problem in the analysis of multifractal processes is to estimate the scaling exponent K(q) of moments of different order q from data. Conventional estimators use the empirical moments μ̂_r(q) = ⟨|ε_r(τ)|^q⟩ of wavelet coefficients ε_r(τ), where τ is location and r is resolution. For stationary measures one usually considers "wavelets of order 0" (averages), whereas for functions with multifractal increments one must use wavelets of order at least 1. One obtains K̂(q) as the slope of log(μ̂_r(q)) against log(r) over a range of r. Negative moments are sensitive to measurement noise and quantization. For them, one typically uses only the local maxima of |ε_r(τ)| (modulus maxima methods). For the positive moments, we modify the standard estimator K̂(q) to significantly reduce its variance at the expense of a modest increase in the bias. This is done by separately estimating K(q) from sub-records and averaging the results. For the negative moments, we show that the standard modulus maxima estimator is biased and, in the case of additive noise or quantization, is not applicable with wavelets of order 1 or higher. For these cases we propose alternative estimators. We also consider the fitting of parametric models of K(q) and show how, by splitting the record into sub-records as indicated above, the accuracy of standard methods can be significantly improved.
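
    A minimal sketch of the positive-moment machinery: order-0 coefficients (local averages) are formed at dyadic resolutions, K̂(q) is read off as a log-log slope, and the variance-reduced variant estimates K(q) on sub-records and averages. The lognormal cascade used as test data is a crude surrogate, not the processes studied in the paper.

```python
import numpy as np

def k_hat(signal, qs, n_scales=6):
    """Slope of log empirical moments of local averages against log r."""
    x = np.asarray(signal, float)
    x = x / x.mean()  # normalize the measure
    log_r, log_mu = [], []
    for j in range(n_scales):
        r = 2 ** j
        eps = x[: x.size // r * r].reshape(-1, r).mean(axis=1)
        log_r.append(np.log(r))
        log_mu.append([np.log(np.mean(np.abs(eps) ** q)) for q in qs])
    log_mu = np.array(log_mu)
    return np.array([np.polyfit(log_r, log_mu[:, i], 1)[0]
                     for i in range(len(qs))])

def k_hat_split(signal, qs, n_sub=4):
    """Variance-reduced variant: estimate K(q) per sub-record, average."""
    subs = np.array_split(np.asarray(signal, float), n_sub)
    return np.mean([k_hat(s, qs) for s in subs], axis=0)

# Toy test data: a dyadic multiplicative cascade (crude multifractal).
rng = np.random.default_rng(1)
x = np.ones(2 ** 12)
for j in range(12):
    w = rng.lognormal(0.0, 0.3, 2 ** (j + 1))
    x *= np.repeat(w, 2 ** (12 - j - 1))

print(k_hat(x, qs=[1, 2, 3]))
print(k_hat_split(x, qs=[1, 2, 3]))
```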

  17. Epithelium percentage estimation facilitates epithelial quantitative protein measurement in tissue specimens.

    Science.gov (United States)

    Chen, Jing; Toghi Eshghi, Shadi; Bova, George Steven; Li, Qing Kay; Li, Xingde; Zhang, Hui

    2013-12-01

    The rapid advancement of high-throughput tools for quantitative measurement of proteins has demonstrated the potential for the identification of proteins associated with cancer. However, the quantitative results on cancer tissue specimens are usually confounded by tissue heterogeneity, e.g. regions with cancer usually have significantly higher epithelium content yet lower stromal content. It is therefore necessary to develop a tool to facilitate the interpretation of the results of protein measurements in tissue specimens. Epithelial cell adhesion molecule (EpCAM) and cathepsin L (CTSL) are two epithelial proteins whose expressions in normal and tumorous prostate tissues were confirmed by measuring staining intensity with immunohistochemical staining (IHC). The expressions of these proteins were measured by ELISA in protein extracts from OCT-embedded frozen prostate tissues. To eliminate the influence of tissue heterogeneity on epithelial protein quantification measured by ELISA, a color-based segmentation method was developed in-house for estimation of epithelium content using H&E histology slides from the same prostate tissues, and the estimated epithelium percentage was used to normalize the ELISA results. The epithelium contents of the same slides were also estimated by a pathologist and used to normalize the ELISA results. The computer-based results were compared with the pathologist's reading. We found that both EpCAM and CTSL levels, as measured by the ELISA assays themselves, were greatly affected by the epithelium content of the tissue specimens. Without adjusting for epithelium percentage, both EpCAM and CTSL levels appeared significantly higher in tumor tissues than normal tissues with a p value less than 0.001. However, after normalization by the epithelium percentage, ELISA measurements of both EpCAM and CTSL were in agreement with IHC staining results, showing a significant increase only in EpCAM with no difference in CTSL expression in cancer tissues. These results
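
    The normalization step itself is plain arithmetic, sketched below with invented ELISA readings and epithelium fractions: dividing each measurement by the epithelium fraction of its specimen removes the part of the tumor/normal contrast that comes only from tumors containing more epithelium.

```python
import numpy as np

# Invented ELISA readings (arbitrary units) and epithelium fractions
# estimated from the matching H&E slides; two normal, two tumor specimens.
elisa = np.array([12.0, 15.5, 30.2, 28.7])
epithelium_fraction = np.array([0.25, 0.30, 0.70, 0.65])

# Per-unit-epithelium expression: the quantity compared against IHC.
normalized = elisa / epithelium_fraction
print("raw:       ", elisa)        # tumors look ~2x higher
print("normalized:", normalized)   # most of the contrast disappears
```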

  18. Practice Facilitator Strategies for Addressing Electronic Health Record Data Challenges for Quality Improvement: EvidenceNOW.

    Science.gov (United States)

    Hemler, Jennifer R; Hall, Jennifer D; Cholan, Raja A; Crabtree, Benjamin F; Damschroder, Laura J; Solberg, Leif I; Ono, Sarah S; Cohen, Deborah J

    2018-01-01

    Practice facilitators ("facilitators") can play an important role in supporting primary care practices in performing quality improvement (QI), but they need complete and accurate clinical performance data from practices' electronic health records (EHR) to help them set improvement priorities, guide clinical change, and monitor progress. Here, we describe the strategies facilitators use to help practices perform QI when complete or accurate performance data are not available. Seven regional cooperatives enrolled approximately 1500 small-to-medium-sized primary care practices and 136 facilitators in EvidenceNOW, the Agency for Healthcare Research and Quality's initiative to improve cardiovascular preventive services. The national evaluation team analyzed qualitative data from online diaries, site visit field notes, and interviews to discover how facilitators worked with practices on EHR data challenges to obtain and use data for QI. We found facilitators faced practice-level EHR data challenges, such as a lack of clinical performance data, partial or incomplete clinical performance data, and inaccurate clinical performance data. We found that facilitators responded to these challenges, respectively, by using other data sources or tools to fill in for missing data, approximating performance reports and generating patient lists, and teaching practices how to document care and confirm performance measures. In addition, facilitators helped practices communicate with EHR vendors or health systems in requesting data they needed. Overall, facilitators tailored strategies to fit the individual practice and helped build data skills and trust. Facilitators can use a range of strategies to help practices perform data-driven QI when performance data are inaccurate, incomplete, or missing. Support is necessary to help practices, particularly those with EHR data challenges, build their capacity for conducting data-driven QI that is required of them for participating in practice

  19. Do Indonesian Children's Experiences with Large Currency Units Facilitate Magnitude Estimation of Long Temporal Periods?

    Science.gov (United States)

    Cheek, Kim A.

    2017-08-01

    Ideas about temporal (and spatial) scale impact students' understanding across science disciplines. Learners have difficulty comprehending the long time periods associated with natural processes because they have no referent for the magnitudes involved. When people have a good "feel" for quantity, they estimate cardinal number magnitude linearly. Magnitude estimation errors can be explained by confusion about the structure of the decimal number system, particularly in terms of how powers of ten are related to one another. Indonesian children regularly use large currency units. This study investigated if they estimate long time periods accurately and if they estimate those time periods the same way they estimate analogous currency units. Thirty-nine children from a private International Baccalaureate school estimated temporal magnitudes up to 10,000,000,000 years in a two-part study. Artifacts children created were compared to theoretical model predictions previously used in number magnitude estimation studies as reported by Landy et al. (Cognitive Science 37:775-799, 2013). Over one third estimated the magnitude of time periods up to 10,000,000,000 years linearly, exceeding what would be expected based upon prior research with children this age who lack daily experience with large quantities. About half treated successive powers of ten as a count sequence instead of multiplicatively related when estimating magnitudes of time periods. Children generally estimated the magnitudes of long time periods and familiar, analogous currency units the same way. Implications for ways to improve the teaching and learning of this crosscutting concept/overarching idea are discussed.
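
    The model comparison at issue, placements proportional to magnitude versus placements that treat successive powers of ten as equally spaced count steps, can be sketched as follows. The targets and the child's placements are invented, and both models are simplified relative to those of Landy et al.

```python
import numpy as np

# Hypothetical number-line task: place each target on a 0-1 line whose
# right end is 1e10 years. Placements are invented for the example.
targets    = np.array([1e3, 1e5, 1e7, 1e9, 1e10])
placements = np.array([0.25, 0.42, 0.60, 0.78, 0.95])
line_max = 1e10

linear_pred = targets / line_max                      # magnitude-true
count_pred  = np.log10(targets) / np.log10(line_max)  # powers of ten as count steps

def sse(pred):
    return float(((placements - pred) ** 2).sum())

# The better-fitting model suggests how the child represents magnitude;
# this child's placements follow the count-sequence (logarithmic) model.
print("linear SSE:        ", sse(linear_pred))
print("count-sequence SSE:", sse(count_pred))
```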

  20. Enabling Continuous Quality Improvement in Practice: The Role and Contribution of Facilitation.

    Science.gov (United States)

    Harvey, Gillian; Lynch, Elizabeth

    2017-01-01

    Facilitating the implementation of continuous quality improvement (CQI) is a complex undertaking. Numerous contextual factors at a local, organizational, and health system level can influence the trajectory and ultimate success of an improvement program. Some of these contextual factors are amenable to modification, others less so. As part of planning and implementing healthcare improvement, it is important to assess and build an understanding of contextual factors that might present barriers to or enablers of implementation. On the basis of this initial diagnosis, it should then be possible to design and implement the improvement intervention in a way that is responsive to contextual barriers and enablers, often described as "tailoring" the implementation approach. Having individuals in the active role of facilitators is proposed as an effective way of delivering a context-sensitive, tailored approach to implementing CQI. This paper presents an overview of the facilitator role in implementing CQI. Drawing on empirical evidence from the use of facilitator roles in healthcare, the type of skills and knowledge required will be considered, along with the type of facilitation strategies that can be employed in the implementation process. Evidence from both case studies and systematic reviews of facilitation will be reviewed and key lessons for developing and studying the role in the future identified.

  1. An improved estimation and focusing scheme for vector velocity estimation

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Munk, Peter

    1999-01-01

    to reduce spatial velocity dispersion. Examples of different velocity vector conditions are shown using the Field II simulation program. A relative accuracy of 10.1 % is obtained for the lateral velocity estimates for a parabolic velocity profile for a flow perpendicular to the ultrasound beam and a signal...

  2. Improved diagnostic model for estimating wind energy

    Energy Technology Data Exchange (ETDEWEB)

    Endlich, R.M.; Lee, J.D.

    1983-03-01

    Because wind data are available only at scattered locations, a quantitative method is needed to estimate the wind resource at specific sites where wind energy generation may be economically feasible. This report describes a computer model that makes such estimates. The model uses standard weather reports and terrain heights in deriving wind estimates; the method of computation has been changed from what has been used previously. The performance of the current model is compared with that of the earlier version at three sites; estimates of wind energy at four new sites are also presented.

  3. Facilitating open global data use in earthquake source modelling to improve geodetic and seismological approaches

    Science.gov (United States)

    Sudhaus, Henriette; Heimann, Sebastian; Steinberg, Andreas; Isken, Marius; Vasyura-Bathke, Hannes

    2017-04-01

    In the last few years impressive achievements have been made in improving inferences about earthquake sources by using InSAR (Interferometric Synthetic Aperture Radar) data. Several factors aided these developments. The open data basis of earthquake observations has expanded vastly with the two powerful Sentinel-1 SAR sensors up in space. Increasing computer power allows processing of large data sets for more detailed source models. Moreover, data inversion approaches for earthquake source inferences are becoming more advanced. By now data error propagation is widely implemented and the estimation of model uncertainties is a regular feature of reported optimum earthquake source models. Also, InSAR-derived surface displacements and seismological waveforms are combined more regularly, which requires finite rupture models instead of point-source approximations and layered medium models instead of homogeneous half-spaces. In other words, the disciplinary differences in geodetic and seismological earthquake source modelling shrink towards common source-medium descriptions and a source near-field/far-field data point of view. We explore and facilitate the combination of InSAR-derived near-field static surface displacement maps and dynamic far-field seismological waveform data for global earthquake source inferences. We join the community efforts with the particular goal of improving crustal earthquake source inferences in generally not well instrumented areas, where often only the global backbone observations of earthquakes are available, provided by seismological broadband sensor networks and, more recently, by Sentinel-1 SAR acquisitions. We present our work on modelling standards for the combination of static and dynamic surface displacements in the source's near-field and far-field, e.g. on data and prediction error estimations as well as model uncertainty estimation. Rectangular dislocations and moment-tensor point sources are exchanged by simple planar finite

  4. What impedes and what facilitates a quality improvement project for older hospitalized patients?

    NARCIS (Netherlands)

    Ijkema, R.; Langelaan, M.; van de Steeg, L.; Wagner, C.

    2014-01-01

    Objective: To gain insight into which factors impede, and which facilitate, the implementation of a complex multi-component improvement initiative in hospitalized older patients. Design: A qualitative study based on semi-structured interviews. The three dimensions of Pettigrew and Whipp's

  5. Independent Coactors May Improve Performance and Lower Workload: Viewing Vigilance Under Social Facilitation.

    Science.gov (United States)

    Claypoole, Victoria L; Szalma, James L

    2018-04-01

    The purpose of the present study was to examine the effects of an independent coactor on vigilance task performance. It was hypothesized that the presence of an independent coactor would improve performance in terms of the proportion of false alarms while also increasing perceived workload and stress. Vigilance, or the ability to maintain attention for extended periods, is of great interest to human factors psychologists. Substantial work has focused on improving vigilance task performance, typically through motivational interventions. Of interest to vigilance researchers is the application of social facilitation as a means of enhancing vigilance. Social facilitation seeks to explain how social presence may improve performance. A total of 100 participants completed a 24-min vigil either alone or in the presence of an independent (confederate) coactor. Participants completed measures of perceived workload and stress. The results indicated that performance (i.e., proportion of false alarms) was improved for those who completed the vigil in the presence of an independent coactor. Interestingly, perceived workload was actually lower for those who completed the vigil in the presence of an independent coactor, although perceived stress was not affected by the manipulation. Authors of future research should extend these findings to other forms of social facilitation and examine vigilance task performance in social contexts in order to determine the utility of social presence for improving vigilance. The use of coactors may be an avenue for organizations to consider utilizing to improve performance because of its relative cost-effectiveness and easy implementation.

  6. Improving Distribution Resiliency with Microgrids and State and Parameter Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Tuffner, Francis K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Williams, Tess L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Schneider, Kevin P. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Elizondo, Marcelo A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sun, Yannan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Liu, Chen-Ching [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Xu, Yin [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gourisetti, Sri Nikhil Gup [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-09-30

    Modern society relies on low-cost, reliable electrical power, both to maintain industry and to provide basic social services to the populace. When major disturbances occur, such as Hurricane Katrina or Hurricane Sandy, the nation’s electrical infrastructure can experience significant outages. To help prevent the spread of these outages, as well as to facilitate faster restoration after an outage, various aspects of the resiliency of the power system need to be improved. Two such approaches are breaking the system into smaller microgrid sections, and having improved insight into operations to detect failures or mis-operations before they become critical. By breaking the system into smaller microgrid islands, power can be maintained in smaller areas where distribution generation and energy storage resources are still available but bulk power generation is no longer connected. Additionally, microgrid systems can maintain service to local pockets of customers when there has been extensive damage to the local distribution system. However, microgrids are grid-connected the majority of the time, and implementing and operating an islanded microgrid is much different from grid-connected operation. This report discusses work conducted by the Pacific Northwest National Laboratory that developed improvements for simulation tools to capture the characteristics of microgrids and how they can be used to develop new operational strategies. These operational strategies reduce the cost of microgrid operation and increase the reliability and resilience of the nation’s electricity infrastructure. In addition to the ability to break the system into microgrids, improved observability into the state of the distribution grid can make the power system more resilient. State estimation on the transmission system already provides great insight into grid operations and detecting abnormal conditions by leveraging existing measurements. These transmission-level approaches are expanded to using

  7. An improved silhouette for human pose estimation

    Science.gov (United States)

    Hawes, Anthony H.; Iftekharuddin, Khan M.

    2017-08-01

    We propose a novel method for analyzing images that exploits the natural lines of a human pose to find areas where self-occlusion could be present. Errors caused by self-occlusion cause several modern human pose estimation methods to mis-identify body parts, which reduces the performance of most action recognition algorithms. Our method is motivated by the observation that, in several cases, occlusion can be reasoned about using only the boundary lines of limbs. An intelligent edge detection algorithm based on the above principle could be used to augment the silhouette with information useful for pose estimation algorithms and push forward progress on occlusion handling for human action recognition. The algorithm described is applicable to computer vision scenarios involving 2D images and (appropriately flattened) 3D images.

  8. Improved linear least squares estimation using bounded data uncertainty

    KAUST Repository

    Ballal, Tarig

    2015-04-01

    This paper addresses the problem of linear least squares (LS) estimation of a vector x from linearly related observations. In spite of being unbiased, the original LS estimator suffers from high mean squared error, especially at low signal-to-noise ratios. The mean squared error (MSE) of the LS estimator can be improved by introducing some form of regularization based on certain constraints. We propose an improved LS (ILS) estimator that approximately minimizes the MSE, without imposing any constraints. To achieve this, we allow for perturbation in the measurement matrix. Then we utilize a bounded data uncertainty (BDU) framework to derive a simple iterative procedure to estimate the regularization parameter. Numerical results demonstrate that the proposed BDU-ILS estimator is superior to the original LS estimator, and it converges to the best linear estimator, the linear-minimum-mean-squared error estimator (LMMSE), when the elements of x are statistically white.
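
    The idea, accepting a little bias to cut MSE by regularizing LS, with the regularization parameter set by a simple iteration, can be sketched as follows. The fixed-point update below is a generic heuristic placeholder, not the paper's BDU-derived rule, and the test problem is invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Ill-conditioned linear model y = A x + noise at low SNR (illustrative).
n, m = 20, 10
A = rng.normal(size=(n, m)) @ np.diag(np.logspace(0, -3, m))
x = rng.normal(size=m)
y = A @ x + rng.normal(scale=0.5, size=n)

def regularized_ls(A, y, lam):
    """Regularized LS: (A'A + lam*I)^-1 A'y."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

# Simple iteration for the regularization parameter (heuristic stand-in
# for the BDU-based procedure described in the abstract).
lam = 1.0
for _ in range(20):
    xh = regularized_ls(A, y, lam)
    lam = np.linalg.norm(y - A @ xh) / (np.linalg.norm(xh) + 1e-12)

x_ls  = np.linalg.lstsq(A, y, rcond=None)[0]   # plain LS
x_ils = regularized_ls(A, y, lam)              # improved (regularized) LS
print("LS  error:", np.linalg.norm(x_ls - x))
print("ILS error:", np.linalg.norm(x_ils - x))
```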

  9. Improved linear least squares estimation using bounded data uncertainty

    KAUST Repository

    Ballal, Tarig; Al-Naffouri, Tareq Y.

    2015-01-01

    This paper addresses the problem of linear least squares (LS) estimation of a vector x from linearly related observations. In spite of being unbiased, the original LS estimator suffers from high mean squared error, especially at low signal-to-noise ratios. The mean squared error (MSE) of the LS estimator can be improved by introducing some form of regularization based on certain constraints. We propose an improved LS (ILS) estimator that approximately minimizes the MSE, without imposing any constraints. To achieve this, we allow for perturbation in the measurement matrix. Then we utilize a bounded data uncertainty (BDU) framework to derive a simple iterative procedure to estimate the regularization parameter. Numerical results demonstrate that the proposed BDU-ILS estimator is superior to the original LS estimator, and it converges to the best linear estimator, the linear-minimum-mean-squared error estimator (LMMSE), when the elements of x are statistically white.

  10. Improving nuclear envelope dynamics by EBV BFRF1 facilitates intranuclear component clearance through autophagy.

    Science.gov (United States)

    Liu, Guan-Ting; Kung, Hsiu-Ni; Chen, Chung-Kuan; Huang, Cheng; Wang, Yung-Li; Yu, Cheng-Pu; Lee, Chung-Pei

    2018-02-26

    Although a vesicular nucleocytoplasmic transport system is believed to exist in eukaryotic cells, the features of this pathway are mostly unknown. Here, we report that the BFRF1 protein of the Epstein-Barr virus improves vesicular transport of the nuclear envelope (NE) to facilitate the translocation and clearance of nuclear components. BFRF1 expression induces vesicles that selectively transport nuclear components to the cytoplasm. With the use of aggregation-prone proteins as tools, we found that aggregated nuclear proteins are dispersed when these BFRF1-induced vesicles are formed. BFRF1-containing vesicles engulf the NE-associated aggregates, exit from the NE, and putatively fuse with autophagic vacuoles. Chemical treatment and genetic ablation of autophagy-related factors indicate that autophagosome formation and autophagy-linked FYVE protein-mediated autophagic proteolysis are involved in this selective clearance of nuclear proteins. Remarkably, vesicular transport, elicited by BFRF1, also attenuated nuclear aggregates accumulated in neuroblastoma cells. Accordingly, induction of NE-derived vesicles by BFRF1 facilitates nuclear protein translocation and clearance, suggesting that autophagy-coupled transport of nucleus-derived vesicles can be elicited for nuclear component catabolism in mammalian cells.-Liu, G.-T., Kung, H.-N., Chen, C.-K., Huang, C., Wang, Y.-L., Yu, C.-P., Lee, C.-P. Improving nuclear envelope dynamics by EBV BFRF1 facilitates intranuclear component clearance through autophagy.

  11. On Improving Convergence Rates for Nonnegative Kernel Density Estimators

    OpenAIRE

    Terrell, George R.; Scott, David W.

    1980-01-01

    To improve the rate of decrease of integrated mean square error for nonparametric kernel density estimators beyond $O(n^{-4/5})$, we must relax the constraint that the density estimate be a bona fide density function, that is, be nonnegative and integrate to one. All current methods for kernel (and orthogonal series) estimators relax the nonnegativity constraint. In this paper we show how to achieve similar improvement by relaxing the integral constraint only. This is important in appl...
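
    The trade-off is easy to demonstrate numerically: a fourth-order kernel improves the bias rate but the estimate can go negative. The sketch below contrasts the ordinary (order-2) Gaussian kernel with the standard fourth-order Gaussian-based kernel (3 − u²)φ(u)/2; the data and bandwidth are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(size=400)
grid = np.linspace(-4.0, 4.0, 201)
h = 0.5  # bandwidth, chosen arbitrarily for the illustration

u = (grid[:, None] - data[None, :]) / h
phi = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

# Order-2 Gaussian kernel: bona fide (nonnegative), IMSE rate O(n^{-4/5}).
f2 = phi.mean(axis=1) / h

# Order-4 kernel (3 - u^2)/2 * phi(u): faster bias decay, but the
# estimate is no longer guaranteed nonnegative.
f4 = ((3.0 - u**2) / 2.0 * phi).mean(axis=1) / h

print("min of order-2 estimate:", f2.min())  # always >= 0
print("min of order-4 estimate:", f4.min())  # typically < 0 in the tails
```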

  12. Improving Collective Estimations Using Resistance to Social Influence.

    Directory of Open Access Journals (Sweden)

    Gabriel Madirolas

    2015-11-01

    Groups can make precise collective estimations in cases like the weight of an object or the number of items in a volume. However, in other tasks, for example those requiring memory or mental calculation, subjects often give estimations with large deviations from factual values. Allowing members of the group to communicate their estimations has the additional perverse effect of shifting individual estimations even closer to the biased collective estimation. Here we show that this negative effect of social interactions can be turned into a method to improve collective estimations. We first obtained a statistical model of how humans change their estimation when receiving the estimates made by other individuals. We confirmed using existing experimental data its prediction that individuals use the weighted geometric mean of private and social estimations. We then used this result and the fact that each individual uses a different value of the social weight to devise a method that extracts the subgroups resisting social influence. We found that these subgroups of individuals resisting social influence can make very large improvements in group estimations. This is in contrast to methods using the confidence that each individual declares, for which we find no improvement in group estimations. Also, our proposed method does not need to use historical data to weight individuals by performance. These results show the benefits of using the individual characteristics of the members in a group to better extract collective wisdom.
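
    The confirmed update rule, a weighted geometric mean of private and social estimations, together with the resister-extraction idea, makes for a compact sketch. Here the social weights are drawn directly instead of being inferred from behaviour as in the paper, and all numbers are invented.

```python
import numpy as np

def updated_estimate(private, social, w):
    """Weighted geometric mean of private and social estimations;
    w = 0 means the individual fully resists social influence."""
    return private ** (1 - w) * social ** w

rng = np.random.default_rng(4)
true_value, social_info = 100.0, 160.0     # biased social information
private = rng.lognormal(np.log(true_value), 0.3, 50)
weights = rng.uniform(0.0, 0.8, 50)        # each person's social weight
public = updated_estimate(private, social_info, weights)

geo_mean = lambda a: float(np.exp(np.log(a).mean()))
resisters = public[weights < 0.2]          # subgroup resisting influence
print("whole group:", geo_mean(public))    # dragged toward the biased 160
print("resisters:  ", geo_mean(resisters)) # much closer to the true 100
```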

  13. Improving Collective Estimations Using Resistance to Social Influence.

    Science.gov (United States)

    Madirolas, Gabriel; de Polavieja, Gonzalo G

    2015-11-01

    Groups can make precise collective estimations in cases like the weight of an object or the number of items in a volume. However, in other tasks, for example those requiring memory or mental calculation, subjects often give estimations with large deviations from factual values. Allowing members of the group to communicate their estimations has the additional perverse effect of shifting individual estimations even closer to the biased collective estimation. Here we show that this negative effect of social interactions can be turned into a method to improve collective estimations. We first obtained a statistical model of how humans change their estimation when receiving the estimates made by other individuals. We confirmed using existing experimental data its prediction that individuals use the weighted geometric mean of private and social estimations. We then used this result and the fact that each individual uses a different value of the social weight to devise a method that extracts the subgroups resisting social influence. We found that these subgroups of individuals resisting social influence can make very large improvements in group estimations. This is in contrast to methods using the confidence that each individual declares, for which we find no improvement in group estimations. Also, our proposed method does not need to use historical data to weight individuals by performance. These results show the benefits of using the individual characteristics of the members in a group to better extract collective wisdom.

  14. Colloid facilitated transport in fractured rocks: parameter estimation and comparison with experimental data

    International Nuclear Information System (INIS)

    Viswanathan, H.S.; Wolfsberg, A.V.; Reimus, P.W.; Ware, D.; Lu, G.

    2003-01-01

    Colloid-facilitated migration of plutonium in fractured rock has been implicated in both field and laboratory studies. Other reactive radionuclides may also experience enhanced mobility due to groundwater colloids. Model prediction of this process is necessary for assessment of contaminant boundaries in systems for which radionuclides are already in the groundwater and for performance assessment of potential repositories for radioactive waste. Therefore, a reactive transport model is developed and parameterized using results from controlled laboratory fracture column experiments. Silica, montmorillonite and clinoptilolite colloids are used in the experiments along with plutonium and tritium. The goal of the numerical model is to identify and parameterize the physical and chemical processes that affect the colloid-facilitated transport of plutonium in the fractures. The parameters used in this model are similar in form to those that might be used in a field-scale transport model.

  15. Barriers and facilitators to learn and improve through morbidity and mortality conferences: a qualitative study.

    Science.gov (United States)

    de Vos, Marit S; Hamming, Jaap F; Marang-van de Mheen, Perla J

    2017-11-12

    To explore barriers and facilitators to successful morbidity and mortality conferences (M&M), driving learning and improvement. This is a qualitative study with semistructured interviews. Inductive, thematic content analysis was used to identify barriers and facilitators, which were structured across a pre-existing framework for change in healthcare. Dutch academic surgical department with a long tradition of M&M. An interview sample of surgeons, residents and physician assistants (n=12). A total of 57 barriers and facilitators to successful M&M, covering 18 themes, varying from 'case type' to 'leadership', were perceived by surgical staff. While some factors related to M&M organisation, others concerned individual or social aspects. Eight factors, of which four were at the social level, had simultaneous positive and negative effects (eg, 'hierarchy' and 'team spirit'). Mediating pathways for M&M success were found to relate to available information, staff motivation and realisation processes. This study provides leads for improvement of M&M practice, as well as for further research on key elements of successful M&M. Various factors were perceived to affect M&M success, of which many were individual and social rather than organisational factors, affecting information and realisation processes but also staff motivation. Based on these findings, practical recommendations were formulated to guide efforts towards best practices for M&M. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  16. On Improving Density Estimators which are not Bona Fide Functions

    OpenAIRE

    Gajek, Leslaw

    1986-01-01

    In order to improve the rate of decrease of the IMSE for nonparametric kernel density estimators with nonrandom bandwidth beyond $O(n^{-4/5})$ all current methods must relax the constraint that the density estimate be a bona fide function, that is, be nonnegative and integrate to one. In this paper we show how to achieve similar improvement without relaxing any of these constraints. The method can also be applied for orthogonal series, adaptive orthogonal series, spline, jackknife, and other ...

  17. A care improvement program acting as a powerful learning environment to support nursing students learning facilitation competencies.

    Science.gov (United States)

    Jukema, Jan S; Harps-Timmerman, Annelies; Stoopendaal, Annemiek; Smits, Carolien H M

    2015-11-01

    Change management is an important area of training in undergraduate nursing education. Successful change management in healthcare aimed at improving practices requires facilitation skills that support teams in attaining the desired change. Developing facilitation skills in nursing students requires formal educational support. A Dutch Regional Care Improvement Program based on a nationwide format of change management in healthcare was designed to act as a Powerful Learning Environment for nursing students developing competencies in facilitating change. This article has two aims: to provide comprehensive insight into the program components and to describe students' learning experiences in developing their facilitation skills. This Dutch Regional Care Improvement Program considers three aspects of a Powerful Learning Environment: self-regulated learning; problem-based learning; and complex, realistic and challenging learning tasks. These three aspects were operationalised in five distinct areas of facilitation: increasing awareness of the need for change; leadership and project management; relationship building and communication; importance of the local context; and ongoing monitoring and evaluation. Over a period of 18 months, 42 nursing students, supported by trained lecturer-coaches, took part in nine improvement teams in our Regional Care Improvement Program, executing activities in all five areas of facilitation. Based on the students' experiences, we propose refinements to various components of this program, aimed at strengthening the learning environment. There is a need for further detailed empirical research to study the impact this kind of learning environment has on students developing facilitation competencies in healthcare improvement. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Improving operating room first start efficiency - value of both checklist and a pre-operative facilitator.

    Science.gov (United States)

    Panni, M K; Shah, S J; Chavarro, C; Rawl, M; Wojnarwsky, P K; Panni, J K

    2013-10-01

    There are multiple components leading to improved operating room efficiency. We undertook a project focusing on first case starts, accounting for each delay component on a global basis. Our hypothesis was there would be a reduction in first start delays after we implemented strategies to address the issues identified through this accounting process. An orange sheet checklist was implemented, with specific items that needed to be clear prior to roll back to the operating room (OR), and an OR facilitator was employed to intervene whenever there were any missing items needed for a specific patient. We present the data from this quality improvement project over an 18-month period. Initially, 10.07 (± 0.73) delayed first starts occurred per day but declined steadily over time to a low of 4.95 (± 0.38) per day after 6 months (-49.2 %, P < 0.001). By the end of the project, the most common reasons for delay still included late surgical attending (19%), schedule changes (14%) as well as 'other reasons' (13%), but with an overall reduction per day of each. Anaesthesia delay initially accounted for 11% of the first start delays, but was negligible (< 1%) at the project's completion. While we have a challenging operating room environment, based on our patient population and multiple trainees on both the surgery and anaesthesiology teams, an orange sheet pre-operative checklist in addition to a dedicated pre-operative facilitator allowed us to make a substantial improvement in our on-time first starts. © 2013 The Acta Anaesthesiologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  19. Improved ice loss estimate of the northwestern Greenland ice sheet

    DEFF Research Database (Denmark)

    Kjeldsen, Kristian Kjellerup; Khan, Shfaqat Abbas; Wahr, J.

    2013-01-01

    We estimate ice volume change rates in the northwest Greenland drainage basin during 2003–2009 using Ice, Cloud and land Elevation Satellite (ICESat) laser altimeter data. Elevation changes are often reported to be largest near the frontal portion of outlet glaciers. To improve the volume change estimate, we supplement the ICESat data with altimeter surveys from NASA's Airborne Topographic Mapper from 2002 to 2010 and NASA's Land, Vegetation and Ice Sensor from 2010. The airborne data are mainly concentrated along the ice margin and thus have a significant impact on the estimate of the volume change. Our results show that adding Airborne Topographic Mapper and Land, Vegetation and Ice Sensor data to the ICESat data increases the catchment-wide estimate of ice volume loss by 11%, mainly due to an improved volume loss estimate along the ice sheet margin. Furthermore, our results show...

  20. Conducting an audit to improve the facilitation of emergency maternal and newborn referral in northern Ghana.

    Science.gov (United States)

    Awoonor-Williams, John Koku; Bailey, Patricia E; Yeji, Francis; Adongo, Ayire Emmanuel; Baffoe, Peter; Williams, Afua; Mercer, Sarah

    2015-10-01

    Ghana Health Service conducted an audit to strengthen the referral system for pregnant or recently pregnant women and newborns in northern Ghana. The audit took place in 16 facilities with two 3-month cycles of data collection in 2011. Midwife-led teams tracked 446 referred women until they received definitive treatment. Between the two audit cycles, teams identified and implemented interventions to address gaps in referral services. During this time period, we observed important increases in facilitating referral mechanisms, including a decrease in the dependence on taxis in favour of national or facility ambulances/vehicles; an increase in health workers escorting referrals to the appropriate receiving facility; greater use of referral slips and calling ahead to alert receiving facilities and higher feedback rates. As referral systems require attention from multiple levels of engagement, on the provider end we found that regional managers increasingly resolved staffing shortages; district management addressed the costliness and lack of transport and increased midwives' ability to communicate with pregnant women and drivers; and that facility staff increasingly adhered to guidelines and facilitating mechanisms. By conducting an audit of maternal and newborn referrals, the Ghana Health Service identified areas for improvement that service providers and management at multiple levels addressed, demonstrating a platform for problem solving that could be a model elsewhere.

  1. The expression of glycerol facilitators from various yeast species improves growth on glycerol of Saccharomyces cerevisiae

    Directory of Open Access Journals (Sweden)

    Mathias Klein

    2016-12-01

    Full Text Available Glycerol is an abundant by-product during biodiesel production and additionally has several assets compared to sugars when used as a carbon source for growing microorganisms in the context of biotechnological applications. However, most strains of the platform production organism Saccharomyces cerevisiae grow poorly in synthetic glycerol medium. It has been hypothesized that the uptake of glycerol could be a major bottleneck for the utilization of glycerol in S. cerevisiae. This species exclusively relies on an active transport system for glycerol uptake. This work demonstrates that the expression of predicted glycerol facilitators (Fps1 homologues) from superior glycerol-utilizing yeast species such as Pachysolen tannophilus, Komagataella pastoris, Yarrowia lipolytica and Cyberlindnera jadinii significantly improves the growth performance on glycerol of the previously selected glycerol-consuming S. cerevisiae wild-type strain (CBS 6412-13A). The maximum specific growth rate increased from 0.13 up to 0.18 h^-1 and a biomass yield coefficient of 0.56 g DW/g glycerol was observed. These results pave the way for exploiting the assets of glycerol in the production of fuels, chemicals and pharmaceuticals based on baker's yeast. Keywords: Yeast, Saccharomyces cerevisiae, Glycerol, Transport, Glycerol facilitator, Fps1, Stl1

  2. An Example of an Improvable Rao-Blackwell Improvement, Inefficient Maximum Likelihood Estimator, and Unbiased Generalized Bayes Estimator.

    Science.gov (United States)

    Galili, Tal; Meilijson, Isaac

    2016-01-02

    The Rao-Blackwell theorem offers a procedure for converting a crude unbiased estimator of a parameter θ into a "better" one, in fact unique and optimal if the improvement is based on a minimal sufficient statistic that is complete. In contrast, behind every minimal sufficient statistic that is not complete, there is an improvable Rao-Blackwell improvement. This is illustrated via a simple example based on the uniform distribution, in which a rather natural Rao-Blackwell improvement is uniformly improvable. Furthermore, in this example the maximum likelihood estimator is inefficient, and an unbiased generalized Bayes estimator performs exceptionally well. Counterexamples of this sort can be useful didactic tools for explaining the true nature of a methodology and possible consequences when some of the assumptions are violated. [Received December 2014. Revised September 2015.].
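
    A minimal numerical sketch of the Rao-Blackwell step itself, using the standard U(0, θ) textbook setup rather than the paper's specific incomplete-statistic construction (all numbers synthetic): conditioning the crude unbiased estimator 2·mean(X) on the sufficient statistic max(X) yields (n+1)/n · max(X), with markedly smaller variance.

        import numpy as np

        # Monte Carlo comparison of a crude unbiased estimator of theta with its
        # Rao-Blackwell improvement, for X_1..X_n ~ U(0, theta).
        rng = np.random.default_rng(42)
        theta, n, reps = 1.0, 10, 200_000
        x = rng.uniform(0.0, theta, size=(reps, n))

        crude = 2.0 * x.mean(axis=1)        # E[2*Xbar] = theta, but high variance
        rb = (n + 1) / n * x.max(axis=1)    # E[crude | max(X)] = (n+1)/n * max(X)

        for name, est in (("crude 2*mean", crude), ("Rao-Blackwellized", rb)):
            print(f"{name:18s} mean={est.mean():.4f}  var={est.var():.6f}")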

  3. Patient information, education and self-management in bronchiectasis: facilitating improvements to optimise health outcomes.

    Science.gov (United States)

    Hester, Katy L M; Newton, Julia; Rapley, Tim; De Soyza, Anthony

    2018-05-22

    Bronchiectasis is an incurable lung disease characterised by irreversible airway dilatation. It causes symptoms including chronic productive cough, dyspnoea, and recurrent respiratory infections often requiring hospital admission. Fatigue and reductions in quality of life are also reported in bronchiectasis. Patients often require multi-modal treatments that can be burdensome, leading to issues with adherence. In this article we review the provision of, and requirement for, education and information in bronchiectasis. To date, little research has been undertaken to improve self-management in bronchiectasis in comparison to other chronic conditions, such as COPD, for which there has been a wealth of recent developments. Qualitative work has begun to establish that information deficit is one of the potential barriers to self-management, and that patients feel having credible information is fundamental when learning to live with and manage bronchiectasis. Emerging research offers some insights into ways of improving treatment adherence and approaches to self-management education; highlighting ways of addressing the specific unmet information needs of patients and their families who are living with bronchiectasis. We propose non-pharmacological recommendations to optimise patient self-management and symptom recognition; with the aim of facilitating measurable improvements in health outcomes for patients with bronchiectasis.

  4. Evaluation of attention training and metacognitive facilitation to improve reading comprehension in aphasia.

    Science.gov (United States)

    Lee, Jaime B; Moore Sohlberg, McKay

    2013-05-01

    This pilot study investigated the impact of direct attention training combined with metacognitive facilitation on reading comprehension in individuals with aphasia. A single-subject, multiple baseline design was employed across 4 participants to evaluate potential changes in reading comprehension resulting from an 8-week intervention using Attention Process Training-3 (APT-3). The primary outcome measure was a maze reading task. Pre- and posttesting included attention and reading comprehension measures. Visual inspection of graphed performance data across conditions was used as the primary method of analysis. Treatment effect sizes were calculated for changes in reading comprehension probes from baseline to maintenance phases. Two of the study's 4 participants demonstrated improvements in maze reading, with corresponding effect sizes that were small in magnitude according to benchmarks for aphasia treatment research. All 4 participants made improvements on select standardized measures of attention. Interventions that include a metacognitive component with direct attention training may elicit improvements in participants' attention and allocation of resources. Maze passage reading is a repeated measure that appears sensitive to treatment-related changes in reading comprehension. Issues for future research related to measurement, candidacy, and clinical delivery are discussed.

  5. Improvement Schemes for Indoor Mobile Location Estimation: A Survey

    Directory of Open Access Journals (Sweden)

    Jianga Shang

    2015-01-01

    Full Text Available Location estimation is significant in mobile and ubiquitous computing systems. The complexity and smaller scale of the indoor environment have a great impact on location estimation. The key to location estimation lies in the representation and fusion of uncertain information from multiple sources. The improvement of location estimation is a complicated and comprehensive issue. A lot of research has been done to address this issue; however, existing research typically focuses on certain aspects of the problem and specific methods. This paper reviews mainstream schemes for improving indoor location estimation from multiple levels and perspectives by combining existing works and our own working experience. Initially, we analyze the error sources of common indoor localization techniques and provide a multilayered conceptual framework of improvement schemes for location estimation. This is followed by a discussion of probabilistic methods for location estimation, including Bayes filters, Kalman filters, extended Kalman filters, sigma-point Kalman filters, particle filters, and hidden Markov models. Then, we investigate hybrid localization methods, including multimodal fingerprinting, triangulation fusing multiple measurements, combination of wireless positioning with pedestrian dead reckoning (PDR), and cooperative localization. Next, we focus on location determination approaches that fuse spatial contexts, namely, map matching, landmark fusion, and spatial model-aided methods. Finally, we present directions for future research.
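
    As a concrete illustration of one probabilistic scheme from the family surveyed above, here is a minimal one-dimensional Kalman filter smoothing noisy indoor position fixes (a hypothetical constant-velocity setup; every parameter value is illustrative, not taken from the survey):

        import numpy as np

        # 1-D constant-velocity Kalman filter over noisy position fixes.
        dt = 1.0
        F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition for (pos, vel)
        H = np.array([[1.0, 0.0]])                 # we observe position only
        Q = 0.01 * np.eye(2)                       # process noise covariance
        R = np.array([[4.0]])                      # measurement noise covariance

        x = np.array([0.0, 0.0])                   # initial state estimate
        P = np.eye(2)                              # initial state covariance

        rng = np.random.default_rng(1)
        true_pos = np.cumsum(np.full(50, 0.8))     # target walking at 0.8 m/s
        fixes = true_pos + rng.normal(0, 2.0, 50)  # noisy position fixes

        for z in fixes:
            x = F @ x                              # predict
            P = F @ P @ F.T + Q
            S = H @ P @ H.T + R                    # update
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (np.atleast_1d(z) - H @ x)
            P = (np.eye(2) - K @ H) @ P

        print("final position estimate:", round(x[0], 2), "true:", round(true_pos[-1], 2))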

  6. Improving estimation of flight altitude in wildlife telemetry studies

    Science.gov (United States)

    Poessel, Sharon; Duerr, Adam E.; Hall, Jonathan C.; Braham, Melissa A.; Katzner, Todd

    2018-01-01

    Altitude measurements from wildlife tracking devices, combined with elevation data, are commonly used to estimate the flight altitude of volant animals. However, these data often include measurement error. Understanding this error may improve estimation of flight altitude and benefit applied ecology. There are a number of different approaches that have been used to address this measurement error. These include filtering based on GPS data, filtering based on behaviour of the study species, and use of state-space models to correct measurement error. The effectiveness of these approaches is highly variable. Recent studies have based inference of flight altitude on misunderstandings about avian natural history and technical or analytical tools. In this Commentary, we discuss these misunderstandings and suggest alternative strategies both to resolve some of these issues and to improve estimation of flight altitude. These strategies also can be applied to other measures derived from telemetry data. Synthesis and applications. Our Commentary is intended to clarify and improve upon some of the assumptions made when estimating flight altitude and, more broadly, when using GPS telemetry data. We also suggest best practices for identifying flight behaviour, addressing GPS error, and using flight altitudes to estimate collision risk with anthropogenic structures. Addressing the issues we describe would help improve estimates of flight altitude and advance understanding of the treatment of error in wildlife telemetry studies.

  7. How social information can improve estimation accuracy in human groups.

    Science.gov (United States)

    Jayles, Bertrand; Kim, Hye-Rin; Escobedo, Ramón; Cezera, Stéphane; Blanchet, Adrien; Kameda, Tatsuya; Sire, Clément; Theraulaz, Guy

    2017-11-21

    In our digital and connected societies, the development of social networks, online shopping, and reputation systems raises the questions of how individuals use social information and how it affects their decisions. We report experiments performed in France and Japan, in which subjects could update their estimates after having received information from other subjects. We measure and model the impact of this social information at individual and collective scales. We observe and justify that, when individuals have little prior knowledge about a quantity, the distribution of the logarithm of their estimates is close to a Cauchy distribution. We find that social influence helps the group improve its properly defined collective accuracy. We quantify the improvement of the group estimation when additional controlled and reliable information is provided, unbeknownst to the subjects. We show that subjects' sensitivity to social influence permits us to define five robust behavioral traits and increases with the difference between personal and group estimates. We then use our data to build and calibrate a model of collective estimation to analyze the impact on the group performance of the quantity and quality of information received by individuals. The model quantitatively reproduces the distributions of estimates and the improvement of collective performance and accuracy observed in our experiments. Finally, our model predicts that providing a moderate amount of incorrect information to individuals can counterbalance the human cognitive bias to systematically underestimate quantities and thereby improve collective performance. Copyright © 2017 the Author(s). Published by PNAS.
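
    A small simulation consistent with the paper's observation that log-estimates are roughly Cauchy-distributed (parameters illustrative): with such heavy tails the arithmetic mean of the raw estimates is unreliable, while the median remains a sensible collective answer.

        import numpy as np

        # Individuals estimate a quantity with true value T; their log10 estimates
        # scatter around log10(T) with Cauchy (heavy-tailed) noise, as reported for
        # questions where subjects have little prior knowledge.
        rng = np.random.default_rng(7)
        T = 1_000.0
        log_est = np.log10(T) + 0.3 * rng.standard_cauchy(1_000)
        log_est = np.clip(log_est, -2, 12)          # cap answers at sane magnitudes
        estimates = 10.0 ** log_est

        print("arithmetic mean:", round(estimates.mean()))      # dominated by outliers
        print("median:         ", round(np.median(estimates)))  # robust, near T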

  8. Use of a facilitated discussion model for antenatal care to improve communication.

    Science.gov (United States)

    Lori, Jody R; Munro, Michelle L; Chuey, Meagan R

    2016-02-01

    Achieving health literacy is a critical step to improving health outcomes and the health of a nation. However, there is a lack of research on health literacy in low-resource countries, where maternal health outcomes are at their worst. To examine the usefulness and feasibility of providing focused antenatal care (FANC) in a group setting using picture cards to improve patient-provider communication, patient engagement, and health literacy. An exploratory, mixed methods design was employed to gather pilot data using the Health Literacy Skills Framework. A busy urban district hospital in the Ashanti Region of Ghana was used to gather data during 2014. A facility-driven convenience sample of midwives (n=6) aged 18 years or older, who could speak English or Twi, and had provided antenatal care at the participating hospital during the year prior to the start of the study participated. Data were collected using pre-test and post-test surveys, completed three months after the group FANC was implemented. A semi-structured focus group was conducted with four of the participating midwives and the registered nurse providing support and supervision for the study (n=5) at the time of the post-test. Data were analyzed concurrently to gain a broad understanding of patient communication, engagement, and group FANC. There were no significant differences in the mean communication (t(df=3)=0.541, p=0.626) and engagement (t(df=3)=-0.775, p=0.495) scores between the pre- and post-test. However, the focus group revealed the following themes: (a) improved communication through the use of picture cards; (b) enhanced information sharing and peer support through the facilitated group process; and (c) an improved understanding of patient concerns. The improved communication noted through the use of picture cards and the enhanced information sharing and peer support elicited through the group FANC undoubtedly provided patients with additional tools to invoke

  9. Facilitated Extinction Training to Improve Pharmacotherapy for Smoking Cessation: A Pilot Feasibility Trial.

    Science.gov (United States)

    Brandon, Thomas H; Unrod, Marina; Drobes, David J; Sutton, Steven K; Hawk, Larry W; Simmons, Vani N; Brandon, Karen O; Roetzheim, Richard G; Meltzer, Lauren R; Miller, Ralph R; Cahill, Shawn P

    2017-09-12

    and therapy for other disorders, to improve the extinction and generalization processes thought to underlie much of varenicline's effect. A Facilitated Extinction intervention was developed and found acceptable to smokers and feasible to implement in a research setting. The study sets the stage for a subsequent randomized controlled trial. © The Author 2017. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  10. Parameter Estimation for Improving Association Indicators in Binary Logistic Regression

    Directory of Open Access Journals (Sweden)

    Mahdi Bashiri

    2012-02-01

    Full Text Available The aim of this paper is to estimate binary logistic regression parameters by maximizing the log-likelihood function, with improved association indicators. The parameter estimation steps are explained, measures of association are introduced, and their calculation is analyzed. Moreover, a new set of related indicators based on membership degree level is proposed. Association measures reflect the number of success responses relative to failures in a given number of independent Bernoulli experiments. In parameter estimation, the values of existing indicators are not sensitive to the parameter values, whereas the proposed indicators are sensitive to the estimated parameters during the iterative procedure. The innovation of this study is therefore a new association indicator for binary logistic regression that is more sensitive to the estimated parameters while maximizing the log-likelihood in the iterative procedure.
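
    For reference, the standard iterative scheme the abstract alludes to, maximizing the Bernoulli log-likelihood by Newton-Raphson (a generic sketch; the paper's modified association indicators are not reproduced here):

        import numpy as np

        def logistic_mle(X, y, iters=25):
            """Newton-Raphson (IRLS) estimation of binary logistic regression parameters."""
            X1 = np.column_stack([np.ones(len(X)), X])   # prepend an intercept column
            beta = np.zeros(X1.shape[1])
            for _ in range(iters):
                p = 1.0 / (1.0 + np.exp(-X1 @ beta))     # predicted success probabilities
                W = p * (1.0 - p)                        # Bernoulli variances as weights
                grad = X1.T @ (y - p)                    # score vector
                hess = X1.T @ (X1 * W[:, None])          # observed information X'WX
                beta += np.linalg.solve(hess, grad)      # Newton step
            return beta

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 2))
        true_beta = np.array([-0.5, 1.2, -0.8])
        y = rng.binomial(1, 1 / (1 + np.exp(-(true_beta[0] + X @ true_beta[1:]))))
        print(logistic_mle(X, y))                        # close to true_beta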

  11.  Higher Order Improvements for Approximate Estimators

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Salanié, Bernard

    Many modern estimation methods in econometrics approximate an objective function, through simulation or discretization for instance. The resulting "approximate" estimator is often biased; and it always incurs an efficiency loss. We here propose three methods to improve the properties of such appr......Many modern estimation methods in econometrics approximate an objective function, through simulation or discretization for instance. The resulting "approximate" estimator is often biased; and it always incurs an efficiency loss. We here propose three methods to improve the properties...... of such approximate estimators at a low computational cost. The first two methods correct the objective function so as to remove the leading term of the bias due to the approximation. One variant provides an analytical bias adjustment, but it only works for estimators based on stochastic approximators......, such as simulation-based estimators. Our second bias correction is based on ideas from the resampling literature; it eliminates the leading bias term for non-stochastic as well as stochastic approximators. Finally, we propose an iterative procedure where we use Newton-Raphson (NR) iterations based on a much finer...

  12. Health Information Technology: Meaningful Use and Next Steps to Improving Electronic Facilitation of Medication Adherence.

    Science.gov (United States)

    Bosworth, Hayden B; Zullig, Leah L; Mendys, Phil; Ho, Michael; Trygstad, Troy; Granger, Christopher; Oakes, Megan M; Granger, Bradi B

    2016-03-15

    The use of health information technology (HIT) may improve medication adherence, but challenges for implementation remain. The aim of this paper is to review the current state of HIT as it relates to medication adherence programs, acknowledge the potential barriers in light of current legislation, and provide recommendations to improve ongoing medication adherence strategies through the use of HIT. We describe four potential HIT barriers that may impact interoperability and subsequent medication adherence. Legislation in the United States has incentivized the use of HIT to facilitate and enhance medication adherence. The Health Information Technology for Economic and Clinical Health (HITECH) Act was recently adopted and establishes federal standards for the so-called "meaningful use" of certified electronic health record (EHR) technology that can directly impact medication adherence. The four persistent HIT barriers to medication adherence include (1) underdevelopment of data reciprocity across clinical, community, and home settings, limiting the capture of data necessary for clinical care; (2) inconsistent data definitions and lack of harmonization of patient-focused data standards, making existing data difficult to use for patient-centered outcomes research; (3) inability to effectively use the national drug code information from the various electronic health record and claims datasets for adherence purposes; and (4) lack of data capture for medication management interventions, such as medication therapy management (MTM), in the EHR. Potential recommendations to address these issues are discussed. To make meaningful, high quality data accessible, and subsequently improve medication adherence, these challenges will need to be addressed to fully reach the potential of HIT in impacting one of our largest public health issues.

  13. An improved method for estimating the frequency correlation function

    KAUST Repository

    Chelli, Ali; Pätzold, Matthias

    2012-01-01

    For time-invariant frequency-selective channels, the transfer function is a superposition of waves having different propagation delays and path gains. In order to estimate the frequency correlation function (FCF) of such channels, the frequency averaging technique can be utilized. The obtained FCF can be expressed as a sum of auto-terms (ATs) and cross-terms (CTs). The ATs are caused by the autocorrelation of individual path components. The CTs are due to the cross-correlation of different path components. These CTs have no physical meaning and lead to an estimation error. We propose a new estimation method aiming to improve the estimation accuracy of the FCF of a band-limited transfer function. The basic idea behind the proposed method is to introduce a kernel function aiming to reduce the CT effect, while preserving the ATs. In this way, we can improve the estimation of the FCF. The performance of the proposed method and the frequency averaging technique is analyzed using a synthetically generated transfer function. We show that the proposed method is more accurate than the frequency averaging technique. The accurate estimation of the FCF is crucial for the system design. In fact, we can determine the coherence bandwidth from the FCF. The exact knowledge of the coherence bandwidth is beneficial in both the design as well as optimization of frequency interleaving and pilot arrangement schemes. © 2012 IEEE.
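
    A minimal sketch of the frequency averaging technique the paper improves on, for a synthetic two-path transfer function (the proposed kernel-based suppression of cross-terms is not reproduced here): the FCF is estimated by correlating H(f) with a frequency-shifted copy of itself.

        import numpy as np

        # Synthetic time-invariant two-path transfer function
        # H(f) = sum_k a_k * exp(-j*2*pi*f*tau_k)
        f = np.arange(0.0, 20e6, 10e3)        # 20 MHz band on a 10 kHz grid
        gains = np.array([1.0, 0.6])
        delays = np.array([0.2e-6, 1.1e-6])   # path delays in seconds
        H = (gains * np.exp(-2j * np.pi * f[:, None] * delays)).sum(axis=1)

        def fcf_freq_avg(H, max_lag):
            """Estimate FCF r(k) = <H(f) H*(f + k*df)> by averaging over frequency."""
            r = np.empty(max_lag, dtype=complex)
            for k in range(max_lag):
                r[k] = np.mean(H[: len(H) - k] * np.conj(H[k:]))
            return r

        r = fcf_freq_avg(H, 400)
        k90 = np.argmax(np.abs(r) < 0.9 * np.abs(r[0]))   # first lag below the 0.9 level
        print("coherence bandwidth ~", k90 * 10e3, "Hz (0.9 correlation level)")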

  15. Improved ice loss estimate of the northwestern Greenland ice sheet

    NARCIS (Netherlands)

    Kjeldsen, K.K.; Khan, S.A.; van den Broeke, M.R.; van Angelen, J.H.

    2013-01-01

    We estimate ice volume change rates in the northwest Greenland drainage basin during 2003–2009 using Ice, Cloud and land Elevation Satellite (ICESat) laser altimeter data. Elevation changes are often reported to be largest near the frontal portion of outlet glaciers. To improve the volume change

  16. Improving the accuracy of livestock distribution estimates through spatial interpolation.

    Science.gov (United States)

    Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy

    2012-11-01

    Animal distribution maps serve many purposes such as estimating transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps is highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of using different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design and aggregation is demonstrated, and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates, but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared to one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because under- and over-estimates average out (e.g. when aggregating cattle number estimates from subcounty to district level, P < 0.001). When spatial interpolation is used to fill in missing values in non-sampled areas, accuracy is improved remarkably. This holds especially for low sample sizes and spatially evenly distributed samples (e.g. P < 0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation on district level
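
    A minimal sketch of filling non-sampled areas by spatial interpolation, here with inverse-distance weighting as a stand-in (the study's exact interpolation method is not specified in this excerpt; coordinates and counts below are synthetic):

        import numpy as np

        def idw(xy_known, values, xy_query, power=2.0):
            """Inverse-distance-weighted estimates at unsampled locations."""
            d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
            d = np.maximum(d, 1e-9)          # avoid division by zero on sampled points
            w = 1.0 / d ** power
            return (w * values).sum(axis=1) / w.sum(axis=1)

        rng = np.random.default_rng(3)
        sampled_xy = rng.uniform(0, 100, size=(170, 2))    # e.g. 170 sampled parishes
        counts = rng.poisson(800, size=170).astype(float)  # cattle per parish (synthetic)
        missing_xy = rng.uniform(0, 100, size=(30, 2))     # parishes without survey data
        print(idw(sampled_xy, counts, missing_xy)[:5])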

  17. The art and science of cancer education and evaluation: toward facilitating improved patient outcomes.

    Science.gov (United States)

    Johnson, Lenora; Ousley, Anita; Swarz, Jeffrey; Bingham, Raymond J; Erickson, J Bianca; Ellis, Steven; Moody, Terra

    2011-03-01

    Cancer education is a constantly evolving field, as science continues to advance both our understanding of cancer and its effects on patients, families, and communities. Moving discoveries to practice expeditiously is paramount to impacting cancer outcomes. The continuing education of cancer care professionals throughout their practice life is vital to facilitating the adoption of therapeutic innovations. Meanwhile, more general educational programs serve to keep cancer patients, their families, and the public informed of the latest findings in cancer research. The National Cancer Institute conducted an assessment of the current knowledge base for cancer education which involved two literature reviews, one of the general literature of the evaluation of medical and health education efforts, and the other of the preceding 5 years of the Journal of Cancer Education (JCE). These reviews explored a wide range of educational models and methodologies. In general, those that were most effective used multiple methodologies, interactive techniques, and multiple exposures over time. Less than one third of the articles in the JCE reported on a cancer education or communication product, and of these, only 70% had been evaluated for effectiveness. Recommendations to improve the evaluation of cancer education and the educational focus of the JCE are provided.

  18. IMPROVEMENT OF THE RICHNESS ESTIMATES OF maxBCG CLUSTERS

    International Nuclear Information System (INIS)

    Rozo, Eduardo; Rykoff, Eli S.; Koester, Benjamin P.; Hansen, Sarah; Becker, Matthew; Bleem, Lindsey; McKay, Timothy; Hao Jiangang; Evrard, August; Wechsler, Risa H.; Sheldon, Erin; Johnston, David; Annis, James; Scranton, Ryan

    2009-01-01

    Minimizing the scatter between cluster mass and accessible observables is an important goal for cluster cosmology. In this work, we introduce a new matched filter richness estimator, and test its performance using the maxBCG cluster catalog. Our new estimator significantly reduces the variance in the LX-richness relation, from σ²_lnLX = (0.86 ± 0.02)² to σ²_lnLX = (0.69 ± 0.02)². Relative to the maxBCG richness estimate, it also removes the strong redshift dependence of the LX-richness scaling relations, and is significantly more robust to photometric and redshift errors. These improvements are largely due to the better treatment of galaxy color data. We also demonstrate that the scatter in the LX-richness relation depends on the aperture used to estimate cluster richness, and introduce a novel approach for optimizing said aperture which can easily be generalized to other mass tracers.

  19. Practitioner's knowledge representation a pathway to improve software effort estimation

    CERN Document Server

    Mendes, Emilia

    2014-01-01

    The main goal of this book is to help organizations improve their effort estimates and effort estimation processes by providing a step-by-step methodology that takes them through the creation and validation of models that are based on their own knowledge and experience. Such models, once validated, can then be used to obtain predictions, carry out risk analyses, enhance their estimation processes for new projects and generally advance them as learning organizations.Emilia Mendes presents the Expert-Based Knowledge Engineering of Bayesian Networks (EKEBNs) methodology, which she has used and adapted during the course of several industry collaborations with different companies world-wide over more than 6 years. The book itself consists of two major parts: first, the methodology's foundations in knowledge management, effort estimation (with special emphasis on the intricacies of software and Web development) and Bayesian networks are detailed; then six industry case studies are presented which illustrate the pra...

  20. Improved Sparse Channel Estimation for Cooperative Communication Systems

    Directory of Open Access Journals (Sweden)

    Guan Gui

    2012-01-01

    Full Text Available Accurate channel state information (CSI) is necessary at the receiver for coherent detection in amplify-and-forward (AF) cooperative communication systems. To estimate the channel, traditional methods, that is, least squares (LS) and least absolute shrinkage and selection operator (LASSO), are based on assumptions of either a dense channel or a globally sparse channel. However, the LS-based linear method neglects the inherent sparse structure information while the LASSO-based sparse channel method cannot take full advantage of the prior information. Based on the partial sparsity assumption of the cooperative channel model, we propose an improved channel estimation method with a partial sparse constraint. First, using sparse decomposition theory, channel estimation is formulated as a compressive sensing problem. Second, the cooperative channel is reconstructed by LASSO with a partial sparse constraint. Finally, numerical simulations are carried out to confirm the superiority of the proposed method over global sparse channel estimation methods.
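
    A minimal compressed-sensing sketch in the spirit of the LASSO baseline discussed above, solved by iterative soft-thresholding on a synthetic sparse channel (pilot matrix, sparsity level and regularization weight are all illustrative):

        import numpy as np

        def ista_lasso(A, y, lam=0.05, iters=500):
            """Minimize 0.5*||A h - y||^2 + lam*||h||_1 by iterative soft-thresholding."""
            L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
            h = np.zeros(A.shape[1])
            for _ in range(iters):
                g = h - A.T @ (A @ h - y) / L        # gradient step
                h = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
            return h

        rng = np.random.default_rng(5)
        n_taps, n_pilots = 128, 48
        h_true = np.zeros(n_taps)
        h_true[rng.choice(n_taps, 4, replace=False)] = rng.normal(size=4)  # 4-sparse channel
        A = rng.normal(size=(n_pilots, n_taps)) / np.sqrt(n_pilots)        # pilot matrix
        y = A @ h_true + 0.01 * rng.normal(size=n_pilots)
        h_hat = ista_lasso(A, y)
        print("true support:     ", np.flatnonzero(h_true))
        print("recovered support:", np.flatnonzero(np.abs(h_hat) > 0.05))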

  1. Error Estimation and Accuracy Improvements in Nodal Transport Methods

    International Nuclear Information System (INIS)

    Zamonsky, O.M.

    2000-01-01

    The accuracy of the solutions produced by the Discrete Ordinates neutron transport nodal methods is analyzed. The new numerical methodologies obtained increase the accuracy of the analyzed schemes and give a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors and by proposing spatial reconstructions of the angular fluxes that are more accurate than those used until now. An a posteriori error estimator is rigorously obtained for one-dimensional systems that, in certain types of problems, allows the accuracy of the solutions to be quantified. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by decomposition of the mentioned estimators. This makes the proposed methodology suitable for performing adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges in which the proposed approximations are valid

  2. Improving doctor-patient communication in the outpatient setting using a facilitation tool: a preliminary study.

    Science.gov (United States)

    Neeman, Naama; Isaac, Thomas; Leveille, Suzanne; Dimonda, Clementina; Shin, Jacob Y; Aronson, Mark D; Freedman, Steven D

    2012-08-01

    Patients often do not fully understand medical information discussed during office visits. This can result in lack of adherence to recommended treatment plans and poorer health outcomes. We developed and implemented a program utilizing an encounter form, which provides structure to the medical interaction and facilitates bidirectional communication and informed decision-making. We conducted a prospective quality improvement intervention at a large tertiary-care academic medical center utilizing the encounter form and studied the effect on patient satisfaction, understanding and confidence in communicating with physicians. The intervention included 108 patients seen by seven physicians in five sub-specialties. Ninety-eight percent of patients were extremely satisfied (77%) or somewhat satisfied (21%) with the program. Ninety-six percent of patients reported being involved in decisions about their care and treatments as well as high levels of understanding of medical information that was discussed during the visit. Sixty-nine percent of patients reported that they shared the encounter form with their families and friends. Patients' self-confidence in communicating with their doctors increased from a score of 8.1 to 8.7 post-intervention (P-value = 0.0018). When comparing pre- and post-intervention experiences, only 38% of patients felt that their problems and questions were adequately addressed by other physicians pre-intervention, compared with 94% post-intervention. We introduced a program to enhance physician-patient communication and found that patients were highly satisfied, more informed and more actively involved in their care. This approach may be an easily generalizable approach to improving physician-patient communication at outpatient visits.

  3. Use of a structured template to facilitate practice-based learning and improvement projects.

    Science.gov (United States)

    McClain, Elizabeth K; Babbott, Stewart F; Tsue, Terance T; Girod, Douglas A; Clements, Debora; Gilmer, Lisa; Persons, Diane; Unruh, Greg

    2012-06-01

    The Accreditation Council for Graduate Medical Education (ACGME) requires residency programs to meet and demonstrate outcomes across 6 competencies. Measuring residents' competency in practice-based learning and improvement (PBLI) is particularly challenging. We developed an educational tool to meet ACGME requirements for PBLI. The PBLI template helped programs document quality improvement (QI) projects and supported increased scholarly activity surrounding PBLI learning. We reviewed program requirements for 43 residency and fellowship programs and identified specific PBLI requirements for QI activities. We also examined ACGME Program Information Form responses on PBLI core competency questions surrounding QI projects for program sites visited in 2008-2009. Data were integrated by a multidisciplinary committee to develop a peer-protected PBLI template guiding programs through process, documentation, and evaluation of QI projects. All steps were reviewed and approved through our GME Committee structure. An electronic template, companion checklist, and evaluation form were developed using identified project characteristics to guide programs through the PBLI process and facilitate documentation and evaluation of the process. During a 24 month period, 27 programs have completed PBLI projects, and 15 have reviewed the template with their education committees, but have not initiated projects using the template. The development of the tool generated program leaders' support because the tool enhanced the ability to meet program-specific objectives. The peer-protected status of this document for confidentiality and from discovery has been beneficial for program usage. The document aggregates data on PBLI and QI initiatives, offers opportunities to increase scholarship in QI, and meets the ACGME goal of linking measures to outcomes important to meeting accreditation requirements at the program and institutional level.

  4. Improving multisensor estimation of heavy-to-extreme precipitation via conditional bias-penalized optimal estimation

    Science.gov (United States)

    Kim, Beomgeun; Seo, Dong-Jun; Noh, Seong Jin; Prat, Olivier P.; Nelson, Brian R.

    2018-01-01

    A new technique for merging radar precipitation estimates and rain gauge data is developed and evaluated to improve multisensor quantitative precipitation estimation (QPE), in particular, of heavy-to-extreme precipitation. Unlike the conventional cokriging methods which are susceptible to conditional bias (CB), the proposed technique, referred to herein as conditional bias-penalized cokriging (CBPCK), explicitly minimizes Type-II CB for improved quantitative estimation of heavy-to-extreme precipitation. CBPCK is a bivariate version of extended conditional bias-penalized kriging (ECBPK) developed for gauge-only analysis. To evaluate CBPCK, cross validation and visual examination are carried out using multi-year hourly radar and gauge data in the North Central Texas region in which CBPCK is compared with the variant of the ordinary cokriging (OCK) algorithm used operationally in the National Weather Service Multisensor Precipitation Estimator. The results show that CBPCK significantly reduces Type-II CB for estimation of heavy-to-extreme precipitation, and that the margin of improvement over OCK is larger in areas of higher fractional coverage (FC) of precipitation. When FC > 0.9 and hourly gauge precipitation is > 60 mm, the reduction in root mean squared error (RMSE) by CBPCK over radar-only (RO) is about 12 mm while the reduction in RMSE by OCK over RO is about 7 mm. CBPCK may be used in real-time analysis or in reanalysis of multisensor precipitation for which accurate estimation of heavy-to-extreme precipitation is of particular importance.

  5. Barriers and facilitators to implementing continuous quality improvement programs in colonoscopy services: a mixed methods systematic review.

    Science.gov (United States)

    Candas, Bernard; Jobin, Gilles; Dubé, Catherine; Tousignant, Mario; Abdeljelil, Anis Ben; Grenier, Sonya; Gagnon, Marie-Pierre

    2016-02-01

    Continuous quality improvement (CQI) programs may result in quality of care and outcome improvement. However, the implementation of such programs has proven to be very challenging. This mixed methods systematic review identifies barriers and facilitators pertaining to the implementation of CQI programs in colonoscopy services and how they relate to endoscopists, nurses, managers, and patients. We developed a search strategy adapted to 15 databases. Studies had to report on the implementation of a CQI intervention and identified barriers or facilitators relating to any of the four groups of actors directly concerned by the provision of colonoscopies. The quality of the selected studies was assessed and findings were extracted, categorized, and synthesized using a generic extraction grid customized through an iterative process. We extracted 99 findings from the 15 selected publications. Although involving all actors is the most cited factor, the literature mainly focuses on the facilitators and barriers associated with the endoscopists' perspective. The most reported facilitators to CQI implementation are perception of feasibility, adoption of a formative approach, training and education, confidentiality, and assessing a limited number of quality indicators. Receptive attitudes, a sense of ownership and perceptions of positive impacts also facilitate the implementation. Finally, an organizational environment conducive to quality improvement has to be inclusive of all user groups, explicitly supportive, and provide appropriate resources. Our findings corroborate the current models of adoption of innovations. However, a significant knowledge gap remains with respect to barriers and facilitators pertaining to nurses, patients, and managers.

  6. Improved Differential Evolution Algorithm for Parameter Estimation to Improve the Production of Biochemical Pathway

    Directory of Open Access Journals (Sweden)

    Chuii Khim Chong

    2012-06-01

    Full Text Available This paper introduces an improved Differential Evolution algorithm (IDE) that aims to improve performance in estimating the relevant parameters of metabolic pathway data, in order to simulate the glycolysis pathway of yeast. Metabolic pathway data are expected to be of significant help in the development of efficient tools for kinetic modeling and parameter estimation platforms. Many computational algorithms face obstacles due to noisy data and the difficulty of estimating a myriad of system parameters, and require long computation times. The proposed algorithm (IDE) is a hybrid of a Differential Evolution algorithm (DE) and a Kalman Filter (KF). The outcome of IDE is shown to be superior to a Genetic Algorithm (GA) and DE. Experimental results for IDE show estimated optimal kinetic parameter values, shorter computation times and increased accuracy of simulated results compared with other estimation algorithms
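
    A minimal DE/rand/1/bin loop of the kind the IDE builds on, applied to a toy curve-fitting objective standing in for the kinetic-model fit (the Kalman filter hybridization is not reproduced here):

        import numpy as np

        def de_minimize(obj, bounds, pop_size=30, F=0.8, CR=0.9, gens=200, seed=0):
            """Classic DE/rand/1/bin minimizer over box-constrained parameters."""
            rng = np.random.default_rng(seed)
            lo, hi = bounds[:, 0], bounds[:, 1]
            dim = len(bounds)
            pop = rng.uniform(lo, hi, size=(pop_size, dim))
            fit = np.array([obj(p) for p in pop])
            for _ in range(gens):
                for i in range(pop_size):
                    others = [j for j in range(pop_size) if j != i]
                    a, b, c = pop[rng.choice(others, 3, replace=False)]
                    mutant = np.clip(a + F * (b - c), lo, hi)     # mutation
                    cross = rng.random(dim) < CR                  # binomial crossover
                    cross[rng.integers(dim)] = True               # keep >= 1 mutant gene
                    trial = np.where(cross, mutant, pop[i])
                    f_trial = obj(trial)
                    if f_trial < fit[i]:                          # greedy selection
                        pop[i], fit[i] = trial, f_trial
            return pop[np.argmin(fit)], fit.min()

        # Toy surrogate for a kinetic-parameter fit: recover (A, k) = (2.0, 0.5)
        # from first-order decay data.
        t = np.linspace(0, 5, 40)
        y_obs = 2.0 * np.exp(-0.5 * t)
        obj = lambda p: np.sum((p[0] * np.exp(-p[1] * t) - y_obs) ** 2)
        best, err = de_minimize(obj, np.array([[0.0, 5.0], [0.0, 5.0]]))
        print("estimated parameters:", best.round(3), " residual:", round(err, 6))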

  7. In Vivo-Like Culture Conditions in a Bioreactor Facilitate Improved Tissue Quality in Corneal Storage.

    Science.gov (United States)

    Schmid, Richard; Tarau, Ioana-Sandra; Rossi, Angela; Leonhardt, Stefan; Schwarz, Thomas; Schuerlein, Sebastian; Lotz, Christian; Hansmann, Jan

    2018-01-01

    The cornea is the most-transplanted tissue worldwide. However, the availability and quality of grafts are limited due to the current methods of corneal storage. In this study, a dynamic bioreactor system is employed to enable the control of intraocular pressure and the culture at the air-liquid interface. Thereby, in vivo-like storage conditions are achieved. Different media combinations for endothelium and epithelium are tested in standard and dynamic conditions to enhance the viability of the tissue. In contrast to culture conditions used in eye banks, the combination of the bioreactor and biochrom medium 1 allows to preserve the corneal endothelium and the epithelium. Assessment of transparency, swelling, and the trans-epithelial-electrical-resistance (TEER) strengthens the impact of the in vivo-like tissue culture. For example, compared to corneas stored under static conditions, significantly lower optical densities and significantly higher TEER values were measured (p-value <0.05). Furthermore, healing of epithelial defects is enabled in the bioreactor, characterized by re-epithelialization and initiated stromal regeneration. Based on the obtained results, an easy-to-use 3D-printed bioreactor composed of only two parts was derived to translate the technology from the laboratory to the eye banks. This optimized bioreactor facilitates noninvasive microscopic monitoring. The improved storage conditions ameliorate the quality of corneal grafts and the storage time in the eye banks to increase availability and reduce re-grafting. © 2017 The Authors. Biotechnology Journal Published by Wiley-VCH Verlag GmbH & Co. KGaA.

  8. Reporting to Improve Reproducibility and Facilitate Validity Assessment for Healthcare Database Studies V1.0

    NARCIS (Netherlands)

    Wang, Shirley V.; Schneeweiss, Sebastian; Berger, Marc L.; Brown, Jeffrey; de Vries, Frank; Douglas, Ian; Gagne, Joshua J.; Gini, Rosa; Klungel, Olaf; Mullins, C. Daniel; Nguyen, Michael D.; Rassen, Jeremy A.; Smeeth, Liam; Sturkenboom, Miriam C J M

    2017-01-01

    Purpose: Defining a study population and creating an analytic dataset from longitudinal healthcare databases involves many decisions. Our objective was to catalogue scientific decisions underpinning study execution that should be reported to facilitate replication and enable assessment of validity

  10. The Source Signature Estimator - System Improvements and Applications

    Energy Technology Data Exchange (ETDEWEB)

    Sabel, Per; Brink, Mundy; Eidsvig, Seija; Jensen, Lars

    1998-12-31

    This presentation relates briefly to the first part of the joint project on post-survey analysis of shot-by-shot based source signature estimation. The improvements of a Source Signature Estimator system are analysed. The notional source method can give suboptimal results when not inputting the real array geometry, i.e. actual separations between the sub-arrays of an air gun array, to the notional source algorithm. This constraint has been addressed herein and was implemented for the first time in the field in summer 1997. The second part of this study will show the potential advantages for interpretation when the signature estimates are then to be applied in the data processing. 5 refs., 1 fig.

  11. Improving chemical species tomography of turbulent flows using covariance estimation.

    Science.gov (United States)

    Grauer, Samuel J; Hadwin, Paul J; Daun, Kyle J

    2017-05-01

    Chemical species tomography (CST) experiments can be divided into limited-data and full-rank cases. Both require solving ill-posed inverse problems, and thus the measurement data must be supplemented with prior information to carry out reconstructions. The Bayesian framework formalizes the role of additional information, expressed as the mean and covariance of a joint-normal prior probability density function. We present techniques for estimating the spatial covariance of a flow under limited-data and full-rank conditions. Our results show that incorporating a covariance estimate into CST reconstruction via a Bayesian prior increases the accuracy of instantaneous estimates. Improvements are especially dramatic in real-time limited-data CST, which is directly applicable to many industrially relevant experiments.
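
    A minimal sketch of the Bayesian update involved, for a linear CST measurement model y = A x + noise with a joint-normal prior N(mu, Gamma) (all matrices below are synthetic; this is a generic linear-Gaussian inversion, not the authors' estimator):

        import numpy as np

        rng = np.random.default_rng(11)
        n_pix, n_beams = 100, 30              # discretized species field, laser beams
        A = rng.random((n_beams, n_pix))      # synthetic path-length matrix
        x_true = rng.gamma(2.0, 1.0, n_pix)
        sigma = 0.05
        y = A @ x_true + sigma * rng.normal(size=n_beams)

        # Joint-normal prior: flat mean plus squared-exponential "spatial" covariance
        # (a 1-D pixel index stands in for real beam/pixel geometry).
        mu = np.full(n_pix, x_true.mean())
        idx = np.arange(n_pix)
        Gamma = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 5.0) ** 2)

        # Posterior mean of x | y for the linear-Gaussian model:
        #   x_hat = (A'A/sigma^2 + Gamma^-1)^-1 (A'y/sigma^2 + Gamma^-1 mu)
        Gi = np.linalg.inv(Gamma + 1e-8 * np.eye(n_pix))   # jitter for conditioning
        x_hat = np.linalg.solve(A.T @ A / sigma**2 + Gi, A.T @ y / sigma**2 + Gi @ mu)
        print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))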

  12. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    Science.gov (United States)

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-01-01

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314

  13. An Improvement to Interval Estimation for Small Samples

    Directory of Open Access Journals (Sweden)

    SUN Hui-Ling

    2017-02-01

    Full Text Available Because it is difficult and complex to determine the probability distribution of small samples, it is improper to use traditional probability theory for parameter estimation with small samples. The Bayes Bootstrap method is commonly used in practice. However, the Bayes Bootstrap method has its own limitations. In this article an improvement to the Bayes Bootstrap method is given: the method extends the number of samples by numerical simulation without changing the circumstances of the original small sample. The new method can give accurate interval estimates for small samples. Finally, the improved method is applied to specific small-sample problems by Monte Carlo simulation. The effectiveness and practicability of the improved Bootstrap method are thereby demonstrated.
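
    For contrast with the Bayes Bootstrap variants the paper discusses, a generic percentile bootstrap interval for a small sample (the textbook procedure, not the paper's improved method; the sample values are made up):

        import numpy as np

        rng = np.random.default_rng(2024)
        sample = np.array([9.8, 10.4, 9.6, 10.9, 10.1, 9.7, 10.3])  # small sample, n = 7

        B = 20_000
        boot_means = np.empty(B)
        for b in range(B):
            resample = rng.choice(sample, size=sample.size, replace=True)
            boot_means[b] = resample.mean()

        lo, hi = np.percentile(boot_means, [2.5, 97.5])
        print(f"mean = {sample.mean():.3f}, 95% percentile CI = ({lo:.3f}, {hi:.3f})")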

  14. Dictionary-based fiber orientation estimation with improved spatial consistency.

    Science.gov (United States)

    Ye, Chuyang; Prince, Jerry L

    2018-02-01

    Diffusion magnetic resonance imaging (dMRI) has enabled in vivo investigation of white matter tracts. Fiber orientation (FO) estimation is a key step in tract reconstruction and has been a popular research topic in dMRI analysis. In particular, the sparsity assumption has been used in conjunction with a dictionary-based framework to achieve reliable FO estimation with a reduced number of gradient directions. Because image noise can have a deleterious effect on the accuracy of FO estimation, previous works have incorporated spatial consistency of FOs in the dictionary-based framework to improve the estimation. However, because FOs are only indirectly determined from the mixture fractions of dictionary atoms and not modeled as variables in the objective function, these methods do not incorporate FO smoothness directly, and their ability to produce smooth FOs could be limited. In this work, we propose an improvement to Fiber Orientation Reconstruction using Neighborhood Information (FORNI), which we call FORNI+; this method estimates FOs in a dictionary-based framework where FO smoothness is better enforced than in FORNI alone. We describe an objective function that explicitly models the actual FOs and the mixture fractions of dictionary atoms. Specifically, it consists of data fidelity between the observed signals and the signals represented by the dictionary, pairwise FO dissimilarity that encourages FO smoothness, and weighted ℓ1-norm terms that ensure the consistency between the actual FOs and the FO configuration suggested by the dictionary representation. The FOs and mixture fractions are then jointly estimated by minimizing the objective function using an iterative alternating optimization strategy. FORNI+ was evaluated on a simulation phantom, a physical phantom, and real brain dMRI data. In particular, in the real brain dMRI experiment, we have qualitatively and quantitatively evaluated the reproducibility of the proposed method. Results demonstrate that

  15. Barriers and Facilitators to Implementing a Change Initiative in Long-Term Care Using the INTERACT® Quality Improvement Program.

    Science.gov (United States)

    Tappen, Ruth M; Wolf, David G; Rahemi, Zahra; Engstrom, Gabriella; Rojido, Carolina; Shutes, Jill M; Ouslander, Joseph G

    Implementation of major organizational change initiatives presents a challenge for long-term care leadership. Implementation of the INTERACT® (Interventions to Reduce Acute Care Transfers) quality improvement program, designed to improve the management of acute changes in condition and reduce unnecessary emergency department visits and hospitalizations of nursing home residents, serves as an example to illustrate the facilitators and barriers to major change in long-term care. As part of a larger study of the impact of INTERACT® on rates of emergency department visits and hospitalizations, staff of 71 nursing homes were called monthly to follow-up on their progress and discuss successful facilitating strategies and any challenges and barriers they encountered during the yearlong implementation period. Themes related to barriers and facilitators were identified. Six major barriers to implementation were identified: the magnitude and complexity of the change (35%), instability of facility leadership (27%), competing demands (40%), stakeholder resistance (49%), scarce resources (86%), and technical problems (31%). Six facilitating strategies were also reported: organization-wide involvement (68%), leadership support (41%), use of administrative authority (14%), adequate training (66%), persistence and oversight on the part of the champion (73%), and unfolding positive results (14%). Successful introduction of a complex change such as the INTERACT® quality improvement program in a long-term care facility requires attention to the facilitators and barriers identified in this report from those at the frontline.

  16. Improvement of Source Number Estimation Method for Single Channel Signal.

    Directory of Open Access Journals (Sweden)

    Zhi Dong

    Full Text Available Source number estimation methods for single channel signals are investigated and improvements to each method are suggested in this work. Firstly, the single channel data is converted to multi-channel form by a delay process. Then, algorithms used in array signal processing, such as Gerschgorin's disk estimation (GDE) and minimum description length (MDL), are introduced to estimate the source number of the received signal. Previous results have shown that MDL, based on information theoretic criteria (ITC), obtains superior performance to GDE at low SNR; however, it cannot handle signals containing colored noise. On the contrary, the GDE method can eliminate the influence of colored noise, but its performance at low SNR is not satisfactory. To resolve these problems and contradictions, this work makes substantial improvements to both methods. A diagonal loading technique is employed to ameliorate the MDL method, and a jackknife technique is used to optimize the data covariance matrix in order to improve the performance of the GDE method. Simulation results illustrate that the performance of the original methods is greatly improved.

  17. Increasing fMRI sampling rate improves Granger causality estimates.

    Directory of Open Access Journals (Sweden)

    Fa-Hsuan Lin

    Full Text Available Estimation of causal interactions between brain areas is necessary for elucidating large-scale functional brain networks underlying behavior and cognition. Granger causality analysis of time series data can quantitatively estimate directional information flow between brain regions. Here, we show that such estimates are significantly improved when the temporal sampling rate of functional magnetic resonance imaging (fMRI) is increased 20-fold. Specifically, healthy volunteers performed a simple visuomotor task during blood oxygenation level dependent (BOLD) contrast based whole-head inverse imaging (InI). Granger causality analysis based on raw InI BOLD data sampled at 100-ms resolution detected the expected causal relations, whereas when the data were downsampled to the temporal resolution of 2 s typically used in echo-planar fMRI, the causality could not be detected. An additional control analysis, in which we SINC interpolated additional data points to the downsampled time series at 0.1-s intervals, confirmed that the improvements achieved with the real InI data were not explainable by the increased time-series length alone. We therefore conclude that the high-temporal resolution of InI improves the Granger causality connectivity analysis of the human brain.
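
    A minimal pairwise Granger test of the kind underlying such analyses, comparing restricted and full autoregressive fits by ordinary least squares on synthetic series (illustrative only, not the authors' pipeline):

        import numpy as np

        def granger_stat(x, y, lag=2):
            """F-like statistic for 'x Granger-causes y' from restricted vs. full AR fits."""
            n = len(y)
            Y = y[lag:]
            X_r = np.column_stack([np.ones(n - lag)]
                                  + [y[lag - k: n - k] for k in range(1, lag + 1)])
            X_f = np.column_stack([X_r]
                                  + [x[lag - k: n - k] for k in range(1, lag + 1)])
            rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
            rss_r, rss_f = rss(X_r), rss(X_f)
            return ((rss_r - rss_f) / lag) / (rss_f / (n - lag - X_f.shape[1]))

        rng = np.random.default_rng(8)
        n = 2000                              # densely sampled series
        x = rng.normal(size=n)
        y = np.zeros(n)
        for t in range(1, n):                 # y is driven by lagged x
            y[t] = 0.4 * y[t - 1] + 0.5 * x[t - 1] + 0.1 * rng.normal()

        print("x -> y:", round(granger_stat(x, y), 1))   # large: causality detected
        print("y -> x:", round(granger_stat(y, x), 1))   # near 1: none detected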

  18. Facilitation of decommissioning light water reactors

    International Nuclear Information System (INIS)

    Moore, E.B. Jr.

    1979-12-01

    Information on design features, special equipment, and construction methods useful in the facilitation of decommissioning light water reactors is presented. A wide range of facilitation methods - from improved documentation to special decommissioning tools and techniques - is discussed. In addition, estimates of capital costs, cost savings, and radiation dose reduction associated with these facilitation methods are given

  19. Improved Motion Estimation Using Early Zero-Block Detection

    Directory of Open Access Journals (Sweden)

    Y. Lin

    2008-07-01

    Full Text Available We incorporate the early zero-block detection technique into the UMHexagonS algorithm, which has already been adopted in H.264/AVC JM reference software, to speed up the motion estimation process. A nearly sufficient condition is derived for early zero-block detection. Although the conventional early zero-block detection method can achieve significant improvement in computation reduction, the PSNR loss, to whatever extent, is not negligible especially for high quantization parameter (QP) or low bit-rate coding. This paper modifies the UMHexagonS algorithm with the early zero-block detection technique to improve its coding performance. The experimental results reveal that the improved UMHexagonS algorithm greatly reduces computation while maintaining very high coding efficiency.

  20. Improving Google Flu Trends estimates for the United States through transformation.

    Directory of Open Access Journals (Sweden)

    Leah J Martin

    Full Text Available Google Flu Trends (GFT) uses Internet search queries in an effort to provide early warning of increases in influenza-like illness (ILI). In the United States, GFT estimates the percentage of physician visits related to ILI (%ILINet) reported by the Centers for Disease Control and Prevention (CDC). However, during the 2012-13 influenza season, GFT overestimated %ILINet by an appreciable amount and estimated the peak in incidence three weeks late. Using data from 2010-14, we investigated the relationship between GFT estimates (%GFT) and %ILINet. Based on the relationship between the relative change in %GFT and the relative change in %ILINet, we transformed %GFT estimates to better correspond with %ILINet values. In 2010-13, our transformed %GFT estimates were within ±10% of %ILINet values for 17 of the 29 weeks that %ILINet was above the seasonal baseline value determined by the CDC; in contrast, the original %GFT estimates were within ±10% of %ILINet values for only two of these 29 weeks. Relative to the %ILINet peak in 2012-13, the peak in our transformed %GFT estimates was 2% lower and one week later, whereas the peak in the original %GFT estimates was 74% higher and three weeks later. The same transformation improved %GFT estimates using the recalibrated 2013 GFT model in early 2013-14. Our transformed %GFT estimates can be calculated approximately one week before %ILINet values are reported by the CDC, and the transformation equation was stable over the time period investigated (2010-13). We anticipate our results will facilitate future use of GFT.
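
    A minimal sketch of one plausible reading of this transformation follows: a linear map between weekly relative changes is fitted on past seasons and then used to accumulate transformed estimates forward from a known starting value. The published equation may differ; the function names and the linear form are assumptions.

```python
import numpy as np

def fit_relative_change_map(gft, ilinet):
    """Regress relative changes in %ILINet on relative changes in %GFT."""
    dg = np.diff(gft) / gft[:-1]
    di = np.diff(ilinet) / ilinet[:-1]
    b, a = np.polyfit(dg, di, 1)        # di is approximately a + b * dg
    return a, b

def transform_gft(gft, start_value, a, b):
    """Rebuild an ILI-like series by accumulating mapped relative changes."""
    out = [start_value]
    for t in range(1, len(gft)):
        rel = (gft[t] - gft[t - 1]) / gft[t - 1]
        out.append(out[-1] * (1 + a + b * rel))
    return np.array(out)

# Fit on historical seasons, then apply to a new season starting from the
# last reported %ILINet value (roughly one week behind %GFT).
```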

  1. Potential for improvement in estimation of solar diffuse irradiance

    International Nuclear Information System (INIS)

    Muneer, T.; Munawwar, S.

    2006-01-01

    Most of the meteorological stations around the world measure global irradiation and provide information on weather elements. Diffuse radiation measurement, however, is unavailable for many of those sites. This accentuates the need to estimate it, whereupon it can be used for the simulation of solar applications. This paper explores the role of synoptic information, e.g. sunshine fraction, cloud cover and air mass, on the basic k-kt (diffuse ratio versus clearness index) relationship for nine sites across the globe. The influence on the k-kt regressions is studied qualitatively, and the inclusion of these parameters is suggested on that basis. Thus, it is recommended to use the complementary data usually provided with the database, apart from the global irradiation, in order to estimate the diffuse irradiation more accurately. By analysing each synoptic parameter individually, it was found that the sunshine fraction had the strongest bearing, followed closely by cloud cover. Air mass, on the other hand, was found to be a weak parameter for general estimation of diffuse radiation. It was concluded that air mass, if coupled with other synoptic parameters, might improve the estimation accuracy, but it does not show much promise on its own when used with the global irradiation.
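
    The sketch below illustrates the kind of augmented regression the authors recommend: modeling the diffuse ratio k from the clearness index kt together with sunshine fraction and cloud cover, and comparing the fit against a kt-only model. The data are synthetic and the linear form is an assumption; the paper's site-specific regressions may be nonlinear.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic hourly records: clearness index kt, sunshine fraction sf,
# normalized cloud cover cc. Real work would use measured station data.
rng = np.random.default_rng(1)
kt = rng.uniform(0.1, 0.8, 500)
sf = np.clip(1.1 * kt + rng.normal(0, 0.1, 500), 0, 1)
cc = np.clip(1 - sf + rng.normal(0, 0.1, 500), 0, 1)
k = np.clip(0.98 - 1.0 * kt + 0.05 * cc - 0.05 * sf
            + rng.normal(0, 0.05, 500), 0, 1)     # diffuse ratio

X_full = np.column_stack([kt, sf, cc])
X_kt = kt.reshape(-1, 1)
r2_full = LinearRegression().fit(X_full, k).score(X_full, k)
r2_kt = LinearRegression().fit(X_kt, k).score(X_kt, k)
print(r2_kt, r2_full)   # adding sf and cc should raise R^2 over kt alone
```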

  2. Tablet Technology to Facilitate Improved Interaction and Communication with Students Studying Mathematics at a Distance

    Science.gov (United States)

    Galligan, Linda; Hobohm, Carola; Loch, Birgit

    2012-01-01

    Teaching and learning of mathematics is challenging when lecturer and students are separated geographically. While student engagement and interaction with the course, with other students and with the lecturer is vital to mathematics learning, it is difficult to facilitate this electronically, because of the nature of mathematics. With tablet…

  3. Evaluation of Attention Training and Metacognitive Facilitation to Improve Reading Comprehension in Aphasia

    Science.gov (United States)

    Lee, Jaime B.; Sohlberg, McKay Moore

    2013-01-01

    Purpose: This pilot study investigated the impact of direct attention training combined with metacognitive facilitation on reading comprehension in individuals with aphasia. Method: A single-subject, multiple baseline design was employed across 4 participants to evaluate potential changes in reading comprehension resulting from an 8-week…

  4. Improving Accuracy of Influenza-Associated Hospitalization Rate Estimates

    Science.gov (United States)

    Reed, Carrie; Kirley, Pam Daily; Aragon, Deborah; Meek, James; Farley, Monica M.; Ryan, Patricia; Collins, Jim; Lynfield, Ruth; Baumbach, Joan; Zansky, Shelley; Bennett, Nancy M.; Fowler, Brian; Thomas, Ann; Lindegren, Mary L.; Atkinson, Annette; Finelli, Lyn; Chaves, Sandra S.

    2015-01-01

    Diagnostic test sensitivity affects rate estimates for laboratory-confirmed influenza-associated hospitalizations. We used data from FluSurv-NET, a national population-based surveillance system for laboratory-confirmed influenza hospitalizations, to capture diagnostic test type by patient age and influenza season. We calculated observed rates by age group and adjusted rates by test sensitivity. Test sensitivity was lowest in adults >65 years of age. For all ages, reverse transcription PCR was the most sensitive test, and its use increased over the study period. After 2009, hospitalization rates adjusted by test sensitivity were ≈15% higher for children and higher still for adults >65 years of age. Test sensitivity adjustments improve the accuracy of hospitalization rate estimates. PMID:26292017

  5. IMPROVED ESTIMATION OF FIBER LENGTH FROM 3-DIMENSIONAL IMAGES

    Directory of Open Access Journals (Sweden)

    Joachim Ohser

    2013-03-01

    Full Text Available A new method is presented for estimating the specific fiber length from 3D images of macroscopically homogeneous fiber systems. The method is based on a discrete version of the Crofton formula, where local knowledge from 3x3x3-pixel configurations of the image data is exploited. It is shown that the relative error resulting from the discretization of the outer integral of the Crofton formula amounts to at most 1.2%. The algorithmic implementation of the method is simple, and both the runtime and the memory requirements are low. The estimation is significantly improved by considering 3x3x3-pixel configurations instead of the 2x2x2-pixel configurations already studied in the literature.

  6. An analytical solution for improved HIFU SAR estimation

    International Nuclear Information System (INIS)

    Dillon, C R; Vyas, U; Christensen, D A; Roemer, R B; Payne, A

    2012-01-01

    Accurate determination of the specific absorption rates (SARs) present during high intensity focused ultrasound (HIFU) experiments and treatments provides a solid physical basis for scientific comparison of results among HIFU studies and is necessary to validate and improve SAR predictive software, which will improve patient treatment planning, control and evaluation. This study develops and tests an analytical solution that significantly improves the accuracy of SAR values obtained from HIFU temperature data. SAR estimates are obtained by fitting the analytical temperature solution for a one-dimensional radial Gaussian heating pattern to the temperature versus time data following a step in applied power and evaluating the initial slope of the analytical solution. The analytical method is evaluated in multiple parametric simulations for which it consistently (except at high perfusions) yields maximum errors of less than 10% at the center of the focal zone compared with errors up to 90% and 55% for the commonly used linear method and an exponential method, respectively. For high perfusion, an extension of the analytical method estimates SAR with less than 10% error. The analytical method is validated experimentally by showing that the temperature elevations predicted using the analytical method's SAR values determined for the entire 3D focal region agree well with the experimental temperature elevations in a HIFU-heated tissue-mimicking phantom.
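
    As a simple illustration of slope-based SAR estimation, the sketch below implements the exponential comparator method mentioned in the abstract, not the paper's radial-Gaussian analytical solution: it fits an exponential temperature rise and converts the t = 0 slope to SAR via the volumetric heat capacity. The heat-capacity value and the synthetic curve are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

RHO_C = 3.6e6  # J m^-3 K^-1, representative tissue volumetric heat capacity (assumed)

def exp_rise(t, a, tau):
    """Exponential temperature rise toward plateau a with time constant tau."""
    return a * (1.0 - np.exp(-t / tau))

def sar_from_initial_slope(t, temp):
    """Fit an exponential rise and convert the t=0 slope to SAR (W/m^3)."""
    (a, tau), _ = curve_fit(exp_rise, t, temp, p0=(temp[-1], t[-1] / 3.0))
    return RHO_C * a / tau          # dT/dt at t=0 equals a/tau

# Synthetic heating curve: 30 s of data sampled at 10 Hz.
rng = np.random.default_rng(2)
t = np.linspace(0, 30, 301)
temp = exp_rise(t, 4.0, 12.0) + rng.normal(0.0, 0.05, t.size)
print(sar_from_initial_slope(t, temp))  # true value: 3.6e6 * 4/12 = 1.2e6 W/m^3
```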

  7. Improved estimates of coordinate error for molecular replacement

    International Nuclear Information System (INIS)

    Oeffner, Robert D.; Bunkóczi, Gábor; McCoy, Airlie J.; Read, Randy J.

    2013-01-01

    A function for estimating the effective root-mean-square deviation in coordinates between two proteins has been developed that depends on both the sequence identity and the size of the protein and is optimized for use with molecular replacement in Phaser. A top peak translation-function Z-score of over 8 is found to be a reliable metric of when molecular replacement has succeeded. The estimate of the root-mean-square deviation (r.m.s.d.) in coordinates between the model and the target is an essential parameter for calibrating likelihood functions for molecular replacement (MR). Good estimates of the r.m.s.d. lead to good estimates of the variance term in the likelihood functions, which increases signal to noise and hence success rates in the MR search. Phaser has hitherto used an estimate of the r.m.s.d. that only depends on the sequence identity between the model and target and which was not optimized for the MR likelihood functions. Variance-refinement functionality was added to Phaser to enable determination of the effective r.m.s.d. that optimized the log-likelihood gain (LLG) for a correct MR solution. Variance refinement was subsequently performed on a database of over 21 000 MR problems that sampled a range of sequence identities, protein sizes and protein fold classes. Success was monitored using the translation-function Z-score (TFZ), where a TFZ of 8 or over for the top peak was found to be a reliable indicator that MR had succeeded for these cases with one molecule in the asymmetric unit. Good estimates of the r.m.s.d. are correlated with the sequence identity and the protein size. A new estimate of the r.m.s.d. that uses these two parameters in a function optimized to fit the mean of the refined variance is implemented in Phaser and improves MR outcomes. Perturbing the initial estimate of the r.m.s.d. from the mean of the distribution in steps of standard deviations of the distribution further increases MR success rates.

  8. An Improved Convolutional Neural Network on Crowd Density Estimation

    Directory of Open Access Journals (Sweden)

    Pan Shao-Yun

    2016-01-01

    Full Text Available In this paper, a new method is proposed for crowd density estimation. An improved convolutional neural network is combined with traditional texture features. The data calculated by the convolutional layer can be treated as a new kind of feature, so more useful information about the images can be extracted by combining different features. Meanwhile, the size of the image has little effect on the result of the convolutional neural network. Experimental results indicate that our scheme has adequate performance for use in real-world applications.
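
    A minimal sketch of the feature-fusion idea follows: stand-in convolutional features (random filters in place of a trained network) are concatenated with classical GLCM texture descriptors and fed to a simple regressor. Everything here, from the filter count to the Ridge regressor, is an illustrative assumption rather than the paper's architecture.

```python
import numpy as np
from scipy.signal import convolve2d
from skimage.feature import graycomatrix, graycoprops
from sklearn.linear_model import Ridge

def conv_features(img, n_filters=8, seed=0):
    """Stand-in for learned convolutional features: random 5x5 filters,
    ReLU, then global average pooling. A trained CNN would replace this."""
    rng = np.random.default_rng(seed)
    feats = []
    for _ in range(n_filters):
        k = rng.standard_normal((5, 5))
        resp = np.maximum(convolve2d(img, k, mode="valid"), 0)
        feats.append(resp.mean())
    return np.array(feats)

def texture_features(img):
    """Classic GLCM texture descriptors (contrast, homogeneity, energy)."""
    g = graycomatrix((img * 255).astype(np.uint8), [1], [0], levels=256,
                     symmetric=True, normed=True)
    return np.array([graycoprops(g, p)[0, 0]
                     for p in ("contrast", "homogeneity", "energy")])

def combined_features(img):
    return np.concatenate([conv_features(img), texture_features(img)])

# Regress crowd density from the fused features (toy data shown here).
rng = np.random.default_rng(1)
imgs = rng.random((50, 64, 64))
density = rng.random(50)                     # placeholder labels
X = np.stack([combined_features(im) for im in imgs])
model = Ridge().fit(X, density)
```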

  9. Improving Frozen Precipitation Density Estimation in Land Surface Modeling

    Science.gov (United States)

    Sparrow, K.; Fall, G. M.

    2017-12-01

    The Office of Water Prediction (OWP) produces high-value water supply and flood risk planning information through the use of operational land surface modeling. Improvements in diagnosing frozen precipitation density will benefit the NWS's meteorological and hydrological services by refining estimates of a significant and vital input into land surface models. A current common practice for handling the density of snow accumulation in a land surface model is to use a standard 10:1 snow-to-liquid-equivalent ratio (SLR). Our research findings suggest the possibility of a more skillful approach for assessing the spatial variability of precipitation density. We developed a 30-year SLR climatology for the coterminous US from version 3.22 of the Global Historical Climatology Network - Daily (GHCN-D) dataset. Our methods followed the approach described by Baxter (2005) to estimate mean climatological SLR values at GHCN-D sites in the US, Canada, and Mexico for the years 1986-2015. In addition to the Baxter criteria, the following refinements were made: tests were performed to eliminate SLR outliers and frequent reports of SLR = 10, a linear SLR vs. elevation trend was fitted to station SLR mean values to remove the elevation trend from the data, and detrended SLR residuals were interpolated using ordinary kriging with a spherical semivariogram model. The elevation values of each station were based on the GMTED 2010 digital elevation model, and the elevation trend in the data was established via linear least squares approximation. The ordinary kriging procedure was used to interpolate the data into gridded climatological SLR estimates for each calendar month at a 0.125 degree resolution. To assess the skill of this climatology, we compared estimates from our SLR climatology with observations from the GHCN-D dataset to consider the potential use of this climatology as a first guess of frozen precipitation density in an operational land surface model. The difference in
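
    A compact sketch of the detrend-and-krige workflow follows, using the pykrige package for ordinary kriging with a spherical variogram. The station file name and the placeholder DEM grid are hypothetical; a real run would read GHCN-D station means and GMTED 2010 elevations.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

# Station data: longitude, latitude, elevation (m), mean snow-to-liquid ratio.
# "slr_stations.csv" is a hypothetical file layout for this example.
lon, lat, elev, slr = np.loadtxt("slr_stations.csv", delimiter=",", unpack=True)

# 1. Remove the elevation trend with a linear least-squares fit.
slope, intercept = np.polyfit(elev, slr, 1)
residual = slr - (slope * elev + intercept)

# 2. Interpolate detrended residuals by ordinary kriging (spherical model)
#    onto a 0.125-degree grid.
grid_lon = np.arange(lon.min(), lon.max(), 0.125)
grid_lat = np.arange(lat.min(), lat.max(), 0.125)
ok = OrdinaryKriging(lon, lat, residual, variogram_model="spherical")
res_grid, _ = ok.execute("grid", grid_lon, grid_lat)

# 3. Restore the elevation trend using gridded elevation (e.g., GMTED 2010).
elev_grid = np.zeros(res_grid.shape)          # placeholder for a real DEM
slr_grid = res_grid + slope * elev_grid + intercept
```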

  10. Improving Estimates of Cloud Radiative Forcing over Greenland

    Science.gov (United States)

    Wang, W.; Zender, C. S.

    2014-12-01

    Multiple driving mechanisms conspire to increase melt extent and extreme melt event frequency in the Arctic: changing heat transport, shortwave radiation (SW), and longwave radiation (LW). Cloud Radiative Forcing (CRF) of Greenland's surface is amplified by a dry atmosphere and by albedo feedback, making its contribution to surface melt even more variable in time and space. Unfortunately, accurate cloud observations, and thus CRF estimates, are hindered by Greenland's remoteness, harsh conditions, and low contrast between surface and cloud reflectance. In this study, cloud observations from satellites and reanalyses are ingested into and evaluated within a column radiative transfer model. An improved CRF dataset is obtained by correcting systematic discrepancies derived from sensitivity experiments. First, we compare the surface radiation budgets from the Column Radiation Model (CRM) driven by different cloud datasets with surface observations from the Greenland Climate Network (GC-Net). In clear skies, CRM-estimated surface radiation driven by water vapor profiles from both AIRS and MODIS during May-Sept 2010-2012 is similar, stable, and reliable. For example, although the AIRS water vapor path exceeds MODIS by 1.4 kg/m2 on a daily average, the overall absolute difference in downwelling SW is small, and CRM estimates are within 20 W/m2 of GC-Net downwelling SW. After calibrating the CRM in clear skies, the remaining differences between CRM and observed surface radiation are primarily attributable to differences in cloud observations. We estimate CRF using cloud products from MODIS and from MERRA. The SW radiative forcing of thin clouds is mainly controlled by cloud water path (CWP). As CWP increases from near 0 to 200 g/m2, the net surface SW drops from over 100 W/m2 to 30 W/m2 almost linearly, beyond which it becomes relatively insensitive to CWP. The LW is dominated by cloud height. For clouds at all altitudes, the lower the clouds, the greater the LW forcing. By applying

  11. Laser photogrammetry improves size and demographic estimates for whale sharks

    Science.gov (United States)

    Richardson, Anthony J.; Prebble, Clare E.M.; Marshall, Andrea D.; Bennett, Michael B.; Weeks, Scarla J.; Cliff, Geremy; Wintner, Sabine P.; Pierce, Simon J.

    2015-01-01

    Whale sharks Rhincodon typus are globally threatened, but a lack of biological and demographic information hampers an accurate assessment of their vulnerability to further decline or capacity to recover. We used laser photogrammetry at two aggregation sites to obtain more accurate size estimates of free-swimming whale sharks compared to visual estimates, allowing improved estimates of biological parameters. Individual whale sharks ranged from 432–917 cm total length (TL) (mean ± SD = 673 ± 118.8 cm, N = 122) in southern Mozambique and from 420–990 cm TL (mean ± SD = 641 ± 133 cm, N = 46) in Tanzania. By combining measurements of stranded individuals with photogrammetry measurements of free-swimming sharks, we calculated length at 50% maturity for males in Mozambique at 916 cm TL. Repeat measurements of individual whale sharks measured over periods from 347–1,068 days yielded implausible growth rates, suggesting that the growth increment over this period was not large enough to be detected using laser photogrammetry, and that the method is best applied to estimating growth rates over longer (decadal) time periods. The sex ratio of both populations was biased towards males (74% in Mozambique, 89% in Tanzania), the majority of which were immature (98% in Mozambique, 94% in Tanzania). The population structure for these two aggregations was similar to most other documented whale shark aggregations around the world. Information on small sharks, mature individuals, and females in this region is lacking, but necessary to inform conservation initiatives for this globally threatened species. PMID:25870776

  12. Improved Ancestry Estimation for both Genotyping and Sequencing Data using Projection Procrustes Analysis and Genotype Imputation

    Science.gov (United States)

    Wang, Chaolong; Zhan, Xiaowei; Liang, Liming; Abecasis, Gonçalo R.; Lin, Xihong

    2015-01-01

    Accurate estimation of individual ancestry is important in genetic association studies, especially when a large number of samples are collected from multiple sources. However, existing approaches developed for genome-wide SNP data do not work well with modest amounts of genetic data, such as in targeted sequencing or exome chip genotyping experiments. We propose a statistical framework to estimate individual ancestry in a principal component ancestry map generated by a reference set of individuals. This framework extends and improves upon our previous method for estimating ancestry using low-coverage sequence reads (LASER 1.0) to analyze either genotyping or sequencing data. In particular, we introduce a projection Procrustes analysis approach that uses high-dimensional principal components to estimate ancestry in a low-dimensional reference space. Using extensive simulations and empirical data examples, we show that our new method (LASER 2.0), combined with genotype imputation on the reference individuals, can substantially outperform LASER 1.0 in estimating fine-scale genetic ancestry. Specifically, LASER 2.0 can accurately estimate fine-scale ancestry within Europe using either exome chip genotypes or targeted sequencing data with off-target coverage as low as 0.05×. Under the framework of LASER 2.0, we can estimate individual ancestry in a shared reference space for samples assayed at different loci or by different techniques. Therefore, our ancestry estimation method will accelerate discovery in disease association studies not only by helping model ancestry within individual studies but also by facilitating combined analysis of genetic data from multiple sources. PMID:26027497
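
    The sketch below conveys the core alignment idea with a rotation-only simplification: scipy's orthogonal Procrustes maps the study-specific PC coordinates of shared reference individuals onto the reference ancestry map, and the same map is then applied to new samples. LASER 2.0's actual projection Procrustes optimizes the projection across dimensions directly; the zero-padding shortcut here is an assumption.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

def map_to_reference(ref_pcs, overlap_pcs, sample_pcs):
    """Align a study-specific PC space onto a reference ancestry map.

    ref_pcs:     reference individuals' coordinates in the reference map (n x k)
    overlap_pcs: the same individuals' coordinates in the study-specific
                 PCA (n x d, with d >= k)
    sample_pcs:  new samples' coordinates in the study-specific PCA (m x d)
    """
    # Center both configurations on the shared individuals.
    mu_ref, mu_ovl = ref_pcs.mean(0), overlap_pcs.mean(0)
    A = overlap_pcs - mu_ovl
    B = ref_pcs - mu_ref
    # Pad the reference with zero columns so an orthogonal map exists
    # (a simplification of full projection Procrustes, which optimizes
    # the projection itself).
    B_pad = np.hstack([B, np.zeros((B.shape[0], A.shape[1] - B.shape[1]))])
    R, _ = orthogonal_procrustes(A, B_pad)
    mapped = (sample_pcs - mu_ovl) @ R
    # Keep only the reference dimensions and restore the reference center.
    return mapped[:, :ref_pcs.shape[1]] + mu_ref
```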

  13. Towards Improved Snow Water Equivalent Estimation via GRACE Assimilation

    Science.gov (United States)

    Forman, Bart; Reichle, Rolf; Rodell, Matt

    2011-01-01

    Passive microwave (e.g. AMSR-E) and visible spectrum (e.g. MODIS) measurements of snow states have been used in conjunction with land surface models to better characterize snow pack states, most notably snow water equivalent (SWE). However, both types of measurements have limitations. AMSR-E, for example, suffers a loss of information in deep/wet snow packs. Similarly, MODIS suffers a loss of temporal correlation information beyond the initial accumulation and final ablation phases of the snow season. Gravimetric measurements, on the other hand, do not suffer from these limitations. In this study, gravimetric measurements from the Gravity Recovery and Climate Experiment (GRACE) mission are used in a land surface model data assimilation (DA) framework to better characterize SWE in the Mackenzie River basin located in northern Canada. Comparisons are made against independent, ground-based SWE observations, state-of-the-art modeled SWE estimates, and independent, ground-based river discharge observations. Preliminary results suggest improved SWE estimates, including improved timing of the subsequent ablation and runoff of the snow pack. Additionally, use of the DA procedure can add vertical and horizontal resolution to the coarse-scale GRACE measurements as well as effectively downscale the measurements in time. Such findings offer the potential for better understanding of the hydrologic cycle in snow-dominated basins located in remote regions of the globe where ground-based observation collection is difficult, if not impossible. This information could ultimately lead to improved freshwater resource management in communities dependent on snow melt as well as a reduction in the uncertainty of river discharge into the Arctic Ocean.

  14. Improving safety culture in hospitals: Facilitators and barriers to implementation of Systemic Falls Investigative Method (SFIM).

    Science.gov (United States)

    Zecevic, Aleksandra A; Li, Alvin Ho-Ting; Ngo, Charity; Halligan, Michelle; Kothari, Anita

    2017-06-01

    The purpose of this study was to assess the facilitators and barriers to implementation of the Systemic Falls Investigative Method (SFIM) on selected hospital units. A cross-sectional explanatory mixed methods design was used to converge results from a standardized safety culture survey with themes that emerged from interviews and focus groups. Findings were organized by the six elements of the Ottawa Model of Research Use framework. A geriatric rehabilitation unit of an acute care hospital and a neurological unit of a rehabilitation hospital were selected purposefully due to the high frequency of falls. Participants included hospital staff who took part in surveys (n = 39), interviews (n = 10) and focus groups (n = 12), as well as 38 people who were interviewed during falls investigations: fallers, family, unit staff and hospital management. The intervention was implementation of the SFIM to investigate fall occurrences, and the main outcome measure was the percent of positive responses on the Modified Stanford Patient Safety Culture Survey Instrument, converged with qualitative themes on facilitators and barriers to intervention implementation. Both hospital units had an overall poor safety culture, which hindered intervention implementation. Facilitators were hospital accreditation, a strong emphasis on patient safety, infrastructure and dedicated champions. Barriers included heavy workloads, lack of time, lack of resources and poor communication. Successful implementation of SFIM requires regulatory and organizational support, committed frontline staff and allocation of resources to identify active causes and latent contributing factors of falls. System-wide adjustments show promise for promotion of safety culture in hospitals where falls happen regularly.

  15. Improving photometric redshift estimation using GPZ: size information, post processing, and improved photometry

    Science.gov (United States)

    Gomes, Zahra; Jarvis, Matt J.; Almosallam, Ibrahim A.; Roberts, Stephen J.

    2018-03-01

    The next generation of large-scale imaging surveys (such as those conducted with the Large Synoptic Survey Telescope and Euclid) will require accurate photometric redshifts in order to optimally extract cosmological information. Gaussian Process for photometric redshift estimation (GPZ) is a promising new method that has been proven to provide efficient, accurate photometric redshift estimations with reliable variance predictions. In this paper, we investigate a number of methods for improving the photometric redshift estimations obtained using GPZ (but which are also applicable to others). We use spectroscopy from the Galaxy and Mass Assembly Data Release 2 with a limiting magnitude of r Program Data Release 1 and find that it produces significant improvements in accuracy, similar to the effect of including additional features.

  16. Efficacy of proprioceptive neuromuscular facilitation techniques versus traditional prosthetic training for improving ambulatory function in transtibial amputees

    OpenAIRE

    Pallavi Sahay, MPT; Santosh Kr. Prasad, MSc; Shahnawaz Anwer, MPT; P.K. Lenka, PhD; Ratnesh Kumar, MS

    2014-01-01

    The objective of this randomized controlled trial was to evaluate the efficacy of proprioceptive neuromuscular facilitation (PNF) techniques in comparison to traditional prosthetic training (TPT) in improving ambulatory function in transtibial amputees. Thirty study participants (19 men and 11 women) with unilateral transtibial amputation participated in the study. They were randomly allocated to either the traditional training group (i.e., TPT) (n = 15) or the PNF training group (n = 15). Th...

  17. Using A Priori Information to Improve Atmospheric Duct Estimation

    Science.gov (United States)

    Zhao, X.

    2017-12-01

    Knowledge of refractivity conditions in the marine atmospheric boundary layer (MABL) is crucial for predicting the performance of radar and communication systems at frequencies above 1 GHz on low-altitude paths. Since early this century, the 'refractivity from clutter' (RFC) technique has proven to be an effective way to estimate the MABL refractivity structure. The refractivity model is very important for RFC techniques. If prior knowledge of the local refractivity is available (e.g., from numerical weather prediction models, atmospheric soundings, etc.), a more accurate parameterized refractivity model can be constructed by statistical methods, e.g. principal component analysis, which in turn can be used to improve the quality of the local refractivity retrievals. This work extends the adjoint parabolic equation approach to range-varying atmospheric duct structure inversions, in which a linear empirical reduced-dimension refractivity model constructed from the a priori refractive information is used.
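
    A minimal sketch of such a reduced-dimension refractivity model follows: an empirical orthogonal basis is extracted from a priori profiles by SVD, and candidate profiles during inversion are expressed as the mean plus a short coefficient vector. The input file name and the 95% variance cutoff are assumptions.

```python
import numpy as np

# Rows: a priori modified-refractivity profiles M(z) from soundings or NWP
# output ("apriori_profiles.txt" is a hypothetical file); columns: heights.
profiles = np.loadtxt("apriori_profiles.txt")

mean = profiles.mean(axis=0)
anom = profiles - mean
# Principal components via SVD; keep the few modes explaining most variance.
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
var = s**2 / np.sum(s**2)
k = int(np.searchsorted(np.cumsum(var), 0.95) + 1)   # e.g., 95% of variance
basis = Vt[:k]                                       # (k, n_levels)

def refractivity(coeffs):
    """Reduced-dimension model: a profile is the mean plus a short
    coefficient vector times the empirical basis; the RFC inversion then
    searches over coeffs instead of the full profile."""
    return mean + coeffs @ basis
```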

  18. IMPROVING PROJECT SCHEDULE ESTIMATES USING HISTORICAL DATA AND SIMULATION

    Directory of Open Access Journals (Sweden)

    P.H. Meyer

    2012-01-01

    Full Text Available

    Many projects are not completed on time or within the original budget. This is caused by uncertainty in project variables as well as the occurrence of risk events. A study was done to determine ways of measuring the risk in development projects executed by a mining company in South Africa. The main objective of the study was to determine whether historical project data would provide a more accurate means of estimating the total project duration. Original estimates and actual completion times for tasks of a number of projects were analysed and compared. The results of the study indicated that a more accurate total duration for a project could be obtained by making use of historical project data. The accuracy of estimates could be improved further by building a comprehensive project schedule database within a specific industry.

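    A minimal sketch of the approach follows: ratios of actual to estimated task durations drawn from historical projects are resampled to produce a distribution of total project duration, from which schedule confidence levels can be read. The ratio values and the serial task structure are illustrative assumptions.

```python
import numpy as np

# Historical ratios of actual to estimated duration for comparable tasks
# (illustrative values; a real study would draw these from a schedule database).
history = np.array([1.05, 1.30, 0.95, 1.60, 1.10, 1.25, 0.90, 1.45])

def simulate_total(task_estimates, n_runs=10000, rng=None):
    """Monte Carlo total duration for tasks in series: each task's estimate
    is scaled by a ratio resampled from historical performance."""
    rng = rng or np.random.default_rng(0)
    ratios = rng.choice(history, size=(n_runs, len(task_estimates)))
    return (ratios * task_estimates).sum(axis=1)

totals = simulate_total(np.array([10, 25, 15, 30]))   # task estimates in days
print(np.percentile(totals, [50, 80, 95]))            # schedule confidence levels
```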

  19. Improved estimate for the muon g-2 using VMD constraints

    Energy Technology Data Exchange (ETDEWEB)

    Benayoun, M. [LPNHE Paris VI/VII, IN2P3/CNRS, F-75252 Paris (France)

    2012-04-15

    The muon anomalous magnetic moment aμ and the hadronic vacuum polarization (HVP) are examined using data analyzed within the framework of a suitably broken HLS model. The analysis relies on all available scan data samples and leaves aside the existing ISR data. The framework provided by our broken HLS model allows for improved estimates of the contributions to aμ from the e+e- annihilation cross sections into π+π-, π0γ, ηγ, π+π-π0, K+K-, K0K̄0 up to slightly above the φ meson mass. Within this framework, the information provided by the τ±→π±π0ν decay and by the radiative decays (VPγ and Pγγ) of light flavor mesons acts as strong constraints on the model parameters. The discrepancy between the theoretical estimate of the muon anomalous magnetic moment g-2 and its direct BNL measurement is shown to reach conservatively 4.1σ, while standard methods used under the same conditions yield 3.5σ.

  20. Improved estimate for the muon g-2 using VMD constraints

    International Nuclear Information System (INIS)

    Benayoun, M.

    2012-01-01

    The muon anomalous magnetic moment aμ and the hadronic vacuum polarization (HVP) are examined using data analyzed within the framework of a suitably broken HLS model. The analysis relies on all available scan data samples and leaves aside the existing ISR data. The framework provided by our broken HLS model allows for improved estimates of the contributions to aμ from the e+e- annihilation cross sections into π+π-, π0γ, ηγ, π+π-π0, K+K-, K0K̄0 up to slightly above the φ meson mass. Within this framework, the information provided by the τ±→π±π0ν decay and by the radiative decays (VPγ and Pγγ) of light flavor mesons acts as strong constraints on the model parameters. The discrepancy between the theoretical estimate of the muon anomalous magnetic moment g-2 and its direct BNL measurement is shown to reach conservatively 4.1σ, while standard methods used under the same conditions yield 3.5σ.

  1. Improved estimate for the muon g-2 using VMD constraints

    Science.gov (United States)

    Benayoun, M.

    2012-04-01

    The muon anomalous magnetic moment aμ and the hadronic vacuum polarization (HVP) are examined using data analyzed within the framework of a suitably broken HLS model. The analysis relies on all available scan data samples and leaves aside the existing ISR data. The framework provided by our broken HLS model allows for improved estimates of the contributions to aμ from the e+e- annihilation cross sections into π+π-, π0γ, ηγ, π+π-π0, K+K-, K0K̄0 up to slightly above the ϕ meson mass. Within this framework, the information provided by the τ±→π±π0ν decay and by the radiative decays (VPγ and Pγγ) of light flavor mesons acts as strong constraints on the model parameters. The discrepancy between the theoretical estimate of the muon anomalous magnetic moment g-2 and its direct BNL measurement is shown to reach conservatively 4.1σ, while standard methods used under the same conditions yield 3.5σ.

  2. Improving the fundamentals of care for older people in the acute hospital setting: facilitating practice improvement using a Knowledge Translation Toolkit.

    Science.gov (United States)

    Wiechula, Rick; Kitson, Alison; Marcoionni, Danni; Page, Tammy; Zeitz, Kathryn; Silverston, Heidi

    2009-12-01

    This paper reports on a structured facilitation program in which seven interdisciplinary teams conducted projects aimed at improving the care of older people in the acute sector. Aims: To develop and implement a structured intervention known as the Knowledge Translation (KT) Toolkit to improve the fundamentals of care for the older person in the acute care sector. Three hypotheses were tested: (i) frontline staff can be facilitated to use existing quality improvement tools and techniques and other resources (the KT Toolkit) in order to improve care of older people in the acute hospital setting; (ii) fundamental aspects of care for older people in the acute hospital setting can be improved through the introduction and use of specific evidence-based guidelines by frontline staff; and (iii) innovations can be introduced and improvements made to care within a 12-month cycle/timeframe with appropriate facilitation. Methods: Using realistic evaluation methodology, the impact of a structured facilitation program (the KT Toolkit) was assessed with the aim of providing a deeper understanding of how a range of tools, techniques and strategies may be used by clinicians to improve care. The intervention comprised three elements: the facilitation team, recruited for specific knowledge, skills and expertise in KT, evidence-based practice and quality and safety; the facilitation, including a structured program of education, ongoing support and communication; and finally the components of the toolkit, including elements already used within the study organisation. Results: Small improvements in care were shown. The results for the individual projects varied from clarifying issues of concern and planning ongoing activities, to changing existing practices, to improving actual patient outcomes such as reducing functional decline. More importantly, the study described how teams of clinicians can be facilitated using a structured program to conduct practice improvement activities.

  3. The Improved Estimation of Ratio of Two Population Proportions

    Science.gov (United States)

    Solanki, Ramkrishna S.; Singh, Housila P.

    2016-01-01

    In this article, first we obtained the correct mean square error expression of Gupta and Shabbir's linear weighted estimator of the ratio of two population proportions. Later we suggested the general class of ratio estimators of two population proportions. The usual ratio estimator, Wynn-type estimator, Singh, Singh, and Kaur difference-type…

  4. Developing a performance data suite to facilitate lean improvement in a chemotherapy day unit.

    Science.gov (United States)

    Lingaratnam, Senthil; Murray, Danielle; Carle, Amber; Kirsa, Sue W; Paterson, Rebecca; Rischin, Danny

    2013-07-01

    A multidisciplinary team from the Peter MacCallum Cancer Centre in Melbourne, Australia, developed a performance data suite to support a service improvement project based on lean manufacturing principles in its 19-chair chemotherapy day unit (CDU) and cytosuite chemotherapy production facility. The aims of the project were to reduce patient wait time and improve equity of access to the CDU. A project team consisting of a pharmacist and a CDU nurse supported the management team for 10 months in engaging staff and customers to identify waste in processes, analyze root causes, eliminate non-value-adding steps, reduce variation, and level workloads to improve quality and flow. Process mapping, staff and patient tracking and opinion surveys, medical record audits, and interrogation of electronic treatment records were undertaken. The project delivered a 38% reduction in median wait time on the day (from 32 to 20 minutes) and increased the proportion of chemotherapy product manufactured within 10 minutes of appointment times by 29 percentage points (from 47% to 76%). The lean improvement methodology provided a robust framework for improved understanding and management of complex system constraints within a CDU, resulting in improved access to treatment and reduced waiting times on the day.

  5. IMPROVEMENT OF MATHEMATICAL MODELS FOR ESTIMATION OF TRAIN DYNAMICS

    Directory of Open Access Journals (Sweden)

    L. V. Ursulyak

    2017-12-01

    Full Text Available Purpose. Using scientific publications, the paper analyzes the mathematical models developed in Ukraine, CIS countries and abroad for theoretical studies of train dynamics, and also shows the urgency of their further improvement. Methodology. The information base of the research was official full-text and abstract databases, scientific works of domestic and foreign scientists, professional periodicals, materials of scientific and practical conferences, and methodological materials of ministries and departments. Analysis of publications on existing mathematical models used to solve a wide range of problems associated with the study of train dynamics shows the expediency of their application. Findings. The results of these studies were used in: (1) design of new types of draft gears and air distributors; (2) development of methods for controlling the movement of conventional and connected trains; (3) creation of appropriate process flow diagrams; (4) development of energy-saving methods of train driving; (5) revision of the Construction Codes and Regulations (SNiP II-39.76); (6) selection of the parameters of the autonomous automatic control system, created in DNURT, for an auxiliary locomotive that is part of a connected train; (7) creation of computer simulators for the training of locomotive drivers; (8) assessment of the vehicle dynamic indices characterizing traffic safety. Scientists around the world conduct numerical experiments related to estimation of train dynamics using mathematical models that need to be constantly improved. Originality. The authors presented the main theoretical postulates that allowed them to develop the existing mathematical models for solving problems related to train dynamics. The analysis of scientific articles published in Ukraine, CIS countries and abroad allows us to determine the most relevant areas of application of mathematical models. Practical value. The practical value of the results obtained lies in the scientific validity

  6. Improved injection needles facilitate germline transformation of the buckeye butterfly Junonia coenia.

    Science.gov (United States)

    Beaudette, Kahlia; Hughes, Tia M; Marcus, Jeffrey M

    2014-01-01

    Germline transformation with transposon vectors is an important tool for insect genetics, but progress in developing transformation protocols for butterflies has been limited by high post-injection ova mortality. Here we present an improved glass injection needle design for injecting butterfly ova that increases survival in three Nymphalid butterfly species. Using the needles to genetically transform the common buckeye butterfly Junonia coenia, the hatch rate for injected Junonia ova was 21.7%, the transformation rate was 3%, and the overall experimental efficiency was 0.327%, a substantial improvement over previous results in other butterfly species. Improved needle design and a higher efficiency of transformation should permit the deployment of transposon-based genetic tools in a broad range of less fecund lepidopteran species.

  7. Improved image registration by sparse patch-based deformation estimation.

    Science.gov (United States)

    Kim, Minjeong; Wu, Guorong; Wang, Qian; Lee, Seong-Whan; Shen, Dinggang

    2015-01-15

    Despite intensive efforts for decades, deformable image registration is still a challenging problem due to the potentially large anatomical differences across individual images, which limit registration performance. Fortunately, this issue can be alleviated if a good initial deformation is provided for the two images under registration, often termed the moving subject and the fixed template, respectively. In this work, we present a novel patch-based initial deformation prediction framework for improving the performance of existing registration algorithms. Our main idea is to estimate the initial deformation between subject and template in a patch-wise fashion by using the sparse representation technique. We argue that two image patches should follow the same deformation toward the template image if their patch-wise appearance patterns are similar. To this end, our framework consists of two stages, i.e., the training stage and the application stage. In the training stage, we register all training images to the pre-selected template, such that the deformation of each training image with respect to the template is known. In the application stage, we apply the following four steps to efficiently calculate the initial deformation field for the new test subject: (1) we pick a small number of key points in the distinctive regions of the test subject; (2) for each key point, we extract a local patch and form a coupled appearance-deformation dictionary from training images, where each dictionary atom consists of the image intensity patch as well as its respective local deformation; (3) a small set of training image patches in the coupled dictionary are selected to represent the image patch of each subject key point by sparse representation. Then, we can predict the initial deformation for each subject key point by propagating the pre-estimated deformations on the selected training patches with the same sparse representation coefficients; and (4) we
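
    The following sketch shows step (3) in miniature, using an off-the-shelf Lasso as the sparse coder: a subject patch is represented as a sparse combination of dictionary patches, and the same coefficients propagate the paired training deformations. The solver choice, penalty, and normalization are assumptions standing in for the paper's sparse representation technique.

```python
import numpy as np
from sklearn.linear_model import Lasso

def predict_deformation(subject_patch, train_patches, train_deforms, alpha=0.01):
    """Sparse-representation prediction of a key point's deformation.

    subject_patch: flattened intensity patch at the subject key point (d,)
    train_patches: dictionary of training patches, one per column (d, n)
    train_deforms: deformations paired with each dictionary atom (3, n)
    """
    # Solve subject_patch ~ train_patches @ w with a sparsity penalty.
    coder = Lasso(alpha=alpha, positive=True, fit_intercept=False)
    coder.fit(train_patches, subject_patch)
    w = coder.coef_
    if w.sum() == 0:
        return np.zeros(3)          # no similar patches found
    w = w / w.sum()                 # normalize the surviving coefficients
    return train_deforms @ w        # propagate deformations with same weights
```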

  8. Triiodothyronine facilitates weaning from extracorporeal membrane oxygenation by improved mitochondrial substrate utilization.

    Science.gov (United States)

    Files, Matthew D; Kajimoto, Masaki; O'Kelly Priddy, Colleen M; Ledee, Dolena R; Xu, Chun; Des Rosiers, Christine; Isern, Nancy; Portman, Michael A

    2014-03-20

    Extracorporeal membrane oxygenation (ECMO) provides a bridge to recovery after myocardial injury in infants and children, yet morbidity and mortality remain high. Weaning from the circuit requires adequate cardiac contractile function, which can be impaired by metabolic disturbances induced either by ischemia-reperfusion and/or by ECMO. We tested the hypothesis that although ECMO partially ameliorates metabolic abnormalities induced by ischemia-reperfusion, these abnormalities persist or recur with weaning. We also determined if thyroid hormone supplementation (triiodothyronine) during ECMO improves oxidative metabolism and cardiac function. Neonatal piglets underwent transient coronary ischemia to induce cardiac injury then were separated into 4 groups based on loading status. Piglets without coronary ischemia served as controls. We infused into the left coronary artery [2-(13)C]pyruvate and [(13)C6, (15)N]l-leucine to evaluate oxidative metabolism by gas chromatography-mass spectroscopy and nuclear magnetic resonance methods. ECMO improved survival, increased oxidative substrate contribution through pyruvate dehydrogenase, reduced succinate and fumarate accumulation, and ameliorated ATP depletion induced by ischemia. The functional and metabolic benefit of ECMO was lost with weaning, yet triiodothyronine supplementation during ECMO restored function, increased relative pyruvate dehydrogenase flux, reduced succinate and fumarate, and preserved ATP stores. Although ECMO provides metabolic rest by decreasing energy demand, metabolic impairments persist, and are exacerbated with weaning. Treating ECMO-induced thyroid depression with triiodothyronine improves substrate flux, myocardial oxidative capacity and cardiac contractile function. This translational model suggests that metabolic targeting can improve weaning.

  9. Improving Video Game Development: Facilitating Heterogeneous Team Collaboration through Flexible Software Processes

    Science.gov (United States)

    Musil, Juergen; Schweda, Angelika; Winkler, Dietmar; Biffl, Stefan

    Based on our observations of Austrian video game software development (VGSD) practices, we identified a lack of systematic process/method support and inefficient collaboration between the various disciplines involved, i.e. engineers and artists. VGSD includes heterogeneous disciplines, e.g. creative arts, game/content design, and software. Nevertheless, improving team collaboration and process support is an ongoing challenge in enabling a comprehensive view of game development projects. Lessons learned from software engineering practices can help game developers to improve game development processes within a heterogeneous environment. Based on a state-of-the-practice survey in the Austrian games industry, this paper (a) presents first results with a focus on process/method support and (b) suggests a candidate flexible process approach based on Scrum to improve VGSD and team collaboration. Results (a) showed a trend to highly flexible software processes involving various disciplines and (b) identified the suggested flexible process approach as feasible and useful for project application.

  10. Estimation of air quality improvement at road and street intersections

    Energy Technology Data Exchange (ETDEWEB)

    Hoeglund, P.G. [Royal Inst. of Technology, Stockholm (Sweden). Traffic and Transport Planning

    1995-12-31

    Quantifying the detrimental exhaust pollution related to traffic flow has always been a very difficult problem, especially at road and street intersections. Until now, model calculations have been developed mainly for the links between intersections. In an attempt to remedy this situation, the author has developed a method of estimating emissions at the micro level from motor vehicles at intersections, as an aid to infrastructure design for improved environmental conditions. Very little knowledge exists regarding the deceleration and acceleration patterns at road and street intersections, and few surveys have been done, in Sweden or in other countries. Evidently, the need for knowledge of deceleration and acceleration behaviour at the micro level has until now not been given priority. Traffic safety research has produced studies describing drivers' deceleration and acceleration behaviour and vehicles' braking performance, but those results give deceleration data for extreme situations and are not useful for describing normal decelerations and accelerations at road and street intersections. Environment-related problems within traffic flow analysis now accentuate the need to study deceleration and acceleration behaviour in combination with alternative designs of the road and street infrastructure. Vehicles differ greatly in the amount of exhaust pollution emitted while passing an intersection, depending on their speed levels and their deceleration and acceleration levels. (author)

  11. Cost estimation for slope stability improvement in Muara Enim

    Science.gov (United States)

    Juliantina, Ika; Sutejo, Yulindasari; Adhitya, Bimo Brata; Sari, Nurul Permata; Kurniawan, Reffanda

    2017-11-01

    The case study area of SP. Sugihwaras-Baturaja is typologically classified as C-zone because the area lies at the foot of a mountain with slopes of 0% to 20%. Generally, the factors that cause landslides in Muara Enim Regency are soil/rock conditions, water, geological factors, and human activities. The slope improvement on KM.273+642-KM.273+774, along 132 m, uses soil nailing with 19 mm diameter tendon bars installed at an angle of 20°, together with 75 mm thick shotcrete and K-250 concrete grouting material. The soil-nailing cost model (y) is based on four variables: X1 = nail length, X2 = horizontal spacing, X3 = safety factor (SF), and X4 = time. Nine variations were fitted as multiple linear regression equations and analyzed with the SPSS 16.0 program. After the classical assumption checks and model feasibility tests on the SPSS output, the resulting model is Cost = (1,512,062 + 194,354 length - 1,649,135 distance + 187,831 SF + 54,864 time) thousand Rupiah. The budget plan includes preparatory work, the drainage system, soil nailing, and shotcrete. An efficient cost estimate, with 8 m nail length, 1.5 m installation distance, safety factor (SF) = 1.742 and a 30 day processing time, resulted in a cost of Rp. 2,566,313,000.00 (two billion five hundred sixty six million three hundred thirteen thousand rupiah).
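
    Evaluating the fitted regression at the study's chosen design reproduces the reported budget, which also confirms that the model's output unit is thousands of Rupiah; the sketch below simply encodes the published equation.

```python
def soil_nailing_cost(length_m, spacing_m, safety_factor, days):
    """Regression model from the study; the output is in thousands of Rupiah."""
    return (1_512_062 + 194_354 * length_m - 1_649_135 * spacing_m
            + 187_831 * safety_factor + 54_864 * days)

# The paper's efficient design: 8 m nails, 1.5 m spacing, SF = 1.742, 30 days.
cost = soil_nailing_cost(8, 1.5, 1.742, 30)
print(f"Rp {cost * 1000:,.0f}")   # about Rp 2,566,313,000, matching the report
```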

  12. Estimation of air quality improvement at road and street intersections

    Energy Technology Data Exchange (ETDEWEB)

    Hoeglund, P G [Royal Inst. of Technology, Stockholm (Sweden). Traffic and Transport Planning

    1996-12-31

    Quantifying the detrimental exhaust pollution related to traffic flow has always been a very difficult problem, especially at road and street intersections. Until now, model calculations have been developed mainly for the links between intersections. In an attempt to remedy this situation, the author has developed a method of estimating emissions at the micro level from motor vehicles at intersections, as an aid to infrastructure design for improved environmental conditions. Very little knowledge exists regarding the deceleration and acceleration patterns at road and street intersections, and few surveys have been done, in Sweden or in other countries. Evidently, the need for knowledge of deceleration and acceleration behaviour at the micro level has until now not been given priority. Traffic safety research has produced studies describing drivers' deceleration and acceleration behaviour and vehicles' braking performance, but those results give deceleration data for extreme situations and are not useful for describing normal decelerations and accelerations at road and street intersections. Environment-related problems within traffic flow analysis now accentuate the need to study deceleration and acceleration behaviour in combination with alternative designs of the road and street infrastructure. Vehicles differ greatly in the amount of exhaust pollution emitted while passing an intersection, depending on their speed levels and their deceleration and acceleration levels. (author)

  13. A Qualitative Study Exploring Facilitators for Improved Health Behaviors and Health Behavior Programs: Mental Health Service Users’ Perspectives

    Directory of Open Access Journals (Sweden)

    Candida Graham

    2014-01-01

    Full Text Available Objective. Mental health service users experience high rates of cardiometabolic disorders and have a 20-25% shorter life expectancy than the general population from such disorders. Clinician-led health behavior programs have shown moderate improvements, for mental health service users, in managing aspects of cardiometabolic disorders. This study sought to potentially enhance health initiatives by exploring (1) facilitators that help mental health service users engage in better health behaviors and (2) the types of health programs mental health service users want to develop. Methods. A qualitative study utilizing focus groups was conducted with 37 mental health service users attending a psychosocial rehabilitation center in Northern British Columbia, Canada. Results. Four major facilitator themes were identified: (1) factors of empowerment, self-value, and personal growth; (2) the need for social support; (3) pragmatic aspects of motivation and planning; and (4) access. Participants believed that engaging with programs of physical activity, nutrition, creativity, and illness support would motivate them to live more healthily. Conclusions and Implications for Practice. Being able to contribute to health behavior programs, feeling valued and able to experience personal growth are vital factors in engaging mental health service users in health programs. Clinicians and health care policy makers need to account for these considerations to improve the success of health improvement initiatives for this population.

  14. A Computable Definition of Sepsis Facilitates Screening and Performance Improvement Tracking.

    Science.gov (United States)

    Alessi, Lauren J; Warmus, Holly R; Schaffner, Erin K; Kantawala, Sajel; Carcillo, Joseph; Rosen, Johanna; Horvat, Christopher M

    2018-03-01

    Sepsis kills almost 5,000 children annually, accounting for 16% of pediatric health care spending in the United States. We sought to identify sepsis within the electronic health record (EHR) of a quaternary children's hospital to characterize disease incidence, improve recognition and response, and track performance metrics. Methods were organized in a plan-do-study-act cycle. During the "plan" phase, electronic definitions of sepsis (blood culture and antibiotic within 24 hours) and septic shock (sepsis plus vasoactive medication) were created to establish benchmark data and track progress with statistical process control. The performance of a screening tool was evaluated in the emergency department. During the "do" phase, a novel inpatient workflow is being piloted, which involves regular sepsis screening by nurses using the tool and a regimented response to high-risk patients. Screening tool use in the emergency department reduced time to antibiotics. Of 6,159 admissions between July and December 2016, the EHR definitions identified 1,433 (23.3%) with sepsis, of which 159 (11.1%) had septic shock. Hospital mortality was 2.2% for all sepsis patients and 15.7% for septic shock. These findings approximate epidemiologic studies of sepsis and severe sepsis, which report a prevalence range of 0.45-8.2% and a mortality range of 8.2-25%. Implementation of a sepsis screening tool is associated with improved performance. The prevalence of sepsis conditions identified with electronic definitions approximates the epidemiologic landscape characterized by other point-prevalence and administrative studies, providing face validity to this approach and proving useful for tracking performance improvement.
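
    A minimal sketch of the computable definition is shown below using pandas; the table layouts and column names are illustrative assumptions, since the abstract does not specify the EHR schema.

```python
import pandas as pd

def flag_sepsis(cultures: pd.DataFrame, abx: pd.DataFrame,
                vaso: pd.DataFrame) -> pd.DataFrame:
    """Apply the computable definitions: sepsis = blood culture and
    antibiotic within 24 hours; septic shock = sepsis plus a vasoactive
    medication.

    Assumed schema: `cultures` and `abx` each have columns
    ["encounter_id", "time"]; `vaso` has a column ["encounter_id"].
    """
    merged = cultures.merge(abx, on="encounter_id", suffixes=("_cx", "_abx"))
    within_24h = (merged["time_abx"] - merged["time_cx"]).abs() <= pd.Timedelta("24h")
    sepsis_ids = sorted(set(merged.loc[within_24h, "encounter_id"]))
    vaso_ids = set(vaso["encounter_id"])
    return pd.DataFrame({
        "encounter_id": sepsis_ids,
        "septic_shock": [eid in vaso_ids for eid in sepsis_ids],
    })
```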

  15. Zero-tension lysimeters: An improved design to monitor colloid-facilitated contaminant transport in the vadose zone

    International Nuclear Information System (INIS)

    Thompson, M.L.; Scharf, R.L.; Shang, C.

    1995-01-01

    There is increasing evidence that mobile colloids facilitate the long-distance transport of contaminants. The mobility of fine particles and macromolecules has been linked to the movement of actinides, organic contaminants, and heavy metals through soil. Direct evidence for colloid mobility includes the presence of humic materials in deep aquifers as well as coatings of accumulated clay, organic matter, or sesquioxides on particle or aggregate surfaces in subsoil horizons of many soils. The potential for colloid-facilitated transport of contaminants from hazardous-waste sites requires adequate monitoring before, during, and after in-situ remediation treatments. Zero-tension lysimeters (ZTLs) are especially appropriate for sampling water as it moves through saturated soil, although some unsaturated flow events may be sampled as well. Because no ceramic barrier or fiberglass wick is involved to maintain tension on the water (as is the case with other lysimeters), particles suspended in the water as well as dissolved species may be sampled with ZTLs. In this report, a ZTL design is proposed that is more suitable for monitoring colloid-facilitated contaminant migration. The improved design consists of a cylinder made of polycarbonate or polytetrafluoroethylene (PTFE) that is placed below undisturbed soil material. In many soils, a hydraulically powered tube may be used to extract an undisturbed core of soil before placement of the lysimeter. In those cases, the design has significant advantages over conventional designs with respect to simplicity and speed of installation. Therefore, it will allow colloid-facilitated transport of contaminants to be monitored at more locations at a given site.

  16. Social networks improve leaderless group navigation by facilitating long-distance communication

    Directory of Open Access Journals (Sweden)

    Nikolai W. F. BODE, A. Jamie WOOD, Daniel W. FRANKS

    2012-04-01

    Full Text Available Group navigation is of great importance for many animals, such as migrating flocks of birds or shoals of fish. One theory states that group membership can improve navigational accuracy compared to limited or less accurate individual navigational ability in groups without leaders (the "many-wrongs principle"). Here, we simulate leaderless group navigation that includes social connections as preferential interactions between individuals. Our results suggest that underlying social networks can reduce the navigational errors of groups and increase group cohesion. We use network summary statistics, in particular network motifs, to study which characteristics of networks lead to these improvements. It is networks in which preferences between individuals are not clustered, but spread evenly across the group, that are advantageous in group navigation, by effectively enhancing long-distance information exchange within groups. We suggest that our work predicts a baseline for the type of social structure we might expect to find in group-living animals that navigate without leaders [Current Zoology 58 (2): 329-341, 2012].

  17. Organizational coherence in health care organizations: conceptual guidance to facilitate quality improvement and organizational change.

    Science.gov (United States)

    McAlearney, Ann Scheck; Terris, Darcey; Hardacre, Jeanne; Spurgeon, Peter; Brown, Claire; Baumgart, Andre; Nyström, Monica E

    2014-01-01

    We sought to improve our understanding of how health care quality improvement (QI) methods and innovations could be efficiently and effectively translated between settings to reduce persistent gaps in health care quality both within and across countries. We aimed to examine whether we could identify a core set of organizational cultural attributes, independent of context and setting, which might be associated with success in implementing and sustaining QI systems in health care organizations. We convened an international group of investigators to explore the issues of organizational culture and QI in different health care contexts and settings. This group met in person 3 times and held a series of conference calls to discuss emerging ideas over 2 years. Investigators also conducted pilot studies in their home countries to examine the applicability of our conceptual model. We suggest that organizational coherence may be a critical element of QI efforts in health care organizations and propose that there are 3 key components of organizational coherence: (1) people, (2) processes, and (3) perspectives. Our work suggests that the concept of organizational coherence embraces both culture and context and can thus help guide both researchers and practitioners in efforts to enhance health care QI efforts, regardless of organizational type, location, or context.

  18. From crossbreeding to biotechnology-facilitated improvement of banana and plantain.

    Science.gov (United States)

    Ortiz, Rodomiro; Swennen, Rony

    2014-01-01

    The annual harvest of banana and plantain (Musa spp.) is approximately 145 million tons worldwide. About 85% of this global production comes from small plots and kitchen or backyard gardens from the developing world, and only 15% goes to the export trade. Musa acuminata and Musa balbisiana are the ancestors of several hundreds of parthenocarpic Musa diploid and polyploid cultivars, which show multiple origins through inter- and intra-specific hybridizations from these two wild diploid species. Generating hybrids combining host plant resistance to pathogens and pests, short growth cycles and height, high fruit yield, parthenocarpy, and desired quality from the cultivars remains a challenge for Musa crossbreeding, which started about one century ago in Trinidad. The success of Musa crossbreeding depends on the production of true hybrid seeds in a crop known for its high levels of female sterility, particularly among polyploid cultivars. All banana export cultivars grown today are, however, selections from somatic mutants of the group Cavendish and have a very narrow genetic base, while smallholders in sub-Saharan Africa, tropical Asia and Latin America use some bred-hybrids (mostly cooking types). Musa improvement goals need to shift to address emerging threats because of the changing climate. Innovative cell and molecular biology tools have the potential to enhance the pace and efficiency of genetic improvement in Musa. Micro-propagation has been successful for high throughput of clean planting materials, while in vitro seed germination assists in obtaining seedlings after inter-specific and across-ploidy hybridization. Flow cytometry protocols are used for checking ploidy among genebank accessions and breeding materials. DNA markers, the genetic maps based on them, and the recent sequencing of the banana genome offer means for gaining more insight into the genetics of the crop and for identifying genes that could accelerate Musa improvement. Likewise, DNA

  19. Improved measurements of RNA structure conservation with generalized centroid estimators

    Directory of Open Access Journals (Sweden)

    Yohei eOkada

    2011-08-01

    Identification of non-protein-coding RNAs (ncRNAs) in genomes is a crucial task for molecular cell biology as well as bioinformatics. Secondary structures of ncRNAs are employed as a key feature of ncRNA analysis, since the biological functions of ncRNAs are deeply related to their secondary structures. Although the minimum free energy (MFE) structure of an RNA sequence is regarded as the most stable structure, MFE alone is not an appropriate measure for identifying ncRNAs, since the free energy is heavily biased by the nucleotide composition. Therefore, instead of MFE itself, several alternative measures for identifying ncRNAs have been proposed, such as the structure conservation index (SCI) and the base pair distance (BPD), both of which employ MFE structures. However, these measurements are not suitable for identifying ncRNAs in some cases, including genome-wide searches, and incur a high false discovery rate. In this study, we propose improved measurements based on SCI and BPD, applying generalized centroid estimators to incorporate robustness against low-quality multiple alignments. Our experiments show that the proposed methods achieve higher accuracy than the original SCI and BPD, not only for human-curated structural alignments but also for low-quality alignments produced by CLUSTAL W. Furthermore, the centroid-based SCI on CLUSTAL W alignments is more accurate than, or comparable with, the original SCI on structural alignments generated with RAF, a high-quality structural aligner that requires on average twice the computational time. We conclude that our methods are better suited than the original SCI and BPD to genome-wide alignments, which are typically of low quality with respect to secondary structure.
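
    The centroid machinery itself does not fit in a few lines, but the elementary operation underlying BPD, comparing two predicted structures, is simply the size of the symmetric difference of their base-pair sets. A minimal sketch on dot-bracket strings (the inputs are invented):

    ```python
    def base_pairs(db):
        """Base-pair set of a dot-bracket secondary structure string."""
        stack, pairs = [], set()
        for i, c in enumerate(db):
            if c == '(':
                stack.append(i)
            elif c == ')':
                pairs.add((stack.pop(), i))
        return pairs

    def bpd(db1, db2):
        """Base pair distance: symmetric difference of the two pair sets."""
        return len(base_pairs(db1) ^ base_pairs(db2))

    print(bpd("((..((...))..))", "((..((...)).))."))  # 4
    ```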

  20. Facilitated Nurse Medication-Related Event Reporting to Improve Medication Management Quality and Safety in Intensive Care Units.

    Science.gov (United States)

    Xu, Jie; Reale, Carrie; Slagle, Jason M; Anders, Shilo; Shotwell, Matthew S; Dresselhaus, Timothy; Weinger, Matthew B

    Medication safety presents an ongoing challenge for nurses working in complex, fast-paced, intensive care unit (ICU) environments. Studying ICU nurses' medication management-especially medication-related events (MREs)-provides an approach to analyze and improve medication safety and quality. The goal of this study was to explore the utility of facilitated MRE reporting in identifying system deficiencies and the relationship between MREs and nurses' work in the ICUs. We conducted 124 structured 4-hour observations of nurses in three different ICUs. Each observation included measurement of the nurse's moment-to-moment activity and self-reports of workload and negative mood. The observer then obtained MRE reports from the nurse using a structured tool. The MREs were analyzed by three experts. MREs were reported in 35% of observations. The 60 total MREs included four medication errors and seven adverse drug events. Of the 49 remaining MREs, 65% were associated with negative patient impact. Task/process deficiencies were the most common contributory factor for MREs. MRE occurrence was correlated with increased total task volume. MREs also correlated with increased workload, especially during night shifts. Most of these MREs would not be captured by traditional event reporting systems. Facilitated MRE reporting provides a robust information source about potential breakdowns in medication management safety and opportunities for system improvement.

  1. Electrotactile feedback improves performance and facilitates learning in the routine grasping task

    Directory of Open Access Journals (Sweden)

    Milica Isaković

    2016-06-01

    The aim of this study was to investigate the feasibility of electrotactile feedback in closed loop training of force control during the routine grasping task. The feedback was provided using an array electrode and a simple six-level spatial coding, and the experiment was conducted in three amputee subjects. The psychometric tests confirmed that the subjects could perceive and interpret the electrotactile feedback with a high success rate. The subjects performed the routine grasping task comprising 4 blocks of 60 grasping trials. In each trial, the subjects employed feedforward control to close the hand and produce the desired grasping force (four levels). The first (baseline) and the last (validation) session were performed in open loop, while the second and the third session (training) included electrotactile feedback. The obtained results confirmed that using the feedback improved the accuracy and precision of the force control. In addition, the subjects performed significantly better in the validation vs. baseline session, therefore suggesting that electrotactile feedback can be used for learning and training of myoelectric control.

  2. Electrotactile Feedback Improves Performance and Facilitates Learning in the Routine Grasping Task.

    Science.gov (United States)

    Isaković, Milica; Belić, Minja; Štrbac, Matija; Popović, Igor; Došen, Strahinja; Farina, Dario; Keller, Thierry

    2016-06-13

    The aim of this study was to investigate the feasibility of electrotactile feedback in closed loop training of force control during the routine grasping task. The feedback was provided using an array electrode and a simple six-level spatial coding, and the experiment was conducted in three amputee subjects. The psychometric tests confirmed that the subjects could perceive and interpret the electrotactile feedback with a high success rate. The subjects performed the routine grasping task comprising 4 blocks of 60 grasping trials. In each trial, the subjects employed feedforward control to close the hand and produce the desired grasping force (four levels). The first (baseline) and the last (validation) session were performed in open loop, while the second and the third session (training) included electrotactile feedback. The obtained results confirmed that using the feedback improved the accuracy and precision of the force control. In addition, the subjects performed significantly better in the validation vs. baseline session, therefore suggesting that electrotactile feedback can be used for learning and training of myoelectric control.

  3. A pilot clinical study of Class III surgical patients facilitated by improved accelerated osteogenic orthodontic treatments.

    Science.gov (United States)

    Wu, JiaQi; Jiang, Jiu-Hui; Xu, Li; Liang, Cheng; Bai, YunYang; Zou, Wei

    2015-07-01

    To evaluate whether the improved accelerated osteogenic orthodontics (IAOO) procedure can shorten Class III surgical patients' preoperative orthodontic treatment and, if so, to what extent. This study was also designed to determine whether an IAOO procedure affects the tooth-moving pattern during extraction space closure. The samples in this study consisted of 24 Class III surgical patients. Twelve skeletal Class III surgery patients served as an experimental group (group 1) and the others as a control group (group 2). Before treatment, the maxillary first premolars were removed. For group 1, after the maxillary dental arch was aligned and leveled (T2), IAOO procedures were performed in the maxillary alveolar bone. Except for this IAOO procedure in group 1, all 24 patients experienced similar combined orthodontic and orthognathic treatment. Study casts of the maxillary dentitions were made before orthodontic treatment (T1) and after extraction space closure (T3). All of the casts were laser scanned, and the amounts of movement of the maxillary central incisor, canine, and first molar, as well as arch widths, were digitally measured and analyzed by using the three-dimensional model superimposition method. The time durations T3-T2 were significantly reduced in group 1 by 8.65 ± 2.67 months and T3-T1 by 6.39 ± 2.00 months (P < .05). There were no significant differences in tooth movement in the sagittal, vertical, and transverse dimensions between the two groups (P > .05). The IAOO can reduce the surgical orthodontic treatment time for the skeletal Class III surgical patient by more than half a year on average. The IAOO procedures do not save anchorage.

  4. Improving cost-effectiveness and facilitating participation of developing countries in international emissions trading

    International Nuclear Information System (INIS)

    Bohm, P.

    2003-01-01

    Cost-effectiveness is a crucial requirement for meaningful agreements on international climate change policy. This is also borne out in the wording of the Framework Convention of Climate Change and, in particular, the Kyoto Protocol (KP), see UNFCCC (1992) and UN (1997). However, the KP - as it stands after COP7 in Marrakech - is not fully cost-effective, although it may eventually turn out to be the only politically feasible, 'most cost-effective', first step in international climate change policy. The successor to the COP7 version of the KP may be a renegotiated protocol, if the COP7 version fails to be ratified by enough countries to enter into force, or it may be the treaty to be designed for a second commitment period. Four dimensions in which cost-effectiveness may be improved in a treaty that succeeds the KP are discussed here. They all relate to international emissions trading (IET) which is likely to be the most significant instrument for attaining cost-effective reductions in aggregate greenhouse gas (GHG) emissions. It is important for a climate treaty to be able to attract as many developing countries to IET as possible and achieve this as soon as possible. This would have to occur at essentially no cost to them. Only with developing countries onboard can the world community get full access to their low-cost options for emission reductions. A first aspect to be discussed here is related to identifying a cost-effective approach to attain that goal (Section 1). Another aspect concerns the role of the Clean Development Mechanism (CDM) in this context (Section 2). A third issue is to evaluate the consequences for cost-effectiveness of introducing a Commitment Period Reserve to limit 'overselling' (Section 3). A final one deals with the increase in flexibility that would follow from allowing not only banking but also borrowing of Assigned Amount Units (AAUs) (Section 4). While the first two issues refer directly to developing countries, the last two will be

  5. Barriers to and facilitators for implementing quality improvements in palliative care - results from a qualitative interview study in Norway.

    Science.gov (United States)

    Sommerbakk, Ragni; Haugen, Dagny Faksvåg; Tjora, Aksel; Kaasa, Stein; Hjermstad, Marianne Jensen

    2016-07-15

    Implementation of quality improvements in palliative care (PC) is challenging, and detailed knowledge about factors that may facilitate or hinder implementation is essential for success. One part of the EU-funded IMPACT project (IMplementation of quality indicators in PAlliative Care sTudy), which aims to increase the knowledge base, was to conduct national studies in PC services. This study aims to identify factors perceived as barriers or facilitators for improving PC in cancer and dementia settings in Norway. Individual, dual-participant and focus group interviews were conducted with 20 employees working in different health care services in Norway: two hospitals, one nursing home, and two local medical centers. Thematic analysis with a combined inductive and theoretical approach was applied. Barriers and facilitators were connected to (1) the innovation (e.g. credibility, advantage, accessibility, attractiveness); (2) the individual professional (e.g. motivation, PC expertise, confidence); (3) the patient (e.g. compliance); (4) the social context (e.g. leadership, culture of change, face-to-face contact); (5) the organizational context (e.g. resources, structures/facilities, expertise); (6) the political and economic context (e.g. policy, legislation, financial arrangements) and (7) the implementation strategy (e.g. educational, meetings, reminders). Four barriers that were particular to PC were identified: the poor general condition of patients in need of PC, symptom assessment tools that were not validated in all patient groups, lack of PC expertise and changes perceived to be at odds with staff's philosophy of care. When planning an improvement project in PC, services should pay particular attention to factors associated with their chosen implementation strategy. Leaders should also involve staff early in the improvement process, ensure that they have the necessary training in PC and that the change is consistent with the staff's philosophy of care. An important

  6. Facilitating wider application of progesterone RIA for improving livestock production in developing countries

    International Nuclear Information System (INIS)

    Oswin Perera, B.M.B.

    2000-01-01

    Research and development programmes supported by the Joint FAO/IAEA Division on improving livestock production in developing countries have identified three major biological constraints: feeding, breeding management and diseases. Proper breeding management is important in order to achieve optimum economic benefits (through products such as milk, meat and offspring) from an animal during its lifespan. This requires early attainment of puberty, short intervals from calving to conception, high conception rates and a low number of matings or artificial inseminations (AIs) per conception. The use of radioimmunoassay (RIA) for measuring progesterone in milk of dairy animals or in blood of meat animals, together with recording of data on reproductive events and production parameters, is an indispensable tool that provides information both on problems in breeding management by farmers as well as deficiencies in the AI services provided to them by government, co-operative or private organizations. This allows appropriate strategies and interventions to be adopted to overcome these limitations. Progesterone RIA can also detect animals that have not conceived by AI within 21 days after mating (early non-pregnancy diagnosis or N-PD), and alert farmers to the need to have these animals closely observed for oestrus and re-inseminated at the appropriate time. In order to ensure the sustained use of RIA technology for progesterone measurement in developing Member States, the IAEA has been engaged in the development and transfer of simple, robust and cheap methods of RIA. The system currently being used is based on a direct (non-extraction) method, using a 125 I-progesterone tracer and a solid-phase separation method (antibody coated tubes). In order to ensure wider availability (and lower cost) of the two key reagents required for the assay, the IAEA has initiated a programme to assist Member States to develop the capability to produce these in selected regional or

  7. Improved depth estimation with the light field camera

    Science.gov (United States)

    Wang, Huachun; Sang, Xinzhu; Chen, Duo; Guo, Nan; Wang, Peng; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu

    2017-10-01

    Light-field cameras are used in consumer and industrial applications. An array of micro-lenses captures enough information that one can refocus images after acquisition, as well as shift one's viewpoint within the sub-apertures of the main lens, effectively obtaining multiple views. Thus, depth estimation from both defocus and correspondence is available in a single capture. Lytro, Inc. also provides a depth estimate from a single-shot capture with its light-field cameras, such as the Lytro Illum. This Lytro depth map contains much correct depth information and can be used for higher-quality estimation. In this paper, we present a novel, simple and principled algorithm that computes dense depth estimates by combining defocus, correspondence and Lytro depth estimations. We analyze 2D epipolar images (EPIs) to obtain defocus and correspondence depth maps. Defocus depth is obtained by computing the spatial gradient after angular integration, and correspondence depth by computing the angular variance from EPIs. Lytro depth can be extracted from the Lytro Illum with software. We then show how to combine the three cues into a high-quality depth map. Our method for depth estimation is suitable for computer vision applications such as matting, full control of depth-of-field, and surface reconstruction, as well as light-field displays.
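
    The two EPI cues named above have compact array analogues: shear the EPI for each depth hypothesis, then take the spatial gradient of the angular mean (defocus) and the angular variance (correspondence). The following is a toy sketch of those two cues on a synthetic 2D EPI, not the authors' full pipeline; all shapes and parameters are invented.

    ```python
    import numpy as np

    def depth_cues(epi, shears):
        """Defocus and correspondence cues from a 2D EPI (angle u x space x).

        For each shear (depth hypothesis) the views are shifted so a scene point
        at that depth aligns vertically; the defocus cue is the locally averaged
        spatial gradient of the angular mean, the correspondence cue the angular
        variance.  A toy version of the two cues, not the full method.
        """
        shears = np.asarray(shears)
        n_u, n_x = epi.shape
        u0 = (n_u - 1) / 2
        x = np.arange(n_x)
        defocus = np.zeros((shears.size, n_x))
        corresp = np.zeros((shears.size, n_x))
        for i, s in enumerate(shears):
            sheared = np.stack([np.interp(x + s * (u - u0), x, epi[u])
                                for u in range(n_u)])
            mean = sheared.mean(axis=0)                  # angular integration
            grad = np.abs(np.gradient(mean))             # sharpness after refocus
            defocus[i] = np.convolve(grad, np.ones(5) / 5, mode='same')
            corresp[i] = sheared.var(axis=0)             # disagreement of views
        # Per-pixel depth: maximise defocus response, minimise angular variance.
        return shears[np.argmax(defocus, axis=0)], shears[np.argmin(corresp, axis=0)]

    # Synthetic EPI: one Lambertian feature at x = 32 with true slope 0.8.
    u = np.arange(9)[:, None]
    xg = np.arange(64)[None, :]
    epi = np.exp(-0.5 * ((xg - 32 - 0.8 * (u - 4)) / 1.5) ** 2)
    d_def, d_cor = depth_cues(epi, np.linspace(-2, 2, 41))
    print(d_def[32], d_cor[32])   # both close to the true slope of 0.8
    ```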

  8. Improving Estimation Accuracy of Aggregate Queries on Data Cubes

    Energy Technology Data Exchange (ETDEWEB)

    Pourabbas, Elaheh; Shoshani, Arie

    2008-08-15

    In this paper, we investigate the problem of estimation of a target database from summary databases derived from a base data cube. We show that such estimates can be derived by choosing a primary database which uses a proxy database to estimate the results. This technique is common in statistics, but an important issue we are addressing is the accuracy of these estimates. Specifically, given multiple primary and multiple proxy databases that share the same summary measure, the problem is how to select the primary and proxy databases that will generate the most accurate target database estimation possible. We propose an algorithmic approach for determining the steps to select or compute the source databases from multiple summary databases, which makes use of the principles of information entropy. We show that the source databases with the largest number of cells in common provide the most accurate estimates. We prove that this is consistent with maximizing the entropy. We provide some experimental results on the accuracy of the target database estimation in order to verify our results.

  9. Improved Variable Window Kernel Estimates of Probability Densities

    OpenAIRE

    Hall, Peter; Hu, Tien Chung; Marron, J. S.

    1995-01-01

    Variable window width kernel density estimators, with the width varying proportionally to the square root of the density, have been thought to have superior asymptotic properties. The rate of convergence has been claimed to be as good as those typical for higher-order kernels, which makes the variable width estimators more attractive because no adjustment is needed to handle the negativity usually entailed by the latter. However, in a recent paper, Terrell and Scott show that these results ca...
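
    The construction under discussion is easy to state in code: estimate a fixed-width pilot density, then give each data point a window width proportional to the inverse square root of the pilot density there, normalised by the geometric mean as in Abramson's estimator. A minimal sketch with invented parameters:

    ```python
    import numpy as np

    def variable_kde(x, data, h):
        """Abramson-style variable-window kernel density estimate (a sketch).

        Window widths vary inversely with the square root of a fixed-width
        pilot estimate, the construction whose asymptotics the paper revisits.
        """
        def gauss(u):
            return np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)

        # Stage 1: fixed-width pilot density at the data points.
        pilot = gauss((data[:, None] - data[None, :]) / h).mean(axis=1) / h
        # Stage 2: local widths h_i ~ pilot^(-1/2), normalised by the geometric
        # mean so the overall scale remains h.
        h_i = h * np.sqrt(np.exp(np.mean(np.log(pilot))) / pilot)
        return (gauss((x[:, None] - data[None, :]) / h_i) / h_i).mean(axis=1)

    data = np.random.default_rng(1).normal(size=500)
    grid = np.linspace(-4.0, 4.0, 9)
    print(variable_kde(grid, data, h=0.4).round(3))   # roughly the N(0,1) curve
    ```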

  10. FEH Local: Improving flood estimates using historical data

    Directory of Open Access Journals (Sweden)

    Prosdocimi Ilaria

    2016-01-01

    The traditional approach to design flood estimation (for example, to derive the 100-year flood) is to apply a statistical model to time series of peak river flow measured by gauging stations. Such records are typically not very long; for example, in the UK only about 10% of the stations have records that are more than 50 years in length. A long-explored way to augment the data available from a gauging station is to derive information about historical flood events and paleo-floods, which can be obtained from careful exploration of archives, old newspapers, flood marks or other signs of past flooding that are still discernible in the catchment, and the history of settlements. The inclusion of historical data in flood frequency estimation has been shown to substantially reduce the uncertainty around the estimated design events and is likely to provide insight into the rarest events, which might have pre-dated the relatively short systematic records. Among other things, the FEH Local project funded by the Environment Agency aims to develop methods to easily incorporate historical information into the standard method of statistical flood frequency estimation in the UK. Different statistical estimation procedures are explored, namely maximum likelihood and partial probability weighted moments, and the strengths and weaknesses of each method are investigated. The project assesses the usefulness of historical data and aims to provide practitioners with useful guidelines to indicate in what circumstances the inclusion of historical data is likely to be beneficial in terms of reducing both the bias and the variability of the estimated flood frequency curves. The guidelines are based on the results of a large Monte Carlo simulation study, in which different estimation procedures and different data availability scenarios are studied. The study provides some indication of the situations under which different estimation procedures might give a better performance.
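
    The maximum-likelihood route to combining gauged and historical data has a compact form: gauged annual maxima and known historical peaks enter through the density, and the remaining years of the historical period enter through a censored term stating that no flood exceeded the perception threshold. The sketch below uses a Gumbel distribution for brevity (UK practice favours other distributions, such as the generalised logistic); all numbers are invented.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import gumbel_r

    def neg_log_lik(params, amax, hist_peaks, threshold, hist_years):
        """Gauged annual maxima enter via the density; the historical period of
        `hist_years` contributes its recorded peaks plus a censored term saying
        the remaining years stayed below the perception threshold."""
        loc, scale = params
        if scale <= 0:
            return np.inf
        ll = gumbel_r.logpdf(amax, loc, scale).sum()
        ll += gumbel_r.logpdf(hist_peaks, loc, scale).sum()
        ll += (hist_years - len(hist_peaks)) * gumbel_r.logcdf(threshold, loc, scale)
        return -ll

    rng = np.random.default_rng(2)
    amax = gumbel_r.rvs(100, 30, size=40, random_state=rng)  # 40-year gauged record
    hist = np.array([210.0, 245.0])   # two floods known from archives / flood marks
    res = minimize(neg_log_lik, x0=[100.0, 30.0], method="Nelder-Mead",
                   args=(amax, hist, 200.0, 150))   # threshold 200, 150-year window
    loc, scale = res.x
    print("estimated 100-year flood:", round(gumbel_r.ppf(0.99, loc, scale), 1))
    ```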

  11. Towards accessible integrated palliative care: Perspectives of leaders from seven European countries on facilitators, barriers and recommendations for improvement.

    Science.gov (United States)

    den Herder-van der Eerden, Marlieke; Ewert, Benjamin; Hodiamont, Farina; Hesse, Michaela; Hasselaar, Jeroen; Radbruch, Lukas

    2017-01-01

    Literature suggests that integrated palliative care (IPC) increases the quality of care for palliative patients at lower costs. However, knowledge on models encompassing all integration levels for successfully implementing IPC is scarce. The purpose of this paper is to describe the experiences of IPC leaders in seven European countries regarding core elements, facilitators and barriers of IPC implementation, and to provide recommendations for future policy and practice. A qualitative interview study was conducted between December 2013 and May 2014. In total, 34 IPC leaders in primary and secondary palliative care or public health in Belgium, Germany, Hungary, Ireland, the Netherlands, Spain and the UK were interviewed. Transcripts were analysed using thematic data analysis. IPC implementation efforts involved a multidisciplinary team approach and cross-sectional coordination. Informal professional relationships, basic medical education and general awareness were regarded as facilitators of IPC. Identified barriers included lack of knowledge about when to start palliative care, lack of collaboration and financial structures. Recommendations for improvement included access, patient-centeredness, coordination and cooperation, financing and ICT systems. Although IPC is becoming more common, action has been uneven at different levels. IPC implementation largely remains provisional and informal due to the lack of standardised treatment pathways, legal frameworks and financial incentives to support multilevel integration. In order to make IPC more accessible, palliative care education as well as legal and financial support within national healthcare systems needs to be enhanced.

  12. Benefits, Facilitators, Barriers, and Strategies to Improve Pesticide Protective Behaviors: Insights from Farmworkers in North Carolina Tobacco Fields.

    Science.gov (United States)

    Walton, AnnMarie Lee; LePrevost, Catherine E; Linnan, Laura; Sanchez-Birkhead, Ana; Mooney, Kathi

    2017-06-23

    Pesticide exposure is associated with deleterious health effects. Prior studies suggest Latino farmworkers perceive little control over their occupational health. Using the Health Belief Model as a theoretical guide, we explored the perceptions of Latino farmworkers working in tobacco in North Carolina (n = 72) about benefits and facilitators of pesticide protective behaviors as well as barriers, and strategies to overcome barriers to their use. Interviews were conducted with participants at farmworker housing during non-work time. Qualitative data were analyzed using ATLAS.ti. Farmworkers recognized pesticide protective behaviors as helping them to not get sick and stay healthy. Farmworkers perceived work experience as facilitating protective behaviors. Wetness in the field was the most commonly cited barrier to protective behavior use. To overcome this barrier, farmworkers suggested use of water-resistant outerwear, as well as packing a change of clothes for mid-day, with space and time to change provided by employers. Examination of the efficacy and feasibility of farmworkers' suggestions for addressing barriers is warranted. Training and behavior modeling by experienced peers may improve behavior adoption and perceived control.

  13. Facilitating improved road safety based on increased knowledge about driving behaviour and profiling sub-groups of drivers

    DEFF Research Database (Denmark)

    Martinussen, Laila Marianne

    The aim of the Ph.D. study presented in this thesis was to facilitate improved road safety through increased understanding of methods used to measure driving behaviour, and through increased knowledge about driving behaviour in sub-groups of drivers. More specifically, the usefulness of the Driver… with underlying mechanisms of lack of focus, emotional stress, recklessness and confusion, and hence it is highly important to further explore means to making drivers become more focused or attentive when driving, and to deal with emotional responses in traffic like impatience and frustration (Article 1). 2…, indicating that the problem lies in the drivers' attitudes towards safety (Article 3). 6. It is indicated that rather than viewing safety and risk as two ends of a continuum, safety and risk should be understood as two separate constructs, with different underlying motives. Therefore it is suggested

  14. Improved Vector Velocity Estimation using Directional Transverse Oscillation

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    2015-01-01

    A method for estimating vector velocities using transverse oscillation (TO) combined with directional beamforming is presented. Directional Transverse Oscillation (DTO) is self-calibrating, which increases the estimation accuracy and finds the lateral oscillation period automatically. A normal focused field is emitted and the received signals are beamformed in the lateral direction transverse to the ultrasound beam. A lateral oscillation is obtained by having a receive apodization waveform with two separate peaks. The IQ data are obtained by making a Hilbert transform of the directional signal… transducer with a focal point at 105.6 mm (F#=5) for Vector Flow Imaging (VFI). A 6 mm radius tube in a circulating flow rig was scanned and the parabolic volume flow of 112.7 l/h (peak velocity 0.55 m/s) measured by a Danfoss Magnetic flow meter for reference. Velocity estimates for DTO are found for 32

  15. Time improvement of photoelectric effect calculation for absorbed dose estimation

    International Nuclear Information System (INIS)

    Massa, J M; Wainschenker, R S; Doorn, J H; Caselli, E E

    2007-01-01

    Ionizing radiation therapy is a very useful tool in cancer treatment. It is very important to determine the absorbed dose in human tissue to accomplish an effective treatment. A mathematical model based on affected areas is the most suitable tool to estimate the absorbed dose. Lately, Monte Carlo based techniques have become the most reliable, but they are computationally expensive. Absorbed-dose calculation programs using different strategies have to choose between estimation quality and computing time. This paper describes an optimized method for the photoelectron polar angle calculation in the photoelectric effect, which is significant for estimating deposited energy in human tissue. In the case studies, the time cost reduction nearly reached 86%, meaning that the time needed to do the calculation is approximately 1/7 of that of the non-optimized approach. This has been done keeping precision invariant.

  16. Improved air ventilation rate estimation based on a statistical model

    International Nuclear Information System (INIS)

    Brabec, M.; Jilek, K.

    2004-01-01

    A new approach to air ventilation rate estimation from CO measurement data is presented. The approach is based on a state-space dynamic statistical model, allowing for quick and efficient estimation. Underlying computations are based on Kalman filtering, whose practical software implementation is rather easy. The key property is the flexibility of the model, allowing various artificial regimens of CO level manipulation to be treated. The model is semi-parametric in nature and can efficiently handle a time-varying ventilation rate. This is a major advantage compared to some of the methods which are currently in practical use. After a formal introduction of the statistical model, its performance is demonstrated on real data from routine measurements. It is shown how the approach can be utilized in a more complex situation of major practical relevance, when a time-varying air ventilation rate and radon entry rate are to be estimated simultaneously from concurrent radon and CO measurements.
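
    The state-space idea can be illustrated with a small filter. Under a well-mixed mass balance the CO series obeys C[k+1] = a*C[k] + b with a = exp(-lam*dt), so tracking (a, b) as a slowly varying state yields a time-varying ventilation estimate lam = -ln(a)/dt. This is a generic sketch in the spirit of the approach, not the authors' exact model; all parameters are invented.

    ```python
    import numpy as np

    def ventilation_kalman(co, dt, q=1e-6, r=0.05):
        """Track a time-varying air exchange rate from a CO concentration series.

        The pair (a, b) in C[k+1] = a*C[k] + b is modelled as a random walk and
        estimated with a scalar-observation Kalman filter.
        """
        theta = np.array([0.9, 0.1])        # initial guess for (a, b)
        P = np.eye(2)                       # state covariance
        Q = q * np.eye(2)                   # random-walk process covariance
        lams = []
        for k in range(len(co) - 1):
            P = P + Q                       # predict: theta unchanged (random walk)
            H = np.array([co[k], 1.0])      # observation row: C[k+1] = a*C[k] + b
            S = H @ P @ H + r               # innovation variance
            K = P @ H / S                   # Kalman gain
            theta = theta + K * (co[k + 1] - H @ theta)
            P = P - np.outer(K, H) @ P
            lams.append(-np.log(np.clip(theta[0], 1e-6, 0.999999)) / dt)
        return np.array(lams)

    # Mock decay experiment: ventilation 0.05/min toward a 5 ppm background.
    t = np.arange(0.0, 120.0, 1.0)
    co = 5 + 45 * np.exp(-0.05 * t) + np.random.default_rng(3).normal(0, 0.2, t.size)
    print(ventilation_kalman(co, dt=1.0)[-1])   # approaches 0.05 per minute
    ```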

  17. Improving methods estimation of the investment climate of the country

    Directory of Open Access Journals (Sweden)

    E. V. Ryabinin

    2016-01-01

    Investors need the most objective possible assessment of the investment climate of a country in order to build their strategies for operating in its market. The article describes two methods for obtaining such an estimate: a fundamental method and an expert method. Studies have shown that the fundamental method provides the most accurate and objective assessment, but not all investment-potential factors can be subjected to mathematical evaluation. Expert assessment, in practice, is prone to subjectivity, so its use requires special care. Modern economic practice has shown that elements of the investment climate directly affect the investment decisions of companies. Improving the methodology of investment climate assessment makes it possible to build optimal forms of cooperation between investors and the host country. Under today's political tensions, this path requires clear cooperation among actors at both the domestic and international levels. Such measures would help avoid destabilization of Russia's relations with foreign investors.

  18. Barriers and facilitators of interventions for improving antiretroviral therapy adherence: a systematic review of global qualitative evidence.

    Science.gov (United States)

    Ma, Qingyan; Tso, Lai Sze; Rich, Zachary C; Hall, Brian J; Beanland, Rachel; Li, Haochu; Lackey, Mellanye; Hu, Fengyu; Cai, Weiping; Doherty, Meg; Tucker, Joseph D

    2016-01-01

    Qualitative research on antiretroviral therapy (ART) adherence interventions can provide a deeper understanding of intervention facilitators and barriers. This systematic review aims to synthesize qualitative evidence of interventions for improving ART adherence and to inform patient-centred policymaking. We searched 19 databases to identify studies presenting primary qualitative data on the experiences, attitudes and acceptability of interventions to improve ART adherence among people living with HIV (PLHIV) and treatment providers. We used thematic synthesis to synthesize qualitative evidence and the CERQual (Confidence in the Evidence from Reviews of Qualitative Research) approach to assess the confidence of review findings. Of 2982 references identified, a total of 31 studies from 17 countries were included. Twelve studies were conducted in high-income countries, 13 in middle-income countries and six in low-income countries. Study populations focused on adults living with HIV (21 studies, n = 1025), children living with HIV (two studies, n = 46), adolescents living with HIV (four studies, n = 70) and pregnant women living with HIV (one study, n = 79). Twenty-three studies examined PLHIV perspectives and 13 studies examined healthcare provider perspectives. We identified six themes related to types of interventions, including task shifting, education, mobile phone text messaging, directly observed therapy, medical professional outreach and complex interventions. We also identified five cross-cutting themes, including strengthening social relationships, ensuring confidentiality, empowerment of PLHIV, compensation and integrating religious beliefs into interventions. Our qualitative evidence suggests that strengthening PLHIV social relationships, PLHIV empowerment and developing culturally appropriate interventions may facilitate adherence interventions. Our study indicates that potential barriers are inadequate training and compensation for lay health workers and inadvertent disclosure of

  19. The utah beacon experience: integrating quality improvement, health information technology, and practice facilitation to improve diabetes outcomes in small health care facilities.

    Science.gov (United States)

    Tennison, Janet; Rajeev, Deepthi; Woolsey, Sarah; Black, Jeff; Oostema, Steven J; North, Christie

    2014-01-01

    The Utah Improving Care through Connectivity and Collaboration (IC3) Beacon community (2010-2013) was spearheaded by HealthInsight, a nonprofit, community-based organization. One of the main objectives of IC3 was to improve health care provided to patients with diabetes in three Utah counties, collaborating with 21 independent smaller clinics and two large health care enterprises. This paper focuses on the use of health information technology (HIT) and practice facilitation to develop and implement new care processes to improve clinic workflow and ultimately improve patients' diabetes outcomes at 21 participating smaller, independent clinics. Early in the project, we learned that most of the 21 clinics did not have the resources needed to successfully implement quality improvement (QI) initiatives. IC3 helped clinics effectively use data generated from their electronic health records (EHRs) to design and implement interventions to improve patients' diabetes outcomes. This close coupling of HIT, expert practice facilitation, and Learning Collaboratives was found to be especially valuable in clinics with limited resources. Through this process we learned that (1) an extensive readiness assessment improved clinic retention, (2) clinic champions were important for a successful collaboration, and (3) current EHR systems have limited functionality to assist in QI initiatives. In general, smaller, independent clinics lack knowledge of and experience with QI and have limited HIT experience to improve patient care using electronic clinical data. Additionally, future projects like IC3 Beacon will be instrumental in changing clinic culture so that QI is integrated into routine workflow. Our efforts led to significant changes in how practice staff optimized their EHRs to manage and improve diabetes care, while establishing the framework for sustainability. Some of the IC3 Beacon practices are currently smoothly transitioning to new models of care such as Patient

  20. Assessing Error Correlations in Remote Sensing-Based Estimates of Forest Attributes for Improved Composite Estimation

    Directory of Open Access Journals (Sweden)

    Sarah Ehlers

    2018-04-01

    Today, inexpensive remote sensing (RS) data from different sensors and platforms can be obtained at short intervals and used for assessing several kinds of forest characteristics at the level of plots, stands and landscapes. Methods such as composite estimation and data assimilation can be used for combining the different sources of information to obtain up-to-date and precise estimates of the characteristics of interest. In composite estimation, a standard procedure is to assign weights to the individual estimates inversely proportional to their variance. However, if the estimates are correlated, the correlations must be considered in assigning weights; otherwise a composite estimator may be inefficient and its variance underestimated. In this study we assessed the correlation of plot-level estimates of forest characteristics from different RS datasets, between assessments using the same type of sensor as well as across different sensors. The RS data evaluated were SPOT-5 multispectral data, 3D airborne laser scanning data, and TanDEM-X interferometric radar data. Studies were made for plot-level mean diameter, mean height, and growing stock volume. All data were acquired from a test site dominated by coniferous forest in southern Sweden. We found that the correlations between plot-level estimates based on the same type of RS data were positive and strong, whereas the correlations between estimates using different sources of RS data were not as strong, and weaker for mean height than for mean diameter and volume. The implications of such correlations in composite estimation are demonstrated, and it is discussed how correlations may affect results from data assimilation procedures.
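
    For two unbiased estimates of the same plot-level quantity, the minimum-variance composite weight has a closed form that includes their covariance; dropping the covariance term recovers the familiar inverse-variance weights and, when the true correlation is positive, understates the composite variance. A small sketch with invented numbers:

    ```python
    import numpy as np

    def composite(est1, var1, est2, var2, cov12=0.0):
        """Minimum-variance composite of two unbiased estimates of one quantity.

        With cov12 = 0 this is ordinary inverse-variance weighting; a positive
        covariance shifts the weight and raises the achievable variance, which
        is what ignoring correlation understates.
        """
        w = (var2 - cov12) / (var1 + var2 - 2 * cov12)
        est = w * est1 + (1 - w) * est2
        var = w ** 2 * var1 + (1 - w) ** 2 * var2 + 2 * w * (1 - w) * cov12
        return est, var

    # Plot-level volume estimates (m3/ha) from two RS sources; numbers invented.
    v1, v2, sd1, sd2 = 215.0, 238.0, 20.0, 35.0
    print(composite(v1, sd1 ** 2, v2, sd2 ** 2))                     # independent
    print(composite(v1, sd1 ** 2, v2, sd2 ** 2, cov12=0.6 * sd1 * sd2))  # correlated
    ```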

  1. Technical report for effective estimation and improvement of quality system

    International Nuclear Information System (INIS)

    Kim, Kwan Hyun

    2000-06-01

    This technical report provides methods for improving the quality system in the R&D area. It applies to the quality assurance (QA) programmes for design and fabrication in nuclear projects. The organization having overall responsibility for nuclear power item design, preservation, and fabrication is described for each stage of QA system improvement.

  2. An improved routine for the fast estimate of ion cyclotron heating efficiency in tokamak plasmas

    International Nuclear Information System (INIS)

    Brambilla, M.

    1992-02-01

    The subroutine ICEVAL for the rapid simulation of Ion Cyclotron Heating in tokamak plasmas is based on analytic estimates of the wave behaviour near resonances, and on drastic but reasonable simplifications of the real geometry. The subroutine has been rewritten to improve the model and to facilitate its use as input in transport codes. In the new version the influence of quasilinear minority heating on the damping efficiency is taken into account using the well-known Stix analytic approximation. Among other improvements are: a) the possibility of considering plasmas with more than two ion species; b) inclusion of Landau, Transit Time and collisional damping on the electrons not localised at resonances; c) better models for the antenna spectrum and for the construction of the power deposition profiles. The results of ICEVAL are compared in detail with those of the full-wave code FELICE for the case of Hydrogen minority heating in a Deuterium plasma; except for details which depend on the excitation of global eigenmodes, agreement is excellent. ICEVAL is also used to investigate the enhancement of the absorption efficiency due to quasilinear heating of the minority ions. The effect is a strongly non-linear function of the available power, and decreases rapidly with increasing concentration. For parameters typical of Asdex Upgrade plasmas, about 4 MW are required to produce a significant increase of the single-pass absorption at concentrations between 10 and 20%. (orig.)

  3. A novel ULA-based geometry for improving AOA estimation

    Directory of Open Access Journals (Sweden)

    Akbari Farida

    2011-01-01

    Due to relatively simple implementation, Uniform Linear Array (ULA) is a popular geometry for array signal processing. Despite this advantage, it does not have a uniform performance in all directions and Angle of Arrival (AOA) estimation performance degrades considerably in the angles close to endfire. In this article, a new configuration is proposed which can solve this problem. Proposed Array (PA) configuration adds two elements to the ULA in top and bottom of the array axis. By extending signal model of the ULA to the new proposed ULA-based array, AOA estimation performance has been compared in terms of angular accuracy and resolution threshold through two well-known AOA estimation algorithms, MUSIC and MVDR. In both algorithms, Root Mean Square Error (RMSE) of the detected angles descends as the input Signal to Noise Ratio (SNR) increases. Simulation results show that the proposed array geometry introduces uniform accurate performance and higher resolution in middle angles as well as border ones. The PA also presents less RMSE than the ULA in endfire directions. Therefore, the proposed array offers better performance for the border angles with almost the same array size and simplicity in both MUSIC and MVDR algorithms with respect to the conventional ULA. In addition, AOA estimation performance of the PA geometry is compared with two well-known 2D-array geometries: L-shape and V-shape, and acceptable results are obtained with equivalent or lower complexity.

  4. A novel ULA-based geometry for improving AOA estimation

    Science.gov (United States)

    Shirvani-Moghaddam, Shahriar; Akbari, Farida

    2011-12-01

    Due to relatively simple implementation, Uniform Linear Array (ULA) is a popular geometry for array signal processing. Despite this advantage, it does not have a uniform performance in all directions and Angle of Arrival (AOA) estimation performance degrades considerably in the angles close to endfire. In this article, a new configuration is proposed which can solve this problem. Proposed Array (PA) configuration adds two elements to the ULA in top and bottom of the array axis. By extending signal model of the ULA to the new proposed ULA-based array, AOA estimation performance has been compared in terms of angular accuracy and resolution threshold through two well-known AOA estimation algorithms, MUSIC and MVDR. In both algorithms, Root Mean Square Error (RMSE) of the detected angles descends as the input Signal to Noise Ratio (SNR) increases. Simulation results show that the proposed array geometry introduces uniform accurate performance and higher resolution in middle angles as well as border ones. The PA also presents less RMSE than the ULA in endfire directions. Therefore, the proposed array offers better performance for the border angles with almost the same array size and simplicity in both MUSIC and MVDR algorithms with respect to the conventional ULA. In addition, AOA estimation performance of the PA geometry is compared with two well-known 2D-array geometries: L-shape and V-shape, and acceptable results are obtained with equivalent or lower complexity.
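
    Both algorithms named above operate on the array covariance matrix; MUSIC, for example, scans a steering vector against the estimated noise subspace. Below is a generic MUSIC sketch for a conventional half-wavelength ULA (the PA geometry would change only the steering vector); the scenario numbers are invented.

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    def music_spectrum(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
        """MUSIC pseudospectrum for an M-element ULA (spacing d in wavelengths);
        X is the M x T snapshot matrix."""
        M, T = X.shape
        R = X @ X.conj().T / T                     # sample covariance
        _, V = np.linalg.eigh(R)                   # eigenvalues ascending
        En = V[:, :M - n_sources]                  # noise subspace
        m = np.arange(M)
        p = [1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
             for th in np.deg2rad(angles)
             for a in [np.exp(2j * np.pi * d * m * np.sin(th))]]
        return angles, np.asarray(p)

    # Two sources at -40 and 65 degrees, 8-element half-wavelength ULA.
    rng = np.random.default_rng(4)
    M, T, doas = 8, 200, np.deg2rad([-40.0, 65.0])
    A = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(M), np.sin(doas)))
    S = rng.standard_normal((2, T)) + 1j * rng.standard_normal((2, T))
    N = 0.3 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
    ang, p = music_spectrum(A @ S + N, n_sources=2)
    idx, _ = find_peaks(p)
    print(np.sort(ang[idx[np.argsort(p[idx])[-2:]]]))   # peaks near -40 and 65
    ```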

  5. The use of absolute values improves performance of estimation formulae

    DEFF Research Database (Denmark)

    Redal-Baigorri, Belén; Rasmussen, Knud; Heaf, James Goya

    2013-01-01

    BACKGROUND: Estimation of Glomerular Filtration Rate (GFR) by equations such as Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) or Modification of Diet in Renal Disease (MDRD) is usually expressed as a Body Surface Area (BSA) indexed value (ml/min per 1.73 m²). This can have severe cl...

  6. USING COLORS TO IMPROVE PHOTOMETRIC METALLICITY ESTIMATES FOR GALAXIES

    International Nuclear Information System (INIS)

    Sanders, N. E.; Soderberg, A. M.; Levesque, E. M.

    2013-01-01

    There is a well known correlation between the mass and metallicity of star-forming galaxies. Because mass is correlated with luminosity, this relation is often exploited, when spectroscopy is not available, to estimate galaxy metallicities based on single band photometry. However, we show that galaxy color is typically more effective than luminosity as a predictor of metallicity. This is a consequence of the correlation between color and the galaxy mass-to-light ratio and the recently discovered correlation between star formation rate (SFR) and residuals from the mass-metallicity relation. Using Sloan Digital Sky Survey spectroscopy of ∼180,000 nearby galaxies, we derive 'LZC relations', empirical relations between metallicity (in seven common strong line diagnostics), luminosity, and color (in 10 filter pairs and four methods of photometry). We show that these relations allow photometric metallicity estimates, based on luminosity and a single optical color, that are ∼50% more precise than those made based on luminosity alone; galaxy metallicity can be estimated to within ∼0.05-0.1 dex of the spectroscopically derived value depending on the diagnostic used. Including color information in photometric metallicity estimates also reduces systematic biases for populations skewed toward high or low SFR environments, as we illustrate using the host galaxy of the supernova SN 2010ay. This new tool will lend more statistical power to studies of galaxy populations, such as supernova and gamma-ray burst host environments, in ongoing and future wide-field imaging surveys.
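
    An LZC relation is, in essence, a regression of metallicity on absolute magnitude plus a single colour term. The sketch below fits such a relation to mock data and recovers the planted coefficients; all numbers are invented for illustration and are not the paper's published LZC fits.

    ```python
    import numpy as np

    # LZC-style fit: 12 + log(O/H) = a + b*M + c*(colour).  Mock data only.
    rng = np.random.default_rng(5)
    M = rng.uniform(-22, -16, 1000)                       # absolute magnitude
    gr = rng.uniform(0.2, 0.9, 1000)                      # g-r colour
    Z = 9.0 - 0.05 * (M + 20) + 0.5 * (gr - 0.5) + rng.normal(0, 0.05, 1000)

    A = np.c_[np.ones_like(M), M, gr]                     # design matrix
    (a, b, c), *_ = np.linalg.lstsq(A, Z, rcond=None)
    scatter = np.std(Z - A @ np.array([a, b, c]))
    print(f"a={a:.2f}  b={b:.3f}  c={c:.3f}  scatter={scatter:.3f} dex")
    ```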

  7. An improved COCOMO software cost estimation model | Duke ...

    African Journals Online (AJOL)

    In this paper, we discuss the methodologies adopted previously in software cost estimation using the COnstructive COst MOdels (COCOMOs). From our analysis, COCOMOs produce very high software development effort estimates, which eventually produce high software development costs. Consequently, we propose their extension, ...

  8. Improving Estimation of Betweenness Centrality for Scale-Free Graphs

    Energy Technology Data Exchange (ETDEWEB)

    Bromberger, Seth A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Klymko, Christine F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Henderson, Keith A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pearce, Roger [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sanders, Geoff [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-11-07

    Betweenness centrality is a graph statistic used to find vertices that are participants in a large number of shortest paths in a graph. This centrality measure is commonly used in path and network interdiction problems, and its complete form requires the calculation of all-pairs shortest paths for each vertex. This leads to a time complexity of O(|V||E|), which is impractical for large graphs. Estimation of betweenness centrality has focused on performing shortest-path calculations on a subset of randomly selected vertices. This reduces the complexity of the centrality estimation to O(|S||E|), |S| < |V|, which can be scaled appropriately based on the computing resources available. An estimation strategy that uses random selection of vertices for seed selection is fast and simple to implement, but may not provide optimal estimation of betweenness centrality when the number of samples is constrained. Our experimentation has identified a number of alternate seed-selection strategies that provide lower error than random selection in common scale-free graphs. These strategies are discussed and experimental results are presented.
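
    The estimator being tuned is Brandes-style dependency accumulation from a subset of source vertices, rescaled by n/|S|; the seed-selection strategy simply decides which subset is used. A minimal sketch with unnormalised scores; the high-degree seeding shown is one plausible alternative to random selection, not necessarily the report's strategies.

    ```python
    from collections import deque

    def approx_betweenness(adj, seeds):
        """Betweenness estimate from single-source dependencies of `seeds`."""
        n = len(adj)
        bc = {v: 0.0 for v in adj}
        for s in seeds:
            # BFS from s, counting shortest paths (sigma) and predecessors.
            sigma = {v: 0 for v in adj}; sigma[s] = 1
            dist = {v: -1 for v in adj}; dist[s] = 0
            order, preds = [], {v: [] for v in adj}
            q = deque([s])
            while q:
                v = q.popleft(); order.append(v)
                for w in adj[v]:
                    if dist[w] < 0:
                        dist[w] = dist[v] + 1; q.append(w)
                    if dist[w] == dist[v] + 1:
                        sigma[w] += sigma[v]; preds[w].append(v)
            # Back-propagate dependencies (Brandes' accumulation).
            delta = {v: 0.0 for v in adj}
            for w in reversed(order):
                for v in preds[w]:
                    delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
                if w != s:
                    bc[w] += delta[w]
        scale = n / len(seeds)       # rescale the |S|-source sum toward all-pairs
        return {v: b * scale for v, b in bc.items()}

    # One alternate strategy: seed from the highest-degree vertices.
    adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2, 4], 4: [3]}
    seeds = sorted(adj, key=lambda v: len(adj[v]), reverse=True)[:2]
    print(approx_betweenness(adj, seeds))
    ```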

  9. Improved Methodology for Benefit Estimation of Preservation Projects

    Science.gov (United States)

    2018-04-01

    This research report presents an improved process for evaluating the benefits and economic tradeoffs associated with a variety of highway preservation projects. It includes a summary of results from a comprehensive phone survey concerning the use and...

  10. Using the Pareto Distribution to Improve Estimates of Topcoded Earnings

    OpenAIRE

    Philip Armour; Richard V. Burkhauser; Jeff Larrimore

    2014-01-01

    Inconsistent censoring in the public-use March Current Population Survey (CPS) limits its usefulness in measuring labor earnings trends. Using Pareto estimation methods with less-censored internal CPS data, we create an enhanced cell-mean series to capture top earnings in the public-use CPS. We find that previous approaches for imputing topcoded earnings systematically understate top earnings. Annual earnings inequality trends since 1963 using our series closely approximate those found by Kop...

  11. Improved stove programs need robust methods to estimate carbon offsets

    OpenAIRE

    Johnson, Michael; Edwards, Rufus; Masera, Omar

    2010-01-01

    Current standard methods result in significant discrepancies in carbon offset accounting compared to approaches based on representative community based subsamples, which provide more realistic assessments at reasonable cost. Perhaps more critically, neither of the currently approved methods incorporates uncertainties inherent in estimates of emission factors or non-renewable fuel usage (fNRB). Since emission factors and fNRB contribute 25% and 47%, respectively, to the overall uncertainty in ...

  12. Improving Relative Combat Power Estimation: The Road to Victory

    Science.gov (United States)

    2014-06-13

    was unthinkable before. Napoleon Bonaparte achieved a superior warfighting system compared to his opponents, which resulted in SOF. Napoleon's … observations about combat power estimation and force employment remain valid. Napoleon also offered thoughts about combat power and superiority when he … force. However, Napoleon did not think one-sidedly about the problem. He also said: “The moral is to the physical as three to one.” This dual

  13. An improved method for estimating fatigue life under combined stress

    Czech Academy of Sciences Publication Activity Database

    Balda, Miroslav; Svoboda, Jaroslav; Fröhlich, Vladislav

    2007-01-01

    Vol. 1, No. 1 (2007), pp. 1-10. ISSN 1802-680X. [Applied and Computational Mechanics 2007. Nečtiny, 05.11.2007 - 07.11.2007] R&D Projects: GA ČR GA101/05/0199. Institutional research plan: CEZ:AV0Z20760514. Keywords: multiaxial fatigue * life-time estimation * nonlinear least squares. Subject RIV: JL - Materials Fatigue, Friction Mechanics

  14. Hawaii Clean Energy Initiative (HCEI) Scenario Analysis: Quantitative Estimates Used to Facilitate Working Group Discussions (2008-2010)

    Energy Technology Data Exchange (ETDEWEB)

    Braccio, R.; Finch, P.; Frazier, R.

    2012-03-01

    This report provides details on the Hawaii Clean Energy Initiative (HCEI) Scenario Analysis to identify potential policy options and evaluate their impact on reaching the 70% HCEI goal, present possible pathways to attain the goal based on currently available technology, with an eye to initiatives under way in Hawaii, and provide an 'order-of-magnitude' cost estimate and a jump-start to action that would be adjusted with a better understanding of the technologies and market.

  15. An improved model for estimating pesticide emissions for agricultural LCA

    DEFF Research Database (Denmark)

    Dijkman, Teunis Johannes; Birkved, Morten; Hauschild, Michael Zwicky

    2011-01-01

    Credible quantification of chemical emissions in the inventory phase of Life Cycle Assessment (LCA) is crucial, since chemicals are the dominating cause of the human and ecotoxicity-related environmental impacts in Life Cycle Impact Assessment (LCIA). When applying LCA for assessment of agricultural products, off-target pesticide emissions need to be quantified as accurately as possible because of the considerable toxicity effects associated with chemicals designed to have a high impact on biological organisms such as insects or weeds. PestLCI was developed to estimate the fractions

  16. FEH Local: improving flood estimates using historical data

    OpenAIRE

    Prosdocimi, Ilaria; Stewart, Lisa; Faulkner, Duncan; Mitchell, Chrissy

    2016-01-01

    The traditional approach to design flood estimation (for example, to derive the 100-year flood) is to apply a statistical model to time series of peak river flow measured by gauging stations. Such records are typically not very long; for example, in the UK only about 10% of the stations have records that are more than 50 years in length. A long-explored way to augment the data available from a gauging station is to derive information about historical flood events and paleo-floods, which can be ...

  17. Are rehabilitation and/or care co-ordination interventions delivered in the community effective in reducing depression, facilitating participation and improving quality of life after stroke?

    Science.gov (United States)

    Graven, Christine; Brock, Kim; Hill, Keith; Joubert, Lynette

    2011-01-01

    To conduct a systematic review to explore the effectiveness of community-based rehabilitation interventions delivered by allied health professionals and/or nursing staff in reducing depression, facilitating participation and improving health-related quality of life (HRQoL) post-inpatient stroke rehabilitation. A search was conducted in the databases of MEDLINE, PEDro, CINAHL and the Cochrane Library. Publications were classified into categories based on the type of intervention. Best evidence synthesis and meta-analysis were utilised to determine the level of evidence. Fifty-four studies were included in the review, and divided into nine broad intervention categories. Meta-analysis demonstrated significant reduction in depression with exercise interventions (n = 137; effect estimate SMD: -2.03, 95%CI: -3.22, -0.85). Community-based interventions targeting participation and leisure domains showed moderate evidence for improvement in global participation measures and HRQoL. Comprehensive rehabilitation demonstrated limited evidence for depression and participation, and strong evidence for HRQoL. There is limited to moderate evidence supporting some rehabilitation interventions in affecting the outcomes of depression, participation and HRQoL post-stroke. Heterogeneity of the studies made evidence synthesis difficult. Further consideration needs to be given to the type and timing of outcome measures selected to represent the domains of participation and HRQoL.

  18. Improved ocean chlorophyll estimate from remote sensed data: The ...

    African Journals Online (AJOL)

    Gregg and Conkright (2001), who pioneered the use of the blending technique in an attempt to calibrate ocean chlorophyll, expressed the need for further work to be done in order to obtain improved results. One problem faced when using this technique with spatially sparse data is distortion of the resulting blended field ...

  19. Improvements in BTS estimation of ton-miles

    Science.gov (United States)

    2004-08-01

    Ton-miles (one ton of freight shipped one mile) is the primary physical measure of freight transportation output. This paper describes improved measurements of ton-miles for air, truck, rail, water, and pipeline modes. Each modal measure contains a d...

  20. Covariance specification and estimation to improve top-down Green House Gas emission estimates

    Science.gov (United States)

    Ghosh, S.; Lopez-Coto, I.; Prasad, K.; Whetstone, J. R.

    2015-12-01

    The National Institute of Standards and Technology (NIST) operates the North-East Corridor (NEC) project and the Indianapolis Flux Experiment (INFLUX) in order to develop measurement methods to quantify sources of Greenhouse Gas (GHG) emissions, as well as their uncertainties, in urban domains using a top-down inversion method. Top-down inversion updates prior knowledge using observations in a Bayesian way. One primary consideration in a Bayesian inversion framework is the covariance structure of (1) the emission prior residuals and (2) the observation residuals (i.e. the difference between observations and model-predicted observations). These covariance matrices are respectively referred to as the prior covariance matrix and the model-data mismatch covariance matrix. It is known that the choice of these covariances can have a large effect on estimates. The main objective of this work is to determine the impact of different covariance models on inversion estimates and their associated uncertainties in urban domains. We use a pseudo-data Bayesian inversion framework using footprints (i.e. sensitivities of tower measurements of GHGs to surface emissions) and emission priors (based on the Hestia project to quantify fossil-fuel emissions) to estimate posterior emissions using different covariance schemes. The posterior emission estimates and uncertainties are compared to the hypothetical truth. We find that, if we correctly specify spatial variability and spatio-temporal variability in the prior and model-data mismatch covariances respectively, then we can compute more accurate posterior estimates. We discuss a few covariance models to introduce space-time interacting mismatches, along with estimation of the involved parameters. We then compare several candidate prior spatial covariance models from the Matern covariance class and estimate their parameters with specified mismatches. We find that best-fitted prior covariances are not always best in recovering the truth. To achieve
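
    A minimal numerical sketch of the Gaussian Bayesian update this abstract describes, assuming a Matern-5/2 prior covariance, a diagonal model-data mismatch covariance, and a random footprint matrix standing in for real tower sensitivities; all dimensions and parameter values are illustrative:

        import numpy as np

        def matern52(dists, sigma2, ell):
            # Matern-5/2 covariance evaluated on a matrix of pairwise distances.
            r = np.sqrt(5.0) * dists / ell
            return sigma2 * (1.0 + r + r**2 / 3.0) * np.exp(-r)

        rng = np.random.default_rng(0)
        n_cells, n_obs = 100, 25

        x = np.linspace(0.0, 50.0, n_cells)              # grid cell coordinates (km)
        B = matern52(np.abs(x[:, None] - x[None, :]), sigma2=1.0, ell=10.0)  # prior covariance
        R = 0.5 * np.eye(n_obs)                          # model-data mismatch covariance
        H = rng.random((n_obs, n_cells)) / n_cells       # footprint (sensitivity) matrix

        x_true = rng.multivariate_normal(np.zeros(n_cells), B)       # hypothetical truth
        y = H @ x_true + rng.multivariate_normal(np.zeros(n_obs), R)
        x_prior = np.zeros(n_cells)

        # Gaussian Bayesian update: posterior mean and covariance.
        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
        x_post = x_prior + K @ (y - H @ x_prior)
        P_post = B - K @ H @ B

        print("prior RMSE:", np.sqrt(np.mean((x_prior - x_true) ** 2)))
        print("posterior RMSE:", np.sqrt(np.mean((x_post - x_true) ** 2)))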

  1. Improved estimation of the variance in Monte Carlo criticality calculations

    International Nuclear Information System (INIS)

    Hoogenboom, J. Eduard

    2008-01-01

    Results for the effective multiplication factor in Monte Carlo criticality calculations are often obtained from averages over a number of cycles or batches after convergence of the fission source distribution to the fundamental mode. The standard deviation of the effective multiplication factor is then also obtained from the k_eff results over these cycles. As the number of cycles will be rather small, the estimate of the variance or standard deviation in k_eff will not be very reliable, certainly not for the first few cycles after source convergence. In this paper the statistics for k_eff are based on the generation of new fission neutron weights during each history in a cycle. It is shown that this gives much more reliable results for the standard deviation even after a small number of cycles. Attention is also paid to the variance of the variance (VoV) and the standard deviation of the standard deviation. A derivation is given of how to obtain an unbiased estimate for the VoV, even for a small number of samples. (authors)
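
    The cycle statistics the paper improves upon can be illustrated with a plug-in estimate; the formula below is the textbook variance-of-the-sample-variance, not the paper's weight-based unbiased derivation, and the k_eff values are toy data:

        import numpy as np

        def keff_statistics(keff):
            # Mean, standard error, and a plug-in variance-of-variance for cycle k_eff values.
            k = np.asarray(keff, dtype=float)
            n = k.size
            mean = k.mean()
            s2 = k.var(ddof=1)                  # unbiased sample variance
            m4 = np.mean((k - mean) ** 4)       # sample fourth central moment
            # Plug-in form of Var(s^2); the paper derives a fully unbiased
            # small-sample estimator, which this only approximates.
            vov = (m4 - (n - 3.0) / (n - 1.0) * s2**2) / n
            return mean, np.sqrt(s2 / n), vov

        cycles = [1.0012, 0.9987, 1.0021, 0.9995, 1.0003, 1.0010]   # toy k_eff values
        mean, std_err, vov = keff_statistics(cycles)
        print(f"k_eff = {mean:.5f} +/- {std_err:.5f}, VoV ~ {vov:.3e}")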

  2. Improved estimation of the variance in Monte Carlo criticality calculations

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. Eduard [Delft University of Technology, Delft (Netherlands)

    2008-07-01

    Results for the effective multiplication factor in Monte Carlo criticality calculations are often obtained from averages over a number of cycles or batches after convergence of the fission source distribution to the fundamental mode. The standard deviation of the effective multiplication factor is then also obtained from the k_eff results over these cycles. As the number of cycles will be rather small, the estimate of the variance or standard deviation in k_eff will not be very reliable, certainly not for the first few cycles after source convergence. In this paper the statistics for k_eff are based on the generation of new fission neutron weights during each history in a cycle. It is shown that this gives much more reliable results for the standard deviation even after a small number of cycles. Attention is also paid to the variance of the variance (VoV) and the standard deviation of the standard deviation. A derivation is given of how to obtain an unbiased estimate for the VoV, even for a small number of samples. (authors)

  3. Participatory design facilitates Person Centred Nursing in service improvement with older people: a secondary directed content analysis.

    Science.gov (United States)

    Wolstenholme, Daniel; Ross, Helen; Cobb, Mark; Bowen, Simon

    2017-05-01

    To explore, using the example of a project working with older people in an outpatient setting in a large UK NHS teaching hospital, how the constructs of Person Centred Nursing are reflected in interviews with participants in a Co-design-led service improvement project. Person Centred Care and Person Centred Nursing are recognised terms in healthcare. Co-design (sometimes called participatory design) is an approach that seeks to involve all stakeholders in a creative process to deliver the best result, be this a product, technology or, in this case, a service. Co-design practice shares some of the underpinning philosophy of Person Centred Nursing and potentially offers methods to aid Person Centred Nursing implementation. The research design was a qualitative secondary directed content analysis. Seven interview transcripts from nurses and older people who had participated in a Co-design-led improvement project in a large teaching hospital were transcribed and analysed. Two researchers analysed the transcripts for codes derived from McCormack & McCance's Person Centred Nursing Framework. The four most expressed codes were as follows: from the prerequisites, knowing self; from care processes, engagement, working with patients' beliefs and values, and shared decision-making; and from expected outcomes, involvement in care. This study describes the Co-design theory and practice that the participants responded to in the interviews and looks at how the Co-design activity facilitated elements of the Person Centred Nursing framework. This study adds to the rich literature on using emancipatory and transformational approaches to Person Centred Nursing development, and is the first study explicitly exploring the potential contribution of Co-design to this area. Methods from Co-design allow older people to contribute as equals in a practice development project; Co-design methods can facilitate nursing staff to engage meaningfully with older participants and develop a shared

  4. Estimating effects of improved drinking water and sanitation on cholera.

    Science.gov (United States)

    Leidner, Andrew J; Adusumilli, Naveen C

    2013-12-01

    Demand for adequate provision of drinking-water and sanitation facilities to promote public health and economic growth is increasing in the rapidly urbanizing countries of the developing world. With a panel of data on Asia and Africa from 1990 to 2008, associations are estimated between the occurrence of cholera outbreaks, the case rates in given outbreaks, the mortality rates associated with cholera, and two disease control mechanisms: drinking-water and sanitation services. A statistically significant and negative effect is found between drinking-water services and both cholera case rates and cholera-related mortality rates. A relatively weak statistical relationship is found between the occurrence of cholera outbreaks and sanitation services.

  5. Brain-computer interface for alertness estimation and improving

    Science.gov (United States)

    Hramov, Alexander; Maksimenko, Vladimir; Hramova, Marina

    2018-02-01

    Using wavelet analysis of signals of electrical brain activity (EEG), we study the processes of neural activity associated with the perception of visual stimuli. We demonstrate that the brain can process visual stimuli in two scenarios: (i) perception is characterized by destruction of the alpha-waves and an increase in high-frequency (beta) activity, or (ii) the beta-rhythm is not well pronounced, while the alpha-wave energy remains unchanged. Dedicated experiments show that the motivation factor initiates the first scenario, which is explained by increased alertness. Based on the obtained results we build a brain-computer interface and demonstrate how the degree of alertness can be estimated and controlled in a real experiment.
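
    A sketch of the band-power computation underlying such an alertness index, using a continuous Morlet wavelet transform from PyWavelets (an assumed dependency); the toy signal, band limits, and index definition are illustrative, not the authors' exact pipeline:

        import numpy as np
        import pywt  # PyWavelets, an assumed dependency

        fs = 250.0                                   # sampling rate (Hz), illustrative
        t = np.arange(0, 4.0, 1.0 / fs)
        rng = np.random.default_rng(1)
        # Toy EEG: an alpha epoch (~10 Hz) followed by a beta epoch (~22 Hz).
        eeg = np.where(t < 2.0, np.sin(2 * np.pi * 10 * t),
                       0.5 * np.sin(2 * np.pi * 22 * t))
        eeg = eeg + 0.2 * rng.standard_normal(t.size)

        # Continuous Morlet wavelet transform over 5-30 Hz.
        freqs = np.linspace(5.0, 30.0, 60)
        scales = pywt.central_frequency('morl') * fs / freqs
        coeffs, freqs_out = pywt.cwt(eeg, scales, 'morl', sampling_period=1.0 / fs)
        power = np.abs(coeffs) ** 2

        alpha = power[(freqs_out >= 8) & (freqs_out <= 12)].mean(axis=0)
        beta = power[(freqs_out >= 15) & (freqs_out <= 30)].mean(axis=0)

        # Scenario (i): alpha destruction plus beta increase reads as higher alertness.
        index = beta / (alpha + beta + 1e-12)
        print("alertness index, first half :", index[t < 2.0].mean())
        print("alertness index, second half:", index[t >= 2.0].mean())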

  6. A bias correction for covariance estimators to improve inference with generalized estimating equations that use an unstructured correlation matrix.

    Science.gov (United States)

    Westgate, Philip M

    2013-07-20

    Generalized estimating equations (GEEs) are routinely used for the marginal analysis of correlated data. The efficiency of GEE depends on how closely the working covariance structure resembles the true structure, and therefore accurate modeling of the working correlation of the data is important. A popular approach is the use of an unstructured working correlation matrix, as it is not as restrictive as simpler structures such as exchangeable and AR-1 and thus can theoretically improve efficiency. However, because of the potential for having to estimate a large number of correlation parameters, variances of regression parameter estimates can be larger than theoretically expected when utilizing the unstructured working correlation matrix. Therefore, standard error estimates can be negatively biased. To account for this additional finite-sample variability, we derive a bias correction that can be applied to typical estimators of the covariance matrix of parameter estimates. Via simulation and in application to a longitudinal study, we show that our proposed correction improves standard error estimation and statistical inference. Copyright © 2012 John Wiley & Sons, Ltd.
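
    For orientation, a sketch of fitting a GEE with an unstructured working correlation on simulated clustered data, assuming a statsmodels version that provides cov_struct.Unstructured and the 'bias_reduced' covariance; that option is a Mancl-DeRouen-type small-sample correction, related to but not the same as the correction derived in this paper:

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        n_clusters, n_times = 30, 4
        cluster = np.repeat(np.arange(n_clusters), n_times)
        x = rng.standard_normal(n_clusters * n_times)
        u = np.repeat(rng.standard_normal(n_clusters), n_times)   # shared cluster effect
        y = 1.0 + 0.5 * x + u + rng.standard_normal(n_clusters * n_times)
        df = pd.DataFrame({"y": y, "x": x, "cluster": cluster,
                           "time": np.tile(np.arange(n_times), n_clusters)})

        # GEE with an unstructured working correlation matrix.
        model = sm.GEE.from_formula("y ~ x", groups="cluster", data=df,
                                    time=df["time"].to_numpy(),
                                    cov_struct=sm.cov_struct.Unstructured())
        res = model.fit()
        print(res.params)
        # Small-sample corrected standard errors, addressing the same
        # finite-sample bias this paper targets.
        print(res.standard_errors(cov_type="bias_reduced"))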

  7. Digital mobile technology facilitates HIPAA-sensitive perioperative messaging, improves physician-patient communication, and streamlines patient care.

    Science.gov (United States)

    Gordon, Chad R; Rezzadeh, Kameron S; Li, Andrew; Vardanian, Andrew; Zelken, Jonathan; Shores, Jamie T; Sacks, Justin M; Segovia, Andres L; Jarrahy, Reza

    2015-01-01

    %). Satisfaction with the service was high: 94.2% of users "enjoyed this software" and 94.2% of family/friends "felt more connected to their loved ones during surgery"; 92.5% would "recommend their loved ones sign up for this service". Ninety percent of patients who completed the survey reported "an improved hospital experience". Digital communications platforms can facilitate the immediate transfer of HIPAA-compliant data to patients and their designees. Such systems can greatly improve the level of communication between physicians, patients, and patients' families and caregivers. All types of users, including healthcare professionals, patients, and their loved ones, recorded high levels of satisfaction. Based on these observations, we conclude that mobile digital communications platforms represent a way to harness the power of social media to enhance patient care.

  8. Improving outcomes in cancer diagnosis, prevention and control: barriers, facilitators and the need for health literacy in Ibadan Nigeria.

    Science.gov (United States)

    Adedimeji, Adebola A; Lounsbury, David; Popoola, Oluwafemi; Asuzu, Chioma; Lawal, Akinmayowa; Oladoyin, V; Crifase, Cassandra; Agalliu, Ilir; Shankar, Viswanathan; Adebiyi, Akindele

    2017-10-01

    Cancers constitute a significant public health problem in Nigeria. Breast, cervix and prostate cancers are leading causes of cancer-related deaths. Changing diets, lifestyles, HIV/AIDS and macro-structural factors contribute to cancer morbidity and mortality. Poor health information linking cancer risk to individual behaviors, environmental pollutants and structural barriers undermines prevention/control efforts. Studies suggest increasing health literacy and empowering individuals to take preventive action will improve outcomes and mitigate impact on a weak health system. We obtained qualitative data from 80 men, women, and young adults in 11 focus groups to assess beliefs, risk-perceptions, preventive behaviors and perceptions of barriers and facilitators to cancer control in Ibadan, Nigeria, and conducted thematic analysis. Participants demonstrated awareness of cancers and mentioned several risk factors related to individual behaviors and the environment. Nonetheless, myths and misconceptions as well as micro-, meso- and macro-level barriers impede prevention and control efforts. Developing and implementing comprehensive, context-relevant health literacy interventions in community settings is urgently needed. Copyright © 2016 John Wiley & Sons, Ltd.

  9. The verbal facilitation effect: re-reading person descriptions as a system variable to improve identification performance.

    Science.gov (United States)

    Sporer, Siegfried L; Kaminski, Kristina S; Davids, Maike C; McQuiston, Dawn

    2016-11-01

    When witnesses report a crime, police usually ask for a description of the perpetrator. Several studies suggested that verbalising faces leads to a detriment in identification performance (verbal overshadowing effect [VOE]) but the effect has been difficult to replicate. Here, we sought to reverse the VOE by inducing context reinstatement as a system variable through re-reading one's own description before an identification task. Participants (N = 208) watched a video film and were then dismissed (control group), only described the perpetrator, or described and later re-read their own descriptions before identification in either target-present or target-absent lineups after a 2-day or a 5-week delay. Identification accuracy was significantly higher after re-reading (85.0%) than in the no description control group (62.5%) irrespective of target presence. Data were internally replicated using a second target and corroborated by several small meta-analyses. Identification accuracy was related to description quality. Moreover, there was a tendency towards a verbal facilitation effect (VFE) rather than a VOE. Receiver operating characteristic (ROC) curve analyses confirm that our findings are not due to a shift in response bias but truly reflect improvement of recognition performance. Differences in the ecological validity of study paradigms are discussed.

  10. Improved quantum backtracking algorithms using effective resistance estimates

    Science.gov (United States)

    Jarret, Michael; Wan, Kianna

    2018-02-01

    We investigate quantum backtracking algorithms of the type introduced by Montanaro (Montanaro, arXiv:1509.02374). These algorithms explore trees of unknown structure and in certain settings exponentially outperform their classical counterparts. Some of the previous work focused on obtaining a quantum advantage for trees in which a unique marked vertex is promised to exist. We remove this restriction by recharacterizing the problem in terms of the effective resistance of the search space. In this paper, we present a generalization of one of Montanaro's algorithms to trees containing k marked vertices, where k is not necessarily known a priori. Our approach involves using amplitude estimation to determine a near-optimal weighting of a diffusion operator, which can then be applied to prepare a superposition state with support only on marked vertices and ancestors thereof. By repeatedly sampling this state and updating the input vertex, a marked vertex is reached in a logarithmic number of steps. The algorithm thereby achieves the conjectured bound of Õ(√(T R_max)) for finding a single marked vertex and Õ(k √(T R_max)) for finding all k marked vertices, where T is an upper bound on the tree size and R_max is the maximum effective resistance encountered by the algorithm. This constitutes a speedup over Montanaro's original procedure both in the case of finding one and in the case of finding multiple marked vertices in an arbitrary tree.

  11. IMPROVING THE METHODS OF ESTIMATION OF THE UNIT TRAIN EFFECTIVENESS

    Directory of Open Access Journals (Sweden)

    Dmytro KOZACHENKO

    2016-09-01

    Full Text Available The article presents the results of studies of freight transportation by unit trains. The article is aimed at developing methods for evaluating the efficiency of unit train dispatch on the basis of full-scale experiments. The duration of car turnover is a random variable when dispatching single cars and car groups, as well as when dispatching them as part of a unit train. The existing methodologies for evaluating the efficiency of unit train make-up are based on calculation methodologies whose results can contain significant errors. The work presents a methodology that makes it possible to evaluate the efficiency of unit train shipments based on processing the results of experimental runs using the methods of mathematical statistics. This approach provides probabilistic estimates of rolling stock use efficiency for different approaches to the organization of car traffic volumes, and establishes the effect for each of the participants in the transportation process.

  12. An improved iron loss estimation for permanent magnet brushless machines

    CERN Document Server

    Fang, D

    1999-01-01

    This paper presents an improved approach for predicting iron losses in permanent magnet brushless machines. The new approach is based on the fundamental concept that eddy current losses are proportional to the square of the time rate of change of flux density. Expressions are derived for predicting hysteresis and eddy current losses in the stator teeth and yoke. The so-called anomalous or excess losses, caused by the induced eddy current concentration around moving magnetic domain walls and neglected in the conventional core loss calculation, are also included in the proposed approach. In addition, the model is also capable of accounting for the stator skewing, if present. The core losses obtained from the proposed approach are compared with those measured on an existing PM motor at several operating speeds, showing very good agreement. (14 refs).
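
    The loss separation described here can be sketched in the spirit of the classical Bertotti decomposition, with the eddy term proportional to the mean squared time rate of change of flux density; the coefficients and flux waveform below are illustrative, not the paper's fitted values:

        import numpy as np

        def core_loss(b_wave, f, k_h=120.0, alpha=2.0, k_e=8e-4, k_a=1.5e-3):
            # Bertotti-style separation over one electrical period.
            # b_wave: flux density samples (T); f: electrical frequency (Hz).
            dt = 1.0 / (f * b_wave.size)
            dbdt = np.gradient(b_wave, dt)
            p_hyst = k_h * f * np.max(np.abs(b_wave)) ** alpha       # hysteresis
            p_eddy = k_e * np.mean(dbdt ** 2)                        # classical eddy current
            p_excess = k_a * np.mean(np.abs(dbdt) ** 1.5)            # anomalous/excess
            return p_hyst, p_eddy, p_excess

        # Tooth flux density with a slotting harmonic, as seen in a PM brushless stator.
        theta = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
        b = 1.5 * np.sin(theta) + 0.2 * np.sin(11 * theta)
        print([round(p, 1) for p in core_loss(b, f=400.0)])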

  13. Improved automatic optic nerve radius estimation from high resolution MRI

    Science.gov (United States)

    Harrigan, Robert L.; Smith, Alex K.; Mawn, Louise A.; Smith, Seth A.; Landman, Bennett A.

    2017-02-01

    The optic nerve (ON) is a vital structure in the human visual system and transports all visual information from the retina to the cortex for higher order processing. Due to the lack of redundancy in the visual pathway, measures of ON damage have been shown to correlate well with visual deficits. These measures are typically taken at an arbitrary anatomically defined point along the nerve and do not characterize changes along the length of the ON. We propose a fully automated, three-dimensionally consistent technique building upon a previous independent slice-wise technique to estimate the radius of the ON and surrounding cerebrospinal fluid (CSF) on high-resolution heavily T2-weighted isotropic MRI. We show that by constraining results to be three-dimensionally consistent this technique produces more anatomically viable results. We compare this technique with the previously published slice-wise technique using a short-term reproducibility data set, 10 subjects, follow-up <1 month, and show that the new method is more reproducible in the center of the ON. The center of the ON contains the most accurate imaging because it lacks confounders such as motion and frontal lobe interference. Long-term reproducibility, 5 subjects, follow-up of approximately 11 months, is also investigated with this new technique and shown to be similar to short-term reproducibility, indicating that the ON does not change substantially within 11 months. The increased accuracy of this new technique provides increased power when searching for anatomical changes in ON size amongst patient populations.

  14. Improving Estimated Optical Constants With MSTM and DDSCAT Modeling

    Science.gov (United States)

    Pitman, K. M.; Wolff, M. J.

    2015-12-01

    We present numerical experiments to determine quantitatively the effects of mineral particle clustering on Mars spacecraft spectral signatures and to improve upon the values of refractive indices (optical constants n, k) derived from Mars dust laboratory analog spectra such as those from the RELAB and MRO CRISM libraries. Whereas spectral properties for Mars analog minerals and actual Mars soil are dominated by aggregates of particles smaller than the size of martian atmospheric dust, the analytic radiative transfer (RT) solutions used to interpret planetary surfaces assume that individual, well-separated particles dominate the spectral signature. Both in RT models and in the refractive index derivation methods that include analytic RT approximations, spheres are also over-used to represent nonspherical particles. Part of the motivation is that the integrated effect over randomly oriented particles on quantities such as single scattering albedo and phase function is relatively smaller than for single particles. However, we have seen in previous numerical experiments that when varying the shape and size of individual grains within a cluster, the phase function changes in both magnitude and slope; thus the "relatively less" effect is more significant than one might think. Here we examine the wavelength dependence of the forward scattering parameter with multisphere T-matrix (MSTM) and discrete dipole approximation (DDSCAT) codes that compute light scattering by layers of particles on planetary surfaces to see how albedo is affected, and integrate our model results into refractive index calculations to remove uncertainties in approximations and parameters that can lower the accuracy of optical constants. By correcting the single scattering albedo and phase function terms in the refractive index determinations, our data will help to improve the understanding of Mars in identifying, mapping the distributions, and quantifying abundances for these minerals and will address long

  15. Improvement of Accuracy for Background Noise Estimation Method Based on TPE-AE

    Science.gov (United States)

    Itai, Akitoshi; Yasukawa, Hiroshi

    This paper proposes a method of background noise estimation based on the tensor product expansion with a median and a Monte Carlo simulation. We have shown previously that a tensor product expansion with the absolute error method is effective for estimating background noise; however, the background noise might not be estimated properly using the conventional method. In this paper, it is shown that the estimation accuracy can be improved by using the proposed methods.

  16. The international food unit: a new measurement aid that can improve portion size estimation.

    Science.gov (United States)

    Bucher, T; Weltert, M; Rollo, M E; Smith, S P; Jia, W; Collins, C E; Sun, M

    2017-09-12

    Portion size education tools, aids and interventions can be effective in helping prevent weight gain. However, consumers have difficulties in estimating food portion sizes and are confused by inconsistencies in the measurement units and terminologies currently used. Visual cues are an important mediator of portion size estimation, but standardized measurement units are required. In the current study, we present a new food volume estimation tool and test the ability of young adults to accurately quantify food volumes. The International Food Unit™ (IFU™) is a 4×4×4 cm cube (64 cm³), subdivided into eight 2 cm sub-cubes for estimating smaller food volumes. Compared with currently used measures such as cups and spoons, the IFU™ standardizes estimation of food volumes with metric measures. The IFU™ design is based on binary dimensional increments, and the cubic shape facilitates portion size education and training, memory and recall, and computer processing, which is binary in nature. The performance of the IFU™ was tested in a randomized between-subject experiment (n = 128 adults, 66 men) that estimated volumes of 17 foods using four methods: the IFU™ cube, a deformable modelling clay cube, a household measuring cup, or no aid (weight estimation). Estimation errors were compared between groups using Kruskal-Wallis tests and post-hoc comparisons. Estimation errors differed significantly between groups (H(3) = 28.48). Future studies should investigate whether the IFU™ can facilitate portion size training and whether portion size education using the IFU™ is effective and sustainable without the aid. A 3-dimensional IFU™ could serve as a reference object for estimating food volume.
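
    The binary increments make the arithmetic trivial; a hypothetical helper for converting an IFU-based portion judgment to metric volume:

        # Hypothetical helper: convert an IFU-based portion judgment to metric volume.
        CUBE_CM3 = 4 ** 3        # one IFU = 4x4x4 cm = 64 cm^3
        SUB_CUBE_CM3 = 2 ** 3    # one sub-cube = 2x2x2 cm = 8 cm^3 (eight per IFU)

        def ifu_volume_cm3(cubes: int, sub_cubes: int) -> int:
            return cubes * CUBE_CM3 + sub_cubes * SUB_CUBE_CM3

        # A portion judged as 2 IFUs plus 3 sub-cubes:
        print(ifu_volume_cm3(2, 3), "cm^3")   # 152 cm^3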

  17. Improving Focal Depth Estimates: Studies of Depth Phase Detection at Regional Distances

    Science.gov (United States)

    Stroujkova, A.; Reiter, D. T.; Shumway, R. H.

    2006-12-01

    The accurate estimation of the depth of small, regionally recorded events continues to be an important and difficult explosion monitoring research problem. Depth phases (free surface reflections) are the primary tool that seismologists use to constrain the depth of a seismic event. When depth phases from an event are detected, an accurate source depth is easily found by using the delay times of the depth phases relative to the P wave and a velocity profile near the source. Cepstral techniques, including cepstral F-statistics, represent a class of methods designed for the depth-phase detection and identification; however, they offer only a moderate level of success at epicentral distances less than 15°. This is due to complexities in the Pn coda, which can lead to numerous false detections in addition to the true phase detection. Therefore, cepstral methods cannot be used independently to reliably identify depth phases. Other evidence, such as apparent velocities, amplitudes and frequency content, must be used to confirm whether the phase is truly a depth phase. In this study we used a variety of array methods to estimate apparent phase velocities and arrival azimuths, including beam-forming, semblance analysis, MUltiple SIgnal Classification (MUSIC) (e.g., Schmidt, 1979), and cross-correlation (e.g., Cansi, 1995; Tibuleac and Herrin, 1997). To facilitate the processing and comparison of results, we developed a MATLAB-based processing tool, which allows application of all of these techniques (i.e., augmented cepstral processing) in a single environment. The main objective of this research was to combine the results of three focal-depth estimation techniques and their associated standard errors into a statistically valid unified depth estimate. The three techniques include: 1. Direct focal depth estimate from the depth-phase arrival times picked via augmented cepstral processing. 2. Hypocenter location from direct and surface-reflected arrivals observed on sparse
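
    The abstract does not state the combination rule, but inverse-variance weighting is the standard way to fuse independent estimates with known standard errors; a sketch with illustrative depth values:

        import numpy as np

        def combine_depths(depths, std_errs):
            # Inverse-variance weighted fusion of independent depth estimates.
            d = np.asarray(depths, dtype=float)
            w = 1.0 / np.asarray(std_errs, dtype=float) ** 2
            unified = np.sum(w * d) / np.sum(w)
            sigma = np.sqrt(1.0 / np.sum(w))
            return unified, sigma

        # Illustrative estimates (km) from the three techniques.
        depth, sigma = combine_depths([12.0, 9.5, 11.0], [2.0, 3.0, 1.5])
        print(f"unified depth: {depth:.1f} +/- {sigma:.1f} km")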

  18. Joint estimation over multiple individuals improves behavioural state inference from animal movement data.

    Science.gov (United States)

    Jonsen, Ian

    2016-02-08

    State-space models provide a powerful way to scale up inference of movement behaviours from individuals to populations when the inference is made across multiple individuals. Here, I show how a joint estimation approach that assumes individuals share identical movement parameters can lead to improved inference of behavioural states associated with different movement processes. I use simulated movement paths with known behavioural states to compare estimation error between nonhierarchical and joint estimation formulations of an otherwise identical state-space model. Behavioural state estimation error was strongly affected by the degree of similarity between movement patterns characterising the behavioural states, with less error when movements were strongly dissimilar between states. The joint estimation model improved behavioural state estimation relative to the nonhierarchical model for simulated data with heavy-tailed Argos location errors. When applied to Argos telemetry datasets from 10 Weddell seals, the nonhierarchical model estimated highly uncertain behavioural state switching probabilities for most individuals whereas the joint estimation model yielded substantially less uncertainty. The joint estimation model better resolved the behavioural state sequences across all seals. Hierarchical or joint estimation models should be the preferred choice for estimating behavioural states from animal movement data, especially when location data are error-prone.

  19. Applying Insights from Transaction Cost Economics (TCE) to Improve DoD Cost Estimation

    National Research Council Canada - National Science Library

    Angelis, Diana I; Dillard, John; Franck, Raymond; Melese, Francois

    2007-01-01

    The purpose of this report is to explore the possibility of improving DoD cost estimation methods by including explanatory variables that capture the coordination and motivation problems associated with the program...

  20. Option Price Estimates for Water Quality Improvements: A Contingent Valuation Study for the Monongahela River (1985)

    Science.gov (United States)

    This paper presents the findings from a contingent valuation survey designed to estimate the option price bids for the improved recreation resulting from enhanced water quality in the Pennsylvania portion of the Monongahela River.

  1. Nuclear Weapons Sustainment: Improvements Made to Budget Estimates Report, but Opportunities Remain to Further Enhance Transparency

    Science.gov (United States)

    2015-12-01

    Report to Congressional Committees, December 2015 (GAO-16-23), United States Government Accountability Office. NUCLEAR WEAPONS SUSTAINMENT: Improvements Made to Budget Estimates Report, but Opportunities Remain to Further Enhance Transparency. Why GAO did this study: DOD and DOE are ... modernization plans and (2) complete, transparent information on the methodologies used to develop those estimates. GAO analyzed the departments

  2. Hexicon 2: automated processing of hydrogen-deuterium exchange mass spectrometry data with improved deuteration distribution estimation.

    Science.gov (United States)

    Lindner, Robert; Lou, Xinghua; Reinstein, Jochen; Shoeman, Robert L; Hamprecht, Fred A; Winkler, Andreas

    2014-06-01

    Hydrogen-deuterium exchange (HDX) experiments analyzed by mass spectrometry (MS) provide information about the dynamics and the solvent accessibility of protein backbone amide hydrogen atoms. Continuous improvement of MS instrumentation has contributed to the increasing popularity of this method; however, comprehensive automated data analysis is only beginning to mature. We present Hexicon 2, an automated pipeline for data analysis and visualization based on the previously published program Hexicon (Lou et al. 2010). Hexicon 2 employs the sensitive NITPICK peak detection algorithm of its predecessor in a divide-and-conquer strategy and adds new features, such as chromatogram alignment and improved peptide sequence assignment. The unique feature of deuteration distribution estimation was retained in Hexicon 2 and improved using an iterative deconvolution algorithm that is robust even to noisy data. In addition, Hexicon 2 provides a data browser that facilitates quality control and provides convenient access to common data visualization tasks. Analysis of a benchmark dataset demonstrates superior performance of Hexicon 2 compared with its predecessor in terms of deuteration centroid recovery and deuteration distribution estimation. Hexicon 2 greatly reduces data analysis time compared with manual analysis, whereas the increased number of peptides provides redundant coverage of the entire protein sequence. Hexicon 2 is a standalone application available free of charge under http://hx2.mpimf-heidelberg.mpg.de.

  3. Hexicon 2: Automated Processing of Hydrogen-Deuterium Exchange Mass Spectrometry Data with Improved Deuteration Distribution Estimation

    Science.gov (United States)

    Lindner, Robert; Lou, Xinghua; Reinstein, Jochen; Shoeman, Robert L.; Hamprecht, Fred A.; Winkler, Andreas

    2014-06-01

    Hydrogen-deuterium exchange (HDX) experiments analyzed by mass spectrometry (MS) provide information about the dynamics and the solvent accessibility of protein backbone amide hydrogen atoms. Continuous improvement of MS instrumentation has contributed to the increasing popularity of this method; however, comprehensive automated data analysis is only beginning to mature. We present Hexicon 2, an automated pipeline for data analysis and visualization based on the previously published program Hexicon (Lou et al. 2010). Hexicon 2 employs the sensitive NITPICK peak detection algorithm of its predecessor in a divide-and-conquer strategy and adds new features, such as chromatogram alignment and improved peptide sequence assignment. The unique feature of deuteration distribution estimation was retained in Hexicon 2 and improved using an iterative deconvolution algorithm that is robust even to noisy data. In addition, Hexicon 2 provides a data browser that facilitates quality control and provides convenient access to common data visualization tasks. Analysis of a benchmark dataset demonstrates superior performance of Hexicon 2 compared with its predecessor in terms of deuteration centroid recovery and deuteration distribution estimation. Hexicon 2 greatly reduces data analysis time compared with manual analysis, whereas the increased number of peptides provides redundant coverage of the entire protein sequence. Hexicon 2 is a standalone application available free of charge under http://hx2.mpimf-heidelberg.mpg.de.

  4. Work System Assessment to Facilitate the Dissemination of a Quality Improvement Program for Optimizing Blood Culture Use: A Case Study Using a Human Factors Engineering Approach.

    Science.gov (United States)

    Xie, Anping; Woods-Hill, Charlotte Z; King, Anne F; Enos-Graves, Heather; Ascenzi, Judy; Gurses, Ayse P; Klaus, Sybil A; Fackler, James C; Milstone, Aaron M

    2017-11-20

    Work system assessments can facilitate successful implementation of quality improvement programs. Using a human factors engineering approach, we conducted a work system assessment to facilitate the dissemination of a quality improvement program for optimizing blood culture use in pediatric intensive care units at 2 hospitals. Semistructured face-to-face interviews were conducted with clinicians from Johns Hopkins All Children's Hospital and University of Virginia Medical Center. Interview data were analyzed using qualitative content analysis. Blood culture-ordering practices are influenced by various work system factors, including people, tasks, tools and technologies, the physical environment, organizational conditions, and the external environment. A clinical decision-support tool could facilitate implementation by (1) standardizing blood culture-ordering practices, (2) ensuring that prescribing clinicians review the patient's condition before ordering a blood culture, (3) facilitating critical thinking, and (4) empowering nurses to communicate with physicians and advocate for adherence to blood culture-ordering guidelines. The success of interventions for optimizing blood culture use relies heavily on the local context. A work system analysis using a human factors engineering approach can identify key areas to be addressed for the successful dissemination of quality improvement interventions. © The Author 2017. Published by Oxford University Press on behalf of The Journal of the Pediatric Infectious Diseases Society. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  5. Initial position estimation method for permanent magnet synchronous motor based on improved pulse voltage injection

    DEFF Research Database (Denmark)

    Wang, Z.; Lu, K.; Ye, Y.

    2011-01-01

    According to the saliency of the permanent magnet synchronous motor (PMSM), information about the rotor position is implied in the behaviour of the stator inductances due to the magnetic saturation effect. This research focused on initial rotor position estimation of the PMSM by injecting modulated pulse voltage vectors. The relationship between the inductance variations and voltage vector positions was studied, and the effect of inductance variation on estimation accuracy was studied as well. An improved five-pulse injection method was proposed to improve the estimation accuracy by choosing optimized voltage vectors ...

  6. Facilitating innovation : an action-oriented approach and participatory methodology to improve innovative social practice in agriculture

    OpenAIRE

    Engel, P.G.H.

    1995-01-01

    This study focuses upon the social organization of innovation. It makes use of insights from knowledge and information systems research, development sociology, management science and applied philosophy and seeks answers to the following questions: What do social actors, individuals and/or organizations, actually do to innovate their practices? How do they organize themselves? Can this be managed or facilitated, and if so, how? The research is exploratory rather than concl...

  7. State of charge estimation of lithium-ion batteries based on an improved parameter identification method

    International Nuclear Information System (INIS)

    Xia, Bizhong; Chen, Chaoren; Tian, Yong; Wang, Mingwang; Sun, Wei; Xu, Zhihui

    2015-01-01

    The SOC (state of charge) is the most important index in battery management systems. However, it cannot be measured directly with sensors and must be estimated with mathematical techniques. An accurate battery model is crucial for exact SOC estimation. In order to improve the model accuracy, this paper presents an improved parameter identification method. Firstly, the concept of polarization depth is proposed based on an analysis of the polarization characteristics of lithium-ion batteries. Then, the nonlinear least squares technique is applied to determine the model parameters from data collected in pulsed discharge experiments. The results show that the proposed method can reduce the model error compared with the conventional approach. Furthermore, a nonlinear observer presented in previous work is utilized to verify the validity of the proposed parameter identification method in SOC estimation. Finally, experiments with different levels of discharge current are carried out to investigate the influence of polarization depth on SOC estimation. Experimental results show that the proposed method can improve the SOC estimation accuracy compared with the conventional approach, especially under conditions of large discharge current. - Highlights: • The polarization characteristics of lithium-ion batteries are analyzed. • The concept of polarization depth is proposed to improve model accuracy. • A nonlinear least squares technique is applied to determine the model parameters. • A nonlinear observer is used as the SOC estimation algorithm. • The validity of the proposed method is verified by experimental results.
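
    A sketch of the nonlinear least-squares step on a first-order Thevenin (RC) battery model, fitting open-circuit voltage and resistances to a synthetic discharge pulse; the model order, current level, and parameter values are illustrative, not the paper's exact formulation:

        import numpy as np
        from scipy.optimize import curve_fit

        # First-order Thevenin response during a constant-current discharge pulse:
        # v(t) = ocv - i*r0 - i*r1*(1 - exp(-t / (r1*c1)))
        def pulse_voltage(t, ocv, r0, r1, c1, i=10.0):
            return ocv - i * r0 - i * r1 * (1.0 - np.exp(-t / (r1 * c1)))

        t = np.linspace(0.0, 60.0, 200)                                  # seconds
        rng = np.random.default_rng(3)
        v_meas = pulse_voltage(t, 3.7, 0.010, 0.015, 2000.0) \
                 + 0.001 * rng.standard_normal(t.size)                   # synthetic data

        popt, _ = curve_fit(pulse_voltage, t, v_meas, p0=[3.6, 0.02, 0.02, 1000.0])
        print(dict(zip(["ocv", "r0", "r1", "c1"], np.round(popt, 4))))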

  8. Reporting systems in gastrointestinal endoscopy: Requirements and standards facilitating quality improvement: European Society of Gastrointestinal Endoscopy position statement

    NARCIS (Netherlands)

    Bretthauer, Michael; Aabakken, Lars; Dekker, Evelien; Kaminski, Michal F.; Rösch, Thomas; Hultcrantz, Rolf; Suchanek, Stepan; Jover, Rodrigo; Kuipers, Ernst J.; Bisschops, Raf; Spada, Cristiano; Valori, Roland; Domagk, Dirk; Rees, Colin; Rutter, Matthew D.

    2016-01-01

    To develop standards for high quality of gastrointestinal endoscopy, the European Society of Gastrointestinal Endoscopy (ESGE) has established the ESGE Quality Improvement Committee. A prerequisite for quality assurance and improvement for all gastrointestinal endoscopy procedures is

  9. Improving estimates of the prevalence of Female Genital Mutilation/Cutting among migrants in Western countries

    Directory of Open Access Journals (Sweden)

    Livia Elisa Ortensi

    2015-02-01

    Full Text Available Background: Female Genital Mutilation/Cutting (FGM/C) is an emerging topic in immigrant countries as a consequence of the increasing proportion of African women in overseas communities. Objective: While the prevalence of FGM/C is routinely measured in practicing countries, the prevalence of the phenomenon in Western countries is substantially unknown, as no standardized methods exist yet for immigrant countries. The aim of this paper is to present an improved method for indirect estimation of the prevalence of FGM/C among first-generation migrants, based on a migrant selection hypothesis. A criterion to assess the reliability of indirect estimates is also provided. Methods: The method is based on data from Demographic and Health Surveys (DHS) and Multiple Indicator Cluster Surveys (MICS). The migrant selection hypothesis is used to correct national prevalence estimates and obtain an improved estimation of prevalence among overseas communities. Results: The application of the selection hypothesis modifies national estimates, usually predicting a lower occurrence of FGM/C among immigrants than in their respective practicing countries. A comparison of direct and indirect estimations confirms that the method correctly predicts the direction of the variation in the expected prevalence and satisfactorily approximates direct estimates. Conclusions: Given its wide applicability, this method would be a useful instrument for estimating FGM/C occurrence among first-generation immigrants and would provide corresponding support for policies in countries where information from ad hoc surveys is unavailable.

  10. Improving PERSIANN-CCS rain estimation using probabilistic approach and multi-sensors information

    Science.gov (United States)

    Karbalaee, N.; Hsu, K. L.; Sorooshian, S.; Kirstetter, P.; Hong, Y.

    2016-12-01

    This presentation discusses recently implemented approaches to improve rainfall estimation from the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Network-Cloud Classification System (PERSIANN-CCS). PERSIANN-CCS is an infrared (IR) based algorithm being integrated in IMERG (Integrated Multi-Satellite Retrievals for the Global Precipitation Measurement mission, GPM) to create a precipitation product at 0.1×0.1 degree resolution over the chosen domain, 50N to 50S, every 30 minutes. Although PERSIANN-CCS has high spatial and temporal resolution, it overestimates or underestimates due to some limitations. PERSIANN-CCS estimates rainfall based on information extracted from IR channels at three different temperature threshold levels (220, 235, and 253 K). The algorithm relies only on infrared data, estimating rainfall indirectly from this channel, which causes it to miss rainfall from warm clouds and to produce false estimates for non-precipitating cold clouds. In this research the effectiveness of using other channels of the GOES satellites, such as visible and water vapor, has been investigated. By using multiple sensors, precipitation can be estimated based on information extracted from multiple channels. Also, instead of using an exponential function to estimate rainfall from cloud-top temperature, a probabilistic method has been used. Using probability distributions of precipitation rates instead of deterministic values has improved the rainfall estimation for different types of clouds.
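
    A toy version of the probabilistic retrieval idea: bin cloud-top temperatures, keep the conditional rain-rate distribution per bin, and report quantiles rather than a single deterministic value; the training data here are synthetic, not PERSIANN-CCS output:

        import numpy as np

        rng = np.random.default_rng(4)
        # Synthetic training pairs: IR cloud-top temperature (K) vs. observed rain rate.
        t_cloud = rng.uniform(190.0, 253.0, 5000)
        rain = np.where(t_cloud < 220.0,
                        rng.gamma(2.0, 3.0, t_cloud.size),   # cold tops: heavier rain
                        rng.gamma(1.2, 1.0, t_cloud.size))   # warmer tops: lighter rain

        bins = np.arange(190.0, 260.0, 5.0)
        idx = np.digitize(t_cloud, bins)
        # Conditional rain-rate distribution per temperature bin, kept as quantiles.
        quantiles = {int(b): np.percentile(rain[idx == b], [25, 50, 75])
                     for b in np.unique(idx)}

        def estimate(t_pixel):
            # Probabilistic retrieval: median and IQR instead of one deterministic value.
            q25, q50, q75 = quantiles[int(np.digitize(t_pixel, bins))]
            return q50, (q25, q75)

        print(estimate(205.0))   # cold cloud top
        print(estimate(240.0))   # warm cloud top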

  11. An improved fuzzy Kalman filter for state estimation of nonlinear systems

    International Nuclear Information System (INIS)

    Zhou, Z-J; Hu, C-H; Chen, L; Zhang, B-C

    2008-01-01

    The extended fuzzy Kalman filter (EFKF) has been developed recently and is used for state estimation of nonlinear systems with uncertainty. Based on an extension of the orthogonality principle and the extended fuzzy Kalman filter, an improved fuzzy Kalman filter (IFKF) is proposed in this paper, which is more widely applicable and handles state estimation of nonlinear systems better than the EFKF. A simulation study is provided to verify the efficiency of the proposed method.

  12. State Estimation of Permanent Magnet Synchronous Motor Using Improved Square Root UKF

    Directory of Open Access Journals (Sweden)

    Bo Xu

    2016-06-01

    Full Text Available This paper focuses on an improved square root unscented Kalman filter (SRUKF) and its application to rotor speed and position estimation of a permanent magnet synchronous motor (PMSM). The approach, which combines the SRUKF and a strong tracking filter, uses the minimal skew simplex transformation to reduce the number of sigma points, and utilizes square root filtering to reduce computational errors. A time-varying fading factor and softening factor are introduced to self-adjust the gain matrices and the state forecast covariance square root matrix, which realizes residual orthogonality and forces the SRUKF to track the real state rapidly. The theoretical analysis of the improved SRUKF and implementation details for PMSM state estimation are examined. The simulation results show that the improved SRUKF has higher nonlinear approximation accuracy, stronger numerical stability and computational efficiency, and is an effective and powerful tool for PMSM state estimation under conditions of step response or load disturbance.

  13. Gene expression during blow fly development: improving the precision of age estimates in forensic entomology.

    Science.gov (United States)

    Tarone, Aaron M; Foran, David R

    2011-01-01

    Forensic entomologists use size and developmental stage to estimate blow fly age, and from those, a postmortem interval. Since such estimates are generally accurate but often lack precision, particularly in the older developmental stages, alternative aging methods would be advantageous. Presented here is a means of incorporating developmentally regulated gene expression levels into traditional stage and size data, with a goal of more precisely estimating developmental age of immature Lucilia sericata. Generalized additive models of development showed improved statistical support compared to models that did not include gene expression data, resulting in an increase in estimate precision, especially for postfeeding third instars and pupae. The models were then used to make blind estimates of development for 86 immature L. sericata raised on rat carcasses. Overall, inclusion of gene expression data resulted in increased precision in aging blow flies. © 2010 American Academy of Forensic Sciences.

  14. A Parameter Estimation Method for Nonlinear Systems Based on Improved Boundary Chicken Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Shaolong Chen

    2016-01-01

    Full Text Available Parameter estimation is an important problem in nonlinear system modeling and control. By constructing an appropriate fitness function, parameter estimation of a system can be converted into a multidimensional parameter optimization problem. As a novel swarm intelligence algorithm, chicken swarm optimization (CSO) has attracted much attention owing to its good global convergence and robustness. In this paper, a method based on improved boundary chicken swarm optimization (IBCSO) is proposed for parameter estimation of nonlinear systems, demonstrated and tested on the Lorenz system and a coupled motor system. Furthermore, we have analyzed the influence of the time series on the estimation accuracy. Computer simulation results show that the method is feasible and performs well for parameter estimation of nonlinear systems.
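
    The fitness-function formulation can be sketched as follows, with SciPy's differential evolution standing in for the paper's IBCSO optimizer (CSO is not available in standard libraries); the system, bounds, and settings are illustrative:

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import differential_evolution

        def lorenz(t, s, sigma, rho, beta):
            x, y, z = s
            return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

        t_eval = np.linspace(0.0, 2.0, 200)
        x0 = [1.0, 1.0, 1.0]
        obs = solve_ivp(lorenz, (0.0, 2.0), x0, args=(10.0, 28.0, 8.0 / 3.0),
                        t_eval=t_eval).y                      # synthetic observations

        def fitness(params):
            # Sum of squared errors between observed and simulated trajectories.
            sim = solve_ivp(lorenz, (0.0, 2.0), x0, args=tuple(params), t_eval=t_eval)
            if not sim.success or sim.y.shape != obs.shape:
                return 1e12
            return float(np.sum((sim.y - obs) ** 2))

        result = differential_evolution(fitness, bounds=[(5, 15), (20, 40), (1, 5)],
                                        seed=5, maxiter=50, tol=1e-8)
        print(np.round(result.x, 3))   # should approach (10, 28, 2.667)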

  15. Counteracting estimation bias and social influence to improve the wisdom of crowds.

    Science.gov (United States)

    Kao, Albert B; Berdahl, Andrew M; Hartnett, Andrew T; Lutz, Matthew J; Bak-Coleman, Joseph B; Ioannou, Christos C; Giam, Xingli; Couzin, Iain D

    2018-04-01

    Aggregating multiple non-expert opinions into a collective estimate can improve accuracy across many contexts. However, two sources of error can diminish collective wisdom: individual estimation biases and information sharing between individuals. Here, we measure individual biases and social influence rules in multiple experiments involving hundreds of individuals performing a classic numerosity estimation task. We first investigate how existing aggregation methods, such as calculating the arithmetic mean or the median, are influenced by these sources of error. We show that the mean tends to overestimate, and the median underestimate, the true value for a wide range of numerosities. Quantifying estimation bias, and mapping individual bias to collective bias, allows us to develop and validate three new aggregation measures that effectively counter sources of collective estimation error. In addition, we present results from a further experiment that quantifies the social influence rules that individuals employ when incorporating personal estimates with social information. We show that the corrected mean is remarkably robust to social influence, retaining high accuracy in the presence or absence of social influence, across numerosities and across different methods for averaging social information. Using knowledge of estimation biases and social influence rules may therefore be an inexpensive and general strategy to improve the wisdom of crowds. © 2018 The Author(s).
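
    A small simulation of the aggregation problem: log-normally distributed estimates make the arithmetic mean overshoot and the median undershoot, and a correction calibrated in the log domain recovers the truth; the bias and dispersion parameters are illustrative, and the correction shown is not the paper's exact measure:

        import numpy as np

        rng = np.random.default_rng(6)
        true_value = 500.0                       # e.g. a numerosity stimulus

        # Log-normal individual estimates with a mild underestimation bias
        # (parameters illustrative).
        log_bias, log_sigma = -0.10, 0.6
        estimates = true_value * rng.lognormal(log_bias, log_sigma, size=400)

        print("arithmetic mean:", round(estimates.mean()))    # overshoots
        print("median:", round(np.median(estimates)))         # undershoots

        # Illustrative corrected aggregate: undo the average log-domain bias,
        # here assumed known from calibration stimuli with known true values.
        corrected = np.exp(np.log(estimates).mean() - log_bias)
        print("bias-corrected estimate:", round(corrected))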

  16. Adaptive OFDM Radar Waveform Design for Improved Micro-Doppler Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Sen, Satyabrata [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Center for Engineering Science Advanced Research, Computer Science and Mathematics Division

    2014-07-01

    Here we analyze the performance of a wideband orthogonal frequency division multiplexing (OFDM) signal in estimating the micro-Doppler frequency of a rotating target having multiple scattering centers. The use of a frequency-diverse OFDM signal enables us to independently analyze the micro-Doppler characteristics with respect to a set of orthogonal subcarrier frequencies. We characterize the accuracy of micro-Doppler frequency estimation by computing the Cramer-Rao bound (CRB) on the angular-velocity estimate of the target. Additionally, to improve the accuracy of the estimation procedure, we formulate and solve an optimization problem by minimizing the CRB on the angular-velocity estimate with respect to the OFDM spectral coefficients. We present several numerical examples to demonstrate the CRB variations with respect to the signal-to-noise ratio, number of temporal samples, and number of OFDM subcarriers. We also analyzed numerically the improvement in estimation accuracy due to the adaptive waveform design. A grid-based maximum likelihood estimation technique is applied to evaluate the corresponding mean-squared error performance.

  17. Improvements to the quality of the estimates of US uranium reserves

    International Nuclear Information System (INIS)

    Nikodem, Z.D.

    1998-01-01

    continuing. Further work is being directed toward improving estimation techniques and analysing production levels obtainable from reserve levels at various cost categories. (author)

  18. Improving RNA-Seq expression estimates by correcting for fragment bias

    Science.gov (United States)

    2011-01-01

    The biochemistry of RNA-Seq library preparation results in cDNA fragments that are not uniformly distributed within the transcripts they represent. This non-uniformity must be accounted for when estimating expression levels, and we show how to perform the needed corrections using a likelihood based approach. We find improvements in expression estimates as measured by correlation with independently performed qRT-PCR and show that correction of bias leads to improved replicability of results across libraries and sequencing technologies. PMID:21410973

  19. Requirements and standards facilitating quality improvement for reporting systems in gastrointestinal endoscopy: European Society of Gastrointestinal Endoscopy (ESGE) Position Statement

    NARCIS (Netherlands)

    Bretthauer, Michael; Aabakken, Lars; Dekker, Evelien; Kaminski, Michal F.; Rösch, Thomas; Hultcrantz, Rolf; Suchanek, Stepan; Jover, Rodrigo; Kuipers, Ernst J.; Bisschops, Raf; Spada, Cristiano; Valori, Roland; Domagk, Dirk; Rees, Colin; Rutter, Matthew D.

    2016-01-01

    To develop standards for high quality in gastrointestinal (GI) endoscopy, the European Society of Gastrointestinal Endoscopy (ESGE) has established the ESGE Quality Improvement Committee. A prerequisite for quality assurance and improvement for all GI endoscopy procedures is state-of-the-art

  20. The Role of Satellite Imagery to Improve Pastureland Estimates in South America

    Science.gov (United States)

    Graesser, J.

    2015-12-01

    Agriculture has changed substantially across the globe over the past half century. While much work has been done to improve spatial-temporal estimates of agricultural changes, we still know more about the extent of row-crop agriculture than livestock-grazed land. The gap between cropland and pastureland estimates exists largely because it is challenging to characterize natural versus grazed grasslands from a remote sensing perspective. However, the impasse of pastureland estimates is set to break, with an increasing number of spaceborne sensors and freely available satellite data. The Landsat satellite archive in particular provides researchers with immense amounts of data with which to improve pastureland information. Here we focus on South America, where pastureland expansion has been scrutinized for the past few decades. We explore the challenges of estimating pastureland using temporal Landsat imagery, focusing on key agricultural countries, regions, and ecosystems. We focus on the suggested shift of pastureland from the Argentine Pampas to northern Argentina, and the mixing of small-scale and large-scale ranching in eastern Paraguay and how it could impact the Chaco forest to the west. Further, the Beni Savannahs of northern Bolivia and the Colombian Llanos, both grassland and savannah regions historically used for livestock grazing, have been hinted at as future areas for cropland expansion. There are certainly environmental concerns with pastureland expansion into forests; but what are the environmental implications when well-managed pasture systems are converted to intensive soybean or palm oil plantations? Tropical grazed grasslands are important habitats for biodiversity, and pasturelands can mitigate soil erosion when well managed. Thus, we must improve estimates of grazed land before we can make informed policy and conservation decisions. This talk presents insights into pastureland estimates in South America and discusses the feasibility of improving current

  1. The promise of multimedia technology for STI/HIV prevention: frameworks for understanding improved facilitator delivery and participant learning.

    Science.gov (United States)

    Khan, Maria R; Epperson, Matthew W; Gilbert, Louisa; Goddard, Dawn; Hunt, Timothy; Sarfo, Bright; El-Bassel, Nabila

    2012-10-01

    There is increasing excitement about multimedia sexually transmitted infection (STI) and HIV prevention interventions, yet there has been limited discussion of how use of multimedia technology may improve STI/HIV prevention efforts. The purpose of this paper is to describe the mechanisms through which multimedia technology may work to improve the delivery and uptake of intervention material. We present conceptual frameworks describing how multimedia technology may improve intervention delivery by increasing standardization and fidelity to the intervention material and the participant's ability to learn by improving attention, cognition, emotional engagement, skills-building, and uptake of sensitive material about sexual and drug risks. In addition, we describe how the non-multimedia behavioral STI/HIV prevention intervention, Project WORTH, was adapted into a multimedia format for women involved in the criminal justice system and provide examples of how multimedia activities can more effectively target key mediators of behavioral change in this intervention.

  2. Improved Battery Parameter Estimation Method Considering Operating Scenarios for HEV/EV Applications

    Directory of Open Access Journals (Sweden)

    Jufeng Yang

    2016-12-01

    Full Text Available This paper presents an improved battery parameter estimation method based on typical operating scenarios in hybrid electric vehicles and pure electric vehicles. Compared with the conventional estimation methods, the proposed method takes both the constant-current charging and the dynamic driving scenarios into account, and two separate sets of model parameters are estimated through different parts of the pulse-rest test. The model parameters for the constant-charging scenario are estimated from the data in the pulse-charging periods, while the model parameters for the dynamic driving scenario are estimated from the data in the rest periods, and the length of the fitted dataset is determined by the spectrum analysis of the load current. In addition, the unsaturated phenomenon caused by the long-term resistor-capacitor (RC) network is analyzed, and the initial voltage expressions of the RC networks in the fitting functions are improved to ensure a higher model fidelity. Simulation and experiment results validated the feasibility of the developed estimation method.
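
    As a rough illustration of the rest-period fitting idea, the sketch below (not the authors' code; the model form, parameter values, and noise level are all assumed) fits a two-RC relaxation curve to synthetic rest-voltage data:

        import numpy as np
        from scipy.optimize import curve_fit

        # Two-RC relaxation model: after the load current is removed, the
        # polarization voltages of the RC networks decay exponentially.
        def rest_voltage(t, ocv, v1, tau1, v2, tau2):
            return ocv - v1 * np.exp(-t / tau1) - v2 * np.exp(-t / tau2)

        # Synthetic rest-period data (illustrative values only).
        t = np.linspace(0, 600, 300)                       # seconds
        true = rest_voltage(t, 3.70, 0.030, 15.0, 0.020, 180.0)
        v_meas = true + np.random.normal(0, 2e-4, t.size)  # sensor noise

        p0 = [3.6, 0.05, 10.0, 0.05, 100.0]                # initial guess
        popt, _ = curve_fit(rest_voltage, t, v_meas, p0=p0)
        ocv, v1, tau1, v2, tau2 = popt
        print(f"OCV={ocv:.4f} V, tau1={tau1:.1f} s, tau2={tau2:.1f} s")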

  3. Estimating Evapotranspiration from an Improved Two-Source Energy Balance Model Using ASTER Satellite Imagery

    Directory of Open Access Journals (Sweden)

    Qifeng Zhuang

    2015-11-01

    Full Text Available Reliably estimating the turbulent fluxes of latent and sensible heat at the Earth’s surface by remote sensing is important for research on the terrestrial hydrological cycle. This paper presents a practical approach for mapping surface energy fluxes using Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) images from an improved two-source energy balance (TSEB) model. The original TSEB approach may overestimate latent heat flux under vegetative stress conditions, as has also been reported in recent research. We replaced the Priestley-Taylor equation used in the original TSEB model with one that uses plant moisture and temperature constraints based on the PT-JPL model to obtain a more accurate canopy latent heat flux for model solving. The ASTER data and field observations employed in this study were collected over corn fields in arid regions of the Heihe Watershed Allied Telemetry Experimental Research (HiWATER) area, China. The results were validated by measurements from eddy covariance (EC) systems, and the surface energy flux estimates of the improved TSEB model are similar to the ground truth. A comparison of the results from the original and improved TSEB models indicates that the improved method more accurately estimates the sensible and latent heat fluxes, generating more precise daily evapotranspiration (ET) estimates under vegetative stress conditions.
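
    The Priestley-Taylor canopy term described above can be sketched as follows; the greenness and moisture scalars stand in for the PT-JPL-style constraints, and all numeric values are illustrative assumptions:

        import numpy as np

        GAMMA = 0.066  # psychrometric constant [kPa/K], near sea level

        def slope_svp(t_c):
            """Slope of the saturation vapour pressure curve [kPa/K] at t_c [deg C]."""
            es = 0.6108 * np.exp(17.27 * t_c / (t_c + 237.3))
            return 4098.0 * es / (t_c + 237.3) ** 2

        def canopy_latent_heat(rn_canopy, t_air_c, alpha_pt=1.26,
                               f_green=1.0, f_moisture=1.0):
            """Priestley-Taylor canopy LE [W/m2], scaled by greenness and a
            0-1 moisture constraint, in the spirit of PT-JPL."""
            delta = slope_svp(t_air_c)
            return alpha_pt * f_green * f_moisture * delta / (delta + GAMMA) * rn_canopy

        # A stressed canopy (f_moisture < 1) yields lower LE than the
        # unconstrained Priestley-Taylor estimate.
        print(canopy_latent_heat(400.0, 25.0, f_moisture=1.0))  # unstressed
        print(canopy_latent_heat(400.0, 25.0, f_moisture=0.6))  # stressed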

  4. Improved crop residue cover estimates by coupling spectral indices for residue and moisture

    Science.gov (United States)

    Remote sensing assessment of soil residue cover (fR) and tillage intensity will improve our predictions of the impact of agricultural practices and promote sustainable management. Spectral indices for estimating fR are sensitive to soil and residue water content, therefore, the uncertainty of estima...

  5. Water level observations from Unmanned Aerial Vehicles for improving estimates of surface water-groundwater interaction

    DEFF Research Database (Denmark)

    Bandini, Filippo; Butts, Michael; Vammen Jacobsen, Torsten

    2017-01-01

    spatial resolution; ii) spatially continuous profiles along or across the water body; iii) flexible timing of sampling. A semi-synthetic study was conducted to analyse the value of the new UAV-borne datatype for improving hydrological models, in particular estimates of GW (Groundwater)- SW (Surface Water...

  6. Improved variational estimates for the mass gap in the 2-dimensional XY-model

    International Nuclear Information System (INIS)

    Patkos, A.; Hari Dass, N.D.

    1982-07-01

    The variational estimate obtained recently for the mass gap of the 2-dimensional XY-model is improved by extending the treatment to higher powers of the transfer operator. The relativistic dispersion relation for single particle states of low momentum is also verified. (Auth.)

  7. An Improved Estimation Using Polya-Gamma Augmentation for Bayesian Structural Equation Models with Dichotomous Variables

    Science.gov (United States)

    Kim, Seohyun; Lu, Zhenqiu; Cohen, Allan S.

    2018-01-01

    Bayesian algorithms have been used successfully in the social and behavioral sciences to analyze dichotomous data particularly with complex structural equation models. In this study, we investigate the use of the Polya-Gamma data augmentation method with Gibbs sampling to improve estimation of structural equation models with dichotomous variables.…

  8. An improved principal component analysis based region matching method for fringe direction estimation

    Science.gov (United States)

    He, A.; Quan, C.

    2018-04-01

    The principal component analysis (PCA) and region matching combined method is effective for fringe direction estimation. However, its mask construction algorithm for region matching fails in some circumstances, and its algorithm for converting orientation to direction in mask areas is computationally heavy and non-optimized. We propose an improved PCA-based region matching method for fringe direction estimation, which includes an improved and robust mask construction scheme, and a fast, optimized orientation-to-direction conversion algorithm for the mask areas. Along with the estimated fringe direction map, the fringe pattern filtered by automatic selective reconstruction modification and enhanced fast empirical mode decomposition (ASRm-EFEMD) is used for the Hilbert spiral transform (HST) to demodulate the phase. Subsequently, the windowed Fourier ridge (WFR) method is used for refinement of the phase. The robustness and effectiveness of the proposed method are demonstrated with both simulated and experimental fringe patterns.

  9. Improved delivery of cardiovascular care (IDOCC) through outreach facilitation: study protocol and implementation details of a cluster randomized controlled trial in primary care

    Directory of Open Access Journals (Sweden)

    Akbari Ayub

    2011-09-01

    Full Text Available Abstract Background There is a need to find innovative approaches for translating best practices for chronic disease care into daily primary care practice routines. Primary care plays a crucial role in the prevention and management of cardiovascular disease. There is, however, a substantive care gap, and many challenges exist in implementing evidence-based care. The Improved Delivery of Cardiovascular Care (IDOCC) project is a pragmatic trial designed to improve the delivery of evidence-based care for the prevention and management of cardiovascular disease in primary care practices using practice outreach facilitation. Methods The IDOCC project is a stepped-wedge cluster randomized control trial in which Practice Outreach Facilitators work with primary care practices to improve cardiovascular disease prevention and management for patients at highest risk. Primary care practices in a large health region in Eastern Ontario, Canada, were eligible to participate. The intervention consists of regular monthly meetings with the Practice Outreach Facilitator over a one- to two-year period. Starting with audit and feedback, consensus building, and goal setting, the practices are supported in changing practice behavior by incorporating chronic care model elements. These elements include (a) evidence-based decision support for providers, (b) delivery system redesign for practices, (c) enhanced self-management support tools provided to practices to help them engage patients, and (d) increased community resource linkages for practices to enhance referral of patients. The primary outcome is a composite score measured at the level of the patient to represent each practice's adherence to evidence-based guidelines for cardiovascular care. Qualitative analysis of the Practice Outreach Facilitators' written narratives of their ongoing practice interactions will be done. These textual analyses will add further insight into understanding critical factors impacting

  10. Enhancing e-waste estimates: Improving data quality by multivariate Input–Output Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Feng, E-mail: fwang@unu.edu [Institute for Sustainability and Peace, United Nations University, Hermann-Ehler-Str. 10, 53113 Bonn (Germany); Design for Sustainability Lab, Faculty of Industrial Design Engineering, Delft University of Technology, Landbergstraat 15, 2628CE Delft (Netherlands); Huisman, Jaco [Institute for Sustainability and Peace, United Nations University, Hermann-Ehler-Str. 10, 53113 Bonn (Germany); Design for Sustainability Lab, Faculty of Industrial Design Engineering, Delft University of Technology, Landbergstraat 15, 2628CE Delft (Netherlands); Stevels, Ab [Design for Sustainability Lab, Faculty of Industrial Design Engineering, Delft University of Technology, Landbergstraat 15, 2628CE Delft (Netherlands); Baldé, Cornelis Peter [Institute for Sustainability and Peace, United Nations University, Hermann-Ehler-Str. 10, 53113 Bonn (Germany); Statistics Netherlands, Henri Faasdreef 312, 2492 JP Den Haag (Netherlands)

    2013-11-15

    Highlights: • A multivariate Input–Output Analysis method for e-waste estimates is proposed. • Applying multivariate analysis to consolidate data can enhance e-waste estimates. • We examine the influence of model selection and data quality on e-waste estimates. • Datasets of all e-waste related variables in a Dutch case study have been provided. • Accurate modeling of time-variant lifespan distributions is critical for estimates. - Abstract: Waste electrical and electronic equipment (or e-waste) is one of the fastest growing waste streams, which encompasses a wide and increasing spectrum of products. Accurate estimation of e-waste generation is difficult, mainly due to a lack of high quality data on market and socio-economic dynamics. This paper addresses how to enhance e-waste estimates by providing techniques to increase data quality. An advanced, flexible and multivariate Input–Output Analysis (IOA) method is proposed. It links all three pillars in IOA (product sales, stock and lifespan profiles) to construct mathematical relationships between various data points. By applying this method, the data consolidation steps can generate more accurate time-series datasets from the available data pool. This can consequently increase the reliability of e-waste estimates compared to the approach without data processing. A case study in the Netherlands is used to apply the advanced IOA model. As a result, for the first time ever, complete datasets of all three variables for estimating all types of e-waste have been obtained. The result of this study also demonstrates significant disparity between various estimation models, arising from the use of data under different conditions. It shows the importance of applying a multivariate approach and multiple sources to improve data quality for modelling, specifically using appropriate time-varying lifespan parameters. Following the case study, a roadmap with a procedural guideline is provided to enhance e
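
    The sales-stock-lifespan relationship at the core of such Input–Output Analysis can be sketched as a convolution of historic sales with a discretized lifespan distribution; the Weibull parameters below are illustrative assumptions, not the Dutch case-study values:

        import numpy as np

        def ewaste_generated(sales, shape, scale):
            """E-waste generated each year as the convolution of historic
            sales with a discretized Weibull lifespan distribution."""
            n = len(sales)
            ages = np.arange(1, n + 1)
            cdf = 1.0 - np.exp(-(ages / scale) ** shape)    # Weibull CDF
            pmf = np.diff(np.concatenate(([0.0], cdf)))     # P(discard at age a)
            waste = np.zeros(n)
            for t in range(n):
                for a in range(t + 1):
                    waste[t] += sales[t - a] * pmf[a]
            return waste

        sales = np.array([100, 120, 150, 160, 180, 200, 210], dtype=float)
        print(ewaste_generated(sales, shape=2.0, scale=5.0).round(1))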

  11. Multiple data sources improve DNA-based mark-recapture population estimates of grizzly bears.

    Science.gov (United States)

    Boulanger, John; Kendall, Katherine C; Stetz, Jeffrey B; Roon, David A; Waits, Lisette P; Paetkau, David

    2008-04-01

    A fundamental challenge to estimating population size with mark-recapture methods is heterogeneous capture probabilities and subsequent bias of population estimates. Confronting this problem usually requires substantial sampling effort that can be difficult to achieve for some species, such as carnivores. We developed a methodology that uses two data sources to deal with heterogeneity and applied this to DNA mark-recapture data from grizzly bears (Ursus arctos). We improved population estimates by incorporating additional DNA "captures" of grizzly bears obtained by collecting hair from unbaited bear rub trees concurrently with baited, grid-based, hair snag sampling. We consider a Lincoln-Petersen estimator with hair snag captures as the initial session and rub tree captures as the recapture session and develop an estimator in program MARK that treats hair snag and rub tree samples as successive sessions. Using empirical data from a large-scale project in the greater Glacier National Park area, Montana, USA, together with simulation modeling, we evaluate these methods and compare the results to hair-snag-only estimates. Empirical results indicate that, compared with hair-snag-only data, the joint hair-snag-rub-tree methods produce similar but more precise estimates if capture and recapture rates are reasonably high for both methods. Simulation results suggest that estimators are potentially affected by correlation of capture probabilities between sample types in the presence of heterogeneity. Overall, closed population Huggins-Pledger estimators showed the highest precision and were most robust to sparse data, heterogeneity, and capture probability correlation among sampling types. Results also indicate that these estimators can be used when a segment of the population has zero capture probability for one of the methods. We propose that this general methodology may be useful for other species in which mark-recapture data are available from multiple sources.
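
    A minimal sketch of the two-session idea, using the bias-corrected Lincoln-Petersen (Chapman) estimator with hair-snag and rub-tree sessions; the counts below are invented for illustration:

        def chapman_estimate(n1, n2, m):
            """Bias-corrected Lincoln-Petersen (Chapman) estimator.
            n1: animals identified in session 1 (e.g., hair snags),
            n2: animals identified in session 2 (e.g., rub trees),
            m:  animals detected by both methods."""
            n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
            var = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)
                   / ((m + 1) ** 2 * (m + 2)))
            return n_hat, var ** 0.5

        n_hat, se = chapman_estimate(n1=180, n2=140, m=60)
        print(f"N = {n_hat:.0f} +/- {1.96 * se:.0f} (95% CI half-width)")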

  12. Enhancing e-waste estimates: Improving data quality by multivariate Input–Output Analysis

    International Nuclear Information System (INIS)

    Wang, Feng; Huisman, Jaco; Stevels, Ab; Baldé, Cornelis Peter

    2013-01-01

    Highlights: • A multivariate Input–Output Analysis method for e-waste estimates is proposed. • Applying multivariate analysis to consolidate data can enhance e-waste estimates. • We examine the influence of model selection and data quality on e-waste estimates. • Datasets of all e-waste related variables in a Dutch case study have been provided. • Accurate modeling of time-variant lifespan distributions is critical for estimates. - Abstract: Waste electrical and electronic equipment (or e-waste) is one of the fastest growing waste streams, which encompasses a wide and increasing spectrum of products. Accurate estimation of e-waste generation is difficult, mainly due to a lack of high quality data on market and socio-economic dynamics. This paper addresses how to enhance e-waste estimates by providing techniques to increase data quality. An advanced, flexible and multivariate Input–Output Analysis (IOA) method is proposed. It links all three pillars in IOA (product sales, stock and lifespan profiles) to construct mathematical relationships between various data points. By applying this method, the data consolidation steps can generate more accurate time-series datasets from the available data pool. This can consequently increase the reliability of e-waste estimates compared to the approach without data processing. A case study in the Netherlands is used to apply the advanced IOA model. As a result, for the first time ever, complete datasets of all three variables for estimating all types of e-waste have been obtained. The result of this study also demonstrates significant disparity between various estimation models, arising from the use of data under different conditions. It shows the importance of applying a multivariate approach and multiple sources to improve data quality for modelling, specifically using appropriate time-varying lifespan parameters. Following the case study, a roadmap with a procedural guideline is provided to enhance e

  13. SIMULTANEOUS ESTIMATION OF PHOTOMETRIC REDSHIFTS AND SED PARAMETERS: IMPROVED TECHNIQUES AND A REALISTIC ERROR BUDGET

    International Nuclear Information System (INIS)

    Acquaviva, Viviana; Raichoor, Anand; Gawiser, Eric

    2015-01-01

    We seek to improve the accuracy of joint galaxy photometric redshift estimation and spectral energy distribution (SED) fitting. By simulating different sources of uncorrected systematic errors, we demonstrate that if the uncertainties in the photometric redshifts are estimated correctly, so are those on the other SED fitting parameters, such as stellar mass, stellar age, and dust reddening. Furthermore, we find that if the redshift uncertainties are over(under)-estimated, the uncertainties in SED parameters tend to be over(under)-estimated by similar amounts. These results hold even in the presence of severe systematics and provide, for the first time, a mechanism to validate the uncertainties on these parameters via comparison with spectroscopic redshifts. We propose a new technique (annealing) to re-calibrate the joint uncertainties in the photo-z and SED fitting parameters without compromising the performance of the SED fitting + photo-z estimation. This procedure provides a consistent estimation of the multi-dimensional probability distribution function in SED fitting + z parameter space, including all correlations. While the performance of joint SED fitting and photo-z estimation might be hindered by template incompleteness, we demonstrate that the latter is “flagged” by a large fraction of outliers in redshift, and that significant improvements can be achieved by using flexible stellar populations synthesis models and more realistic star formation histories. In all cases, we find that the median stellar age is better recovered than the time elapsed from the onset of star formation. Finally, we show that using a photometric redshift code such as EAZY to obtain redshift probability distributions that are then used as priors for SED fitting codes leads to only a modest bias in the SED fitting parameters and is thus a viable alternative to the simultaneous estimation of SED parameters and photometric redshifts.

  14. Improving slowness estimate stability and visualization using limited sensor pair correlation on seismic arrays

    Science.gov (United States)

    Gibbons, Steven J.; Näsholm, S. P.; Ruigrok, E.; Kværna, T.

    2018-04-01

    Seismic arrays enhance signal detection and parameter estimation by exploiting the time-delays between arriving signals on sensors at nearby locations. Parameter estimates can suffer due to both signal incoherence, with diminished waveform similarity between sensors, and aberration, with time-delays between coherent waveforms poorly represented by the wave-front model. Sensor-to-sensor correlation approaches to parameter estimation have an advantage over direct beamforming approaches in that individual sensor-pairs can be omitted without necessarily omitting entirely the data from each of the sensors involved. Specifically, we can omit correlations between sensors for which signal coherence in an optimal frequency band is anticipated to be poor or for which anomalous time-delays are anticipated. In practice, this usually means omitting correlations between more distant sensors. We present examples from International Monitoring System seismic arrays with poor parameter estimates resulting when classical f-k analysis is performed over the full array aperture. We demonstrate improved estimates and slowness grid displays using correlation beamforming restricted to correlations between sufficiently closely spaced sensors. This limited sensor-pair correlation (LSPC) approach has lower slowness resolution than would ideally be obtained by considering all sensor-pairs. However, this ideal estimate may be unattainable due to incoherence and/or aberration and the LSPC estimate can often exploit all channels, with the associated noise-suppression, while mitigating the complications arising from correlations between very distant sensors. The greatest need for the method is for short-period signals on large aperture arrays although we also demonstrate significant improvement for secondary regional phases on a small aperture array. LSPC can also provide a robust and flexible approach to parameter estimation on three-component seismic arrays.
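
    A plane-wave sketch of the LSPC idea (assumed geometry and synthetic signals; not the production implementation): beam power is accumulated only over sensor pairs closer than a chosen aperture limit, so distant, incoherent pairs are simply skipped:

        import numpy as np

        def lspc_beam_power(signals, coords, slowness_grid, fs, max_pair_dist):
            """Sum, over sufficiently close sensor pairs, the cross-correlation
            at the lag predicted by each horizontal slowness vector."""
            n = len(signals)
            power = np.zeros(len(slowness_grid))
            for i in range(n):
                for j in range(i + 1, n):
                    if np.linalg.norm(coords[i] - coords[j]) > max_pair_dist:
                        continue              # omit distant, incoherent pairs
                    cc = np.correlate(signals[i], signals[j], mode="full")
                    mid = len(signals[i]) - 1  # zero-lag index
                    for g, s in enumerate(slowness_grid):
                        lag = int(round(np.dot(s, coords[i] - coords[j]) * fs))
                        if -mid <= lag <= mid:
                            power[g] += cc[mid + lag]
            return power

        # Demo: 3 sensors, plane wave with slowness (1e-3, 0) s/m, fs = 100 Hz.
        fs, t = 100.0, np.arange(0, 2, 0.01)
        coords = np.array([[0.0, 0.0], [500.0, 0.0], [0.0, 500.0]])
        wave = lambda tt: np.exp(-((tt - 1.0) ** 2) / 0.01)
        sigs = [wave(t - 1e-3 * c[0]) for c in coords]
        grid = [np.array([sx, 0.0]) for sx in np.linspace(-2e-3, 2e-3, 41)]
        p = lspc_beam_power(sigs, coords, grid, fs, max_pair_dist=600.0)
        print(grid[int(np.argmax(p))])   # ~ [0.001, 0.]; the 707 m pair is omitted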

  15. Spatially Explicit Estimation of Optimal Light Use Efficiency for Improved Satellite Data Driven Ecosystem Productivity Modeling

    Science.gov (United States)

    Madani, N.; Kimball, J. S.; Running, S. W.

    2014-12-01

    Remote sensing based light use efficiency (LUE) models, including the MODIS (MODerate resolution Imaging Spectroradiometer) MOD17 algorithm, are commonly used for regional estimation and monitoring of vegetation gross primary production (GPP) and photosynthetic carbon (CO2) uptake. A common model assumption is that plants in a biome matrix operate at their photosynthetic capacity under optimal climatic conditions. A prescribed biome maximum light use efficiency parameter defines the maximum photosynthetic carbon conversion rate under prevailing climate conditions and is a large source of model uncertainty. Here, we used tower (FLUXNET) eddy covariance measurement based carbon flux data for estimating optimal LUE (LUEopt) over a North American domain. LUEopt was first estimated using tower observed daily carbon fluxes, meteorology and satellite (MODIS) observed fraction of photosynthetically active radiation (FPAR). LUEopt was then spatially interpolated over the domain using empirical models derived from independent geospatial data including global plant traits, surface soil moisture, terrain aspect, land cover type and percent tree cover. The derived LUEopt maps were then used as primary inputs to the MOD17 LUE algorithm for regional GPP estimation; these results were evaluated against tower observations and alternate MOD17 GPP estimates determined using biome-specific LUEopt constants. Estimated LUEopt shows large spatial variability within and among different land cover classes indicated from a sparse North American tower network. Leaf nitrogen content and soil moisture are two important factors explaining LUEopt spatial variability. GPP estimated from spatially explicit LUEopt inputs shows significantly improved model accuracy against independent tower observations (R2 = 0.76). These results indicate that plant trait information can explain spatial heterogeneity in LUEopt, leading to improved GPP estimates from satellite based LUE models.
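
    The LUE model structure referred to above can be sketched as follows; the ramp limits and per-pixel LUEopt values are illustrative assumptions rather than the MOD17 biome constants:

        import numpy as np

        def ramp(x, x_min, x_max):
            """Linear ramp scalar in [0, 1], MOD17-style."""
            return np.clip((x - x_min) / (x_max - x_min), 0.0, 1.0)

        def gpp_lue(lue_opt, par, fpar, tmin_c, vpd_pa):
            """Daily GPP [g C m-2 d-1] from a light use efficiency model:
            GPP = LUEopt * f(Tmin) * f(VPD) * FPAR * PAR."""
            f_tmin = ramp(tmin_c, -8.0, 12.0)           # cold-temperature scalar
            f_vpd = 1.0 - ramp(vpd_pa, 650.0, 4000.0)   # dryness scalar
            return lue_opt * f_tmin * f_vpd * fpar * par

        # Spatially explicit LUEopt (one value per pixel) instead of a
        # single biome constant; values in g C per MJ APAR are assumed.
        lue_opt = np.array([1.1, 0.9, 1.4])
        print(gpp_lue(lue_opt, par=10.0, fpar=0.7, tmin_c=5.0, vpd_pa=1200.0))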

  16. A Dynamical Model of Pitch Memory Provides an Improved Basis for Implied Harmony Estimation

    Science.gov (United States)

    Kim, Ji Chul

    2017-01-01

    Tonal melody can imply vertical harmony through a sequence of tones. Current methods for automatic chord estimation commonly use chroma-based features extracted from audio signals. However, the implied harmony of unaccompanied melodies can be difficult to estimate on the basis of chroma content in the presence of frequent nonchord tones. Here we present a novel approach to automatic chord estimation based on the human perception of pitch sequences. We use cohesion and inhibition between pitches in auditory short-term memory to differentiate chord tones and nonchord tones in tonal melodies. We model short-term pitch memory as a gradient frequency neural network, which is a biologically realistic model of auditory neural processing. The model is a dynamical system consisting of a network of tonotopically tuned nonlinear oscillators driven by audio signals. The oscillators interact with each other through nonlinear resonance and lateral inhibition, and the pattern of oscillatory traces emerging from the interactions is taken as a measure of pitch salience. We test the model with a collection of unaccompanied tonal melodies to evaluate it as a feature extractor for chord estimation. We show that chord tones are selectively enhanced in the response of the model, thereby increasing the accuracy of implied harmony estimation. We also find that, like other existing features for chord estimation, the performance of the model can be improved by using segmented input signals. We discuss possible ways to expand the present model into a full chord estimation system within the dynamical systems framework. PMID:28522983

  17. A Dynamical Model of Pitch Memory Provides an Improved Basis for Implied Harmony Estimation.

    Science.gov (United States)

    Kim, Ji Chul

    2017-01-01

    Tonal melody can imply vertical harmony through a sequence of tones. Current methods for automatic chord estimation commonly use chroma-based features extracted from audio signals. However, the implied harmony of unaccompanied melodies can be difficult to estimate on the basis of chroma content in the presence of frequent nonchord tones. Here we present a novel approach to automatic chord estimation based on the human perception of pitch sequences. We use cohesion and inhibition between pitches in auditory short-term memory to differentiate chord tones and nonchord tones in tonal melodies. We model short-term pitch memory as a gradient frequency neural network, which is a biologically realistic model of auditory neural processing. The model is a dynamical system consisting of a network of tonotopically tuned nonlinear oscillators driven by audio signals. The oscillators interact with each other through nonlinear resonance and lateral inhibition, and the pattern of oscillatory traces emerging from the interactions is taken as a measure of pitch salience. We test the model with a collection of unaccompanied tonal melodies to evaluate it as a feature extractor for chord estimation. We show that chord tones are selectively enhanced in the response of the model, thereby increasing the accuracy of implied harmony estimation. We also find that, like other existing features for chord estimation, the performance of the model can be improved by using segmented input signals. We discuss possible ways to expand the present model into a full chord estimation system within the dynamical systems framework.

  18. A Dynamical Model of Pitch Memory Provides an Improved Basis for Implied Harmony Estimation

    Directory of Open Access Journals (Sweden)

    Ji Chul Kim

    2017-05-01

    Full Text Available Tonal melody can imply vertical harmony through a sequence of tones. Current methods for automatic chord estimation commonly use chroma-based features extracted from audio signals. However, the implied harmony of unaccompanied melodies can be difficult to estimate on the basis of chroma content in the presence of frequent nonchord tones. Here we present a novel approach to automatic chord estimation based on the human perception of pitch sequences. We use cohesion and inhibition between pitches in auditory short-term memory to differentiate chord tones and nonchord tones in tonal melodies. We model short-term pitch memory as a gradient frequency neural network, which is a biologically realistic model of auditory neural processing. The model is a dynamical system consisting of a network of tonotopically tuned nonlinear oscillators driven by audio signals. The oscillators interact with each other through nonlinear resonance and lateral inhibition, and the pattern of oscillatory traces emerging from the interactions is taken as a measure of pitch salience. We test the model with a collection of unaccompanied tonal melodies to evaluate it as a feature extractor for chord estimation. We show that chord tones are selectively enhanced in the response of the model, thereby increasing the accuracy of implied harmony estimation. We also find that, like other existing features for chord estimation, the performance of the model can be improved by using segmented input signals. We discuss possible ways to expand the present model into a full chord estimation system within the dynamical systems framework.

  19. Protein crystal growth on board Shenzhou 3: a concerted effort improves crystal diffraction quality and facilitates structure determination

    International Nuclear Information System (INIS)

    Han, Y.; Cang, H.-X.; Zhou, J.-X.; Wang, Y.-P.; Bi, R.-C.; Colelesage, J.; Delbaere, L.T.J.; Nahoum, V.; Shi, R.; Zhou, M.; Zhu, D.-W.; Lin, S.-X.

    2004-01-01

    The crystallization of 16 proteins was carried out using 60 wells on board Shenzhou 3 in 2002. Although the mission lasted only 7 days, careful and concerted planning at all stages made it possible to obtain crystals of improved quality compared to their ground controls for some of the proteins. Significantly improved resolutions were obtained from diffracted crystals of 4 proteins. A complete data set from a space crystal of PEP carboxykinase yielded significantly higher resolution (1.46 Å vs. 1.87 Å), higher I/σ (22.4 vs. 15.5), and a lower average temperature factor (29.2 Ų vs. 42.9 Ų) than the best ground-based control crystal. The 3-D structure of the enzyme is much improved, with significant ligand density. It has been postulated that the reduced convection and absence of macromolecule sedimentation under microgravity are beneficial for protein crystal growth. Improvements in experimental design for protein crystal growth in microgravity are ongoing.

  20. Functionalization of graphene oxide nanostructures improves photoluminescence and facilitates their use as optical probes in preclinical imaging

    Science.gov (United States)

    Prabhakar, Neeraj; Näreoja, Tuomas; von Haartman, Eva; Şen Karaman, Didem; Burikov, Sergey A.; Dolenko, Tatiana A.; Deguchi, Takahiro; Mamaeva, Veronika; Hänninen, Pekka E.; Vlasov, Igor I.; Shenderova, Olga A.; Rosenholm, Jessica M.

    2015-06-01

    Recently reported photoluminescent nanographene oxides (nGOs), i.e. nanographene oxidised with a sulfuric/nitric acid mixture (SNOx method), have tuneable photoluminescence and are scalable, simple and fast to produce optical probes. This material belongs to the vast class of photoluminescent carbon nanostructures, including carbon dots, nanodiamonds (NDs), graphene quantum dots (GQDs), all of which demonstrate a variety of properties that are attractive for biomedical imaging such as low toxicity and stable photoluminescence. In this study, the nGOs were organically surface-modified with poly(ethylene glycol)-poly(ethylene imine) (PEG-PEI) copolymers tagged with folic acid as the affinity ligand for cancer cells expressing folate receptors. The functionalization enhanced both the cellular uptake and quantum efficiency of the photoluminescence as compared to non-modified nGOs. The nGOs exhibited an excitation dependent photoluminescence that facilitated their detection with a wide range of microscope configurations. The functionalized nGOs were non-toxic, they were retained in the stained cell population over a period of 8 days and they were distributed equally between daughter cells. We have evaluated their applicability in in vitro and in vivo (chicken embryo CAM) models to visualize and track migratory cancer cells. The good biocompatibility and easy detection of the functionalized nGOs suggest that they could address the limitations faced with quantum dots and organic fluorophores in long-term in vivo biomedical imaging.

  1. DOA and Noncircular Phase Estimation of Noncircular Signal via an Improved Noncircular Rotational Invariance Propagator Method

    Directory of Open Access Journals (Sweden)

    Xueqiang Chen

    2015-01-01

    Full Text Available We consider the computationally efficient direction-of-arrival (DOA) and noncircular (NC) phase estimation problem of noncircular signals for a uniform linear array. The key idea is to apply the noncircular propagator method (NC-PM), which does not require eigenvalue decomposition (EVD) of the covariance matrix or singular value decomposition (SVD) of the received data. The noncircular rotational invariance propagator method (NC-RI-PM) avoids spectral peak searching in PM and can obtain a closed-form solution for the DOA, so it has lower computational complexity. An improved NC-RI-PM algorithm for noncircular signals and a uniform linear array is proposed to estimate the elevation angles and noncircular phases with automatic pairing. We reconstruct the extended array output by combining the array output and its conjugated counterpart. Our algorithm fully uses the extended array elements in the improved propagator matrix to estimate the elevation angles and noncircular phases by utilizing the rotational invariance property between subarrays. Compared with NC-RI-PM, the proposed algorithm has better angle estimation performance and much lower computational load. The computational complexity of the proposed algorithm is analyzed. We also derive the variance of the estimation error and the Cramer-Rao bound (CRB) for noncircular signals and a uniform linear array. Finally, simulation results are presented to demonstrate the effectiveness of our algorithm.

  2. Genetic algorithm-based improved DOA estimation using fourth-order cumulants

    Science.gov (United States)

    Ahmed, Ammar; Tufail, Muhammad

    2017-05-01

    Genetic algorithm (GA)-based direction of arrival (DOA) estimation is proposed using fourth-order cumulants (FOC) and ESPRIT principle which results in Multiple Invariance Cumulant ESPRIT algorithm. In the existing FOC ESPRIT formulations, only one invariance is utilised to estimate DOAs. The unused multiple invariances (MIs) must be exploited simultaneously in order to improve the estimation accuracy. In this paper, a fitness function based on a carefully designed cumulant matrix is developed which incorporates MIs present in the sensor array. Better DOA estimation can be achieved by minimising this fitness function. Moreover, the effectiveness of Newton's method as well as GA for this optimisation problem has been illustrated. Simulation results show that the proposed algorithm provides improved estimation accuracy compared to existing algorithms, especially in the case of low SNR, less number of snapshots, closely spaced sources and high signal and noise correlation. Moreover, it is observed that the optimisation using Newton's method is more likely to converge to false local optima resulting in erroneous results. However, GA-based optimisation has been found attractive due to its global optimisation capability.

  3. Improving the Accuracy of Laplacian Estimation with Novel Variable Inter-Ring Distances Concentric Ring Electrodes

    Directory of Open Access Journals (Sweden)

    Oleksandr Makeyev

    2016-06-01

    Full Text Available Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. Currently, the superiority of tripolar concentric ring electrodes over disc electrodes, in particular, in accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work, we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing and decreasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are compared to their constant inter-ring distances counterparts. Finite element method modeling and analytic results are consistent and suggest that increasing inter-ring distances electrode configurations may decrease the truncation error resulting in more accurate Laplacian estimates compared to respective constant inter-ring distances configurations. For the currently used tripolar electrode configuration, the truncation error may be decreased more than two-fold, while for the quadripolar configuration more than a six-fold decrease is expected.
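
    The (4n + 1)-point idea can be sketched by solving for ring weights that cancel the higher-order terms of the circle-averaged Taylor expansion; a minimal derivation check (generic, not the paper's modified variable-distance version):

        import numpy as np
        from math import factorial

        def ring_laplacian_weights(radii):
            """Weights w_k with sum_k w_k * (v_ring(r_k) - v_center) ~ Laplacian(v).
            Uses the circle-average expansion
            v_ring(rho) - v0 = sum_{m>=1} rho^(2m) / (4^m (m!)^2) * Lap^m v
            and cancels all but the m = 1 term."""
            r = np.asarray(radii, dtype=float)
            n = r.size
            A = np.array([[r[k] ** (2 * m) / (4 ** m * factorial(m) ** 2)
                           for k in range(n)] for m in range(1, n + 1)])
            b = np.zeros(n)
            b[0] = 1.0                  # keep only the Laplacian term
            return np.linalg.solve(A, b)

        # Tripolar electrode (rings at r and 2r): recovers the classic
        # [16*(v1 - v0) - (v2 - v0)] / (3 r^2) estimate.
        print(ring_laplacian_weights([1.0, 2.0]))   # -> [16/3, -1/3]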

  4. Improving the Accuracy of Laplacian Estimation with Novel Variable Inter-Ring Distances Concentric Ring Electrodes

    Science.gov (United States)

    Makeyev, Oleksandr; Besio, Walter G.

    2016-01-01

    Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. Currently, the superiority of tripolar concentric ring electrodes over disc electrodes, in particular, in accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work, we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing and decreasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are compared to their constant inter-ring distances counterparts. Finite element method modeling and analytic results are consistent and suggest that increasing inter-ring distances electrode configurations may decrease the truncation error resulting in more accurate Laplacian estimates compared to respective constant inter-ring distances configurations. For currently used tripolar electrode configuration, the truncation error may be decreased more than two-fold, while for the quadripolar configuration more than a six-fold decrease is expected. PMID:27294933

  5. Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process

    Directory of Open Access Journals (Sweden)

    Janet L. Rachlow

    2013-08-01

    Full Text Available United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide a basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) whether a current population size was given, (2) whether a measure of uncertainty or variance was associated with current estimates of population size, and (3) whether population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate, and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty, and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty compared to reptiles and amphibians. We suggest calculating minimum detectable differences to improve confidence when delisting endangered animals, and we identify incentives for individuals to get involved in recovery planning to improve access to quantitative data.

  6. An improvement of satellite-based algorithm for gross primary production estimation optimized over Korea

    Science.gov (United States)

    Pi, Kyoung-Jin; Han, Kyung-Soo; Kim, In-Hwan; Kim, Sang-Il; Lee, Min-Ji

    2011-11-01

    Monitoring global gross primary production (GPP) is relevant to understanding the global carbon cycle and evaluating the effects of interannual climate variation on food and fiber production. GPP, the flux of carbon into ecosystems via photosynthetic assimilation, is an important variable in the global carbon cycle and a key process in land surface-atmosphere interactions. The Moderate-resolution Imaging Spectroradiometer (MODIS) is one of the primary global monitoring sensors, but several studies have demonstrated problems with the MODIS GPP product. This study therefore aimed to resolve the regional mismatch that occurs when the global MODIS GPP product is used over Korea. To solve this problem, we estimated each of the GPP component variables separately to improve the GPP estimates. We compared our GPP estimates with validation GPP data to assess their accuracy. For all sites, the correlation was strong and highly significant (R2 = 0.8164, RMSE = 0.6126 g C m-2 d-1, bias = -0.0271 g C m-2 d-1). We also compared our results to those of other models. The component variables tended to be either over- or under-estimated compared to those in other studies over the Korean peninsula, although the estimated GPP was better. The results of this study will likely improve carbon cycle modeling by capturing finer patterns with an integrated method of remote sensing. Keywords: VEGETATION, Gross Primary Production, MODIS.

  7. Vibration Suppression for Improving the Estimation of Kinematic Parameters on Industrial Robots

    Directory of Open Access Journals (Sweden)

    David Alejandro Elvira-Ortiz

    2016-01-01

    Full Text Available Vibration is a phenomenon present in every industrial system, such as CNC machines and industrial robots. Moreover, sensors used to estimate the angular position of a joint in an industrial robot are severely affected by vibrations, which lead to erroneous estimates. This paper proposes a methodology for improving the estimation of kinematic parameters on industrial robots through proper suppression of the vibration components present in signals acquired from two primary sensors: an accelerometer and a gyroscope. A Kalman filter is responsible for filtering out spurious vibration. Additionally, a sensor fusion technique is used to merge information from both sensors and improve the results obtained using each sensor separately. The methodology is implemented in a proprietary hardware signal processor and tested on an ABB IRB 140 industrial robot, first by analyzing the motion profile of a single joint and then by estimating the path tracking of two welding tasks: one rectangular and one circular. Results from this work prove that the sensor fusion technique, accompanied by proper suppression of vibrations, delivers better estimation than other proposed techniques.
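
    A one-dimensional sketch of the filter-plus-fusion idea (assumed noise variances and synthetic signals; the paper's implementation runs on proprietary hardware): the gyroscope rate drives the prediction step and the vibration-corrupted accelerometer angle drives the correction step:

        import numpy as np

        def fuse_joint_angle(gyro_rate, accel_angle, dt, q=1e-4, r=1e-2):
            """1-D Kalman filter fusing gyroscope rate and accelerometer angle."""
            angle, p = accel_angle[0], 1.0
            out = np.empty(len(gyro_rate))
            for k in range(len(gyro_rate)):
                angle += gyro_rate[k] * dt       # predict from gyro integration
                p += q
                kgain = p / (p + r)              # Kalman gain
                angle += kgain * (accel_angle[k] - angle)  # correct with accel
                p *= (1.0 - kgain)
                out[k] = angle
            return out

        # Synthetic joint motion with 50 Hz vibration on the accelerometer.
        dt, t = 0.01, np.arange(0, 5, 0.01)
        true = 0.5 * np.sin(0.8 * t)
        gyro = np.gradient(true, dt) + np.random.normal(0, 0.05, t.size)
        accel = true + 0.05 * np.sin(2 * np.pi * 50 * t)
        print(np.abs(fuse_joint_angle(gyro, accel, dt) - true).mean())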

  8. Mixing Matrix Estimation of Underdetermined Blind Source Separation Based on Data Field and Improved FCM Clustering

    Directory of Open Access Journals (Sweden)

    Qiang Guo

    2018-01-01

    Full Text Available In modern electronic warfare, multiple input multiple output (MIMO) radar has become an important tool for electronic reconnaissance and intelligence transmission because of its anti-stealth, high resolution, low intercept and anti-destruction characteristics. As a common MIMO radar signal, the discrete frequency coding waveform (DFCW) overlaps severely in both time and frequency, so it cannot be directly used in current radar signal separation problems. Existing fuzzy clustering algorithms suffer from poor initial value selection, low convergence rates and local extrema, which lead to low accuracy in the mixing matrix estimation. Consequently, a novel mixing matrix estimation algorithm based on the data field and improved fuzzy C-means (FCM) clustering is proposed. First of all, the sparsity and linear clustering characteristics of the time–frequency domain MIMO radar signals are enhanced by using the single-source principal value of complex angular detection. Secondly, the data field uses the potential energy information to analyze the particle distribution, from which a new scheme for selecting the number of clusters is designed. Then the particle swarm optimization algorithm is introduced to improve the iterative clustering process of FCM, finally yielding the estimate of the mixing matrix. The simulation results show that the proposed algorithm improves both the estimation accuracy and the robustness of the mixing matrix estimation.
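
    A plain FCM sketch of the mixing-matrix step (without the data-field initialisation or particle swarm refinement of the paper; mixing angles and sparsity level are assumed): cluster centres of normalised mixture samples approximate the mixing matrix columns:

        import numpy as np

        def fcm(points, c, m=2.0, iters=100, seed=0):
            """Basic fuzzy C-means on (N, d) unit-norm samples."""
            rng = np.random.default_rng(seed)
            centres = points[rng.choice(len(points), c, replace=False)]
            for _ in range(iters):
                d = np.linalg.norm(points[:, None, :] - centres[None], axis=2) + 1e-12
                u = 1.0 / (d ** (2 / (m - 1)))
                u /= u.sum(axis=1, keepdims=True)        # fuzzy memberships
                w = u ** m
                centres = (w.T @ points) / w.sum(axis=0)[:, None]
            return centres / np.linalg.norm(centres, axis=1, keepdims=True)

        # Two sensors, three sparse sources: cluster normalised mixtures.
        A = np.array([[np.cos(a), np.sin(a)] for a in (0.3, 1.0, 1.4)]).T
        S = (np.random.default_rng(1).laplace(size=(3, 2000))
             * (np.random.default_rng(2).random((3, 2000)) > 0.9))
        X = (A @ S).T
        X = X[np.linalg.norm(X, axis=1) > 1e-6]
        X /= np.linalg.norm(X, axis=1, keepdims=True)
        X[X[:, 0] < 0] *= -1                             # fold sign ambiguity
        print(fcm(X, c=3).round(2))                      # rows ~ columns of A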

  9. Improving PAGER's real-time earthquake casualty and loss estimation toolkit: a challenge

    Science.gov (United States)

    Jaiswal, K.S.; Wald, D.J.

    2012-01-01

    We describe the on-going developments of PAGER’s loss estimation models, and discuss value-added web content that can be generated related to exposure, damage and loss outputs for a variety of PAGER users. These developments include identifying vulnerable building types in any given area, estimating earthquake-induced damage and loss statistics by building type, and developing visualization aids that help locate areas of concern for improving post-earthquake response efforts. While detailed exposure and damage information is highly useful and desirable, significant improvements are still necessary in order to improve underlying building stock and vulnerability data at a global scale. Existing efforts with the GEM’s GED4GEM and GVC consortia will help achieve some of these objectives. This will benefit PAGER especially in regions where PAGER’s empirical model is less-well constrained; there, the semi-empirical and analytical models will provide robust estimates of damage and losses. Finally, we outline some of the challenges associated with rapid casualty and loss estimation that we experienced while responding to recent large earthquakes worldwide.

  10. EFFECT OF PROPRIOCEPTIVE NEUROMUSCULAR FACILITATION (PNF IN IMPROVING SENSORIMOTOR FUNCTION IN PATIENTS WITH DIABETIC NEUROPATHY AFFECTING LOWER LIMBS

    Directory of Open Access Journals (Sweden)

    Kamaljeet Singh

    2016-06-01

    Full Text Available Background: Diabetes mellitus is a group of metabolic diseases characterized by hyperglycaemia resulting from defects in insulin secretion, insulin action or both. Distal sensorimotor polyneuropathy is the most common complication of diabetes and mainly affects the lower limbs. Most studies have aimed at individually increasing muscle strength or sensation, but not at overall performance enhancement of the diabetic lower limbs. The evidence supporting the effectiveness of PNF in diabetic neuropathic patients is scarce. Methods: 30 patients, aged between 50 and 70 years, diagnosed with Diabetic Sensorimotor Polyneuropathy (DSP) were selected from the department of Medicine and department of Neurosurgery, Guru Gobind Singh Medical College and Hospital. Patients were evaluated at the beginning and at the end of the intervention using Diabetic Neuropathy Examination scores. Patients received 3 sets of exercises one hour/day, 3 days/week, for 3 months. Each set of exercises consisted of 5 repetitions of PNF patterns (on alternate days) and techniques. Results: D1 & D2 patterns of PNF are effective in improving both motor and sensory functions of diabetic patients with neuropathic symptoms. Improvement in muscle strength, reflexes and sensation occurred to a greater extent after the three months of treatment in these subjects. This study shows that PNF patterns were effective at enhancing sensorimotor function of the lower limbs. Conclusion: This study concluded that PNF is effective in improving sensorimotor functions in diabetic neuropathy patients affected in the lower limbs.

  11. Respondent driven sampling: determinants of recruitment and a method to improve point estimation.

    Directory of Open Access Journals (Sweden)

    Nicky McCreesh

    Full Text Available Respondent-driven sampling (RDS) is a variant of a link-tracing design intended for generating unbiased estimates of the composition of hidden populations that typically involves giving participants several coupons to recruit their peers into the study. RDS may generate biased estimates if coupons are distributed non-randomly or if potential recruits present for interview non-randomly. We explore if biases detected in an RDS study were due to either of these mechanisms, and propose and apply weights to reduce bias due to non-random presentation for interview. Using data from the total population, and the population to whom recruiters offered their coupons, we explored how age and socioeconomic status were associated with being offered a coupon, and, if offered a coupon, with presenting for interview. Population proportions were estimated by weighting by the assumed inverse probabilities of being offered a coupon (as in existing RDS methods), and also of presentation for interview if offered a coupon, by age and socioeconomic status group. Younger men were under-recruited primarily because they were less likely to be offered coupons. The under-recruitment of higher socioeconomic status men was due in part to them being less likely to present for interview. Consistent with these findings, weighting for non-random presentation for interview by age and socioeconomic status group greatly improved the estimate of the proportion of men in the lowest socioeconomic group, reducing the root-mean-squared error of RDS estimates of socioeconomic status by 38%, but had little effect on estimates for age. The weighting also improved estimates for tribe and religion (reducing root-mean-squared errors by 19-29%), but had little effect for sexual activity or HIV status. Data collected from recruiters on the characteristics of men to whom they offered coupons may be used to reduce bias in RDS studies. Further evaluation of this new method is required.
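
    The proposed weighting can be sketched as inverse-probability weighting over the two stages; all probabilities and trait values below are invented for illustration:

        import numpy as np

        def weighted_proportion(trait, p_offered, p_present):
            """Population proportion estimate weighting each respondent by the
            inverse of the joint probability of being offered a coupon and of
            presenting for interview given a coupon; per-group probabilities
            are assumed to come from recruiter-reported data."""
            w = 1.0 / (np.asarray(p_offered) * np.asarray(p_present))
            trait = np.asarray(trait, dtype=float)
            return (w * trait).sum() / w.sum()

        # Indicator of lowest socioeconomic group, with some men less likely
        # to be offered coupons or to present for interview.
        trait     = [1, 0, 1, 1, 0, 0, 1, 0]
        p_offered = [0.4, 0.8, 0.4, 0.8, 0.8, 0.4, 0.8, 0.8]
        p_present = [0.5, 0.9, 0.5, 0.5, 0.9, 0.9, 0.5, 0.9]
        print(round(weighted_proportion(trait, p_offered, p_present), 3))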

  12. Improved covariance matrix estimation in spectrally inhomogeneous sea clutter with application to adaptive small boat detection.

    CSIR Research Space (South Africa)

    Herselman, PL

    2008-09-01

    Full Text Available and that is necessary to set the threshold χt as a function of the steering vector Doppler fd. Improvements to the estimation technique are suggested and evaluated where a more localised M is estimated using either frequency agility or the immediate time history… NIM2 as a function of frequency is calculated as NIM2(fd) = E{z(fd)²} / E²{z(fd)} (3), where z(fd) is the power spectral density at fd. This is often used to quantify the Rayleigh-likeness of the envelope. [Figure: NIM2 versus time [s] and Doppler frequency [Hz]]
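
    A minimal numerical check of the NIM2 statistic in equation (3), on synthetic data: exponentially distributed power corresponds to a Rayleigh-distributed envelope, for which NIM2 ≈ 2; larger values indicate spikier, non-Gaussian clutter:

        import numpy as np

        def nim2(z):
            """Second normalised intensity moment E{z^2} / E{z}^2 of power
            spectral density samples z(fd)."""
            z = np.asarray(z, dtype=float)
            return (z ** 2).mean() / z.mean() ** 2

        rng = np.random.default_rng(0)
        print(nim2(rng.exponential(1.0, 100000)))   # ~ 2 for Rayleigh clutter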

  13. Caregivers' responses to an intervention to improve young child feeding behaviors in rural Bangladesh: a mixed method study of the facilitators and barriers to change.

    Science.gov (United States)

    Affleck, William; Pelto, Gretel

    2012-08-01

    Behavior change communications regarding child feeding have met with mixed success. The present study analyzes responses of 34 Bangladeshi caregivers seven months after they received a responsive feeding intervention. The intervention communicated and demonstrated five feeding interactions: hand-washing, self-feeding, verbal responsivity, managing refusals non-forcefully, and dietary diversity. Seventeen caregivers who adopted key behaviors addressed by the intervention and 17 who did not were compared in terms of socio-demographic variables, but more importantly in terms of their recall of the messages, their reported practice, and reported facilitators and barriers. Both those who changed and those who did not reported similar facilitators and barriers to practicing the new behaviors; there was also no difference in recall or in socio-demographic variables. Key themes identified through a constant comparative analysis helped to focus on common features of the lives of caregivers that made it easy or difficult to perform the practices. Some of these were household constraints such as poverty, shortage of time in which to complete chores, and avoiding waste and messiness; others related to the child's demands. Many caregivers misinterpreted instructions about talking to one's child in response to signals, as opposed to more common forms of supervision. Facilitators such as the child's evident pleasure and the caregiver's satisfaction did not always outweigh the barriers. Recommendations for improving interventions include helping caregivers solve problems tied to barriers and including more family members in the intervention. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. MODIS Data Assimilation in the CROPGRO model for improving soybean yield estimations

    Science.gov (United States)

    Richetti, J.; Monsivais-Huertero, A.; Ahmad, I.; Judge, J.

    2017-12-01

    Soybean is one of the main agricultural commodities in the world, so better estimates of its production are important. Improving soybean crop models in Brazil is crucial for better understanding of the soybean market and for enhancing decision making, because Brazil is the second largest soybean producer in the world and Parana state accounts for almost 20% of that production; by itself, Parana would be the fourth largest soybean producer in the world. Data assimilation techniques provide a method to improve the spatio-temporal continuity of crop estimates through integration of remotely sensed observations and crop growth models. This study aims to use MODIS EVI to improve DSSAT-CROPGRO soybean yield estimations in Parana state, southern Brazil. The method uses the Ensemble Kalman filter, which assimilates the combined MODIS Terra and Aqua products (MOD13Q1 and MYD13Q1) into the CROPGRO model to improve agricultural production estimates through updates of light interception data over time. Expected results will be validated against monitored commercial farms during the 2013-2014 period.
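
    A generic stochastic EnKF analysis step of the kind used for such assimilation (a sketch, not the authors' implementation; the EVI observation operator below is an assumed linear stand-in):

        import numpy as np

        def enkf_update(X, y, obs_var, h):
            """EnKF analysis. X: (n_state, n_ens) forecast ensemble; y: observed
            value (e.g., MODIS EVI); h: maps a state column to observation space."""
            n = X.shape[1]
            hx = np.apply_along_axis(h, 0, X)           # predicted observations
            x_mean, hx_mean = X.mean(axis=1), hx.mean()
            p_xh = (X - x_mean[:, None]) @ (hx - hx_mean) / (n - 1)
            p_hh = ((hx - hx_mean) ** 2).sum() / (n - 1)
            k = p_xh / (p_hh + obs_var)                 # Kalman gain
            y_pert = y + np.random.normal(0, obs_var ** 0.5, n)  # perturbed obs
            return X + k[:, None] * (y_pert - hx)[None, :]

        # Toy state [LAI, biomass]; EVI assumed proportional to LAI.
        X = np.random.default_rng(0).normal([2.0, 150.0], [0.5, 20.0], (40, 2)).T
        Xa = enkf_update(X, y=0.45, obs_var=0.01 ** 2, h=lambda x: 0.18 * x[0])
        print(Xa.mean(axis=1))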

  15. New approaches to improve a WCDMA SIR estimator by employing different post-processing stages

    Directory of Open Access Journals (Sweden)

    Amnart Chaichoet

    2008-09-01

    Full Text Available For effective control of transmission power in WCDMA mobile systems, a good estimate of the signal-to-interference ratio (SIR) is needed. Conventionally, an adaptive SIR estimator employs a moving average (MA) filter (Yoon et al., 2002) to counter fading channel distortion. However, the resulting estimate tends to have high estimation error due to fluctuation in the channel variation. In this paper, an additional post-processing stage is proposed to improve the estimation accuracy by reducing the variation of the estimate. Four variants of the post-processing stage, namely (1) a moving average (MA) post-filter, (2) an exponential moving average (EMA) post-filter, (3) an IIR post-filter and (4) a least-mean-squared (LMS) adaptive post-filter, are proposed, and their optimal performance in terms of root-mean-square error (RMSE) is then compared by simulation. The results show comparably good performance when the MA and LMS post-filters are used. However, the MA post-filter requires a lookup table of filter orders for optimal performance at different channel conditions, while the LMS post-filter can be used conveniently without a lookup table.
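
    One of the four variants, the EMA post-filter, can be sketched directly; alpha and the synthetic SIR trace are assumed, and the RMSE comparison mirrors the paper's evaluation criterion:

        import numpy as np

        def ema_postfilter(sir_db, alpha=0.1):
            """Exponential moving average post-filter on raw SIR estimates;
            alpha trades tracking speed against variance reduction."""
            out = np.empty_like(sir_db)
            out[0] = sir_db[0]
            for k in range(1, len(sir_db)):
                out[k] = alpha * sir_db[k] + (1 - alpha) * out[k - 1]
            return out

        # Noisy raw SIR estimates around a slowly varying true level.
        t = np.arange(1500)
        true = 6.0 + 2.0 * np.sin(2 * np.pi * t / 1500)
        raw = true + np.random.normal(0, 1.5, t.size)
        smooth = ema_postfilter(raw, alpha=0.05)
        print(np.sqrt(((raw - true) ** 2).mean()),
              np.sqrt(((smooth - true) ** 2).mean()))   # RMSE before/after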

  16. Improved target detection and bearing estimation utilizing fast orthogonal search for real-time spectral analysis

    International Nuclear Information System (INIS)

    Osman, Abdalla; El-Sheimy, Naser; Nourledin, Aboelamgd; Theriault, Jim; Campbell, Scott

    2009-01-01

    The problem of target detection and tracking in the ocean environment has attracted considerable attention due to its importance in military and civilian applications. Sonobuoys are among the passive sonar systems used for underwater target detection. Target detection and bearing estimation are mainly obtained through spectral analysis of received signals. The frequency resolution offered by current techniques is limited, which affects the accuracy of target detection and bearing estimation at relatively low signal-to-noise ratio (SNR). This research investigates the development of a bearing estimation method using fast orthogonal search (FOS) for enhanced spectral estimation. FOS is employed in this research in order to improve both target detection and bearing estimation in the case of low-SNR inputs. The proposed methods were tested using simulated data developed for two different scenarios under different underwater environmental conditions. The results show that the proposed method is capable of enhancing the accuracy of target detection as well as bearing estimation, especially in cases of very low SNR.

  17. Estimating the cost of improving quality in electricity distribution: A parametric distance function approach

    International Nuclear Information System (INIS)

    Coelli, Tim J.; Gautier, Axel; Perelman, Sergio; Saplacan-Pop, Roxana

    2013-01-01

    The quality of electricity distribution is increasingly scrutinized by regulatory authorities, with explicit reward and penalty schemes based on quality targets having been introduced in many countries. It is then of prime importance to know the cost of improving quality for a distribution system operator. In this paper, we focus on one dimension of quality, the continuity of supply, and we estimate the cost of preventing power outages. To do so, we make use of the parametric distance function approach, assuming that outages enter the firm's production set as an input, an imperfect substitute for maintenance activities and capital investment. This allows us to identify the sources of technical inefficiency and the underlying trade-off faced by operators between quality and other inputs and costs. For this purpose, we use panel data on 92 electricity distribution units operated by ERDF (Électricité Réseau Distribution France) in the 2003–2005 financial years. Assuming a multi-output multi-input translog technology, we estimate that the cost of preventing one interruption is equal to 10.7€ for an average DSO. Furthermore, as one would expect, marginal quality improvements tend to be more expensive as quality itself improves. - Highlights: ► We estimate the implicit cost of outages for the main distribution company in France. ► For this purpose, we make use of a parametric distance function approach. ► Marginal quality improvements tend to be more expensive as quality itself improves. ► The cost of preventing one interruption varies from 1.8 € to 69.2 € (2005 prices). ► We estimate that, on average, it lies 33% above the regulated price of quality.

  18. Winter Crop Mapping for Improving Crop Production Estimates in Argentina Using Moderate Resolution Satellite Imagery

    Science.gov (United States)

    Humber, M. L.; Copati, E.; Sanchez, A.; Sahajpal, R.; Puricelli, E.; Becker-Reshef, I.

    2017-12-01

    Accurate crop production data are fundamental for reducing uncertainty and volatility in the domestic and international agricultural markets. The Agricultural Estimates Department of the Buenos Aires Grain Exchange has worked since 2000 on the estimation of different crop production data. With this information, the Grain Exchange helps different actors in the agricultural chain, such as producers, traders, seed companies, market analysts, and policy makers, in their day-to-day decision making. Since the 2015/16 season, the Grain Exchange has worked on the development of a new earth-observation-based method to identify winter crop planted area at a regional scale, with the aim of improving crop production estimates. The objective of this new methodology is to create a reliable winter crop mask at moderate spatial resolution using Landsat-8 imagery by exploiting bi-temporal differences in the phenological stages of winter crops as compared to other land cover types. In collaboration with the University of Maryland, the map has been validated by photointerpretation of a stratified, statistically random sample of independent ground truth data in the four largest producing provinces of Argentina: Buenos Aires, Cordoba, La Pampa, and Santa Fe. In situ measurements were also used to further investigate conditions in Buenos Aires province. Preliminary results indicate that while there are some avenues for improvement, overall the classification accuracy of the cropland and non-cropland classes is sufficient to improve downstream production estimates. Continuing research will focus on improving the methodology for winter crop mapping exercises on a yearly basis as well as improving the sampling methodology to optimize collection of validation data in the future.
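
    The bi-temporal idea reduces to a very small core, sketched here in Python; the vegetation index, threshold, and acquisition dates are placeholder assumptions, not the Exchange's calibrated values.

        import numpy as np

        def winter_crop_mask(ndvi_early, ndvi_peak, green_up=0.25):
            """Toy bi-temporal rule: pixels that green up strongly between
            an early-winter image and a peak-of-season image are flagged
            as winter crop, since most other cover types do not."""
            return (np.asarray(ndvi_peak) - np.asarray(ndvi_early)) > green_up

        # toy usage on two tiny index rasters
        early = np.array([[0.20, 0.22], [0.35, 0.18]])
        peak = np.array([[0.55, 0.30], [0.40, 0.60]])
        print(winter_crop_mask(early, peak))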

  19. Implementation of a virtual learning from discrepancy meeting: a method to improve attendance and facilitate shared learning from radiological error

    International Nuclear Information System (INIS)

    Carlton Jones, A.L.; Roddie, M.E.

    2016-01-01

    Aim: To assess the effect of establishing virtual learning from discrepancy meetings (LDMs) using OsiriX (Pixmeo) on radiologist participation in a multisite radiology department. Materials and methods: Sets of anonymised discrepancy cases were added to an OsiriX database available for viewing on iMacs in all radiology reporting rooms. Radiologists were given a 3-week period to review the cases and send their feedback to the LDM convenor. Group learning points and consensus feedback were added to each case before it was moved to a permanent digital LDM library. Participation was recorded and compared with that from the previous 4 years of conventional LDMs. Radiologist feedback comparing the two types of LDM was collected using an anonymous online questionnaire. Results: The number of radiologists attending increased significantly, from a mean of 12±2.9 for the conventional LDM to 32.7±7 for the virtual LDM (p<0.0001), and the percentage of radiologists achieving the UK standard of participation in at least 50% of LDMs annually rose from an average of 18% to 68%. The number of cases submitted per meeting rose significantly, from an average of 11.1±3 for conventional LDMs to 15.9±5.9 for virtual LDMs (p=0.0097). Analysis of 35 returned questionnaires showed that radiologists welcomed being able to review cases at a time and place of their choosing and at their own pace. Conclusion: Introduction of virtual LDMs in a multisite radiology department improved radiologist participation in shared learning from radiological discrepancy and increased the number of submitted cases. - Highlights: • Learning from error is an important way to improve patient safety. • Consultant attendance at learning from discrepancy meetings (LDMs) was persistently poor in a large, multisite Trust. • Introduction of a ‘virtual’ LDM improved consultant participation and increased the number of cases submitted.

  20. Improving hepatitis B birth dose in rural Lao People's Democratic Republic through the use of mobile phones to facilitate communication.

    Science.gov (United States)

    Xeuatvongsa, Anonh; Datta, Siddhartha Sankar; Moturi, Edna; Wannemuehler, Kathleen; Philakong, Phanmanisone; Vongxay, Viengnakhone; Vilayvone, Vansy; Patel, Minal K

    2016-11-11

    Hepatitis B vaccine birth dose (HepB-BD) was introduced in Lao People's Democratic Republic in 2008 to prevent perinatal hepatitis B virus transmission; achieving high coverage is challenging since only 38% of births occur in a health facility. Healthcare workers report being unaware of home births and thus unable to conduct timely postnatal care (PNC) home visits. A quasi-experimental pilot study was conducted wherein mobile phones and phone credits were provided to village health volunteers (VHVs) and healthcare workers (HCWs) to assess whether this could improve HepB-BD administration and birth notification and increase home visits. From April to September 2014, VHVs and HCWs in four selected intervention districts were trained, supervised, received an outreach per diem for conducting home visits, and received mobile phones and phone credits. In three comparison districts, VHVs and HCWs were trained, supervised, and received an outreach per diem for conducting home visits. A post-study survey compared HepB-BD coverage among children born during the study and children born one year before. HCWs and VHVs were interviewed about the study. In intervention districts, 463 study children and 406 pre-study children were enrolled in the survey; in comparison districts, 347 study children and 309 pre-study children were enrolled. In both arms, there was a significant improvement in the proportion of children reportedly receiving a PNC home visit (intervention p<0.0001, comparison p=0.04). The median difference in village-level HepB-BD coverage (study cohort minus pre-study cohort) was 57% (interquartile range [IQR] 32-88%, p<0.0001) in intervention districts, compared with 20% (IQR 0-50%, p<0.0001) in comparison districts. The improvement in the intervention districts was greater than in the comparison districts (p=0.0009). Our findings suggest that the provision of phones and phone credits might be one important factor for increasing coverage. However, reasons for improvement

  1. An Improved BeiDou-2 Satellite-Induced Code Bias Estimation Method

    Directory of Open Access Journals (Sweden)

    Jingyang Fu

    2018-04-01

    Unlike GPS, GLONASS, GALILEO and BeiDou-3, BeiDou-2 (BDS) IGSO and MEO satellites are confirmed to exhibit a code multipath bias (CMB) in their code observations that originates at the satellite end and can exceed 1 m. In order to mitigate its adverse effects on precise applications that use code measurements, we propose in this paper an improved correction model to estimate the CMB. Unlike the traditional model, which treats the correction values as orbit-type dependent (estimating one set of values for IGSO and one for MEO) and models the CMB as a piecewise linear function with an elevation node separation of 10°, we estimate corrections for each BDS IGSO and MEO satellite individually, and we use a denser elevation node separation of 5° to model the CMB variations. Institutions such as IGS-MGEX currently operate over 120 stations that provide daily BDS observations; this large amount of data provides adequate support to refine the CMB estimation satellite by satellite in our improved model. One month of BDS observations from MGEX is used to assess the performance of the improved CMB model by means of precise point positioning (PPP). Experimental results show that for satellites on the same orbit type, obvious differences can be found in the CMB at the same node and frequency. Results show that the new correction model can improve the wide-lane (WL) ambiguity usage rate for WL fractional cycle bias estimation, shorten the WL and narrow-lane (NL) time to first fix (TTFF) in PPP ambiguity resolution (AR), and improve PPP positioning accuracy. With our improved correction model, the usage of WL ambiguities is increased from 94.1% to 96.0%, and the WL and NL TTFF of PPP AR are shortened from 10.6 to 9.3 min and from 67.9 to 63.3 min, respectively, compared with the traditional correction model. In addition, both the traditional and improved CMB model have
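
    The correction itself is a table lookup; a minimal Python sketch of the per-satellite, elevation-dependent piecewise linear model is below. The node spacing follows the text, but the satellite ID, frequency label, and bias values are made up for illustration.

        import numpy as np

        # hypothetical correction table: CMB (metres) at 5-degree elevation
        # nodes for one satellite and frequency; real tables are estimated
        # per BDS-2 satellite from network data
        nodes_deg = np.arange(0, 95, 5)
        cmb_table = {("C06", "B1"): 0.4 * np.cos(np.deg2rad(nodes_deg)) - 0.1}

        def correct_code_obs(pseudorange_m, sat, freq, elevation_deg):
            """Subtract the interpolated code multipath bias; np.interp
            reproduces the piecewise linear model between nodes."""
            bias = np.interp(elevation_deg, nodes_deg, cmb_table[(sat, freq)])
            return pseudorange_m - bias

        corrected = correct_code_obs(21345678.9, "C06", "B1", 37.2)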

  2. A Control Variate Method for Probabilistic Performance Assessment. Improved Estimates for Mean Performance Quantities of Interest

    Energy Technology Data Exchange (ETDEWEB)

    MacKinnon, Robert J.; Kuhlman, Kristopher L

    2016-05-01

    We present a method of control variates for calculating improved estimates of mean performance quantities of interest, E(PQI), computed from Monte Carlo probabilistic simulations. An example of a PQI is the concentration of a contaminant at a particular location in a problem domain, computed from simulations of transport in porous media. To simplify the presentation, the method is described in the setting of a one-dimensional elliptic model problem involving a single uncertain parameter represented by a probability distribution. The approach can be easily implemented for more complex problems involving multiple uncertain parameters, and in particular for application to probabilistic performance assessment of deep geologic nuclear waste repository systems. Numerical results indicate the method can produce estimates of E(PQI) having superior accuracy on coarser meshes and reduce the required number of simulations needed to achieve an acceptable estimate.
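
    The estimator itself fits in a few lines; here is a minimal Python sketch with a textbook control variate (the surrogate quantity, its known mean, and the toy integrand are our choices, not the report's repository model).

        import numpy as np

        def control_variate_mean(f_samples, g_samples, g_mean):
            """Control-variate estimate of E[f] using a correlated quantity
            g with known mean (e.g. a cheap coarse-mesh surrogate of the
            PQI); beta is the variance-minimizing coefficient."""
            cov = np.cov(f_samples, g_samples, ddof=1)
            beta = cov[0, 1] / cov[1, 1]
            return f_samples.mean() - beta * (g_samples.mean() - g_mean)

        # toy usage: E[exp(U)], U ~ Uniform(0, 1), control variate g = U
        rng = np.random.default_rng(1)
        u = rng.uniform(size=2000)
        est = control_variate_mean(np.exp(u), u, 0.5)   # ~ e - 1 = 1.71828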

  3. Pre- and post-processing filters for improvement of blood velocity estimation

    DEFF Research Database (Denmark)

    Schlaikjer, Malene; Jensen, Jørgen Arendt

    2000-01-01

    ... velocity in the vessels. Post-processing is beneficial to obtain an image that minimizes the variation and presents the important information to the clinicians. Applying the theory of fluid mechanics introduces restrictions on the variations possible in a flow field: neighboring estimates in time and space should be highly correlated, since transitions should occur smoothly. This idea is the basis of the algorithm developed in this study. From Bayesian image processing theory, an a posteriori probability distribution for the velocity field is computed based on constraints on smoothness. An estimate ... The filters were evaluated on data with different signal-to-noise ratios (SNR); the exact extent of the vessel and the true velocities are thereby known. Velocity estimates were obtained by employing Kasai's autocorrelator on the data. The post-processing filter was used on the computed 2D velocity map. An improvement of the RMS error ...

  4. Improved Shape Parameter Estimation in Pareto Distributed Clutter with Neural Networks

    Directory of Open Access Journals (Sweden)

    José Raúl Machado-Fernández

    2016-12-01

    The main problem faced by naval radars is the elimination of clutter, a distortion signal that appears mixed with target reflections. Recently, the Pareto distribution has been related to sea clutter measurements, suggesting that it may provide a better fit than other traditional distributions. The authors propose a new method for estimating the Pareto shape parameter based on artificial neural networks. The solution achieves a precise estimate of the parameter at low computational cost, outperforming the classic method based on maximum likelihood estimation (MLE). The presented scheme contributes to the development of the NATE detector for Pareto clutter, which uses knowledge of the clutter statistics to improve the stability of detection, among other applications.
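
    For reference, the classic baseline the neural network is compared against is a one-line computation; a Python sketch follows, assuming the scale (minimum clutter level) is known.

        import numpy as np

        def pareto_shape_mle(x, scale):
            """Maximum likelihood estimate of the Pareto shape parameter
            for samples x >= scale."""
            x = np.asarray(x, dtype=float)
            return x.size / np.sum(np.log(x / scale))

        # toy usage: simulated clutter with shape 4 and unit scale
        rng = np.random.default_rng(2)
        samples = 1.0 + rng.pareto(4.0, size=10_000)   # Pareto I via Lomax
        print(pareto_shape_mle(samples, scale=1.0))    # close to 4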

  5. Peak Measurement for Vancomycin AUC Estimation in Obese Adults Improves Precision and Lowers Bias.

    Science.gov (United States)

    Pai, Manjunath P; Hong, Joseph; Krop, Lynne

    2017-04-01

    Vancomycin area under the curve (AUC) estimates may be skewed in obese adults due to weight-dependent pharmacokinetic parameters. We demonstrate that peak and trough measurements reduce bias and improve the precision of vancomycin AUC estimates in obese adults (n = 75) and validate this in an independent cohort (n = 31). The precision and mean percent bias of Bayesian vancomycin AUC estimates are comparable between covariate-dependent (R² = 0.774, 3.55%) and covariate-independent (R² = 0.804, 3.28%) models when peaks and troughs are measured, but not when measurements are restricted to troughs only (R² = 0.557, 15.5%). Copyright © 2017 American Society for Microbiology.
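
    The study's AUC estimates are Bayesian, but why a measured peak constrains the AUC so much better than a trough alone can be illustrated with the common non-Bayesian two-sample calculation below; the one-compartment assumptions and the sampling times are ours, not the authors'.

        import numpy as np

        def vanc_auc24(c_max, c_min, t_inf, tau):
            """Approximate steady-state AUC0-24 (mg*h/L) from a peak drawn
            at the end of infusion and a trough drawn at the end of the
            dosing interval: linear trapezoid over the infusion phase plus
            a log-linear elimination phase, scaled to 24 h."""
            ke = np.log(c_max / c_min) / (tau - t_inf)   # 1/h
            auc_inf = 0.5 * (c_min + c_max) * t_inf      # infusion phase
            auc_elim = (c_max - c_min) / ke              # decay phase
            return (auc_inf + auc_elim) * (24.0 / tau)

        # toy usage: 1 h infusion every 12 h, peak 30 mg/L, trough 12 mg/L
        print(vanc_auc24(c_max=30.0, c_min=12.0, t_inf=1.0, tau=12.0))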

  6. Estimation of improved resolution soil moisture in vegetated areas using passive AMSR-E data

    Science.gov (United States)

    Moradizadeh, Mina; Saradjian, Mohammad R.

    2018-03-01

    Microwave remote sensing provides a unique capability for soil parameter retrieval. Accordingly, various soil parameter estimation models have been developed using brightness temperature (BT) measured by passive microwave sensors. Because of the low resolution of satellite microwave radiometer data, the main goal of this study is to develop a downscaling approach that improves the spatial resolution of soil moisture estimates with the use of higher resolution visible/infrared sensor data. After the soil parameters have been obtained using the Simultaneous Land Parameters Retrieval Model algorithm, the downscaling method is applied to the soil moisture estimates, which are validated against in situ soil moisture data. Advanced Microwave Scanning Radiometer-EOS BT data for the Soil Moisture Experiment 2003 region in the south and north of Oklahoma were used to this end. Results illustrate that the soil moisture variability is effectively captured at 5 km spatial scales without significant degradation of the accuracy.

  7. Consultants Group Meeting on Improvement of Codling Moth SIT to Facilitate Expansion of Field Application. Working Material

    International Nuclear Information System (INIS)

    2000-01-01

    SIT currently has only limited application in Lepidoptera control. Prospects for improvement of the technique, however, are good, and the species with the best immediate prospect is the codling moth (Cydia pomonella). Codling moth is the key pest of most apple and pear orchards in the world and the cause of intensive insecticide use during the whole fruiting season. As a result of the increasing development of insecticide resistance in codling moth, the banning of essential insecticides, and public concerns about the environment and food safety, the Subprogramme continues to receive enquiries from a number of countries as to the applicability of SIT as a suppression method for this species. SIT is currently used as part of area-wide codling moth control in British Columbia, Canada and in the border area with Washington State, USA. The SIT can be integrated with a number of other techniques, including mating disruption, as in the trial in Washington State. The Canadian programme is co-funded by growers and local and national government, and it is proving effective at controlling the moth in an environmentally friendly way. Currently the programme is only financially attractive with government subsidy, although given the replacement of insecticide use with SIT, growers will be able to access the rapidly growing and very lucrative market for organic fruit. A new CRP is proposed with the objective of improving the efficiency of all stages of the SIT for codling moth. This will cover reducing the cost of production, product and process quality control, genetic sexing, strain compatibility, and field monitoring, among others.

  8. Improving real-time estimation of heavy-to-extreme precipitation using rain gauge data via conditional bias-penalized optimal estimation

    Science.gov (United States)

    Seo, Dong-Jun; Siddique, Ridwan; Zhang, Yu; Kim, Dongsoo

    2014-11-01

    A new technique for gauge-only precipitation analysis for improved estimation of heavy-to-extreme precipitation is described and evaluated. The technique is based on a novel extension of classical optimal linear estimation theory in which, in addition to error variance, Type-II conditional bias (CB) is explicitly minimized. When cast in the form of well-known kriging, the methodology yields a new kriging estimator, referred to as CB-penalized kriging (CBPK). CBPK, however, tends to yield negative estimates in areas of no or light precipitation. To address this, an extension of CBPK, referred to herein as extended conditional bias penalized kriging (ECBPK), has been developed which combines the CBPK estimate with a trivial estimate of zero precipitation. To evaluate ECBPK, we carried out real-world and synthetic experiments in which ECBPK and the gauge-only precipitation analysis procedure used in the NWS's Multisensor Precipitation Estimator (MPE) were compared for estimation of point precipitation and mean areal precipitation (MAP), respectively. The results indicate that ECBPK improves hourly gauge-only estimation of heavy-to-extreme precipitation significantly. The improvement is particularly large for estimation of MAP for a range of combinations of basin size and rain gauge network density. This paper describes the technique, summarizes the results and shares ideas for future research.

  9. Soft Sensor of Vehicle State Estimation Based on the Kernel Principal Component and Improved Neural Network

    Directory of Open Access Journals (Sweden)

    Haorui Liu

    2016-01-01

    In car control systems, it is hard to measure some key vehicle states directly and accurately when running on the road, and the cost of measurement is high as well. To address these problems, a vehicle state estimation method based on kernel principal component analysis and an improved Elman neural network is proposed. Combining this with a nonlinear vehicle model of three degrees of freedom (3 DOF: longitudinal, lateral, and yaw motion), this paper applies the method to soft sensing of the vehicle states. The simulation results for a double lane change, tested by Matlab/SIMULINK cosimulation, prove the KPCA-IENN algorithm (kernel principal component analysis with an improved Elman neural network) to be quick and precise when tracking the vehicle states within the nonlinear region. This method can meet the software performance requirements of vehicle state estimation in precision, tracking speed, noise suppression, and other aspects.

  10. Improving the accuracy of Laplacian estimation with novel multipolar concentric ring electrodes

    Science.gov (United States)

    Ding, Quan; Besio, Walter G.

    2015-01-01

    Conventional electroencephalography with disc electrodes has major drawbacks, including poor spatial resolution, poor selectivity, and low signal-to-noise ratio, that critically limit its use. Concentric ring electrodes, consisting of several elements including a central disc and a number of concentric rings, are a promising alternative with the potential to improve all of the aforementioned aspects significantly. In our previous work, the tripolar concentric ring electrode was successfully used in a wide range of applications, demonstrating its superiority to the conventional disc electrode, in particular in the accuracy of Laplacian estimation. This paper takes the next step toward further improving the Laplacian estimation with novel multipolar concentric ring electrodes by completing and validating a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2, which allows cancellation of all the truncation terms up to the order of 2n. An explicit formula based on inversion of a square Vandermonde matrix is derived to make computation of the multipolar Laplacian more efficient. To confirm the analytic result that the accuracy of the Laplacian estimate increases with n, and to assess the significance of this gain in accuracy for practical applications, finite element method model analysis was performed. Multipolar concentric ring electrode configurations with n ranging from 1 ring (bipolar electrode configuration) to 6 rings (septapolar electrode configuration) were directly compared, and the obtained results suggest the significance of the increase in Laplacian accuracy caused by the increase of n. PMID:26693200
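
    A Python sketch of the weight computation that the text attributes to inverting a square Vandermonde matrix; equally spaced ring radii and this particular normalization are our assumptions.

        import numpy as np

        def ring_weights(n):
            """Weights c_1..c_n for an (n + 1)-polar concentric ring
            electrode with rings at radii r, 2r, ..., nr. Row k of the
            Vandermonde-type matrix holds i**(2k); zeroing rows k = 2..n
            cancels the truncation terms up to order 2n, and the first
            row normalizes the surviving Laplacian term."""
            i = np.arange(1, n + 1, dtype=float)
            M = np.vstack([i ** (2 * k) for k in range(1, n + 1)])
            rhs = np.zeros(n)
            rhs[0] = 1.0
            return np.linalg.solve(M, rhs)

        # tripolar case (n = 2): weights in the ratio 16 : -1, matching
        # the familiar tripolar Laplacian estimate
        print(ring_weights(2))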

  11. Improved estimation of receptor density and binding rate constants using a single tracer injection and displacement

    International Nuclear Information System (INIS)

    Syrota, A.; Delforge, J.; Mazoyer, B.M.

    1988-01-01

    The possibility of improving receptor model parameter estimation using a displacement experiment, in which an excess of an unlabeled ligand (J) is injected after a delay (t_D) following injection of trace amounts of the β⁺-labeled ligand (J*), is investigated. The effects of varying t_D and J/J* on parameter uncertainties are studied in the case of ¹¹C-MQNB binding to the myocardial acetylcholine receptor, using parameters identified in a dog experiment.

  12. An improved PNGV modeling and SOC estimation for lithium iron phosphate batteries

    Science.gov (United States)

    Li, Peng

    2017-11-01

    Because the lithium iron phosphate battery has many advantages, it is used more and more widely in the field of electric vehicles. This paper presents an improved PNGV model for the lithium iron phosphate battery. Based on the battery's charge-discharge characteristics and pulse charge-discharge experiments, the parameters of the battery model are identified by interpolation and least-squares fitting, to achieve a more accurate model of the lithium iron phosphate battery, and the extended Kalman filter (EKF) algorithm is used to complete the battery's state of charge (SOC) estimate.
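
    The predict/update structure of such an EKF can be sketched with a scalar SOC state in Python; the PNGV model adds RC polarization states, and the circuit constants and OCV curve below are made-up placeholders.

        import numpy as np

        def ekf_soc_step(soc, P, current, v_meas, dt, Q, R, cap_ah,
                         ocv, docv, r0=0.01):
            """One EKF step with a crude voltage model v = OCV(soc) - r0*i
            (discharge current positive)."""
            # predict: coulomb counting
            soc_p = soc - current * dt / (3600.0 * cap_ah)
            P_p = P + Q
            # update: linearize the measurement about the predicted SOC
            H = docv(soc_p)                      # dV/dSOC
            v_pred = ocv(soc_p) - r0 * current
            K = P_p * H / (H * P_p * H + R)
            soc_new = soc_p + K * (v_meas - v_pred)
            return soc_new, (1.0 - K * H) * P_p

        # toy linear OCV curve standing in for a LiFePO4 cell's table
        ocv = lambda s: 3.2 + 0.7 * s
        docv = lambda s: 0.7
        soc, P = 0.9, 1e-4
        soc, P = ekf_soc_step(soc, P, current=10.0, v_meas=3.78, dt=1.0,
                              Q=1e-7, R=1e-3, cap_ah=20.0, ocv=ocv, docv=docv)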

  13. Estimating Typhoon Rainfall over Sea from SSM/I Satellite Data Using an Improved Genetic Programming

    Science.gov (United States)

    Yeh, K.; Wei, H.; Chen, L.; Liu, G.

    2010-12-01

    This paper proposes an improved multi-run genetic programming (GP) approach and applies it to predict rainfall using meteorological satellite data. GP is a well-known evolutionary programming and data mining method, used to automatically discover the complex relationships among nonlinear systems. The main advantage of GP is to optimize appropriate types of function and their associated coefficients simultaneously. This study makes an improvement to enhance the ability to escape from local optima during the optimization procedure: the GP is run several times in succession, replacing the terminal nodes at the next run with the best solution of the current run. The improved GP obtains a highly nonlinear mathematical equation to estimate the rainfall. In the case study, this improved GP combined with SSM/I satellite data is employed to establish a suitable method for estimating rainfall at the sea surface during typhoon periods. These estimated rainfalls are then verified against data from four rainfall stations located at Peng-Jia-Yu, Don-Gji-Dao, Lan-Yu, and Green Island, four small islands around Taiwan. From the results, the improved GP can generate a sophisticated and accurate nonlinear mathematical equation through two-run learning procedures which outperforms traditional multiple linear regression, empirical equations and back-propagated networks.

  14. Reversible dual inhibitor against G9a and DNMT1 improves human iPSC derivation enhancing MET and facilitating transcription factor engagement to the genome.

    Directory of Open Access Journals (Sweden)

    Juan Roberto Rodriguez-Madoz

    The combination of defined factors with small molecules targeting epigenetic factors is a strategy that has been shown to enhance optimal derivation of iPSCs and could be used for disease modelling, high throughput screenings and/or regenerative medicine applications. In this study, we showed that a new first-in-class reversible dual G9a/DNMT1 inhibitor compound (CM272) improves the efficiency of human cell reprogramming and iPSC generation from primary cells of healthy donors and patient samples, using both integrative and non-integrative methods. Moreover, CM272 facilitates the generation of human iPSC with only two factors allowing the removal of the most potent oncogenic factor cMYC. Furthermore, we demonstrated that mechanistically, treatment with CM272 induces heterochromatin relaxation, facilitates the engagement of OCT4 and SOX2 transcription factors to OSKM refractory binding regions that are required for iPSC establishment, and enhances mesenchymal to epithelial transition during the early phase of cell reprogramming. Thus, the use of this new G9a/DNMT reversible dual inhibitor compound may represent an interesting alternative for improving cell reprogramming and human iPSC derivation for many different applications while providing interesting insights into reprogramming mechanisms.

  15. Reversible dual inhibitor against G9a and DNMT1 improves human iPSC derivation enhancing MET and facilitating transcription factor engagement to the genome.

    Science.gov (United States)

    Rodriguez-Madoz, Juan Roberto; San Jose-Eneriz, Edurne; Rabal, Obdulia; Zapata-Linares, Natalia; Miranda, Estibaliz; Rodriguez, Saray; Porciuncula, Angelo; Vilas-Zornoza, Amaia; Garate, Leire; Segura, Victor; Guruceaga, Elizabeth; Agirre, Xabier; Oyarzabal, Julen; Prosper, Felipe

    2017-01-01

    The combination of defined factors with small molecules targeting epigenetic factors is a strategy that has been shown to enhance optimal derivation of iPSCs and could be used for disease modelling, high throughput screenings and/or regenerative medicine applications. In this study, we showed that a new first-in-class reversible dual G9a/DNMT1 inhibitor compound (CM272) improves the efficiency of human cell reprogramming and iPSC generation from primary cells of healthy donors and patient samples, using both integrative and non-integrative methods. Moreover, CM272 facilitates the generation of human iPSC with only two factors allowing the removal of the most potent oncogenic factor cMYC. Furthermore, we demonstrated that mechanistically, treatment with CM272 induces heterochromatin relaxation, facilitates the engagement of OCT4 and SOX2 transcription factors to OSKM refractory binding regions that are required for iPSC establishment, and enhances mesenchymal to epithelial transition during the early phase of cell reprogramming. Thus, the use of this new G9a/DNMT reversible dual inhibitor compound may represent an interesting alternative for improving cell reprogramming and human iPSC derivation for many different applications while providing interesting insights into reprogramming mechanisms.

  16. Improvements of the ALICE high level trigger for LHC Run 2 to facilitate online reconstruction, QA, and calibration

    Energy Technology Data Exchange (ETDEWEB)

    Rohr, David [Frankfurt Institute for Advanced Studies, Frankfurt (Germany); Collaboration: ALICE-Collaboration

    2016-07-01

    ALICE is one of the four major experiments at the Large Hadron Collider (LHC) at CERN. Its main goal is the study of matter under the extreme pressures and temperatures produced in heavy ion collisions at the LHC. The ALICE High Level Trigger (HLT) is an online compute farm of around 200 nodes that performs real-time event reconstruction of the data delivered by the ALICE detectors. The HLT employs a fast FPGA-based cluster finder algorithm as well as a GPU-based track reconstruction algorithm, and it is designed to process the maximum data rate expected from the ALICE detectors in real time. We present new features of the HLT for LHC Run 2, which started in 2015. A new fast standalone track reconstruction algorithm for the Inner Tracking System (ITS) enables the HLT to compute and report to the LHC the luminous region of the interactions in real time. We employ a new dynamically reconfigurable histogram component that allows the visualization of characteristics of the online reconstruction using the full set of events measured by the detectors. This improves our monitoring and QA capabilities. During Run 2, we plan to deploy online calibration, starting with the calibration of the TPC (Time Projection Chamber) detector's drift time. First proof-of-concept tests were successfully performed using data replay on our development cluster and during the heavy ion period at the end of 2015.

  17. Experimental and Analytical Studies on Improved Feedforward ML Estimation Based on LS-SVR

    Directory of Open Access Journals (Sweden)

    Xueqian Liu

    2013-01-01

    Maximum likelihood (ML) is the most common and effective parameter estimation method. However, when dealing with small samples and low signal-to-noise ratio (SNR), threshold effects appear and estimation performance degrades greatly. It has been shown that the support vector machine (SVM) is suitable for small samples. Consequently, we exploit the linear relationship between the inputs and outputs of least squares support vector regression (LS-SVR) and regard the LS-SVR process as a time-varying linear filter to increase the input SNR of received signals and decrease the threshold value of the mean square error (MSE) curve. Furthermore, taking single-tone sinusoidal frequency estimation as an example and integrating data analysis and experimental validation, we verify that, if the LS-SVR parameters are set appropriately, the LS-SVR process not only preserves the single-tone sinusoid and additive white Gaussian noise (AWGN) channel characteristics of the original signals, but also improves the frequency estimation performance. In the experimental simulations, the LS-SVR process is applied to two common and representative single-tone sinusoidal ML frequency estimation algorithms, the DFT-based frequency-domain periodogram (FDP) and the phase-based Kay algorithm, and the threshold values of their MSE curves are decreased by 0.3 dB and 1.2 dB, respectively, which clearly demonstrates the advantage of the proposed algorithm.
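
    As a rough stand-in for the LS-SVR prefilter (kernel ridge regression shares its squared-loss, ridge-penalized form), the Python sketch below smooths a noisy tone before a periodogram (FDP-style) frequency estimate; the kernel width, regularization, and test tone are our assumptions, not the tuned values from the paper.

        import numpy as np
        from sklearn.kernel_ridge import KernelRidge

        def lssvr_like_prefilter(x, fs, alpha=0.1, gamma=2.0e4):
            """Denoise a record by RBF kernel ridge regression on time,
            acting as a smoothing (time-varying linear) filter."""
            t = (np.arange(len(x)) / fs).reshape(-1, 1)
            return KernelRidge(kernel="rbf", alpha=alpha,
                               gamma=gamma).fit(t, x).predict(t)

        # toy usage: 23 Hz tone in noise, then an FFT-peak frequency pick
        fs, n = 1000.0, 256
        t = np.arange(n) / fs
        x = np.sin(2 * np.pi * 23.0 * t) + np.random.normal(0.0, 1.0, n)
        xs = lssvr_like_prefilter(x, fs)
        f_hat = np.argmax(np.abs(np.fft.rfft(xs)) ** 2) * fs / n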

  18. Improving accuracy of portion-size estimations through a stimulus equivalence paradigm.

    Science.gov (United States)

    Hausman, Nicole L; Borrero, John C; Fisher, Alyssa; Kahng, SungWoo

    2014-01-01

    The prevalence of obesity continues to increase in the United States (Gordon-Larsen, The, & Adair, 2010). Obesity can be attributed, in part, to overconsumption of energy-dense foods. Given that overeating plays a role in the development of obesity, interventions that teach individuals to identify and consume appropriate portion sizes are warranted. Specifically, interventions that teach individuals to estimate portion sizes correctly without the use of aids may be critical to the success of nutrition education programs. The current study evaluated the use of a stimulus equivalence paradigm to teach 9 undergraduate students to estimate portion size accurately. Results suggested that the stimulus equivalence paradigm was effective in teaching participants to make accurate portion size estimations without aids, and improved accuracy was observed in maintenance sessions that were conducted 1 week after training. Furthermore, 5 of 7 participants estimated the target portion size of novel foods during extension sessions. These data extend existing research on teaching accurate portion-size estimations and may be applicable to populations who seek treatment (e.g., overweight or obese children and adults) to teach healthier eating habits. © Society for the Experimental Analysis of Behavior.

  19. An improved method to estimate reflectance parameters for high dynamic range imaging

    Science.gov (United States)

    Li, Shiying; Deguchi, Koichiro; Li, Renfa; Manabe, Yoshitsugu; Chihara, Kunihiro

    2008-01-01

    Two methods are described to accurately estimate diffuse and specular reflectance parameters, for colors, gloss intensity, and surface roughness, over the dynamic range of the camera used to capture the input images. Neither method needs to segment color areas on an image or to reconstruct a high dynamic range (HDR) image. The second method improves on the first, bypassing the requirement for explicit separation of the diffuse and specular reflection components. In the latter method, diffuse and specular reflectance parameters are estimated separately using the least squares method. Reflection values are initially assumed to be diffuse-only reflection components and are subjected to the least squares method to estimate the diffuse reflectance parameters. The specular reflection components, obtained by subtracting the computed diffuse reflection components from the reflection values, are then fitted to a logarithmically transformed equation of the Torrance-Sparrow reflection model, and the specular reflectance parameters for gloss intensity and surface roughness are finally estimated using the least squares method. Experiments were carried out using both methods on simulation data at different saturation levels, generated according to the Lambert and Torrance-Sparrow reflection models, and using the second method on spectral images captured by an imaging spectrograph and a moving light source. Our results show that the second method can estimate the diffuse and specular reflectance parameters for colors, gloss intensity, and surface roughness more accurately and faster than the first, so that colors and gloss can be reproduced more efficiently for HDR imaging.

  20. In Vitro Maturation of a Humanized Shark VNAR Domain to Improve Its Biophysical Properties to Facilitate Clinical Development

    Directory of Open Access Journals (Sweden)

    John Steven

    2017-10-01

    Molecular engineering to increase the percentage identity to common human immunoglobulin sequences of non-human therapeutic antibodies and scaffolds has become standard practice. This strategy is often used to reduce undesirable immunogenic responses, accelerating the clinical development of candidate domains. The first humanized shark variable domain (VNAR) was reported by Kovalenko and colleagues and used the anti-human serum albumin (HSA) domain, clone E06, as a model to construct a number of humanized versions including huE06v1.10. This study extends this work by using huE06v1.10 as a template to isolate domains with improved biophysical properties and reduced antigenicity. Random mutagenesis was conducted on huE06v1.10, followed by refinement of clones through an off-rate ranking-based selection on target antigen. Many of these next-generation binders retained high affinity for target, together with good species cross-reactivity. Lead domains were assessed for any tendency to dimerize, tolerance to N- and C-terminal fusions, affinity, stability, and relative antigenicity in human dendritic cell assays. Functionality of candidate clones was verified in vivo through the extension of serum half-life in a typical drug format. From these analyses the domain BA11 exhibited negligible antigenicity, high stability, and high affinity for mouse, rat, and HSA. When these attributes were combined with demonstrable functionality in a rat model of PK, the BA11 clone was established as our clinical candidate.

  1. Using clinical indicators to facilitate quality improvement via the accreditation process: an adaptive study into the control relationship.

    Science.gov (United States)

    Chuang, Sheuwen; Howley, Peter P; Hancock, Stephen

    2013-07-01

    The aim of the study was to determine accreditation surveyors' and hospitals' use and perceived usefulness of clinical indicator reports and the potential to establish the control relationship between the accreditation and reporting systems. The control relationship refers to instructional directives, arising from appropriately designed methods and efforts towards using clinical indicators, which provide a directed moderating, balancing and best outcome for the connected systems. Web-based questionnaire survey. Australian Council on Healthcare Standards' (ACHS) accreditation and clinical indicator programmes. Seventy-three of 306 surveyors responded. Half used the reports always or most of the time. Five key messages were revealed: (i) report use was related to availability before on-site investigation; (ii) report use was associated with the use of non-ACHS reports; (iii) a clinical indicator set's perceived usefulness was associated with its reporting volume across hospitals; (iv) simpler measures and visual summaries in reports were rated the most useful; (v) reports were deemed to be suitable for the quality and safety objectives of the key groups of interested parties (hospitals' senior executive and management officers, clinicians, quality managers and surveyors). Establishing the control relationship between the reporting and accreditation systems is a promising prospect. Redesigning processes to ensure that reports are available in pre-survey packages, and refining the education of surveyors and hospitals on how to better utilize the reports, will support the relationship. Additional studies on the systems theory-based model of the accreditation and reporting system are warranted to establish the control relationship, building integrated system-wide relationships with sustainable and improved outcomes.

  2. Barriers and facilitators to evidence based care of type 2 diabetes patients: experiences of general practitioners participating to a quality improvement program

    Directory of Open Access Journals (Sweden)

    Hannes Karen

    2009-07-01

    Objective: To evaluate the barriers and facilitators to high-quality diabetes care as experienced by general practitioners (GPs) who participated in an 18-month quality improvement program (QIP). This QIP was implemented to promote compliance with international guidelines. Methods: Twenty of the 120 GPs participating in the QIP underwent semi-structured interviews that focused on three questions: 'Which changes did you implement or observe in the quality of diabetes care during your participation in the QIP?', 'According to your experience, what induced these changes?' and 'What difficulties did you experience in making the changes?' Results: Most GPs reported that enhanced knowledge, improved motivation, and a greater sense of responsibility were the key factors that led to greater compliance with diabetes care guidelines and consequent improvements in diabetes care. Other factors were improved communication with patients and consulting specialists and reliance on diabetes nurse educators. Some GPs were reluctant to collaborate with specialists, and especially with diabetes educators and dieticians. Others blamed poor compliance with the guidelines on lack of time. Most interviewees reported that a considerable minority of patients were unwilling to change their lifestyles. Conclusion: Qualitative research nested in an experimental trial may clarify the improvements that a QIP may bring about in a general practice, provide insight into GPs' approach to diabetes care, and reveal the program's limits. Implementation of a QIP encounters an array of cognitive, motivational, and relational obstacles that are embedded in the patient-healthcare provider relationship.

  3. Improving the Network Scale-Up Estimator: Incorporating Means of Sums, Recursive Back Estimation, and Sampling Weights.

    Directory of Open Access Journals (Sweden)

    Patrick Habecker

    Researchers interested in studying populations that are difficult to reach through traditional survey methods can now draw on a range of methods to access these populations. Yet many of these methods are more expensive and difficult to implement than studies using conventional sampling frames and trusted sampling methods. The network scale-up method (NSUM) provides a middle ground for researchers who wish to estimate the size of a hidden population but lack the resources to conduct a more specialized hidden population study. Through this method it is possible to generate population estimates for a wide variety of groups that are perhaps unwilling to self-identify as such (for example, users of illegal drugs or other stigmatized populations) via traditional survey tools such as telephone or mail surveys, by asking a representative sample to estimate the number of people they know who are members of such a "hidden" subpopulation. The original estimator is formulated to minimize the weight a single scaling variable can exert upon the estimates. We argue that this introduces hidden and difficult-to-predict biases, and instead propose a series of methodological advances on the traditional scale-up estimation procedure, including a new estimator. Additionally, we formalize the incorporation of sample weights into the network scale-up estimation process, and propose a recursive process of back estimation "trimming" to identify and remove poorly performing predictors from the estimation process. To demonstrate these suggestions we use data from a network scale-up mail survey conducted in Nebraska during 2014. We find that using the new estimator and recursive trimming process provides more accurate estimates, especially when used in conjunction with sampling weights.
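
    The skeleton the paper builds on can be written compactly in Python; the "mean of ratios" aggregation, the guard against zero degrees, and the toy numbers below are our choices, not the authors' full estimator with recursive trimming.

        import numpy as np

        def nsum_estimate(y_hidden, known_counts, known_sizes, N,
                          weights=None):
            """Basic network scale-up estimate of a hidden population.

            Each respondent's degree is scaled up from contacts reported
            in groups of known size; the hidden total is N times the
            weighted mean of the respondents' hidden-contact proportions.
            """
            y = np.asarray(y_hidden, dtype=float)
            counts = np.asarray(known_counts, dtype=float)
            degree = counts.sum(axis=1) * (N / np.sum(known_sizes))
            w = np.ones_like(y) if weights is None else np.asarray(weights)
            ok = degree > 0                    # drop zero-degree reports
            return N * np.average(y[ok] / degree[ok], weights=w[ok])

        # toy usage: 4 respondents, 2 known groups, N = 1,000,000
        y = [2, 0, 1, 3]
        known = [[5, 8], [3, 2], [6, 4], [9, 7]]
        print(nsum_estimate(y, known, [20_000, 10_000], N=1_000_000))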

  4. Improved Forest Biomass and Carbon Estimations Using Texture Measures from WorldView-2 Satellite Data

    Directory of Open Access Journals (Sweden)

    Sandra Eckert

    2012-03-01

    Accurate estimation of aboveground biomass and carbon stock has gained importance in the context of the United Nations Framework Convention on Climate Change (UNFCCC) and the Kyoto Protocol. In order to develop improved forest stratum-specific aboveground biomass and carbon estimation models for humid rainforest in northeast Madagascar, this study analyzed texture measures derived from WorldView-2 satellite data. A forest inventory was conducted to develop stratum-specific allometric equations for dry biomass. On this basis, carbon was calculated by applying a conversion factor. After satellite data preprocessing, vegetation indices, principal components, and texture measures were calculated. The strength of their relationships with the stratum-specific plot data was analyzed using Pearson's correlation. Biomass and carbon estimation models were developed by performing stepwise multiple linear regression. Pearson's correlation coefficients revealed that (a) texture measures correlated more strongly with biomass and carbon than spectral parameters, and (b) correlations were stronger for degraded forest than for non-degraded forest. For degraded forest, the texture measures of Correlation, Angular Second Moment, and Contrast, derived from the red band, contributed to the best estimation model, which explained 84% of the variability in the field data (relative RMSE = 6.8%). For non-degraded forest, the vegetation index EVI and the texture measures of Variance, Mean, and Correlation, derived from the newly introduced coastal blue band, both NIR bands, and the red band, contributed to the best model, which explained 81% of the variability in the field data (relative RMSE = 11.8%). These results indicate that estimation of tropical rainforest biomass/carbon based on very high resolution satellite data can be improved by (a) developing and applying forest stratum-specific models, and (b) including textural information in addition to spectral information.
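
    Texture measures of this kind are grey-level co-occurrence matrix (GLCM) statistics; a Python sketch using scikit-image follows, with the quantization depth, offset, and random patch as placeholder assumptions rather than the WorldView-2 processing chain of the study.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def glcm_features(band, levels=32, distance=1, angle=0.0):
            """Contrast, Correlation and Angular Second Moment from a
            single-band window, after quantizing it to a few grey levels."""
            bins = np.linspace(band.min(), band.max(), levels)
            q = np.clip(np.digitize(band, bins) - 1, 0, levels - 1)
            glcm = graycomatrix(q.astype(np.uint8), [distance], [angle],
                                levels=levels, symmetric=True, normed=True)
            return {m: graycoprops(glcm, m)[0, 0]
                    for m in ("contrast", "correlation", "ASM")}

        # toy usage on a random patch standing in for a red-band window
        patch = np.random.default_rng(3).normal(size=(64, 64))
        print(glcm_features(patch))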

  5. A Hybrid of Optical Remote Sensing and Hydrological Modeling Improves Water Balance Estimation

    Science.gov (United States)

    Gleason, Colin J.; Wada, Yoshihide; Wang, Jida

    2018-01-01

    Declining gauging infrastructure and fractious water politics have decreased the available information about river flows globally. Remote sensing and water balance modeling are frequently cited as potential solutions, but these techniques largely rely on these same in-decline gauge data to make accurate discharge estimates. A different approach is therefore needed, and here we combine remotely sensed discharge estimates made via at-many-stations hydraulic geometry (AMHG) with the PCR-GLOBWB hydrological model to estimate discharge over the Lower Nile. Specifically, we first estimate initial discharges from 87 Landsat images and AMHG (1984-2015), and then use these flow estimates to tune the model, all without using gauge data. The resulting tuned modeled hydrograph shows a large improvement in flow magnitude: validation of the tuned monthly hydrograph against a historical gauge (1978-1984) yields an RMSE of 439 m3/s (40.8%). By contrast, the original simulation had an order-of-magnitude flow error. This improvement is substantial but not perfect: tuned flows have a 1-2 month wet season lag and a negative base flow bias. Accounting for this 2 month lag yields a hydrograph RMSE of 270 m3/s (25.7%). Thus, our coupling of physical models and remote sensing is a promising first step and proof of concept toward future modeling of ungauged flows, especially as developments in cloud computing for remote sensing make our method easily applicable to any basin. Finally, we purposefully do not offer prescriptive solutions for Nile management, and rather hope that the methods demonstrated herein can prove useful to river stakeholders in managing their own water.

  6. Improving satellite-based post-fire evapotranspiration estimates in semi-arid regions

    Science.gov (United States)

    Poon, P.; Kinoshita, A. M.

    2017-12-01

    Climate change and anthropogenic factors contribute to the increased frequency, duration, and size of wildfires, which can alter ecosystem and hydrological processes. The loss of vegetation canopy and ground cover reduces interception and alters evapotranspiration (ET) dynamics in riparian areas, which can impact rainfall-runoff partitioning. Previous research evaluated the spatial and temporal trends of ET based on burn severity and observed an annual decrease of 120 mm on average for three years after fire. Building upon these results, this research focuses on the Coyote Fire in San Diego, California (USA), which burned a total of 76 km2 in 2003, to calibrate and improve satellite-based ET estimates in semi-arid regions affected by wildfire. The current work utilizes satellite-based products and techniques such as the Google Earth Engine application programming interface (API). Various ET models (e.g., the Operational Simplified Surface Energy Balance model, SSEBop) are compared to the latent heat flux from two AmeriFlux eddy covariance towers, Sky Oaks Young (US-SO3) and Old Stand (US-SO2), from 2000-2015. The Old Stand tower has a low burn severity and the Young Stand tower has a moderate to high burn severity. Both towers are used to validate spatial ET estimates. Furthermore, variables and indices such as the Enhanced Vegetation Index (EVI), Normalized Difference Moisture Index (NDMI), and Normalized Burn Ratio (NBR) are utilized to evaluate satellite-based ET through a multivariate statistical analysis at both sites. This point-scale study will help improve ET estimates in spatially diverse regions. Results from this research will contribute to the development of a post-wildfire ET model for semi-arid regions. Accurate estimates of post-fire ET will provide a better representation of vegetation and hydrologic recovery, which can be used to improve hydrologic models and predictions.

  7. Collaborative Project: Building improved optimized parameter estimation algorithms to improve methane and nitrogen fluxes in a climate model

    Energy Technology Data Exchange (ETDEWEB)

    Mahowald, Natalie [Cornell Univ., Ithaca, NY (United States)

    2016-11-29

    Soils in natural and managed ecosystems and wetlands are well known sources of methane, nitrous oxides, and reactive nitrogen gases, but the magnitudes of gas flux to the atmosphere are still poorly constrained. Thus, the reasons for the large increases in atmospheric concentrations of methane and nitrous oxide since the preindustrial time period are not well understood. The low atmospheric concentrations of methane and nitrous oxide, despite being more potent greenhouse gases than carbon dioxide, complicate empirical studies to provide explanations. In addition to climate concerns, the emissions of reactive nitrogen gases from soils are important to the changing nitrogen balance in the earth system, subject to human management, and may change substantially in the future. Thus improved modeling of the emission fluxes of these species from the land surface is important. Currently, there are emission modules for methane and some nitrogen species in the Community Earth System Model’s Community Land Model (CLM-ME/N); however, there are large uncertainties and problems in the simulations, resulting in coarse estimates. In this proposal, we seek to improve these emission modules by combining state-of-the-art process modules for emissions, available data, and new optimization methods. In earth science problems, we often have substantial data and knowledge of processes in disparate systems, and thus we need to combine data and a general process level understanding into a model for projections of future climate that are as accurate as possible. The best methodologies for optimization of parameters in earth system models are still being developed. In this proposal we will develop and apply surrogate algorithms that a) were especially developed for computationally expensive simulations like CLM-ME/N models; b) were (in the earlier surrogate optimization Stochastic RBF) demonstrated to perform very well on computationally expensive complex partial differential equations in

  8. High-Resolution Spatial Distribution and Estimation of Access to Improved Sanitation in Kenya.

    Science.gov (United States)

    Jia, Peng; Anderson, John D; Leitner, Michael; Rheingans, Richard

    2016-01-01

    Access to sanitation facilities is imperative in reducing the risk of multiple adverse health outcomes. A distinct disparity in sanitation exists among different wealth levels in many low-income countries, which may hinder the progress across each of the Millennium Development Goals. The surveyed households in 397 clusters from 2008-2009 Kenya Demographic and Health Surveys were divided into five wealth quintiles based on their national asset scores. A series of spatial analysis methods including excess risk, local spatial autocorrelation, and spatial interpolation were applied to observe disparities in coverage of improved sanitation among different wealth categories. The total number of the population with improved sanitation was estimated by interpolating, time-adjusting, and multiplying the surveyed coverage rates by high-resolution population grids. A comparison was then made with the annual estimates from United Nations Population Division and World Health Organization /United Nations Children's Fund Joint Monitoring Program for Water Supply and Sanitation. The Empirical Bayesian Kriging interpolation produced minimal root mean squared error for all clusters and five quintiles while predicting the raw and spatial coverage rates of improved sanitation. The coverage in southern regions was generally higher than in the north and east, and the coverage in the south decreased from Nairobi in all directions, while Nyanza and North Eastern Province had relatively poor coverage. The general clustering trend of high and low sanitation improvement among surveyed clusters was confirmed after spatial smoothing. There exists an apparent disparity in sanitation among different wealth categories across Kenya and spatially smoothed coverage rates resulted in a closer estimation of the available statistics than raw coverage rates. Future intervention activities need to be tailored for both different wealth categories and nationally where there are areas of greater needs when

  9. High-Resolution Spatial Distribution and Estimation of Access to Improved Sanitation in Kenya.

    Directory of Open Access Journals (Sweden)

    Peng Jia

    Access to sanitation facilities is imperative in reducing the risk of multiple adverse health outcomes. A distinct disparity in sanitation exists among different wealth levels in many low-income countries, which may hinder the progress across each of the Millennium Development Goals. The surveyed households in 397 clusters from 2008-2009 Kenya Demographic and Health Surveys were divided into five wealth quintiles based on their national asset scores. A series of spatial analysis methods including excess risk, local spatial autocorrelation, and spatial interpolation were applied to observe disparities in coverage of improved sanitation among different wealth categories. The total number of the population with improved sanitation was estimated by interpolating, time-adjusting, and multiplying the surveyed coverage rates by high-resolution population grids. A comparison was then made with the annual estimates from United Nations Population Division and World Health Organization/United Nations Children's Fund Joint Monitoring Program for Water Supply and Sanitation. The Empirical Bayesian Kriging interpolation produced minimal root mean squared error for all clusters and five quintiles while predicting the raw and spatial coverage rates of improved sanitation. The coverage in southern regions was generally higher than in the north and east, and the coverage in the south decreased from Nairobi in all directions, while Nyanza and North Eastern Province had relatively poor coverage. The general clustering trend of high and low sanitation improvement among surveyed clusters was confirmed after spatial smoothing. There exists an apparent disparity in sanitation among different wealth categories across Kenya, and spatially smoothed coverage rates resulted in a closer estimation of the available statistics than raw coverage rates. Future intervention activities need to be tailored for both different wealth categories and nationally where there are areas of

  10. When celibacy matters: incorporating non-breeders improves demographic parameter estimates.

    Science.gov (United States)

    Pardo, Deborah; Weimerskirch, Henri; Barbraud, Christophe

    2013-01-01

    In long-lived species only a fraction of a population breeds at a given time. Non-breeders can represent more than half of adult individuals, calling into doubt the relevance of estimating demographic parameters from breeders alone. Here we demonstrate the importance of considering observable non-breeders to estimate reliable demographic traits: survival, return, breeding, hatching and fledging probabilities. We study the long-lived quasi-biennial breeding wandering albatross (Diomedea exulans). In this species, the breeding cycle lasts almost a year, and birds that succeed in a given year tend to skip the next breeding occasion while birds that fail tend to breed again the following year. Most non-breeders remain unobservable at sea, but a substantial number of observable non-breeders (ONB) was still identified on breeding sites. Using multi-state capture-mark-recapture analyses, we used several measures to compare the performance of demographic estimates between models incorporating or ignoring ONB: bias (difference in mean), precision (difference in standard deviation) and accuracy (both differences in mean and standard deviation). Our results highlight that ignoring ONB leads to bias and loss of accuracy in breeding probability and survival estimates. These effects are even stronger when studied in an age-dependent framework. Biases in breeding probabilities and survival increased with age, leading to overestimation of survival at old age and thus actuarial senescence, and underestimation of reproductive senescence. We believe our study sheds new light on the difficulties of estimating demographic parameters in species/taxa where a significant part of the population does not breed every year. Taking into account ONB appeared important to improve demographic parameter estimates, models of population dynamics and evolutionary conclusions regarding senescence within and across taxa.
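
    The abstract defines its three comparison measures as differences in mean and in standard deviation between the two model types. A minimal sketch of those measures, with made-up estimate vectors (the root-sum-of-squares combination used for "accuracy" is one simple convention and is an assumption here, not necessarily the paper's exact formula):

        import numpy as np

        # Hypothetical sampled estimates of survival probability from two CMR models:
        # one incorporating observable non-breeders (ONB), one ignoring them.
        est_with_onb = np.random.default_rng(1).normal(0.92, 0.01, 1000)
        est_without_onb = np.random.default_rng(2).normal(0.95, 0.02, 1000)

        bias = est_without_onb.mean() - est_with_onb.mean()      # difference in mean
        precision = est_without_onb.std() - est_with_onb.std()   # difference in standard deviation
        accuracy = np.hypot(bias, precision)                     # combines both differences

        print(f"bias={bias:.4f}, precision={precision:.4f}, accuracy={accuracy:.4f}")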

  11. Improving the Carbon Dioxide Emission Estimates from the Combustion of Fossil Fuels in California

    Energy Technology Data Exchange (ETDEWEB)

    de la Rue du Can, Stephane; Wenzel, Tom; Price, Lynn

    2008-08-13

    Central to any study of climate change is the development of an emission inventory that identifies and quantifies the State's primary anthropogenic sources and sinks of greenhouse gas (GHG) emissions. CO2 emissions from fossil fuel combustion accounted for 80 percent of California GHG emissions (CARB, 2007a). Even though these CO2 emissions are well characterized in the existing state inventory, significant sources of uncertainty remain regarding their accuracy. This report evaluates the CO2 emissions accounting based on the California Energy Balance database (CALEB) developed by Lawrence Berkeley National Laboratory (LBNL), in terms of what improvements are needed and where uncertainties lie. The estimated uncertainty for total CO2 emissions ranges between -21 and +37 million metric tons (Mt), or -6 percent and +11 percent of total CO2 emissions. The report also identifies where improvements are needed for the upcoming updates of CALEB. However, it is worth noting that the California Air Resources Board (CARB) GHG inventory did not use CALEB data for all combustion estimates. Therefore the range in uncertainty estimated in this report does not apply to CARB's GHG inventory. As much as possible, additional data sources used by CARB in the development of its GHG inventory are summarized in this report for consideration in future updates to CALEB.

  12. Improving Global Gross Primary Productivity Estimates by Computing Optimum Light Use Efficiencies Using Flux Tower Data

    Science.gov (United States)

    Madani, Nima; Kimball, John S.; Running, Steven W.

    2017-11-01

    In the light use efficiency (LUE) approach of estimating the gross primary productivity (GPP), plant productivity is linearly related to absorbed photosynthetically active radiation assuming that plants absorb and convert solar energy into biomass within a maximum LUE (LUEmax) rate, which is assumed to vary conservatively within a given biome type. However, it has been shown that photosynthetic efficiency can vary within biomes. In this study, we used 149 global CO2 flux towers to derive the optimum LUE (LUEopt) under prevailing climate conditions for each tower location, stratified according to model training and test sites. Unlike LUEmax, LUEopt varies according to heterogeneous landscape characteristics and species traits. The LUEopt data showed large spatial variability within and between biome types, so that a simple biome classification explained only 29% of LUEopt variability over 95 global tower training sites. The use of explanatory variables in a mixed effect regression model explained 62.2% of the spatial variability in tower LUEopt data. The resulting regression model was used for global extrapolation of the LUEopt data and GPP estimation. The GPP estimated using the new LUEopt map showed significant improvement relative to global tower data, including a 15% R2 increase and 34% root-mean-square error reduction relative to baseline GPP calculations derived from biome-specific LUEmax constants. The new global LUEopt map is expected to improve the performance of LUE-based GPP algorithms for better assessment and monitoring of global terrestrial productivity and carbon dynamics.
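
    The LUE approach described above relates GPP linearly to absorbed photosynthetically active radiation (APAR). A minimal sketch of that relationship follows; the environmental down-regulation scalars and all numbers are illustrative assumptions in the spirit of LUE models such as MOD17, not the paper's fitted model.

        # Minimal light-use-efficiency GPP sketch: GPP = LUE_opt * fPAR * PAR * scalars.
        def gpp_lue(par, fpar, lue_opt, temp_scalar=1.0, vpd_scalar=1.0):
            """GPP (g C m-2 d-1) from PAR (MJ m-2 d-1), fPAR (0-1), LUE (g C MJ-1)."""
            apar = par * fpar                       # absorbed PAR
            return lue_opt * apar * temp_scalar * vpd_scalar

        # Example: a site with an optimum LUE of 1.8 g C per MJ of APAR
        print(gpp_lue(par=10.0, fpar=0.6, lue_opt=1.8))  # -> 10.8 g C m-2 d-1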

  13. A test for Improvement of high resolution Quantitative Precipitation Estimation for localized heavy precipitation events

    Science.gov (United States)

    Lee, Jung-Hoon; Roh, Joon-Woo; Park, Jeong-Gyun

    2017-04-01

    Accurate estimation of precipitation is one of the most difficult and significant tasks in weather diagnostics and forecasting. In the Korean Peninsula, heavy precipitation is caused by various physical mechanisms, such as shortwave troughs, quasi-stationary moisture convergence zones among varying air masses, and direct/indirect effects of tropical cyclones. In addition, various geographical and topographical elements make the temporal and spatial distribution of precipitation very complicated. In particular, localized heavy rainfall events in South Korea generally arise from mesoscale convective systems embedded in these synoptic-scale disturbances. Even in weather radar data with high temporal and spatial resolution, accurate estimation of rain rate from radar reflectivity is difficult. The Z-R relationship (Marshall and Palmer 1948) has been the representative approach. In addition, several methods such as support vector machines (SVM), neural networks, fuzzy logic, and kriging have been utilized in order to improve the accuracy of rain rate estimates. These methods yield different quantitative precipitation estimates (QPE), and their accuracy varies across heavy precipitation cases. In this study, in order to improve the accuracy of QPE for localized heavy precipitation, an ensemble method combining the Z-R relationship and various calibration techniques was tested. This QPE ensemble method was developed on the concept of utilizing the respective advantages of the precipitation calibration methods. The ensemble members were produced from combinations of different Z-R coefficients and calibration methods.
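
    The Z-R relationship has the standard power-law form Z = aR^b, with the classic Marshall-Palmer coefficients a = 200 and b = 1.6 (Z in mm^6 m^-3, R in mm h^-1). A minimal sketch inverting it for rain rate from reflectivity in dBZ, with a naive coefficient-varying ensemble of the kind the abstract hints at (the member coefficients are illustrative):

        import numpy as np

        def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
            """Invert Z = a * R**b (Marshall-Palmer defaults) for rain rate R in mm/h."""
            z_linear = 10.0 ** (dbz / 10.0)        # dBZ -> Z in mm^6 m^-3
            return (z_linear / a) ** (1.0 / b)

        # An ensemble of Z-R members could simply vary (a, b); pairs below are illustrative
        members = [(200, 1.6), (300, 1.4), (250, 1.2)]
        rates = [rain_rate_from_dbz(45.0, a, b) for a, b in members]
        print(np.mean(rates))   # naive ensemble-mean rain rate for a 45 dBZ echo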

  14. An improvement of wind velocity estimation from radar Doppler spectra in the upper mesosphere

    Directory of Open Access Journals (Sweden)

    S. Takeda

    2001-08-01

    Full Text Available We have developed a new parameter estimation method for Doppler wind spectra in the mesosphere observed with an MST radar such as the MU radar in the DBS (Doppler Beam Swinging) mode. Off-line incoherent integration of the Doppler spectra is carried out with a new algorithm that excludes contamination by strong meteor echoes. At the same time, initial values for a least-squares fitting of the Gaussian function are derived using a larger number of integrations of the spectra, over a longer time and over multiple heights. As a result, a significant improvement has been achieved in the probability of successful fitting and parameter estimation above 80 km. The top height for wind estimation has been improved to around 95 km. A comparison between the MU radar and the High Resolution Doppler Imager (HRDI) on the UARS satellite is shown, and the capability of the new method for validation of a future satellite mission is suggested. Key words: Meteorology and atmospheric dynamics (middle atmosphere dynamics) – Radio science (remote sensing; signal processing)
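
    The core fitting step, a least-squares Gaussian fit to a Doppler spectrum, can be sketched with scipy on a synthetic spectrum. The paper's scheme of deriving initial values from longer and multi-height integrations is only mimicked here by a crude moment-based guess; all numbers are invented.

        import numpy as np
        from scipy.optimize import curve_fit

        def gaussian(v, amp, v0, width, noise_floor):
            return amp * np.exp(-((v - v0) ** 2) / (2 * width ** 2)) + noise_floor

        # Synthetic incoherently integrated Doppler spectrum (velocity axis in m/s)
        v = np.linspace(-50, 50, 256)
        rng = np.random.default_rng(0)
        spec = gaussian(v, amp=5.0, v0=12.0, width=4.0, noise_floor=1.0) \
               + rng.normal(0, 0.2, v.size)

        # Moment-based initial values, standing in for the paper's estimates
        # obtained from longer/multi-height integrations
        p0 = [spec.max() - spec.min(), v[np.argmax(spec)], 5.0, np.median(spec)]
        params, _ = curve_fit(gaussian, v, spec, p0=p0)
        print("fitted Doppler velocity: %.2f m/s" % params[1])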

  15. Improving the precision of lake ecosystem metabolism estimates by identifying predictors of model uncertainty

    Science.gov (United States)

    Rose, Kevin C.; Winslow, Luke A.; Read, Jordan S.; Read, Emily K.; Solomon, Christopher T.; Adrian, Rita; Hanson, Paul C.

    2014-01-01

    Diel changes in dissolved oxygen are often used to estimate gross primary production (GPP) and ecosystem respiration (ER) in aquatic ecosystems. Despite the widespread use of this approach to understand ecosystem metabolism, we are only beginning to understand the degree and underlying causes of uncertainty for metabolism model parameter estimates. Here, we present a novel approach to improve the precision and accuracy of ecosystem metabolism estimates by identifying physical metrics that indicate when metabolism estimates are highly uncertain. Using datasets from seventeen instrumented GLEON (Global Lake Ecological Observatory Network) lakes, we discovered that many physical characteristics correlated with uncertainty, including PAR (photosynthetically active radiation, 400-700 nm), daily variance in Schmidt stability, and wind speed. Low PAR was a consistent predictor of high variance in GPP model parameters, but also corresponded with low ER model parameter variance. We identified a threshold (30% of clear sky PAR) below which GPP parameter variance increased rapidly and was significantly greater in nearly all lakes compared with variance on days with PAR levels above this threshold. The relationship between daily variance in Schmidt stability and GPP model parameter variance depended on trophic status, whereas daily variance in Schmidt stability was consistently positively related to ER model parameter variance. Wind speeds in the range of ~0.8-3 m s–1 were consistent predictors of high variance for both GPP and ER model parameters, with greater uncertainty in eutrophic lakes. Our findings can be used to reduce ecosystem metabolism model parameter uncertainty and identify potential sources of that uncertainty.

  16. Can we improve C IV-based single epoch black hole mass estimations?

    Science.gov (United States)

    Mejía-Restrepo, J. E.; Trakhtenbrot, B.; Lira, P.; Netzer, H.

    2018-05-01

    In large optical surveys at high redshifts (z > 2), the C IV broad emission line is the most practical alternative to estimate the mass (MBH) of active super-massive black holes (SMBHs). However, mass determinations obtained with this line are known to be highly uncertain. In this work we use the Sloan Digital Sky Survey Data Release 7 and 12 quasar catalogues to statistically test three alternative methods put forward in the literature to improve C IV-based MBH estimations. These methods are constructed from correlations between the ratio of the C IV line-width to the low ionization line-widths (Hα, Hβ and Mg II) and several other properties of rest-frame UV emission lines. Our analysis suggests that these correction methods are of limited applicability, mostly because all of them depend on correlations that are driven by the linewidth of the C IV profile itself and not by an interconnection between the linewidth of the C IV line with the linewidth of the low ionization lines. Our results show that optical C IV-based mass estimates at high redshift cannot be a proper replacement for estimates based on IR spectroscopy of low ionization lines like Hα, Hβ and Mg II.

  17. The Pose Estimation of Mobile Robot Based on Improved Point Cloud Registration

    Directory of Open Access Journals (Sweden)

    Yanzi Miao

    2016-03-01

    Full Text Available Due to GPS restrictions indoors, an inertial sensor is usually used to estimate the location of indoor mobile robots. However, it is difficult to achieve high-accuracy localization and control by inertial sensors alone. In this paper, a new method is proposed to estimate an indoor mobile robot pose with six degrees of freedom based on an improved 3D Normal Distributions Transform algorithm (3D-NDT). First, point cloud data are captured by a Kinect sensor and segmented according to the distance to the robot. After the segmentation, the input point cloud data are processed by the Approximate Voxel Grid Filter algorithm with different voxel grid sizes. Second, initial registration and precise registration are performed according to the distance to the sensor: the most distant point cloud data use the 3D-NDT algorithm with large voxel grids for initial registration, based on the transformation matrix from the odometry method, while the closest point cloud data use the 3D-NDT algorithm with small voxel grids for precise registration. After these registrations, a final transformation matrix is obtained. Based on this transformation matrix, the pose estimation problem of the indoor mobile robot is solved. Test results show that this method can obtain accurate robot pose estimation and has better robustness.
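
    A rough sketch of the coarse-to-fine idea with Open3D follows. Open3D ships ICP rather than 3D-NDT, so ICP is used here purely as a stand-in for the paper's NDT registration; the voxel sizes, distance thresholds, and the odometry-based initial transform (a 4x4 matrix) are placeholders.

        import open3d as o3d

        def coarse_to_fine_register(source, target, init_from_odometry):
            # Coarse stage: heavy downsampling (large voxels) and a generous
            # correspondence distance, initialized from odometry -- mirroring the
            # paper's large-voxel initial registration.
            src_c = source.voxel_down_sample(voxel_size=0.20)
            tgt_c = target.voxel_down_sample(voxel_size=0.20)
            coarse = o3d.pipelines.registration.registration_icp(
                src_c, tgt_c, max_correspondence_distance=0.5,
                init=init_from_odometry,
                estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

            # Fine stage: small voxels and a tight threshold, seeded by the coarse result
            src_f = source.voxel_down_sample(voxel_size=0.05)
            tgt_f = target.voxel_down_sample(voxel_size=0.05)
            fine = o3d.pipelines.registration.registration_icp(
                src_f, tgt_f, max_correspondence_distance=0.1,
                init=coarse.transformation,
                estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
            return fine.transformation   # final pose estimate of the robot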

  18. Facilitating Transfers

    DEFF Research Database (Denmark)

    Kjær, Poul F.

    to specific logics of temporalisation and spatial expansion of a diverse set of social processes in relation to, for example, the economy, politics, science and the mass media. On this background, the paper will more concretely develop a conceptual framework for classifying different contextual orders...... that the essential functional and normative purpose of regulatory governance is to facilitate, stabilise and justify the transfer of condensed social components (such as economic capital and products, political decisions, legal judgements, religious beliefs and scientific knowledge) from one social context

  19. Challenges and Facilitators to Promoting a Healthy Food Environment and Communicating Effectively with Parents to Improve Food Behaviors of School Children.

    Science.gov (United States)

    Luesse, Hiershenee B; Paul, Rachel; Gray, Heewon L; Koch, Pamela; Contento, Isobel; Marsick, Victoria

    2018-02-14

    Background: Childhood obesity is a major public health concern and families play an important role. Improving strategies to reach parents and directing tailored nutrition education to them is needed. Purpose: To investigate the challenges and facilitators to promoting a healthy environment at home and to identify communication preferences to inform intervention strategies for effectively reaching low-income urban minority families. Procedure: Semi-structured focus group interviews were conducted with four groups involving 16 low-income urban parents (94% female; 88% Hispanic/Latino, 12% African American) of elementary school children. Interviews were transcribed and analyzed applying Social Cognitive Theory and using in-vivo coding. Main Findings: The most common barriers to parents providing healthy foods to their children were accommodating child preferences and familial opposition. Parents showed intentionality to engage in healthy behaviors, and often shared procedural knowledge for reaching health goals. The analyses of desired communication channels yielded major preferences: tailored information, information provided through multiple mediums, appropriate duration/frequency of messages, and presentation from a voice of authority. Conclusion and Implication: While parents expressed desires to be healthy, the home food environment presented substantial challenges. Multi-media supports such as workshops, flyers, and text messaging may be useful to facilitate the sharing of information to minimize the tensions between intentionality and reaching desired goals to be healthy. Some parents thought that information received through text messaging could be easily shared and would act as a voice of authority to support child behavior change.

  20. Improving control and estimation for distributed parameter systems utilizing mobile actuator-sensor network.

    Science.gov (United States)

    Mu, Wenying; Cui, Baotong; Li, Wen; Jiang, Zhengxian

    2014-07-01

    This paper proposes a scheme for non-collocated moving actuating and sensing devices which is utilized to improve performance in distributed parameter systems. Using the Lyapunov stability theorem, the velocity of each moving actuator/sensor agent is obtained. To enhance state estimation of a spatially distributed process, two kinds of filters with consensus terms, which penalize the disagreement of the estimates, are considered. Both filters result in well-posed collective dynamics of the state errors and converge to the plant state. Numerical simulations demonstrate the effectiveness of such a moving actuator-sensor network in enhancing system performance and show that the consensus filters converge faster to the plant state when consensus terms are included.
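
    The abstract does not give the filter equations, and the paper's setting is a distributed parameter (PDE) system; as a toy illustration of a consensus term that penalizes disagreement among estimates, consider the following scalar sketch, in which all dynamics and gains are invented:

        import numpy as np

        # Three sensing agents estimate a common scalar plant state x.
        # Each update blends a local innovation with a consensus term that
        # penalizes disagreement with the other agents' estimates.
        rng = np.random.default_rng(0)
        x_true = 2.0
        x_hat = np.array([0.0, 1.0, 3.5])   # initial estimates
        K, gamma = 0.3, 0.2                 # innovation and consensus gains (illustrative)

        for _ in range(50):
            y = x_true + rng.normal(0, 0.1, 3)             # noisy local measurements
            innovation = K * (y - x_hat)
            consensus = gamma * (x_hat.mean() - x_hat)     # pulls estimates together
            x_hat = x_hat + innovation + consensus

        print(x_hat)   # all three estimates converge near the plant state 2.0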

  1. Color quality improvement of reconstructed images in color digital holography using speckle method and spectral estimation

    Science.gov (United States)

    Funamizu, Hideki; Onodera, Yusei; Aizu, Yoshihisa

    2018-05-01

    In this study, we report color quality improvement of reconstructed images in color digital holography using the speckle method and the spectral estimation. In this technique, an object is illuminated by a speckle field and then an object wave is produced, while a plane wave is used as a reference wave. For three wavelengths, the interference patterns of two coherent waves are recorded as digital holograms on an image sensor. Speckle fields are changed by moving a ground glass plate in an in-plane direction, and a number of holograms are acquired to average the reconstructed images. After the averaging process of images reconstructed from multiple holograms, we use the Wiener estimation method for obtaining spectral transmittance curves in reconstructed images. The color reproducibility in this method is demonstrated and evaluated using a Macbeth color chart film and staining cells of onion.

  2. Improved Atmospheric Correction Over the Indian Subcontinent Using Fast Radiative Transfer and Optimal Estimation

    Science.gov (United States)

    Natraj, V.; Thompson, D. R.; Mathur, A. K.; Babu, K. N.; Kindel, B. C.; Massie, S. T.; Green, R. O.; Bhattacharya, B. K.

    2017-12-01

    Remote Visible / ShortWave InfraRed (VSWIR) spectroscopy, typified by the Next-Generation Airborne Visible/Infrared Imaging Spectrometer (AVIRIS-NG), is a powerful tool to map the composition, health, and biodiversity of Earth's terrestrial and aquatic ecosystems. These studies must first estimate surface reflectance, removing the atmospheric effects of absorption and scattering by water vapor and aerosols. Since atmospheric state varies spatiotemporally, and is insufficiently constrained by climatological models, it is important to estimate it directly from the VSWIR data. However, water vapor and aerosol estimation is a significant ongoing challenge for existing atmospheric correction models. Conventional VSWIR atmospheric correction methods evolved from multi-band approaches and do not fully utilize the rich spectroscopic data available. We use spectrally resolved (line-by-line) radiative transfer calculations, coupled with optimal estimation theory, to demonstrate improved accuracy of surface retrievals. These spectroscopic techniques are already pervasive in atmospheric remote sounding disciplines but have not yet been applied to imaging spectroscopy. Our analysis employs a variety of scenes from the recent AVIRIS-NG India campaign, which spans various climes, elevation changes, a wide range of biomes and diverse aerosol scenarios. A key aspect of our approach is joint estimation of surface and aerosol parameters, which allows assessment of aerosol distortion effects using spectral shapes across the entire measured interval from 380-2500 nm. We expect that this method would outperform band ratio approaches, and enable evaluation of subtle aerosol parameters where in situ reference data is not available, or for extreme aerosol loadings, as is observed in the India scenarios. The results are validated using existing in-situ reference spectra, reflectance measurements from assigned partners in India, and objective spectral quality metrics for scenes without any
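
    Optimal estimation in this context usually refers to the Bayesian retrieval formalism of Rodgers; assuming that reading, a minimal linear sketch of the maximum a posteriori update is shown below. This is a generic textbook step, not the AVIRIS-NG implementation, and all matrices are made-up placeholders.

        import numpy as np

        def optimal_estimation_step(y, K, x_a, S_a, S_e):
            """One linear optimal-estimation (MAP) update, Rodgers-style:
            x_hat = x_a + (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 (y - K x_a)."""
            Se_inv = np.linalg.inv(S_e)
            Sa_inv = np.linalg.inv(S_a)
            S_hat = np.linalg.inv(K.T @ Se_inv @ K + Sa_inv)   # posterior covariance
            return x_a + S_hat @ K.T @ Se_inv @ (y - K @ x_a), S_hat

        # Toy example: jointly retrieve 2 state elements (say, water vapor and
        # aerosol optical depth) from 3 synthetic radiance channels.
        K = np.array([[1.0, 0.2], [0.5, 0.8], [0.1, 1.2]])   # linearized forward model
        x_true = np.array([1.5, 0.3])
        y = K @ x_true + np.array([0.01, -0.02, 0.005])      # noisy measurement
        x_a = np.array([1.0, 0.5])                           # prior mean
        x_hat, S_hat = optimal_estimation_step(y, K, x_a, np.eye(2), 0.01 * np.eye(3))
        print(x_hat)   # close to x_true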

  3. Independent tasks scheduling in cloud computing via improved estimation of distribution algorithm

    Science.gov (United States)

    Sun, Haisheng; Xu, Rui; Chen, Huaping

    2018-04-01

    To minimize makespan for scheduling independent tasks in cloud computing, an improved estimation of distribution algorithm (IEDA) is proposed to tackle the investigated problem in this paper. Considering that the problem involves multi-dimensional discrete variables, an improved population-based incremental learning (PBIL) algorithm is applied, in which the parameter for each component is independent of the other components. In order to improve the performance of PBIL, on the one hand, an integer encoding scheme is used and the probability calculation of PBIL is improved by using the task average processing time; on the other hand, an effective adaptive learning rate function related to the number of iterations is constructed to trade off the exploration and exploitation of IEDA. In addition, enhanced Max-Min and Min-Min algorithms are introduced to form two initial individuals. In the proposed IEDA, an improved genetic algorithm (IGA) is applied to generate part of the initial population by evolving the two initial individuals, and the rest of the initial individuals are generated at random. Finally, the sampling process is divided into two parts: sampling by the probabilistic model and by IGA, respectively. The experimental results show that the proposed IEDA not only obtains better solutions but also converges faster.
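
    A minimal sketch of the two PBIL ingredients the abstract highlights, the per-task probability update and an iteration-dependent learning rate, follows. The Max-Min/Min-Min seeding and the IGA step are omitted, and the learning-rate form, processing times, and all constants are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        n_tasks, n_vms = 8, 3
        proc_time = rng.uniform(1, 10, size=(n_tasks, n_vms))  # hypothetical task/VM times

        # Probability model: one independent categorical distribution per task
        P = np.full((n_tasks, n_vms), 1.0 / n_vms)

        def makespan(assign):
            loads = np.zeros(n_vms)
            for t, m in enumerate(assign):
                loads[m] += proc_time[t, m]
            return loads.max()

        for it in range(1, 201):
            lr = 0.05 + 0.20 * it / 200     # illustrative adaptive learning rate
            pop = [np.array([rng.choice(n_vms, p=P[t]) for t in range(n_tasks)])
                   for _ in range(30)]
            best = min(pop, key=makespan)
            # PBIL update: shift each task's distribution toward the best individual
            for t in range(n_tasks):
                P[t] = (1 - lr) * P[t] + lr * np.eye(n_vms)[best[t]]

        print(makespan(min(pop, key=makespan)), P.round(2))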

  4. Rolipram improves facilitation of contextual fear extinction in the 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine-induced mouse model of Parkinson's disease

    Directory of Open Access Journals (Sweden)

    Ken-ichi Kinoshita

    2017-05-01

    Full Text Available Cognitive impairment often occurs in Parkinson's disease (PD), but the mechanism of onset remains unknown. Recently, we reported that PD model mice produced by 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) show facilitation of hippocampal memory extinction, which may be the cause of cognitive impairment in PD. When we examined cAMP/CREB signaling in the hippocampus, decreased levels of cAMP and phosphorylated CREB were observed in the dentate gyrus (DG) of MPTP-treated mice. Administration of rolipram improved the memory deficits with concomitant recovery of cAMP and phosphorylated CREB levels, suggesting that reduced cAMP/CREB signaling in the DG leads to cognitive impairment in MPTP-treated mice.

  5. A covariance correction that accounts for correlation estimation to improve finite-sample inference with generalized estimating equations: A study on its applicability with structured correlation matrices.

    Science.gov (United States)

    Westgate, Philip M

    2016-01-01

    When generalized estimating equations (GEE) incorporate an unstructured working correlation matrix, the variances of regression parameter estimates can inflate due to the estimation of the correlation parameters. In previous work, an approximation for this inflation that results in a corrected version of the sandwich formula for the covariance matrix of regression parameter estimates was derived. Use of this correction for correlation structure selection also reduces the over-selection of the unstructured working correlation matrix. In this manuscript, we conduct a simulation study to demonstrate that an increase in variances of regression parameter estimates can occur when GEE incorporates structured working correlation matrices as well. Correspondingly, we show the ability of the corrected version of the sandwich formula to improve the validity of inference and correlation structure selection. We also study the relative influences of two popular corrections to a different source of bias in the empirical sandwich covariance estimator.
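
    For readers who want to experiment, statsmodels exposes GEE with structured working correlations and a bias-reduced ("corrected sandwich") covariance option. Whether statsmodels' particular correction matches the one evaluated in this paper is not guaranteed, so treat the sketch below, with simulated placeholder data, only as a starting point.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n_clusters, cluster_size = 50, 4
        groups = np.repeat(np.arange(n_clusters), cluster_size)
        x = rng.normal(size=n_clusters * cluster_size)
        cluster_effect = np.repeat(rng.normal(0, 0.5, n_clusters), cluster_size)
        y = 1.0 + 0.5 * x + cluster_effect + rng.normal(size=x.size)

        df = pd.DataFrame({"y": y, "x": x, "g": groups})
        model = sm.GEE.from_formula("y ~ x", groups="g", data=df,
                                    cov_struct=sm.cov_struct.Exchangeable())
        fit_robust = model.fit()                            # usual sandwich covariance
        fit_corrected = model.fit(cov_type="bias_reduced")  # small-sample corrected
        print(fit_robust.bse)
        print(fit_corrected.bse)   # typically somewhat larger standard errors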

  6. Improving precipitation estimates over the western United States using GOES-R precipitation data

    Science.gov (United States)

    Karbalaee, N.; Kirstetter, P. E.; Gourley, J. J.

    2017-12-01

    Satellite remote sensing data with fine spatial and temporal resolution are widely used for precipitation estimation in different applications such as hydrological modeling, storm prediction, and flash flood monitoring. The Geostationary Operational Environmental Satellites-R series (GOES-R) is the next generation of environmental satellites that provides hydrologic, atmospheric, and climatic information every 30 seconds over the western hemisphere. The high resolution and low latency of GOES-R observations are essential for the monitoring and prediction of floods, specifically in the Western United States, where the vantage point of space can complement the degraded weather radar coverage of the NEXRAD network. The GOES-R rainfall rate algorithm will yield deterministic quantitative precipitation estimates (QPE). Accounting for inherent uncertainties will further advance the GOES-R QPEs since, with quantifiable error bars, the rainfall estimates can be more readily fused with ground radar products. On the ground, the high-resolution NEXRAD-based precipitation estimation from the Multi-Radar/Multi-Sensor (MRMS) system, which is now operational in the National Weather Service (NWS), is challenged by a lack of suitable coverage of operational weather radars over complex terrain. Distributions of QPE uncertainties associated with the GOES-R deterministic retrievals are derived and analyzed using MRMS over regions with good radar coverage. They will be merged with MRMS-based probabilistic QPEs developed to advance multisensor QPE integration. This research aims at improving precipitation estimation over the CONUS by combining observations from GOES-R and MRMS to provide consistent, accurate, and fine-resolution precipitation rates with uncertainties over the CONUS.

  7. A study on the improved DTC method for estimations of radionuclide activity in radwaste containers

    International Nuclear Information System (INIS)

    Kang, Sang Hee; Hwang, Ki Ha; Lee, Sang Chul; Lee, Kun Jai; Kim, Tae Wook; Kim, Kyoung Deok; Herr, Young Hoi; Song, Myung Jae

    2004-01-01

    Disposal of radwaste containers requires the assessment of the radioactive contents of each container. Some containers cannot be assessed by the γ nuclide analyzer because of time constraints and economic burden. One alternative, the dose-to-curie conversion (DTC) method, can provide an estimate of the container activity. This study evaluates the impact of voids, the chemical composition and density of the material, and the distribution of the source on the surface dose rate, and describes the development of an improved DTC method for more accurate assessment.

  8. Can accelerometry data improve estimates of heart rate variability from wrist pulse PPG sensors?*

    Science.gov (United States)

    Kos, Maciej; Li, Xuan; Khaghani-Far, Iman; Gordon, Christine M.; Pavel, Misha; Jimison, Holly B.

    2018-01-01

    A key prerequisite for precision medicine is the ability to assess metrics of human behavior objectively, unobtrusively and continuously. This capability serves as a framework for the optimization of tailored, just-in-time precision health interventions. Mobile unobtrusive physiological sensors, an important prerequisite for realizing this vision, show promise in implementing this quality of physiological data collection. However, first we must trust the collected data. In this paper, we present a novel approach to improving heart rate estimates from wrist pulse photoplethysmography (PPG) sensors. We also discuss the impact of sensor movement on the veracity of collected heart rate data. PMID:29060185

  9. Can accelerometry data improve estimates of heart rate variability from wrist pulse PPG sensors?

    Science.gov (United States)

    Kos, Maciej; Li, Xuan; Khaghani-Far, Iman; Gordon, Christine M.; Pavel, Misha; Jimison, Holly B.

    2017-07-01

    A key prerequisite for precision medicine is the ability to assess metrics of human behavior objectively, unobtrusively and continuously. This capability serves as a framework for the optimization of tailored, just-in-time precision health interventions. Mobile unobtrusive physiological sensors, an important prerequisite for realizing this vision, show promise in implementing this quality of physiological data collection. However, first we must trust the collected data. In this paper, we present a novel approach to improving heart rate estimates from wrist pulse photoplethysmography (PPG) sensors. We also discuss the impact of sensor movement on the veracity of collected heart rate data.

  10. A Patient Advocate to facilitate access and improve communication, care, and outcomes in adults with moderate or severe asthma: Rationale, design, and methods of a randomized controlled trial

    Science.gov (United States)

    Apter, Andrea J.; Morales, Knashawn H.; Han, Xiaoyan; Perez, Luzmercy; Huang, Jingru; Ndicu, Grace; Localio, Anna; Nardi, Alyssa; Klusaritz, Heather; Rogers, Marisa; Phillips, Alexis; Cidav, Zuleyha; Schwartz, J. Sanford

    2017-01-01

    Few interventions to improve asthma outcomes have targeted low-income minority adults. Even fewer have focused on the real-world practice where care is delivered. We adapted a patient navigator, here called a Patient Advocate (PA), a term preferred by patients, to facilitate and maintain access to chronic care for adults with moderate or severe asthma and prevalent co-morbidities recruited from clinics serving low-income urban neighborhoods. We describe the planning, design, methodology (informed by patient and provider focus groups), baseline results, and challenges of an ongoing randomized controlled trial of 312 adults of a PA intervention implemented in a variety of practices. The PA coaches, models, and assists participants with preparations for a visit with the asthma clinician; attends the visit with permission of participant and provider; and confirms participants’ understanding of what transpired at the visit. The PA facilitates scheduling, obtaining insurance coverage, overcoming patients’ unique social and administrative barriers to carrying out medical advice and transfer of information between providers and patients. PA activities are individualized, take account of comorbidities, and are generalizable to other chronic diseases. PAs are recent college graduates interested in health-related careers, research experience, working with patients, and generally have the same race/ethnicity distribution as potential participants. We test whether the PA intervention, compared to usual care, is associated with improved and sustained asthma control and other asthma outcomes (prednisone bursts, ED visits, hospitalizations, quality of life, FEV1) relative to baseline. Mediators and moderators of the PA-asthma outcome relationship are examined along with the intervention’s cost-effectiveness. PMID:28315481

  11. Impact of regression methods on improved effects of soil structure on soil water retention estimates

    Science.gov (United States)

    Nguyen, Phuong Minh; De Pue, Jan; Le, Khoa Van; Cornelis, Wim

    2015-06-01

    Increasing the accuracy of pedotransfer functions (PTFs), an indirect method for predicting non-readily available soil features such as soil water retention characteristics (SWRC), is of crucial importance for large-scale agro-hydrological modeling. Adding significant predictors (i.e., soil structure) and implementing more flexible regression algorithms are among the main strategies for PTF improvement. The aim of this study was to investigate whether the improved effect of categorical soil structure information on estimating soil-water content at various matric potentials, which has been reported in the literature, could be consistently captured by regression techniques other than the usually applied linear regression. Two data mining techniques, i.e., Support Vector Machines (SVM) and k-Nearest Neighbors (kNN), which have recently been introduced as promising tools for PTF development, were utilized to test whether the incorporation of soil structure improves PTF accuracy in a context of rather limited training data. The results show that incorporating descriptive soil structure information, i.e., massive, structured and structureless, as a grouping criterion can improve the accuracy of PTFs derived by the SVM approach in the range of matric potentials from -6 to -33 kPa (average RMSE decreased by up to 0.005 m3 m-3 after grouping, depending on the matric potential). The improvement was primarily attributed to the outperformance of SVM-PTFs calibrated on structureless soils. No improvement was obtained with the kNN technique, at least not in our study, in which the data set became limited in size after grouping. Since the regression technique has an impact on the improved effect of incorporating qualitative soil structure information, selecting a proper technique will help to maximize the combined influence of flexible regression algorithms and soil structure information on PTF accuracy.
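
    A minimal sketch of the SVM-PTF idea with scikit-learn: fit one support vector regression per structure class, so the categorical structure information enters exactly as a grouping criterion. The predictors, target relationship, and all data below are placeholders, not the study's data set.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        # Hypothetical predictors: sand %, clay %, bulk density; target: water content at -33 kPa
        X = np.column_stack([rng.uniform(5, 80, 200),
                             rng.uniform(5, 50, 200),
                             rng.uniform(1.1, 1.7, 200)])
        y = 0.45 - 0.002 * X[:, 0] + 0.003 * X[:, 1] - 0.1 * X[:, 2] + rng.normal(0, 0.01, 200)
        structure = rng.choice(["massive", "structured", "structureless"], 200)

        # One SVM-PTF per structure class: structure acts as the grouping criterion
        ptfs = {}
        for cls in np.unique(structure):
            mask = structure == cls
            ptfs[cls] = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.005))
            ptfs[cls].fit(X[mask], y[mask])

        print(ptfs["structured"].predict(X[:3]))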

  12. Improved frame-based estimation of head motion in PET brain imaging

    International Nuclear Information System (INIS)

    Mukherjee, J. M.; Lindsay, C.; King, M. A.; Licho, R.; Mukherjee, A.; Olivier, P.; Shao, L.

    2016-01-01

    Purpose: Head motion during PET brain imaging can cause significant degradation of image quality. Several authors have proposed ways to compensate for PET brain motion to restore image quality and improve quantitation. Head restraints can reduce movement but are unreliable; thus the need for alternative strategies such as data-driven motion estimation or external motion tracking. Herein, the authors present a data-driven motion estimation method using a preprocessing technique that allows the usage of very short duration frames, thus reducing the intraframe motion problem commonly observed in the multiple frame acquisition method. Methods: The list mode data for PET acquisition is uniformly divided into 5-s frames and images are reconstructed without attenuation correction. Interframe motion is estimated using a 3D multiresolution registration algorithm and subsequently compensated for. For this study, the authors used 8 PET brain studies that used F-18 FDG as the tracer and contained minor or no initial motion. After reconstruction and prior to motion estimation, known motion was introduced to each frame to simulate head motion during a PET acquisition. To investigate the trade-off in motion estimation and compensation with respect to frames of different length, the authors summed 5-s frames accordingly to produce 10 and 60 s frames. Summed images generated from the motion-compensated reconstructed frames were then compared to the original PET image reconstruction without motion compensation. Results: The authors found that our method is able to compensate for both gradual and step-like motions using frame times as short as 5 s with a spatial accuracy of 0.2 mm on average. Complex volunteer motion involving all six degrees of freedom was estimated with lower accuracy (0.3 mm on average) than the other types investigated. Preprocessing of 5-s images was necessary for successful image registration. Since their method utilizes nonattenuation corrected frames, it is

  13. Improved frame-based estimation of head motion in PET brain imaging

    Energy Technology Data Exchange (ETDEWEB)

    Mukherjee, J. M., E-mail: joyeeta.mitra@umassmed.edu; Lindsay, C.; King, M. A.; Licho, R. [Department of Radiology, University of Massachusetts Medical School, Worcester, Massachusetts 01655 (United States); Mukherjee, A. [Aware, Inc., Bedford, Massachusetts 01730 (United States); Olivier, P. [Philips Medical Systems, Cleveland, Ohio 44143 (United States); Shao, L. [ViewRay, Oakwood Village, Ohio 44146 (United States)

    2016-05-15

    Purpose: Head motion during PET brain imaging can cause significant degradation of image quality. Several authors have proposed ways to compensate for PET brain motion to restore image quality and improve quantitation. Head restraints can reduce movement but are unreliable; thus the need for alternative strategies such as data-driven motion estimation or external motion tracking. Herein, the authors present a data-driven motion estimation method using a preprocessing technique that allows the usage of very short duration frames, thus reducing the intraframe motion problem commonly observed in the multiple frame acquisition method. Methods: The list mode data for PET acquisition is uniformly divided into 5-s frames and images are reconstructed without attenuation correction. Interframe motion is estimated using a 3D multiresolution registration algorithm and subsequently compensated for. For this study, the authors used 8 PET brain studies that used F-18 FDG as the tracer and contained minor or no initial motion. After reconstruction and prior to motion estimation, known motion was introduced to each frame to simulate head motion during a PET acquisition. To investigate the trade-off in motion estimation and compensation with respect to frames of different length, the authors summed 5-s frames accordingly to produce 10 and 60 s frames. Summed images generated from the motion-compensated reconstructed frames were then compared to the original PET image reconstruction without motion compensation. Results: The authors found that our method is able to compensate for both gradual and step-like motions using frame times as short as 5 s with a spatial accuracy of 0.2 mm on average. Complex volunteer motion involving all six degrees of freedom was estimated with lower accuracy (0.3 mm on average) than the other types investigated. Preprocessing of 5-s images was necessary for successful image registration. Since their method utilizes nonattenuation corrected frames, it is

  14. Intelligent Models Performance Improvement Based on Wavelet Algorithm and Logarithmic Transformations in Suspended Sediment Estimation

    Directory of Open Access Journals (Sweden)

    R. Hajiabadi

    2016-10-01

    Full Text Available Introduction: One reason for the complexity of predicting hydrological phenomena, especially time series, is the existence of features such as trend, noise and high-frequency oscillations. These complex features, especially noise, can be detected or removed by preprocessing. Appropriate preprocessing makes estimation of these phenomena easier. Preprocessing is particularly effective in data-driven models such as artificial neural networks, gene expression programming and support vector machines, because the quality of data in these models is important. The present study, by considering denoising and data transformation as two different preprocessing steps, tries to improve the results of intelligent models. In this study two different intelligent models, Artificial Neural Network and Gene Expression Programming, are applied to the estimation of daily suspended sediment load. Wavelet transforms and logarithmic transformation are used for denoising and data transformation, respectively. Finally, the impacts of preprocessing on the results of the intelligent models are evaluated. Materials and Methods: In this study, Gene Expression Programming and Artificial Neural Network are used as intelligent models for suspended sediment load estimation; then the impacts of the denoising and logarithmic transformation approaches as data preprocessors are evaluated and compared with respect to result improvement. Two different logarithmic transforms are considered in this research, LN and LOG. Wavelet transformation is used for time series denoising. In order to denoise by wavelet transforms, the time series is first decomposed at one level (approximation and detail parts) and, second, the high-frequency part (detail) is removed as noise. Given the ability of gene expression programming and artificial neural networks to analyze nonlinear systems, daily values of the suspended sediment load of the Skunk River in the USA, during a 5-year period, are investigated and then estimated. 4 years of
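
    The one-level wavelet denoising step described above (decompose, discard the detail coefficients, reconstruct) can be sketched with PyWavelets; the series, wavelet choice, and log transform below are placeholders, not the study's data.

        import numpy as np
        import pywt

        # Hypothetical daily suspended-sediment series (log-transformed, echoing the
        # study's LN/LOG preprocessing)
        rng = np.random.default_rng(0)
        t = np.arange(1024)
        series = np.log(50 + 30 * np.sin(2 * np.pi * t / 365) + rng.gamma(2.0, 2.0, t.size))

        # One-level decomposition into approximation (cA) and detail (cD) parts
        cA, cD = pywt.dwt(series, "db4")

        # Treat the high-frequency detail part as noise and discard it
        denoised = pywt.idwt(cA, np.zeros_like(cD), "db4")

        print(np.std(series - denoised[: series.size]))  # magnitude of removed "noise"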

  15. Facilitating participation

    DEFF Research Database (Denmark)

    Skøtt, Bo

    2018-01-01

    the resulting need for a redefinition of library competence. In doing this, I primarily address the first two questions from Chapter 1 and how they relate to the public’s informal, leisure-time activities in a networked society. In particular, I focus on the skills of reflexive self-perception and informed opinion formation. Further, I point out the significance which these informal leisure-time activities have for public library staff’s cultural dissemination skills. In this way, I take on the question of the skills required for facilitating the learning of a participatory public (cf. Chapter 1), exemplifying with the competence required of library staff. My discussion will proceed by way of a literature review. In the next section, I shall explain how and what sources were chosen, and sections three and four present the theoretical framework and how the applied theories are related. In the fifth section...

  16. Facilitating Transfers

    DEFF Research Database (Denmark)

    Kjær, Poul F.

    2018-01-01

    Departing from the paradox that globalisation has implied an increase, rather than a decrease, in contextual diversity, this paper re-assesses the function, normative purpose and location of Regulatory Governance Frameworks in world society. Drawing on insights from sociology of law and world society studies, the argument advanced is that Regulatory Governance Frameworks are oriented towards facilitating transfers of condensed social components, such as economic capital and products, legal acts, political decisions and scientific knowledge, from one legally-constituted normative order, i.e. contextual setting, to another. Against this background, it is suggested that Regulatory Governance Frameworks can be understood as schemes which act as ‘rites of passage’ aimed at providing legal stabilisation to social processes characterised by liminality, i.e. ambiguity, hybridity and in-betweenness.

  17. Improved estimates of external gamma dose rates in the environs of Hinkley Point Power Station

    International Nuclear Information System (INIS)

    Macdonald, H.F.; Thompson, I.M.G.

    1988-07-01

    The dominant source of external gamma dose rates at centres of population within a few kilometres of Hinkley Point Power Station is the routine discharge of 41-Ar from the 'A' station magnox reactors. Earlier estimates of the 41-Ar radiation dose rates were based upon measured discharge rates, combined with calculations using standard plume dispersion and cloud-gamma integration models. This report presents improved dose estimates derived from environmental gamma dose rate measurements made at distances up to about 1 km from the site, thus minimising the degree of extrapolation introduced in estimating dose rates at locations up to a few kilometres from the site. In addition, results from associated chemical tracer measurements and wind tunnel simulations covering distances up to about 4 km from the station are outlined. These provide information on the spatial distribution of the 41-Ar plume during the initial stages of its dispersion, including effects due to plume buoyancy and momentum and behaviour under light wind conditions. In addition to supporting the methodology used for the 41-Ar dose calculations, this information is also of generic interest in the treatment of a range of operational and accidental releases from nuclear power station sites and will assist in the development and validation of existing environmental models. (author)

  18. Improved regression models for ventilation estimation based on chest and abdomen movements

    International Nuclear Information System (INIS)

    Liu, Shaopeng; Gao, Robert; He, Qingbo; Staudenmayer, John; Freedson, Patty

    2012-01-01

    Non-invasive estimation of minute ventilation is important for quantifying the intensity of physical activity of individuals. In this paper, several improved regression models are presented, based on the measurement of chest and abdomen movements from sensor belts worn by subjects (n = 50) engaged in 14 types of physical activity. Five linear models involving a combination of 11 features were developed, and the effects of different model training approaches and window sizes for computing the features were investigated. The performance of the models was evaluated using experimental data collected during the physical activity protocol. The predicted minute ventilation was compared to the criterion ventilation measured using a bidirectional digital volume transducer housed in a respiratory gas exchange system. The results indicate that the inclusion of breathing frequency and the use of percentile points instead of interdecile ranges over a 60 s window size reduced error by about 43%, when applied to the classical two-degrees-of-freedom model. The mean percentage error of the minute ventilation estimated for all the activities was below 7.5%, verifying reasonably good performance of the models and the applicability of the wearable sensing system for minute ventilation estimation during physical activity. (paper)

  19. Improving Hip-Worn Accelerometer Estimates of Sitting Using Machine Learning Methods.

    Science.gov (United States)

    Kerr, Jacqueline; Carlson, Jordan; Godbole, Suneeta; Cadmus-Bertram, Lisa; Bellettiere, John; Hartman, Sheri

    2018-02-13

    To improve estimates of sitting time from hip-worn accelerometers used in large cohort studies by employing machine learning methods developed on free-living activPAL data. Thirty breast cancer survivors concurrently wore a hip-worn accelerometer and a thigh-worn activPAL for 7 days. A random forest classifier, trained on the activPAL data, was employed to detect sitting, standing and sit-stand transitions in 5-second windows in the hip-worn accelerometer. The classifier estimates were compared to the standard accelerometer cut point, and significant differences across different bout lengths were investigated using mixed effect models. Overall, the algorithm predicted the postures with moderate accuracy (stepping 77%, standing 63%, sitting 67%, sit-to-stand 52% and stand-to-sit 51%). Daily-level analyses indicated that errors in transition estimates occurred only during sitting bouts of 2 minutes or less. The standard cut point was significantly different from the activPAL across all bout lengths, overestimating short bouts and underestimating long bouts. This is among the first algorithms for sitting and standing for hip-worn accelerometer data to be trained entirely on free-living activPAL data. The new algorithm detected prolonged sitting, which has been shown to be most detrimental to health. Further validation and training in larger cohorts is warranted.
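
    The classification setup, a random forest over features from 5-second windows of hip accelerometer data labeled by activPAL postures, can be sketched with scikit-learn. The window features and all data below are simulated placeholders; the study's actual feature set is not given in the abstract.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_windows = 5000
        # Hypothetical per-window features: mean and sd of acceleration vector
        # magnitude, and mean tilt angle of the hip sensor
        X = np.column_stack([rng.gamma(2, 0.05, n_windows),
                             rng.gamma(1.5, 0.02, n_windows),
                             rng.uniform(0, 90, n_windows)])
        # Labels from the thigh-worn activPAL (the ground truth in the study)
        y = rng.choice(["sit", "stand", "step", "sit-to-stand", "stand-to-sit"],
                       n_windows, p=[0.45, 0.25, 0.22, 0.04, 0.04])

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        print(cross_val_score(clf, X, y, cv=5).mean())  # chance level here: data are random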

  20. An Improved Weise’s Rule for Efficient Estimation of Stand Quadratic Mean Diameter

    Directory of Open Access Journals (Sweden)

    Róbert Sedmák

    2015-07-01

    Full Text Available The main objective of this study was to explore the accuracy of Weise’s rule of thumb applied to an estimation of the quadratic mean diameter of a forest stand. Virtual stands of European beech (Fagus sylvatica L.) across a range of structure types were stochastically generated and random sampling was simulated. We compared the bias and accuracy of stand quadratic mean diameter estimates, employing different ranks of measured stems from a set of the 10 trees nearest to the sampling point. We proposed several modifications of the original Weise’s rule based on the measurement and averaging of two different ranks centered on a target rank. In accordance with the original formulation of the empirical rule, we recommend the application of the measurement of the 6th stem in rank, corresponding to the 55% sample percentile of the diameter distribution, irrespective of mean diameter size and degree of diameter dispersion. The study also revealed that the application of appropriate two-measurement modifications of Weise’s method, the 4th and 8th ranks or 3rd and 9th ranks averaged to the 6th central rank, should be preferred over the classic one-measurement estimation. The modified versions are characterised by improved accuracy (about 25%) without statistically significant bias and with measurement costs comparable to the classic Weise method.
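
    A minimal sketch of the rule as described: take the 10 trees nearest the sampling point and use the 6th diameter in rank, or the average of the 4th and 8th (or 3rd and 9th) ranks, as the estimate of the stand quadratic mean diameter. The diameters below are made up.

        import numpy as np

        def weise_qmd(diams_10_nearest, ranks=(6,)):
            """Estimate stand quadratic mean diameter from the 10 nearest trees.
            ranks=(6,) is the classic Weise rule (6th stem, ~55th percentile);
            ranks=(4, 8) or (3, 9) are the two-measurement modifications."""
            d = np.sort(np.asarray(diams_10_nearest))
            return np.mean([d[r - 1] for r in ranks])   # ranks are 1-based

        diams = [18.2, 21.5, 23.0, 24.8, 26.1, 27.4, 28.9, 30.3, 33.0, 36.5]  # cm, made up
        print(weise_qmd(diams))                         # classic: 6th rank
        print(weise_qmd(diams, (4, 8)))                 # modified: average of 4th and 8th
        print(weise_qmd(diams, (3, 9)))                 # modified: average of 3rd and 9th
        print(np.sqrt(np.mean(np.square(diams))))       # true quadratic mean of this sample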

  1. Improved Water Consumption Estimates of Black Locust Plantations in China’s Loess Plateau

    Directory of Open Access Journals (Sweden)

    Kai Schwärzel

    2018-04-01

    Full Text Available Black locust (Robinia pseudoacacia L.) is a major tree species in China’s large-scale afforestation. Despite its significance, black locust is underrepresented in the sap flow literature; moreover, the published water consumption data might be biased. We applied two field methods to estimate the water consumption of black locust during the growing seasons in 2012 and 2013. The application of Granier’s original sap flow method produced a very low transpiration rate (0.08 mm d−1) while the soil water balance method yielded a much higher rate (1.4 mm d−1). A dye experiment to determine the active sapwood area showed that only the outermost annual ring is responsible for conducting water, which was not considered in many previous studies. Moreover, an in situ calibration experiment was conducted to improve the reliability of Granier’s method. Validation showed a good agreement in estimates of the transpiration rate between the different methods. It is known from many studies that black locust plantations contribute to the significant decline of discharge in the Yellow River basin. Our estimate of tree transpiration at stand scale confirms these results. This study provides a basis for and advances the argument for the development of more sustainable forest management strategies, which better balance forest-related ecosystem services such as soil conservation and water supply.
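
    Granier's original method converts the thermal dissipation probe signal into sap flux density via the empirical calibration u = 119e-6 * K^1.231, with K = (ΔTmax − ΔT)/ΔT. A minimal sketch follows; the probe readings and sapwood area are invented, and the study's in-situ recalibration and outermost-ring sapwood correction enter only as comments.

        import numpy as np

        def granier_sap_flux_density(dT, dT_max, a=119e-6, b=1.231):
            """Granier's original calibration: u (m3 m-2 s-1) from probe temperature
            differences. a and b are the classic coefficients; the study's in-situ
            calibration would replace them with species/site-specific values."""
            K = (dT_max - dT) / dT
            return a * K ** b

        # Hypothetical probe readings over a day (degrees C)
        dT = np.array([9.8, 9.2, 8.1, 7.5, 8.3, 9.5])
        u = granier_sap_flux_density(dT, dT_max=10.0)

        # Tree-level flow scales u by the *conducting* sapwood area; per the dye
        # experiment, only the outermost annual ring conducts (a key correction here)
        sapwood_area_m2 = 0.0015          # placeholder: outermost ring only
        print(u * sapwood_area_m2)        # tree-level sap flow, m3 s-1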

  2. Depth estimation of features in video frames with improved feature matching technique using Kinect sensor

    Science.gov (United States)

    Sharma, Kajal; Moon, Inkyu; Kim, Sung Gaun

    2012-10-01

    Estimating depth has long been a major issue in the field of computer vision and robotics. The Kinect sensor's active sensing strategy provides high-frame-rate depth maps and can recognize user gestures and human pose. This paper presents a technique to estimate the depth of features extracted from video frames, along with an improved feature-matching method. In this paper, we used the Kinect camera developed by Microsoft, which captured color and depth images for further processing. Feature detection and selection is an important task for robot navigation. Many feature-matching techniques have been proposed earlier, and this paper proposes an improved feature matching between successive video frames with the use of neural network methodology in order to reduce the computation time of feature matching. The features extracted are invariant to image scale and rotation, and different experiments were conducted to evaluate the performance of feature matching between successive video frames. The extracted features are assigned distance based on the Kinect technology that can be used by the robot in order to determine the path of navigation, along with obstacle detection applications.

  3. Value of Clean Water Resources: Estimating the Water Quality Improvement in Metro Manila, Philippines

    Directory of Open Access Journals (Sweden)

    Shokhrukh-Mirzo Jalilov

    2017-12-01

    Full Text Available While it has had many positive impacts, the tremendous economic performance and rapid industrial expansion of the Philippines over the last decades have had negative effects that have resulted in unfavorable hydrological and ecological changes in most urban river systems and have created environmental problems. Usually, these effects would not be part of a systematic assessment of urban water benefits. To address the issue, this study investigates the relationship between poor water quality and residents’ willingness to pay (WTP) for improved water quality in Metro Manila. By employing a contingent valuation method (CVM), this paper estimates the benefits of the provision of clean water quality (swimmable and fishable) in waterbodies of Metro Manila for its residents. Face-to-face interviews were completed with 240 randomly selected residents. Residents expressed a mean WTP of PHP102.44 (USD2.03) for a swimmable water quality (good quality) and a mean WTP of PHP102.39 (USD2.03) for fishable water quality (moderate quality). The aggregation of this mean willingness-to-pay value amounted to annual economic benefits of PHP9.443 billion to PHP9.447 billion (approx. USD190 million) per year for all taxpayers in Metro Manila. As expected, these estimates could inform local decision-makers about the benefits of future policy interventions aimed at improving the quality of waterbodies in Metro Manila.

  4. Improvement of radiation dose estimation due to nuclear accidents using deep neural network and GPU

    Energy Technology Data Exchange (ETDEWEB)

    Desterro, Filipe S.M.; Almeida, Adino A.H.; Pereira, Claudio M.N.A., E-mail: filipesantana18@gmail.com, E-mail: adino@ien.gov.br, E-mail: cmcoelho@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2017-07-01

    Recently, the use of mobile devices has been proposed for dose assessment during nuclear accidents. The idea is to support field teams by providing an approximate estimation of the dose distribution map in the vicinity of the nuclear power plant (NPP), without needing to be connected to the NPP systems. In order to provide such stand-alone execution, the use of artificial neural networks (ANN) has been proposed in substitution of the complex and time-consuming physical models executed by the atmospheric dispersion radionuclide (ADR) system. One limitation observed in such an approach is the very time-consuming training of the ANNs. Moreover, if the number of input parameters increases, the performance of standard ANNs, like the Multilayer Perceptron (MLP) with backpropagation training, is affected, leading to unreasonable training times. To improve learning, allowing better dose estimations, more complex ANN architectures are required. ANNs with many layers (much more than the typical number of layers), referred to as Deep Neural Networks (DNN), for example, have been demonstrated to achieve better results. On the other hand, the training of such ANNs is very slow. In order to allow the use of such DNNs within a reasonable training time, a parallel programming solution using Graphics Processing Units (GPU) and the Compute Unified Device Architecture (CUDA) is proposed. This work focuses on the study of computational technologies for improvement of the ANNs to be used in the mobile application, as well as their training algorithms. (author)

  5. An improved parameter estimation and comparison for soft tissue constitutive models containing an exponential function.

    Science.gov (United States)

    Aggarwal, Ankush

    2017-08-01

    Motivated by the well-known result that the stiffness of soft tissue is proportional to the stress, many of the constitutive laws for soft tissues contain an exponential function. In this work, we analyze properties of the exponential function and how it affects the estimation and comparison of elastic parameters for soft tissues. In particular, we find that as a consequence of the exponential function there are lines of high covariance in the elastic parameter space. As a result, one can have widely varying mechanical parameters defining the tissue stiffness but similar effective stress-strain responses. Drawing from elementary algebra, we propose simple changes in the norm and the parameter space, which significantly improve the convergence of parameter estimation and robustness in the presence of noise. More importantly, we demonstrate that these changes improve the conditioning of the problem and provide a more robust solution in the case of heterogeneous material by reducing the chances of getting trapped in a local minimum. Based upon the new insight, we also propose a transformed parameter space which will allow for rational parameter comparison and avoid misleading conclusions regarding soft tissue mechanics.
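
    The covariance issue can be reproduced with a one-dimensional toy law of the common exponential form σ = c(exp(bε) − 1): c and b covary strongly, and fitting log-stress (a simple change of norm, in the spirit of, though not necessarily identical to, the paper's proposal) conditions the problem better. All values below are invented.

        import numpy as np
        from scipy.optimize import curve_fit

        def stress(strain, c, b):
            """Toy 1-D exponential constitutive law, sigma = c*(exp(b*eps) - 1)."""
            return c * (np.exp(b * strain) - 1.0)

        rng = np.random.default_rng(0)
        strain = np.linspace(0.01, 0.25, 40)
        data = stress(strain, c=5.0, b=12.0) * (1 + rng.normal(0, 0.02, strain.size))

        # Naive fit in the original norm: strong c-b covariance, noise-sensitive
        p, cov = curve_fit(stress, strain, data, p0=[1.0, 5.0], maxfev=10000)

        # Change of norm: fit log-stress, tempering the exponential's dominance at
        # large strain (bounds keep c, b positive so the log stays defined)
        log_model = lambda e, c, b: np.log(stress(e, c, b))
        p_log, cov_log = curve_fit(log_model, strain, np.log(data),
                                   p0=[1.0, 5.0], bounds=(1e-6, np.inf))

        corr = lambda cv: cv[0, 1] / np.sqrt(cv[0, 0] * cv[1, 1])
        print(p, corr(cov))          # fitted (c, b); correlation typically strongly negative
        print(p_log, corr(cov_log))  # log-norm fit: better-conditioned estimate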

  6. Improvement of radiation dose estimation due to nuclear accidents using deep neural network and GPU

    International Nuclear Information System (INIS)

    Desterro, Filipe S.M.; Almeida, Adino A.H.; Pereira, Claudio M.N.A.

    2017-01-01

    Recently, the use of mobile devices has been proposed for dose assessment during nuclear accidents. The idea is to support field teams by providing an approximate estimate of the dose distribution map in the vicinity of the nuclear power plant (NPP), without needing to be connected to the NPP systems. In order to provide such stand-alone execution, the use of artificial neural networks (ANN) has been proposed in substitution of the complex and time-consuming physical models executed by the atmospheric dispersion radionuclide (ADR) system. One limitation observed in this approach is the very time-consuming training of the ANNs. Moreover, as the number of input parameters increases, the performance of standard ANNs, such as the Multilayer Perceptron (MLP) with backpropagation training, degrades, leading to unreasonable training times. To improve learning and allow better dose estimation, more complex ANN architectures are required. ANNs with many layers (many more than the typical number), referred to as Deep Neural Networks (DNN), for example, have been demonstrated to achieve better results. On the other hand, training such ANNs is very slow. In order to allow the use of such DNNs with reasonable training times, a parallel programming solution using Graphics Processing Units (GPU) and the Compute Unified Device Architecture (CUDA) is proposed. This work focuses on the study of computational technologies for improving the ANNs to be used in the mobile application, as well as their training algorithms. (author)

  7. Estimating the Value of Improved Distributed Photovoltaic Adoption Forecasts for Utility Resource Planning

    Energy Technology Data Exchange (ETDEWEB)

    Gagnon, Pieter [National Renewable Energy Lab. (NREL), Golden, CO (United States); Barbose, Galen L. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Stoll, Brady [National Renewable Energy Lab. (NREL), Golden, CO (United States); Ehlen, Ali [National Renewable Energy Lab. (NREL), Golden, CO (United States); Zuboy, Jarret [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mai, Trieu [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mills, Andrew D. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2018-05-15

    Misforecasting the adoption of customer-owned distributed photovoltaics (DPV) can have operational and financial implications for utilities; forecasting capabilities can be improved, but generally at a cost. This paper informs this decision space by using a suite of models to explore the capacity expansion and operation of the Western Interconnection over a 15-year period across a wide range of DPV growth rates and misforecast severities. The system costs under a misforecast are compared against the costs under a perfect forecast to quantify the costs of misforecasting. Using a simplified probabilistic method applied to these modeling results, an analyst can make a first-order estimate of the financial benefit of improving a utility's forecasting capabilities, and thus be better informed about whether to make such an investment. For example, under our base assumptions, a utility with 10 TWh per year of retail electric sales that initially estimates that DPV growth could range from 2% to 7.5% of total generation over the next 15 years could expect total present-value savings of approximately $4 million if it could reduce the severity of misforecasting to within ±25%. Utility resource planners can compare those savings against the costs needed to achieve that level of precision, to guide their decision on whether to invest in tools or resources.
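
A hedged illustration of that first-order estimate: weight the modeled extra system cost of each misforecast severity by its assumed probability, with and without the improved forecast. All numbers below are invented placeholders, not the paper's modeling results:

```python
# Expected cost of misforecasting under two forecast-capability scenarios.
severities = [-0.50, -0.25, 0.0, 0.25, 0.50]      # misforecast severity
cost_of_misforecast = [9e6, 2e6, 0.0, 2e6, 9e6]   # extra system cost (USD)

p_baseline = [0.2, 0.2, 0.2, 0.2, 0.2]            # wide forecast uncertainty
p_improved = [0.0, 0.3, 0.4, 0.3, 0.0]            # confined to +/-25%

expected = lambda p: sum(pi * c for pi, c in zip(p, cost_of_misforecast))
savings = expected(p_baseline) - expected(p_improved)
print(f"Expected present-value savings: ${savings/1e6:.1f} million")
```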

  8. Parameter estimation of photovoltaic cells using an improved chaotic whale optimization algorithm

    International Nuclear Information System (INIS)

    Oliva, Diego; Abd El Aziz, Mohamed; Ella Hassanien, Aboul

    2017-01-01

    Highlights: •We modify the whale optimization algorithm using chaotic maps. •We apply the chaotic algorithm to estimate the parameters of photovoltaic cells. •We perform a study of chaos in the whale algorithm. •Several comparisons and metrics support the experimental results. •We test the method with data from real solar cells. -- Abstract: The use of solar energy has increased since it is a clean source of energy. The design of photovoltaic cells has therefore attracted the attention of researchers around the world. There are two main problems in this field: the lack of a useful model to characterize solar cells and the absence of data about photovoltaic cells. This situation even affects the performance of photovoltaic modules (panels). The current-versus-voltage characteristics are used to describe the behavior of solar cells. Considering such values, the design problem involves the solution of complex non-linear and multi-modal objective functions. Different algorithms have been proposed to identify the parameters of photovoltaic cells and panels, but most of them commonly fail to find the optimal solution. This paper proposes the Chaotic Whale Optimization Algorithm (CWOA) for the estimation of solar cell parameters. The main advantage of the proposed approach is the use of chaotic maps to compute and automatically adapt the internal parameters of the optimization algorithm. This is beneficial in complex problems because, along the iterative process, the algorithm improves its capability to search for the best solution. The modified method is able to optimize complex and multimodal objective functions, such as the one arising in the estimation of solar cell parameters. To illustrate the capabilities of the proposed algorithm in solar cell design, it is compared with other optimization methods over different datasets. Moreover, the experimental results support the improved performance of the proposed approach.
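
A simplified sketch of the idea (assumptions: a logistic map as the chaotic generator, a toy sphere objective in place of the solar-cell fitting function, and a reduced set of whale-update rules rather than the authors' exact algorithm):

```python
# Whale-style optimization loop with a chaotic logistic map supplying the
# internal coefficients instead of uniform random draws.
import numpy as np

def logistic_map(x):
    return 4.0 * x * (1.0 - x)          # chaotic for this parameterization

def sphere(x):
    return float(np.sum(x ** 2))        # toy objective to be minimized

rng = np.random.default_rng(1)
dim, n_whales, iters = 5, 20, 200
pos = rng.uniform(-5, 5, (n_whales, dim))
chaos = 0.7                              # seed of the chaotic sequence

best = min(pos, key=sphere).copy()
for t in range(iters):
    a = 2.0 * (1 - t / iters)            # decreases linearly from 2 to 0
    for i in range(n_whales):
        chaos = logistic_map(chaos)      # chaotic draw instead of uniform
        A = 2 * a * chaos - a
        C = 2 * chaos
        chaos = logistic_map(chaos)
        if chaos < 0.5:                  # encircling phase
            D = np.abs(C * best - pos[i])
            pos[i] = best - A * D
        else:                            # spiral bubble-net phase
            l = 2 * chaos - 1
            D = np.abs(best - pos[i])
            pos[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + best
        pos[i] = np.clip(pos[i], -5, 5)  # keep candidates in bounds
        if sphere(pos[i]) < sphere(best):
            best = pos[i].copy()

print("best objective:", sphere(best))
```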

  9. Improvements to TOVS retrievals over sea ice and applications to estimating Arctic energy fluxes

    Science.gov (United States)

    Francis, Jennifer A.

    1994-01-01

    Modeling studies suggest that polar regions play a major role in modulating the Earth's climate and that they may be more sensitive than lower latitudes to climate change. Until recently, however, data from meteorological stations poleward of 70 deg have been sparse, and consequently our understanding of air-sea-ice interaction processes is relatively poor. Satellite-borne sensors now offer a promising opportunity to observe polar regions and ultimately to improve parameterizations of energy transfer processes in climate models. This study focuses on the application of the TIROS-N operational vertical sounder (TOVS) to sea-ice-covered regions in the nonmelt season. TOVS radiances are processed with the improved initialization inversion ('3I') algorithm, providing estimates of layer-average temperature and moisture, cloud conditions, and surface characteristics at a horizontal resolution of approximately 100 km x 100 km. Although TOVS has flown continuously on polar-orbiting satellites since 1978, its potential has not been realized in high latitudes because the quality of retrievals is often significantly lower over sea ice and snow than over other surfaces. The recent availability of three Arctic data sets has provided an opportunity to validate TOVS retrievals: the first from the Coordinated Eastern Arctic Experiment (CEAREX) in winter 1988/1989, the second from the LeadEx field program in spring 1992, and the third from Russian drifting ice stations. Comparisons with these data reveal deficiencies in TOVS retrievals over sea ice during the cold season; e.g., ice surface temperature is often 5 to 15 K too warm, microwave emissivity is approximately 15% too low at large view angles, clear/cloudy scenes are sometimes misidentified, and low-level inversions are often not captured. In this study, methods to reduce these errors are investigated. Improvements to the ice surface temperature retrieval have reduced rms errors from approximately 7 K to 3 K; correction of

  10. Improvement of Bragg peak shift estimation using dimensionality reduction techniques and predictive linear modeling

    Science.gov (United States)

    Xing, Yafei; Macq, Benoit

    2017-11-01

    With the emergence of clinical prototypes and first patient acquisitions for proton therapy, research on prompt gamma imaging is aiming at making the most of the prompt gamma data for in vivo estimation of any shift from the expected Bragg peak (BP). The seemingly simple problem of matching the measured prompt gamma profile of each pencil beam with a reference simulation from the treatment plan is in fact made complex by uncertainties which can translate into distortions during treatment. We illustrate this challenge and demonstrate the robustness of a predictive linear model we proposed for BP shift estimation based on the principal component analysis (PCA) method. It considers the first clinical knife-edge slit camera design in use with anthropomorphic phantom CT data. In particular, 4115 error scenarios were simulated for the learning model. PCA was applied to the training input, randomly chosen from 500 scenarios, to eliminate data collinearities. A total variance of 99.95% was used to represent the testing input from 3615 scenarios. This model improved the BP shift estimation by an average of 63 ± 19%, in a range between -2.5% and 86%, compared to our previous profile shift (PS) method. The robustness of our method was demonstrated by a comparative study in which Poisson noise was applied 1000 times to each profile. 67% of the cases obtained by the learning model had lower prediction errors than those obtained by the PS method. The estimation accuracy ranged between 0.31 ± 0.22 mm and 1.84 ± 8.98 mm for the learning model, while for the PS method it ranged between 0.3 ± 0.25 mm and 20.71 ± 8.38 mm.
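
An assumed reconstruction of the pipeline, not the authors' code: decorrelate simulated profiles with PCA retaining 99.95% of the variance, then fit a linear map from PCA scores to the BP shift. The synthetic profiles below stand in for prompt gamma measurements:

```python
# PCA + linear model for shift estimation on synthetic profile data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_train, n_test, n_bins = 500, 3615, 120   # scenario counts from the paper

# Synthetic stand-ins for prompt gamma profiles and their true BP shifts.
shifts = rng.uniform(-10, 10, n_train + n_test)          # mm
grid = np.linspace(-60, 60, n_bins)
profiles = np.exp(-0.5 * ((grid[None, :] - shifts[:, None]) / 15.0) ** 2)
profiles += rng.normal(0, 0.01, profiles.shape)          # measurement noise

pca = PCA(n_components=0.9995)             # keep 99.95% of total variance
z_train = pca.fit_transform(profiles[:n_train])
model = LinearRegression().fit(z_train, shifts[:n_train])

pred = model.predict(pca.transform(profiles[n_train:]))
print("mean abs error: %.2f mm" % np.mean(np.abs(pred - shifts[n_train:])))
```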

  11. Environmental DNA (eDNA) sampling improves occurrence and detection estimates of invasive Burmese pythons.

    Directory of Open Access Journals (Sweden)

    Margaret E Hunter

    Full Text Available Environmental DNA (eDNA) methods are used to detect DNA that is shed into the aquatic environment by cryptic or low density species. Applied in eDNA studies, occupancy models can be used to estimate occurrence and detection probabilities and thereby account for imperfect detection. However, occupancy terminology has been applied inconsistently in eDNA studies, and many have calculated occurrence probabilities while not considering the effects of imperfect detection. Low detection of invasive giant constrictors using visual surveys and traps has hampered the estimation of occupancy and detection estimates needed for population management in southern Florida, USA. Giant constrictor snakes pose a threat to native species and the ecological restoration of the Florida Everglades. To assist with detection, we developed species-specific eDNA assays using quantitative PCR (qPCR) for the Burmese python (Python molurus bivittatus), Northern African python (P. sebae), boa constrictor (Boa constrictor), and the green (Eunectes murinus) and yellow anaconda (E. notaeus). Burmese pythons, Northern African pythons, and boa constrictors are established and reproducing, while the green and yellow anaconda have the potential to become established. We validated the python and boa constrictor assays using laboratory trials and tested all species in 21 field locations distributed in eight southern Florida regions. Burmese python eDNA was detected in 37 of 63 field sampling events; however, the other species were not detected. Although eDNA was heterogeneously distributed in the environment, occupancy models were able to provide the first estimates of detection probabilities, which were greater than 91%. Burmese python eDNA was detected along the leading northern edge of the known population boundary. The development of informative detection tools and eDNA occupancy models can improve conservation efforts in southern Florida and support more extensive studies of invasive constrictors.

  12. Environmental DNA (eDNA) sampling improves occurrence and detection estimates of invasive Burmese pythons.

    Science.gov (United States)

    Hunter, Margaret E; Oyler-McCance, Sara J; Dorazio, Robert M; Fike, Jennifer A; Smith, Brian J; Hunter, Charles T; Reed, Robert N; Hart, Kristen M

    2015-01-01

    Environmental DNA (eDNA) methods are used to detect DNA that is shed into the aquatic environment by cryptic or low density species. Applied in eDNA studies, occupancy models can be used to estimate occurrence and detection probabilities and thereby account for imperfect detection. However, occupancy terminology has been applied inconsistently in eDNA studies, and many have calculated occurrence probabilities while not considering the effects of imperfect detection. Low detection of invasive giant constrictors using visual surveys and traps has hampered the estimation of occupancy and detection estimates needed for population management in southern Florida, USA. Giant constrictor snakes pose a threat to native species and the ecological restoration of the Florida Everglades. To assist with detection, we developed species-specific eDNA assays using quantitative PCR (qPCR) for the Burmese python (Python molurus bivittatus), Northern African python (P. sebae), boa constrictor (Boa constrictor), and the green (Eunectes murinus) and yellow anaconda (E. notaeus). Burmese pythons, Northern African pythons, and boa constrictors are established and reproducing, while the green and yellow anaconda have the potential to become established. We validated the python and boa constrictor assays using laboratory trials and tested all species in 21 field locations distributed in eight southern Florida regions. Burmese python eDNA was detected in 37 of 63 field sampling events; however, the other species were not detected. Although eDNA was heterogeneously distributed in the environment, occupancy models were able to provide the first estimates of detection probabilities, which were greater than 91%. Burmese python eDNA was detected along the leading northern edge of the known population boundary. The development of informative detection tools and eDNA occupancy models can improve conservation efforts in southern Florida and support more extensive studies of invasive constrictors
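
A toy version of the occupancy estimation underlying both records above: maximize the standard single-season occupancy likelihood to recover occurrence (psi) and per-replicate detection probability (p). The detection histories below are invented:

```python
# Maximum-likelihood occupancy model for repeated eDNA sampling.
import numpy as np
from scipy.optimize import minimize

# detections[i, j] = 1 if eDNA detected in replicate j at site i
detections = np.array([[1, 1, 0], [0, 0, 0], [1, 0, 1],
                       [0, 0, 0], [1, 1, 1], [0, 1, 0]])

def neg_log_lik(params):
    psi, p = 1 / (1 + np.exp(-params))        # logit -> probability
    k = detections.sum(axis=1)                # detections per site
    n = detections.shape[1]                   # replicates per site
    lik = np.where(
        k > 0,
        psi * p**k * (1 - p)**(n - k),        # occupied and detected
        psi * (1 - p)**n + (1 - psi),         # occupied-but-missed or absent
    )
    return -np.sum(np.log(lik))

fit = minimize(neg_log_lik, x0=[0.0, 0.0])
psi_hat, p_hat = 1 / (1 + np.exp(-fit.x))
print(f"occurrence psi = {psi_hat:.2f}, detection p = {p_hat:.2f}")
```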

  13. An Improved Global Wind Resource Estimate for Integrated Assessment Models: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Eurek, Kelly [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sullivan, Patrick [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gleason, Michael [National Renewable Energy Lab. (NREL), Golden, CO (United States); Hettinger, Dylan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Heimiller, Donna [National Renewable Energy Lab. (NREL), Golden, CO (United States); Lopez, Anthony [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-02-01

    This paper summarizes initial steps toward improving the robustness and accuracy of global renewable resource and techno-economic assessments for use in integrated assessment models. We outline a method to construct country-level wind resource supply curves, delineated by resource quality and other parameters. Using mesoscale reanalysis data, we generate estimates of wind quality, both terrestrial and offshore, across the globe. Because not all land or water area is suitable for development, appropriate database layers provide exclusions to reduce the total resource to its technical potential. We expand upon estimates from related studies by: using a globally consistent data source of uniquely detailed wind speed characterizations; assuming a non-constant coefficient of performance for adjusting power curves for altitude; categorizing the distance from resource sites to the electric power grid; and characterizing offshore exclusions on the basis of sea ice concentrations. The product is technical potential by country, classified by resource quality as determined by net capacity factor. Additional classification dimensions are available, including distance to transmission networks for terrestrial wind and distance to shore and water depth for offshore wind. We estimate a total global wind generation potential of 560 PWh for terrestrial wind, with 90% of the resource classified as low-to-mid quality, and 315 PWh for offshore wind, with 67% classified as mid-to-high quality. These estimates are based on 3.5 MW composite wind turbines with 90 m hub heights, 0.95 availability, 90% array efficiency, and 5 MW/km2 deployment density in non-excluded areas. We compare the underlying technical assumptions and results with other global assessments.
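
One step of such a supply-curve construction can be sketched as follows, assuming an illustrative Weibull wind climate and a toy cubic power curve (the paper's composite turbine, altitude adjustments, and exclusion layers are far more detailed):

```python
# Net capacity factor and technical potential from a wind-speed distribution.
import numpy as np

v = np.linspace(0, 30, 301)                      # wind speed bins (m/s)
k, lam = 2.0, 8.5                                # assumed Weibull shape/scale
pdf = (k / lam) * (v / lam) ** (k - 1) * np.exp(-((v / lam) ** k))

rated_kw = 3500.0                                # 3.5 MW composite turbine
power = np.clip(rated_kw * (v / 12.0) ** 3, 0, rated_kw)  # toy power curve
power[(v < 3) | (v > 25)] = 0.0                  # cut-in / cut-out

gross_cf = np.trapz(power * pdf, v) / rated_kw
net_cf = gross_cf * 0.95 * 0.90                  # availability, array losses

area_km2, density = 1000.0, 5.0                  # non-excluded area, MW/km2
energy_twh = area_km2 * density * net_cf * 8760 / 1e6
print(f"net CF = {net_cf:.2f}, technical potential = {energy_twh:.1f} TWh/yr")
```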

  14. Joint Sentinel-1 and SMAP data assimilation to improve soil moisture estimates

    Science.gov (United States)

    Lievens, H.; Reichle, R. H.; Liu, Q.; De Lannoy, G.; Dunbar, R. S.; Kim, S.; Das, N. N.; Cosh, M. H.; Walker, J. P.; Wagner, W.

    2017-12-01

    SMAP (Soil Moisture Active and Passive) radiometer observations at 40 km resolution are routinely assimilated into the NASA Catchment Land Surface Model (CLSM) to generate the SMAP Level 4 Soil Moisture product. The use of C-band radar backscatter observations from Sentinel-1 has the potential to add value to the radiance assimilation by increasing the level of spatial detail. The specifications of Sentinel-1 are appealing, particularly its high spatial resolution (5 by 20 m in interferometric wide swath mode) and frequent revisit time (6 day repeat cycle for the Sentinel-1A and Sentinel-1B constellation). However, the shorter wavelength of Sentinel-1 observations implies less sensitivity to soil moisture. This study investigates the value of Sentinel-1 data for hydrologic simulations by assimilating the radar observations into CLSM, either separately from or simultaneously with SMAP radiometer observations. To facilitate the assimilation of the radar observations, CLSM is coupled to the water cloud model, simulating the radar backscatter as observed by Sentinel-1. The innovations, i.e. differences between observations and simulations, are converted into increments to the model soil moisture state through an Ensemble Kalman Filter. The assimilation impact is assessed by comparing 3-hourly, 9 km surface and root-zone soil moisture simulations with in situ measurements from 9 km SMAP core validation sites and sparse networks, from May 2015 to 2017. The Sentinel-1 assimilation consistently improves surface soil moisture, whereas root-zone impacts are mostly neutral. Relatively larger improvements are obtained from SMAP assimilation. The joint assimilation of SMAP and Sentinel-1 observations performs best, demonstrating the complementary value of radar and radiometer observations.
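
A generic ensemble Kalman filter analysis step, sketched with numpy (the operational Level-4 system is far more elaborate; the observation operator and error variance here are placeholders):

```python
# EnKF analysis: convert observation-minus-forecast innovations into
# increments on an ensemble of model soil moisture states.
import numpy as np

def enkf_update(X, y, H, r_var, rng):
    """X: (n_state, n_ens) ensemble; y: (n_obs,) observations;
    H: (n_obs, n_state) observation operator; r_var: obs error variance."""
    n_obs, n_ens = len(y), X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
    P_HT = A @ (H @ A).T / (n_ens - 1)           # cross-covariance P H^T
    S = (H @ A) @ (H @ A).T / (n_ens - 1) + r_var * np.eye(n_obs)
    K = P_HT @ np.linalg.inv(S)                  # Kalman gain
    # perturbed observations keep the analysis ensemble spread consistent
    Y = y[:, None] + rng.normal(0, np.sqrt(r_var), (n_obs, n_ens))
    return X + K @ (Y - H @ X)                   # analysis ensemble

rng = np.random.default_rng(0)
X = rng.normal(0.25, 0.05, (10, 32))             # soil moisture ensemble
H = np.zeros((1, 10)); H[0, 0] = 1.0             # observe the surface layer
X_a = enkf_update(X, np.array([0.30]), H, r_var=0.02**2, rng=rng)
print("prior mean %.3f -> analysis mean %.3f" % (X[0].mean(), X_a[0].mean()))
```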

  15. Improving agricultural drought monitoring in West Africa using root zone soil moisture estimates derived from NDVI

    Science.gov (United States)

    McNally, A.; Funk, C. C.; Yatheendradas, S.; Michaelsen, J.; Cappelarere, B.; Peters-Lidard, C. D.; Verdin, J. P.

    2012-12-01

    The Famine Early Warning Systems Network (FEWS NET) relies heavily on remotely sensed rainfall and vegetation data to monitor agricultural drought in Sub-Saharan Africa and other places around the world. Analysts use satellite rainfall to calculate rainy season statistics and to force crop water accounting models that show how the magnitude and timing of rainfall might lead to an above- or below-average harvest. The Normalized Difference Vegetation Index (NDVI) is also an important indicator of growing season progress and is given more weight over regions where, for example, a lack of rain gauges increases the error in satellite rainfall estimates. Currently, however, near-real-time NDVI is not integrated into a modeling framework that informs growing season predictions. To meet this need, a land surface model (LSM) is a critical component of our drought monitoring system. We are currently enhancing the FEWS NET monitoring activities by configuring a custom instance of NASA's Land Information System (LIS) called the FEWS NET Land Data Assimilation System. Using the LIS Noah LSM, in-situ measurements, and remotely sensed data, we focus on the following questions: What is the relationship between NDVI and in-situ soil moisture measurements over the West African Sahel? How can we use this relationship to improve modeled water and energy fluxes over the West African Sahel? We investigate soil moisture and NDVI cross-correlation in the time and frequency domains to develop a transfer function model that predicts soil moisture from NDVI. This work compares sites in southwest Niger, Benin, Burkina Faso, and Mali to test the generality of the transfer function. For several sites with fallow and millet vegetation in the Wankama catchment in southwest Niger, we developed a non-parametric frequency response model, using NDVI inputs and soil moisture outputs, that accurately estimates root zone soil moisture (40-70 cm). We extend this analysis by developing a low-order parametric transfer function.
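
A minimal sketch of the lagged transfer-function idea, assuming a first-order form sm[i] = a*sm[i-1] + b*ndvi[i-1] (the paper's non-parametric frequency response model is more general):

```python
# Fit a first-order transfer function from NDVI to soil moisture.
import numpy as np

rng = np.random.default_rng(0)
n = 365
ndvi = 0.3 + 0.2 * np.sin(2 * np.pi * np.arange(n) / 365) \
       + rng.normal(0, 0.02, n)

# Synthetic "truth": soil moisture responds to yesterday's NDVI.
sm = np.empty(n); sm[0] = 0.25
for i in range(1, n):
    sm[i] = 0.9 * sm[i - 1] + 0.1 * ndvi[i - 1] + rng.normal(0, 0.002)

# Least-squares fit of sm[i] ~ a*sm[i-1] + b*ndvi[i-1]
X = np.column_stack([sm[:-1], ndvi[:-1]])
a, b = np.linalg.lstsq(X, sm[1:], rcond=None)[0]
pred = X @ np.array([a, b])
rmse = np.sqrt(np.mean((pred - sm[1:]) ** 2))
print(f"a = {a:.3f}, b = {b:.3f}, RMSE = {rmse:.4f}")
```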

  16. Improved Monte Carlo-perturbation method for estimation of control rod worths in a research reactor

    International Nuclear Information System (INIS)

    Kalcheva, Silva; Koonen, Edgar

    2009-01-01

    A hybrid method dedicated to improving the experimental technique for the estimation of control rod worths in a research reactor is presented. The method uses a combination of the Monte Carlo technique and perturbation theory. The perturbation method is used to obtain the equation for the relative efficiency of control rod insertion. A series of coefficients describing the axial absorption profile is used to correct the equation for a composite rod with a complicated burn-up irradiation history. These coefficients have to be determined, either by experiment or by a theoretical/numerical method. In the present paper they are derived from the macroscopic absorption cross-sections obtained from detailed MCNPX 2.6.F Monte Carlo calculations of the axial burn-up profile during the control rod's life. The method is validated against measurements of control rod worths at the BR2 reactor. A comparison with direct MCNPX evaluations of control rod worths is also presented.
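
For intuition, a textbook first-order perturbation sketch (not the BR2-specific model): the differential rod worth is weighted by the square of the axial flux, and per-segment absorption coefficients can mimic a rod with a depleted tip:

```python
# Relative integral rod worth from a flux-squared weighting with an
# illustrative axial absorption profile.
import numpy as np

H = 1.0                                   # active core height (normalized)
z = np.linspace(0, H, 501)                # insertion depth from the top
flux_sq = np.sin(np.pi * z / H) ** 2      # phi^2 for a bare-core cosine flux

# Hypothetical axial absorption coefficients for a partially burnt rod
# (tip depleted, remainder fresh), applied segment by segment.
absorption = np.where(z < 0.2 * H, 0.6, 1.0)

differential = flux_sq * absorption
integral = np.cumsum(differential) * (z[1] - z[0])
integral /= integral[-1]                  # relative integral worth, 0..1
print("worth at half insertion: %.2f of total" % integral[len(z) // 2])
```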

  17. Mathematical modeling improves EC50 estimations from classical dose-response curves.

    Science.gov (United States)

    Nyman, Elin; Lindgren, Isa; Lövfors, William; Lundengård, Karin; Cervin, Ida; Sjöström, Theresia Arbring; Altimiras, Jordi; Cedersund, Gunnar

    2015-03-01

    The β-adrenergic response is impaired in failing hearts. When studying β-adrenergic function in vitro, the half-maximal effective concentration (EC50) is an important measure of ligand response. We previously measured the in vitro contraction force response of chicken heart tissue to increasing concentrations of adrenaline, and observed a decreasing response at high concentrations. The classical interpretation of such data is to assume a maximal response before the decrease, and to fit a sigmoid curve to the remaining data to determine EC50. Instead, we have applied a mathematical modeling approach to interpret the full dose-response curve in a new way. The developed model predicts a non-steady state caused by the short resting time between increasing agonist concentrations, which affects the dose-response characterization. An improved estimate of EC50 may therefore be calculated using steady-state simulations of the model. The model-based estimation of EC50 is further refined using additional time-resolved data to decrease the uncertainty of the prediction. The resulting model-based EC50 (180-525 nM) is higher than the classically interpreted EC50 (46-191 nM). Mathematical modeling thus makes it possible to re-interpret previously obtained datasets and to make accurate estimates of EC50 even when steady-state measurements are not experimentally feasible. The mathematical models described here have been submitted to the JWS Online Cellular Systems Modelling Database, and may be accessed at http://jjj.bio.vu.nl/database/nyman. © 2015 FEBS.
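
The classical sigmoid interpretation that the model-based approach is compared against can be sketched as a Hill-curve fit; the data points below are invented:

```python
# Estimate EC50 by least squares on a classical Hill dose-response curve.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ec50, n):
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** n)

conc = np.array([1, 3, 10, 30, 100, 300, 1000, 3000], float)  # nM
resp = np.array([2, 5, 14, 35, 62, 83, 95, 98], float)        # % of max

popt, _ = curve_fit(hill, conc, resp, p0=[0, 100, 50, 1])
print("EC50 = %.0f nM, Hill slope = %.2f" % (popt[2], popt[3]))
```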

  18. Chlorophyll induced fluorescence retrieved from GOME2 for improving gross primary productivity estimates of vegetation

    Science.gov (United States)

    van Leth, Thomas C.; Verstraeten, Willem W.; Sanders, Abram F. J.

    2014-05-01

    Mapping terrestrial chlorophyll fluorescence is a crucial activity for obtaining information on the functional status of vegetation and for improving estimates of light-use efficiency (LUE) and gross primary productivity (GPP). GPP quantifies carbon fixation by plant ecosystems and is therefore an important parameter for budgeting terrestrial carbon cycles. Satellite remote sensing offers an excellent tool for investigating GPP in a spatially explicit fashion across different scales of observation. GPP estimates, however, still remain largely uncertain due to biotic and abiotic factors that influence plant production. Sun-induced fluorescence can enhance our knowledge of how environmentally induced changes affect the LUE, and it can be linked to optically derived remote sensing parameters, thereby reducing the uncertainty in GPP estimates. Satellite measurements provide a relatively new perspective on global sun-induced fluorescence, enabling us to quantify spatial distributions and changes over time. Techniques have recently been developed to retrieve fluorescence emissions from hyperspectral satellite measurements. We use data from the Global Ozone Monitoring Experiment-2 (GOME2) to infer terrestrial fluorescence. The spectral signatures of three basic components (atmospheric absorption, surface reflectance, and fluorescence radiance) are separated using reference measurements of non-fluorescent surfaces (deserts, deep oceans and ice) to solve for the atmospheric absorption. An empirically based principal component analysis (PCA) approach is applied, similar to that of Joiner et al. (2013, ACP). Here we show our first global maps of the GOME2 retrievals of chlorophyll fluorescence. First results indicate fluorescence distributions that are similar to those obtained by GOSAT and GOME2 as reported by Joiner et al. (2013, ACP), although we find slightly higher values. In view of optimizing the fluorescence retrieval, we will show the effect of the references

  19. Improvement of force-sensor-based heart rate estimation using multichannel data fusion.

    Science.gov (United States)

    Bruser, Christoph; Kortelainen, Juha M; Winter, Stefan; Tenhunen, Mirja; Parkka, Juha; Leonhardt, Steffen

    2015-01-01

    The aim of this paper is to present and evaluate algorithms for heartbeat interval estimation from multiple spatially distributed force sensors integrated into a bed. Moreover, the benefit of using multichannel systems as opposed to a single sensor is investigated. While it might seem intuitive that multiple channels are superior to a single channel, the main challenge lies in finding suitable methods to actually leverage this potential. To this end, two algorithms for heart rate estimation from multichannel vibration signals are presented and compared against a single-channel sensing solution. The first method operates by analyzing the cepstrum computed from the average spectra of the individual channels, while the second method applies Bayesian fusion to three interval estimators, such as the autocorrelation, which are applied to each channel. This evaluation is based on 28 night-long sleep lab recordings during which an eight-channel polyvinylidene fluoride-based sensor array was used to acquire cardiac vibration signals. The recruited patients suffered from different sleep disorders of varying severity. From the sensor array data, a virtual single-channel signal was also derived for comparison by averaging the channels. The single-channel results achieved a beat-to-beat interval error of 2.2% with a coverage (i.e., percentage of the recording which could be analyzed) of 68.7%. In comparison, the best multichannel results attained a mean error and coverage of 1.0% and 81.0%, respectively. These results present statistically significant improvements of both metrics over the single-channel results (p < 0.05).
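
A simplified sketch of the first (cepstrum-based) method: average the channel magnitude spectra, take the cepstrum, and read the beat-to-beat interval off the dominant quefrency peak. Sampling rate and signals below are synthetic assumptions:

```python
# Multichannel cepstrum-based heartbeat interval estimation.
import numpy as np

fs = 100.0                                   # sampling rate (Hz), assumed
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(0)

# Synthetic 8-channel cardiac vibration: 72 bpm pulse train plus noise.
beat_interval = 60.0 / 72.0
pulse = (np.mod(t, beat_interval) < 0.05).astype(float)
channels = [pulse * rng.uniform(0.5, 1.5) + rng.normal(0, 0.3, t.size)
            for _ in range(8)]

# Average the magnitude spectra across channels, then take the cepstrum.
spectra = np.mean([np.abs(np.fft.rfft(ch)) for ch in channels], axis=0)
cepstrum = np.abs(np.fft.irfft(np.log(spectra + 1e-12)))

# Search for the peak in a physiological range (0.4 s to 2 s).
lo, hi = int(0.4 * fs), int(2.0 * fs)
quefrency = (lo + np.argmax(cepstrum[lo:hi])) / fs
print(f"estimated interval: {quefrency:.2f} s (~{60/quefrency:.0f} bpm)")
```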

  20. An Improved Iterative Fitting Method to Estimate Nocturnal Residual Layer Height

    Directory of Open Access Journals (Sweden)

    Wei Wang

    2016-08-01

    Full Text Available The planetary boundary layer (PBL) is an atmospheric region near the Earth's surface. It is significant for weather forecasting and for the study of air quality and climate. In this study, the tops of nocturnal residual layers, which are what remain of the daytime mixing layer, are estimated by an elastic backscatter Lidar in Wuhan (30.5°N, 114.4°E), a city in Central China. The ideal profile fitting method is widely applied to determine the nocturnal residual layer height (RLH) from Lidar data. However, the method is seriously affected by optically thick layers. Thus, we propose an improved iterative fitting method to eliminate the effect of optically thick layers on RLH detection using Lidar. Two typical case studies observed by elastic Lidar are presented to demonstrate the theory and advantage of the proposed method. Results of the case analysis indicate that the improved method is more practical and precise than the profile-fitting, gradient, and wavelet covariance transform methods for nocturnal RLH evaluation under low cloud conditions. Long-term observations of RLH performed with the ideal profile fitting and improved methods were carried out in Wuhan from 28 May 2011 to 17 June 2016. Comparisons of Lidar-derived RLHs with the two types of methods verify that the improved solution is practical. Statistical analysis of six years of Lidar signals was conducted to reveal the monthly average values of nocturnal RLH in Wuhan. A clear RLH monthly cycle with a maximum mean height of about 1.8 km above ground level was observed in August, and a minimum height of about 0.7 km was observed in January. The variation in monthly mean RLH displays an obvious quarterly dependence, which coincides with the annual variation in local surface temperature.
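
The baseline ideal-profile method that the paper iterates on can be sketched as an erf-curve fit to a backscatter profile (synthetic data; the iterative removal of optically thick layers is not shown):

```python
# Idealized-profile fit: the fitted transition height marks the layer top.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def ideal_profile(z, bm, bu, zm, s):
    # Idealized backscatter: ~bm below the layer top zm, ~bu above it.
    return (bm + bu) / 2 - (bm - bu) / 2 * erf((z - zm) / s)

z = np.linspace(0.1, 3.0, 300)                      # height (km)
rng = np.random.default_rng(0)
true = ideal_profile(z, bm=1.0, bu=0.2, zm=1.2, s=0.1)
obs = true + rng.normal(0, 0.03, z.size)

popt, _ = curve_fit(ideal_profile, z, obs, p0=[1.0, 0.1, 1.0, 0.2])
print("estimated residual layer height: %.2f km" % popt[2])
```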

  1. Improvement of image quality using interpolated projection data estimation method in SPECT

    International Nuclear Information System (INIS)

    Takaki, Akihiro; Soma, Tsutomu; Murase, Kenya; Kojima, Akihiro; Asao, Kimie; Kamada, Shinya; Matsumoto, Masanori

    2009-01-01

    General data acquisition for single photon emission computed tomography (SPECT) is performed in 90 or 60 directions, with a coarse pitch of approximately 4-6 deg over a rotation of 360 deg or 180 deg, using a gamma camera. Under these circumstances, no data between adjacent projections are sampled. The aim of the study was to develop a method to improve SPECT image quality by generating the missing projection data through interpolation of data obtained with a coarse pitch such as 6 deg. The projection data set at each individual degree in 360 directions was generated by a weighted average interpolation method from the projection data acquired with a coarse sampling angle (interpolated projection data estimation processing, the IPDE method). The IPDE method was applied to numerical digital phantom data, actual phantom data and clinical brain data with Tc-99m ethyl cysteinate dimer (ECD). All SPECT images were reconstructed by the filtered back-projection method and compared with the original SPECT images. The results confirmed that streak artifacts decreased after interpolation, by effectively increasing the number of sampling angles, and that the signal-to-noise (S/N) ratio, in terms of the root mean square uncertainty value, also improved. Furthermore, the normalized mean square error values relative to the standard images remained similar after interpolation, and the contrast and concentration ratios improved. These results indicate that effective improvement of image quality can be expected with interpolation. Thus, image quality and the ability to depict images can be improved while maintaining the present acquisition time; this can even be achieved more effectively than at present if the acquisition time is reduced. (author)
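
A minimal sketch of the interpolation step, assuming simple distance-weighted averaging between adjacent acquired angles (the paper's exact weighting scheme may differ):

```python
# Angular interpolation of coarse SPECT projections to 1-degree steps.
import numpy as np

n_views, n_bins, step = 60, 128, 6          # 60 views over 360 deg
rng = np.random.default_rng(0)
coarse = rng.random((n_views, n_bins))      # stand-in acquired projections

fine = np.empty((n_views * step, n_bins))
for k in range(n_views):
    nxt = (k + 1) % n_views                 # wrap around 360 degrees
    for j in range(step):
        w = j / step                        # distance-based weights
        fine[k * step + j] = (1 - w) * coarse[k] + w * coarse[nxt]

# 'fine' now holds 360 projections (one per degree) for FBP reconstruction.
print(fine.shape)
```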

  2. A model to estimate the cost effectiveness of the indoor environment improvements in office work

    Energy Technology Data Exchange (ETDEWEB)

    Seppanen, Olli; Fisk, William J.

    2004-06-01

    Deteriorated indoor climate is commonly related to increases in sick building syndrome symptoms, respiratory illnesses, sick leave, reduced comfort and losses in productivity. The cost of deteriorated indoor climate to society is high. Some calculations show that this cost is higher than the heating energy costs of the same buildings. Building-level calculations have also shown that many measures taken to improve indoor air quality and climate are cost-effective when the potential monetary savings resulting from an improved indoor climate are included as benefits gained. As an initial step towards systemizing these building-level calculations, we have developed a conceptual model to estimate the cost-effectiveness of various measures. The model shows the links between improvements in the indoor environment and the following potential financial benefits: reduced medical care cost, reduced sick leave, better performance of work, lower turnover of employees, and lower cost of building maintenance due to fewer complaints about indoor air quality and climate. The pathways to these potential benefits from changes in building technology and practices go via several human responses to the indoor environment, such as infectious diseases, allergies and asthma, sick building syndrome symptoms, perceived air quality, and the thermal environment. The model also includes the annual cost of investments, operation costs, and the cost savings of improved indoor climate. The conceptual model illustrates how various factors are linked to each other. SBS symptoms are probably the most commonly assessed health responses in IEQ studies and have been linked to several characteristics of buildings and IEQ. While the available evidence indicates that SBS symptoms can affect these outcomes and suggests that such a linkage exists, at present we cannot quantify the relationships sufficiently for cost-benefit modeling. New research and analyses of existing data to quantify the financial

  3. Does an individual estimation of halflife improve the results of radioiodine therapy of Graves' disease?

    International Nuclear Information System (INIS)

    Schneider, P.; Koerber, C.; Koerber-Hafner, N.; Haenscheid, H.; Reiners, Chr.

    2002-01-01

    Aim: The impact of our dosimetry concept on the success of radioiodine therapy in Graves' disease (GD) was analysed. Three questions arose: Did individual estimation of the pretherapeutic halflife improve therapeutic success? Did individual dosimetry result in accurate dose calculation? Did antithyroid medication have a measurable influence on therapeutic success under the prevailing conditions? Methods: 126 consecutive patients with GD were treated with 200 Gy of I-131 in our therapy ward and followed up six to nine months after therapy. The success rate was assessed using a standardized protocol: treatment was classified as successful when the patient was euthyroid or hypothyroid, and as unsuccessful when he or she presented with a suppressed TSH level or in a hyperthyroid condition after withdrawal of antithyroid medication. Antithyroid medication, I-131 activity, dose, concentrations of fT3 and fT4, specific delivered dose and halflife were entered into a multiple regression model to assess their influence on therapeutic success. In order to assess possible factors disturbing the therapeutic outcome, relevant parameters were analyzed using a Logit transformation. Results: Of 126 patients, 84 were classified as successfully treated and 42 (33.3%) as failures. A significant influence on the outcome was found only for thyroid mass. However, therapeutic success appeared to be more distinctly determined by the specific delivered dose calculated with an estimated halflife of 5.5 days (Odds: 10.0).

  4. Calibrated Tully-Fisher Relations For Improved Photometric Estimates Of Disk Rotation Velocities

    Science.gov (United States)

    Reyes, Reinabelle; Mandelbaum, R.; Gunn, J. E.; Pizagno, J.

    2011-01-01

    We present calibrated scaling relations (also referred to as Tully-Fisher relations or TFRs) between rotation velocity and photometric quantities (absolute magnitude, stellar mass, and synthetic magnitude, a linear combination of absolute magnitude and color) of disk galaxies at z ~ 0.1. First, we selected a parent disk sample of 170,000 galaxies from SDSS DR7, with redshifts between 0.02 and 0.10 and r-band absolute magnitudes between -18.0 and -22.5. Then, we constructed a child disk sample of 189 galaxies that span the parameter space (in absolute magnitude, color, and disk size) covered by the parent sample, and for which we have obtained kinematic data. Long-slit spectroscopy was obtained from the Dual Imaging Spectrograph (DIS) at the Apache Point Observatory 3.5 m telescope for 99 galaxies, and from Pizagno et al. (2007) for 95 galaxies (five have repeat observations). We find the best photometric estimator of disk rotation velocity to be a synthetic magnitude with a color correction that is consistent with the Bell et al. (2003) color-based stellar mass ratio. The improved rotation velocity estimates have a wide range of scientific applications; in particular, in combination with weak lensing measurements, they enable us to constrain the ratio of optical-to-virial velocity in disk galaxies.

  5. Delay Estimator and Improved Proportionate Multi-Delay Adaptive Filtering Algorithm

    Directory of Open Access Journals (Sweden)

    E. Verteletskaya

    2012-04-01

    Full Text Available This paper pertains to speech and acoustic signal processing, and particularly to the determination of echo path delay and the operation of echo cancellers. To cancel long echoes, the number of weights in a conventional adaptive filter must be large. The length of the adaptive filter directly affects both the degree of accuracy and the convergence speed of the adaptation process. We present a new adaptive structure which is capable of dealing with multiple dispersive echo paths. An adaptive filter according to the present invention includes means for storing an impulse response in a memory, the impulse response being indicative of the characteristics of a transmission line. It also includes a delay estimator for detecting ranges of samples within the impulse response having a relatively large distribution of echo energy, these ranges of samples being indicative of echoes on the transmission line. The adaptive filter has a plurality of weighted taps, each with an associated tap weight value. A tap allocation/control circuit establishes the tap weight values in response to the delay estimator so that only taps within the regions of relatively large distributions of echo energy are turned on. Thus, the convergence speed and the estimation accuracy of the adaptation process can be improved.
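
A sketch of the tap-allocation idea under stated assumptions (synthetic impulse response, white-noise far-end signal, ad hoc energy threshold): estimate the active echo regions, then run NLMS only on those taps:

```python
# Selective-tap NLMS echo canceller driven by a delay/energy estimate.
import numpy as np

rng = np.random.default_rng(0)
N = 256                                       # full echo span in samples
h_true = np.zeros(N)
h_true[40:56] = rng.normal(0, 0.5, 16)        # two dispersive echo paths
h_true[180:196] = rng.normal(0, 0.3, 16)

# Delay estimator: mark taps whose measured energy exceeds a threshold.
h_meas = h_true + rng.normal(0, 0.02, N)
active = np.abs(h_meas) > 0.05                # only these taps are adapted

x = rng.normal(0, 1, 5000)                    # far-end signal stand-in
d = np.convolve(x, h_true)[: x.size]          # echo picked up at the mic

w = np.zeros(N)
mu, eps = 0.5, 1e-6
for n in range(N, x.size):
    xv = x[n - N + 1 : n + 1][::-1]           # tap input vector
    e = d[n] - w @ xv                         # residual echo
    w[active] += mu * e * xv[active] / (xv @ xv + eps)  # NLMS, active taps
print("final |e|: %.4f" % abs(e))
```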

  6. Base pair probability estimates improve the prediction accuracy of RNA non-canonical base pairs.

    Directory of Open Access Journals (Sweden)

    Michael F Sloma

    2017-11-01

    Full Text Available Prediction of RNA tertiary structure from sequence is an important problem, but generating accurate structure models for even short sequences remains difficult. Predictions of RNA tertiary structure tend to be least accurate in loop regions, where non-canonical pairs are important for determining the details of structure. Non-canonical pairs can be predicted using a knowledge-based model of structure that scores nucleotide cyclic motifs, or NCMs. In this work, a partition function algorithm is introduced that allows the estimation of base pairing probabilities for both canonical and non-canonical interactions. Pairs that are predicted to be probable are more likely to be found in the true structure than pairs of lower probability. Pair probability estimates can be further improved by predicting the structure conserved across multiple homologous sequences using the TurboFold algorithm. These pairing probabilities, used in concert with prior knowledge of the canonical secondary structure, allow accurate inference of non-canonical pairs, an important step towards accurate prediction of the full tertiary structure. Software to predict non-canonical base pairs and pairing probabilities is now provided as part of the RNAstructure software package.

  7. Base pair probability estimates improve the prediction accuracy of RNA non-canonical base pairs.

    Science.gov (United States)

    Sloma, Michael F; Mathews, David H

    2017-11-01

    Prediction of RNA tertiary structure from sequence is an important problem, but generating accurate structure models for even short sequences remains difficult. Predictions of RNA tertiary structure tend to be least accurate in loop regions, where non-canonical pairs are important for determining the details of structure. Non-canonical pairs can be predicted using a knowledge-based model of structure that scores nucleotide cyclic motifs, or NCMs. In this work, a partition function algorithm is introduced that allows the estimation of base pairing probabilities for both canonical and non-canonical interactions. Pairs that are predicted to be probable are more likely to be found in the true structure than pairs of lower probability. Pair probability estimates can be further improved by predicting the structure conserved across multiple homologous sequences using the TurboFold algorithm. These pairing probabilities, used in concert with prior knowledge of the canonical secondary structure, allow accurate inference of non-canonical pairs, an important step towards accurate prediction of the full tertiary structure. Software to predict non-canonical base pairs and pairing probabilities is now provided as part of the RNAstructure software package.

  8. Improving estimation of kinetic parameters in dynamic force spectroscopy using cluster analysis

    Science.gov (United States)

    Yen, Chi-Fu; Sivasankar, Sanjeevi

    2018-03-01

    Dynamic Force Spectroscopy (DFS) is a widely used technique to characterize the dissociation kinetics and interaction energy landscape of receptor-ligand complexes with single-molecule resolution. In an Atomic Force Microscope (AFM)-based DFS experiment, receptor-ligand complexes, sandwiched between an AFM tip and a substrate, are ruptured at different stress rates by varying the speed at which the AFM tip and substrate are pulled away from each other. The rupture events are grouped according to their pulling speeds, and the mean force and loading rate of each group are calculated. These data are subsequently fit to established models, and energy landscape parameters such as the intrinsic off-rate (koff) and the width of the potential energy barrier (xβ) are extracted. However, due to large uncertainties in determining the mean forces and loading rates of the groups, errors in the estimated koff and xβ can be substantial. Here, we demonstrate that the accuracy of fitted parameters in a DFS experiment can be dramatically improved by sorting rupture events into groups using cluster analysis instead of sorting them according to their pulling speeds. We test different clustering algorithms, including Gaussian mixture, logistic regression, and K-means clustering, under conditions that closely mimic DFS experiments. Using Monte Carlo simulations, we benchmark the performance of these clustering algorithms over a wide range of koff and xβ, under different levels of thermal noise, and as a function of both the number of unbinding events and the number of pulling speeds. Our results demonstrate that cluster analysis, particularly K-means clustering, is very effective at improving the accuracy of parameter estimation, particularly when the number of unbinding events is limited and the events are not well separated into distinct groups. Cluster analysis is easy to implement, and our performance benchmarks serve as a guide in choosing an appropriate method for DFS data analysis.
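
A condensed illustration of the proposed workflow, with simulated data: cluster rupture events by K-means in (log loading rate, force) space, then fit the Bell-Evans relation f* = (kT/x) * ln(r * x / (koff * kT)) to the cluster means to recover koff and xβ:

```python
# K-means grouping of rupture events followed by a Bell-Evans fit.
import numpy as np
from sklearn.cluster import KMeans

kT = 4.11e-21                      # J, thermal energy at room temperature
x_true, koff_true = 0.5e-9, 1.0    # m, 1/s (assumed ground truth)

rng = np.random.default_rng(0)
rates = np.concatenate([np.full(200, r) * rng.lognormal(0, 0.2, 200)
                        for r in (1e-10, 1e-9, 1e-8)])       # N/s, jittered
f_star = (kT / x_true) * np.log(rates * x_true / (koff_true * kT))
forces = f_star + rng.normal(0, 5e-12, f_star.size)          # 5 pN noise

pts = np.column_stack([np.log(rates), forces / 1e-12])       # scaled features
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pts)

means = np.array([[np.log(rates[labels == k]).mean(),
                   forces[labels == k].mean()] for k in range(3)])
slope, intercept = np.polyfit(means[:, 0], means[:, 1], 1)
x_est = kT / slope                                  # slope = kT / x_beta
koff_est = (x_est / kT) * np.exp(-intercept / slope)
print(f"x_beta = {x_est*1e9:.2f} nm, koff = {koff_est:.2f} 1/s")
```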

  9. Incorporating GOES Satellite Photosynthetically Active Radiation (PAR) Retrievals to Improve Biogenic Emission Estimates in Texas

    Science.gov (United States)

    Zhang, Rui; White, Andrew T.; Pour Biazar, Arastoo; McNider, Richard T.; Cohan, Daniel S.

    2018-01-01

    This study examines the influence of insolation and cloud retrieval products from the Geostationary Operational Environmental Satellite (GOES) system on biogenic emission estimates and ozone simulations in Texas. Compared to surface pyranometer observations, satellite-retrieved insolation and photosynthetically active radiation (PAR) values tend to systematically correct the overestimation of downwelling shortwave radiation in the Weather Research and Forecasting (WRF) model. The correlation coefficient increases from 0.93 to 0.97, and the normalized mean error decreases from 36% to 21%. The isoprene and monoterpene emissions estimated by the Model of Emissions of Gases and Aerosols from Nature are on average 20% and 5% less, respectively, when PAR from the direct satellite retrieval is used rather than the control WRF run. The reduction in biogenic emission rates using satellite PAR reduced the predicted maximum daily 8 h ozone concentration by up to 5.3 ppbV over the Dallas-Fort Worth (DFW) region on some days. However, episode average ozone response is less sensitive, with a 0.6 ppbV decrease near DFW and 0.3 ppbV increase over East Texas. The systematic overestimation of isoprene concentrations in a WRF control case is partially corrected by using satellite PAR, which observes more clouds than are simulated by WRF. Further, assimilation of GOES-derived cloud fields in WRF improved CAMx model performance for ground-level ozone over Texas. Additionally, it was found that using satellite PAR improved the model's ability to replicate the spatial pattern of satellite-derived formaldehyde columns and aircraft-observed vertical profiles of isoprene.

  10. Improved estimation of the noncentrality parameter distribution from a large number of t-statistics, with applications to false discovery rate estimation in microarray data analysis.

    Science.gov (United States)

    Qu, Long; Nettleton, Dan; Dekkers, Jack C M

    2012-12-01

    Given a large number of t-statistics, we consider the problem of approximating the distribution of noncentrality parameters (NCPs) by a continuous density. This problem is closely related to the control of false discovery rates (FDR) in massive hypothesis testing applications, e.g., microarray gene expression analysis. Our methodology is similar to, but improves upon, the existing approach by Ruppert, Nettleton, and Hwang (2007, Biometrics, 63, 483-495). We provide parametric, nonparametric, and semiparametric estimators for the distribution of NCPs, as well as estimates of the FDR and local FDR. In the parametric situation, we assume that the NCPs follow a distribution that leads to an analytically available marginal distribution for the test statistics. In the nonparametric situation, we use convex combinations of basis density functions to estimate the density of the NCPs. A sequential quadratic programming procedure is developed to maximize the penalized likelihood. The smoothing parameter is selected with the approximate network information criterion. A semiparametric estimator is also developed to combine both parametric and nonparametric fits. Simulations show that, under a variety of situations, our density estimates are closer to the underlying truth and our FDR estimates are improved compared with alternative methods. Data-based simulations and the analyses of two microarray datasets are used to evaluate the performance in realistic situations. © 2012, The International Biometric Society.

  11. Facilitated Oxygen Chemisorption in Heteroatom-Doped Carbon for Improved Oxygen Reaction Activity in All-Solid-State Zinc-Air Batteries.

    Science.gov (United States)

    Liu, Sisi; Wang, Mengfan; Sun, Xinyi; Xu, Na; Liu, Jie; Wang, Yuzhou; Qian, Tao; Yan, Chenglin

    2018-01-01

    Driven by the intensified demand for energy storage systems with high power density and safety, all-solid-state zinc-air batteries have drawn extensive attention. However, the electrocatalyst active sites and the underlying mechanisms occurring in zinc-air batteries remain poorly understood due to the lack of in situ analytical techniques. In this work, in situ observations, including X-ray diffraction and Raman spectroscopy, of a heteroatom-doped carbon air cathode are reported, in which the chemisorption of oxygen molecules and oxygen-containing intermediates on the carbon material is facilitated by the electron deficiency caused by heteroatom doping, thus improving the oxygen reaction activity for zinc-air batteries. As expected, solid-state zinc-air batteries equipped with such air cathodes exhibit superior reversibility and durability. This work thus provides a deeper understanding of the reaction principles of heteroatom-doped carbon materials in zinc-air batteries. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Modified ensemble Kalman filter for nuclear accident atmospheric dispersion: prediction improved and source estimated.

    Science.gov (United States)

    Zhang, X L; Su, G F; Yuan, H Y; Chen, J G; Huang, Q Y

    2014-09-15

    Atmospheric dispersion models play an important role in nuclear power plant accident management. A reliable estimate of the radioactive material distribution at short range (about 50 km) is urgently needed for population sheltering and evacuation planning. However, the meteorological data and the source term, which greatly influence the accuracy of atmospheric dispersion models, are usually poorly known at the early phase of an emergency. In this study, a modified ensemble Kalman filter data assimilation method, in conjunction with a Lagrangian puff model, is proposed to simultaneously improve the model prediction and reconstruct the source terms for short-range atmospheric dispersion using off-site environmental monitoring data. Four main uncertain parameters are considered: source release rate, plume rise height, wind speed and wind direction. Twin experiments show that the method effectively improves the predicted concentration distribution, and that the temporal profiles of source release rate and plume rise height are also successfully reconstructed. Moreover, the time lag in the response of the ensemble Kalman filter is shortened. The method proposed here can be a useful tool not only in nuclear power plant accident emergency management but also in other situations where hazardous material is released into the atmosphere. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. The use of a policy dialogue to facilitate evidence-informed policy development for improved access to care: the case of the Winnipeg Central Intake Service (WCIS).

    Science.gov (United States)

    Damani, Zaheed; MacKean, Gail; Bohm, Eric; DeMone, Brie; Wright, Brock; Noseworthy, Tom; Holroyd-Leduc, Jayna; Marshall, Deborah A

    2016-10-18

    Policy dialogues are critical for developing responsive, effective, sustainable, evidence-informed policy. Our multidisciplinary team, including researchers, physicians and senior decision-makers, comprehensively evaluated The Winnipeg Central Intake Service, a single-entry model in Winnipeg, Manitoba, to improve patient access to hip/knee replacement surgery. We used the evaluation findings to develop five evidence-informed policy directions to help improve access to scheduled clinical services across Manitoba. Using guiding principles of public participation processes, we hosted a policy roundtable meeting to engage stakeholders and use their input to refine the policy directions. Here, we report on the use and input of a policy roundtable meeting and its role in contributing to the development of evidence-informed policy. Our evidence-informed policy directions focused on formal measurement/monitoring of quality, central intake as a preferred model for service delivery, provincial scope, transparent processes/performance indicators, and patient choice of provider. We held a policy roundtable meeting and used outcomes of facilitated discussions to refine these directions. Individuals from our team and six stakeholder groups across Manitoba participated (n = 44), including patients, family physicians, orthopaedic surgeons, surgical office assistants, Winnipeg Central Intake team, and administrators/managers. We developed evaluation forms to assess the meeting process, and collected decision-maker partners' perspectives on the value of the policy roundtable meeting and use of policy directions to improve access to scheduled clinical services after the meeting, and again 15 months later. We analyzed roundtable and evaluation data using thematic analysis to identify key themes. Four key findings emerged. First, participants supported all policy directions, with revisions and key implementation considerations identified. Second, participants felt the policy roundtable

  14. Soil temperature synchronisation improves estimation of daily variation of ecosystem respiration in Sphagnum peatlands

    Science.gov (United States)

    D'Angelo, Benoît; Gogo, Sébastien; Le Moing, Franck; Jégou, Fabrice; Guimbaud, Christophe; Laggoun, Fatima

    2015-04-01

    comparison was performed using RMSE (goodness of fit) and AIC (goodness of fit and model complexity) as indicators to assess their relative quality. Both indicators showed wide variation between sites. However, for each site the differences between synchronised and non-synchronised data were larger than the differences between model equations. According to the AIC, models using synchronised data produced better ER estimates than models using non-synchronised data at all depths. The RMSE supports this result at all sites for the superficial peat layer. In some locations, mainly Frasne, synchronised data at 5 cm depth provided better estimates than air temperature, i.e. 25.0 vs. 26.4 for RMSE and 337.1 vs. 379.8 for AIC, respectively. The equation of the most appropriate model varies between sites, but the differences between them are small. At a daily scale, data synchronisation in Sphagnum peatlands improves ER estimation regardless of the model used. Moreover, to estimate ER flux, the use of synchronised data at 5 cm depth seems the most adequate method.
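
The two indicators are straightforward to compute for any candidate model; a small sketch with a generic exponential respiration model ER = a*exp(b*T) and synthetic data (the model form is an assumption for illustration):

```python
# RMSE and AIC comparison of a respiration model driven by two temperatures.
import numpy as np
from scipy.optimize import curve_fit

def er_model(T, a, b):
    return a * np.exp(b * T)

rng = np.random.default_rng(0)
T_soil = rng.uniform(2, 20, 60)                   # soil temperature (deg C)
er_obs = er_model(T_soil, 1.5, 0.08) + rng.normal(0, 0.3, 60)

def score(T):
    popt, _ = curve_fit(er_model, T, er_obs, p0=[1, 0.1])
    resid = er_obs - er_model(T, *popt)
    n, k = er_obs.size, len(popt)
    rmse = np.sqrt(np.mean(resid ** 2))
    aic = n * np.log(np.sum(resid ** 2) / n) + 2 * k
    return rmse, aic

T_air = T_soil + rng.normal(0, 3, 60)             # unsynchronised proxy
for name, T in [("soil (synchronised)", T_soil), ("air", T_air)]:
    rmse, aic = score(T)
    print(f"{name}: RMSE = {rmse:.3f}, AIC = {aic:.1f}")
```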

  15. AN IMPROVED DISTANCE AND MASS ESTIMATE FOR SGR A* FROM A MULTISTAR ORBIT ANALYSIS

    Energy Technology Data Exchange (ETDEWEB)

    Boehle, A.; Ghez, A. M.; Meyer, L.; Yelda, S.; Albers, S.; Martinez, G. D.; Becklin, E. E.; Do, T.; Morris, M. R.; Sitarski, B.; Witzel, G. [UCLA, Department of Physics and Astronomy, Los Angeles, CA 90095 (United States); Schödel, R. [Instituto de Astrofísica de Andalucía (CSIC), Glorieta de la Astronomía S/N, E-18008 Granada (Spain); Lu, J. R. [Institute for Astronomy, University of Hawaii, Honolulu, HI 96822 (United States); Matthews, K., E-mail: aboehle@astro.ucla.edu [Division of Physics, Mathematics, and Astronomy, California Institute of Technology, MC 301-17, Pasadena, CA 91125 (United States)

    2016-10-10

    We present new, more precise measurements of the mass and distance of our Galaxy's central supermassive black hole, Sgr A*. These results stem from a new analysis that more than doubles the time baseline for astrometry of faint stars orbiting Sgr A*, combining 2 decades of speckle imaging and adaptive optics data. Specifically, we improve our analysis of the speckle images by using information about a star's orbit from the deep adaptive optics data (2005–2013) to inform the search for the star in the speckle years (1995–2005). When this new analysis technique is combined with the first complete re-reduction of Keck Galactic Center speckle images using speckle holography, we are able to track the short-period star S0-38 (K-band magnitude = 17, orbital period = 19 yr) through the speckle years. We use the kinematic measurements from speckle holography and adaptive optics to estimate the orbits of S0-38 and S0-2 and thereby improve our constraints on the mass (M_bh) and distance (R_0) of Sgr A*: M_bh = (4.02 ± 0.16 ± 0.04) × 10^6 M_⊙ and R_0 = 7.86 ± 0.14 ± 0.04 kpc. The uncertainties in M_bh and R_0 as determined by the combined orbital fit of S0-2 and S0-38 are improved by factors of 2 and 2.5, respectively, compared to an orbital fit of S0-2 alone, and by a factor of ~2.5 compared to previous results from stellar orbits. This analysis also limits the extended dark mass within 0.01 pc to less than 0.13 × 10^6 M_⊙ at 99.7% confidence, a factor of 3 lower than prior work.

  16. Improved ESPRIT Method for Joint Direction-of-Arrival and Frequency Estimation Using Multiple-Delay Output

    Directory of Open Access Journals (Sweden)

    Wang Xudong

    2012-01-01

    Full Text Available An automatic-pairing joint direction-of-arrival (DOA) and frequency estimation method is presented to overcome the unsatisfactory performance of the estimation of signal parameters via rotational invariance techniques (ESPRIT)-like algorithm of Wang (2010), which requires an additional pairing step. By using the multiple-delay output of a uniform linear antenna array (ULA), the proposed algorithm can jointly estimate angles and frequencies with an improved ESPRIT. Compared with Wang’s ESPRIT algorithm, the angle estimation performance of the proposed algorithm is greatly improved, while its frequency estimation performance is the same as that of Wang’s algorithm. Furthermore, the proposed algorithm obtains automatically paired DOA and frequency parameters and has computational complexity comparable to that of Wang’s ESPRIT algorithm. The proposed algorithm also works well for nonuniform linear arrays. Its useful behavior is verified by simulations.
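
    The paper's joint multiple-delay, auto-pairing variant is not reproduced here; as a rough, hypothetical illustration of the rotational-invariance machinery it builds on, the sketch below is plain ESPRIT for DOA only, on a simulated half-wavelength ULA (array size, source angles and noise level are invented).

    ```python
    import numpy as np

    def esprit_doa(X, n_sources):
        """Plain ESPRIT DOA estimation for a half-wavelength ULA.

        X: (n_sensors, n_snapshots) complex snapshot matrix.
        Returns DOA estimates in degrees.
        """
        R = X @ X.conj().T / X.shape[1]            # sample covariance
        _, eigvecs = np.linalg.eigh(R)             # eigenvalues ascending
        Es = eigvecs[:, -n_sources:]               # signal subspace
        # Rotational invariance between the two overlapping subarrays
        Phi = np.linalg.pinv(Es[:-1]) @ Es[1:]
        phases = np.angle(np.linalg.eigvals(Phi))  # = pi * sin(theta)
        return np.degrees(np.arcsin(phases / np.pi))

    # Invented scenario: 8-element ULA, two sources at -20 and 35 degrees
    rng = np.random.default_rng(1)
    m, t = 8, 500
    doas = np.radians([-20.0, 35.0])
    A = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(doas)))
    S = (rng.standard_normal((2, t)) + 1j * rng.standard_normal((2, t))) / np.sqrt(2)
    N = 0.1 * (rng.standard_normal((m, t)) + 1j * rng.standard_normal((m, t)))
    print(np.sort(esprit_doa(A @ S + N, n_sources=2)))   # ~[-20, 35]
    ```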

  17. Improving Satellite Quantitative Precipitation Estimation Using GOES-Retrieved Cloud Optical Depth

    Energy Technology Data Exchange (ETDEWEB)

    Stenz, Ronald; Dong, Xiquan; Xi, Baike; Feng, Zhe; Kuligowski, Robert J.

    2016-02-01

    To address significant gaps in ground-based radar coverage and rain gauge networks in the U.S., geostationary satellite quantitative precipitation estimates (QPEs) such as the Self-Calibrating Multivariate Precipitation Retrievals (SCaMPR) can be used to fill in both the spatial and temporal gaps of ground-based measurements. Additionally, with the launch of GOES-R, the temporal resolution of satellite QPEs may be comparable to that of Weather Service Radar-1988 Doppler (WSR-88D) volume scans as GOES images will be available every five minutes. However, while satellite QPEs have strengths in spatial coverage and temporal resolution, they face limitations particularly during convective events. Deep Convective Systems (DCSs) have large cloud shields with similar brightness temperatures (BTs) over nearly the entire system, but widely varying precipitation rates beneath these clouds. Geostationary satellite QPEs relying on the indirect relationship between BTs and precipitation rates often suffer from large errors because anvil regions (little/no precipitation) cannot be distinguished from rain-cores (heavy precipitation) using only BTs. However, a combination of BTs and optical depth (τ) has been found to reduce overestimates of precipitation in anvil regions (Stenz et al. 2014). A new rain mask algorithm incorporating both τ and BTs has been developed, and its application to the existing SCaMPR algorithm was evaluated. The performance of the modified SCaMPR was evaluated using traditional skill scores and a more detailed analysis of performance in individual DCS components by utilizing the Feng et al. (2012) classification algorithm. SCaMPR estimates with the new rain mask applied benefited from significantly reduced overestimates of precipitation in anvil regions and overall improvements in skill scores.
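
    A toy version of such a two-predictor rain mask is sketched below; the thresholds are invented for illustration and do not correspond to the values used in SCaMPR or in Stenz et al. (2014).

    ```python
    import numpy as np

    # Invented thresholds; the operational values differ.
    BT_RAIN_MAX = 235.0    # K: cold cloud tops, potentially raining
    TAU_RAIN_MIN = 30.0    # optically thick cores; thin anvils fall below this

    def rain_mask(bt, tau):
        """Flag rain-core pixels using brightness temperature AND optical depth.

        BT alone marks the whole cold cloud shield (anvil included); requiring
        a large optical depth screens the anvil out.
        """
        return (bt <= BT_RAIN_MAX) & (tau >= TAU_RAIN_MIN)

    bt = np.array([[230.0, 232.0], [231.0, 280.0]])    # K
    tau = np.array([[45.0, 8.0], [60.0, 2.0]])
    print(rain_mask(bt, tau))
    # [[ True False]   <- cold+thick core kept, cold+thin anvil rejected
    #  [ True False]]
    ```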

  18. An improved Q estimation approach: the weighted centroid frequency shift method

    Science.gov (United States)

    Li, Jingnan; Wang, Shangxu; Yang, Dengfeng; Dong, Chunhui; Tao, Yonghui; Zhou, Yatao

    2016-06-01

    Seismic wave propagation in subsurface media suffers from absorption, which can be quantified by the quality factor Q. Accurate estimation of the Q factor is of great importance for the resolution enhancement of seismic data, precise imaging and interpretation, and reservoir prediction and characterization. The centroid frequency shift method (CFS) is currently one of the most commonly used Q estimation methods. However, for seismic data that contain noise, the accuracy and stability of Q extracted using CFS depend on the choice of frequency band. In order to reduce the influence of frequency band choices and obtain Q with greater precision and robustness, we present an improved CFS Q measurement approach—the weighted CFS method (WCFS), which incorporates a Gaussian weighting coefficient into the calculation procedure of the conventional CFS. The basic idea is to enhance the proportion of advantageous frequencies in the amplitude spectrum and reduce the weight of disadvantageous frequencies. In this novel method, we first construct a Gaussian function using the centroid frequency and variance of the reference wavelet. Then we employ it as the weighting coefficient for the amplitude spectrum of the original signal. Finally, the conventional CFS is adopted for the weighted amplitude spectrum to extract the Q factor. Numerical tests on noise-free synthetic data demonstrate that the WCFS is feasible and efficient, and produces more accurate results than the conventional CFS. Tests on noisy synthetic data indicate that the new method has better anti-noise capability than the CFS. The application to field vertical seismic profile (VSP) data further demonstrates its validity.
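
    The following sketch is one self-consistent reading of the recipe above, not the paper's exact formulation: a Gaussian weight is built from the reference wavelet's centroid frequency and variance, both spectra are weighted, and the centroid-shift relation of Quan and Harris (1997), which is exact for Gaussian spectra, is applied to the weighted spectra.

    ```python
    import numpy as np

    def centroid_and_variance(freqs, amp):
        # Spectral centroid and variance of an amplitude spectrum
        w = amp / np.sum(amp)
        fc = np.sum(freqs * w)
        return fc, np.sum((freqs - fc) ** 2 * w)

    def q_wcfs(freqs, amp_ref, amp_target, travel_time):
        # Gaussian weight from the reference wavelet's centroid and variance
        fc0, var0 = centroid_and_variance(freqs, amp_ref)
        weight = np.exp(-((freqs - fc0) ** 2) / (2.0 * var0))
        # Conventional CFS (Q = pi * t * var / centroid downshift),
        # applied to the weighted spectra
        fc_r, var_r = centroid_and_variance(freqs, amp_ref * weight)
        fc_t, _ = centroid_and_variance(freqs, amp_target * weight)
        return np.pi * travel_time * var_r / (fc_r - fc_t)

    # Noise-free check with a Gaussian reference spectrum and Q = 50
    freqs = np.linspace(0.0, 100.0, 501)
    amp_ref = np.exp(-((freqs - 40.0) ** 2) / (2 * 15.0 ** 2))
    q_true, t = 50.0, 0.5
    amp_att = amp_ref * np.exp(-np.pi * freqs * t / q_true)
    print(q_wcfs(freqs, amp_ref, amp_att, t))   # ~50
    ```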

  19. Better estimation of protein-DNA interaction parameters improve prediction of functional sites

    Directory of Open Access Journals (Sweden)

    O'Flanagan Ruadhan A

    2008-12-01

    Full Text Available Abstract Background Characterizing transcription factor binding motifs is a common bioinformatics task. For transcription factors with variable binding sites, many suboptimal binding sites are needed in the training dataset to obtain accurate estimates of the free energy penalties for deviating from the consensus DNA sequence. One procedure to do that involves a modified SELEX (Systematic Evolution of Ligands by Exponential Enrichment) method designed to produce many such sequences. Results We analyzed low-stringency SELEX data for the E. coli Catabolite Activator Protein (CAP), and we show here that appropriate quantitative analysis improves our ability to predict in vitro affinity. To obtain the large number of sequences required for this analysis, we used the SELEX-SAGE protocol developed by Roulet et al. The resulting sequences were subjected to bioinformatic analysis. The resulting bioinformatic model characterizes the sequence specificity of the protein more accurately than the specificities predicted from previous analyses that used only the few known binding sites available in the literature. The consequences of this increase in accuracy for the prediction of in vivo binding sites (and especially functional ones) in the E. coli genome are also discussed. We measured the dissociation constants of several putative CAP binding sites by EMSA (electrophoretic mobility shift assay) and compared the affinities to the bioinformatics scores provided by methods such as the weight matrix method and QPMEME (quadratic programming method of energy matrix estimation), trained on known binding sites as well as on the new sites from the SELEX-SAGE data. We also checked predicted genome sites for conservation in the related species S. typhimurium. We found that bioinformatics scores based on SELEX-SAGE data do better in terms of predicting physical binding energies as well as in detecting functional sites. Conclusion We think that training binding site detection
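
    As a generic illustration of the weight-matrix scoring mentioned above (not the QPMEME energy-matrix fit), here is a minimal log-odds position weight matrix built from toy aligned sites; the sites are invented, not CAP data.

    ```python
    import numpy as np

    BASES = "ACGT"

    def make_pwm(sites, pseudocount=0.5, background=0.25):
        """Position weight matrix (log-odds) from aligned binding sites."""
        length = len(sites[0])
        counts = np.full((length, 4), pseudocount)
        for site in sites:
            for i, base in enumerate(site):
                counts[i, BASES.index(base)] += 1
        probs = counts / counts.sum(axis=1, keepdims=True)
        return np.log2(probs / background)

    def score(pwm, seq):
        """Additive log-odds score of a candidate site (higher = better)."""
        return sum(pwm[i, BASES.index(b)] for i, b in enumerate(seq))

    # Toy aligned sites (hypothetical)
    sites = ["TGTGA", "TGTGA", "TTTGA", "TGCGA", "TGTGA"]
    pwm = make_pwm(sites)
    print(round(score(pwm, "TGTGA"), 2), round(score(pwm, "AAAAA"), 2))
    ```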

  20. Improving terrestrial evaporation estimates over continental Australia through assimilation of SMOS soil moisture

    Science.gov (United States)

    Martens, B.; Miralles, D.; Lievens, H.; Fernández-Prieto, D.; Verhoest, N. E. C.

    2016-06-01

    Terrestrial evaporation is an essential variable in the climate system that links the water, energy and carbon cycles over land. Despite this crucial importance, it remains one of the most uncertain components of the hydrological cycle, mainly due to known difficulties in modelling the constraints imposed by land water availability on terrestrial evaporation. The main objective of this study is to assimilate satellite soil moisture observations from the Soil Moisture and Ocean Salinity (SMOS) mission into an existing evaporation model. Our overarching goal is to find an optimal use of satellite soil moisture that can help to improve our understanding of evaporation at continental scales. To this end, the Global Land Evaporation Amsterdam Model (GLEAM) is used to simulate evaporation fields over continental Australia for the period September 2010-December 2013. SMOS soil moisture observations are assimilated using a Newtonian nudging algorithm in a series of experiments. Model estimates of surface soil moisture and evaporation are validated against soil moisture probe and eddy-covariance measurements, respectively. Finally, an analogous experiment in which Advanced Microwave Scanning Radiometer (AMSR-E) soil moisture is assimilated (instead of SMOS) allows us to perform a relative assessment of the quality of both satellite soil moisture products. Results indicate that the modelled soil moisture from GLEAM can be improved through the assimilation of SMOS soil moisture: the average correlation coefficient between in situ measurements and the modelled soil moisture over the complete sample of stations increased from 0.68 to 0.71, and a statistically significant increase in the correlations is achieved for 17 out of the 25 individual stations. Our results also suggest a higher accuracy of the ascending SMOS data compared to the descending data, and overall higher quality of SMOS compared to AMSR-E retrievals over Australia. On the other hand, the effect of soil moisture data
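
    A single Newtonian-nudging analysis step can be sketched as below; the gain, error terms and soil moisture values are illustrative assumptions, not the GLEAM configuration.

    ```python
    import numpy as np

    def nudge(model_sm, obs_sm, gain=0.3, obs_err=0.04, model_err=0.04):
        """One Newtonian-nudging analysis step for surface soil moisture.

        The model state is relaxed toward the observation; the pull is the
        product of a nudging gain and a simple error-based weight.
        """
        weight = model_err**2 / (model_err**2 + obs_err**2)
        return model_sm + gain * weight * (obs_sm - model_sm)

    model_sm = np.array([0.18, 0.22, 0.30])    # m3/m3, modelled surface layer
    smos_sm = np.array([0.25, 0.20, np.nan])   # retrievals; NaN = no overpass
    print(np.where(np.isnan(smos_sm), model_sm, nudge(model_sm, smos_sm)))
    ```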

  1. An error reduction algorithm to improve lidar turbulence estimates for wind energy

    Directory of Open Access Journals (Sweden)

    J. F. Newman

    2017-02-01

    Full Text Available Remote-sensing devices such as lidars are currently being investigated as alternatives to cup anemometers on meteorological towers for the measurement of wind speed and direction. Although lidars can measure mean wind speeds at heights spanning an entire turbine rotor disk and can be easily moved from one location to another, they measure different values of turbulence than an instrument on a tower. Current methods for improving lidar turbulence estimates include the use of analytical turbulence models and expensive scanning lidars. While these methods provide accurate results in a research setting, they cannot be easily applied to smaller, vertically profiling lidars in locations where high-resolution sonic anemometer data are not available. Thus, there is clearly a need for a turbulence error reduction model that is simpler and more easily applicable to lidars that are used in the wind energy industry. In this work, a new turbulence error reduction algorithm for lidars is described. The Lidar Turbulence Error Reduction Algorithm, L-TERRA, can be applied using only data from a stand-alone vertically profiling lidar and requires minimal training with meteorological tower data. The basis of L-TERRA is a series of physics-based corrections that are applied to the lidar data to mitigate errors from instrument noise, volume averaging, and variance contamination. These corrections are applied in conjunction with a trained machine-learning model to improve turbulence estimates from a vertically profiling WINDCUBE v2 lidar. The lessons learned from creating the L-TERRA model for a WINDCUBE v2 lidar can also be applied to other lidar devices. L-TERRA was tested on data from two sites in the Southern Plains region of the United States. The physics-based corrections in L-TERRA brought regression line slopes much closer to 1 at both sites and significantly reduced the sensitivity of lidar turbulence errors to atmospheric stability. The accuracy of machine

  2. Improving the estimation of celiac disease sibling risk by non-HLA genes.

    Directory of Open Access Journals (Sweden)

    Valentina Izzo

    Full Text Available Celiac Disease (CD) is a polygenic trait, and HLA genes explain less than half of the genetic variation. Through large GWAS more than 40 associated non-HLA genes were identified, but they make a small contribution to the heritability of the disease. The aim of this study is to improve the estimate of CD risk in siblings by adding a small set of non-HLA genes to HLA. One hundred fifty-seven Italian families with a confirmed CD case, at least one other sib and both parents were recruited. Among 249 sibs, 29 developed CD in a 6-year follow-up period. All individuals were typed for HLA and 10 SNPs in non-HLA genes: CCR1/CCR3 (rs6441961), IL12A/SCHIP1 and IL12A (rs17810546 and rs9811792), TAGAP (rs1738074), RGS1 (rs2816316), LPP (rs1464510), OLIG3 (rs2327832), REL (rs842647), IL2/IL21 (rs6822844) and SH2B3 (rs3184504). Three associated SNPs (in the LPP, REL and RGS1 genes) were identified through the transmission disequilibrium test, and a Bayesian approach was used to assign a score (BS) to each detected HLA+SNPs genotype combination. We then classified CD sibs as at low or at high risk if their BS was, respectively, below or at/above the median BS value within each HLA risk group. A larger number (72%) of CD sibs showed a BS at or above the median value and had a more than twofold higher OR than CD sibs with a BS below the median (OR = 2.53, p = 0.047). Our HLA+SNPs genotype classification showed both a higher negative predictive value (95% vs 91%) and a higher diagnostic sensitivity (79% vs 45%) than HLA alone. In conclusion, the estimate of CD risk by the HLA+SNPs approach, even if not applicable to prevention, could be a precious tool to improve the prediction of the disease in a cohort of first-degree relatives, particularly in the low HLA risk groups.

  3. Improving causal inference with a doubly robust estimator that combines propensity score stratification and weighting.

    Science.gov (United States)

    Linden, Ariel

    2017-08-01

    When a randomized controlled trial is not feasible, health researchers typically use observational data and rely on statistical methods to adjust for confounding when estimating treatment effects. These methods generally fall into 3 categories: (1) estimators based on a model for the outcome using conventional regression adjustment; (2) weighted estimators based on the propensity score (ie, a model for the treatment assignment); and (3) "doubly robust" (DR) estimators that model both the outcome and propensity score within the same framework. In this paper, we introduce a new DR estimator that utilizes marginal mean weighting through stratification (MMWS) as the basis for weighted adjustment. This estimator may prove more accurate than existing treatment effect estimators because MMWS has been shown to be more accurate than other models when the propensity score is misspecified. We therefore compare the performance of this new estimator to other commonly used treatment effects estimators. Monte Carlo simulation is used to compare the DR-MMWS estimator to regression adjustment, 2 weighted estimators based on the propensity score and 2 other DR methods. To assess performance under varied conditions, we vary the level of misspecification of the propensity score model as well as misspecify the outcome model. Overall, DR estimators generally outperform methods that model only one or the other component (eg, propensity score or outcome). The DR-MMWS estimator outperforms all other estimators when both the propensity score and outcome models are misspecified and performs equally as well as other DR estimators when only the propensity score is misspecified. Health researchers should consider using DR-MMWS as the principal evaluation strategy in observational studies, as this estimator appears to outperform other estimators in its class. © 2017 John Wiley & Sons, Ltd.
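
    A compact sketch of the MMWS weights that this estimator builds on (following Hong's stratification formula) is given below; the data are simulated and the stratum count is an arbitrary choice, not the paper's simulation design.

    ```python
    import numpy as np
    import pandas as pd

    def mmws_weights(ps, treated, n_strata=5):
        """Marginal mean weighting through stratification (MMWS) sketch.

        Within each propensity-score stratum s, a unit with treatment
        status t gets weight n_s * Pr(T=t) / n_ts, rebalancing each
        stratum's treated/control mix to the marginal proportions.
        """
        df = pd.DataFrame({"ps": ps, "t": treated.astype(int)})
        df["stratum"] = pd.qcut(df["ps"], n_strata, labels=False)
        p_t = df["t"].mean()                                   # marginal Pr(T=1)
        n_s = df.groupby("stratum")["t"].transform("size")
        n_ts = df.groupby(["stratum", "t"])["t"].transform("size")
        pr = np.where(df["t"] == 1, p_t, 1 - p_t)
        return n_s * pr / n_ts

    # Usage: the weights feed a weighted outcome regression that also
    # includes covariates, giving the doubly robust flavour described above.
    rng = np.random.default_rng(2)
    ps = rng.uniform(0.1, 0.9, 1000)
    treated = rng.binomial(1, ps)
    w = mmws_weights(ps, treated)
    print(w.groupby(treated).mean())
    ```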

  4. An improved method for permeability estimation of the bioclastic limestone reservoir based on NMR data.

    Science.gov (United States)

    Ge, Xinmin; Fan, Yiren; Liu, Jianyu; Zhang, Li; Han, Yujiao; Xing, Donghui

    2017-10-01

    Permeability is an important parameter in formation evaluation, since it controls the fluid transport of porous rocks. However, it is challenging to compute the permeability of bioclastic limestone reservoirs by conventional methods linking petrophysical and geophysical data, owing to the complex pore distributions. A new method is presented to estimate the permeability based on laboratory and downhole nuclear magnetic resonance (NMR) measurements. We divide the pore space into four intervals by the inflection points between the pore radius and the transversal relaxation time. Relationships between permeability and the percentages of the different pore intervals are investigated to identify the factors influencing fluid transport. Furthermore, an empirical model, which takes the pore size distribution into account, is presented to compute the permeability. For the 212 core samples in our case, the accuracy of the permeability calculation is improved from 0.542 (SDR model), 0.507 (TIM model) and 0.455 (conventional porosity-permeability regressions) to 0.803. To enhance the precision of the downhole application of the new model, we developed a fluid correction algorithm to construct the water spectrum of in-situ NMR data, aiming to eliminate the influence of oil on the magnetization. The results reveal that permeability is positively correlated with the percentages of mega-pores and macro-pores, but negatively correlated with the percentage of micro-pores. Poor correlation is observed between permeability and the percentage of meso-pores. NMR magnetizations and T2 spectra after the fluid correction agree well with laboratory results for samples saturated with water. Field application indicates that the improved method provides better performance than conventional models such as the Schlumberger-Doll Research equation, the Timur-Coates equation, and porosity-permeability regressions. Copyright © 2017 Elsevier Inc. All rights reserved.
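
    One minimal way to encode the reported idea, regressing log-permeability on the four pore-interval percentages, is sketched below with invented core data; it is not the paper's calibrated model.

    ```python
    import numpy as np

    # Hypothetical core-sample table: percentages of the four pore intervals
    # (micro, meso, macro, mega) and measured permeability in mD.
    X_pct = np.array([
        [40.0, 30.0, 20.0, 10.0],
        [25.0, 30.0, 30.0, 15.0],
        [10.0, 25.0, 40.0, 25.0],
        [55.0, 25.0, 15.0,  5.0],
        [15.0, 20.0, 35.0, 30.0],
        [30.0, 30.0, 25.0, 15.0],
    ])
    perm_md = np.array([0.5, 4.0, 35.0, 0.1, 60.0, 8.0])

    # Empirical model: log10(k) as a linear function of the pore-interval
    # percentages, echoing the reported sign pattern (micro negative,
    # macro/mega positive).
    A = np.column_stack([X_pct, np.ones(len(perm_md))])
    coef, *_ = np.linalg.lstsq(A, np.log10(perm_md), rcond=None)
    print("coefficients (micro, meso, macro, mega, intercept):", np.round(coef, 3))
    print("predicted k (mD):", np.round(10 ** (A @ coef), 2))
    ```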

  5. Estimation of cost per severe accident for improvement of accident protection and consequence mitigation strategies

    International Nuclear Information System (INIS)

    Silva, Kampanart; Ishiwatari, Yuki; Takahara, Shogo

    2013-01-01

    To assess the complex situations arising from severe accidents such as that observed in the Fukushima accident, not only radiation protection but also the related health, environmental, economic and societal aspects must all be included in the consequence assessment. In this study, the authors introduce the “cost per severe accident” as an index for comprehensively analyzing the consequences of severe accidents. The cost per severe accident consists of various costs and consequences converted into monetary values. For the purpose of improving accident protection and consequence mitigation strategies, the costs needed to introduce protective actions, together with the health and psychological consequences, are included in the present study. These costs and consequences were evaluated on the basis of a systematic consequence analysis using level 2 and level 3 probabilistic safety assessment (PSA) codes. The accident sequences used in this analysis were taken from the results of a level 2 seismic PSA of a virtual 1,100 MWe BWR-5. The doses to the public and the number of people affected were calculated using the level 3 PSA code OSCAAR of the Japan Atomic Energy Agency (JAEA). The calculations were made for 248 meteorological sequences, and the outputs are given as expectation values over the various meteorological conditions. Using these outputs, the cost per severe accident is calculated on the basis of the open documents on the Fukushima accident regarding the cost of protective actions and the compensation for psychological harm. Finally, optimized accident protection and consequence mitigation strategies are recommended, taking the various aspects into account comprehensively by means of the cost per severe accident. The authors emphasize that the aim is not to estimate the accident cost itself but to extend the scope of “risk-informed decision making” for continuous safety improvements of nuclear energy. (author)

  6. An improved method for permeability estimation of the bioclastic limestone reservoir based on NMR data

    Science.gov (United States)

    Ge, Xinmin; Fan, Yiren; Liu, Jianyu; Zhang, Li; Han, Yujiao; Xing, Donghui

    2017-10-01

    Permeability is an important parameter in formation evaluation, since it controls the fluid transport of porous rocks. However, it is challenging to compute the permeability of bioclastic limestone reservoirs by conventional methods linking petrophysical and geophysical data, owing to the complex pore distributions. A new method is presented to estimate the permeability based on laboratory and downhole nuclear magnetic resonance (NMR) measurements. We divide the pore space into four intervals by the inflection points between the pore radius and the transversal relaxation time. Relationships between permeability and the percentages of the different pore intervals are investigated to identify the factors influencing fluid transport. Furthermore, an empirical model, which takes the pore size distribution into account, is presented to compute the permeability. For the 212 core samples in our case, the accuracy of the permeability calculation is improved from 0.542 (SDR model), 0.507 (TIM model) and 0.455 (conventional porosity-permeability regressions) to 0.803. To enhance the precision of the downhole application of the new model, we developed a fluid correction algorithm to construct the water spectrum of in-situ NMR data, aiming to eliminate the influence of oil on the magnetization. The results reveal that permeability is positively correlated with the percentages of mega-pores and macro-pores, but negatively correlated with the percentage of micro-pores. Poor correlation is observed between permeability and the percentage of meso-pores. NMR magnetizations and T2 spectra after the fluid correction agree well with laboratory results for samples saturated with water. Field application indicates that the improved method provides better performance than conventional models such as the Schlumberger-Doll Research equation, the Timur-Coates equation, and porosity-permeability regressions.

  7. Improved estimation of leak location of pipelines using frequency band variation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Young Sup [Embedded System Engineering Department, Incheon National University, Incheon (Korea, Republic of); Yoon, Dong Jin [Safety Measurement Center, Korea Research Institute of Standards and Science, Daejeon (Korea, Republic of)

    2014-02-15

    Leakage is an important factor to be considered in the management of underground water supply pipelines in a smart water grid system, especially if the pipelines are aged and buried under the pavement or various structures of a highly populated city. Because exact detection of the location of such leaks is essential for the efficient operation of pipelines, a new methodology for leak location detection based on frequency band variation, windowing filters, and probability is proposed in this paper. Because the accuracy of leak location depends on the precision of the estimated time delay between the sensor signals produced by leak noise, window functions that give greater weight to significant frequencies are applied when calculating the improved cross-correlation function. Experimental results obtained by applying this methodology to an actual buried water supply pipeline, approximately 253.9 m long and made of cast iron, revealed that the approach of frequency band variation with these windows and probability offers better performance for leak location detection.
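
    A bare-bones version of delay-based leak location with a frequency-band weight on the cross-spectrum is sketched below; the band edges, wave speed and geometry are illustrative, and the paper's windowing filters and probabilistic treatment are more elaborate.

    ```python
    import numpy as np

    def estimate_leak_position(sig_a, sig_b, fs, spacing, wave_speed,
                               band=(200.0, 800.0)):
        """Leak localisation from the delay between two acoustic sensors.

        tau = t_A - t_B (negative when the noise reaches sensor A first);
        distance from sensor A: d_A = (L + c * tau) / 2. The cross-spectrum
        is weighted by a frequency-band mask, a crude stand-in for the
        windowing filters described above.
        """
        n = len(sig_a)
        freqs = np.fft.rfftfreq(n, 1.0 / fs)
        weight = ((freqs >= band[0]) & (freqs <= band[1])).astype(float)
        cross = np.fft.rfft(sig_a) * np.conj(np.fft.rfft(sig_b)) * weight
        cc = np.fft.irfft(cross, n)
        lags = np.arange(n)
        lags[lags > n // 2] -= n            # map circular indices to +/- lags
        tau = lags[np.argmax(cc)] / fs
        return 0.5 * (spacing + wave_speed * tau)

    # Synthetic check: leak 100 m from sensor A on a 253.9 m run, c = 1200 m/s
    fs, L, c = 10_000, 253.9, 1200.0
    noise = np.random.default_rng(3).normal(size=fs)     # 1 s of leak noise
    delay = int(round((100.0 - (L - 100.0)) / c * fs))   # tau in samples (< 0)
    print(estimate_leak_position(np.roll(noise, delay), noise, fs, L, c))  # ~100
    ```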

  8. Improved Monte Carlo - Perturbation Method For Estimation Of Control Rod Worths In A Research Reactor

    International Nuclear Information System (INIS)

    Kalcheva, Silva; Koonen, Edgar

    2008-01-01

    A hybrid method dedicated to improving the experimental technique for the estimation of control rod worths in a research reactor is presented. The method uses a combination of the Monte Carlo technique and perturbation theory. The perturbation theory is used to obtain the relation between the relative rod efficiency and the buckling of the reactor with a partially inserted rod. A series of coefficients describing the axial absorption profile is used to correct the buckling for an arbitrary composite rod with a complicated burn-up irradiation history. These coefficients have to be determined, either by experiment or by some theoretical/numerical method. In the present paper they are derived from the macroscopic absorption cross sections obtained from detailed Monte Carlo calculations with MCNPX 2.6.F of the axial burn-up profile during the control rod's life. The method is validated on measurements of control rod worths at the BR2 reactor. A comparison with direct Monte Carlo evaluations of control rod worths is also presented. The uncertainties arising from the approximations used in the presented hybrid method are discussed. (authors)

  9. Improved estimation of leak location of pipelines using frequency band variation

    International Nuclear Information System (INIS)

    Lee, Young Sup; Yoon, Dong Jin

    2014-01-01

    Leakage is an important factor to be considered in the management of underground water supply pipelines in a smart water grid system, especially if the pipelines are aged and buried under the pavement or various structures of a highly populated city. Because exact detection of the location of such leaks is essential for the efficient operation of pipelines, a new methodology for leak location detection based on frequency band variation, windowing filters, and probability is proposed in this paper. Because the accuracy of leak location depends on the precision of the estimated time delay between the sensor signals produced by leak noise, window functions that give greater weight to significant frequencies are applied when calculating the improved cross-correlation function. Experimental results obtained by applying this methodology to an actual buried water supply pipeline, approximately 253.9 m long and made of cast iron, revealed that the approach of frequency band variation with these windows and probability offers better performance for leak location detection.

  10. An Improved Method of Pose Estimation for Lighthouse Base Station Extension.

    Science.gov (United States)

    Yang, Yi; Weng, Dongdong; Li, Dong; Xun, Hang

    2017-10-22

    In 2015, HTC and Valve launched a virtual reality headset empowered with Lighthouse, a cutting-edge spatial positioning technology. Although Lighthouse is superior in terms of accuracy, latency and refresh rate, its algorithms do not support base station expansion, and it is flawed with respect to occlusion of moving targets; that is, it is unable to calculate their poses from a small set of sensors, resulting in the loss of optical tracking data. In view of these problems, this paper proposes an improved pose estimation algorithm for cases where occlusion is involved. Our algorithm calculates the pose of a given object with a unified dataset comprising inputs from sensors recognized by all base stations, as long as three or more sensors detect a signal in total, no matter from which base station. To verify our algorithm, official HTC base stations and independently developed receivers are used for prototyping. The experimental results show that our pose calculation algorithm can achieve precise positioning when only a few sensors detect the signal.

  11. Improved method for estimating particle scattering probabilities to finite detectors for Monte Carlo simulation

    International Nuclear Information System (INIS)

    Mickael, M.; Gardner, R.P.; Verghese, K.

    1988-01-01

    An improved method for calculating the total probability of particle scattering within the solid angle subtended by finite detectors is developed, presented, and tested. The limiting polar and azimuthal angles subtended by the detector are measured from the direction that most simplifies their calculation rather than from the incident particle direction. A transformation of the particle scattering probability distribution function (pdf) is made to match the transformation of the direction from which the limiting angles are measured. The particle scattering probability to the detector is estimated by evaluating the integral of the transformed pdf over the range of the limiting angles measured from the preferred direction. A general formula for transforming the particle scattering pdf is derived from basic principles and applied to four important scattering pdf's; namely, isotropic scattering in the Lab system, isotropic neutron scattering in the center-of-mass system, thermal neutron scattering by the free gas model, and gamma-ray Klein-Nishina scattering. Some approximations have been made to these pdf's to enable analytical evaluations of the final integrals. These approximations are shown to be valid over a wide range of energies and for most elements. The particle scattering probability to spherical, planar circular, and right circular cylindrical detectors has been calculated using the new and previously reported direct approach. Results indicate that the new approach is valid and is computationally faster by orders of magnitude
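
    For the simplest of the four cases, isotropic scattering in the Lab system toward an on-axis circular disk detector, measuring the polar angle from the detector axis (the "preferred direction" idea above) makes the limits trivial, and the probability reduces to the fractional solid angle. A small numeric check with made-up geometry:

    ```python
    import numpy as np

    def isotropic_prob_to_disk(distance, radius, n=200_000):
        """Probability that an isotropically scattered particle hits an
        on-axis circular disk detector.

        Omega = 2*pi*(1 - cos(alpha)), alpha = atan(radius/distance),
        so the probability is Omega / (4*pi). A Monte Carlo estimate is
        included as a cross-check.
        """
        alpha = np.arctan2(radius, distance)
        analytic = 0.5 * (1.0 - np.cos(alpha))
        rng = np.random.default_rng(4)
        mu = rng.uniform(-1.0, 1.0, n)      # cos(theta) uniform for isotropy
        mc = np.mean(mu > np.cos(alpha))    # hits fall inside the cone
        return analytic, mc

    print(isotropic_prob_to_disk(distance=10.0, radius=2.0))
    ```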

  12. Improving Soil Moisture Estimation through the Joint Assimilation of SMOS and GRACE Satellite Observations

    Science.gov (United States)

    Girotto, Manuela

    2018-01-01

    Observations from recent soil-moisture-dedicated missions (e.g., SMOS or SMAP) have been used in innovative data assimilation studies to provide global, high spatial (approximately 10-40 km) and temporal (daily) resolution soil moisture profile estimates from microwave brightness temperature observations. These missions are only sensitive to near-surface soil moisture (0-5 cm). In contrast, the Gravity Recovery and Climate Experiment (GRACE) mission provides accurate measurements of the entire vertically integrated terrestrial water storage (TWS) column, but it is characterized by low spatial (i.e., 150,000 km2) and temporal (i.e., monthly) resolutions. Data assimilation studies have shown that GRACE-TWS primarily affects (in absolute terms) the deeper moisture storages (i.e., groundwater). In this presentation I will review the benefits and drawbacks associated with the assimilation of both types of observations. In particular, I will illustrate the benefits and drawbacks of their joint assimilation for the purpose of improving the entire soil moisture profile (i.e., surface and deeper water storages).

  13. Estimation of Catchment Transit Time in Fuji River Basin by using an improved Tank model

    Science.gov (United States)

    Wenchao, M.; Yamanaka, T.; Wakiyama, Y.; Wang, P.

    2013-12-01

    As an important parameter that reflects the characteristics of catchments, the catchment transit time (CTT) has received increasingly wide attention in recent years. The CTT is defined as the time water spends travelling through a catchment to the stream network [1], and it describes how catchments retain and release water and solutes and thus control geochemical and biogeochemical cycling and contamination persistence [2]. The objectives of the present study are to develop a new approach for estimating CTT without prior information on the transit time distribution (TTD) function and to apply it to the Fuji River basin in the Central Japan Alps region. In this study, an improved Tank model was used to compute the mean CTT and the TTD function simultaneously. It involves water fluxes and isotope mass balance; water storage capacity in the catchment, which strongly affects CTT, is reflected more sensitively in the isotope mass balance than in the water fluxes. A model calibrated with observed discharge and isotope data is used for virtual age-tracer computation to estimate CTT. The model not only considers the hydrological data and physical processes of the research area but also reflects the actual TTD by taking into account the geological conditions, land use and other catchment-hydrological conditions. For the calibration of the model, we used river discharge records obtained by the Ministry of Land, Infrastructure and Transportation, and are collecting isotope data of precipitation and river waters monthly or semi-weekly. Three sub-catchments (SC1-SC3) in the Fuji River basin were selected to test the model with five layers: the surface layer, upper-soil layer, lower-soil layer, groundwater aquifer layer and bedrock layer (Layers 1-5). The model output was evaluated using the Nash-Sutcliffe efficiency (NSE), the root mean square error-observations standard deviation ratio (RSR), and the percent bias (PBIAS). Using long time series of discharge records for calibration, the simulated
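
    The three evaluation statistics named above are easy to state precisely; a small sketch with invented discharge values follows (positive PBIAS indicating underestimation under one common sign convention).

    ```python
    import numpy as np

    def nse(obs, sim):
        """Nash-Sutcliffe efficiency: 1 is perfect, < 0 is worse than the mean."""
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def rsr(obs, sim):
        """RMSE divided by the standard deviation of the observations."""
        return np.sqrt(np.mean((obs - sim) ** 2)) / obs.std()

    def pbias(obs, sim):
        """Percent bias of the simulation relative to the observations."""
        return 100.0 * np.sum(obs - sim) / np.sum(obs)

    obs = np.array([3.1, 4.5, 6.2, 5.0, 4.1])   # invented discharge, m3/s
    sim = np.array([3.0, 4.8, 5.9, 5.2, 4.0])
    print(round(nse(obs, sim), 3), round(rsr(obs, sim), 3), round(pbias(obs, sim), 2))
    ```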

  14. Improving The Accuracy Of Bluetooth Based Travel Time Estimation Using Low-Level Sensor Data

    DEFF Research Database (Denmark)

    Araghi, Bahar Namaki; Tørholm Christensen, Lars; Krishnan, Rajesh

    2013-01-01

    triggered by a single device. This could lead to location ambiguity and reduced accuracy of travel time estimation. Therefore, the accuracy of travel time estimations by Bluetooth Technology (BT) depends upon how location ambiguity is handled by the estimation method. The issue of multiple detection events in the context of travel time estimation by BT has been considered by various researchers. However, treatment of this issue has remained simplistic so far. Most previous studies simply used the first detection event (Enter-Enter) as the best estimate. No systematic analysis for exploring the most accurate method of estimating travel time using multiple detection events has been conducted. In this study different aspects of the BT detection zone, including its size and its impact on the accuracy of travel time estimation, are discussed. Moreover, four alternative methods are applied; namely, Enter-Enter, Leave-Leave, Peak
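
    The Enter-Enter vs. Leave-Leave alternatives can be made concrete with a toy detection log; the MAC address and timestamps below are invented.

    ```python
    from datetime import datetime

    # Invented detection logs: device -> (first seen, last seen) per station.
    upstream = {"aa:bb": ("08:00:05", "08:00:25")}
    downstream = {"aa:bb": ("08:06:35", "08:06:50")}

    def parse(t):
        return datetime.strptime(t, "%H:%M:%S")

    def travel_times(up, down):
        """Enter-Enter vs. Leave-Leave travel times for devices seen at both ends."""
        out = {}
        for mac in up.keys() & down.keys():
            enter_u, leave_u = map(parse, up[mac])
            enter_d, leave_d = map(parse, down[mac])
            out[mac] = {
                "enter_enter": (enter_d - enter_u).total_seconds(),
                "leave_leave": (leave_d - leave_u).total_seconds(),
            }
        return out

    print(travel_times(upstream, downstream))
    # {'aa:bb': {'enter_enter': 390.0, 'leave_leave': 385.0}}
    ```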

  15. Improving artificial forest biomass estimates using afforestation age information from time series Landsat stacks.

    Science.gov (United States)

    Liu, Liangyun; Peng, Dailiang; Wang, Zhihui; Hu, Yong

    2014-11-01

    China maintains the largest artificial forest area in the world. Studying the dynamic variation of forest biomass and carbon stock is important to the sustainable use of forest resources and understanding of the artificial forest carbon budget in China. In this study, we investigated the potential of Landsat time series stacks for aboveground biomass (AGB) estimation in Yulin District, a key region of the Three-North Shelter region of China. Firstly, the afforestation age was successfully retrieved from the Landsat time series stacks over the last 40 years (from 1974 to 2013) and shown to be consistent with the surveyed tree ages, with a root-mean-square error (RMSE) of 4.32 years and a determination coefficient (R2) of 0.824. Then, AGB regression models were successfully developed by integrating vegetation indices and tree age. The simple ratio vegetation index (SR) is the best candidate among the commonly used vegetation indices for estimating forest AGB, and the forest AGB model was significantly improved using the combination of SR and tree age, with R2 values from 0.50 to 0.727. Finally, the forest AGB images were mapped at eight epochs from 1985 to 2013 using SR and afforestation age. The total forest AGB in seven counties of Yulin District increased by 20.8 G kg, from 5.8 G kg in 1986 to 26.6 G kg in 2013, a total increase of 360 %. For the persistent forest area since 1974, the forest AGB density increased from 15.72 t/ha in 1986 to 44.53 t/ha in 2013, at an annual rate of about 0.98 t/ha. For the artificial forest planted after 1974, the AGB density increased by about 1.03 t/ha per year from 1974 to 2013. The results show a noticeable carbon increment for the planted artificial forest in Yulin District over the last four decades.
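
    A minimal stand-in for the combined SR-plus-age model is a linear fit on both predictors; the plot values below are invented for shape, not Yulin data, and the paper's actual model form may differ.

    ```python
    import numpy as np

    # Hypothetical plots: simple ratio (SR = NIR/Red), stand age (years), AGB (t/ha)
    sr = np.array([2.1, 3.4, 4.2, 5.0, 5.8, 6.5])
    age = np.array([5.0, 12.0, 18.0, 25.0, 31.0, 38.0])
    agb = np.array([4.0, 11.0, 19.0, 28.0, 37.0, 45.0])

    # AGB modelled as a linear combination of SR and afforestation age,
    # mirroring the reported gain from adding age to a vegetation index.
    A = np.column_stack([sr, age, np.ones_like(sr)])
    coef, *_ = np.linalg.lstsq(A, agb, rcond=None)
    pred = A @ coef
    r2 = 1 - np.sum((agb - pred) ** 2) / np.sum((agb - agb.mean()) ** 2)
    print("R^2 =", round(r2, 3))
    ```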

  16. Improving Radar Quantitative Precipitation Estimation over Complex Terrain in the San Francisco Bay Area

    Science.gov (United States)

    Cifelli, R.; Chen, H.; Chandrasekar, V.

    2017-12-01

    A recent study by the State of California's Department of Water Resources has emphasized that the San Francisco Bay Area is at risk of catastrophic flooding. Therefore, accurate quantitative precipitation estimation (QPE) and forecasting (QPF) are critical for protecting life and property in this region. Compared to rain gauges and meteorological satellites, ground-based radar has shown great advantages for high-resolution precipitation observations in both the space and time domains. In addition, polarization diversity shows great potential for characterizing precipitation microphysics through the identification of different hydrometeor types and their size and shape information. Currently, all the radars comprising the U.S. National Weather Service (NWS) Weather Surveillance Radar-1988 Doppler (WSR-88D) network are operating in dual-polarization mode. Enhancement of QPE is one of the main motivations of the dual-polarization upgrade. The San Francisco Bay Area is covered by two S-band WSR-88D radars, namely KMUX and KDAX. However, in complex terrain like the Bay Area, it is still challenging to obtain an optimal rainfall algorithm for a given set of dual-polarization measurements. In addition, the accuracy of rain rate estimates is contingent on additional factors such as bright band contamination, vertical profile of reflectivity (VPR) correction, and partial beam blockages. This presentation aims to improve radar QPE for the Bay Area using advanced dual-polarization rainfall methodologies. The benefit brought by the dual-polarization upgrade of the operational radar network is assessed. In addition, a pilot study of gap-fill X-band radar performance is conducted in support of regional QPE system development. This paper also presents a detailed comparison of the dual-polarization radar-derived rainfall products with various operational products, including NSSL's Multi-Radar/Multi-Sensor (MRMS) system. Quantitative evaluation of various rainfall products is achieved

  17. Genetically-barcoded SIV facilitates enumeration of rebound variants and estimation of reactivation rates in nonhuman primates following interruption of suppressive antiretroviral therapy.

    Directory of Open Access Journals (Sweden)

    Christine M Fennessey

    2017-05-01

    viremia. The relative proportions of the rebounding viral clonotypes, spanning a range of 5 logs, were largely preserved over time for each animal. The viral growth rate during recrudescence and the relative abundance of each rebounding clonotype were used to estimate the average frequency of reactivation per animal. Using these parameters, reactivation frequencies were calculated and ranged from 0.33 to 0.70 events per day, likely representing reactivation from long-lived latently infected cells. The use of SIVmac239M therefore provides a powerful tool to investigate SIV latency and the frequency of viral reactivation after treatment interruption.

  18. The contingent behavior of charter fishing participants on the Chesapeake Bay: Welfare estimates associated with water quality improvements

    Science.gov (United States)

    Poor, P.J.; Breece, M.

    2006-01-01

    Water quality in the Chesapeake Bay has deteriorated over recent years. Historically, fishing has contributed to the region's local economy in terms of commercial and recreational harvests. A contingent behavior model is used to estimate welfare measures for charter fishing participants with regard to a hypothetical improvement in water quality. Using a truncated Poisson count model corrected for endogenous stratification, it was found that charter fishers not only contribute to the local market economy but also place positive non-market value on preserving the Bay's water quality. Using two estimates of travel costs, the individual consumer surplus is estimated at $200 and $117 per trip, and the average individual consumer surplus for an improvement in water quality at $75 and $44 for the two models estimated. © 2006 University of Newcastle upon Tyne.
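
    In travel-cost count models of this kind, per-trip consumer surplus is commonly computed as -1/beta from the travel-cost coefficient; a one-line sketch with a made-up coefficient (chosen to land on the $200 figure for illustration, not taken from the paper):

    ```python
    # E[trips] = exp(b0 + b_tc * cost + ...); per-trip consumer surplus = -1/b_tc,
    # and per-season surplus multiplies by the predicted trip count.
    b_tc = -0.005          # hypothetical travel-cost coefficient (per $)
    trips = 4.0            # hypothetical predicted trips per season

    cs_per_trip = -1.0 / b_tc
    print(cs_per_trip, cs_per_trip * trips)   # 200.0 $/trip, 800.0 $/season
    ```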

  19. Improved seismic risk estimation for Bucharest, based on multiple hazard scenarios, analytical methods and new techniques

    Science.gov (United States)

    Toma-Danila, Dragos; Florinela Manea, Elena; Ortanza Cioflan, Carmen

    2014-05-01

    Bucharest, the capital of Romania (with 1,678,000 inhabitants in 2011), is one of the big cities in Europe most exposed to seismic damage. The major earthquakes affecting the city have their origin in the Vrancea region. The Vrancea intermediate-depth source generates, statistically, 2-3 shocks with moment magnitude >7.0 per century. Although the focal distance is greater than 170 km, the historical records (from the 1838, 1894, 1908, 1940 and 1977 events) reveal severe effects in the Bucharest area, e.g. intensity IX (MSK) in the case of the 1940 event. During the 1977 earthquake, 1,420 people were killed and 33 large buildings collapsed. The present-day building stock is vulnerable owing both to construction (material, age) and to soil conditions (high amplification generated within the weakly consolidated Quaternary deposits, whose thickness varies between 250 and 500 m across the city). Of the 2,563 old buildings evaluated by experts, 373 are considered likely to experience severe damage or collapse in the next major earthquake. The total number of residential buildings in 2011 was 113,900. In order to guide mitigation measures, different studies have tried to estimate the seismic risk of Bucharest in terms of the probability of building damage, casualties or economic loss. Unfortunately, most of them were based on incomplete sets of data regarding either the hazard or the building stock in detail. However, during the DACEA Project, the National Institute for Earth Physics, together with the Technical University of Civil Engineering Bucharest and the NORSAR Institute, managed to compile a database of buildings in southern Romania (according to the 1999 census), with 48 associated capacity and fragility curves. Until now, however, the real-time estimation system developed there had not been implemented for Bucharest. This paper presents more than an adaptation of this system to Bucharest: first, we analyze the previous seismic risk studies from a SWOT perspective. This reveals that most of the studies don't use

  20. Might ART Adherence Estimates Be Improved by Combining Biomarker and Self-Report Data?

    Directory of Open Access Journals (Sweden)

    Rebecca Rhead

    Full Text Available As we endeavour to examine rates of viral suppression in PLHIV, reliable data on ART adherence are needed to distinguish between the respective contributions of poor adherence and treatment failure to high viral load. Self-reported data are susceptible to response bias, and although biomarker data on drug presence and concentration can provide a superior, alternative method of measurement, complications due to drug-drug interactions and genetic variations can cause some inaccuracies. We investigate the feasibility of combining both biomarker and self-report data to produce a potentially more accurate measure of ART adherence. Data were taken from a large general-population survey in the Manicaland province, Zimbabwe, conducted in 2009-2011. HIV-infected adults who had initiated ART (N = 560) provided self-report data on adherence and dried blood spot samples that were analysed for traces of ART medication. A new three-category measure of ART adherence was constructed, based on biomarker data but using self-report data to adjust for cases with abnormally low and high drug concentrations due to possible drug-drug interactions and genetic factors, and was assessed for plausibility using survey data on socio-demographic correlates. 94.3% (528/560) and 92.7% (519/560) of the sample reported faithful adherence to their medication and had traces of ART medication, respectively. The combined measure estimated good evidence of ART adherence at 69% and excellent evidence of adherence at 53%. The regression analysis results showed plausible patterns of ART adherence by socio-demographic status, with men and younger participants being more likely to adhere poorly to medication, and higher socio-economic status individuals and those living in more urban locations being more likely to adhere well. Biomarker and self-reported measures of adherence can be combined in a meaningful way to produce a potentially more accurate measure of ART adherence. Results indicate that

  1. Plaque Structural Stress Estimations Improve Prediction of Future Major Adverse Cardiovascular Events After Intracoronary Imaging.

    Science.gov (United States)

    Brown, Adam J; Teng, Zhongzhao; Calvert, Patrick A; Rajani, Nikil K; Hennessy, Orla; Nerlekar, Nitesh; Obaid, Daniel R; Costopoulos, Charis; Huang, Yuan; Hoole, Stephen P; Goddard, Martin; West, Nick E J; Gillard, Jonathan H; Bennett, Martin R

    2016-06-01

    Although plaque rupture is responsible for most myocardial infarctions, few high-risk plaques identified by intracoronary imaging actually result in future major adverse cardiovascular events (MACE). Nonimaging markers of individual plaque behavior are therefore required. Rupture occurs when plaque structural stress (PSS) exceeds material strength. We therefore assessed whether PSS could predict future MACE in high-risk nonculprit lesions identified on virtual-histology intravascular ultrasound. Baseline nonculprit lesion features associated with MACE during long-term follow-up (median: 1115 days) were determined in 170 patients undergoing 3-vessel virtual-histology intravascular ultrasound. MACE was associated with plaque burden ≥70% (hazard ratio: 8.6; 95% confidence interval, 2.5-30.6; P<0.001) and minimal luminal area ≤4 mm(2) (hazard ratio: 6.6; 95% confidence interval, 2.1-20.1; P=0.036), although absolute event rates for high-risk lesions remained <10%. PSS derived from virtual-histology intravascular ultrasound was subsequently estimated in nonculprit lesions responsible for MACE (n=22) versus matched control lesions (n=22). PSS showed marked heterogeneity across and between similar lesions but was significantly increased in MACE lesions at high-risk regions, including plaque burden ≥70% (13.9±11.5 versus 10.2±4.7; P<0.001) and thin-cap fibroatheroma (14.0±8.9 versus 11.6±4.5; P=0.02). Furthermore, PSS improved the ability of virtual-histology intravascular ultrasound to predict MACE in plaques with plaque burden ≥70% (adjusted log-rank, P=0.003) and minimal luminal area ≤4 mm(2) (P=0.002). Plaques responsible for MACE had larger superficial calcium inclusions, which acted to increase PSS (P<0.05). Baseline PSS is increased in plaques responsible for MACE and improves the ability of intracoronary imaging to predict events. Biomechanical modeling may complement plaque imaging for risk stratification of coronary nonculprit lesions. © 2016

  2. Might ART Adherence Estimates Be Improved by Combining Biomarker and Self-Report Data?

    Science.gov (United States)

    Rhead, Rebecca; Masimirembwa, Collen; Cooke, Graham; Takaruza, Albert; Nyamukapa, Constance; Mutsimhi, Cosmas; Gregson, Simon

    2016-01-01

    As we endeavour to examine rates of viral suppression in PLHIV, reliable data on ART adherence are needed to distinguish between the respective contributions of poor adherence and treatment failure on high viral load. Self-reported data are susceptible to response bias and although biomarker data on drug presence and concentration can provide a superior, alternative method of measurement, complications due to drug-drug interactions and genetic variations can cause some inaccuracies. We investigate the feasibility of combining both biomarker and self-report data to produce a potentially more accurate measure of ART adherence. Data were taken from a large general-population survey in the Manicaland province, Zimbabwe, conducted in 2009-2011. HIV-infected adults who had initiated ART (N = 560) provided self-report data on adherence and dried blood spot samples that were analysed for traces of ART medication. A new three-category measure of ART adherence was constructed, based on biomarker data but using self-report data to adjust for cases with abnormally low and high drug concentrations due to possible drug-drug interactions and genetic factors, and was assessed for plausibility using survey data on socio-demographic correlates. 94.3% (528/560) and 92.7% (519/560) of the sample reported faithful adherence to their medication and had traces of ART medication, respectively. The combined measure estimated good evidence of ART adherence at 69% and excellent evidence of adherence at 53%. The regression analysis results showed plausible patterns of ART adherence by socio-demographic status with men and younger participants being more likely to adhere poorly to medication, and higher socio-economic status individuals and those living in more urban locations being more likely to adhere well. Biomarker and self-reported measures of adherence can be combined in a meaningful way to produce a potentially more accurate measure of ART adherence. Results indicate that ART adherence

  3. Synergistic soil moisture observation - an interdisciplinary multi-sensor approach to yield improved estimates across scales

    Science.gov (United States)

    Schrön, M.; Fersch, B.; Jagdhuber, T.

    2017-12-01

    The representative determination of soil moisture across different spatial ranges and scales is still an important challenge in hydrology. While in situ measurements are trusted methods at the profile or point scale, cosmic-ray neutron sensors (CRNS) are renowned for providing volume averages over several hectares and several decimeters of depth. Airborne remote sensing, on the other hand, enables coverage of regional scales, although limited to the top few centimeters of the soil. Common to all of these methods is a challenging data processing step, often requiring calibration with independent data. We investigated the performance and potential of three complementary observational methods for the determination of soil moisture below grassland in an alpine front-range river catchment (Rott, 55 km2) in southern Germany. We employ the TERENO preAlpine soil moisture monitoring network, along with additional soil samples taken throughout the catchment. Spatial soil moisture products have been generated using surveys with a car-mounted mobile CRNS (rover) and an aerial acquisition by the polarimetric synthetic aperture radar (F-SAR) of DLR. The study assesses (1) the viability of the different methods to estimate soil moisture at their respective scales and extents, and (2) how each method could support the improvement of the others. We found that in situ data can provide valuable information to calibrate the CRNS rover and to train the vegetation-removal part of the polarimetric SAR (PolSAR) retrieval algorithm. Vegetation correction is mandatory to obtain the sub-canopy soil moisture patterns. While CRNS rover surveys can be used to evaluate the F-SAR product across scales, vegetation-related PolSAR products can in turn support the spatial correction of CRNS products for biomass water. Despite the different physical principles, the synthesis of the methods can provide reasonable soil moisture information by integrating from the plot to the landscape scale. The

  4. Integrating lateral contributions along river reaches to improve SWOT discharge estimates

    Science.gov (United States)

    Beighley, E.; Zhao, Y.; Feng, D.; Fisher, C. K.; Raoufi, R.; Durand, M. T.; David, C. H.; Lee, H.; Boone, A. A.; Cretaux, J. F.

    2016-12-01

    Understanding the potential impacts of climate and land cover change at continental to global scales, with sufficient resolution for community-scale planning and management, requires a better representation of the hydrologic cycle than is possible based on existing measurement networks and current Earth system models. The Surface Water and Ocean Topography (SWOT) mission, scheduled to launch in 2021, has the potential to address this challenge by providing measurements of water surface elevation, slope and extent for rivers wider than roughly 50-100 meters at a temporal sampling frequency ranging from days to weeks. The global uniformity and space/time resolution of the proposed SWOT measurements will enable hydrologic discovery, model advancements and new applications addressing the above challenges that are not currently possible or likely even conceivable. One derived data product planned for the SWOT mission is river discharge. Although there are several discharge algorithms that perform well for a range of conditions, this effort is focused on the MetroMan discharge algorithm. In MetroMan, for example, lateral inflow assumptions have been shown to impact performance. Here, the role of lateral inflows in discharge estimate performance is investigated. Preliminary results are presented for the Ohio River Basin. Lateral inflows are quantified for SWOT-observable river reaches using surface and subsurface runoff from the North American Land Data Assimilation System (NLDAS) and lateral routing in the Hillslope River Routing (HRR) model. Frequency distributions of the fraction of reach-averaged discharge resulting from lateral inflow are presented. Future efforts will integrate lateral inflow characteristics into the MetroMan discharge algorithm and quantify the potential value of SWOT measurements in flood insurance applications.

  5. Improvement of economic potential estimation methods for enterprise with potential branch clusters use

    Directory of Open Access Journals (Sweden)

    V.Ya. Nusinov

    2017-08-01

    Full Text Available The research shows that existing methods for estimating an enterprise's economic potential are based on additive, multiplicative and rating models, and that these methods have a number of shortcomings: for example, not all of them take into account the branch-specific features of the analysis or the level of development of the enterprise relative to other enterprises. It is suggested that such shortcomings can be overcome by taking into account, when estimating the integral level of potential, not only the branch features of an enterprise's activity but also the intra-branch economic clusterization of such enterprises. Scientific works connected with the use of clusters for estimating economic potential are generalized. The generalization identifies nine scientific approaches in this direction: the use of natural clusterization of enterprises to estimate and increase regional potential; the use of natural clusterization of enterprises to estimate and increase industry potential; the use of artificial clusterization of enterprises to estimate and increase regional potential; the use of artificial clusterization of enterprises to estimate and increase industry potential; the use of artificial clusterization of enterprises to estimate clustering potential; the use of artificial clusterization of enterprises to estimate the competitiveness potential of clustering; the use of natural (artificial) clusterization to estimate clustering efficiency; the use of natural (artificial) clusterization to raise the level of regional (industry) development; and the use of methods for estimating the economic potential of a region (industry), or its constituents, in the construction of clusters. It is determined that the use of the clusterization method in

  6. Improving cluster-based missing value estimation of DNA microarray data.

    Science.gov (United States)

    Brás, Lígia P; Menezes, José C

    2007-06-01

    We present a modification of the weighted K-nearest neighbours imputation method (KNNimpute) for missing values (MVs) estimation in microarray data based on the reuse of estimated data. The method was called iterative KNN imputation (IKNNimpute) as the estimation is performed iteratively using the recently estimated values. The estimation efficiency of IKNNimpute was assessed under different conditions (data type, fraction and structure of missing data) by the normalized root mean squared error (NRMSE) and the correlation coefficients between estimated and true values, and compared with that of other cluster-based estimation methods (KNNimpute and sequential KNN). We further investigated the influence of imputation on the detection of differentially expressed genes using SAM by examining the differentially expressed genes that are lost after MV estimation. The performance measures give consistent results, indicating that the iterative procedure of IKNNimpute can enhance the prediction ability of cluster-based methods in the presence of high missing rates, in non-time series experiments and in data sets comprising both time series and non-time series data, because the information of the genes having MVs is used more efficiently and the iterative procedure allows refining the MV estimates. More importantly, IKNN has a smaller detrimental effect on the detection of differentially expressed genes.
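
    A compact sketch of the iterative reuse idea is given below: row-mean initialisation, then repeated KNN re-estimation with inverse-distance weights, each round reusing the previous round's estimates. Parameter choices and data are illustrative, not the paper's configuration.

    ```python
    import numpy as np

    def iknn_impute(X, k=5, n_iter=5):
        """Iterative KNN imputation for a genes-x-samples matrix X (NaN = MV)."""
        X = X.astype(float).copy()
        missing = np.isnan(X)
        row_means = np.nanmean(X, axis=1)
        filled = np.where(missing, row_means[:, None], X)   # round 0
        for _ in range(n_iter):
            new = filled.copy()
            for i in np.where(missing.any(axis=1))[0]:
                d = np.sqrt(((filled - filled[i]) ** 2).sum(axis=1))
                d[i] = np.inf                    # exclude the row itself
                nn = np.argsort(d)[:k]           # k nearest rows
                w = 1.0 / (d[nn] + 1e-12)        # inverse-distance weights
                for j in np.where(missing[i])[0]:
                    new[i, j] = np.average(filled[nn, j], weights=w)
            filled = new                         # reuse estimates next round
        return filled

    # Tiny demo: 6 genes x 4 samples with two missing values
    X = np.array([[1.0, 2.0, 3.0, 4.0],
                  [1.1, 2.1, np.nan, 4.2],
                  [0.9, 1.9, 2.9, np.nan],
                  [5.0, 5.0, 5.0, 5.0],
                  [1.0, 2.2, 3.1, 4.1],
                  [0.8, 1.8, 2.8, 3.9]])
    print(np.round(iknn_impute(X, k=2), 2))
    ```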

  7. Trauma quality improvement: The Pietermaritzburg Metropolitan Trauma Service experience with the development of a comprehensive structure to facilitate quality improvement in rural trauma and acute care in KwaZulu-Natal, South Africa.

    Science.gov (United States)

    Clarke, Damian Luiz

    2015-01-03

    Improving the delivery of efficient and effective surgical care in rural South Africa is a mammoth task bedevilled by conflict between the stakeholders, who include rural doctors, surgeons, ancillary staff, researchers, educators and administrators. Management training is not part of most medical school curricula, yet as they progress in their careers, many clinicians are required to manage a health system and find the shift from caring for individual patients to managing a complex system difficult. Conflict arises when management-type interventions are imposed in a top-down manner on surgical staff suspicious of an unfamiliar field of study. Another area of conflict concerns the place of surgical research. Researchers are often accused of not being sufficiently focused on or concerned about the tasks of service delivery. This article provides an overview of management theory and describes a comprehensive management structure that integrates a model for health systems with a strategic planning process, strategic planning tools and appropriate quality metrics, and shows how the Pietermaritzburg Metropolitan Trauma Service in KwaZulu-Natal Province, South Africa, successfully used this structure to facilitate and contextualise a diverse number of quality improvement programmes and research initiatives in the realm of rural acute surgery and trauma. We have found this structure to be useful, and hope that it may be applied to other acute healthcare systems.

  8. Improved estimates of net primary productivity from MODIS satellite data at regional and local scales

    Science.gov (United States)

    Yude Pan; Richard Birdsey; John Hom; Kevin McCullough; Kenneth Clark

    2006-01-01

    We compared estimates of net primary production (NPP) from the MODIS satellite with estimates from a forest ecosystem process model (PnET-CN) and forest inventory and analysis (FIA) data for forest types of the mid-Atlantic region of the United States. The regional means were similar for the three methods and for the dominant oak-hickory forests in the region. However...

  9. Radioactivity of flour, wheat, bread improvers and dose estimates in Sudan

    International Nuclear Information System (INIS)

    Hamdan, Adam Mahana

    2015-10-01

    The steady rise in the use of isotopes and nuclear technology for various purposes in human life - agro-industrial, military and medical - may increase the chances of radioactive contamination and hence exposure to ionizing radiation, which raises the need to know how to assess that exposure. Control of imported foodstuffs to ensure that they are not contaminated with radioactive materials is very important at this stage. The present study aims at investigating radioactivity in foodstuffs consumed in Sudan, with the specific objectives of measuring radionuclides in wheat flour and bread improvers, measuring radioactive contaminants, and estimating the radiation dose from this consumption. The health impact of radionuclide ingestion from foodstuffs was evaluated through the committed effective doses determined in 30 samples of foodstuff collected in Port Sudan on the Red Sea. The activities of K-40, U-238 and Th-232 were measured by gamma-ray spectrometry employing a NaI(Tl) detector; the calibration was carried out using the MW652 reference source recommended by the International Atomic Energy Agency (IAEA), together with Cs-137 and Co-60 sources covering two energy levels. The K-40 activity concentration in the flour samples ranged over (303.07-40.48) Bq/kg, U-238 over (4.81-1.95) Bq/kg and Th-232 over (7.60-1.61) Bq/kg; in the wheat samples K-40 ranged over (250.62-27.22) Bq/kg, U-238 over (4.92-1.90) Bq/kg and Th-232 over (5.74-1.61) Bq/kg; and in the bread improver samples K-40 ranged over (68.60-13.61) Bq/kg and U-238 over (5.73-1.94) Bq/kg. The total average effective dose for age (>17 years) from flour was found to be 2.35±7.12 mSv/y, 1.15±0.95 mSv/y and 1.65±2.02 mSv/y, with maximum dose values of 6.01 mSv/y, 1.95 mSv/y and 1.57 mSv/y. The total average effective dose for age (>17 years) from wheat was 1.58±6.85 mSv/y, 1.16±1.33 mSv/y and 0.48±1.14 mSv/y, with maximum dose values of 4.14 mSv/y, 1.66 mSv/y and 0.99 mSv/y. The total average effective dose for age (>17 years) was

  10. Improved localisation of neoclassical tearing modes by combining multiple diagnostic estimates

    Science.gov (United States)

    Rapson, C. J.; Fischer, R.; Giannone, L.; Maraschek, M.; Reich, M.; Treutterer, W.; The ASDEX Upgrade Team

    2017-07-01

    Neoclassical tearing modes (NTMs) strongly degrade confinement in tokamaks, and are a leading cause of disruptions. They can be stabilised by targeted electron cyclotron current drive (ECCD); however, the effectiveness of ECCD depends strongly on the misalignment between the ECCD deposition and the NTM. The first step to ensuring minimal misalignment is a good estimate of the NTM location. In previous NTM control experiments, three methods have been used independently to estimate the NTM location: the magnetic equilibrium, correlation between magnetic and spatially-resolved temperature fluctuations, and the amplitude response of the NTM to nearby ECCD. This submission describes an algorithm designed to fuse these three estimates into one, taking into account many of the characteristics of each diagnostic. Although the method diverges from standard data fusion methods, results from simulation and experiment confirm that the algorithm achieves its stated goal of providing an estimate that is more reliable and accurate than any of the individual estimates.
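
    The abstract does not give the fusion formula, and notes that the algorithm diverges from standard data fusion; as background, the textbook baseline such an algorithm departs from is inverse-variance weighting of independent estimates, sketched here with purely illustrative values.

```python
import numpy as np

def fuse_estimates(locations, sigmas):
    """Inverse-variance weighted fusion of independent location estimates.

    locations, sigmas: per-diagnostic NTM position estimates and their
    1-sigma uncertainties. This is the textbook baseline, not the ASDEX
    Upgrade algorithm, which adds diagnostic-specific heuristics.
    """
    w = 1.0 / np.asarray(sigmas) ** 2
    fused = np.sum(w * np.asarray(locations)) / np.sum(w)
    fused_sigma = np.sqrt(1.0 / np.sum(w))
    return fused, fused_sigma

# e.g. equilibrium, correlation, and ECCD-response estimates (illustrative):
print(fuse_estimates([0.55, 0.58, 0.56], [0.03, 0.01, 0.02]))
```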

  11. Improving volume loss estimates of the northwestern Greenland Ice Sheet 2002-2010

    DEFF Research Database (Denmark)

    Korsgaard, Niels Jákup; Khan, Shfaqat Abbas; Kjeldsen, Kristian Kjellerup

    Studies have been carried out using various methods to estimate the Greenland ice sheet mass balance. Remote sensing techniques used to determine the ice sheet volume include airborne and satellite radar and laser methods, and measurements of ice flow of outlet glaciers use InSAR satellite radar......) does not work on sloping surfaces and is affected by radar penetration into the snow. InSAR estimates require knowledge of outlet glacier thickness. GRACE has limited spatial resolution and is affected by mass variations not just from ice changes, but also from hydrologic and ocean mass variability...... and mass redistribution within the solid Earth. The accuracy of ice mass and ice volume estimates can be assessed by comparing results from different techniques. Here, we focus on volume loss estimates from ICESat, ATM and LVIS data. We estimate catchment-wide ice volume change in northwest Greenland...

  12. Laser facilitates vaccination

    Directory of Open Access Journals (Sweden)

    Ji Wang

    2016-01-01

    Full Text Available Development of novel vaccine deliveries and vaccine adjuvants is of great importance to address the dilemma that the vaccine field faces: to improve vaccine efficacy without compromising safety. Harnessing the specific effects of laser on biological systems, a number of novel concepts have been proposed and proved in recent years to facilitate vaccination in a safer and more efficient way. The key advantage of using laser technology in vaccine delivery and adjuvantation is that all processes are initiated by physical effects with no foreign chemicals administered into the body. Here, we review the recent advances in using laser technology to facilitate vaccine delivery and augment vaccine efficacy as well as the underlying mechanisms.

  13. Fuzzy-Estimation Control for Improvement Microwave Connection for Iraq Electrical Grid

    Science.gov (United States)

    Hoomod, Haider K.; Radi, Mohammed

    2018-05-01

    The demand for broadband wireless services (internet, radio broadcast, TV, etc.) is increasing day by day, and since problems can arise in the communication channels, it is necessary to exploit the good part of the available bandwidth optimally. In this paper, we propose an estimation technique that estimates channel availability at the present moment and the next one, so as to know the error in the bandwidth channel and control the possibility of transferring data through the channel. The proposed estimation is based on a combination of least mean squares (LMS), a standard Kalman filter, and a modified Kalman filter. The estimated channel error is used as a control parameter in fuzzy rules to adjust the rate and size of data sent through the network channel, and to rearrange the priorities of the buffered data (workstation control parameters, texts, phone calls, images, and camera video) for the worst cases of channel error. The proposed system is designed to manage data communications through the channels connecting the Iraqi electrical grid stations. The results show that the modified Kalman filter gives the best performance in time and noise estimation (0.1109 for 5% noise estimation to 0.3211 for 90% noise estimation), and the packet loss rate is reduced by a ratio of from 35% to 385%.
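
    As background for the filter combination described above, a minimal scalar Kalman filter for tracking a slowly varying channel error is sketched below; the random-walk state model and the noise variances are illustrative assumptions, not the paper's modified filter.

```python
import numpy as np

def kalman_channel_error(z, q=1e-4, r=1e-2):
    """Minimal scalar Kalman filter tracking a slowly varying channel error.

    z: noisy per-interval channel-error measurements. q, r: assumed process
    and measurement noise variances (illustrative values, not the paper's).
    Returns the filtered error sequence used as the fuzzy-control input.
    """
    x, p = 0.0, 1.0                  # state estimate and its variance
    out = []
    for zk in z:
        p = p + q                    # predict (random-walk state model)
        k = p / (p + r)              # Kalman gain
        x = x + k * (zk - x)         # update with the new measurement
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)
```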

  14. Refining estimates of availability bias to improve assessments of the conservation status of an endangered dolphin.

    Science.gov (United States)

    Sucunza, Federico; Danilewicz, Daniel; Cremer, Marta; Andriolo, Artur; Zerbini, Alexandre N

    2018-01-01

    Estimation of visibility bias is critical to accurately compute abundance of wild populations. The franciscana, Pontoporia blainvillei, is considered the most threatened small cetacean in the southwestern Atlantic Ocean. Aerial surveys are considered the most effective method to estimate abundance of this species, but many existing estimates have been considered unreliable because they lack proper estimation of correction factors for visibility bias. In this study, helicopter surveys were conducted to determine surfacing-diving intervals of franciscanas and to estimate availability for aerial platforms. Fifteen hours were flown and 101 groups of 1 to 7 franciscanas were monitored, resulting in a sample of 248 surface-dive cycles. The mean surfacing interval and diving interval times were 16.10 seconds (SE = 9.74) and 39.77 seconds (SE = 29.06), respectively. Availability was estimated at 0.39 (SE = 0.01), a value 16-46% greater than estimates computed from diving parameters obtained from boats or from land. Generalized mixed-effects models were used to investigate the influence of biological and environmental predictors on the proportion of time franciscana groups are visually available to be seen from an aerial platform. These models revealed that group size was the main factor influencing the proportion at surface. The use of negatively biased estimates of availability results in overestimation of abundance, leads to overly optimistic assessments of extinction probabilities and to potentially ineffective management actions. This study demonstrates that estimates of availability must be computed from suitable platforms to ensure proper conservation decisions are implemented to protect threatened species such as the franciscana.

  15. Improving the peak power density estimation for the DNBR trip signal

    International Nuclear Information System (INIS)

    Moreira, Joao M. L.; Souza, Rose Mary G.P.

    2002-01-01

    The departure from nucleate boiling (DNB) core protection in PWR reactors is usually carried out through the over-temperature trip or the instantaneous minimum DNB ratio (DNBR) trip. The protection is obtained through specialized correlations or fast digital computer simulators that infer the core power level and the local coolant thermal and flow conditions from process variables furnished by the instrumentation. The power density distribution information is usually expressed in terms of F_q, the power peak factor, and its location. F_q, in its turn, can be determined through the control rod position or, more often, through the power axial offset (AO): F_q = f(AO, control rod positions). The AO, defined from the difference between the upper and lower long ion chamber signals, is supplied for each channel by separate sets of out-of-core detectors positioned 90 or 120 degrees apart in plan. The AO is given by AO = (S_t - S_b)/(S_t + S_b), where S_t and S_b are the out-of-core signals from the top and the bottom sections, respectively. In current PWRs a large penalty is imposed on the result of the first equation because of the difficulty of inferring the peak factor with good accuracy from the AO obtained from the out-of-core instrumentation. This ends up reducing the plant capacity factor. In this work, the function f in the first equation, which correlates the power peak factor with the axial offset yielded by the out-of-core detectors and the control rod positions, is obtained through a combination of specific experiments in the IPEN/MB-01 zero-power reactor and calculation results. To improve the peak factor estimation, it is necessary to consider accurately the response of the out-of-core detectors to different power density distributions in the core. This task is not easily accomplished through calculation due to the difficulties involved in the neutron transport treatment necessary for the out-of-core detector responses

  16. An Improved Estimation of Regional Fractional Woody/Herbaceous Cover Using Combined Satellite Data and High-Quality Training Samples

    Directory of Open Access Journals (Sweden)

    Xu Liu

    2017-01-01

    Full Text Available Mapping vegetation cover is critical for understanding and monitoring ecosystem functions in semi-arid biomes. As existing estimates tend to underestimate the woody cover in areas with dry deciduous shrubland and woodland, we present an approach to improve the regional estimation of woody and herbaceous fractional cover in the East Asia steppe. The developed approach uses Random Forest models combining multiple remote sensing data: training samples derived from high-resolution imagery in a tailored spatial sampling, and model inputs composed of specific metrics from the MODIS sensor and ancillary variables including topographic, bioclimatic, and land surface information. We emphasize that effective spatial sampling, high-quality classification, and adequate geospatial information are important prerequisites for establishing appropriate model inputs and achieving high-quality training samples. This study suggests that the optimal models improve estimation accuracy (NMSE 0.47 for woody and 0.64 for herbaceous plants) and show a consistent agreement with field observations. Compared with an existing woody estimate product, the proposed woody cover estimation can delineate regions with subshrubs and shrubs, showing an improved capability of capturing the spatialized detail of vegetation signals. This approach can be applicable over sizable semi-arid areas such as temperate steppes, savannas, and prairies.
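
    The accuracy metric quoted above is NMSE; a common definition, sketched below, is the mean squared error normalized by the variance of the observations (the abstract does not spell out its normalization, so this definition is an assumption).

```python
import numpy as np

def nmse(y_true, y_pred):
    """Normalized mean squared error for cover-fraction validation:
    MSE divided by the variance of the observations (values below 1 mean
    the model beats predicting the mean). One common convention; the
    paper's exact normalization is not stated in the abstract."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean((y_true - y_pred) ** 2) / np.var(y_true)
```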

  17. A Novel Strategy of Ambiguity Correction for the Improved Faraday Rotation Estimator in Linearly Full-Polarimetric SAR Data

    Directory of Open Access Journals (Sweden)

    Jinhui Li

    2018-04-01

    Full Text Available Spaceborne synthetic aperture radar (SAR) missions operating at low frequencies, such as L-band or P-band, are significantly influenced by the ionosphere. As one of the serious ionospheric effects, Faraday rotation (FR) is a remarkable distortion source for polarimetric SAR (PolSAR) applications. Various published FR estimators, along with an improved one, have been introduced to address this issue, all of which are implemented by processing a set of PolSAR real data. The improved estimator exhibits optimal robustness in the performance analysis, especially in terms of system noise. However, all published estimators, including the improved estimator, suffer from a potential FR angle (FRA) ambiguity. A novel strategy for correcting the ambiguity in those FR estimators is proposed and shown as a flow process, which is divided into pixel-level and image-level correction. The former has not previously been recognized and is thus considered in particular. Finally, the validation experiments show a prominent performance of the proposed strategy.

  18. A Novel Strategy of Ambiguity Correction for the Improved Faraday Rotation Estimator in Linearly Full-Polarimetric SAR Data.

    Science.gov (United States)

    Li, Jinhui; Ji, Yifei; Zhang, Yongsheng; Zhang, Qilei; Huang, Haifeng; Dong, Zhen

    2018-04-10

    Spaceborne synthetic aperture radar (SAR) missions operating at low frequencies, such as L-band or P-band, are significantly influenced by the ionosphere. As one of the serious ionospheric effects, Faraday rotation (FR) is a remarkable distortion source for polarimetric SAR (PolSAR) applications. Various published FR estimators, along with an improved one, have been introduced to address this issue, all of which are implemented by processing a set of PolSAR real data. The improved estimator exhibits optimal robustness in the performance analysis, especially in terms of system noise. However, all published estimators, including the improved estimator, suffer from a potential FR angle (FRA) ambiguity. A novel strategy for correcting the ambiguity in those FR estimators is proposed and shown as a flow process, which is divided into pixel-level and image-level correction. The former has not previously been recognized and is thus considered in particular. Finally, the validation experiments show a prominent performance of the proposed strategy.
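
    For background, the one-way Faraday rotation model these estimators build on can be written for a linear-basis scattering matrix as follows (standard PolSAR background, not a formula quoted from the abstracts). The arctangent form recovers the rotation angle only modulo pi/2, which is precisely the FRA ambiguity the proposed strategy corrects.

```latex
% Observed matrix M under one-way Faraday rotation Omega of the true
% scattering matrix S (linear h/v basis):
\[
\mathbf{M} = \mathbf{R}(\Omega)\,\mathbf{S}\,\mathbf{R}(\Omega),
\qquad
\mathbf{R}(\Omega) =
\begin{pmatrix} \cos\Omega & \sin\Omega \\ -\sin\Omega & \cos\Omega \end{pmatrix}.
\]
% For a reciprocal target (S_{hv} = S_{vh}) this gives
% M_{hv}-M_{vh} = (S_{hh}+S_{vv})\sin 2\Omega and
% M_{hh}+M_{vv} = (S_{hh}+S_{vv})\cos 2\Omega, hence the estimator
\[
\hat{\Omega} = \tfrac{1}{2}\arctan\frac{M_{hv}-M_{vh}}{M_{hh}+M_{vv}},
\]
% which is ambiguous modulo \pi/2 -- the FRA ambiguity discussed above.
```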

  19. Improvement of age estimation using amino acid racemization in a case of pink teeth.

    Science.gov (United States)

    Ohtani, S; Yamada, Y; Yamamoto, I

    1998-03-01

    Age was estimated from pink teeth using racemization of dentinal aspartic acid. Materials for identification were two lower second premolars. The body was determined to be that of a 40-year-old man; however, the age of the decedent had been estimated to be 29 and 30 years by the conventional method and 30 years from findings in the oral cavity. To clarify the cause of this difference, the powdered teeth were further washed in 0.01 mol/L hydrochloric acid. The racemization ratio (D/L ratio) of ordinary white teeth from persons of known age was slightly lower than that before washing, whereas that of the teeth used for identification was higher than before washing. The calculated age of the decedent using the racemization ratio of his teeth was between 36 and 37 years. These results suggest that age estimated from pink teeth is probably underestimated, but a more accurate age estimate can be obtained after adequate washing.
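
    For context, racemization-based age estimation rests on first-order reversible kinetics of aspartic acid, with the rate constant k calibrated on teeth of known age (standard in this literature, though not restated in the abstract above):

```latex
\[
\ln\!\left(\frac{1 + D/L}{1 - D/L}\right)_{t}
- \ln\!\left(\frac{1 + D/L}{1 - D/L}\right)_{t=0}
= 2kt,
\]
% so the D/L ratio measured in dentine yields the age t once k and the
% ratio at tooth formation are known; inadequate washing biases D/L and
% hence t, which is the effect the study corrects.
```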

  20. Using local multiplicity to improve effect estimation from a hypothesis-generating pharmacogenetics study.

    Science.gov (United States)

    Zou, W; Ouyang, H

    2016-02-01

    We propose a multiple estimation adjustment (MEA) method to correct effect overestimation due to selection bias from a hypothesis-generating study (HGS) in pharmacogenetics. MEA uses a hierarchical Bayesian approach to model individual effect estimates from maximum likelihood estimation (MLE) in a region jointly and shrinks them toward the regional effect. Unlike many methods that model a fixed selection scheme, MEA capitalizes on local multiplicity independent of selection. We compared mean square errors (MSEs) in simulated HGSs from naive MLE, MEA and a conditional likelihood adjustment (CLA) method that models threshold selection bias. We observed that MEA effectively reduced the MSE from MLE on null effects with or without selection, and had a clear advantage over CLA on extreme MLE estimates from null effects under lenient threshold selection in small samples, which are common among 'top' associations from a pharmacogenetics HGS.
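
    The hierarchical model behind MEA is not specified in the abstract; the sketch below shows the simplest normal-normal empirical-Bayes version of "shrink individual MLEs toward the regional effect", with all names and the moment estimator purely illustrative.

```python
import numpy as np

def shrink_to_region(beta_hat, se):
    """Empirical-Bayes shrinkage of per-variant MLE effects toward a
    regional mean.

    beta_hat, se: MLE effect estimates and standard errors for all variants
    in a region. A minimal stand-in for MEA's hierarchical Bayesian model;
    this is the textbook normal-normal version, not the authors' method.
    """
    beta_hat = np.asarray(beta_hat, float)
    v = np.asarray(se, float) ** 2
    mu = np.average(beta_hat, weights=1.0 / v)   # regional mean effect
    # Method-of-moments estimate of the between-variant variance tau^2.
    tau2 = max(np.var(beta_hat) - v.mean(), 0.0)
    b = v / (v + tau2 + 1e-12)                   # per-variant shrinkage
    return b * mu + (1.0 - b) * beta_hat         # shrunken estimates
```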

  1. Improving value of travel time savings estimation for more effective transportation project evaluation.

    Science.gov (United States)

    2012-12-01

    Estimates of value of time (VOT) and value of travel time savings (VTTS) are critical elements in benefit-cost analyses of transportation projects and in developing congestion pricing policies. In addition, differences in VTTS among various modes ...

  2. Comparing welfare estimates across stated preference and uncertainty elicitation formats for air quality improvements in Nairobi, Kenya

    NARCIS (Netherlands)

    Ndambiri, H.; Brouwer, R.; Mungatana, E.

    2016-01-01

    The effect of preference uncertainty on estimated willingness to pay (WTP) is examined using identical payment cards and alternative uncertainty elicitation procedures in three split samples, focusing on air quality improvement in Nairobi. The effect of the stochastic payment card (SPC) and

  3. Statistically optimal estimation of Greenland Ice Sheet mass variations from GRACE monthly solutions using an improved mascon approach

    NARCIS (Netherlands)

    Ran, J.; Ditmar, P.G.; Klees, R.; Farahani, H.

    2017-01-01

    We present an improved mascon approach to transform monthly spherical harmonic solutions based on GRACE satellite data into mass anomaly estimates in Greenland. The GRACE-based spherical harmonic coefficients are used to synthesize gravity anomalies at satellite altitude, which are then inverted

  4. Transmit/Receive Spatial Smoothing with Improved Effective Array Aperture for Angle and Mutual Coupling Estimation in Bistatic MIMO Radar

    Directory of Open Access Journals (Sweden)

    Haomiao Liu

    2016-01-01

    Full Text Available We propose a transmit/receive spatial smoothing with improved effective aperture approach for angle and mutual coupling estimation in bistatic MIMO radar. Firstly, the noise in each channel is restrained in both the spatial and temporal domains by exploiting its independence. Then the augmented transmit and receive spatial smoothing matrices with improved effective aperture are obtained by exploiting the Vandermonde structure of the steering vector of a uniform linear array. The DOD and DOA can be estimated by utilizing the unitary ESPRIT algorithm. Finally, the mutual coupling coefficients of both the transmitter and the receiver can be computed from the estimated DOD and DOA angles. Numerical examples are presented to verify the effectiveness of the proposed method.

  5. Incorporating movement patterns to improve survival estimates for juvenile bull trout

    Science.gov (United States)

    Bowerman, Tracy; Budy, Phaedra

    2012-01-01

    Populations of many fish species are sensitive to changes in vital rates during early life stages, but our understanding of the factors affecting growth, survival, and movement patterns is often extremely limited for juvenile fish. These critical information gaps are particularly evident for bull trout Salvelinus confluentus, a threatened Pacific Northwest char. We combined several active and passive mark–recapture and resight techniques to assess migration rates and estimate survival for juvenile bull trout (70–170 mm total length). We evaluated the relative performance of multiple survival estimation techniques by comparing results from a common Cormack–Jolly–Seber (CJS) model, the less widely used Barker model, and a simple return rate (an index of survival). Juvenile bull trout of all sizes emigrated from their natal habitat throughout the year, and thereafter migrated up to 50 km downstream. With the CJS model, high emigration rates led to an extreme underestimate of apparent survival, a combined estimate of site fidelity and survival. In contrast, the Barker model, which allows survival and emigration to be modeled as separate parameters, produced estimates of survival that were much less biased than the return rate. Estimates of age-class-specific annual survival from the Barker model based on all available data were 0.218±0.028 (estimate±SE) for age-1 bull trout and 0.231±0.065 for age-2 bull trout. This research demonstrates the importance of incorporating movement patterns into survival analyses, and we provide one of the first field-based estimates of juvenile bull trout annual survival in relatively pristine rearing conditions. These estimates can provide a baseline for comparison with future studies in more impacted systems and will help managers develop reliable stage-structured population models to evaluate future recovery strategies.
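
    The key identity underlying the comparison is the standard mark-recapture decomposition (not restated in the abstract): CJS "apparent survival" is the product of true survival and site fidelity,

```latex
\[
\phi_{\text{apparent}} = S \times F,
\]
```

    so high emigration (low F) drags the CJS estimate far below true survival S, whereas the Barker model separates S and F by exploiting resightings away from the study sites.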

  6. Estimating habitat carrying capacity for migrating and wintering waterfowl: Considerations, pitfalls and improvements

    Science.gov (United States)

    Williams, Christopher; Dugger, Bruce D.; Brasher, Michael G.; Coluccy, John M.; Cramer, Dane M.; Eadie, John M.; Gray, Matthew J.; Hagy, Heath M.; Livolsi, Mark; McWilliams, Scott R.; Petrie, Matthew; Soulliere, Gregory J.; Tirpak, John M.; Webb, Elisabeth B.

    2014-01-01

    Population-based habitat conservation planning for migrating and wintering waterfowl in North America is carried out by habitat Joint Venture (JV) initiatives and is based on the premise that food can limit demography (i.e. food limitation hypothesis). Consequently, planners use bioenergetic models to estimate food (energy) availability and population-level energy demands at appropriate spatial and temporal scales, and translate these values into regional habitat objectives. While simple in principle, there are both empirical and theoretical challenges associated with calculating energy supply and demand including: 1) estimating food availability, 2) estimating the energy content of specific foods, 3) extrapolating site-specific estimates of food availability to landscapes for focal species, 4) applicability of estimates from a single species to other species, 5) estimating resting metabolic rate, 6) estimating cost of daily behaviours, and 7) estimating costs of thermoregulation or tissue synthesis. Most models being used are daily ration models (DRMs) whose set of simplifying assumptions are well established and whose use is widely accepted and feasible given the empirical data available to populate such models. However, DRMs do not link habitat objectives to metrics of ultimate ecological importance such as individual body condition or survival, and largely only consider food-producing habitats. Agent-based models (ABMs) provide a possible alternative for creating more biologically realistic models under some conditions; however, ABMs require different types of empirical inputs, many of which have yet to be estimated for key North American waterfowl. Decisions about how JVs can best proceed with habitat conservation would benefit from the use of sensitivity analyses that could identify the empirical and theoretical uncertainties that have the greatest influence on efforts to estimate habitat carrying capacity. Development of ABMs at
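
    To make the daily ration model bookkeeping concrete, here is a hedged sketch with purely illustrative numbers (none of these values come from the JV plans discussed above):

```python
def duck_energy_days(food_kg_per_ha, kcal_per_kg, metabolizable_frac,
                     daily_energy_kcal):
    """Duck energy-days (DED) per hectare under a simple daily ration model.

    All inputs are illustrative assumptions, not JV planning values: food
    biomass available to foragers, its gross energy density, the fraction
    of that energy a bird can metabolize, and one bird's daily energy
    expenditure.
    """
    usable_kcal = food_kg_per_ha * kcal_per_kg * metabolizable_frac
    return usable_kcal / daily_energy_kcal

# e.g. 500 kg/ha of seed at 2500 kcal/kg, 50% metabolizable, for ducks
# that each need ~292 kcal/day:
print(duck_energy_days(500, 2500, 0.5, 292))  # ~2140 duck-days per hectare
```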

  7. Improving spatio-temporal model estimation of satellite-derived PM2.5 concentrations: Implications for public health

    Science.gov (United States)

    Barik, M. G.; Al-Hamdan, M. Z.; Crosson, W. L.; Yang, C. A.; Coffield, S. R.

    2017-12-01

    Satellite-derived environmental data, available in a range of spatio-temporal scales, are contributing to the growing use of health impact assessments of air pollution in the public health sector. Models developed using correlation of Moderate Resolution Imaging Spectrometer (MODIS) Aerosol Optical Depth (AOD) with ground measurements of fine particulate matter less than 2.5 microns (PM2.5) are widely applied to measure PM2.5 spatial and temporal variability. In the public health sector, associations of PM2.5 with respiratory and cardiovascular diseases are often investigated to quantify air quality impacts on these health concerns. In order to improve predictability of PM2.5 estimation using correlation models, we have included meteorological variables, higher-resolution AOD products and instantaneous PM2.5 observations into statistical estimation models. Our results showed that incorporation of high-resolution (1-km) Multi-Angle Implementation of Atmospheric Correction (MAIAC)-generated MODIS AOD, meteorological variables and instantaneous PM2.5 observations improved model performance in various parts of California (CA), USA, where single variable AOD-based models showed relatively weak performance. In this study, we further asked whether these improved models actually would be more successful for exploring associations of public health outcomes with estimated PM2.5. To answer this question, we geospatially investigated model-estimated PM2.5's relationship with respiratory and cardiovascular diseases such as asthma, high blood pressure, coronary heart disease, heart attack and stroke in CA using health data from the Centers for Disease Control and Prevention (CDC)'s Wide-ranging Online Data for Epidemiologic Research (WONDER) and the Behavioral Risk Factor Surveillance System (BRFSS). PM2.5 estimation from these improved models have the potential to improve our understanding of associations between public health concerns and air quality.
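
    A hedged sketch of the kind of multi-variable model described above; the abstract does not name the regression family, so a random forest on synthetic stand-in data is used purely for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000  # synthetic stand-in for collocated AOD/monitor records

# Synthetic covariates standing in for MAIAC 1-km AOD plus meteorology;
# the real study pairs such predictors with ground PM2.5 at monitors.
aod = rng.gamma(2.0, 0.1, n)
temp = rng.normal(18, 8, n)
rh = rng.uniform(20, 90, n)
blh = rng.normal(800, 250, n)  # boundary-layer height, m (illustrative)
pm25 = 35 * aod + 0.05 * rh - 0.01 * blh + rng.normal(0, 2, n)

X = np.column_stack([aod, temp, rh, blh])
model = RandomForestRegressor(n_estimators=300, random_state=0)
print(cross_val_score(model, X, pm25, cv=5, scoring="r2").mean())
```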

  8. Empirical observations offer improved estimates of forest floor carbon content across the United States

    Science.gov (United States)

    Perry, C. H.; Domke, G. M.; Walters, B. F.; Smith, J. E.; Woodall, C. W.

    2014-12-01

    The Forest Inventory and Analysis (FIA) program of the United States Forest Service reports official estimates of national forest floor carbon (FFC) stocks and stock change to national and international parties, the US Environmental Protection Agency (USEPA) and the United Nations Framework Convention on Climate Change (UNFCCC), respectively. These estimates of national FFC stocks are derived from plot-level predictions of FFC density. We suspect the models used to predict plot-level FFC density are less than ideal for several reasons: (a) they are based upon local studies that may not reflect FFC dynamics at the national scale, (b) they are relatively insensitive to climate change, and (c) they reduce the natural variability of the data leading to misplaced confidence in the estimates. However, FIA has measured forest floor attributes since 2001 on a systematic 1/16th subset of a nation-wide array of inventory plots (7 800 of 125 000 plots). Here we address the efficacy of replacing plot-level model predictions with empirical observations of FFC density while assessing the impact of imputing FFC density values to the full plot network on national stock estimates. First, using an equivalence testing framework, we found model predictions of FFC density to differ significantly from the observations in all regions and forest types; the mean difference across all plots was 21 percent (1.81 Mg·ha-1). Furthermore, the model predictions were biased towards the lower end of extant FFC density observations, underestimating it while greatly truncating the range relative to the observations. Second, the optimal imputation approach (k-Nearest Neighbor, k-NN) resulted in values that were equivalent to observations of FFC density across a range of simulated missingness and maintained the high variability seen in the observations. We used the k-NN approach to impute FFC density values to the 94 percent of FIA inventory plots without soil measurements. Third, using the imputed

  9. An improved estimate of SU(4) symmetry mixing in light nuclei

    International Nuclear Information System (INIS)

    Haq, R.; Parikh, J.C.; Bhatt, K.H.

    1974-01-01

    The spectral distribution method of French has been very successful in determining ground-state energies and the mixing intensities of various irreps of a group near the ground state. For the SU(4) group these methods have been used extensively. The method as incorporated actually estimates an upper limit for the mixing, and lower amounts of mixing cannot be ruled out. This is because the total variance σ², which is composed of σ²(external) and σ²(internal), is used for estimating the amount of mixing; whereas σ²(int) gives rise to spreading of the various irreps, it is only σ²(ext) which leads to symmetry mixing. Better methods of estimating the mixing are discussed. (author)

  10. Test models for improving filtering with model errors through stochastic parameter estimation

    International Nuclear Information System (INIS)

    Gershgorin, B.; Harlim, J.; Majda, A.J.

    2010-01-01

    The filtering skill for turbulent signals from nature is often limited by model errors created by utilizing an imperfect model for filtering. Updating the parameters in the imperfect model through stochastic parameter estimation is one way to increase filtering skill and model performance. Here a suite of stringent test models for filtering with stochastic parameter estimation is developed based on the Stochastic Parameterization Extended Kalman Filter (SPEKF). These new SPEKF-algorithms systematically correct both multiplicative and additive biases and involve exact formulas for propagating the mean and covariance including the parameters in the test model. A comprehensive study is presented of robust parameter regimes for increasing filtering skill through stochastic parameter estimation for turbulent signals as the observation time and observation noise are varied and even when the forcing is incorrectly specified. The results here provide useful guidelines for filtering turbulent signals in more complex systems with significant model errors.

  11. Characterization of particulate emissions from Australian open-cut coal mines: Toward improved emission estimates.

    Science.gov (United States)

    Richardson, Claire; Rutherford, Shannon; Agranovski, Igor

    2018-06-01

    Given the significance of mining as a source of particulates, accurate characterization of emissions is important for the development of appropriate emission estimation techniques for use in modeling predictions and to inform regulatory decisions. The currently available emission estimation methods for Australian open-cut coal mines relate primarily to total suspended particulates and PM10 (particulate matter with an aerodynamic diameter below 10 μm), with little information available relating to PM2.5 (below 2.5 μm). To improve on currently available emission estimation techniques, this paper presents results of sampling completed at three open-cut coal mines in Australia. The monitoring data demonstrate that the particulate size fraction varies for different mining activities, and that the region in which the mine is located influences the characteristics of the particulates emitted to the atmosphere. The proportion of fine particulates in the sample increased with distance from the source, with the coarse fraction being a more significant proportion of total suspended particulates close to the source of emissions. In terms of particulate composition, the results demonstrate that the particulate emissions are predominantly sourced from naturally occurring geological material, and coal comprises less than 13% of the overall emissions. The size fractionation exhibited by the sampling data sets is similar to that adopted in current Australian emission estimation methods but differs from the size fractionation presented in the U.S. Environmental Protection Agency methodology. Development of region-specific emission estimation techniques for PM10 and PM2.5 from open-cut coal mines is necessary to allow accurate prediction of particulate emissions, to inform regulatory decisions and for use in modeling predictions.

  12. Adaptive feedforward of estimated ripple improves the closed loop system performance significantly

    International Nuclear Information System (INIS)

    Kwon, S.; Regan, A.; Wang, Y.M.; Rohlev, A.S.

    1998-01-01

    The Low Energy Demonstration Accelerator (LEDA) being constructed at Los Alamos National Laboratory will serve as the prototype for the low-energy section of the Accelerator Production of Tritium (APT) accelerator. This paper addresses the design of the LLRF control system for LEDA. The authors propose an estimator of the ripple and its time derivative, and a control law based on PID control and adaptive feedforward of the estimated ripple. The control law reduces the effect of the deterministic cathode ripple due to the high-voltage power supply and achieves tracking of desired set points.
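
    A minimal sketch of the control law described above: PID feedback plus subtraction of the estimated deterministic ripple before it perturbs the cavity field. The gains, the time step and where the ripple estimate comes from are illustrative assumptions, not LEDA parameters.

```python
class PIDWithRippleFeedforward:
    """PID control with adaptive feedforward of an estimated ripple.

    The estimated deterministic ripple (from a separate estimator, per the
    paper) is subtracted from the PID output so it is cancelled before it
    disturbs the cavity field. Gains are illustrative assumptions.
    """
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measurement, ripple_hat):
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        # Feedback plus cancellation of the predicted power-supply ripple.
        return (self.kp * err + self.ki * self.integral
                + self.kd * deriv - ripple_hat)
```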

  13. Improvement of least-squares collocation error estimates using local GOCE Tzz signal standard deviations

    DEFF Research Database (Denmark)

    Tscherning, Carl Christian

    2015-01-01

    outside the data area. On the other hand, a comparison of predicted quantities with observed values shows that the error also varies depending on the local data standard deviation. This quantity may be (and has been) estimated using the GOCE second-order vertical derivative, Tzz, in the area covered...... by the satellite. The ratio between the nearly constant standard deviations of a predicted quantity (e.g. in a 25° × 25° area) and the standard deviations of Tzz in smaller cells (e.g., 1° × 1°) has been used as a scale factor in order to obtain more realistic error estimates. This procedure has been applied...

  14. Some improvements in the estimation of 137Cs in urine by the AMP-chlorostannate method

    International Nuclear Information System (INIS)

    Kalaiselvan, S.; Prasad, M.V.R.

    1988-01-01

    An accurate and reliable method was developed for the estimation of radiocesium in urine. Initially, cesium is adsorbed on an ammonium phosphomolybdate (AMP) precipitate and separated by ion exchange from other contaminants. The cesium thus separated is estimated as cesium chlorostannate, Cs2SnCl6, from a 50% (v/v) solution of concentrated HCl in ethyl alcohol. While the results are in good agreement with the values obtained by gamma-spectrometry using a Marinelli beaker, the present method has a much lower detection limit. It is observed that the method has significant advantages over the available methods with respect to analysis time, accuracy and detection limits. (author) 10 refs.; 3 tabs.

  15. THE IMPROVEMENT OF ESTIMATION TECHNIQUE FOR EFFECTIVENESS OF INVESTMENT PROJECTS ON WASTE UTILIZATION

    Directory of Open Access Journals (Sweden)

    V.V. Krivorotov

    2008-06-01

    Full Text Available The main tendencies in the formation and recycling of waste products in the Russian Federation and in the Sverdlovsk region are analyzed, and the principal factors restraining the inclusion of anthropogenic formations in the economic circulation are revealed. A technical approach to the estimation of the integral ecological and economic efficiency of recycling projects is proposed which, in the authors' opinion, secures higher objectivity of this estimation as well as the validity of the decisions made on their realization.

  16. Timely disclosure of progress in long-term cancer survival: the boomerang method substantially improved estimates in a comparative study.

    Science.gov (United States)

    Brenner, Hermann; Jansen, Lina

    2016-02-01

    Monitoring cancer survival is a key task of cancer registries, but timely disclosure of progress in long-term survival remains a challenge. We introduce and evaluate a novel method, denoted "boomerang method," for deriving more up-to-date estimates of long-term survival. We applied three established methods (cohort, complete, and period analysis) and the boomerang method to derive up-to-date 10-year relative survival of patients diagnosed with common solid cancers and hematological malignancies in the United States. Using the Surveillance, Epidemiology and End Results 9 database, we compared the most up-to-date age-specific estimates that might have been obtained with the database including patients diagnosed up to 2001 with 10-year survival later observed for patients diagnosed in 1997-2001. For cancers with little or no increase in survival over time, the various estimates of 10-year relative survival potentially available by the end of 2001 were generally rather similar. For malignancies with strongly increasing survival over time, including breast and prostate cancer and all hematological malignancies, the boomerang method provided estimates that were closest to later observed 10-year relative survival in 23 of the 34 groups assessed. The boomerang method can substantially improve up-to-dateness of long-term cancer survival estimates in times of ongoing improvement in prognosis. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Improving Water Balance Estimation in the Nile by Combining Remote Sensing and Hydrological Modelling: a Template for Ungauged Basins

    Science.gov (United States)

    Gleason, C. J.; Wada, Y.; Wang, J.

    2017-12-01

    Declining gauging infrastructure and fractious water politics have decreased available information about river flows globally, especially in international river basins. Remote sensing and water balance modelling are frequently cited as potential solutions, but these techniques largely rely on the same declining gauge data to constrain or parameterize discharge estimates, thus creating a circular approach to estimating discharge that is inapplicable to ungauged basins. To address this, we here combine a discontinued gauge, remotely sensed discharge estimates made via at-many-stations hydraulic geometry (AMHG) and Landsat data, and the PCR-GLOBWB hydrological model to estimate discharge for an ungauged time period on the Lower Nile (1978-present). Specifically, we first estimate initial discharges from 86 Landsat images and AMHG (1984-2015), and then use these flow estimates to tune the hydrological model. Our tuning methodology is purposefully simple and can be easily applied to any model without the need for calibration/parameterization. The resulting tuned modelled hydrograph shows a large improvement in flow magnitude over previous modelled hydrographs, and validation of tuned monthly model output flows against the historical gauge yields an RMSE of 343 m3/s (33.7%). By contrast, the original simulation had an order-of-magnitude flow error. This improvement is substantial but not perfect: modelled flows have a one- to two-month wet-season lag and a negative bias. More sophisticated model calibration and training (e.g. data assimilation) is needed to improve upon our results; however, our results, achieved by coupling physical models and remote sensing, are a promising first step and proof of concept toward future modelling of ungauged flows. This is especially true as massive cloud computing via Google Earth Engine makes our method easily applicable to any basin without current gauges. Finally, we purposefully do not offer prescriptive solutions for Nile management, and

  18. Proposed methodology for estimating the impact of highway improvements on urban air pollution.

    Science.gov (United States)

    1971-01-01

    The aim of this methodology is to indicate the expected change in ambient air quality in the vicinity of a highway improvement and in the total background level of urban air pollution resulting from the highway improvement. Both the jurisdiction in w...

  19. Robust Improvement in Estimation of a Covariance Matrix in an Elliptically Contoured Distribution Respect to Quadratic Loss Function

    Directory of Open Access Journals (Sweden)

    Z. Khodadadi

    2008-03-01

    Full Text Available Let S be the matrix of residual sums of squares in the linear model Y = Aβ + e, where the matrix e is distributed as elliptically contoured with unknown scale matrix Σ. In the present work we consider the problem of estimating Σ with respect to the squared loss function L(Σ̂, Σ) = tr(Σ̂Σ⁻¹ − I)². It is shown that the improvement of the estimators obtained by James and Stein [7] and by Dey and Srinivasan [1] under the normality assumption remains robust under an elliptically contoured distribution with respect to the squared loss function.

  20. A web-based system to facilitate local, systematic quality improvement by multidisciplinary care teams: development and first experiences of CARDSS Online

    NARCIS (Netherlands)

    van Engen-Verheul, Mariëtte M.; van der Veer, Sabine N.; de Keizer, Nicolette F.; Tjon Sjoe Sjoe, Winston; van der Zwan, Eric P. A.; Peek, Niels

    2013-01-01

    Continuous monitoring and systematic improvement of quality have become increasingly common in healthcare. To support multidisciplinary care teams in improving their clinical performance using feedback on quality indicators, we developed the CARDSS Online system. This system supports (i) monitoring

  1. Improved Accuracy of Nonlinear Parameter Estimation with LAV and Interval Arithmetic Methods

    Directory of Open Access Journals (Sweden)

    Humberto Muñoz

    2009-06-01

    Full Text Available The reliable solution of nonlinear parameter estimation problems is an important computational problem in many areas of science and engineering, including such applications as real time optimization. Its goal is to estimate accurate model parameters that provide the best fit to measured data, despite small-scale noise in the data or occasional large-scale measurement errors (outliers). In general, the estimation techniques are based on some kind of least squares or maximum likelihood criterion, and these require the solution of a nonlinear and non-convex optimization problem. Classical solution methods for these problems are local methods, and may not be reliable for finding the global optimum, with no guarantee the best model parameters have been found. Interval arithmetic can be used to compute completely and reliably the global optimum for the nonlinear parameter estimation problem. Finally, experimental results will compare the least squares, l2, and the least absolute value, l1, estimates using interval arithmetic in a chemical engineering application.
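
    The l2 versus l1 contrast drawn above is easy to reproduce; the sketch below fits a toy exponential model under both criteria with SciPy local optimizers (the interval-arithmetic global search itself is beyond a short sketch), showing the l1 fit's robustness to a single outlier.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = np.linspace(0, 5, 40)
y = 2.0 * np.exp(-0.8 * x) + rng.normal(0, 0.02, x.size)
y[5] += 1.0  # one large-scale measurement error (outlier)

def model(p, x):
    return p[0] * np.exp(p[1] * x)

# l2 (least squares) and l1 (least absolute value) criteria:
l2 = minimize(lambda p: np.sum((y - model(p, x)) ** 2), x0=[1.0, -1.0])
l1 = minimize(lambda p: np.sum(np.abs(y - model(p, x))), x0=[1.0, -1.0],
              method="Nelder-Mead")  # derivative-free, as |.| is nonsmooth
print(l2.x, l1.x)  # the l1 fit is far less distorted by the outlier
```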

  2. The potential of spectral mixture analysis to improve the estimation accuracy of tropical forest biomass

    NARCIS (Netherlands)

    Basuki, T.M.; Skidmore, A.K.; Laake, van P.E.; Duren, van I.C.; Hussin, Y.A.

    2012-01-01

    A main limitation of pixel-based vegetation indices or reflectance values for estimating above-ground biomass is that they do not consider the mixed spectral components on the earth's surface covered by a pixel. In this research, we decomposed mixed reflectance in each pixel before developing models

  3. Calibrated Tully-Fisher relations for improved estimates of disc rotation velocities

    NARCIS (Netherlands)

    Reyes, R.; Mandelbaum, R.; Gunn, J. E.; Pizagno II, Jim; Lackner, C. N.

    2011-01-01

    In this paper, we derive scaling relations between photometric observable quantities and disc galaxy rotation velocity V_rot, or Tully-Fisher relations (TFRs). Our methodology is dictated by our purpose of obtaining purely photometric, minimal-scatter estimators of V_rot applicable to large galaxy

  4. Improving estimates of numbers of children with severe acute malnutrition using cohort and survey data

    DEFF Research Database (Denmark)

    Isanaka, Sheila; Boundy, Ellen O neal; Grais, Rebecca F

    2016-01-01

    Severe acute malnutrition (SAM) is reported to affect 19 million children worldwide. However, this estimate is based on prevalence data from cross-sectional surveys and can be expected to miss some children affected by an acute condition such as SAM. The burden of acute conditions is more...

  5. Improved sampling for airborne surveys to estimate wildlife population parameters in the African Savannah

    NARCIS (Netherlands)

    Khaemba, W.; Stein, A.

    2002-01-01

    Parameter estimates, obtained from airborne surveys of wildlife populations, often have large bias and large standard errors. Sampling error is one of the major causes of this imprecision and the occurrence of many animals in herds violates the common assumptions in traditional sampling designs like

  6. Improved exposure estimation in soil screening and cleanup criteria for volatile organic chemicals.

    Science.gov (United States)

    DeVaull, George E

    2017-09-01

    Soil cleanup criteria define acceptable concentrations of organic chemical constituents for exposed humans. These criteria sum the estimated soil exposure over multiple pathways. Assumptions for ingestion, dermal contact, and dust exposure generally presume a chemical persists in surface soils at a constant concentration level for the entire exposure duration. For volatile chemicals, this is an unrealistic assumption. A calculation method is presented for surficial soil criteria that include volatile depletion of chemical for these uptake pathways. The depletion estimates compare favorably with measured concentration profiles and with field measurements of soil concentration. Corresponding volatilization estimates compare favorably with measured data for a wide range of volatile and semivolatile chemicals, including instances with and without the presence of a mixed-chemical residual phase. Selected examples show application of the revised factors in estimating screening levels for benzene in surficial soils. Integr Environ Assess Manag 2017;13:861-869. © 2017 The Authors. Integrated Environmental Assessment and Management published by Wiley Periodicals, Inc. on behalf of Society of Environmental Toxicology & Chemistry (SETAC).

  7. Improved vertical streambed flux estimation using multiple diurnal temperature methods in series

    Science.gov (United States)

    Irvine, Dylan J.; Briggs, Martin A.; Cartwright, Ian; Scruggs, Courtney; Lautz, Laura K.

    2017-01-01

    Analytical solutions that use diurnal temperature signals to estimate vertical fluxes between groundwater and surface water based on either amplitude ratios (Ar) or phase shifts (Δϕ) produce results that rarely agree. Analytical solutions that simultaneously utilize Ar and Δϕ within a single solution have more recently been derived, decreasing uncertainty in flux estimates in some applications. Benefits of combined (ArΔϕ) methods also include that thermal diffusivity and sensor spacing can be calculated. However, poor identification of either Ar or Δϕ from raw temperature signals can lead to erratic parameter estimates from ArΔϕ methods. An add-on program for VFLUX 2 is presented to address this issue. Using thermal diffusivity selected from an ArΔϕ method during a reliable time period, fluxes are recalculated using an Ar method. This approach maximizes the benefits of the Ar and ArΔϕ methods. Additionally, sensor spacing calculations can be used to identify periods with unreliable flux estimates, or to assess streambed scour. Using synthetic and field examples, the use of these solutions in series was particularly useful for gaining conditions where fluxes exceeded 1 m/d.

  8. Improved estimation of hydraulic conductivity by combining stochastically simulated hydrofacies with geophysical data.

    Science.gov (United States)

    Zhu, Lin; Gong, Huili; Chen, Yun; Li, Xiaojuan; Chang, Xiang; Cui, Yijiao

    2016-03-01

    Hydraulic conductivity is a major parameter affecting the output accuracy of groundwater flow and transport models. The most commonly used semi-empirical formula for estimating conductivity is the Kozeny-Carman equation. However, this method alone does not work well with heterogeneous strata. Two important parameters, grain size and porosity, often show spatial variations at different scales. This study proposes a method for estimating conductivity distributions by combining a stochastic hydrofacies model with geophysical methods. The Markov chain model with transition probability matrix was adopted to reconstruct hydrofacies structures for deriving spatial deposit information. The geophysical and hydro-chemical data were used to estimate the porosity distribution through Archie's law. Results show that the stochastically simulated hydrofacies model reflects the sedimentary features with an average model accuracy of 78% in comparison with borehole log data in the Chaobai alluvial fan. The estimated conductivity is reasonable and of the same order of magnitude as the outcomes of the pumping tests. The conductivity distribution is consistent with the sedimentary distributions. This study provides more reliable spatial distributions of the hydraulic parameters for further numerical modeling.
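
    For reference, the two relations this workflow combines are standard; one common form of each is given below, with symbols as conventionally defined, since the abstract does not state them.

```latex
% Kozeny-Carman (one common form): hydraulic conductivity K from porosity
% phi and an effective grain diameter d (often d_10); rho_w, g, mu are
% water density, gravity, and dynamic viscosity.
\[
K = \frac{\rho_w\, g}{\mu}\,\frac{\phi^{3}}{(1-\phi)^{2}}\,\frac{d^{2}}{180}.
\]
% Archie's law: formation factor F from the resistivity R_o of the
% saturated rock and R_w of the pore water, with empirical constants a, m;
% inverting it yields the porosity field used in Kozeny-Carman above.
\[
F = \frac{R_o}{R_w} = a\,\phi^{-m}.
\]
```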

  9. Improvement of Hyperemic Myocardial Oxygen Extraction Fraction Estimation By A Diffusion Prepared Sequence

    Science.gov (United States)

    McCommis, Kyle S.; Koktzoglou, Ioannis; Zhang, Haosen; Goldstein, Thomas A.; Northrup, Benjamin E.; Li, Debiao; Gropler, Robert J.; Zheng, Jie

    2010-01-01

    Myocardial oxygen extraction fraction (OEF) during hyperemia can be estimated using a double-inversion-recovery (DIR) prepared T2-weighted black-blood sequence. Severe irregular ECG-triggering due to elevated heart rate and/or arrhythmias may render it difficult to adequately suppress the flowing left ventricle blood signal and thus potentially cause errors in the estimates of myocardial OEF. Thus, the goal of this study was to evaluate another black-blood technique, a diffusion-weighted (DW)-prepared TSE sequence for its ability to determine regional myocardial OEF during hyperemia. Control dogs and dogs with acute coronary artery stenosis were imaged with both the DIR- and DW-prepared TSE sequences at rest and during either dipyridamole or dobutamine hyperemia. Validation of MRI OEF estimates was performed using blood sampling from the artery and coronary sinus in control dogs. The two methods showed comparable correlations with blood sampling results (R2 = 0.9). Similar OEF estimations for all dogs were observed except for the group of dogs with severe coronary stenosis during dobutamine stress. In these dogs, the DW method provided more physiologically reasonable OEF (hyperemic OEF = 0.75 ± 0.08 vs resting OEF of 0.6) than the DIR method (hyperemic OEF = 0.56 ± 0.10). DW-preparation may be a valuable alternative for more accurate oxygenation measurements during irregular ECG-triggering. PMID:20512871

  10. Improved Stewart platform state estimation using inertial and actuator position measurements

    NARCIS (Netherlands)

    MiletoviC, I.; Pool, D.M.; Stroosma, O.; van Paassen, M.M.; Chu, Q.

    2017-01-01

    Accurate and reliable estimation of the kinematic state of a six degrees-of-freedom Stewart platform is a problem of interest in various engineering disciplines. Particularly so in the area of flight simulation, where the Stewart platform is in widespread use for the generation of motion similar

  11. Using convolutional decoding to improve time delay and phase estimation in digital communications

    Science.gov (United States)

    Ormesher, Richard C [Albuquerque, NM; Mason, John J [Albuquerque, NM

    2010-01-26

    The time delay and/or phase of a communication signal received by a digital communication receiver can be estimated based on a convolutional decoding operation that the communication receiver performs on the received communication signal. If the original transmitted communication signal has been spread according to a spreading operation, a corresponding despreading operation can be integrated into the convolutional decoding operation.

  12. Multi-subject hierarchical inverse covariance modelling improves estimation of functional brain networks.

    Science.gov (United States)

    Colclough, Giles L; Woolrich, Mark W; Harrison, Samuel J; Rojas López, Pedro A; Valdes-Sosa, Pedro A; Smith, Stephen M

    2018-05-07

    A Bayesian model for sparse, hierarchical inverse covariance estimation is presented, and applied to multi-subject functional connectivity estimation in the human brain. It enables simultaneous inference of the strength of connectivity between brain regions at both subject and population level, and is applicable to fMRI, MEG and EEG data. Two versions of the model can encourage sparse connectivity, either using continuous priors to suppress irrelevant connections, or using an explicit description of the network structure to estimate the connection probability between each pair of regions. A large evaluation of this model, and thirteen methods that represent the state of the art of inverse covariance modelling, is conducted using both simulated and resting-state functional imaging datasets. Our novel Bayesian approach has similar performance to the best extant alternative, Ng et al.'s Sparse Group Gaussian Graphical Model algorithm, which also is based on a hierarchical structure. Using data from the Human Connectome Project, we show that these hierarchical models are able to reduce the measurement error in MEG beta-band functional networks by 10%, producing concomitant increases in estimates of the genetic influence on functional connectivity. Copyright © 2018. Published by Elsevier Inc.
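
    The hierarchical Bayesian model itself is bespoke, but the single-subject sparse inverse covariance baseline that this class of methods improves on can be sketched with scikit-learn's graphical lasso; the data here are synthetic stand-ins for one subject's ROI time series.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
ts = rng.normal(size=(500, 10))  # stand-in for (samples x regions) data

# Sparse inverse covariance for a single subject; the paper's hierarchical
# Bayesian model additionally pools information across subjects.
gl = GraphicalLasso(alpha=0.05).fit(ts)

# Convert the precision matrix to partial correlations (the usual
# functional-network edge weights).
d = np.sqrt(np.diag(gl.precision_))
partial_corr = -gl.precision_ / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)
```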

  13. Does a more sophisticated storm erosion model improve probabilistic erosion estimates?

    NARCIS (Netherlands)

    Ranasinghe, R.W.M.R.J.B.; Callaghan, D.; Roelvink, D.

    2013-01-01

    The dependency between the accuracy/uncertainty of storm erosion exceedance estimates obtained via a probabilistic model and the level of sophistication of the structural function (storm erosion model) embedded in the probabilistic model is assessed via the application of Callaghan et al.'s (2008)

  14. Improved arrival-date estimates of Arctic-breeding Dunlin (Calidris alpina arcticola)

    Science.gov (United States)

    Doll, Andrew C.; Lanctot, Richard B.; Stricker, Craig A.; Yezerinac, Stephen M.; Wunder, Michael B.

    2015-01-01

    The use of stable isotopes in animal ecology depends on accurate descriptions of isotope dynamics within individuals. The prevailing assumption that laboratory-derived isotopic parameters apply to free-living animals is largely untested. We used stable carbon isotopes (δ13C) in whole blood from migratory Dunlin (Calidris alpina arcticola) to estimate an in situ turnover rate and individual diet-switch dates. Our in situ results indicated that turnover rates were higher in free-living birds, in comparison to the results of an experimental study on captive Dunlin and estimates derived from a theoretical allometric model. Diet-switch dates from all 3 methods were then used to estimate arrival dates to the Arctic; arrival dates calculated with the in situ turnover rate were later than those with the other turnover-rate estimates, substantially so in some cases. These later arrival dates matched dates when local snow conditions would have allowed Dunlin to settle, and agreed with anticipated arrival dates of Dunlin tracked with light-level geolocators. Our study presents a novel method for accurately estimating arrival dates for individuals of migratory species in which return dates are difficult to document. This may be particularly appropriate for species in which extrinsic tracking devices cannot easily be employed because of cost, body size, or behavioral constraints, and in habitats that do not allow individuals to be detected easily upon first arrival. Thus, this isotopic method offers an exciting alternative approach to better understand how species may be altering their arrival dates in response to changing climatic conditions.

  15. Improved OCV Model of a Li-Ion NMC Battery for Online SOC Estimation Using the Extended Kalman Filter

    Directory of Open Access Journals (Sweden)

    Ines Baccouche

    2017-05-01

    Accurate modeling of the nonlinear relationship between the open circuit voltage (OCV) and the state of charge (SOC) is required for adaptive SOC estimation during lithium-ion (Li-ion) battery operation. Online SOC estimation should meet several constraints, such as computational cost, the number of parameters, and the accuracy of the model. In this paper, these challenges are addressed by proposing an improved, simplified and accurate OCV model of a nickel manganese cobalt (NMC) Li-ion battery, based on an empirical analytical characterization approach. Composed of double exponential and simple quadratic functions containing only five parameters, the proposed model accurately follows the experimental curve with a minor fitting error of 1 mV. The model is also valid over a wide temperature range and takes into account the voltage hysteresis of the OCV. Using this model for SOC estimation with the extended Kalman filter (EKF) helps minimize the execution time and reduces the SOC estimation error to only 3%, compared with about 5% for other existing models. Experiments are also performed to prove that the proposed OCV model incorporated in the EKF estimator exhibits good reliability and precision under various loading profiles and temperatures.
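
    A minimal sketch of how such a five-parameter OCV(SOC) curve could be fitted to characterization data. The record does not give the exact functional form, so the combination of two exponentials and a quadratic term below, as well as the synthetic data and starting values, are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def ocv_model(soc, a, b, c, d, e):
    # Assumed 5-parameter form: two exponentials plus a quadratic term.
    return a * np.exp(b * soc) + c * np.exp(d * soc) + e * soc**2

rng = np.random.default_rng(0)
soc = np.linspace(0.05, 1.0, 60)
true = (3.0, 0.05, -0.8, -12.0, 0.4)            # stand-in "battery" parameters
ocv = ocv_model(soc, *true) + rng.normal(0, 5e-4, soc.size)  # ~0.5 mV noise

p, _ = curve_fit(ocv_model, soc, ocv, p0=[3.0, 0.1, -0.5, -8.0, 0.1], maxfev=20000)
err_mv = 1e3 * np.max(np.abs(ocv_model(soc, *p) - ocv))
print(f"max fitting error: {err_mv:.2f} mV")
```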

  16. Multinomial N-mixture models improve the applicability of electrofishing for developing population estimates of stream-dwelling Smallmouth Bass

    Science.gov (United States)

    Mollenhauer, Robert; Brewer, Shannon K.

    2017-01-01

    Failure to account for variable detection across survey conditions constrains progress in stream ecology and can lead to erroneous stream fish management and conservation decisions. Besides confounding long-term stream fish population trends, variable detection undermines the reliable abundance estimates across a wide range of survey conditions that are fundamental to establishing species–environment relationships. Despite major advancements in accounting for variable detection when surveying animal populations, these approaches remain largely ignored by stream fish scientists, and CPUE remains the most common metric used by researchers and managers. One notable advancement for addressing the challenges of variable detection is the multinomial N-mixture model. Multinomial N-mixture models use a flexible hierarchical framework to model the detection process across sites as a function of covariates; they also accommodate common fisheries survey methods, such as removal and capture–recapture. Effective monitoring of stream-dwelling Smallmouth Bass Micropterus dolomieu populations has long been challenging; therefore, our objective was to examine the use of multinomial N-mixture models to improve the applicability of electrofishing for estimating absolute abundance. We sampled Smallmouth Bass populations by using tow-barge electrofishing across a range of environmental conditions in streams of the Ozark Highlands ecoregion. Using an information-theoretic approach, we identified effort, water clarity, wetted channel width, and water depth as covariates related to variable Smallmouth Bass electrofishing detection. Smallmouth Bass abundance estimates derived from our top model consistently agreed with baseline estimates obtained via snorkel surveys. Additionally, confidence intervals from the multinomial N-mixture models were consistently more precise than those of unbiased Petersen capture–recapture estimates due to the dependency among data sets in the
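
    For readers unfamiliar with multinomial N-mixture models, the sketch below illustrates the core likelihood for a removal design: with Poisson-distributed site abundance, the multinomial removal counts marginalize to independent Poisson counts with mean λ·π_k, where π_k = p(1−p)^(k−1), and detection p is modelled on the logit scale as a function of a site covariate. Data, the covariate, and the shared abundance parameter are hypothetical simplifications, not the authors' fitted model.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

# Toy removal-design likelihood: Poisson abundance thinned by removal-pass
# probabilities gives independent Poisson counts with mean lam * pi_k.
y = np.array([[14, 6, 3], [8, 5, 1], [20, 9, 4], [5, 2, 2]])  # site x pass counts
x = np.array([0.2, -0.5, 1.1, -1.3])                          # e.g. water clarity

def nll(theta):
    b0, b1, loglam = theta
    p = expit(b0 + b1 * x)                      # per-pass capture probability
    lam = np.exp(loglam)                        # shared abundance (simplification)
    k = np.arange(y.shape[1])
    pi = p[:, None] * (1.0 - p[:, None]) ** k   # removal cell probabilities
    mu = lam * pi                               # Poisson-thinned expected counts
    return -np.sum(y * np.log(mu) - mu - gammaln(y + 1))

fit = minimize(nll, x0=[0.0, 0.0, np.log(30.0)], method="Nelder-Mead")
b0, b1, loglam = fit.x
print(f"detection intercept {b0:.2f}, covariate effect {b1:.2f}, "
      f"abundance {np.exp(loglam):.1f}")
```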

  17. Improvements on a patient-specific dose estimation system in nuclear medicine examination

    International Nuclear Information System (INIS)

    Chuang, K. S.; Lu, J. C.; Lin, H. H.; Dong, S. L.; Yang, H. J.; Shih, C. T.; Lin, C. H.; Yao, W. J.; Ni, Y. C.; Jan, M. L.; Chang, S. J.

    2014-01-01

    The purpose of this paper is to develop a patient-specific dose estimation system for nuclear medicine examinations. A dose deposition routine that stores the energy deposited by photons during their flights was embedded in the widely used SimSET Monte Carlo code, and a user-friendly interface for reading PET and CT images was developed. Dose calculated on the ORNL phantom was used to validate the accuracy of the system. The ratios of S values for 99mTc, 18F and 131I computed by this system to those obtained with OLINDA ranged from 0.93 to 1.18 for the various organs, comparable to those obtained with the MCNPX2.6 code (0.88-1.22). The system provides an opportunity for tumor dose estimation, which cannot be obtained from the MIRD approach. The radiation dose can provide useful information on the amount of radioisotope to be administered in radioimmunotherapy. (authors)

  18. Improving Streamflow Simulation in Gaged and Ungaged Areas Using a Multi-Model Synthesis Combined with Remotely-Sensed Data and Estimates of Uncertainty

    Science.gov (United States)

    Lafontaine, J.; Hay, L.

    2015-12-01

    The United States Geological Survey (USGS) has developed a National Hydrologic Model (NHM) to support coordinated, comprehensive and consistent hydrologic model development, and facilitate the application of hydrologic simulations within the conterminous United States (CONUS). More than 1,700 gaged watersheds across the CONUS were modeled to test the feasibility of improving streamflow simulations in gaged and ungaged watersheds by linking statistically- and physically-based hydrologic models with remotely-sensed data products (e.g., snow water equivalent) and estimates of uncertainty. Initially, the physically-based models were calibrated to measured streamflow data to provide a baseline for comparison. As many stream reaches in the CONUS are either not gaged, or are substantially impacted by water use or flow regulation, ancillary information must be used to determine reasonable parameter estimates for streamflow simulations. In addition, not all ancillary datasets are appropriate for application to all parts of the CONUS (e.g., snow water equivalent in the southeastern U.S., where snow is a rarity). As it is not expected that any one data product or model simulation will be sufficient for representing hydrologic behavior across the entire CONUS, a systematic evaluation of which data products improve simulations of streamflow for various regions across the CONUS was performed. The resulting portfolio of calibration strategies can be used to guide selection of an appropriate combination of simulated and measured information for model development and calibration at a given location of interest. In addition, these calibration strategies have been developed to be flexible so that new data products or simulated information can be assimilated. This analysis provides a foundation to understand how well models work when streamflow data is either not available or is limited and could be used to further inform hydrologic model parameter development for ungaged areas.

  19. DeepQA: Improving the estimation of single protein model quality with deep belief networks

    OpenAIRE

    Cao, Renzhi; Bhattacharya, Debswapna; Hou, Jie; Cheng, Jianlin

    2016-01-01

    Background: Protein quality assessment (QA), useful for ranking and selecting protein models, has long been viewed as one of the major challenges for protein tertiary structure prediction. In particular, estimating the quality of a single protein model, which is important for selecting a few good models out of a large model pool consisting of mostly low-quality models, is still a largely unsolved problem. Results: We introduce a novel single-model quality assessment method, DeepQA, based on deep belie...

  20. Improving filtering and prediction of spatially extended turbulent systems with model errors through stochastic parameter estimation

    International Nuclear Information System (INIS)

    Gershgorin, B.; Harlim, J.; Majda, A.J.

    2010-01-01

    The filtering and predictive skill for turbulent signals is often limited by the lack of information about the true dynamics of the system and by our inability to resolve the assumed dynamics with sufficiently high resolution using the current computing power. The standard approach is to use a simple yet rich family of constant parameters to account for model errors through parameterization. This approach can have significant skill by fitting the parameters to some statistical feature of the true signal; however in the context of real-time prediction, such a strategy performs poorly when intermittent transitions to instability occur. Alternatively, we need a set of dynamic parameters. One strategy for estimating parameters on the fly is a stochastic parameter estimation through partial observations of the true signal. In this paper, we extend our newly developed stochastic parameter estimation strategy, the Stochastic Parameterization Extended Kalman Filter (SPEKF), to filtering sparsely observed spatially extended turbulent systems which exhibit abrupt stability transition from time to time despite a stable average behavior. For our primary numerical example, we consider a turbulent system of externally forced barotropic Rossby waves with instability introduced through intermittent negative damping. We find high filtering skill of SPEKF applied to this toy model even in the case of very sparse observations (with only 15 out of the 105 grid points observed) and with unspecified external forcing and damping. Additive and multiplicative bias corrections are used to learn the unknown features of the true dynamics from observations. We also present a comprehensive study of predictive skill in the one-mode context including the robustness toward variation of stochastic parameters, imperfect initial conditions and finite ensemble effect. Furthermore, the proposed stochastic parameter estimation scheme applied to the same spatially extended Rossby wave system demonstrates
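
    The following toy example is not the SPEKF equations themselves, but it illustrates the underlying idea of estimating a dynamic parameter on the fly: the state is augmented with an unknown additive forcing (bias) modelled as a slow random walk, and a standard Kalman filter learns it from sparse, noisy observations. All parameter values are illustrative assumptions.

```python
import numpy as np

# Scalar damped signal u driven by an unknown constant forcing b_true.
# The filter carries the augmented state (u, b) and learns b online.
rng = np.random.default_rng(0)
dt, gamma, b_true, T = 0.1, 0.5, 1.5, 300
F = np.array([[1 - gamma * dt, dt],    # u_{n+1} = (1 - gamma*dt) u_n + dt * b_n
              [0.0, 1.0]])             # b modelled as a (slow) random walk
Q = np.diag([0.02, 1e-4])              # model noise; small drift allowed on b
H = np.array([[1.0, 0.0]])             # only u is observed
R = np.array([[0.05]])

x = np.zeros(2); P = np.eye(2)
u = 0.0
for n in range(T):
    u = (1 - gamma * dt) * u + dt * b_true + np.sqrt(0.02) * rng.normal()
    x, P = F @ x, F @ P @ F.T + Q                     # forecast step
    if n % 5 == 0:                                    # sparse observations
        yobs = u + np.sqrt(R[0, 0]) * rng.normal()
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ (yobs - H @ x)).ravel()          # analysis step
        P = (np.eye(2) - K @ H) @ P
print(f"estimated forcing b: {x[1]:.2f} (true {b_true})")
```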

  1. An Improved Azimuth Angle Estimation Method with a Single Acoustic Vector Sensor Based on an Active Sonar Detection System.

    Science.gov (United States)

    Zhao, Anbang; Ma, Lin; Ma, Xuefei; Hui, Juan

    2017-02-20

    In this paper, an improved azimuth angle estimation method with a single acoustic vector sensor (AVS) is proposed based on matched filtering theory. The proposed method is mainly applied in an active sonar detection system. Starting from the conventional passive method based on complex acoustic intensity measurement, the mathematical and physical model of the proposed method is described in detail. Computer simulation and lake experiment results indicate that this method can realize azimuth angle estimation with high precision using only a single AVS. Compared with the conventional method, the proposed method achieves better estimation performance. Moreover, it does not require complex operations in the frequency domain and thus reduces computational complexity.
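
    A rough sketch of the idea, under assumed signal parameters: each AVS channel (pressure and the two velocity components) is matched-filtered with the known transmit replica, and the azimuth follows from the averaged acoustic-intensity components around the correlation peak. This illustrates the principle, not the authors' exact processing chain.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, f0, dur = 48_000, 2_000, 0.05
t = np.arange(int(fs * dur)) / fs
replica = np.sin(2 * np.pi * f0 * t)            # known transmitted waveform

theta_true = np.deg2rad(35.0)
delay = 1200                                    # echo delay in samples (assumed)
n = int(fs * 0.2)
p = np.zeros(n); p[delay:delay + replica.size] = replica
vx, vy = np.cos(theta_true) * p, np.sin(theta_true) * p
noise = lambda: 0.3 * rng.normal(size=n)
p, vx, vy = p + noise(), vx + noise(), vy + noise()

mf = lambda ch: np.correlate(ch, replica, mode="valid")  # matched filtering
mp, mx, my = mf(p), mf(vx), mf(vy)
k = np.argmax(np.abs(mp))                       # detection at correlation peak
w = slice(max(k - 5, 0), k + 6)                 # small averaging window
Ix, Iy = np.sum(mp[w] * mx[w]), np.sum(mp[w] * my[w])    # intensity components
print(f"estimated azimuth: {np.rad2deg(np.arctan2(Iy, Ix)):.1f} deg")
```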

  2. $\Upsilon\overline{B}B$ couplings, slope of the Isgur-Wise function and improved estimate of $V_{cb}$

    CERN Document Server

    Narison, Stéphan

    1994-01-01

    We estimate the sum of the $\Upsilon\overline{B}B$ couplings using QCD Spectral Sum Rules (QSSR). Our result implies the phenomenological bound $\xi'(vv'=1) \geq -1.04$ for the slope of the Isgur-Wise function. An analytic estimate of the (physical) slope to two loops within QSSR leads to the accurate value $\xi'(vv'=1) \simeq -(1.00 \pm 0.02)$ due to the (almost) complete cancellations between the perturbative and non-perturbative corrections at the stability points. Then, we deduce, from the present data, the improved estimate $\vert V_{cb} \vert \simeq \left(1.48\,\mathrm{ps}/\tau_B\right)^{1/2}(37.3 \pm 1.2 \pm 1.4)\times 10^{-3}$, where the first error comes from the data analysis and the second one from the different model parametrizations of the Isgur-Wise function.

  3. Using GRAPPA to improve autocalibrated coil sensitivity estimation for the SENSE family of parallel imaging reconstruction algorithms.

    Science.gov (United States)

    Hoge, W Scott; Brooks, Dana H

    2008-08-01

    Two strategies are widely used in parallel MRI to reconstruct subsampled multicoil image data. SENSE and related methods employ explicit estimates of the receiver coils' spatial response to reconstruct an image. In contrast, coil-by-coil methods such as GRAPPA leverage correlations among the acquired multicoil data to reconstruct missing k-space lines. In self-referenced scenarios, both methods employ Nyquist-rate low-frequency k-space data to identify the reconstruction parameters. Because GRAPPA does not require explicit coil sensitivity estimates, it needs considerably fewer autocalibration signals than SENSE. However, SENSE methods allow greater opportunity to control reconstruction quality through regularization and thus may outperform GRAPPA in some imaging scenarios. Here, we employ GRAPPA to improve self-referenced coil sensitivity estimation in SENSE and related methods using very few autocalibration signals. This enables one to leverage each method's inherent strengths and produce high-quality self-referenced SENSE reconstructions.

  4. An Improved Azimuth Angle Estimation Method with a Single Acoustic Vector Sensor Based on an Active Sonar Detection System

    Directory of Open Access Journals (Sweden)

    Anbang Zhao

    2017-02-01

    In this paper, an improved azimuth angle estimation method with a single acoustic vector sensor (AVS) is proposed based on matched filtering theory. The proposed method is mainly applied in an active sonar detection system. Starting from the conventional passive method based on complex acoustic intensity measurement, the mathematical and physical model of the proposed method is described in detail. Computer simulation and lake experiment results indicate that this method can realize azimuth angle estimation with high precision using only a single AVS. Compared with the conventional method, the proposed method achieves better estimation performance. Moreover, it does not require complex operations in the frequency domain and thus reduces computational complexity.

  5. An improved geographically weighted regression model for PM2.5 concentration estimation in large areas

    Science.gov (United States)

    Zhai, Liang; Li, Shuang; Zou, Bin; Sang, Huiyong; Fang, Xin; Xu, Shan

    2018-05-01

    Considering the spatially non-stationary contributions of environmental variables to PM2.5 variations, the geographically weighted regression (GWR) modeling method has been widely used to estimate PM2.5 concentrations. However, most GWR models in reported studies so far were established on predictors screened through pretreatment correlation analysis, a process that might omit factors that really drive PM2.5 variations. This study therefore developed a best subsets regression (BSR) enhanced principal component analysis-GWR (PCA-GWR) modeling approach to estimate PM2.5 concentration while considering the contributions of all potential variables simultaneously. A performance comparison between PCA-GWR and regular GWR was conducted in the Beijing-Tianjin-Hebei (BTH) region over a one-year period. Results indicated that PCA-GWR outperforms regular GWR, with clearly higher model-fitting and cross-validation adjusted R² and lower RMSE. Meanwhile, the PM2.5 concentration map from PCA-GWR also depicts more spatial variation detail than the one from regular GWR. It can be concluded that BSR-enhanced PCA-GWR modeling could be a reliable approach for air pollution concentration estimation, as it involves the contributions of all potential predictor variables to PM2.5 variations.
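
    As background for the comparison, a bare-bones GWR step is sketched below: at an estimation point, a Gaussian distance kernel weights the observations and local coefficients are obtained by weighted least squares. The PCA and best-subsets enhancements of the paper are not reproduced; coordinates, predictors and the bandwidth are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
n, bw = 200, 0.8                                 # kernel bandwidth is an assumption
coords = rng.uniform(0, 5, size=(n, 2))          # station locations
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # e.g. AOD, meteorology
beta_true = np.array([50.0, 8.0, -3.0])
y = X @ beta_true + coords[:, 0] * 2 + rng.normal(scale=2, size=n)  # "PM2.5"

def local_coefficients(pt):
    d2 = np.sum((coords - pt) ** 2, axis=1)
    w = np.exp(-d2 / (2 * bw**2))                # Gaussian kernel weights
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # local WLS fit

print(local_coefficients(np.array([2.5, 2.5])))  # spatially varying coefficients
```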

  6. An Improved Fuzzy Based Missing Value Estimation in DNA Microarray Validated by Gene Ranking

    Directory of Open Access Journals (Sweden)

    Sujay Saha

    2016-01-01

    Most gene expression data analysis algorithms require the entire gene expression matrix without any missing values. Hence, it is necessary to devise methods that impute missing data values accurately. A number of imputation algorithms exist to estimate missing values. This work starts with a microarray dataset containing multiple missing values. We first apply a modified version of the existing fuzzy-theory-based method LRFDVImpute to impute multiple missing values in time series gene expression data, and then validate the imputation results with a genetic algorithm (GA)-based gene ranking methodology along with standard statistical validation techniques such as RMSE. Gene ranking, to the best of our knowledge, has not previously been used to validate missing value estimation. The proposed method was first tested on the popular Spellman dataset, and the results show that error margins were drastically reduced compared with previous works, which indirectly validates the statistical significance of the proposed method. It was then applied to four other two-class benchmark datasets - the Colorectal Cancer tumours dataset (GDS4382), the Breast Cancer dataset (GSE349-350), the Prostate Cancer dataset, and DLBCL-FL (Leukaemia) - for both missing value estimation and gene ranking, and the results show that the proposed method can reach 100% classification accuracy with very few dominant genes, which indirectly validates its biological significance.

  7. Exploiting magnetic resonance angiography imaging improves model estimation of BOLD signal.

    Directory of Open Access Journals (Sweden)

    Zhenghui Hu

    The change of the BOLD signal relies heavily upon the resting blood volume fraction (V0) associated with regional vasculature. However, existing hemodynamic data assimilation studies have neglected this concern: they simply assign a value in a physiologically plausible range to get over the ill-conditioning of the assimilation problem and fail to explore the actual V0. Such practice may lead to unreliable model estimation. In this work, we present the first exploration of the influence of V0 on fMRI data assimilation, where the actual V0 within a given cortical area was calibrated by an MR angiography experiment and then incorporated into the assimilation scheme. We investigated the impact of V0 on single-region data assimilation and multi-region data assimilation (dynamic causal modeling, DCM) in a classical flashing-checkerboard experiment. Results show that the use of an assumed V0 in fMRI data assimilation is only suitable for fMRI signal reconstruction and activation detection based on this signal, and not for estimation of unobserved states or effective connectivity studies. We thereby argue that introducing a physically realistic V0 into the assimilation process may provide more reliable estimation of physiological information, contributing to a better understanding of the underlying hemodynamic processes. Such an effort is valuable and should be well appreciated.

  8. The use of best estimate codes to improve the simulation in real time

    International Nuclear Information System (INIS)

    Rivero, N.; Esteban, J. A.; Lenhardt, G.

    2007-01-01

    Best estimate codes are assumed to be the technology providing the most realistic and accurate response. Best estimate technology provides a complementary solution to the conservative simulation technology usually applied to determine plant safety margins and perform safety-related studies. In the early 1990s, within the MAS project, Tecnatom pioneered the initiative to implement best estimate codes in its training simulators. The result of this project was the first six-equation thermal-hydraulic code worldwide (TRAC-RT) running in a training environment. To meet real-time and other specific training requirements, important difficulties had to be overcome. Tecnatom has just adapted the Global Nuclear Fuel core design code PANAC11 and is about to complete the adaptation of the General Electric TRACG04 thermal-hydraulic code. This technology features a unique solution for nuclear plants aiming to provide the highest fidelity in simulation, enabling the simulator to serve as a multipurpose simulation platform for both engineering and training. In addition, a visual environment designed to optimize the models' life cycle, covering both pre- and post-processing activities, is in its late development phase. (Author)

  9. Improved remote gaze estimation using corneal reflection-adaptive geometric transforms

    Science.gov (United States)

    Ma, Chunfei; Baek, Seung-Jin; Choi, Kang-A.; Ko, Sung-Jea

    2014-05-01

    Recently, the remote gaze estimation (RGE) technique has been widely applied to consumer devices as a more natural interface. In general, the conventional RGE method estimates a user's point of gaze using a geometric transform, which represents the relationship between several infrared (IR) light sources and their corresponding corneal reflections (CRs) in the eye image. Among various methods, the homography normalization (HN) method achieves state-of-the-art performance. However, the geometric transform of the HN method requiring four CRs is infeasible for the case when fewer than four CRs are available. To solve this problem, this paper proposes a new RGE method based on three alternative geometric transforms, which are adaptive to the number of CRs. Unlike the HN method, the proposed method not only can operate with two or three CRs, but can also provide superior accuracy. To further enhance the performance, an effective error correction method is also proposed. By combining the introduced transforms with the error-correction method, the proposed method not only provides high accuracy and robustness for gaze estimation, but also allows for a more flexible system setup with a different number of IR light sources. Experimental results demonstrate the effectiveness of the proposed method.

  10. Carbon Footprint estimation for a Sustainable Improvement of Supply Chains: State of the Art

    Directory of Open Access Journals (Sweden)

    Pilar Cordero

    2013-07-01

    Purpose: This paper examines the current methodologies and approaches developed to estimate the carbon footprint of supply chains, and reviews the literature on the application of these methodologies and other new approaches proposed by some authors. Design/methodology/approach: Literature review of methodologies developed for determining greenhouse gas emissions throughout the supply chain of a given sector or organization. Findings and Originality/value: Owing to their usefulness for the design and management of sustainable supply chains, methodologies for calculating the carbon footprint across the supply chain are recommended by many authors, not only to reduce GHG emissions but also to do so in a cost-effective manner. Although these approaches are in the first stages of development and the literature is scarce, different methodologies for estimating CF emissions have been developed, including EIO analysis models and standardized methods and guidance. Some of them are applicable to supply chains, especially methodologies for calculating the CF of a specific economic sector's supply chain in a territory or country, and for calculating the CF of an organization, applicable to the estimation of GHG emissions of a specific company's supply chain.

  11. Improved protocol and data analysis for accelerated shelf-life estimation of solid dosage forms.

    Science.gov (United States)

    Waterman, Kenneth C; Carella, Anthony J; Gumkowski, Michael J; Lukulay, Patrick; MacDonald, Bruce C; Roy, Michael C; Shamblin, Sheri L

    2007-04-01

    To propose and test a new accelerated aging protocol for solid-state, small molecule pharmaceuticals which provides faster predictions for drug substance and drug product shelf-life. The concept of an isoconversion paradigm, where times in different temperature and humidity-controlled stability chambers are set to provide a critical degradant level, is introduced for solid-state pharmaceuticals. Reliable estimates for temperature and relative humidity effects are handled using a humidity-corrected Arrhenius equation, where temperature and relative humidity are assumed to be orthogonal. Imprecision is incorporated into a Monte-Carlo simulation to propagate the variations inherent in the experiment. In early development phases, greater imprecision in predictions is tolerated to allow faster screening with reduced sampling. Early development data are then used to design appropriate test conditions for more reliable later stability estimations. Examples are reported showing that predicted shelf-life values for lower temperatures and different relative humidities are consistent with the measured shelf-life values at those conditions. The new protocols and analyses provide accurate and precise shelf-life estimations in a reduced time from current state of the art.
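
    The humidity-corrected Arrhenius equation referred to here is commonly written ln k = ln A − Ea/(RT) + B·RH, with temperature and relative humidity treated as orthogonal. The sketch below turns assumed parameter values into isoconversion times (the time for the degradant to reach a critical level, assuming pseudo-zero-order kinetics at low conversion); the numbers are illustrative, not the paper's.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

def rate(T_kelvin, rh_percent, lnA=38.0, Ea=120e3, B=0.05):
    # Humidity-corrected Arrhenius: ln k = ln A - Ea/(R*T) + B*RH.
    # Returns degradation rate in % degradant per day (assumed units).
    return np.exp(lnA - Ea / (R * T_kelvin) + B * rh_percent)

crit = 0.5  # critical degradant level in % (assumed specification limit)
for T_c, rh in [(70, 75), (60, 40), (25, 60)]:
    k = rate(T_c + 273.15, rh)
    print(f"{T_c} C / {rh}% RH: isoconversion time ~ {crit / k:.1f} days")
```

    Run with these stand-in values, days at 70 °C correspond to years at 25 °C, which is the essence of the accelerated protocol.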

  12. Use of Multiple Imputation Method to Improve Estimation of Missing Baseline Serum Creatinine in Acute Kidney Injury Research

    Science.gov (United States)

    Peterson, Josh F.; Eden, Svetlana K.; Moons, Karel G.; Ikizler, T. Alp; Matheny, Michael E.

    2013-01-01

    Summary: Background and objectives: Baseline creatinine (BCr) is frequently missing in AKI studies. Common surrogate estimates can misclassify AKI and adversely affect the study of related outcomes. This study examined whether multiple imputation improved the accuracy of estimating missing BCr beyond the current recommendation to apply an assumed estimated GFR (eGFR) of 75 ml/min per 1.73 m2 (eGFR 75). Design, setting, participants, & measurements: From 41,114 unique adult admissions (13,003 with and 28,111 without BCr data) at Vanderbilt University Hospital between 2006 and 2008, a propensity score model was developed to predict the likelihood of missing BCr. Propensity scoring identified the 6502 patients with the highest likelihood of missing BCr among the 13,003 patients with known BCr, to simulate a "missing" data scenario while preserving the actual reference BCr. Within this cohort (n=6502), the ability of various multiple-imputation approaches to estimate BCr and classify AKI was compared with that of eGFR 75. Results: All multiple-imputation methods except the basic one more closely approximated actual BCr than did eGFR 75. Total AKI misclassification was lower with multiple imputation (full multiple imputation + serum creatinine; 9.0%) than with eGFR 75 (12.3%; P<0.001), as was misclassification of AKI stage with multiple imputation (full multiple imputation + serum creatinine; 15.3%) versus eGFR 75 (40.5%; P<0.001). Multiple imputation improved specificity and positive predictive value for detecting AKI at the expense of a modest decrease in sensitivity relative to eGFR 75. Conclusions: Multiple imputation can improve accuracy in estimating missing BCr and reduce misclassification of AKI beyond currently proposed methods. PMID:23037980
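
    In the same spirit (though not the authors' exact model), missing baseline creatinine can be multiply imputed from covariates by drawing several posterior imputations and pooling them. The sketch below uses scikit-learn's IterativeImputer with hypothetical columns and synthetic data.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(3)
n = 500
age = rng.uniform(20, 90, n)
bcr = 0.6 + 0.005 * age + rng.normal(0, 0.1, n)       # true baseline creatinine
adm_cr = bcr + np.abs(rng.normal(0, 0.4, n))          # admission creatinine
bcr_obs = bcr.copy()
bcr_obs[rng.random(n) < 0.4] = np.nan                 # 40% missing baseline

X = np.column_stack([age, adm_cr, bcr_obs])
draws = [IterativeImputer(sample_posterior=True, random_state=i)
         .fit_transform(X)[:, 2] for i in range(5)]   # m = 5 posterior imputations
bcr_mi = np.mean(draws, axis=0)                       # pooled point estimate
mask = np.isnan(bcr_obs)
print(f"RMSE of imputed BCr: {np.sqrt(np.mean((bcr_mi[mask]-bcr[mask])**2)):.3f}")
```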

  13. Performance improvement of coherent free-space optical communication with quadrature phase-shift keying modulation using digital phase estimation.

    Science.gov (United States)

    Li, Xueliang; Geng, Tianwen; Ma, Shuang; Li, Yatian; Gao, Shijie; Wu, Zhiyong

    2017-06-01

    The performance of coherent free-space optical (CFSO) communication with phase modulation is limited by both phase fluctuations and intensity scintillations induced by atmospheric turbulence. One effective way to improve system performance is digital phase estimation. In this paper, a CFSO communication system with quadrature phase-shift keying modulation is studied. Taking into account the effects of log-normal amplitude fluctuations and Gaussian phase fluctuations, a two-stage Mth-power carrier phase estimation (CPE) scheme is proposed. Simulation results show that this scheme greatly suppresses the phase noise: the symbol error rate with the two-stage Mth-power CPE can be three orders of magnitude lower than that of the single-stage Mth-power CPE. The proposed two-stage CPE can therefore contribute to performance improvements in CFSO communication systems and offers practical guidance for real applications.
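
    The core of an Mth-power CPE for QPSK (M = 4) can be shown in a few lines: raising the received symbols to the 4th power strips the data modulation, so the angle of the averaged 4th power, divided by 4, recovers the carrier phase offset (valid for offsets below π/4). The paper's two-stage refinement is omitted, and the noise model is an assumption.

```python
import numpy as np

rng = np.random.default_rng(4)
M, nsym, phi = 4, 1024, 0.20                     # true carrier phase offset (rad)
bits = rng.integers(0, 4, nsym)
s = np.exp(1j * (np.pi / 2) * bits)              # unit-energy QPSK symbols
noise = 0.1 * (rng.normal(size=nsym) + 1j * rng.normal(size=nsym))
r = s * np.exp(1j * phi) + noise                 # received block

phi_hat = np.angle(np.mean(r ** M)) / M          # Mth-power CPE core step
print(f"true {phi:.3f} rad, estimated {phi_hat:.3f} rad")
```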

  14. Improving global fire carbon emissions estimates by combining moderate resolution burned area and active fire observations

    Science.gov (United States)

    Randerson, J. T.; Chen, Y.; Giglio, L.; Rogers, B. M.; van der Werf, G.

    2011-12-01

    In several important biomes, including croplands and tropical forests, many small fires exist that have sizes that are well below the detection limit for the current generation of burned area products derived from moderate resolution spectroradiometers. These fires likely have important effects on greenhouse gas and aerosol emissions and regional air quality. Here we developed an approach for combining 1km thermal anomalies (active fires; MOD14A2) and 500m burned area observations (MCD64A1) to estimate the prevalence of these fires and their likely contribution to burned area and carbon emissions. We first estimated active fires within and outside of 500m burn scars in 0.5 degree grid cells during 2001-2010 for which MCD64A1 burned area observations were available. For these two sets of active fires we then examined mean fire radiative power (FRP) and changes in enhanced vegetation index (EVI) derived from 16-day intervals immediately before and after each active fire observation. To estimate the burned area associated with sub-500m fires, we first applied burned area to active fire ratios derived solely from within burned area perimeters to active fires outside of burn perimeters. In a second step, we further modified our sub-500m burned area estimates using EVI changes from active fires outside and within of burned areas (after subtracting EVI changes derived from control regions). We found that in northern and southern Africa savanna regions and in Central and South America dry forest regions, the number of active fires outside of MCD64A1 burned areas increased considerably towards the end of the fire season. EVI changes for active fires outside of burn perimeters were, on average, considerably smaller than EVI changes associated with active fires inside burn scars, providing evidence for burn scars that were substantially smaller than the 25 ha area of a single 500m pixel. FRP estimates also were lower for active fires outside of burn perimeters. In our
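
    A back-of-envelope version of the two-step scaling described above, with purely illustrative numbers: burned area per active fire derived inside mapped burn scars is transferred to active fires outside the scars, then damped by the ratio of mean EVI decreases as a proxy for smaller burn size.

```python
# Illustrative small-fire burned-area scaling; all values are hypothetical.
ba_in = 1.2e4        # km^2 of MCD64A1 burned area in a grid cell
af_in = 3.0e3        # active fires inside burn perimeters
af_out = 1.0e3       # active fires outside burn perimeters
devi_out, devi_in = 0.02, 0.08   # mean EVI decreases, outside vs inside scars

ba_per_af = ba_in / af_in                      # step 1: area per active fire
ba_small = af_out * ba_per_af * (devi_out / devi_in)  # step 2: EVI damping
print(f"estimated additional small-fire burned area: {ba_small:.0f} km^2")
```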

  15. MHODE: a local-homogeneity theory for improved source-parameter estimation of potential fields

    Science.gov (United States)

    Fedi, Maurizio; Florio, Giovanni; Paoletti, Valeria

    2015-08-01

    We describe a multihomogeneity theory for source-parameter estimation of potential fields. Similar to what happens for random source models, where the monofractal scaling law has been generalized into a multifractal law, we propose to generalize the homogeneity law into a multihomogeneity law. This allows a theoretically correct approach to studying real-world potential fields, which are inhomogeneous and so do not show scale invariance, except in the asymptotic regions (very near to or very far from their sources). Since the scaling properties of inhomogeneous fields change with the scale of observation, we show that they may be better studied at a set of scales than at a single scale, and that a multihomogeneous model is needed to explain their complex scaling behaviour. To perform this task, we first introduce fractional-degree homogeneous fields, to show that: (i) homogeneous potential fields may have fractional or integer degree; (ii) the source distributions for a fractional degree are not confined to a bounded region, similarly to some integer-degree models, such as the infinite line mass; and (iii) differently from the integer-degree case, fractional-degree source distributions are no longer uniform density functions. Using this enlarged set of homogeneous fields, real-world anomaly fields are studied at different scales by a simple search, in any local window W, for the best homogeneous field of either integer or fractional degree, yielding a multiscale set of local homogeneity degrees and depth estimates that we call a multihomogeneous model. This defines a new source-parameter estimation technique (Multi-HOmogeneity Depth Estimation, MHODE) that permits retrieval of the source parameters of complex sources. We test the method with inhomogeneous fields of finite sources, such as faults or cylinders, and show its effectiveness in a real-case example. These applications show the usefulness of the new concepts, multihomogeneity and

  16. Improved dose–volume histogram estimates for radiopharmaceutical therapy by optimizing quantitative SPECT reconstruction parameters

    International Nuclear Information System (INIS)

    Cheng Lishui; Hobbs, Robert F; Sgouros, George; Frey, Eric C; Segars, Paul W

    2013-01-01

    In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose–volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVHs estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator–detector response (CDR). Dose rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. Smaller objects with fine details, such as the kidneys, required more iterations and less

  17. Improved dose-volume histogram estimates for radiopharmaceutical therapy by optimizing quantitative SPECT reconstruction parameters

    Science.gov (United States)

    Cheng, Lishui; Hobbs, Robert F.; Segars, Paul W.; Sgouros, George; Frey, Eric C.

    2013-06-01

    In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose-volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVHs estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator-detector response (CDR). Dose rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. Smaller objects with fine details, such as the kidneys, required more iterations and less

  18. An improved analysis of gravity drainage experiments for estimating the unsaturated soil hydraulic functions

    Science.gov (United States)

    Sisson, James B.; van Genuchten, Martinus Th.

    1991-04-01

    The unsaturated hydraulic properties are important parameters in any quantitative description of water and solute transport in partially saturated soils. Currently, most in situ methods for estimating the unsaturated hydraulic conductivity (K) are based on analyses that require estimates of the soil water flux and the pressure head gradient. These analyses typically involve differencing of field-measured pressure head (h) and volumetric water content (θ) data, a process that can significantly amplify instrumental and measurement errors. More reliable methods result when differencing of field data can be avoided. One such method is based on estimates of the gravity drainage curve K'(θ) = dK/dθ which may be computed from observations of θ and/or h during the drainage phase of infiltration drainage experiments assuming unit gradient hydraulic conditions. The purpose of this study was to compare estimates of the unsaturated soil hydraulic functions on the basis of different combinations of field data θ, h, K, and K'. Five different data sets were used for the analysis: (1) θ-h, (2) K-θ, (3) K'-θ (4) K-θ-h, and (5) K'-θ-h. The analysis was applied to previously published data for the Norfolk, Troup, and Bethany soils. The K-θ-h and K'-θ-h data sets consistently produced nearly identical estimates of the hydraulic functions. The K-θ and K'-θ data also resulted in similar curves, although results in this case were less consistent than those produced by the K-θ-h and K'-θ-h data sets. We conclude from this study that differencing of field data can be avoided and hence that there is no need to calculate soil water fluxes and pressure head gradients from inherently noisy field-measured θ and h data. The gravity drainage analysis also provides results over a much broader range of hydraulic conductivity values than is possible with the more standard instantaneous profile analysis, especially when augmented with independently measured soil water retention data.

  19. Improving statistical inference on pathogen densities estimated by quantitative molecular methods: malaria gametocytaemia as a case study.

    Science.gov (United States)

    Walker, Martin; Basáñez, María-Gloria; Ouédraogo, André Lin; Hermsen, Cornelus; Bousema, Teun; Churcher, Thomas S

    2015-01-16

    Quantitative molecular methods (QMMs) such as quantitative real-time polymerase chain reaction (q-PCR), reverse-transcriptase PCR (qRT-PCR) and quantitative nucleic acid sequence-based amplification (QT-NASBA) are increasingly used to estimate pathogen density in a variety of clinical and epidemiological contexts. These methods are often classified as semi-quantitative, yet estimates of reliability or sensitivity are seldom reported. Here, a statistical framework is developed for assessing the reliability (uncertainty) of pathogen densities estimated using QMMs and the associated diagnostic sensitivity. The method is illustrated with quantification of Plasmodium falciparum gametocytaemia by QT-NASBA. The reliability of pathogen (e.g. gametocyte) densities, and the accompanying diagnostic sensitivity, estimated by two contrasting statistical calibration techniques, are compared; a traditional method and a mixed model Bayesian approach. The latter accounts for statistical dependence of QMM assays run under identical laboratory protocols and permits structural modelling of experimental measurements, allowing precision to vary with pathogen density. Traditional calibration cannot account for inter-assay variability arising from imperfect QMMs and generates estimates of pathogen density that have poor reliability, are variable among assays and inaccurately reflect diagnostic sensitivity. The Bayesian mixed model approach assimilates information from replica QMM assays, improving reliability and inter-assay homogeneity, providing an accurate appraisal of quantitative and diagnostic performance. Bayesian mixed model statistical calibration supersedes traditional techniques in the context of QMM-derived estimates of pathogen density, offering the potential to improve substantially the depth and quality of clinical and epidemiological inference for a wide variety of pathogens.

  20. Estimated capacity of object files in visual short-term memory is not improved by retrieval cueing.

    Science.gov (United States)

    Saiki, Jun; Miyatsuji, Hirofumi

    2009-03-23

    Visual short-term memory (VSTM) has been claimed to maintain three to five feature-bound object representations. Some results showing smaller capacity estimates for feature binding memory have been interpreted as the effects of interference in memory retrieval. However, change-detection tasks may not properly evaluate complex feature-bound representations such as triple conjunctions in VSTM. To understand the general type of feature-bound object representation, evaluation of triple conjunctions is critical. To test whether interference occurs in memory retrieval for complete object file representations in a VSTM task, we cued retrieval in novel paradigms that directly evaluate the memory for triple conjunctions, in comparison with a simple change-detection task. In our multiple object permanence tracking displays, observers monitored for a switch in feature combination between objects during an occlusion period, and we found that a retrieval cue provided no benefit with the triple conjunction tasks, but significant facilitation with the change-detection task, suggesting that low capacity estimates of object file memory in VSTM reflect a limit on maintenance, not retrieval.

  1. Using river distance and existing hydrography data can improve the geostatistical estimation of fish tissue mercury at unsampled locations.

    Science.gov (United States)

    Money, Eric S; Sackett, Dana K; Aday, D Derek; Serre, Marc L

    2011-09-15

    Mercury in fish tissue is a major human health concern. Consumption of mercury-contaminated fish poses risks to the general population, including potentially serious developmental defects and neurological damage in young children. Therefore, it is important to accurately identify areas that have the potential for high levels of bioaccumulated mercury. However, due to time and resource constraints, it is difficult to adequately assess fish tissue mercury on a basin wide scale. We hypothesized that, given the nature of fish movement along streams, an analytical approach that takes into account distance traveled along these streams would improve the estimation accuracy for fish tissue mercury in unsampled streams. Therefore, we used a river-based Bayesian Maximum Entropy framework (river-BME) for modern space/time geostatistics to estimate fish tissue mercury at unsampled locations in the Cape Fear and Lumber Basins in eastern North Carolina. We also compared the space/time geostatistical estimation using river-BME to the more traditional Euclidean-based BME approach, with and without the inclusion of a secondary variable. Results showed that this river-based approach reduced the estimation error of fish tissue mercury by more than 13% and that the median estimate of fish tissue mercury exceeded the EPA action level of 0.3 ppm in more than 90% of river miles for the study domain.

  2. Improving incidence estimation in practice-based sentinel surveillance networks using spatial variation in general practitioner density

    Directory of Open Access Journals (Sweden)

    Cécile Souty

    2016-11-01

    Background: In surveillance networks based on the voluntary participation of health-care professionals, there is little choice regarding the selection of participants' characteristics. External information about participants, for example local physician density, can help reduce bias in the incidence estimates reported by the surveillance network. Methods: There is an inverse association between the number of reported influenza-like illness (ILI) cases and local general practitioner (GP) density. We formulated and compared estimates of ILI incidence using this relationship. To compare estimates, we simulated epidemics using a spatially explicit disease model and their observation by surveillance networks with different characteristics: random, maximum coverage, largest cities, etc. Results: In the French practice-based surveillance network - the "Sentinelles" network - GPs reported 3.6% (95% CI [3;4]) fewer ILI cases as local GP density increased by 1 GP per 10,000 inhabitants. Incidence estimates varied markedly depending on the scenario for participant selection, yet accounting for the GP density of participants reduced the bias. Applied to data from the Sentinelles network, changes in overall incidence ranged between 1.6 and 9.9%. Conclusions: Local GP density is a simple measure that provides a way to reduce bias in estimating disease incidence in general practice. It can contribute to improving disease monitoring when it is not possible to choose the characteristics of participants.

  3. Improved accuracy in the estimation of blood velocity vectors using matched filtering

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Gori, P.

    2000-01-01

    the flow and the ultrasound beam (30, 45, 60, and 90 degrees). The parabolic flow has a peak velocity of 0.5 m/s and the pulse repetition frequency is 3.5 kHz. Simulating twenty emissions and calculating the cross-correlation using four pulse-echo lines for each estimate, the parabolic flow profile...... is found with a standard deviation of 0.014 m/s at 45 degrees (corresponding to an accuracy of 2.8%) and 0.022 m/s (corresponding to an accuracy of 4.4%) at 90 degrees, which is transverse to the ultrasound beam....

  4. Outlier treatment for improving parameter estimation of group contribution based models for upper flammability limit

    DEFF Research Database (Denmark)

    Frutiger, Jerome; Abildskov, Jens; Sin, Gürkan

    2015-01-01

    Flammability data is needed to assess the risk of fire and explosions. This study presents a new group contribution (GC) model to predict the upper flammability limit UFL oforganic chemicals. Furthermore, it provides a systematic method for outlier treatment inorder to improve the parameter...

  5. Improved estimation of heavy rainfall by weather radar after reflectivity correction and accounting for raindrop size distribution variability

    Science.gov (United States)

    Hazenberg, Pieter; Leijnse, Hidde; Uijlenhoet, Remko

    2015-04-01

    Between 25 and 27 August 2010 a long-duration mesoscale convective system was observed above the Netherlands, locally giving rise to rainfall accumulations exceeding 150 mm. Correctly measuring the amount of precipitation during such an extreme event is important, both from a hydrological and meteorological perspective. Unfortunately, the operational weather radar measurements were affected by multiple sources of error and only 30% of the precipitation observed by rain gauges was estimated. Such an underestimation of heavy rainfall, albeit generally less strong than in this extreme case, is typical for operational weather radar in The Netherlands. In general weather radar measurement errors can be subdivided into two groups: (1) errors affecting the volumetric reflectivity measurements (e.g. ground clutter, radar calibration, vertical profile of reflectivity) and (2) errors resulting from variations in the raindrop size distribution that in turn result in incorrect rainfall intensity and attenuation estimates from observed reflectivity measurements. A stepwise procedure to correct for the first group of errors leads to large improvements in the quality of the estimated precipitation, increasing the radar rainfall accumulations to about 65% of those observed by gauges. To correct for the second group of errors, a coherent method is presented linking the parameters of the radar reflectivity-rain rate (Z-R) and radar reflectivity-specific attenuation (Z-k) relationships to the normalized drop size distribution (DSD). Two different procedures were applied. First, normalized DSD parameters for the whole event and for each precipitation type separately (convective, stratiform and undefined) were obtained using local disdrometer observations. Second, 10,000 randomly generated plausible normalized drop size distributions were used for rainfall estimation, to evaluate whether this Monte Carlo method would improve the quality of weather radar rainfall products. Using the
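
    As a reference point for the DSD-based corrections discussed here, rainfall retrieval from reflectivity uses a power law Z = a·R^b. The sketch below inverts it with the classic Marshall-Palmer coefficients (a = 200, b = 1.6), which stand in for the paper's DSD-derived values.

```python
def rain_rate(dbz, a=200.0, b=1.6):
    # Invert Z = a * R**b; Z in mm^6 m^-3, R in mm/h.
    z = 10.0 ** (dbz / 10.0)          # reflectivity factor from dBZ
    return (z / a) ** (1.0 / b)

for dbz in (20, 35, 50):
    print(f"{dbz} dBZ -> {rain_rate(dbz):.1f} mm/h")
```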

  6. Addressing the Issue of Microplastics in the Wake of the Microbead-Free Waters Act-A New Standard Can Facilitate Improved Policy.

    Science.gov (United States)

    McDevitt, Jason P; Criddle, Craig S; Morse, Molly; Hale, Robert C; Bott, Charles B; Rochman, Chelsea M

    2017-06-20

    The United States Microbead-Free Waters Act was signed into law in December 2015. It is a bipartisan agreement that will eliminate one preventable source of microplastic pollution in the United States. Still, the bill is criticized for being too limited in scope, and also for discouraging the development of biodegradable alternatives that ultimately are needed to solve the bigger issue of plastics in the environment. Due to a lack of an acknowledged, appropriate standard for environmentally safe microplastics, the bill banned all plastic microbeads in selected cosmetic products. Here, we review the history of the legislation and how it relates to the issue of microplastic pollution in general, and we suggest a framework for a standard (which we call "Ecocyclable") that includes relative requirements related to toxicity, bioaccumulation, and degradation/assimilation into the natural carbon cycle. We suggest that such a standard will facilitate future regulation and legislation to reduce pollution while also encouraging innovation of sustainable technologies.

  7. Improving head and body pose estimation through semi-supervised manifold alignment

    KAUST Repository

    Heili, Alexandre

    2014-10-27

    In this paper, we explore the use of a semi-supervised manifold alignment method for domain adaptation in the context of human body and head pose estimation in videos. We build upon an existing state-of-the-art system that leverages external labelled datasets for the body and head features, and the unlabelled test data with weak velocity labels, to perform a coupled estimation of body and head pose. While this previous approach showed promising results, the underlying manifold structure of the features in the train and target data, and the need to align them, were not explored, despite the fact that pose features may vary between two datasets according to the scene, e.g. due to a different camera viewpoint or perspective. In this paper, we propose to use a semi-supervised manifold alignment method to bring the train and target samples closer within the resulting embedded space. To this end, we consider an adaptation set from the target data and rely on (weak) labels, given for example by the velocity direction whenever it is reliable. These labels, along with the training labels, are used to bias the manifold distance within each manifold and to establish correspondences for alignment.

  8. Improving risk estimates of runoff producing areas: formulating variable source areas as a bivariate process.

    Science.gov (United States)

    Cheng, Xiaoya; Shaw, Stephen B; Marjerison, Rebecca D; Yearick, Christopher D; DeGloria, Stephen D; Walter, M Todd

    2014-05-01

    Predicting runoff producing areas and their corresponding risks of generating storm runoff is important for developing watershed management strategies to mitigate non-point source pollution. However, few methods for making these predictions have been proposed, especially operational approaches that would be useful in areas where variable source area (VSA) hydrology dominates storm runoff. The objective of this study is to develop a simple approach to estimate spatially-distributed risks of runoff production. By considering the development of overland flow as a bivariate process, we incorporated both rainfall and antecedent soil moisture conditions into a method for predicting VSAs based on the Natural Resource Conservation Service-Curve Number equation. We used base-flow immediately preceding storm events as an index of antecedent soil wetness status. Using nine sub-basins of the Upper Susquehanna River Basin, we demonstrated that our estimated runoff volumes and extent of VSAs agreed with observations. We further demonstrated a method for mapping these areas in a Geographic Information System using a Soil Topographic Index. The proposed methodology provides a new tool for watershed planners for quantifying runoff risks across watersheds, which can be used to target water quality protection strategies. Copyright © 2014 Elsevier Ltd. All rights reserved.
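
    The base relation the authors build on is the NRCS Curve Number equation; a minimal implementation of its standard form (initial abstraction Ia = 0.2S) is sketched below. The bivariate rainfall/antecedent-wetness extension proposed in the paper is not reproduced.

```python
def scs_runoff(p_mm: float, cn: float) -> float:
    # Standard SCS-CN runoff equation in metric units.
    s = 25400.0 / cn - 254.0          # potential retention S (mm)
    ia = 0.2 * s                      # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

for p in (20.0, 50.0, 100.0):
    print(f"P={p:.0f} mm, CN=75 -> Q={scs_runoff(p, 75.0):.1f} mm runoff")
```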

  9. E-Model MOS Estimate Precision Improvement and Modelling of Jitter Effects

    Directory of Open Access Journals (Sweden)

    Adrian Kovac

    2012-01-01

    This paper deals with the ITU-T E-model, which is used for non-intrusive MOS VoIP call quality estimation on IP networks. The pros of the E-model are its computational simplicity and usability on real-time traffic. The cons, as shown in our previous work, are its inability to reflect the effects of network jitter present in real traffic flows and of jitter-buffer behavior on end-user devices. These effects are visible mostly in traffic over WAN, internet and radio networks and cause the E-model MOS call quality estimate to be noticeably too optimistic. In this paper, we propose a modification to the E-model using the previously proposed Pplef (effective packet loss), computed from jitter and a jitter-buffer model based on the Pareto/D/1/K system. We subsequently optimize the newly added parameters reflecting jitter effects in the E-model, using the PESQ intrusive measurement method as a reference for selected audio codecs. Function fitting and parameter optimization are performed under varying delay, packet loss, jitter and jitter-buffer sizes for both correlated and uncorrelated long-tailed network traffic.
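
    The gist of such a modification can be sketched with the standard G.107 machinery: an effective packet loss that folds jitter-buffer losses into the measured network loss feeds the equipment impairment term, and the resulting R-factor maps to MOS. The additive combination of the two loss terms and the codec constants below are simplifying assumptions (random loss, a G.711-like Bpl), not the authors' fitted model.

        def mos_from_r(R):
            """ITU-T G.107 mapping from the R-factor to estimated MOS."""
            if R < 0:
                return 1.0
            if R > 100:
                return 4.5
            return 1.0 + 0.035 * R + R * (R - 60.0) * (100.0 - R) * 7e-6

        def e_model_mos(ppl_network, ppl_jitter, Ie=0.0, Bpl=25.1, Id=0.0, Ro=93.2):
            """Simplified E-model: jitter-buffer losses are added to network
            loss to form an effective packet loss (the idea behind Pplef)."""
            ppl_eff = ppl_network + ppl_jitter          # crude additive combination
            Ie_eff = Ie + (95.0 - Ie) * ppl_eff / (ppl_eff + Bpl)
            return mos_from_r(Ro - Id - Ie_eff)

        # Classic E-model (jitter ignored) vs. jitter losses folded in:
        print(e_model_mos(1.0, 0.0), e_model_mos(1.0, 3.0))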

  10. Improving Accuracy and Temporal Resolution of Learning Curve Estimation for within- and across-Session Analysis

    Science.gov (United States)

    Tabelow, Karsten; König, Reinhard; Polzehl, Jörg

    2016-01-01

    Estimation of learning curves is ubiquitously based on proportions of correct responses within moving trial windows. Thereby, it is tacitly assumed that learning performance is constant within the moving windows, which, however, is often not the case. In the present study we demonstrate that violations of this assumption lead to systematic errors in the analysis of learning curves, and we explore the dependency of these errors on window size, the statistical model, and the learning phase. To reduce these errors in the analysis of single-subject data as well as on the population level, we propose adequate statistical methods for the estimation of learning curves and the construction of confidence intervals, trial by trial. Applied to data from an avoidance learning experiment with rodents, these methods revealed performance changes occurring at multiple time scales within and across training sessions which were otherwise obscured in the conventional analysis. Our work shows that the proper assessment of the behavioral dynamics of learning at high temporal resolution can shed new light on specific learning processes and thus makes it possible to refine existing learning concepts. It further disambiguates the interpretation of neurophysiological signal changes recorded during training in relation to learning. PMID:27303809
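
    The conventional estimator the study criticizes is easy to state: the learning curve is the proportion of correct responses in a moving trial window, with a binomial-type confidence interval that is only valid if performance is constant within the window. A minimal sketch of that baseline (Jeffreys intervals; window size assumed) is given below; the paper's contribution is to replace it with trial-by-trial estimates that drop the constancy assumption.

        import numpy as np
        from scipy.stats import beta

        def windowed_curve(correct, w=20, level=0.95):
            """Moving-window proportion-correct curve with Jeffreys CIs.
            Assumes constant performance within each window -- the very
            assumption whose violation biases the estimate."""
            correct = np.asarray(correct, dtype=float)
            a = (1.0 - level) / 2.0
            p, lo, hi = (np.empty(len(correct)) for _ in range(3))
            for t in range(len(correct)):
                win = correct[max(0, t - w + 1):t + 1]
                k, m = win.sum(), len(win)
                p[t] = k / m
                lo[t] = beta.ppf(a, k + 0.5, m - k + 0.5)
                hi[t] = beta.ppf(1.0 - a, k + 0.5, m - k + 0.5)
            return p, lo, hi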

  11. Improving reliability of state estimation programming and computing suite based on analyzing a fault tree

    Directory of Open Access Journals (Sweden)

    Kolosok Irina

    2017-01-01

    Reliable information on current state parameters, obtained by processing measurements from the SCADA and WAMS data acquisition systems with state estimation (SE) methods, is a precondition for successfully operating an electric power system (EPS). SCADA and WAMS systems themselves, like any technical systems, are subject to failures and faults that lead to distortion and loss of information. The SE procedure can detect erroneous measurements and therefore acts as a barrier that keeps distorted information from penetrating control applications. At the same time, the programming and computing suite (PCS) implementing the SE functions may itself produce wrong decisions due to imperfections in the software algorithms and to errors. In this study, we propose to use a fault tree to analyze the consequences of failures and faults in SCADA and WAMS and in the SE procedure itself. Based on the analysis of the obtained measurement information and of the SE results, we determine the fault tolerance level of the state estimation PCS, which characterizes its reliability.
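
    The quantitative side of such an analysis reduces to evaluating AND/OR gates over basic-event probabilities. The toy tree below (a wrong state estimate is delivered if the SE software fails, or if both the SCADA and WAMS feeds are corrupted) and its numbers are purely illustrative and assume independent events; the paper's actual tree is more detailed.

        def p_or(*ps):   # probability that at least one independent event occurs
            q = 1.0
            for p in ps:
                q *= 1.0 - p
            return 1.0 - q

        def p_and(*ps):  # probability that all independent events occur
            q = 1.0
            for p in ps:
                q *= p
            return q

        p_scada, p_wams, p_se_sw = 0.02, 0.01, 0.005   # illustrative probabilities
        p_top = p_or(p_se_sw, p_and(p_scada, p_wams))  # top event: wrong SE decision
        print(f"P(top event) = {p_top:.4%}")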

  12. Assimilation of ice and water observations from SAR imagery to improve estimates of sea ice concentration

    Directory of Open Access Journals (Sweden)

    K. Andrea Scott

    2015-09-01

    In this paper, the assimilation of binary observations calculated from synthetic aperture radar (SAR) images of sea ice is investigated. Ice and water observations are obtained from a set of SAR images by thresholding ice and water probabilities calculated using a supervised maximum likelihood estimator (MLE). These ice and water observations are then assimilated in combination with ice concentration from passive microwave imagery for the purpose of estimating sea ice concentration. Because the observations are binary, consisting of zeros and ones, while the state vector is a continuous variable (ice concentration), the forward model used to map the state vector to the observation space requires special consideration. Both linear and non-linear forward models were investigated. In both cases, the assimilation of SAR data was able to produce ice concentration analyses in closer agreement with image analysis charts than when assimilating passive microwave data only. When both passive microwave and SAR data are assimilated, the bias between the ice concentration analyses and the ice concentration from ice charts is 19.78%, as compared to 26.72% when only passive microwave data are assimilated. The method presented here for the assimilation of SAR data could be applied to other binary observations, such as ice/water information from visible/infrared sensors.
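
    The special consideration needed for binary observations can be illustrated with a scalar toy problem: a logistic forward model maps the continuous ice concentration to the probability of an "ice" observation, and Bayes' rule updates a background (e.g. passive-microwave) prior on a concentration grid. The forward-model shape and the prior below are assumptions for illustration, not the paper's configuration.

        import numpy as np

        def assimilate_binary(c, prior, obs, k=10.0, c0=0.5):
            """Grid Bayesian update of ice concentration from binary ice(1)/
            water(0) observations via a logistic forward model
            p(y=1 | c) = 1 / (1 + exp(-k (c - c0)))."""
            p_ice = 1.0 / (1.0 + np.exp(-k * (c - c0)))
            post = prior.copy()
            for y in obs:
                post *= p_ice if y == 1 else 1.0 - p_ice
                post /= post.sum()          # renormalise after each update
            return post

        c = np.linspace(0.0, 1.0, 101)
        prior = np.exp(-0.5 * ((c - 0.4) / 0.15) ** 2)   # background prior, assumed
        prior /= prior.sum()
        post = assimilate_binary(c, prior, obs=[1, 1, 0, 1])
        print(f"posterior mean concentration: {(c * post).sum():.3f}")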

  13. Improving Accuracy and Temporal Resolution of Learning Curve Estimation for within- and across-Session Analysis.

    Directory of Open Access Journals (Sweden)

    Matthias Deliano

    Estimation of learning curves is ubiquitously based on proportions of correct responses within moving trial windows. Thereby, it is tacitly assumed that learning performance is constant within the moving windows, which, however, is often not the case. In the present study we demonstrate that violations of this assumption lead to systematic errors in the analysis of learning curves, and we explore the dependency of these errors on window size, the statistical model, and the learning phase. To reduce these errors in the analysis of single-subject data as well as on the population level, we propose adequate statistical methods for the estimation of learning curves and the construction of confidence intervals, trial by trial. Applied to data from an avoidance learning experiment with rodents, these methods revealed performance changes occurring at multiple time scales within and across training sessions which were otherwise obscured in the conventional analysis. Our work shows that the proper assessment of the behavioral dynamics of learning at high temporal resolution can shed new light on specific learning processes and thus makes it possible to refine existing learning concepts. It further disambiguates the interpretation of neurophysiological signal changes recorded during training in relation to learning.

  14. Improving Estimation of Evapotranspiration under Water-Limited Conditions Based on SEBS and MODIS Data in Arid Regions

    Directory of Open Access Journals (Sweden)

    Chunlin Huang

    2015-12-01

    This study proposes a method for improving the estimation of surface turbulent fluxes in the surface energy balance system (SEBS) model under water stress conditions using MODIS data. The normalized difference water index (NDWI), as an indicator of water stress, is integrated into SEBS. To investigate the feasibility of the new approach, the desert-oasis region in the middle reaches of the Heihe River Basin (HRB) is selected as the study area. The proposed model is calibrated with meteorological and flux data over 2008–2011 at the Yingke station and is verified with data from 16 stations of the Heihe Watershed Allied Telemetry Experimental Research (HiWATER) project in 2012. The results show that soil moisture significantly affects evapotranspiration (ET) under water stress conditions in the study area. Adding the NDWI to SEBS can significantly improve the estimation of surface turbulent fluxes in water-limited regions, especially for sparse vegetation cover areas. The daily ET maps generated by the new model also show improvements in drylands with low ET values. This study demonstrates that integrating the NDWI into SEBS as an indicator of water stress is an effective way to improve the assessment of regional ET in semi-arid and arid regions.
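
    One simple way to picture the integration is as an NDWI-derived water-stress factor that scales the unstressed (potential) evapotranspiration from SEBS. The sketch below uses Gao's NDWI from NIR/SWIR reflectances; the linear stress mapping and its dry/wet endpoints are assumptions for illustration, not the calibrated scheme of the paper.

        import numpy as np

        def ndwi(nir, swir):
            """Gao (1996) NDWI from NIR and SWIR reflectances (e.g. MODIS)."""
            return (nir - swir) / (nir + swir)

        def water_limited_et(et_potential, nir, swir, ndwi_dry=-0.2, ndwi_wet=0.4):
            """Scale SEBS-style potential ET by an NDWI-based stress factor
            clipped to [0, 1] (endpoints are illustrative assumptions)."""
            w = (ndwi(nir, swir) - ndwi_dry) / (ndwi_wet - ndwi_dry)
            return np.clip(w, 0.0, 1.0) * et_potential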

  15. Improved best estimate plus uncertainty methodology including advanced validation concepts to license evolving nuclear reactors

    International Nuclear Information System (INIS)

    Unal, Cetin; Williams, Brian; McClure, Patrick; Nelson, Ralph A.

    2010-01-01

    Many evolving nuclear energy programs plan to use advanced predictive multi-scale multi-physics simulation and modeling capabilities to reduce cost and time from design through licensing. Historically, the role of experiments was as a primary tool for the design and understanding of nuclear system behavior, while modeling and simulation played the subordinate role of supporting experiments. In the new era of multi-scale multi-physics computation-based technology development, experiments will still be needed, but they will be performed at different scales to calibrate and validate the models leading to predictive simulations. The cost-saving goals of these programs will require minimizing the number of validation experiments. The use of more multi-scale multi-physics models introduces complexities in the validation of predictive tools, and traditional methodologies will have to be modified to address these emerging issues. This paper lays out the basic aspects of a methodology that can potentially be used to address these new challenges in the design and licensing of evolving nuclear technology programs. The main components of the proposed methodology are verification, validation, calibration, and uncertainty quantification. An enhanced calibration concept is introduced and is accomplished through data assimilation. The goal is to enable best-estimate prediction of system behaviors in both normal and safety-related environments. Achieving this goal requires the additional steps of estimating the domain of validation and quantifying the uncertainties that allow results to be extended to areas of the validation domain not directly tested with experiments, which might include extending the modeling and simulation (M and S) capabilities for application to full-scale systems. The new methodology suggests a formalism to quantify an adequate level of validation (predictive maturity) with respect to required selective data so that required testing can be minimized for

  16. Improved best estimate plus uncertainty methodology including advanced validation concepts to license evolving nuclear reactors

    Energy Technology Data Exchange (ETDEWEB)

    Unal, Cetin [Los Alamos National Laboratory]; Williams, Brian [Los Alamos National Laboratory]; McClure, Patrick [Los Alamos National Laboratory]; Nelson, Ralph A [IDAHO NATIONAL LAB]

    2010-01-01

    Many evolving nuclear energy programs plan to use advanced predictive multi-scale multi-physics simulation and modeling capabilities to reduce cost and time from design through licensing. Historically, the role of experiments was as a primary tool for the design and understanding of nuclear system behavior, while modeling and simulation played the subordinate role of supporting experiments. In the new era of multi-scale multi-physics computation-based technology development, experiments will still be needed, but they will be performed at different scales to calibrate and validate the models leading to predictive simulations. The cost-saving goals of these programs will require minimizing the number of validation experiments. The use of more multi-scale multi-physics models introduces complexities in the validation of predictive tools, and traditional methodologies will have to be modified to address these emerging issues. This paper lays out the basic aspects of a methodology that can potentially be used to address these new challenges in the design and licensing of evolving nuclear technology programs. The main components of the proposed methodology are verification, validation, calibration, and uncertainty quantification. An enhanced calibration concept is introduced and is accomplished through data assimilation. The goal is to enable best-estimate prediction of system behaviors in both normal and safety-related environments. Achieving this goal requires the additional steps of estimating the domain of validation and quantifying the uncertainties that allow results to be extended to areas of the validation domain not directly tested with experiments, which might include extending the modeling and simulation (M&S) capabilities for application to full-scale systems. The new methodology suggests a formalism to quantify an adequate level of validation (predictive maturity) with respect to required selective data so that required testing can be minimized for cost

  17. Modeling the distribution of colonial species to improve estimation of plankton concentration in ballast water

    Science.gov (United States)

    Rajakaruna, Harshana; VandenByllaardt, Julie; Kydd, Jocelyn; Bailey, Sarah

    2018-03-01

    The International Maritime Organization (IMO) has set limits on allowable plankton concentrations in ballast water discharge to minimize aquatic invasions globally. Previous guidance on ballast water sampling and compliance decision thresholds was based on the assumption that probability distributions of plankton are Poisson when spatially homogeneous, or negative binomial when heterogeneous. We propose a hierarchical probability model, which incorporates distributions at the level of particles (i.e., discrete individuals plus colonies per unit volume) and also within particles (i.e., individuals per particle), to estimate the average plankton concentration in ballast water. We examined the performance of the models using data for plankton in the size class ≥ 10 μm and tested ballast water compliance using the above models.
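
    The hierarchical idea can be simulated directly: particle counts per volume follow a heterogeneous (negative binomial) law, and each particle carries one or more individuals. The distributions and parameters below are illustrative assumptions, not the paper's fitted model; they show why counting particles alone underestimates organism concentration when colonies are present.

        import numpy as np

        rng = np.random.default_rng(1)

        def sample_counts(n, mean_particles=8.0, k=2.0, mean_extra=1.5):
            """Individuals per unit volume under a two-level model:
            particles ~ negative binomial (mean = mean_particles,
            dispersion k); individuals per particle ~ 1 + Poisson."""
            p = k / (k + mean_particles)
            particles = rng.negative_binomial(k, p, size=n)
            return np.array([(1 + rng.poisson(mean_extra, size=m)).sum()
                             for m in particles])

        counts = sample_counts(1000)
        print("mean particles/volume assumed: 8.0")
        print(f"mean individuals/volume:      {counts.mean():.1f}")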

  18. E-model MOS Estimate Improvement through Jitter Buffer Packet Loss Modelling

    Directory of Open Access Journals (Sweden)

    Adrian Kovac

    2011-01-01

    This article analyses the dependence of MOS, as a voice call quality (QoS) measure estimated through the ITU-T E-model, on real network conditions with jitter, and proposes a method for incorporating the jitter effect. Jitter, i.e. uncertainty in voice packet timing, manifests as increased packet loss caused by jitter buffer under- or overflow. Jitter buffer behaviour at the receiver's side is modelled as a Pareto/D/1/K system with Pareto-distributed packet interarrival times, and its performance is evaluated experimentally with statistical tools. The jitter buffer stochastic model is then incorporated into the E-model in an additive manner, accounting for network jitter effects via an excess packet loss that complements the measured network packet loss. The proposed modification of the E-model input parameters adds two degrees of freedom to the modelling: network jitter and jitter buffer size.
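
    The buffer model is simple enough to simulate directly: packets arrive with heavy-tailed (Pareto) interarrival times, a deterministic playout tick drains one packet per period, and the buffer holds at most K packets. The parameter values below are illustrative assumptions; the excess loss returned is the kind of quantity that complements the measured network loss in the modified E-model.

        import numpy as np

        rng = np.random.default_rng(2)

        def jitter_buffer_loss(n=50_000, alpha=2.5, mean_ia=20.0, K=4):
            """Pareto/D/1/K sketch: fraction of packets dropped on arrival
            because the jitter buffer (capacity K) is full."""
            xm = mean_ia * (alpha - 1.0) / alpha             # Pareto scale for this mean
            ia = xm * (1.0 - rng.random(n)) ** (-1.0 / alpha)  # inverse-CDF sampling
            arrivals = np.cumsum(ia)
            D = mean_ia                                      # deterministic playout period
            q, dropped, tick, i = 0, 0, D, 0
            while i < n:
                if arrivals[i] <= tick:        # next event: packet arrival
                    if q >= K:
                        dropped += 1           # overflow -> excess packet loss
                    else:
                        q += 1
                    i += 1
                else:                          # next event: playout tick
                    if q > 0:
                        q -= 1                 # empty buffer here = underflow
                    tick += D
            return dropped / n

        print(f"excess loss from jitter buffer: {jitter_buffer_loss():.2%}")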

  19. Improved best estimate plus uncertainty methodology, including advanced validation concepts, to license evolving nuclear reactors

    International Nuclear Information System (INIS)

    Unal, C.; Williams, B.; Hemez, F.; Atamturktur, S.H.; McClure, P.

    2011-01-01

    Research highlights: → The best estimate plus uncertainty methodology (BEPU) is one option in the licensing of nuclear reactors. → The challenges for extending the BEPU method for fuel qualification for an advanced reactor fuel are primarily driven by schedule, the need for data, and the sufficiency of the data. → In this paper we develop an extended BEPU methodology that can potentially be used to address these new challenges in the design and licensing of advanced nuclear reactors. → The main components of the proposed methodology are verification, validation, calibration, and uncertainty quantification. → The methodology includes a formalism to quantify an adequate level of validation (predictive maturity) with respect to existing data, so that required new testing can be minimized, saving cost by demonstrating that further testing will not enhance the quality of the predictive tools. - Abstract: Many evolving nuclear energy technologies use advanced predictive multiscale, multiphysics modeling and simulation (M and S) capabilities to reduce the cost and schedule of design and licensing. Historically, the role of experiments has been as a primary tool for the design and understanding of nuclear system behavior, while M and S played the subordinate role of supporting experiments. In the new era of multiscale, multiphysics computational-based technology development, this role has been reversed. The experiments will still be needed, but they will be performed at different scales to calibrate and validate the models leading to predictive simulations for design and licensing. Minimizing the required number of validation experiments produces cost and time savings. The use of multiscale, multiphysics models introduces challenges in validating these predictive tools - traditional methodologies will have to be modified to address these challenges. This paper gives the basic aspects of a methodology that can potentially be used to address these new challenges in

  20. The role of interior watershed processes in improving parameter estimation and performance of watershed models.

    Science.gov (United States)

    Yen, Haw; Bailey, Ryan T; Arabi, Mazdak; Ahmadi, Mehdi; White, Michael J; Arnold, Jeffrey G

    2014-09-01

    Watershed models typically are evaluated solely through comparison of in-stream water and nutrient fluxes with measured data using established performance criteria, whereas processes and responses within the interior of the watershed that govern these global fluxes often are neglected. Due to the large number of parameters at the disposal of these models, circumstances may arise in which excellent global results are achieved using inaccurate magnitudes of these "intra-watershed" responses. When used for scenario analysis, a given model hence may inaccurately predict the global, in-stream effect of implementing land-use practices at the interior of the watershed. In this study, data regarding internal watershed behavior are used to constrain parameter estimation to maintain realistic intra-watershed responses while also matching available in-stream monitoring data. The methodology is demonstrated for the Eagle Creek Watershed in central Indiana. Streamflow and nitrate (NO3) loading are used as global in-stream comparisons, with two process responses, the annual mass of denitrification and the ratio of NO3 losses from subsurface and surface flow, used to constrain parameter estimation. Results show that imposing these constraints not only yields realistic internal watershed behavior but also provides good in-stream comparisons. Results further demonstrate that in the absence of incorporating intra-watershed constraints, evaluation of nutrient abatement strategies could be misleading, even though typical performance criteria are satisfied. Incorporating intra-watershed responses yields a watershed model that more accurately represents the observed behavior of the system and hence a tool that can be used with confidence in scenario evaluation. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  1. Improved estimation of electricity demand function by integration of fuzzy system and data mining approach

    International Nuclear Information System (INIS)

    Azadeh, A.; Saberi, M.; Ghaderi, S.F.; Gitiforouz, A.; Ebrahimipour, V.

    2008-01-01

    This study presents an integrated fuzzy system, data mining and time series framework to estimate and predict electricity demand under seasonal and monthly changes in electricity consumption, especially in developing countries such as China and Iran with non-stationary data. It is difficult to model the uncertain behavior of energy consumption with only a conventional fuzzy system or a time series model, and the integrated algorithm can be an ideal substitute in such cases. Constructing a fuzzy system requires a rule base. Because a rule base is not available for the demand function, a look-up table, one of the rule-extraction methods, is used to extract it; this system is denoted FLT. The decision tree method, a data mining approach, is likewise used to extract a rule base; this system is denoted FDM. The preferred time series model is selected from linear (ARMA) and nonlinear candidates: after the preferred ARMA model is selected, the McLeod-Li test is applied to check for nonlinearity, and when nonlinearity is present, the preferred nonlinear model is selected, compared with the preferred ARMA model, and one of the two is chosen as the time series model. Finally, ANOVA is used to select the preferred model from among the fuzzy models and the time series model. The algorithm also considers the impact of data preprocessing and postprocessing on fuzzy system performance. Another unique feature of the proposed algorithm is the use of the autocorrelation function (ACF) to define input variables, whereas conventional methods rely on trial and error. Monthly electricity consumption of Iran from 1995 to 2005 is considered as the case of this study. A comparison of the MAPE of a genetic algorithm (GA) and an artificial neural network (ANN) with that of the proposed algorithm shows the appropriateness of the proposed algorithm.
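
    The ACF-based input selection mentioned above can be sketched in a few lines: compute the sample autocorrelation of the consumption series and keep the lags that exceed the approximate white-noise confidence band. The band and maximum lag below are conventional choices, not values from the paper.

        import numpy as np

        def acf_lags(x, max_lag=24, z=1.96):
            """Candidate input lags: those whose sample autocorrelation
            exceeds the approximate 95% white-noise band +/- z / sqrt(n)."""
            x = np.asarray(x, dtype=float) - np.mean(x)
            n = len(x)
            acf = np.array([x[:n - k] @ x[k:] / (x @ x)
                            for k in range(1, max_lag + 1)])
            return [k for k, r in enumerate(acf, start=1)
                    if abs(r) > z / np.sqrt(n)]

        # Monthly consumption data would typically flag lag 1 (persistence)
        # and lag 12 (annual seasonality) as model inputs.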

  2. Probabilistic Design Storm Method for Improved Flood Estimation in Ungauged Catchments

    Science.gov (United States)

    Berk, Mario; Špačková, Olga; Straub, Daniel

    2017-12-01

    The design storm approach with event-based rainfall-runoff models is a standard method for design flood estimation in ungauged catchments. The approach is conceptually simple and computationally inexpensive, but the underlying assumptions can lead to flawed design flood estimations. In particular, the implied average recurrence interval (ARI) neutrality between rainfall and runoff neglects uncertainty in other important parameters, leading to an underestimation of design floods. The selection of a single representative critical rainfall duration in the analysis leads to an additional underestimation of design floods. One way to overcome these nonconservative approximations is the use of a continuous rainfall-runoff model, which is associated with significant computational cost and requires rainfall input data that are often not readily available. As an alternative, we propose a novel Probabilistic Design Storm method that combines event-based flood modeling with basic probabilistic models and concepts from reliability analysis, in particular the First-Order Reliability Method (FORM). The proposed methodology overcomes the limitations of the standard design storm approach, while utilizing the same input information and models without excessive computational effort. Additionally, the Probabilistic Design Storm method allows deriving so-called design charts, which summarize representative design storm events (combinations of rainfall intensity and other relevant parameters) for floods with different return periods. These can be used to study the relationship between rainfall and runoff return periods. We demonstrate, investigate, and validate the method by means of an example catchment located in the Bavarian Pre-Alps, in combination with a simple hydrological model commonly used in practice.
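
    The FORM ingredient can be illustrated with a toy limit state: transform the random inputs (here, storm rainfall and a runoff coefficient) to standard-normal space, find the design point as the closest point on the failure boundary, and read off the reliability index. The marginals, capacity value and simple event model below are assumptions for illustration only, not the study's catchment model.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        def g(u):
            """Limit state in standard-normal space: failure when g(u) <= 0."""
            rain = 30.0 + 10.0 * u[0]               # rainfall (mm), N(30, 10^2), assumed
            coef = 0.30 + 0.10 * norm.cdf(u[1])     # runoff coefficient, U(0.30, 0.40), assumed
            return 20.0 - coef * rain               # capacity (mm) minus event runoff

        # Design point: closest point to the origin on the failure boundary.
        res = minimize(lambda u: float(u @ u), x0=np.array([1.0, 0.0]),
                       constraints={"type": "eq", "fun": g})
        beta = np.sqrt(res.fun)                     # reliability index
        print(f"beta = {beta:.2f}, P_f ~ {norm.cdf(-beta):.2%}")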

  3. An Improved Method for Estimating Water-Mass Ventilation Age from Radiocarbon Measurements

    Science.gov (United States)

    Devries, T. J.; Primeau, F. W.

    2009-12-01

    Paleoceanographic data can help to constrain the state of the past ocean circulation. One critical quantity that can be constrained by paleoceanographic data is the ventilation age, which measures the vigor of the ocean circulation. Paleoceanographers often use radiocarbon data to estimate paleo-ventilation ages by calculating either the benthic-planktonic (B-P) age difference or the so-called “projection” age. However, recent studies have shown that neither of these calculations yields correct estimates of the ventilation age, due to fluctuations in atmospheric radiocarbon content and mixing processes in the ocean. Here we propose a new method for more accurately inferring paleo-ventilation ages from radiocarbon data. Our method relies on a model with parameterized transfer functions that simulate the effects of circulation and mixing in the ocean. We show how this model can be used in a Bayesian framework to infer a ventilation age from a paired radiocarbon- and calendar-age measurement. The Bayesian framework allows us to quantify the uncertainty in the inferred ventilation age due to uncertainty in the data, as well as in the assumptions made in the model itself. We applied this framework to previously published radiocarbon data from the deep North Pacific spanning 10 000 to 20 000 years before present. Ventilation ages inferred using our method differ significantly from the B-P ages or projection ages calculated from the same data. Furthermore, our analysis suggests that the uncertainty of the ventilation ages is on the order of 400-500 years, and that the main sources of uncertainty are the age of surface source waters and the true calendar age of the radiocarbon data. Our results do not show a clear change in the ventilation age of deep North Pacific waters during the last deglaciation.

  4. An Improved Approach to Estimate Methane Emissions from Coal Mining in China.

    Science.gov (United States)

    Zhu, Tao; Bian, Wenjing; Zhang, Shuqing; Di, Pingkuan; Nie, Baisheng

    2017-11-07

    China, the largest coal producer in the world, is responsible for over 50% of the total global methane (CH4) emissions from coal mining. However, the current emission inventory of CH4 from coal mining has large uncertainties because of the lack of localized emission factors (EFs). In this study, province-level CH4 EFs from coal mining in China were developed based on data analysis of coal production and the corresponding discharged CH4 emissions from 787 coal mines distributed in 25 provinces with different geological and operating conditions. Results show that the spatial distribution of CH4 EFs is highly variable, with values as high as 36 m3/t and as low as 0.74 m3/t. Based on the newly developed CH4 EFs and activity data, an inventory of province-level CH4 emissions was built for 2005-2010. Results reveal that the total CH4 emissions in China increased from 11.5 Tg in 2005 to 16.0 Tg in 2010. By constructing a gray forecasting model for the CH4 EFs and a regression model for the activity, the province-level CH4 emissions from coal mining in China are forecast for the years 2011-2020. The estimates are compared with other published inventories. Our results are in reasonable agreement with USEPA's inventory and are lower by a factor of 1-2 than those estimated using the IPCC default EFs. This study could help guide CH4 mitigation policies and practices in China.
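
    The inventory arithmetic itself is straightforward: provincial emissions are coal production multiplied by the locally derived emission factor, converted from gas volume to mass and summed. The EFs and production figures below are placeholders for illustration, not the paper's data.

        # Placeholder province-level EFs (m3 of CH4 per tonne of coal) and
        # annual production (Mt); the paper derives these from 787 mines.
        ef_m3_per_t = {"A": 9.1, "B": 14.2, "C": 1.8}
        production_mt = {"A": 900.0, "B": 150.0, "C": 1000.0}

        RHO_CH4 = 0.717e-3   # tonnes of CH4 per m3 at standard conditions
        total_t = sum(ef_m3_per_t[p] * production_mt[p] * 1e6 * RHO_CH4
                      for p in ef_m3_per_t)
        print(f"national CH4 emissions: {total_t / 1e6:.2f} Tg")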

  5. An Improved PID Algorithm Based on Insulin-on-Board Estimate for Blood Glucose Control with Type 1 Diabetes.

    Science.gov (United States)

    Hu, Ruiqiang; Li, Chengwei

    2015-01-01

    Automated closed-loop insulin infusion therapy has been studied for many years. In a closed-loop system, the control algorithm is the key technique for precise insulin infusion, and it needs to be designed and validated. In this paper, an improved PID algorithm based on an insulin-on-board estimate is proposed, and computer simulations are performed using a combined mathematical model of the dynamics of blood glucose-insulin regulation. The simulation results demonstrate that the improved PID algorithm performs well under different carbohydrate ingestion and insulin sensitivity conditions. Compared with the traditional PID algorithm, the control performance is clearly improved and hypoglycemia can be avoided. To verify the effectiveness of the proposed control algorithm, in silico testing is performed using the UVa/Padova virtual patient software.
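
    A minimal sketch of an insulin-on-board-constrained PID loop is given below: the raw PID command is throttled as the estimated insulin still active approaches a ceiling, which is the basic mechanism for avoiding insulin stacking and hypoglycemia. The exponential IOB curve, gains and limits are illustrative assumptions, not the paper's tuned controller.

        import numpy as np

        def insulin_on_board(doses, t, tau=60.0):
            """Active insulin from past boluses, exponential decay (tau in min)."""
            return sum(u * np.exp(-(t - t0) / tau) for t0, u in doses)

        def pid_with_iob(glucose, state, doses, t, g_ref=110.0,
                         kp=0.05, ki=5e-4, kd=0.1, iob_max=3.0, dt=5.0):
            e = glucose - g_ref                   # glucose error (mg/dL)
            state["int"] += e * dt
            deriv = (e - state["prev"]) / dt
            state["prev"] = e
            u = max(0.0, kp * e + ki * state["int"] + kd * deriv)  # raw rate (U/h)
            # Throttle the command as active insulin approaches the ceiling.
            u *= max(0.0, 1.0 - insulin_on_board(doses, t) / iob_max)
            doses.append((t, u * dt / 60.0))      # record delivered insulin (U)
            return u

        state, doses = {"int": 0.0, "prev": 0.0}, []
        for step, g in enumerate([180.0, 175.0, 168.0, 160.0, 150.0]):
            rate = pid_with_iob(g, state, doses, t=5.0 * step)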

  6. Geo-social media as a proxy for hydrometeorological data for streamflow estimation and to improve flood monitoring

    Science.gov (United States)

    Restrepo-Estrada, Camilo; de Andrade, Sidgley Camargo; Abe, Narumi; Fava, Maria Clara; Mendiondo, Eduardo Mario; de Albuquerque, João Porto

    2018-02-01

    Floods are one of the most devastating types of worldwide disasters in terms of human, economic, and social losses. If authoritative data is scarce, or unavailable for some periods, other sources of information are required to improve streamflow estimation and early flood warnings. Georeferenced social media messages are increasingly being regarded as an alternative source of information for coping with flood risks. However, existing studies have mostly concentrated on the links between geo-social media activity and flooded areas. Thus, there is still a gap in research with regard to the use of social media as a proxy for rainfall-runoff estimations and flood forecasting. To address this, we propose using a transformation function that creates a proxy variable for rainfall by analysing geo-social media messages and rainfall measurements from authoritative sources, which are later incorporated within a hydrological model for streamflow estimation. We found that the combined use of official rainfall values with the social media proxy variable as input for the Probability Distributed Model (PDM), improved streamflow simulations for flood monitoring. The combination of authoritative sources and transformed geo-social media data during flood events achieved a 71% degree of accuracy and a 29% underestimation rate in a comparison made with real streamflow measurements. This is a significant improvement on the respective values of 39% and 58%, achieved when only authoritative data were used for the modelling. This result is clear evidence of the potential use of derived geo-social media data as a proxy for environmental variables for improving flood early-warning systems.

  7. Maximum wind radius estimated by the 50 kt radius: